From YouTube: Devcon VI Bogotá | Mild Flowers stage - Day 4
Description
Official livestream from Devcon VI Bogotá.
For a decentralized version of the stream, visit: https://live.devcon.org
Devcon is an intensive introduction for new Ethereum explorers, a global family reunion for those already a part of our ecosystem, and a source of energy and creativity for all.
Agenda 👉 https://devcon.org/
Follow us on Twitter 👉 https://twitter.com/EFDevcon
We hope you've been having a great Devcon conference so far. Our first speaker is a little bit delayed, so we're really sorry about this. Please stay: we're going to continue with the regular schedule, so please be patient, and we're going to start in a couple more minutes. Welcome, and enjoy.
Hello, I'm super happy to be here, or not so happy; I would prefer to talk about something else, but we do what we can, right? So first we have to start with the basics. I'm not here on behalf of Illusory, the company behind Nomad, so the views expressed are my own and do not reflect those of the company.

As this is an active investigation right now, an active incident, we will unfortunately not be doing any Q&A.

So today we'll be talking about Nomad: what Nomad is and how the protocol works. We'll need that to be able to talk about bridges, how they work, and the incident itself. Finally, what are the learnings? What did we learn from having a hack that resulted in 190 million dollars in tokens being evaporated?

Nomad doesn't define how your application will react to some event. The Nomad protocol will just send arbitrary bytes from one domain to another, so it's on you, the developer, to interpret those bytes. In short, Nomad is an optimistic protocol for interoperability that supports arbitrary messages between domains.
The first thing you have to know about Nomad, and probably the last, is that it's super simple. On the sending chain, all the messages being sent are added to a Merkle tree. Why do we do that? Because with a Merkle tree it's very easy to prove whether a message belongs to the tree or not. The inclusion information, in theory and in practice, is compressed into the root, so all the protocol has to do is send that root from the sending chain to the receiving chain.
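To make that concrete, here is a minimal Python sketch of proving inclusion against a root. This is a generic binary Merkle tree over SHA-256, purely for intuition; Nomad's real tree is a sparse, fixed-depth keccak256 tree, which this does not model:

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:                      # duplicate last node on odd levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves, index):
    """Sibling hashes on the path from leaf `index` up to the root."""
    level, proof = [h(leaf) for leaf in leaves], []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        proof.append(level[index ^ 1])          # sibling at this level
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify(leaf, proof, index, root):
    node = h(leaf)
    for sib in proof:
        node = h(node + sib) if index % 2 == 0 else h(sib + node)
        index //= 2
    return node == root

msgs = [b"msg-0", b"msg-1", b"msg-2", b"msg-3"]
root = merkle_root(msgs)                        # 32 bytes summarize every message
assert verify(b"msg-2", merkle_proof(msgs, 2), 2, root)
assert not verify(b"forged", merkle_proof(msgs, 2), 2, root)
```

The receiving chain only ever needs that 32-byte root; each message later brings its own logarithmic-size proof.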
Let's see the life cycle of a message in Nomad. First we go to the Home contract, the Home contract on the sending chain, and we send the message; then a new root is generated. That root is relayed to the receiving chain, where it will find itself in a contract called the Replica.

Then we must wait for the optimistic window to pass. Afterwards, we can go to the Replica on the receiving domain and say: here's the Merkle proof of inclusion, here is the message; I want to prove that this message was indeed sent. After we prove a message, we can process it. When we process a message, the Replica contract will take the message metadata, call the destination contract, and pass it the message payload. Super simple.
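As a rough mental model of that life cycle, here is a toy version, with hypothetical names and shapes rather than Nomad's actual contract interfaces, reusing the `h`, `merkle_root`, and `verify` helpers from the sketch above:

```python
import time

OPTIMISTIC_WINDOW = 30 * 60        # seconds; illustrative value for the fraud window

def dispatch_to_destination(message: bytes):
    print("delivering payload to destination contract:", message)

class Home:                        # lives on the sending chain
    def __init__(self):
        self.messages = []
    def dispatch(self, message: bytes) -> bytes:
        self.messages.append(message)
        return merkle_root(self.messages)            # a new root after every send

class Replica:                     # lives on the receiving chain
    def __init__(self):
        self.confirm_at = {}       # root -> time after which it may be used
        self.proven = {}           # message hash -> root it was proven under
    def update(self, root: bytes):
        self.confirm_at[root] = time.time() + OPTIMISTIC_WINDOW
    def prove(self, message, proof, index, root):
        assert self.confirm_at.get(root, 0) != 0     # root was actually relayed
        assert time.time() >= self.confirm_at[root]  # optimistic window passed
        assert verify(message, proof, index, root)   # Merkle inclusion holds
        self.proven[h(message)] = root
    def process(self, message: bytes):
        assert h(message) in self.proven             # only proven messages execute
        dispatch_to_destination(message)
```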
Basically, you go to the sending chain, let's say Ethereum, you go to the bridge contract and say: hey bridge, I want to send my native token, WETH for example; it's native to Ethereum, it has intrinsic value, and I want to send it over to Evmos. The bridge will hold that WETH, and it will send a message to the Nomad bridge on the receiving end saying: hey, you should mint.

You should mint new madWETH, a representation token which doesn't have any value in itself. Its value is derived from the fact that we can do the opposite: the user that holds the madWETH can go to the bridge and say, hey, I want to burn that token and transfer my madWETH back to Ethereum, and the bridge will send a message back saying: hey, unlock the WETH. A hack is when all this locked collateral in the bridge is stolen, right.
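The accounting behind that lock-and-mint pattern fits in a few lines. A toy model, my own sketch rather than the bridge's actual code, makes it clear what a hack destroys:

```python
class TokenBridge:
    def __init__(self):
        self.locked_collateral = 0   # native tokens held on the sending chain
        self.minted_supply = 0       # representation tokens on the receiving chain

    def deposit(self, amount: int):  # lock on Ethereum, mint on the far chain
        self.locked_collateral += amount
        self.minted_supply += amount

    def withdraw(self, amount: int): # burn the representation, unlock collateral
        assert self.minted_supply >= amount
        self.minted_supply -= amount
        self.locked_collateral -= amount

    def fully_backed(self) -> bool:  # the property a hack breaks
        return self.locked_collateral >= self.minted_supply

bridge = TokenBridge()
bridge.deposit(100)
assert bridge.fully_backed()
bridge.locked_collateral -= 100      # attacker drains the collateral
assert not bridge.fully_backed()     # every representation is now unbacked
```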
So now all these representations that are flying around are essentially worthless, because they can't be redeemed for the original asset, the asset that has intrinsic value. And I think that's why we're seeing so many hacks of bridges: not only because they are indeed complex systems, and they are, but also because they have so much collateral locked inside them. They're very juicy targets; they make good targets for hackers to pry at and test for vulnerabilities.
Let's talk about the incident. How was it possible for the Nomad bridge to be hacked? We'll talk about two mappings in the Replica contract; two mappings was all it took. The first mapping connects a root to a timestamp, and it says that after that timestamp the root is valid: you can start proving messages against the root, the root of the Merkle tree. So a new root comes in and a new timestamp is stored.

The second mapping is used when proving a message: you connect the message hash, 32 bytes, to the root it was proven under.

But of course, in the code we set up an authentication flow. We say that, of course, a root is not valid if its timestamp is zero; if the timestamp is zero, then it's not valid, because zero is also the default value.

So what the attackers did was forge messages that were aimed at the bridge, telling the bridge: hey, unlock all that collateral. And see, any arbitrary message that has not already been proven looks as if it were proven under the zero root, so they could prove and process whatever they wanted. And that's your 190-million-dollar bug, right there. So why did it come up now? The Nomad protocol had been live for months.

We had an upgrade, and during the upgrade we changed the semantics of the second mapping. We used to store an enum there, as numbers: if the number is one, the message is proven; if it is two, it's processed. So we didn't connect messages to the roots under which they were proven. Even with the zero root active, the old authentication flow made sure we didn't authenticate unproven messages; but then we changed the semantics, and that is why the bug was so hard to find, why our testing didn't find it and our auditors didn't find it. It needed the old state with the new code. Old state with old code: secure. New state with new code: secure. Old state with new code: not secure.
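Here is a simplified Python reconstruction of that bug class; this is my own sketch of the mechanism described above, not Nomad's actual Solidity. Solidity mappings return a zero default for missing keys, so under the new semantics an unproven message looked as if it had been proven under the zero root, and the zero root had been given a non-zero confirmation timestamp during initialization:

```python
ZERO_ROOT = b"\x00" * 32

confirm_at = {ZERO_ROOT: 1}     # pre-initialized during the upgrade: the fatal state
messages = {}                   # new semantics: message hash -> root proven under

def acceptable_root(root: bytes, now: int = 10_000) -> bool:
    ts = confirm_at.get(root, 0)        # Solidity: confirmAt[root], default 0
    return ts != 0 and now >= ts        # the zero-timestamp check from the old flow

def process(message_hash: bytes) -> str:
    # Missing key -> zero root, which acceptable_root happily accepts.
    root = messages.get(message_hash, ZERO_ROOT)
    if not acceptable_root(root):
        raise ValueError("unproven message")
    return "executed"                   # attacker-chosen payload runs

assert process(b"forged, never proven") == "executed"   # the 190-million-dollar bug
```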
Nomad now: the protocol is paused, and we will restart the bridge. That's the hard part: how do we restart an under-collateralized bridge? We'll be sharing more information soon. The TL;DR is that users will be able to collect some of the recovered funds, as recovered funds continuously flow into the bridge, and they will do so fairly.

You'll be able to read more about it in the coming weeks in our blog posts and Twitter accounts. So, what did we learn?

What did we learn from this? I like history. Bismarck said that stupid people learn from their own mistakes, while wise people learn from the mistakes of others. So be wise.
We won't be improvising our way through this; we'll talk about test, observe, engage, and communicate. You can think of them as different layers in the defenses of a castle. Hopefully these layers of defense will stop the attackers from reaching the citadel, the king or queen. It's like the Swiss cheese analogy in security; I'm sure most of you researchers will know that analogy, but I think castles are way cooler than cheese. That's why I prefer castles.

Tests: yes, the bread and butter of every developer. Although there weren't really any best practices before, I think the industry is now slowly aligning on some. So we have unit tests, property-based tests, integration tests, forking tests, and invariant tests, and I want to do a quick rundown through them.
First of all, concrete tests. Super simple: I want to make sure that if I give the function five, I will get 25. Property-based tests are more advanced; they force you to think about the properties of your code, so basically to verify that this function will return the input multiplied by five. Always.
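In code, the speaker's times-five example looks like this; the property-based version here uses the hypothesis library as one concrete choice of tool:

```python
from hypothesis import given, strategies as st

def times_five(x: int) -> int:
    return x * 5

def test_concrete():                 # concrete test: one hand-picked input
    assert times_five(5) == 25

@given(st.integers())                # property test: the claim holds for all ints
def test_property(x: int):
    assert times_five(x) == x * 5
```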
Then we have integration tests, where we want to test bigger-picture features and user flows. Then forking tests, which are something of a web3 specialty; all the other kinds of tests you can find in other paradigms. A forking test could basically be an integration test or a unit test, but it tests against the on-chain state, and this is very important because, as you saw, a bug can surface itself only with the on-chain state. And finally invariants, my personal favorite: these are equations, phrases that should always hold for your protocol. If such a phrase doesn't hold at some point, the protocol should be paused:
it has broken in some way. Invariant testing is a big project, and you do it in two phases. In the first phase you define the invariants; it's a very theoretical phase, and definitely not easy. Then you test the invariants using the available tools, and there are a lot of tools out there. For example, in Nomad the invariant that broke was: all messages that are processed, that is, received, must have been sent.
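Stated as code, that invariant is a one-line set inclusion over whatever message hashes your indexing gives you (an illustrative sketch):

```python
def invariant_holds(sent: set, processed: set) -> bool:
    """Every message processed on the receiving chain must have been sent."""
    return processed <= sent                 # subset check; False means: pause

sent = {"0xaaa", "0xbbb"}
assert invariant_holds(sent, {"0xaaa"})          # healthy
assert not invariant_holds(sent, {"0xdead"})     # a forged message got through
```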
Basically, you can use tools to verify that the storage layout of your upgradeable contracts will not change without you noticing; this is a very common source of vulnerabilities.
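One way to automate that check is to export the storage layout of the deployed implementation and of the proposed upgrade (for example from solc's storageLayout output) and fail CI if any existing slot changed. A sketch, where the exact JSON shape is an assumption to adapt to your toolchain:

```python
import json
import sys

def load_slots(path: str) -> dict:
    with open(path) as f:
        layout = json.load(f)
    # assumed solc-style shape: {"storage": [{"slot": ..., "offset": ...,
    #                                         "label": ..., "type": ...}, ...]}
    return {(e["slot"], e["offset"]): (e["label"], e["type"])
            for e in layout["storage"]}

old = load_slots(sys.argv[1])      # layout of the code currently deployed
new = load_slots(sys.argv[2])      # layout of the proposed upgrade
for slot, var in old.items():
    if new.get(slot) != var:
        sys.exit(f"storage layout changed at {slot}: {var} -> {new.get(slot)}")
print("storage layout compatible")
```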
In my view, you should prioritize unit tests, obviously, then property-based tests and forking tests; these should be your primary focus, and with tools like Foundry I think it's easier than it used to be. Of course, you should always do an audit, although it's not a silver bullet, so don't just do audits. And, of course, always check the storage layout. Always.
Observe, the second pillar. Now we have tested; next comes alerting. If you first hear about the incident from a rekt post and you aren't already up and responding, it's too late: your alerting has failed. You shouldn't wait for a certain Twitter account to tell you that there's a problem with your protocol. This is a solved problem in Web 2. Web 3 likes to reinvent things again and again, but it's a solved problem in Web 2, so go and read the Google SRE handbook; you'll get a ton of input. Also talk with your DevOps engineers.

If they have worked in Web 2 before, they will have a lot of insights for you. You usually start with an objective, the business objective, and then define actionable alerts. Actionable: that's the key word. It means that when alert A is activated, you do B. It should be a very simple if-then clause, and you should have a playbook for every alert: if alert A happens, you do this, this is how you do it, and here's the script you should run. Super simple.
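A playbook really can be that mechanical. A minimal dispatcher sketch, where the alert names and scripts are hypothetical placeholders:

```python
import subprocess

PLAYBOOK = {
    "unknown-root-processed": {
        "action": "pause the bridge and page the on-call engineer",
        "script": ["./scripts/pause_bridge.sh"],        # hypothetical script
    },
    "collateral-ratio-low": {
        "action": "freeze withdrawals and open the incident channel",
        "script": ["./scripts/freeze_withdrawals.sh"],  # hypothetical script
    },
}

def handle(alert: str):
    step = PLAYBOOK[alert]              # an unknown alert should fail loudly
    print(f"[{alert}] {step['action']}")
    subprocess.run(step["script"], check=True)
```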
A nice mental model for alerts in web3, I think, is heuristic versus invariant-based alerts. Heuristics are rules; they require human intervention to understand whether an alert is a false positive or not. Invariant-based alerts are ones where you have an off-chain agent running and continuously checking the invariants of the protocol. They are more complex, but they can be automated, because invariants shouldn't produce false positives: if an invariant alert fires, that means either (a) you didn't define the invariant properly, or (b) the protocol is broken. Either way, it's probably a good thing to pause the protocol.
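A minimal off-chain invariant watcher of that kind is just a polling loop; `read_invariant_inputs` below is a hypothetical stand-in for your own chain queries:

```python
import time

def read_invariant_inputs():
    # hypothetical: query both chains, e.g. locked collateral vs. minted supply
    return 100, 100

def watch(poll_seconds: int = 15):
    while True:
        locked, minted = read_invariant_inputs()
        if locked < minted:                    # the invariant is broken
            raise SystemExit("INVARIANT VIOLATED: pause the protocol")
        time.sleep(poll_seconds)
```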
Now, engage. Testing has failed; alerting, maybe. So now we're engaging; we're in the first minutes of the incident. What do we do? Archilochus told us that we do not rise to the level of our expectations; we fall to the level of our training.

So if you don't have an incident playbook, if you haven't gone through it, you will not be able to act appropriately, and you will get rekt even if your alerting was good. Good incident management means explicit ownership.

Very specific people should have very specific ownership of the management of the incident, and outcomes: every person should know what the outcome of their work during an incident should be. Game day, game day, game day: do that again and again, go through simulations. The entire organization should run game days for these incidents, because they will happen, they will happen; and you shouldn't be reading the incident playbook for the first time during the incident.
Yearn internally has a very nice blog post about this; I highly suggest you look at it and adapt it to your organization.

Communicate. I'm sure everyone is thinking: yeah, let's talk to the users; we have to be transparent; we have to tell them what happened; we have to tell them before they read it on rekt. No, nope. You shouldn't talk to anyone until you talk to your legal team. You don't do a commit, you don't do a tweet, you don't talk to your mother.
Let's see a quick timeline of the first days after an incident. First of all, we talk with our lawyers and inform them of the situation, so they can start talking with law enforcement agencies for asset recovery.

We do the first batch of public communication: we tell people what happened, what we want to do, and what we are planning to do.

We talk with our partners, ideally through a more privileged communication channel like Telegram; maybe we can share more there than we share with the public, because of NDAs and all that, and because as you are losing money, they are losing as well. Then you should talk with a chain analytics firm. That is very important.

I will tell you why: because apparently people suddenly have a change of heart when law enforcement agencies are zeroing in on their real identities. Suddenly they just want to return the funds. And the only way to recover assets is using a chain analytics firm: they will go on and analyze all these Tornado Cash transactions and find correlations, and they will be able to point the law enforcement agencies to centralized exchanges, so those can freeze funds and request data.
My name is E.G. Galano. I'm one of the co-founders and the general manager of Infura, and I'm joined today by my colleague Tim Myers, one of our lead engineers at Infura. You've seen the title of the talk; you might have caught our announcement at EthBerlin a month ago. We're here to talk about decentralizing Infura. It's something that's been on the Twitter waves, on crypto Twitter, for years; something that people ask us about all the time: is Infura centralized? How are you going to decentralize?

Are you going to decentralize at all? This is the talk where we answer quite a few of those questions for you. We gave you a preview at EthBerlin, if you're able to see the recording from that talk; this one is going to go more into the details: the why behind it, the why now, the what now, and especially the technical details that go into decentralizing Infura.
It was four of us from the Infura team; we flew to Devcon. At the time we were a small team within the larger ConsenSys organization, focused on trying to build something that would be useful. So we all went to this conference, knowing that we were going to get up on that stage and say: there's a lot of innovation happening in this space.

We've created this new service called Infura, and it's a traditional SaaS service that you can use to connect to blockchains. And we had a lot of apprehension about that: what would the reception be? Because at the time we were all running our own infrastructure, we were all running our own nodes; there was very much a lot of interest in light clients and verifiability of data, and we were saying: don't run your own infrastructure, trust us to run your infrastructure. And so the reception at Devcon was a little mixed.
That might be the case, but we were going to try it; we were just there to try to provide something useful, and maybe it would be useful and maybe it wouldn't. So we went up on stage and announced that we were going to be doing this, and then, during Devcon 2, the Ethereum network was subjected to the denial-of-service attacks that almost brought down the network. Many of the heroes that helped save the network from that are either in this room or definitely at this conference.

Those were the people that helped mitigate the attacks on the network; Infura's role in that, at the time, was trying to keep up our endpoint, which was already being used to serve MetaMask. MetaMask had launched two or three months beforehand, and thousands of users were already using MetaMask every day to send ether and do some basic interactions with some of the early Maker contracts that were out there.
There were some of the other early experiments around that 2016 time, and we said: well, we can't do anything actively to fix that client; we're not the protocol core developers, and the core developers were solving that problem. But there was already this issue where we were in Shanghai, the Wi-Fi there was awful, even worse than at most conferences, and nobody was able to get access to their infrastructure to update their clients.

We could simplify, or try to eliminate, a lot of the pain for these users. Yes, we'd take on this aspect of centralization, but for the most part it would be beneficial, and we said we were going to keep doing that until it got to the point where we felt it had crossed a tipping point and we needed to try to do things differently. And when you look around the space, there are other points of centralization that have paths to decentralization, things like fiat on-ramps and centralized exchanges, and there's a transition from those to alternatives, truly decentralized ways of doing the same operation.
So that's how we thought of ourselves: Infura started centralized, and we'd see when we'd get more decentralized, and we were going to have a solution for how we do that. The reason is that we had to create quite a bit of proprietary infrastructure to serve the traffic that we've handled over the last six, seven years. What we run today as Infura is far more complex than what we ran with in 2016.

If you had tried to replicate what we had in 2016, maybe a month or two after we launched, you could have done it within a week or two. Now it would take you probably years to try to replicate the systems that we've had to build, because between 2016 and now we went through the 2017-2018 ICO bubble, the 2019 crypto winter, the 2020 DeFi summer, and all of the explosion of NFT growth, and with every single one of those things you might have run into an Infura production incident, something that showed: oh,
this didn't scale exactly right. And every time something didn't scale exactly right, we improved our system, so it forced us to innovate every single time. Take data: back in the day we used to send requests directly to Ethereum nodes, and there was really none of what we call "virtualization", custom services that sit in front and handle the traffic. And we didn't ever want to be in a position of providing the wrong data to you, because that would be really bad; that would be the end of our service, if we ever provided you the wrong data.

And especially in 2021 we ran into this issue. We had tried to be very conservative with how often we updated our clients, because early on the clients were a little bit bleeding-edge and a little bit unstable
the closer you tracked to the latest release, and because of that users were complaining: maybe give us a little more stability, less of the absolutely latest client. That ended up biting us in 2021, when we were running a client that was about six months old and a bug was triggered by a team that I won't mention, because it was an accident and they apologized to us; but it took down Infura for six hours. That was our first multi-hour outage ever, and it's still our worst outage ever, and it really made us think.
We had said that we were setting out to prove that web3 could be served by SaaS: we were going to run a SaaS service, and that was going to be beneficial for the ecosystem; it was going to bring more developers in, bring in more investor interest; a lot of activity was going to come into this space, because people would see: oh, I can actually make a living and support myself running a business in web3, when at the time it wasn't very clear which models would work. And now Infura is not the only game in town; there are dozens, they can't even fit on the screen anymore, and we feel like we've accomplished that goal. We've proved that SaaS works in web3, and so now we want to prove that web3 models can be used to serve SaaS. That's the approach we took to decentralizing Infura, and that's where we are today. And the first step to that is open collaboration and transparency.
We don't have the entire solution in white-paper form that we're going to upload to IPFS for you all to read right now; we're still in that design phase. But we've already started collaborating with other people in the ecosystem on that design; we've been working on it for a year and a half, and this is where we want to start showing you more of the technical details of what goes into it. What does decentralizing Infura look like? People have said: oh, are you just going to create a DAO so that we can have some governance over what Infura does? Is that good enough? Or is it some sort of utility function that we're going to try to have as part of this network? What are the incentives of participating? How do we incentivize the right behaviors? And what people were especially asking was: does participating in this decentralized infrastructure network mean that Infura still owns and controls everything? That's the biggest question we got in the last month since we announced it at EthBerlin, and the answer to that is no; that defeats the purpose of what we're building with a decentralized infrastructure network. It's supposed to be a permissionless network, so Infura is going to just be a partner in a decentralized, permissionless network, one of equals among operators.
Sorry, yeah. So, as E.G. mentioned, we don't have all of the details of this figured out yet; we're still pretty deep in the design phase. But in the next 10 minutes I'm going to try and walk you through as much as we have figured out right now as I can, and then hopefully save about five minutes for any questions you all might have. So first I want to start with just a really high-level overview of what we see as the network participants in this decentralized Infura network.
All of those things that you typically do with a centralized provider; and they'll provide this service in return for a reward, which is what's going to motivate them to want to do this. The next participant we're calling the network status watchers. This party will provide performance and capability reports on the infrastructure providers, so they'll be there to check on them and keep them honest: they'll see that they're keeping up with the head of the chain, they'll check that they're responding quickly and that their answers are accurate, all the things that you would hope an API does. And finally, we have this concept of an ingress node. They act as a kind of intermediary: they will sell the actual network resources to users after purchasing them from the infrastructure providers. Later I'll also show how they're not entirely necessary, but they can provide a UX option that I think is useful. So, more detail on the infrastructure providers.
The first thing to note, of course, is that instead of a single centralized provider as we have now, we'll have many of them, and they won't be run by a single party. Here we just have a simple little diagram: we have Infura, and we'll probably run one. Of course, you'll also see other named providers that you're used to seeing, who we hope will collaborate with us, and you might also have anonymous providers: you never really know who they are, but through the mechanisms of the network you can come to trust that they are going to provide a good service. These infrastructure providers commit to providing capacity on the network, and they'll specify the protocols and capabilities that they support.
We envision this network working for Ethereum, but we also envision it supporting all of the newer networks that Infura has added over the last couple of years, the layer twos and new layer ones. But not every infrastructure provider is going to want to serve all of those, so they can pick and choose and specify what they want to support. They'll also state their capabilities: you might be used to having archive nodes for Ethereum, and sometimes you need that, sometimes you don't, and serving archive traffic is a lot harder, so some providers might do that and some might not. They'll also give an idea of the amount of throughput they're capable of serving. This may take a couple of different shapes, but you can think of it as something like a thousand requests per minute: when someone registers as a provider, they're saying "I can support this much", so that people can then buy that capacity, and, of course, they'll be compensated for doing this.
I want to talk a little bit about what it actually looks like to be a node provider. This is a diagram of what Infura does and what other big centralized providers do. It's complicated; as E.G. said, it has become much more complicated since 2016. To start, of course, you have your blockchain nodes: the ones that are produced by the client teams, that interact with the peer-to-peer network, that sync blocks to verify them. But there's a lot of supporting infrastructure around that. First we have a snapshot system.
If you've ever tried to sync a node, you'll know that it takes a very long time, so we typically take the disk of a blockchain node, save it to our own private secure storage, and then, when we need to horizontally scale out those nodes, we use that as a seed for new blockchain nodes: it's downloaded onto a server and then synced from there, which makes it much faster. You also have various indexers and accelerators.

If you've ever tried to do a big, wide eth_getLogs query for events against an archive node, you'll know it's really slow, so we have special indexers to speed that up, and also for things like NFTs. In front of that you have a load balancer that makes the decision of which accelerators to send your requests to; maybe for just sending a transaction it goes directly to the nodes. And then, in front of all of that, we often have a consistency system that helps make sure that users see a canonical view of the chain.
This helps with reorgs, making those a little less painful for the user, and also makes sure that if users query a block, it doesn't change between requests. Altogether this is quite complicated. There have been several projects out in open-source land that have attempted to help with this; I think most of them are more about just running the blockchain nodes, running several of those, but to really provide good service you need a lot more than that, and we will be releasing our own open-source infrastructure kit that node providers can use to help them participate in this network. Of course, there are also going to be parties that already know what they're doing, and they'll probably just run their own flavor of infrastructure to provide the same service in the end.
Next we have the network status watchers. Again, there are multiple of these, run by different parties. They will periodically test the infrastructure providers by sending them a small volume of requests, measure the performance, and check the correctness of those responses. They'll report all this to a federated status page, so if you're thinking about using this network, you'll be able to go there, look through the different providers, and see how they perform, what their capabilities are, those kinds of things, and also report cases of provably incorrect responses: there are certain requests on the Ethereum JSON-RPC API that you can prove with Merkle proofs rather easily, so in those cases the reports might also be used to inform penalties for the node providers. And we hope, over time, as stateless clients start to improve, that more and more of that API surface will be verifiable. Here's a picture of what it looks like to run a network status watcher.
They will have to run some small amount of blockchain infrastructure themselves, not nearly on the scale that an infrastructure provider would, because they're not serving a high volume of requests, but they need a sort of baseline to compare against: they want to check that the infrastructure providers are near what they think is the head of the chain. And then, of course, they'll have all these periodic tests that they're running to see if the providers are operating well, so they'll interact with the infrastructure providers.
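A single watcher probe could look roughly like this: one standard JSON-RPC call (eth_blockNumber) to the provider, timed, and compared against the watcher's own baseline node. Endpoints and thresholds below are placeholders:

```python
import time
import requests

def block_number(rpc_url: str) -> int:
    r = requests.post(rpc_url, timeout=5, json={
        "jsonrpc": "2.0", "id": 1, "method": "eth_blockNumber", "params": []})
    return int(r.json()["result"], 16)

def probe(provider_url: str, baseline_url: str, max_lag: int = 3) -> dict:
    t0 = time.monotonic()
    provider_head = block_number(provider_url)        # also measures latency
    latency_ms = (time.monotonic() - t0) * 1000
    lag = block_number(baseline_url) - provider_head  # blocks behind our node
    return {"latency_ms": round(latency_ms, 1),
            "lag_blocks": lag,
            "healthy": lag <= max_lag}

# probe("https://provider.example/rpc", "http://localhost:8545")
```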
They'll potentially send things to a smart contract on chain for those provably incorrect responses, and then they will also send their results to a federated status page, where you can view more detailed information. And then, last, we have the ingress node. As I mentioned, this kind of serves as an intermediary between the end user and the infrastructure providers. An end user will send their JSON-RPC request to an ingress node, and that ingress node may then forward it to multiple different infrastructure providers. Maybe they generally send it to one,
and if that goes down they have a fallback. Maybe they round-robin. Maybe they choose to actually send it to all of them, take a quorum of the answers, and return that to the user. This will be something that the ingress node can choose how to do, and one reason for having this ingress node is that you can use it to give the same web2 UX as the centralized providers today; I think there's a reason that Infura and centralized providers have been so successful.
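The quorum option, sketched: fan the request out, count identical results, and only answer when enough providers agree. All URLs here are placeholders:

```python
import json
from collections import Counter
import requests

def quorum_call(provider_urls, payload, quorum=2):
    votes = []
    for url in provider_urls:
        try:
            r = requests.post(url, json=payload, timeout=5)
            votes.append(json.dumps(r.json().get("result"), sort_keys=True))
        except requests.RequestException:
            continue                     # a dead provider just loses its vote
    if not votes:
        raise RuntimeError("no provider answered")
    result, count = Counter(votes).most_common(1)[0]
    if count < quorum:
        raise RuntimeError("providers disagree: no quorum")
    return json.loads(result)
```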
In addition to that, though, an ingress node might purchase resources from several node providers and then register itself as a node provider with the sum of all of that capacity, and then an end user can go and pay for that provider in crypto, and you have a fully crypto-native experience. So there are options here; the ingress node lets you do a lot of interesting things with combining other providers.
The on-chain registry is what we're typically calling our smart contract that coordinates all of this, and the main way to think about it is that it's basically the data structure of your node providers. We have a very simple example of what that kind of looks like. This is just one detail, but you have your named providers and the protocols that they support: as Infura, maybe we're supporting the five that are listed here, but maybe operator A is only supporting Eth and Polygon. This is also where you'd be able to see things like maybe what region they're in, whether they support archive nodes, things like that.
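Viewed as plain data, the registry example amounts to something like this; the names, networks, and fields are illustrative, not the real contract schema:

```python
REGISTRY = {
    "infura":     {"protocols": ["eth", "polygon", "optimism", "arbitrum", "avalanche"],
                   "archive": True,  "region": "us-east", "throughput_rpm": 1_000_000},
    "operator-a": {"protocols": ["eth", "polygon"],
                   "archive": False, "region": "eu-west", "throughput_rpm": 60_000},
}

def providers_for(protocol: str, need_archive: bool = False):
    return [name for name, p in REGISTRY.items()
            if protocol in p["protocols"] and (p["archive"] or not need_archive)]

print(providers_for("polygon"))                  # ['infura', 'operator-a']
print(providers_for("eth", need_archive=True))   # ['infura']
```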
Now let's look at the timeline of how this all operates. First, of course, the smart contract is deployed. Some amount of time later, your infrastructure providers come in and register: they specify the protocols, the capabilities, and the throughput that they'll support. Some amount of time later, you'll have network watchers register.
So in the center you have the on-chain registry, and you have your network watchers, your infrastructure providers, your blockchain API consumers, and your ingress nodes, which all register with that on-chain registry. And then, once that's all set up, your blockchain API consumers can send their requests to the various infrastructure providers that they've chosen, or to an ingress node, which helps them do that. And yeah, we hope, once we have that all figured out, that it'll give us a self-sustaining, reliable, and robust network of infrastructure providers that's built to really serve the high throughput that Infura, or other centralized providers, can serve today, for all of the blockchain API requests that you need; but, unlike what we have now, it'll remove the single point of failure. We hope to do this out in the open as much as possible and really follow that collaborative web3 spirit.
This is, we hope, really going to improve reliability for the users and get a lot closer to that 100% uptime dream, but also, which I think is important, still provide the ability to have that really easy web2 UX if you want it, while we can also have the crypto-native one. And yeah, that's it. We're seeking strong web3 infrastructure providers to work together with us; if you're interested in joining and participating in this network, I encourage you to snap this QR code.

Thank you so much. Please raise your hand if you have questions, so our volunteers can hand you the microphone.
...It definitely improves censorship resistance, because it's permissionless: anybody can participate in the network by meeting the criteria to join, which is a technical criterion of meeting a level of performance. If you can meet a level of performance, you can participate in the network; we're not going to be selecting who can participate.

They would be penalized by the network: there would be a financial incentive to provide the minimum level of performance, and they would get penalized by the network if they're not meeting that level of performance. Additionally, that network status watcher, that status monitor we were talking about, is continuously checking and reporting on both the capabilities of the network and the performance. So you can see: here's a new provider that just joined; they've only been part of the network for seven days; they support seven networks, and of those seven networks...
Can you give an example of what the centralized infrastructure currently looks like, in terms of the number of servers and bandwidth and so on, and what you expect it will be when it's decentralized? Will there be overhead? Will it need more servers for the same performance, that kind of thing?

Dozens of subsystems; hundreds of different instances or servers providing different types of functionality. For example, we have a separate subsystem, and all it does is index event logs, and when you query events, like ERC-20 token events and things like that, it's pulling from that subsystem. And then we have this consistency system that tries to keep all of that in sync when you're querying one subsystem versus another.
So that's the challenge that we've had: we built that as quickly as possible to solve problems, and it's very proprietary and cloud-specific, and we've been working on how we retool it so that it's something that can be not just open source, something that somebody else can run, but something that can run without a team of dozens of operations people. And so what we're going to release in the node kit is what we think is a minimum level of functionality and performance, where it's better than running a bunch of nodes behind a load balancer, because it has some of these indexing and performance-optimizing systems, but it's something that can be taken on by, hopefully, an individual, but at the very least a small team, a much smaller team than the current Infura team.
That's one of the things that we've been considering as we're going through the design. We're leaning towards: don't start a new chain for that; try to use something that exists. And that goes back to why we're doing it now rather than in 2016, when people were talking about things like micropayments between providers, and do you use state channels for that, and all these other things; we would have had to devote a significant amount of our research just to that problem.
Okay, so, multi-part. Who decides who can be an operator? This form to sign up is to participate in a permissioned beta. Similarly to how other people developing new networks do it, we're going to have to start with a small group to test and iterate with, and for that we're looking for people that already have experience with the infrastructure and that we can reach out to and get feedback from pretty quickly on the technical front. But once we actually launch this network, it's fully permissionless.
The criteria are what we're defining as a protocol in the design of this: this would be the amount of resources that you would have to provide, or the minimum level of performance. So you have to support at least one network, you have to support at least this level of performance, and then, as long as you can meet that, it's open to anybody.
Did you say the indexing nodes? No, the ingress nodes. Yeah, they'll definitely be optional. I can't quite get back to the slides, it'll take too long, but you can still have a consumer who will go directly to the contract to purchase resources and go directly to one or multiple node providers that they choose. The ingress node is kind of an option where, if you don't want to go to the trouble of doing that, maybe you want to use one that allows you to pay in fiat.
What we do is: first, we send transactions to a sequencer, and the sequencer already gives a state. That means the state is final as far as you trust the sequencer. The first version is going to be a centralized sequencer, and we will decentralize it later on; but if you trust the sequencer, the transaction is final, and you have the guarantee from the sequencer that the transaction is going to be mined and processed.

The sequencer will collect transactions and, at some point, will send these transactions to the blockchain. At this point the state is final and safe. We don't have any proof yet; it's just that the transactions are set, and you have the guarantee that those transactions are going to be processed in that order, and because they are on chain they cannot be changed. So you know that they are final, and you don't need to trust the sequencer anymore.
You know that these transactions are final. In the background, in parallel, the prover takes all these transactions and proves the implicit state. This is a state that everybody can compute, but it's not on chain, because on chain there are only the transactions: the data availability, if you want. The transactions are going to be processed and converted into a real state, and this is proved by the prover.
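The life cycle just described can be summarized as a small state machine; the state names below are my own shorthand, not Polygon's exact terminology:

```python
from enum import Enum

class BatchState(Enum):
    TRUSTED = 1      # sequencer accepted it: final if you trust the sequencer
    SEQUENCED = 2    # transactions posted on L1: the order can no longer change
    VERIFIED = 3     # validity proof checked on L1: state final, withdrawals open

class Batch:
    def __init__(self, txs):
        self.txs, self.state = txs, BatchState.TRUSTED
    def post_to_l1(self):
        self.state = BatchState.SEQUENCED
    def prove(self):
        assert self.state is BatchState.SEQUENCED
        self.state = BatchState.VERIFIED     # the prover's job, done in parallel

batch = Batch(["tx1", "tx2"])
batch.post_to_l1()
batch.prove()
assert batch.state is BatchState.VERIFIED
```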
This is the big difference with optimistic rollups. In an optimistic rollup you need to wait for somebody to challenge the state; in a zk-rollup, once the prover has asserted this state, you know for sure that it is valid, and it's at this point that the user can withdraw funds. So the most important piece, the differentiating part of a zk-rollup, is the prover. The prover is a zero-knowledge proof, a validity proof if you want, that validates the transactions. Which transactions? Ethereum transactions. It takes a state, takes a set of L2 transactions, computes the new state, and validates that this state is valid. So how is the prover built? What's inside the prover? The prover is a set of technologies, but the way we built it, we have a circuit, a traditional circuit, written in PIL, a specific language we built for the zkEVM. It's mainly a processor, a generic processor.
It's generic, with some specifics, but it's a processor that's built with this zero-knowledge technology, and on top of this processor we are running a program; we call it a ROM. It actually emulates Ethereum: this program is the one that is actually taking the transactions, analyzing the transactions, checking that the signature is valid, discounting the balances, checking the fees, deploying the smart contracts, executing the smart contracts, and doing exactly the same thing that Geth does, that any Ethereum node does: just processing these transactions. All this goes into the prover, and it's this proof that is verified on chain. If you zoom in on the processor: we have, mainly, this ZK processor. This processor is very tailor-made for the zkEVM, so it's not a fully generic processor; there is a part that's generic, but there are specific pieces that are made explicitly to be optimal for running the EVM program, the zkEVM program.
This processor has a RAM; it has a ROM, which contains the program that's executed; it contains the storage, because in the EVM you should be able to store values and get the values back; it also has a kind of sub-processor that handles all the binary operations, the ANDs, ORs, XORs and so on; and it has a module for arithmetic. The EVM works in 256 bits, so this arithmetic circuit actually does all these operations over 256 bits. And then it has the hashing, the Keccaks and other hashes, which are also inside this processor. On top of this processor there is this ROM. The ROM is written in assembly, a specific assembly for this processor, and it contains all the logic:
actually, it contains all the Ethereum logic that we use when we are processing. Here I want to show you just a snippet of how this code looks. This is just, for example, the opcodes DUP1 and DUP2, but we have all the opcodes here; we have implemented all the Ethereum opcodes at this point. And then, once we run this, we need the cryptographic prover. What we do, mainly, is use STARKs, with a very optimal way to compute the proving systems.

We are using Goldilocks, technology from our colleagues at Polygon Zero, which makes building these proofs really fast, and it's designed for recursion, so the prover can aggregate many proofs. At the end this is a STARK, and what we're doing at the end is verifying the STARK with a SNARK. So on Ethereum we are just verifying a normal Groth16 or PLONK proof; it's just a circom
circuit at the bottom of that stack. So this is the stack, the cryptographic stack, for verifying all of that. Okay, so let's cross fingers and try the demo; let's see if it works. If it doesn't work, you can try it yourselves: you can go to public.zkevm-test.net and test it. You will see that it's very simple. The demo that I'm going to do: first of all, I'm going to bridge.
Let me just switch, okay. So the first thing I'm going to do is bridge. I'm on the Goerli network; I have an account here that already has 3 Goerli ETH, and it's just a brand-new account that I created before this. So the first thing I'm going to do is transfer, I'm going to bridge, well, 0.25 ether.

This is the maximum that we allow here in this testnet, just to protect against denial-of-service attacks, and we are just transferring to the layer two. So I'm on Goerli right now; let's see if Ethereum works. Okay, here we are; let's bridge it.

Okay, so let's sign. Let me just modify the gas fee so that it goes faster; it's Goerli, so you never know. Save, okay, and then just send this transaction.
So we just deposited; this transaction is mining on Goerli. We need to wait a little bit so that this transaction is final, so that this transaction is included. Okay, right now this is already done. What we have done is put this transaction in a Merkle tree, and then the root of this Merkle tree of all deposits is passed as part of the state of the rollup to the sequencer.

Actually, there has not been any special transaction on chain, because the sequencer already takes that into account. Okay, so now let's finalize this. What finalize does is an L2 transaction to collect these funds, this ether, on the layer two. So when I click here, the first thing it asks me is to switch the network, so I just switch the network.
I'm just going to use an example, a very simple smart contract. There it is, and now I'm going to deploy it to the layer two, so I'm going to connect this to MetaMask. Okay, I'm just using this account, and then, well, I don't know if I'm connected to the last one; let me just connect here. This is the one that I just used; I have the 0.25 ether here. So let's deploy this, deploy this smart contract.

And I see it here, right here. Okay, so I just deployed that on layer two, like on any other network. So what's going on underneath? Let's take a look.
This is the rollup smart contract on the layer one. Okay, and here we see two kinds of transactions. One kind is the sequenced batches, like here, where you can see in the data all the transactions that we are sequencing. But in parallel to this we have the verified batches: the prover is generating proofs that are actually validating these transactions, and here is where all the magic happens. Okay, here in the blockchain you cannot see much, but here is the proof; actually, this is the Groth16 proof, in this case, that validates all the transactions that we have been processing.
Okay, so that's it up to here. Well, maybe I can give you a bonus track. We can go, for example, to Uniswap: a Uniswap POC is already deployed, as is, without compiling and recompiling anything; it's just a normal Uniswap, deployed on the layer two. And here we can do, for example, a swap. Let me just change the account to the first one, so that I have some tokens to exchange.
It's fetching the price; I'm just doing the swap, confirm swap, and I sign the transaction. So here we have it: we have deployed the full Uniswap, version three, on the layer two, and all of this is verified by the prover. Just confirm that; okay, now this is confirmed. Well, the Uniswap interface polls a bit; it takes maybe some 10 seconds or so to realize that the transaction has been mined, but that's mainly the thing. Okay, so let me go on.

Nothing fancy, right? That's the cool thing, and that's the strange thing, even for me, explaining this: I just feel that I did a demo of Ethereum. But this is the interesting part: all of this is validated in the prover. All these transactions that are really complex, all the Uniswap transactions and everything, are validated inside the prover, and this is the zkEVM and the main importance of this design: you don't have to recompile anything; you just take the code, exactly the same code.

You don't have to rewrite anything; you don't have to learn anything; you can use exactly the same tooling, the same language, the same gas model. There's no difference for developers: they should not notice any difference between deploying and working on Ethereum and on the zkEVM. The only difference should be the gas price and the quantity of transactions that you should be able to deploy. Please test it. We have been running it for this week already; we have more than 1,000 accounts.
Most of them are just deploying transactions. Some of the projects have already tested it without any big issue; we also have some reports of some bugs that we haven't fixed yet, and we will continue working. This testnet is a little bit like a baby: it was born last Monday, but it's going to get stronger, and this is the previous stage before the mainnet. Okay. And what's the limit of the scaling? This we don't know yet, but the bottleneck
is not going to be in the prover; it's going to be, maybe, in the data availability, maybe in the sequencer, in the other pieces. Why? Because the prover can be parallelized, and that's the important part. Actually, for example, we are running seven provers at this point, because this week, you know, there are many transactions; some of the provers had been stopped, so we are just trying to catch up with some of these transactions.

Well, the cost right now is less than one cent per transaction, and this is at AWS cost, which is probably the most expensive cloud service in the world. And there are also a lot of optimizations coming: with GPUs we believe we can improve by one order of magnitude, and there are other improvements that we are working on. The prover is not the bottleneck anymore.
What's missing? Well, not much. We are fully compatible; we are running all the opcodes; everything works as Ethereum does. There are some things that we are still implementing, which are not implemented yet, and I'm just listing them here, but everything else is there. What is missing: the pre-EIP-155, original Ethereum transactions. These are used mainly for deploying contracts that have the same address on many chains (Gnosis Safe, for instance); they use these primitive transactions that don't include the chain ID, and we are implementing those. And then we are not yet supporting the SHA-256, Blake, and pairings precompiled smart contracts, but this is work in progress; all of them are doable, and we will work on those in the coming months.
What else are we working on? We are also working on aggregation of the proofs. Right now we are running one proof per batch, but we need to aggregate all these proofs into a single transaction, and with this into a single proof. We are working on this proof; it's not that much work, because we have all the recursion already done, and it's just putting them together. That's a piece that we need to put there.

And we're also working with EIP-4844, proto-danksharding. This is clearly the future for scalability, and we are already working on that; we are very excited about it. I have to recognize that when I read this EIP I was very skeptical at the beginning, but it's really the way to go. We can implement it, and it will go even faster; it's even better than what I would do. It's just that you
need to go a little bit deep to understand this EIP. It's really interesting, and I'm very excited about it for the scaling. On testing: we're running the Ethereum test suites, and right now we are at 97.7% of passing all these tests. There are edge cases that we are still working on, but I'm sure that very soon we will be covering 100% of the Ethereum tests. Roadmap: of course, we just launched the public testnet.
We need to audit, and we will launch when it's ready. We want to be safe enough; being 100% safe is going to be impossible, but we want to be responsible, we want to audit this very well, and when we feel comfortable, then we will launch. A reminder: everything is open source. You can take a look, you can review; everything is in the GitHub repositories. And yeah, the zkEVM is no longer a myth.
Thank you so much. We have a couple of minutes for questions, so if you have a question, please raise your hand.
The EVM is not a frozen object; it's living, and it's evolving with new EIPs, and we are going to see possibly big changes, like the EOF or other things like that, and they will be way easier to implement for core devs than for you. So are you worried that some possible future change will be hard to translate into circuits?
We need to see which changes those are, and once I see the changes I will tell you. But there is one thing, because this is one of the questions that I receive: sometimes it's more about the upgradability of this. What happens if the zkEVM upgrades, and then what happens with the rollup? It will upgrade; and you want a decentralized system. But here we need to understand, as a community, that the EVM at this point is evolving because it's a work in progress.

At some point, though, this will need to be frozen. I don't know when and how, and I'm not talking about the zkEVM, I'm talking about the EVM. So the EVM, if you want: I believe, I hope, and talking with people from the Ethereum Foundation, they already think very much about that.
Here we have Carlos in the room, who's responsible for testing; you can ask him. But they are very, very complex ones: it's a CALL of a STATICCALL, and then do a SELFDESTRUCT, and then do whatever, you know. These are very, very edge cases in the testing, but you have to look at them, because you never know, and it's important; that's why those tests are there. But they are really, really edge cases at this point, and complex ones.
Hi, thank you for the presentation. When you were showing on Etherscan the contents of the batch, you showed that the data encoded a set of transactions. Is there any way to decode them, to see what exactly is in the batch?
Yeah, it's mainly the transactions, one after the other. We have some internal tooling just to take a look at that; well, actually, it's open source, you can check the repositories, but we are building it, and in any case it would not be difficult to interpret: again, it's just a format with the array of transactions.
Cool, go ahead. Hey, thank you; can you hear me? Yep. Okay, first of all, congratulations, this is amazing. When you launch, let's suppose it's early next year: obviously it's a very complicated system; if you find some bug or something that needs to be addressed, what can you do in that situation?
Bootstrapping decentralized systems is not an easy topic. Here we have the experience of Hermez 1.0 in our team, and here you do some, I would say, nasty tricks.

Maybe you add upgradability: you can upgrade the smart contracts, maybe with a timelock. And what you can also do is limit the withdrawal flow that's leaving the smart contract, so that if you are working with less than a certain amount, you are fully decentralized.

But if you want to run fast and run bigger numbers, then you have the option to go centralized, or just wait. So there are some tricks for bootstrapping, but these are just temporary solutions until we feel comfortable; at some point the rollup should stand alone. We are building a decentralized system, and it should be safe enough. So that is how we are managing that in the beginning.
R
It is very exciting to be here. Devcon is just such an amazing event, and you know the energy here is incredible, and I'm just excited to be able to talk to you today. So yeah, I'm Harry. I am the CTO and one of the co-founders of Offchain Labs, the company that built Arbitrum, and today I'm going to walk you through a bit of the history of Arbitrum and how we got to where we are today, and then talk about where we're heading and what the future looks like.
R
So Arbitrum actually has a really cool history, originally coming up in 2014, before Ethereum had even launched. One of our co-founders, Ed, had the idea of: hey, this thing is really cool, but it doesn't seem like it's going to scale. Now, at the time it was very early, and the idea ended up sort of forgotten to history for a little while, until 2017, when we started working on it. At the time we were academics at Princeton University, and we thought: hey...
R
This
is
really
cool
and
interesting,
and
it
seems
like
there
might
be
something
that'll
that'll
really
grow
here
and
we
started
out.
We
wrote
a
research
paper
back
in
2018,
it's
crazy
because
it
feels
like
yesterday
to
me,
but
it's
almost
ancient
history
in
in
crypto
time
and
then
managed
to
somehow
and
I
still
not
completely
sure
how
to
found
a
company
around
it
and
share
it
with
the
community.
R
So
at
that
point
we
had
this
idea.
We
were
building
back
in
February
2020,
just
around
East
Denver.
We
actually
had
the
first
Arbors
from
testnet.
It
was
a
lot
different
than
it
is
now.
Actually
we
started
with
technology
that
looks
a
lot
like.
It
looks
more
like
an
included
features
like
our
Nova
chain
has
today,
which
I'll
mention
a
little
bit,
but
it
was
not
a
roll-up.
R
It
had
a
committee,
it
had
sort
of
off-chain
agreement,
but
it
had
a
lot
of
features
like
it
has
now
and
doing
application
specific
change.
So,
interestingly,
sort
of
a
lot
of
where
we
are
now,
it
was
too
early
then,
but
there
was
sort
of
a
lot
of
ideas
that
have
been
bouncing
around.
That
I
think
are
now
sort
of
coming
out
as
really
popular,
and
then
we
went
from
there
we
launched
test
Nets.
We
figured
out
arbitrary
contract
deployment.
We
figured
out
arbitrary
messaging.
R
We
actually
got
our
main
net
out,
and
this
was
back
in
May
of
last
year
we
we
launched
our
main
net
to
developers
only
we
had
a
few
months
there
and
then
in
August.
So
over
a
year
ago,
now
we
launched
on
mainnet,
and
that
is
how
we
we
ended
up.
You
know
building
arbitrum
one
getting
it
out
there.
It's
been
an
incredible
time
since
then,
but
the
building
never
stops.
R
You
know.
If
you,
if
you
stop
in
crypto,
you
might
as
well.
You
know
you're
done,
nothing
is
ever
sort
of.
You
know.
Research
always
continues.
No
technology
today
will
be
sort
of
it'll
all
look
ancient
in
five
years
and
kind
of
arbitrary
when
we
launched
already
looks
ancient
compared
to
arbitrum,
where
it
is
today,
because
this
last
year
we
worked
on
our
Nitra
upgrade,
which
I'm
going
to
be
talking
about.
R
Some
today
we
launched
our
Nova
chain
which,
as
I
said,
sort
of
actually
takes
a
lot
of
ideas
from
our
original
paper
and
sort
of
the
world
is
now
I
think
in
a
position
where
they
make
a
lot
more
sense
than
they
did.
Then
we
went
through
a
really
sort
of
challenging
and
interesting
thing,
which
was
the
live
upgrade
of
our
arbitrim
chain
to
an
entirely
new
technology
stack,
and
it
was
really
interesting
to
us.
R
We
did
it
kind
of
a
couple
weeks
before
the
merge
and
it
was
essentially
our
merch
and
that
certainly
not
not.
R
You
know
the
same
amount
of
coordination
necessary,
but
the
idea
of
kind
of
taking
a
technology
stack
and
somehow
well
with
a
running
system,
actually
kind
of
replace
the
technology
underneath
it
with
a
newer
version
with
a
more
powerful
version
that
allowed
it
to
be
cheaper,
faster,
all
sorts
of
good
stuff
which
has
made
it
for
a
a
really
exciting
year
for
us,
so
Arboretum
one
just
to
kind
of
jump
into
like
well.
What
is
this
thing?
I've
been
talking
about.
R
I walked you through the history, but it's nice to dig in a bit more and talk at a high level: what does Arbitrum provide?
R
Why
should
you
care-
hopefully
most
people
here
already
know,
but
it's
certainly
nice
to
say,
low-cost
transactions,
security
rooted
in
ethereum,
so,
inheriting
security,
rather
than
trying
to
have
independent
security,
which
is
really
powerful,
which
is
what
Roll-Ups
give
us
and
really
really
really
full
compatibility
with
ethereum,
which
is
the
thing
that
only
optimistic
roll-ups
at
least
today
can
provide
and
means
that
sort
of
tooling
languages,
everything
just
works,
and
it's
been
really
huge
for
developer
adoption.
R
So
just
a
couple
stats-
and
this
is
actually
slightly
out
of
date
because
the
market
is
down
and
this
slide
was
made
before
the
so
I
think
it's
more
2
billion
plus
now,
although
the
amount
of
value
in
East
I
think
has
increased,
so
you
know
from
it
depends
how
you
look
at
it,
but
arbitrim
is
going
strong.
We
have
a
huge
amount
of
adoption,
huge
amount
of
projects,
huge
amount
of
users
and
it's
just
been
sort
of
incredibly
thrilling
to
watch.
R
We
have
a
huge
ecosystem,
we
have
kind
of
native
apps.
We
have
ethereum
daps
that
are
kind
of
have
been
on
L1
and
started
out
there,
but
then
it
migrated.
R
We
have
tons
of
infrastructure
support
from
all
sorts
of
different
companies
shout
out
to
to
tenderly
that
just
launched
our
full
arbitrum
support
recently,
which
was
very
exciting
and
arbitrim,
is
becoming
a
major
part
of
ethereum
two
kind
of
interesting
charts
here,
one
being
oh
and
it's
not
rendering
very
well,
but
you
can
look
on
etherscan
yourself,
one
being
that
arbitrum.
R
The
arbitrary
sequencer
is
one
of
the
biggest
gas
vendors
on
ethereum
that
we
are
now
sort
of
using
a
significant
amount
of
L1
resources
in
order
to
power
the
roll-up,
which
is
really
exciting
and
also
very
exciting-
that
that
will
hopefully
go
way
down
with
4844,
which
I
won't
talk
about
too
much
in
this
talk,
but
is
extremely
exciting.
R
That
and
the
other
thing
being,
the
amount
of
eth
is
just
in
arbitrum
that
our
Bridge
escrows
funds
that
are
deposited
in
the
system
and
the
amount
of
funds
in
that
bridge
is
a
very
kind
of
very
significant
chunk
of
of
eth,
which
is
crazy,
exciting
and
then
the
other
thing
to
look
at-
and
this
is
sort
of
just
you
know
very
exciting
and
also
has
been
kind
of
vastly
improved
with
Nitro-
is
how
cheap
it
is
to
use
I
think
we're
on
average
coming
in
at
somewhere
between
kind
of
10x
and
50x
price
reduction
compared
to
ethereum.
R
This
has
gotten
better
with
Nitro
as
we
added
compression
L2
costs
are
quite
low
because
of
all
of
the
efficiencies
of
the
system
and
the
fact
that
kind
of
the
gas
limits
can
be
quite
High,
and
so
you
can
see
we
have.
We
have
up
here
just
L2
fees,
which
is
a
very
nice
site
to
look
at.
You
can
see
that
we
are
coming
in
sort
of
just
a
couple
cents
to
to
transact.
R
In
the
simplest
case,
we're
doing
heat
transfer
and
even
smart
contract
execution
is
very
cheap
and
easy.
So
I
mean
that's.
R
One
thing
always
to
look
at
is
essentially,
if
transfer
it's
easy
to
be
cheap,
but
also
to
be
cheap,
with
with
contract
execution,
where
users
can
use
a
lot
of
gas,
and
it
still
comes
in
cheap,
is,
is
really
really
important
and
I
think
one
of
the
areas
that
I'd
say
is
sort
of,
especially,
of
course,
this
presentation
and
the
last
one,
obviously
for
people
who
saw
both
are
covered
covering
kind
of
you
know,
similar
ideas
and
I
think
this
is
one
place
to
distinguish
optimistic
rollups
is
that
we
can
do
a
lot
of
execute
a
lot
of
computation,
a
lot
of
execution
at
very
low
cost
and
just
sort
of
for
Roll-Ups
in
general,
and
this
is
sort
of
General
across
across
DK
Roll-Ups
across
optimistic
roles,
but
I
think
it's
really
important
to
talk
about
is
just
looking
at
sort
of
what
sort
of
what
Roll-Ups
can
give
you
versus
side
chains.
R
So
with
Roll-Ups
for
daily,
we
were
using
ethereum
and
we're
depending
on
ethereum
for
data
availability.
All
transactions
get
posted
unlike
side
chains,
where
kind
of
it's.
The
data
is
separate.
Maybe
they're
posting
headers
back,
but
it's
an
independent
system.
We
have
a
bike
cut
for
a
second
but
I'm
back.
R
We
have
L1
to
L2
bridging
that's
enforced
through
the
security
of
the
roll
up
and
that's
sort
of
one
of
the
most
important
features
of
Roll-Ups
to
me
is
the
fact
that
the
brick
is
part
of
the
system.
There's
no
sort
of
independent
multi-sig,
no
independent
Bridge
mechanism.
The
bridge
is
the
roll-up
which
is
really
fundamental,
as
opposed
to
sort
of
needing
some
sort
of
messaging
layer.
R
That's
independent
from
the
security
system,
Roll-Ups
use
fraud,
proofs
well
or
fraud
proofs
for
optimistic
Roll-Ups
fraud,
proofs,
of
course,
which
means
that
anybody
can
the
correctness
of
the
chain,
and
it
means
that.
Basically,
you
don't
need
two-thirds,
honest,
you
don't
even
need
half
honest.
R
There are some really interesting trade-offs here, and it's a very active design space that I think the whole rollup community is exploring: ways to make data cheaper. Posting to Ethereum is the expensive part, and so being able to move that off chain by having a committee, where you're not worried about a majority and only worried about a couple of members of that committee being honest, is a really powerful thing. It's what allowed us to offer a platform that could really be competitive with other non-rollup systems, which don't have the cost of posting to Ethereum.
R
It's
an
interesting
trade-off.
It
does
make
some
sacrifices
in
security.
Maybe
we
think
about
it.
Is
that
we're
not
competing
with
roles?
We
think
if
you
can
afford
a
roll
up,
you
should
use
it.
What
we're
competing
with
is
other
solutions
that
don't
have
that
level
of
security
and
try
to
provide
something
more
secure
than
they
are
now
just
a
little
bit
about
sort
of
how
Nitro
works
and
sort
of
how
to
imagine
the
system
a
lot
of
times
with
Roll-Ups,
because
it's
so
compatible
it's
kind
of
a
black
box.
R
You
have
the
RPC,
you
point
at
the
C,
you
use
it
just
like
ethereum
and
that's
it,
but
it's
really
I
think
you
know
valuable
to
really
understand.
What's
going
on
and
really
important,
so
we've
split
it
up
into
a
number
of
steps,
and
this
is
we've
done
this
a
lot.
How
do
you
explain
this
thing?
It's
complicated.
There
are
a
lot
of
different
pieces.
I
think
the
latest
iteration
is
one
that
we're
pretty
happy
with.
R
So
we
start
off
by
talking
about
sequencing,
and
this
is
probably
you
know.
The
role
of
the
sequencer
is
one
that's
sort
of
most
well
known
in
how
Roll-Ups
work
sequencer
is
a
is
a
is
a
node
that
orders
transactions
it
receives
transactions
in
it
puts
them
in
an
order,
and
then
it
runs
them.
R
It
evaluates.
Take
transition
function
and
it
produces
blocks
well.
That
basically
sounds
like
how
ethereum
works
if
you
replace
sequencer
with
miners,
although
for
the
sequencer
there's
one
entity
rather
than
a
lot
of
them,
the
interesting
part
is
in
parallel
or
slightly
trailing
The
Ordering
of
transactions.
The
sequencer
is
also
batching
and
compressing
those
transactions
and
posting
it
to
the
L1
chain.
Now
one
key
thing
to
understand
here
is
that
the
sequencer
is
not
attesting
to
State
routes.
The
sequencer
is
not
making
claims
about
what
the
result
of
executing
those
transactions
are
like.
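A toy model of the two sequencer duties just described (ordering plus immediate execution, and batching plus compression for L1); everything here is an illustrative sketch, not the Nitro code:

```python
# Toy model of the sequencer role: order and execute transactions as
# they arrive, and separately batch + compress them for posting to L1.
# Names and the use of zlib are illustrative assumptions.
import zlib
from collections import deque

class ToySequencer:
    def __init__(self, state_transition):
        self.stf = state_transition    # pure function: (state, tx) -> state
        self.state: dict = {}
        self.pending: deque = deque()

    def receive(self, tx: bytes) -> None:
        # Ordering + execution: the tx takes effect immediately
        # (this is the "soft finality" promise discussed below).
        self.state = self.stf(self.state, tx)
        self.pending.append(tx)

    def make_batch(self) -> bytes:
        # Batching: concatenate and compress, then post the blob to L1.
        # Note that no state root is attested, only the ordering.
        blob = b"".join(self.pending)
        self.pending.clear()
        return zlib.compress(blob)
```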
R
You could post an invalid transaction, certainly, but it would just be ignored by the state transition function and rejected; the sequencer would be out some money, but it wouldn't have any other consequence. So that gets posted to the L1 chain and then picked up by the actual rollup security mechanism, which I'm not going to be able to get too much into in this talk, but there's some great material about it online.
R
So what does this mean in terms of finality? That is one of those important questions, because you have these systems, and when MetaMask says "okay, the transaction's in a block", that's not enough. Finality is this really important question: when can your transaction no longer be reversed?
R
It's
one
of
the
things
that,
with
with
the
with
the
merge,
has
changed
a
lot
for
ethereum
in
very
interesting
ways
and
sort
of
arbitrum
has
its
own
notion
of
basically
how
you
can
tell
when
a
transaction
is
final,
and
so
we
split
it
up
into
three
phases.
We
have
soft
finality
where
the
sequencer
said
it's
the
order.
If
the
sequencer
is
honest,
that's
what
the
order
is,
but
you're
trusting
the
sequencer
and
so
for
many
applications.
This
is
this
is
actually
pretty
good.
R
Currently,
the
sequencer
is
being
run
by
us
long
term.
The
sequencer
will
be
decentralized
over
a
number
of
parties,
no
need
to
trust
it,
but
you
can
and
a
lot
of
people
do
after
that
is
kind
of
the
really
important
Mark
though,
which
is
when?
Can
you
actually
not
trust
the
sequencer?
Because
trust,
if
you
just
trust
it
the
whole
time
it's
a
centralized
system
and
for
particularly
for
kind
of
exchanges
for
anybody
doing
cross-chain
stuff?
R
You
really
don't
want
to
introduce
any
trust
assumptions,
and
so
for
that,
basically,
the
idea
is
that
after
the
sequencer
has
posted
a
batch
on
chain,
the
order
is
set
once
the
transaction
posting
that
batch
itself
has
L1
finality.
The
system
is
completely
deterministic,
so
any
node
off
chain
can
get
a
guarantee
of
what
the
current
state
of
the
chain
is
based
on
batches
posted,
and
so,
if
you're
familiar
with
optimistic
Roll-Ups,
we
have-
and
this
is
the
last
phase,
the
certification
process,
which
takes
seven
days.
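The three phases he lays out differ only in whose honesty you rely on; the enum below is just a compact restatement of that, with each phase's trust assumption as its value:

```python
# Compact restatement of the three finality phases described above.
from enum import Enum

class Finality(Enum):
    SOFT = "sequencer promised this ordering (you trust the sequencer)"
    HARD = "batch posted and finalized on L1 (trustless for off-chain nodes)"
    CONFIRMED = "assertion confirmed after the ~7-day window (trustless on L1)"
```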
R
The
only
thing
that's
for
is
to
prove
back
to
ethereum
what
the
result
is,
because
after
10
minutes
after
a
batch
is
posted
and
finalized
on
L1
anybody
in
the
world
looking
at
the
chain
other
than
ethereum
can
know,
and
the
reason
for
this
is
really
simple:
ethereum
can't
actually
run
all
the
transactions
because
then
it
wouldn't
be
a
roll
up
and
then
it
would
be
be
expensive,
ethereum's
not
running
them
we're
using
fraud
proofs
but
off
chain.
Anybody
can
just
run
the
transactions
themselves
and
calculate
the
result.
R
So
now
that
we
have
that
down,
what
is
the
state
transition
function?
I
mentioned
it
quickly,
but
it's
sort
of
a
very
core
part,
and
that
is
basically
with
Nitro.
We
now
have
essentially
a
wrapper
around
the
kind
of
core
gath
implementation
of
ethereum
State
transition
function,
which
means
our
functionality
can
be
essentially
identical
with
Geth.
We
don't
need
to
worry
about
Corner
cases.
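Conceptually (and only conceptually; Nitro itself is Go code built around Geth's internals, and both helpers below are illustrative stand-ins rather than real APIs), the wrapper idea is a thin layer that handles L2-specific bookkeeping and then delegates execution unchanged:

```python
# Conceptual sketch of "a wrapper around Geth's state transition
# function". Both helpers are toy stand-ins, not real Geth or Nitro APIs.

def geth_apply_transaction(state: dict, tx: dict) -> dict:
    """Stand-in for the unmodified Ethereum state transition function."""
    state = dict(state)
    state[tx["to"]] = state.get(tx["to"], 0) + tx["value"]
    return state

def charge_l1_data_fee(tx: dict, fee_per_byte: int) -> dict:
    """L2-specific step: charge for the cost of posting the tx to L1."""
    tx = dict(tx)
    tx["value"] -= fee_per_byte * len(str(tx))  # crude size proxy
    return tx

def nitro_style_apply(state: dict, tx: dict, fee_per_byte: int) -> dict:
    # Wrapper: do the L2 bookkeeping, then delegate to the core STF
    # unchanged, so semantics stay essentially identical to Ethereum's.
    return geth_apply_transaction(state, charge_l1_data_fee(tx, fee_per_byte))
```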
R
The
last
part
that
I'm
going
to
talk
about
today
is
how
execution
and
proving
are
different,
which
is
really
interesting
and
really
where,
where
we
get
all
of
our
where
we
can
get
most
of
our
performance
from-
and
this
is
something
actually
that
changed
with
Nitro
before
Nitro,
there
was
a
VM,
it
ran
transactions,
a
result
was
produced,
and
that
also
contained-
and
that
was
also
a
proof-
that's
very
inefficient,
because
proving
tends
to
be
something
that's
very
slow
with
Nitro.
R
Instead,
we
split
up
these
processes,
and
so
we
have
basically
One
Core
code
base
compiled
in
two
different
ways:
one
to
run
at
Native
speed
on
your
computer
at
exactly
the
same
speed.
Any
any
evm
chain
can
and
one
compile
to
wasm
and
used
for
proving
both
from
the
exact
same
code.
So
there's
not
there's
no
need
for
kind
of
multiple
implementations.
R
Now
it's
it's
two
years
later,
which
is
kind
of
insane,
which
is
really
kind
of
how
we
ended
up
here
and
how
I
think
the
ethereum
community
came
to
this
path,
which
is
the
idea
that
Roll-Ups
are
the
way
that
ethereum
is
going
to
scale
and
I.
Think
that's
sort
of
really
important
to
us
and
really
we
the
way
we
think
about
it
is
that
kind
of.
Yes,
there
are
multiple
different
Technologies,
yes,
we're
all
building,
but
kind
of
really.
R
What
we're
doing
is
we're
empowering
ethereum
to
actually
kind
of
be
competitive
against
other
alternative
blockchains,
since
those
two
systems
in
combination
can
do
much
more
than
other
blockchains
can
do
alone
and
then
just
at
the
end,
I
wanted
to
mention.
So
what
is
Nova
I
mentioned
it
once
it's
got
this
other
system.
All
Nova
is-
and
this
is
really
cool-
is
adding
this
data
availability
committee,
so
otherwise
the
diagram
is
exactly
the
same.
It's
just
what
I
explained,
but
rather
than
batching
and
compressing
and
posting
to
L1.
R
We
instead
batch
and
compress
hand
it
to
a
committee.
Have
them
generate
signatures
and
post
those
signatures
to
ethereum,
which
is
how
you
get
so
much
cost
savings
when
using
when
using
the
Nova
chain,
but
why
it
has
an
additional
security
assumption
compared
to
ethereum
and
then
I
just
wanted
to
wrap
up
a
little
bit
by
talking
about
sort
of
I
talked
a
lot
about
where
we
are
and
sort
of
what's
great
about
the
technology.
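The Nova handoff he describes swaps the data blob for committee signatures over its hash. A toy version of that step (illustrative names; HMAC stands in for a real signature scheme):

```python
# Toy version of the Nova-style handoff: instead of posting the
# compressed batch to L1, hand it to a committee and post only their
# signatures over its hash. Illustrative sketch, not the real protocol.
import hashlib
import hmac

class CommitteeMember:
    def __init__(self, key: bytes) -> None:
        self.key = key
    def sign(self, digest: str) -> str:
        return hmac.new(self.key, digest.encode(), hashlib.sha256).hexdigest()

def nova_style_post(batch: bytes, committee: list) -> dict:
    digest = hashlib.sha256(batch).hexdigest()
    # Members keep the data available off chain; only this small
    # certificate goes to L1, trading posting cost for the assumption
    # that at least a couple of members stay honest.
    return {"data_hash": digest,
            "signatures": [m.sign(digest) for m in committee]}
```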
R
I
want
to
really
talk
about
sort
of
where
the
technology
is
not
yet
and
what
still
needs
to
be
done,
and
this
is
something
that's
been
sort
of
a
huge
effort
for
us
to
try
to
figure
out
there's
so
much
going
on,
there's
so
much
complexity,
that
really
sort
of
keeping
people
aware
of
kind
of
the
status
of
this
technology
of
where
we
are,
is
really
important.
R
L2B
for
anybody
familiar
has
done
an
amazing
job
with
this
and
I
would
highly
recommend
anybody
who
hasn't
to
read
through
their
security
analysis,
which
they've
done
on
all
the
major
Roll-Ups,
but
just
to
talk
about
kind
of
where
arbitrum
is
in
this
regard.
So
arbitrim
is
I,
think
right
now,
fair
to
say
the
only
optimistic
roll
up
in
production
with
fraud,
proofs
which
is
incredibly
exciting
and
which
was
we
have
been
since
launch
and
is
really
core
to
us
to
actually
like
lead
tech.
First,
we
come
from
an
academic
background.
R
The
tech
is
important,
but
it's
not
a
full
roll
up
yet
and
I
wouldn't
want
to
consider
it
that,
because
for
arbitrum,
validation
is
currently
permissioned,
which
is
one
of
our
big
priorities
for
this
coming
year,
is
to
drop
permissioning.
Now,
it's
not
just
us,
there's
a
great
set
of
validators
I
think
actually,
in
the
coming
weeks,
we're
going
to
do
an
announcement
where
we
list
all
the
different
entities
that
are
currently
validating
the
arbitrum
chain,
but
having
it
be
fully
permissionless.
So
you
can
truly
make
good
on
the
promise.
R
This
real
promised
future
and
the
last
one
and
I
think
this
is
sort
of
the
hardest
kind
of
com,
the
hardest
issue
and
I
think
the
biggest
the
biggest
sort
of
discussion
and
one
that's
sort
of
really
important,
has
been
having
happening
a
lot
here
is
how
do
you
think
about
handling
critical
bugs
in
these
systems
that,
fundamentally,
this
is
sort
of
a
really
scary
process.
If
ethereum
hits
a
bug
or
a
Bitcoin
hits
a
bug,
they
will
fork
and
fix
that
dog,
because
it's
in
the
core
protocol
and
those
protocols
can
Fork.
R
That would be amazing, but I think that's a long way off, and that's what you get into when you imagine enshrined rollups, which is a whole other conversation. But how to do it without that, in a way that maximizes decentralization while also protecting ourselves from the risk of bugs (because anybody who's extremely confident they don't have bugs today is, I think, overconfident), and figuring out what the balance is between necessary critical emergency paths and security for users, is a really important question. And just a shout out to one idea that Vitalik has been running with a lot recently.
R
There's
some
really
cool
work
being
done
around
the
ideas
of
having
multiple
provers
and
having
a
majority
of
those
provers
need
to
all
agree,
because
then,
if
one
of
them
has
a
bug
as
long
as
the
other
one
doesn't
have
the
same
bug,
then
they
can
be
checks
for
each
other.
And
if
you
have
multiple
Implement,
multiple
independent
implementations,
then
you
get
a
lot
of
the
same.
Security
benefits
that
client
diversity
has
for
ethereum.
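A minimal sketch of that multi-prover idea (purely illustrative): a claimed state root is accepted only when a majority of independently implemented provers computes the same one, so a bug in a single implementation cannot finalize a wrong result by itself:

```python
# Minimal sketch of majority-of-provers acceptance. Each prover is an
# independent implementation mapping a batch to a state root.
from collections import Counter
from typing import Callable, Optional

def accept_root(provers: list, batch: bytes) -> Optional[str]:
    votes = Counter(prove(batch) for prove in provers)
    root, count = votes.most_common(1)[0]
    return root if count > len(provers) // 2 else None  # majority or nothing
```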
R
As
for
roll-up
security,
which
would
be
a
really
big
thing,
So
yeah-
thank
you
all
for
for
for
bearing
with
me
through
this
and
I
hope.
It
was
interesting,
been
a
total
blast
being
here
and
and
yeah
I
think
we
should
probably
have
a
few
minutes
for
questions.
B
Thank you so much, Harry. We have time for three questions, so could you please raise your hands?
O
This was great. Do you see Optimism's Bedrock and Arbitrum's Nitro converging in their specifications?
R
So
there's
absolutely
been
a
lot
of
kind
of
very
interesting
convergence
between
arbitrum
and
optimism
over
the
years,
both
projects,
obviously
that
have
been
building
for
quite
a
long
time.
R
I
think
that
sort
of-
and
this
is
you
know,
I-
don't
think
this
is
happenstance
that,
like
we've
learned
from
each
other
and
kind
of
designed,
models
have
shifted
I'd
like
to
say-
and
then
you
know
take
this
with
a
grain
of
salt,
because
I'm
obviously
biased
that
sort
of
our
initial
design
was
much
more
influential,
particularly
interactive
fraud,
proofs,
which
were
a
mechanism
that
we
have
been
kind
of
arguing
for
for
years
and
finally,
one
out
on
versus
the
alternative,
which
was
on
fraud,
proof
which
kind
of
was
the
original
optimism
design,
so
I
think
there's
a
lot
of
alignment
there
and
a
lot
of
coming
together
and
I.
Q
Hi
Harry
thanks
very
much
just
on
the
last
topic
you
were
talking
about
about
critical
bugs
and
how
you
could
fix
them,
and
the
sort
of
issues
that
raises
I
was
thinking
about
what
Danny
was
talking
about
in
his
talk
in
the
opening
day.
You
know
minimizing
governance.
Do
you
see
any
Prospect
of
breaking
out
the
arbitrim
design
into
multiple
components
where
a
bunch
of
them
are
immutable
and
you're
able
to
restrict
the
governance
to
just
a
small
part.
R
Nitro
was
our
biggest
priority
for
a
very
long
time,
because
it
really
kind
of
was
solving
key
performance
issues
that
users
were
having
I
think
our
next
priorities
are
all
around
really
deep,
diving
on
these
questions
of
exactly
how
much
can
you
minimize
upgradability
is
a
really
tricky
thing,
because
a
lot
of
the
time,
if
anything,
is
upgradable,
then
that
component
could
be
captured
and
so
really
figuring
out
sort
of
like
if
there's,
if
there
are
ways
to
sort
of
modularize,
your
security
in
a
way
to
create
protection
is
a
great,
is,
is
sort
of
a
really
interesting
open-ended
question.
R
So,
for
instance,
a
modular
stack
which
is
just
a
set
of
layers
on
top
of
each
other,
doesn't
really
help
there.
If
some
of
them
aren't
upgradable,
because
if
you
control
an
entire
layer
chances
are,
you
can
do
whatever
you
want
so
figuring
out
sort
of
what
Arrangements
there
are
to
minimize
that
I
don't
have
a
great
answer,
but.
L
R
A
really
interesting
area
for
work.
D
Hey
thanks
for
the
great
talk
so
asking
kind
of
like
a
spicy
question.
What
is
like
the
core
reason
why
you
are
still
maintaining
a
white
list
on
the
fraud
proofs
and
then
you
know,
if
that's
for,
like
you
know,
gath
reasons
or
like
dos
reasons
or
whatever,
is
there
a
reason
you
wouldn't
deploy?
You
know
a
version
of
like
arbitrum
on
something
like
Gorly
that
has
you
know
non-white
listed.
D
R
So
there's
another
there's
a
few
different
reasons:
I
think
kind
of
both
the
sort
of
performance
of
the
roll-up
protocol,
as
many
parties
come
in
and
sort
of
confidence
in
sort
of
the
underlying
fraud.
Proof
mechanisms
which
have
been
growing
over
time
are
sort
of
kind
of
core
reasons.
I
think
that
in
this
coming
year
and
I
fully
expect
within
the
six
within
the
next
six
months,
we
will
have
permission
fraud
proofs
on
mainnet
as
to
why
not
gorilla
it's
an
interesting
question.
R
R
We have, if you're interested, a large amount of fuzzing that has been done on the fraud proof mechanism itself, which is obviously not very visible to users, but there is quite a bit of it; I'd definitely be happy to point you at that.
B
This is our next talk, "The History and Future of Decentralized Operations", with Wouter Kampmann. Please give him a big round of applause.
S
Okay, it seems that the Wi-Fi issues have been resolved, so let's get started. Welcome, everyone. I'm Wouter Kampmann, and I will talk to you today about the history and future of decentralized operations at MakerDAO.
S
This started with Rune Christensen, one of the founders, posting the eDollar idea on Reddit, and then, in the years after that, we launched several products, such as Single Collateral Dai, which is Maker but with only Ethereum as collateral. In 2019 there was the launch of Multi-Collateral Dai, which also added other collateral types, and then after 2019, when Maker was still functioning as a classic foundation, we started to decentralize. That simply means that we dissolved the foundation and we rebuilt everything in the DAO.
S
So
we
have
had
almost
two
years
of
experience
with
that
now
and
yeah.
We
are
wondering
what
is
coming
for
a
2023.
Will
it
be
make
it
RV
next,
so
very
quickly,
myself,
I
have
a
background.
In
software
engineering,
I
joined
makerdown
in
November,
2017
I
had
to
make
a
foundation.
I
was
the
head
of
engineering
and,
after
that,
when
I
also
joined
the
I
became
the
co-founders
of
one
of
the
20
core
units,
as
we
call
them
the
contributor
teams
at
makerdale.
S
Lately we have been thinking more and more about decentralized operations, and you may ask why I don't just use the term "governance", as any proper DAO member would. Well, decentralized operations: I define it very simply. For me, it's the art of getting things done in an open, transparent and decentralized organization, and by "things" we mean work that produces actual value; not just activity, but actually creating value.
S
So
why
the
term
decentralized
operations?
Well,
Dow
governance
involves
mostly
discussion
and
decision
making.
It
is
not
enough
to
get
things
done.
This
decentralized
decision
making
needs
to
be
supported
by
effective
execution
and
decentralized
operations
includes
governance,
but
it
emphasizes
the
execution
part.
So
we
will
be
talking
a
lot
about
voting,
we'll
be
talking
a
lot
about
governance
stuff,
but
we
say
decentralized
operations
because
we
realize
we
need
to
execute
and
deliver.
S
So,
let's
return
to
maker.in
what
does
operations
and
governance
look
like
today
at
maker
Dev
there
are
a
number
of
well-known
Concepts
that
have
been
implemented,
basically
when,
when
the
Dow
first
started
for
real,
so
one
year
and
10
months
ago,
and
of
course,
the
foundation
of
it
all
is
token
token
voting
so
maker
has
token
voting
by
MTR
holders,
and
it
also
works
with
delegates
so
mqr
holders
they
can
improve
changes
to
the
maker
protocol.
S
They
have
to
approve
these
changes
because
the
maker
protocol
is
permissionless.
So
without
a
majority
of
the
vote,
you
will
not
be
able
to
make
any
changes
in
any
way.
S
Today
about
15
of
the
total
mkr
Supply
is
delegated,
and
this
is
delegated
by
222
addresses.
We
have
to
say
addresses
because
we
don't
know
how
many
people
are
behind
those
addresses.
S
There
are
24
recognized
delegates,
92
Shadow
delegates,
and
these
recognized
delegates.
They
do
receive
compensation
proportional
to
the
voting
rate,
although
there
is
a
limit
so,
for
example,
the
current
payout
of
the
delegates
is
around
120
000
in
cost
for
the
protocol
per
month.
If
you're
interested
in
voting
or
delegates,
then
definitely
have
a
look
at
our
voting
portal.
Vote.Makeitout.Com.
S
This lasts three weeks, with one week where the proposal is frozen so that the governance participants can properly form an opinion about what is in there. If the RFC proceeds, then there will be a governance poll; typically this takes about three weeks, and this poll will then be up on the voting portal that you just saw.
S
Mpr
holders
and
delegates
will
then
be
voting
on
the
individual
proposals
could
be
that
one
proposal
makes
it
another.
Doesn't
all
the
proposals
that
make
it
then
are
then
bundled
in
the
executive
vote.
So
this
is
one
one
smart
contract
that
bundles
all
the
changes
that
have
been
voted
in
by
governance
that
month
and
then
yeah.
If
mkr
holders
approve
this
executive
vote,
then
these
changes
are
applied
to
the
blockchain
protocol.
S
And the last piece of the puzzle are the contributors, the core units. As you may know, Maker is a pretty big DAO, so we have 20 contributor teams, around 100 full-time equivalents of contributors, and the way these core units are voted in is with three proposal types: they propose a mandate, they propose a budget, and they propose their facilitator.
S
The
person
who
will
be
responsible
for
interfacing
between
governance
and
the
core
unit,
so
maker
Dow,
grew
quite
spectacularly
in
2021
and
as
a
result,
the
total
expense
has
also
grown
to
a
total
of
about
30
million
a
year.
S
The
scalability
of
the
elements
that
we
saw
is
is
reached
now
in
2022,
with
the
bear
Market
that
has
kicked
in
maker
has
long
been
known
as
the
protocol
or
one
of
the
few
protocols
in
D5
that
is
actually
profitable
in
the
the
longer
run,
if
you
average
it
out,
it's
still
the
case.
But
if
you
look
at
the
latest
months,
that
is
no
longer
the
case,
because
the
bear
Market
has
taken
its
toll
and
you
can
see
that
the
dye
expenses
remain
around
30
million.
S
The
first
challenge
that
I
want
to
talk
about
is
stakeholder
alignment
and
political
Deadlock.
So
in
a
truly
open
and
decentralized
organization,
it
is
very
difficult
to
agree
on
things,
even
the
smallest
things,
let
alone
long-term
vision
and
strategy
maker
has
been
experiencing
that
firsthand.
Throughout
the
last
two
years
there
has
been
a
lot
of
governance
drama
at
maker.
You
probably
read
about
us,
even
if
you're,
not
following
the
dial
up
close
and
at
times
core
units,
the
contributors-
they
really
got
sucked
into
this
drama
and
they
broke
down
entirely
in
certain
cases.
S
So
there
is
an
incredible
noise
Factor.
There
is
a
lot
of
distraction,
and
this
culminated
a
while
ago,
when
there
was
a
standoff
between
two
factions
over
a
set
of
mips
that
were
about
Real
World
Finance.
S
So
there
was
a
new
core
unit
that
was
up
for
vote
to
be
voted
in,
and
this
was
the
landing
oversight
core
unit,
and
then
there
was
a
maker
Hathaway
fund
that
was
up
for
vote.
I
won't
go
into
the
details
of
what
these
proposals
were
about,
but
they
came
to
an
ultimate
standoff
between
the
two
factions
and,
as
you
can
see,
in
the
screenshot,
there
was
a
record
MTR
voter
turnout
and
yeah.
This
attracted
quite
a
bit
of
attention
and
it
caused
a
lot
of
distraction
in
the
Dao
at
that
time.
S
Definitely,
political
rhetoric,
reigned,
Supreme,
Over,
the
rational
debate,
and
one
of
the
reasons
is
that
makerdale
has
very
few
consensus,
building
tools,
other
than
just
a
forum
to
have
a
discussion
and
then
majority
voting
which
really
isn't
a
consensus
building
Tool
you
do
that
when
you
can't
reach
a
consensus.
S
Recently,
there
has
been
a
lot
of
talk
about
the
idea
of
decentralized
voter
committees.
In
fact,
this
is
a
main
element
of
The
Proposal
that
was
put
forward
called
The
End
Game
Plan
by
the
one
of
the
original
Founders
roon
Christensen.
S
A voter committee is quite simple: it is just a group of MKR holders and delegates, and the important thing is that they are self-selecting, so you have people in a group that are of the same mind. They think the same thing about what Maker should do next, what the long-term strategy should be, et cetera. And of course not everyone might agree with that group, so there might be multiple voter committees.
S
If
you
think
that
this
is
quite
similar
to
political
parties
or
elections,
then
you're
quite
right,
I,
don't
think
it's
it's
a
coincidence
that
we
are
re,
discovering
the
same
mechanisms
that
have
been
tried
and
tested.
S
Another
part
of
the
voter
committee
setup
is
that
we
want
to.
We
want
to
restrict
the
flexibility
and
the
possibilities
that
these
voter
committees
have,
because
if
you
leave
all
possibilities
open,
then
there
is
a
lot
larger,
a
lot
larger
space
for
a
disagreement,
so
another
part
of
the
design
would
be
a
number
of
fixed
scopes.
A
number
of
business
activities
that
makers
should
focus
on
and
not
go
outside
of
that
maker
is
not
an
ice
cream
factory.
S
For
example,
the
Scopes
I
have
been
proposed
are
so
far
protocol
engineering,
real
world
collateral,
permissionless
collateral
and
growth.
This
will
probably
not
surprise
the
the
strength
of
these
is
that
they
Define
the
scope
and
and
make
it
closed
and
then,
as
a
last
element,
these
voter
committees
MTR
holders
and
delegates.
S
So
when
these
voter
committees
they
put
forward
their
strategy,
they
will
Define
for
each
one
of
those
Scopes
how
they
see
the
future
for
maker.
But
by
doing
so,
there
will
be.
There
will
be
supported
by
a
number
of
expert
councils
that
have
been
put
forward
by
the
workforce
by
the
core
units,
and
this
is
one
idea
to
deal
with
the
yeah,
the
the
difficulties
within
makers
to
reach
agreement
about
the
long-term
vision
and
strategy.
S
An important lesson that we've learned is that transparency in itself is not enough. The right information really needs to be served to the right audiences at the right abstraction level. People don't have time; not everyone has the time to read through long documents and distill the information that they may need. So standardized, structured data needs to be made available via APIs if it is to be analyzed, summarized and leveraged successfully. Today this is the case for some types of data in Maker, but not all.
S
The
clearest
example
of
this
was
the
core
units
and
their
budgets.
All
information
was
always
available.
It
was
on
the
forums,
but
it
was
very
difficult
to
find
our
user
research
has
shown
that
stakeholders
were
unable
to
find
even
the
simplest
relevant
information
which
core
units
exist.
What
are
they
doing?
What
is
their
budget
and
what
are
they?
S
This isn't exactly a very original idea, but it is one that we think will work properly for Maker.
S
In
fact,
we
have
already
been
doing
this
for
the
core
units
and
the
budgets
this
piece
of
information
that
was
so
difficult
to
to
understand
for
stakeholders.
We
created
a
limited
prototype
that
focuses
specifically
on
that
information.
So
if
you
go
your
expenses
at
make
a
diode.network,
you
will
see
a
clear
overview
of
the
core
units
right
now.
This
platform
is
being
used
for
core
units
to
submit
their
budgets
and
Koreans
can
use
it
to
verify
that
they've
properly
reported
on
their
budget.
S
The mechanism goes like this: core units are voted in individually by MKR holders and delegates. For example, you may have the Growth core unit that is voted in separately from the smart contracts core units, or the Protocol Engineering core unit working separately from the Oracles core unit, for example.
S
These
core
units,
I,
can
only
propose
work
to
other
core
units,
but
they
can't
enforce
a
collaboration.
So
there
is
a
constant
negotiation
that
goes
on
between
the
core
units.
How
things
should
be
done?
What
are
the
priorities
and
if
Koreans
don't
agree,
there
is
no
one
stepping
in
and
saying
we're
doing
it
this
way
or
that
way
in
fact,
the
core
units,
then
just
walk
away
and
say:
okay,
you
do
you
I'll.
Do
me
and
we'll
do
each
just
do
our
job.
S
The
result
is
because
these
core
units
they
have
a
defined
mandate.
They
have
a
long-term
work
stream,
but
there
are
no
end-to-end
deliverables
that
are
defined.
So
as
a
result,
core
units
don't
feel
responsible.
If
coordination
fails,
so
you
might
have
a
core
unit
that
say,
builds
a
front
end
and
does
a
tremendous
job.
S
And
then
you
might
have
a
core
unit
who
needs
to
promote
the
front
end,
but
they
haven't
been
aware
that
it
it
was
built
or
that
where
it
is
available
and
the
marketing
might
fail-
or
you
might
have
disagreements
between,
for
example,
technical
core
units
which
architectures
to
use
and
if
they
don't
agree
on
that
well,
the
end
result
will
simply
not
work.
So
a
lot
of
work
is
done.
It's
just
that.
Sometimes
not
a
lot
of
value
is
created
because
the
pieces
aren't
integrated
or
working
together.
S
Well
so,
which
idea
can
help
with
that?
We
have
been
talking
a
lot
about
project-based
budgeting
at
make
it
down
and
I
believe
that
this
will
be
a
much
better
structure
to
deal
with
this.
The
real
value
is
really
only
produced
when
delivering
an
integrated
solution,
delegates
and
MPR
holders.
They
should
approve
these
Integrated
Solutions
projects
more
so
than
mandates,
so
they
wouldn't
be
paying
for
work
like
for
coming
to
to
work
every
time
every
day,
but
they
would
be
paying
for
the
actual
results.
S
The
current
system
that
we
have
with
mid-40,
which
defines
the
budget
of
core
units
and
their
MIP
39
with
their
mandate.
It
will
probably
not
go
away,
but
we
will
gradually
try
and
transition
it
towards
a
system
that
is
more
balanced.
As
you
can
see
in
the
diagram,
you
can
think
of
today's
budgets
as
100
retention
budgets,
we're
just
paying
you
to
do
work
for
the
Dow.
S
Then,
for
the
last
challenge,
Talent
acquisition,
onboarding
and
the
compensation
question
maker
has
been
struggling
a
lot
consistently
with
hiring
onboarding
and
comp
challenges
maker,
dial,
SCS
micro
unit.
We
have
run
an
incubation
program
for
some
time,
but
now
we
are
winding
that
down
why?
Well
it's
very
difficult
and
time
intensive
to
coach
teams
throughout
the
long
and
costly
onboarding
period.
I
already
mentioned
when
you
do,
when
you
make
a
governance
proposal,
it
takes
you
all
in
all
two
months
to
go
from
the
RFC
to
the
final
vote.
S
However,
for
a
core
unit,
it's
also
very
difficult
to
make
these
calls,
so
we
have
been
doing
that
for
a
while,
but
we
felt
that
the
model
really
was
it
wasn't
suited
for
the
new
situation
that
we're
in
and,
of
course,
a
large
element
of
that
is
simply
the
bear
Market.
This
has
ended
the
onboarding
and
of
new
startup
teams,
and
it
also
has
created
a
lot
more
discussion
about
where
the
money
should
go.
S
It's
easy
to
get
a
budget
approved
if
there
is
there
is
about
enough
budget
to
do
everything.
Comp
questions
in
general,
they're,
extremely
difficult
in
a
global,
open
and
decentralized
organization.
With
such
diverse
areas
of
expertise,
even
if
you
know
how
much
an
engineer
is
worth
or
how
much
a
designer
a
web
designer
is
worth,
you
may
not
know
how
much
you
should
be
paying
for
a
risk
analyst
or
someone
working
in
banking,
for
example.
S
A Maker Academy would be a platform that is open to everyone, and that is also permissionless, meaning that it is out there in the open, in the DAO. If you want to create a core unit, or you have an idea for a project that you want to see funded, you need to know: how do I fulfill the role of being a good core unit facilitator? What are my responsibilities?
S
How
do
I
need
to
interact
with
governance
and
all
these
things
that
can
be
made
available
in
open
platforms,
open
education
platforms
with
specific
courses,
but
also
General
topics,
especially
today,
when
maker
is
going
through
a
lot
of
transition?
S
There
is
a
lot
of
complexity
on
the
technical
side.
There
is
a
lot
of
complexity
on
the
Real
World
Finance
side.
There
is
a
lot
of
complexity
on
the
organizational
side
and
there
is
a
lot
of
complexity
in
the
new
proposals
that
are
up
for
vote.
S
So
yeah
an
open
and
available
platform
for
Education
and
Training
may
help
people
to
to
acquire
the
skills
they
need
and
then,
if
you
combine
it
with
the
possibility
of
trans,
then
you
get
a
funding
mechanism
that
doesn't
need
to
be
inside
a
single
core
unit,
but
that
can
be
done
out
in
the
open
and
so
maker
can
continue
to
onboard
the
talent
that
it
needs.
S
So
these
were
my
four
challenges.
So
what
does
the
future
of
decentralized
operations
that
make
it
look
like?
No
one
knows
there
are
a
lot
of
ideas
that
are
floating
around.
We
do
believe
that
the
solutions
we're
thinking
about
the
solutions,
we're
building
they're
all
open
source
software
and
they
could
be
interesting
for
other
projects
as
well,
so
yeah,
thank
you
for
coming
to
my
talk
and
let's
continue
discussion.
V
Hello? Okay, cool. Hey, what's up, everyone? Give me one second to get set up here, get my water.
V
Oh, there we go. Cool, all right. So today I'm going to be talking about LP volatility harvesting across yield rates. So, Element Finance: we had an event last night; if any of you were there, thank you for coming. It was super cool.
V
This
gives
opportunity
for
market
makers
to
then
go
ahead
and
ARB
the
market
make
profit,
bring
in
those
spot
prices
to
a
certain
level
and
allows
LPS
to
capture
fees
and
get
value
from
those
volatility
changes.
So
this
is
essentially
you
know
what
I
think
of
when
I
think
of
a
volatility
harvesting
tool.
So,
let's,
let's
apply
this
to
the
yield
markets
before
we
do
that.
Let
me
give
you
a
quick
intro
into
what
does
element
Finance
do
so
element
Finance
at
its
core
is
a
yield
splitting
protocol.
V
A lot of our positions are built on Yearn, and what we do is: if you put a million dollars in, we take that position and we split it into two parts, the principal and the interest. Say you have a one-year lockup: if you have 10% interest on this position, you have a million dollars in principal and, I guess this slide shows 20%, so 20% on a million dollars would be 200k in interest.
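A stripped-down model of the split just described (illustrative only; real Element terms involve wrapped positions, maturities and redemption logic): one deposit becomes a principal token redeemable one-for-one at maturity and a yield token that collects whatever variable interest accrues over the term:

```python
# Stripped-down model of yield splitting: one deposit becomes a
# principal token (PT), redeemable 1:1 at maturity, plus a yield token
# (YT) that redeems for the interest earned over the term.
from dataclasses import dataclass

@dataclass
class SplitPosition:
    principal_tokens: float  # redeem 1:1 for the deposit at term end
    yield_tokens: float      # redeem for the interest accrued over the term

def split(deposit: float) -> SplitPosition:
    return SplitPosition(principal_tokens=deposit, yield_tokens=deposit)

# split(1_000_000): at 20% realized yield over the term, the YT side
# would redeem for about 200,000, matching the example above.
```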
V
So
at
the
end
of
the
term,
at
the
end
of
the
year,
you
can
collect
both
of
those
we
use
a
curve.
This
was
developed
mainly
by
actually
yield
space.
We
did
an
alteration
on
this
curve.
It's
called
a
constant
power
sum
and
essentially
what
we
do
is
during
the
market.
We
let
people
essentially
stake
the
principle
that
they
have
that's
locked
up
and
what
the
curve
does
is
it's
a
time-based
curve
and
it
sort
of
follows
this
concept.
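For reference, the YieldSpace-style "constant power sum" invariant he is referring to is usually written with a time-to-maturity exponent. In the original YieldSpace paper it takes the form below, with $x$ and $y$ the two reserves and $t$ the normalized time to maturity, so the curve interpolates between constant product ($t = 1$) and constant sum ($t = 0$):

$$x^{1-t} + y^{1-t} = k$$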
V
You
know
where
I
give
you
a
dollar
for
99
cents
or
90
cents.
So
this
is
the
concept
of
the
opportunity
cost
of
money.
So
if
I
say,
hey
I'm
gonna
offer
you,
you
know
this
million
dollars,
but
you
can't
touch
it
for
a
year.
You
can't
do
anything
with
it.
What
is
it
worth
for
you
to
have
that
million
dollars?
V
You
know,
I
can't
put
in
a
savings
account
I
can't
stick
it
I
can't
farm
with
it
I'm
losing
some
type
of
interest
on
that
right,
so
I,
say:
hey
I
think
I
could
probably
get
like
15
apy
I'll
I'll
give
you
90
of
that.
So
here's
a
dollar
you
can't
use
it
for
the
next
year,
I'll
give
you
90
cents
for
that
dollar,
and
so
what
this
curve
essentially
does?
Is
it
acts
with
respect
to
this
time
period
between
the
constant
product
and
the
constant
sum
formula
so
early
on
it
has
price
Discovery.
V
You
can
have
the
apy
change
on
the
principle,
this
million
dollars
it's
locked
up
and
later
on
it
sort
of
converges
one-on-one
value.
If
I
say
hey
here's
a
million
dollars
tomorrow,
you
can't
use
it
until
tomorrow,
you'll
probably
buy
it
for
a
million
dollars
right
versus
for
a
year
from
now.
Then
you
get
that
opportunity
cost.
So
this
is
essentially
how
how
that
works.
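Numerically, the discount being described is just present value: a principal token paying one dollar at maturity in $t$ years, at a fixed yield $r$, trades near

$$P(t) = \frac{1}{(1+r)^{t}},$$

so with $r = 10\%$ and one year left it prices around $0.909$ (the "90 cents" in his example is this figure, rounded), and as $t \to 0$ the price converges to the full dollar.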
V
So
what
we
saw
early
on
in
our
platform
is
these
markets
actually
worked
really
well,
as
people
locked
up
their
principal
in
the
amms,
people
were
actually
trading
these
with
opportunity
cost
of
money
based
on
the
variable
rates
that
you
could
see
in
the
market.
So,
for
example,
when
we
had
first
launched,
we
saw
a
curved
try,
crypto
term,
where
the
yield
was
the
fixed
apy
was
at
15
percent
and
the
variable
rate
was
at
nine
percent
in
this
term.
V
It's hard to grasp that, it's hard to understand that. What happened is we were in a market low, so the variable rates had all dropped across the space, and people were speculating that they would rise; and sure enough they did, and they ended up making money off of it. So why is that related to the value here?
V
So
we
have
this
system
called
the
old
token
compounding
which
essentially
lets
you
go
through
and
leverage
your
exposure
to
variable
interest
to
go
long
on
variable
interest
and
it's
this
concept
where
you
meant
these
principles,
these
yield
tokens.
You
sell
your
principle
for
a
fixed
apy
percentage,
and
you
do
this
repeatedly
until
you
own
a
stronger
exposure
to
the
yield.
V
So
this
is
like
I.
Don't
want
to
go
into
too
much
depth
here,
but
if
the
variable
apy
is
at
20-
and
let's
say
the
fixed
apy
is
at
10,
I
have
a
10
spread
if
I,
basically
re-hypothecate
and
do
this
I
deposit,
I,
sell,
I
deposit,
I,
sell
I
deposit
I,
sell
I.
Do
this
six
or
seven
times
you
know,
I
can
7x
that
to
around
70.
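A tiny simulation of that loop (illustrative assumptions: the principal token always sells for 0.9 per unit, i.e. a 10% fixed-rate discount, with no fees or slippage). Each cycle mints PT plus YT, sells the PT leg, and redeposits the proceeds, so yield-token exposure follows a geometric series capped at $1/(1-0.9) = 10\times$; the speaker's "around 7x" is his own rough figure for a handful of cycles:

```python
# Tiny simulation of yield token compounding. Assumptions (illustrative):
# PT always sells at 0.9, no fees, no slippage, infinite liquidity.
def compound_yt_exposure(capital: float, pt_price: float, rounds: int) -> float:
    """Total yield-token exposure after repeated mint-and-sell cycles."""
    exposure = 0.0
    for _ in range(rounds):
        exposure += capital    # minting grants YT equal to the deposit
        capital *= pt_price    # sell the PT leg, redeposit the proceeds
    return exposure

print(compound_yt_exposure(1.0, 0.9, 7))  # ~5.2x after seven cycles
# Geometric limit: 1 / (1 - pt_price) = 10x with these numbers.
```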
V
So
we
saw
some
other
interesting
volatility
behaviors
at
one
point
there
was
I
actually
did
a
talk
about
this
at
another
conference
there
was
issues
with
MIM
I.
Think
a
lot
of
you
remember
that
there's
a
whole
Scandal
there.
We
have
a
lot
of
those,
sometimes
in
our
industry
and
the
fix
APR,
like
you
know,
popped
up
to
130
percent,
and
then
we
had
another
case
where
we
had
a
wbtc
vault,
where
it
dropped
completely
to
zero
percent.
Actually,
and
during
this
time
we
saw
a
lot
of
volatility
movement,
the
price
action.
V
And so this is sort of an example of the volatility in action; it eventually normalized to around 10 percent. So what do we notice in the fixed-rate market? Generally, except for some of these exceptions, it tends to follow the variable rate: the APY ends up being somewhat similar to what the variable rate is in the market, sometimes a little less, sometimes a little higher. They tend to track each other; they're correlated.
V
This
is
what
we've
seen
from
the
data
that
we've
gotten
in
our
current
V1,
and
this
is
where
you
know
I've
done.
A
talk
on
this
before
is
essentially
these
fixed
rate
markets
that
we
do
they're
not
really
just
fixed
rate
markets,
they're
they're,.
V
...markets where we're essentially creating this value capture mechanism for yield markets. And so, what is Element Finance like? These systems, and fixed-rate systems as a whole, you can sort of see them as more than a fixed-rate system.
V
It also allows for really cool things like leverage, going long on variable, or being negative on future variable interest: like, "I believe this one's going to go to zero percent", or "this farm shot up in value". So that's sort of the dream for me, and it's not just Element; it's the fixed-rate space as a whole.
V
So
real,
quick,
I'm
gonna
go
into
like
one
example.
This
is
gonna,
be
I.
I
can't
go
into
a
lot
of
the
depth,
but
I
want
to
sort
of
paint
a
picture.
So
I
did
a
Twitter
thread.
You
can
check
check
it
out
on
Twitter,
it's
pinned
where
I
actually
coded
I
went
for
a
week
in
my
room
and
coded
a
bunch
of
simulations,
and
this
simulation
particularly
was
on
something
called
Fiat
Dao
and
what
they
did.
V
It
was
the
kin
to
maker
where
they
had
a
one
percent
stability
set
fee
and
that
one
percent
stability
fee
is
basically
your
borrow
rate
right
and
what
they
did
is
they
took
principal
tokens
as
collateral,
so
I
could
take
a
principal
or
a
fixed
rate.
Let's
these
principal
tokens
are
these
fixed
rate.
Tokens
right,
so
I
could
buy
a
fixed
rate.
V
Uses
collateral,
borrow
Fiat
swap
that
to
die,
buy
more
fixed
rate,
die
I
could
basically
leverage
into
the
fixed
rate
side,
and
so,
if
the
borrow
is
at
one
percent,
apy
or
let's
say
makers,
stability,
if
you
want
to
think
in
relation
to
maker
or
other
systems,
is
that
one
percent
apy
and
essentially
what
what
we're
able
to
do
is
if
the
fixed
rate's
at
three
percent,
you
can
leverage
into
that
fixed
rate
until
it
basically
converges
to
that
one
percent
value
which
is
really
cool,
and
what
does
this
also
allow
for
so
there's
this
concept
of
fixed
rate,
borrowing
mechanism
or
adapter,
someone
there's
a
few
people
who
are
working
on
this
on
the
element
platform
actually
currently,
and
essentially
what
this
is
is
you
could
take
these
these
instruments
that
we
have
and
you
can
plug
them
into
compound
and
Ave
and
existing
systems?
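A compact model of the leverage loop just walked through (all numbers illustrative: borrow at 1%, principal tokens paying a 3% fixed rate, looping at an 80% loan-to-value). The point is that the levered spread is what arbitrageurs earn, and their doing so is what pushes the fixed rate down toward the borrow rate:

```python
# Compact model of the Fiat-DAO-style loop: post PTs as collateral,
# borrow at the stability fee, buy more PTs, repeat. The 1% borrow,
# 3% fixed rate and 80% LTV are illustrative assumptions.
def looped_fixed_rate(capital: float, fixed: float, borrow: float,
                      ltv: float, rounds: int) -> float:
    """Annual net return of the levered position, as a fraction of capital."""
    pt_held, debt = capital, 0.0
    for _ in range(rounds):
        loan = pt_held * ltv - debt   # borrow up to the LTV on all PTs held
        if loan <= 0:
            break
        debt += loan
        pt_held += loan               # proceeds buy more principal tokens
    return (pt_held * fixed - debt * borrow) / capital

print(looped_fixed_rate(1.0, 0.03, 0.01, 0.8, 10))  # ~0.10, i.e. ~10%/yr
```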
V
And
what
you
do
is
you
essentially
create
a
hedge
Market?
You
transform
them
into
fixed
rate,
borrow
markets.
So
let's
say
I
have
a
compound
borrow
position,
let's
say
I'm
borrowing
to
I
from
compound.
Let's
say
we
hit,
like
you,
know
very
low
utility
rate,
and
in
this
case
the
apy
to
borrow
goes
low.
Let's
say
it
goes
to
half
a
percent.
This
is
great
I
want
to
lock
that
in
you
know,
so
what
you
can
do
with
sort
of
the
system
that
you
know
we
have.
V
Is
you
take
out
that
borrow
and
you
hedge,
by
going
directly
into
the
yield?
So
these
are
the
yield
tokens
I
talked
about
earlier
and
what
you
do
is,
let's
say
in
this
case
I'm
saying:
okay
lending
is
at
three
percent.
Whatever
borrow
is
that
one
percent
apy?
If
suddenly
the
borrow,
goes
up
to
10
apy
The
Lending
side
also
is
going
to
spike
up
to
around
10
or
12,
and
so
because
I
hedged.
My
borrow
position
with
this
yield
exposure
on
the
lending
position.
V
V
...we can basically turn any lending platform, it doesn't matter what chain, into a fixed-rate borrow system. This can be built on top; this can be an adapter. So what does this mean? If you take a step back, you can see that these mechanisms, fixed rate and variable rate in these yield markets that we have, create a convergence layer for DeFi rates, for lending market rates.
V
You use it to take out borrow positions; you can create essentially fixed-rate and variable positions on the lending side, the side where you earn APY. And when you go cross-platform, if I do a fixed-rate borrow on platform A and then use that to purchase principal tokens on platform B that are at a certain value, you end up having this sort of liquid layer in between that brings all those rates to convergence and brings them together.
V
It's
really
cool
I
wish
I
had
a
better
diagram
for
this,
but
I'm
sort
of
starting
to
introduce
this
topic
and
and
playing
with
this,
and
this
works
across
different
l2s
l1s.
It's
it's
really
really
powerful,
actually,
and
so.
We've
already
started
batching,
basically
activities
on
Aztec
for
ethereum,
and
so
what
I
sort
of
see
is
this
world
where
we
can
sort
of
batch
a
lot
of
the
activities
from
different
different
layers,
different
chains
all
on
one
chain,
and
you
can
sort
of
interact
and
Arbitrage?
V
Foreign
I
want
to
talk
for
a
second
about
you
know,
amen,
so
amms
I
think
are
there's
a
lot
of
downsides
to
them.
So
you
know
we
had
unique
V3
with
concentrated
liquidity
model.
You
can
kind
of
see.
Amms
also
are
sort
of
a
free
straddle
option
for
market
makers.
A
lot
of
the
value
that's
captured
off
amm's
goes
to
market
makers
not
to
the
people,
providing
the
liquidity
not
to
the
lpers
an
organic
way.
This
is
why
a
lot
of
people
are
doing
research
on
cool
ways.
V
How
do
you
bring
Mev
into
the
actual,
the
actual
amm?
How
do
you
bring
transaction
ordering
and
validator
activities
into
the
actual
amm,
because
sort
of
the
most
altruistic
position
in
the
markets
is
being
an
LP,
especially
in
an
organic
market
where
you
don't
have
emissions
things
like
that,
and
they
take
on
a
lot
of
risk,
so
spot
prices?
Can
you
know,
go
up
down
through
the
roof?
V
They
a
lot
of
times
can
see
less
gains
than
if
they
had
held
one
position
or
you
know
worse
gains
on
the
other
Spectrum,
and
so
there
there's
definitely
work
that
needs
to
be
done
on
amms,
but
they're
really
really
good
for
yield
prices
and
yield
tokens
and
and
principal
tokens
and
everything
that's
involved
there
and
the
reason.
V
Why
is
because
if
I
have
you
know
fixed
rate
usdc
and
we
call
these
principal
tokens
in
our
platform
and
that's
at
10
apy
over
a
year,
that
means
it's
going
for
90
cents
day
by
day
that
90
cent
value
converges
to
one
dollar.
You
don't
really
experience
any
impermanent
loss,
it's
sort
of
more
like
a
stable
swap
the
fees
are,
are
you
know
pretty
cool
as
a
percentage
of
that
yield
that
people
secure
and
swap
with?
V
And
you
know
that
that's
really
interesting,
and
so
one
of
one
of
the
issues
we've
had
in
V1
and
what
we've
been
doing
is
we
have
this
liquidity
fragmentation?
It's
like
I
have
this
six
month
term,
this
fixed
rate
term
that
I'm
interested
in,
but
it's
three
months
through
and
now
I
need
to
switch
to
another
six
month
term,
or
you
know,
as
an
LP
I
need
to.
It
makes
sense
for
me
to
pull
out
my
position
and
go
into
another
one.
V
These
are
some
of
the
weaknesses,
so
we
did
a
bunch
of
simulations
analysis
in
our
current
fee
markets,
capturing
volatility
in
the
space
which
is
super
interesting.
This
is
fairly
complex.
The
main
thing
that
I'll
share
the
biggest
learnings
is,
as
you
see,
yields
rise.
V
That
is
the
highest
one
of
the
highest
value
capture
mechanisms
for
LPS.
As
you
see
a
yield
rate
drop.
This
is
actually
also
if
you're
active,
extremely
profitable.
So
what
happens
if
a
yield
rate
drops?
So
in
this
case
we
have
a
situation
where
we're
going
from
10
to
2.5
apy
and
in
2.5
months
it
drops
to
2.5
apy,
and
this
is
a
six
month
term,
so
that
drop
essentially
brings
that
principal
token
right.
If
90
cents
on
a
dollar
would
be
10.
V
If
it
drops
to
five
percent,
then
that's
95
cents
on
a
dollar
right.
Five,
five
cent
discount-
so
essentially
it's
worth
more
and
what
you
can
do
is
it
drops.
Is
you
just
sell
the
principal
token
so
someone
who
locked
in
a
10
fixed
rate
API
in
2.5
months
when
it
dropped
to
2.5
percent,
they
sell
the
principal
tokens.
V
They
got
4.2
percent
return
in
2.5
months
because
they
sold
it,
which
equates
to
20
APR.
So
if
you're
going
into
position-
and
you
see
you
know
what
I
believe
this
like
rate's
going
to
drop-
it
makes
sense
to
buy
the
fixed
rate
side,
because
once
the
rate
drops,
then
you
can
flip
that
you
can
sell
that
and
then
you
can
basically
get
an
early
Redemption
on
your
apy.
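As a sanity check on those numbers (simple, non-compounded annualization): a 4.2% gain realized in 2.5 months annualizes as

$$4.2\% \times \frac{12}{2.5} \approx 20.2\% \text{ APR},$$

which is where the roughly 20% figure comes from.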
V
You get fulfilled quicker. In the other case, if the APY rises instead of dropping, you get a higher exposure to the fixed-rate APY, so you get more yield exposure. It's also a profitable endeavor and mechanism, and so what...
V
This
lead
to
this
sort
of
leads
to
products
that
can
be
built
on
top
active
strategies,
vaults,
really
interesting
things
even
Bots-
that
market
makers
can
do
to
really
capture
a
lot
of
this
value.
The
value
is
absurd
that
can
be
caught,
especially
once
you're
doing
borrowing
lending
markets
once
you're
hedging
on
those
once
you
have
printable
tokens
as
collateral
once
you
have
these
markets
like
running
truly
smoothly,
there's
a
lot
of
ways
to
sort
of
make
profit
off
of
changing
yield
rate.
So
it's
it's
fascinating.
V
This
is
just
for
fun,
so
another
thing
we
saw
is
it's
sort
of
unideal
for
an
LP,
because
with
these
like
terms,
we
used
to
have
we're.
Essentially
you
in
element.
We
have
like
usually
six
month
terms
and
we
do
a
new
six-month
term.
But
what
happens
is
like
here's
a
simulation.
The
fees
drop
off
as
this
term
ends.
It
doesn't
really
make
sense
for
me
as
the
lp
to
stay
the
full
term.
It
makes
more
sense
for
me
to
pull
out.
V
Why
do
the
fees
drop
out
because,
as
time
goes
on,
the
value
converges,
so
you
have
less
of
a
differential,
so
the
fees
are
less
and
also
people
are
less
likely
to
trade
on,
like
one
month
left
of
apy,
so
Johnny
Ray,
my
co-founder,
we're
both
E2
researchers,
but
he
actually
recently
came
up
with
a
new
model.
We're
calling
this
hyperdrive
and
it
introduces
no
more
terms.
V
LPs get perpetual positions, and essentially this new AMM that we've been researching (we'll be releasing a lot of data and simulations on it here soon) lets you basically underwrite someone taking out a fixed-rate term on whatever timeframe they want. They can say six months, three months, and it's a brand-new term. You don't have to worry about these terms, about going halfway into a term with one month left: you underwrite that position immediately, and this is really good for LPs too.
V
It's a really good situation for them: they're able to capture and garner more fees, and this also allows for better systems to be built on top. Simplicity, better vaults, better value capture mechanisms, better ways to go long on variable yield in the market, better ways to be negative on variable yield, and to sort of play with these markets and have fun. So this is LP volatility harvesting across yield rates. I'll try to release a lot more simulations and data here soon; follow us on Twitter.
X
So earlier this year there was a project called Defrost Finance on Avalanche, and they did leveraged yield farming, but they had a lot of trouble with keeping the liquidity in order to do that. So in your example, where you had that 3x-to-7x leverage on the Dai, how do you maintain the liquidity to keep that going, so that people actually want to trade the other side against you?
V
Yeah
so
great
question,
so
we
actually
saw
like
I
think
it
was
through
the
bull
market,
something
like
400
million
dollars
of
trades
on
our
platform,
so
we
actually
saw
a
really
active
activity
on
yield
token
compounding
or
increasing
your
exposure
on
the
variable
side.
I
think.
Maybe
some
of
your
question
is
how
do
you
match
that
if
you
have
leverage
to
the
other
side,
because
that's
cell,
but
you
match
that
on
the
purchase
side?
V
That's
why
things
like
like
having
these
fixed
rates
is
collateral
right
being
able
to
leverage
into
buying
the
fixed
rates
is
important.
Liquidity
is
another
thing
like
liquidities
down
significantly
in
the
market,
including
our
platform,
I,
sort
of
think
we
build
sustainable,
strong
products.
Things
like
hyperdrive.
That
makes
sense
that
are
a
significant
leap
on
what
exists
in
existing
tradify
like
this
stuff
doesn't
exist.
It's
it's
crazy,
cool
and
I.
V
And I think it's just a matter of time, of sort of building and garnering that space. I've been looking at a lot of other things too: real-world assets, some other things that can also play in as yield sources. I think staking derivatives are a really good yield source; people are going to do those regardless. MEV is an interesting one, etc., etc. So, yeah, any other questions?
U
Remember, the closing ceremony will be at 3:30 PM at the top of the mountain. Thank you.
Y
All right, let's see. Hi everyone, and thanks a lot for coming to our talk. I'm Daniel, and with me here is Ari, and this is a piece of work that has been done by a few people: there are also Sam and Lewis, who are in the audience but won't be presenting here today. Today we're going to talk about security risk in DeFi, and in particular we're going to try to give some definitions and explain how technical security and economic security differ.

Y
A quick outline of what we'll be presenting today. This talk is meant to be fairly accessible, so we'll start by presenting the different primitives used in DeFi, and then we'll present a couple of protocols that can be built from them. Once we've done this, we'll enter the main part of the talk, which will be explaining what technical security and economic security are, and we'll finally present a few open research challenges focused on these different types of security.
Y
I will start at a very high level: what is DeFi? We have a couple of definitions and properties. The definition we give is a peer-to-peer powered financial system, and for a typical DeFi system we say it should have a few idealized properties. The first one is being non-custodial, which means that participants should have control over their funds at any point in time.

Y
The next one is that it should be permissionless: anyone should be able to participate in these financial activities without restrictions and without being censorable by a third party. It should be openly auditable, which means that anyone can look at the state of the blockchain, or whatever underlies the DeFi system, and see the transactions and what is going on. And finally, it should be composable, which means that different protocols should be able to communicate with each other and interact to form new financial systems on top.
Y
On
top
of
this,
so
well
with
this
D5
coming,
there
has
been
a
lot
of
controversy
and
all-
and
we
can
see
this
a
bit
as
like
two
sort
of
very
point
of
views
and
optimistic
and
a
pessimistic
point
of
view
where,
for
the
defy
Optimist
defies
a
huge
technological
Advance.
This
is
new
Financial
system,
that's
openly
editable,
and
that
has
all
the
properties
listed
before
and
that's
obviously
very
promising
for
the
future,
and
there
has
been
already
a
lot
of
good
things
with
D5.
Y
For
example,
stable
coins
like
die,
has
been
used
in
countries
like
Argentina
to
fight
inflation,
any
sort
of
things,
and
also
we
have
seen
that
more
custodial
system
has
tend
to
fail
in
some
places
where
decentralized
Finance
could
could
have
allowed
people
to
have
more
visibility
on
what
was
going
on.
Y
On
the
other
hand,
there's
also
this
pessimistic
view
that
is
that
well
defied
and
regulated
it's
hack
prone
there.
It
can
allow
people
for
its
certain
most
nature,
to
to
commit
many
sorts
of
crime
like
scamming,
money
laundering
and
so
on
and
well.
There
had
also
been
like
many
hacks,
as
probably
you
have
all
seen,
and
this
North
Korean
hackers,
hacking
protocols
and
also
recent
a
bit
more
recently,
the
crypto
mixer
being
sanctioned
and
and
so
on.
Y
Y
A
complete
must
for
the
defy
the
vision
of
the
defy
Optimist
to
be
fulfilled,
and
really
what
we'll
be
trying
to
do
is
to
to
differentiate
between
what
is
technical
security
problem
and
what
is
an
economical
security
problem
before
this
we'll
give
a
bit
of
background
around
like
different
Primitives
that
are
needed
for
all
this
and
we'll
start
with
some
very
basic
assumptions
here
is
well
all
Z
by
protocols
rely
on
an
underlying
blockchain
and
it
assumes
some
security
properties
which
are
consistency,
integrity
and
availability
and
they're
going
to
be
any
D5
without
disease.
Y
To
begin
with,
then
it
uses
a
few
other
properties
of
the
blockchain,
and
here
one
that
I
want
to
highlight,
because
it's
very
there's
a
lot
of
sort
of
security
issues,
because
this
are
like
many
potential.
Let's
say
it's
atomicity,
which
means
that
if
a
transaction
starts,
it
will
either
succeed
completely
or
it
will
revert,
but
there
cannot
be
a
half
transactions
that
just
cannot
be
and
and
obviously,
if
I
relies
on
Smart
contracts,
which
are
programs
that
run
on
the
blockchain
and
using
these
Primitives.
Y
There
are
a
few
really
essential
piece
of
software
and
of
other
Primitives
that
are
required
for
defy
first
one
being
oracles
so
because
blockchain
cannot
have
access
to
off-chain
information.
Somebody
needs
to
take
this
option,
information
and
put
it
on
chain
and-
and
these
are
called
oracles
and
are
used,
for
example,
to
to
get
price
of,
say
USD,
because
this
is
not
an
information
you
could
possibly
have.
Y
Then
there
is
governance
which
is
used
typically
to
upgrade
D5
protocols
with
time
change,
parameters
and
this
sort
of
things
then
we
have
Keepers,
which
are
off
chain
sort
of
bots
and
that
will
submit
transactions
to
update
States.
This
is
because,
in
most
blockchain
systems
you
need
a
transaction
to
be
able
to
perform
any
sort
of
State
transition
and
therefore
somebody
has
to
take
care
of
this.
And
finally,
there
are
many
Market
mechanisms
that
are
used
in
D5.
Y
There
are
collateralization
where
people
will
put
some
money
at
stake
to
make
sure
that
you
can
hit
the
default
on
the
position.
For
example,
then,
there's
arbitragers
there's
also
liquidations
equations,
which
are
used
if
somebody
does
not
have
enough
collateral
for
whatever
position,
so
that
covers
roughly
the
main
primitive
that
we'll
need
in
D5
protocols
and
now
I'll
present
just
a
couple
of
the
five
protocols.
Y
Probably
some
most
of
you
are
already
familiar
with
these,
but
just
to
highlight
a
few
properties
so
that
we
can
kind
of
all
be
on
the
same
page
to
start
talking
a
bit
more
about
security
aspects.
So
there
are
many
types
of
protocols,
but
we
don't
really
have
time
to
go
through
all
of
them.
So
we'll
first
start
with
automated
market
makers,
which
are
a
decentralized
Unchained
electronic
changes,
because
on
chain,
it's
way
too
expensive
to
have
some
order.
Y
Book
based,
Texas,
amms
or
somehow
has
become
extremely
popular
and
they
have
a
lot
of
good
properties.
A
few
properties
are
less
good,
but
the
main
idea
is
that
people
will
come
and
provide
liquidity
to
a
pool
that
consists
of
typically
two
or
more
assets
and
by
providing
these
liquidity
they
in
some
way
commit
to
a
portfolio
of
these
underlying
Assets
in
a
portfolio
that
will
be
rebalanced
by
arbitragers.
That
will
try
to
keep
the
prices
consistent
with
some
other
off-chain,
for
example,
prices
or
prices
on
some
other
exchanges.
Y
Once
that
is
done,
people
can
trade
for
this
pool
and
that
generates
fee
for
the
fees
for
the
pool,
and
typically
this
is
profitable
in
the
case
where
they
are
volatility.
Harvesting,
when
it
was
talked
just
before,
was
a
lot
more
advanced
than
this,
but
basically
it's
if
the
price
is
around
some
line
and
going
up
and
down
it's
typically
profitable,
as
opposed
to.
If
the
price
is
consistently
diverging,
then
maybe
not
so,
and
it
there's
still
some
risk
and
especially
strategy
risk
and
adverse
selection.
Y
Risk
involved
with
these
amms
and
another
very
important
type
of
protocol
for
D5
or
protocol
for
low
limit
funds
also
called
lending
protocols
which
are
Unchained
markets
where
people
can
borrow
and
lend
assets.
So
typically,
people
will
come
and
deposit
some
assets
that
are
pulled
in
smart
contracts.
Other
people
can
come
and
borrow
these
assets,
and
to
do
so,
they
will
need
to
be
overcollateralized,
so
they
cannot
default
on
their
position.
Y
An
interesting
thing
is
that
there
are
algorithmic
interest
rates
and
which
means
that,
typically,
with
this
Market
there's
no
duration
risk
and
if
a
borrower
would
default
on
his
position,
which
means
his
collateral
ratio
is
not
high
enough
anymore,
he
can
get
liquidated
based
on
rules
are
imposed
by
the
protocol
and
a
final
Point.
That's
also
very
sort
of
typical
to
defy
our
flash
loans.
Y
That's
quite
an
interesting
primitive
because
it
allowed
people
to
borrow
money
without
having
any
under
equilateral
at
any
collateral,
and
the
condition
for
this
is
that
they
repay
this
loan
in
a
single
transaction,
and
this
works
mostly
because
of
the
Primitive
I
described
before,
which
is
atomicity.
Y
So
with
all
this,
then
this
protocols
can
communicate
together,
as
I
mentioned
earlier,
and,
for
example,
one
person
could
deposit
some
money
in
an
emm
and
get
some
LP
shares
and
use.
This
MP
shares,
for
example,
in
lending
protocol
as
collateral
to
be
able
to
borrow
some
other
type
of
asset,
and
that's
a
very
interesting
thing,
with
D5
that
all
the
protocol
can
really
very
easily
communicate.
Y
So
now
that
I'm
done
with
this
sort
of
intro
background
about
D5
itself,
we'll
dive
a
bit
more
into
the
security
and
we'll
really
try
now
to
to
delineate
Technical
and
economical
security
and
first
we'll
start
with
some
informal
definition.
And
so
we
say
here
that
for
protocol
or
smart
contract
to
be
technically
secure,
it
needs
to
be
secure
from
an
attacker
who
is
limited
to
Atomic
actions
and
we're
like
here
being
secure,
has
been
going
to
get
exploited.
Y
We
have
a
more
formal,
definitely
exploit
in
a
paper
that
we'll
try
at
the
end,
but
for,
for
example,
it
could
be
not
to
be
able
to
sell
assets,
and
so
here
Atomic
actions
means
that
the
action
would
be
either
a
single
transaction
or
either
a
bundle
of
transactions.
Y
But
the
property
needs
to
be
that
all
these
actions
will
be
executed,
atomically
and
because
of
this,
so
technical,
so
attacks
on
technical
security
are
risk-free
because,
basically
the
attacker
can
just
perform
the
attack
and
at
the
end
of
the
transaction
or
of
this
Atomic
operation,
he
can
see
if
yes
or
no,
he
made
money
and
if
he
did
not
make
any
money.
But
if
he
made
money
he
profits.
If
he
didn't
he'll,
only
pay
the
gas
fees
and
can
revert
the
transactions.
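A minimal sketch of why this is risk-free (hypothetical Python with a toy `Chain` object of our own, standing in for an attack contract on the EVM; not from the talk):

```python
# Hedged sketch: the atomic "try the exploit, keep it only if profitable"
# pattern. A revert undoes every state change in the transaction, so the
# attacker's only possible loss is gas. `Chain` is a toy stand-in, not a
# real EVM interface.
import copy

class Chain:
    def __init__(self):
        self.balances = {"attacker": 100.0, "protocol": 1_000.0}

    def execute(self, bundle):
        bundle(self)                       # run the attacker's sequence of calls

def atomic_attack(chain: Chain, bundle) -> None:
    snapshot = copy.deepcopy(chain.balances)   # the EVM does this implicitly
    start = chain.balances["attacker"]
    chain.execute(bundle)
    if chain.balances["attacker"] <= start:    # not profitable?
        chain.balances = snapshot             # revert: attacker only lost gas

# A failed "exploit" that moves nothing leaves state untouched:
atomic_attack(Chain(), lambda c: None)
```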
Y
So
by
definition,
or
by
kind
of
extending
the
definition,
a
technical
attack
will
always
be
risk-free.
Otherwise
it
it
will
fit
our
other
type
of
attack.
And
so
there
are
some
examples
of
tactical
attacks
are
Atomic
Mev
sandwich
attacks
and,
for
example,
like
reference,
C,
or
also
attacks
that
exploit
logical
bugs,
and
that's
all
now
fairly
well
studied.
We
know
more
or
less
how
to
protect
against
these.
So,
of
course,
like
testing
smart
contracts
very
well
in
program,
analysis
or
formal
methods,
and
these
are
in
general,
the
better
studied
one.
Y
There
are
single
transaction
sandwich
attacks,
which
is
where,
if
a
protocol
say,
would
use
the
spot
price
of
an
amm
to
to
use
as
a
price
in
their
protocol
an
attacker
could
come
and
and
balance
is
amm,
so
that
when
the
protocol
would
try
to
look
up
the
price
it
would
get
the
wrong
price
and
under
attacker
could
fairly
easily
exploit
this
to
make
money
or
governance
attack.
If
it's
possible
in
one
transaction
to
do
some
governance.
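To illustrate the spot-price problem, here is a hedged sketch with made-up numbers (a generic constant-product pool, not any specific protocol): the spot price is just the ratio of reserves, so one large swap inside the attacker's own transaction moves it arbitrarily.

```python
# Hedged sketch: manipulating a constant-product (x*y = k) spot price in one tx.
def spot_price(reserve_token: float, reserve_usd: float) -> float:
    return reserve_usd / reserve_token

reserve_token, reserve_usd = 1_000.0, 1_000_000.0    # fair price: $1000
print(spot_price(reserve_token, reserve_usd))         # 1000.0

dx = 500.0                                            # attacker dumps 500 tokens
dy = reserve_usd - (reserve_token * reserve_usd) / (reserve_token + dx)
reserve_token, reserve_usd = reserve_token + dx, reserve_usd - dy
print(spot_price(reserve_token, reserve_usd))         # ~444: under half of fair

# A protocol reading this spot price now badly undervalues the token; the
# attacker can exploit the mispricing and swap back, all atomically.
```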
Y
Lastly,
there
are
transaction
ordering
attacks
so
framework
displayments
attacks
where
an
attacker
could
front
run
some
particular
transaction
to
make
profit
instead
of
the
person
who
initially
initiated
this
transaction
and
also
multis
transaction
sandwich,
attacks,
which
are
an
attack
in
where
an
attacker
could
come
and
see
that
somebody
is
trying
to
swap
but
have,
for
example,
a
very
high
slippage
tolerance
and
he
could
invalid
the
pool
before
to
give
the
the
victim
a
bad
price
and
then
rebalance
the
people
after
and
would
get
the
profit
that
the
victim
lost
because
of
the
of
the
price
he
got
so
now,
I'll
give
it
to
Ari
so
that
he
can
talk
about
Economic,
Security.
Z
So, the other type of security. We define a protocol as economically secure if it's not profitable for an attacker who can perform non-atomic actions to manipulate the protocol into unintended states where they can extract assets from it or cause other sorts of mayhem. Economic security is about an exploiting agent trying to manipulate some incentive structure of the protocol to profit, for example by stealing assets; and since these actions are non-atomic, they have upfront tangible costs and are not risk-free.

Z
Basically, you have to set up the attack and then actually perform it later on, and something could happen between those two times that makes the attack fail.

Z
If something happens in between those two actions, such as the market responding or other agents responding, the attack can fail. To address this, we really need economic models of what's happening in between these transactions; the attacker would need to understand this and basically manipulate what's happening in between.
Z
So,
let's
hammer
down
a
little
bit
further,
what
the
difference
is
between
Technical
and
Economic
Security,
so
in
a
technical
exploit,
we
have
an
attacker
who's
effectively.
Finding
a
sequence
of
contract
calls
that
leads
to
a
profit,
and
these
are
either
in
a
single
transaction
or
a
bundle
of
transactions.
But
it's
being
done
all
at
once
or
not
at
all,
and
for
these
formal
models
of
contracts
are
basically
enough.
Z
So
to
say,
although
it
can
still
be
quite
a
hard
computer
science
problem
to
work
out
sort
of
optimal
the
optimal
ways
for
attacks
to
be
performed.
Z
So
there's
kind
of
a
setup
there's
actually
performing
the
attack
later,
but
in
between
some
sort
of
Market
can
respond
or
other
agents
can
respond,
and
so
the
attacker
doesn't
really
know
if,
if
it's
profitable
at
the
end,
and
for
this
we
need
models
of
what's
going
on
in
between
which
is
a
bit
different
than
just
formally
verifying
contracts.
And
so
this
is
kind
of
an
open
area
of
research,
especially
around
kind
of
understanding,
liquidity
of
markets.
Z
We could use a smarter choice of oracle so that the atomic attack isn't possible, and that turns it instead into an economic exploit. So consider that the protocol uses a slightly smarter oracle: a time-weighted average AMM price. These can still be manipulated over time, but it involves risk for the attacker; yet they may still be able to steal assets, and actually something like that just happened very recently in Mango, I believe.
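A hedged sketch of why TWAP manipulation is non-atomic and risky (illustrative numbers of our own): to move an N-block time-weighted average, the attacker must hold the spot price away from fair value for many consecutive blocks, exposed to arbitrage the entire time.

```python
# Hedged sketch: a pinned spot price drags a TWAP up only gradually, so the
# attacker is exposed (and bleeding value to arbitrageurs) for many blocks.
from collections import deque

WINDOW = 30                                   # TWAP window, in blocks
prices = deque([1000.0] * WINDOW, maxlen=WINDOW)
twap = lambda: sum(prices) / len(prices)

pinned_spot = 2000.0                          # attacker pins spot 2x above fair
for block in range(1, WINDOW + 1):
    prices.append(pinned_spot)                # one manipulated observation/block
    if twap() >= 1500.0:                      # level the attack needs
        print(f"TWAP target reached after {block} blocks")   # 15 here
        break
# During every one of those blocks arbitrageurs can trade against the pinned
# price, so the cost scales with pool liquidity and the oracle window length.
```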
Z
So
we
can
see
this
also
like
in
in
data
about
what's
been
happening
in
different
protocols.
So
one
example
here
to
kind
of
illustrate
this.
A
little
further
is
something
that
happened
in
compound
in
November
2020..
Now
this
wasn't
really
clearly
an
exploit,
but
it
kind
of
illustrates
what
could
have
been
an
exploit
and
what
can
be
exploits
in
other
protocols.
Z
So,
basically,
the
price
of
dye
was
trading
on
on
coinbase
and
for
a
very
short
period
of
time,
the
price
pumped
to
a
dollar
Thirty
and
because
compound
was
using
coinbase
as
an
oracle.
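To see why a 30% DAI repricing matters for borrowers, here is a worked example with made-up round numbers (ours, not Compound's actual parameters):

```latex
% Hypothetical position: 100{,}000 DAI of debt against \$150{,}000 of ETH
% collateral with a 75\% collateral factor, i.e. max debt = \$112{,}500.
\text{debt valued at } \$1.00:\quad 100{,}000 \times 1.00 = \$100{,}000 \le \$112{,}500 \quad (\text{safe})
\text{debt valued at } \$1.30:\quad 100{,}000 \times 1.30 = \$130{,}000 > \$112{,}500 \quad (\text{liquidatable})
```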
Z
Positions priced against that inflated DAI suddenly looked undercollateralized, and liquidations followed. Now, this wasn't clearly an exploit, but you could imagine somebody setting this up intentionally: manipulate the market, which manipulates the oracle price, and then profit from the resulting liquidations. And that's essentially what we've seen later as well: in a clear exploit, something similar happened in Venus in May 2021, where the Venus market was manipulated and the attacker was essentially able to leave the protocol with a lot of bad debt. And again, just recently, something similar happened in Mango.
Z
So
how
do
we?
What
are
the
tools
available
to
like
help
to
fix
Economic,
Security
and
and
make
protocols
more
secure,
one
of
the
first
ones?
The
biggest
is
really
over
collateralization,
and
here
it
just
doesn't
come
without
risks,
though,
and
so
it's
very
important
to
include
an
analysis
of
the
the
actual
economics
in
designing
and
calibrating
your
your
protocol.
Z
So,
for
instance,
you
could
have
persistent
negative
shocks
that
affect
collateral
prices,
and
you
could
also
have
kind
of
illiquid
markets
around
those
those
assets,
and
this
can
lead
to
loans
being
undercoateralized
and
the
system
being
left
with
bad
debt.
Z
It
can
also
lead
to
situations
where
it's
unprofitable
for
Liquidators
to
actually
initiate
the
liquidations,
which
then
also
can
lead
to
another
protocol
having
bad
debt,
because
the
liquidations
don't
happen
in
time
and
there's
also
sort
of
issues
that
can
happen
with
stable
coins
and
deleveraging
of
these
stable
coins,
like
we
saw
on
on
in
die
on
black
Thursday,
where
you
had
this,
like
short
squeeze
effect,
and
you
also
had
this
sort
of
like
collapse
of
the
of
the
liquidation
engine.
Z
Some
other
things
that
you
need
to
be
aware
of
when
you're
designing
protocols
with
respect
to
Economic
Security
is
the
minor
extractable
value
that
you
can
be
can
be.
Setting
up.
I
won't
go
too
in
depth
here,
because
there
have
been
a
lot
of
great
talks
already
about
Minor
extractable
value,
I'll
just
point
out
that
D5
applications
tend
to
give
many
new
sources
of
Mev,
and
you
need
to
be
considering
these,
and
this
is
essentially
coming
from
Arbitrage
opportunities.
Z
So,
for
instance,
in
indexes
you
can
have
sort
of
like
stale
order
quotes
and
whoever
fulfill
those
is
able
to
do
an
Arbitrage,
Loop
and
profit,
and
in
lending
protocols
there's
usually
a
a
liquidation
incentive
and
if
you're
the
person
who
can
come
in
and
perform
the
liquidation
when
it's
when
it's
allowed,
then
you
can
profit
from
from
being
the
person
who
does
that,
and
this
can
lead
to
consensus
layer
risks
if
this
Mev
is
greater
than
the
block
reward.
Z
So
the
governance
is
basically
introducing
a
way
to
upgrade
protocols
and
these
need
sort
of
careful,
guardrails
and
careful
design
so
that
your
Governors
aren't
going
to
have
Mis
incentives
to
do
things
that
are
actually
bad
for
protocol
users
and
so
commonly
governance
may
not
really
be
incentive
compatible
with
the
actual
users
of
the
protocol.
And
this
this
can
be
an
issue.
They
may
not
act
in
the
interest
of
these
protocol
users
and
to
illustrate
a
little
bit
in
some
sense.
Z
Governors
have
some
honest
cash
flows,
but
these
cash
flows
may
not
always
be
very
high.
Sometimes
they
can
crash
and
then,
if
they
do
crash,
the
region
of
incentive
compatibility
might
shrink,
and
it
may
be
more
profitable
for
these
Governors
to
instead
of
doing
sort
of
honest
actions
and
upgrading
the
protocol
in
good
ways
to
instead
decide
to
attack
the
protocol
and
basically
steal
assets
from
the
protocol
or
do
other
things
that
put
protocol
assets
at
risk
and
the
costs
to
do.
Z
Here we need to distinguish between, one, a market price that is being manipulated but correctly supplied by an oracle, and, two, an oracle that is itself being manipulated.

Z
In market manipulation you have an adversary who is manipulating the market price, either on or off chain depending on where that market is, over some period of time, and they can profit if they can exploit the manipulation in a protocol that uses that market as an oracle. These problems persist even if the oracle is not an instantaneous AMM price, because there is only so much liquidity in the market, and the attack's difficulty depends on that liquidity.
Z
And importantly, this is risky, because you have to do it over time; it can't be atomic, which is again the main point about economic security. This compares to oracle manipulation, where it depends on the design of your oracle: even if the market price is not being affected, the oracle might be reporting incorrect prices.

Z
Centralized oracles are potentially a single point of failure, and you might want to control for that in designing your protocol. On-chain AMM-based oracles, as we've seen, can be manipulated, and the cost of manipulating them, which depends on the liquidity in those markets, is something you should be carefully considering. Other decentralized oracle solutions are imperfect too, for the reason that you can't really verify the correctness of prices on chain, so it's quite an open problem
Z
How
to
do
this
very
well
so
that
sort
of
concludes
our
discussion
of
Technical
and
economic
security,
but
it
leads
to
a
host
of
new
research
challenges
that
are
really
going
to
be
important
for
securing
D5
protocols
into
the
future.
So
I'll
give
you
just
like
a
quick.
A
quick
flavor
of
these
one
is
around
composability
risks.
Mostly.
Z
These
are
not
very,
very
well
Quantified,
but
a
lot
of
program
analysis
can
be
done
to
to
understand
these
risks
a
bit
better
and
then
to
design
your
protocols
in
ways
where,
where
how
you're
composing
with
other
protocols
is
as
safe
as
possible.
Another
is
what
we
were
just
talking
about.
Z
This
governance,
sort
of
risk
and
modeling
the
incentive
compatibility
of
of
Governors
and
sort
of
modeling
out
what
we
call
governance,
extractable
value
and
trying
to
understand
when
our
Governors
and
your
system
incentivize
to
do
things
that
are
good
for
the
protocol.
And
how
do
you
stop
them
from
doing
bad
things,
and
these
need
economic
models
about
how
these
government
systems
work
over
time
and
how
the
agents
make
decisions.
Z
Another
is
around
oracles,
so
basically
a
similar
sort
of
role
as
governance,
incentive
compatibility
to
report,
correct
prices
and
then
in
a
fair
amount
of
work
to
be
done
in
in
Mev
and
there's
just
one
illustration
of
sort
of
like
what
makes
Mev
very
hard
is
that
if
you're,
if
you're,
looking
at
just
intrablock,
Mev
Atomic
Mev
this,
this
becomes
an
optimization
problem
that
resembles
knapsack,
but
where
the
items
in
the
knapsack
can
change
depending
on
the
current
selection.
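Roughly, in our own formalization (not the speakers'), ordinary knapsack would read as below; the MEV twist is that each bundle's value, and even its feasibility, depends on which other bundles are already selected and in what order:

```latex
% Classic knapsack over candidate bundles j with profit v_j and gas g_j:
\max_{x \in \{0,1\}^n} \;\; \sum_j v_j\, x_j
\qquad \text{s.t.} \qquad \sum_j g_j\, x_j \le G_{\text{block}}.
% Intra-block MEV replaces the constants v_j with state-dependent values
% v_j(x, \text{ordering}): earlier bundles change the state that later
% bundles execute against, which is what makes the problem harder still.
```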
Z
So
it
should
be
even
harder
than
knapsack,
and
so
it
should
be
an
NP,
hard
problem
and
this
becomes
even
harder
than
if
you're
looking
at
interblock
Mev.
And
this
includes
cross
chain
Mev.
Because
now
you
have
to
look
at
an
inter-temporal
version
of
that
of
that
same
optimization
problem
and
there's
also
a
lot
of
work
to
be
done
in
sort
of
making
Anonymous
D5
protocols
and
preserving
privacy.
Z
So
that
brings
us
to
the
end
of
the
talk.
Just
as
a
quick
recap:
we've
covered
how
D5
has
several
Innovations,
but
it
also
has
several
risks
and
to
fulfill
The
Verge
the
the
vision
of
the
defy
Optimist.
We
really
need
to
make
sure
that
D5
is
secure
and
to
do
that.
We've
delineated
two
types
of
security
risk
between
Technical
and
Economic
Security,
and
the
key
distinctions
that
that
allow
this
to
be
useful
are
that
it's
based
on
atomicity,
and
it
really
tells
you
a
lot
about
the
models.
Y
Just while we're waiting for the mic: this is based on the paper that we wrote, and here is a QR code and the link if you want to have a look. There are more formal definitions in there, so please feel free to take a look.
V
So
how
does
this
overlap
with
a
lot
of
what
gyroscope
is
doing
and
sort
of
your
mission
and
vision?
The
super
super
curious
about
that.
Yeah.
Z
That's
a
great
question
so
how
we've
designed
so
it's
a
gyroscope
we're
working
on
a
new
stablecoin
project
where
we're
building
a
bunch
of
different
Primitives
that
allow
what
we
think
is
a
more
resilient,
stable
coin
design,
and
it's
really
coming
out
of
all
the
research
we've
been
doing.
Z
We've
set
up
some
of
the
initial
models
that
like
helped
understand,
for
instance,
Economic
Security
and
the
mechanism
design
that
went
into
gyroscope
takes
all
of
that
knowledge
into
consideration
and
tries
to
do
the
best
mechanism
design
that
we
can
considering
how
we
understand
Economic
Security
today,
foreign.
AA
Hi everyone, my name is Felix. I'm working with an awesome team of people at CoW Swap, and I want to talk to you today about how we can design fair trading mechanisms for Ethereum. I want to frame this vision around the idea of having just a single price per token per block, and we'll explore a little bit of why we think this is the most fair way of trading on Ethereum.
AA
The
reason
why
we're
talking
about
this
is:
if
we
look
at
Main
adoption
of
decentralized
trading,
we
can
see.
We've
made
some
progress
in
the
last
couple
of
years,
but
we're
just
seeing
the
tip
of
the
iceberg
here.
Even
digital
asset
trading
as
a
whole,
including
centralized
exchanges,
is
many
times
larger
than
Dex
trading.
Today
and
when
we
want
to
get
into
more
interesting
markets
like
the
U.S,
Securities
Market,
or
maybe
the
Holy
Grail
of
trading
Global
foreign
exchange
trading,
then
we
really
need
to
step
up
our
game
and
I.
AA
Think
personally,
that
ethereum
has,
as
this
credible
neutral
settlement
layer,
the
potential
of
enabled
trading
of
all
different
parties
in
Cross,
Nations,
supernational
actors.
So
I
really
think
this
Global
foreign
exchange
trading
is
a
goal
that
we
can
strive
for
so
to
talk
about
how
we
get
there.
AA
Let's
first
revisit
a
little
bit
the
brief
history
honestly
that
we
had
in
the
in
the
context
of
decentralized
exchange,
Trading
and
I
want
to
frame
this
history
based
upon
an
article
from
Alvin
Roth,
who
is
an
expert
on
Market
design
and
won
a
Nobel
prize
in
economics
a
while
ago,
who
wrote
this
essay
on
necessary
requirements
for
a
market
to
function
in
Harvard
Business
Review
a
while
ago,
and
he
mentions
three
properties
that
a
market
needs
to
at
least
fulfill
in
order
to
function
properly.
AA
And
so,
if
we,
if
we
look
at
how
everything
started
a
couple
of
years
ago,
the
first
taxes
on
ethereum,
where
etherdale
to
idex
Oasis,
basically
on
chain
limit
order
book
taxes
and
those
worked
fine.
For
some
token
pairs,
like
etherus
DC,
maybe
worked
quite
quite
well,
but
they
had
a
fundamental
flaw,
which
was
that
they
required
active
Market.
AA
Amms
really
solved
the
problem
of
liquidity
provision
and
allowed
every
one
of
us
to
become
a
liquidity
provider
just
by
staking
two
assets
into
a
smart
contract
that
would
then
automatically
sell
them
on
our
behalf,
based
on
some
based
on
some
curve,
based
on
some
preference
curve
and
all
of
this
started
in
the
context
of
prediction
markets
way
before
blockchains
were
born.
Robin
Hansen
had
a
paper
on
on
logarithmic,
basically
amms
and
then
in
the
context
of
ethereum.
We
had
many
different
teams:
pioneering
more
or
less
complex
functions.
AA
So
what's
the
problem
with
amms
the
problem
with
amms
and
again,
we
have
heard
many
talks
about
this
at
this
conference
is
Mev
or
how
I
will
call
it
in
this
presentation,
pay
as
bit
pricing
and
pay
as
bit.
Pricing
comes
from
the
fact
that
what
you're
seeing
on,
for
example,
uni
swap
if
you're
trading
some
tokens,
is
not
actually
what
you
are
sending
into
the
mempool
to
be
executed.
This
is
a
trade
that
actually
happened
a
couple
of
weeks
ago.
AA
Somebody
was
selling
one
million
dollars
on
uni
Swap
and
probably
saw
on
the
UI
that
they
would
get
750e.
But
the
thing
that
you
need
to
add
on
uniswap
is
a
discount
to
the
current
fair
market
Price,
which
is
referred
to
as
slippage
tolerance,
and
you
can
think
about
slippage
tolerance,
as
basically
how
much
volatility
are
you
willing
to
accept
in
order
for
your
trade
to
still
go
through?
Blockchains
are
asynchronous
by
Design.
AA
So
there's
race
conditions,
and
so
the
moment
you
click
swap
on
uni
swap
somebody
else
might
also
be
clicking
swap
somewhere
else,
and
therefore
you
need
to
price
that
volatility,
tolerance
or
slippage
tolerance.
In
now.
The
problem
with
it
is
if
your
trade
is
large
enough.
If
the
slippage
tolerance
is
large
enough,
the
block,
Builder
proposer,
validator
Miner
column,
what
you
will
has
an
incentive
to
actually
manipulate
the
prices
and
execute
your
trade
exactly
at
your
bid
at
the
bid.
That
includes
your
slippage
tolerance,
and
so
this
is
what
happened
here.
AA
The
this
person
got
sandwiched
and
lost
ten
thousand
five
hundred
dollar
to
the.
In
this
case,
it
was
after
the
merge
to
to
the
to
the
block
producer
or
validator,
and
so
this
is
really
what
makes
amm's
fundamentally
unsafe
by
Design.
The
fact
that
you
cannot
be
honest
about
your
slippage
tolerance.
You
basically
have
to
lie
to
the
amm
and
say:
hey,
you
know
just
set
the
small
slip
Insurance,
because
I
know,
if
I
really
tell
you
what
I'm
willing
to
accept.
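To make the mechanics concrete, here is a hedged back-of-the-envelope sketch (illustrative numbers of our own, not the actual trade's parameters): the sandwicher pushes the execution price right up to the victim's signed slippage-tolerance bid and pockets roughly the difference.

```python
# Hedged sketch: worst-case sandwich extraction is roughly the victim's
# slippage tolerance times trade size, minus the attacker's fees and gas.
trade_size_usd = 1_000_000          # victim sells $1M
slippage_tolerance = 0.011          # 1.1% tolerance signed into the order
attacker_fees_and_gas = 500.0       # attacker's own costs (illustrative)

worst_case = trade_size_usd * slippage_tolerance - attacker_fees_and_gas
print(f"~${worst_case:,.0f}")       # ~$10,500: the order of magnitude of the
                                    # loss in the example trade
```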
AA
Somebody
is
going
to
sandwich
me
and
and
run
over
me
and
so
maybe
to
go
a
step
back
and
and
revisit.
Why
is
it
important
to
save
a
safe
mechanism
design
here,
a
safe
Market?
The
first
reason
is
that
safe
markets
are
just
fundamentally
simpler
to
reason
about
and
make
conclusions
that,
basically
the
system
is
secure
and
that
the
allocations
is
is
optimal
and,
and
you
find
the
right
answer.
The
second
reason
why
safe
markets
are
better
or
important.
AA
Is
that
they're
fundamentally
more
efficient
than
unsafe
markets,
because
if
users
are
incentivized
to
Rel
to
reveal
their
true
preferences,
you
can
just
look
at
the
preference
and
find
the
globally
socially
optimal
application
in
in
one
round
trip,
rather
than
doing
multiple
bids
and
asks
and
basically
haggle
around
the
haggle
around
the
table.
And
then
the
last
point
is
maybe
a
more
altruistic
argument
that
safe
markets
are
just
fundamentally
fairer
to
the
least
sophisticated
participants.
AA
Tyrone
was
talking
about
stealing
from
grandmothers,
I
personally
think
more
about
people
that
visit
Reddit
and
find
a
new
coin
and
then
go
on
uni
swap
trying
to
trade
it
and
basically
Get
Wrecked
and
yeah.
The
least
sophisticated
participants
are
automatically
protected
in
safe
markets
and
so
I
agree
with
Maureen
O'hara
who's,
a
professor
at
Cornell
who
says
especially
on
this
problem
of
Mev.
That
blockchain
is
not
going
to
succeed
if
it's
not
viewed,
viewed
as
fared,
and
so,
let's
you
know,
revisit
the
site
that
we
saw
in
the
very
beginning.
AA
The first thing I want to revisit is what we think is probably the fundamental root cause of most of the MEV that's out there, maybe even all MEV, and that is that a single asset on Ethereum today can have many different prices within the same block. Here's an example block from, I think, a week and a half ago, where the most liquid pair that exists on Ethereum, ETH/USD, was traded 11 times within the same block, at eight different prices.
AA
The
difference
between
the
lowest
and
the
highest
price
in
this
block
was
more
than
one
percent,
and
imagine
the
block
really
just
happens
at
a
single
moment
in
time.
That's
what
blockchains
do
they?
They
freeze
kind
of
the
the
state
of
the
world
in
a
specific
instant
of
time,
and
yet
that
instant
of
time
told
people
well,
there's
eight
different
prices
for
for
the
most
liquid
pair
and,
for
example,
latency
Arbitrage.
When
you
see
a
price
change
on
binance
and
you
try
to
Arbitrage
it
away
against
an
amm.
AA
AA
In the sandwich we saw earlier there were three different prices: the opening, the victim, and the closing of the sandwich. Again, one asset, many prices, and that's why we have MEV. So I argue that any market structure where prices depend on arbitrary intra-block ordering, which the block producer can choose basically at will, is unsafe by design.
AA
And
so
what
is
the
solution?
Well,
you
might
have
guessed
it
from
how
I've
started
this
this
this
chapter,
but
the
solution
is
to
just
have
one
price
per
token
per
block,
and
so
would
this
look
like
we
basically
associated
with
every
ethereum
block,
have
a
price
Vector
for
every
asset
that
is
traded
in
the
block
and
that
asset
just
has
a
single
price
at
which
it
can
be
accessed
within.
Well
that
block-
and
this
price
here
is
nominated
in
dollars.
AA
It
could
theoretically
be
nominated
in
every
in
any
currency
you'd
like,
but
the
idea
would
then
be
that
the
participants
that
are
trading
in
this
block
would
be
trading
according
to
this
single
price
clearing.
AA
So
if
you're
trading
eth
against
Bitcoin
in
this
block,
you
can
basically
get
your
exchange
rate
by
just
looking
up
the
two
values
here
and
that
defines
what
is
your
exchange
rate
in
this
in
this
block,
and
so
this
idea
of
uniform
price
clearing
is
very
tightly
coupled
to
the
idea
of
not
executing
trades
sequentially
one
after
the
other,
but
batching
them
together
and
executing
them
in
one
single
batch.
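A tiny hedged sketch of that lookup (hypothetical numbers and names; in the real system the price vector comes out of the batch settlement itself):

```python
# Hedged sketch: one clearing price per token per block, so any pair's
# exchange rate is just a ratio of price-vector entries.
block_prices_usd = {"ETH": 1300.0, "WBTC": 19500.0, "DAI": 1.0}  # illustrative

def exchange_rate(sell: str, buy: str) -> float:
    """Units of `buy` received per unit of `sell` in this block."""
    return block_prices_usd[sell] / block_prices_usd[buy]

print(exchange_rate("ETH", "WBTC"))   # ~0.0667 WBTC per ETH
print(exchange_rate("ETH", "DAI"))    # 1300 DAI per ETH
```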
AA
The first thing we need to do is stop people from sending raw Ethereum transactions, because of a fundamental limitation of the protocol layer today: when a user signs a raw transaction, you cannot open up this transaction, batch it together, or do anything with it.

AA
So instead we have this multi-dimensional order book of signed trade intents, which we then combine with the entirety of the on-chain liquidity that exists on Ethereum today: all the AMMs you know about, all the RFQ systems you know about. Together that creates a thick market. We saw earlier that AMMs are really responsible for the thickness, and so by adding user orders to that liquidity pool we have the first criterion, which is a thick market.
AA
In
this
case,
for
example,
the
maker
token
that
the
first
user
wanted
to
sell
would
flow
directly
to
another
user,
that
user
would
sell
their
usdc,
maybe
on
curve
to
die
and
then
sell
it
to
give
it
to
the
third
user,
and
so
we
can
in
so-called
ring
trades
mix
and
match
amms,
together
with
direct
peer-to-peer
trading
with
the
only
basically
the
only
credit,
the
only
constraint
that
everything
has
to
happen.
At
least
everything
between
those
hands
has
to
happen
at
this
uniform
price
screen
now.
AA
The
problem
is
that
this
poses
a
pretty
hard,
optimization
problem.
It's
basically
an
NPR
problem
because
we're
acting
over
multiple
Dimensions,
not
just
on
a
two-dimensional
order
book,
but
this
problem
can
be
quite
well
approximated
or
the
optimal
solution
can
be
quite
well
approximated.
If
we
just
maximize
the
total
user,
Surplus
user
Surplus
is
basically
the
price
Improvement
you
get
on
the
user's
limit
price,
so
the
user
was
willing
to
buy
ether
at
thirteen
hundred
dollars
and
you
were
delivering
it
at
12.50.
AA
You
would
have
given
fifty
dollars
of
surplus
to
that
user
and
if
we
sum
all
the
surpluses
up
together,
we
get
one
value
which
we
can
optimize
for
and
by
having
this
optimization
Criterion.
We
can
now
dispatch
this
hard
problem
to
a
network
of
what
we
originally
called
solvers.
Now
we're
starting
to
call
it
batch
Builders
we're
basically
tasked
to
try
to
find
the
most
optimal
solution
to
this
problem
and
they
could
employ
it
very
different
strategies
or
heuristics.
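A hedged sketch of that objective (simplified types and names of our own; real solver competitions also account for fees, gas, and much richer order semantics):

```python
# Hedged sketch: score a candidate settlement by total user surplus, i.e.
# price improvement over each order's limit at the uniform clearing prices.
from dataclasses import dataclass

@dataclass
class Order:
    sell: str
    buy: str
    sell_amount: float   # amount of `sell` offered
    min_buy: float       # minimum acceptable amount of `buy` (the limit)

def surplus_usd(order: Order, prices: dict[str, float]) -> float:
    bought = order.sell_amount * prices[order.sell] / prices[order.buy]
    improvement = bought - order.min_buy          # improvement over the limit
    if improvement < 0:
        raise ValueError("limit price violated; order cannot be included")
    return improvement * prices[order.buy]        # normalize surplus to USD

orders = [Order("DAI", "ETH", 13_000, 10.0)]      # pays at most $1300/ETH
prices = {"DAI": 1.0, "ETH": 1250.0}              # clearing at $1250/ETH
print(sum(surplus_usd(o, prices) for o in orders))  # 0.4 ETH of surplus = $500
# The solver whose settlement maximizes this total, subject to uniform
# prices and valid execution, wins the batch.
```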
AA
Here
we
saw
the
the
settlement
from
the
from
the
previous
slide,
but
you
could
have
a
solver
that
just
goes
and
tries
to
settle
every
single
user
trade
with
the
most
liquid
amm
on
that
pair.
But
by
virtue
of
having
this
approximate
approximation
criteria
and
optimization
Criterion,
we
can
now
rank
the
different
solutions
and
find
the
one
that
settles
the
user
trade
at
the
best
possible
price
and
then
that
solution
will
get
chosen
and
the
proposal
for
that
solution
gets
a
protocol
reward.
AA
And
so
how
does
this
work
in
practice?
So
we
said:
Council
has
been
live
for
a
year
and
a
half
this
is
the
UI
looks
very
much
similar
to
what
you
might
be
used
from
your
favorite
amm
front
end,
but
here's
an
example
of
a
batch
that
we
saw
I
think
around
20
days
ago,
where
two
users
were
trading.
In
the
same,
so
we
can
see
there's
one
high-level,
ethereum
transaction,
but
in
that
single
transaction
we
have
two
people
Trading
two
trade
intents.
AA
So
we
have
one
person
that
is
selling
the
manifold
token
and
another
person
that
is
buying
the
manifold
token,
and
the
first
thing
to
notice
here
is
that
both
people
were
executed
at
the
exact
same
clearing
price,
whereas
if
they
had
gone
through
a
more
traditional
Dex
mechanism,
one
of
them
would
have
to
go
first
and
get
a
different
price
than
the
second
one.
In
this
case,
it
was
actually
better
to
be
the
second,
because
the
prices,
the
the
trades
were
against
one
another.
So
you
want
to
First.
AA
Have
the
the
other
person
move
the
price
in
your
favor
and
then
trade,
but
basically
by
matching
those
at
a
single
uniform
clearing
price?
We
removed
all
the
games
that
the
people
in
the
batch
could
have
played
against
one
another
and
made
the
system
more
simple,
fair
and
safe.
AA
To
reason
about
the
other
thing
that
is
cool
is
that,
because
these
tokens
were
traded
exactly
in
the
opposite
direction,
which
is
what
we
call
a
coincidence
of
once
you
want
the
exact
opposite
of
what
I
have
we
actually
saved
about
forty
thousand
dollars
in
trading
volume
that
we
didn't
have
to
send
to
UNI
swap
in
sushi
swap
because
it
could
be
settled
directly
peer-to-peer.
So
here
we
have
one
person
buying
40,
000
UCC
and
one
person
selling
48,
000
usdc,
and
so
in
the
specific
example,
we
actually
not
only
got
fairer
pricing.
AA
We
also
got
a
structural
price
Improvement
that
only
batching
people
together
and
trading
people
peer-to-peer
can
accomplish
because
we
saved
so
much
money
that
didn't
have
to
go
to
the
amm.
We
saved
about
800
in
reduced
LP
fees
and
we
also
saved
about
1
500
in
reduced
price
impact,
which
stems
from
the
fact
when
you
trade
against
an
amm
you're
moving
the
price
up
and
then,
when
you
sell
against
it,
you
will
move
the
price
down
again.
AA
And
so
this
is
how
the
the
solver
competition
I
talked
about
is
looking
at
the
moment
we
have.
As
of
last
week,
we
had
12
servers
in
production.
As
of
this
week.
AA
We
have
13
solvers
in
production
that
are
competing
for
finding
the
best
solution,
so
we
actually
have
quite
a
yeah,
quite
a
quite
a
heterogeneous
landscape
of
different
different
solver
entities
and
then
there's
a
an
entire
presentation
from
our
data
engineer,
Ghent
that
was
given
at
depcoin,
which
I
highly
recommend
watching
which
compares
the
performance
of
cow
swap
in
terms
of
how
much
slippage
can
we
actually
or
yeah?
AA
So
for
the
last
couple
of
minutes,
I
want
to
talk
a
little
bit
about
proposer,
Builder,
separation
or
kind
of
the
general
direction.
AA
I
think
that
the
ecosystem
is
taking
with
regards
to
Mev
and
why
that
could
be
or
why
I
think
this
is
dangerous
and
I
want
to
start
this
by
quoting
one
of
my
favorite
columnists,
the
money
stuff
BBC
writer,
Matt
Levine,
who
is
from
time
to
time
repeating
that
well
cryptos
in
the
business
of
constantly
rediscovering
the
basic
ideas
of
financial
history
and
why
this
I
think
this
is
true
specifically
for
PBS
is
well
I'll.
Talk
about
this
in
the
next
few
slides.
AA
But
you
might
have
seen
this
graph
before,
and
this
is
basically
trying
to
resemble
the
Mev
supply
chain
going
from
a
user
that
wants
to
make
a
trade
and
the
trade
being.
Something
like
I
want
to
sell,
UCC
for
Eve,
opening
their
favorite
wallet
or
opening
their
favorite
dab,
which
converts
that
trade
intent
into
a
transaction
and
sends
that
transaction
and
PBS
to
a
network
of
block
builders.
AA
This
meme
is
from
from
John
I
have
to
give
credits.
So
how
does
a
block
Builder
actually
get
the
money
to
to
perform?
This
bidding
basically
block
Builders,
try
to
extract
as
much
Mev
from
the
transaction
order
flow.
They
they
get
and
then
pay
back
some
or
most
of
that
in
this
Max
bidding
or
to
the
block
Builder.
So
what
might
a
block
Builder
do
in
order
to
get
an
edge
in
this
game?
AA
Well,
they
might
try
to
find
deals
with
wallets
to
get
exclusive
access
to
order
flow
if
they
are
the
only
one
who
can
extract
value-
or
even
just
you
know,
pose
this
transaction
for
its
transaction
fee
into
the
block
that
they
propose.
They
have
an
advantage
over
other
block
builders,
and
so
it
could
be
that
the
buck,
the
the
flow
just
stops
here
wallets
are
starting
to
receive
an
extra
income.
That's
nice.
They
might
take
that.
That
would
be
very
terrible
for
the
user,
maybe
in
a
slightly
better
world.
AA
Actually,
in
traditional
Finance,
you
know
there
there
could
be
there's
arguments
on
on
each
side,
but
what
I
try
to
stress
here
is
that
we've
learned
from
traditional
finance
that
this
mechanism
doesn't
work
in
a
decentralized
Fashions,
Market
maker,
there's
just
a
handful
of
them.
They're
highly
regulated
without
the
SEC
users
would
be
run
over
there's.
Basically,
this
law
that
requires
Market
maker
to
give
people
a
price
Improvement
on
top
of
the
official
bid
and
ads
that
they
see
on
NASDAQ
and
New
York
Stock
Exchange.
AA
The other thing I hear a lot is this narrative that MEV maximization is great, that we all have to focus on MEV maximization: sure, we should minimize it, but we need to maximize the extraction of what's out there. I just want to pose the question of whether these two philosophies, MEV maximization and MEV minimization, can coexist, because at least in my mind MEV maximization leads to very dangerous incentives for the participants and creates an extremely hard coordination problem that MEV-minimizing protocols such as CoW Swap need to overcome.
AA
Right
now
we
are
in
a
you
know
you
could
you
could
think
of
a
Mev
reducing
protocol
or
an
Mev,
reducing
Builder
to
having
to
play
a
repetitive
prisoner's
dilemma
where,
in
the
current
status
quo,
everyone
is
extracting
Meb,
so
everyone
is
making
a
little
profit.
AA
Definitely
the
the
people
that
are
trading
because
they're
not
getting
racked,
but
even
the
people
that
are
that
are
validating
I'd
argue
that
in
this
system
can
make
much
more
money
from
transaction
fees
and
just
adoption.
But
really
this
requires
all
Builders
to
cooperate.
Otherwise,
we'll
not
get
into
this
new
equilibrium.
And
what
we
see
today
is
that,
basically,
everyone
is
focusing
on
the
top
left
corner
and
the
status
quo
is
to
fight
over
the
existing
Meb.
AA
That's
in
there
and
potentially
even
fight
against
new
entrants
that
are
trying
to
propose
Mev,
reducing
protocols
because,
basically,
that's
eating
the
searcher's
lunch.
And
so
my
call
to
action
here
at
the
end
is:
let's
not
focus
on
Mev,
maximization
and
split
the
pie.
That's
out
there,
because
it's
tiny
compared
to
what
we
want
to
unlock.
Let's
work
together
and
grow
the
pie
and
try
to
get
ethereum
to
the
next
order
of
magnitude
of
adoption,
and
with
that.
Thank
you
very
much.
AB
There
we
go
all
right,
hey
great
talk
by
the
way,
so
I
have
a
kind
of
a
quick
question.
Right
like
what
you've
done
is
you've
taken
a
heart
problem.
You
have
made
it
an
NP,
hard
problem.
You
still,
you
have
an
NP,
hard
problem,
which
is
like
this
very
complicated
auction
which
brings
a
bit
of
a
you
know,
difficult
situation,
because
NP
hard
problems,
kind
of
by
definition,
are
very
sensitive
to
like
people,
kind
of
manipulating
things
and
adding
Solutions
and
I.
Think
fire
call
correctly.
Bancor
had
some
interesting
issues
with
this.
AB
So
why
not?
For
example,
you
know
one
simple
condition
to
have
one
price
for
many
assets
or
sorry,
one
price
for
many
dexes
with
or
for
one
for
specific
assets
is
just
to
do
optimal.
Routing
and
optimal.
Routing
is
a
convex
problem.
You
can
do
it
efficiently,
it's
very
simple,
to
solve.
Why
not
just
do
that.
Instead,
yeah.
AA
So
I
think
just
having
one
price
per
token
pair
per
block
is
I.
Think
what
you're
suggesting
that
you
know
would
already
be
a
huge
Improvement
to
the
status
quo
and
it
would
be
very
computationally
feasible
to
solve.
I
totally
agree
with
that.
I
think.
The
reason
why
we're
aiming
for
a
multi-dimensional
batch
auctions
is
to
make
the
life
of
our
solver
team
harder,
of
course,
but
also
to
because
we
we
know
that
or
we've
seen
that
the
token
space
is
just
absolutely
fragmented
right.
AA
But
we
would
still
have
this
implicit
fragmentation
and
this
implicit
Arbitrage,
and
so
we
think
the
most
efficient
way
of
like
you
know,
of
arbitraging
the
if
usdc
to
die,
is
one
one
and
then
eth
die
is
1300
and
ethucc
is
1200.
We
still
have
like
some
some
imbalance
on
that
right
and
so
I
mean
I,
agree
with
you.
It's
it's
a
good
first
step,
maybe,
but
we
kind
of
already
aimed
for
the
for
the
second.
AA
You
know
the
Holy
Grail
is
re-fragmenting
the
re-aggregating,
the
fragmented
liquidity
space
that
we
have
on
ethereum
and
therefore
we
went
multi-dimensional.
AC
Great
talk,
I
think,
like
one
question
on
kind
of
like
seeing
liquidity
effect,
went
between
l1s
and
l2s.
How
do
you
see
it
kind
of
like
like?
Is
there
a
way
on
how
color
swap
could
settle
like
cross
layer
of
one
like
cross
layer,
twos
because
I
think
like
yeah,
it's
probably
like
the
next
problem
in,
like
every
view.
AA
Yeah
I
mean
yeah.
We
are
definitely
I.
Think
one
one
problem
that
we're
trying
to
solve,
rather
sooner
than
later,
is
to
just
access
liquidity
on
another
chain
through
the
solver
abstraction.
So
right
now,
if
you're,
for
example,
trading
on
polygon,
a
large
amount
you
have
to
potentially
bridge
to
mainnet,
create
their
Bridge
back
and
that's
kind
of
annoying
from
a
user
perspective,
because
we
already
have
this
abstraction
of
solvers.
That
can
just
happen
under
the
hood
right.
AA
Think
it's
an
it's
a
very
interesting
research
problem
which
we've
where
we've
touched
the
surface
on,
but
not
gone,
super
deep,
yet
synchronicity
between
chains,
maybe
you
have
to
do
some
locking
of
funds
like
there's.
Actually
Mohammed
is
sitting
in
the
crowd,
there's
actually
a
hackathon
project
at
Amsterdam.
That
looked
a
little
bit
into
this,
but
yeah,
it's
very
early
days
for
that,
so
so
nothing
that
you
could
build
on
right
right
now,.
AD
All
right,
hi
everyone,
my
name,
is
Theo
I,
think
I'm
in
steel.
Proof
of
optimality
in
this
talk
from
Felix's
talk,
because
I
really
loved
that.
But
today
we
are
not
going
to
talk
about
D5.
We're
going
to
talk
about
multi-dimensional
fee
markets
or
kind
of
the
fancy
term
is
how
to
do
Dynamic
pricing
for
non-fungible
resources,
and
this
is
Joint
work
with
Alex,
Evans,
True
and
chitra,
and
guillermar
and
jaros
all
right.
AD
So
the
first
thing
I
hope
to
convince
you
of
is
that
fee
markets
with
the
joint
unit
of
account
like
gas,
are
actually
pretty
inefficient,
and
what
we're
going
to
slowly
work
towards
in
this
talk
is
a
framework
to
optimally
set
multi-dimensional
fees.
So
first
part
of
this
is
like:
why
are
transactions
so
expensive?
Why
is
having
one
market,
not
necessarily
something
that
you
want
to
do,
and
first
a
little
bit
of
an
aside
one
of
the
things
that
we
actually
do
sometimes
see
with
one-dimensional
fee
markets?
AD
So
these
have
been
termed
resource,
exhaustion,
exhaustion,
attacks
in
the
literature-
and
there
is
a
famous
one
back
in
2016
that
took
down
the
ethereum
network
or
essentially
made
it
unusable,
did
not
take
it
down,
but
made
it
very
hard
to
use
for
quite
some
time-
and
this
was
essentially
due
to
a
disk
read
mispricing
and
of
course
this
was
patched
in
a
subsequent
EIP.
But
if
we
had
a
multi-dimensional
rather
than
a
single
dimensional
Market,
we
might
have
been
able
to
adjust
prices
such
that
there
was
no
need
to
actually
reprice
the
op
codes.
AD
After
the
fact,
however,
what
we're
going
to
concentrate
a
little
bit
more
on
today
is
throughput.
Is
why
having
a
single
dimensional
Market
is
actually
bad
from
kind
of
a
network
designer's
perspective,
and
so
this
is
a
very,
very
stylized
example.
That
is
not
at
all.
Like
close
to
in
practice,
but
I
hope
it
illustrates
the
idea,
let's
assume,
that
we
have
a
bunch
of
users
that
are
submitting
transactions,
some
only
consume
CPU,
some
only
consume
bandwidth.
AD
The
CPU
ones
have
some
utility
of
four,
so
that's
kind
of
how
much
utility
they
give
to
the
user
that
submits
it
and
the
bandwidth
ones
have
utility
of
two.
And
let's
imagine
we
have
a
block.
Each
of
these
transactions
cost
one
gas.
The
gas
price
is
three
and
the
block
can
fit
four
CPU
transactions
and
four
bandwidth
transactions.
Well
in
a
single
dimensional
Market.
What
happens?
We
fill
this
up
with
CPU
transactions,
but
actually
we
have
a
lot
of
block
space.
AD
That's
not
used
because
these
bandwidth
transactions
aren't
high
enough
utility,
and
something
like
this
can
happen
say
with
like
an
nft
mint.
However,
if
we
have
a
2d
market
and
say
CPU
has
that
cost
of
three,
but
bandwidth
has
a
cost
of
one.
We
would
actually
end
up
filling
up
all
the
CPU
transactions
in
this
block
and
filling
up
all
the
bandwidth
transactions
as
well.
Like
I
said.
This
is
a
very
stylized
example,
but
it
does
illustrate
when
you
price
things
separately
or
in
other
words,
if
resources
are
orthogonal,
they
should
be
priced
separately.
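Spelling out the stylized numbers (our arithmetic, following the talk's setup of utilities 4 and 2 and per-resource prices 3 and 1):

```latex
% One-dimensional market at gas price 3: only CPU transactions (utility 4 > 3)
% are worth submitting; bandwidth transactions (utility 2 < 3) stay out.
W_{\mathrm{1D}} = 4 \times 4 = 16.
% Two-dimensional market with p_{\mathrm{CPU}} = 3,\; p_{\mathrm{BW}} = 1:
% both kinds clear, and the same block carries strictly more welfare,
W_{\mathrm{2D}} = 4 \times 4 + 4 \times 2 = 24.
```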
AD
However,
that
of
course
needs
we
need
a
mechanism
for
Price
Discovery
to
do
this.
So
how
do
we
decide
that
you
know?
Cpu
is
three
and
bandwidth
is
one
or
what
are
those
prices?
What
do
those
prices
even
mean?
AD
How
do
we
get
to
those
prices
and
I'm
going
to
do
a
little
bit
of
an
aside
in
that
I've
been
throwing
around
this
term
resource
a
lot
and
there's
a
question
of
like
what
exactly
do
I
mean
by
that
the
working
definition
we're
going
to
use
for
this
talk
is
anything
that
can
be
metered,
so
a
resource
is
anything
that
I
can
say
how
much
of
this
thing
a
transaction
uses.
So,
for
example,
one
thing
right
now:
roll
up
data.
AD
However,
we
could
also
talk
about
like
kind
of
big
resources
like
compute
memory
and
storage.
We
could
go
down
to
the
op
code
level
and
think
of
each
individual
opcode
as
a
resource.
However,
we
could
also
say
you
know:
sequences
of
op
codes
are
resources,
for
instance,
if
you're
calling
like
a
hot
storage
slot
versus
a
cold
storage
slot,
that's
going
to
be
cheaper.
AD
So
maybe,
if
we
have
several
store
job
codes,
all
in
a
row,
that's
actually
a
different
resource
than
calling
these
one
by
one.
AD
Furthermore,
if
we're
running
full
nodes
on
multi-core
machines,
maybe
compute,
unlike
node
or
on
core
one-
is
a
different
resource
than
compute
on
core
2.
and
and
so
on.
You
can
imagine
this
is
a
very
general
construction.
Resources
can
be
very
dependent
on
each
other,
and
so
as
long
as
they
can
be
metered
or
we
can
say
how
much
of
a
trend
of
a
resource
a
transaction
uses
that
fits
into
our
framework
so
to
formalize
this,
and
this
is
kind
of
where
we
get
a
little
bit
into
the
math.
AD
We're
going
to
say
a
transaction
J
consumes
some
Vector
of
resources
and
there's
so
M
resources
and
the
Specter
is
8j.
So
essentially,
the
ith
element
of
that
Vector
is
going
to
be
the
amount
of
resource
I
consumed
by
this
transaction
J,
and
now
that
we're
starting
to
build
blocks
we're
going
to
denote
this
Vector
X.
That,
essentially,
is
this
0
1
vector
and
we
have
n
transactions.
Xj
is
going
to
be
one
if
that
transaction
is
included
in
a
block
and
0
otherwise.
AD
So
this
allows
us
to
very
easily
write
kind
of
the
quantity
of
resources
that's
consumed
by
a
given
block
and
we're
going
to
denote
that.
Why-
and
all
this
is,
is
summing
up
the
vector
of
resources,
that's
consumed
by
a
particular
transaction
times,
XJ
and
XJ.
You
know
if
it's
zero,
then
we're
not
going
to
include
this
in
the
sum.
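In symbols (reconstructed from the spoken definitions), with A the m-by-n matrix whose j-th column is a_j:

```latex
a_j \in \mathbf{R}^m_+ \;\; \text{(resources used by transaction } j\text{)}, \qquad
x \in \{0,1\}^n \;\; \text{(inclusion vector)},
\qquad
y \;=\; Ax \;=\; \sum_{j=1}^{n} x_j\, a_j \;\; \text{(block's total resource usage)}.
```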
AD
All
right
so
now
that
we
kind
of
have
a
notion
of
a
resource
and
what
each
resource
is.
We
can
talk
about
things
about
constraining
resources,
targets
for
resources
and
charging
for
each
individual
resource,
so
we're
going
to
first
Define
a
resource
Target,
and
that's
going
to
be
this
B
Star
and
then
the
deviation
of
the
target
based
on
what
I
introduced
earlier
is
just
ax.
Minus
B.
Remember
ax
is
the
quantity
of
or
the
resource
utilization
of
a
particular
block,
and
in
ethereum
this
is
one
dimensional.
AD
We
also
want
sometimes
a
resource
limit
that
says
how
much
or
after
a
certain
point
a
block
is
invalid,
and
then
we
can
have
transactions
satisfy
something
like
this,
where
ax
has
to
be
less
than
or
equal
to
B
in
ethereum,
this
is
30
million
gas,
so
again
in
ethereum.
This
is
all
one-dimensional,
however,
we're
extending
this
to
a
multi-dimensional
case.
AD
Finally,
this
allows
us
to
talk
about
prices
for
each
resource,
so
we're
going
to
have
some
Vector
P.
This
is
going
to
be
an
M
vector
and
Pi
is
essentially
going
to
be
the
resource
price
of
or
the
price
of
resource
I,
so
that
allows
us
to
very
easily
write
how
much
a
transaction
costs,
which
is
just
the
dot
product
of
its
resource,
vector
and
P,
and
then
this
is
split
up
into
the
sum
here.
AD
One
thing
here
is:
when
I
talk
about
prices,
this
is
going
to
be
the
amount
burned
by
the
network
or
essentially
the
price
that
the
network
charges
for
a
given
resource
so
think
like
EIP
1559,
it's
not
actually
going
to
be
the
price
that
users
pay
say
validators
for
inclusion
in
the
block,
so
nothing
about
tips
here.
This
is
just
going
to
be
purely
the
amount.
That's
burned
all
right,
so
we
set
up
all
the
math,
which
is
great,
but
we
still
have
to
go
back
and
say
well.
AD
how prices should behave. If we're over the target utilization of a resource, we want its price to increase, because we want to make it more expensive so people decrease their usage; if we're under, vice versa. A number of things have been proposed to this end; one proposal from the Ethereum research forums back in January was a particular price update rule, and you can go through it and see that it does satisfy these properties that we want.
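As a hedged sketch of what such a rule can look like (an illustrative, EIP-1559-flavored multiplicative update of our own; the actual ethresear.ch proposal's exact form differs):

```python
# Hedged sketch: per-resource multiplicative price update. Each resource's
# price moves with its own deviation from target, independently of the others.
import math

def update_prices(prices, usage, target, eta=0.125):
    """prices/usage/target: per-resource lists; eta: max step (like 1559's 1/8)."""
    return [
        p * math.exp(eta * (y - b) / b)   # over target: price up; under: down
        for p, y, b in zip(prices, usage, target)
    ]

prices = [3.0, 1.0]           # [CPU, bandwidth] from the stylized example
target = [4.0, 4.0]           # target units per block
print(update_prices(prices, [8.0, 2.0], target))
# CPU ran 2x over target, so its price rises; bandwidth was under, so it falls.
```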
AD
However,
I
could
write
down
a
bunch
of
other
price
update
rules
that
also
satisfy
these
properties.
So
this
kind
of
begs
the
question:
is
this
a
good
update
rule
or
like
what
is
this
update
rule
actually
doing?
Are
there
other
update
rules
that
are
better
or
have
different
Behavior?
How
do
we
go
about
analyzing?
AD
This
and
kind
of
the
punch
line
of
this
talk
is
that
one,
the
all
these
update
rules
are
actually
implicitly
solving
an
optimization
problem
and
a
specific
choice
of
the
objective
which
you
can
think
of
is
how
the
network
designer
wants
the
network
to
perform.
Of
that
optimization
problem
is
going
to
then
give
you
a
price
update
rule.
So
this
means
essentially
what
I
kind
of
want
to
convey
is
the
a
good
way
to
think
about
price
update
rules
is
not
like.
Oh,
how
do
I
design
the
best
price
update
rule
it's.
AD
what do I actually want the network behavior to be, what is my objective, and then from there we'll show how to get to the price update rule. All right, so this brings us to what we call the resource allocation problem, and the setting for now is that we pretend the network designer is omniscient and gets to choose all the transactions in each block. I know this is entirely unrealistic, or not even unrealistic, it's just absolutely false; however, it's going to allow us to build up a very useful mathematical problem.
AD
There are a few very reasonable, or potentially kind of silly, loss functions we could choose. One is this: the loss of y (remember, y is the resource utilization of a given block) is zero if we're exactly at our target, and infinity otherwise. Another thing we could do is say that we don't actually care if we're under the target, only if we're over it: the loss is zero if we're under the target, and infinity

AD
otherwise. Again, these might not be what you actually want in practice; you may want something where, if you're a little bit off the target, you're not that unhappy, and the loss then grows, say, quadratically as the deviation increases. But the whole point is that we only need something that tells us the unhappiness with the current resource utilization.
AD
Then there is the set of allowable blocks: for example, a block is invalid if it's over 30 million gas. However, there are also a lot of complex interactions among transactions; for instance, if a lot of searchers are all trying to get a specific liquidation, only one of them can get it, and this too can be encoded in the set. So this is a very general object that just says which combinations of transactions are okay. Now we play the first mathematical trick here, and this isn't that important,
AD
But
it's
essentially
instead
of
considering
s,
we
consider
What's
called
the
convex
Hall
of
us.
This
just
means
that,
instead
of
forcing
X
to
be
zero
or
one,
we
allow
X
to
be
a
fractional
value
between
0
and
1..
So
the
way
to
think
about
this
and
the
way
this
kind
of
makes
sense,
is
from
the
network
designer's
perspective,
you
care
more
about
the
average
case
or
the
average
kind
of
usage
of
the
network,
not
one
particular
block
so
say.
AD
If
XJ
is
a
fraction,
that
would
just
say
that
we
include
that
transaction
after
roughly
1
over
XJ
blocks
and
we'll
see
that
we
can
actually
remove
this
constraint
in
a
little
bit.
But
again,
this
is
just
to
set
up
kind
of
the
mathematical
formalism.
So
this
doesn't
really
matter.
This
won't
really
matter
in
a
bit,
but
I
just
want
to
be
complete
all
right.
AD
The
final
thing
that
we
need
is
we
want
to
know
how
much
utility
a
given
transaction
gives
to
the
Joint
user
and
validator
set
regroup
these
two,
these
two
parties,
together
into
what
we
call
the
transaction
producers
and
the
reason
we
do
this
is
because
we
don't
want
to
deal
with
kind
of
the
game,
theoretic
analysis
of
looking
at
bids
and
auctions,
and
that
type
of
thing.
So
we
assume
that
kind
of
these.
AD
This
group
of
people
is
together,
they're
submitting
transactions
and
those
have
a
specific
type
of
utility
you'll
see
that
our
mechanism
actually
doesn't
matter.
It
doesn't
matter
that
we
group
these
things
into
the
transaction
producers,
but
this
does
present
an
area
for
future
work
and
I'd
like
to
point
out
that
we
almost
never
know
Q
in
practice,
it's
more
or
less
impossible
to
know
that,
however,
we
will
see
that
this
actually
doesn't
matter
once
we
write
out
this
problem.
AD
Okay, so that was a lot of setup, I'm sorry, but this is where it gets us. What is the resource allocation problem? It is to maximize the utility of the transactions minus the loss incurred by the network, subject to the resource utilization being defined by the included transactions and the transactions being allowable. This is the ideal, best-case scenario of what we would actually like to solve, for all the reasons I mentioned earlier.
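Written out, and hedging that the exact symbols may differ from the paper's, the resource allocation problem described here is:

```latex
\begin{array}{ll}
\text{maximize}   & q(x) - \ell(y) \\
\text{subject to} & y = Ax, \qquad x \in \mathbf{conv}(S),
\end{array}
```

with variables x (transaction inclusion) and y (resource utilization), where A_{ij} is the amount of resource i consumed by transaction j, q is the transaction producers' utility, and ℓ is the network designer's loss.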
AD
You can't know the utilities, you can't partially include transactions: all of these issues. However, we'll see that we can pull from a branch of math called convex analysis, and specifically duality theory, to take this problem and turn it into a way to set prices so that the validators and users, the transaction producers, implicitly solve that optimization problem without the network designer needing to really do anything beyond updating prices in a very simple way. All right. The 30-second version of duality theory is that it gives us a way to relax constraints into penalties.
AD
So I can say that you actually don't have to satisfy this constraint; you just have to pay for every unit of violation. This allows us to take y, which is what the network designer cares about (that's the throughput), and decouple it from the transactions that are actually included in the block. There's just going to be a penalty for these two things not matching exactly.
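Concretely, again with assumed notation, relaxing the constraint y = Ax with a price vector p gives a Lagrangian of the form:

```latex
L(x, y, p) = q(x) - \ell(y) + p^T (y - Ax),
```

so the transaction producers pay p^T Ax for the resources their transactions consume, and any mismatch between the designer's utilization variable y and the realized utilization Ax is charged at the prices p rather than forbidden outright.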
AD
However, what strong duality tells us is that if we correctly set the penalty, this penalty being the prices, then the dual problem is going to be equivalent to the original, the original being the problem we actually want to solve: the two utilizations are going to be equal, and the two problems are going to have the same optimal value.
AD
So again, this tells us that if we correctly set the prices, we solve this problem without having to know q, without caring about fractional transactions, without any of the issues I mentioned. All right, so turn the crank of the math a little bit and you can decompose this dual problem. The dual problem, which is to minimize this thing (sorry, minimize, not maximize), splits into a network problem and a block building problem, and p, the price vector, is the dual variable.
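Under the assumptions above, the decomposition has this shape (a sketch; see the paper for the precise statement):

```latex
g(p)
= \underbrace{\sup_{y}\left(p^T y - \ell(y)\right)}_{\text{network problem: } \ell^*(p)}
+ \underbrace{\sup_{x \in \mathbf{conv}(S)}\left(q(x) - p^T A x\right)}_{\text{block building problem}},
```

and the dual problem is to minimize g(p) over the prices p.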
AD
This first term here is actually easy to evaluate; you'll probably just have to trust me on this. It's an object called the Fenchel conjugate, something we have in closed form, which means this part can essentially be run on chain.
AD
The second term actually has the same optimal value if we just use S instead of the convex hull of S, and it is exactly the problem solved by the block producers. So what does this mean? The network never has to solve this problem. It can just observe, from the previous block, which transactions were actually included, and then it gets the solution for free from the decentralized block builders.
AD
So what do we get at optimality? Well, assume the prices are set correctly (call that p-star) and the block builders use those prices to include the optimal transactions. Then the resource utilization of the network is exactly equal to that of the block. Again, that's back to what I was saying earlier: the constraint does hold at optimality, and y satisfies a condition we can look at in a little more detail.
AD
What this means is that the prices minimizing g (so, the solution of the dual problem) charge the transaction producers exactly the marginal cost faced by the network. If you set the prices optimally for whatever loss function you define, the marginal cost of using more of a resource is exactly the price you charge for it.
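For a differentiable loss this is the familiar first-order condition, sketched here with the notation assumed above:

```latex
p^\star = \nabla \ell(y^\star),
```

i.e., each resource's optimal price equals the marginal loss the network suffers from one more unit of that resource.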
AD
Furthermore, these prices are the ones that incentivize the transaction producers to include the transactions that maximize the welfare generated minus the loss incurred by the network. That's back to the original optimization problem we saw: with correctly set prices, you solve that problem, and the network designer doesn't need to know the utilities or anything like that. All right. Okay, that's great, but I still haven't told you how to choose prices; I've just talked around it for a while. So how do you actually do this?
AD
So then all we do is apply our favorite optimization method, like gradient descent, and update the prices using the gradient, as sketched below. There are a lot of other optimization methods you could choose here; they will have different convergence behavior and different trade-offs between, say, convergence and complexity. This is all stuff we leave for future work.
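As a sketch, assuming the decomposition above: the gradient of the dual is grad g(p) = grad l*(p) - A x*(p), and A x*(p) is just the observed utilization of the last block, so a gradient-descent step with step size eta looks like:

```latex
p^{k+1} = p^k - \eta \left( \nabla \ell^*(p^k) - y^k \right),
```

where y^k is the resource utilization of block k. For the hard-target loss, the conjugate is l*(p) = p^T y_target, so this reduces to p^{k+1} = p^k + eta (y^k - y_target), an EIP-1559-like rule.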
Just to go through simple examples of what I showed earlier, let's say you had the first loss function; it looks kind of silly.
AD
If you use the one-sided loss, where you're only unhappy if you're over the target utilization, you get the same update, except you make the prices non-negative: if any of them would go negative, you zero them out. You can see why this makes sense. Under the first loss function we're also unhappy when we're under the target utilization, so we might actually want negative prices, to incentivize people to use more of a particular resource.
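Here is a minimal Python sketch of both updates, under the same assumptions as the formula above; the resource names and numbers are illustrative only.

```python
import numpy as np

def update_prices(p, y_obs, y_target, eta, one_sided=False):
    # Gradient step on the dual: for the hard-target loss the conjugate's
    # gradient is the target itself, so the step is eta * (observed - target).
    p_new = p + eta * (y_obs - y_target)
    if one_sided:
        # Under-utilization is free under the one-sided loss,
        # so prices are clipped to stay non-negative.
        p_new = np.maximum(p_new, 0.0)
    return p_new

# Two resources, say gas and calldata; over target on gas, under on calldata.
p = np.array([10.0, 5.0])
y_obs = np.array([20e6, 0.4e6])
y_target = np.array([15e6, 0.5e6])
print(update_prices(p, y_obs, y_target, eta=1e-6, one_sided=True))
```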
AD
Here, though, we don't care if we're under-utilizing relative to our target, so we'll never have negative prices. Again, this is to get to the point that the network designer chooses the loss function, the loss function encodes exactly your unhappiness with a particular resource utilization, and once you've done that, the update rules that minimize that loss function fall out of it. So it comes down to this: instead of choosing what the update rule is, you're choosing what the loss function is. All right.
AD
So that was a lot of math. We did some numerical work, with very simple examples, to see how this would behave. Here we have the steady-state behavior of a network with only one type of transaction being submitted. You can think of it as pretty analogous to the example I showed at the beginning of the talk. You can see that the one-dimensional prices achieve about 10 transactions per block, and the multi-dimensional prices are able to eke out maybe two to three more transactions per block, even though it's the same type of transaction going through. So even in this simplest case you get some improvement from using multi-dimensional prices. I'm not going to go through all the details of how we set this up, but I would encourage you to look at the paper for that.
AD
However, where this really shines is when we have a distribution shift. In this example we have the type-1 transactions you saw earlier, but we add a type-2 transaction with a much different resource profile. You can think of it like an NFT mint or something, and these arrive at about block 10.
AD
The multi-dimensional prices are the purple and blue, and you can see there's this nice spike here, where we do fewer type-1 transactions and more of type 2; then, once we've gone through all of these, we return to zero and clear some backlog. You can also see the multi-dimensional prices on the right here.
AD
Once we hit block 10 and the distribution shifts a lot, this light-blue price goes down and the other price goes up quite a bit, and then they return to steady state a little bit afterwards. Again, the uniform prices are still able to adjust to a certain extent, but they give less throughput overall. And back to what I was saying earlier about having some target utilization you want to hit: you can see, in the second example, that the dashed lines are the targets, and the multi-dimensional prices, on the top, deviate from the target for a short period to handle the big spike in transactions.
AD
So there's a lot of future work to be done here. One thing we didn't do is super extensive numerical experiments, and you can imagine that using real data might lead to valuable insights that let you tweak the framework in specific ways. In addition, as I mentioned, we grouped the users and validators into the transaction-producer set, and there is work to be done on the dynamical behavior: essentially, how do we make this strategy-proof?
AD
We just talked about, essentially, the amount burned by the network through our prices; how do we make all of this into an entire system? Also, I mentioned earlier that while I chose gradient descent for the update rule here, there are a lot of other things you could do.
AD
You could actually choose the update rule in a way that gets you something very similar to what was proposed on the Ethereum research forums back in January. And there's a question of which update rules are good, which are the most useful, and how you trade off between, say, convergence behavior and complexity: how quickly your prices can adjust versus how much work you're doing on chain. Then, of course, there's the system designer side.
AD
If you're actually trying to use this in practice, there are a lot of questions that this general framework doesn't totally answer. For example: what should the actual resources be in a given system? How do you trade off the complexity of pricing every opcode, every sequence of opcodes, and so on, against the ease of use of these things? And then, of course, how do you determine a loss function for the desired performance characteristics?
AD
Again, the very important point here is that system designers should be thinking about these questions, but should not necessarily be thinking about exactly how to design the update rule for prices, because in this framework, if you think about these questions, the update rule falls out quite naturally. All right. I encourage you to check out the paper, which has a lot more; it's something like 38 pages and goes through this entire thing in excruciating detail. Happy to take any questions. Thank you.
H
Yeah, seeing as in your model you're willing to give a different cost to different opcodes, resources, and everything, I think something that could be interesting to look at is the ordering of transactions itself. If I have different transactions contending for a specific piece of storage, then, generally speaking, touching a storage slot earlier has much more potential value, because it gets settled first.
H
What would be the impact of costing storage access differently depending on where you are in the block? And generally speaking, in terms of resource utilization, anything that happened earlier in the chain is more costly for the network as a whole, because you need to store it for longer. I don't think your framework is straight-up compatible with this kind of costing, because it's missing one dimension in the vector of costs. Maybe I'm wrong.
AD
Yeah, that's actually a great question. I think it's compatible with the first but not necessarily the second; the second one, the multi-block one, you could probably put into it, but it would be a little bit harder. For a single block, though, you could actually view that set S as saying this transaction goes before this one, or this transaction goes after that one, and then perhaps, if you're the second transaction in the block, you're actually using a different resource: you're using, say, the second read.
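One hypothetical way to encode that idea in this framework, treating the k-th access to a slot within a block as its own resource, might look like this (entirely illustrative; not from the talk or the paper):

```python
import numpy as np

# Rows are resources; the same storage access maps to a different resource
# depending on its position in the block, so ordering can carry its own price.
resources = ["first_read_slot_X", "second_read_slot_X", "gas"]

# Columns are two transactions that both touch slot X.
A = np.array([
    [1.0, 0.0],    # tx0 gets the first read of slot X
    [0.0, 1.0],    # tx1 gets the second read
    [21e3, 21e3],  # both pay base gas
])

p = np.array([50.0, 10.0, 1e-3])  # the earlier read can be priced higher
print(p @ A)  # per-transaction resource fees
```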
H
AD
I think there are probably some people in this room who could answer that better than I can, but I think we've gotten a lot of interest from different protocols, roll-ups and so on, that are interested in this, and from the Ethereum research team as well.
AD
I can't speak to development timelines, though, or when this stuff would land. Like I said, there's definitely quite a bit of future work that has to go into making this production-ready, and I imagine that newer chains, which maybe don't have to pressure-test their changes quite as much, would probably adopt something like this before Ethereum would.
AE
W
So, in the interest of keeping this a convex optimization problem, are there any limitations this puts on how we can construct different parts of the problem, such as the loss landscape? Or have I missed something? And is there a chance we land in a local optimum rather than a global optimum here?
AD
Also a good question. So the loss function has to be convex; that's one immediate restriction you get from convexity. And you can imagine that maybe there are two states you want to run in: sometimes you want to hit one target, sometimes another. If your loss landscape looks like that, it isn't convex. So there definitely are limitations. The other thing here is the following.
AD
The resource part is very general and, to the earlier question, you can have resources that are dependent, so one transaction can depend on another. However, we encode all of this in an additive, linear way, and that's probably not the most efficient thing to do, for the reasons I talked about earlier.
AD
You get this exploding complexity as you do that. If you don't do it, though, you end up in the non-convex world: it might be a more succinct, lower-complexity way of describing what you want, but you won't actually be able to solve it. And this entire framework does rely on strong duality.
AD
Strong duality is something you mostly only get in convex optimization problems; it's very rare in non-convex problems. And that's what allows us to look at the prices, which are the dual of the problem, instead of looking at which transactions to include: I look at how to set prices. But that's a great question. Thank you.