From YouTube: Devcon VI Bogotá | The cold forest stage - Day 1
Description
Official livestream from Devcon VI Bogotá.
For a decentralized version of the stream, visit: https://live.devcon.org
Devcon is an intensive introduction for new Ethereum explorers, a global family reunion for those already a part of our ecosystem, and a source of energy and creativity for all.
Agenda 👉 https://devcon.org/
Follow us on Twitter 👉 https://twitter.com/EFDevcon
Talking about the EP case, zero knowledge of the KPS... so yeah, there's a lot of cool talks in store. But since we are already running a little bit late with our first talk, I'm not going to use too much time talking about what we have in store, but dive right in and announce our first speaker, which is Piper, a long-time core contributor and core developer to the Ethereum protocol. Today he will talk about the Portal Network, which is all about lightweight access to the Ethereum protocol. He will explain what this is all about.
All right, you guys can hear me? I can see my slides. Good to see all of you today. I'm Piper, as she said, and I'm here to talk to you guys about the Portal Network. So let's just get right into it. I work for the Ethereum Foundation, and I already got introduced, so: this is about actually, finally, bringing lightweight decentralized access to the protocol. And yes, this has been kind of a long-term project to get us to where we're at.
Here we go. So let's dive in at a high level. What the Portal Network is, for me: it's a giant white whale that I've been hunting for a long time, and hopefully it doesn't eat me in the end. But the Portal Network is five new decentralized, special-purpose storage networks that serve all of the data that is necessary for interacting with the Ethereum protocol. And it has been a long road to get to here.
We spent time way back trying to build lightweight clients on the existing networks, and where we ended up was that the existing networks don't give us what we need to actually deliver lightweight protocol access. Thus we have these five special-purpose storage networks that we are building out to serve this data and to essentially realize this dream of lightweight access to the protocol.
The project has these high-level design goals that informed what we needed to build and how it needed to get built. One of the main things is that the Portal Network is really user-focused. All of the clients that you hear about today, Lighthouse and all of that stuff, are infrastructure for the protocol, and those clients are built with the protocol in mind, not the user-facing stuff.
One way to look at this is that, if you want to interact with the Ethereum protocol today, you have two choices: the upper right or the lower left hand corner of this graph. You can run a full node. They're very heavy, and they're also awesomely decentralized. On our network there's a number of choices for what to run, but in general, when we're talking execution layer, you're talking about very heavy pieces of software. We'll get into the details of this in a minute.
So anyways, you've got these lightweight options, but they're also centralized, and they can do things like correlating your IP address with the transaction you send, or selling your data, and things like that. There are two options at the far opposite ends of the spectrum, and we want to build this thing in the upper left hand corner: this kind of adorable pink smart car that is both lightweight and decentralized, which supposedly we care about.
This brings us to this lightweight concept. Like I said, Ethereum clients are heavy, and we need a network that allows lightweight devices to participate. Ethereum clients are heavy today because they have to do a lot of things: EVM execution is CPU intensive, running the transaction pool is CPU intensive, and there are gigabytes upon gigabytes of history and state data and things like that that they need to deal with.
This means that running a traditional Ethereum client is an inherently heavy thing, and you generally can't do it on things like Raspberry Pis or phones. Over here on the left we've got this nice little strong guy who can hold it all up; that's your traditional execution layer client. Our goal is building out a network that lets you spread things out, that takes all of the load for all of this data and distributes it around all of the participants of the network in a nice, even way.
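The "distribute the load around all participants" idea can be pictured with a toy distance-based assignment of content to nodes, in the spirit of a Kademlia-style DHT. This is a conceptual sketch only, not the Portal Network's actual routing; the hashing scheme, node names, and replica count are all invented for illustration.

```python
import hashlib

def node_id(name: str) -> int:
    # Hypothetical: derive a 256-bit node ID from a node name.
    return int.from_bytes(hashlib.sha256(name.encode()).digest(), "big")

def content_id(key: bytes) -> int:
    # Content is addressed by the hash of its key.
    return int.from_bytes(hashlib.sha256(key).digest(), "big")

def xor_distance(a: int, b: int) -> int:
    # Kademlia-style distance metric between IDs.
    return a ^ b

def assigned_nodes(key: bytes, nodes: list[str], replicas: int = 2) -> list[str]:
    # Each piece of content lives on the `replicas` nodes whose IDs are
    # closest to it, so adding nodes thins out the per-node load.
    cid = content_id(key)
    return sorted(nodes, key=lambda n: xor_distance(node_id(n), cid))[:replicas]

nodes = [f"node-{i}" for i in range(10)]
print(assigned_nodes(b"header:12345", nodes))
```

With ten nodes, each key lands on only two of them, so doubling the node count roughly halves what any one node has to store, which is the "more nodes makes it more powerful" property described later in the talk.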
The other thing that we focused on is removing some of these height restrictions, and by height restrictions I mean essentially hardware restrictions that keep you from joining the network. This is one of the things that blocked us from building a lightweight client years ago: you've got these sort of "you must be this tall to ride" things.
As a participant in the devp2p network, the network that supports execution layer clients, you can't be part of it unless you have all of the state, all of the history, enough processing power to process every block, and enough processing power to run the transaction pool. If you aren't tall enough, you're not allowed into that network. We focused on a different model.
Traditional clients have bad UX in terms of the user-facing stuff, and that's because when you start one up, you've got hours or days to wait for it to sync, and if you go offline for some period of time, you often have additional time to catch back up to the tip of the chain. These sync times make the UX for end-user interactions basically unbearable.
The other piece that we needed was something that was scalable, and we're not talking about sharding scaling or transactions-per-second scaling. This is the number of network participants: having potentially millions of nodes be part of this network.
Some of the past work towards Ethereum light clients is LES, the Light Ethereum Subprotocol, and this has been around for five years, maybe, and it has never really delivered on this goal. The main reason is that it exists in this client-server architecture.
LES nodes on the network are dependent on full nodes serving them the data, and what happens over time is that a full node who's serving this data ends up getting just assaulted by all of these LES nodes constantly asking it for information, and they're expensive requests that these nodes are making. It hasn't turned out super well: LES has not delivered reliable light protocol access, and the main reason is this imbalance in the client-server setup, that there aren't incentives to run LES servers. Running an LES server just costs you something, and so the ones that are out there are being run out of the goodness of hobbyists' and other people's hearts, or by people who misconfigured their client on accident, or something. But either way, there it is.
Infura and Alchemy are this centralized model, right: their servers go down, everything stops working. LES is this decentralized model, which we like, but because there aren't enough incentives for people to run LES servers, it hasn't worked out super well. We've moved all the way into this distributed model, where we have a homogeneous network: everybody in the network is a client, and everybody in the network is a server.
One way to think about this is that it's very akin to BitTorrent. In LES you have this kind of degenerative thing: the more nodes you add into the system, the more they take up a limited amount of capacity, and once you exceed that, it degrades service for everybody. In the Portal Network context, we have built these networks around the idea that the more nodes you throw at it, the more powerful it gets, and that's the whole core part of this. All right, let's look at a practical example of how you would serve this.
D
Essentially,
a
balance
inquiry
from
the
portal
Network
I'll
remind
you.
We've
got
a
number
of
different
networks
here,
and
the
idea
is
that
they're
all
sort
of
special
purpose
partitioned
off
from
each
other
clients
can
be
part
of
any
number
of
them
that
they
want.
This
example
is
going
to
touch
three
of
our
Networks,
so
what
we're
going
to
do
here?
It's
a
very
simple
example:
we're
looking
up
your
ether
balance
in
a
tradition.
So
this
is
our
traditional
client.
D
You've got databases over here on the right where it's storing information, and it's running this JSON-RPC server. A request comes in to query my balance. The JSON-RPC server is going to do a couple of things here: it's going to reach into an index to figure out what the client thinks the head of the chain is, and it's going to look up the header for that block from whatever database it stores headers in.
Once it gets that back, it can look at a field inside of it to see the state root, and then it reaches into the state database to actually read your account balance. This all happens very quickly under the hood, and the reason that a traditional Ethereum client can do this is because it is maintaining these databases: it is constantly online, and it is constantly keeping these things populated.
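The lookup chain just described (head index, then header store, then state database) can be sketched with in-memory dictionaries standing in for the client's databases. All the field names and hex values below are invented for illustration; no real client uses this schema.

```python
# Hypothetical in-memory stand-ins for a full node's local databases.
head_index = {"head": "0xheader1"}                    # which header is the tip
header_db = {"0xheader1": {"state_root": "0xroot1"}}  # headers by hash
state_db = {("0xroot1", "0xalice"): 1_000_000}        # balance by (root, account)

def eth_get_balance(account: str) -> int:
    # 1. Ask the index what the client thinks the head of the chain is.
    head_hash = head_index["head"]
    # 2. Load that header from the header store.
    header = header_db[head_hash]
    # 3. Read the state root field out of the header.
    state_root = header["state_root"]
    # 4. Reach into the state database for the balance under that root.
    return state_db[(state_root, account)]

print(eth_get_balance("0xalice"))
```

All three reads are fast local lookups, which is exactly why the client has to stay online keeping those databases populated.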
The Portal Network concept is very similar. (Oh God, no, wrong direction, too much, too much.) So in the Portal Network context, when this eth_getBalance request comes in, instead of reading from local databases, what a client is going to do is actually reach out into these networks that it's part of to get the data.
We have a network for essentially tracking the head of the chain that provides the beacon light client protocol data. Your client is going to reach into there to know what the front of the chain looks like, what the head of the chain is. It's going to use that to pull the header from the history network, which stores all of the historical block bodies, headers, receipts, things like that, and once it has that, it can look up what state root it's supposed to be reading under.
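The same balance flow in the Portal model swaps each local read for a query to one of the sub-networks. Here the three networks are faked as plain functions returning canned data; the function names and return shapes are assumptions for illustration, not the real wire protocol.

```python
# Hypothetical stand-ins for the three Portal sub-networks this request touches.
def beacon_light_head() -> str:
    # Network 1: beacon light client data tells us the head of the chain.
    return "0xheader1"

def history_header(block_hash: str) -> dict:
    # Network 2: historical headers, bodies, and receipts, keyed by hash.
    return {"state_root": "0xroot1"}

def state_balance(state_root: str, account: str) -> int:
    # Network 3: state data, read under a particular state root.
    return 1_000_000

def portal_get_balance(account: str) -> int:
    head_hash = beacon_light_head()        # what is the head of the chain?
    header = history_header(head_hash)     # pull that header from history
    state_root = header["state_root"]      # which root to read state under
    return state_balance(state_root, account)

print(portal_get_balance("0xalice"))
```

The shape of the computation is identical to the traditional client's; only the storage behind each step moves from a local database to a network.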
This is a very simplistic example, but it is very representative of what the majority of requests are going to look like: there'll be a little bit of sampling of data from different networks in order to get the information that you need before it's returned to the user.
All right, where are we at? Like I said, this has been a long road to get here. We had to build some of the wrong things to figure out what the right thing to build was, and at this stage we are past the research stage. We are purely in the get-it-built-and-get-it-out-the-door stage. We have three different client implementations.
This is fantastic, and I'm so happy about this, because we wanted to build a protocol, not a singular client. We wanted to build something that has many clients to it, instead of just one reference implementation. We've got Trin, written in Rust by my team at the Ethereum Foundation; we've got Ultralight, written in JavaScript by the JavaScript team at the Ethereum Foundation; and we've got Fluffy, written by the Nimbus team, run by Status. And here we are. So here's our rough timeline right now.
Software estimates are garbage, but: imminently, we are right at the edge of getting our first network really fully up and running. That's the history network.
In parallel to this, the merge has sort of kicked off us having this beacon light client network, and that's sort of our next major priority. After that, over the course of 2023, we are going to be getting the remaining networks online. The zero-to-one is a lot harder than building the subsequent networks that come after it; we've spent a lot of R&D getting to this stage, where we almost have the history network up and running.
That is what I've got for you today. If you want to get involved, we are findable on the internet. Like Danny has often said, the doors are all wide open and unlocked. If this is a project you'd like to get involved in, please feel free to reach out to me. And I've got some time for questions, if that's something we can do; I think we've got a guy with a mic walking around.
I have a question related to the API endpoints which the Portal Network serves: will it be able to serve things like the debug endpoints that would need a whole archival store on nodes?
Not initially. All of the data to do things with those will, in theory, be present in the network, but we are really focused on human-driven wallet interactions. That's the primary use case that we're focused on delivering, and the debug endpoints just don't play a role in that, and in general are going to involve a much heavier level of requests and data access than is traditionally involved in standard wallet interactions. So no, the debug namespace endpoints are not officially supported.
Thank you. From the perspective of an application-layer client, is the thinking that an application would be speaking directly to one of these clients, that an application would want to run one of these clients itself and speak to that? Or is the thinking that, for some reason, you would bundle the client itself into the application? Or neither of those things? Just trying to grok what the use case is.
I think I understand the question. There are a lot of ideas on the table, and exactly which ones are going to stick and which ones aren't, we'll find out as time goes on. But the general idea was to build a network where the clients can be lightweight enough that you might have two or three of them actually running on your machine at any given time, because in theory they're lightweight enough to actually embed.
So if you're, you know, downloading a desktop wallet, and the Portal Network's up and secure, up and live and production ready, it's entirely likely that it might just bake the client right into the application that you're running, and there might be two or three others running alongside of it.
One of the things that my team's going to be focusing on is sort of a system-level process that's really easy to download and install, that just runs the thing in the background, which makes it easy for you to do things like connect MetaMask up to it. So I don't think that there's one model here. I think some applications might embed it; some might treat it as an external dependency.
Yeah, there we go. So something that I'm familiar with from the Elias client is that when you want to execute, say, a smart contract call, you might need to do several round trips, because basically every time you need to query state, you are going to go to the full node and ask for something, and that maybe creates, again, several round trips, which increases the latency of whatever you're trying to do. So is it the same with the Portal Network?
Not that precisely, but you're going to do multiple requests. So, for example, if you want to do an ERC-20 balanceOf, you need to download the smart contract, then you need to start executing, and every time you need to access some part of the state database, you're going to go again and make a query, and so you are basically doing multiple, multiple...
Yeah, I think currently you do them sequentially, and so basically you are increasing the total latency of, like, a balanceOf. But maybe you can find a way of doing that concurrently, or batching it up so that you only do a single request.
I think that's going to be a thing that individual Portal clients figure out on their own. There's nothing inherent about the networks that keeps you from looking up large swaths of data in parallel at the same time, and anything that can be parallelized at that networking level will definitely benefit the total amount of latency that users experience.
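The parallelism point can be sketched with asyncio: if the state reads a call needs are independent, issuing them with gather keeps total latency near one round trip instead of the sum of all of them. The lookup function and its fixed 50 ms delay are made up; a real Portal lookup would hit the network.

```python
import asyncio
import time

async def network_lookup(key: str) -> str:
    # Hypothetical Portal lookup with a fixed 50 ms round-trip delay.
    await asyncio.sleep(0.05)
    return f"value-for-{key}"

async def sequential(keys: list[str]) -> list[str]:
    # One round trip per key: latency grows linearly with the key count.
    return [await network_lookup(k) for k in keys]

async def concurrent(keys: list[str]) -> list[str]:
    # All lookups in flight at once: latency stays near one round trip.
    return list(await asyncio.gather(*(network_lookup(k) for k in keys)))

keys = ["code", "slot-0", "slot-1", "slot-2"]
start = time.perf_counter()
values = asyncio.run(concurrent(keys))
print(f"{len(values)} concurrent lookups in {time.perf_counter() - start:.2f}s")
```

Reads that depend on each other (you need the contract code before you know which storage slots to fetch) still have to be sequential, which is why this only helps with the independent parts of a call.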
Hey Piper, great talk, man. Just had one quick question: what prevents me from running a light client and making it a freeloader that does no work? How do you prevent against that?
We don't. So there's two things that I'll say here.
One is that we had to pick some cutoff points for what we were building, because this is big. I took a lot of stabs at making lightweight protocol access in smaller ways, and this is what came out of really trying a bunch of things that ended up not working. So in order to deliver this, we needed to build something that was much bigger than I originally thought we were going to have to build, and in that, we had to kind of cut it off at a point.
The thing that we're building is attackable. There is an attack surface that exists, and the general idea here is that those are solvable problems, and we're not going to focus on absolutely trying to make sure that we have them all solved on day zero. The Portal Network is not core infrastructure, not at the protocol level.
The protocol as we know it does not depend on the Portal Network for anything, and so if the Portal Network falls over, Ethereum does just fine. And initially, it's possible that somebody attacks it, and that probably means we're doing something that's working, because it's worth attacking. So that's the one piece: we have built something that we know we are going to have to hone and fine-tune, and work on the security part of it.
That might be leading. I'm kind of also curious: if I choose to make my light client act maliciously, and you ask me...
For that... there we go: freeloading. The network is designed such that if there are too many freeloaders, it'll degrade performance for everybody. I'm relying on essentially two things. One is the laziness of people: people are inherently lazy, and so going and configuring your client differently than how it ships is something that people often won't do if it's working just fine, and if we ship it with sensible defaults that aren't running your fans at full speed and aren't filling up your hard disk.
The chances of you taking the time to go in and tweak those settings are pretty small. And you can call it altruism or you can call it laziness, but we're fundamentally built on this idea that the small contributions of lots and lots of people add up to a lot. BitTorrent works.
You're talking... there's, I believe, three data types that we'll be working with. I am going to potentially botch this if I list it off the top of my head; Kim on the Fluffy team is the one kind of leading the R&D, or the specification, for what that network serves.
Can't hear you; I'll repeat your question. I was just wondering, when you're testing on a day-to-day basis, what user flows are you using for your tests?
So we're designing at the JSON-RPC API level, so we're not necessarily building interfaces for people. We're taking the JSON-RPC API, which is the standard API that execution layer clients expose to users. This is what Alchemy generally exposes; these are the things that MetaMask is calling into. And so, while we are user focused, we are building out clients that can serve the JSON-RPC API, which is still a low-level thing: you're still talking about computers talking to computers.

The wallets and things that get built on top of this, that's, I think, the type of user testing you're potentially asking about, and that's kind of outside of our purview. I guess you could say that the wallets are our primary clients, and that the users are the wallets' clients, if that makes sense. Is that okay?
I think you asked whether we're building out a one-to-one of the JSON-RPC API. We are not trying to redefine the JSON-RPC API. It is established and has been successful and is generally the backbone of any kind of Ethereum interaction that wallets and things are making, and so we are building off of the existing standard.
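Since the existing JSON-RPC standard is the target, a wallet's request looks the same whether it goes to a hosted provider or a local Portal client. Below is the standard eth_getBalance request shape; the zero address is just a placeholder.

```python
import json

# Standard JSON-RPC 2.0 payload a wallet would POST to any execution-layer
# endpoint, a hosted provider and a local Portal client alike.
request = {
    "jsonrpc": "2.0",
    "method": "eth_getBalance",
    "params": ["0x0000000000000000000000000000000000000000", "latest"],
    "id": 1,
}
print(json.dumps(request))
```

Keeping this wire format unchanged is what lets existing wallets switch backends without modification.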
I don't have an answer for you on the L2 thing; it's an open question, and we'll see how it goes. And that is all of my time. Thank you all.
So, moving on from the Portal Network to our next talk, I'd like to invite David to the stage. He will talk about debugging the Ethereum merge with parallel universes, and in his talk he will explore the difficulties of developing stateful systems like a blockchain with high confidence, and will explain how the merge required special testing in order to find the worst possible bugs in these systems beforehand. So please give it up for David.
Right, oh, okay, excellent. Okay, can you all hear me? Are we ready to go? Oh, cool, excellent. Well, welcome! So, if you're in the wrong room: this room is talking about testing the merge with parallel universes.
My name is David Searle, and I'm the head of EMEA for a small startup that's been working with the Ethereum Foundation for just over a year now. Who likes bugs? Okay, okay, well, I brought some bugs along with me today, so you're gonna have to be like, who would like a bug? Because I've got some bugs that I'd actually like to give to various members of the audience. Anybody? You'd like one? Okay.
Here we go, just after a little... you're gonna be catching some little bugs; we've been spending the last year really stress testing. See, this is why you should start with the front. So I've got three more to go. There we go. See? So.
This talk is about testing Ethereum, and it's basically looking at using a pretty sophisticated and unique testing methodology from a company called Antithesis, which I belong to. The company is involved in using deterministic simulation techniques to allow us to basically explore a whole host of iterations of complex distributed technology that's running in a simulated environment. So let me just get my little clicker... here we go, let's see if we can do that. I brought a few stars along to the show.
Obviously, I've just explained who I am. Little did you know that Dr. Strange and Cheryl are also going to participate in this presentation. I've got about 25 years' worth of experience in the tech industry, and for the last two and a bit years now I've been working for this business, Antithesis.
So this is my house in the UK; you can tell from my accent. And as you can see, I've got a few GPUs I've got to sell, so if you're interested, please let me know, no joke. But the day of the merge came around, and we'd been doing a lot of work with all the various clients, all that hard, arduous testing, making sure that we caught every last bug. What was going to happen? Was it really going to be smooth sailing?
Yeah, I think we saw this this morning, which obviously is great news, but yeah, I was wondering if we were going to see this. You have to press play on the video.
There is sound as well, but... I couldn't believe this when I saw it, but this is real.
Yeah, it's pretty bad, eh? Not my front garden, again. So testing is really important, and this is where, you know, we talk about it, but the merge was a significant event. That's why we had so much celebration across the world about what was really going on when the merge took place.
So congratulations to anybody who was involved in the merge. You know, phenomenal job, and yeah, I think we can hold our heads up high there. So, I say testing is hard, and you only have to use Quora to basically ask a question, you know: what makes software testing difficult? Yeah, it's a bit of a high-level question, apologies for that, but these two answers really kind of struck me. The first one is basically fundamental.
This is an impossible requirement, because absence of evidence is not the evidence of absence. And so often we run these simulations in parallel universes, hitting a whole host of code, and you're like, well, is that good? Who knows? We can only really understand how much coverage we've got, and it may be good, may be bad, who knows. So that's a phenomenal kind of requirement.
The second one is then about understanding that testers have to think about all those possible scenarios where issues may arise and ensure that they are handled by the code, and, you know, that's just... how do you approach that with a distributed architecture like we have with Ethereum?
We've got a whole host of different clients using different architectures; we've got the execution layer; we've got the consensus layer. It's just amazing to see it actually working, but the complexity of that is just phenomenal.
If we then look at what Ethereum has done to approach this task, and this is not looking specifically at Antithesis (we are a part of the equation, and I think we're very much a complementary piece): Ethereum is using unit testing and testnets; we've got a list of shadow forks and testnet merges that occurred; and some technologies have been used, called Hive and Kurtosis, that do similar things.
We then have Antithesis, which is doing this deterministic piece, which allows us to really get into the detail of all the different iterations that are possible, and then we have a host of different fuzzing technologies that the Ethereum teams are using to really try and isolate and hammer areas that are of concern.
So throughout the entire year, a huge amount of testing has been done, and on top of that, they've also used static analysis to help with doing code audits and making sure that we catch things before the merge was to take place. So, to step back a little bit in terms of the "testing is hard" statement: it is a phenomenal obstacle to try and get over.
How do we find every iteration, and how do we find those hard-to-reach bugs that may not be common but, if they do occur, will be catastrophic? If we just look at the right-hand side, this is a representation of a distributed network.
So on the right, we're going to call this node... we're going to call this node one using Lighthouse and Geth. So Lighthouse and Geth are running on this node; they're running on an operating system, leveraging CPU; we've got a file system; they've got different processes happening; and then inside those processes we've got different threads being used as well. So just in its own sort of node, there's a lot of complexity about what's going on, and we can test a node.
On top of that, the complexity of all those different communication channels operating just means the search space for hidden-away little gems (I call them gems, but you know, they're bugs, basically) is enormous. Those can hide away for a long time, and you may never find, when you're looking at your testing, that particular condition where the network was slow and there's some...
We have the ability to run the entire collection of consensus clients and execution clients in a simulation, run it, and understand exactly every way in which we can actually see outcomes. That's pretty intense, you know. How do you do that, and fundamentally, how do you do that in a deterministic fashion? If we find a certain situation where Lighthouse crashes, well, can we represent that again? Can we replay it?
Can we reproduce it? That's the first thing you do, right? And often bugs are not reproducible: they might try something different, they might move around. How do we debug this? And this is again where we step in. So what do we do? We leveraged the ability to auto-generate networks inside this simulation environment and stand up all the containers that were necessary to bring up Ethereum.
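The "can we replay it?" property can be sketched by driving every random decision in a simulated run from one seed: the same seed reproduces the same fault schedule exactly. The client pairings and fault names below are invented for illustration; Antithesis's actual machinery is not shown in the talk.

```python
import random

def simulate(seed: int, steps: int = 10) -> list[str]:
    # All nondeterminism in the run flows from one seeded PRNG, so the
    # same seed replays the same schedule of injected faults exactly.
    rng = random.Random(seed)
    nodes = ["lighthouse-geth", "prysm-nethermind", "teku-besu"]
    faults = ["partition", "delay", "drop-packet", "kill", "none"]
    return [
        f"step {step}: {rng.choice(faults)} on {rng.choice(nodes)}"
        for step in range(steps)
    ]

# A run that surfaced a crash can be handed back to a client team as a seed.
print(simulate(seed=42) == simulate(seed=42))  # prints True
```

The payoff is that an "interesting" run is fully described by its seed, so a crash found overnight can be replayed under a debugger the next morning.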
So, you know, under duress we see things going wrong, and we basically use fuzzing technology to hit the entire system: not just individual pieces of software, but the entire system is fuzzed. And that allows us to basically deterministically replay the complete orchestration of a situation, not just one particular application; we know that in this combination, with different clients under these network conditions, something goes wrong. And so we use strategies inside the system to allow us to seek rare events.
And so when I start looking at the numbers, and at what we've been doing over the last year: you know, we hit a huge amount of code edges, like, a huge amount. And so how do we find, and allow ourselves to seek out, those edge cases that, if they do happen, can be pretty catastrophic, and get to them and uncover them to help the client teams debug and fix?
We have the ability to have all this wrapped up in a toolset that is available to ourselves, and we share it with the rest of the client teams and the EF, and that's been really, really useful. I think, if we look at it: we have the individual testing going on, which is fantastic.
We continue to promote that and say, keep going with that. We have the testnets and shadow forks; again, I think we can all say that they've been tremendously successful and useful. And then we have our parallel universes: we have the ability to run not just one simulation, but we're running simulations literally every day that are generating and exploring the various search spaces that exist across any number of different branches inside the repositories. So what does this mean for us?
So for us, winning is finding these edge cases, right? You know, winning isn't like, oh, all tests passed, yay. We want to find those really intricate combinations and iterations, where we've covered 14 million different scenarios and we've got, you know, maybe not more than one, but we at least have one we can actually hang our hat on. So that's an example of what we're doing, and we think that represents a pretty good sort of analogy as we move forward.
We've seen it again; here we are. This is kind of what it looks like in our world. You know, we're looking for bugs. This is an example of an output of just one run.
So we ran this for 13 hours' worth of wall-clock time (which is, if you look at the wall and there's a clock on it, it will last 13 hours), but it actually allows us to exhaust 536 hours' worth of testing, and you can see here, we're talking about an enormous amount of edges being seen.
The branches are kind of how we get decision points: we get, like, an if-then-else on a statement inside the code, a very simple example, but just take it. We can basically branch off at that point and exhaust both avenues and actually see what happens in both situations, underneath those, you know, under those test conditions. That's an example of a branch.
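The "edges seen" instrumentation can be illustrated with Python's built-in tracing: record each (from_line, to_line) transition while a function runs, and different inputs light up different edges. This is a toy stand-in for the real instrumentation described in the talk; the traced function is invented.

```python
import sys

def trace_edges(func, *args):
    # Record (from_line, to_line) transitions inside `func` as it executes:
    # a miniature version of the edge coverage the instrumentation reports.
    edges, prev = set(), [None]

    def tracer(frame, event, arg):
        if event == "line" and frame.f_code is func.__code__:
            if prev[0] is not None:
                edges.add((prev[0], frame.f_lineno))
            prev[0] = frame.f_lineno
        return tracer

    sys.settrace(tracer)
    try:
        func(*args)
    finally:
        sys.settrace(None)
    return edges

def branchy(x):
    if x > 0:
        return "positive"
    return "non-positive"

# Different inputs exercise different edges; their union is our coverage.
print(trace_edges(branchy, 1) | trace_edges(branchy, -1))
```

Counting distinct edges across many runs is what turns "we ran it a lot" into a concrete coverage number like the 180,000 edges mentioned below.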
Is
we've
got
the
instrumentation
happening,
so
we
can
see
what
kind
of
functions
and
and
pieces
of
code
are
actually
being
executed.
Pretty
pretty
insane.
You
know,
that's
one
run
I've
got
180
000
edges
that
we're
seeing
across
across
the
entire
network.
It's
a
busy
Network
and
obviously,
with
all
the
different
clients
happening.
That's
there's
a
lot
going
on,
so
we
see
ourselves
as
a
complimentary
piece.
O
We've been looking at this as a great example of a project where we can work alongside all the other pieces that the EF and the different client teams are using. We're just another layer in the growing list of strategies being used, and I think that's a testament to the approach the EF has taken to really hold on to this resilience — making sure we take advantage of every way we can.
O
We can make sure that nothing untoward happens in the future. More and more upgrades are coming, I'm sure — we've got 4844 around the corner; I'll say "around the corner," we'll see — but it's certainly something to look at in terms of making sure we're involved at every step of the way.
O
So this is a very simple illustration of what we've been doing. We've been building all of the clients, putting them in place, and actually establishing genesis. We've been starting this at the genesis block and then moving forward towards the merge — in our world, we have the merge positioned as well.
O
Then we obviously start fault injection: we've established a chain, and then we start injecting faults until we've got a whole host of different ones running. You can see them labeled below — anything from partitioning.
O
That's a great one, obviously, for encouraging forking of the chains. We've also got delays happening, drops of packets; the nodes themselves we can stop. We really put this thing through its paces: we stop things, we pause things, we kill things, we bring them back up. I suppose in some respects it's quite realistic — things do get rebooted, and validators come back up again.
O
We have all of that happening inside the simulation, and then on top of that we have any number of threads being paused and released — it's busy. And every configuration of faults that we put into a simulation is completely deterministic. So if we know that a run ended up with a segfault, we can literally feed exactly the same configuration back into the environment.
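A minimal sketch of the property just described, assuming (as is typical for deterministic simulators) that every fault decision is derived from a single seed: replaying the same seed reproduces the exact same fault sequence, which is what makes a crash reproducible.

```python
# All randomness flows from one seed, so the fault schedule is a pure
# function of (seed, steps): same inputs, same faults, every time.
import random

FAULTS = ["partition", "delay", "drop_packet", "stop_node", "pause_thread"]

def fault_schedule(seed: int, steps: int) -> list:
    rng = random.Random(seed)
    return [rng.choice(FAULTS) for _ in range(steps)]

run1 = fault_schedule(seed=42, steps=5)
run2 = fault_schedule(seed=42, steps=5)
assert run1 == run2  # fully reproducible: "turn back time" is just a re-run
```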
O
So what are the numbers like? Well, in one year we've conducted 31 years' worth of 24/7 non-stop testing, which has processed and explored over 50 million edges of code. Out of that, as you can see there, we've got 45 validated errors, some of which were catered for, but 33 of them were logged bugs that were pretty catastrophic — things that would genuinely bring an actual node down: panics, segfaults.
O
We'd have nil-pointer exceptions — there's one I'm going to bring up in a minute with a little example — things that you don't want living in your code base. We would forward those through to the different client teams, and they've been able to eradicate them, and obviously, through to the successful merge.
O
It's been great to see our efforts put to the test. You want an example? I'll give you some. So here's the output of one of the runs — this would come through in an email. Hopefully you can see it: you've seen the chart and the stuff on the right-hand side, but down here you can see a whole host of assertions that we've got running across the entire network.
O
This was actually flagged up as a failure — a fail on a segfault. This is something that is absolutely real: SIGSEGV, illegal storage access, "attempt to read from nil?". I guess the really interesting ones are where you get "please report this!" — oh, please fix this. So this is just an example, and we're like: what do we do now? How do we take it to the next step? Because it's great saying "oh, you've got a problem."
O
This one, you can see here, is actually for the node that's running Nimbus and Geth, so we know there's an issue there. Okay, fine — have we got any other occurrences on any of the combinations? We do. Okay, that's cool. Well, let's look at an example of an actual log entry that's coming through. This is a unified log — the serialized activity that's occurring across the entire simulated environment; a bit of a mouthful. You can see here, in the third column, the blue column.
O
Here again, this is just showing us all the different combinations — we've got the Prysm client, Erigon, there's Nimbus, Besu — all serialized, and you can see that we've got a concept of time running through. This blown-up piece is basically the stack trace showing us what is happening: Nimbus is having some issue; it's receiving some JSON and trying to figure it out.
O
It's not doing too well with it, and it basically crashes the entire node. So we go: okay, this is cool — this is why I go "yeah, there's something of interest here." Another example here, this one just from Prysm. Again, I don't love seeing panics, but it does show there's something of value here — that we're really stressing the entire environment. So again: invalid memory address. Okay, cool.
O
What do we do now? Do we send it to the client team? Well, yeah, you can, and they might go "I wrote that code, I can see what's going on here." Well — if she could turn back time, as Cher had it. The great news about deterministic environments is that you can turn back time, right? So we're going to basically jump into the actual environment and say: okay, when does this manifest? It crashes — was it a second ago?
O
Was it five seconds ago? What happened in the execution paths of all the different operating nodes with all the fault injection going on? Where do we actually see this thing manifest? So we want to look back so many seconds. We may want to turn on packet capture — packet capture isn't on by default, just because of the amount of data.
O
We want to be quite selective about where packet capture happens, and then we can look at the data and see if we can rerun it. Obviously you're stressing determinism a little bit — will it still be deterministic if I suddenly turn on packet capture during a moment in time? We'd have to look at that — but what we've seen is that generally, if you've seen something manifest, then we can.
O
We can actively turn things on to help debug and work out what's going on. We've got some really interesting stuff coming through next year, which I can't share, but it's really cool. So this is what we can do today: basically look at this mountain of data. Think of all the stuff that's going on — all the different scenarios, branches, paths of execution that have happened. We have all of these available in one big, massive data set.
O
We can then look at the common routes that occur across all the data and bring it together to see what is happening: where does the execution occur where suddenly the probability of this actually becoming a bug jumps? And lo and behold, we have a huge jump here — from literally 0.05% of the bug occurring to over 50%, this jump here in the middle. We can start to go: okay, right, we know how many seconds to go back now.
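A simplified sketch of the idea described above (the node names and counts are made up): for each node in the execution graph, estimate the probability of the bug given that a run reached that node, from counts of buggy versus clean runs that passed through it, then look for the sharp jump (e.g. 0.05% to over 50%) that tells you where to rewind to.

```python
# P(bug | path reached this node) estimated from run counts per node.
def bug_probability(runs_through_node: int, buggy_runs_through_node: int) -> float:
    return buggy_runs_through_node / runs_through_node

# Hypothetical counts along one well-trodden path: (total runs, buggy runs)
nodes = {
    "entry":      (200_000, 100),   # 0.0005 -> almost never buggy here
    "mid":        (10_000, 5_000),  # 0.5    -> the jump: rewind to here
    "crash_site": (600, 590),
}
probs = {name: bug_probability(total, buggy) for name, (total, buggy) in nodes.items()}
print(probs["entry"], probs["mid"])  # 0.0005 0.5
```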
O
We can actually replay that simulation and turn back time — I won't sing it — and see what's going on there, because otherwise you're looking for a needle in a haystack; how do you do that? That's been really, really valuable in our efforts. So what's next? Well, the merge is great, love it. We're doing some "Pokémon" testing on some stale branches, which — you know, we've got things in fairly good order there, but we know Ethereum's not standing still.
O
We've obviously got EIP-4844 just around the corner — I'll say it again — danksharding, withdrawals; a whole host of new capability, which means brand-new products, brand-new code, brand-new testing. There are other pieces in there too. Actually, the one that's interesting is using things like malicious clients: what about if I brought up an environment with a client that is not doing what it should be doing? Can that cause issues?
O
How does the rest of the network handle that client — does it operate in a different manner? There are other pieces here, like post-merge cleanup: there are changes to established pieces of code that are in place now, and all those changes introduce — probably good things, but issues could appear.
O
Downstream, that is. So, pretty cool — we're working actively with the clients and are open to broadening that relationship further. So that's the end of that. Q&A — any questions? I'm not sure if that's two minutes of questions or two minutes of the end.
R
Yeah, you showed us a slide calculating the probability of a bug, and I was wondering: how did you calculate the probability of there being a bug in time?
R
You showed us a slide with the probability of finding a bug. How did you calculate that probability — which model were you using, and how complex are the calculations?
O
That's a good question. Because of the amount of data we've got available, we know every path of execution where the outcome doesn't end in the bug, and we have all the different outcomes that do translate to the bug happening. So we have the ability to do some fairly simple calculation: we can see the well-trodden, worn path, and if we see that path happening, then we can calculate the probability of the bug occurring.
O
There are obviously other branches off that well-trodden path where execution goes somewhere else and the bug doesn't manifest. So at every point in that graph we can calculate whether it does or doesn't contribute to the bug. Hopefully that answers your question.
O
That's a good question. We see a huge number of bugs that do manifest and show us duplication — like the one you saw with Nimbus and Geth — and we can run that literally day in, day out, and with the right commit code, which is in the history, we'd be able to see that. We have counts on each occurrence, so we can see very quickly that this isn't just an edge case.
O
It's not something that just happened on one combined client set — it's a problem across the board — and we have all of that wrapped up into our reporting.
U
Yeah — what do you think about applying formal verification in this huge environment? Do you think it's a possibility?
O
I'm actually not the best person to answer that question, but bring it to us at the end, because I've got some people who can answer it for you. Okay — great chance for us to keep the conversation going, and I hope you enjoyed the talk.
B
Okay, moving on from testing the merge and debugging the merge to actual testnets, I'd like to introduce the next speaker to the stage: Afri. He's the head of protocol engineering at ChainSafe, but not only that — he's also known as the initiator of the Görli testnet initiative and an organizer of ETHBerlin, and in his free time he's the maintainer of open-source Ethereum libraries in Ruby and Crystal. Today he will talk about testnets, and specifically post-merge testnets.
B
The merge has introduced quite a few changes that we are all aware of; how those are affecting the testing infrastructure, and which testnets you should be using in future, Afri will tell us everything about. So please give it up for Afri. Thank you.
S
Yeah, briefly about me: I'm head of protocol at ChainSafe Systems — thank you, Franzi, for the introduction. I'm also one of the co-organizers of ETHBerlin, and I used to work on various EF and ESP grants in the past, for both execution-layer and consensus-layer clients. Among other things, as Franzi already mentioned, I launched the Görli testnet in 2019 with a bunch of cool people who are also sitting here in the room.
S
Everything started with a testnet. Does anyone here know the meaning of the Ethereum genesis extra data?
S
Ethereum was publicly, fairly launched by announcing a block number on the Olympic testnet — that was 1,028,201 — and once this block was mined on the testnet, its hash could be inserted into the Ethereum genesis, and public mining on Ethereum mainnet could start.
S
An attacker exploited the long execution time of the EXTCODESIZE opcode in one of the client implementations, which caused a denial of service on the Ethereum mainnet.
S
However, this protocol change caused a consensus failure on Morden. This was specifically caused by the different ways the clients handled the custom starting nonce, and it was then decided that it's not a good idea to have a custom protocol on the testnet. Therefore Morden was discontinued and replaced by a new testnet called Ropsten. Ropsten was the first testnet to launch with a chain ID right from genesis; the chain ID served the purpose of simple replay protection according to EIP-155.
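A rough sketch of why a chain ID gives replay protection under EIP-155: the chain ID is mixed into the data that gets signed, so a signature made for one chain is invalid on another. Real clients RLP-encode (nonce, gas price, ..., chain_id, 0, 0) and keccak-hash it; a sha256 over a simple concatenation stands in here purely for illustration.

```python
# Toy model: the signing payload depends on the chain ID, so a Ropsten
# (chain ID 3) signature cannot be replayed on mainnet (chain ID 1).
import hashlib

def signing_payload(tx_fields: bytes, chain_id: int) -> bytes:
    # Stand-in for keccak(rlp([...tx fields..., chain_id, 0, 0]))
    return hashlib.sha256(tx_fields + chain_id.to_bytes(8, "big")).digest()

tx = b"send 1 ETH to 0xabc..."
assert signing_payload(tx, chain_id=3) != signing_payload(tx, chain_id=1)
```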
S
In the end, the initiative managed to implement Clique in Geth and Parity, and subsequently, in 2019, the Görli testnet was launched as the first cross-client proof-of-authority testnet, with validators from both Geth and Parity Ethereum. At the same time — coincidentally, after Devcon in Prague, I believe — the Pantheon client was released, so there was even a third client available for running validators on this testnet, and soon after, Nethermind also joined the validator set.
S
Later that same year, Parity exited Ethereum, leaving Kovan fairly unmaintained, unfortunately.
S
Now, fast-forwarding in time to just earlier this year: to prepare for the merge, some of the older testnets had to be deprecated. The protocol support team announced the end of life for Ropsten and Rinkeby on the amazing Ethereum Foundation blog.
S
In foresight, however, a new testnet was launched: Sepolia. Interestingly, Sepolia — launched not even a year ago — started as a proof-of-work testnet using the same Ethash algorithm as mainnet back in the day. It was the first time since the Ropsten launch in 2017 that we actually launched a new proof-of-work testnet.
S
Yeah, I took a lot of time to build these slides. So in Ethereum, "the merge" stands for an event where two blockchains are literally glued together.
S
But how does it look for the testnets? So Görli was merged with the Prater beacon chain testnet, and access to the consensus on the Görli testnet prior to the merge was permissioned — Görli was running a community-maintained validator set.
S
Here you can see my Sepolia miner. I was happy that I could once again mine a testnet, and I did not ask for permission — I was just able to mine the blocks. I got a lot of Sepolia ether. But now there's something unique about the Sepolia beacon chain.
S
Since it was apparent that Görli would no longer be permissioned, there was actually a quest for another permissioned testnet, to retain certain stability guarantees. So it was decided that Sepolia gets a beacon chain with a modified deposit contract that does not accept regular ether deposits; instead, it tracks the validator set through an ERC-20 token.
S
So let's take a look at this overview again. This is actually a timeline on the x-axis. The first testnet to merge was Ropsten.
S
Ropsten was fairly similar to mainnet: it was proof of work, and after the merge it was proof of stake — so permissionless to permissionless. Then Sepolia transitioned with this modified beacon chain.
S
Technically the consensus algorithm is still proof of stake, but because of the permissioned ERC-20 token that you require to take part in consensus, I call it, for simplicity, proof of authority here — access to the consensus is actually permissioned. And then Görli merged with Prater, and there exactly the opposite happened: the proof-of-authority Görli testnet became a proof-of-stake Görli testnet after the merge. And then, yeah — we are here: eventually, just a couple of weeks ago, the mainnet merge occurred.
S
So let's take a look at the post-merge testnet landscape. I showed you that Ropsten merged; however, it has been deprecated by the Ethereum Foundation's protocol support team, and that leaves us with Görli and Sepolia as the only long-standing public post-merge testnets. So if you have an application that you want to deploy on Görli and you require other applications or interfaces, you might find some on Görli regularly, but not on Sepolia, because Sepolia is fairly new and not many applications and libraries are deployed there yet.
S
So, to summarize: please do use the Görli testnet, as it is most similar to the Ethereum mainnet. Görli is especially interesting for you if you plan to test a beacon chain validator: if you want to test your setup, test upgrading client versions, or test going through protocol upgrades, Görli is potentially the best — if not the only — test network where you can actually conduct this.
S
And yes, please also do use the Sepolia testnet. Sepolia comes with the best stability guarantees, due to the permissioned validator set. Since it's fairly new, it is fastest to sync, and it also has the best long-term guarantees. So I would lean towards recommending testing your applications — or even migrating your applications — to Sepolia instead of Görli at this point. And do not use Ropsten: when I prepared my slides, I realized that we are just in this transition phase where big service providers are already starting to shut down infrastructure.
S
So you have to expect interruptions and downtime there. Don't use Kovan, for obvious reasons. And I would also not recommend using Rinkeby: even though there is a long-term support plan for Rinkeby for almost another year, you will potentially not get any more protocol upgrades on that network, so I would not recommend using Rinkeby in that regard either.
S
Right — there are some caveats. As I mentioned, the Görli testnet has ether supply issues due to the sheer number of users trying to test their validator setups. Each validator requires 32 ETH, and if you want a more involved setup with, I don't know, 1,000 validators, it quickly becomes a huge burden — a huge problem — to access the required testnet tokens.
S
This is something we still need to figure out going forward — so if you have an idea, please hit me up. And in terms of testnet age, Görli is also fairly old.
S
It comes with a fairly big, rich history and state and is therefore more difficult to synchronize, especially now that it's merged with Prater, which also brings a fairly old beacon chain into the combination. And yes, for Sepolia, since it's new, I wrote down "lack of infrastructure" — but I put it in brackets because that's changing really rapidly right now.
S
I just noticed, while preparing my slides, that even MetaMask has a Sepolia testnet switch now — so it's happening. I'm really happy that we're getting all these integrations now.
S
Okay, this was my talk. Please use Görli or Sepolia going forward. Find me for some spare Visa cards from ETHBerlin — they are preloaded with Görli and Sepolia ether. So if you want some, I will not throw them into the audience; just find me after the talk. And since we have a couple of minutes left, please ask questions.
S
The short answer is: you can't — because, as I mentioned, even though it's proof of stake, you need a special ERC-20 token to get access to the validator set. You would basically have to google the Sepolia GitHub repository, and there is an issue where you can request to be added to this validator set, but in general not many teams will be accepted, just to keep it really small.
N
Maybe two questions here. One: when EIP-1559 came around, it was very hard to test in advance, because you have these proof-of-authority networks that had no fee market, and then you have these proof-of-work networks that had kind of a fee market, where you could kind of reason about what that change would do. Sorry, I am leading into a question here — I'm just giving some background. And then Kovan, if I seem to remember...
N
So testing specific features of Ethereum forks — like fee-market changes — or validating the execution spec was sort of the goal, because you had, you know, Open Ethereum or Parity, and Geth, and whether they synced up in terms of executing the EVM correctly — whether they ever split — was always a question. So...
S
So if we want to test execution-layer changes, consensus-layer changes, and whatnot, you usually go from local simulations to permissioned devnets — through a lot of hurdles — until you actually get to a public testnet, and these testnets, like Görli or Sepolia that I recommend, will only get protocol upgrades that have been sufficiently tested before. I don't know if this answers the question — okay.
X
Do you know why the merges happened the way around they did? Wouldn't it have been more logical for Görli to continue as...
S
Yeah, I think that's a very good question. The reason is that Görli merged with the Prater testnet, which had been a consensus-layer testnet for a very long time, where many teams had already tested running validators for more than a year, or a year and a half. So this proof-of-stake testnet was already available, and it was decided not to launch a new beacon chain testnet for Görli specifically, but to just use the existing one. That's why we had this flip.
S
That was also discussed — maybe have a new deposit contract on Görli — but in the end they decided it was actually worth it, for testing the merge, to do the big merge, because Prater was also the only consensus-layer testnet that had approximately the same number of validators as mainnet.
S
So the question is: if there's a bug fix on the testnet, how often do you have to rebuild your clients? Yeah.
S
Yes — the answer is basically yes, but this happens very rarely, because most bugs are usually not caught on testnets but much, much earlier.
S
I think I'm out of time. Yes — so thank you, everyone, and don't forget to run a testnet.
B
Awesome, thank you, Afri. Yeah, guys, don't forget to collect these sweet ether cards that Afri has with him, because we all know — if you're developing on a testnet, you know that Görli ether is very scarce; all the faucets are always empty because of some evil bots draining them. So if you want to get your hands on some very sweet Görli and Sepolia ether, come and find him afterwards.
B
I would recommend not in front of the stage, but maybe somewhere else. Yeah — moving on to the next topic, and we are really on time, so that's nice. Please, guys, come in for the next talk. Next up we will have Kaan, who is the main developer of Sourcify at the Ethereum Foundation, and today he will talk about human-friendly contract interactions and explain how source verification projects like Sourcify can help web3 users make more informed decisions before signing a transaction.
C
This is something you're all familiar with, I would say, if you've been a web3 user for a while — just a normal day in web3, and I guess this will...
C
This will be something you see every day. In web3 you see things like this, and you basically have no idea — you're not a machine; you don't understand what's going on. You're trying to make sense of it: am I doing the right thing? Am I talking to the right contract? Is this actually doing what I want it to do? And basically what you do is tell them to "shut up and take my money" — you have no idea.
C
So, at the end of the day, what we want is to move from the left-hand side to the right-hand side. I know this has actually changed for many wallets — many wallets have started to decode things, like MetaMask using the Truffle decoding API — but we still have a long way to go, and there are a lot of things we can do to improve the user experience. So what can you do to achieve this?
C
The first thing you can do is use NatSpec documentation, as well as doing source code verification on Sourcify. So what is NatSpec documentation? NatSpec is the Ethereum Natural Language Specification Format. It's actually part of the Solidity spec, and you've probably seen it if you've looked at a contract before. This is how it looks: you put the comment — the documentation — above the function; you have the developer documentation, you have the user documentation with the @notice field, and you have the documentation for the parameters.
C
Another nice thing about NatSpec is that it has a specification for dynamic expressions. The fields you see here — the old owner and new owner parameters in backquotes — can actually be filled dynamically with the values the function is being called with. So the old owner's address can get filled in, and the new one can also get filled in, from the parameters.
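A sketch of how a wallet might render a NatSpec @notice with dynamic expressions: parameter names wrapped in backquotes are substituted with the actual call arguments before the text is shown to the user. The notice string and parameter names here are illustrative, not from a real contract.

```python
# Substitute `paramName` placeholders in a NatSpec notice with the
# decoded call arguments; unknown names are left untouched.
import re

def render_notice(notice: str, args: dict) -> str:
    return re.sub(r"`(\w+)`", lambda m: args.get(m.group(1), m.group(0)), notice)

notice = "Transfers ownership from `oldOwner` to `newOwner`."
print(render_notice(notice, {"oldOwner": "0xAl...", "newOwner": "0xBo..."}))
```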
C
Okay, so you did your job: you made the user documentation and the developer documentation. So where do you find it, then? It is in the Solidity contract metadata. Who actually knows what Solidity contract metadata is? Anyone? Just a few people — cool, that's why I'm here. Contract metadata is actually something introduced early on, in 2016, in the earlier compiler versions, but it was not really picked up by the community.
C
It is a JSON file generated by the compiler itself, and it contains the metadata. Okay, but what is in the metadata? It has the ABI, the userdoc and devdoc, as well as compilation info and source file info. So the first two —
C
The first two fields are concerned with how to interact with the contract — how to interface with it — and the second two are about how to reproduce the contract compilation. The compiler embeds this information during compilation so that it can be reproduced.
C
The file looks like this: it's a JSON file; it has the compiler, language, settings, and source file information, and here, for example, in the output you can see the userdoc and devdoc. If we open those fields, you have the methods, and for each method you have the notice field — or the devdoc field, if there's a devdoc. And you have, in this case, the "replaces the old owner with the new owner" text —
C
— the comment we have seen before. You can get the metadata with the metadata flag on the compiler itself; with the frameworks, you can find it inside the build files. Truffle, for example, puts this inside build/contracts/<ContractName>.json — under that you can find the metadata field, and the metadata is there. Hardhat also started to output metadata inside the build file; again, you can find the metadata there.
C
You can also find traces of the metadata inside the bytecode. This is an example contract bytecode, and the bytecode actually has a special field at the end, appended by the compiler. Again, a question: who knows what this is? Okay — again just a few people, and again, that's why I'm here. So this field...
C
This field contains the IPFS hash of the metadata file, alongside some other information. You can see how this works in our playground, playground.sourcify.dev: we basically show how the encoding is done and what it contains, and we also try to fetch the metadata from IPFS — it's already there. It's a nice tool. There are some example contracts you can click on, or you can just provide a contract address or paste contract bytecode, and we will try to visualize how this thing works.
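A minimal sketch of how the metadata section at the end of a contract's runtime bytecode is located: the final two bytes are a big-endian length of the CBOR-encoded metadata that precedes them. The bytecode bytes below are made up; only the framing convention is the real one.

```python
# Split runtime bytecode into (executable code, CBOR metadata blob)
# using the 2-byte big-endian length suffix appended by solc.
def split_metadata(runtime_bytecode: bytes):
    cbor_len = int.from_bytes(runtime_bytecode[-2:], "big")
    return (runtime_bytecode[:-(cbor_len + 2)],
            runtime_bytecode[-(cbor_len + 2):-2])

# Fake CBOR map {"ipfs": <34-byte multihash>} plus fake opcodes in front:
fake_cbor = b"\xa1\x64ipfs\x58\x22" + b"\x00" * 34
code = b"\x60\x80\x60\x40" + fake_cbor + len(fake_cbor).to_bytes(2, "big")
executable, metadata = split_metadata(code)
assert executable == b"\x60\x80\x60\x40" and metadata == fake_cbor
```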
C
All right, let's go down to the second thing: source code verification on Sourcify. But before that — what is source code verification? You all probably know this: if you've looked at a contract before and you see a green check mark, you're happy; you know the contract, right? Okay, but how does this work?
C
Maybe before that: the reason we need source code verification is that contracts actually live on the blockchain as bytecode. We humans write code in human language, but machines read it in bytes, so the code gets compiled and deployed to the blockchain, and this information is lost in the process. So we need to somehow make sure that a given piece of source code is actually the same...
C
...as the code behind the contract. That process — establishing that this code is actually the one running the contract — is source code verification. So how does it work? You have the contract's Solidity files — in this case you have a target contract — and you also have the compilation settings: the compiler version, the optimizer settings, and so on. We feed those into a compiler and recompile the contract — and remember, this is where the second part of the metadata information comes in handy.
C
This gives us a bytecode, and then we see whether these actually match. In Sourcify we have two types of matches: the partial match, when the bytecodes match, and the full match, when both the bytecode and the metadata field match. Right now, when you verify on Etherscan or any other verifier, they actually ignore this field — they don't make use of it, they just trim it out — and there have actually been cases where this was exploited.
C
It wasn't a serious thing, but it wasn't really being done properly. With the full match, you have a complete match of the bytecode, and here the metadata acts as a compilation fingerprint: if you match the metadata as well, it means the compilation is exactly the same as when the contract was deployed. Full matches cryptographically guarantee that the whole compilation is exactly the same, including the Solidity files' comments, spaces, variable names — everything.
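The two match levels just described can be modeled in a few lines (a simplification of Sourcify's actual logic): a partial match compares only the executable bytecode, while a full match also compares the appended metadata hash, which changes if even a comment changes.

```python
# Compare (executable code, metadata blob) pairs for the on-chain
# contract vs. the recompilation, and classify the match level.
def match_level(onchain, recompiled) -> str:
    code_a, meta_a = onchain
    code_b, meta_b = recompiled
    if code_a != code_b:
        return "no match"
    return "full match" if meta_a == meta_b else "partial match"

onchain = (b"\x60\x80", b"meta-v1")
assert match_level(onchain, (b"\x60\x80", b"meta-v1")) == "full match"
assert match_level(onchain, (b"\x60\x80", b"meta-v2")) == "partial match"  # e.g. a comment changed
assert match_level(onchain, (b"\x00", b"meta-v1")) == "no match"
```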
C
The hashes of these files are embedded inside the metadata file — the metadata file that we saw — and not just for one source, but for all of them. Then, as I said, the compiler takes the IPFS hash of this whole file, and this IPFS hash is embedded at the end of the bytecode. Then we check whether these match; if they do, it's a full match.
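The trailing metadata section just described can be split off mechanically: the last two bytes of the runtime bytecode encode the length of the CBOR metadata that precedes them. A minimal sketch (the bytecode bytes here are synthetic, not a real contract):

```python
def split_metadata(runtime_bytecode: bytes):
    # The last two bytes give the length of the CBOR metadata section.
    meta_len = int.from_bytes(runtime_bytecode[-2:], "big")
    # Everything before that section is the executable code.
    code = runtime_bytecode[: -(meta_len + 2)]
    metadata = runtime_bytecode[-(meta_len + 2):-2]
    return code, metadata

# Synthetic example: 4 "code" bytes + 3 "metadata" bytes + length 0x0003.
bc = bytes.fromhex("60806040") + b"\xa1\x64\x69" + (3).to_bytes(2, "big")
code, meta = split_metadata(bc)
```

A verifier that ignores the metadata compares only `code`; a full match compares the whole bytecode including `meta`.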
Let's see what happens when you change something — when you make a slight change: change a space, change a variable name, any comment.
C
Say we have MyContractDiv this time: the hash of the file will change, so the hash inside the metadata will change, and in turn the hash of the metadata file itself will change. This field will be different this time, which means it will not be a full match — but it will be a partial match, assuming you didn't make a change that alters the functionality of the contract, just a comment or a variable name.
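The full/partial distinction can be expressed as a small comparison, reusing the fact that the metadata (with its two length bytes) sits at the end of the bytecode. This is an illustrative sketch, not Sourcify's actual code:

```python
def strip_metadata(bytecode: bytes) -> bytes:
    # Drop the trailing metadata plus its two length bytes.
    meta_len = int.from_bytes(bytecode[-2:], "big")
    return bytecode[: -(meta_len + 2)]

def classify_match(onchain: bytes, recompiled: bytes) -> str:
    if onchain == recompiled:
        return "full"      # bytecode AND metadata identical
    if strip_metadata(onchain) == strip_metadata(recompiled):
        return "partial"   # only the metadata (e.g. source hashes) differs
    return "none"

# Synthetic bytecodes: same code, different metadata tails.
code = bytes.fromhex("6080")
a = code + b"\x01\x02\x03" + (3).to_bytes(2, "big")
b = code + b"\x09\x08\x07" + (3).to_bytes(2, "big")
```

Here `classify_match(a, a)` is a full match, while `classify_match(a, b)` is only partial, exactly the comment-or-variable-rename case above.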
Okay — but then how do you verify? You can use the Sourcify UI.
C
You give us the source code and the metadata file — you need the metadata file to be able to verify — either from your computer, from Etherscan, from a remote GitHub URL, however you like. Then you give us the contract address and the contract's chain, and we try to verify. You can also use the API: we have a verification endpoint and several other API endpoints as well. You can check them out at docs.sourcify.dev — we have some detailed docs about this — and we also have tooling.
C
If you are using Hardhat, there's the hardhat-deploy plugin; with that plugin, after you deploy your contract, you can just pass the network, say sourcify, and it will verify your contract. We have the Remix plugin: if you are using Remix, you can provide the contract address and the chain, and we will verify. And we recently added Foundry support, so using Foundry...
C
...you can also easily verify your contracts. We also have automatic verification — what we call the monitor. We have a monitor running that is listening on several chains: right now Ethereum mainnet and the testnets, as well as some rollups. What the monitor does is catch contract creations, and when it finds a contract creation, it fetches the metadata.
C
As you remember, the IPFS hash of the metadata is in there, so it will take that hash and try to fetch the file from IPFS. The metadata file also contains the source files' IPFS hashes, so it will try to get the source files from IPFS as well. If it finds them, it will automatically compile and try to verify the contract.
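The monitor's flow can be sketched as a small pipeline. The `fetch` and `compile_and_compare` callables below are placeholders standing in for IPFS retrieval and recompilation — assumptions for illustration, not Sourcify's actual implementation:

```python
import json

def try_auto_verify(runtime_bytecode: bytes, fetch, compile_and_compare) -> str:
    # 1. Decode the metadata IPFS pointer embedded at the end of the bytecode.
    meta_len = int.from_bytes(runtime_bytecode[-2:], "big")
    metadata_blob = fetch(runtime_bytecode[-(meta_len + 2):-2])
    if metadata_blob is None:
        return "metadata not found"
    metadata = json.loads(metadata_blob)
    # 2. The metadata lists every source file with its own hash/URLs.
    sources = {path: fetch(info) for path, info in metadata["sources"].items()}
    if any(blob is None for blob in sources.values()):
        return "sources not found"
    # 3. Recompile with the recorded settings and compare bytecodes.
    return "verified" if compile_and_compare(metadata, sources) else "mismatch"

# Toy wiring: `fetch` returns canned blobs instead of hitting IPFS.
metadata_json = json.dumps({"sources": {"MyContract.sol": {"urls": []}}}).encode()
fetch = lambda key: metadata_json if isinstance(key, bytes) else b"contract code"
bytecode = b"\x60\x80" + b"ipfs" + (4).to_bytes(2, "big")
result = try_auto_verify(bytecode, fetch, lambda meta, srcs: True)
```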
C
So we have this contract repo of all verified contracts. It is served over HTTP and IPFS under repo.sourcify.dev: we pin the verified contracts' source files and the metadata so that they are accessible by decoding the bytecode. Remember, the IPFS hash is there at the end of the bytecode — if a contract is verified on Sourcify we will be pinning its files, and there are other people pinning our repo as well, so the files will be accessible by their IPFS hash. We also serve the repo under an IPNS name.
C
So you can browse the contract repo, see and access all the files, and download the whole repo if you want. Okay — we have seen what you can do as a smart contract dev; let's see what you can do as a wallet developer. Maybe a short recap of what we are trying to do again: we have a contract call — we are talking to a contract — and instead of showing this byte string, we want to show something more user friendly.
C
One thing to do, obviously, is to decode this byte string — the call — via the ABI JSON: you can show the function name, the variable names, etc. And then you want to show some human-readable description of what the user is trying to do, if you have documented your code well. So what you can do as a wallet developer is go to Sourcify — repo.sourcify.dev — with a chain ID and contract address, and get the metadata.
C
No — please don't do that. You don't even need to come to us, because it's already on IPFS, and the neat thing is that it is content-addressed, so you know the file you're getting is actually the right file. Your wallet just gets the bytecode of the contract, decodes the IPFS hash at the end of the bytecode, and fetches the metadata that we pinned for you — and the metadata file has, as we have seen, the ABI and the documentation.
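Once the wallet has the ABI, decoding a call boils down to splitting the 4-byte selector off the calldata and reading one 32-byte word per argument. A minimal sketch using the well-known ERC-20 `transfer(address,uint256)` call (selector `0xa9059cbb`); the recipient bytes here are made up:

```python
def decode_transfer(calldata: bytes):
    # ERC-20 transfer(address,uint256); 0xa9059cbb is its 4-byte selector.
    assert calldata[:4] == bytes.fromhex("a9059cbb"), "not a transfer() call"
    # ABI encoding: one 32-byte word per argument; addresses are
    # right-aligned, i.e. the last 20 bytes of their word.
    to = "0x" + calldata[4:36][-20:].hex()
    amount = int.from_bytes(calldata[36:68], "big")
    return to, amount

calldata = (bytes.fromhex("a9059cbb")
            + bytes(12) + bytes.fromhex("ab" * 20)   # recipient word
            + (1337).to_bytes(32, "big"))            # amount word
to, amount = decode_transfer(calldata)
```

With the NatSpec from the metadata, the wallet can then render this as a sentence instead of a hex blob.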
C
This is where the first two fields come in handy: how to interface with the contract. You decode the ABI and populate the NatSpec comments of the contract. So hopefully, by the end of today, we'll have something more like the example on the right rather than the one on the left. But Sourcify is actually not the only way to get human friendliness: the idea behind Sourcify is to have human-readable descriptions via the NatSpec comments found in the metadata.
C
There are other ways to achieve this as well. One example is these two EIPs by Richard Moore and Nick Johnson. The idea there is to have an extra function — a describe function, so to say — that returns something human-readable to the user. It can be any custom string: the contract returns the string to the user and then continues executing the actual function.
C
The nice thing is that it can decode things like an ENS commit, which is normally just a hash and has no meaning to the user — you can add custom strings, custom messages, so the user can make sense of it. But obviously this costs extra gas. The other one is a proposal by Dan Finley.
C
The idea there is to give the user this information at the first point of contact. Say you want to do an exchange for the first time on Uniswap: app.uniswap.org will give you the contract metadata, your wallet will store it, and then your wallet has the ABI and the describers, so it can show you something more human readable. The advantage here is that it's backwards compatible.
C
So we don't need to change the contracts: right now most contracts don't have any documentation, or they are not documented with human friendliness in mind, so this would be backwards compatible. But at the same time it means it's mutable — it can be changed — so it's a trade-off. And there are many other ways to improve UX; we can actually show the users many things.
C
You can decode the contract call; you can warn if the user has never talked to this contract before; you can show the user whether the contract is verified, or block it if it's a scam address; and there are other kinds of signals, such as how many times this contract has been interacted with and when it was deployed — a scam contract would likely be more recent and less interacted with.
C
Is
this
contract
audited
and
is
it
by
audit
by
whom?
So
there
are
actually
many
ways
we
can
do
better.
So,
as
a
recap,
what
is
sourceify
technically?
It
is
an
open
source,
automatic,
smart
contract
verification
service.
Our
monitor
it's
a
user
interface,
server,
API
and
tooling
to
verify
contracts
manually.
C
So
thank
you
for
listening.
If
you
have,
if
you're
interested
you
can
find
us
in
Twitter,
join
our
Matrix
chats,
our
code
is
also
at
here.
It
is
a
term
sourceify
visit.
Our
website
and
yeah
I'll
be
happy
to
take
any
other
questions.
If
you
have
thank
you.
C
Languages — yeah, that's also one consideration there. We have the idea of maybe having a custom NatSpec field for translations, and in that field you could actually link to another translations file — that would be inside the metadata, for example a translations file under another IPFS hash — so that you can fetch it and have other languages and translations.
Z
Are you thinking about UX regression testing automation? Sorry — are you thinking about end-to-end regression testing automation with Sourcify, so being able to include the devs in the whole cycle of testing how things look — what can the dapp do to make things easier for end users?
Z
Correct
like
currently
in
order
for
a
developer
to
somehow
test
the
end-to-end
user
experience.
C
I'm not sure I get the question correctly, but we're more like — well, we just say: here it is, here are the tools, here are the files, please make use of them. So we don't actively get involved in the UX of contract interactions; that's not in our pipeline, I would say.
AB
Thank
you.
Hi
for
user
protection
am
I
right
in
thinking
that
the
reputational
and
statistical
characteristics
you
mentioned
are
really
important.
AB
Because
I'm,
just
thinking
and
correct
me
if
I'm
wrong
but
I'm
malicious
deployer,
could
create
a
malicious
contract.
This
describe
it
in
a
malicious
way
with
Matt
Speck
and
then
yeah
take
advantage
of
the
user.
C
Yeah
I
mean
obviously
we
have
the
assumption
that
the
contract
deployer
is
nine,
it's
not
malicious,
so
it's
we
verify
the
content
of
the
the
contract,
but
there
are,
as
I
said,
there
are
other
ways
to
do
that,
for
example,
audits,
scam,
lists.
So
this
is
another
aspect,
so
we
have
like
this
neutral
eye
to
what's
inside
the
contract
and
it's
up
to
the
community
and
the
other
types
of
methods
to
actually
see
it's,
not
a
malicious
contract.
K
Cool
hi,
thanks
for
the
talk
so
question
I
had
was
for
contract
coverage
like
what
is
it
like?
Is
it
limited
to
what's
verified
on
ether
scan
or
like
I
like
yeah,
in
terms
of
like
abis?
That
might
not
be
fully
complete,
like
what
is
sourceify
fully
covered.
C
Yes,
I
mean
source
file
is
a
completely
different
thing
than
etherscan.
Etherscan
is
both
a
verification
service
and
a
block
Explorer,
but
source
file
is
not
a
block
Explorer.
We
just
we
are
just
a
contract,
verification
service
and
I
would
say
we
have
different
contract
sets
than
it
can,
so
you
can
actually
import,
but
it's
a
different
contract
set.
C
Even
also
different
chains
like
we
don't,
we
have
I
think
like
30,
something
evm
chains
right
now,
so
you
can
even
verify
contracts
on
other
chains,
but
at
the
end
of
the
day,
actually
we
we
have
support
for
different
chains,
but
we
actually
want
everyone
at
some
point
to
run
their
own
sourcify
for
their
own
chains.
Cool.
AC
Question
is
it
in
the
scope
of
sortsify
to
like
maintain
reputation
of
the
commands
or
maybe
even
validate.
The
comments
actually
reflect
the
code.
AC
Is
it
in
this
clock
of
source,
if
I
to
maintain
like
a
reputation
scope
for
the
comments
or
to
validate?
If
the
comments
reflect
the
code,
the.
C
Application
comments,
you
say,
yeah
the
comments
comments,
I
mean
no.
We
just
as
I
said
we
are
a
tool
to
achieve
this
and,
at
the
end
of
the
day,
the
developers
have
to
document
and
comment
their
code.
So
they
have
to
keep
in
mind
that
this
will.
This
might
be
a
user
facing
method
and
properly
document.
It
thanks,
yeah
I,
think
we're
out
of
time
just
find
me
after
the
talk
or
also
you
can
yeah
find
sourceify
and
reach
us
out
there.
Thank
you.
B
Hey
thank
you.
So
much
Khan
I
think
we
are
actually
running
10
minutes
early,
so
we
are
trying
to
delay
that
again
a
little
bit
so
that
people
who
are
actually
coming
for
the
talk
that
are
coming
for
are
here
on
time.
So,
first
of
all,
I'm
just
going
to
talk
a
little
bit
so
that
we
are
not
running
10
minutes
early
anymore
and
secondly,
I
see
that
there
are
many
people
in
the
back
feel
free
to
also
move
to
the
front.
B
There's
a
seating
space
available
here
and
yeah
to
basically
bridge
between
the
two
talks
that
we've
just
had
and
that
we
will
have
next
moving
from
solidity,
metadata
and
contract
verification.
We
now
advance
for
the
future
so
and
welcome
Daniel
to
the
stage
big
round
of
applause
for
him.
Please
welcome
him.
T
I'm Daniel; I've been on the compiler team for over four years now. Originally the plan was for this talk to be held by Chris, who couldn't make it to Bogotá, so I have to improvise a bit. I brought a lot of code snippets which will explain what we currently have in the compiler, the current state of the language, and where we are headed with it. So let me just dive directly into it.
T
Guess
you
if
you
are
familiar
with
Trinity,
which
I
hope
most
of
you
are,
because
it
was
kind
of
like
kind
of
physical
requirement
for
the
talk.
You
probably
know
a
new
doubles
by
now
it's
been
around
for
for
a
while,
and
it's
a
very
highly
appreciated
feature
from
our
impression,
which
yeah
I
still
will
quickly
explain
how
they
work.
T
It
doesn't
require
an
S
load,
but
it's
basically
inlined
in
the
code
as
if
it
was
a
literal,
so
yeah
you'll
probably
know
this
and
people
liked
it
a
lot
and
an
apparent
and
a
very
obvious
extension
they
asked
for
is
why
do
we
only
support
that
for
Value
types?
That's
the
Restriction.
We
had
so
far
that
it's
only
integers
on
the
addresses,
but
not
arrays,
of
things,
for
example,
for
example.
T
So
this
was,
for
example,
it's
taken
from
a
GitHub
issue
that
a
thing
opens
Apple
in
open,
but
that's
to
yeah
ask
for
an
array
of
immutables,
which
would
then
yeah
be
initialized
by
index
accessing
and
otherwise
work,
just
as
as
one
is
used
with
immutables
and
we're
working
towards
that
end.
But
there
are
some
issues
with
that.
T
So
I
mean
we
have
this
assignment
here,
where
randomly
assigned
into
elements
of
this
array
and
in
general
I
cannot
check
whether
this
is
really
assigned
only
once
I
mean
the
question
is:
is
there
still
an
immutable
thing
in
that
sense?
T
Also,
if
I
start
having
a
race
as
immutables
I
will
want
to
have
local
references
to
them,
which
I
could
then
reassign.
So
if
I
keep
the
name
immutable,
I
just
change
what
this
points
to
so
this
is
really.
It
doesn't
look
immutable
immutable.
It
doesn't
fit
this
concept
anymore.
Probably
so,
let's
yeah,
let
me
explain
a
bit
further.
T
How
Universe
actually
work
in
the
Constructor
owner
is
actually
a
position
in
memory,
so
we
store
whatever
you
write
to
this
variable
in
memory
and
then
when
the
actual
runtime
code
of
the
of
the
contract,
part
of
deploy
is
copy,
the
memory
to
be
returned
by
The
Constructor.
We
fill
in
from
this
network
location
into
the
byte
code,
the
value
that
is
there
which,
in
the
end,
results
in
the
runtime
code
to
actually
have
it
as
a
literal
in
the
bytecode.
T
But filling literal values — basically the push arguments — into the bytecode won't work anymore for dynamic types. For statically sized arrays we could still do that, and probably will for efficiency reasons, but at least for dynamic types, to go the full way, we cannot, because we don't know the length of the thing, so we can't reserve space in the bytecode for it.
T
So
instead
we
need
to
rely
on
code
copy
and
yeah
I
already
mentioned.
We
will
probably
want
to
pass
the
immutables
by
around
by
reference.
We
will
want
to
slice
them
which,
if
you
think
about
all
of
that
together,
makes
immutables
not
an
annotation
for
a
safe
arrival
anymore,
but
it
will
become
a
proper
whole
data
location.
T
So here is a data variable that is a dynamic bytes array in code, which I can then, in the constructor, just treat like any old memory variable: assign to it, modify it freely. We can drop the requirement that it's only written once in the constructor, because that was always an artificial requirement.
T
So
if
we
actually
yeah
call
it,
but
it
is
a
code
variable
that
will
actually
be
inserted
or
used
as
in
the
code
in
the
end,
we
can
freely
modify
it
and
then
in
the
runtime
code
it
basically
behaves
like
a
call
data
reference,
a
read-only
reference
only
that
it
doesn't
come
from
call
data,
but
from
code
so
I
can
slice.
It
have
local
references
to
it
and
pass
it
around
the
functions
all
with
very
little
cost.
T
This
will
be
a
bit
tricky
to
write
to
type
check
in
the
end
because
yeah
in
the
Constructor,
it
is
a
memory
variable
in
the
runtime
code
apis
differently.
So
if
the
Constructor
calls
functions,
we
need
to
type
check
everything
twice,
but
we
have
different
costs
for
refract
factor
in
the
type
Checker
to
actually
do
that
as
well.
So
we
will
do
that
and
there's
still
some
considerations
of
gas.
T
However long it takes. So the go-to, down-to-earth example of a use of generics is something like a resizable array — some container. In this case it's just an array where, if you want to append something and the array is already full, you reallocate with twice the size and copy things over; otherwise you just add the element to the end of the array.
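The doubling strategy described is the classic amortized-O(1) append. A plain-Python sketch of the same idea (Python lists already grow this way internally, so this is purely illustrative of what such a generic container would do):

```python
class ResizableArray:
    def __init__(self):
        self._data = [None]   # backing store, initial capacity 1
        self._len = 0

    def append(self, value):
        if self._len == len(self._data):
            # Full: reallocate with twice the capacity and copy over.
            new_data = [None] * (2 * len(self._data))
            new_data[: self._len] = self._data
            self._data = new_data
        self._data[self._len] = value
        self._len += 1

arr = ResizableArray()
for i in range(5):
    arr.append(i)
```

With generics, this container could be written once and instantiated for any element type instead of being re-implemented per type.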
T
We
had
often
the
request
to
allow
slicing
for
memory
types
which
we
can't,
because
the
representation
I'll
show
here
doesn't
allow
it,
since
we
expect
the
size
to
be
at
the
first
memory
offset
where
the
memory
pointer,
that
is,
the
representation
type
of
this
points
to
is
the
size
we
can
just
slice
away
from
the
first
element
for
that
to
work.
Memory
arrays
would
have
to
work
different.
Similarly
to
call
data
types
there
you
have
offset
and
size
on
stack.
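The two layouts can be contrasted concretely, with a Python bytearray standing in for EVM memory (the offsets and word sizes here are invented for illustration, not Solidity's actual 32-byte words):

```python
memory = bytearray(64)   # toy stand-in for EVM memory

# Current Solidity layout: a memory array is a single pointer, and the
# length word sits at the pointed-to offset, elements right after it.
def read_ptr_layout(ptr: int) -> bytes:
    length = int.from_bytes(memory[ptr:ptr + 4], "big")
    return bytes(memory[ptr + 4:ptr + 4 + length])

# Calldata-style layout: (offset, size) live on the stack, so a slice is
# just arithmetic on the pair -- no length word in memory to preserve.
def slice_pair(offset: int, size: int, start: int, stop: int):
    return offset + start, stop - start

memory[0:4] = (3).to_bytes(4, "big")   # length word of a 3-element array
memory[4:7] = b"abc"                   # the elements
```

Slicing the pointer layout would require the byte before the slice to hold the new length, which it doesn't; slicing the pair is free.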
T
It
would
be
a
huge
effort
for
us
currently
to
change
the
entire
compiler
to
come
to
change
the
representation
of
memory
types.
If
we
had
things
defined
like
this,
it's
minimal
changes,
I
can
just
say
now.
The
memory
array
is
defined
as
a
tuple
of
Stack
slots.
One
of
them
is
pointer
to
data
area,
the
other.
T
So
it's
the
size
and
yeah
have
a
similar
definition
as
before
slide
changes,
but
a
few
Source
changes
in
our
standard
Library
would
then
be
the
same
as
changing
the
entire
layout
of
of
memory
types,
which
would
be
yeah
month
of
month
of
work.
Why
we
maintained
everything
hard-coded
in
the
compiler.
T
Of
course,
it's
all
the
disadvantages.
If
we
actually
keep
this
extremely
generic,
like
that,
we
will
lose
semantic
information
that
will
actually
make
memory
optimization
harder
because
yeah,
but
the
compiler
sees
it's
just
a
bunch
of
Stack
slots.
There's
no
idea
that
that's
actually
memory
areas,
we're
talking
about
that
are
allocated
may
only
be
allocated
temporarily
and
stuff
like
that.
T
So
I'm
not
sure
whether
we
will
actually
go
this
far,
and
if
so,
we
would
maybe
probably
do
this
in
a
yeah,
compiler
internal
manner,
where
we
will
still
assume
certain
semantic
properties
about
these
functions
without
supporting
similar
optimizations
that
we
can
do
if
we
know
what
these
things
do
in
their
random
user
code,
but
yeah,
why
did
I
say
stack
slot
all
the
time?
That's
yeah,
maybe
obvious,
but
just
to
mention
the
stack
slot
would
be
the
one
primitive
type.
T
Maybe
apart
from
from
product
types
and
even
the
basic
integer
types
size,
initial
types
we
have
right
now
can
be
defined
generically
just
yeah.
Like
the
other
cases
we
had
so
I
mean
this
will
really
reduce
the
footprint
you
can
hear
could
also
then
distinguish
between
types
that
are
checked,
arithmetic
and
unchecked
arithmetic
by
having
very
few
functions
that
are
generally
written
and
yeah.
T
So
yeah,
that's
not
to
give
to
make
you
expect
this
to
happen
too
soon
we
are
still
in
early
design
phase
for
generics.
There's
a
lot
of
questions.
This
is
a
very
complex
thing
to
do,
and
a
very
dangerous
thing
to
do,
because
yeah
all
of
this
needs
to
be
logically
well
defined
to
not
bite
in
the
back.
In
the
end,
so
I
mean
we
will
take
some
time
to
design
this
properly
and
syntax
is
also
a
question.
T
We need to decide what to do with the trade-off between making the language really self-defined in the very deepest sense, or having some fixed functions built into the compiler, which means we can assume their semantics — or some compromise in between. We'll see. So, to summarize what I was and wasn't talking about, and what we will still do:
T
Hopefully,
we
will,
in
the
future,
try
to
allow
more
pre-competation
either
in
the
Constructor,
but
the
code,
data
location
or
in
compiler
term
by
compiler,
constant
expression,
evaluation,
which
is
something
yeah.
A
lot
of
people
have
asked
for
and
which
obviously
makes
it
easier
to
write
things
in
that
you
don't
need,
for
example,
magic,
constants,
embedded
in
a
contract
or
whatever,
because
you
can
compute
them
on
the
fly
without
it
costing,
and
he
had
a
huge.
T
A
huge
topic
for
the
future
will
be
to
make
the
language
extensible
and
self-defining
by
means
of
improving
user-defined
data
types.
Pushing
the
standard
library
and
making
a
move
for
generics,
but
we
also,
of
course,
can't
just
ignore-
is
that
we're
still
wasting
a
lot
of
memory?
I
mean
whoever
has
used
memory
and
solidity
will
know
that
yeah,
we
basically
don't
free
memory
which
for
a
long
time,
wasn't
the
main
concern.
But
in
the
meantime
it's
very
huge
pain
point
in
yeah
for
cost
of
contracts.
T
There
were
several
approaches
we
discussed
so
far
for
improving
the
situation
there
we
a
long
time
we
wanted
to
deal
with
this
on
the
viewer
level.
It
turns
out
that
may
not
be
as
simple,
so
maybe
we
will
move
actually
to
analyze
the
solidity
which
has
the
properties
right
there.
We
shied
our
way
of
doing
that
for
facility
being
the
more
complex
thing
to
analyze,
but
maybe
it's
fine.
We
will
see
and
yeah.
T
We
will
also,
of
course,
try
to
move
completely
towards
via
our
core
generation,
but
we
have
some
burdens
there
to
overcome
still
like
the
performance
of
the
optimizer,
better
tooling
support.
You
still
need
to
Define
got
the
background
data
for
the
tooling
tooling
to
consume,
to
actually
make
the
experience
as
nice.
T
Yeah, I would have assumed you knew, but let me explain. The compiler has two back-end paths at the moment. It used to be the case that Solidity was directly translated to EVM bytecode, and then the only optimizations that took place were on the bytecode level.
T
For
the
past
years.
We
have
moved
away
from
that
and
have
a
different
new
code
generation
pipeline.
That
translates
solidity
first
into
yield
into
an
intermediate
language
which
preserves
some
structure
and
which
allows
for
more
complex
optimizations
for
inlining,
more
analysis
and
then
only
to
translate
Yule
to
evm
bytecode
as
a
second
step
which
can
reduce
gas
costs
significantly
in
some
cases.
In
some
cases
it's
the
same
as
before,
but
yeah
and
yeah
the
new
pipeline,
the
Via
IRS
via
the
intermediate
representation,
so
compilation,
bio,.
AD
Thank
you
for
the
talk
you
mentioned
generics
I'm
wondering
if
you
could
speak
to
how
you're
planning
on
implementing
that,
whether
you're
going
through
monomorphization,
because
I
worry
that
the
code
contract
size
will
balloon.
If
you
start,
you
know
doing
the
C,
plus
plus
style,
duplication
of
implementations
or
if
there's
some,
you
know
uniform
representations,
you
can
do
Allah
o
camel
or
you
know,
Java
I.
T
Think
there's
not
much
you
can
do
actually
I
mean
the
generics
of
C
plus
are
different
in
the
sense
that
they're
analyzed
differently
and
you
only
get
errors
in
on
instantiations,
but
we
will
still
need
to
instantiate
and
generate
code
for
each
specific
case,
but
that's
not
worth
a
worse
than
what
you
get
now.
What
you
get
now
is
writing
by
hand
for
different
types,
different
functions
that
would
end
up
separately
in
by
code
so
yeah,
it's
not.
Nothing
is
worse
than
that.
That's
a
duplication
in
code
and
in
byte
code.
AE
Hey
hi
Dan,
so
my
question
is
related
to
that
question
regarding
genetics,
but
from
a
different
aspect,
so
type
system
I
understand,
but
once
we
get
generics
into
solidity,
wouldn't
the
developer
have
to
focus
on
10
more
things.
Instead
of
focusing
on
writing
business
logic.
T
A
good
question
I
would
I
would
think
that
the
yeah
down
to
earth
go
to
smart
contract.
Writer
will
not
bother
with
this.
It's
mainly
something
for
us
for
defining
a
senate
library
and
for
people
writing
libraries
to
support
smart
contract
developer.
First,
so
I
mean
the
language
supporting
generics
and
having
generic
types
doesn't
force
you
to
use
them,
and
it
doesn't
mean
that
anybody
has
to
use
them,
but
it
will
make
the
language
the
the
evolution
of
a
language
much
faster
and
more
streamlined.
T
We
can,
in
the
future,
ask
people
if
they
propose
a
feature
to
just
implement
it
in
a
standard
Library
way
and
then
standardize
it
in
the
end.
If
it
works
out,
which
will
yeah
have
all
the
advantages
this
one
can
think
about
that,
but
for
user
code
for
the
in
smart
or
contract
code,
the
difference
is
not
that
large,
probably.
T
Definitely eventually, I would say. I'm not sure when — getting the basic type system going and all that will already take some time — but eventually this is, of course, something that will make things easier to read, easier to write, and more beautiful. And maybe it's not even that hard: as long as you don't want these things to capture variables, it's easy; capturing is maybe something we can also consider at some point.
T
Yeah — on the Solidity level we don't yet interact that much with account abstraction, layer-two stuff, or whatever, but we are aware that we need to interact with that and support it where we can. Sure. Okay.
G
Okay,
I
think
you
mentioned
something
about
the
performance
of
via
AR.
Did
you
mean
the
how
long
it
takes
to
actually
use
it?.
T
I
mean
so
far
the
Via.
Our
compilation
pipeline
has
not
been
written
with
any
performance
considerations
in
mind
at
all,
if
you've
written
it
for
correctness
first
and
only
now
are
starting
to
yeah
realize
how
bad
that
got
and
that
we
need
to
do
something
about
performance
there.
So
I
could
imagine
that
we
can
get
quite
a
way,
but
yeah
it's
hard
to
tell
before
we're
actually
doing
it.
AH
Regarding
generics,
how
much
thought
I
was
put
into
the
auditability
for
external
code
Auditors?
Will
it
improve
the
story?
Make
it
worse,
more
training?
Can
they
forget
stuff
I.
T
Think
it
will
actually
improve
things.
I
mean
we
will
be
able
to
have
the
standard,
Library
definitions
of
all
the
built-in
functionality
which
can
be
exported,
which
can
be
analyzed,
I.
Think
at
the
point
where
we
have
generics
going
in
a
standard
Library
going.
We
will
actually
at
some
point
not
promising
that
happening
soon
either,
but
at
some
point
be
able
to
Define
a
form
of
semantics
for
the
core
language
that
remains
which
can
actually
help
hormonal
verification.
T
AD
Yeah
you
were
you
mentioned
that
the
data
location
is
going
to
be
coming
part
of
the
type
instead
of
being
associated
with
the
variable.
Does
that
mean
that
we
are
going
to
be
able
to
start
writing
things
like
an
in-memory
array
of
storage
pointers
or
a
in-storage
array
of
code
data
pointers,
because
those
are
all
well-formed,
but
something
like
a
storage
array
of
call
data
pointers
makes
no
sense
right.
So
what
does
the
well-formedness
look
like?
How
would
that?
How
does
that
sort
of
you
see
that
restricting
that.
B
Thank
you
Daniel
from
the
solidity
compiler
team.
We
are
staying
in
the
space
of
solidity
and
moving
a
little
bit
towards
upgradeability
next
up
I'd
like
to
introduce
our
next
speaker,
Alejandro
Santander.
He
is
the
creator
of
the
ethernet
CTF
game
and
founder
of
the
ethernet
Dao.
If
you
are
a
developer
and
interested
in
that
stuff,
I
really
highly
recommend
you
checking
out
this
game
and
also
the
ethernet
Dao.
They
have
lots
of
good
solidity
tips
and
tricks
on
their
Twitter
account,
and
today
he
will
be
talking
about
Alison
proxy
land.
B
He'll
walk
through
the
struggle
of
creating
upgradable,
smart
contracts
and
different
proxy
architectures.
So
big
round
of
applause
for
Alejandro.
AI
For
coming
to
the
talk,
so
yeah
I'm
I've
been
a
a
salute:
the
auditor
and
open
sapling.
Then
I
worked
with
Aragon
for
a
bit
and
now
I'm
working
with
synthetics,
mainly
addressing
the
problem
of
how
do
you
build
a
super
complex,
smart
contract
system,
in
a
way
that
you
can
iterate
through
it
and
fix
bugs
and
and
improve
an
experiment
and
other
than
that?
I
I.
Consider
myself
a
bit
of
a
an
educator
in
the
space
I.
Just
it's
not
that
I
know
a
lot
of
things.
AI
I
I,
just
love
to
empower
other
people
with
with
knowledge
right.
So
what
we're
going
to
do
today
is
talk
about
proxies
and
like
in
general,
and
then
we're
gonna
talk
about
a
pretty
sophisticated
type
of
proxy
that
we're
using
in
synthetics,
so
I
think
that
it's
critical
for
for
everyone
to
understand
how
proxies
work
under
the
hood
I
think
that
it's
not
okay
for
a
Dev
to
use
proxies
and
not
know
how
they
work.
But
the
good
news
is
it's
that
they're
really
easy
like
in
the
end.
AI
There's
like
no
mystery,
it's
very
easy
to
demystify
it
right.
So
I
I
insist
besides
the
proxy
I'm
going
to
show
I.
Think
the
the
essence
of
this
talk
is
to
to
promote
the
awareness
of
how
proxies
work
and,
if
you're,
going
to
use
one
like
make
sure
you
understand
how
they
work
so
to
illustrate
this
we're
going
to
play
them
like
we're,
going
to
start
with
a
with
a
very
simple
contract
and
make
it
upgradable
and
see
what
happens.
AI
So
this
is
the
contract.
I,
don't
know
if
it's
a
good
idea
to
bring
in
code
to
a
talk,
but
I,
don't
know
if
you
can
see
it,
but
it's
just
a
contract
that
sets
a
value
right.
It
has
a
function
set
value.
You
can
set
the
value
and
it
records
who
said
the
value,
it's
message
sender
and
emits
an
event.
That's
it
so
this
this
contract
is
deployed
at
the
0x1
address
right
and
then
after
deploying
it
Bob
calls
set
value.
AI
42
right,
then
42
is
set
in
storage
slot
zero
because
the
variable
is
declared
it's.
The
first
variable
declared
and
zero
X
Bob,
which
is
the
address
of
Bob,
gets
stored
in
the
second
slot,
which
is
slot
one.
Okay,
that's
how
solidity
lays
out
storage
automatically
when
you
declare
variables
in
a
contract
and
an
event
is
emitted
from
0x1.
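Solidity's automatic layout — first declared variable in slot 0, second in slot 1 — can be mimicked with a dict standing in for the contract's storage space (the addresses and values follow the talk's toy example):

```python
storage = {}   # slot number -> value, standing in for contract storage

def set_value(storage, sender, new_value):
    storage[0] = new_value   # slot 0: `value`, the first declared variable
    storage[1] = sender      # slot 1: the address that set it

set_value(storage, "0xB0B", 42)
```

This per-slot picture is what makes the collisions later in the talk easy to see.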
AI
Now
they
decide
to
make
their
contract
upgradable
all
right.
How
does
this
work?
They
they
decide
to
deploy
a
proxy
which
is
just
a
function,
a
contract
that
has
a
function.
You
can
tell
it
what
the
current
implementation
is,
which
is
going
to
be
the
other
contract,
and
then
it
has
a
like
a
magic
assembly
function
which
forwards
anything
using
call
to
the
implementation
contract.
AI
So
this
is
deployed
at
0x2,
then
Bob
calls
set
implementation
0x1
and
in
the
proxy
storage
it's
not.
Zero
now
holds
0x1
because
the
address
implementation
variable
is
declared
in
slot.
It's
the
first
one
declared
so
it's
saved
in
slot
zero,
then
they're
now
connected.
This
is
the
proxy,
and
this
is
implementation
and
Bob
calls
value
right.
AI
It
forwards
the
call
to
that
implicit
getter
that
solid
generates
and
everything's
return,
32,
which
is
expected
right.
AI
Now Bob calls setValue(1337). Let's see what happens: the call gets forwarded, using call, to the setValue function in the implementation — and it affects the storage of the implementation, not of the proxy. It stores 1337 in slot zero, and 0x2 in slot one.
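This can be reproduced in miniature: give each contract its own storage dict, and model `call` as running the target's code against the *target's* storage, with the proxy as `msg.sender`. This is a toy model of the EVM semantics, not the real thing:

```python
proxy_storage = {0: "0x1"}            # slot 0: implementation address
impl_storage = {0: 42, 1: "0xB0B"}    # the implementation's own storage

def impl_set_value(storage, sender, new_value):
    storage[0] = new_value
    storage[1] = sender

def proxy_forward_with_call(sender, new_value):
    # `call` switches the execution context to the implementation:
    # ITS storage is written, and msg.sender becomes the proxy (0x2).
    impl_set_value(impl_storage, "0x2", new_value)

proxy_forward_with_call("0xB0B", 1337)
```

The proxy's own storage never changes, and the recorded sender is the proxy, not Bob — exactly the problem described next.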
AI
That's weird, right? That's msg.sender. The problem we have here is that call makes the execution context the implementation, not the proxy. The event is emitted from the implementation, which is also not a good thing, because you don't want to have a protocol and tell people to keep changing addresses every time you update the implementation. So the problem with this particular proxy, which uses call, is that the execution context is over there — and we don't want that.
AI
So
what
is
an
execution
context
is
when
you
run
code,
basically,
what
determines
which
storage
space
to
use?
Who
message
sender
is
and
where
emits
come
out
from
right,
there's
more
to
it,
but
that's
like
pretty
much
it.
AI
So
how
can
we
take
the
proxy?
The
execution
context
to
the
proxy
we
just
need
to
use
delegate
call
so
call
wants
the
code
that
we're
running
in
the
current
context
and
delegate
call
runs
the
code
in
the
context
of
the
caller
right.
So
here
we
have
a
second
proxy
right,
which
is
the
only
difference
is
that
it
uses
delicate
call
right.
AI
Now we are using the storage space of the proxy, which is good, and the event is coming from the proxy, which is good. Now Bob calls value() — the getter.
AI
The
execution
context
is
still
that
it's
fine
now
this
is
going
to
delegate
call
to
whatever
is
stored
in
the
implementation,
and
the
implementation
is
in
slot
zero
and
the
value
now
holds
one
three
three
seven
right.
So
what
are
we
delegating
call
to
to
some
contract
at
one
three,
three,
seven,
which
there's
probably
nothing
there
right.
So
we
just
we
have
a
storage
Collision
right.
AI
We
overwrote
the
address
of
the
implementation
with
a
number
right,
so
we
basically
bricked
this
proxy
right.
So
daily
call
is
awesome,
but
it
is
dangerous
because
you
have
storage
collisions
so
to
solve.
This
Bob
goes
to
the
next
level
and
and
these
structures
the
proxy
storage.
So
what
is
this
structuring?
AI
It's
basically
choosing
where
to
put
to
put
to
store
something.
That's
it
it's
not
using
solidity's
custom
slots,
but
just
choosing
where
you
put
it
so
for
solidity.
First
variable!
Is
it
zero?
Second,
at
one
Etc
and
you
have
infinite
slots,
destruction
is
just
picking
a
custom
slot
right.
So
we
have.
AI
The
third
proxy
here
is
called
unstructured
proxy
and
the
difference
is
that
we
are
not
using
solidities
like
regular
storage
slots,
but
we
are
declaring
where
we
store
things
at
slot
1000
right
and
using
that
the
the
code
looks
a
little
bit
weirder,
but
that's
it
so
we
deploy
these
proxies.
He
works
for
we
connect
it
to
implementation.
Oh,
this
is
important.
Can
you
see
the
storage
it's?
AI
The
implementation
address
is
stored
at
the
custom
slot
of
1000..
So
that's
destructured
right
now,
so
now
we
call
set
value,
it
makes
a
delegate
call.
The
execution
context
is
that
we
write
the
new
value
values,
but
they
don't
like
step
over.
Let's
say
the
implementation
address
right
and
the
event
comes
from
from
the
proxy,
which
is
fine,
so
we
have
a
proxy
that
works
right
and
now
we
can
upgrade
the
implementation,
because
we
we
know
that
it
works.
So
we
have
value
holder
V2.
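The collision and its fix can be sketched in the same toy model (hypothetical Python, not the talk's Solidity): with delegatecall, the implementation's slot numbers land directly in the proxy's storage, so the fix is simply to park the proxy's own bookkeeping at a far-away slot:

```python
# Proxy storage is one flat slot space, shared (via delegatecall) with
# whatever slots the implementation writes.

def set_value(storage, value):
    storage[0] = value  # implementation writes its first variable at slot 0

# Naive proxy: implementation address also lives at slot 0 -> collision.
naive = {0: "0x6"}           # slot 0: implementation address
set_value(naive, 1337)       # delegatecalled setter overwrites it
print(naive[0])              # the proxy is now bricked

# Unstructured proxy: implementation address parked at slot 1000.
unstructured = {1000: "0x6"}
set_value(unstructured, 1337)
print(unstructured)          # implementation address survives; no collision
```

The slot numbers (0 and 1000) mirror the talk's example; real unstructured-storage proxies derive the slot from a hash so it is effectively unreachable by accident.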
AI
The
only
difference
is
that
we
added
a
new
variable
called
date
right,
just
added
it
on
top
and
just
whenever
someone
sets
the
value.
We
also
record
record
when
that
happened
right.
AI
AI
This
is
another
type
of
storage
Collision.
We
shifted
the
the
implementation
storage
and
we
have
a
collision
between
versions
of
the
implementation
right.
We
have
income
incompatible,
storage
layout.
AI
So
Paul
understands
that
to
to
avoid
this,
you
in
an
implementation,
you,
you
only
append
to
the
storage
instead
of
like
putting
it
anywhere
right.
So
he
moves
the
date
variable
to
the
end
of
the
previous
storage
layout,
and
that's
it
that's
pretty
a
pretty
simple
fix,
that's
another
rule
of
like
using
proxies
always
append
to,
and
now
this
value
is
gonna
get
whatever
is
stored
at
slot
zero,
which
is
one
three
three
seven.
So
it's
fine!
So
we
we
avoided
that
Collision!
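The "always append" rule amounts to checking that each new version's layout starts with the old one. A hypothetical helper (not from the talk) that upgrade tooling could run before a deployment:

```python
def layout_is_safe(old, new):
    """An upgrade is append-only iff the old slot list is a prefix of the new one."""
    return new[:len(old)] == old

v1 = ["value", "sender"]
bad_v2 = ["date", "value", "sender"]   # prepended -> shifts every existing slot
good_v2 = ["value", "sender", "date"]  # appended -> old slots keep their meaning

print(layout_is_safe(v1, bad_v2))   # the collision the talk demonstrates
print(layout_is_safe(v1, good_v2))  # Bob's fix
```

This catches the second collision type (between implementation versions); the first type (implementation vs. proxy) is what the unstructured slot solves.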
AI
So, storage collisions: it's critical to understand when they occur, and it's basically the two types of collisions that I just showed you — that's it. If you get that, you can pretty much reason about any type of collision. Things to consider: the execution context is always the proxy, so everything is stored in the proxy, and there are the two types of collisions that we just talked about.

AI
The first kind can be avoided by unstructuring the storage layout, and the second one can be avoided by making sure that updates to the storage layout in implementations are valid — always append. And something to consider — this is critical — multiple inheritance flattens your contracts, so you cannot protect the storage layout: you can add a newly inherited contract to your super contract, and it can add, like, five new variables in an unpredicted part of the layout.

AI
So we're pretty happy with this proxy configuration: the context is kept at the proxy, and collisions are avoided using unstructured storage, or storage namespaces, absolutely everywhere. Tooling should still be used to guarantee that there's no storage collision that you don't notice, but the thing is that this custom, manual use of storage makes storage layouts much easier to control.
AI
So, there's no ideal standard solution for multi-contract systems. People often use registries, which is basically a contract that knows every other contract: whenever contract A needs to talk to contract B, it goes to the registry and asks, "hey, I want to talk to B — who's B?" "Here's B." Then it makes a call to contract B, and then B, if it's a sensitive operation, needs to ask: "okay, who's calling me? Is A from the system?"

AI
It asks the registry, the registry says yes, and then, okay, you can perform this. It's complicated, and it gets messy pretty fast. So let's try a pretty crazy solution, which we're calling the router proxy.

AI
We basically have a new contract — AnotherContract — that has one variable called coolValue, and it's also using this storage namespace system instead of Solidity's own storage layout. But that's it: it just records a variable — it sets the stored value and then emits an event.

AI
So we deploy it at 0x8, and then — this is the tricky part, bear with me — Bob uses tooling to build a router. This is basically a table, right?

AI
It has the addresses of the two contracts, ValueHolder and AnotherContract, hard-coded, and its fallback function basically does a binary search to determine which implementation has that function: is it ValueHolder or is it AnotherContract? It just checks the incoming selector and forwards it to the appropriate implementation — that's it — and then it makes the regular delegatecall proxy forwarding.
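The router's fallback can be sketched as a sorted selector table searched with binary search. This is hypothetical Python standing in for the generated Solidity, and the selector values and addresses below are made up for illustration:

```python
import bisect

# Sorted (selector, implementation) table, as the tooling would generate it.
ROUTES = [
    (0x20965255, "0x6"),  # hypothetical selector routed to ValueHolder
    (0x55241077, "0x6"),  # hypothetical selector routed to ValueHolder
    (0xc2985578, "0x8"),  # hypothetical selector routed to AnotherContract
]
SELECTORS = [s for s, _ in ROUTES]

def route(selector):
    """Fallback logic: find the implementation owning this selector, else revert."""
    i = bisect.bisect_left(SELECTORS, selector)
    if i < len(ROUTES) and ROUTES[i][0] == selector:
        return ROUTES[i][1]   # the proxy would now delegatecall here
    raise LookupError("unknown selector")

print(route(0x55241077))  # 0x6
print(route(0xc2985578))  # 0x8
```

Hard-coding the table in code (rather than storage) is exactly why the talk's router saves storage reads compared with a diamond-style dynamic lookup.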
AI
This new AnotherContract V2 is deployed at 0x10. A new router is generated by the tooling, and the only difference is up there: AnotherContract has a new address.

AI
Everything else is the same, and Bob sets the implementation of the proxy to that new router. So that's how you upgrade any contract in your system. So what would a more complex system look like? Maybe like this: you have the main proxy, you have the different storage namespaces of that proxy, then you have the router — which you keep changing every time you upgrade the system — and you have the different modules that specify a particular behavior of your system.

AI
And then you have this thing, which is really cool, because it allows intermodular communication in a way that, as we're going to see, is really efficient and really easy.

AI
There's a concern with this pattern, because you're doing two delegatecalls. Keep in mind that transparent proxies — the ones almost everyone uses — cost about 3,000 gas, universal proxies about 1,600, and this system uses only 2,600 gas, which is all right. And then, intermodular communication: how would a module talk to another module?

AI
You could cast your module as the other module and just call its function — because every module is the system — but the problem with this is that message sender would be lost, because it's a call: you break the delegatecall chain. So you need to delegatecall to the other module — the same self-casting mechanism, but with delegatecall. It works, but there's something much better, which is mixins: pieces of code that know how to interact with another module's storage.
AI
And the nice thing is that you can tell the mixin to interact with the other module without even making a call, so communication becomes super cheap. Let's see an example. We have OwnerStorage, which just declares a struct with a single variable and the mechanism to get the custom storage slot.
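A storage namespace like OwnerStorage maps a name to a pseudo-random slot by hashing it. Solidity code would use keccak-256; Python's hashlib has no keccak, so sha3_256 stands in below purely for illustration, and the namespace strings are invented:

```python
import hashlib

def namespace_slot(name: str) -> int:
    # Solidity would do roughly: bytes32 slot = keccak256("io.myproject.owner");
    # sha3_256 is used here only because hashlib has no keccak-256.
    return int.from_bytes(hashlib.sha3_256(name.encode()).digest(), "big")

owner_slot = namespace_slot("io.myproject.owner")
value_slot = namespace_slot("io.myproject.value")

# Hash-derived slots are effectively random in the 2**256 slot space, so
# independently chosen namespaces will not collide in practice.
print(hex(owner_slot)[:10], hex(value_slot)[:10])
```

This is the same mechanism that makes the talk's namespaced structs safe to share between the proxy and all its modules.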
AI
Then
we
have
the
owner
mix
in
which
knows
that
storage,
and
only
has
an
only
only
owner,
modifier
right
that
does
the
typical
check
right
and
then
in
owner
module.
We
inherit
the
mixing,
which
gives
us
the
only
owner,
modifier
access
right,
and
we
have
a
getter
for
the
owner,
and
now
we
have
a
new
version
of
value,
holder,
B5
right.
AI
So
if
you
want
to
use
this,
you
only
have
to
change
your
code
style
a
little
bit,
it's
kind
of
weird,
but
you
get
used
to
it
fast
because
it's
simple,
you
just
need
to
use
storage
name
spaces
instead,
instead
of
like
regular
variables
and
yeah
you
get
used
to,
it
should
soluti
do
this
under
the
hood
there's
a
proposal
from
maxim4,
so
it
is
something
that
the
solidity
team
is
considering.
This
could
be
a
language
supported
feature
to
have
like
a
contract
hub
and
yeah.
Why
use
the
router?
AI
It's
like
the
router,
merges
all
the
contracts
into
a
single
contract.
So
then,
as
we
just
saw,
we
have
good
easy
Communications
between
the
models
without
having
to
use
a
registry
or
authentication
or
anything.
AI
It's
ideal
for
complex
experimental
systems
and
the
other
nice
thing
is
that
the
router
is,
since
the
addresses
are
hard
coded.
It's
very
explicit.
So
it's
good
for
for
governance.
If
you
want
to
make
an
update
to
the
system,
you
show
your
community
like
this
is
what
we're
going
to
change.
This
is
what
the
configuration
of
the
system
will
look
like
it's
not
hidden
in
some
Dynamic
storage
somewhere,
it's
right
there
and.
AF
Okay, hi. I just want to know if I'm missing something, but this proxy router — could it be the same as the multi-facet proxy, you know, the diamond proxy, but with hard-coded implementations?
AI
Basically, yes. Some people are calling the diamond proxy a dynamic router and this one a static router, and we like this one for our project because it saves storage reads — the addresses are hard-coded — and it's also more explicit that way. What we don't like about diamonds is that if you're a community member, or whatever, and you want to know the current composition of the system, you need to query it a lot. But yeah, otherwise it's the same.
AJ

AI
Well, if you declare your array — or any dynamic type — inside one of those storage namespace structs, then Solidity's regular storage layout system is used, which uses unstructured storage under the hood. If you have a dynamic array, the position of that array's data, I think, is going to be the hash of its slot — say slot 9003, because it's the third variable in the struct — so it's going to be at some other, effectively random place. So probabilistically there are no collisions, even though they're in a struct.
AK

AI
All sorts of tools — okay, yeah. So if you're using OpenZeppelin's proxies, you should use their tooling. If you're using the router — as you can see, we didn't just offer a solution that generates the code; we have tooling that checks the storage layout of your entire project — in that case you can use our code. So I would say: always use the tooling of whoever is providing you the smart contract code of the proxy.
V

AI
I mean, the way Solidity distributes arrays and mappings and all that, it's theoretically impossible to get a collision, so there's no need to sandbox it. The problem with collisions is when people use a design that's not supported at the language layer or at the protocol layer — like the EVM — and they get collisions between two contracts.

AI
So I'm not aware of any attempt at that level to avoid collisions.

AI
You just need to, probably, choose new namespaces and populate the data, or accept that your modules are going to use existing storage, and make sure that new modules declare a new namespace or something. But yeah, sure, you can do it. Even if Solidity makes this a language feature, you just stop using generated routers and deploy a Solidity hub. So it's completely future-proof, I think.
AL
How standardized is this proxy? Are you the only ones using it, or is someone else using it in production? That's my first question. And the second question is: do you think it would be useful to have something like a public function in the proxy to share the signatures that are being used? So that, let's say, a user — or someone who wants to check what is being used — doesn't have to dive into the source code.
AI
Yep. So, standards: not many right now. If you deploy this, you won't see anything on Etherscan, for example — Etherscan doesn't know how to interpret a proxy that has multiple implementations, which is unfortunate, but we're trying to solve that pretty fast; it shouldn't be hard. And for your second question: you would just add a module that adopts ERC-165 and just replies with — it has that function, I don't remember its name, that gives you the entire interface of the whole system.
B
All right, moving on. I think this is almost our last — no, it's not our last talk in developer infrastructure; we are continuing with developer infrastructure. As our next speaker, I would like to introduce yet again another OG. Lefteris has been an Ethereum developer since 2014. He contributed to all kinds of different projects, from the Solidity compiler to C++ Ethereum, to The DAO and Raiden, and now he is the founder of rotki, an open source portfolio tracking tool.

B
Today he will talk about how we can build open source tools that help users understand EVM transactions without the need of being a shadowy super coder. So, without further ado: welcome, Lefteris.
AM
Okay, welcome everybody. Oh, all right, those are my slides — good, cool, magic. So I'll be talking about decoding EVM transactions in an open source, accessible and modular way.

AM
Back then I worked on L2 payments for many years, and I founded rotki. Today I'm talking about user transactions and how we can start to understand them: what are they, and how can we decode them into a human-readable format?

AM
So, everybody here has used Ethereum, and we have all faced these two big problems. When you have a transaction, you do not know how to get it — there is no built-in way to get the transactions for an address. You need to utilize some kind of third-party service, and that will never be decentralized.
AM
So what does a transaction look like? Everybody has opened a block explorer and seen this, you know, hex blob. There is no metadata, there is no human-readable info — that's what every new user to Ethereum is greeted with. There is no way to understand what your transaction from, like, three or four years ago has done.

AM
So, we have all used Etherscan. What are the ways to decode a transaction? The insight it gives you looks like this — this is a complicated transaction that does swaps over multiple protocols.

AM
It's easy to use — you just type the transaction hash into Etherscan — it gives you useful insight, and it's totally free. Of course, there are cons: Etherscan is centralized, and it's proprietary and closed. That means there is no way for me, as a developer, to see how they do what they do, or to extend it in any way.

AM
They know everything about you. I mean, I know the guys from Etherscan — they are good guys — but you never know who in there could be malicious. They can map IPs to your addresses and know who owns what address and where they're located. And it actually does not decode everything: we've all seen that there are transactions that Etherscan doesn't have insights for, and there are other tools that actually can do this. Then there's The Graph.
AM
The
graph
is
a
way
it's
kind
of
an
indexer
per
protocol,
you
they
have
sub
graphs
and
they
index
the
chain
for
a
particular
protocol
and
the
engine
query
this
index
and
get
any
insight
you
want
the
cons,
so
the
process
is
that
it's
very
good
for
single
protocol
data.
So,
for
example,
here
we
have
the
other
V2
theorem
by
massari.
AM
This
is
a
subgraph
that
say
the
other
guys
can
run
in
their
interface
and
query
any
information
they
want
for
their
protocol.
But
it
has
many
many
disadvantages.
It
needs
payment
per
query.
This
is
the
revision
that
everybody
should
be
paying
for
every
query:
it's
built
for
single
protocol
data,
so
they
subgraphs.
So
there
is
no
generic
solution.
If
you
have
a
portfolio
tracker
or
a
wallet
or
something
that
wants
to
decode
every
single
transaction,
no
matter
what
protocol
it
is
in
subclass
will
not
work.
AM
You
will
have
to
basically
query
every
single
subgraph
and
create
a
subgraph
for
everything
that
doesn't
is
unsupported
yet
and
it
does
not
work
with
the
local
apps.
What
I
like
to
call
true
dabs?
So,
basically,
when
the
company
that
makes
the
application
hosts
host
the
code,
they
can
have
an
API
key
and
pay
for
the
queries,
but
when
it's
a
local
application,
there
is
no
way
for
this
to
to
happen,
and
there
are
these
other
centralized
apis
like
covalent,
Morales
and
Alchemy
they're
easy
to
use.
AM
They
have
pretty
cool
apis,
which
can
decode
transactions,
give
you
all
the
transactions,
but
the
same
Concepts
errors
can
apply
like
it's
centralized.
It
knows
everything
about
you,
it's
proprietary,
so
you
cannot
extend
it.
It's
not
modular
at
all.
AM
If any of you has tried to get the history of transactions for your address, you will find out very easily that there is absolutely no built-in way to do this for Ethereum — there is no RPC method. This is all due to the way that the EVM works and how the clients are built, but it is really, like, absolutely crazy that there is no way for you, as a user, to get all of the transactions for your address. Someone who comes to Ethereum as a developer from outside web3 and sees this...

AM
...thinks that we're just crazy, that this is broken. But it's not all gloom and doom — there are ways to do this. Etherscan again comes to the rescue: they have many APIs, and if you combine three of them, I think — this one for normal transactions, then there is one for token transfers and one for NFTs — you get a pretty accurate picture of what transactions your address has done.
AM
It
is
centralized
so
it
can
go
down,
they
can
cut
access
to
the
API
or
they
can
do
what
I
said
before
that
they
can
monitor
you
and
map
IP
to
your
address
the
truly
decentralized
way
to
go
around.
This
is
something
by
my
friend
Thomas
de
Ross,
It's
called
true
blocks.
It
is
really
the
best
and
most
complete
way
to
get
transaction
data
through
blocks.
AM
It
takes
all
appearances
of
an
address,
really
I
have
seen
demos
where
TJ
basically
shows
etherscan
and
then
Compares
it
with
true
blocks,
and
you
can
see
that,
for
some
addresses
through
box
does
indeed
detect
more
appearances
than
others.
Can
it
is
decentralized,
so
it
runs
on
top
of
your
local
node.
So
you
do
not
need
to
to
do
any
other
network
queries.
It's
super
fast,
like
it's
really
like
milliseconds
or
seconds,
depending
on
the
amount
of
addresses
that
you
query
for,
and
it's
built
to
share
the
this
index
with
others.
AM
AM
It
does
require
a
local
node,
so
you
need
to
be
running
an
ergon,
node,
I,
think
I'm,
not
sure
if
it
works
with
others
and
of
course
you
require
through
blocks
itself,
to
create
the
index
so
building
on
this
I
would
like
to
like
present
what
I
try
to
call
the
stack
of
3D
centralization,
which
is
something
that
we
should
be
driving
striving
for
in
crypto.
So
everybody
should
try
to
run
their
own
node,
so
something
like
a
tab,
node
or
a
Raspberry
Pi,
with
whatever
setup
you
guys
want
to
have.
AM
You
should
run
your
own
client
right
like
for
whatever
chain.
You
have
run
a
client
for
that
saying
that
you
want
to
to
use
and
triples
actually
works
for
all
chains,
all
evm
chains,
so
you
can
have
an
index
that,
like
Roblox
on
top
that
will
index,
attain
and
provide
you
an
answer
to
the
question
how
the
heck.
So
what
address
is
sorry
what
other
sizes
does
my
address
have
and
on
top
of
it
all
to
come
and
bind
it
all?
AM
You
have
the
aggregating
and
decoding
level
that
something
like
rotkey,
but
not
like
a
decoding,
so
Roti
right
now
is
a
an
application,
but
imagine
a
platform
where
you
can
have
a
generic
way
to
go
from
transactions
to
a
common,
readable
format
for
what
they
do
and
it's
actually
consumable
by
humans.
AM
So
going
from
how
we
get
data
to
what
actually
would
go
into
this
decoding
platform.
That
was
that
my
talk
is
about
so
once
you
have
all
the
data
like
either
from
method
or
from
True
blocks.
What
kind
of
data
is
this?
If
you
have
tried
to
play
with
understanding
transaction
history
of
ethereum?
You
know
that
there
is
two
ways
to
get
data:
it's
either
a
trace,
a
transaction
trace
or
transactionary
seats.
AM
There
are
two
kinds
of
traces:
one
is
the
gift
style
trace
and
the
other
is
a
parity.
Trace
I'm
gonna
go
through
them
a
bit
fast
for
those
of
you
who
do
not
know
what
they
are
so
give
style.
Trace
is
the
tracing
that
comes
with
the
git
client.
AM
When
a
transaction
happens,
it
touches
a
multi,
so
it
touches
multiple
contracts
right,
so
you
make
a
transaction
to
a
contract
and
this
one
may
make
a
call
to
another
contract
and
so
on
and
so
forth,
and
as
they
do
this,
they
touch
the
state
of
these
contracts
and
they
they
make
some
changes.
So
this
is
what
the
trace
of
the
transaction
is
and
the
give
style
Trace
is
the
most
completely
like
super
detailed.
It
has
every
single
step
of
the
execution
with
the
op
code,
the
program
counter
the
storage,
diff
Etc.
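The nested call structure a trace exposes can be flattened into the kind of call stack the speaker shows next. The sketch below uses the shape of Geth's `callTracer` output (type / from / to / calls), but the concrete frame values are made up:

```python
# A made-up trace in the shape of Geth debug_traceTransaction callTracer output.
trace = {
    "type": "CALL", "from": "0xuser", "to": "0xrouter", "calls": [
        {"type": "DELEGATECALL", "from": "0xrouter", "to": "0xpool", "calls": [
            {"type": "CALL", "from": "0xpool", "to": "0xtoken", "calls": []},
        ]},
    ],
}

def flatten(frame, depth=0, out=None):
    """Depth-first walk producing one indented line per call frame."""
    if out is None:
        out = []
    out.append(f"{'  ' * depth}{frame['type']} {frame['from']} -> {frame['to']}")
    for sub in frame.get("calls", []):
        flatten(sub, depth + 1, out)
    return out

for line in flatten(trace):
    print(line)
```

Each nesting level corresponds to one contract calling another — the chain of touches the speaker just described.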
AM
It gives you a call stack like this of what your transaction did — the call trace of the transaction — and this actually does not require an archive node, by the way. The screenshot is from a very nice article by banteg on traces that came out, like, two months ago, I think — so Google "banteg transaction traces" and you can read about it in more detail. And the other thing that you can use to understand what a transaction...

AM
...has done is the Parity-style state diff — and this gives you — oops, sorry, let's go back — it gives you, for each account that you touch, the difference in balance, code, nonce and storage. The cool thing here is that, if you have the ABI, you can play with it a bit and get readable names for the storage slots and how they changed. So this is a very useful insight into what the transaction did. And then, of course, transaction receipts.

AM
Events are actually contained in something called the transaction receipt. So, let's say, for a token transfer it's, I don't know, like transfer, source, destination and value, or something — almost everything generates them. It looks like this — it's all hex — but if you have the ABI of the contract, you can decode it into a human-readable format.
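Decoding a receipt log works exactly as described: match topic 0 against the event signature hash, then read the indexed fields from the topics and the rest from the data. A minimal, stdlib-only sketch for the ERC-20 Transfer event — the log values below are invented:

```python
# keccak256("Transfer(address,address,uint256)") -- the well-known topic 0
TRANSFER_TOPIC = "0xddf252ad1be2c89b69c2b068fc378daa952ba7f163c4a11628f55a4df523b3ef"

def decode_transfer(log):
    """Turn a raw ERC-20 Transfer log into human-readable fields."""
    if log["topics"][0] != TRANSFER_TOPIC:
        return None
    return {
        # indexed address params sit in topics, left-padded to 32 bytes
        "from": "0x" + log["topics"][1][-40:],
        "to": "0x" + log["topics"][2][-40:],
        # the non-indexed uint256 value sits in the data field
        "value": int(log["data"], 16),
    }

log = {  # an invented Transfer of 1 token (18 decimals)
    "topics": [
        TRANSFER_TOPIC,
        "0x" + "0" * 24 + "ab" * 20,   # from (made up)
        "0x" + "0" * 24 + "cd" * 20,   # to (made up)
    ],
    "data": "0x" + hex(10**18)[2:].rjust(64, "0"),
}
print(decode_transfer(log))
```

Real decoders do this generically from the contract's ABI; the hard-coded topic here is just the one well-known case.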
AM
So
this
is
how
you
gather
that
data,
but
gathering
this
data
is
actually
expensive.
It
takes
time
and
exactly
because
this
is
expensive
in
in
resources.
Persistence
is
key,
so
any
kind
of
platform
that
you
create
and
the
thing
that
we
have
created
the
draft
key
needs
to
have
data
persistence.
AM
You
can
choose
various
ways:
we've
gone
with
a
simple
sqlite
database
for
now,
but
this
way
when
you
have
gathered
all
of
the
data-
and
you
know
that
they
are
true
and
will
not
change,
then
you
can
just
take
it
out
of
the
database
and
reuse
it
instead
of
having
to
re-query
again
through
blocks
or
etherscan
or
make
a
trace
again.
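The persistence idea is straightforward: confirmed chain data is immutable, so fetch once and serve from disk afterwards. A minimal sketch with Python's sqlite3 — the schema and fetcher are invented for illustration, not rotki's actual schema:

```python
import sqlite3

db = sqlite3.connect(":memory:")  # rotki uses a file-backed SQLite database
db.execute("CREATE TABLE receipts (tx_hash TEXT PRIMARY KEY, raw TEXT)")

calls = {"remote": 0}

def fetch_remote(tx_hash):
    calls["remote"] += 1          # stands in for Etherscan/TrueBlocks/a trace
    return '{"logs": []}'

def get_receipt(tx_hash):
    row = db.execute(
        "SELECT raw FROM receipts WHERE tx_hash = ?", (tx_hash,)
    ).fetchone()
    if row:
        return row[0]             # already gathered: no network query needed
    raw = fetch_remote(tx_hash)
    db.execute("INSERT INTO receipts VALUES (?, ?)", (tx_hash, raw))
    return raw

get_receipt("0xabc")
get_receipt("0xabc")
print(calls["remote"])  # only the first lookup hit the expensive source
```

The cache is safe precisely because, as the talk notes, this data "is true and will not change" once the transaction is final.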
AM
So
we
talked
about
where
you
can
get
data,
how
to
get
it
and
then
I'm
going
to
go
to
the
mid
of
the
presentation,
which
is
the
decoders
themselves.
AM
For
this
we
can
see
that
you
can
check
for
its
receipt
for
its
log
the
address
and
then
send
it,
depending
on
the
address
to
either
the
generic
erc20
transfer
decoder,
if
it's
a
uni
swap
swap
to
the
units
of
decoder
and
so
on
and
so
forth,
like
Ave
Etc,
and
all
of
this
will,
at
the
end,
emit
a
common
event
format.
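The dispatch just described — each log routed by contract address to a protocol decoder, with a generic fallback — can be sketched like this (the addresses and decoder bodies are hypothetical):

```python
def decode_erc20_transfer(log):
    # generic fallback: treat the log as a plain token transfer
    return {"type": "transfer", "address": log["address"]}

def decode_uniswap_swap(log):
    return {"type": "swap", "address": log["address"]}

# Contract address -> protocol-specific decoder; anything else falls
# back to the generic ERC-20 transfer decoder.
DECODERS = {
    "0xuniswap_pool": decode_uniswap_swap,
    # "0xaave_pool": decode_aave, ... one entry per supported protocol
}

def decode_log(log):
    decoder = DECODERS.get(log["address"], decode_erc20_transfer)
    return decoder(log)  # every decoder emits the common event format

print(decode_log({"address": "0xuniswap_pool"})["type"])  # swap
print(decode_log({"address": "0xsome_token"})["type"])    # transfer
```

Because every decoder returns the same common event shape, adding a protocol is just adding one entry to the table — the modularity the talk is after.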
AM
What's
more,
some
decoders
feed
data
to
other
decoders.
So
oh
I
have
to
get
used
to
reusing
this.
The
esc-20
transfers
create
the
rc20
transfer
event
and
this
gets
fed
to
unisop,
which
translates
it
into
swabs,
so
yeah
I
started
decoder
platform
is
made
on
modularity.
This
is
rodky's
repo,
and
this
is
where
we
have
all
the
decoders
and
it's
like
a
huge
list.
AM
It
doesn't
have
all
of
them
because
they
don't
fit
on
the
screenshot,
but
the
idea
is
that
it's
easy
to
write,
easy
to
use
and
drag
and
drop
that
uses
drag
so
you
drop
it
in
there
and
it's
caught
by
the
the
system,
and
then
a
new
decoder
is
taking
into
account
whenever
we
we,
we
decode
a
transaction.
That's
the
idea,
we're
not
there!
Yet
we
build
binaries,
and
this
is
not
as
modular
as
it
should
be,
but
the
idea
is
that
it
should
just
big,
drag
and
drop.
AM
This is hard to read, probably, but the idea is that from the ERC-20 transfer decoder you get the ERC-20 transfer, and then you see: oh, it's a spend, the counterparty is the Hop bridge, and the asset and the amount match. So then we transform it into the common event format for Hop and give it a nice, suitable explanation: that it bridges the amount of ETH to either your own address or to some other address on the other chain.

AM
So: the name of the chain, "via Hop protocol". I talk a lot about the common event format; it's kind of a POC, because we are the only consumer right now, and it's changing. This is how it looks in the code. It has a sequence index inside the transaction — where in the transaction it happened — a timestamp, and a location. Location is mostly something that we use in rotki, because we abstract everything into this format: not only Ethereum transactions, but your Kraken trades...

AM
...your Ethereum staking — everything gets abstracted into this least-common-denominator format. We have the history event type and the history event subtype — this is the meat of how you define an event — the asset and the balance change, and then some extra stuff, like the location label.

AM
The counterparty is whether I sent it from me to someone, or whether I got something from someone else; and there's some extra data — like, if it's a CDP for Maker, we have the CDP ID there, etc. As I said, everything is broken down into this thing: a swap, for example, is three of these events — an amount out, an amount in, and a fee — or it can be two, if there is no fee.
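The fields just listed can be sketched as a dataclass — a simplification of the structure described, with field names approximated from the talk rather than copied from rotki's code — and a swap represented as its two or three component events:

```python
from dataclasses import dataclass

@dataclass
class HistoryEvent:
    sequence_index: int   # position inside the transaction
    timestamp: int
    location: str         # e.g. "ethereum", "kraken"
    event_type: str       # the "meat": spend / receive / ...
    event_subtype: str
    asset: str
    amount: str
    counterparty: str = ""

# A swap is two or three such events: amount out, amount in, optional fee.
swap = [
    HistoryEvent(0, 1_660_000_000, "ethereum", "spend", "spend", "ETH", "1.0", "uniswap-v2"),
    HistoryEvent(1, 1_660_000_000, "ethereum", "receive", "receive", "DAI", "1700", "uniswap-v2"),
    HistoryEvent(2, 1_660_000_000, "ethereum", "spend", "fee", "ETH", "0.003", "uniswap-v2"),
]
print(len(swap))
```

The point of the least-common-denominator shape is that a Kraken trade, a staking reward and an on-chain swap all reduce to lists of these same records.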
AM
This should be read — unfortunately, because I didn't take a nice screenshot — from the bottom to the top. So: I claim my Badger airdrop — that has two events, the gas fee that's burnt and the airdrop claim. Then an approval to 1inch V2, and the gas fee for this; and then the swap in 1inch for, basically, immediately dumping the tokens.

AM
So this is kind of the vision that we would like, at some point, to reach with rotki. We went from a portfolio tracking app to a common EVM decoder, and now more towards a middleware that would offer an abstraction for everything in accounting for crypto.
AM
Why
right
like
people
would
ask
why
the
heck?
Why
would
you
need
this?
Because
everybody
is
Reinventing
the
wheel?
There
is
again,
as
I
said:
never
in
protocols,
different
exchanges,
chains
jurisdictions,
it's
impossible
to
keep
up.
I
have
talked
I
have
spoken
with
people
in
both
small
startups
and
big
names
in
the
field
for
portfolio
trading
and
crypto
accounting.
Everybody's
saying
that
this
is
just
too
much
to
give
up
and
maintaining
just
one
module
is
a
full-time
job.
AM
So
I
believe
that
there
is
a
solution
to
this
problem,
so
the
problem
of
everybody
Reinventing,
the
wheel,
has
a
solution
that
we
can
have
an
open
source
platform
middleware,
if
you
want
maintained
by
a
core
team,
but
with
contributors
from
the
entire
industry
and
used
by
multiple
projects
and
for
the
problem
of
different
protocols
and
jurisdictions
and
being
it
impossible
for
a
single
organization
to
keep
up.
AM
Again,
my
amazing
inkscape
skills.
Imagine
a
middleware
where
you
have
like
someone
who
wants
to
use
to
do.
Port,
authoritarian
accounting
has
the
the
core
uses
Bitcoin
and
ethereum
plugs
in
the
Yen
module
the
other
module.
AM
He
also
wants
to
do
accounting
plugs
in
the
accounting
module
and
because
he's
in
Germany
he
plugs
in
the
German
accounting
with
fifo,
multiple
Depot
and
early
for
accounting
methods,
and
imagine
this
middleware
basically
being
used
by
many
people
in
the
field
and
in
the
end,
just
everything
plugging
into
this,
because
it's
better
to
use
a
common
open
source
middleware
rather
than
every
single
application
Reinventing
the
wheel.
AM
Any
such
platform
would
have
some
super
basic
requirements.
It
needs
to
be
open
source
like
everybody
does
tries
to
reinvent
the
wheel,
the
proprietary
closest
way.
This
is
absolutely
idiotic.
It
needs
to
have
a
modular
architecture
so
that,
as
we
saw
before
like
be
pluggable,
have
pluggable
modules
you
you're
in
a
different
country
than
Germany.
AM
You
can
just
plug
the
I,
don't
know
Netherlands
accounting
module,
you
don't
use
ethereum,
you
use
kusama,
you
do
the
substrate
module
and
you
can
do
all
the
polka.com
Etc
it
needs
to
have
a
this
is
a
hard
requirement
to
achieve,
but
it
needs
to
be
multilingual.
It
needs
to
have
multilingual
bindings
because
we
attract
your
python
house.
We
know
how
to
use
other
languages,
but
most
of
our
code
is
in
Python.
AM
Such
a
middleware
should
not
limit
the
user
to
so
we
cannot
ask
the
entire
industry
to
spot
to
python
if
they
are
to
to
use
such
a
thing.
The
platform
should
be
built
in
a
multilingual
way
and
as
for
incentivization,
the
creators
and
maintenance
or
modules
should
be
incentivized
to
actually
contribute
to
this
platform.
AM
If
it
becomes
an
open
source
standard,
then
everybody
should
be
like
oh
wait,
I
mean
we
made
this
new
platform.
We
need
to
write
about
the
module
for
it
also
because
otherwise
it
just
like
nobody
will
use
it
and
the
core
team
that
builds
and
maintains.
It
also
needs
to
be
incentivized
in
in
some
way
in
order
to
be
able
to
keep
building
the
ways
that
this
can
be
is
through
support
to
the
various
teams
or
through
slas
for
software
level.
AM
Agreements
for
companies
that
may
not
want
to
have
open
source
code,
so
you
can
have
do
a
licensing.
So
it's
a
bit
funny
I
wanted
to
show.
The
I
saw
the
timeline
thing
and
I
was
in
the
template
that
they
gave
us
and
I
thought
hey.
Why
not
put
a
timeline
so
how
the
heck
did
we
get
here?
2017
I
just
need
to
do
my
taxes
in
crypto
and
I
was
like
okay
I'm,
not
gonna.
What
is
the
the
way
to
do
this?
There
was
Bitcoin
to
attacks.
AM
There
was
nothing
else
back
then
I'm
not
going
to
use
a
centralized
service.
Okay,
I,
just
don't
trust
them.
So
I
just
made
some
python
CLI
scripts.
It
worked
I've
not
been
sued
by
the
German
government.
Yet
so
it
won't
I,
don't
know
and
later
build
a
UI
around
them.
In
2020
we
made
it
into
a
company.
We
were
a
team
of
two
people
and
maybe
we
had
200
users,
300
and
maybe
10
paid.
AM
So
last
year
the
app
had
grown,
we
hired
one
more
developer
and
we
were
2000
users
and
200
paying
users
in
the
beginning
of
this
year.
There
is
many
people
who
use
rotkey
right
now,
some
like
it
some
complain
with.
They
always
want
more
and
more,
but
it
is
at
a
level
that
many
people
can
use
it.
We
are
a
team
of
seven
now,
six
thousand
around
six
thousand
users,
it's
hard
to
know
because
we
it's
an
open
source
app
and
we
don't
have
Analytics
and
500
550
pen
users.
AM
We
came
all
this
way
without
anything
like
it
was
just
completely
bootstrapped
and
from
basically
your
donations
through
Bitcoin
and
from
Integrations
with
other
companies
like
Optimus,
gave
us
a
grant
lately
before
that
there
were
kusama
and
so
on
and
so
forth.
AM
So
for
getting
this
POC
that
I
described
here
to
the
full
rot
division,
we
would
need
to
go
further
from
here
and
try
to
grow
and
potentially
get
some
funding,
because,
with
the
current
team
that
we
have,
it's
well
impossible
to
actually
build
this
Vision.
The
POC
cannot
grow
to
a
level
of
something
that
can
be
used
by
the
entire
industry
with
just
six
people
six
developers.
This
is
just
impossible
so
with
this
I'm
coming
to
the
closing
notes.
AM: So if you like rotki, if you would like an open source, local-first, modular thing that can be used by everybody in the industry, then please talk to me, or check out, yeah, this thing again: check out rotki.com/jobs. We have some open positions.
AM: We came here thanks to you, like, seriously. It's a bootstrapped project and we would not be here without Gitcoin donations and without our premium users, so keep supporting us. You can donate in Gitcoin grants, or buy a premium subscription and unlock all of the features of rotki, and you can join our community on Twitter or Discord; that's where all the support is.
AM: It would be pretty cool if you could join the chat and join our community, and if you're interested in helping us grow and in realizing this vision that I tried to present here, then talk to me, either any day at the conference, or write an email to lefteris@rotki.com.
AN: Good afternoon. You mentioned The Graph and TrueBlocks, and one thing that I don't entirely understand about these two tools for querying historical data is that they use a client node, and this node basically is in charge of storing data in an SQL database, right? So isn't this a centralized way of saving data? Is this where IPFS comes into play, and if that's the case, can you explain how IPFS and the SQL database work together to solve this?
AM: IPFS doesn't come anywhere in there. The Graph and TrueBlocks are completely different things. The Graph creates an index on top of your already existing node data, and TrueBlocks does the same, but TrueBlocks does it in a generic way for all of your transactions, while The Graph has specialized subgraphs, written by the developers of a particular protocol, that basically build an index for that protocol, and this lives on top of your node data. It is decentralized; The Graph by design is also decentralized.
AM: TrueBlocks is itself also decentralized. It creates this index, and this index is, I think, pinned in IPFS and shared with others who use TrueBlocks. I'm not totally sure on the details of the sharing in TrueBlocks, because I'm not a developer on it, but I think that this is how they do it, and for The Graph, they have a decentralized network.
AO: Great talk. So, if I'm understanding this right, the idea is: if we build this out and get it out there, then we could get around using services like Tenderly and just basically run Tenderly at home, for transaction tracing and simulations and all that?
AM: Yeah. I view it more from a historical perspective, and Tenderly is a current-state emulator, but yeah, I suppose you could also do the same. Correct me if I'm wrong, but they're proprietary, right?
AL: Great presentation, great work. From the developer perspective, as a Solidity developer, I think it will be very useful. For example, I write a smart contract and I write the decoder; let's say I write a decoder and I host it on IPFS. Have you thought about how we can have that standardized? Let's say somewhere we define the URI or hash where we set our decoder, and you guys can use it, because you have a list of those on GitHub, right?
AM: Yeah, yeah, I mean, that is just a different medium of delivering the decoder, but yes, it's exactly what you described. I'm not going to write every decoder; my team is not going to write every decoder, that's impossible. But the idea is exactly that: when you write your smart contract for a protocol, you say, okay, I'm also going to write a decoder for this, and then somehow it should be delivered to this middleware. Yes, it can be through a link, it can.
B: Thank you so much, Lefteris, that was a really exciting outlook and vision, and of course, yay for open source. From one developer legend to the next, I am very excited to announce our upcoming speaker. It's Richard, who's the author of ethers and the co-author of Firefly, and today he will walk us through what's new in version 6 of ethers. So please give a warm welcome to Richard.
AP: Let's push buttons and see what happens. Thank you, thank you. Hello, buenos días, everyone. How's everybody? Yeah, okay. Let's jump in; I think my slides explain most things. I'm still pushing... there we go. So my name is Richard Moore, I go by ricmoo online, and I write an Ethereum library called ethers.js.
AP: So what is ethers? It aims to be a complete, compact and friendly Ethereum library that developers can use. But part of user-friendliness is safety: you don't let the developer do things that bite them in the ass without realizing it. So one of the cool things is the default provider. I don't know where I should stand. So, the default provider: basically, it lets you just connect to Ethereum.
AP: Historically, you had to either run a node or sign up for Infura; you had to do something before you could start using Ethereum. So we've got a bunch of relationships with these third-party providers, and right off the bat you can just connect to Ethereum and start doing simple things. It's heavily throttled, but you can at least do something before deciding whether you want to get your own Infura ID and all that. This is all the old stuff, so I'm going to go over it quickly.
AP: It's written in TypeScript now, with very few dependencies and 26,000 and growing test cases. ENS is a first-class citizen. Everything, including all dependencies, is MIT licensed, and there's extensive documentation; in v6 the documentation is getting much better. An important thing to understand is that I made ethers because of something I needed, and it's something that I use a lot, so it's in my best interest to keep making it better over time. I think dogfooding is very important to keeping a library good.
AP: Otherwise you get a library that is very good at hello world but very difficult at anything more complex. So, getting into the new things in v6 that's coming out: there's currently a v6 available both on npm and GitHub. It's still very beta, but it's there to try out. So one of the biggest features is modern ES features; currently, v5 targets ES3.
AP: So if you are trying to run ethers in IE 3 from 2002, it'll probably still mostly work, but that's no longer a priority, and so the goal is to start adding kind of new-age JavaScript things. It heavily reduces the code size, because rather than having this big class for big numbers, you can actually just use the built-in big integers that JavaScript now provides, so, I mean, you can now use the triple-equals sign, which is super exciting.
AP: If you want to use a big number, you just put an n on the end, and that tells JavaScript this number should be treated as a big integer literal: don't go doing IEEE 754 truncation to it, and that sort of thing. I'm going to try to go through this faster-ish. If I'm talking too fast, let me know, but basically it'd be nice to get to the end so people who have questions can ask questions.
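As a plain-JavaScript aside (no ethers required), the `n` suffix and strict equality he mentions look like this:

```javascript
// One ether in wei (10**18): not all integers this large are representable
// exactly as Number (precision ends at 2**53), so use a bigint literal
// with the n suffix.
const oneEther = 1000000000000000000n;

// Strict equality works directly between bigints.
console.log(oneEther === 10n ** 18n); // true

// Arithmetic stays exact; no IEEE 754 truncation.
const onePercent = oneEther / 100n;
console.log(onePercent === 10n ** 16n); // true

// A bigint and a Number are never strictly equal; convert explicitly.
console.log(1n === 1);         // false (different types)
console.log(Number(1n) === 1); // true
```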
AP: So another really cool feature that modern JS offers is proxies. For those that don't know: proxies basically are an object, and if you access a property that doesn't exist, code gets to run first and decide whether or not it should let you think something exists and continue. So this heavily improves how the contract object works. Historically, you had an ABI with multiple different signatures for different methods.
AP: Basically, you could have all sorts of whitespace, extra little letters and whatnot in there, and it will all get figured out at runtime and mapped back into the right thing for you. Basically, for all those people who've ever had duplicate ABI definition errors and have filed an issue asking how to get rid of them:
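A minimal sketch of the Proxy mechanism he describes, in plain JavaScript. This is not ethers' actual implementation, just the idea of resolving a method name against ABI-style signatures at access time (the signatures here are made up):

```javascript
// Made-up ABI fragment: two overloads of "foo" plus an unambiguous method.
const methods = {
  "foo(address)": (addr) => `foo/address:${addr}`,
  "foo(uint256)": (n) => `foo/uint256:${n}`,
  "bar()": () => "bar",
};

// The get trap runs on every property access and resolves the name at
// runtime: a full signature matches directly; a bare name matches only
// if exactly one signature starts with it.
const contract = new Proxy({}, {
  get(_target, prop) {
    if (typeof prop !== "string") return undefined;
    if (prop in methods) return methods[prop]; // full signature given
    const matches = Object.keys(methods).filter((sig) => sig.startsWith(prop + "("));
    if (matches.length === 1) return methods[matches[0]];
    if (matches.length > 1) {
      throw new Error(`ambiguous method "${prop}"; use a full signature`);
    }
    return undefined;
  },
});

console.log(contract.bar());               // "bar": bare name, unambiguous
console.log(contract["foo(uint256)"](42)); // "foo/uint256:42": explicit overload
```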
AP: This is for you. Typed values are another cool thing. Going back to the same situation where you have two different methods, foo and foo, and they take in different things: in v5, there's no way for it to know which one you meant if you pass in two parameters. An address looks very much like a number, a 160-bit number, so there's no way for ethers to actually know which one you meant to call. And so now you can... yeah. So this would be an error.
AP: It doesn't know what this is. This looks like it could be a number; it is a perfectly valid uint256. So now you can force it and tell it that it's a typed object of an address, and then, if you do this, it'll automatically know: oh, this is the one you wanted. So again, really cool things we can do. This is sort of related to proxies, but kind of its own thing.
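A simplified stand-in for the typed-value idea, in plain JavaScript; the `Typed` helper and overload names below are illustrative, not the real ethers v6 API surface:

```javascript
// Illustrative Typed tags (not the real ethers API): the type travels
// with the value so overloads can be disambiguated.
const Typed = {
  address: (v) => ({ type: "address", value: v }),
  uint256: (v) => ({ type: "uint256", value: v }),
};

// Two overloads a bare value cannot distinguish: a 160-bit address is
// also a perfectly valid uint256.
const overloads = {
  "transfer(address)": (v) => `to address ${v}`,
  "transfer(uint256)": (v) => `amount ${v}`,
};

function callTransfer(arg) {
  if (arg && arg.type) {
    return overloads[`transfer(${arg.type})`](arg.value); // dispatch on the tag
  }
  throw new Error("ambiguous argument: wrap it with Typed.address or Typed.uint256");
}

console.log(callTransfer(Typed.uint256(42n)));  // "amount 42"
console.log(callTransfer(Typed.address("0xab"))); // routed to the address overload
```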
AP: Likewise, if you're doing programmatic things, sometimes you just have a bunch of keyworded objects. So, for example, you've got transferFrom; it's got a from, a to and a value. So this is how you would do it in v5. You can still do this in v6; this works fine, using positional parameters, or, for those that are used to Python...
AP: ...they call them positional versus keyword arguments. But now you can also pass keywords: pass in a from, a to and a value, and it knows that those things should get deconstructed into the from, the to and the value. The order doesn't matter either. If you wanted to construct this object from a bunch of other lines of code reading stuff in, you can just build an object up and add a to...
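A sketch of the positional-versus-keyword deconstruction in plain JavaScript (the helper and parameter names are hypothetical):

```javascript
// Hypothetical parameter list for transferFrom(from, to, value).
const paramNames = ["from", "to", "value"];

// Accept either positional arguments or a single keyword object and
// normalize both to a positional array; key order never matters.
function resolveArgs(args) {
  if (args.length === 1 && typeof args[0] === "object" && args[0] !== null) {
    return paramNames.map((name) => args[0][name]);
  }
  return args;
}

// Positional, v5-style:
console.log(resolveArgs(["0xaaa", "0xbbb", 100n]));

// Keyword object, built up incrementally in any order:
const kw = { value: 100n };
kw.to = "0xbbb";
kw.from = "0xaaa";
console.log(resolveArgs([kw])); // same positional result
```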
AP: ...if it's not null, and add a value if it's not that, and then it's off to the races. Okay, another big thing is: things are now classes. Okay, I'm down to 19 minutes, so, things have classes. Basically... oh yes, I'll dive into this a bit more, because I think this is a really cool feature in general, something we should be doing more of in Solidity. This is kind of steering away from ethers specifically into Solidity more generically.
AP: If you're using v5, a signature is literally just some dumb object. It's not a class; it's just an object with the values r, s and v, and that's all you get. By making it a class, it can take in anything: whether it's a raw signature, or an r, s and yParity, or an r, s and v, or an r and a yParityAndS, whatever random combination of things you have, feed it in and it'll figure it out.
AP: It'll also change the v for you; it'll update the yParityAndS; it'll reflect all the changes so that the signature stays consistent, which means... there's a slide for that, right. So you can get all the stuff out of it and that sort of thing, but the cool thing I want to show off is... maybe I'll come back to this in a second. That's how things are done today.
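A minimal sketch of keeping a signature's representations entangled; this is not ethers' real Signature class, just the consistency idea using getters and setters:

```javascript
// Minimal signature object: r and s are hex strings; the recovery parity
// is stored once, and both representations (v = 27/28, yParity = 0/1)
// derive from it, so they can never disagree.
class Signature {
  constructor({ r, s, v, yParity }) {
    this.r = r;
    this.s = s;
    // Accept whichever parity form was provided.
    this._parity = yParity !== undefined ? yParity : v - 27;
  }
  get v()        { return this._parity + 27; }
  set v(v)       { this._parity = v - 27; }
  get yParity()  { return this._parity; }
  set yParity(p) { this._parity = p; }
}

const sig = new Signature({ r: "0x01", s: "0x02", v: 28 });
console.log(sig.yParity); // 1, derived from v
sig.yParity = 0;
console.log(sig.v);       // 27, stays consistent automatically
```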
AP: Maybe I will explain this quickly. So this is how things are basically done today: people pass a signature into Solidity for doing ecrecover, either as bytes, or they deconstruct it themselves (that'll be the third slide). For now: they'll pass this bytes object in, and then you've got this expensive, weird byte-manipulation library. I mean, it's well written, it's by OpenZeppelin, I think, but you're doing a bunch of string manipulation on byte arrays to figure out where the r, s and v sit.
AP: It's also a huge amount of data otherwise, and then this is what your code looks like. The cool thing with these new things being classes: you could, for example, in your code have a signature struct that has an r, an s and a v, and then you can just use the Sig type from the struct, and this is your ecrecover. So historically you might have also seen a verify that takes a bytes32 digest,
AP: ...a uint8 v, a bytes32 r and a bytes32 s, and it goes off to the races. But with this, you can pack them all together. This is the same size from the ABI point of view, and then you just pass in the signature, and because it's a class, when it's starting to build out that encoded piece of information, it knows to take the r and pack it together with the s, and pack the v in, in a nice compact format, and then you're off to the races.
AP: So one important thing to note with this, on the next slide, is that this line here does not change. You see, you're still creating a signature; you're just passing the signature in verbatim. Basically, because the ABI is encoding it, it can look at this object and figure out: okay, I need to take, in this case, the r and the yParityAndS. For those that aren't familiar with EIP-2098: basically, this allows you...
AP: So we kind of just slotted that in, and now your signatures are about a third smaller. And yeah, again, feel free to ask questions, because these are going to be a little complicated; you need a little bit more math to help decouple that. I'm going to bug the Solidity guys to see if we can get this built more into Solidity, so you can just pass the signature in directly, but in the meantime, this is still much simpler than the alternative.
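The EIP-2098 trick itself is well specified: for a canonical (low-s) signature, the top bit of s is always clear, so yParity can be folded into it, giving r plus yParityAndS in 64 bytes. A sketch with bigint arithmetic:

```javascript
const TOP_BIT = 1n << 255n;

// Compact (EIP-2098): fold yParity into the unused top bit of the canonical
// (low) s value, so r ++ yParityAndS is 64 bytes instead of r, s, v.
function toCompact(r, s, yParity) {
  if (s & TOP_BIT) throw new Error("s must be canonical (low-s)");
  return { r, yParityAndS: s | (yParity === 1 ? TOP_BIT : 0n) };
}

function fromCompact({ r, yParityAndS }) {
  return {
    r,
    s: yParityAndS & ~TOP_BIT,               // clear the parity bit
    yParity: (yParityAndS & TOP_BIT) ? 1 : 0, // read it back out
  };
}

const packed = toCompact(0x1234n, 0x5678n, 1);
const back = fromCompact(packed);
console.log(back.s === 0x5678n && back.yParity === 1); // true: round-trips
```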
AP: So if you use the decomposed form, that's 96 bytes, but if you use the compact representation, it's 64 bytes. I'm not going to go into more than that; that was just a quick thing for people who are looking at the slides afterwards. Transactions are also now an object. So if you decode a transaction, if you just feed in a raw transaction object or a bunch of stuff, and you start updating that stuff, it updates the other parts of it as well.
AP: Yes. Basically, when you only have just an object and you set some property on it, you're only updating that property. But with this, when you set a property, it sets that property but also updates all the entangled properties. For example, if you set the gas price, then the serialized version of that transaction should update, and you can just set things; you can set them to anything that's valid. So in this case I'm using a bigint, but you could pass it a string, you could pass in a hex string.
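A toy version of the entangled-properties idea (not the real Transaction class): the serialized form is a derived property, so setting one field can never leave it stale. JSON stands in for RLP here:

```javascript
// Toy transaction: the "serialized" form is recomputed from the current
// fields on every read, so it always reflects the latest assignments.
class Tx {
  #fields = { nonce: 0, gasPrice: 0n, value: 0n };
  get gasPrice() { return this.#fields.gasPrice; }
  set gasPrice(v) { this.#fields.gasPrice = BigInt(v); } // bigint, number or string
  get value() { return this.#fields.value; }
  set value(v) { this.#fields.value = BigInt(v); }
  get serialized() {
    const f = this.#fields;
    return JSON.stringify({
      nonce: f.nonce,
      gasPrice: f.gasPrice.toString(),
      value: f.value.toString(),
    });
  }
}

const tx = new Tx();
const before = tx.serialized;
tx.gasPrice = "1000000000"; // decimal string, coerced to bigint
console.log(tx.gasPrice === 1000000000n); // true
console.log(tx.serialized !== before);    // true: the derived form tracked it
```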
AP: Well, because there are now these visibility APIs available in browsers, you can tell ethers: by the way, when I'm not visible anymore, pause the provider. That way, if they go to another page for a week, and then they find that tab that's been left around from a week ago, or a month ago, or... we all have really old tabs.
AP: This way, it hasn't been consuming your bandwidth all that time it's been doing nothing with your site. So you can pause your provider, and then, when the tab becomes visible again, you can resume, and you get the choice of whether you want to replay all the events that would have happened during the time it was paused, or whether you just drop them and keep on going.
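A sketch of that pause/replay-or-drop behavior with a plain event buffer (hypothetical names, not the real provider API):

```javascript
// Minimal pausable event source: while paused, events are buffered;
// resume(true) replays them in order, resume(false) drops them.
class PausableEmitter {
  constructor() { this.paused = false; this.buffer = []; this.listeners = []; }
  on(fn) { this.listeners.push(fn); }
  emit(event) {
    if (this.paused) { this.buffer.push(event); return; } // hold while paused
    this.listeners.forEach((fn) => fn(event));
  }
  pause() { this.paused = true; }
  resume(replay) {
    this.paused = false;
    const backlog = this.buffer;
    this.buffer = [];
    if (replay) backlog.forEach((e) => this.emit(e)); // deliver in order
    // replay = false: the backlog is simply discarded
  }
}

const seen = [];
const p = new PausableEmitter();
p.on((e) => seen.push(e));
p.emit("block 1");
p.pause();          // tab went invisible
p.emit("block 2");  // buffered, not delivered
p.resume(true);     // replay the missed events
console.log(seen);  // ["block 1", "block 2"]
```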
AP: Both situations have valid use cases, so it's up to your own personal use case. The cool thing with this feature is that, because we can now pause and unpause providers, all the underlying API and framework changes that went into making this happen mean that if you have, for example, a websocket, and the websocket disconnects, it can now reconnect and re-subscribe to all your events before it starts feeding you...
AP: ...the new events that start coming in. It can actually get the events that happened from the time it disconnected to now, so all your events come back in order and you don't miss a beat, and you don't even know that your websocket crashed. And onwards; I'll just go over these quickly. Basically, networks are now a plug-in; there's a plug-in system. There are a lot of really strange networks out there, and I always get requests saying...
AP: ...oh, my network computes hashes in this strange way, or my network doesn't have an author in the block, or this sort of thing. So this makes it possible for ethers to stop capitulating to each of these individual things, and all those oddities, which are fine, I mean, you know, a competitive ecosystem is good, but then we can push all those differences into the network object and let it handle it.
AP: Package exports. I'm moving away from a monorepo, which uses a bajillion sub-packages, all by ethers, because it used to be, like, @ethersproject/abi, @ethersproject/providers and that sort of thing. Package exports are awesome; they're supported by all major bundlers now, and it means that there's no complicated, weird process going on for the ethers build. For those who've used ethers, or tried modifying ethers at all...
AP: ...the build process is absolutely insane, because I try to target, again, ES3, and React, and I try to build for everything, and as a result, people who are using more modern utilities... people complain about Vite a lot, or if you're trying to build a bundle for Node, it just fails. So the nice thing is that by using package exports, instead of all these crazy custom scripts, it just kind of works with all these tools, and everyone's happier, I hope. We'll see that after I launch the non-beta.
AP: There are better and fewer dependencies. I'm actually going to grab a bit of water. So, yes: I think I'm at five dependencies-ish right now, but down to four authors, which is an important thing. I feel like we've all recently been pretty worried about supply-chain attacks, with npm installing a bajillion things, and so right now there are four well-established authors that are responsible for the whole library.
AP: So there's me, and I accept myself as well established; there's Paul Miller, who does an awesome library for hashing and for signing; and Microsoft writes tslib, a tiny little thing that just helps save some space; and there's a popular websocket library that works in Node.
AP: If you're using a browser, you don't have to use that one, so then you're down to three authors and four dependencies. And it saves me a lot of work as well, because there were so many times where, like, elliptic had some bug in it, and I was trying to track down the author to get him to update things, and once that was fixed, I then had to update everything and put a new build out. So the fewer the dependencies, the happier my life is as well. And that's all I've got; I've got nine minutes for questions.
AP: If anybody has questions... oh, and come find me afterwards. I've got these little... I brought one up, for anyone who's curious about v6. I've actually started to write... oh, I should explain that. So, yes: basically, I'm trying to write a bunch of little apps right now against v6 to help test it. This one has actually found quite a few bugs already. It's an NFT; it's got a little scratch-off hologram.
AP: When you scratch it off, you can scan it and claim it, and you can, you know, fold your little papercraft robot up and take pictures of him and post them into the contract. I'm thinking of it more like that little gnome from Amélie: you can take it around and show those things visiting different places. Right now it's only deployed to Goerli; for anyone who actually goes through the effort of submitting this, I will be migrating your tokens to mainnet.
AP: So that's just a little demo I'm throwing together to test and make sure v6 works, and it's definitely not ready for production as it stands right now, but it's getting closer, and I'll probably be doing a few more little weird projects like this, just to find those little foibles and things that happened during the last two years' worth of rewriting and rewriting and rewriting. So I've got 7 minutes 49 seconds left if there are any questions out there.
AP: Yes, I think Celo is one of the ones that has... do they have a different way of computing the transaction hash? Yes, yes. So I think they're actually one of the ones that were an early headache that led to me trying to put that stuff into some place that was a little more isolated from the core providers.
AQ: Yeah, so, you know, again, thanks so much for all you do; the ecosystem would not be the same without you. But I'm just curious: at this point in the state of Ethereum...
AQ: ...at this point in the state of ethers, what kind of support are you getting in terms of audit help? Especially as we see more front-end attacks and things like that, right? Smart contract auditing is important, but the interaction layer is equally as important, especially now, as people are getting more clever. Like, you know, not to throw shade, but when I saw string normalization for ABI...
AQ: ...calls, you know, I instantly thought... it just made me uncomfortable in my gut, thinking of different things that could happen. So I'm just curious: who's helping you audit, who's helping you comb through this code and go through different edge cases and things like that, right?
AP: So, basically, auditing is all done by me, which is terrible right now. One of the cool things, though: the EF helps sponsor me, I get GitHub grants, Gitcoin grants and a few sponsors, and so now I'm starting to develop an endowment. So I am hoping to be able to get a proper audit at some point. There was one team at one point that was going to formally verify a bunch of things.
AP: I haven't heard from them in a while, so I don't know if that's still a thing. As for formal verification, I'm on the fence: it helps in some regards, but it's not a magic bullet. I think absolutely proper auditing would work, and that's also why I'm a big fan of tests: I don't want issues to happen. Though, I mean, tests can only find the presence of bugs, not the absence of bugs. So, I mean, it's...
AP: ...a great question; talk to me afterwards if you've got some ideas. I'm starting to get some money set aside now that I can actually start throwing around, so if you know any auditing firms that do TypeScript... I mean, people should definitely pay for auditing; those people are so skilled and valuable.
AP: Oh, okay, sure. Well, I mean, you kind of are. I mean, thank you. Bitcoin's been awesome.
AP: Oh, batch transactions? Batch transactions, like via smart contract wallets?
AP: By... you mean through smart contracts, or... yes? Yeah, I mean, that all already works; you just need the ABI for it. There's not currently an official batch standard in Ethereum, but you can do it with smart contract wallets; a Gnosis Safe, for example, allows you to send multiple... sorry, is that what you mean? Being able to send multiple transactions at once, atomically?
AP: Contracts, so that... you can't currently in Ethereum, in general, send multiple transactions atomically or at once; if you have a bunch of transactions, you're basically trying to send them serially. So the nonce manager already does that: basically, it's a wrapper that goes around a signer, and you can just quickly fire off transactions, and it will automatically bump up the nonce from the initial nonce fetched.
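A sketch of the nonce-manager idea with a toy signer (synchronous for brevity; a real signer's calls are async):

```javascript
// Toy signer: pretends the network reports 5 as our next nonce.
const signer = {
  getNonce() { return 5; },
  sendTransaction(tx) { return { ...tx }; }, // echo back what was "sent"
};

// Wrapper around a signer: fetch the on-chain nonce once, then hand out
// consecutive nonces locally so transactions can be fired off back to back
// without waiting for each one to land.
class NonceManager {
  constructor(signer) { this.signer = signer; this.next = null; }
  sendTransaction(tx) {
    if (this.next === null) this.next = this.signer.getNonce();
    return this.signer.sendTransaction({ ...tx, nonce: this.next++ });
  }
}

const nm = new NonceManager(signer);
console.log(nm.sendTransaction({ to: "0xaaa" }).nonce); // 5
console.log(nm.sendTransaction({ to: "0xbbb" }).nonce); // 6
```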
AP: There are some issues with that as well. If you just, you know, flood the mempool with a bunch of transactions from the same address that are all serialized, it could be seen, from the network's point of view, as a DoS attack, and so it might drop them. So that's a feature I want to add to the nonce manager in v6 as well: the ability to rebroadcast, or, whenever something succeeds, kind of have a callback that lets...
AS: So, yeah, more than deserved. One question about philosophy, and this is going very specific, me being that guy: one thing that ethers has as a philosophy is really how it returns undefined, and actually how it doesn't return undefined when waiting for transactions. We actually talked once, in one of the PRs, about TypeScript really giving the option of returning undefined when a transaction... you wait on it and it doesn't return anything, for example, and you mentioned that that's not really part of the philosophy of how you're building the types for ethers.
AS: Let's say, let's say it's a hash that doesn't exist, okay? What is the philosophy of what ethers returns there? Would it be undefined? Because in that case, the TypeScript type for it doesn't contain undefined.
AP: Okay, so for a quick background: basically, ethers v5 was started two years ago. I was still kind of learning TypeScript, and it did not have strict null checks enabled, which basically means anything that returns something-or-null was typed as just that something. In v6, everything is typed with strict null checks, or whatever that flag is, enabled, and so, for example, the provider's getTransaction method's return type is null-or-TransactionResponse, whereas in v5 it was just TransactionResponse, and I just trusted...
AP: ...that people knew it could be null, but don't worry about it. So, yes, in v6 that has been fixed. It helps fix a lot of people's libraries as well, ones that are dependent on it and kind of crawl into the ethers tree to run their linting rules. So, yes, that's absolutely changed.
V: Will there be backwards compatibility between the new bigint type and the old BigNumber type? Like, will the BigNumberish type... what will it be?
AP: Oh, yes. For input types, you mean? Yeah, yes, there's still a BigNumberish type, because, so, for example, the slide up there that demonstrated... I lost the clicker... the slide that shows the transaction's max gas being assigned: that takes in a BigNumberish. So you can pass in a bigint, you can pass in a string that happens to be a decimal number, you can pass in a string...
AP: ...that's a hex number; that'll all be munged into a bigint type. But you can still pass in anything; anything that's not ambiguous, ethers will accept. It will not accept things that it can't possibly interpret, like, for example, non-0x-prefixed hex, because if it accepted that, it wouldn't know whether "11" meant hexadecimal 17 or binary 3, and so it requires you to be completely unambiguous.
AP: So if you pass in the string "11", it's going to assume decimal. But yes, it's still a big number; there's still a BigNumberish, and there are functions as well, for your own purposes...
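A sketch of those coercion rules (a hypothetical helper, not ethers' actual `getBigInt`): bigints and safe numbers pass through, 0x-prefixed strings parse as hex, bare digit strings parse as decimal, and anything ambiguous is rejected:

```javascript
// Coerce a "big-number-ish" value to bigint, refusing anything ambiguous.
function toBigInt(value) {
  if (typeof value === "bigint") return value;
  if (typeof value === "number") {
    if (!Number.isSafeInteger(value)) throw new Error("unsafe number");
    return BigInt(value);
  }
  if (typeof value === "string") {
    if (/^0x[0-9a-fA-F]+$/.test(value)) return BigInt(value); // explicit hex
    if (/^[0-9]+$/.test(value)) return BigInt(value);          // decimal
    throw new Error(`ambiguous value: ${value}`);
  }
  throw new Error("unsupported type");
}

console.log(toBigInt("11"));   // 11n: unprefixed digits are decimal
console.log(toBigInt("0x11")); // 17n: only the 0x prefix means hex
console.log(toBigInt(42));     // 42n
```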
AP: ...that will convert any BigNumberish into a bigint, or any Numeric, which is another type that it now has, into a number. I've got 22 seconds left. Oh, also, as a quick note: I'll be standing outside after this for anybody else who has questions, to poke around. Okay, okay, I'll see you outside; I'll be handing these little cards out as well.
B: That brings us to our last talk for today in the developer infrastructure category, before we move on to some zero-knowledge stuff. So I'm excited to introduce Potuz; he's a core developer at Prysmatic Labs, and he enjoys the types of puzzles that arise when trying to hack the protocol. Today he'll talk about the right way to hash a Merkle tree, so please give it up for him.
AT: Thank you.
Do you happen to have the clicker? There he is, thank you. Thank you, all right. So, thank you very much for letting me speak here. This is going to be a very different kind of talk, and it's my first talk in this community, so expect something... I don't promise anything. I just want to sell you something, and I just wanted to get right to it. I want to sell you this: if you're hashing, if by any chance you need to perform hashes, and this happens quite a lot in the consensus layer...
AT: ...I want to show you this library that will hash, at the very least, at the very least, 20% faster than your current backend, without any changes to your code. And if you see here at the bottom... do I have a pointer here? I guess not; this is just a clicker. Okay, so if you see here at the bottom, you see that for large trees, this is hashing four and a half times faster than the common backend.
AT: So I just want to sell you this, and since I can sell it to you in two sentences, I'm going to spend most of my talk just telling you what SHA-256 is, which is what this library does, but the techniques of this library can be applied to whatever other hashing in the SHA families that I know of.
AT: So let's start with this slide. SHA, the hashing algorithm, is something that generically just takes any message, an arbitrary-length message, as in a sequence of bits of any length, and it spits out of that 32 bytes; the digest of this thing is 256 bits. And the procedure to go through this consists of three parts, so I'll just go over the three parts briefly. The first part is breaking your message into multiples of 64 bytes; those are the chunks, 512 bits.
AT: So here's just a brief description of this; this slide I just stole from somewhere on the internet, but I want to go into some detail on each one of the three parts. So let's start with the easiest one, which is scheduling words. So, scheduling words: as you see here, you're given a message. Let's suppose that we've already broken the message into pieces, each one of them 64 bytes.
AT: So the first thing you do is compute 48 double words, four bytes each; you need 64 in total. The first 16 are your message; they are the 64 bytes that you're given, and these are 16 words. With those 16 words, you can compute the next 16 words, and with the 16 words that you just computed, you compute the next ones, and the next ones, until you complete the 64 that you need. And the important thing that you need to remember from this slide is that, to compute those scheduled words...
AT: ...the only thing that you need is the previous words, nothing else. That means that when you're scheduling words... well, I think the consensus people are starting to come in anyway; I've already started, guys. So, when you're scheduling words, the important thing is that you don't need to know the previous state of the hash. You only need to know what the previous scheduled words were. In particular, it only depends on the chunk that you're hashing now, and it doesn't depend on the other chunks that you're going to hash.
AT: So if your message consists of 10,000 chunks, you can compute the scheduled words for the 10,000 chunks in advance, without caring about how you hash each one of them and without caring about the rounds part. So scheduling words only requires the previous scheduled words. The diagram there is taken out of an Intel paper describing two new instructions that do this scheduling of four words at a time, with only two instructions.
AT: This can be done now on modern CPUs, and this is just a sketch of how, diagrammatically, you compute four words at a time. But the method that you use to compute the scheduled words is irrelevant. What I want you to remember is that, to compute them, you need to know the previous words and nothing else.
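The expansion rule is fixed by the SHA-256 specification: W[t] = σ1(W[t−2]) + W[t−7] + σ0(W[t−15]) + W[t−16], mod 2³². A scalar JavaScript sketch (no vector instructions), just to show that each new word depends only on earlier words:

```javascript
// 32-bit right-rotate; >>> 0 keeps every result an unsigned 32-bit value.
const rotr = (x, n) => ((x >>> n) | (x << (32 - n))) >>> 0;

// The small sigma functions used only in the message schedule.
const sigma0 = (x) => (rotr(x, 7) ^ rotr(x, 18) ^ (x >>> 3)) >>> 0;
const sigma1 = (x) => (rotr(x, 17) ^ rotr(x, 19) ^ (x >>> 10)) >>> 0;

// Expand one 16-word (64-byte) chunk into the 64 scheduled words.
// Note W[t] reads only W[t-2], W[t-7], W[t-15] and W[t-16]: the hash
// state never appears, so schedules for many chunks can run ahead.
function schedule(chunk /* array of 16 uint32 values */) {
  const W = chunk.slice();
  for (let t = 16; t < 64; t++) {
    W[t] = (sigma1(W[t - 2]) + W[t - 7] + sigma0(W[t - 15]) + W[t - 16]) >>> 0;
  }
  return W;
}

// Sanity check: an all-zero chunk schedules to all zeros.
console.log(schedule(new Array(16).fill(0)).every((w) => w === 0)); // true
```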
AT: ...that is hard; that is computationally hard to invert. But the important thing that we need to remember is this: we were given the message, we broke it into pieces of 64 bytes (I haven't told you how), we computed those scheduled words that we need, and then what we do is pass in an incoming digest. If it's the very first chunk that you're hashing, we pass a constant digest that the method has; if it's the third chunk, you're going to pass the digest that the second chunk produced.
AT: The point is that you have a state, which is your current hash, and you pass it through this function that takes this state, the hash, the 32 bytes; it takes one of the scheduled words that you computed; and it takes another constant word that the protocol has. So it takes this data, and it produces for you a new hash.
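One round of the SHA-256 compression function, per the specification: it consumes the eight-word state, one scheduled word and one round constant, and produces the next state, which is exactly why rounds cannot run in parallel. A scalar sketch:

```javascript
// 32-bit right-rotate; >>> 0 keeps every result an unsigned 32-bit value.
const rotr = (x, n) => ((x >>> n) | (x << (32 - n))) >>> 0;

// Big sigma, choose and majority functions from the SHA-256 spec.
const Sigma0 = (x) => (rotr(x, 2) ^ rotr(x, 13) ^ rotr(x, 22)) >>> 0;
const Sigma1 = (x) => (rotr(x, 6) ^ rotr(x, 11) ^ rotr(x, 25)) >>> 0;
const ch  = (x, y, z) => ((x & y) ^ (~x & z)) >>> 0;
const maj = (x, y, z) => ((x & y) ^ (x & z) ^ (y & z)) >>> 0;

// One round: state is [a,b,c,d,e,f,g,h]; w is the scheduled word, k the
// round constant. The output feeds straight into the next round, so each
// round depends on the previous round's full result.
function round([a, b, c, d, e, f, g, h], w, k) {
  const t1 = (h + Sigma1(e) + ch(e, f, g) + k + w) >>> 0;
  const t2 = (Sigma0(a) + maj(a, b, c)) >>> 0;
  return [(t1 + t2) >>> 0, a, b, c, (d + t1) >>> 0, e, f, g];
}

// With an all-zero state, word and constant, everything stays zero.
console.log(round(new Array(8).fill(0), 0, 0)); // [0,0,0,0,0,0,0,0]
```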
AT: So what you need to remember from this is that you need to have computed at least that scheduled word before passing through the rounds, and you cannot do this in parallel: to pass through a round, you need to have already computed the hash before. So this is something that you cannot do in parallel. Okay, so, the padding block. This is the first part of the three-part process, which is breaking the message into a multiple of 64 bytes. How do you do this? Well, you just break your message into multiples of 64 bytes.
AT
You add one bit at the very end of the message just to signal that the message has ended. Here the message is less than 64 bytes: it is 24 bits, these three bytes a, b and c, and the 1 there shows that we added that extra bit to mark that the message has finished. Then you pad with zero bits up to a multiple of 64 bytes minus eight, because you're going to use the last eight bytes, or 64 bits, to encode the length of the whole message, as shown there in binary.
AT
That's the number 24, which is the actual length of this message in bits. Okay, so this is the procedure to pad your message, which was of arbitrary length, into a multiple of 64 bytes.
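The padding procedure just described can be sketched in a few lines (a hedged reference version, not the speaker's code): append the 1 bit as the byte 0x80, zero-fill up to a multiple of 64 bytes minus eight, then write the bit length as a big-endian 64-bit integer.

```python
# Reference sketch of SHA-256 padding as described in the talk.

def pad(message: bytes) -> bytes:
    bit_len = len(message) * 8
    padded = message + b"\x80"                     # the terminating 1 bit
    padded += b"\x00" * ((56 - len(padded)) % 64)  # zeros, leaving 8 bytes free
    padded += bit_len.to_bytes(8, "big")           # message length in bits
    return padded
```

For the three-byte message from the slide, `pad(b"abc")` is exactly 64 bytes and its last eight bytes encode the number 24, the message length in bits.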
AT
So the first thing to notice is that you're not going to beat any of the hashers out there. Every implementation that I reviewed before I got into this (and I'm not an expert at all on this, I come from a completely different subject) is equivalent to Intel's original white paper on this, and it's equivalent to what OpenSSL, for example, does. All of them implement the following.
AT
You can compute your scheduled words, as I told you, without knowing what the current state of the hash is. So all of them use vector instructions to compute several words at a time. If your CPU supports 128-bit registers like these ones, and almost every CPU now does, then you're going to compute four double words at a time. You start with your 16 double words, which is your message, and you can put those 16 double words in only four registers.
AT
If your computer supports AVX2, that is, registers that are 256 bits, then you're going to use only two registers and you're going to compute
AT
eight words at a time. If your computer supports AVX-512, you're going to compute all of them, the 16 of them, at a time. And well, AVX-1024 does not exist yet, but it's already in the books. There are two options for computing this: modern implementations either do this vectorization, or, if your CPU implements cryptographic extensions like the one that was there in the picture, then they will use 128-bit registers regardless of whether your computer has larger registers, because cryptographic extensions can compute four words at a time much faster than vectorized computations.
AT
But the point I want to make is that this vectorization is there in every equivalent implementation; this has been the case since Intel's white paper. Also, computing the words like this is useful because you can compute the words and at the same time pass through some rounds. So let's say that you computed the fifth word: then you can pass through up to the fifth round.
AT
Then you can mix scalar operations with vector operations and the CPU will perform this in parallel. The CPU has several ports and can take different types of operations at the same time, so you can be computing the fifth round, which is a computation that has to be done on the scalar part of the CPU because it cannot be parallelized, and at the same time you're computing the sixth word in parallel on vector registers. Okay, so I've covered
AT
what a typical implementation of hashing is. And this is the thing to remember from the whole talk: the hasher signature, in any language, is something like this. It's something that takes an arbitrary-length byte slice and gives you back 32 bytes, which is the hash. Okay.
AT
So it takes an arbitrary-length message and gives you a digest, and this is something that you're not going to beat. You're not going to implement this better than OpenSSL. Perhaps you can do it for one specific CPU, but you're not going to write an implementation faster than what is already there.
AT
However, we use hashing in a very restricted scenario: we use hashing to hash Merkle trees, and Merkle trees are not of arbitrary length. Merkle trees, in the case of the consensus layer, for example, where the nodes are 32 bytes, are something like this. Each one of these nodes represents 32 bytes, each parent node has only two children, and these two children are hashed together. So the two children are 64 bytes when concatenated; you hash them and you get the hash of the parent, you get what goes in the parent.
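The parent-node rule just described is small enough to write down directly (a hedged sketch of the consensus-layer convention, not the speaker's library):

```python
# Hashing one parent node: the two 32-byte children are concatenated
# into a single 64-byte chunk and hashed with SHA-256.
import hashlib

def parent(left: bytes, right: bytes) -> bytes:
    assert len(left) == 32 and len(right) == 32
    return hashlib.sha256(left + right).digest()
```

Note that every call hashes exactly 64 bytes, which is the restriction the rest of the talk exploits.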
AT
So this is what we hash, typically, and we observe two things. Well, actually the execution layer has something completely different, but still the same technique is going to apply. What we observe immediately is the following. First of all, we're not hashing arbitrary lengths; we're hashing 64 bytes every single time. And second of all, we can hash this in parallel.
AT
You can hash the blue one, the entire left subtree, and hash the yellow one on the other side completely in parallel. Of course, if you have two CPUs, you can just use two threads to hash this, and this is exploited in the consensus layer. I don't know if any client besides Lighthouse uses this, but I want to talk about a different kind of parallelization.
AT
So again, this is the typical approach. This is the one that is in the consensus layer specs; this is the implementation, I think this is Vitalik's implementation. It is a flat-array approach to holding a Merkle tree. I highlighted the line where you compute the hash of the parent by hashing the two concatenated children. The top one is a different memory layout; this is Protolambda's implementation, remerkleable.
AT
This is a production implementation, and again it is the same thing, slightly more complicated because it takes other things into account, but the highlighted line is the point where you're hashing: you hash one node as the hash of its two children, and you do this in a loop for each pair of children you have. Each time you call the hasher you get one hash.
AT
Okay, so I want to tell you what the right way of hashing a Merkle tree is, and it's fairly simple. There are two things that we want to exploit. We want to exploit the fact that this takes an arbitrary byte slice, which is the layer that you want to hash, and it gives you back a slice of 32-byte hashes, all of them at the same time. I think I might have gotten this correct in Rust after several iterations, and this would be it in Python.
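The signature being described can be sketched as follows (a hedged reference version: a real implementation would fill vector registers with many chunks at once, while this one just loops):

```python
# Hash a whole Merkle layer at once: take a byte slice whose length is a
# multiple of 64 and return the next layer up, one 32-byte hash per
# 64-byte pair of children.
import hashlib

def hash_layer(layer: bytes) -> bytes:
    assert len(layer) % 64 == 0
    return b"".join(
        hashlib.sha256(layer[i:i + 64]).digest()
        for i in range(0, len(layer), 64)
    )

def merkle_root(leaves: bytes) -> bytes:
    """Feed each produced layer back in until a single 32-byte root remains."""
    layer = leaves
    while len(layer) > 32:
        layer = hash_layer(layer)
    return layer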
AT
We have two libraries: we have one in Go assembly and we have one in assembly with C bindings. If you are going to use that, let me know, because crypto extensions on ARM are still not implemented; it's going to take like 10 minutes to add this. But this is the signature that the library is going to use if you're using the C bindings. All right, so that's all I wanted to tell you, and that's it. Okay.
AU
AT
That doesn't exist in libraries already, is that right? It does. So Intel has this library where you give it different messages and it hashes them all at once. I just copied everything; nothing here is mine. I just grabbed Intel's implementation and Bitcoin's implementation and put them in a library. The thing with Intel's library is that it takes arbitrary-length messages and just hashes them, and it gives you back the hashes for all the messages that you give it.
AU
Okay, sure, I'll just repeat that for the sake of the recording. I was just asking if the ability to process multiple different hashes in parallel was already implemented in libraries. That was the question, and the answer was yes.
AU
So there are two things that you have here, right? One is to precompute the padding block, and the second is the ability to process multiple different hash inputs in parallel and produce multiple different hash values, correct? And so there are already implementations that do multiple different hash values at once, and you just modified them to deal with the padding block. Is that right? Yes.
AT
And yes, so there are two modifications here. One is that you're going to use the padding block: the padding block has these constants hardcoded. And then there's the other modification, which is the fact that you're expected to get a list of 64-byte chunks, and then you pipeline this. So what this library does is grab all of the blocks, consecutive in memory; it gets a matrix on the vector registers, it transposes this matrix, and now you have the different messages on all of your registers.
AT
AU
AT
Yeah, so the original implementation was just purely assembly. It's there, it has C bindings, but the cgo overhead is horrible, so it ended up being slower than using the Go implementation in the standard library. So we needed to write a Go assembly library to use ourselves, yeah.
AV
Okay, I'll just say: I wonder, after you've done this, whether you have considered, if we're doing anything practical, whether it would be interesting if someone tried to design a hash function specifically for hashing Merkle trees, whether they could do it differently, and how big the savings would be if there was a special hash function for Merkle trees. I'm sorry, I didn't get this. Sorry, I'll repeat the question. So I was curious: you've shown that there are always optimizations you have to do to get around the general purpose.
AV
AT
Oh, oh, that's a good question, and I believe...
AT
Oh, that's a very good question. I need to think; I don't think I can answer this here. I'm not sure if you can do better than the current implementations. You could take the sponge-type constructions, SHA-3 and company, and try to adapt them for this; I don't know. I think, yeah, your question is completely open. It's a very good question; I think we should think about this.
AT
So if I understood your question as: can there be a method that is designed for Merkle trees instead of a generic one, then it might be, I don't know.
AU
Oh, sorry to hog the mic. In your implementation, say there are some really big Merkle trees in the beacon state, right? Is it the job of the implementation to split that Merkle tree up into smaller subtrees that fit the size of your CPU? No.
AT
No, so you're asking, if you have this large tree, whether it's the job of the implementation to split it into smaller trees. No, no, this is completely orthogonal to that. What you guys are doing in Lighthouse is splitting this into smaller trees, and you send them to different threads to compute them in parallel. This is completely different: you just pass the entire slice. So big trees are the big gain for this; you're going to be hashing at least four times faster than the standard library.
AT
You just pass the entire slice, the whole bottom layer, and what this thing is going to do is not split it into subtrees. What this thing is going to do is grab as many blocks as possible that it can fit in your registers and then just go to the next chunk, and the next chunk, and the next chunk, like this. So it doesn't split into subtrees. Okay.
AT
It produces the next layer, and then you just feed it the next layer entirely and it produces the next one. Oh yeah, okay, so you're thinking of the beacon state. For the state this is incredibly fast, but this is not how we hash the state, because we typically have it in a cache and you just have a few nodes that you're changing. So when you hash the dirty leaves, well, there are two things that happen.
AT
So sometimes you have several of them consecutive, and then you can be smart and pass this consecutive layer to this hasher. Or you can just use whatever you're using now to hash two blocks at a time: you're not going to get the vectorization impact, but you're going to get at least the 20% from the hardcoded padding block.
AU
Yeah, okay, cool. So I guess there's maybe an argument that it's quite useful for small trees, where you don't get a lot from caching.
AT
B
All right, since we are still very good on time, I am going to use a few minutes to talk about something else, so that the ZK people have time to come in as well. First of all, if you want to ask questions, please do wait for the microphone, because there is also a livestream, and people can only hear the questions in the livestream if we use the mic. It would also be nice for recording purposes. Thank you.
B
Secondly, this year at Devcon we're doing a couple of things differently than we used to. We are very experimental in some ways, which I'm kind of excited about, so I will use some minutes to talk about this before I hand over to the next speaker. I'm not sure if you guys saw it on the website, or maybe on Twitter or in some of the communications around.
B
But this year we have introduced something called continuous Devcon, which is basically our way towards a more holistic, decentralized, community-driven Devcon, and as part of that we are trying out various concepts, one of which is that the venue will be open until 11 pm. So if you want to meet up with somebody, if you just want to catch up with somebody, if you don't feel like partying, if you want to hang out here, feel free to use the venue. And yeah: basically, the basement,
B
the first floor and the second floor will remain open until 11 pm. Also, there's a happy hour from 6 p.m. to 8 a.m. on the first floor, so feel free to mingle there as well. And then, lastly, in order to make everything a little bit more community-driven, we have allocated a space on the first floor to six selected communities, and it's called the community hubs.
B
So in there you can find various different spaces that are run by communities, one of which is the Regen Hub; the Temporary Anonymous Zone; the ZK community; the crypto governance and economics hub; and, what was it again, the Token Engineering Hub, powered by Token Engineering and the Smart Contract Research Forum; the Women Leaders in Web3 hub; so yeah, a lot; and the Design Hub. So lots of stuff to explore there, please feel free to check it out. And lastly, the hacker basement.
B
So we transformed the parking lot in the basement into a hacker space, and it's definitely worth looking at. Because, first of all, it really is beautiful. Secondly, there is also some sort of art exhibition going on in there. And thirdly, if you want some quiet time, if you want to hack something on your computer, or if you just need a break from the masses, I can really recommend the hacker basement for all of that stuff. Yay. Okay, I think now we can move over to the ZK part of the day.
B
First, I'd like to introduce our next speaker, who is building ZoKrates at the Ethereum Foundation, and today he will give us an update on it. ZoKrates is a toolbox and a set of tools aimed at making developing ZK-SNARK applications easier, using a high-level language. So please give a warm welcome to Thibaut.
AW
This is also tooling for developers. So today I'm going to present ZoKrates. I'm going to give a quick introduction to what it is, for those of you who don't know it, and then I will present a few things that we've been working on and that I'm excited to share.
AW
So what is ZoKrates? If you want to program SNARKs today, there are different tools that you can use; maybe some of you have heard of ZoKrates, of Circom, of Halo2. All those tools have different trade-offs in terms of the power that they give to developers, how easy they are to use, and how high-level or low-level they are.
AW
It compiles to R1CS. This is the kind of SNARK that is easy to verify today on Ethereum, so you can verify directly in your smart contracts.
AW
It uses modular backend implementations, which means that ZoKrates does not develop a proof system itself, but rather uses backend implementations from the community for different proof systems; it just targets those great implementations. That way, you can have a high-level language but still have something that's really efficient at the low level. ZoKrates makes heavy use of optimizations in the compiler, which is also something different from some other tools, which give you very low-level access.
AW
So where does it run? The part of ZoKrates that allows you to compile, the compiler, you can run natively on your machine. We have a Remix plugin as well, and we also now have a playground that you can access at play.zokrat.es, which I encourage you to try. That's probably the easiest way to get started with ZoKrates and just write some simple programs. In terms of the schemes, these are the backend implementations that we support.
AW
We currently have support for Groth16, which, as some of you may know, is the SNARK that is the smallest and the fastest to verify; for GM17, which is an evolution of Groth16,
AW
let's say; for Marlin, which is a universal SNARK that I'll touch on a bit later; and for Nova, which I'll also go into, which is a new type of exciting SNARK that enables new use cases for proving. For the particular implementations we rely on bellman, arkworks, bellperson and also snarkjs; I'll go into a bit more detail on that integration. And in terms of the verifier, you can verify in JavaScript or in the CLI, and, for some of the schemes that are compatible, on the EVM.
AW
For an example of something that's a little bit more advanced, to make the point that this is actually a high-level language, here's the implementation in ZoKrates of the sha256 function. Just to give you a few things that are expected in a high-level language: we have a module import system, and you can import constants as well.
AW
We have for loops, of course; we have function calls; and also one exciting thing that we added fairly recently, which those of you who use Rust may be familiar with: a notion of constant generics. So in this case the sha256 function is a hash function that can take an arbitrary number of bytes as input.
AW
However, in circuits all the inputs are always static, so the size of the input will always have to be known at compile time. But this is something that we can do: you can still define this as something that is generic over K, and then have a number of rounds of this round function.
AW
But then, when you compile your program and actually use the sha256 function, this variable K is going to have a concrete value, which will then compile to the exact number of blocks that you're hashing. And if you're trying to do something dynamic, calling this function on something whose size is not known at compile time, it is just going to fail at compile time.
AW
So we can have a very idiomatic implementation of sha256. Of course, there's more complexity in each round, but if you look at the code, it's almost line-to-line equivalent to an implementation that you would see on Wikipedia, for example, a pseudocode implementation. Now I'm going to go into a bit more detail on one detail of the sha256 implementation: inside the SHA round there is this expression that needs to be calculated a lot of times.
AW
So you define, or rather you constrain, a new variable BC to be equal to B times C. And here I just want to clarify that these constraints are the low-level constraints that we deal with, the R1CS constraints. They look like this: they have one side which is linear (in this case it's only one variable, but you can have a sum of different variables) and one side which is quadratic, right?
AW
So this first one just defines BC, or rather constrains a new variable BC to equal B times C; it then introduces this res variable here, which is our result for the first bit, and constrains it in this way. And this is actually more efficient than what the compiler would generate itself, because here we have more knowledge of what we're actually trying to do than the compiler does. On the flip side, here:
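As a hedged illustration of the constraint shape being discussed (the slide's exact expression is not reproduced in the transcript, and the field modulus below is an assumed example, the BN254 scalar field): an R1CS constraint is a product of two linear combinations of witness variables forced to equal a third. The `bc = b * c` constraint from the talk fits this shape directly.

```python
# Minimal sketch of checking one R1CS constraint <A,w> * <B,w> = <C,w>.
# P is an assumed example modulus (BN254 scalar field).
P = 21888242871839275222246405745257275088548364400416034343698204186575808495617

def lc(coeffs: dict, w: dict) -> int:
    """Evaluate a linear combination of witness variables mod P."""
    return sum(v * w[k] for k, v in coeffs.items()) % P

def check(a: dict, b: dict, c: dict, w: dict) -> bool:
    """One constraint: the product of two linear sides equals the third."""
    return (lc(a, w) * lc(b, w)) % P == lc(c, w)

# Witness assigning bc = b * c, and the single constraint enforcing it.
w = {"one": 1, "b": 1, "c": 0, "bc": 0}
assert check({"b": 1}, {"c": 1}, {"bc": 1}, w)
```

Counting constraints then just means counting how many such product equations the circuit needs, which is why hand-placing one multiplication can beat the compiler's generic lowering.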
AW
So this is something that ZoKrates does not expose at the moment to the developer, which means that you can only do the version which is less efficient. If you look at lower-level tools, they let you use these things, but then it's at your own risk, and it's likely that you're going to introduce vulnerabilities.
AW
So what I want to showcase today is an addition to ZoKrates: a way to actually encode this and get the performance from it in the context of the high-level language.
AW
For this expression here, I'm going to create an entry point for the program, taking a, b and c and returning a u32, and I'm just going to call the default function and see how many constraints are created in the process. So I compile, and then I get the result: 260 constraints. And now
AW
what I'm going to do is define another version of this function which, hopefully, will reduce the number of constraints by leveraging this lower-level implementation. So I call it hand_optimized; it has the same signature, a, b and c, and also returns a u32.
AW
We have sort of a magic tool in our standard library to do that, which is called a cast function and which can do this conversion for you. And here I want to point out that this is actually free, because the u32 type is actually represented as 32 bits under the hood, so we're not paying any constraints for this.
AW
We need to reason at the level of field elements, so we need to turn those booleans into field elements, which is a lower-level representation. For this I call this bool-to-field function, which I'm going to define in a second.
AW
And here I'm going to define this function, and again this is something that's going to be free, that's not going to create any constraints, for the same reason as earlier: a boolean is actually represented as a field element at the low level; it's just that it can only have the value 0 for false and 1 for true.
AW
Okay, so now I have a, b and c as field elements. Now I have this first constraint, which was BC equals B times C, and this constraint I can actually already define in the high-level language, because I'm doing both: constraining and assigning BC. So I do that, and the first constraint is done. Then I declare this result, again mutable, and this is where the interesting new thing happens.
AW
And then I want to be able to use this res value later, but to use it I need to make sure that it's really constrained in the constraint system. So after I did this assignment, I add the actual constraint, which makes sure that everything is set in stone.
AW
So I need to be really careful when I do this, but I can force the creation of this boolean with this value. Finally, I reconstruct the u32 value from the boolean array,
AW
using this cast function again. Okay, and I change my entry point to use the hand-optimized version. So we were at 260, and now we're at 164: we made quite a big dent in our constraint count. Okay.
AW
Oh, can I exit this? Okay. So what's the idea here? We want to keep all the guarantees that we have from our high-level language: we have types, we have all these things.
AW
We want to be able to disable a bunch of checks, and we have a similar approach where, as a developer, you would write most of your program in safe ZoKrates, let's say, but then, for the few parts that need the extra performance, you can write them in those asm blocks, and try to make those blocks as small as possible, so that when you need to review the code, or make sure that things are not unconstrained, you know exactly where to look, and these blocks are relatively small.
AW
In the compiler, we have a special case which detects whenever we're doing this and uses these exact constraints. But now, potentially, using assembly blocks, what we can do is rewrite some of the internals of the compiler to actually use these things, which then reduces the size of the compiler code base and makes it easier to reason about. Okay.
AW
The next thing that I want to talk about, which is unrelated to this, is the fact that ZoKrates is now compatible with snarkjs. snarkjs is a JavaScript library from the iden3 team, the Circom team, which has a bunch of tools allowing you to work with SNARKs in a JavaScript context.
AW
So what we have available today is: you start from your ZoKrates program and you run the compilation. Currently it returns an output which is the low-level ZoKrates representation, but you can now also optionally return a .r1cs file, which is the format that is accepted by snarkjs.
AW
And then, if you want to execute your program, you take your input and the program itself, and you can create a witness file which is also compatible with snarkjs. From this point on you're in snarkjs land, so you can do whatever snarkjs enables you to do: using different proof systems, using a powers-of-tau ceremony, running your verifier in the browser, etc., etc.
AW
Another topic that I wanted to touch on quickly is incrementally verifiable computation. This is a scary term, but it's basically the idea that if you have a computation which you can split into steps, which are basically the same function being run over and over again on a state, updating the state, then you can actually use recursive SNARKs to prove this computation incrementally. This is opposed to what ZoKrates currently targets, proof systems which we can think of as more like an ASIC.
AW
So your circuit is like this ASIC that's really set in stone; you can do only one computation and everything is bounded and static. In this case, instead, you can have a computation where you run one round of the computation and then another one later, and you can prove the execution of that computation at each step.
AW
So some use cases for that are succinctly verifiable blockchains. Maybe some of you have heard of the Mina blockchain, where basically you use this to have a blockchain where, each time you get a new block, you verify the previous block as well as the transactions of this block, and it creates a SNARK; and then you have this kind of recursive verification.
AW
Another use case is VDFs, because some of the VDFs actually have the structure of having some state to which you apply a function recursively, and being able to have a SNARK of this computation can be really, really useful.
AW
So what we're working on now, and we're actually pretty close to having this ready for production, is integrating a proof system called Nova, which does exactly this. I'm not going to go into too much detail here, but the way the API would work for developers today is that you write a function, and the only restriction here is that the input needs to be of the right type.
AW
The input type needs to be the output type, because you want the recursive aspect to work, and then you can compile this function for a specific curve. You need to use the Pallas curve here, because under the hood Nova uses cycles of elliptic curves, so this doesn't work for just any curve, but we support the curves that enable it. Then you can basically prove a number of steps starting from a given state, and even after running this, you could run it again starting from the last state that you had. That's the hope.
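The shape of the API being described can be illustrated with a plain-Python sketch (no actual Nova proving here; the step function is an arbitrary placeholder): because the step's input and output types match, execution, and hence proving, can be resumed from the last state.

```python
# IVC-shaped computation: one step function applied repeatedly to a state.

def step(state: int) -> int:
    """One round; input and output must have the same type (placeholder logic)."""
    return (state * state + 1) % 2**32

def run(state: int, n_steps: int) -> int:
    """Apply the step function n_steps times, threading the state through."""
    for _ in range(n_steps):
        state = step(state)
    return state

# Running 5 steps, then 3 more from the last state, equals 8 steps at once,
# which is exactly the resumability that IVC proving relies on.
assert run(run(7, 5), 3) == run(7, 8)
```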
AW
Okay, since I only have two minutes left, I'll just go very quickly through some things that we added recently. There's the powers-of-tau ceremony: if you look at Groth16, for example, it requires a trusted setup, and the powers-of-tau ceremony enables you to do that using MPC.
AW
So you can do that directly with the ZoKrates CLI, and also in zokrates-js. We also now have support for log statements, with which you can inspect certain values of your code at runtime. We have support for the Marlin proof system that I mentioned earlier, which has a universal setup, which means that you can do one setup and then use it for different circuits.
AW
We also changed a lot of the syntax, following feedback from members of the community. And finally, yeah, there's this playground which is now accessible, which I invite you to check out. Okay, I think that's all I had to share, so thanks for listening.
AX
Okay, yeah. Can we use the new Nova support to write another, but shorter, ZK VM?
AW
That's a good question. I think in theory, yes. I'm not sure it would be the most efficient, but there actually were already, a long time ago...
AW
That was the case in projects 10 years ago. There's a project called TinyRAM where that's what they did: they basically took the state of the CPU and each time ran one opcode, and then had a recursive SNARK each time. Yeah, but maybe there are other approaches to leverage this proof system.
AX
AY
B
I was made aware of a little mistake that I made: the happy hour tonight is from 6 pm to 8 pm, not 8 am. It's not going the entire night, I'm sorry, guys, but it's still worth checking out the first floor after our last talk. And I am also certain that there will be some other cool things actually happening on the second floor, as part of the program of Devcon. After that, this is the one thing that I didn't tell you about in the other break.
B
So, the last thing that we're also doing as part of continuous Devcon is allowing community meetups, gatherings, initiatives and activities to happen in the evenings in the venue. That is anything from a chess tournament to, I think currently in the hacker basement, an evening in the Dark Forest; tomorrow we will have Autonomous Worlds in the hacker basement. Tonight, somebody just messaged me, we will have a demo session about Otterscan, Aragon and Sourcify in workshop room one. So it is definitely worth checking out this Google sheet.
B
But without further ado, I'd like to introduce our last speaker of the day. It's Chih-Cheng, and he is on the PSE team of the EF, and he likes building privacy- or scaling-related applications. Today he will share little things he has learned in developing Halo2 circuits. So a big applause and a round of welcome to him.
AZ
Oh, thank you for coming. Hi, I'm CC from PSE. Today I'm sharing some of my developer experience of programming circuits, ZK circuits. So we know that in a ZK-SNARK we have a prover and a verifier. They agree on the algorithm, and then the prover can provide a proof, the input and the output to the verifier, and the verifier can be convinced that the output of the computation is correct, corresponding to the input, without re-computing the algorithm. So there's lots of scaling and privacy that we are getting from these SNARKs, and I'm going to unroll this process a bit more.
AZ
It's not the full story, but as a circuit developer you are on the left-hand side: you define some constraints, and then a verifying key and a proving key, generated from these constraints, are sent to the verifier and the prover. We call this setup time. This is like when you finish the circuit development and deploy the project, and then it is ready. But at proving time, whenever you need to send a proof...
AZ
So, if you're coming from the normal programming world, for example if you write Python or Rust or that kind of thing, your brain works with functions, loops and if/else, this kind of good stuff. But when you enter the circuit world, you need to do the arithmetization and you need to get the computation trace. So you're thinking in terms of the trace of the whole computation, and you think about verifying these computations with math and equations.
AZ
So it's like that. Before we go into introducing some tricks and development tips, let's talk about the rules of the game, like how this circuit thing works.
AZ
So, first of all, all computation is represented in finite field arithmetic. A field element, you can think of it as an integer: a non-negative integer less than a number p. Let's say p equals 3. Then if you have two field elements, two and two, and you add them together, you actually get a result of one, because we need to do it modulo 3, modulo p. And p is usually a very large number.
AZ
For example, 254 bits. So the takeaway is that we need to represent your computation in field numbers instead of bits and bytes, and you need to watch out for the wrap-around and all that stuff. And then we get this grid of stuff.
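The wrap-around behavior described here can be sketched in a few lines of plain Python; this is just an illustrative model of field arithmetic, not anything from the Halo 2 API (the 254-bit prime below is the BN254 scalar field modulus, used only as an example of a realistic p):

```python
# Toy finite field arithmetic: every value lives in [0, p) and operations wrap modulo p.
p = 3
print((2 + 2) % p)  # 2 + 2 "wraps over" to 1 when p = 3

# In real circuits p is a ~254-bit prime, for example the BN254 scalar field modulus:
P = 21888242871839275222246405745257275088548364400416034343698204186575808495617
print((P - 1 + 2) % P)  # wrap-around still applies at the boundary: prints 1
```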
AZ
It's like a sheet of paper where you can fill field numbers into the cells, and you can expand this grid with more columns, but every new column you add is more costly. The rows are basically free, but they are capped at some limit, let's say 2 to the 18, which is roughly 260k rows. So ideally you want to use as many rows as possible, until,
AZ
maybe, you have more computation you need to do, and then you add more columns, trading proving cost against verification cost, something like that. And we have to distinguish some types of columns. These columns are distinguished by who can see what, and by when the values are assigned. So the pink one is the advice column.
AZ
It is only visible to the prover, and it is determined at proving time: the prover puts the witness values inside this column. Then we have the fixed column. The fixed column is assigned at setup time, so both the prover and the verifier have a copy. And then there is the instance column.
AZ
The instance column's values are visible to the verifier, so the prover can fill in the input and the output at proving time. That way we get the scenario from the first slide: the prover witnesses some private values and some public instance values for the input and the output, uses the proving key to generate a proof, and then the verifier can verify the proof, also taking the instance values. You can verify it in a contract or do whatever other verification you like, right?
Okay, so now we have the grid and we have the columns. We need to define what values in the cells are correct or valid, like what kinds of values are allowed to be put in those cells. So these are constraints, and I'm going to introduce two types of constraints in Halo 2. The first one is custom gates. This defines the constraints in polynomials, and a gate looks like this:
AZ
we choose some cells from column a, some cells from column b and some cells from column c. You can choose whatever columns you want, and you can choose the relative position, for example the cell in the next row relative to the current cell. And then you can define,
AZ
you can define a polynomial to constrain them, like: a must equal b plus the next cell of b. But a gate like this will be applied to all rows in the column, and sometimes we don't want that. That's why we have a selector here. The selector will hold a one or zero value: if the selector is one, then this whole expression here must equal zero,
AZ
and our constraint is enforced. But if the selector is zero, then the whole expression is already zero, so the constraint doesn't necessarily hold for that particular row.
AZ
Oh yeah, I just want to quote a sentence from Vitalik that I heard in his talk.
AZ
He said something like: everything is a polynomial, I'm a polynomial. So, to give an example, the classic example, a Fibonacci circuit: we want to prove the n-th Fibonacci number. We can fill the values one, one, two, three and so on into the grid, and we define a gate that says left plus right must equal the sum, which means left plus right minus sum equals zero, and we use the selector q_fib to determine which rows we want to enforce this constraint on.
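The gate just described can be checked numerically with a plain-Python model (not Halo 2 code; the trace layout and names are made up for illustration). Each row holds (left, right, sum), and the constraint q_fib * (left + right - sum) = 0 is evaluated over every row:

```python
p = 97  # a small prime standing in for the real field modulus

# Each row holds (left, right, sum); q_fib selects the rows where the gate is enforced.
rows  = [(1, 1, 2), (1, 2, 3), (2, 3, 5), (3, 5, 8), (0, 0, 0)]
q_fib = [1, 1, 1, 1, 0]  # last row is padding, so the selector is off there

def gate_holds(rows, q_fib):
    # The expression q_fib * (left + right - sum) must be zero on every row.
    return all((s * (l + r - out)) % p == 0 for s, (l, r, out) in zip(q_fib, rows))

print(gate_holds(rows, q_fib))       # True: a valid Fibonacci trace
print(gate_holds([(2, 3, 6)], [1]))  # False: 2 + 3 != 6, selector on
print(gate_holds([(2, 3, 6)], [0]))  # True: selector off, anything goes
```

Note how the selector-off row accepts any values, which is exactly why the selector column itself must be constrained, as discussed later in the talk.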
AZ
So here is one way to do it: we make the gate lay out flat, with left, right and sum on the same row, but you can see that we are using three columns here.
AZ
And this is the other way you can do it: you can fold the sum onto the next row, and it uses fewer columns, but it works the same way; it can still give you your desired computation result. So it's flexible: you can determine the shape of the gates depending on the problem you want to solve. And the second tool to constrain the grid
AZ
values is copy constraints. I have this yellow tape here that glues the values of two cells together, and if they are glued, the values assigned in those cells must be equal. The gluing must be determined at setup time; there is no way you can change it at proving time. But it's very cheap, so use it as much as possible.
AZ
So the final rule is that the prover can be evil. I like to think of the prover, or everything with a word like prover, verifier or miner, as a cyborg: a human who runs a machine, and the machine runs the algorithm, but humans are attracted by incentives and they could do all kinds of stuff. So if you didn't write your circuit right, the prover can witness wrong values and still convince the verifier.
AZ
Okay, so let's get into the tricks. Let's start with a simple one: how do we limit the options in one cell? For example, I want this cell to allow only the values one, two or three, nothing else. To do this, we can define a gate with this expression: the cell value minus one, times the cell value minus two, times the cell value minus three, equals zero.
AZ
If you plug three into the cell, this expression will be zero. But if you plug in 100, then the expression will be 99 times 98 times 97, which won't be zero, so the constraint is not satisfied. If you witness zero, it won't be satisfied either. Okay.
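The one-of-{1, 2, 3} check can be tried out numerically. A minimal plain-Python sketch, assuming the constraint polynomial (v - 1)(v - 2)(v - 3) = 0 just described:

```python
p = 2**31 - 1  # any prime works for the illustration

def allowed(v):
    # (v - 1)(v - 2)(v - 3) == 0 mod p holds exactly when v is 1, 2 or 3.
    return ((v - 1) * (v - 2) * (v - 3)) % p == 0

print([v for v in range(0, 101) if allowed(v)])  # [1, 2, 3]
print(allowed(100))  # False: 99 * 98 * 97 is nonzero
print(allowed(0))    # False: witnessing zero does not satisfy it either
```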
Next, let's convert if and else. For example, we have this sample program here.
AZ
We have input a and input b, and they are field elements, and happy is like a Boolean value. If happy, then we do a plus b; if we are not happy, we do a times b. So, for example, we could have a circuit that looks like this.
AZ
I used the glue to glue the public instance values to the private values. So in the left one, we can see that we witness a as 5 and b as 6 and happy as one, so we are doing the addition path, and we get the value 11. In the second one, we still have the same a and b values, but happy is zero, and the desired output is 30. We just gave an example here; we haven't constrained them yet.
AZ
The way you turn this if/else into a gate expression is: happy times (a plus b), plus (one minus happy) times (a times b), minus output, equals zero. So if happy is one, then the first expression is enabled and the second expression is disabled. The other way around, if happy is zero, then the second expression is enabled and the first is disabled.
AZ
So we have example values witnessed here, and these are all satisfied. But did you notice that we forgot something here? The prover can witness three for happy, and then 3 times (5 plus 6) plus (1 minus 3) times (5 times 6) gives minus 27, so with an output of minus 27 the expression still equals zero. So what's wrong here? It's because every input is a finite field element, but if we want a Boolean value here, we need another constraint to make it Boolean.
AZ
So we need this additional constraint here to limit happy to be one or zero, using the trick from before. Okay.
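Putting the two constraints together, here is a plain-Python model (hypothetical names, not the Halo 2 API) of the if/else gate plus the Boolean check on happy, including the happy = 3 attack just described:

```python
p = 2**31 - 1

def ifelse_gate(happy, a, b, out):
    # happy * (a + b) + (1 - happy) * (a * b) - out == 0
    return (happy * (a + b) + (1 - happy) * (a * b) - out) % p == 0

def boolean_gate(happy):
    # happy * (happy - 1) == 0 forces happy to be 0 or 1
    return (happy * (happy - 1)) % p == 0

# Honest proofs: happy = 1 selects addition, happy = 0 selects multiplication.
print(ifelse_gate(1, 5, 6, 11) and boolean_gate(1))  # True
print(ifelse_gate(0, 5, 6, 30) and boolean_gate(0))  # True

# The attack from the slides: happy = 3 satisfies the if/else gate with out = -27...
print(ifelse_gate(3, 5, 6, -27 % p))                 # True: under-constrained!
# ...but the Boolean gate rejects it.
print(boolean_gate(3))                               # False
```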
Next, we need to convert loops into the circuit. Let's start with the easy one.
AZ
This function initializes the variable r as 0, and then it runs the loop five times, adding 5 to r each time. Five is a constant, so it is easy to lay down the trace: the value of r starts with zero, and then it's plus five, plus five, plus five, all the way to the output, and then we copy-constrain the first value to 0 and the last value to the output. So we will have a gate expression like this. Easy.
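The unrolled constant loop can be modeled as follows; this is a sketch with made-up names, where the real circuit would express the same thing as a gate over adjacent rows plus two copy constraints:

```python
p = 2**31 - 1
N = 5  # fixed loop count, known at setup time

# Column r holds the running value on each row: 0, 5, 10, 15, 20, 25.
r = [(5 * i) % p for i in range(N + 1)]

# Gate on every adjacent pair of rows: r[i+1] - r[i] - 5 == 0.
step_ok = all((r[i + 1] - r[i] - 5) % p == 0 for i in range(N))

# Copy constraints pin the endpoints: the first cell to 0, the last to the output.
output = 25
print(step_ok and r[0] == 0 and r[-1] == output)  # True
```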
AZ
But what about this? This looks like exactly the same loop, except that how many times it loops is determined by the prover's input n. So there will be more tricks here, because,
AZ
we can imagine a computation trace that looks like this: for input 5 here, the output 25 is here; and another trace for input 3 here, with its output here. And notice that this algorithm,
AZ
this is basically a very inefficient way to compute 5 times n. Also note that this n could be arbitrarily large, but we don't have an arbitrarily large circuit, so we cannot do infinite loops; we can only do up to a certain amount of computation. So we need to restrict how big this n can be. Let's say we restrict it to five, and then we need another check to make sure that n is actually at most five.
AZ
Here we want to focus on how to constrain the computation of r inside. So first we do the copy constraints, for the initial 0 and for the output. But the copy constraint for the output can only be located at one fixed place, the same cell every time. So if n is 3, we need to repeat the result down to there. So one attempt to solve this is to add a prover-witnessed selector here.
AZ
So we put one, one, one here while we are still in the range of n, and then zero, zero here to do the repeating, and using the if/else trick we learned before, we can define the gate expression.
AZ
But then, how do you prevent the prover from doing this? What if the prover doesn't follow the rules, and witnesses zero, then one, then back to zero and one? If you do the math, you can realize the prover could witness n as 3 and make the output 10, and then convince you that five times three is ten. Yeah, it's a failure. People say blockchain is about trust; I would say blockchain is all about trust issues.
AZ
So let's observe the different cases of n: n is zero, n is three, n is five. We can see that for the valid cases, the selector s can start with zero, but once it is zero it can only stay zero for the rest of the rows. It can start with one, and it can turn from one to zero at any time, but once it turns to zero there is no way to turn back. So we can identify the state transitions like this.
AZ
This is a trick Han taught me. When you start, you can start at the "add" state, which means we add five. From the "add" state you can go back to the "add" state, or you can go to the "pad" state, which is zero, and there the r value remains the same.
AZ
But the point here is that once you enter the "pad" state, there is no way to go back to the "add" state. So we can define the constraint here, and actually the only thing we need is this: if the current cell s is zero, then the next cell must be zero. This constraint achieves that. We also need to make sure the selector is binary.
AZ
Now we have a final touch: we accumulate the selector values and copy the accumulated value back to the input n. This still follows the same rules for the prover-witnessed selector.
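The whole variable-length loop can be checked with a small plain-Python model, a sketch under the conventions above (selector column s, running column r, a cap of MAX_N rows): s must be binary, a 0 can never be followed by a 1, r adds 5 exactly where s is 1, and the selectors must sum to the claimed n:

```python
p = 2**31 - 1
MAX_N = 5  # the circuit only has room for five loop iterations

def trace_ok(s, r, n, output):
    # Selector cells must be Boolean.
    if any((si * (si - 1)) % p != 0 for si in s):
        return False
    # Once in the "pad" state (s == 0) there is no way back to the "add" state.
    if any(s[i] == 0 and s[i + 1] == 1 for i in range(len(s) - 1)):
        return False
    # r adds 5 where s is 1, and repeats the value where s is 0.
    if any((r[i + 1] - r[i] - 5 * s[i]) % p != 0 for i in range(len(s))):
        return False
    # Copy constraints: start at 0, end at the output; the selectors sum to n.
    return r[0] == 0 and r[-1] == output and sum(s) % p == n

# Honest prover: n = 3, so s is 1, 1, 1 followed by padding 0, 0.
print(trace_ok([1, 1, 1, 0, 0], [0, 5, 10, 15, 15, 15], 3, 15))  # True

# Cheating prover from the talk: s = 1, 0, 1, 0, 0 trying to claim 5 * 3 = 10.
print(trace_ok([1, 0, 1, 0, 0], [0, 5, 5, 10, 10, 10], 3, 10))   # False
```

The cheat is rejected twice over: the interleaved selectors violate the no-0-then-1 transition rule, and they also sum to 2 rather than the claimed n of 3.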
AZ
So, why are we doing all these tricks? Why,
AZ
why do we need to do this repeating and state-transition thing? Because in a CPU, we take instructions one at a time, and if you have an if/else branch, the branch you didn't enter you can just forget; you don't compute it. But for a circuit, you need to flatten all the computation: all paths need to be included in your circuit. So you can see here, even though we only witness input 3 here, using these three rows, we still need the other rows, and we need some dummy values there, because we need to consider the case where the prover witnesses 5 here and uses all of the space.
AZ
So the takeaways here: the challenge for the circuit developer is that we are working with the computation trace instead of the execution itself, so we need to flatten all possible paths of your computation. The second is that we are working with field elements instead of bits and bytes, so we need to do some math and equations there. And the third is that we need to work with a verification mindset, because the prover could witness
AZ
malicious values inside your circuits. The tricks we talked about here are: using a Boolean value, with one times the true path and one-minus times the false path, which can convert an if/else statement into a circuit expression; and for complicated loops you need to identify the state transitions of your program logic and then design constraints for them.
AZ
AX
I thought that in PLONK, yeah, the copy constraints were expensive. Can you expand on why they are cheap in Halo 2?
AZ
I think they are the same. I think the cost is amortized, in the sense that whether you use them once or a million times, it's the same cost. So I think, yeah, that's the idea.
BA
AZ
So the question is what kind of material I recommend reading. Okay, thanks. There are too many; I don't have one good recommendation, but I would recommend trying to find a simple circuit project, trying to run it, trying to tweak it, like removing something, and trying to study it. I think that would be a good way to learn how circuits work.
AX
You mentioned, yeah, here in the key points, that you have to prevent the prover from cheating. When you're writing circuits, how often do you find bugs because your circuit was under-constrained and the prover can cheat?
AZ
Oh, good question. So the question is how do we find out if there are bugs in,
AX
there, or how often does it happen that you find a bug because it's under-constrained and the prover can cheat, or because the constraints are wrong, the gates are wrong? Just trying to compare it to usual software development, where you just basically write the logic wrong, right? How often do you see that the gates you come up with are doing the wrong thing?
AZ
So when we are developing the circuit, oftentimes we realize, oh, we forgot this connection, and the prover could witness bad values. We find a lot of bugs in the development stage, but once you want to go to production, we would do some external audits and try to find as many bugs as possible. And when you really enter production, you can only cross your fingers. But I heard Vitalik has an idea that says you can use two provers for the roll-ups, and then if they can create valid proofs for two different outputs...
AX
Thank you. So, yeah, sorry for hijacking the questions, just let me know and I'll stop. When you write Halo 2 circuits, you're writing in and using the Rust library, right?
AZ
I definitely feel like Circom, or DSLs like that, are more readable, more auditable. But the thing is, when you don't have a DSL, you just have to, so...
AX
B
And this concludes the program of the Cold Forest stage today. Thank you all for coming. If you didn't get enough yet, you can still hang around on the second and first floors and in the basement; we will be closing the upper floors soon-ish. So if you want to network, hang around or whatever, make your way to the second floor or lower. And yeah, thanks all for coming, see you tomorrow again, and have a great evening.