From YouTube: 6. STARKs in sharding with Eli Ben Sasson, Uri Kolodny, Avihu Levy (StarkWare), and Justin Drake
Description
The Ethereum Sharding Meeting #2 - Berlin
6. Discussion: STARKs in sharding with Eli Ben Sasson, Uri Kolodny, Avihu Levy (StarkWare), and Justin Drake (Ethereum Foundation)
Resources: https://notes.ethereum.org/s/B1-7aivmX
---
Video: Anton Tal @antontal
Audio: Matteo Tambussi @matlemad
Producer: Chris Hobcroft @chrishobcroft
Executive Producer: Doug Petkanics @petkanics
For @livepeertv on behalf of @LivepeerOrg
B
Of the possible use cases, I don't know how to choose the best one of them to use, but I do think that if you take the freedom to design your own layer 1, then STARKs for scalability have a really good future. You can think of more than one scheme in which STARKs are used to create systems which are 10 to maybe a few thousand times more scalable on top of the current solutions, so I hope.

A
Okay, so it sounds like you're more interested in scalability than privacy, which I guess is something I prefer, and we're also potentially more interested in the base layer. In terms of applications that I'm aware of on the sharding side, one is making the fancy crypto that we have, like BLS, quantum secure, because that is one of our endgames in the design of Ethereum. But I guess another good one in terms of performance might be the idea of taking witnesses in the context of a stateless client and compressing those, I guess. I picked these two examples maybe because the underlying crypto is very easy, as in the circuits that you have to work with are relatively straightforward, maybe primarily hash-based. Is that the type of scalability you are thinking of, or the other type, which is maybe related to state roots, where you have this full, very complicated EVM, a huge circuit?

B
Okay, so basically I think that when it comes to payment systems, just a payment system, then it makes a lot of sense to even start with both a stateless client and compressing the state at the same time: generate the proof of the state and have the stateless client. And I would say even more. Okay, the basic idea of a stateless client is to say our blocks will be very compressed: there will be just a state root in each block, and the transactions themselves will be much more compressed. And why? Because once we don't have signatures, we can think of ways to basically stop paying for space in the block, and we stop paying for storage, and the only thing that we are paying for now is transaction size. And then we can think of ways to tweak this, for example, the way we represent our accounts and the way we represent our value. Maybe spread it over several types of value categories; that's the basic idea of sharding. And when you go in this direction, you can probably get scale, because you remove the signatures and you remove the storage. You can get a scale of maybe 10 to 20x on top of what Ethereum currently can do, and that's the advantage that you will have in layer 1 over layer 2.

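The stateless-client flow described here can be sketched in miniature: a block carries only a state root, and each transaction ships a witness (a Merkle authentication path) that lets a client holding no state check an account against that root. This is an illustrative toy only, using SHA-256 over a plain binary tree and a made-up account encoding, not Ethereum's actual trie.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def verify_witness(leaf: bytes, index: int, siblings, state_root: bytes) -> bool:
    """Recompute the root from a leaf and its Merkle authentication path."""
    node = h(leaf)
    for sib in siblings:
        if index % 2 == 0:
            node = h(node + sib)   # current node is the left child
        else:
            node = h(sib + node)   # current node is the right child
        index //= 2
    return node == state_root

# A two-leaf "state": the block only publishes the root.
left, right = h(b"alice:100"), h(b"bob:42")
state_root = h(left + right)

# A transaction spending from alice ships [right] as its witness.
assert verify_witness(b"alice:100", 0, [right], state_root)
assert not verify_witness(b"alice:999", 0, [right], state_root)
```

The point of the sketch is that the client stores only the 32-byte root; the sender supplies the data needed to check their own account, which is what makes the "everyone carries their own piece of data" model work.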
A
Thinking in terms of the data, are you thinking of a model where every user is aware of his own balance, but doesn't necessarily communicate all the deltas to the whole network, to all the miners? It's kind of more scalable in that fashion, because everyone's in charge of their own little piece of data, yeah.

E
And 500 kilobytes, depending on the proof length. And indeed we have brought this down; our engineering and, you know, science team has brought this down. Currently we're looking at, you know, the range of around 80 kilobytes for things that used to take 300 or 500 kilobytes previously, and the improvements come from many different places: a better understanding of the soundness of the protocols, and better engineering of various things.

E
Indeed, and I guess the biggest advantage of STARKs compared to the other technologies is the extreme scalability, which means that a verifier, even though he's seeing a computation for the very first time (say, described by a smart contract or by a computer program), the time needed to verify a claim of computational integrity about that contract or computation is exponentially smaller than the time needed to execute that contract. And this holds even for a contract seen for the very first time, without any trusted setup, any trust assumptions of any sort.

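The "exponentially smaller" gap can be made concrete with back-of-the-envelope numbers. The verifier constant and the log-squared shape below are assumptions chosen purely for illustration; real verification costs depend on the protocol and its soundness parameters, but the shape of the comparison (linear execution versus polylogarithmic verification) is the point.

```python
import math

def execute_cost(T):
    return T                      # re-running the computation: linear in T

def verify_cost(T, c=1000):
    return c * math.log2(T) ** 2  # assumed polylog verification cost model

T = 2 ** 40                       # a roughly trillion-step computation
# Even with a generous constant, verifying is vastly cheaper than re-executing.
assert execute_cost(T) / verify_cost(T) > 100_000
```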
A
Excellent. And Uri, a question for you. It sounds like you've identified the blockchain space as one area where you can do business, so StarkWare is this company that you've set up with, as I understand, most of the people who have the core knowledge in STARKs. From what I understand as well, StarkWare has no intention to go the traditional route, which is an ICO, or to do things like consulting; you have a different vision. What is that?

F
Yeah, so we presented a few weeks ago, back at Consensus in New York, and Eli was on stage, and he said there's no ICO, and the volume of emails coming in asking about participating in that ICO increased tenfold. So it seems like if you want to do an ICO, the most efficient marketing is to say you're not doing an ICO.

F
So
that
said
yes,
so
our
intention
at
this
point
is
to
see
if
we
can
commercialize
this
technology
without
having
our
own
Park
chain,
we
were
identifying
a
whole
set
of
applications
that
are
both
layer,
one
and
layer.
Two
and-
and
our
hope
is
that
we
can
be
one
of
the
first
participants
in
this
ecosystem
in
the
sense
of
sort
of
a
layer
of
college
technology
providers
to
block
chains,
and
it
remains
to
be
proven
that
sustainable
businesses
can
be
built.
That
way.
I
think
that
we've
identified
such
a
path,
I
think.
F
In
fact,
we
we
see
several
different
avenues:
that
building
a
sustainable
business,
but
I
think
that
quite
a
few
other
players
are
concerned
about
that
and
I
think
that
the
ecosystems
at
large
should-should
should
take
that
concern
seriously
in
terms
of
how
can
new
participants
we,
regardless
of
the
current
ridiculous
opportunity,
cost
item.
I,
hope
that
window
is
closing
in
the
sense
of
the
ease
of
minting
your
own
token
and
having
your
own
ice
he'll
be
even
even
once
that
speculative
bubble
goes
away.
F
This opportunity cost right now, I think, pulls a lot of people in that direction, but even once that speculative bubble pops, I think the industry should consider and think very hard about how to incentivize developers and outside parties to contribute knowledge to existing infrastructure and existing blockchains. I think that's a fundamental question that seems to me very much open.

F
So longer term, I think that custom hardware is one interesting direction. Providing services in the context of generating proofs, I think, is another interesting model, with recurring revenues there. The basic notion, I think it was alluded to, but I should just state this clearly: the basic notion is that whatever off-chain computation one wants to conduct can be done off chain, and...

F
...at AWS prices. And the only thing that needs to be done on chain is the verification of a STARK proof for that computation, which, as Eli said, comes at an exponentially lower computational cost. So we think that's sort of an appealing concept, in the sense that it allows a lot of people to consider doing meaningful off-chain computations and feed that into a blockchain infrastructure.

F
A
F
That's
that's
a
very
powerful
trait
of
proof
systems.
As
this
they
say
the
proof
is
in
the
pudding,
so
you
don't
need
to
go
and
interview
the
chef
inspect
that
in
the
kitchen,
all
you
need
to
do
is
look
at
the
proof
and
verify
that
that's
indeed
very
powerful
and
and
another
point
worth
noting
in
this
context-
is
that
to
the
extent
the
inputs
to
the
prover
are
shielded,
then
the
prover
has
no
no
way
of
of
in
any
way
censoring
the
participants
and
in
the
computation,
that's
also
very
important
assurance
I.
F
I think we're far from being done, and that's it. I think there's a lot of, call it, market development, or market education, in terms of both understanding what people want to do and, at the same time, explaining to them what it is that STARKs can do for them, and sort of getting them to think about their problems with that capability in mind.

E
You need to clock and optimize all these things, but they're also very, you know, STARK-friendly. And there are other things; like, well, okay, there you're going to lose quantum security, but things that work over prime fields, like Pedersen and Schnorr and things like that, are also quite elegant, as long as you're working over the same fields. So MiMC is definitely one candidate.

A
Because I guess it would be really to StarkWare's advantage, and to the advantage of the whole ecosystem, if we could somehow standardize around a hash function which everyone would use to build the protocols, and that would kind of make these protocols much more approachable to STARK provers. Yes.

D
I have the most obnoxious question possible, but I'm just curious: what do you expect as a timeline for seeing STARKs used everywhere? Because it definitely feels like one of those technologies that can be a fundamental base for tons of other applications and systems.

E
Well,
I
mean
you
know,
we'd
hope
for
them
to
be
used
tomorrow.
Realistically,
you
know,
as
with
all
new
technologies
takes
time,
so
hopefully,
let's
say
within
a
year
or
you
know,
to
18
months,
we'll
start
seeing
some
industry
grade.
You
know
stuff
that
people
can
use
I
just
want
to
mention.
The
academic
code
is
already
available
on
lip
stark
on
github
under
MIT
license.
So
those
were,
you
know,
fearless
can
already
start
using
it
there.
You
know
some
examples
there
and
interfaces.
E
There is sort of a TinyRAM-like interface, and you can build your own AIRs (these are algebraic intermediate representations) and start using the system already. But realistically, I'd say, you know, one year to 18 months, though I should maybe, you know, qualify this: making predictions about deploying systems, as everyone here knows, is a very risky business.

G
I'm going to demonstrate my ignorance with this question, but I'm going to ask anyway, because I'm so excited about this technology. You guys talked about the size of proofs and how that's come down quite a bit recently. Is it also the case that computing proofs takes a lot of time, a lot of resources? And if so, is that under active research, and how is that looking?

E
Computing proofs takes a lot of time. STARK provers are the fastest provers out there already, by a factor of 10 over the second fastest, and by many more factors over, you know, the rest, and I urge you to look at the STARK paper, which is also available online. There is, you know, a bunch of figures there that show the running times in single and multi-thread. I just want to say, from a ten-thousand-meter view: provers are the heaviest part in all proof systems, and they will likely remain that way.

E
In
all
proof.
Systems,
specifically
the
stark
prove
er
already
currently,
is
extremely
lean
to
the
point
that
it
would
require
I
think
immense
breakthroughs,
theoretical
ones
to
improve
on
it,
theoretically,
not
on
the
engineering.
So
the
engineering
there's
a
huge
room
for
improvement,
the
orders
of
magnitude,
many
of
them,
but
theoretically,
so
without
going
into
details
to
prove
you
take
an
execution
trace,
you
apply
to
it
a
single
FFT
which
takes
time
T
log
T,
and
after
that
you
have
a
fully
parallelizable
computation.
That
takes
you
six
times.
You
know
some
parameter.
That
is
linear.
E
In
T
arithmetic
operation,
so
we're
a
very
simple
field:
that's
the
Fri
prover
and
basically,
then
you
apply
hashes
again
fully.
Parallelizable
I
think
it's
very
unlikely
that
we
will
see
significant
asymptotic
n't,
we're
already
looking
at
what's
called
strictly
quasi
linear
running
time,
o
of
T
log
T,
and
we
don't
even
know
now
we'll
say
something
as
a
theoretical
computer
scientist.
E
We
don't
really
know
of
any
generic
reduction
from
an
NP
language
to
let's
say
three
set,
which
is
the
simplest
of
you
know,
and
the
most
fundamental
one
of
np-complete
languages
and
the
original
you
know
NP
completeness
paper
of
of
cook.
We
don't
know
of
any
reduction
from
a
generic
language
to
3sat
that
takes
less
than
n
log
n
time
and
work
with
with
Stark's
were
already
at
the
n
log
in
upper
bounds.
So
I,
don't
believe,
prove
ur
time.
E
Asymptotic,
you
know,
mathematical
will
go
down
anytime
soon,
though
I
hope
to
be
pleasantly
surprised,
there's
a
lower
bound
of
T.
You
know
and
we're
already
have
an
upper
bound
of
T
log
T
and
reducing
that
log.
T
is
gonna
be
very
hard,
but
you
know:
building
systems
using
good
engineering
can
shave
things
off
with
several
orders
of
magnitude,
and
there
were
very
optimistic.
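The "single FFT which takes time T log T" is, concretely, a number-theoretic transform over a prime field: evaluating the execution trace, viewed as a polynomial, across a multiplicative subgroup. A minimal sketch follows; the prime 998244353 (an NTT-friendly textbook choice with generator 3) and the radix-2 butterfly are illustrative, not the field or algorithm of any particular STARK deployment.

```python
MOD = 998244353  # 119 * 2**23 + 1, so large power-of-two subgroups exist

def ntt(coeffs, invert=False):
    """Iterative radix-2 NTT over GF(MOD); len(coeffs) must be a power of two."""
    a = list(coeffs)
    n = len(a)
    # Bit-reversal permutation puts the recursion's leaves in place.
    j = 0
    for i in range(1, n):
        bit = n >> 1
        while j & bit:
            j ^= bit
            bit >>= 1
        j |= bit
        if i < j:
            a[i], a[j] = a[j], a[i]
    length = 2
    while length <= n:
        # Primitive length-th root of unity (or its inverse for the inverse NTT).
        w_len = pow(3, (MOD - 1) // length, MOD)
        if invert:
            w_len = pow(w_len, MOD - 2, MOD)
        for start in range(0, n, length):
            w = 1
            for k in range(start, start + length // 2):
                u, v = a[k], a[k + length // 2] * w % MOD
                a[k], a[k + length // 2] = (u + v) % MOD, (u - v) % MOD
                w = w * w_len % MOD
        length <<= 1
    if invert:
        n_inv = pow(n, MOD - 2, MOD)
        a = [x * n_inv % MOD for x in a]
    return a

trace = [3, 1, 4, 1, 5, 9, 2, 6]          # toy "execution trace"
evals = ntt(trace)                         # O(T log T) evaluation step
assert ntt(evals, invert=True) == trace    # the inverse recovers the trace
```

The two nested loops do O(T log T) field operations in total, which is the quasilinear bound being discussed; everything after this step in the description above (FRI layers, hashing) parallelizes.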
G
You talked yesterday, in the context of, like, finality in a blockchain, about how certain types of applications do or don't make sense. You know, so for making a point-of-sale purchase, maybe you have a ten-second limit, and in certain types of transactions we might need faster finality, or slower finality is okay. So, based on what you just talked about, are there certain types of applications which you think will or will not work well with STARKs, given the complexity and the amount of compute time required to compute the proof?

E
I
think
that
end-user
and
again
I
urge
you
to
like
download
the
stark
lip
stark
and
play
with
it:
z,
cache
style.
You
know
shielded
transactions
already
on
the
academic
code.
Probably
if
you,
if
you
match
the
numbers,
take
very
few
seconds
on
a
any
standard
laptop
that
will
likely
go
down
by
you
know
an
order
of
magnitude
or
two
so
I
think
it's
completely
reasonable
that
a
smartphone
can
do
a
shielded
transaction,
stark
style
in
a
second
or
less
that's
very
reasonable.
E
The
main
power
of
Starks
again
is
in
the
huge
scale,
computations
they're,
you
know
the
prover
is
the
bottleneck.
I
think
that
there
as
well,
we
will
at
some
point,
see
specifically
tailored
systems
and
possibly
Hardware
dedicated
for
it
that
will
dramatically
increase.
You
know
latent
sorry
decreased
latency
and
make
them
these
systems
very
attractive
to
everyday
use.
I
was
just
so.
E
There
was
a
speaker
earlier
that
was
that
showed
a
picture
of
this
HPC
cluster,
that
that
does
what
was
it
like
more
than
a
petaflop
of
or
forgot
how
many
and
was
just
thinking
wow?
If
you
know,
if
we
had
a
petal
number
of
the
kind
of
operations
that
we
need,
you
know
you
could
probably
compress
the
the
proving
of
the
validity
of
all
of
bitcoins
blockchain
to
you
know
you
could
prove
all
of
that
or
at
least
half
a
chain
or
something
like
that,
and
then
a
smartphone
could
verify
the
validity
of
the
utx.
B
I only mentioned a few use cases; there are many more. So in terms of Ethereum, you can think of what I mentioned on Friday; let's do it quickly. There's the privacy solution for shielded transactions. This is the most...

B
...the one that people already know and are familiar with. I already explained a little bit about two approaches for scalability. The first one is from the point of view of the transmission size, and the other one, which can save much more, is the computation perspective, meaning that you can generate a proof of any off-chain computation and have the contract validate it. I also mentioned compression of the chain, in the sense that clients don't have to validate state or download all the blocks; they can just validate a proof.

B
I
can
mention
things
also
outside
of
the
of
the
blockchain
itself.
For
instance,
you
can
validate
that
exchanges
do
hold
enough
assets
in
front
of
their
liabilities
for
their
users.
This
is
something
that
Starks
specifically
can
be
useful
and
good
at
because
of
the
large
scale
of
the
processes
sometimes
need
to
generate,
and
there
are
also
some
other
simple
use
cases
that
still
relevant
for
themes
such
as
layer,
1
solutions,
for
instance,
replacing
VLS
signatures,
which
star
basse
scheme
that
me
we
maybe
talk
about
later
today,
I.
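The exchange-solvency idea can be sketched with a summed Merkle tree: every internal node hashes its children together with the sum of the balances beneath it, so a single root commits both to the account set and to total liabilities. This is an illustration of the kind of statement being proved over, not StarkWare's construction; the account names, 8-byte balance encoding, and power-of-two account count are all assumptions of the sketch.

```python
import hashlib

def leaf(user: bytes, balance: int):
    """A (hash, balance) pair for one account."""
    return (hashlib.sha256(user + balance.to_bytes(8, "big")).digest(), balance)

def parent(left, right):
    """Hash the children together with their combined balance."""
    total = left[1] + right[1]
    data = left[0] + right[0] + total.to_bytes(8, "big")
    return (hashlib.sha256(data).digest(), total)

accounts = [(b"alice", 100), (b"bob", 42), (b"carol", 7), (b"dave", 1)]
level = [leaf(u, b) for u, b in accounts]
while len(level) > 1:
    level = [parent(level[i], level[i + 1]) for i in range(0, len(level), 2)]
root_hash, total_liabilities = level[0]
assert total_liabilities == 150   # the root binds the liability sum
```

A proof over this structure is large precisely because it touches every account, which is the "large scale" regime where a STARK prover's throughput matters.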
F
I want to give one more example. We're sort of in the midst of conversations with Brave, and what Brave needs to do is demonstrate to the general public, once a month, that tokens were fairly divided based on users' usage and preferences, etc. Now, this is not something that would generally require a supercomputer, but doing it on chain is prohibitively expensive from their perspective.

F
I
think
it's
a
very
elegant
test
case
of
Starks,
because
this
is
something
needs
to
be
done
once
a
month
right
now,
they're
sort
of
relying
on
band
Brendon,
Ike's
reputation,
so
to
speak.
You
know
in
a
real
way
and
and
in
a
positive
way,
but
he
fully
realizes
that
this
needs
to
change
and
the
trust
needs
to
be
established,
not
because
he's
a
trusted
party,
but
it
needs
to
be
established
through
the
protocol.
E
Looking forward, beyond decentralized blockchains: I guess we all share the belief that at some point central trusted parties, you know, governments or various, you know, important players, will start using blockchains for auditability and accountability and things like that. So, you know, if any one of these institutions, a bank, a big exchange, not crypto, starts putting, say, Merkle commitments of the daily state of the system onto a...

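A Merkle commitment of the kind just described is tiny to publish: hash the day's records pairwise up to a single root and put only that root on chain. A minimal sketch, using SHA-256 with duplicate-last-node padding for odd levels; the record format is made up for illustration.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Root of a binary Merkle tree over the given leaf byte-strings."""
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:              # duplicate the last node on odd levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

daily_state = [b"acct-1:balance=100", b"acct-2:balance=42", b"acct-3:balance=7"]
commitment = merkle_root(daily_state)   # a 32-byte digest, published on chain
assert len(commitment) == 32
```

Any later change to any record changes the root, which is what makes the published commitment auditable after the fact.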
C
Some of the obstacles, quickly. So, first of all, a lot of the STARK constructions are done over binary finite fields, and currently doing operations over binary finite fields inside the EVM is totally not efficient. So unless we want this to be kind of gated to eWASM 2.0, starting exclusively with that, you would need a hard fork to add the precompiles for, like, basically, binary finite field operations, though it could be done pretty generically; like, you basically need...

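For a sense of what such a binary-finite-field precompile would compute, here is multiplication in GF(2^8) using the AES reduction polynomial x^8 + x^4 + x^3 + x + 1. The field size and polynomial are illustrative choices only; STARK constructions use much larger binary fields, and this is exactly the kind of bit-twiddling that is cheap in native code but costly in EVM opcodes.

```python
def gf2_8_mul(a: int, b: int) -> int:
    """Carry-less multiply of two bytes, reduced modulo 0x11b (the AES polynomial)."""
    result = 0
    for _ in range(8):
        if b & 1:
            result ^= a          # "add" (XOR) the current shift of a
        b >>= 1
        carry = a & 0x80
        a = (a << 1) & 0xFF      # multiply a by x
        if carry:
            a ^= 0x1B            # reduce by x^8 + x^4 + x^3 + x + 1
    return result

# Known AES inverse pair: {53} * {CA} = {01} in this field.
assert gf2_8_mul(0x53, 0xCA) == 0x01
```

Every step here is XORs and shifts with no carries, which is why dedicated precompiles (or a binary-field-friendly VM) make such a difference compared to simulating it with 256-bit modular arithmetic.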
C
And I've read that on Ethereum we're looking at, like, using up eight million gas for a proof, like, yes, legitimately very expensive. So you need to come up with, possibly, like, some kind of application-specific batching technique or whatever that would make that practical for the average user of a system.

F
Sure, yes, so we're hiring. We think of these guys as bilingual, in the sense that they are very strong engineers and very strong mathematicians. The intersection of these sets is smaller than one would hope (probably not in this room, but out in the world), and I think that's the main characteristic of our R&D team at the moment, together with the realization that for the foreseeable future we're going to run in parallel, pushing on both the engineering front and the science front, and that's something that we're fully prepared to do.
