From YouTube: 9. Discussion: Casper loves sharding and everything
Description
The Ethereum Sharding Meeting #2 - Berlin
9. Discussion: Casper loves sharding and everything with Vitalik Buterin, Justin Drake, Karl Floersch, Danny Ryan, Hsiao-Wei Wang (Ethereum Foundation), and Everyone
Resources: https://notes.ethereum.org/s/B1-7aivmX
---
Video: Anton Tal @antontal
Audio: Matteo Tambussi @matlemad
Producer: Chris Hobcroft @chrishobcroft
Executive Producer: Doug Petkanics @petkanics
For @livepeertv on behalf of @LivepeerOrg
C: So, I mean, one thing that definitely stood out to me is the peer-to-peer network. I remember multiple months ago, talking with Hsiao-Wei, trying to figure out this pub/sub thing and saying, you know, just use libp2p, and then realizing that that's just not going to scale. So hearing that it's still definitely an issue, that stood out, you know, right away.
D: We're not certain whether it will work or not, so we should probably do some load testing. I know some of the teams are interested in writing code against the beacon chain, and I think that you should, to get more familiar with the spec and to open a dialogue about some of the edge cases and things that we all need to think through from an implementation side.
D: So if you want to write some code, write some code. I think we were talking about opening a Gitter channel, a kind of beacon chain Gitter channel, where we can discuss the beacon chain spec specifically and discuss some of these proof-of-concept implementations further, as we've been doing already this weekend.
D: One interesting question I've heard from a few different sources, and I think I have a clear understanding of this, but addressing it would really fill things in a little more for some people, is the VDF. To some people the VDF sounds like, okay, we're just burning hash cycles again. How is this not just burning hashes? It looks like proof of work all over again.
E: So one thing I guess I should mention that maybe wasn't clear is that the VDF idea is actually quite recent, and we haven't figured out all the details in terms of why the VDF isn't like a race to zero where everyone burns a lot of electricity. I guess the reason is, one, it's proof of sequential work, as opposed to proof of parallel work, and two, the game is prone to being a monopoly: basically, whoever is fastest at providing these VDF outputs and proofs can be the timekeeper.
E: This is a participant in the network that provides a heartbeat. So in terms of attacks that can happen, there are two, which Vitalik found. Number one is the more obvious one: the main problem is basically an attacker with customized hardware who can compute the VDF faster than everyone else.
E: So there are two bad things they can do. Number one is to not publicly reveal the VDF outputs, but still know them before everyone else, and basically be able to grind the entropy pool, which is based on RANDAO, in order to get a favorable random number at the other end. And I guess the second attack is on the ability to keep the five-second heartbeat.
E: Benchmarking needs to be done in terms of what is the theoretical optimum you can get with state-of-the-art ASICs. What can we do with GPUs and CPUs? What can we do with FPGAs? What can we do with a very basic ASIC, which wouldn't cost that much money? And then make an informed decision based on that.
B: So to answer again the question of why this would not just go back to a proof-of-work race, more clearly: basically, because it's proof of sequential work, your ability to do more work by burning more money is very sharply limited. A billion-dollar attacker may be able to complete the work something like 5 times faster than someone with a few hundred thousand dollars.
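To make the sequential-work point concrete, here is a toy sketch (my own illustration, not the Ethereum 2.0 spec; real VDF candidates use repeated squaring in a group of unknown order plus a succinct proof, not iterated hashing). Each step consumes the previous output, so adding parallel hardware buys nothing; only a faster serial processor helps:

```python
import hashlib

def toy_vdf(seed: bytes, iterations: int) -> bytes:
    """Iterated hashing as a stand-in for a verifiable delay function."""
    out = seed
    for _ in range(iterations):
        # Step i+1 needs the result of step i: the work is inherently serial.
        out = hashlib.sha256(out).digest()
    return out

# Beacon output: apply the delay function to the RANDAO-derived seed, so that
# by the time anyone can compute the output, it is too late to bias the input.
randao_seed = hashlib.sha256(b"aggregated validator reveals").digest()
beacon_output = toy_vdf(randao_seed, 10_000)
```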
B: So one example is that you can combine the VDF mechanism with hash commit-reveal. You can, for example, encourage specific validators to lay down a hash of a pre-image, and then lay down the pre-image of the hash, as well as the VDF result, some time later. That would basically give that particular staker a kind of monopoly right on the ability to make that particular VDF solution, and then you could assign them some kind of reward.
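A minimal sketch of the commit-reveal half of that idea (illustration only, not spec code; the names are mine):

```python
import hashlib, os

# The designated validator lays down a commitment on-chain first...
preimage = os.urandom(32)
commitment = hashlib.sha256(preimage).digest()

# ...and some time later reveals the pre-image alongside the VDF result.
def check_reveal(commitment: bytes, revealed: bytes) -> bool:
    """Anyone can verify the reveal matches the earlier commitment."""
    return hashlib.sha256(revealed).digest() == commitment

assert check_reveal(commitment, preimage)
```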
B: Simple RANDAO basically has the problem that any particular participant can exert one bit of influence on the RANDAO by simply not showing up, and this can affect short-term randomness. So it can affect, say, the sequencing of who is going to be a proposer and so forth. Now, there are ways to mitigate the effect of that.
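The one bit of influence can be seen in a toy model (my own illustration): the last revealer can compute the result both with and without their contribution and withhold whichever outcome they dislike.

```python
import hashlib

def mix(acc: bytes, reveal: bytes) -> bytes:
    """Fold one participant's reveal into the accumulated randomness."""
    return hashlib.sha256(acc + reveal).digest()

acc = b"\x00" * 32
for i in range(9):                                # nine honest reveals
    acc = mix(acc, hashlib.sha256(bytes([i])).digest())

attacker_reveal = hashlib.sha256(b"attacker").digest()
outcome_if_reveal = mix(acc, attacker_reveal)     # option 1: show up
outcome_if_skip = acc                             # option 2: stay silent
# The attacker picks whichever of the two outcomes favors them:
# exactly one bit of bias.
```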
B: Now, look, there are things potentially that we could do to improve the economic parameters. For example, having a penalty for not showing up, say at largest 1/8, could be an okay thing, especially if we create a lottery where, if you do reveal correctly, then you get a reward. But even still, the level of manipulability is fairly high.
D: There was some talk about how the signatures between blocks, if we go with the attestation design, how we effectively communicate these signatures; I haven't thought about it too much. Is that something that fits into the libp2p or the pub/sub layer? Is that something that any of the teams have thought about?
E: I mean, Vitalik was talking about signature aggregation, but there are some designs which are basically already naturally sharded aggregation. So if, for example, we fix the committee size, then every shard will produce these cross-links aggregating a thousand signatures, and then these cross-links get put into the beacon chain, and the proposer of the beacon chain doesn't have to do the aggregation; it's already been done in the shards. I'm personally not too worried about it, but yeah, that's a really cool property of BLS, that you can do these incremental aggregations.
B: You can pass along an aggregate in a tree structure, and the tree structure doesn't even need to be explicit; it can just emerge in the peer-to-peer network. You could have individual nodes that specialize in aggregating signatures for some particular slice of the validator space, and if you do that, then with two rounds of communication, aggregating 50,000 BLS signatures goes down to the work of aggregating a couple of hundred for any single node.
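A toy model of that two-round structure (my own illustration; real BLS aggregation adds points on a pairing-friendly curve rather than integers, but the key property is the same: aggregation is group addition, so partial aggregates can themselves be aggregated):

```python
P = 2**255 - 19   # stand-in prime; a real scheme works on the BLS12-381 curve

def aggregate(sigs):
    """Aggregation is just addition in the group."""
    return sum(sigs) % P

leaf_sigs = list(range(1, 50_001))       # pretend: 50,000 individual signatures
chunks = [leaf_sigs[i:i + 250] for i in range(0, len(leaf_sigs), 250)]

round1 = [aggregate(c) for c in chunks]  # round 1: 200 aggregators, 250 sigs each
round2 = aggregate(round1)               # round 2: combine the 200 partials
assert round2 == aggregate(leaf_sigs)    # same result as one giant aggregation
```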
C: So one thing that I definitely got out of this was that people are interested in these incremental steps, right. There's a lot of uncertainty around, maybe, the cross-shard communication, for instance; that's something that we haven't even really talked about outside of last time.
C: And at least one kind of incremental thing is, definitely, you have the peer-to-peer stuff, but then also there was this Casper FFG, with all the votes and those kinds of messages. And definitely learning about doing a beacon chain where you don't actually fill in the details, and you use just kind of stubbed randomness, but getting that FFG mechanism working to finalize the main chain.
C: I mean, okay, I'm just going to say plasma is a design pattern which you can use. Okay, maybe the only complaint about plasma, before I go into my emotional rant: my complaint with plasma is that it has been used as a kind of umbrella term for a huge number of vaguely proposed protocols, and that is a little bit lame, right.
C: I totally agree with that, and there's a lot of marketing, what the plasma implementers would call good marketing, you know. But I think that the general concept of plasma is definitely a unique thing: using a more secure root chain to provide guarantees on transaction ordering.
C: On, you know, transactions that are not submitted on chain until the last minute, for instance. This is definitely something that is useful, and also the Merkle proofs. And I think it spurred a lot of interest in building scalable smart contracts, which I think is a huge positive. So I don't think that's all it is: okay, yes, it has this hype, and yes, it has this marketing.
B: And it definitely is a framework. So, I mean, first of all, we can think about possible deviations that could conceivably happen. One possible deviation is basically going right back to making all of the logic of the beacon chain fold right back into the proof-of-work chain, and basically just doing a hard fork of the proof-of-work chain that adds beacon chain logic.
B: My main argument against this is, basically, number one, it would require a hard fork, which is a higher level of load on client developers. And number two, it would be more risky, because a lot of the simplifications that we're making fairly significantly leverage the fact that the beacon chain has finality, and that simplifies a lot. For example, if the shard chains can only depend on the finalized portion of the beacon chain, then that basically means you don't need fancy independent fork choice rules, and that simplifies a lot of client logic. You don't need to worry about the idea that something which was a cross-shard dependency of some shard transaction becomes...
B: It becomes rolled back, because the only thing that can roll back is stuff within the same shard, and so it creates abstractions that are nice in a bunch of ways. If you fork the proof-of-work chain, then it doesn't have those abstractions until everyone agrees to use and trust the proof-of-stake fork choice rule. So I do think that that approach is inappropriate, at least for the early stages, but it is a conceivable strategy.
B: Or some tangle-hashgraph-DAG thingy, TM, (R), copyright, patent pending, in order to process data on shards. I personally don't favor that, basically because those kinds of protocols tend to be more complicated and don't have fork choice rules that are, you know... basically, it's like having a... okay, fine, go ahead.
K: I see, yeah, so, and sorry for interrupting in this weird way, but I guess the basic question from before was: do we have a general framework? That would be genuinely interesting for me as well. Like, are we just going to stick to it? I mean, I've seen the design evolve over time, right.
J: Yeah, so...

B: So, for example, we've had the concept of a central beacon chain of some kind pretty much always, throughout the entire design, even since literally 2015. The concept of the beacon chain being temporarily separate from the main chain is a new idea, but at the same time, realistically, give it a couple of months before it settles. And we did move toward this approach from the other approach for a reason.
B: There are other things that are fairly set in stone. One example is the concept of proof of stake, where you deposit, and then you become a validator, and then you do stuff. Or the concept that you have some form of random sampling, and you can assign a validator to some data over here, then to validate some data over there, and these random samples are used as a way of assigning work.
B: So even if not a whole spec, just part of it. To be fair, even Ethereum 1.0 did not end up hitting spec freeze until something like a year after development started and six months before launch. And there is a rationale for saying, oh, maybe we should have not bothered with receipt roots or whatever, and just stuck with the PoC-5 spec plus security audits, and that would have gotten things out faster. But that's...
C: I personally think it may be a good idea to come up with some kind of document that outlines, in a very clear, cohesive manner, all the different pieces, right. Because first there was the friendly finality gadget, where the slashing conditions needed to be worked out. And then there was the question of how block proposal works, and for a while it was just proof-of-work block proposal, and we kind of called it a day.
C: But then we realized that the Poisson process wasn't really going to cut it, and we wanted these five-second incremental block times, and so going straight to a beacon chain made a lot of sense. It also felt good because we were already doing hybrid Casper. But then, once you have the beacon chain, it's like, okay, block proposal: how are you going to do block proposal for each one of these shards? And then once you get there, it's like, okay...
C: What EVM are we running, and what cross-shard communication do we have? And all this stuff requires a really robust peer-to-peer network. So I feel like these things exist in some minds, but they're not really very well communicated right now. So you know what I'll do? I will
C: happily help work on a document, make pretty pictures, and make it all fit into a cohesive story, because there is a cohesive story that has been going on with the development team and with Vitalik and Justin. It's just hard, because there are so many details; this is a huge, huge project. But we are getting there. Vitalik is the bomb, okay.
K: Maybe you're going to take this as a joke, but I guess maybe you should just write a book about it, about the story. And that's actually not a joke, because what I'm trying to say is that a lot of these decisions that you guys are making on the protocol do make sense in some context, if you have all the previous...
B: ...context of what kinds of alternatives have been considered. To be fair, the book has been written; it just needs to be aggregated, I would say. Which echoes the criticism we've gotten in a lot of places: there are all these ethresear.ch posts, and Medium posts, and livestreams and whatever, and that needs to be in one place more, which we will try to do more of, yeah.
E: That means that we don't have this rigid framework, and we're happy to pivot quite fast. I guess the advantage is that we get to really explore lots of different design paths, and so there's this feedback loop where our intuition keeps getting better and better, and it feels like the unexplored dark corners in the design paths are starting to get eliminated one by one. So there's definitely this feeling of progress.
E
Another
thing
is
that
every
once
in
a
while
you
know
we
make
like
a
10x
improvement
and
then
these
10x
improvements
kind
of
compound
on
each
other.
So
you
know
at
the
station's.
Bls
proofs
are
custody.
You
know
this
may
be
each
kind
of
order
of
magnitude,
improvement
and
so
I
think
there
is
value
to
to
try
and
get
as
many
of
these
as
possible.
I
guess
once
the
dust
has
settled
and
we
feel
comfortable
that
we
explored
a
lot
of
the
design
space
and
and
that
our
design
solves
our
requirements.
E
I
guess
there
will
be
a
phase
of
a
peer
review
of
simplification
and
then,
at
the
very
end,
what
I'd
like
to
have
is
kind
of
formal
proofs,
maybe
have
have
some
sort
of
formal
model
and
work
a
little
bit
like
what
Cardno
is
doing
with
maybe
slightly
less
pedantic
and
actually
write
proofs
that
there
are
design
is
its
coherent
and
then
maybe
even
go
down
the
route
of
formal
verification
for
specific
components.
I
think.
B: One thing I like doing in protocol design is trying to see if we can write down a list of desiderata and then basically derive the protocol from the desiderata and show that it's pretty much the only option. And there are some ways in which we are getting somewhat closer to that. For example, there are results about properties that you want from the fork choice rule; there are results about what we want in terms of efficiency.
B: Being able to show that no approach is substantially better than the one you end up with is what gets you more stability. So I guess one example of that is, if you look at proof-of-stake research, I feel like it was very much floating in the air two to three years ago, and what we had was a whole bunch of ideas. But now we have the Casper FFG family, we have the CBC family, and there's a formal proof that FFG works.
B: That basically means that things are much more set in stone in that way, right. The next thing that I personally want to see becoming more set in stone is proof-of-stake fork choice rules: basically, prove that there are choices of structure, but there is basically one and only one way to make a good fork choice rule that satisfies a bunch of properties. Right, yeah, that's pretty much my philosophy too.
B: For example, that you cannot revert finality without more than fifty percent plus epsilon of all validators, even assuming everyone else is offline and you control fifty percent minus epsilon. Another one might be properties about the difficulty of censoring selectively. And you want these results to be as independent of the random number generator as possible.
B: So if you can move toward proving results like that, then that comes closer to basically saying: okay, this is a fork choice rule, and we know for a fact that we can prove these properties, and that it's impossible to get properties that are asymptotically better. And then we have something that we can settle on, something that's not going to change.
B: Imagine if computing the timestamp of when a piece of data was published was a mathematical function that you could evaluate, so you could compute when a piece of data was published to the Internet the same way we can compute its hash. Then we would not even need consensus, and we would have a hundred percent fault tolerance. That's optimal.
B
Unfortunately,
we're
not
gonna
get
anywhere
close
to
that.
But
you
know
I
feel
like
the
in
general
in
a
very
wide
sense
right.
The
optimal
protocol
is
one
that
has
the
properties
we
wants
to
a
reasonably
close
approximation
and
where
we
can
prove
that
you
cannot
get
better
properties
without
sacrificing
something
else
that
we
care
about
even
more
and.
C: I think, to build on that, at the same time, perfection in terms of the process would also be defining, step by step, how we actually get there, implementing each piece incrementally, and getting close enough that we can support a decentralized application ecosystem. We don't need the absolute highest TPS; we just need, at least for now, enough to get us to the next step, to make the developer experience good enough, and then pass it on to Danny and...
C: And also transaction security. What I really like is the idea that you can sign a transaction, and depending on what kind of transaction it is, you get different levels of security, and you're okay with different levels of security: the longer you wait, the more ingrained it gets, and if you sign it immediately and it's a state channel, then you get some level of instant finality.
B: So one example of a developer experience that's definitely good enough, at least from the point of view of a blockchain, is something equivalent to what the current Ethereum blockchain provides, plus somewhat shorter block times if we can get them. So the goal of sharding,
B: from an experience point of view, is not to make the experience better; it's to massively increase capacity without making the experience much worse. If we can avoid decreasing the developer experience at all, then that's the obvious optimum, and if we can't, then from there it's a kind of bargaining game of, well, you can decrease it this much; maybe you can do a bit better, but at the cost of a large number of changes to the protocol.
B
But
then,
if
there
was
a
large
number
of
changes
are
gonna,
be
our
extreme
complexity.
Maybe
it's
better
to
do
it
at
layer.
Two
from
a
rollout
standpoint.
My
personal
view
is
that
we
want,
if
the
rem
2.0
it
to
be
something
that
like
actually
can
promise
sort
of
higher
levels
of
immutability
in
certain
respects.
C: I think, also on developer experience: I know this is a sharding workshop, but it's certainly important to reiterate that every single person here is among the most knowledgeable people about Ethereum, and so any work that you can do to spread the information of how to build decentralized applications and how to reason about them matters. We have this adversarial thinking that we've been talking about this entire time: okay, if the attacker does this, what do we do?
C: How is our protocol robust against these kinds of manipulations? This is something that, for application developers in the broader space, is not at the front of their consciousness, but it is at the front of all of ours, because we're living in this decentralized, peer-to-peer kind of system. So at least one of my biggest things is: okay, how do we, you know...
C: Okay, the sharding protocol has a thousand different components, because it is using the bleeding edge of decentralized tech, but each one of these components can be reused: for things like plasma chains, for things like state channels, for reasoning about other economic mechanisms that exist off chain, even ones that exist on centralized servers. So it's really this mentality that I think is the most important. So let's develop sharding, let's build these protocols, but then let's also keep decentralization alive.
D: I think it's a trade-off, certainly, but I think that's the best way to manage the complexity, at least from a research standpoint. If the research isn't there, then kick it to layer 2 for the time being, knowing that there are probably going to be growing pains and people transitioning, and try to address those earlier rather than later.
E: What's nice is that the research that we're doing for sharding is very modular; ethresear.ch posts tend to be very small ideas. And just this morning TrueBit announced that they would be using proofs of independent execution together with their forced errors. So that's a piece of sharding research which is being used at layer 2, and I think we're going to see more and more of that happen as soon as we educate people.
C: Just one example of this kind of layer 2 / layer 1 thing: there have been multiple decentralized Twitters that have come up on Ethereum and exist on Ethereum, and Twitter is definitely a good use case, in the sense that we do want some kind of decentralized, censorship-resistant stuff. But if we were to scale this decentralized Twitter, we clearly can't publish every single tweet on chain, and we don't really need everything to be on chain for Twitter.
C: If you use hash links, then you know the ordering of the tweets for one particular user. You have private keys, and you do need on-chain pieces for identity, for sending money, for putting down bonds for your tweets; there are a million mechanisms that you can use on Ethereum. But developer experience is also saying, okay...
B: So in the current spec, so far, all of the incentive accounting happens on the beacon chain, and the sharding side is just a stub right now. The intention is definitely for beacon chain deposits plus rewards to be withdrawable into shards, and for there to eventually be a pathway for moving ether from the main chain straight into shards. So the link is sort of really thin that way.
K: Yeah, I have a different question, just because Swarm came up. It was kind of a thought from this event: it's funny, because the Swarm team has been doing its own research, also on things like proof of custody, for a long time, and it's actually supposed to be a reliable system to store things and ensure that they stay available. They have a messaging solution available and things like that.
K: But then the question is, as far as I can see, the sharding protocol does require built-in proof of custody to ensure that the immediate history stays available. Couldn't that kind of facility also be provided by Swarm, since it has the proof of custody anyway?
B: So I think the main problem is that a proof that a validator has custody of a data blob needs to be included in the consensus layer, first of all, and second, to get the guarantees that we're looking for, it has to come from specifically the randomly selected subset of validators.
B: Yeah, so our goal with the random sampling, plus proof of custody, plus proofs of execution, is to try to move the model from being honest-majority to being kind of an uncoordinated, economically rational majority. So basically, unless a very large portion of participants are coordinating to attack, it would be in people's interest to participate in ways that keep the system running safely. Yeah.
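A minimal sketch of the proof-of-custody idea (my own toy model, not the spec): the validator mixes a private secret into a commitment that touches every chunk of the data, so it cannot honestly attest to data it never held.

```python
import hashlib

def custody_commitment(secret: bytes, data_chunks: list) -> bytes:
    """Chain the secret through every chunk; skipping any chunk changes the result."""
    acc = secret
    for chunk in data_chunks:
        acc = hashlib.sha256(acc + chunk).digest()
    return acc

# The validator publishes the commitment when attesting. Later, when the secret
# is revealed, anyone who holds the data can recompute the commitment and have
# the validator slashed if it doesn't match.
chunks = [bytes([i]) * 32 for i in range(8)]
commitment = custody_commitment(b"validator secret", chunks)
```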
C: Just to expand on that a tiny bit: the biggest problem with plasma is this data availability thing, right? You don't really know if the plasma operator has published all of the data for the block; they could have published just the block header or a Merkle root while withholding a lot of transactions. So you can use this sharding infrastructure, which is basically a short-term availability solution, so that you know that this data is downloadable, or else it will not be included in a shard.
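A toy illustration of why a Merkle root alone isn't enough (my own sketch, not any specific client's code): a client can hold a valid root while the leaves are unobtainable; randomly sampling chunks catches withholding with good probability.

```python
import hashlib, random

def merkle_root(leaves):
    layer = [hashlib.sha256(l).digest() for l in leaves]
    while len(layer) > 1:
        layer = [hashlib.sha256(layer[i] + layer[i + 1]).digest()
                 for i in range(0, len(layer), 2)]
    return layer[0]

chunks = [bytes([i]) * 32 for i in range(16)]
root = merkle_root(chunks)            # the operator publishes only this...
published = dict(enumerate(chunks))
del published[5]                      # ...while quietly withholding one chunk

# Availability check: sample random indices and demand each chunk (with proof).
samples = random.sample(range(16), 8)
data_available = all(i in published for i in samples)
# Sampling half the indices misses the withheld chunk only half the time;
# real schemes add erasure coding so withholding is nearly impossible to hide.
```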
C: You can use that for a plasma chain to provide the guarantees that you're missing in most constructions, and so basically what you're doing with that is mitigating this mass-exit vulnerability, which is not as bad in certain circumstances, but essentially you don't want your plasma operator to ever be able to force users to exit, because that is just a griefing vector. So you can kind of solve this, which is cool.
B: So one kind of open question is what level of efficiency for STARKs is actually possible, and that's kind of a dark corner with a positive meaning, because if it's possible to get STARKs down to some absurdly low proof size, then there are way more applications to use them for than just aggregating signatures and doing VDFs.
B: Verifying block correctness, recursively verifying block correctness, verifying block availability; then we can get rid of fraud proofs in many cases. And if you have extremely fast, general-purpose, succinct proofs, then that massively helps a lot, even for layer 1 scalability, and it opens the door to super-quadratic scalability in a bunch of ways.
B: Do I personally feel comfortable with shoving half of my ether into this particular model? Are there centralization incentives, or do the anti-centralization incentives actually work? Or will everything just coalesce into, like, two pools that are controlled by Bitfinex and whoever else ends up running the elected nodes or whatever?
B: We have sort of philosophically come up with a bunch of mitigations and a bunch of ways to make proof of stake work better in practice, but this is stuff that has not been run in the wild, and until it has been run in the wild, there is a certain degree of unknowability that's not going to be crossed.
B: There are technical elements to this, for example the practical feasibility of running certain kinds of decentralized pools. But I think the extent to which proof of stake will succeed at being decentralized is definitely as social as it is technical.
B: Basically, it's the extent to which centralized providers are capable of providing more convenience, and the extent to which users value this convenience. I do think that the centralization-versus-decentralization battle in proof of stake will be fought along those dimensions even more than along the economic ones, and that is definitely the kind of unknown that can't be as easily put into equations.
B: Another one is incentives that have to do with capital lockup: how the market will deal with the capital lockup of validator slots, how that lockup will end up being priced, and how people will choose to participate in that versus participating by, say, putting their deposits behind state channel constructions or interactive verification constructions, and the extent to which those will interact and compete with each other.
B: A validator i does a secret share and gives participant j a piece x_ij, and then participant j gets all of the x_ij pieces and Lagrange-interpolates them all together. And then you have elliptic curve verifiability on all of that, and this all happens off chain. It's a primitive, and academics have a name for it, but at the same time it is hard.
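A toy sketch of the share-and-recombine step just described (illustration only; a real distributed key generation adds elliptic-curve commitments so every share is publicly verifiable):

```python
import random

P = 2**127 - 1  # a prime field for the toy example

def make_shares(secret: int, threshold: int, n: int):
    """Shamir sharing: hide the secret as f(0) of a random polynomial."""
    coeffs = [secret] + [random.randrange(P) for _ in range(threshold - 1)]
    def f(x):
        return sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P
    return [(j, f(j)) for j in range(1, n + 1)]   # piece x_ij for participant j

def lagrange_recover(shares):
    """Lagrange interpolation at x = 0 recombines the pieces into the secret."""
    secret = 0
    for j, (xj, yj) in enumerate(shares):
        num, den = 1, 1
        for m, (xm, _) in enumerate(shares):
            if m != j:
                num = num * (-xm) % P
                den = den * (xj - xm) % P
        secret = (secret + yj * num * pow(den, P - 2, P)) % P
    return secret

shares = make_shares(secret=123456789, threshold=3, n=5)
assert lagrange_recover(shares[:3]) == 123456789   # any 3 of 5 pieces suffice
```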
B: One thing it will do, I think, is reduce the difficulty of creating centralized staking pool solutions that have some degree of trustworthiness. That is somewhat worrisome, because it will increase the extent to which people are willing to trust them, but it's also really good, because even if, say, 40% of all the staking ether ends up being locked inside one of them, it would still be relatively safe because of the trusted hardware itself.
C: Phil has a question. While he gets ready, I'll just name a dark corner of my own: cross-shard communication developer experience. That is a question for me, and the actor model, if we decide that asynchronous calls between contracts are the way to go, is another big question, right? How do we actually make that a nice developer experience? And the migration path for Solidity and Vyper to that is also another question. So it's basically a lot of work, though not technically deep.
B: Another dark corner is the migration from this sharding spec to radically different sharding specs that may be better. One example of this is super-quadratic sharding that might be enabled with things like STARKs, with very efficient proofs of correctness and proofs of data availability.
B: Another example is various kinds of dynamic sharding schemes. I would personally favor just saying there is an entire category of complex, spooky stuff that's not allowed for Ethereum 2.0, and we should just be okay with waiting five to ten years for it to come in Ethereum 3.0, and we should be willing to say that something like five to ten years from now is the expected release date of Ethereum 3.0, that type of thing.
L: So one fun thing that Vlad and I were discussing, related to the trusted hardware comment earlier, was that you could actually build, let's say, a pool for storage, and then not have to reveal the secret in the proof-of-custody construction to whoever is running the pool, plus use availability bonding to make sure that, essentially, their risk from being offline is greater than yours from being slashed.
B: Another thing is that, at least the way I'm trying to design the protocols, we want to have redundant components that try to ensure the correctness of properties in redundant ways. One example of this is that for shard block validity, you have the random sampling mechanism, you have proofs of custody with validator bonds, and you have the potential for various forms of client-side data availability checking. The goal is definitely to have multiple primitives.
E: I mean, what I gather is that, for example, Dfinity and Kadena kind of work privately, and then once they're happy with the ideas, they release a white paper. It's possible that in one of these white papers there's going to be a 10x idea that we want to incorporate, and that might lead to a redesign.
B: So, for example, the current sharding spec can handle up to four million validators, and I think a reasonable upper bound on how many validators we definitely don't need more than is basically the world population, so realistically around eight billion or so, and even with much less than that you still get very high levels of decentralization. Eight billion is definitely a number that we could get to with, like, a decade of Moore's law all by itself, if we need to.
C: And I'm feeling good about solidifying portions of the spec, and very much about communicating and talking with Hsiao-Wei about, sorry, I keep talking about this peer-to-peer network, but other things too, not just that. Also the finality gadget with the concept of a beacon chain: I feel like there could be an interesting way to do that where we can have some kind of testnet that we don't have to pivot away from very easily, because, I mean, the finality
C: logic is still there; there is a chain that exists concurrently. The big decision is: do we somehow go back on this beacon chain idea? I think that's definitely a key thing in my head, and I don't see why we would at this moment, so it seems reasonable that the beacon chain will survive in some capacity. And if you have a chain that finalizes another chain, and a peer-to-peer network that is sharded, you are now not very, very far away.
B: That reminds me of the title of one level of a video game I played like 10 years ago, where one of the levels basically had the start and the end be literally three meters away from each other, but separated by a cliff that was just too wide to jump over, and so you had to do this really big, long journey. That took many hours.