From YouTube: Casper CBC / Sharding - Vlad Zamfir
Description
Some proofs and Q/A about Casper (CBC) and sharding.
Recording and editing by https://twitter.com/alexboerger
B
That's kind of like a technical conversation about consensus, and specifically about this consensus protocol that I've been working on. I published a paper on it, and I'm still, you know, working on publishing more and more information about it. So this is like an opportunity for you guys to get some pre-publication information, or some information to clarify the published stuff.
B
So I would like to do that, if no one has any objection. But I see a question. Yeah, oh yeah, sure. So, hi, I'm Vlad Zamfir. I'm a researcher at Ethereum Research, slash, like, the Ethereum Foundation. I work on consensus protocols and proof of stake, predominantly. I have a few side projects, but that's pretty much what I work on: consensus protocols and proof of stake. I guess I'm, like, a consensus protocol engineer.
B
Consider this kind of structure, where we have objects called protocol states and morphisms between them called protocol state transitions. And, you know, if there's a transition from one object to another, and a transition from that one to a third, then there's also going to be a transition from the first to the third.
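The composition property he's describing can be sketched as a reachability check. This toy model (the names and graph representation are mine, not from the talk) treats protocol states as graph nodes; "a transition exists" then just means one state is reachable from another.

```python
# Minimal sketch: protocol states as nodes of a directed graph. Composition
# of transitions (sigma1 -> sigma2 and sigma2 -> sigma3 implies
# sigma1 -> sigma3) falls out of modeling "transition exists" as reachability.

def futures(state, transitions):
    """All states reachable from `state`, including itself (reflexivity)."""
    seen = {state}
    frontier = [state]
    while frontier:
        s = frontier.pop()
        for t in transitions.get(s, ()):
            if t not in seen:
                seen.add(t)
                frontier.append(t)
    return seen

# Toy transition graph: s0 -> s1 -> s2.
transitions = {"s0": ["s1"], "s1": ["s2"]}

# Composition: s2 is in the future of s0 even though no direct edge exists.
assert "s2" in futures("s0", transitions)
```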
B
So basically I'm saying, oh look: there's a category of protocol states and protocol state transitions, and then there's going to be a map from protocol states to statements about the consensus, called the estimator. So there's this thing called the estimator that maps these protocol states, which I'm going to denote like this, to propositions about the state of the consensus. So this would be something like: oh, the consensus is zero. The consensus is one. Oh, the block at this height has this hash.
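As a purely illustrative estimator for a binary consensus (this encoding is my assumption, not from the talk), one could map a protocol state, here a set of (sender, vote) messages, to a proposition by majority vote:

```python
# Hypothetical estimator for a toy binary consensus: a protocol state is a
# set of (sender, vote) messages, and the estimator maps it to a proposition
# like "the consensus value is 0" by simple majority. The tie rule is also
# an illustrative choice.

def estimator(state):
    votes = [vote for _, vote in state]
    zeros, ones = votes.count(0), votes.count(1)
    if zeros == ones:
        return None  # no estimate on a tie
    return f"the consensus value is {0 if zeros > ones else 1}"

state = frozenset({("alice", 0), ("bob", 0), ("carol", 1)})
print(estimator(state))  # -> "the consensus value is 0"
```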
B
So, basically, if we have the property hold for every protocol state in the future, then that's somehow called safe. So a value of the estimator, or something that the estimator kind of implies, is safe if that value holds for any future protocol state. So if this block of height ten has this hash at this protocol state and at all future protocol states, then we call that block safe, or we call the proposition "the block at this height has this hash" safe. And then, basically, you know, by the way, there's also a state transition from every state to itself. So I didn't say that this state also has to satisfy it, because, well, if all future states satisfy it, and every protocol state is a future state of itself, then I kind of get that for free. Okay. So now we're going to get to the safety proof, the kind of key part.
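The safety definition can be brute-forced on a small explicit state graph. This sketch (the state names and the height-ten encoding are mine, for illustration) checks that a proposition holds at a state and at all of its futures, with reflexivity giving "holds here too" for free:

```python
# Safety as described: a proposition p is safe at state sigma if p holds at
# sigma and at every future protocol state. Brute-force check over a small
# explicit transition graph.

def futures(state, transitions):
    seen, frontier = {state}, [state]
    while frontier:
        s = frontier.pop()
        for t in transitions.get(s, ()):
            if t not in seen:
                seen.add(t)
                frontier.append(t)
    return seen  # includes `state` itself: every state is a future of itself

def safe(p, state, transitions):
    return all(p(s) for s in futures(state, transitions))

# Toy example: once a block hash is recorded at height 10, it never changes.
transitions = {"a": ["b"], "b": ["c"]}
hash_at_10 = {"a": None, "b": "0xabc", "c": "0xabc"}
p = lambda s: hash_at_10[s] == "0xabc"

assert not safe(p, "a", transitions)  # not yet decided at "a"
assert safe(p, "b", transitions)      # holds at "b" and all its futures
```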
B
That also evolves to sigma prime, then we're going to have the kind of following property: that we're not safe on the negation of P. Because if we were safe on the negation of P, then we would also be safe on the negation of P there. But actually, because of a property of this guy that I haven't talked about, it's impossible to be safe on P and on the negation of P at the same state. That's kind of intuitive, because you can't even have both P and not P hold at any state.
B
This is just a normal kind of, like... oh, I don't remember the name of the rule, but you just get rid of the implication, and then by De Morgan's rule we get: not (safe(P, sigma 1) and safe(not P, sigma 2)). So this conclusion here is exactly consensus safety. It basically says that, oh look, we don't have safety on P and safety on not P at states sigma 1 and sigma 2.
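Written out, the unnamed step is material implication, followed by De Morgan (this is my reconstruction of the blackboard algebra, not a transcription of it):

```latex
\begin{align*}
  &\mathrm{safe}(P,\sigma_1) \implies \lnot\,\mathrm{safe}(\lnot P,\sigma_2) \\
  \equiv\;& \lnot\,\mathrm{safe}(P,\sigma_1) \lor \lnot\,\mathrm{safe}(\lnot P,\sigma_2)
      && \text{(material implication)} \\
  \equiv\;& \lnot\bigl(\mathrm{safe}(P,\sigma_1) \land \mathrm{safe}(\lnot P,\sigma_2)\bigr)
      && \text{(De Morgan)}
\end{align*}
```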
B
So it turns out that this statement here, that safety on P at sigma 1 implies the absence of safety on not P at sigma 2 if sigma 1 and sigma 2 have a common protocol future, is the same as saying that decisions on safe values are consensus safe. Or, more specifically, it says that P and not P are not both safe at sigma 1 and sigma 2, respectively.
B
Basically, all of these protocols are going to work on the following kind of premise: we're only going to make decisions on safe values. I should maybe have mentioned that earlier: all the decisions that protocols are going to make are going to be on safe values. And so the decisions are going to be consensus safe for any two protocol states that have a common protocol future, by kind of this argument that says that, oh, if they have a common protocol future, then it's not the case that they're safe on some proposition and its negation.
B
So somehow this is like the basic shape of the safety proof, and then the next part is basically to guarantee that nodes have a common protocol future as long as there are fewer than some number of Byzantine faults. And so, basically, the "if we have a common protocol future, then we have consensus safety" part is the part of the proof that I've shared with you.
B
You know, for decisions on safe estimates. And then the kind of part that I didn't share, the next part, which, if you don't stop me, I'll go to, is that nodes have a common protocol future: we construct the protocol so that nodes have a common protocol future as long as there are fewer than some number of Byzantine faults.
B
So we have protocol states, protocol state transitions, an estimator that maps protocol states to propositions about the consensus, and a definition of safety that says: oh look, some proposition is invariant in all future protocol states. We have this notion that, oh, if P is safe at sigma 1 and there's a transition from sigma 1 to sigma 1 prime, then it's also going to be safe there. Which additionally means that, for anything that transitions to that, you're not going to be safe on its negation, because then you'd have to be safe on both P and not P.
B
Which is impossible. And then this kind of gives us a kind of distributed consensus safety, a distributed safety, for any protocol states that share a future protocol state in common. So if you and I are at protocol states, and we share a future protocol state in common, then any decisions we make on things that are invariant over our futures have to be consistent, because we share this protocol state in common, where we could both end up, and where all the things that are safe for each of us would both be true.
B
So that's the basic setup for all these protocols, and then the things that vary between them are: oh, what are actually our protocol states? What is this estimator map? But in terms of the basic safety proof, the basic setup, it all remains unchanged, which is why it's pretty cool, one of the cool things, and also, like, why we can generate protocols and make changes to them without changing the proof a lot, or at all. And that ends up being really useful, because you don't have to redo it.
A
B
They're a really great hammer in distributed systems that really kind of provides a really strong guarantee of replication, and they make it easy to reason about how to do a lot of stuff. Because, if you have a consensus protocol, you don't need to think in a distributed fashion as much when you're designing decentralized systems.
C

B
But let me say that when something is safe here, we're talking about a kind of local notion of safety: that a node will never reach a protocol state where something doesn't hold. We're not talking about consensus safety, except in the context of this kind of distributed safety proof. So the interesting thing about this is that it bridges the gap between a local notion of safety, or like an invariance, and a distributed one.
B
It also has this proof hold in the context of a hundred percent Byzantine faults, and it's also non-trivial, meaning that it can actually decide on two inconsistent values. Because if you have a protocol that never decides anything, then you can satisfy this quite easily: you can always have common future protocol states if you never make any irreversible decisions. It's the kind of irreversible decisions that make two possible protocol states not share protocol futures.
B
So what ends up happening in some kinds of consensus protocols is, at some point nodes will kind of be bivalent, and at some point they're going to be completely committed on a value. And it's possible, with a hundred percent Byzantine faults, that nodes will end up, one node up here and one other there. Here, they'll be safe on zero.
B
Here, they'll be safe on one. But they don't have consensus safety, right? And so really, when you're talking in the language of the consensus problem, you're not talking about the local safety issues; the local safety stuff is just fine. You're talking about the consensus failure, meaning a lack of distributed safety due to an increased number of Byzantine faults, which is something that fits perfectly well in this framework. But this framework does nothing to guarantee the non-existence of Byzantine faults; that's more in the kind of economics and governance.
A
B
But, you know, we do have models, and basically, at the end of the day, I think the foundational example is: you have a smart contract, and it wants to pay Alice to send a message to Bob, or penalize them if it doesn't happen. But if Alice fails to send the message, or Bob fails to send the proof that Bob received the message, then the contract doesn't know whose fault it is, and so somehow we have, like, a trade-off, right?
B
What is the amount of participation for a given utility function, for a given level of perceived Byzantine faults, for a given background rate of interest? Like, how many deposits will show up? And then think about: okay, as an attacker attacks and increases people's perceived rate of Byzantine faults,
B
how fast does participation fall? Because, basically, fundamentally, the kind of most effective attack in any of these protocols is first to discourage participation and then to commit the intended faults, because the fewer deposits there are, the easier it is to attack. And so, like, this kind of game, where anyone can show up to play the "Alice sends a message to Bob" game, is, I
B
think, the simplest example I've been able to think of with all of the same kind of challenging economic features as the problem of incentivizing this specific protocol, but, kind of more broadly, incentivizing consensus. The first thing to think about is: okay, well, we want to incentivize people to follow a particular protocol, assuming that we solved the problem of consensus. And so we need to be able to detect people's behavior, detect the deviations from that protocol, and penalize those.
B
But specifically, what we want to do is penalize the ones that cause a degradation to the quality of the protocol. So imagine that there is a failure, and there are some Byzantine faults, and some of those Byzantine faults caused the failure and some of them didn't. The ones that caused the failure are somehow much more culpable, and those are the ones that, if you penalized them, you'd actually make the cost of failure much higher.
B
But if you penalize people for any Byzantine faults, whether or not they caused a failure, then you increase the risk that someone will be penalized just due to faulty hardware and software, and that decreases participation and reduces security. And so this is kind of what we need: the protocol needs to infer the participants' behavior and penalize only Byzantine behavior, like truly malicious behavior, in order to not penalize honest behavior and in order to, like, try to maximize the cost of attack. My philosophy is very much: maximize the cost of attack
B
first, and I try to do that inside, like, tractable models, because things can blow up quickly. I mean, even with quadratic utility it turns out to be really hard to parameterize these things, but I feel like we're getting somewhere. Oh, so we have this economics thread and this distributed systems thread. I guess, you know: next question, or I can pick, choose-your-own-adventure. You know, anyone?
B
So, is this different than Casper the Friendly Finality Gadget, which is the kind of protocol Vitalik's working on, which is kind of an overlay on top of proof of work that finalizes checkpoints? Sorry, the question is: how can I compare this to the finality gadget, and do I plan on proving the safety of the finality gadget in this framework? And I have kind of thought about that, right: like, letting the protocol states of the finality gadget be the protocol states in here, and having an estimator, and it definitely seems to work.
B
Guaranteeing that nodes always have a common future protocol state, because, well, if my protocol state is A, where A is like a set of messages, and your protocol state is B, well, then we have a common future protocol state called A union B. And this is great, because, like, oh look, we guarantee that we have common future protocol states, which means that we have consensus safety, right? But, unfortunately, because every two states have common future protocol states,
B
we never end up in an event like this, where nodes, like, don't have a common future protocol state, which is, like, specifically what you need for non-triviality, right? You need to be able to make inconsistent, irreversible decisions. So what we're going to do is kind of this, right: we're going to say, okay, we're going to do this, but any protocol states that have more than some number of Byzantine faults, we're just going to delete.
B
So if A union B has more than some number of faults, if the fault count of A union B is more than T, some threshold, we just delete that state. How exactly we figure out the fault count from a set of messages, maybe I'll talk about that a little bit. But with this setup, now we have: okay, two nodes may not have a common future protocol state, so, one node having seen message A and another not, a node can notice that it has missed messages.
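A minimal sketch of the construction just described: protocol states as message sets, A ∪ B as the candidate common future, and deletion of any state whose fault count exceeds the threshold T. The fault-counting rule here (counting senders who send conflicting messages under the same sequence number) is a simplification I'm assuming for illustration, not the talk's actual rule:

```python
# Protocol states are sets of (sender, seq, payload) messages. The union of
# two states is their candidate common future, unless it exhibits more than
# T faults, in which case it is not a protocol state at all ("deleted").

def equivocating_senders(msgs):
    # A sender equivocates if it sent two different payloads under the same
    # sequence number (a stand-in for "no single execution explains both").
    seen, eq = {}, set()
    for sender, seq, payload in msgs:
        if (sender, seq) in seen and seen[(sender, seq)] != payload:
            eq.add(sender)
        seen[(sender, seq)] = payload
    return eq

def common_future(a, b, t):
    union = a | b
    if len(equivocating_senders(union)) > t:
        return None  # union exceeds the fault threshold: state deleted
    return union

a = frozenset({("v1", 0, "x"), ("v2", 0, "y")})
b = frozenset({("v1", 0, "z"), ("v3", 0, "w")})  # v1 equivocates across views

assert common_future(a, b, t=1) == a | b   # one fault tolerated
assert common_future(a, b, t=0) is None    # zero tolerance: no common future
```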
B
But first, I guess, let me do an overview. So, okay, it would be great if we just made protocol states sets of messages and allowed any two protocol states to have a common future by looking at the union of those messages, because then we'd just have consensus safety for everything. But we can't do that, because then we would have triviality: any two states would have common future protocol states, which means that no state has ever made any kind of irreversible decision of any consequence.
B
So here is, like, maybe a story. So let's call this a set of messages A, and this is a set of messages B, and they have some intersection, right? And these paths represent sequences of messages from a validator. So this is like a validator making a bunch of messages, and some of those messages are in the intersection and some of them are not. So this validator here.
B
So all of these validators are honest, but validator A doesn't see those messages and validator B doesn't see this message. But there's going to be one validator here, oh, that goes there, and then, and there, and they kind of equivocate in a way that, like, A and B both see them, and they both seem to be honest in A and in B, but in the union of their views you can detect that this validator has equivocated.
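The story can be made concrete in code: a sender who looks honest within each view separately, while the union of the views exposes the equivocation. The (sender, seq, payload) message shape and the consistency rule are assumptions of this sketch, not the talk's definitions:

```python
# "Honest in a view" here means: no sender has two conflicting messages with
# the same sequence number within that view.

def is_consistent(view):
    seen = {}
    for sender, seq, payload in view:
        if seen.get((sender, seq), payload) != payload:
            return False  # two different messages for the same (sender, seq)
        seen[(sender, seq)] = payload
    return True

view_a = {("eve", 3, "block-X"), ("alice", 1, "m1")}
view_b = {("eve", 3, "block-Y"), ("bob", 2, "m2")}

assert is_consistent(view_a) and is_consistent(view_b)  # honest in each view
assert not is_consistent(view_a | view_b)               # exposed in the union
```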
D
B
You mean relative to proof-of-work, you mean, or, like... okay, yeah. So, I mean, the cool thing about this setup is that it turns out that, like, for example, in Casper the Friendly Ghost, we can finalize blocks, you know, in the sense of asynchronous Byzantine fault-tolerant protocols, with the same network overhead as Nakamoto consensus. And also, when we produce blocks, they really just need to have a signature, and so we're...
B
Yeah, so, yeah, I kind of lost the listener here. Basically, when I was talking about the state transitions and why these states, like, would or wouldn't be allowed: even though the state transition is meant to be, like, the superset relation, it's basically only the superset relation from sets of messages to other sets of messages that don't exhibit too many Byzantine faults.
B
So in this case, or really, it's only a state... like, yeah, so, in this case, the state transition from B to B union A would introduce a Byzantine fault that wasn't observed in just B. And so that's kind of the story there. The story is that, like, okay, well, A is a protocol state and B is a protocol state, but A union B is not, because A union B exhibits too many Byzantine faults.
B
Of faults, really, there are invalid messages and equivocations. These are faults that aren't indistinguishable from network latency. Then there are these faults called liveness faults, which are indistinguishable from just sitting there, or latency, and liveness faults can't cause safety failures in asynchronously safe consensus protocols. So in an asynchronously safe protocol, liveness faults don't cause safety failures, and liveness faults are indistinguishable from network latency, right? So it's only the faults that aren't indistinguishable from network latency that could really cause them, and then those basically look like, well...
B
Something's indistinguishable from network latency if it's the result of a different resolution of race conditions, and so anything that just comes down to, like, ordering of messages and timeouts doesn't count. And so, basically, any way that you can run the protocol in a valid way...
B
One of them is to do an invalid state transition, to, like, go from one state to another state where there's no transition, and the only way that would be evidenced is with an invalid protocol message. So, basically, through the messages that you see from a node, they're going to evidence its having been at different protocol states, and unless all of those have protocol state transitions through them, then it's not plausible that it's been honest.
B
So, for example, if I have a protocol node that's exhibited this state and this state, well, there is no single state transition through those, and so there's no valid way to have executed that protocol. And this is what an equivocation kind of looks like: oh, there's no way for you, as a single-threaded protocol execution, to have hit both those points, and so instead we speculate that, oh, you must have run the protocol more than once, or run modified versions of the protocol.
B
Then it's... it couldn't have caused a safety failure. And the safety failures are basically going to be caused by things that can't be valid protocol executions, so it basically comes down to invalid messages, invalid protocol transitions, and running multiple versions of the protocol. So an invalid transition will just jump randomly, and running...
D

F

B
So the question is: do I see a use case where T is greater than zero, and the answer is, like, yes. Well, any small tolerance... we really want to have more than zero tolerance, normally. T is the Byzantine fault tolerance kind of number; people normally like to have a number like a third, but that's... or T is, yeah. So, you know, it's definitely kind of important to be able to maintain common protocol futures with someone in this context.
B
Yeah, so the cool thing that I've been able to do with this protocol is to allow every node to have their own fault tolerance threshold. So I can run my node at a fault tolerance threshold of, like, 30%, you can run yours at a fault tolerance of 50%, and, like, you won't lose consensus safety with anyone... if there are less than 50% Byzantine faults, you won't lose consensus
B
safety with anyone with a tolerance threshold of 50 or more. I won't lose consensus safety with anyone with a threshold of 30 or more if there are fewer than 30% Byzantine faults. And so, basically, in some way the fault tolerance threshold actually is not part of the protocol. It's something that the client will, like, input, or that is, like, not...
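The per-node threshold claim can be stated as a one-liner. This is a simplification of the talk's claim for illustration (treating thresholds and fault levels as fractions, and ignoring how faults are measured), not the actual protocol logic:

```python
# Two nodes keep consensus safety with each other as long as the actual
# fraction of Byzantine faults stays below both of their client-chosen
# thresholds; the threshold is an input, not a protocol constant.

def safe_with_each_other(threshold_1, threshold_2, actual_faults):
    return actual_faults < min(threshold_1, threshold_2)

# My node tolerates 30% of faults, yours tolerates 50%.
assert safe_with_each_other(0.30, 0.50, actual_faults=0.20)      # both safe
assert not safe_with_each_other(0.30, 0.50, actual_faults=0.40)  # I spin off
```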
B
Yes, actually, I think that's... so the question is, of course: why would you ever choose a lower fault tolerance number if the network is producing high fault tolerance? And the answer is, like: actually, I think it's better for people to choose the highest fault tolerance number that the network will really produce safety on, because that makes it more difficult and inconvenient for the network to produce less safety; it makes degradation in quality more costly to the validators. Yes.
F
B
The lower the number that you have, the more ways there are to be left out with a small number of faults. So actually, the lower your fault tolerance threshold, the more ways you can be left out. And if you have a high fault tolerance threshold, you can basically switch to whichever fork the validators have, like, reconciled on after, say, like, the attack that caused this low-fault-tolerance node to spin off. And so actually, I think
B
the probability that you'll have to, like, manually intervene to resync with the consensus is going to be much higher if you have a lower fault tolerance threshold, because it takes fewer faults to cause you to spin off. It really is like: a lower fault tolerance threshold is straight up less secure for the users on that fault tolerance level, basically, because it takes fewer Byzantine faults to cause consensus failure between them and other nodes.
B
So the kind of way that you can get a lot of safety that way is by assuming that all the correct nodes see the same finalized block first, but that's a sketchy kind of assumption, because, like, I mean, exactly what the adversary will be trying to do would be to show someone an inconsistent finalized block, you know, in order to make nodes kind of fall out of consensus,
B
in order to enjoy the property of not being safe on not P. So it turns out that the safety proof doesn't factor against the scaling problem at all, and so it gets to pretty much exist entirely intact in this context, for the sharding protocol. And so, basically, I have, like, a very similar... I mean, basically the same safety proof and methodology; it doesn't change at all. I have, like, protocol states that have messages from different shards, and an estimator that maps onto, like, blocks for every shard.
B
So, basically, I have, like, a sharded fork choice rule, and, like, protocol states that kind of also mirror the sharding, and it all fits inside the same kind of safety proof. Which is kind of why all these protocols are called, like, the CBC protocols: because they're all derived, and all designed, you know, specified, to satisfy the same kind of safety proof, which makes it kind of super convenient when generating new protocols or modifying the protocol. Like, for example, adding validator rotation to the protocol required...
C

B
As a software... as a random software bug, or maybe a coordinated bunch of software bugs? So, basically, it's the job of the incentive mechanism to, like, penalize these Byzantine faults, and absolutely, that's, you know, tightly related to how the nothing-at-stake problem is addressed in the security-deposit-based proof-of-stake protocols.
B
The question was: are we going to penalize when nodes run different versions, multiple versions, of the protocol, and isn't that, like, what's necessary for nothing-at-stake? Cool. Anyone else? I think we're... oh yeah, here we go.
B
It's different shards, such that if you're running a node and you want to sync up on any shard, you can synchronize and do that, and such that the semantics of the EVM, like the virtual machine on the blockchain, can basically take advantage of the atomicity provided by the block structure and by the consensus system. So, basically, I have a very kind of clear understanding of the basic properties that the sharded consensus protocol will have.