From YouTube: CasperLabs Community Call
A: All right, so good morning, good day, good afternoon, good evening, wherever you may be. Thanks for listening in, dialing in live, or listening to the recording. This is the CasperLabs community call. We've recently rebranded this to be open, so anyone can join, versus a governance call; we feel community engagement is one of the most important things. So without further ado, let's get started with the engineering status. The team has entered sprint 26.
A: We just want to do a straight shot, because that's more efficient. I'll be providing status updates on what we're doing with the Highway protocol, but you're not going to be able to see any of the new code until we're completely done. We're also optimizing the latest message maps to O(log n), and Andreas has done some great work on the fork choice rule; our initial benchmarking shows fantastic performance there, so this is just awesome work. We're also initiating support for TypeScript, which I'm super excited about.
A: This came out of feedback from hackathons and the like: while Rust is an amazing language and a lot of developers are interested in learning it, we didn't find a lot of Rust proficiency in the market, so we are implementing TypeScript support, and we think this is just a great thing to get done. We've also got automation of infrastructure set up for long-running tests, so that developers can self-service their own long-running tests.
A: We have a detailed engineering plan and dependency graph for the implementation of Highway. This engineering plan is available publicly; you can go ahead and check it out, the link is in the status report. We're also working on a simulator that's going to help run critical tests, to help us benchmark and understand how different algorithms perform. On the execution engine, we're implementing the CasperLabs type system, which lays the basis for contract headers.
A: We will not be deploying contract headers or a contract registry until later, because the team is going to be fully focused on TypeScript support between now and then, as well as support for secure enclaves, so contract headers will come later. We are also looking at potentially making some changes to how we deal with system contracts for performance enhancement.
A: We also found some performance gains, so we got another 25% boost on the EE; I'm very excited about this. And we're propagating payment code execution errors to the user. This is important because, right now, all you can see is when your session contract fails, but there isn't any way to know when your payment code fails; all you get is "insufficient payment." The payment code is Turing-complete, so you can have dependencies in your payment code.
A: So, depending on a series of events happening in the payment code, you then pay for the contract execution. This is a very nice feature, but you need to be able to troubleshoot those payment contracts as well. On the node, we're mostly focused on stability, optimizations, and performance. At this time, the node team is fully focused on implementing consensus, so you won't see a lot of stuff happening on the node front for the next several months.
A: We're always looking at devnet security, especially when we see the Clarity explorer get attacked; it's great feedback, so keep the attacks coming, we learn from every single one of them. On automation, we're extending Ansible to Explorer to send EE and Clarity logs to Graylog, and this again is more work on the infrastructure front to help support a public network, along with correct DAG generation. We've got simulations for an ERC token sale, which again is more performance testing, and we've optimized the combined bonding integration test.
A: I believe that is for both the Python and Scala clients. On the ecosystem, we have a zk-SNARK example that we've built; we're working on completing it, and it should be available as part of the node 10 release. We're also doing some great work on the tech documentation: we want to have a dApp developer guide by node 11, so we'll have a preliminary version of that soon. And of course there are enhancements to Clarity, which are kind of cool: you can now go to Clarity and see the bonded validators for a given block, which is very nice.
A: You can also see whether the block has been finalized per that validator's view of the DAG. On the economics research, we're doing design of open bond auctions. We are going to have a certain number of slots for validators in each era during validator set rotation, and the reason for that is it provides a mechanism for the consensus protocol to adapt to changing network environments and allows it to be flexible on time to finalization, which relates to the validator set size.
A: The smaller the validator set size, the faster the time to finalization; the larger the set, the more message overhead and the longer the time to finalization, but you get more security, so it's a trade-off we're looking at. We've got a reward distribution design for Highway, which I believe Onur is presenting today, and then we're also investigating the impact of equivocations: we need to make sure that we understand how the network behaves when it's attacked. So with that, I'm going to turn it over to Andreas.
C: After leaving university, I wanted to do something more applied, so I went into computer science, and I worked with Google for a few years. I moved away from Germany and have lived in a few other countries since then. Since then I've been working self-employed on different projects, and I've been involved with cryptocurrency since 2016 and with CasperLabs since, I think, this year.
C: So yeah, currently I'm trying to improve the performance of the fork choice implementation. There are a lot of technicalities involved, so I'm going to make lots of simplifications. For the most part I'm going to assume that all the stakes are equal, so all the votes count the same. I'll briefly talk about Bitcoin and just assume that there's no change in the block difficulty, that it's constant. I'm going to assume that every block has exactly one parent, which is also not true in our system, and I will assume that all messages are blocks.
C: The whole point of the consensus system is to decide which fork to follow and eventually to agree on one linear branch of the tree. That means whenever you create a new block, you have to decide which branch to build on, i.e. decide what the parent of that new block is going to be, and the rule for how to do that is the fork choice. In Bitcoin's case it's pretty easy: it's called the longest chain rule, and you just have to follow the longest chain.
C: That means you take an existing block with maximal height and you use that as your parent. In Bitcoin's case, there's not even a reason for the protocol to try and enforce that rule, because it's already the best strategy. So in the proof-of-work case it's rather easy; for us, it's not that easy.
C: In Casper, to make proofs about safety work, we have to follow the GHOST rule. That stands for Greedy Heaviest-Observed SubTree, and it essentially means that you start at the genesis block, on the left side here, and whenever there's a fork and you have the choice between multiple children, you have to pick the fork that most of the validators are currently building on.
C: That means you look at the validators' latest messages, see which fork they are on, and follow the plurality. So in this case, Bitcoin would build on top of the yellow block here, because that's the longest chain, even though that fork was built by Eve alone. But the GHOST rule says we have to build on top of the green block, because the fork on the bottom was created by four validators, and four validators are currently building on it.
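The rule described here can be sketched in a few lines. This is a minimal Python illustration assuming unit stakes; the `parent` and `latest` maps are toy structures for the example, not the actual CasperLabs node code.

```python
# Illustrative sketch of the GHOST fork choice with unit stakes.
# `parent` maps each block to its parent (None for genesis); `latest`
# maps each validator to their latest block. Assumed toy structures.

def ghost(parent, latest, genesis):
    # Tally: a validator's latest block counts as a vote for that block
    # and for every one of its ancestors.
    score = {}
    for tip in latest.values():
        b = tip
        while b is not None:
            score[b] = score.get(b, 0) + 1
            b = parent[b]
    # Build child lists, then greedily descend into the heaviest subtree.
    children = {}
    for blk, p in parent.items():
        if p is not None:
            children.setdefault(p, []).append(blk)
    tip = genesis
    while children.get(tip):
        tip = max(children[tip], key=lambda c: (score.get(c, 0), c))
    return tip

# Eve's fork g -> e1 -> e2 -> e3 is the longest chain, but four
# validators are building on the shorter g -> a1 -> a2 fork.
parent = {"g": None, "a1": "g", "a2": "a1", "e1": "g", "e2": "e1", "e3": "e2"}
latest = {"Alice": "a2", "Bob": "a2", "Carol": "a1", "Dan": "a2", "Eve": "e3"}
```

Under the longest chain rule the answer would be Eve's tip; GHOST instead descends into the subtree holding four of the five latest messages, mirroring the green-block example.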
C: Right, so in Casper we don't have the luxury that this is somehow automatically the best strategy anyway, so we have to enforce the GHOST rule; we have to enforce that everyone obeys it as far as possible. That means you don't only have to compute it when you create a new block, but you also have to compute it when you receive a block, to validate whether that block is correct at all.
C: So that's one of the reasons why computation matters: we have to compute it a lot more often. And obviously it's not trivial to even say what it means for a block to follow the GHOST rule, because you don't know which messages the author of that block is currently seeing. Maybe he didn't see my latest message where I switched to another fork, so he still considers me voting for the first fork.
C: That means in Casper we have to provide more information, so that when I receive a block from you, you also tell me which messages you know of. Each block contains hashes of some past messages, and those are called the justifications of that block. So the justifications of a block are essentially all the messages that the sender of that block knew of at the time of block creation.
C: Now, all the justifications, of course, would be too much; we can't put in hashes for the whole history of the chain. So we just say: if A is a justification of B, and B is a justification of C, then A is also a justification of C. So not every justification has to be direct, and that also leads to a scaling issue.
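The transitive rule ("if A justifies B and B justifies C, then A justifies C") amounts to reachability over the direct-justification edges. A minimal sketch, assuming a toy `direct` map from block to its directly cited blocks:

```python
# Sketch of transitive justification lookup. `direct` maps each block
# to the blocks it cites directly (an assumed toy structure). A block
# justifies a target if the target is reachable via direct citations.

def justifies(direct, blk, target):
    seen = set()
    stack = list(direct.get(blk, ()))
    while stack:
        b = stack.pop()
        if b == target:
            return True
        if b not in seen:
            seen.add(b)
            stack.extend(direct.get(b, ()))
    return False

# A is a direct justification of B, and B of C, so A is an indirect
# justification of C without C carrying A's hash itself.
direct = {"C": ["B"], "B": ["A"], "A": []}
```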
C: This is a different kind of diagram. In the previous diagram, the arrows represented the parent-child relation of blocks; in this diagram (you've already seen this in Wojtek's presentation a few weeks ago, I think, where he showed how our simulator actually produces such diagrams from a simulated run) the arrows represent direct justifications. That means the block on the right contains four hashes of the blocks that it points to.
C: So when I receive that block, I know the sender knew about those other blocks. We're abstracting away the whole tree of blocks here, because we are only looking at one height. We make the assumption that there are three forks, and we need to decide which fork to build on, and we need to do that multiple times to compute the fork choice rule. So there's this purple and red and blue fork, and all the blocks are colored according to which fork they are on; the parent relation is not actually shown at all.
C: So let's say we received the block on the right, and we now need to verify whether it's correct, whether it correctly followed the GHOST rule. To do that, we need to tally the votes of the latest messages of all the validators. For four of them it's easy, because the block is pointing right at them, but for the rest we have to do some computation, because it's not obvious. I think this is a simple diagram, but in the general case it's not obvious at all whether this block can directly or indirectly see the latest vote by Alice.
C: Right, so this poses several computational problems, because we first have to find those latest messages in a potentially very complicated graph, and then we have to find, in a probably very large tree, which branch is the correct fork choice. And think of how we want to push the limits and see how many validators we can support: certainly not only five, definitely hundreds, possibly thousands, possibly tens of thousands. Also think about the horizontal component here: the graph will grow.
C: So now I'll list some of the optimizations that we already did or are thinking of, and some of them are pretty obvious, I guess. First of all, we definitely don't want to start the fork choice explicitly at the genesis block and compute it again and again for all the heights up to the height where we are, because the tree grows larger, so this would get slower and slower. But fortunately there's an easy trick.
C: We can start from the right and move left until we find a block on which the majority of all the validators are currently building, because then, obviously, that's also true for all of the ancestors of that block. So we don't start from the genesis block; we start from the common ancestor of more than 50% of the latest blocks.
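This trick, starting the fork choice at the highest block that a majority of latest blocks descend from, might look like the following in toy form. The `parent` and `latest` shapes are assumptions for illustration, not the production code.

```python
# Sketch: instead of starting the fork choice at genesis, start at the
# highest block that more than half of the validators' latest blocks
# descend from. `parent` and `latest` are assumed toy structures.

def fork_choice_start(parent, latest):
    n = len(latest)
    count, height = {}, {}
    for tip in latest.values():
        # Collect the chain from this tip back to genesis.
        chain = []
        b = tip
        while b is not None:
            chain.append(b)
            b = parent[b]
        # Heights count up from genesis; every ancestor gets one vote.
        for h, blk in enumerate(reversed(chain)):
            count[blk] = count.get(blk, 0) + 1
            height[blk] = h
    majority = [b for b, c in count.items() if 2 * c > n]
    return max(majority, key=lambda b: height[b])

# Alice and Bob (2 of 3 validators) both build on a2, so the fork
# choice can safely start at a2 instead of walking up from genesis g.
parent = {"g": None, "a1": "g", "a2": "a1", "e1": "g"}
latest = {"Alice": "a2", "Bob": "a2", "Carol": "e1"}
```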
C: Second: blocks that were produced close together, only a few seconds apart, probably contain very similar lists of latest messages, and in fact computing one given the other is much faster than computing it from scratch. So we memoize those lists of latest messages, as seen by any given block, for every block. That helps a lot for performance; unfortunately, it wastes a lot of memory, because those lists are large. But then again, that's point three: since many of those lists share large parts, we can share them.
C: That means, for example, for every block we memoize what that block's parent is, what its second ancestor is, and its fourth, and its eighth, and its 16th, and its 32nd ancestor, and so on. That takes O(log n) memory, so logarithmic in the height of the block, but it allows us to find the ancestor of that block at any given height in logarithmic time. And there are lots of further optimizations that we'll try out.
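The parent/second/fourth/eighth-ancestor scheme is the classic skip-pointer ("binary lifting") idea. A self-contained sketch, purely illustrative rather than the node's actual data structures:

```python
# Skip-pointer ("binary lifting") sketch: each block stores its 1st,
# 2nd, 4th, 8th, ... ancestors, using O(log n) memory per block and
# answering any ancestor query in O(log n) hops.

class Block:
    def __init__(self, parent):
        if parent is None:              # genesis
            self.height, self.skip = 0, []
            return
        self.height = parent.height + 1
        self.skip = [parent]            # skip[k] is the 2**k-th ancestor
        while True:
            k = len(self.skip) - 1
            anc = self.skip[k]
            if len(anc.skip) <= k:      # anc has no 2**k-th ancestor
                break
            # 2**k steps, then 2**k more: the 2**(k+1)-th ancestor.
            self.skip.append(anc.skip[k])

def ancestor_at(block, height):
    # Repeatedly jump by the largest stored power of two that doesn't
    # overshoot the target height.
    while block.height > height:
        d = block.height - height
        k = 0
        while k + 1 < len(block.skip) and (1 << (k + 1)) <= d:
            k += 1
        block = block.skip[k]
    return block

# Build a chain of 33 blocks (genesis plus 32 descendants).
chain = [Block(None)]
for _ in range(32):
    chain.append(Block(chain[-1]))
```

A linear walk from height 32 down to height 5 would take 27 steps; with the skip pointers it takes a handful of jumps.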
C: Then it was also suggested to find this latest block, the one that more than 50% of the validators are currently building on, using a probabilistic method: look only at a random sample of all the validators, find the block that they would vote for, and then check whether that's the correct choice considering all the validators.
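A toy sketch of that sampling idea. In this version the verification step is simply the full computation; the point in practice is that confirming one candidate is cheaper than searching for it from scratch. All names here are assumptions for illustration.

```python
# Guess the plurality block from a random subset of validators, then
# verify the guess against the full set. Here the verification is just
# the full computation; a real implementation would confirm a single
# candidate more cheaply than running the search from scratch.
import random
from collections import Counter

def plurality(latest):
    counts = Counter(latest.values())
    # Deterministic tie-break on the block id.
    return max(counts, key=lambda b: (counts[b], b))

def sampled_plurality(latest, sample_size, rng):
    sample = rng.sample(sorted(latest), min(sample_size, len(latest)))
    guess = plurality({v: latest[v] for v in sample})
    full = plurality(latest)  # stands in for the cheaper confirmation
    return guess if guess == full else full

# 90 of 100 validators are on block "X"; a sample of 10 almost always
# guesses right on the first try.
latest = {f"v{i:03d}": ("X" if i < 90 else "Y") for i in range(100)}
```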
C: With a high probability of hitting the right block, that should save a lot of time. We can also probably cut the fork choice computation short when we have a message for which we already computed it and then get another message with a very similar list of latest messages. So, in summary, there's no world-shaking new algorithm here.
B: So this is a new model that we're considering for reward distribution. We were previously working on a point-based model: imagine validators specifying their round lengths by announcing round exponents at any time they want, and then collecting points for each block that they finalize. Eventually, at the end of an era, you would collect all the points, sum them up, and those would be the weights by which you distribute the total number of tokens minted for that era.
B: So if a hundred tokens are minted for an era, a validator with 20% of the points gets 20 tokens. This looks like an ideal case: the less you work, the fewer points you get, so it looks like a model which captures the performance of validators and distributes rewards accordingly. But here is the problem. Imagine the validators announce round exponents, but regardless of what they announce, they each just propose one block in the whole era.
B: Just one block in one week. Then they would all get one point each, and you still mint a hundred tokens, which they would share equally, so they would still get a lot of reward for doing no work. This is just a toy example; in practice, we would expect in such a model that validators compete against each other, so the competition would make them participate in rounds.
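The failure mode of the point-based model is easy to see numerically. A toy sketch; the numbers are illustrative, not protocol parameters:

```python
# Toy sketch of the point-based model: the era's minted tokens are
# split in proportion to points earned. Numbers are illustrative only.

def point_rewards(points, minted):
    total = sum(points.values())
    return {v: minted * p / total for v, p in points.items()}

# Ideal case: points track work, so payouts track performance.
busy = point_rewards({"A": 40, "B": 40, "C": 20}, 100)

# Degenerate case described above: every validator proposes a single
# block per era, so everyone earns one point, yet the full 100 tokens
# are still paid out for almost no work.
lazy = point_rewards({v: 1 for v in "ABCDE"}, 100)
```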
B: But the argument that competition alone holds them back from such a strategy is too weak, so we need harsher conditions; we can't just depend on points for judging validators' performance. The block reward meant for validators should be allocated in advance, and then, if the block isn't finalized, they should lose something. They should have something to lose; they should be accountable for their failure to succeed in a round. And that's basically what we do in the new model.
B: Each block gets a reward weight, and you can incentivize people to converge on round lengths by assigning a higher reward weight to a block the more stake participates in it. So imagine that, say, 60% of the weight may be enough to finalize a block, but that's not very desirable: you would want all of the validators voting on that block, so you would want to incentivize higher percentages by giving a higher share of the total reward.
B: In that case, if there is a block which gets, let's say, 80 percent of the stake participating, that block would get a higher weight, and the higher weight results in a higher reward, because eventually you calculate the reward weights and multiply them by the total tokens minted in that era. It depends on how you define weight, but you get rewarded in proportion to the weight of the block.
B: You get more reward from a block which has more participants. In that case, if the block is proposed and finalized in time, the total block reward meant for that block is shared among participants proportionally to stake, as usual. Something I want to highlight once again: we always want linear payouts, so the relationship between payout and validator stake is linear.
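One way to read this: scale each block's reward by a weight that grows with the participating stake fraction, then split that scaled reward linearly in stake. A hypothetical sketch; the quadratic weight function is an assumed placeholder, not the actual CasperLabs formula:

```python
# Hypothetical sketch of participation-weighted block rewards. The
# block's reward scales with the fraction of total stake that voted
# (the quadratic weight is an assumed placeholder, not the real
# formula), and the scaled reward is split linearly in stake.

def block_rewards(block_reward, stakes, participants, weight=lambda f: f * f):
    total = sum(stakes.values())
    part = sum(stakes[v] for v in participants)
    reward = block_reward * weight(part / total)
    return {v: reward * stakes[v] / part for v in participants}

stakes = {"A": 50, "B": 30, "C": 20}

# Full participation: the whole 100-token reward is paid out.
full = block_rewards(100, stakes, {"A", "B", "C"})

# Only 80% of stake participates: the pot shrinks to 100 * 0.8**2 = 64.
partial = block_rewards(100, stakes, {"A", "B"})
```

Within a block, each participant's share is still linear in stake; only the size of the pot depends on participation.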
B: There are no barriers to entry, and the total block reward changes, as compared to other proof-of-work protocols, because we are using a reward rate calculation. This reward rate is a function of the total participating weight in that round. We are building a parametric model and will test out different values of the parameters; in fact, I can share it right now.
B: They get it at the end of the era, because round exponents change during the era and you do the calculation taking them into account, so you have to do it at the end of the era. It's deterministic, but you have to do these checkpoints from time to time, spaced long enough that you capture enough activity.
E: And they basically have a fixed amount they distribute every block; that's basically the way it works. A distribution is then triggered once it reaches one micro-algo, so today, based on the current supply, on chain it takes about 40 blocks before a distribution occurs. But they don't have a validator-centric distribution today; it's what I'd call inflation.
B: Two things: delegation is also on our roadmap, and on the second point, if this weekly payout is undesirable, we can change the frequency. Depending on how computationally expensive this weekly calculation is, we can increase the frequency; I think Andreas might have a better idea of how expensive it is.
C: That might be true, but isn't the reason why we do it once per era rather that, when we start the era, we don't know how many blocks there are going to be? We want the network to always go as fast as possible, so we don't want to decide at the start of the era how many blocks we are going to produce or expect.
B: Thanks for clarifying. Users, that is, token holders, don't have that power; it's randomness derived from the block DAG. But there will be delegation: users will be able to bond their tokens with validators and accrue seigniorage and transaction fees based on that, but they won't be able to determine who gets what and when; they won't be able to alter the schedule.
A: Great, seems like we don't have any other questions. Nate, Stephane, thanks for joining; I really appreciate it. Because we are really close to the holidays, we are going to adjourn the community calls until the new year. Let me just pull up my calendar here: we're not going to do one on the 24th, because it's really close to Christmas, or the 31st, because it's close to New Year's, so we will be back in the saddle on the 7th of January.