From YouTube: CasperLabs Community Call
Description
Rewards Distribution presentation & status update.
A
Good morning, good afternoon, good evening, good day to everyone. Thank you for dialing in and joining our community call here at CasperLabs. On this community call we're going to be talking about the program broadly and the status, so without further ado, I'll get started. Alex, are you doing a presentation today as well?
A
Perfect, sounds good. It's just another evolution in our research around this topic, so fantastic. Alright, I'll get started with the update. Very happy to report that on the 31st we did launch our test net. We had 20 wonderful validators that supported us in doing that, and we have a Medium post for that. So congratulations to the development team. This is a really big deal.
A
Congratulations to the business team, and to Ashok and Tom, and everyone that worked so hard to get all these wonderful validators up and running online, and of course to the validators that were willing to provision the infrastructure and take the effort to join our test net. We're so grateful to have you as part of the program. So we launched the test net, and we've got some patching that we've been doing, so I'll talk about that in a little more detail.
A
So, let's dive in here. Our test net that launched on March 21st is an alpha version of the Highway consensus protocol, and it assumes honest validators. It supports eras, a configurable round exponent, and other Highway parameters. It doesn't support bonding and unbonding, slashing, rewards, or validator set rotation. It does have the genesis process and, of course, all the smart contracting features that you would expect from our execution engine, so all of that is baked in there. And of course there's monitoring and memory management, monitoring and logging for the nodes, which I can show you here.
A
At 17:00 the memory utilization is correct: it spikes, it does something, and then it drops off. There's the total amount of gas that's been processed by the network; that's what we use here. And then there are these sync traversal efficiency and notification efficiency metrics around consensus. There are some consensus metrics we can pop into here, and I'll show you guys a little bit about Highway.
A
Let's just change this to the last 12 hours here. Some of the things you can observe: the finalizer is getting pretty spiky. We found a bug in the finalizer that we need to fix to make it more efficient. This is one of the reasons why we've had to bounce the network a few times, because there are some problems with the finalizer; finalization is a pretty complex piece. Scheduled items, the queue: we're looking only at a single node here, but we can look at many nodes.
A
These are all the different hosts in the network. We can select a few of them and see how they're performing; if you reload, you'll see all the metrics for the different nodes, and you'll see the finalizer is getting pretty spiky. This means that you're seeing a problem in finalization; that's the bug we fixed and that we'll be cutting a release for. And the other metric you'll see go up is validating.
A
Add
incoming
block
so
you'll
see
down
here
valid
in
adding
coming
block,
was
pretty
efficient
and
over
time
because
of
the
of
the
bug
that
we
found
it
starts
going
up
up
up
up
up
right,
and
so
this
is
obviously
a
problem
that
ultimately
results
in
the
network
collapsing
and
failing
but
again
you
know
this
is
the
whole
point
of
running
a
test
net.
So
you
can
learn
about
these
things
right
and
learn
how
the
network
is
behaving
in
the
wild.
A
So, on test net performance: with the first network we passed through the genesis block, but we discovered that the network was running a little too fast. We slowed the network down to a round exponent of 16, and we faced similar problems in the rounds as well. Then we found the bugs in finalization and we cut a patch, .17.1, and that network stayed up from Saturday through Sunday, Monday, and Tuesday. So about four days.
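To make the "slowed the network down to an exponent of 16" remark concrete, here is a minimal sketch of the relationship. It assumes, as is common for a configurable round exponent, that the round length is 2 to the power of the exponent, in milliseconds; the call itself doesn't state the exact unit, so treat the numbers as illustrative.

```python
# Sketch: mapping a Highway round exponent to a round length.
# Assumption (not confirmed in the call): round length = 2**exponent ms.

def round_length_ms(exponent: int) -> int:
    """Round length in milliseconds for a given round exponent."""
    return 2 ** exponent

# Raising the exponent slows the network exponentially:
fast = round_length_ms(14)   # 16384 ms, ~16.4 s per round
slow = round_length_ms(16)   # 65536 ms, ~65.5 s per round
assert slow == 4 * fast
```

Under this assumption, each increment of the exponent doubles the round length, which is why a small configuration change is enough to slow the whole network down.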
A
And
but
we
we
started
losing
validators
again
because
of
the
finalization
bug
that
you
see
now
and
we
have
another
another
fix
to
the
voting
matrix,
which
is
the
finalization
Oracle
and
we've
also
speeded
up
synchronization.
So
we
made
synchronization
a
lot
faster,
so
when
a
node
restarts
they're
able
to
sync
up
much
much
more
quickly
than
they
have
in
the
past
and
we
intend
to
cut
the
release.
Our
intention
is
to
cut
the
release
today
and
restart
the
network
tomorrow
with
a
new
Genesis
block
again
at
1900,
if
possible.
A
So
it
depends
how
quickly
we
can
get
this
release
cut.
Dev
net
is
still
up
and
running.
You
can
definitely
code
against
definite
for
the
smart
contracting
features.
We
haven't
made
a
lot
of
enhancements
to
smart
contracting
in
the
past
release
cycle,
so
def
net
is
pretty
close
to
the
current
smart
contracting
engine
we
have,
but
we
will
be
decommissioning
it
in
favor
of
Tesla.
A
Otherwise, we're working on deploy gossiping for the node, and we're also making some changes to the CLarity block explorer to enhance the user experience. Right now it takes a little time to fund your account on test net, and that's because we're only sending the deploys to a single node, so you have to wait for leader selection, and that takes a little bit of time, but the deploys do go through.
A
So
you
definitely
can't
fund
your
accounts
and
note
if
you
funded
your
account
until
we
go
through
Genesis
you'll
have
to
fund
your
account
again
and
get
more.
You
know
token
from
the
faucet
and
any
account
contracts.
You've
deployed
to
test
net
will
need
to
be
redeployed
whenever
we
go
through
a
new
Genesis
block.
C
I think I missed adding one more point: as an alternative to deploy gossiping, we are also looking at Omega blocks to carry deploys, and the two can work together depending on the round exponent. So that's an additional user-facing feature that we have to reduce the time to deploy.
A
All right, there we go. Okay, so what you're observing is what the network looks like in Highway when it's healthy. These are leader blocks here that are being proposed by different leaders; the big circle is a block. And what you're seeing over here, these are what we call Omega messages, or attestations, and the change that we're talking about would be to reduce those. These attestations are votes, you can call them votes: little messages that go across the network.
A
Now, one of the optimizations you can make is to reduce what we call overhead. If you think about it, this is one block, and the message overhead is all these attestations, and these will scale out as the network has more validators. Each validator is a row here, a swimlane, and you'll see each validator is sending a message. You can quickly observe that when you get to 200 validators' messages, that's going to be a whole bunch of data. So what are the optimizations?
A
Maybe
you've
got
an
alpha
leader
and
some
beta
leaders
and
the
beta
blocks
are
going
to
be
these.
These
Omega
right
blocks
that
are
processing
deploys,
but
then
the
alpha
leader
will
just
download
all
those
messages
and
synchronize
the
network,
so
this
enables
does
to
get
greater
transaction
throughput
without
harming
liveness
or
safety.
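The overhead argument above can be sketched with back-of-the-envelope numbers. This is purely illustrative: the point is only that per-block attestation traffic grows linearly with the validator set, which is what motivates batching deploys into beta/Omega blocks.

```python
# Sketch of attestation overhead (illustrative numbers only): each proposed
# block is followed by one omega/attestation message from every validator,
# so per-block message count grows linearly with the validator set.

def messages_per_block(num_validators: int) -> int:
    """1 proposed block + one attestation message per validator."""
    return 1 + num_validators

small = messages_per_block(20)    # 21 messages, roughly the current test net
large = messages_per_block(200)   # 201 messages for a 10x larger set
assert large > 9 * small          # roughly 10x the gossip traffic per block
```

Batching deploys so that attestations are only needed once per alpha round, rather than once per block, is what lets throughput grow without the message count growing with it.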
A
There's
a
little
bit
of
tutorial
there
for
you
guys
on
how
clarity
works
so
yeah
intestine
s.
Are
we
right
now
we're
really
focused
on
testing
the
network
running
the
the
long
running
test
with
various
scenarios
to
make
sure
that
we're
we're
catching
any
bugs
before
they
get
out
to
the
network,
we're
scaling
our
s
tests
and
and
we're
also
adding
new
workloads
and
adding
new
workloads
right
new
features
for
both
monitoring
and
new
scenarios
right
yeah
and
then
we're
also
we're
working
with
our
tests.
A
And folks that want to get involved in the community and help out: you can definitely participate in the test net. You don't have to be a validator to participate in the test net. If you want to write a dApp, if you want to participate in the community, we would love to have you participate in the test net. You can DM me on Discord and figure out how you can get involved. So, do we have any questions coming in? Any questions? I'm going to check Discord and YouTube for questions. Over to you.
B
Let me share my screen. So, the title of today's presentation is "What Users Pay For on the CasperLabs Blockchain", and I will discuss what we can have as an alternative to gas. Like I said before, this will not be about gas price volatility or how we will choose a pricing strategy to reduce that volatility. So, what gas is, is a way of putting a price tag on, of giving costs to, these computational resources.
B
When you use the same unit to account for all the resources, you couple them in a way that allows the user to just state one gas price they're paying for the transaction; that single unit is what is used when determining the transaction fee. We will be exploring what happens if you were to decouple those computational resources: instead of just measuring them in terms of gas, what would happen if we measured them in terms of themselves, basically?
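The coupling just described can be made concrete with a tiny sketch. The weights below are invented for illustration: the point is that a single gas unit collapses a vector of resource measurements into one number via fixed conversion weights, while decoupled accounting keeps the vector itself.

```python
# Sketch of "coupling" resources into gas (all weights made up): each
# resource is converted by a fixed weight and summed, so the user only ever
# states one gas price. Decoupled accounting keeps the usage vector instead.

GAS_WEIGHTS = {"runtime_ns": 0.001, "memory_bytes": 0.01, "storage_bytes": 0.1}

def coupled_gas(usage: dict) -> float:
    """Collapse a resource-usage vector into a single gas number."""
    return sum(GAS_WEIGHTS[key] * amount for key, amount in usage.items())

usage = {"runtime_ns": 50_000, "memory_bytes": 4_000, "storage_bytes": 100}
assert coupled_gas(usage) == 100.0   # 50 + 40 + 10
# Decoupled accounting would keep `usage` itself and price each resource
# separately, which is what the rest of the talk explores.
```

Once the weights are fixed, any drift in the real relative cost of the resources (storage growing scarcer, say) is invisible to the user, which is the inefficiency discussed later.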
B
So, the basics: a public smart contract platform, in essence, is a computer with a shared global state. Anyone can send transactions, anyone can use it, and the global state can be modified by anyone. So we want to conceptualize what a resource is on such a platform for the users. Since computers process and store information, and information is measured in bits, the accounting of resources on any computing platform, whether it's cloud or whether you're buying a PC, you're paying for computation.
B
They all converge on the same basic unit, which is the bit. I will give an example. The Turing machine is a computer conceptualized by Alan Turing when he was working on different computer science problems, and it's a good way to understand those two basic dimensions of resources when you have a computer at hand. The Turing machine works like a robot that's moving on a tape.
B
The
tape
is
assumed
to
be
infinite,
but
and
in
economics
there
is
no
infinite,
so
we
can
just
assume
it's
limited
and
you
have
limited
number
of
cells
which
can
take
the
number
0
or
1.
That's
how
you
encode
information
and
the
machine
just
moves
on
the
tape
and
is
able
to
switch
switch,
the
values
to
0
and
1.
The
interesting
thing
about
this
is
the
simple.
B
Although
this
is
a
very
simple
structure,
it
can
simulate
any
computer
that
can
ever
exists,
at
least
in
the
phone
for
diamond
architecture,
the
typical
binary
function
in
computers
and
in
this
computer.
Economically
speaking,
the
resources
you
have
is
like
first
of
all
tape,
and
then
you
have
this
machine,
but
you,
when
you
use
this
machine,
it
moves
between
cells.
So
the
more,
if
you,
if
you
need
to
move
it,
200
cells
like
right.
B
It
will
take
some
time
if
you
need
to
move
it
10,000
cells
to
take
like
an
order
of
2
orders
of
magnitude
more
time.
So,
although
you
are
paying
for
the
maybe
energy
or
just
cooling
of
that
machine
machinery,
when
you
are
looking
for
the
maintenance,
you
have
like
a
sysadmin
who
looks
at
the
machine
what
you're?
Actually
what
the
user
is
paying
for
is
the
time
which
they
used
it
to
a
machine.
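The tape intuition above can be captured in a few lines. This toy counter just models head movement on a one-dimensional tape; the cell counts are the illustrative ones from the talk.

```python
# Sketch of the Turing-machine runtime intuition: the time a computation
# takes is proportional to how far the head has to move on the tape.

def moves_to_reach(cell_index: int, start: int = 0) -> int:
    """Number of single-cell head moves to reach `cell_index` from `start`."""
    return abs(cell_index - start)

assert moves_to_reach(200) == 200
assert moves_to_reach(10_000) == 10_000
# 50x the distance means 50x the runtime, on the order of two magnitudes more:
assert moves_to_reach(10_000) / moves_to_reach(200) == 50.0
```

This is why runtime, the time the machine is occupied, is a natural unit of account alongside the tape cells (memory) a user occupies.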
B
So
we
conceptualize
these
2
dimensions
as
memory
and
runtime
runtime
is
runtime
being
when
the
machine
is
occupied
by
a
given
user
and
to
be
clear,
I
current
computers
are
in
turing
machines
because
they
aren't
very
practical.
We
have
these
different
architectures
of.
We
have
different
models
of
memory,
but
the
basic
thing
is
they're
analogous
to
each
other,
so
you
have
a
hierarchy
of
memory.
In
today's
pcs,
for
example,
like
you
have
a
hard
drive
which
which
stores
the
bits,
even
when
the
computer
is
turned
off,
you
have
you
have
a
ram.
B
The
ram
is
where
you
know,
between
executions
programs
store
their
data
for
fast
access.
You
have
registers
in
the
processor.
You
have
this
hierarchy
of
memory
in
a
computer
depending
on
where
it's
for
what
it's
being
used
for,
and
one
interesting
distinction
to
make
is
the
difference
between
memory
and
storage
and
I,
make
it
make
the
distinction
in
the
context
of
block
chains
like
programs,
programs
or
transactions
have
a
have
a
limited
execution
time
right
and
user
pays
for
the
execution.
So
a
memory
is
a
is
a
memory.
B
So,
let's
think
about
so,
let's,
let's
see
how
this
would
look
like.
So
instead
of
paying
for
gas,
we
don't
we
don't
invent
a
you
know
single
invent
a
coupled
unit.
We
just
measure
different
types
of
memories
and
storages
that
can
exist
on
the
system
alongside
the
runtime
which,
which
is
basically
the
time
the
user
keeps
the
Machine
occupied,
then
the
the
smallest
units
of
account
we
will
just
define,
define
for
run
time
will
be
nanoseconds
to
be
practical.
B
It's
high
enough
resolution,
then
you
have
bytes
for
memory
and
storage
and
instead
of
like,
like
etherium
user-specified
eath
for
gas
or
gray
/
gas,
which
is
a
small
denomination.
If
you
go
to
east
gas
station,
we
can
see
the
current
gas
price.
It
says
dollars,
but
it's
actually
using
the
value
from
exchanges.
B
They would see it more explicitly: if an app is heavy on storage and needs to store a lot of data, they would see that, and they would see whether they need to optimize it; or if their algorithms aren't optimized and they are spending too much runtime, maybe they look into that.
B
They
will
be
able
to
differentiate
what
read
which
resource
their
money
is
going
to
like
I
just
made
an
e
just
off
the
top
of
my
head,
an
example
how
a
transaction,
simple,
CLX
transfer
will
be
denominated
as
just
to
you
two
dollars
per
second
and
ten
dollars
per
megabyte
of
memory.
Would
I
I?
Don't
I
just
made
the
values
up:
11
cents
for
transaction,
so
the
user,
the
user
experience
I
mean
there
is
one
important
thing.
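The made-up example above can be worked through directly. Both the unit prices and the usage figures are invented, as the speaker says; the usage figures below are chosen so the total lands at the quoted eleven cents.

```python
# Sketch of the (explicitly made-up) pricing example from the talk:
# $2 per second of runtime and $10 per megabyte of memory.

RUNTIME_USD_PER_SEC = 2.00   # hypothetical
MEMORY_USD_PER_MB = 10.00    # hypothetical

def fee_usd(runtime_sec: float, memory_mb: float) -> float:
    """Itemized fee: each resource is priced in its own unit."""
    return runtime_sec * RUNTIME_USD_PER_SEC + memory_mb * MEMORY_USD_PER_MB

# Hypothetical simple CLX transfer: 30 ms of runtime, 5 KB of memory.
total = fee_usd(0.030, 0.005)
assert round(total, 2) == 0.11   # about 11 cents, itemized per resource
```

Because the fee is itemized, a developer can see at a glance whether runtime or memory dominates the cost of their transaction, which is the visibility argument made above.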
B
The fewer decisions the user has to make, the better the user experience. On Ethereum, only the gas price is allowed to float. Imagine if all these resources floated: they would of course be correlated to each other, but they would drift over time. Take storage: the Ethereum full node state grows by on the order of a hundred gigabytes per year, at least, so, for example, the storage price would increase more and more.
B
You
would
see
that,
but
when
it's
coupled
and
if
they
don't
actually
update,
update
the
gas
costs
of
an
operation,
that's
like
s
store,
which
is
a
storage
operationally
theorem.
If
they
don't
update
that
value,
then
it
results
in
the
economic
inefficiencies.
Users
are
paying
too
little
for
the
price,
like
the
all
these
different
resources,
whose
supply
maybe
might
be
changing
so
through
time
aren't
like
are
priced
wrongly,
with
respect
to
each
other
aren't
priced
correctly.
B
In
our
case,
though,
we
don't
need
to
decrease
the
number
of
dimensions,
because
we
already
are
planning
to
implement
price
regulation.
That
means
we
can
just
keep
this
explicit
in
terms
of
accounting
and
the
users
will,
when
they
first
have
the
confirmation
screen
for
a
transaction.
They
would
just
see
the
single
compound
value
like
they
will
see
this
and
if
they
were
curious
as
to
what
what
what's
that
value
composed
of
they
will
see.
B
Oh
I
actually
spent
10
cents
of
run
time
and
one
sense
of
memory,
so
that
would
be
more
that
not
be
so
different
from
the
user.
Experience
of
ëthere,
in
fact,
like
price
regulation,
means
they
don't
have
a
choice
there,
so
they
don't
have
to
even
choose
for
the
gas
price,
so
they
will
just
they
just
have
to
click
confirm
or
not,
and.
B
Basically,
this
is
this
is
just
decision
super
advantageous
to
a
theorem.
This
doesn't
the
whole
system,
depending
on
how
you're
implementing
price
regulation
might
be
in
the
end
analogous
to
gas.
So
it
might
be
the
same
thing.
It
might
have
the
same
economic
effect
as
gas,
but
actually
stating
these
differently
allows
you
to
make
rationalize
and
conceptualize
some
ideas
better
in
a
better
way
and
I
want
to
clarify
that
storage.
B
Like
I
said
before,
or
in
a
theorem,
you
pay
it
for
only
once.
So,
when
you
run
a
transaction
store
some
variables,
you
pay
a
very
high
cost
because
they
had
to
make
it
like
overpriced.
In
order
to
prevent
the
fool
note,
storage
requirements
to
grow
like
too
fast
out
of
hand,
they
price
they
overpriced
it
so
that
the
growth
rate
is
manageable
and
you
still
have
a
gas
refund
mechanisms.
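The one-time pricing with refunds can be sketched in a few lines. The constants below are the classic Ethereum values (20,000 gas to set a fresh storage slot, 15,000 refunded for clearing one); the function is deliberately simplified and ignores Ethereum's cap on total refunds.

```python
# Sketch of Ethereum-style one-time storage pricing with refunds: writing a
# slot is deliberately overpriced, and clearing it later refunds part of the
# cost. Simplified: the real protocol caps refunds per transaction.

SSTORE_SET_GAS = 20_000       # classic charge for writing a fresh nonzero slot
SSTORE_CLEAR_REFUND = 15_000  # classic refund for zeroing a set slot

def net_storage_gas(slots_written: int, slots_cleared: int) -> int:
    """Net gas attributable to storage after refunds."""
    return slots_written * SSTORE_SET_GAS - slots_cleared * SSTORE_CLEAR_REFUND

# Write two slots, later free one: part of the cost comes back, but nothing
# in this scheme depends on *how long* the data was actually stored.
assert net_storage_gas(2, 1) == 25_000
```

The comment on the last line is the speaker's point: a flat write fee plus a flat refund is a static incentive, not a measure of occupation over time.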
B
So
you
can,
if
you
have
stored
some
variables
in
the
past
when
you
free
them
in
a
trends,
not
new
transaction,
you
get
a
gas
refund
which
is
like,
like
a
static
incentive
for
you
to
free
up
the
memory
you're
using
on
a
full
note,
but
it's
still
not.
It
doesn't
reflect
the
usage,
the
usage
of
storage,
how
the
user
stores
bits,
because
this
is
like
storage-
is
like
like
time-based
occupation
of
memory,
on
on
the
machines
of
every
validator
in
the
network,
and
it
should
be
the
ideal
model.
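The "time-based occupation" model just described amounts to charging for byte-seconds: how many bytes are held, for how long. A minimal sketch, with a completely made-up rent rate:

```python
# Sketch of rent-style storage pricing: charge for byte-seconds of state
# occupation instead of a one-time write fee. The rate is invented.

import math

USD_PER_BYTE_SECOND = 1e-12  # hypothetical rent rate

def storage_rent_usd(num_bytes: int, seconds_held: float) -> float:
    """Rent owed for occupying `num_bytes` of state for `seconds_held`."""
    return num_bytes * seconds_held * USD_PER_BYTE_SECOND

# 1 MB held for a day vs. for a year, under the same rate:
day = storage_rent_usd(1_000_000, 86_400)
year = storage_rent_usd(1_000_000, 86_400 * 365)
assert math.isclose(year, day * 365)   # cost tracks actual occupation time
```

Unlike the one-time fee, the bill here is proportional to how long every validator's disk is actually occupied, which is the economic ideal the speaker points at.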
B
In
an
economic
sense,
we
would
price
it
in
that
way,
but
if
your,
if
your
rooms
goal
in
2016
was
to
launch
as
fast
as
possible,
so
they
didn't
want
to
deal
with
this
problem.
Very
fast,
I
think
I've
read
about
this
before
in
metallics
posts,
but
we
have
time
we
can
make
this
reflect
like
the
reflect
the
real
world
in
a
more
realistic
way.
B
Runtime and memory and storage: it also allows us to reason in a better way if we want to implement a different model for storage. Of course, some contracts in the system will be vital; for example, for the CLX contract, the one for the native CLX token, you should not be paying for that storage, or at least it should not be leased space.
A
That's great, yeah. As first principles, we want to come up with an easy and simple way to account for transaction costs, and we would like to do it without necessarily having to meter every single opcode; that just feels like too much granularity. And this would enable us to decouple from the Wasmi interpreter: if we wanted to go with another WebAssembly execution engine, we could potentially do that in the future.
B
I think so, yeah. I can also talk a bit about how decoupling will allow us to present a clearer picture of runtime. We are already working on this, using benchmark data to calculate how much time each operation in our system consumes, so that we can assign costs more realistically. If you look at Ethereum, when they launched they followed a trial-and-error method: they updated the prices as they met with bottlenecks.
B
We want to follow a more experimentally sound approach. On our platform we just don't want to have a situation where, conceptually, there are very fast opcodes and you just assign them all a gas price of one, and then everything is measured in terms of them. We don't want to do it like that.
B
We have these host functions and Wasm opcodes, and we want to benchmark each of them separately, so that we get the exact runtime they consume on average in the network. This will allow us to squeeze as much computation, as much juice, from our system as possible, and, just from engineer's intuition, I think this will be an improvement on throughput of at least 50%.
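The benchmarking approach described above can be sketched generically. The two "operations" below are Python stand-ins, not real host functions or Wasm opcodes; the point is the shape of the process: time each operation separately and derive a per-operation runtime cost table, rather than assigning a flat cost to everything fast.

```python
# Sketch of per-operation benchmarking: time each op in isolation and build
# a cost table in nanoseconds. The ops here are hypothetical stand-ins.

import timeit

def benchmark_ns(fn, iterations: int = 100_000) -> float:
    """Average wall-clock nanoseconds per call of `fn`."""
    total_sec = timeit.timeit(fn, number=iterations)
    return total_sec / iterations * 1e9

ops = {
    "i32.add": lambda: 1 + 1,                       # stand-in for a fast opcode
    "hash_host_fn": lambda: hash(b"fixed input"),   # stand-in for a host function
}
cost_table_ns = {name: benchmark_ns(fn) for name, fn in ops.items()}

# Every entry is a positive measured runtime, usable as a relative cost:
assert all(ns > 0 for ns in cost_table_ns.values())
```

With a table like this, the cost charged for an operation can track what it actually costs validators in runtime, instead of being tuned by trial and error after bottlenecks appear.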
A
What we're trying to do is share with you our thinking around these problems, because we've said that we are going to solve these problems, and so we're trying to share with you how we're attacking them and what our thoughts are. So thank you, everyone, for listening and dialing in; check us out at casperlabs.io, and see you next week on the community call. Have a great weekend and a great week. All right, bye.