From YouTube: CasperLabs Community Call
Description
Rewards Distribution presentation & status update.
A
Good day, everyone. For those of you joining us on YouTube, thank you very much for dialing in. Thank you for being here. Please be safe, everyone out there, with Covid-19. We're hearing a lot of reports about infections now. I now know people that have had it, and I suspect that most of us know somebody that has had it. If your experience is anything like mine, the symptoms are very varied: some people have it with almost no effects, but others get it and it's very severe.
A
So please do take good care of yourselves and your families. I feel very blessed to be able to work on this project and to work from home remotely, and I encourage all of you to attempt to do the same. So be safe out there. I'm going to give the update today on the CasperLabs project, so let's get right into it. We cut our March release on Monday, and this basically marks feature complete for the testnet.
A
That's launching on the 31st. We're only going to be doing bug fixes and minor improvements that are needed for the validators, or for ourselves, to monitor the efficacy of the network and its stability and security. Other than that, we are feature complete for the testnet. We will not be updating devnet with this release, so devnet is probably going to be decommissioned here.
A
The Clarity block explorer at clarity.casperlabs.io is going to migrate to point to the testnet. If we decide to have the devnet in a future phase, maybe to do incremental releases or nightly builds for those of us that want to interact with the latest and greatest code, we could potentially look at devnet coming back. But for the time being, we're probably going to decommission it and fully focus on the testnet and moving forward. Our current focus is testing and debugging.
A
The protocol focus is stability and reliability. As part of testing the protocol, we're using the stest framework, which is basically a workload generator and infrastructure for testing the network. We're also looking at doing some work on consensus, documentation, and economics, so let me dive into each of those.
So, on consensus, we found a few attacks that any DAG-based fork-choice protocol is vulnerable to, and we have augmented the design accordingly. We have a flavor of reliable broadcast that we'll be using to help harden the protocol against these attacks. They're known as equivocation bombs that either go deep or shallow; that's what we're calling them. We can dive into the attack later and talk about some of the protocols that we suspect are vulnerable to this. You'll find that those protocols are likely aware of this attack, because they are permissioned protocols.
A
This is one of the reasons why you don't find permissionless fork-choice DAG protocols out there. It's one of the reasons why Casper FFG does not use a DAG but a chain: because this vulnerability exists. Once we formalize the approach, we will have an update to the Highway paper; there'll be another version of the Highway protocol that accounts for this attack. We're also updating the technical specifications in alignment with this, so as we work on reliable broadcast, we'll be updating the tech spec.
A
We're also updating the tech spec to be more in line with Highway as we do this. On the node, we've integrated the stest testing framework into the hack/docker framework, so if it works like anything else there, I suspect there's a `make stest` target that will enable you to run stests, using docker, against the network you've created. I believe this will make it very easy to use stests to test the performance of your hack/docker network. We're also stress testing the network under various scenarios.
A
We're working on deploy gossiping. Deploy gossiping, we've discovered, is going to be very important. The first few weeks of the testnet will not have deploy gossiping, so the UX is going to be not great: if you send a deploy to a node, that node has to wait to be leader, and only then does the node propose the block with the deploy in it. This results in a longer time to finalization, a higher latency from when the deploy was sent. We want to make that a little more delightful by implementing deploy gossiping, so the nodes automatically gossip the deploys they receive. This is what you would expect anyway, and it basically hardens the network against DoS attacks; it's not possible to DoS a single node out. We're also updating and adding consensus metrics for monitoring the protocol. This is kind of like the production engineering process: we put the system under load.
A
There are changes to the proof-of-stake contract that are Highway-specific; we're making those changes in the host-side code of the execution engine. Most of the work happening right now is on testing and site reliability. The workload generators, the stests we're working with, are now running cleanly, with a hundred percent of the transactions being finalized. We had a few nits to work out with finalization; our network is a little different.
A
We're also setting up the infrastructure for the alpha testnet and all the management processes for test nodes. This means making sure the faucet is easy to configure with Ansible and Rundeck, setting up a Clarity instance that will be instantiated when we spin up and tear down the network, and setting up the requisite places in GitHub that will contain the chain spec for each iteration of the testnet. We will be bouncing the testnet at this phase.
A
Our intention is to provide a mechanism in Clarity for deploy signing. We also needed to do things like filtering ballots out in Clarity, so some Highway-specific features are also implemented there. Then, of course, the dApp developer guide is going to be updated with AssemblyScript information and documentation, and then there are the technical specifications, that is, the technical specifications for the network we're talking about here.
Okay, we've also started working on the node validator guide, or, we should probably call it the node operator guide, the validator guide. We'll be iterating on the validator guide over time. The first iteration of it is obviously going to be for the first set of alpha testnet validators, so we're working closely with them.
A
Then over time, as we learn things about the network, this validator guide will become richer and richer, with a lot more information for node operators, and at some point we will push it into the same repository as the dApp developer guide, so you'll have a single location for all of the node operator guide information. On economics, we're doing good research: we're looking at yield and returns calculations for the network, and we put together our alpha testnet program and the validator game.
A
So the first lap is our alpha; that's starting this month. Then we're going to have several laps, finishing up with the checkered flag, and the launch of the mainnet concludes the checkered flag round. We're studying transaction fee models, and Onur's going to talk about that today. From a team perspective, we're focusing on onboarding validators for the testnet, and we're starting small with this round so we can help them get coordinated, because we want to have, obviously, a decentralized network.
A
We want it to become very clear and obvious, when a validator wants to join the network, where they find the information they need in order to start up their node. We'll be continuing to drill in on this and make it simpler, easier, and more transparent, so that less and less coordination is needed with the validators over time. Join us for this call Tuesday mornings at 9:00 a.m.
B
Yep. So we've talked a lot about pricing blockchain resources, but before we price a resource, we actually need to count it: how much is being consumed, or even, what is the consumption of a resource in the first place? What is being consumed? How do we count it? Is it being counted efficiently? Does this counting cause any overhead?
B
If you want to count how much of a resource a program is consuming, then you also have to write a program to count how much it is consuming, and you have to run them at the same time, and that causes an overhead, which results in longer execution times. For that reason, I'll talk about metering blockchain resources. Let's define what a computational resource is. I have a computer I invested in; I incurred capital expenses; I bought a motherboard and a CPU.
B
All these things come from the markets, produced by various producers around the world, and there are also operating expenses: I need energy to run the hardware, and I need to rent a room to put my computer in. These expenses apply whether you have your own laptop or desktop at home, whether you are a cloud service provider, or whether you are doing blockchain.
B
All these computers need to exist somewhere in the real world, and somebody has to pay for them. That's what we mean by a computational resource. We just have to make demand meet supply, and then make sure the supply is allocated efficiently to the people who demand it. This is not something new: we had time-sharing mainframes back in the day, before personal computers, the IBM mainframes, where you had to connect through a terminal.
B
That's where the name "terminal" comes from. A terminal is just the endpoint: you just have a screen where you connect to a computer, and everybody in the building is connected to the same computer. The name made much more sense back in the day. But now you have your own Windows or Linux PC and you run a terminal, and the name doesn't make a lot of sense anymore, which is why it got renamed to "command line."
B
But in the case of a terminal, you have a lot of people consuming the same resource, and this is very analogous to the blockchain. If you look at Ethereum, or any other public blockchain, also Bitcoin, it's not a computer, but it's still a resource. People from all around the world are bidding to use this resource, and they bid by attaching a transaction fee to their transactions.
B
So it's a product that is being sold to a user, and when they run a program, they occupy computational resources. We conceptualize this economically as occupying time and space. When you run a program, that program runs for, say, 10 seconds, and that program may store some variables. So even after the program executes, it has long-lasting effects: it has to store some kilobytes of values in the computer's storage.
B
This is what I mean by time and space, and we categorize resources accordingly: a transient resource is occupied only during the execution of the program, and a non-transient resource is occupied even after the program finishes. I just want to note that, actually, every operation might consume both time and space.
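To illustrate the transient versus non-transient split described here, a minimal sketch in Python; the price constants, field names, and charging rules are invented for illustration and are not actual network parameters:

```python
from dataclasses import dataclass

# Transient resources (e.g. CPU time) are paid for once, when the program
# runs; non-transient resources (storage) keep accruing a rent-like cost
# after the program finishes, because they stay occupied.

@dataclass
class Usage:
    cpu_seconds: float   # transient: occupied only during execution
    storage_bytes: int   # non-transient: still occupied afterwards

# Hypothetical prices, purely for illustration.
PRICE_PER_CPU_SECOND = 2.0
RENT_PER_BYTE_PER_BLOCK = 0.001

def transient_charge(usage: Usage) -> float:
    """One-off charge, settled when the transaction executes."""
    return usage.cpu_seconds * PRICE_PER_CPU_SECOND

def rent_charge(usage: Usage, blocks_elapsed: int) -> float:
    """Recurring charge that accrues while storage stays occupied."""
    return usage.storage_bytes * RENT_PER_BYTE_PER_BLOCK * blocks_elapsed

u = Usage(cpu_seconds=10.0, storage_bytes=4096)
print(transient_charge(u))                  # 20.0
print(rent_charge(u, blocks_elapsed=100))   # ≈ 409.6
```

The key asymmetry this models is the one discussed below: the transient charge is settled immediately, while the rent keeps growing until somebody stops paying it.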
B
Every operation might consume time, is what I want to say, and we have limited computational capacity. We don't have a singularity yet, so we don't have infinite processing power, but we need to allocate this limited resource efficiently, in an economic sense, to prospective users. To put things more in the blockchain context: a program becomes a transaction.
B
With each transaction you create, you occupy this bandwidth resource. You have a limited time between blocks, or rounds: between the creation of two blocks, you have to get all these transaction payloads to all the miners in the system. That's again a transient resource: as soon as you propagate the block to all miners, it stops being occupied. That's why we can call bandwidth a transient resource. Now, about pricing and getting paid.
B
This is easy for transient resources, because the market determines the price according to supply and demand, and these are paid for and consumed immediately. A non-transient resource, as you can guess, is various types of storage, and pricing and getting paid is trickier in this case, because somebody has to pay the rent, and you can have different models. What happens when the rent isn't paid? Do you just prune the contract, or delete its storage to make it inaccessible?
B
Another problem: you can't sell X amount of computational capacity to run a program and be sure that it will finish executing before depleting the allocated capacity. This is due to the halting problem, an impossibility result from computer science: you can't predict when a program will finish executing with one hundred percent accuracy before you actually execute it. So you have to meter consumption.
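As a rough sketch of what metering means in practice, here is a toy stack machine that charges a fixed gas cost per opcode and aborts when the allocation is depleted; the opcodes and costs are hypothetical, not an actual cost table:

```python
# Metering sidesteps the halting problem: instead of predicting how long a
# program will run, we charge as it runs and cut it off when gas runs out.

class OutOfGas(Exception):
    pass

# Invented per-opcode costs, for illustration only.
GAS_COST = {"PUSH": 1, "ADD": 2, "MUL": 3}

def run(program, gas_limit):
    gas, stack = gas_limit, []
    for op, *args in program:
        gas -= GAS_COST[op]   # the metering "program" runs alongside the
        if gas < 0:           # user program; this per-op bookkeeping is
            raise OutOfGas()  # exactly the overhead discussed above
        if op == "PUSH":
            stack.append(args[0])
        elif op == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "MUL":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
    return stack[-1], gas_limit - gas  # result and gas consumed

prog = [("PUSH", 2), ("PUSH", 3), ("ADD",), ("PUSH", 4), ("MUL",)]
print(run(prog, gas_limit=100))  # (20, 8)
```

With a generous limit the program completes and reports its exact consumption; with too small a limit it is stopped mid-execution, which is the guarantee metering buys.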
B
Fortunately, we can approximate these functions. We cannot compute them with a hundred percent accuracy, but if we can calculate them with, say, 95% accuracy, we get an increase in performance. Doing it in a more approximate way would help us achieve a higher throughput.
B
Once you have the system, you realize there are some functions that are run by everyone all the time, like a hash function. In that case you don't inject metering code; you run it without metering and just assign a fixed cost, or you create a model of consumption, approximate, and cross your fingers, hoping that most of the time it will be correct and won't lead to a lot of economic inefficiencies.
B
This approach is actually being followed by Parity, and the first real-world implementation is in Polkadot's various testnets. They went ahead and implemented it because, I think, they saw a great upside to it. They say that a weight calculation should always be computable ahead of dispatch: block producers should be able to examine the weight of a dispatchable before actually deciding to accept it or not. So this is exactly their approach, and gas and weights are actually very similar concepts.
B
They wanted to emphasize that they're doing something more approximate, and that they don't care so much about the halting problem; for that reason, they chose to rename the concept. As for what we are working on: I am currently working on a systematic way of fitting resource consumption models using benchmark data, because we will change our implementation, and maybe we will abstract away some functionality, and we don't want to do this pricing by trial and error, manually, every time.
B
We want to do it in a systematic way, where we generate some relevant inputs and from those inputs make a statistical inference. We will say that, within a 95% confidence interval, this pricing of the operation will be correct. For example, if we say this program will take 10 seconds to execute, then you'll be able to be sure that 95% of the time the program will stop executing before 10 seconds.
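A minimal sketch of that benchmark-driven approach, using only the standard library; the "measured" costs below are synthetic stand-ins for real benchmark data, and the linear cost model is an assumption for illustration:

```python
import random
import statistics

# Generate inputs, "measure" execution cost, fit a simple linear model,
# then add an empirical margin so the predicted cost covers ~95% of runs.
random.seed(0)
sizes = [random.randint(1, 1000) for _ in range(200)]
# Pretend the true cost is 5 + 0.02 * n plus measurement noise.
costs = [5 + 0.02 * n + random.gauss(0, 0.5) for n in sizes]

# Ordinary least squares for cost = a + b * n.
n_mean, c_mean = statistics.fmean(sizes), statistics.fmean(costs)
b = (sum((n - n_mean) * (c - c_mean) for n, c in zip(sizes, costs))
     / sum((n - n_mean) ** 2 for n in sizes))
a = c_mean - b * n_mean

# The 95th percentile of the residuals is the safety margin: charging
# predict(n) should cover the true cost in roughly 95% of executions.
residuals = sorted(c - (a + b * n) for n, c in zip(sizes, costs))
margin = residuals[int(0.95 * len(residuals))]

def predict(n):
    return a + b * n + margin

covered = sum(c <= predict(n) for n, c in zip(sizes, costs)) / len(sizes)
print(covered)  # empirical coverage, close to 0.95 by construction
```

The same recipe generalizes: regenerate the inputs and refit whenever the implementation changes, instead of re-tuning the price table by hand.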
B
If you compare different Wasm implementations, there are performance losses when you enable metering. I talked to Fraser yesterday about this, and he's going to give me exact figures; I don't know yet. There are different Wasm implementations, like Wasmi and Wasmer, and we need to know the order of magnitude between metering and not metering. So yeah, that's what I wanted to talk about today. I can take any questions if there are any in the chat.
A
So, Onur, I think that was just terrific. I really appreciate the work you're doing. For those of you that don't know, one of our core value propositions is that we want to provide predictable pricing for business. We know that pricing becomes an issue because of scalability: you're not able to scale out a protocol to take advantage of economies of scale, which is very different from what you see in cloud computing, where an Amazon AWS is able to drop prices because they're able to scale out.
A
So ultimately this becomes a scaling problem, which we will begin researching, and we have a high degree of confidence that we have multiple solutions available to us. But for the time being, it's still too much risk for businesses to accept. Even if there is plenty of capacity on the network, they're not going to take on the risk.
A
It's more the fear of the unknown, not unlike Covid-19: the risk that, if we implement against a public blockchain and there happens to be a spike, now we are stuck, and migrating off of a core platform like this is challenging. So, irrespective of the scalability problem, we have to provide transaction price predictability, and it is our intention to look at alternatives to gas, because that will help us get more certainty around pricing.
A
The other piece of it, too, is that doing detailed, opcode-level metering inside the VM is also slow and expensive; the metering itself takes a good amount of time. So if we can find an alternative to opcode metering, it actually opens us up to using faster execution options for WebAssembly. As you know, there are multiple WebAssembly VMs, so decoupling the need to do opcode pricing from contract execution is super important, because it opens us up to different kinds of options.
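A rough way to see that cost is to run the same loop with and without a per-operation gas check; the absolute timings depend on the machine and interpreter, but the metered version does strictly more work per operation:

```python
import time

# Toy comparison of metered vs unmetered execution of the same workload.
# This is only an illustration of the overhead, not a real VM benchmark.

def unmetered(n):
    acc = 0
    for i in range(n):
        acc += i
    return acc

def metered(n, gas_limit):
    acc, gas = 0, gas_limit
    for i in range(n):
        gas -= 1                      # charge one unit per "opcode"
        if gas < 0:                   # abort when the allocation runs out
            raise RuntimeError("out of gas")
        acc += i
    return acc

N = 200_000
t0 = time.perf_counter(); unmetered(N); t1 = time.perf_counter()
metered(N, gas_limit=N);  t2 = time.perf_counter()
print(f"unmetered: {t1 - t0:.4f}s, metered: {t2 - t1:.4f}s")
```

Both functions compute the same result; the difference in wall time is pure bookkeeping, which is why avoiding opcode-level metering frees you to use the fastest available execution engine.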
D
That's correct. On our side, we are very much focused on helping out the builders, the dApp developers, so they get to know our system and have an easy entry. Our first focus was to create a dApp guide, which is a kind of generic introduction to the topic. Now we've selected a couple of very cool tutorials on smart contracts, like vesting, ERC-20, and so forth, and we are working hard to present these in a way that can be digested by all of our users.
A
Terrific, yes. So we will be moving the weekly workshops to focus purely on getting validators set up. Our focus is making sure that the validators have a clean road ahead of them in terms of installing and downloading the node software. We think we've done a great job with the node software, but we don't actually know until they have run it. So we want to put the software in their hands, make sure they can get the chain specification, and launch the testnet on the 31st in as decentralized a way as possible.
A
So our focus is really going to be on the validators and the weekly workshops in the coming days. That's our status update. Check in with us next Tuesday, where we will be T-minus one day from launching the testnet. Thank you, everyone, for dialing in. Cheers, have a great day, and stay safe out there. Bye now.