From YouTube: CasperLabs Community Call
Description: Rewards Distribution presentation & status update.
A: Fantastic, all right. Well, welcome everyone to the CasperLabs community call and our engineering update. I am happy to report that today we're going to have a review of our STests specification coming from Marc, and Alexander is also going to share our thinking around transaction fee pricing. So we'll have a small presentation from our economist with some thought leadership around transaction pricing, and then a look at how we're going to test the platform, as some of you may or may not be aware.
A: So without further ado, let's just dive right in. We have started running Highway on our long-running tests and performing basic debugging. Several members of the team are running the protocol locally using the Docker hack, which enables you to run the network locally. So it's also possible for folks to run the Highway protocol locally if they so wish.
A: Our next release is planned for March 12th, and that will mark the feature-complete release for the alpha testnet milestone, which is honest-validator Highway with a single validator set and non-flexible round lengths. So this is a protocol where the round durations are fixed and the round exponents are also fixed from genesis; it's our first iteration on the protocol. The current focus is testing and debugging and, like I said, optimizing the fork choice rule.

A: We've also moved system contracts to the host side, and we're going to add standard payment as well. This is for performance; it's going to give us significantly more performance. As a result, we're also setting up our STests environment in preparation for the alpha testnet time frame, and we're modeling reward distribution.
A: We have a pretty good simulator that Onur has created, and we're probably going to put that up on a webpage someplace, so people can interact with the rewards creation model. Then we're doing some research on spam protection; I'll talk a little bit about this later, it's the equivocation bomb. So we've got our honest Highway implemented, and we're also looking at an O(n log n) implementation for the fork choice.

A: This is really about the way we talk about O(n), O(n squared), O(n log n): this is the message overhead, the message payload, associated with each block, and we've got an optimization that's going to reduce that overhead. When you talk about message overhead, if you think about Bitcoin and Ethereum proof of work, the message overhead associated with proof of work is O(1).
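As a back-of-the-envelope illustration of why that matters, here is a small Python sketch of how those growth rates diverge as the validator set grows; the formulas are just the asymptotic shapes named above, not Highway's actual message counts:

```python
import math

def overhead_shapes(n: int) -> dict:
    """Per-block message-overhead growth for a validator set of size n.

    O(1)       ~ proof-of-work style block gossip (Bitcoin, Ethereum PoW)
    O(n log n) ~ the optimized fork-choice summaries discussed here
    O(n^2)     ~ naive all-to-all vote exchange
    """
    return {
        "O(1)": 1,
        "O(n log n)": round(n * math.log2(n)),
        "O(n^2)": n * n,
    }

for n in (16, 64, 256):
    print(n, overhead_shapes(n))
```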
A: So it's got a very low message overhead, but proof-of-stake protocols, particularly fork-choice protocols, have a higher message overhead, and there's a lot of research underway in the space on how to reduce the message overhead needed to finalize each block. We're also reworking the math paper to account for equivocation bombs. Andreas, in his infinite creativity, has managed to come up with a few attacks against the protocol. One of those is referred to as an equivocation bomb, and it results in message spamming; a lot of the DAG-based fork-choice protocols are susceptible to this.
A: It's one of the reasons why, generally in the space, we have not seen many open, permissionless fork-choice protocols, and this is research that we need to do in order to make Highway really permissionless and open. So we have some ideas on what we need to do, and we're just going to rework the math paper for that.
A
No
no
teams
really
focused
on
the
block
and
fourth
choice:
validation,
this
oh
of
n,
log,
n,
optimization
and
then
updating
some
specifications.
We
want
to
get
deployed,
gossiping
done
as
well.
That's
gonna
be
a
stretch
goal.
We
don't
know
that
we'll
get
deployed
gossiping
done
in
time
for
the
Alpha
test,
that
time
frame
just
because
there's
a
lot
of
work
to
do
with
highway.
We
did
get
the
protobuf
definitions
updated
to
handle
all
CL
types,
and
now
we
expose
the
type
system
all
the
way
out
to
the
deploy
arguments.
So
that's
that's
good.
A: It cleans up the CLType implementation; it gets everything very consistent on the execution engine. We got what we call turbo mode implemented, and this uses host-side implementations of the Mint and Proof-of-Stake contracts. If people want to use the execution engine in a more generic fashion, write their own consensus, and write their own Mint and Proof-of-Stake contracts, they certainly can do that; they just won't get the performance optimizations that we have in our latest tests.
A: We are seeing about 200 transfer transactions per second on a single core. Our recommended configuration is eight cores, so I suspect we'll see somewhere on the order of 600 to 700 transactions per second on an eight-core machine, which is very nice. That puts us within striking distance of our goals for mainnet.
A: Design and implementation of the balance endpoint: this is an endpoint that will help you get balances for a given public key address. We're also going to bring standard payment in as one of these host-side implementations, to get an even further boost in speed. And we're supporting multiple key types, which is kind of cool: you can basically bring your own elliptic-curve scheme, so you don't have to use Ed25519. Right now the system just supports Ed25519.

A: I wanted secure enclave support, so you can create keys within the enclave on phones and hardware wallets, and this basically pushed the requirement that we would support multiple key types. So now the key type is like a first-order variable within the system, which is fantastic.
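A minimal sketch of the "key type as a first-order variable" idea; the enum, field names, and tag-byte address scheme below are hypothetical illustrations, not the actual CasperLabs account format:

```python
from dataclasses import dataclass
from enum import Enum

class KeyAlgorithm(Enum):
    ED25519 = 1    # the only algorithm supported at the time of this call
    SECP256K1 = 2  # hypothetical second curve, e.g. for hardware wallets

@dataclass(frozen=True)
class PublicKey:
    """Carry the algorithm alongside the raw bytes, so signature checks
    can dispatch on key type instead of assuming Ed25519 everywhere."""
    algorithm: KeyAlgorithm
    raw: bytes

    def account_address(self) -> bytes:
        # Hypothetical scheme: prefix the key with an algorithm tag byte.
        return bytes([self.algorithm.value]) + self.raw

key = PublicKey(KeyAlgorithm.ED25519, bytes(32))
print(key.account_address().hex())
```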
A: Tests and SRE: big focus on STests. The node restart test is the first of the STests that we're getting ready to run, and this is basically a node rejoin: to phrase it simply, the node has to bounce itself, right? Stop operating on the network, install an update, and then rejoin; an upgrade is a great example. So you really need to be able to support this restart scenario, and that's the first test that we're going to run. We're also integrating STests with Ansible and Rundeck.
A: That enables us to automate the firing up of an entire test, together with the infrastructure, using Ansible and Rundeck. Then we've got our small integration tests, which we're going to port to be Highway-compliant, and of course we've got to get ready for the testnet, so we're setting up our monitoring and alerts for that, which we can share with the validator set. And we're doing some small automation to make sure that the Python client is available via PyPI.
A: On the ecosystem front, we want to support deploy signing and sending deploys from Clarity directly. Users that want to, for example, do a token transfer within Clarity would use this kind of functionality. We're also getting the dApp developer guide out this week, and creating tutorials for smart contract development. The tutorials really cover things like how to work with ERC-20 and how to use the SDK.
A: We've got some great videos up there, but there's also a little bit of functionality specific to the CasperLabs blockchain, such as calling stored contracts and using unforgeable references. Our workshops attempt to overview these as well, but we understand that a tutorial is also needed.
A: Economics research: the pricing model, which you'll hear a little bit about today from Alexander, and then the seigniorage distribution in Python. This is the simulation that we were talking about; we want to spin up a small website and make it possible for people to interact with it. And then, of course, there's always the lazy validator problem, so we're actually looking at, and triple-checking, our work to see what happens with a lazy validator under the reward distribution scheme.
A: We have our weekly workshops, Thursdays at 8 a.m. Pacific and 4 p.m. Pacific; the latter is 8 or 9 a.m. in Asia. Please remember that the US goes through daylight saving time next Sunday, so Asian times will probably be one hour earlier: instead of 9 a.m., I think it'll be 8 a.m. or 7 a.m. in Asia. So we might need to move our Thursday session to 5 p.m. to accommodate.
B: Just for a couple of minutes: I'm going to be talking about our pricing features, but I'm also working on this seigniorage distribution problem. We currently have, from Onur, both a very nice formal specification for reward distribution and, additionally, a simulator. So currently, as a prerequisite to expanding this into something that could be used externally, I am trying to write a very small...
B: ...kind of seigniorage analytics pipeline: essentially a series of scripts that will generate various scenarios, have the simulator run them through, and then actually analyze what happens (see the sketch below). Our idea is that the seigniorage should be fairly predictable, but our reward model depends on the validators' round exponents, which might vary either initially or over time due to exponent adjustment.
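A purely illustrative sketch of such a pipeline, with a made-up payout rule standing in for the real simulator (rounds last 2^exponent milliseconds, and per-round pay is calibrated as if every validator ran 2^15 ms rounds, so heterogeneous exponents make realized issuance drift off target):

```python
import random
import statistics

def make_scenario(num_validators: int, seed: int) -> list:
    """One scenario: a (heterogeneous) round exponent per validator."""
    rng = random.Random(seed)
    return [rng.choice([14, 15, 16]) for _ in range(num_validators)]

def simulate_issuance(exponents, target_per_era: float,
                      era_ms: int = 2**25) -> float:
    """Stand-in simulator: each validator is paid per round it runs, so
    shorter rounds (smaller exponents) collect more rounds per era."""
    share = 1.0 / len(exponents)
    pay_per_round = target_per_era / (era_ms / 2**15)
    return sum(share * (era_ms / 2**e) * pay_per_round for e in exponents)

target = 1_000.0  # hypothetical per-era seigniorage target
runs = [simulate_issuance(make_scenario(100, s), target) for s in range(50)]
print("mean issuance:", round(statistics.mean(runs), 1),
      "stdev:", round(statistics.pstdev(runs), 1))
```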
B: So we need to verify that, in the face of this validator heterogeneity, we can still hit our target seigniorage. Hopefully, once that is done, it can be presented in one of these meetings. Later on we will probably try to expose it nicely to people outside, so they can run these types of simulations without actually having to deal with downloading the source code and all that. But now I guess we can switch to the pricing model.
B: So let me share my screen now... all right... okay. I have previously presented a few ideas on economically modeling gas prices and the benefits and disadvantages of various gas pricing models, but that still needs a lot of conceptual work, so I don't really have anything very interesting on the theory side as such. But we have a distinct challenge that we need to address first, in fact, because all of the theoretical research will take some time.
B: So that's one part of the problem, probably in the long run the most important one, but there's an additional component, which is that any kind of limited resource should be priced in a way that makes it easy to prioritize high-value applications, and this holds true for all the resources. In this case, you should consider the resources to be compute time, storage, and bandwidth.
B: Now, because we are on very short notice with developing this, we will effectively drop the aspect of assigning compute time to the highest-value transactions. The reason for this is that Ethereum, while it fixes these relative gas costs, essentially sets the price for a unit of gas by something that looks very much like a first-price auction.
B: Right, and in simple economic models, where you are selling a single good or something, first-price auctions work fairly well in theory. Unfortunately it turns out, both from the experience of Ethereum and from the experience of other online platforms that use them, such as online search ad auctions...
B: ...that first-price auctions in this environment end up being pretty volatile. This means that in such a system, resources are allocated efficiently for agents who are willing to bear the risks and have deep pockets, as they can survive these spikes. But there is this long tail of small potential users, which would really underpin the whole platform in the end, and they are effectively excluded or put under unreasonable pressure.
B: Right, I mean, people need to be able to tell how much they are going to pay to run their service a week from now, a month from now, six months from now. So, as an initial, very crude compromise, what I'm proposing is that we're going to set these relative gas costs based on benchmarking, and then you're still going to be paying for your transactions in CLX, but the actual price of a unit of gas is going to be fixed in dollar terms.
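A minimal sketch of that conversion, assuming a hypothetical oracle rate and denomination (the function name, the 10^9 motes-per-CLX figure, and the example numbers are all illustrative, not actual protocol constants):

```python
def gas_price_in_motes(usd_per_gas: float, usd_per_clx: float,
                       motes_per_clx: int = 10**9) -> int:
    """Turn a dollar-fixed gas price into the token's smallest unit.

    usd_per_gas:   unit gas price, fixed in dollar terms by the protocol
    usd_per_clx:   oracle-reported CLX/USD rate (the "oracle link" input)
    motes_per_clx: assumed denomination of the token
    """
    return round(usd_per_gas / usd_per_clx * motes_per_clx)

# A deploy burning 10,000 gas, with gas fixed at $0.000001 and CLX at $0.02:
fee_motes = 10_000 * gas_price_in_motes(0.000001, 0.02)
print(fee_motes, "motes")  # the dollar cost stays fixed as CLX/USD moves
```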
B: So this solution cuts down on the price volatility problem, but of course it has the typical downsides of price controls, which is effectively trading the volatility of price for volatility of access, because you can no longer prioritize high-value transactions by something that looks like an auction. But again, this is not our final answer to this challenge; we need to have something, and this is what it's going to be.
B: We will hopefully replace it with something better down the road, which I'm going to discuss with the team and with Michael. So we have this answer to the challenge. Okay, so what are we missing for implementation? Well, it turns out that we still need to do some work on expanding and documenting our instrumentation for execution engine performance.
B: We still need to nail down exactly how the oracle link will actually operate, and, finally, we need to settle on how, while the system is operational, we are actually going to adjust the price, because hopefully it's not going to be a unilateral thing done by CasperLabs. Rather, we're going to implement some kind of an algorithm for, essentially, price discovery. We don't know what that's going to look like yet, but again...
B: So, as I said previously, in a system like this you must assign prices to all the limited resources and incentivize good usage of all of them, right? Because... well, okay, to correct myself: this actually doesn't really touch on compute time, but it touches on bandwidth and storage, because you don't want people deploying the same contract over and over again, and you don't want them sending huge deploys around.
B: So this is a very particular instance of this problem that we will be addressing, although right now we only have very, very rough ideas about it. But this is, right now, what we see as the rough plan for implementing this compromise solution, and, as I said previously, down the line we envision replacing this with something more flexible, something that takes the best part of what...
B: ...something like Ethereum does, which is an efficient allocation of compute time, but also shields small participants from the volatility. There are two things you can do here, as far as we see now. The first one is a fairly simple modification to what something like Ethereum does, which is that you could in principle implement second-price auctions, and the reason we might consider this is because, initially, back in the day...
B: The other solution is inspired by certain real markets for things like commodities, because effectively we'd have something that we might call gas futures. You can, in principle, determine the price of future transactions in advance, maybe three months or even six months in advance. Now, doing this in a distributed system without any external enforcement is, of course, rather difficult, because how do you make validators actually honor these?
A: Looks great. Yeah, I don't have any questions, except that one of the value propositions of CasperLabs is to give businesses certainty around what they're going to have to budget for, right? And the scaling problem really manifests itself in pricing. So when you hear people talk about the scaling problem in blockchain, they're really talking about price predictability relative to traditional markets, you know, traditional infrastructure markets.
A: So the idea is that, while our MVP, our initial mainnet launch, will not have shards in it, we will be ready to shard the protocol very shortly thereafter. So thanks for that, Alex, I really appreciate it. And now we're going to have Marc Greenslade give us a detailed, actually hands-on, demo of STests. I'm going to share real quick, Marc, before you start: the STests specification can be found on GitHub, in the CasperLabs stests repository.
C: I'll just spin through this; there are only a couple of slides. So yeah, it's a system test platform. We have unit tests; we have integration tests. The unit tests are for the dev teams, for their components within the overall CasperLabs software stack; as they're developing, they extend those unit tests.
C: In that spectrum of testing you need another level, and that level is system testing. So we've incubated this library called STests to encapsulate our efforts in this direction. Really, what STests, or system testing, needs to do is make it easy for us to perform testing at scale and in depth. And what we're looking for, as the node software is running, is state bloat: this is where the state of the system is accumulating...
C: We want to look at functional regressions: as we improve, extend, and change the node software, we want to be able to determine whether functional regressions are occurring over time. Scalability degradation: as we dispatch larger and larger workloads to this network, what happens as we increase the number of nodes? We're scaling out horizontally versus going vertically, so do we see degradation there? Protocol upgrades: it's massively important to verify that the system can handle adaptive protocol upgrades. And the game theory context within proof of stake: as validators are bonding and unbonding...
C: ...how is this playing out? And then, at the node level: if you stress its memory, its CPU, its disk, how does the software perform, and how does the network perform? And then, of course, the unit tests and integration tests are quite time-constrained; we need a context in which we can run long-running tests over the course of hours, days, weeks, and potentially months. So we need a platform that can help us analyze the system from these different perspectives, and STests is designed to help us do that.
C: It focuses, and will focus, upon two types of simulation. One is at the network level: individual nodes within the full node set of a network, bringing nodes down, bringing nodes up, affecting or changing parameters that influence the consensus mechanism, for example the round exponent.
C: Or throttling the network interface on a node to see what impact that has. So that's the network level, but there's a whole set of simulations we want to run at the dApp level, which is simulating, for example, ERC-20 tokens, or vesting contracts, or maybe online gaming scenarios. So any kind of dApp, or DeFi in particular, any kind of dApp context that wants to leverage the network, is what we need to be able to simulate.
C: So first of all, I'll hit a command just to set up my local environment. What I'm doing here is registering with STests a network for testing, and a set of nodes within that network for testing. So I'm saying to STests: here's a network, here's a set of nodes, and here's the information that you need in order to start testing this guy. I've just set that up locally, so if I go into Redis now and refresh my Redis store...
C: I have two collections, or two tables if you think of it in terms of a database: one simply called network and one called node. So here we have the network; I'm calling this LRT-1, for long-running test one, and we have registered this network: simply registering the name of the network, but also an account that we're going to use as a faucet, a network-level faucet account.
So
we
need
this
faucet
account
so
that
we
so
estás
can
draw
funds,
whilst
it's
running
tests
that
can
draw
down
clx
whilst
it's
running
tests.
So
that's
just
you
know
simple
bit
of
information
metadata
about
the
network.
That's
been
we're
going
to
test
and
we
have
the
set
of
nodes
within
that
particular
network.
We
have
some
more
meta
data
and
for
each
node
we
have
the
the
bonding
key
or
the
validator
key.
So
so
we
have
the
key
pair.
So
we
can
run
simulations
around
bonding
scenarios.
We
have
the
host,
we
have
the
port.
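A minimal sketch of what those two record types might look like in Redis, written with redis-py; the key names and field layout here are illustrative guesses, not stests' actual schema:

```python
import json
import redis  # assumes a local Redis instance, as in the demo

store = redis.Redis(db=1)  # which db holds these records is a guess here

# Network record: the name plus the network-level faucet account.
store.set("network:LRT-1", json.dumps({
    "name": "LRT-1",                      # long-running test network one
    "faucet_account": "<faucet-pubkey>",  # drawn down while tests run
}))

# Node record: bonding key pair for bonding scenarios, plus host and port.
store.set("node:LRT-1:1", json.dumps({
    "index": 1,
    "public_key": "<validator-pubkey>",
    "private_key_path": "/keys/node-1.pem",
    "host": "127.0.0.1",
    "port": 40400,
}))
```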
C: We have some status information, and we have the type of node, whether it's going to be a full node or a query node. So again, we have metadata about the nodes themselves within that network, and that now enables us to start running an STests simulation. But all STests simulations, whilst they may be dispatching deploys to the target network...
C: The node software is running, going through the genesis process; just wait for it to... okay, it's listening for traffic, good. So now I'm going to start monitoring that network, and I've just pinged a message. STests is based upon using a message-broker mechanism to scale out, so every interaction that STests makes with a target network is always through a message broker. We can support RabbitMQ as our message broker, but I'm going to use Redis, which is the default.
C: It also supports RabbitMQ because you get some nice UI stuff that's quite handy during the development process, but by default it's Redis. If we go back to our Redis store and refresh our Redis cache, we can see that in DB 0 we now have a message that's been enqueued, and it's just a blob of JSON effectively, but this is telling STests: okay, when you actually start up, you can pull down this message and start monitoring a network.
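A sketch of that broker interaction, again with illustrative names (the queue key and message shape are assumptions; the real stests payload will differ):

```python
import json
import redis

broker = redis.Redis(db=0)  # DB 0 is where the demo showed the queued blob

# Enqueue a JSON blob telling a worker which network to start monitoring.
broker.lpush("stests.broker", json.dumps(
    {"action": "monitor-network", "network": "LRT-1"}))

# A worker blocks on the queue and acts on each message it pulls down.
_queue, raw = broker.brpop("stests.broker")
print("worker received:", json.loads(raw))
```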
C: So I'm going to run STests in what I call interactive mode.
C: I've just started the STests workers, which are the guys that are going to actually do all the work during the course of simulations, and that's spun off a few processes and worker threads. What you might see here is some logging information which says: okay, do monitor blocks, stream events; it pulls some information from the node and then it connects to a node. So now STests is listening to the network, because the node software supports a whole set of endpoints, one of which is called stream events.
C: So STests is built upon the notion of workload generators. Each generator will dispatch a different type of workload to the network, and it will also verify that the network has processed that workload as per the expectation for that particular type of workload generator. In this case, WG-100 is like our baseline generator for testing a transfer cycle, going from that network faucet account that I mentioned earlier...
C: So that's a relatively simple test, and for each run of this particular test I specify the network that I'm interested in, in this case LRT-1. I can specify a run identifier, which will be an integer, so I can run this multiple times and then compare results. And then I can set parameters specific to this particular generator; in this case, the number of user accounts that I want to create.
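In Python terms, the per-run arguments just described might look like this; the RunArgs names and the launch function are hypothetical stand-ins for the actual stests CLI:

```python
from dataclasses import dataclass

@dataclass
class RunArgs:
    network: str        # which registered network to target, e.g. LRT-1
    run_index: int      # integer run identifier, so runs can be compared
    user_accounts: int  # generator-specific parameter for WG-100

def launch_wg100(args: RunArgs) -> None:
    # Illustration only: show what a run with these arguments would do.
    print(f"WG-100 :: {args.network} :: R-{args.run_index:02d} "
          f":: creating {args.user_accounts} user accounts")

launch_wg100(RunArgs(network="LRT-1", run_index=1, user_accounts=5))
launch_wg100(RunArgs(network="LRT-1", run_index=2, user_accounts=50))
```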
C: Going back to my interactive shell, this is still waiting, listening to the CLX network event stream. The results of those event streams are being verified: we're verifying all the transfers, we're verifying all the deploys, and the WG-100 workflow, as each step or each deploy is verified, moves on to the next step or part of its own workflow. And we can see at the bottom it says WG-100 R-01 has completed, so that was a successful run of that particular workload generator.
C: You know that this transfer cycle and refund cycle is fine, and over time, over the course of a week, we may start seeing some anomalies, because perhaps consensus starts to break down. This is what STests gives us: this kind of ability to scale up our testing. In this case we only had 5 user accounts; we could increase that to 50.
C: So now STests is going through the same test, but instead of creating simply five user accounts, plus a contract account, plus a run-specific faucet account, it's now creating 50 user accounts. So it's just scaled up, and there's more work now being dispatched. We can look at the execution engine output, and that's just churning away, as is the node logging output, and voila.
C: So, very simply, we've increased the amount of work that we've been able to push onto this network, but, more importantly, we're doing verifications: we're dispatching work and we're verifying that work. So this is classic system testing, really. So I'll just run it a little bit; we should see...
C: Those aren't failures; they are run-step verification failures, which is correct, because each step within that run has to wait until the number of deploys dispatched in that step equals the number of finalized deploys. So the failures you were seeing were actually control failures, and now we can see that WG-100 R-01 has completed. So that's good; we've scaled that up a bit.
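The step-advance rule just described reduces to a simple predicate; a minimal sketch:

```python
def step_may_advance(dispatched: int, finalized: int) -> bool:
    """A run step only advances once every deploy dispatched in that step
    has been observed as finalized; until then the check 'fails' by design."""
    return finalized >= dispatched

print(step_may_advance(dispatched=50, finalized=37))  # False: keep waiting
print(step_may_advance(dispatched=50, finalized=50))  # True: next step
```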
C: Okay, so in database one we still just have the infrastructure information, the network and the node. In the next database we have the monitoring information, which is, for every block that's being mined, if you like, or finalized, the metadata about that block that is of interest to us. Same with the deploys: for every deploy that has been dispatched and finalized, we have the metadata available to us, including the finalization time, or time to finalization.
C: So that's at the monitoring level, but at the run level we have a whole set of information that's run-specific. For example, we can look at the accounts for run one of WG-100. We can see that we have runs one and two, and, because I ran with the user accounts parameter set at 50, we have over 50 accounts set up for each run. These are dynamically generated accounts, where you can just see the key pairs.
C: We have the set of deploys that were dispatched for each of those runs, so for WG-100 run number one I can see a whole set of deploys, and each deploy has metadata, including the type of deploy; this one is a transfer deploy. So we have all that metadata available to us. We're also logging all those transfers that we discussed earlier; we can see, you know, we've got...
C: For example, we've got counterparty one, counterparty two, the deploy hash, and the network that it was dispatched to. So basically we're persisting in a database metadata about the network from an infrastructure perspective, about the network from a monitoring perspective, and then, for the workload generators, each run.
A: Where do you tweak that? In terms of... one of the things we'll have to do with Highway is figure out the most optimal, or efficient, settings for the network, correct? In terms of, like, the recommended size of a block, or number of deploys per block, or gas limit, as Ethereum does, right, to help guide the network into an efficient state of operation. Where do you tweak that?
C: That's kind of orthogonal, if you like. So, for example, you would set certain parameters on the node itself, on the node software, and then do run one of this workload generator, for example. (Got it.) Then you tweak parameters for run two, re-parameterize for run three, and then use all those metrics and all that metadata that we're storing, on a run-by-run basis, to do comparisons. (Yep.)
C: You can also do multiple runs side by side across multiple networks. So I've registered LRT-1; I could register another network. For example, we'll have our alpha test network at the end of March, but currently, in our own CasperLabs context, we have multiple internal networks, test networks, so we can register all those networks and run these workload generators simultaneously against them.
C: One issue I should mention here, which we'll be touching on more in terms of where we're going over the next few weeks: if I were to set the user accounts to a hundred thousand, because that's the kind of scale we want to be able to test, that would cause issues, because STests will probably be so fast in terms of dispatching deploys that the node software and the consensus mechanism may be overly stressed. So we would want to add a parameter for rate limiting.
C: One second... so, one of the parameters for all these generators will be a node: you can specify a node index which, going back to the list of registered nodes, where each node is given an index (in this case 1 to 5), targets a specific node. So we could say, okay, node 1, or node 10, or whatever: dispatch all deploys to that node and forget the rest. But if you don't specify that parameter, so by default it's set to 0, then it will pick a node at random.
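That node-targeting behavior, sketched in Python (indices 1 to 5 as in the demo network; the convention that 0 means "random" mirrors what was just described):

```python
import random

def pick_node(nodes: list, node_index: int = 0) -> str:
    """Index 0 (the default) picks a registered node at random; any other
    index pins every dispatch to that one node and ignores the rest."""
    if node_index == 0:
        return random.choice(nodes)
    return nodes[node_index - 1]  # registered nodes are indexed from 1

nodes = [f"node-{i}" for i in range(1, 6)]  # e.g. the five LRT-1 nodes
print(pick_node(nodes))                # a random node
print(pick_node(nodes, node_index=3))  # always node-3
```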
A: Let's not throttle anything; let's put some heat on the developers to make sure that the system can handle that level of load, and then we go from there. First principles, yeah. Awesome. Well, thank you so much for coming on here to present; I really appreciate it. And next week, I think, we should find someone else on the engineering team to present something. Maybe Akosh comes back on to show Highway; I think he did a few weeks ago, but maybe we could talk about that, or we can talk about...