From YouTube: CasperLabs Community Call
Description
Rewards Distribution presentation & status update.
B
We have to release some patches to the testnet, so we had to bounce it. The testnet did experience a critical failure and became unstable over the weekend. We did see this problem in our test bed and we're working on a fix. So the good news is we have a fix already; the bad news is we have to bounce the testnet. We are very pleased, though, that the initial round of the network stayed up for four weeks with virtually no alerting and no maintenance.
B
Very few validators had any problems at all with their nodes, so it's really a testament to how stable the software is. We received great feedback from the validators on how easy the software is to install, monitor, and maintain. We have alerts set up, so our SRE team is aware when there's a critical problem on one of the nodes on the network, because we do know who the validators are in this round of the testnet.
B
So we are bouncing the network, and the genesis go-live time, I believe, is 11 a.m. this morning. So two more hours till genesis kicks off, and I think we've got about 80% of the stake ready to go. We need a few more validators to get ready, and we're in contact with those validators to help them come along.
B
The nature of the problem on the testnet was really in the DAG structure: there was a problem with the last-finalized-block logic, and this is what created the problem for the network. Oh, and today Onur is going to be talking to us about how we can tune the Highway protocol. Highway does provide some tunables, which is an interesting feature of the protocol; it's very highly configurable. Most of these protocols have some capacity to tune them, but not quite like this one.
B
So, what we're updating the testnet with: deploy gossiping is going to be part of the next update (not the update we're currently doing, but the update that's going to follow, with some patches and fixes). We will probably have to go through another genesis for that, because there are some protocol changes. Basically, the genesis hash and the chainspec will be updated as a result, and we're also adding fixes in the finalizer and the last-finalized-block determination that took the network down. That's part of this fix.
B
We are starting to work on the Rust node. For those of you that are not aware, we have decided to start moving away from Scala. Basically, the idea is that we will have a reactor model that will sit in between the Scala and the Rust portions of the code, and it will slowly start to eat up more and more Scala code over time, eventually subsuming all the Scala code into Rust. We believe we can get this done.
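The reactor idea described here can be sketched roughly as follows. This is a hypothetical illustration, not the actual CasperLabs API: a single event loop routes messages to pluggable handlers, so an individual component can be swapped out (say, Scala for Rust) without touching the rest of the node.

```python
from collections import deque

class Reactor:
    """Minimal sketch of a reactor: one queue, pluggable handlers."""

    def __init__(self):
        self.handlers = {}    # event type -> handler (a swappable component)
        self.queue = deque()  # pending events

    def register(self, event_type, handler):
        """Plug in, or replace, the component handling an event type."""
        self.handlers[event_type] = handler

    def dispatch(self, event_type, payload):
        self.queue.append((event_type, payload))

    def run(self):
        """Drain the queue, handing each event to its registered handler."""
        results = []
        while self.queue:
            event_type, payload = self.queue.popleft()
            results.append(self.handlers[event_type](self, payload))
        return results

reactor = Reactor()
reactor.register("block_received", lambda r, blk: f"validated {blk}")
reactor.dispatch("block_received", "block-1")
print(reactor.run())  # -> ['validated block-1']
```

The point of the pattern for a migration like the one described is that `register` is the only seam: re-registering a handler swaps the implementation behind an event type without the rest of the loop noticing.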
B
A full Rust node will be done in time for the mainnet launch. The reason we decided to do this is that we discovered the split-architecture model was becoming more and more problematic for us over time. Things like state pruning and upgrades would be profoundly difficult in the split architecture, so we've decided to move to a single architecture, and that work is going on extremely well.
B
We already have a working proof of concept of the reactor model, and we also have an initial in-memory implementation of the unified consensus logic. We are revamping the storage story as well. Part of the split architecture that became problematic was the storage model, because each process needed access to its own storage, so we're going to be unifying the storage story as part of this work. Contract headers is in, and test coverage is in for it, Ashok.
B
I think it would be important to make sure that before we cut the release branch (or we could cut the release branch today, that's fine) we verify that contract headers is usable prior to release, so some patches will probably need to go into that release branch. Okay. We're also working on instrumenting the op codes; this is for transaction costs.
B
The validators are working to launch genesis round six; as I said, we're mostly ready. I'll also talk a little bit about what we're doing in terms of SRE here, besides the written update. SRE is taking a much larger role. Tom, who heads up our SRE team, has been really instrumental in getting the testnet up and running and in working with the validators, and he's going to be making the decision as to when releases ship.
B
SRE is basically our last and final gate to testnet, so it will be up to Tom and SRE to tell me when code is ready to go into the wild. SRE will be doing much more production-readiness testing of the code prior to release. This is what happens in large engineering organizations: take Salesforce, Avalara, Amazon, any of these companies. Before they ship code to their production systems,
B
it has to go through SRE for production readiness, to make sure it meets all of the uptime, stability, and hardening requirements of the team that is responsible for making sure everything stays up: SRE. They're the ones that monitor and watch the network, so they're the ones that get to tell us when the code is ready. So Tom will be informing us when release 19 is ready to ship, and he'll be informing us when the 18.x patch is ready to go.
B
SRE currently has the code for the 18.2 patch; they don't yet have the code for 19. Ashok, you probably need to plan for one week. So if you want to hit the May 18th date, you probably want to cut something very soon, probably no later than the end of this week, so that they have the requisite one week before they release the patch. Food for thought.
B
You need to back-merge 18.x, or you're going to merge 19 into the 18 branch. Okay, I'll just trust you guys are handling that. On the ecosystem side: we've got to port some of the features inside Clarity to support contract headers. Right now the faucet account is not working, because the faucet is a stored contract, and that contract needs to be ported to use contract headers.
B
We are also writing up the specification for multi-key. We're working with Siler; for those of you that don't know who Siler is, he's a fairly well-known cryptographer with the Ethereum Foundation. We had him do a little bit of research for us on how we wanted to implement our multi-key support, and he actually agreed with our ideas.
B
This is what Machi had proposed, so he wrote up a paper on it, and we're going to write a specification and get that work implemented. That likely will not be in nineteen, though; we will have to explore that, because the projects building on us will need the multi-key support. We're also going to move GraphQL; this is a big update. We're going to move GraphQL off of the node, because it is non-deterministic load for a validating node to contend with. And when I say non-deterministic load,
B
what I mean is you never know when a dapp author is going to send you a query that could potentially consume a lot of resources, for example while you're trying to propose a block, and it could potentially create a risk of an equivocation or a liveness failure. We think that if we want to have query support for state queries that actually works, we're going to need to provide it separately from the validating node's services, because we suspect that without an incentive mechanism around it,
B
validators will not trade off block production for servicing queries. So we're going to move to a Kafka-type system, in which CasperLabs will offer the Kafka querying for free for a period of time, and then we hope that the community will eventually offer this same service: maybe a paid service, maybe a reduced-fee service, but something along those lines in the long term. This will basically enable nodes to offload their querying responsibilities to a centralized system like Kafka, but there could be many Kafkas, right?
B
So it's centralized in that we could support one, but anybody could run a Kafka service, because the node will emit events. You can choose which Kafka cluster you want to send those events to, the finalization events or the contract events, and you can configure that in your node.
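The routing idea described here might look something like the sketch below. The config shape and all names are illustrative assumptions, not the real node configuration: the node emits events, and the operator decides which external cluster each event type is streamed to.

```python
# Hypothetical sketch of per-event-type routing. The "event_stream"
# config shape and the endpoint URLs are made up for illustration.
config = {
    "event_stream": {
        "finalization_events": "kafka://queries.example.com:9092/finality",
        "contract_events":     "kafka://queries.example.com:9092/contracts",
    }
}

def route_event(event_type, payload, config):
    """Look up the configured sink for this event type, if any."""
    sink = config["event_stream"].get(event_type)
    if sink is None:
        return None  # no sink configured: the node keeps no query load for it
    return (sink, payload)

sink, payload = route_event("finalization_events", {"block": "b1"}, config)
print(sink)  # -> kafka://queries.example.com:9092/finality
```

The design point is that the validating node only emits and forwards; all query-serving cost lands on whichever cluster the operator points these sinks at.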
On economics research: we're updating our tech spec. I've got to give a shout-out: with Sphinx, we put a brand-new look and feel on our technical documentation. It looks really great. I love the look; it's right here.
B
Also, for those of you that saw the Chainlink press release and announcement: we are going to be using Chainlink to help stabilize our transaction fees. We're starting to design the integration with Chainlink, and we're collaborating with them on the design of the contracts as well as the detailed integration.
B
What is the token vintages proposal? Yes, do you want to talk about this, Onur, real quickly?
B
Right, right. For those of you that don't know, we are in the middle of doing our private token sale, and we're talking to our genesis validators. One of the questions that came up was: I'm participating in your private token sale; it's still very early on, and you still have a long way to go. What benefits do I get for holding this for as long as you're asking me to lock it up?
B
So we came away with this idea of a multiplier, which incentivizes validators to be long-term participants in the network. Terrific. Then there are the configurable fault-tolerance threshold, cost accounting, and other protocol areas we're taking a look at. On the configurable fault-tolerance threshold: I would like finalization to not be part of the core protocol. Ashok, you should probably put a pin in this someplace and think about it
B
as you talk to the researchers and the consensus guys. Really, one of the things I don't like is that when testnet loses finalization, it falls over. It would be good for the protocol to be able to continue even if finalization gets impinged, because the protocol needs to be able to recover. It shouldn't fall over because finalization fails; it should be able to resume finalization. Something bad happens, then a bunch of good things happen, and finalization resumes; it shouldn't fall over. So I think that's something we need to explore.
A
So, do you have a maximum size of a block? When a block is mined or proposed, in all distributed networks, if you want people to achieve consensus on a block and add it to their own blockchain, you want all of the miners to receive it before the next one is proposed. So between two blocks you should have enough time for the first block to be propagated to all of the network. In Bitcoin,
A
you have a 10-minute block time and a 1-megabyte block size. I also have this column called forking rate, which is contextually relevant, but this is not something you choose. Actually, these three depend on each other: if you choose a certain block time and block size, then the result is a certain forking rate. In Bitcoin it's actually very close to zero; last year we had, I think, only one orphan block, which is due to the centralization of mining pools. These pools have very high-rate transfer between them.
A
When a block is mined, they just know where to send it. It's less like a distributed network and more like they know exactly who they should send the block to in order for it not to be orphaned, because we have a large percentage of the mining power concentrated in mining pools, and all of these share information; inside a pool, miners share information. So that's to explain the low forking rate.
A
Ethereum made a choice early on to have a shorter block time. That's why, when you send an Ethereum transaction, you get it verified fast, in like 20 seconds, and the block size is much smaller because of that: you wouldn't be able to propagate a 1-megabyte block to the whole network globally in such a short time, so you have to decrease the block size.
A
If you divide the block size by the block time, you actually get the throughput of information, the transaction payload, that the network can achieve consensus on, but that's not very relevant in this context.
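As a quick sanity check of that definition, here is the arithmetic with the Bitcoin parameters mentioned above (1 MB blocks, 10-minute block time):

```python
# throughput = block size / block time, using Bitcoin's parameters.
block_size_bytes = 1_000_000    # 1 MB block size limit
block_time_seconds = 10 * 60    # 10-minute block time

throughput = block_size_bytes / block_time_seconds  # payload bytes per second
print(round(throughput, 1))  # -> 1666.7 (about 1.7 kB/s of transaction payload)
```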
So I'll just go on to defining a methodology for choosing these parameters. Block time is the time between two blocks, just to give the definitions. Block size limit is the maximum size of a block in bytes. For the block gas limit, I'm assuming a smart-contract, public, permissionless, decentralized proof-of-stake blockchain, and for that I use the term gas.
A
So this doesn't have to be Ethereum; it's just a network that has all the properties I just said. The block gas limit is the maximum number of operations that can be executed by the transactions in a block. You can just assume you have a single operation, say just the add operation. This is a toy model, and I'm building from simple to complex.
A
You ideally want to have a smaller block time. I think Polkadot has a six-second block time, which is one of the fastest. I'm not sure of their block size, but I think it should also be smaller, to keep in line with that. But if having smaller blocks reduces throughput, then you have a trade-off.
A
Should you have a network with faster finalization but less potential for computation per second, or the other way around? Or you may not have a trade-off at all. This is what we want to explore with this analysis. I've just defined this toy protocol's phases.
A
You have a proposer, and in each round one block gets proposed. This is very similar to our case right now, but it doesn't have to be; it's also similar to many other protocols out there that are leader-based and depend on all the other validators sending messages to vote on the block. It's also similar to, I think, Ethereum 2.0.
A
One block is proposed, and then the proposer delivers it to all other validators, and all the other validators have to vote on it; that's what I want to capture with these phases. In the execution phase, the leader executes the block himself, but he has to collect the transactions, put them in a block, then execute them, calculate the post-state hash, include that in the block along with other metadata, and then actually send the block to the others, after which we go to the propagation phase. You wait for a certain time; it's defined in the protocol how long to wait, and we expect validators to receive the block before that time. Once the propagation phase is over, we say: okay, we can now be sure that all the validators received this new block.
A
After receiving a block, a validator verifies it, so the block has to be executed a second time; in this gap between blocks, it gets executed a second time, and then they have to compare the post-state hash. You have to do many operations to verify it, and then you have to vote on the block. This voting phase includes all the time required for messages to go back and forth between the leader and the voters, everyone.
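The round structure just described can be sketched as a sum of phase durations. The per-byte costs below are made-up toy numbers, purely to show the shape of the model: execution, propagation, and re-execution scale with block size, while voting is a fixed message-round overhead.

```python
# Toy model: total block time = execution + propagation + verification + voting.
# The per-byte rates and the 1-second voting time are illustrative assumptions.
def block_time(block_size, *, exec_per_byte=1e-6, prop_per_byte=2e-6,
               voting_time=1.0):
    execution = exec_per_byte * block_size    # leader builds and executes
    propagation = prop_per_byte * block_size  # block travels to all validators
    verification = execution                  # everyone re-executes the block
    voting = voting_time                      # message rounds: size-independent
    return execution + propagation + verification + voting

print(round(block_time(1_000_000), 3))  # -> 5.0 seconds under these toy numbers
```

Note the size-independent `voting_time` term; it is what drives the trade-off discussed later, since shrinking blocks cannot shrink that overhead.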
A
So the voting phase is whatever time remains after these first three steps. This whole sequence constitutes the total block time. Let me make this faster; I don't want to take too much of your time. Once the block is proposed,
A
it is received by every other validator. I think I already went over this. What's important there is propagation time: propagation time is different from block time. Propagation time is the time required for the propagation phase, and it's smaller than the block time. Each phase has its own respective duration, and propagation time is the one corresponding to the propagation phase. It's fundamental in our analysis, because,
A
just like there are blocks that were proposed but got eliminated by the fork-choice rule, similarly, if you keep propagation time constant and increase the block size, the forking rate increases. There's obviously a relationship here. By targeting a specific forking rate, we can derive propagation time in terms of block size and vice versa. This is a method for us to choose our blockchain parameters.
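One way to make the "target a forking rate" step concrete is to borrow the standard simplified model used in Bitcoin propagation analyses (this is an assumption for illustration, not the formula from the talk): a block risks being orphaned while it is still propagating, giving roughly forking_rate ≈ 1 − exp(−t_prop / t_block). Fixing a target forking rate then pins down the allowed propagation time.

```python
import math

def max_propagation_time(target_forking_rate, block_time):
    """Largest propagation time consistent with the target forking rate,
    under the simplified model forking_rate = 1 - exp(-t_prop / t_block)."""
    return -block_time * math.log(1.0 - target_forking_rate)

# With a 1% forking-rate target and a Bitcoin-like 600 s block time:
t_prop = max_propagation_time(0.01, 600.0)
print(round(t_prop, 2))  # -> 6.03 seconds allowed for propagation
```

Inverting the same formula gives the forking rate implied by a measured propagation delay, which is the "vice versa" direction mentioned above.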
A
To give more context, I'll give an example of Bitcoin's block propagation. It's not as easy as in simplified gossip protocols, where you just have epidemic-like rumor spreading: each node in the network gets a message and then passes it to, say, five randomly selected peers. In blockchains it's not as simple as that, because if you send the block to someone who already has it, then it's not very efficient.
A
You want to make sure that you use that bandwidth, that limited time, for propagating the block to the validators or miners that haven't received it yet; you want to send it to everyone else. For that reason you have this two-step scheme: node A receives a block and verifies it (if it's not a valid block it's discarded), and if it's valid, node A sends an inventory message saying "I have this block." This message is very small.
A
It's much smaller than the block: the block is one megabyte, and the inventory is a few kilobytes, so it's sent much faster. In general, only the miners who don't have this block send a getdata message to node A, after which node A starts the transmission to the ones that requested it. In our case, too, we have a similar block propagation, but I shouldn't say a lot about this, because I'm not the expert; I'll be cautious.
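The two-step announcement just described (Bitcoin's inv/getdata exchange) can be simulated in a few lines: the tiny inventory message goes to every peer, but the full block is only transmitted to peers that reply because they don't already have it.

```python
def announce_block(block_id, peers, have_block):
    """Advertise block_id to peers; return the peers the full block is sent to.

    Step 1: a tiny "inv" message says "I have this block" (sent to everyone).
    Step 2: only peers missing the block reply with "getdata".
    Step 3: the full block is transmitted only to those peers.
    """
    sent_to = []
    for peer in peers:
        if block_id not in have_block[peer]:
            have_block[peer].add(block_id)  # full block transmitted
            sent_to.append(peer)
    return sent_to

have_block = {"a": {"blk1"}, "b": set(), "c": {"blk1"}, "d": set()}
print(announce_block("blk1", ["a", "b", "c", "d"], have_block))  # -> ['b', 'd']
```

Peers "a" and "c" never cost a megabyte of bandwidth, which is exactly the efficiency argument made above.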
A
So the important thing is that we want to optimize this blockchain throughput, and what matters is the relationship between propagation time and block size. Like I said before, we are keeping the forking rate constant. Say you target a 1% forking rate: one in a hundred blocks will be orphaned, statistically, because they haven't been received by others in time.
A
If the relationship is linear, you have a statement like: if you increase the block size two times, you need two times more time to propagate it. It could be something else, like quadratic: if it's quadratic, when you increase the block size two times, you need four times more time to propagate it. The thing is, we haven't done the analysis yet. It depends on how many validators you have and how globally decentralized your network is. You have to use data to get the distribution of propagation delay in your network; otherwise it's not possible.
A
You can't use simplified models for that; you have to use real-world data. But before you've got this data, you can still reason with a simplified version, like we simplified the other parts, so you can reason in terms of throughput. If we later learn propagation time to be linear in terms of block size, we indeed have a trade-off here: the horizontal axis is block size and the vertical is throughput, and the more block size you have, the more throughput you have, so you want the blocks bigger.
A
If you want efficiency in terms of computation per second, you would ideally want to have bigger blocks, and the reason for this is the voting phase: the time required for voting is constant, since the voting-phase messages don't depend on the block size. For that reason you have this trade-off. I will give you the link to the write-up, which goes deep; here I'm just presenting the results.
A
If I went too deep into it, it would defeat the purpose. I'm just saying: if the relationship is linear, you have a trade-off. You can't have blocks too big, because you want a fast time to finalization, so you have to decide on a sweet spot. But what if it's not linear? What if it's super-linear, say quadratic? Then you don't have a trade-off; you have an actual optimum.
A
Then you have all the ranges: maybe it's logarithmic; we don't know the complexity yet. We will see: once we launch a network, we will have more information. So we will use testnet data and global long-running tests to obtain a distribution of our block propagation. We will also do trial and error; we have this orchestration of machines, so we can just do it ourselves.
B
A word about this, too, in terms of how Highway is configurable, and this is some of the stuff that our SRE team is going to do. For those of you that haven't worked in, or don't know about, how things work in large-scale software-as-a-service systems: if you work in a big company like Amazon, or at Google, or at Salesforce, there's a lot of software out there that is now delivered exclusively through the browser, via the cloud.
B
These are all centrally managed, client-server-type systems. If you think about what we're building here, it's a decentralized network, true, but each individual node instance, and the software that runs on the node, has very similar properties to the centralized architecture of a software-as-a-service system. In the centralized architecture of a SaaS system you have redundancy: you have many, many web servers that do the exact same thing, you have multiple database servers, and you have database replication for redundancy.
B
You have a pool of web servers that service the requests coming in from the browser, which are basically the front interface to your application. Because these browser-based systems need to provide very high degrees of uptime, it's officially the service provider's problem if you can't get to your application, whether it's online banking or anything else delivered through the browser. They all have these things in common, and those servers need to be up, stable, and resistant to bugs.
B
They need to have redundancy, and they need to be extremely performant. The software that comes out of engineering is delivered to what we call an on-site Network Operations Center, which actually manages the hardware, or to what we call a site reliability team, referred to as SRE. Site reliability means that if I go to salesforce.com, that site is reliably up. What these teams do is take the software that comes out of engineering and get it production-ready.
B
And so on. The key is that you cannot have a bug or a problem propagate through your server pool, because if it propagates through your server pool, that's how you get an outage or network downtime. We always take it for granted that we go to Google Mail and Google Mail is always up, and the reason it's always up is that they have a team of engineers making sure of it. This is what an SRE team does.
B
Our SRE team's "site" really is your validators, so our team is going to lead the charge in educating the validators on how to tune their nodes for optimal performance, because optimal performance, to the validator, means the most money they can make. We're going to provide them both the tooling and the education so they can optimize their nodes and, by extension, optimize the network for the best performance. That's what this document is all about.
B
That's what this approach is all about, and we have had great success so far in testnet. We have endpoints and alerts that enable us to monitor the network, so we know when a validating node is experiencing problems: they're failing to download blocks, their last finalized block is out of sync, etc. It's some of the same stuff you see when you go to ethstats.net, and we're learning from all those things. So, let's talk a little bit about the projects building on us.
B
They basically support, using on-chain contracts on Hyperledger, the creation, maintenance, and management of patent information: who owns the patent, where the patent originated from. They will be partnering with CasperLabs to create an on-chain custody solution for patent data, and we signed the MOU last week. We're very, very excited about this, because this is a perfect use case for blockchain, where you have something that needs to be immutable: chain of custody. Chain of custody has a lot of applications.
B
This is the first application of an on-chain custody solution, and it's going to be focused on patents, so we're very excited about it. We're going to be starting this work very soon. This week we'll get the initial set of requirements from IPwe, and we're going to be collaborating on the architecture of the final system.
B
We're not using the production public blockchain yet; we'll be prototyping against testnet, with an eye to being ready to launch when mainnet goes live. So, very exciting news. We have other projects in the hopper which I can't share right now, because they're not signed and we're under NDA, but keep your ears peeled for more exciting updates on folks implementing our technology.
B
The community wants to know what we're going to do to increase community adoption. The things I'm going to be working on: how to get developers involved. We're going to be doing a hackathon with CryptoChicks, hopefully in June. We're going to be giving opportunities for people to share video content and our workshop content, so we're going to be building out a lot of content that folks can then push out to get the word out about the project. So look for updates in our Telegram and our Discord channels.