From YouTube: Hierarchical Consensus for Content Routing - @adlrocha - Content Routing 1: Performance
Description
Hierarchical Consensus for Content Routing - presented by @adlrocha at IPFS þing 2022 - Content Routing 1: Performance
A
Yeah, my name is Alfonso de la Rocha and I work at ConsensusLab. This has nothing to do with content routing directly, but I hope to make a case for why you may need this system. I think we can all agree that consensus is a bottleneck. Blockchains are one of the key substrates for web3, and if we look at Ethereum, at Bitcoin, at all of these systems out there, consensus ends up being the bottleneck, and it is a bottleneck in two layers.
A
First in the ordering of transactions, because you end up having to sequence them, and then in the execution of those transactions. If we want to target the requirements for web3 that have been presented, we really need to do something better: we need a way of scaling this consensus layer, and hierarchical consensus helps mainly with this. We are targeting internet-scale performance for these use cases, so we should be able to support on the order of giga- to tera-transactions per second.
A
We want fast local finality. In many cases we don't need to run global consensus for every single transaction, so with hierarchical consensus the idea is that we let applications grow the consensus layer as they need and adapt it to how they need it. There is always a trade-off between scalability and security, and instead of coming up with the one best consensus algorithm that fits every single use case, the idea is to let users come up with the trade-off that best fits their application.
A
Then we still need secure global finality, but for that we have proof of work and other consensus algorithms that don't need to be as fast, as long as they give us these security guarantees. This also gives us a way of tolerating partitions: you'll see that, because of how we grow the architecture, you can be disconnected from part of the network and still be able to operate locally, which is nice. And we get a way of horizontally scaling.
A
If we have a network, we are able to create new subnetworks to horizontally scale, the same way that we load-balance web applications these days. That is the idea behind hierarchical consensus. The main thing to take away, the main motivation for this, is that there is no one-size-fits-all consensus.
A
If we go to the high-performance consensus protocols, the BFT-based ones, the problem is that performance is limited by the specification of the leader, the one proposing blocks; there are a lot of bottlenecks. And then we have the other way around: if we go to proof of work it's super secure, but you have other problems, like throughput.
A
In the end, hierarchical consensus assumes a rootnet. This rootnet is the main network from which we are going to horizontally scale. We can think of it as the Filecoin mainnet, for instance; that's the one we're initially targeting, since we already have a reference implementation for it. But it could be any other network: you could use any other consensus and build a network from scratch, because you need consensus somewhere, and you want the ability to horizontally scale from it.
A
We have this firewall requirement; there's a full paper that describes it, but in the end what we are saying is that, since we don't know what is happening inside a subnet, we have some checks to limit the impact that an attack in a subnet can have on the parent, and that impact is limited to the amount of native tokens involved. For all interactions with hierarchical consensus you use the token that you have in the rootnet.
A
You can of course spawn smart contracts, you know, ERC20s and your own tokens, but in the end the one that hierarchical consensus understands is the rootnet token. What the firewall requirement enforces is that if there is an attack in a subnet, the impact on the upper layers of the hierarchy will be at most the circulating supply of the child subnet.
B
A
That is a great question. The way in which we interact with other networks is through cross-net transactions, which means that every transaction has to go through all of the consensus engines along the way: you have to follow the tree until you reach the destination you are targeting. So if we look at this structure and I want to send some tokens from here to here, I don't send them directly, right.
A
Subnet validators are required to be full nodes of their parent, because they need to listen to events from two actors that I will talk about in a moment. Also, subnets are periodically committing checkpoints to their parent with a proof of their state, and that's why we have the firewall requirement: whenever someone sends a transaction here that is not consistent with the checkpoint, or that is double spending.
A
That's why it's bounded by the circulating supply: the parent doesn't have access to the state of the subnet, but it knows how much was injected into it and how much was sent somewhere else. So if you try to release from the subnet more than what was injected, an alarm goes off and the parent will cut the transaction.
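As an illustration, here is a minimal sketch in Go of the bookkeeping just described (the names are hypothetical, not the reference implementation's): the parent only tracks a child's circulating supply and rejects any release that exceeds it.

```go
package hcsketch

import "errors"

// childSubnet is the parent-side view of a child: the parent cannot see the
// child's state, only how much native token has been injected into it so far.
type childSubnet struct {
	circulatingSupply uint64 // tokens injected minus tokens already released
}

var errFirewall = errors.New("release exceeds the subnet's circulating supply")

// fund records a top-down injection of rootnet tokens into the child.
func (c *childSubnet) fund(amount uint64) {
	c.circulatingSupply += amount
}

// release checks a bottom-up withdrawal: an attack inside the subnet can
// drain at most what was injected into it, anything beyond that is cut.
func (c *childSubnet) release(amount uint64) error {
	if amount > c.circulatingSupply {
		return errFirewall
	}
	c.circulatingSupply -= amount
	return nil
}
```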
A
Yeah, so you pay the price of running consensus in every network that you traverse. That's why I say it's up to you how you build the hierarchy: if you see that you're going to make a lot of transactions with some network, you'd better be a child of the same parent instead of sitting in an independent branch. As you go wider, you pay less for cross-net transactions, but you require more checkpointing and so on.
A
So, I don't like that name anymore, they're called like that in Solana, but even if you have interactions from here to here, we are thinking about how to open trusted channels, kind of like bridges, but without having to implement a bridge from scratch to propagate messages. The reason why we follow the hierarchy is that it is the tree of trust between the networks.
A
Right, we go up to the common parent and then down, because we both trust the common parent. But we are trying to figure out how to do these bridges so that, for certain applications where we may not care about having that level of trust, we can send cross-net messages with the same semantics without having to go up and down. Does this answer your question? Right, okay, cool.
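To make the routing concrete, here is a small Go sketch of how a cross-net message travels through the hierarchy, assuming path-like subnet IDs such as "/root/t01/t011" (the IDs are illustrative; the real routing lives in the coordinator actor): up from the source to the closest common ancestor, then down to the destination.

```go
package main

import (
	"fmt"
	"strings"
)

// Route lists the subnets a cross-net message traverses: it walks up from the
// source subnet to the closest common ancestor, then down to the destination.
func Route(src, dst string) []string {
	up := strings.Split(strings.Trim(src, "/"), "/")
	down := strings.Split(strings.Trim(dst, "/"), "/")

	// Length of the common prefix, i.e. the closest common ancestor.
	common := 0
	for common < len(up) && common < len(down) && up[common] == down[common] {
		common++
	}

	var hops []string
	// Up from the source to the common ancestor (inclusive).
	for i := len(up); i >= common; i-- {
		hops = append(hops, "/"+strings.Join(up[:i], "/"))
	}
	// Down from the ancestor to the destination (ancestor already included).
	for i := common + 1; i <= len(down); i++ {
		hops = append(hops, "/"+strings.Join(down[:i], "/"))
	}
	return hops
}

func main() {
	// A message from /root/t01/t011 to /root/t02 crosses every consensus
	// engine on this path, which is why the shape of the tree matters.
	fmt.Println(Route("/root/t01/t011", "/root/t02"))
}
```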
And this is implemented under the hood with two actors.
A
We assume that every subnet ships with a built-in actor, a system actor, that we call the subnet coordinator actor, and this is the actor that implements all of the logic of hierarchical consensus. It's the gateway to the rest of the system. I'm going to talk a bit about collateral: in order to join a subnet you put up some collateral, because then misbehaviors can be reported and you can be slashed, and the subnet coordinator actor is the gateway that holds it.
A
It keeps this collateral, it enforces the firewall requirement, it propagates the cross-net messages; it handles all of the low-level details of HC. And then we have the subnet actor, which is a user-defined actor: we provide a reference implementation, but anyone is able to deploy their own for their subnet, with the consensus algorithm that they want.
A
We'll see that we have checkpoints, and for the signing of these checkpoints they could use threshold signatures, validation by a supermajority, or something like that. This is chosen in the subnet actor. Whenever we want to deploy a subnet, the first thing we do is deploy the subnet actor that defines these policies and the consensus algorithm.
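A rough sketch in Go of how the responsibilities split between the two actors, using hypothetical interfaces (the method names and types are illustrative, not the actual reference implementation's API): the coordinator actor is fixed and system-owned, while the subnet actor is user-deployed and encodes per-subnet policy.

```go
package hcsketch

// Placeholder types for illustration only.
type (
	SubnetID    string
	Address     string
	TokenAmount uint64
	Checkpoint  struct{ Epoch int64 }
	CrossMsg    struct{ From, To SubnetID }
)

// SubnetCoordinatorActor sketches the built-in gateway actor: it holds
// collateral, enforces the firewall requirement and routes cross-net messages.
type SubnetCoordinatorActor interface {
	Register(child SubnetID, collateral TokenAmount) error // spawn a child subnet
	Fund(child SubnetID, amount TokenAmount) error          // top-down token injection
	Release(amount TokenAmount) error                       // bottom-up withdrawal
	CommitChildCheckpoint(cp Checkpoint) error              // where the firewall check happens
	PropagateCrossMsg(msg CrossMsg) error
}

// SubnetActor sketches the user-defined policy actor deployed for each subnet.
type SubnetActor interface {
	Join(v Address, collateral TokenAmount) error     // stake collateral to validate
	Leave(v Address) error                            // leave and recover collateral
	SubmitCheckpoint(cp Checkpoint, sig []byte) error // e.g. threshold sig or >2/3 votes
	IsActive() bool                                   // is enough collateral staked?
}
```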
A
That will be used, and we spawn a completely new state, a whole new stack, for the subnet: its own mempool, its own consensus algorithm, its own state and so on. But we keep all of the semantics, and we share the transport layer; the broadcast layer is a gossipsub topic.
A
So we can reuse the connections that we already have with other peers, and from there on we can start mining our own cheap blockchain that we deployed from the rootnet. In the same way, if we want another subnet, we deploy the actors defining the policy and we create this new network; we can recursively create subnets and in this way grow the hierarchy. And if we want to start interacting with other subnets, we have the concept of collateral that can be slashed.
A
If there's a misbehavior here, we have the firewall requirement, but also users can report misbehavior to the parent so that the collateral is slashed. There's a collateral threshold, and the subnet actor will always check that you're an active subnet, meaning that you have enough collateral; otherwise it won't let you propagate cross-net messages or anchor your security to the parent.
A
That is, if the collateral goes below the threshold. This is an additional level of guarantee, because we cannot assume security inside subnets. So now we have a subnet and we know how to deploy it. Then there are two main things here: the checkpointing.
A
This is the protocol that we use to anchor our security: to propagate proofs of our state to our parents, so that they can be used by users to report misbehaviors, build fraud proofs and so on. In checkpoints we also propagate cross-net messages, what we call bottom-up messages, which I will talk about in a minute. This is how a checkpoint looks: in the end we are propagating a proof of the state and also information about my children.
A
I propagate information about the cross-net messages that have to go somewhere else, and we also aggregate checkpoints from the children so that they can be anchored implicitly in the top layers of the hierarchy. I like to think of this checkpointing protocol as the clock of the system: it is the way in which we sync the clocks of the different subnets. Let me give you a high-level overview of how the protocol works.
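Putting the description above together, a checkpoint could be sketched roughly like this in Go (field names are illustrative, not the exact reference-implementation types):

```go
package hcsketch

// Placeholder types for illustration.
type (
	SubnetID     string
	Cid          string
	ChildCheck   struct{ Source SubnetID; Check Cid }
	CrossMsgMeta struct{ From, To SubnetID; MsgsCid Cid }
)

// Checkpoint sketches what a subnet periodically commits to its parent.
type Checkpoint struct {
	Subnet     SubnetID       // subnet this checkpoint belongs to
	Epoch      int64          // checkpointing epoch (e.g. 200, 300, ...)
	PrevCheck  Cid            // link to the previous checkpoint, chaining the snapshots
	StateProof []byte         // proof of the subnet state at Epoch
	Children   []ChildCheck   // aggregated checkpoints from child subnets
	CrossMsgs  []CrossMsgMeta // bottom-up cross-net message batches leaving the subnet
	Signature  []byte         // per subnet policy: >2/3 of validators, threshold sig, ...
}
```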
A
This is the checkpointing period for epoch 200: we start putting in all of the CIDs for the checkpoints that we have, the new cross-net messages that arrive, and so on. When epoch 200 arrives, we close the checkpointing period for epoch 200, we open the one for epoch 300, and in parallel we run the signing protocol. Again, this is up to the subnet; in the reference implementation
A
we require more than two-thirds of the validators to sign the checkpoint for it to be propagated to the parent, but it's up to the subnet to choose its checkpointing scheme. They could use threshold signatures, as we used to anchor to Bitcoin. This signing is handled by the subnet actor, because it's the one that checks it. And in parallel, while we were doing the signing, the checkpointing period for the next epoch opens.
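A tiny Go illustration of these windows, assuming a hypothetical fixed checkpoint period of 100 epochs: while the checkpoint closing one window is being signed and propagated, the next window is already accumulating cross-net messages.

```go
package hcsketch

// checkpointWindow returns the window an epoch falls into: everything that
// arrives at epochs in [start, end) goes into the checkpoint committed at
// end. For example, with period 100, epoch 250 falls into [200, 300), so it
// ends up in the epoch-300 checkpoint while the epoch-200 checkpoint is
// being signed in parallel.
func checkpointWindow(epoch, period int64) (start, end int64) {
	start = (epoch / period) * period
	return start, start + period
}
```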
A
We start gathering all of the cross-net messages, and when the window finishes we propagate the checkpoint up so that it can reach other points in the hierarchy. As for the way in which we propagate cross-net messages: we have, let's say, two and a half different kinds of messages in the hierarchy.
A
The first one is top-down messages, and they are straightforward, because children necessarily sync with their parents; as we are listening to events, whenever a new cross-net message comes in, validators can propose it in the consensus engine of the child. Those are straightforward. Then we have bottom-up: parents do not sync with the state of their subnets, which means that the only way we have to propagate information from the children to the upper layers is through checkpoints, and that is how we propagate these bottom-up messages.
A
And then we have something that I'm not going to go through in depth. Up to now we're just sending messages, but what happens if we need to perform an execution involving state that is stored in different subnets? For this we orchestrate something like an atomic swap, which we call cross-subnet atomic execution, in which we lock the state. I'll go quickly here, because I don't think this one is that interesting for content routing.
A
But the idea is that you lock your state in the subnets and we rely on our common parent to settle the execution. We perform the off-chain execution with the locked state of all of the subnets and commit the result to the common parent that we both trust, and then, if both results match, the common parent will propagate cross-net messages unlocking the state and committing the result into each subnet.
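A very rough Go sketch of the common parent's role in this flow (all names are hypothetical): each party runs the same off-chain execution over the locked inputs, submits its output, and the parent only emits the unlock-and-commit messages if the outputs match.

```go
package hcsketch

import "bytes"

// Illustrative types only; names are hypothetical, not the protocol's.
type (
	SubnetID string
	CrossMsg struct {
		To     SubnetID
		Method string // e.g. "UnlockAndCommit" or "Abort"
		Output []byte
	}
)

// resolveAtomicExec sketches the decision the common parent makes: if the
// outputs committed by the two subnets match, it emits cross-net messages
// that unlock the locked state and commit the agreed output in each subnet;
// otherwise it tells both subnets to abort and revert the locked state.
func resolveAtomicExec(a, b SubnetID, outA, outB []byte) []CrossMsg {
	if bytes.Equal(outA, outB) {
		return []CrossMsg{
			{To: a, Method: "UnlockAndCommit", Output: outA},
			{To: b, Method: "UnlockAndCommit", Output: outB},
		}
	}
	return []CrossMsg{{To: a, Method: "Abort"}, {To: b, Method: "Abort"}}
}
```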
A
If someone finds this interesting for some use case, we can discuss it further. All right, and yeah, there's a lot of related work from which we got inspiration. There's sharding, but we depart from sharding in that we don't explicitly partition the state: our users are the ones who choose how to partition it. There are payment channels, and we could do payment channels on top of this architecture. There are rollups, or layer twos.
A
We can think of this as a mix between sidechains and rollups, because in the end this checkpointing is similar to rollups, but we don't bundle every single transaction: we anchor proofs of the state, and we only bundle the transactions that need to be propagated somewhere else. And there's a paper about proof-of-stake sidechains from which we got a lot of inspiration for the firewall requirement,
A
if you want to read more about the reasoning behind it. And there's another one, a really good one, about atomic swaps over payment channels that we used for the atomic execution protocol. So all of this is backed; of course, maybe we have some blind spots, but it builds on research that has already been done, it's nothing completely new. And yeah, there's a lot of future work. I haven't gone in depth into the protocol.
A
There's a spec if you want to read it, but there are things that we need to fix, like data availability, and we're working on a cryptoeconomic model to see how the gas model behaves once we have all of these subnets working. How do we pay for checkpointing? Should we have a base fee? Basically there are a lot of subtleties, because we also want these subnets to be able to choose their own gas model.
A
And now let's go to the interesting part. I don't know if there are questions about HC; if not, let's move to the use cases: when and how HC can be used in the scope of IPFS and Filecoin. These are some questions that I recommend you ask yourself to see if it makes sense. First, do we need to agree on and share some information in a verifiable way? Maybe it's a good idea to use HC in that case.
A
Do you need faster finality and higher throughput than the base network? I'm saying Filecoin here because it's the first network that we're targeting, but this applies to any other network, because you saw that this is just a bunch of actors that read events, plus some cryptography to sign things, so we could potentially apply it and deploy it anywhere. And this one is really interesting: something that is implicit is what we get if we go to this HC architecture.
A
With this we can tolerate certain partitions, because if we have a bunch of users here and we are partitioned from the root, for instance, we can still operate. We won't be able to propagate cross-net messages, but we will still be able to operate among ourselves. So it's really interesting if you have a network that cannot be continuously connected to some other network that is part of the hierarchy.
A
It doesn't matter, because we tolerate partitions of one of these subnets: what we do is just batch the checkpoints, and whenever we recover the connection we can send the batched checkpoints. So if this is also interesting for your use case, let's chat. Also, and this may be the case for the indexing or it could help content routing: do you need to cheaply spawn a full-fledged blockchain with fast consensus, without having to worry about the blockchain itself?
A
In our reference implementation we're including a set of consensus implementations, and one of them is a high-throughput BFT protocol, which means that if you need to run consensus between different parties and want to run a full-fledged, or even ephemeral, blockchain with this high-throughput consensus, you are able to do so, and with all of the recursive properties that we just saw. So you could run a blockchain that allows you to create subnets, so you could grow your own.
A
Your blockchain could grow with the demand that you have in the system, and you would be able to introduce any incentive system. So ask yourself these questions, and if any of these feels like something that you need, let's chat. And here, as we're in the content routing track, I shared a few ideas where I think it could make sense to use HC. For indexers, for instance: right now I think you still have a centralized indexer.
A
So if we wanted decentralized indexers here, we could... I was chatting with Masih, and instead of sharing the advertisements through gossipsub, where if someone joins from scratch there's no way to verify that what they get is valid, you could have this cheap consensus running a high-throughput BFT as the broadcast layer, so that if someone joins, they sync the blockchain from scratch and have everything without having to trust what some indexer gave them.
A
They could get information about the latest advertisement CID for each service provider, so you could have a mapping there for the latest CID, and all of the history that you can sync from scratch. I mean, this is an example off the top of my head, but it could help with that, and even with an additional feature: if you want to organize indexers into areas, you could start with, say, thirty indexers and have this one blockchain.
A
But then you could partition the state by creating new areas and organizing those indexers, while still having the ability to propagate up the records that they want. So that's one idea of how hierarchical consensus and an existing content routing system could work together.
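A minimal Go sketch of the on-chain mapping mentioned above (all names hypothetical): the indexer subnet's state keeps, per provider, the CID of its latest advertisement, so a node joining from scratch can sync the chain, read this map, and ingest each advertisement chain from its head without trusting any particular indexer.

```go
package hcsketch

// AdvertisementIndex is a hypothetical piece of subnet state mapping each
// provider to the head of its advertisement chain.
type AdvertisementIndex struct {
	latest map[string]string // provider ID -> CID of the latest advertisement
}

func NewAdvertisementIndex() *AdvertisementIndex {
	return &AdvertisementIndex{latest: make(map[string]string)}
}

// Publish would be triggered by a provider's on-chain message announcing a
// new advertisement head.
func (ix *AdvertisementIndex) Publish(provider, adCid string) {
	ix.latest[provider] = adCid
}

// Head returns the latest advertisement CID for a provider, which a freshly
// synced node can use as the entry point for ingestion.
func (ix *AdvertisementIndex) Head(provider string) (string, bool) {
	cid, ok := ix.latest[provider]
	return cid, ok
}
```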
Then I was reading about the Filecoin Saturn and retrieval markets endeavor, and here, okay, indexing is clear, but then, I mean, it is also a cheap way of doing more.
A
We have a seamless interaction with Filecoin at this point, so it would be a cheap way of having new networks that handle all of these payment channels, reputation systems and crypto-economics for retrieval provider nodes. So you could have actors... once we have a VM, which we're close to: we have FVM integration, but just with M1, and we are waiting for M2. But the idea is that you could run any actor.
A
You could have the subnet actor and any other actor, and the same way that we will have actors interacting with the storage sector bookkeeping, you could have these subnets that are still able to interact with that bookkeeping but keep their payment channels and keep their incentives, even a reputation system without an actual cryptocurrency, inside a subnet. So I don't know if there's anyone from Filecoin Saturn here; I was hoping to catch up with them, but yeah.
A
This is another thing we think about for this L1/L2 cache design: you have the Filecoin mainnet and you could create the L1 and L2 caches as subnets, so that they can organize themselves and it's easier to propagate information between them with consensus and some verifiability.
A
And finally, I don't know if the Bacalhau people are here, but they had this problem, or at least the last time I chatted with them they had this plan, for the scheduling of jobs.
A
It's kind of the same problem: you need to propagate information in a verifiable way, and you want to reward, to have an incentive system. So we could add a subnet for the decentralized scheduling, and we could even create a new scheduler in each subnet, so that we could have different zones and propagate jobs between zones. So that's kind of where we are: right now we have an MVP, we are moving towards production step by step, but we would love to have some use cases to validate all of this.
A
I can even do a demo; there's code already that you can use. It still uses the legacy VM from Filecoin, but it will use the FVM, which means it will ship with Wasm and you will be able to do some fancy stuff there. So yeah, if something comes up, feel free to ping me. There's general information here, and there's a repo with code that we can try.
A
I can also do a demo now if there's time, I don't know if there's time. There's a paper and a draft spec that we would love to get feedback on, so if you see something that is not correct or that you want to improve, we are always open to contributions. I haven't prepared a demo, but yeah, let's try, hopefully it won't fail. So what I'm going to do is... okay, this is probably it, yeah.
A
So I have this... I mean, I'm going to cheat a bit: I have a script that I use to make things fast. Basically, what I'm doing right now is spawning a new subnet that has all the Filecoin semantics and so on, with what we call delegated consensus. This is a debugging consensus where we choose one miner and that miner is always the one validating blocks. We could use proof of work or any other, but let's use this one because it's faster for the demo. So we start.
A
Here we are starting a new rootnet, which is the first network, with everything that you love and hate about Filecoin: messages, events, it comes with a VM and so on, and it has all of the hierarchical consensus actors. Okay, sorry. So you see that we are mining; we have this delegated consensus, and anything that you do with Filecoin is also available here.
A
So we can get actors with state get-actor.
A
I wanted to show you the SCA. Yeah, so you see that there's a new system actor, this t064, and this is the new built-in actor that handles all of the hierarchical consensus. So now we have this rootnet and we're going to create a new subnet. Imagine that I want to, I don't know, have a subnet for an indexing area.
A
What we are doing is creating this subnet actor, where we define all of the policies. Actually, with the FVM integration this is even cooler, because you can point to your own implementation of the subnet actor and just deploy it with all of your policies. This is the reference implementation with the legacy VM, but the idea is that you would point to the implementation of your subnet actor, and in the end you just have to implement an interface.
A
So right now we have a collateral threshold of one FIL, just for testing purposes, but the idea is that you have to put up enough so that you can start mining. Also, I use proof of work here because I want to start mining right away with one node, but you could, for instance, be using a BFT protocol.
A
You could even say, hey, I don't want mining to start in a new subnet until at least three miners, sorry, four, not three, because of how BFT works, have put in enough collateral. All of these policies can be defined in the subnet actor. But that's why I'm using proof of work, so that we can start mining with just one node.
A
So if we do list subnets here, you'll see that now we have a new subnet that is active, that has a stake of two FIL but has no circulating supply, because the collateral is frozen: it can be slashed, but it cannot be used for payments in the subnet. And then we start mining in the subnet.
A
So here we are mining in the subnet, which means that we should see new blocks here. If I stop mining in the rootnet... and you see that we are mining two networks in parallel. So here there's a subnet with independent state that is mining with proof of work, which means that now we're in a good place and we have our new network for our indexers.
A
We could also deploy a new subnet for, I don't know, let's say right now we want a new indexer zone, zone 2, and instead of this consensus we want to use Mir, I think.
A
See, Mir is a BFT protocol, so it doesn't allow me to just launch it without the minimum number of validators, but here I can set the minimum number of validators to two. We deployed a new subnet actor and we're going to start mining in a new subnet with completely new state. So right now we have two independent networks, and I could send messages to both of them, or I could send cross-net messages.
A
You will shortly see it in the subnet. You see, here we have this convenient CLI where we define which subnet we want to interact with; by default it's the root network, and you see that in the rootnet I have around 200 FIL. If I go to the subnet, I only have those 10 that I just sent with a cross-net message, because there's no minting in subnets, right.
A
That's why we want to play with the gas model and let anyone come up with their own reward model for validators. Also, if we look at the list of subnets, you see that now the circulating supply has been updated, and this is what is used to enforce the firewall requirement: the parent is always checking that there's no double spend in cross-net transactions. And finally, if we list the checkpoints, maybe this will work, you see that we are periodically sending checkpoints up here.
A
You see that there are no cross-net messages right now. What I can do, actually let's do that, is send a bottom-up transaction, and you'll see that here in the cross-messages field it will appear as true, because right now we're just committing the proof of the state to the parent.
A
The wrong key, okay. You see that the cross-net message should be propagated in the next checkpoint, and in the next checkpoint we should see the cross-messages flag change to true, saying, hey, this checkpoint has some cross-net messages; it will go up and it will be executed. And yeah, with this, and we could also leave the subnet, the whole life cycle is implemented, and the idea is that this will allow you to have cheap and even ephemeral subnets.
A
So when I say leave the subnet, if there are not enough validators left, you unregister from the hierarchy, and in that way we grow and shrink the hierarchy. And I don't know, I really think that this could be useful for certain content routing scenarios, but let's discuss it. Any questions? I think... yeah. Can you speak to...
A
So the thing is that if we store the index there, I don't know to what extent it would grow.
C
A
A tree inside the state tree of the network, and then you just define adding a record as submitting a transaction, yeah. So I mean, I don't know how efficient that would be in terms of storage size; that's my main concern, with billions of records. That's why we started by storing just the hash, because of the numbers involved.
A
That's why I was thinking of only the CID of the advertisement. But if we managed it, yeah, we could have it in the state tree; we should do some numbers. What I was thinking is that, instead of posting the full advertisements there, you would have only the CID of the advertisement and then you ingest it by following that chain. But if you could have every single message that you want, or every single multihash, in the state, you wouldn't even need to ingest.
A
Oh okay, I see, so you mean... I thought that... oh, that's okay. So it's per area: you would only keep the state for that area. Then that would work, I think, because the indexers wouldn't each be indexing billions of records; it would be sharded. Yeah.
C
A
The checkpoints? Right now, I mean, it depends on what you use as the checkpoint proof. The checkpoints will stay the same size, because in the end it's just a number of cross-net messages and a small proof. Right now the small proof is super nice, because it's just the subnet state at the epoch of that checkpoint, so you can have small snapshots that you can link to each other.
A
If you have access to the state, that is. But you could use anything: we are discussing with the folks from CryptoNetLab whether we can have some kind of SNARK or something attached, so that you implicitly check the whole history and, just with the proof, you're able to verify it. So the idea is that the checkpoint won't grow; what grows is the state of the subnet, held by its full nodes.
A
No, because you're not generating it over the whole state. You just prove the state change, which is way less: if a new block is just introducing ten CIDs, ten new CIDs are what you generate the proof over for that moment, not the whole state, not a proof of the whole state.
A
C
A
And we are not forcing this in the protocol; we are putting in the pipes for you to do whatever you want. For the indexing use case you may use really efficient proofs, because that's what you want, but you could have another use case that needs something else; that's why we have the subnet actor as an interface. It's up to you: we put the pipes in the protocol, and that's what we specify, but then you should be able to do whatever you need.
A
And the checkpoint, yeah, it's an arbitrary number of blobs, but the idea is that the proof shouldn't just be "put all of the state there", because then you're replicating the state in some other subnet, which is not great. That's also why, and I didn't go into the low-level details, when we propagate cross-net messages up, for instance, we are not propagating every single message, because then they would appear in the subnet state of my parent; we propagate CIDs, and then we have a content resolution protocol that picks the messages up.
A
So the subnet IDs are deterministic. We always call the rootnet root, and from there on each ID is determined by the ID of the subnet actor.
A
Actually, I went a bit fast, but when I deployed the subnet actor here it told me its ID, and I know that there's a new subnet available with that ID because it's deterministic. So if we deploy a new one from this subnet, it will be root, slash the ID of the first subnet actor, slash the ID of the new subnet actor.
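In other words, IDs compose along the path from the root. A one-line Go sketch (the separator and address format are assumptions for illustration):

```go
package hcsketch

// childSubnetID derives a child's deterministic ID from its parent's ID and
// the address of the subnet actor deployed in that parent.
func childSubnetID(parentID, subnetActorAddr string) string {
	return parentID + "/" + subnetActorAddr
}

// childSubnetID("/root", "t0102")       == "/root/t0102"
// childSubnetID("/root/t0102", "t0103") == "/root/t0102/t0103"
```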
A
Yes, this is independent, because, for instance for BFT, the way we're thinking about it, all of the information that needs to be verifiable and that is important lives in the parent. For BFT, what you check is the validator set, to see that the right signatures are there. You don't keep it in the subnet state, because that would mean that if the validators are able to modify the state, you're in trouble.
A
And you don't always need that: for proof of work we just keep it so that we can return it, so that when you leave you recover your collateral. In the reference implementation you could have a fee or a delay function; again, there are a bunch of knobs that you can configure. We keep it there for bookkeeping for proof of work, but for other consensus algorithms you need it because you need to know the validator set. Alternatively, you can have permissioning; that's another option.
A
So you could have something that gets mentioned sometimes, which is a subnet in a data center that is super fast: there's no networking latency and so on, beefy servers with a fast consensus, and then you just propagate a checkpoint out whenever the result is ready. So you could have a high-throughput network within a data center and then propagate only the information that is useful to the rest of the network.