Description
Content Routing - Scalability Constraints - presented by @jbenet at IPFS þing 2022 - Content Routing 1: Performance - https://2022.ipfs-thing.io
So I want to talk about constraints in general and strategies in general, from many different fields or industries, and then talk about some very concrete strategies: some points in the design space, not meant to be exhaustive at all, but just to seed a few ideas that look very different, so that everybody working on content routing can see different handles on the problem.
So, let's start at the beginning. At the end of the day, there are some very difficult constraints that we're dealing with, and everything else comes out of that. The first one is causality: we can't violate causality, at least we haven't figured out how. That means we're bounded by the speed of light in sending messages, and that's about the hardest constraint. Almost everything we do has to deal with this particular constraint.
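To make that constraint concrete, here is a rough back-of-the-envelope sketch of the round-trip-time floors the speed of light imposes at a few distance scales (the distances and the fiber factor are illustrative assumptions, not figures from the talk):

```python
# Rough lower bounds on round-trip time imposed by the speed of light.
# Figures are illustrative: real links are slower (routing detours,
# queuing), so actual RTTs are usually well above these floors.

C_VACUUM_KM_S = 299_792                 # speed of light in vacuum, km/s
C_FIBER_KM_S = C_VACUUM_KM_S * 2 / 3    # light in fiber travels at ~2/3 c

def min_rtt_ms(distance_km: float, speed_km_s: float = C_FIBER_KM_S) -> float:
    """Round-trip-time floor in milliseconds for one request/response."""
    return 2 * distance_km / speed_km_s * 1000

# Intra-data-center (~1 km) vs. cross-continent (~4,000 km) vs. antipodal (~20,000 km):
for label, km in [("intra-DC", 1), ("coast-to-coast", 4_000), ("antipodal", 20_000)]:
    print(f"{label:>16}: >= {min_rtt_ms(km):.3f} ms")
```

No amount of better hardware moves these floors, which is why the later strategies in this talk (regions, prefetching, replication) all work around distance rather than through it.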
The other constraints that physics gives us, like information density limits and computational speed limits, we're so far away from those physical limits that they aren't problems. The other physics-oriented constraints are more around manufacturing: we're starting to get into the boundary where transistors are getting so small that it's now difficult to make them smaller, so we're starting to parallelize more. That means computers are not getting faster by doing more single sequential operations per second; they're getting faster by replicating.
But so far, no problem: Moore's law continues fairly unabated, and the other acceleration laws are still so far away from fundamental limits that we're good there. Again, the main one, the problematic one, is the speed of light. So in general, that's good news. We can rely on hardware continuing to get better and better, and so the routing problems today look very different from the routing problems 10 years ago, which look very different from 10 years before that, and so on.
A lot of networking theory, the networking systems theory that you read in textbooks and so on, still has an understanding of reality based on what the hardware looked like in the early-to-late 90s. So you end up with all kinds of statements like: no, you can't build routers that hold the full routing table. And you ask why, and the answer is that the routers of the time couldn't handle the full routing tables.
It turns out that today you can totally handle full routing tables in hardware, because hardware has gotten that much better. We're talking about hardware getting better by many orders of magnitude in the time span in which we just connected everyone in the world, and humanity is not growing by many orders of magnitude every few years. So hardware has gotten dramatically better, the accelerating-return laws continue, and there's this bandwidth-separating-from-storage phenomenon,
where storage density, the cost of storage per unit area or volume, is decreasing so fast relative to bandwidth that you get a split between the two: our drives are getting denser and denser, and the bandwidth between computers is not improving nearly as fast. Which means that moving information around, especially larger and larger photos and videos, gets harder and harder all the time.
This is why content routing, why content addressing, makes a lot of sense. It comes from a physics-oriented view that, as the hardware gets better, moving lots of information from one location to another is going to keep getting harder, and at the same time it's going to get a lot cheaper to have all of the content with you.
These things are about terabytes now, so you can have a terabyte in here, and how long until you have a petabyte in here? Not that long. Maybe a little longer, but you know, 10 terabytes is a lot, 100 terabytes is a lot. So if you're walking around with a terabyte here and a petabyte there, you can start storing all of your personal information.
All of the data that you care about in your day-to-day files, all of Wikipedia: you can start walking around with full copies of this stuff. At that point, content routing really matters, because you don't want to address files on Wikipedia by going to the Wikipedia server somewhere else, only to realize that they're already on your computer. So content routing will become an increasingly important part of the internet. Great.
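A minimal sketch of that local-first pattern (the store and fetch interfaces here are hypothetical stand-ins, not IPFS APIs): because content is keyed by a hash of its bytes, a device can answer from its own copy before ever touching the network.

```python
import hashlib

# Toy content-addressed store: blocks are keyed by the hash of their bytes
# (a stand-in for real CIDs). Lookups check the local store first and only
# fall back to a remote fetch when the content isn't already on the device.

def content_id(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

class LocalFirstResolver:
    def __init__(self, remote_fetch):
        self.local = {}                 # cid -> bytes held on this device
        self.remote_fetch = remote_fetch
        self.remote_calls = 0

    def put_local(self, data: bytes) -> str:
        cid = content_id(data)
        self.local[cid] = data
        return cid

    def get(self, cid: str) -> bytes:
        if cid in self.local:           # e.g. your local Wikipedia copy
            return self.local[cid]
        self.remote_calls += 1          # only hit the network as a last resort
        data = self.remote_fetch(cid)
        self.local[cid] = data          # keep it; it can never go stale
        return data
```

With location addressing you would go to the server every time; here the network round-trip only happens for content you genuinely don't have.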
So, one other note on hardware. Getting a lot of hardware sounds really difficult, and it is, from the perspective of you individually setting up a lot of hardware in a lot of facilities. But when you turn it into a whole-network or economy problem, it can get much easier. Think of the Filecoin network as an example of amassing, in about two years, an enormous amount of storage capacity, and these are massive racks.
So if we're worried about hardware here, or worried about the indexes getting too big for a single machine, there's a straightforward way to handle this, which is to turn it into an open utility, an open service that's blockchain-powered, because then you can scale indexing in a straightforward way.
You can turn this into a business that operators can go and run, and this works in a pretty nice decentralized setting where you can have good protocols that define what an operator needs to do to become an indexer: the guarantees the operator has to provide, the SLAs of the service. You can encode properties like censorship resistance into the protocol, and so on, and end up with a really good setup.
So I think that, in general, the physical constraints, how the hardware is improving, and the rate of separation between bandwidth and storage are going to yield an environment where it's fairly easy to have large, dedicated racks everywhere with a bunch of the content, or the content routing information, right there, so you don't have to go to a bunch of places. The problem is: how do you find it? How does your device, your computer, know to get there?
So it has to work, but it also has to work relative to a bunch of policies and constraints set up by many different groups, and that's why you end up with tons of different protocols: the internet protocol stack is enormous, and there are tons of interfacing systems that yield this high-performance environment where you can get tons of traffic through, satisfy a bunch of policy constraints, and still have an extensible, expanding system structure. Now, most of the internet is dealing with location routing, with peer routing.
Most of the internet is about how you get IP addresses to be able to talk to each other and send packets to each other quickly, within the policy requirements. Policy requirements might mean whether or not certain groups can talk to each other, or how you deal with congestion control, how you deal with routing through certain routes as opposed to others, and how you deal with the economics of those routes, and so on.
One other useful thing to think about is that the graph of the internet exists because of the physical and hardware constraints and the regional constraints, but also because of the economic constraints. Over time, the graph itself evolves to match the applications that are driving traffic through the internet, and then, once the graph looks a certain way, applications tune for that. So over time it sort of settles.
You can shift it, it's difficult, but you can do it. Now, it's certainly a lot easier to just look at the graph and how it works today, then place specific devices and specific software in those spots and avoid having to go all over the place. In most of the peer-to-peer world and most of the blockchain world, we don't really think about the internet graph;
we think about overlay networks, where you can talk to anybody and it just sort of works. In reality, what that means is tons of packets moving around in this graph, and that's not always what you want. What you actually want is to know precisely which computer is going to talk to which computer; you want to optimize latency.
You want to find the right places to go, and so on. Cool. So, talking about platforms for a moment: there's a ton of different kinds of devices, especially once you go into IoT and manufacturing and so on, but for the most part, the applications we have to worry about in a shorter time frame look like the following.
You have these large, dedicated data center servers with super-fast interconnect between them, so you can send packets within a single data center in about a millisecond, one to four, sometimes faster, depending on the data center and whether it's tuned to allow for that; so one machine in one rack can talk to another machine in about a millisecond. Then you're dealing with things like home servers: some sets of people will have dedicated machines in their houses at the 100-terabyte-to-petabyte scale, but they're behind regular consumer ISPs, which suck; the pipe from the home to the ISP tends to be extremely limited.
Then you have laptops and desktops, which are pretty good now. Laptops move around a lot, so they have high churn; you don't really want to use them to store a lot of routing information. Desktops are more okay: the CPUs are really good, sometimes they have access to GPUs, which is great, and the drives might be decent, but it's not petabytes; it's usually single-terabyte to 10-terabyte scale. Then you have mobile: mobile is super churny, with very small devices.
Battery constraints really matter; you really don't want to use mobile at all to serve any traffic. It's mostly clients. Wearables are even worse: at that point you're in a region of the internet that's going over Bluetooth or near-field communication and other stuff that is just very, very difficult to deal with at all. However, wearables will need to be able to do content routing queries.
These things will eventually end up with real resources, and they need to be able to speak some protocol to be able to get content. Ideally, you'd go from the wearable to the phone and that's it, or from the phone to your laptop and that's it. But for the most part, right now,
most of these things are connected to the cloud; all of these things are talking to the cloud. Then you have other constraints between operating systems and browsers, where some OSes will let you register services. That means you can deploy content routing as a single service that mounts into the OS and operates there. So imagine writing applications that just talk to the content routing daemon in your OS.
This is how DNS works, in a sense. Most browsers have moved away from it, but OSes shipped with a DNS resolver, and browsers did the same thing. Nowadays you can mount extensions, you can sometimes ship things into browsers, and certainly if something becomes widely used, browsers will tend to adopt it. So you have a decent enough pathway: try something out in some applications,
show that it's working, and then eventually make it into browsers. Today we have IPFS in Brave, and that's millions of people that can just use it directly. Great.
Talking about protocols: there's a bunch of constraints coming from here, but for the most part, think of the TCP/IP stack as extremely sophisticated, decades of work making tons of different protocols, mostly for doing location addressing and location routing. There's a family of protocols around Named Data Networking and Content-Centric Networking which are entirely about content routing, except that most of that literature never quite got Merkle trees and hash linking. So most of that literature does not have a good answer for security and does not have good answers for privacy. There are a lot of really good ideas there in terms of routing protocols, but often they're either not private, or they're not able to leverage content addressing in the same way: their content addressing is different, it's not hash linking, or it's mutable. But anyway, it does mean that there's a pile of literature that we can learn a lot from.
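For readers unfamiliar with the distinction, here is a toy sketch of hash linking (SHA-256 over a hand-rolled JSON encoding; real systems use multihashes and richer codecs): because a link is the hash of the child's bytes, any fetched block can be verified against the name it was requested by, which is exactly the security property the NDN/CCN naming schemes lack.

```python
import hashlib
import json

# Minimal hash-linking sketch: objects refer to children by the hash of the
# child's serialized bytes, so a receiver can verify any block against the
# link it was fetched through, without trusting whoever served it.

def digest(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def make_node(payload: str, links: list) -> bytes:
    # Deterministic serialization so the same node always hashes the same.
    return json.dumps({"payload": payload, "links": links}, sort_keys=True).encode()

leaf = make_node("leaf data", [])
root = make_node("root", [digest(leaf)])   # the link *is* the child's hash

def verify(block: bytes, expected_link: str) -> bool:
    return digest(block) == expected_link

# A genuine block matches its link; a tampered block does not:
link = json.loads(root)["links"][0]
assert verify(leaf, link)
assert not verify(b"tampered bytes", link)
```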
A couple of quick thoughts around properties. I think content routing, in order to be successful in the long term, has to go towards the scale of something like location-oriented routing.
It has to become separate systems, separate protocols that solve the problem. So for the most part, whenever we're dealing with content routing, don't think of it as a problem if you end up creating other services or systems in order to solve it. It's okay to go in that direction, and optimization will require going in that direction; actually getting to very high-performance, large-scale systems will end up there.
And there's a bunch of things that protocols need to do to become foundational, and they tend to decouple from everything else. This is already in the hierarchical consensus slide; it was mostly oriented towards consensus, but it roughly mirrors the set of properties required for content routing. Content routing might have a higher scale of transactions per second than this, but it's roughly around the same.
This is the map from Filecoin. Very concretely for us, we want a system structure that tunes for the throughput coming from storage providers, from these on-ramps, from retrieval networks, and that factors in regions. So we're headed in a direction where a lot of our systems will end up with regional components. It's not necessarily two layers: most regional systems have the planet, then each region, and then one more layer.
This is kind of what the NDN/CCN structure looks like, and I think a lot of it is basically right. It's just that the thing in the middle, the thin waist, is really hash linking; hash linking is really the thin waist that enables all of this to work. There are a ton of protocols designed in the style of the location-oriented protocols that are very efficient, very high-performance, and so on, and many implementations exist for a lot of these systems, so I think for the most part we can leverage a lot of knowledge from them.
Do they need authentication, in terms of being able to find out who needs what and so on? For the most part today, most of the web3 stuff does not do any real reader privacy and does not do any authentication, because either it's public, or what's public and accessible is ciphertext that you then use separately, in another layer, to do authentication. So in many cases you don't have to deal with authentication in the content routing layer; you can just distribute all the ciphertexts and do authentication as a separate step.
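A sketch of that "route ciphertext publicly, authorize separately" split. The XOR keystream below is a toy stand-in for a real cipher, chosen only to keep the example dependency-free; do not use it for actual privacy. The point is the layering: the routing table only ever holds opaque bytes keyed by their hash, and decryption happens above it.

```python
import hashlib

# Toy keystream cipher (stand-in for a real AEAD scheme; illustration only).
def keystream(key: bytes, n: int) -> bytes:
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def xor_cipher(key: bytes, data: bytes) -> bytes:
    # XOR is its own inverse, so the same call encrypts and decrypts.
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

plaintext = b"private document"
ciphertext = xor_cipher(b"shared key", plaintext)

# The content routing layer stores and serves only ciphertext, by its hash;
# it never needs to know who is authorized or what the bytes mean.
routing_table = {hashlib.sha256(ciphertext).hexdigest(): ciphertext}

# Authorized readers decrypt after retrieval, in a separate layer.
fetched = next(iter(routing_table.values()))
assert xor_cipher(b"shared key", fetched) == plaintext
```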
This is the level of scale.
Now, let's look at a few strategies from different classes of systems. The cloud has settled around a kind of network makeup: data centers per region, with a ton of machines within single data centers, and they use those data centers to serve the content and the applications for a particular region of the world.
They have a set of edge points of presence in ISPs, and they follow the graph of the internet to get close to the users. Those points of presence tend to be for read-only caches, so that means CDN content and so on. Nowadays we're starting to see applications being pushed all the way to the edge; things like Cloudflare Workers are examples of shifts putting the actual application processing all the way there.
Solutions that deal with this from data center to data center work just fine edge-to-edge, so these systems can work. But again, most of the cloud is still in these big, large-scale data centers with a lot of throughput between those machines. So also think of that in terms of content routing: most humans will interface with content routing through clients, in laptops, phones, and so on. However, most programs will interface with content routing within data centers.
Most programs will operate over data in these massive-scale data centers, and so content routing also has to tune for those. These might be different protocols: you might have a protocol that deals with how browsers want to do content resolution, or protocols that deal with how you want to do data resolution within a data center. So once you're operating at data center scale,
if you ever have to leave the data center to go somewhere else, you kind of did something wrong. You want to prefetch most or all of the information you're going to need into that environment before you run some large computation job, because once you have the machine running, you don't want to end up waiting for speed-of-light distances. You're dealing in microseconds or nanoseconds at that point, so if you have to wait milliseconds, or hundreds of milliseconds, that's a lifetime in the lifetime of a program. There are other kinds of strategies the cloud has settled on, like tons of caching everywhere, where writing becomes expensive
and then you invalidate caches. But you can use hash linking to avoid that, because you get really nice automatic cache invalidation through the CID changing: if something changes, you have a different CID, and you don't have to worry about invalidating caches at all. So you avoid one of the biggest problems to worry about, though it does mean you then have to re-warm everything.
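The mechanism in miniature (SHA-256 standing in for a real CID): because a CID names immutable bytes, a CID-keyed cache entry can never go stale, so there is nothing to invalidate; an update simply produces a new CID, and the only cost is warming the cache for it.

```python
import hashlib

# Because a CID names immutable bytes, a CID-keyed cache never needs
# invalidation: an "update" produces different bytes, hence a different CID,
# and simply misses the cache. The only cost is re-warming for new CIDs.

cache = {}

def cid_of(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def publish(data: bytes) -> str:
    cid = cid_of(data)
    cache[cid] = data   # safe to cache forever; this mapping can't go stale
    return cid

v1 = publish(b"page v1")
v2 = publish(b"page v2")    # the edit is a new CID, not an overwrite

assert v1 != v2             # mutation shows up as a different name
assert cache[v1] == b"page v1"   # the old version stays valid under its CID
```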
The cloud has also settled into these buckets and collections, where people can group large amounts of content into a single related collection and then move that stuff around, or give access to it, in one place. That lets you aggregate lots of content together into one identifier that you can then provide content routing for. So think of some large data set, or a git repo, or even something massive, terabytes or petabytes in size, having one identifier that then gets routing information
just for that one identifier. Today our content routing system in IPFS doesn't allow that; it only has content routing for all the tiny little leaves, and you might want to be able to selectively choose what you want to make content-routable.
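A sketch of what that selective providing could look like (the provider names and the resolve logic are hypothetical, not how IPFS works today): advertise one routing record for the collection's root instead of one per leaf, and let clients reach leaves through the root's provider.

```python
import hashlib
import json

def digest(b: bytes) -> str:
    return hashlib.sha256(b).hexdigest()

# Build a tiny two-level collection: many leaves under one root whose bytes
# list the leaf hashes (hash linking, so the grouping itself is verifiable).
leaves = {digest(d): d for d in (b"chunk-%d" % i for i in range(1000))}
root_bytes = json.dumps(sorted(leaves)).encode()
root_cid = digest(root_bytes)

# Advertise only the root: one routing record instead of 1001.
provider_index = {root_cid: "provider-A"}

def resolve(cid: str) -> str:
    if cid in provider_index:
        return provider_index[cid]
    if cid in leaves:                   # a leaf of the collection:
        return provider_index[root_cid] # route via the collection's root
    raise KeyError(cid)

assert len(provider_index) == 1
assert resolve(next(iter(leaves))) == "provider-A"
```

In practice the client would learn that a leaf belongs to the collection by walking links down from the root, which is exactly the traversal that hash linking makes cheap and verifiable.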
I already mentioned consistency and regions. One other really big thing here is software-defined networking. Part of how the cloud got to where it is, is by making the entire environment very programmable, and today our systems are fairly difficult to build. So we could make some version of secure, decentralized software-defined networking; if we had a version of that, it would make the evolution rate of our whole stack dramatically better.
You can also look at the literature from the peer-to-peer world; there are tons of really good ideas there. Those ideas tend to be oriented towards really large networks with millions or billions of nodes, and they tend to be overlay networks that lose information about the underlying system, so they don't tune for the internet graph at all, or for the most part they don't. They tend to have lots of protocols for building structure out of unstructured environments, and they tend to be resource-sharing oriented.
This is all before economic mechanism design, and they tend to always become some distributed search or lookup problem, trying to go from O(log n) search to O(1) search. There are tons of different strategies to do that, often by pre-doing a bunch of the searches, or building indices, or making compressed indices and distributing those around, and so on.
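The contrast in miniature (a simplified Chord-style routing model, not any specific DHT implementation): structured-overlay lookups halve the remaining distance each hop, giving O(log n) hops, while a precomputed index answers in a single probe at the cost of building and distributing the index ahead of time.

```python
# Contrast of lookup strategies: a Chord-style DHT reaches a key in
# O(log n) hops, while a pre-built (distributed) index answers in O(1)
# at the cost of computing and shipping the index ahead of time.

n_bits = 16
n_nodes = 2 ** n_bits

def dht_hops(start: int, target: int) -> int:
    """Greedy binary routing on a ring: each hop jumps by the largest
    power of two not exceeding the remaining distance."""
    hops, pos = 0, start
    while pos != target:
        distance = (target - pos) % n_nodes
        pos = (pos + 2 ** (distance.bit_length() - 1)) % n_nodes
        hops += 1
    return hops

# A precomputed index trades build/distribution cost for O(1) lookups.
index = {k: f"provider-{k}" for k in range(0, n_nodes, 97)}

assert dht_hops(0, n_nodes - 1) == 16   # worst case: log2(n) hops
assert index[97 * 5] == "provider-485"  # one dictionary probe
```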
When you get into consensus, it's a totally different landscape: you have consistency over the stuff that you really care about, you get a structure for maintaining a set of computers online together, you get state machine replication of some log that you care about, and you get automatic orchestration, because to maintain consensus, all these things have to know about each other. So you already have all the bits you need for doing orchestration of these machines.
At that point you can start reasoning about creating protocols where you can verify certain operations, or define roles for certain programs in the network, and describe ways in which those will work together to achieve certain results. You can think about collateral and staking and slashing and so on, and all of that comes alive thanks to the ability to construct arbitrary mechanisms.
One other component here is that blockchains have caused a lot of specialized hardware to be built to run blockchains. By requiring certain kinds of cryptographic operations, and by creating an economic model that allows anybody to contribute hardware and get rewarded proportionately for that contribution, they've created an environment where blockchains can summon hardware to solve problems really quickly; things like Filecoin got built really fast.
You know, two to almost 20 exabytes of capacity. Bitcoin, in about 10 years, became one of the largest energy consumers on the planet, with massive amounts of hardware dedicated to Bitcoin, including many generations of specialized hardware: we're talking about ASICs, many generations of ASICs built just to do Bitcoin mining. Now we're starting to get zero-knowledge ASICs, actual chipsets designed to do cryptographic operations just to be able to use zero-knowledge proofs and so on.
So this means that if we can design a content routing system where some operation is really expensive, but it's really valuable to do it cryptographically, because we get some important property or we're able to decentralize the thing that way, that's okay, because blockchains can just cause all that hardware to be built, and in two or three years you have a super-fast system.
The other part here is maintaining massive indices around the world with extremely high SLAs; we're talking about open services and open utilities.
This is really what blockchains were designed for, so you can think of using blockchains to build content routing protocols. It's not clear whether it'll be one blockchain or many, but when you think of the scale of what location addressing and location routing require to be high-performance, content routing will likely require that kind of operation too. Looking back at internet location routing: that world has hundreds of specialized protocols,
many different kinds of routers, many generations of devices, and entire operating systems built just to do location routing better, plus tons of hardware. Entire physical buildings have been constructed to house internet exchanges where location routing can go better. Now, content routing can reduce a lot of this stuff, or it could cause similar kinds of structures to be built.
This goes all the way into reshaping industries, reshaping economies, and even law: there's entire international law designed around the constraints of location addressing and location routing, and if content routing works at scale, it'll cause the same kind of thing; laws will be written about how to deal with content routing. And again, the NDN and content-centric networking literature is exactly the sort of field we should be learning from, with the caveat that we now have hashes, we now have capability crypto, and we now have zero-knowledge proofs and so on.
So I want to finish by talking about a few concrete strategies. One: I really don't think small routers are going to cut it. I think small routers work in environments where you're in a local area network and you're just operating there. So yes, in this room, with all our computers, these relatively small routers work; I mean, they're bigger than routers were maybe two decades ago, or even a decade ago.
But these small routers have super-high churn and are a really hard platform to deploy software to, so they're good for routing between applications running here, but they should not be serving anything for people far away in the graph. If we're requiring that, I think we've kind of lost; I don't think content routing is going to work that way.
Instead, what's much more likely to happen is that we'll define ways for large routers to appear, to be incentivized to appear, and then to operate well in a decentralized network, where you can lean into terabyte-size routers, meaning dedicated computers in buildings, houses, or ISPs, or petabyte-size routers in ISPs and data centers and so on. It's totally achievable; it just needs an incentive structure.
I also think regions are going to be critical to include in some way, because they let you solve one of the biggest problems, which is the speed-of-light latency problem.
So you can bound the regions and say: great, content routing operates in this region, and you can decide whether you replicate worldwide state. You might have some state that is meant to be replicated everywhere, and so you can just bring all that state into the region and make it available there, or you might have regional state.
And it's okay if, when you're trying to find that content, you have to communicate across to that region and pay the latency hit. That might be fine, because you might have very large amounts of fast-changing content that mostly matters only to devices and programs in that region. The best example of this is content routing information required in a single data center.
If you're running a ton of operations in a single data center and most of the programs are going to be there, you don't want to replicate all of that information everywhere else in the world. If only the programs there are going to care, it's potentially way easier to ship your program into that environment, run it from there, leverage the content routing information there, and then move on.
However, for most of the information on the human-accessible internet, the internet that we want to view and so on, we want most of it to be accessible everywhere, and that means some amount of that information has to be replicated everywhere. The good news is that once you've moved data somewhere, it can just stay there for a long time; it's fairly cheap to maintain data in one region, so all content-addressed data can just replicate into different places.
I think this model from hierarchical consensus is extremely useful, because even though it sounds crazy to use a blockchain to maintain an index that's supposed to be high-performance, this scalability model, which lets you horizontally scale a blockchain by adding new networks, maps onto this content routing model really well. You can do all of these regions as specific subnets.
So think of multiple layers, where you have one network for the planet, one network for a particular continent or a country or something like that, and then a subnet for a particular data center. You could decide whether your content routing information belongs to the whole planet, to a particular continent, or to a particular data center.
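A sketch of how that layered resolution could behave (the scope names, latencies, and tables here are illustrative assumptions, not a real subnet design): records live at the narrowest scope that needs them, and a query escalates outward, paying latency only when a wider scope is actually required.

```python
# Region-scoped content routing with hierarchical subnets: records live at
# the narrowest scope that needs them (data center, continent, planet), and
# resolution escalates outward. All names and latencies are illustrative.

SCOPES = ["dc-us-east-1", "continent-na", "planet"]   # narrow -> wide
LATENCY_MS = {"dc-us-east-1": 1, "continent-na": 40, "planet": 200}

tables = {
    "dc-us-east-1": {"cid-job-shard": "rack-17"},      # data-center-local state
    "continent-na": {"cid-regional-news": "pop-nyc"},  # regional state
    "planet":       {"cid-wikipedia": "gateway-any"},  # worldwide state
}

def resolve(cid: str):
    """Return (provider, cumulative latency paid) for a content lookup."""
    cost = 0
    for scope in SCOPES:          # always try the local scope first
        cost += LATENCY_MS[scope]
        if cid in tables[scope]:
            return tables[scope][cid], cost
    raise KeyError(cid)

assert resolve("cid-job-shard") == ("rack-17", 1)       # never left the DC
assert resolve("cid-wikipedia")[1] == 1 + 40 + 200      # full escalation
```

Data-center-local records resolve in a millisecond without ever being aggregated and posted elsewhere, while planet-scoped content is still reachable at the cost of the latency hit, which matches the trade-off described above.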
With that model, you can then make those data-center-specific content routers massive and really fast, but you don't have to aggregate all that information and post it somewhere else, and you can still resolve all those queries; you just have to pay the latency hit of going in there. Another useful thing that comes from Eudico is that it's a full blockchain: it'll have the FVM once that lands.