From YouTube: Content Routing Track Intro - Will Scott
Description
An introduction to the concept of content routing, why it's important, and how the IPFS conception of content routing is evolving.
Today, on this stage, we're going to be talking about content routing. What is content routing? Why do we care? Content routing is the first of sort of two logical things in breaking up the work that IPFS implementations are doing, and in particular it's answering a more tractable and concrete question: who has a given piece of content? Right, we've got a content-addressed world in general. You already have this hash identifier of the content that you're interested in, and so the question of, then, which providers can I go to to fetch that data, is one that we can break out.
IPFS has a few different mechanisms that it uses to do this today across different versions. Most commonly we use a DHT; you've probably heard about the Kademlia DHT, and that's used in IPFS for most content routing. So the roughly tens of thousands of IPFS peers that believe themselves to be accessible and long-lived will hold some of the routing records.
They'll hold a chunk of the space of all of the CIDs that people are saying they have, and then, as queries come in, you direct the queries to the correct peers, who are able to then give you that content back. But content routing in IPFS is evolving.
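A rough sketch of the Kademlia placement idea described here (illustrative only: the peer names and parameters are made up, and this is not the libp2p implementation):

```python
import hashlib

def key(b: bytes) -> int:
    # Peer IDs and CIDs are mapped into one shared 256-bit key space.
    return int.from_bytes(hashlib.sha256(b).digest(), "big")

def closest_peers(cid: bytes, peers: list[bytes], k: int = 3) -> list[bytes]:
    # Kademlia stores a record on the k peers whose key is nearest to the
    # content's key under XOR distance; queries walk toward those peers.
    target = key(cid)
    return sorted(peers, key=lambda p: key(p) ^ target)[:k]

# A toy network of a thousand peers, each holding its slice of the space.
peers = [f"peer-{i}".encode() for i in range(1000)]
holders = closest_peers(b"bafy-example-cid", peers)
```

Each peer thus ends up responsible for the chunk of the CID space nearest its own ID, which is the "hold a chunk of the space" behavior the talk describes.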
This year has actually been a really active year around content routing, and I think we sort of made a conscious decision near the end of last year that content routing was a reasonable boundary that we should draw, that we should think about it as a separate subproblem, because it is fairly tractable. It's an interface that we can try to experiment with and try different implementations, and in particular, by having other implementations, we have the ability to try and find equivalent mechanisms that can get content much faster. And so we'll talk a little bit about network indexers through this morning, and we'll talk more generally about, you know, how content routing is growing and the extensibility that's getting added in IPFS in general. The things to take away: content routing is a layer that can be shared, probably broader than the data transfer. So when you think about how we actually send data back and forth, IPFS today uses Bitswap; it sends blocks back and forth.
So as a client, I might only speak a subset of the overall data transfer protocols, but we can potentially share our participation in the content routing layer. Content routing already is pretty fast, and the development effort is going to make it very fast pretty quickly. So network indexers, which you'll hear about later, are able to generally answer queries directly on the order of a millisecond. And so then it's just a question of how many times we're replicating that, and how we incentivize people to run instances of these databases close to, you know, most regions: caches in your metro area. I think we believe at this point that storage density is going up, storage is going to get cheaper; it's not going to be unimaginable that you've got a pretty big cache that can hold most of the relevant content routing information for you as a client, certainly within your metro area. And so the content routing part of data transfer and getting traffic is something that should be sub-ten-millisecond; I think that's sort of the goal that we're eyeing. And then we already have, in IPFS and across the various IPFS implementations, expanding options for experimentation. So in both Iroh and in Kubo, there are hooks to delegate and try other content routing mechanisms, so that, rather than having just the mainline DHT, an individual client or an individual gateway can start trying other things, and can attempt to, you know, keep a cache that's different, tune things. It's a place where you get to figure out what works and customize it. That's pretty easy, because it is a very limited interface, which is one of the things that makes it attractive.
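That limited interface can be sketched in a few lines. This is an illustrative shape only; the names (`ContentRouter`, `FallbackRouter`, the example addresses) are invented here and are not Kubo's or Iroh's actual API:

```python
from typing import Protocol

class ContentRouter(Protocol):
    # The whole interface: given a CID, who can serve it?
    def find_providers(self, cid: str) -> list[str]: ...

class DictRouter:
    # Stand-in for any backend (DHT, network indexer, local cache):
    # a consistent map from CID to provider addresses.
    def __init__(self, records: dict[str, list[str]]):
        self.records = records
    def find_providers(self, cid: str) -> list[str]:
        return self.records.get(cid, [])

class FallbackRouter:
    # Because the interface is so small, composition is easy:
    # try routers in order (e.g. local cache, then indexer, then DHT).
    def __init__(self, *routers: ContentRouter):
        self.routers = routers
    def find_providers(self, cid: str) -> list[str]:
        for r in self.routers:
            if found := r.find_providers(cid):
                return found
        return []
```

Because every backend answers the same one-method question, a client or gateway can swap or layer implementations without touching the data-transfer side, which is the experimentation the talk describes.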
The other part of experimentation, that we'll cover probably tomorrow afternoon primarily, is that you can do a bunch of interesting privacy enhancements over this interface. So once you've just got an interface that is "I have a CID, I want to know who has it", and have sort of a consistent database view of that overall question, that's an interface that you can start to put different privacy implementations behind and protect queries from users.
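As a toy illustration of the kind of privacy implementation that can sit behind the interface, here is a sketch in the spirit of the "double hashing" idea that has been discussed for IPFS routing: publish and query by a hash of the CID, so the database answers without ever seeing the plain CID. This is an assumption-laden simplification, not the concrete design the talk refers to:

```python
import hashlib

def blind(cid: str) -> str:
    # The client sends H(cid) over the wire, never the CID itself.
    return hashlib.sha256(cid.encode()).hexdigest()

class BlindedIndex:
    def __init__(self):
        self.records: dict[str, list[str]] = {}
    def publish(self, cid: str, provider: str) -> None:
        # Records are stored keyed by the blinded identifier.
        self.records.setdefault(blind(cid), []).append(provider)
    def find_providers(self, blinded: str) -> list[str]:
        # The server resolves the query without learning the plain CID
        # (unless it already knows the content and can match the hash).
        return self.records.get(blinded, [])
```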
So, just to give a very high-level overview of how Kubo has evolved over, I think, really the last six months: when you were looking at IPFS clients this spring, realistically, you were seeing something like this. You were seeing either your desktop Kubo client or one of the gateways like ipfs.io; whenever they were unable to know who had a given CID that was queried, they would go to the DHT. That was sort of what existed this summer. At IPFS Thing we gave an update on the progress of network indexers. And so network indexers is a project that the Bedrock team has worked on (so myself, Mossy, Andrew and Ivan) that holds sort of this larger database in one machine, and is betting on storage getting cheaper. And so by this summer, we had connected this network indexer to the DHT, such that queries were able to take a set of providers, like both web3.storage and Filecoin providers, and get them through their existing queries. So when you queried into the DHT, there's a set of Hydra Booster nodes which act sort of like bridges, where, as you make your request in the DHT, these nodes would not simply return their fraction, but they would also go back and query an indexer and return results that they found in that indexer as well. And so this bridged, and allowed large providers that had a lot of data that they couldn't directly publish into the DHT, to be able to still advertise their content such that existing clients could get it.
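The bridge behavior described here, answering with your own fraction of the records plus whatever the indexer returns, can be sketched as follows (hypothetical function names, not Hydra Booster's actual code):

```python
def bridged_find_providers(cid, local_records, query_indexer):
    # A bridge node first answers from its own slice of the DHT records...
    results = list(local_records.get(cid, []))
    # ...then forwards the same query to a network indexer and merges the
    # answer in, so existing DHT clients see the extra providers too.
    for provider in query_indexer(cid):
        if provider not in results:
            results.append(provider)
    return results
```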
Today, we've continued on integration work such that both clients and the gateway are able to query indexers directly, in addition to the DHT. And so what this gives you is a different set of characteristics around reliability: you know that all of your queries are going to this additional database in addition to the DHT, rather than hoping they might hit the Hydra Boosters. And it means that we get to sort of experiment with this interface directly, and we're seeing that evolve as we sort of exercise it, both in privacy and in sort of what is the caching story, the semantics and the extensibility that you want, for an IPFS node, whatever that may be, to actually be able to answer quickly and safely this question of content routing. Cool. To give a sense of scale: as observed this summer, the network indexers had about a billion hashes, and that covered maybe 20% of the Filecoin content that is available, and about 100 providers that were providing that. Whereas as of sort of around now, last week, it's grown quite a bit, right?
A
We're
we're
at
like
quickly
approaching
a
trillion
hashes,
which
is
an
order
of
magnitude,
a
couple
orders
of
magnitude
more
than
the
amount
of
Sids
that
are
in
the
DHT,
so
there's
a
lot
of
content
that
people
have
where
they
are
not
able
to
advertise
all
that
content
in
the
DHT-
and
this
was
this
was
sort
of
the
other
thesis
is
if,
if
we're
going
to
scale,
especially
with
things
like
filecoin,
where
you've
got
a
really
large
imbalance
in
your
heterogeneity
of
nodes,
so
you
have
a
few
really
big
nodes
that
just
have
hard
drive
after
hard
drive
of
content.
A
It's
not
super
fair
and
also
it's
not
a
super
good
thing
for
them
to
just
publish
all
that
content
into
the
DHT
right.
So
if
I'm,
if
I,
if
I
look
at
the
DHT,
you've
got
a
few
tens
of
thousands
of
nodes,
a
lot
of
them
are
desktop
nodes
in
general.
We're
asking
them
to
store
five
to
twenty
megabytes
of
content
routing
records,
so
they
they
have.
You
know
a
reasonable
amount
of
content
in
there,
but
it's
not
so
much
that
you're
going
to
be
unhappy
about
that
being
stored
on
your
computer.
A
But
if
we
took
all
the
file
coin
content
and
had
those
filecoin
providers
publish
into
the
DHT
we'd
suddenly
be
asking
all
of
the
desktop
users
to
store
gigabytes
of
content,
and
that
starts
to
be
a
level
where
a
lot
of
desktop
users-
maybe
would
not
be
happy
about
that,
and
so
that
that
was
sort
of
the
other
motivation
is.
We
know.
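The jump from megabytes to gigabytes follows from rough arithmetic. Assuming ~100 bytes stored per record copy (an assumed figure; real record sizes vary) and the numbers from the talk:

```python
# Figures from the talk (rounded assumptions)
dht_nodes = 20_000        # "a few tens of thousands" of DHT peers
per_node_today = 10e6     # ~5-20 MB of routing records per node today
indexer_records = 1e12    # ~a trillion hashes held by the indexers

# Total routing bytes across the DHT today, counting every replica:
total_bytes_today = per_node_today * dht_nodes          # ~2e11 bytes

# Implied number of stored record copies at ~100 bytes each (assumed):
record_copies_today = total_bytes_today / 100           # ~2e9 copies

# If the trillion indexer hashes were also published into the DHT,
# per-node storage scales roughly linearly with the record count:
per_node_scaled = per_node_today * indexer_records / record_copies_today
# ~5e9 bytes, i.e. gigabytes per desktop node
```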
A
We
know
that
we
can
make
just
a
database
that
has
all
this
and
in
fact
we
can
make
a
lot
of
replicas
of
that
database
and
we
can
come
up
with
ways
to
reduce
trust
through
replication
rather
than
through
a
classic
DHT
mechanism.