Description
How could inter-planetary content routing continue to evolve going forward?
B: Thanks. To start, I think that IPFS has grown big enough that a lot of users have a lot of different requirements and use cases, so I would hope that they'll be able to make the right trade-offs for their use cases. For example: users who live in a country where the internet is censored, and who are trying to get Wikipedia.
They might be willing to tolerate a lot of latency in order to get more privacy, so that they don't get in trouble with their government. I think that's fine, and we should have multiple different kinds of content routing to provide for those use cases.
C: I would hope that when you're looking for something like a video, you go at furthest to your local region, in a web 2, networky sense: maybe there's a data center where that video lives if it's not popular. But if it is a popular video, if it's news, if it's something that other people are asking for,
it would be great if you could just have that latency. So I think it's going to be interesting to see how we can evolve content routing to help discover those other locations close to you that also have the content, because I think that's going to become more and more likely to be true, and the more we can take advantage of that, the more powerful and valuable we can make content routing with content-addressed data.
D: Yes, I agree. I was going to say something similar: obviously, local mechanisms with fallbacks to global mechanisms for finding content. Ideally I would want the content delivered to me from multiple sources, and obviously from people who are near me first. If we can get there, that would be great.
D: Yes. A bit of background: at Dag House we run the web3.storage service and the nft.storage service, and we started off with two IPFS clusters that grew very, very big.
We had a bunch of learnings from using that before we switched to Elastic IPFS. Things like: you should have a cluster of smaller nodes, and by smaller I mean smaller disk sizes. Because as soon as you have a cluster of, say, three nodes and you put a lot of data on a node, it becomes responsible for a lot of work: writing, for starters, but it also has to provide all of that information to the DHT.
So there's just a lot of work to do there, and if people put popular content on it, it's also responsible for serving content that's quite popular. And if there's a chance it's the only node serving that content, then its responsibilities are just big. Our nodes became very busy, and in our experience performance tends to tail off as nodes get bigger.
Ultimately we got to a point where we had to turn off DHT providing, just because our nodes got too big. We did move to smaller clusters, but it just meant they were too busy, and the only way we could get some read performance out of them was to turn off providing. That's not great. And in Iceland, I think,
we were talking about how there was a big problem there: we had so many CIDs that the overhead of the provider records being placed on nodes in the network was something like four megabytes per node, which is quite a lot to ask of every node on the network, and we had lots and lots of records.
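As a rough sketch of why a single large publisher's provider records add up for every DHT node, here is a back-of-the-envelope calculation. All the constants (record size, replication factor, network size) are assumptions chosen for illustration, not measurements from the network.

```python
# Back-of-the-envelope estimate of DHT provider-record overhead.
# Every constant below is an assumption for illustration only.

RECORD_BYTES = 100          # rough size of one provider record (peer ID + addrs)
REPLICATION = 20            # Kademlia-style k: each record lands on ~20 nodes
DHT_NODES = 25_000          # assumed number of DHT server nodes in the network

def per_node_overhead_mb(num_cids):
    """MB of provider records an average DHT node ends up holding
    when a single publisher advertises num_cids CIDs."""
    total_bytes = num_cids * REPLICATION * RECORD_BYTES
    return total_bytes / DHT_NODES / 1_000_000

# With these assumed numbers, a publisher with 50 million CIDs pushes
# about 4 MB of records onto every node in the network:
print(round(per_node_overhead_mb(50_000_000), 1))  # → 4.0
```

Under these assumed parameters the "four megabytes per node" figure falls out naturally; the real numbers depend on record encoding and network size.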
So the alternative is to change your provider strategy, I guess, from providing all blocks to providing just the roots. But we didn't want to do that.
Like I said, if there's a chance we're the only node serving this content, then as soon as you turn off providing for non-root blocks, you disallow random seeks into the data: you're no longer able to participate in Bitswap for subsets of your DAGs. So that's not a good solution either.
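A toy model makes the trade-off concrete: with a roots-only strategy, a client doing a random seek into the middle of a DAG cannot discover the provider, because only the root CID was ever advertised. All names here are made up for illustration.

```python
# Toy model of why roots-only providing breaks random access.
# The "DHT" is just a dict from CID to a set of provider names;
# CIDs and provider names are illustrative strings.

dht = {}

def provide(provider, cids):
    """Advertise each CID in the toy DHT."""
    for cid in cids:
        dht.setdefault(cid, set()).add(provider)

# A tiny DAG: one root linking to two leaf blocks.
root, leaf_a, leaf_b = "bafyROOT", "bafyLEAF-A", "bafyLEAF-B"

# Strategy "all": every block is discoverable.
provide("node-all", [root, leaf_a, leaf_b])
# Strategy "roots": only the root is advertised.
provide("node-roots", [root])

# A client wanting just leaf_a (a random seek into the DAG)
# only finds the provider that advertised every block:
print(dht[leaf_a])  # → {'node-all'}
```

The roots-only node is invisible for the leaf lookup even though it holds the bytes, which is exactly the lost capability described above.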
So, to answer your question: the way we solve content routing for Kubo users is to lean really heavily on the indexer nodes. All of the data that comes into Elastic IPFS is published to the indexer nodes. As you've already seen this morning, that looks like writing a chain of entries of CIDs that we're providing as a content provider, which the indexer node then comes along and slurps up at its leisure.
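The "chain of entries" can be sketched as a hash-linked list of advertisements: each advertisement carries a batch of CIDs plus a link to the previous advertisement, so an indexer can ingest by walking back from the head whenever it wants. Field names below are illustrative, not the actual indexer wire format.

```python
# Sketch of a hash-linked advertisement chain in the spirit of the
# network indexers: each ad has a batch of CIDs and a pointer to the
# previous ad. Field names ("PreviousID", "Entries") are assumptions.
import hashlib
import json

def make_ad(prev_id, entries):
    """Build an advertisement whose ID is the hash of its contents."""
    ad = {"PreviousID": prev_id, "Entries": entries}
    ad_id = hashlib.sha256(json.dumps(ad, sort_keys=True).encode()).hexdigest()
    return {"ID": ad_id, **ad}

ads = {}
head = None
for batch in (["cid-1", "cid-2"], ["cid-3"]):   # two uploads, two ads
    ad = make_ad(head, batch)
    ads[ad["ID"]] = ad
    head = ad["ID"]                              # head always points at the newest ad

# The indexer ingests at its leisure by walking back from the head:
seen = []
cursor = head
while cursor is not None:
    ad = ads[cursor]
    seen.extend(ad["Entries"])
    cursor = ad["PreviousID"]
print(seen)  # → ['cid-3', 'cid-1', 'cid-2']
```

Because each ad commits to its predecessor by hash, the indexer can resume from wherever it last stopped and verify it has not missed anything.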
C: We're beginning to see with Boost that there's a subset of Filecoin nodes running Bitswap, and so they look like another IPFS node, because these Filecoin nodes have a lot of content. In general they're also focusing, like web3.storage, on indexing as the initial way they export their list of CIDs, rather than trying to publish to the DHT and potentially overwhelm it. So I think that holds between large publishers: both Elastic IPFS and Filecoin nodes.
A: Excellent, thank you. So for Gus, another question: currently Kubo has a two-tier approach for finding content from peers, a direct query and then a fallback to the DHT. How do you see that evolving over time?
B: Yeah. In the short term, there are already some people doing analysis on how to change that. We just added a knob in a recent Kubo release to be able to tweak this interval, and there are some experiments now to see if we can lower it, because the lower you can get it, the faster
your queries are resolved. This mechanism was added years ago, when the DHT was much slower than it is now, and in some cases the fallback behavior actually makes queries slower instead of faster: many DHT queries will resolve in under a second, so if you wait around for a second before you ask the DHT, you might end up taking longer than you would have. So there are some tweaks we can make to that.
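A tiny latency model shows why the interval matters. The knob referred to is, I believe, `Internal.Bitswap.ProviderSearchDelay` in recent Kubo releases (default one second); treat that name as an assumption and check your version's config docs. The timing numbers below are made up for illustration.

```python
# Toy latency model of "ask connected peers first, fall back to the
# DHT after a fixed delay". All timings are illustrative.

def resolve_time(peer_hit, peer_rtt, dht_rtt, delay):
    """Seconds until a provider is found."""
    if peer_hit:
        return peer_rtt           # a directly connected peer had the block
    return delay + dht_rtt        # waited out the delay, then asked the DHT

# If the DHT answers in 0.25s, a 1s fallback delay makes the miss path 1.25s:
print(resolve_time(False, 0.05, 0.25, 1.0))    # → 1.25
# Lowering the delay to 0.25s cuts the same miss path to 0.5s:
print(resolve_time(False, 0.05, 0.25, 0.25))   # → 0.5
```

This is the asymmetry described above: when the DHT resolves in well under a second, most of the miss-path latency is the fixed delay itself.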
Also, this behavior is part of Bitswap, which is kind of confusing: Bitswap has a content routing thing inside it that does this.
Our long-term goal with Kubo is to pull this stuff out of Bitswap and make it a proper content router, so that it's not complected with Bitswap's data transfer behavior. Then we can properly experiment with it; we can just use the configuration I showed a minute ago to turn this off or tweak it. Being able to do that is really important for making changes to it, too.
D: A while ago we were talking about hierarchical DHTs, and we've already mentioned this idea of local and then global resolution. Has there been any progress in that respect? I know there's now a local DHT. Is it just the local DHT and the rest of the DHT, or is it any more fine-grained than that?

B: Sorry, I couldn't hear you.
B: So, a Kubo node by default runs two DHTs. It runs the WAN DHT, which is the one that connects out over the internet, and then there's a LAN DHT, which uses mDNS and techniques like that to find nodes on your local network, and it keeps those in a separate DHT.
So in a way there's already been a partitioning of the DHT going on under the hood, and the querying strategy is very similar to the one we're using with indexers now, where you query both in parallel and then merge the results into a stream.
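That query-both-and-merge pattern can be sketched as two asynchronous result streams drained into one queue, with results surfacing in arrival order. The router names and latencies below are made up; real implementations stream provider records rather than tuples.

```python
# Sketch of "query the LAN and WAN routers in parallel, merge results
# into one stream". Peer names and latencies are illustrative.
import asyncio

async def lan_dht(cid):
    await asyncio.sleep(0.01)        # local answers come back fast
    yield ("lan-peer-1", cid)

async def wan_dht(cid):
    await asyncio.sleep(0.05)        # the public DHT is slower
    yield ("wan-peer-1", cid)
    yield ("wan-peer-2", cid)

SENTINEL = object()

async def find_providers(cid):
    """Run every router concurrently; yield results in arrival order."""
    queue = asyncio.Queue()

    async def drain(source):
        async for provider in source:
            await queue.put(provider)
        await queue.put(SENTINEL)    # mark this router as finished

    routers = (lan_dht, wan_dht)
    tasks = [asyncio.create_task(drain(r(cid))) for r in routers]

    results, finished = [], 0
    while finished < len(routers):
        item = await queue.get()
        if item is SENTINEL:
            finished += 1
        else:
            results.append(item)
    await asyncio.gather(*tasks)
    return results

providers = asyncio.run(find_providers("bafy-example"))
print([peer for peer, _ in providers])  # → ['lan-peer-1', 'wan-peer-1', 'wan-peer-2']
```

The fast local answer surfaces first without waiting for the slower global router, which is the point of merging rather than querying sequentially.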
A: Excellent. All right, an open question to the panel. One of the tensions in content routing is resiliency versus performance: getting the right thing versus the fast thing. What level of performance do you think is possible in IPFS without compromising resilience, and are there areas where we shouldn't compromise?
D: Can we have both? Can we have resilience through many performant things that can go down? I'd like that; I don't necessarily believe it's an either/or question. But I understand that a lot of the things we're talking about now are more performance-oriented at the expense of resilience, in the sense that they're more centralized than we'd like, I think. But yeah, I'm
conscious of that, and I don't want us to continually Band-Aid things without a real plan for making them less centralized and more resilient. Whatever we do now, we should at the very least have that in mind, so that we can get to a place where we do have both of those things. I think it's possible, but we may be on a road that's going to take us a while to get there.
C: I want to draw a distinction when we talk about resilience. I think there are actually two properties in there, probably more, but the two I want to call out are integrity (are you getting the right results?) and availability (are you getting results at all?).
Integrity means it really is someone, at least with that peer ID, who has said that they have the content. As part of the double-hashing work, I think we'll finally be getting that, which is very exciting, because it means we can push a little further on performance while really only having to think about the availability side. For availability, you can say things like: well, I'm going to query maybe even a more centralized instance, or just some instance that happens to be near me.
I don't have to trust them that much, because I still have some of this integrity guarantee through the signing of records. So if I don't have availability, I can fall back to other instances, but I worry less about them lying to me and causing me to spend a bunch of network bandwidth in the wrong way.
D: I'm interested in that. Is there any recourse in the network indexers for adverts you receive where the provider stops providing the data, or do you just rely on them telling you? In the DHT you get this nice property: if people stop providing the data, or they fall off the network, it eventually works itself out, because those records expire.
C: Sort of; about half of it is built. For the network indexers, with these large providers: when you look at why it's frustrating to provide to the DHT, it's that you're re-providing the same CID, saying "I still have this" twice a day, so you're doing a lot of work. One of the decisions the indexers made is that you basically keep a manifest saying "I still have this". It's not something you have to keep telling it; it's just "I've still got that manifest."
If the indexers can't reach the publisher, can't reach the node that has that manifest, they will eventually drop it and say: OK, I can't contact you at all, so I'm going to say you don't have any of this stuff. But what happens if you keep saying "yeah, I've got all this content" when you don't actually have it? You're still a node that's online, but you've deleted the content and didn't update the index properly.
Maybe that part of the node had an error and fell over, or the implementation is bad. For that, our guess is a reputation feedback loop: if clients that are attempting retrievals keep failing, eventually that signal should come back and the indexer should delist the node, saying: OK, this node is claiming it has a whole lot of content, but no one's actually successfully getting it; let's stop returning those results.
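The reputation loop described above can be sketched in a few lines: clients report retrieval outcomes per provider, and the indexer stops returning providers whose failure rate over recent reports is too high. The thresholds and names are made up for illustration.

```python
# Sketch of a reputation feedback loop for indexer delisting.
# Thresholds and provider names are illustrative assumptions.
from collections import defaultdict

MIN_REPORTS = 10        # don't judge a provider on too few samples
MAX_FAILURE_RATE = 0.9  # delist above this failure rate

reports = defaultdict(list)

def report(provider, success):
    """A client reports whether a retrieval from `provider` worked."""
    reports[provider].append(success)

def is_listed(provider):
    """Should the indexer still return this provider in results?"""
    outcomes = reports[provider]
    if len(outcomes) < MIN_REPORTS:
        return True                     # not enough signal yet
    failure_rate = outcomes.count(False) / len(outcomes)
    return failure_rate < MAX_FAILURE_RATE

for _ in range(12):
    report("flaky-node", False)         # every retrieval fails
report("healthy-node", True)

print(is_listed("flaky-node"), is_listed("healthy-node"))  # → False True
```

A real system would also decay old reports and guard against clients gaming the signal, but the shape of the loop is the same: failed retrievals feed back into what the indexer returns.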
B: I think Willie mentioned a second ago that it can be frustrating to publish to the DHT. I thought that was interesting because it highlights the trade-offs we make: it's frustrating to publish to the DHT because you can't control what you're serving, and that's a property that improves the resilience and decentralization of the DHT, at the expense of things like performance.
B: I think we're already fragmented and we don't like to admit it. For the past few years that IPFS has been around, we've done a really good job of hiding from users the fact that there is location addressing going on. By location addressing I mean there is something other than just the content ID that we use to fetch data, and I think that's just an intrinsic property of fetching data. You have to start somewhere, right?
You can't fetch data from nowhere. We've hidden this behind a veil for a long time and protected users from having to deal with it by hard-coding bootstrap nodes into implementations, but the user base is growing so much that I don't think we can pretend it's not happening anymore. And as far as fragmentation goes,
we already know there's a bunch of IPFS networks out there that aren't on the public internet; they're in LANs or data centers and the like. So the fragmentation is already there. Moving forward, it's just a matter of accepting that it's a thing and figuring out how to deal with it.
C: There's also a lot of content on the web that has a hash but isn't accessible on IPFS, and that's another realm of fragmentation. So we can also think about how we build additional bridges between these disjoint pieces of content-addressed data at the same time that we're growing.
C: One thing we've seen already is that if you are a relatively high-traffic gateway, there's a motivation to run your own network indexer co-located with your gateway, because if you can pre-load all of that data, you don't have to go out to the DHT or anything, and you can really reduce that uncertainty and jitter. You're just querying a local database rather than an unknown sort of thing, so you get really consistent performance, which is exciting.
One of the next logical motivations is that, as a gateway, you'd like to pull the DHT content into your local database too, so that you really don't have to fall back to the DHT when something isn't in the index. So I think we'll start to see things that scrape the DHT and provide that as a source into indexers as well, so that someone can maintain a local database where they know: whenever I have to solve this content routing problem,
I can do it in a millisecond or a few milliseconds, and that's, I think, pretty powerful. I also think we'll start to see more of a caching story; we don't have much of one now. We have a global CDN cache for some of these instances: if we run, or Cloudflare runs, an instance of the indexer, we can put an HTTP cache in front of it, so if something is getting queried somewhere, subsequent queries for the same CID are quick. But it would be interesting to have an application-specific cache
that's actually keeping a little database, doing more efficient LRU-type things, and knowing: these are individual CIDs, and I can store them as compactly as the index itself. I don't need to rely on HTTP caching or the generic caching things that are there now. So I think we'll see more instances and caches, and sort of a diffusion of indexing closer to clients.
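The application-specific cache idea can be sketched as a small LRU keyed by CID, holding provider records directly instead of cached HTTP responses. Names and capacities are illustrative.

```python
# Sketch of a compact, application-specific LRU cache mapping CIDs to
# provider lists. CIDs and peer names are illustrative.
from collections import OrderedDict

class ProviderCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()   # oldest entries first

    def get(self, cid):
        if cid not in self.entries:
            return None
        self.entries.move_to_end(cid)            # mark as recently used
        return self.entries[cid]

    def put(self, cid, providers):
        self.entries[cid] = providers
        self.entries.move_to_end(cid)
        if len(self.entries) > self.capacity:
            self.entries.popitem(last=False)     # evict least recently used

cache = ProviderCache(capacity=2)
cache.put("cid-a", ["peer-1"])
cache.put("cid-b", ["peer-2"])
cache.get("cid-a")                 # touch cid-a so it is most recent
cache.put("cid-c", ["peer-3"])     # evicts cid-b, not cid-a
print(cache.get("cid-b"), cache.get("cid-a"))  # → None ['peer-1']
```

Because the entries are just CID-to-provider mappings, they can be stored about as compactly as the index itself, which is the advantage over caching whole HTTP responses.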
D: Something I'd like to see in the future: one of the things that users of web3.storage and nft.storage are quite interested in is when their data becomes available more widely to the IPFS network.
We can say that we've received their uploads, because they generally send us CAR files with the data in them, and we almost instantly make it available on gateways, by magic that I don't need to talk about now. But it would be super cool if we could somehow tell them when their data was available to the wider IPFS network after they'd uploaded it. What we see right now with the network indexers is:
we give them advertisements, but we don't know when they have caught up, where in the chain they are at the moment. Rather than us polling for that information, it might be cool to get a callback or something: maybe not at a per-entry level, but a "we've incremented by one" notification would be super cool.
A: That segues a bit into my next question for Dag House. In cases where Dag House is providing gateway service to your users, so web3.storage and nft.storage, your users are coming to these servers to get their data back. Is there a future where you don't want them to come to your gateway first, and instead want them to go to the broader network first?
D: Yes, almost always. Because peer-to-peer, but also because it costs us money; it costs us a lot of money to serve that data. The interesting thing you should know is that we run two gateways, nftstorage.link and w3s.link, and they're both special gateways that are actually not really gateways. They're what we call racing gateways: they just race other gateways, and the other gateways provide the service of discoverability and sending us the data.
But when our gateways go through another gateway to get content that's stored on Elastic IPFS, that's quite a lot of extra cost. Not only do we as an organization have to pay, because it's egress from Elastic IPFS to ipfs.io, but they presumably have to pay for traffic in and out, and then we pay egress again to get it. We're just shuffling data around a whole bunch of proxies.
The caveat is that we've just released a thing called Freeway, which is a gateway backed by R2 buckets in Cloudflare. R2 buckets are kind of like S3 buckets, almost exactly the same, they have an S3 API, but we now have a gateway in our race of gateways that serves data directly from our R2 buckets. It's called Freeway because R2 bucket egress is free: you don't pay anything to get data out of them.
So for content that is hosted by us, we ask Freeway first, and if Freeway can serve it, that's fine; if not, it falls back to the other gateways. Freeway doesn't need to do that discoverability thing. We're still serving content-addressed IPFS data; it's just not going over libp2p, and it's not costing us a whole bunch of money.
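The try-the-cheap-backend-first, then-race pattern can be sketched with asyncio: check the local bucket, and only on a miss race the public gateways and take the first success. Gateway names, latencies, and the in-memory "bucket" are all stand-ins for illustration.

```python
# Sketch of a racing gateway: serve from the cheap local backend when
# possible, otherwise race public gateways and take the first result.
# Store contents, gateway names, and latencies are illustrative.
import asyncio

LOCAL_STORE = {"cid-hosted-by-us": b"hello"}     # stands in for the R2 bucket

async def fetch_local(cid):
    return LOCAL_STORE.get(cid)                  # free egress, no discovery needed

async def fetch_public(gateway, cid, latency):
    await asyncio.sleep(latency)                 # pretend network round trip
    return f"{gateway}:{cid}".encode()

async def racing_get(cid):
    data = await fetch_local(cid)
    if data is not None:
        return data                              # served from our own bucket
    # Miss: race the public gateways; first to finish wins, the rest
    # are cancelled so we stop paying for them.
    tasks = [asyncio.create_task(fetch_public(gw, cid, lat))
             for gw, lat in (("ipfs.io", 0.05), ("dweb.link", 0.02))]
    done, pending = await asyncio.wait(tasks, return_when=asyncio.FIRST_COMPLETED)
    for task in pending:
        task.cancel()
    return done.pop().result()

result_hit = asyncio.run(racing_get("cid-hosted-by-us"))
result_miss = asyncio.run(racing_get("cid-elsewhere"))
print(result_hit, result_miss)  # → b'hello' b'dweb.link:cid-elsewhere'
```

The hit path never touches the network at all, which is the cost argument made above; the miss path keeps the latency of whichever gateway happens to answer first.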
A: Excellent. I think we have maybe time for one more open question; there's a track on this tomorrow, content routing privacy. Currently on the network, reader and writer privacy doesn't really exist: all of this data is open and transparent. So what do you think needs to exist around the content routing privacy story?
C: I think there's a baseline, which is that it would be really nice if you could say the content routing story in IPFS is not any worse than a web 2 world. That would be a big jump from where we are, because in a web 2 world it's you and the service provider, and maybe a little bit of your ISP and some DNS. There are a few places where things leak, but it's not every participant in the DHT and potentially every random node that happens to connect to you.
D: I was going to say that, as a service responsible for receiving data and making it available to the distributed network, we kind of want that data to be public and discoverable by everyone; that's sort of the idea. But as soon as we get to a state where users are running their own IPFS nodes, hosting their own data and providing it themselves, I can absolutely see why you would want content routing privacy, because as soon as you don't have it...
I agree we need privacy in those cases. In the far, far future, I guess this problem of needing someone else to host your data for it to stay alive in the network might not be a thing anymore, and then, privacy everywhere. But right now: privacy for the small players hosting their own stuff, and publicity for providers like web3.storage and Pinata.
C: I think that's a good distinction to draw; there are multiple classes of data. There are large public data sets where you want accountability from the providers: you want "I have this large amount of data" to be a public claim, so that people can try doing retrievals and make sure it actually exists. If data is meant to be public and meant to be available quickly, you want that accountability.
B: Yeah. I also think there are a lot of well-known techniques for preserving privacy in decentralized ways that we haven't done a good job of exploring, mostly because it's hard, because of assumptions that implementations make. As a Kubo maintainer, I'm really interested in finding the right seams in Kubo to enable experimentation with this stuff. We already have researchers who are super interested in it, and now it's just a matter of a lot of...
A: Excellent, and I think that brings us to just about time. Again, there's a content routing privacy track tomorrow; you can come see that, along with libp2p day. We don't have time for open Q&A, unfortunately, but these three are here all day, and I'm also here: come talk to us, come ask questions. Thank you all so much. Thank you.