From YouTube: The IPFS Network From The Hydra's PoV - Dennis Trautwein
Description
This talk was given at IPFS Camp 2022 in Lisbon, Portugal.
A
So why do we want to take a look at the Hydras? Hydras play a central role in content discovery; well, they claim to play a central role, so let's take a look at the stats later on. Several users and applications depend on them. They make up a significant proportion of all the DHT server nodes in the network, and, as far as I know, there has been no thorough investigation so far of the Hydras' performance and how much they actually impact the network. This was also touched on earlier.
A
For the paper that we published, the main concern in the review feedback, the biggest negative point, was that we didn't take a look into content locality. We looked at where DHT nodes are geographically located, but not where content is. Our thought is that we can use the Hydra database to actually find out where content is stored geographically.
A
For this talk we will briefly cover what Hydras are, then take a look at Hydras in production, i.e., how they are deployed at Protocol Labs. I will share a couple of Grafana panels and dashboards, and we will take a look at the stats there. Then I will describe our measurement setup: how do we find out where content is stored, and what else can we do with that data? And then I will share a lot of graphs, so it's going to be fun.
A
So, briefly, what are Hydra Boosters? Mikhail already touched on that briefly; just to recap, I took this paragraph from the top of their GitHub page: a Hydra Booster is "a Hydra with one belly full of records and many heads to tell other nodes about them".
A
They have three main activities. They receive ADD_PROVIDER requests from the IPFS network and store them in one big database, so that all the other heads of that Hydra can directly access the database and serve the records right away, even though those heads may be far away in the XOR key space. Then they use those provider records to answer GET_PROVIDERS queries from other IPFS nodes. And if they don't know about a provider record, i.e., they don't know about the CID,
A
they proactively search for those provider records in the network and store them in their own database. The code is open source; you can find it at libp2p/hydra-booster. So that is briefly what they are. Now, Hydras in production: how are they run at Protocol Labs right now?
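The "many heads, one belly" idea above can be sketched as a toy model: every head has its own identity in the DHT, but all heads read and write one shared record store. This is only an illustration of the concept, not the actual libp2p/hydra-booster implementation; all names here are made up.

```python
class HydraSketch:
    """Toy model of a Hydra: many DHT 'heads' sharing one record store."""

    def __init__(self, num_heads: int):
        # Each head would have its own peer ID, but all share one belly.
        self.heads = [f"head-{i}" for i in range(num_heads)]
        self.belly: dict[str, set[str]] = {}  # CID -> provider peer IDs

    def add_provider(self, head: str, cid: str, peer_id: str) -> None:
        # Any head can write a provider record into the shared store.
        self.belly.setdefault(cid, set()).add(peer_id)

    def get_providers(self, head: str, cid: str) -> set[str]:
        # Every head can answer for every record, even records received
        # by a head that is far away in the XOR key space.
        return self.belly.get(cid, set())


hydra = HydraSketch(num_heads=3)
hydra.add_provider("head-0", "bafy-example", "peer-A")
# A different head can serve the record right away.
print(hydra.get_providers("head-2", "bafy-example"))
```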
A
Let's continue with Hydras in production at Protocol Labs. They receive tons of requests: on the x-axis you can see time, and on the y-axis the received requests per second. The blue line is the ADD_PROVIDER RPCs, which hover around 50,000 requests per second; the yellow line is the FIND_NODE RPCs, which are part of content lookups.
A
On the other hand, they also send requests. As I said, they proactively search for provider records that are in the network but that they don't yet know about when they receive a request for them. This is also on the order of a couple of tens of thousands of requests per second that they send out. So they are used quite a lot, and they handle a lot of the traffic in the network.
A
Next, the total provider records that they store. As you've also heard from Mikel, these have a TTL of 24 hours, so there's an up and down going on; the x-axis is again time, and the y-axis reaches from 1 billion at the lower left to 5.5 billion at the top left. That's the order of magnitude we operate at with regard to provider records in the network, and you can see the up and down: provider records come and go depending on the time of day.
A
Total IPNS records, that's a fun one. The bottom left starts at around 1.6 million and the top left stops at 1.9 million, and you can see it's a straight line going up. This is simply because the Hydras don't prune any IPNS records, so we have all the IPNS records since, I don't know, March or so; they just don't expire.
A
So it's hard to tell how many of those records are actually active in the network right now. But we're in the measurement track, and if anyone wants to do an analysis of that, I just want to point out that this data is there and we could collaborate on it.
A
This is a tricky one. These are the provider prefetches. To repeat myself: the Hydras go ahead and search for CIDs in the network that they don't know about, and there are a couple of different cases that we need to consider here. You can see the orange line at the top.
A
That is the case where the Hydras received a CID that they should serve but didn't have in their database, then went ahead, searched the network for the CID, and actually found the provider records. So that's the thin orange line at the top. Then the blue bar is basically the successful requests for provider records from the Hydras. This means:
A
someone requests a CID from the Hydra, and it can serve it right away from its cache. Then the yellow bar, the biggest part, is where the Hydra ran into the so-called negative cache. This means the CID was requested previously, the Hydra went ahead and tried to find it in the network, but did not find it, and thus put the CID into a negative cache. That means if another peer requests the same CID, the Hydra knows: well, I just searched for it,
A
I
couldn't
find
it
Well
I
just
discard
this.
This
CID
and
the
the
green
one
at
the
bottom
is
the
high
drive
went
ahead
to
try
to
find
it
in
the
network
and
couldn't
find
it
in
this
particular
time
slot
here
all
right,
so
these
are
just
some
some
metrics
of
hydros
in
production
at
programs,
our
measurement
setup.
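The four prefetch outcomes described above (already cached, prefetched from the network, negative-cache hit, not found) can be sketched as a small cache class. This is a minimal illustration with hypothetical names and a hypothetical negative-cache TTL, not the Hydra's actual code.

```python
import time


class PrefetchCache:
    """Toy model of the Hydra prefetch outcomes: a positive cache of found
    provider records plus a TTL-bounded negative cache of failed searches."""

    def __init__(self, neg_ttl: float = 60.0):
        self.found: dict[str, list[str]] = {}    # CID -> provider records
        self.negative: dict[str, float] = {}     # CID -> time search failed
        self.neg_ttl = neg_ttl

    def lookup(self, cid, search_network):
        if cid in self.found:
            return "served-from-cache", self.found[cid]
        ts = self.negative.get(cid)
        if ts is not None and time.monotonic() - ts < self.neg_ttl:
            # Searched recently and failed: discard without re-searching.
            return "negative-cache", None
        providers = search_network(cid)
        if providers:
            self.found[cid] = providers
            return "prefetched", providers
        self.negative[cid] = time.monotonic()
        return "not-found", None


cache = PrefetchCache()
print(cache.lookup("cid-1", lambda c: ["peerA"]))  # prefetched from network
print(cache.lookup("cid-1", lambda c: ["peerA"]))  # served from cache
print(cache.lookup("cid-2", lambda c: []))         # not found, cached negatively
print(cache.lookup("cid-2", lambda c: ["peerB"]))  # negative-cache hit, no retry
```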
A
So what do we want to do with that data? The Hydras sit on a lot of data in this big DynamoDB database, and we want to know: do Hydras really cover the whole hash space? I said they distribute themselves uniformly across the whole key space, so that every request should end up at at least one Hydra head, but is this actually the case? We want to find out. Then, again from the review feedback: where is content stored geographically?
A
Is it in the US, in Europe, or in China? We don't know yet. Then also: what's the content churn? If I provide a CID, what's the percentage of CIDs that are reprovided, and how long do CIDs stay in the network until they just churn away? And, most importantly, do Hydras really speed up content routing? Mikhail already gave a brief glimpse suggesting that this may not be the case, but we'll see. So, this is the data processing pipeline.
A
Think
I
will
just
skip
over
that,
because
this
will
change
a
lot,
but
maybe
for
this,
for
the
sake
of
completeness
there's
this
dynamodb
that
you
can
see
here
and
we're
sorry
and
we're
doing
like
a
point
in
time:
recovery
every
one,
every
24
hours
into
an
S3
bucket,
and
then
we
run
a
couple
of
AWS.
Sorry,
a
couple
of
spark
drops
with
AWS
glue
and
then
query
the
data
with
with
Athena
and
yeah.
We
have
all
the
whole
tool
belt
of
SQL
queries
attached
to
us.
So
that's
great.
A
Each
data
dynamodb
dump
is
around
200
gigabytes
and
if
we
also
want
to
have
the
peer
records
which
we
want
to
have,
this
is
another
one
and
a
half
gigabytes.
So
this
is
the
size
of
the
data
that
we're
operating
with
and
if
we
are
doing
these
Dynamo
DB
dance
quite
frequently,
this
adds
up
quite
quite
fast
to
a
lot
of
a
lot
of
data
and
makes
it
difficult
to
handle
just
for
for
everyone
to
know
which
kind
of
data
we
have.
A
So
if
you
also,
if
you
need
this
data
at
some
point
for
one
of
your
studies,
provider
records,
as
you
may
know,
consists
of
the
CID,
then
the
time
to
live,
and
then
the
peer
ID
and
the
peer
records
consist
of
for
us,
most
importantly,
the
peer
ID
and
then
the
network
addresses
and
since
the
hydras
know,
of
all
the
provider
records
and
thus
also
know
of
all
the
peer
records.
We
can
basically
join
both
of
these.
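The join described above can be sketched in a few lines. The records and field names here are made-up in-memory stand-ins for the two DynamoDB tables, just to show the shape of the operation: join provider records to peer records on the peer ID, so each CID gains the provider's addresses (which a GeoIP lookup could then map to a country).

```python
# Hypothetical stand-ins for the two tables.
provider_records = [
    {"cid": "bafy-aaa", "ttl_hours": 24, "peer_id": "peer-A"},
    {"cid": "bafy-bbb", "ttl_hours": 24, "peer_id": "peer-B"},
]
peer_records = {
    "peer-A": ["/ip4/1.2.3.4/tcp/4001"],
    "peer-B": ["/ip4/5.6.7.8/tcp/4001"],
}

# Join on peer ID: every provider record gains the provider's addresses.
joined = [
    {**rec, "addrs": peer_records.get(rec["peer_id"], [])}
    for rec in provider_records
]
print(joined[0]["addrs"])
```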
A
First of all, the hash space coverage. On the right-hand side, the straight line just indicates that we have a very uniform distribution of peer IDs, which is good. But we also wanted to know how often a Hydra head appears among the 20 closest peers. So what did we do? We took a full network crawl, the snapshot that we also talked about previously.
A
Then
we
put
all
the
purities
in
this
binary,
try,
which
is
the
representation
yeah
for
this
XR
key
space,
thanks
to
GM,
who
built
this
brilliant
Library
here
and
then
for
all
the
peer
IDs
that
we
found
in
the
network.
We
calculated
the
20
closest
peers.
According
to
the
XR
distance-
and
we
checked
well,
it's
a
Hydra
head
in
there
or
not,
and
we
found
out
for
the
16200
PS
that
we
found
in
the
network.
A
This
is
the
case
for
15
700,
which
makes
around
97
coverage
of
the
whole
hash
space,
which
is
yeah,
which
is
good,
which
is
the
intention
and
also
we
just
confirmed
it
empirically.
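The coverage check above can be approximated with a brute-force sketch. The real analysis uses the binary-trie library just mentioned; here we simply sort by XOR distance over a synthetic peer set. Hashing names with SHA-256 follows the Kademlia convention of keying the DHT by digests, but the peer IDs and the "every fifth peer is a head" assumption are invented for illustration.

```python
import hashlib


def kad_id(peer_id: str) -> int:
    # DHT keys are SHA-256 digests; distance between two keys is their XOR.
    return int.from_bytes(hashlib.sha256(peer_id.encode()).digest(), "big")


def k_closest(target: str, peers: list[str], k: int = 20) -> list[str]:
    t = kad_id(target)
    return sorted(peers, key=lambda p: kad_id(p) ^ t)[:k]


# Coverage check: for each crawled peer, is at least one Hydra head
# among its 20 XOR-closest peers?
peers = [f"peer-{i}" for i in range(500)]
hydra_heads = {f"peer-{i}" for i in range(0, 500, 5)}  # pretend heads

covered = sum(1 for p in peers if hydra_heads & set(k_closest(p, peers)))
print(f"{covered / len(peers):.0%} of peers have a Hydra head nearby")
```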
Next, let's start with the peer IDs. The Hydras, besides all the provider records, also know everything about the peer records.
A
We were able to map the peer IDs, via their network addresses, to different geographic locations. Sometimes peers advertise multiple IP addresses that can be associated with different countries; I could advertise one IP address that maps to, I don't know, the US, and another one that maps to France or so. But this is the minority. This is just a verification step here, like a sanity check.
A
Basically, most of the time, like 95% of the peers, peers only advertise addresses that can be associated with a single country, and we will focus only on this big bar in the next slides. And this is the peer ID distribution: most of the peers that actually provide content sit in the US. That's around, I don't know, 45%, maybe roughly almost 50 percent of the peers. Then we have a steep drop, and the next country is Korea, then China, Germany, Great Britain, and so on.
A
This is something that we also talked about, and someone asked me yesterday: how many providers do CIDs have on average? Here we can see that around 85 percent of CIDs have only a single peer ID as the provider, then around 10 percent have two providers, around three to five percent have three providers, and so on.
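The replication histogram behind these numbers can be computed from the provider records directly. A tiny sketch with made-up (CID, peer ID) pairs, showing how the "X% of CIDs have N providers" figures come about:

```python
from collections import Counter

# Hypothetical provider records as (CID, peer ID) pairs.
records = {
    ("cid-1", "peerA"),
    ("cid-2", "peerA"), ("cid-2", "peerB"),
    ("cid-3", "peerC"),
    ("cid-4", "peerA"), ("cid-4", "peerB"), ("cid-4", "peerC"),
}

# Count distinct providers per CID, then histogram the counts.
providers_per_cid = Counter(cid for cid, _ in records)
histogram = Counter(providers_per_cid.values())  # replication factor -> #CIDs
total = len(providers_per_cid)
for n, count in sorted(histogram.items()):
    print(f"{count / total:.0%} of CIDs have {n} provider(s)")
```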
A
So
the
vast
majority
again
is
just
provided
by
a
single
prid
on
a
single
machine
which
yeah,
maybe
good
or
or
not,
probably
would
be
better
if
they
were
distributed,
and
then
we
can
go
ahead
and
disassociate,
as
I
said,
the
CID
with
their
countries-
and
this
is
a
little
bit
more
ambiguous,
as
you
can
see
so
most
of
the
cids
again
can
be
Associated
to
one
country,
but
then
there
are
some
cids
that
can
be
Associated
to
multiple
countries
and
in
the
future.
A
So
in
the
coming
slides
again,
we
will
focus
only
on
this
big
bar
on
the
left,
so
the
ones
where
we
have
an
unambiguous
Association
to
a
country-
and
this
is
the
distribution
we
wanted
to
have.
So
this
means
that
this
just
shows
on
the
left.
So
the
y-axis
here
is
the
percentage
of
all
cids
and
that
are
associated
to
the
country
that
you
can
see
at
the
bottom
and
you
can
see
on
the
left.
A
The
vast
majority
again
is
provided
from
the
US,
like
55
of
all
distinct
cids
that
we
find
in
the
network.
Are
you
have
a
provider
that
sit
in
the
US
and
then
the
second,
the
second
prominent
one
is
the
Netherlands
and
then
France
and
and
Germany,
and
this
this
is
actually
what
we
wanted
to
find
out.
A
Then
we
can
also
check
how
many
cids
there
are
per
day
and
you
can
see
on
the
y-axis
again.
This
goes
from
zero
to
one
billion,
so
the
blue
bars
show
the
number
of
distinct
cids
at
one
particular
point
in
time,
and
this
shows
we
have
1
billion
cids
or
content
chunks.
If
you
want-
and
if
you
multiply
this
with
the
content
or
sorry
with
the
chunk
size,
we
end
up
at
around
120
terabytes
of
data,
which
is
just
an
approximation,
but
still
this
would
be
the
number.
A
This
is
the
amount
of
data
that
would
be
addressable
through
the
DHT,
so
just
to
have
a
ballpark
there
as
well
and
then
what
we
did
is
we
compared
the
this
number
of
distinct
cids
with
the
previous
day.
This
is
the
orange
bar
there
and,
as
you
can
see,
this
is
almost
half
half
then
as
like
it's
only
around
50
of
the
blue
bars
there,
and
this
means
that
50
of
all
cids
leave
the
network
within
24
hours,
but
also
50
of
new
cids
join
the
network.
A
So
there's
a
pretty
high
churn
of
cids,
which
is
well
for
me.
It
was
quite
unexpected,
so
this
means
that
there's
either
an
application
that
produces
a
lot
of
ephemeral,
content
or
there's
something
inherently
some
inherent
property
to
the
network.
A
That
makes
this
yeah
proper
property
apparent
in
these
graphs
here
and
then
the
green
graph
here
is
again
the
the
the
the
intersection
of
the
second
previous
day,
which
is
slightly
less
than
the
than
the
orange
bars,
but
but
not
as
pronounced
as
the
previous
drop,
and
this
is
kind
of
the
CID
churned
that
you
can
see
there
that
we
also
wanted
to
know,
and
what
we
can
do
now
is
also
check.
The
top
providers-
and
this
is
quite
concerning
I-
would
argue
here.
A
We
can
see
on
the
x-axis
different
peer
IDs,
that
I
have
truncated
and
on
the
y-axis,
the
percentage
of
CI
of
total
unique
cids
that
they
provide,
and
we
can
see
that
a
single
peer,
a
single
provider,
provides
around
13
of
all
cids
in
the
network,
and
then
it
drops
to
nine
percent,
seven
percent
and
so
on
and
so
forth.
And
if
we
add
up
the
top
10
peers
here
and
their
percentages,
they
make
up
over.
A
50% of all CIDs provided in the network. If we look at them a little closer, some top-provider forensics here (these are again the ranks, then the peer IDs, and so on), we can see that many of them I was able to map to web3.storage. But then there are a couple that I'm not sure where they belong, so we could do another forensic session and find out who they are. So 50% of all peers,
A
sorry, 50% of all provider records, are provided by these few peers. So that is probably some centralization concern; perhaps let's discuss it later. All right, the last question that we have: do the Hydras actually speed up the latency of content routing, in the sense of providing content to the network and also requesting content from the network? The top graph shows some CDFs, which I will go into in a second, without Hydras. So I did the same
A
experiment that Mikhail also did: I just disabled all the Hydras, so if I would want to request something from a Hydra, I just drop the request. At the bottom we see the same experiment, but also using the Hydras. The red box that you can see there shows the CDF of the DHT walk, which the Hydras supposedly should speed up, but this is a little hard to compare.
A
So let's look at that a little differently. Here we can see on the x-axis the different regions that we probed the network from, and on the y-axis the lookup latencies. We can see that Hydras indeed speed up content routing, but only slightly. The blue bars here are the content retrievals with Hydras and the orange bars are the content retrievals without Hydras, and even so, for the Middle Eastern node here... can you see my... yeah?
A
So from this study we could conclude that the Hydras actually don't really speed up the DHT lookup process as intended. All right, that's it. Just once again: I'm also running this hole-punching measurement campaign; sign up and join it. I will just leave this up for questions. This was my brief talk about the Hydras. Thank you.
C
Thanks, Dennis, for the amazing work again, apart from the Nebula crawler anyway. So I have a bunch of questions; I'll ask maybe a couple of them. First, on the negative cache versus not appearing: you were explaining the data set in the beginning, where you had like four different chunks, right?
A
You mean between the green one at the bottom... exactly, yeah, right. So, for the green one the Hydras actually went ahead and tried to find it but couldn't find it; for the yellow one the Hydras just didn't even try, because they had tried previously; and for the blue ones, they had it.
C
Okay,
okay,
cool
and
on
the
graph,
where
you
explained
with
multiple
dates
and
the
commonality
between
common
cids.
You
found
on
the
previous
day
yeah
this
one.
So
if
I
see
between
21
and
22,
I
see
an
increase
in
number
of
common
cids
right
right.
But
how
is
this
possible?
I
mean
if
you
take
a
an
intersection
between
the
total
number
and
the
number
that
was
appearing
on
the
previous
day.
I
assume
this
is
a
unique
CID
count.
A
It's not, it's not like the blue bar and the orange part.
C
Okay, okay, makes sense. And did you see any anomalies, like use of VPNs or IP rotation, those sorts of things?
A
No.
C
One last question on the Hydra measurements. You have done the experiments on AWS, I think, yeah? Do you think having these nodes centrally within AWS might explain why Hydra is not performing better? If you had, let's say, a node at the edge, or a user node, maybe the Hydra effect might be a bit clearer, i.e., whether the routing has better performance or not.
A
That was also part of the review feedback, but we haven't touched on that one yet. But yeah, good point; this would definitely be more realistic for the end users, I believe. Yeah, cool.
D
Thanks for the talk. On the top-providers graph that you showed: do you take into account... because I couldn't read the... sorry.
A
This is just the number of CIDs. Let me just briefly read out what it says at the top: the percentages can add up to over 100%, because single CIDs can be provided by multiple peers. Imagine there's only one CID in the whole network and it is provided by two peers; both would provide 100% of all CIDs, which would add up to 200 percent.
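The two-peer example just given can be checked numerically. A tiny sketch with hypothetical records, computing each peer's share of all distinct CIDs and showing the shares summing past 100%:

```python
# One CID provided by two peers: each peer "provides 100% of all CIDs".
records = [("cid-1", "peerA"), ("cid-1", "peerB")]
distinct_cids = {cid for cid, _ in records}

shares = {}
for peer in {p for _, p in records}:
    provided = {cid for cid, p in records if p == peer}
    shares[peer] = len(provided) / len(distinct_cids)

print(sum(shares.values()))  # prints 2.0, i.e. 200%
```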
D
Sorry, on the yellow one: the yellow and the green regions are significant. Do you know what kind of CIDs those are? Were those CIDs that used to be in the network but disappeared, or will they become available later, or didn't you check that?
A
I
have
not
checked
that
would
be
interesting
to
see
which
con,
which
cids
I
actually
provided
and
as
I
requested
and
then
couldn't
be
fine
I
found
yeah,
but
we
haven't
looked
into
that,
but
it's
super
interesting
to
do
definitely
yeah.
Okay.
Thank
you.
E
Hi. The slide at the end shows that the DHT lookup using Hydras takes about four seconds...?
A
I think that was in the backup slides here. So this is the 90th percentile, okay, and then also the 95th; maybe that's probably more representative of what you experience from the network.
A
Yep, that's how they would work, yeah. So I have not taken a deeper look at why the Hydras don't speed up the network that much, but the breakdown is basically on this graph, which I had just skimmed over: you have the DHT walk and then also the content fetch duration on the right-hand side. And within this DHT walk there are a couple of steps being performed that we could also dissect, to dig deeper into that, but yeah.
A
True, true, yeah. I would need to look up which part of the content retrieval this actually depicts, but yeah.
B
If I remember correctly, that's the whole thing. That's why, in the Middle East, the one without Hydras is faster; it doesn't really make sense otherwise. There must be something, you know, with Bitswap before and the content fetching after.
B
So
that's
for
the
whole
thing:
it's
not
for
the
DHT.
Only
I.
A
Regarding this: anyone could run a Hydra, and Protocol Labs is just running, not even one big Hydra, but, as I said, 2,000 heads across 130 or so Hydras. So you could argue it's a global service to the network, or a single point of service. But yeah, how much service does it actually give us? Yeah, I don't know. So, what do you mean by a global service?
A
Yeah, we already heard today that there are some ideas of just turning them off and watching the world burn.
B
Yeah,
that's
that's
a
kind
of
pre-cash
or
experiment
to
see
if
it's
a
safe
choice
to
do
or
then
you
know
having
a
next
part
to
the
experiment
which
is
going
to
be
more
real
and
just
starting
turning
them
off,
perhaps
gradually
and
then
eventually
have
run
some
days
without
it
to
see.
If
these
persists
or
not.
G
So
the
if
I
understand
right
the
intention
of
the
pre-fetching
is
to
find
cids
that
are
hard
to
find
in
the
network
right.
Do
you
do
you
have
a
sense
of
how
good
of
a
job
it's
doing
like
how
like
how
much
is
it,
making
it
easier
for
other
peers
in
the
network
to
find
hard
to
find
cids.
G
There
so
it's
going
to
search
for
cids
that
it
doesn't
know
about
right
and
then
it
will
cache
them
which
improves
the
availability.
Yeah,
like
the
knowledge
of
that
CID
and
the
DHT
right
like.
A
In theory I would say so, but as we can see on this graph, this barely happens. It's just the orange strip at the top where this actually works. That orange strip says: someone requested something from the Hydras, the Hydras didn't know about it, the Hydras went ahead, fetched it from the network, and then saved it to serve it to others. But this is almost negligible compared to the rest of the requests, right?
A
Okay, yes, yes, okay.
B
Yeah,
so
so
this
is
basically
the
orange
money
showing
how
many
are
the
say.
One-Timers,
perhaps
you
know
in
caching
kind
of
terminology
is
how
many
users,
you
know
are
asking
for
a
piece
of
content
just
once
and
never
again,
so
it
doesn't
make
sense
to
Cache
anyway
right
so
I
guess
that's
what
it
is.
So
it
would
be
interesting
to
see
you
know.
Okay,
the
hydras
are
fetching
this.
How
many
more
peers
ask
for
that
yeah
yeah
to
see
if
it,
if
it's
worth
the
effort,
yeah
exactly
yeah,
yeah.