From YouTube: 2023-04-04 IPFS Content Routing WG #8
Description
IPNI go-libipni API refactoring major release v0.6; DHT refactor and double-hash implementation; IPFS autodiscovery of IPNI nodes and ranking.
A: All right, y'all, welcome to IPFS Content Routing Working Group number eight. We didn't meet last time; we gave everybody a little bit of time to catch up, since everybody's preparing for IPFS Thing. But this will be our last Content Routing Working Group before IPFS Thing, and I think one good idea we should consider, since the majority of us will be in Brussels, is to have an impromptu Content Routing Working Group meeting while we're there. Even though we're doing a lot of Content Routing Working Group calls, it'll be great to see you all in person and actually go over some of this stuff together.
A: I went ahead and preliminarily put some updates for the IPNI team up. While I'm running through those, feel free to throw in any representation for your team. Looks like... I don't think we have our folks from... looks like Gui, you may be holding down the fort today.
A: I put the changelog here, as well as an item that jumps out at me as pretty important for people to be aware of: the API refactor. I think that's probably one of the bigger stories that comes out of our major release. Does anybody else here from IPNI feel like there are other points relevant to content routing that really should stand out, that I should be calling out here? Hey, Lidel.
A: All right, I'm going to keep running. Lidel, just to catch you up: we cut a major release for the network indexer, so IPNI is up to version 0.6.
A: So that's kind of the big theme: we're hoping to come to IPFS Thing with a very strong migration story that we can then share with the other indexer operators, to give them the benefit of our lessons learned, give them an opportunity to ask us any questions they might have, and also to make our presentations a little livelier.
A: Since we can say these are things we've already done, not things we're in the process of doing. There are a lot of little bugs and challenges that come with this type of migration, so we're still wrestling with some of that, but Ivan's leading the charge. And then, thank you, Lidel, for your comments on my PR. For anyone that wasn't following, I proposed a bunch of language changes to IPIP-337 on the basis of the most recent Reframe updates.
A: Some of the links are dead and things like that, but Lidel reminded me of good operating behavior with IPIPs: we don't necessarily want to change the history on those. Maybe a comment is more appropriate, so that people can see the history better, and instead maybe just propose a change to the links that are broken. I'll definitely follow that advice today; Lidel, thank you for that. And then, likewise, we are... we have been... actually, we may no longer be...
A: It sounds like, as of this morning (thanks, Sadeem), we're chasing providers of the unretrievable CIDs, the unretrievables being surfaced by cascaded DHT lookups, to enable Saturn lookups. And it sounds like we made some progress with one provider that I hadn't quite caught, where we seem to have had them resolve the bug on their end that was causing their content to hide from us. So: forward progress.
A: We've talked a little bit in the team about attempting to identify, or whether it's appropriate to try to identify, the sources of these other peering-connection issues that are resulting in CIDs we can't find, or whether we simply provide guidance and let the users come to us. We're still working through the best way to approach that right now. But I'll pass it along to either Lidel or Gui: if y'all want to provide just a quick update, anything that's relevant to this group.
D: Yeah, sure. I think I've updated the document, but I cannot see it on your screen. Anyway, I'll go ahead.
D: On my side, I've been working on refactoring the DHT and planning the change for the double hash, so: working on the double-hash DHT implementation. A common topic with the IPFS team has been the migration to the double-hash DHT. We've set up a weekly call where we discuss the different possibilities for the migration strategy, and we want to present the details at IPFS Thing to get everyone on board and let everyone know how the change is going to happen.
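(For readers following along: the double-hash design moves the DHT from indexing raw CID multihashes to indexing a second hash derived from them, so DHT servers can answer lookups without seeing the plaintext CID. A minimal sketch of the idea in Go, assuming the `CR_DOUBLEHASH` domain-separation prefix from the spec draft; the exact derivation should be checked against the spec itself:)

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// doubleHash derives the key a double-hash DHT server would index in
// place of the plaintext multihash. Illustrative only: the prefix and
// encoding follow the spec draft as best understood, not verbatim.
func doubleHash(multihash []byte) [32]byte {
	preimage := append([]byte("CR_DOUBLEHASH"), multihash...)
	return sha256.Sum256(preimage)
}

func main() {
	mh := []byte("placeholder-multihash-bytes") // stand-in for real multihash bytes
	fmt.Printf("second hash: %x\n", doubleHash(mh))
}
```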
E: Loosely related: this week we've been deploying an initial version of the graph API as part of the Rhea project, and it's loosely connected to content routing, because we are now passing a full content path to the endpoint as an additional content-routing hint. I think the long-term...
E: ...takeaway from this work for content routing will be figuring out how to have a similar signaling capability for the situation where a user makes a request for a subgraph.
E: Kind of like requesting a subgraph recursively, or to some depth, but still passing the information about the parent content path, the information that the subgraph is part of the parent graph. That seems to be a selector that is very commonly used on the web.
F: Yeah, I think that's sort of a long-standing thing: there's a bunch of implicitness around the fact that most of our advertisements everywhere look like blocks, but they're parts of graphs. Could we maybe have fewer of those advertisements? How do we want those things to look? And when people actually start looking for things, should they then have to walk back up the graph?
F: So there's some of that we can maybe use to try and decrease the amount of storage stress in IPNI-type systems, but that'll come later, and it also comes with the flexibility that we get from the metadata field. That's something we can look into later.
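(The flexibility being referenced is the free-form metadata carried in IPNI advertisements. Roughly, an advertisement in go-libipni looks like the sketch below; the field names follow the ingest/schema package from memory, and in the real library the link fields are IPLD links, so verify against the v0.6 release:)

```go
package ipni

// Approximate shape of an IPNI advertisement. Not authoritative.
type Advertisement struct {
	PreviousID []byte   // link to the prior advertisement in the chain
	Provider   string   // provider's peer ID
	Addresses  []string // provider's multiaddrs
	Entries    []byte   // link to the chunks of advertised multihashes
	ContextID  []byte   // provider-chosen grouping key for later updates
	Metadata   []byte   // free-form: retrieval protocol hints, and so on
	IsRm       bool     // true if this advertisement retracts entries
	Signature  []byte
}
```

The DHT, by contrast, stores essentially just a peer ID per provider record, which is the contrast drawn next.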
F: Yeah, I mean, IPNI just has a "stuff" field, right, which allows us to experiment with what stuff should go in there, as opposed to the current DHT, where there's just one field, which is the peer ID, and that's it. The only other thing I can think of here is related to some work that's been going on with trying to do service-worker things in browsers to load data.
F: We've noticed, we'll say, some issues dealing with IPNI, where we've had to punch through some of the caching, basically because the 200s are cached for about an hour. So if you forget to turn on WebTransport, and then you reboot your node and turn on WebTransport, it won't work for an hour for you. That kind of thing is a little unfortunate.
F: Probably peer routing would resolve this. And yeah, I think that's most of it. Some of these other PRs that have been a little back-burnered, because everyone has things to do before IPFS Thing, we may need to bring back once everyone's on home ground again.
A: This is a good call-out; thanks for putting that thought together. I definitely don't want to lose sight of it, but at the same time, the last thing I want to do is apply any pressure right now, just because everybody's getting ready for this event, and it just doesn't seem the time for it.
F: So yeah, it's just things that Russell and I are going to have to hack around. I'll probably stand up another Routing V1 instance instead of using cid.contact, because we can't control how some of that operates, which is fine; it's sort of the point of these APIs that you can do this. But, you know, it's like when you hack something together for a demo.
F: The responsible thing is to let the people upstream know what you've hacked around, so that they're aware.
A: Definitely. I'm interested to hear feedback from the team; I'll let the group say. Should we take this topic as one of our top-of-mind items to discuss towards the end of the meeting, and get through the topics we've defined? Or do we find it important enough to jump into right away while we're in the thought process? Masih, did you have any feelings about whether this is a subject we want to dig into?
F: I would say we have a way of dealing with our thing in the short term. Let's clear anything else that's top of mind for people, and if we have time, then we can deal with it. Cool.
A: All right. And I don't think... yeah, we don't have Bifrost representation.
A: I'll check in with them. They've been pretty consistently providing updates on everything going on over with the decentralized gateways program, and I've been keeping up with them regarding the state of the rollout on GitHub. Presently my understanding is that v0.19 is rolled out across all of their clusters, and that includes cid.contact hard-coded, doesn't it?
A: I actually put an action item to validate that this is the case. From looking at GitHub, I get the impression that that is what's been merged to date, but I'll take it as an action item to ensure that it's actually the case, and I will leave a note in our summary at the end of this call, once I get that feedback from them.
B: The issue is still open in bifrost-infra, which makes me assume that it is not completed.
A: I'll double-check and find out. If they haven't pushed it out to their cluster, then it'll be good to know when they will, because I think Steve pointed out that all of the blocking challenges with the Kubo release have been resolved, so I'm not sure what else would potentially be holding it up. Maybe it's a procedural, deployment-pipeline type of process that they're going through.
A: I'm just going to go ahead and split-screen here, in the comments of this issue.
A: George posted this concern about the time to first byte, which started back on the 22nd, I think. I wasn't sure if we had actually resolved the concern that the direct impact of using cid.contact was resulting in this, or whether or not there...
B: So, we don't have Gus or George here, who are the only two people who will have thoughts on this. The delegated routing framework should not increase [time to first byte] as you add an additional parallel routing system; that was a bug in the delegated routing client code. I believe Gus has been working on it, and I believe that's why cid.contact is not turned on on the gateways yet. But we need them, I think, to comment on where they are there.
A: I'll just say this for the call: my question was, another thing that came out of that topic of conversation is how we can track an estimated percentage of the cid.contact-enabled network, and at what point we'd be willing to dial down the Hydras. It looked like there were a lot of questions around how much network coverage we have to have before that's an appropriate action to take, but it looks like Lidel's already answered that question.
C: When we talk about coverage, it breaks into two things. One is lookup coverage: how many people are using cid.contact to look things up. The other is publication coverage: how many people are publishing into cid.contact. For publication coverage, we crawl the Filecoin network to see how many nodes are exposing the protocols. As for the IPFS network, I believe the only nodes publishing to IPNI from IPFS are the collab clusters, which is unintegrated. For lookup coverage, like Lidel mentioned...
F: Ideally we'd have some number, like: if X percent are upgraded, it's good to go. But if it turns out it's been, like, three decades and there's still 20% of people who haven't upgraded, I think it's time, right? So I think there's a number, or an amount of time, or a number of releases that have passed, that would probably let us feel comfortable doing this, especially if we can accompany it with a blog post of, like: yo, it happened.
A: That's fair, totally fair, and thanks for clarifying that. I haven't done enough deep-diving on Hydra functionality to feel confident that publication coverage wasn't somehow a component of the service they were providing, so that's good to hear; that clarifies the problem for sure. And I don't know that there's pressure coming about a timeline for Hydra drawdown, but I do know that it's a somewhat resource-intensive service, and anywhere we can...
F: It happens to be that it hides within the existing decentralized infrastructure, which is what makes it easy to use, but there's no reason to do that. It just adds extra latency to your query, and extra infra cost for us.
D: Yeah, so there have been multiple things happening at once, including an incident with a lot of unresponsive nodes within the DHT. The conclusion was that the Hydras don't have a big impact, at least on the DHT, which means they are now used only for bridging. But from our observation... so again, there have been multiple events at once: we had been predicting one scenario, and then this incident happened.
D: It is now solved, and the numbers we observe now are what we expected before, so I would say that we're good without the Hydras.
A: So I do think, like, how we affect the users of the network... Dean, this is a pretty important point. We definitely want to draw a fair line in the sand that doesn't just stir everyone up, but let me iterate on this concept a little bit so that maybe we can have that discussion.
A: I'll talk to Steve about it a little and see what his perspective is, but it would be nice if we drew a target somewhere in time where we can wrap these up. I think we actually get a lot of benefit from pursuing that effort, and hopefully it's not too much work: a low-cost, high-reward type of scenario, and those are always a good place to focus a little bit of energy. Then there's the issue I put here about the production gateways using the indexer for lookups.
A: I think this was more of a Gus topic, but I'll pitch it out here anyway. Steve was asking about whether there was any work that needed to be done on the IPNI side to support these production gateway lookups occurring, and our perspective, after talking about it, was that there probably wasn't anything we needed to do differently.
A: However, I just wanted to follow up with the whole crew here: does anybody have the perspective that IPNI should be doing something to support these changes? Is there something we need to prepare for, or...
F: ...and so we're already, like, 30-percenting that at the moment. So it should be fine; there shouldn't have to be anything, you know, standing out on the IPNI side that you wouldn't have noticed otherwise.
A: I'll circle back with Gus and Steve and just check in with them. I don't think there's anything here, but I also want to make sure we're not accidentally blocking something just because we're not aware of something they need. And then I did catch this routing-table health discussion in the IPFS content routing working group channel. I think some of this discussion probably...
A: ...was happening... it didn't look to me like we got ourselves to a point where everybody left with resolution, but I wanted to pitch it here to the group.
A: This is the thread that Dean posted, asking about analysis tools for making sure that the DHT client implementations are functioning properly. When I went through that thread, there were a lot of requests for support going on, but also a lot of data that came out of that discussion.
A: The endpoint I took as a summary of that discussion was that we're going to make an attempt to run an experiment (I'm summarizing here): we intend to run an experiment on routing-table health post this "ECS migration". I hadn't heard that term before, the ECS migration; I'm not sure what that is. But we're going to run that experiment...
A: ...after that event, and that's going to drive any action associated with improving DHT routing-table health, I'm assuming. Is that accurate, Gui? Does that sound right?
D: Yeah, it's a study on routing-table health that I did, I think, last year, and the plan at ProbeLab is to set up continuous measurement infrastructure, to replicate any measurement we've done in the past and just continuously monitor things. So that's definitely something we plan on measuring continuously.
C: Turfin, is this about the full-RT measurements that Adin and...
F: The request was to ask our friends who have been doing a bunch of this measurement stuff to help the boogeyman go back into the shadows. There's a bunch of these things where people try to use them, something goes wrong, and they're like: oh, I wonder if X is the problem. The hope is that with some continuous measurement here, we will know whether, in fact, X is the problem, or whether something else is going on. And that investment will probably pay off fairly quickly, based on the amount of time people spend trying to deal with these unknowns.
F: So one of these is: is the accelerated client finding all the stuff that the regular client can find? Have the parameters changed since that code was deployed? Things like that.
F: That will also help build confidence, so that when people show up to cid.contact and say, "well, I put in cascade equals DHT and I'm not getting stuff, yo, what's happening?", you can say with confidence: yeah, that's because the person didn't provide the data, not because we're having measurement problems. Or, even if you're not there, you could say: well, I know the full-RT client should work, so let me make sure that I didn't do something similar on the...
F: I think it's just making sure we appropriately attribute fault. That was also something that came up in the discussion yesterday, around helping users understand when the reason something couldn't be found is that the person who was advertising or storing the content had a problem, as opposed to, like, the protocol being slow because it's searching everybody on the internet to see if they can find your stuff.
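(The "cascade equals DHT" lookup mentioned above is, roughly, an HTTP query against a delegated-routing endpoint with a cascade label telling the indexer to also consult the DHT. A minimal sketch using only the Go standard library; the `/routing/v1` path follows the IPIP-337 style API, and the exact `cascade=ipfs-dht` label and response field names are assumptions to verify against the deployed service:)

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

// findProviders asks a delegated-routing endpoint for providers of a
// CID, cascading the lookup to the DHT as well.
func findProviders(endpoint, cidStr string) error {
	url := fmt.Sprintf("%s/routing/v1/providers/%s?cascade=ipfs-dht", endpoint, cidStr)
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	// IPIP-337-style response: a list of provider records, each with a
	// peer ID, addresses, and protocol-specific metadata.
	var out struct {
		Providers []map[string]any
	}
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		return err
	}
	for _, p := range out.Providers {
		fmt.Println(p["ID"], p["Protocol"])
	}
	return nil
}

func main() {
	_ = findProviders("https://cid.contact", "bafy...") // placeholder CID
}
```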
A: That's a really good point to bring forward to the group. I think the summary takeaway is that we'd like to be able to provide users with visibility into where things are slow or failing, just like you might if you were trying to navigate routing issues elsewhere on web 2, right?
A: Here is the Slack thread I was referencing. The reason I called it out is that there were a few varying threads there, and the summary takeaway I got was that there was going to be an experiment as a result of this. But I wasn't sure whether measurements had been requested in that thread that hadn't actually been serviced; a lot were provided, but I thought there might also still be open questions.
A: I think that's something that will benefit all of us, so I'm excited to hear it. I won't belabor that topic any more.
A: I just wanted to use this group: if somebody needs help with measurements from one part of the network or the other, or is trying to prepare for a presentation and wants details, don't be shy about leveraging the crowd. Specifically, if you have requests right now, go ahead and put them out there; we'll get a list going and we can try to help each other out, so that we all put our best foot forward.
A: If not, DM me or drop it in the IPFS content routing working group channel, and I'll try to help coordinate that kind of stuff as well. And then I just wanted to throw out there: from our perspective, autodiscovery is potentially a very big win on the IPNI side of the house, and I wanted to know, from everybody's perspective...
A: ...when is an appropriate time to revisit that topic and get into the guts of the design? I know we put together a proposal in the past; I'm not sure if that was well received, or whether there was collaboration on it. Do we need to revise it at all? And how does this go from an idea to an IPIP or something? What are the next steps?
F: Yeah, the main thoughts I have are that the most important part of autodiscovery is that there is more than one party to discover, and making sure that we have enough reliable other partners that this is even reasonable to do.
F: If we write code that does autodiscovery, and it discovers five indexers, but only one of them has any reasonable amount of data, so we're just going to end up choosing, say, cid.contact all the time, then it feels like we could be spending effort elsewhere to grow the network, rather than on this.
F: Whereas if you have those people already, then it seems reasonable to do this. I could see people complaining, maybe, about stuff before the double-hash migration: maybe people would complain about sending their requests off to these other entities that they trust less. I'm not sure if they're going to care.
B: The other note here, though, is that there is a chicken-and-egg problem, which is that it's hard to get people to pay a huge amount to maintain infrastructure that no one queries. So without a mechanism by which they get some traffic, and some value seen in providing it, you get these questions of: well, why am I paying this monthly bill for zero traffic? When are you sending traffic to me?
A: Let me jump in to follow those comments. Dean, I really appreciate that perspective. It's not one I think we've surfaced before, but it almost seems obvious once you've made it; I totally get it and I'm sympathetic to it, for sure. I'm working with the other index operators right now, basically to prepare them for the idea that they would be supporting this type of traffic. So we're going to have, kind of like the storage provider...
A: I'll share a link to that with folks here who are curious about it, so you can get a sense of what we're trying to accomplish this quarter. But the big story to take away from it is monitoring and synchronizing across indexer instances.
A: Consistency, as well as some efforts towards trust, and I think reliability, are all on the menu right now as things we're hoping to accomplish. It's a bit aspirational and it's a lot of work, but I think we're going to make a lot of progress with it. We really expect, over the course of the next few months, to have visibility and insight into what each of these index operators is doing, and simultaneously I'm going to be preparing them for the concept that we're going to be passing traffic.
A: So, working on this incentive discussion with them: how are we going to keep you online, and what are the benefits you get from doing this? Making sure that we've got some solid partners that are really committed to supporting this effort. So, considering that timeline: how do we reduce the variance in our chicken-and-egg, to be as logarithmically close to one another as possible until we reach the singularity, I think?
F: We could probably start to experiment with this already. Masih raised this in the PL IPNI channel earlier today, or yesterday, around the fact that there's now a delegated DHT instance that you can query. So people, even if they already have the ability to do DHT lookups on their machines, might choose to ask someone else to do it if it's going to help them out with performance: either latency or resource consumption.
F: They might choose to do that, but they'd probably only do it if they knew it was as reliable as running it locally. So we could already use those as two separate routers that we compare against, which would mimic the logic you would apply to two separate IPNI instances: are these two effectively comparable?
B: The other thought here, which I think we have discussed a couple of times, is using Rhea/Lassie as a staging ground, especially if we get caches in addition to full providers. We may find value in having Lassie discover the caches near it, rather than going all the way back to a full index.
A: Let me summarize this discussion a little bit and try to come up with action items for what a path forward might look like; I appreciate y'all's perspectives on it. I think it's definitely a big step in the right direction. If we can make a little bit of progress on autodiscovery, it leads us to that next place where we can start to let this thing run, which is a big goal, but would really be great.
A: Then, Gui, did you add these items to the topics? ...Go ahead.
D: So it's more to start the discussion. Currently, the way the provide and reprovide operations work is that the IPFS implementation, so for instance Kubo, is responsible for reproviding the CIDs periodically, for instance to the DHT. I think that's the wrong abstraction. I think the content routers should generally have an interface to start and stop advertising some content, because the DHT and IPNI, for instance, have totally different provide (or publish, or advertise) mechanisms, and I don't think it's the IPFS implementation's role to care about this. So IPFS, or Kubo, should just call: OK, start providing this CID; and maybe after some time, when we don't want to provide it anymore: OK, stop providing it; or: give me the list of content that is being provided. Then each content router will make sure the content is republished if it needs to be republished, or not. So it's an interface; we'd break the interface, basically. So it's a change.
D: I think we need to coordinate before we commit to it, but yeah, I just wanted to know what you guys think.
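(One way to read the interface being proposed; a hedged sketch, with names and signatures invented for illustration rather than taken from any existing library:)

```go
package contentrouting

import (
	"context"

	"github.com/ipfs/go-cid"
)

// ProvideSystem is a hypothetical per-router interface: the IPFS node
// calls StartProviding once, and the router (DHT, IPNI, ...) owns
// whatever periodic republishing its own protocol requires.
type ProvideSystem interface {
	// StartProviding begins advertising c until told otherwise.
	StartProviding(ctx context.Context, c cid.Cid) error
	// StopProviding withdraws the advertisement for c.
	StopProviding(ctx context.Context, c cid.Cid) error
	// ListProviding reports everything currently being advertised.
	ListProviding(ctx context.Context) ([]cid.Cid, error)
}
```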
D: For instance, I'm not really sure how the content publishing works with the indexers, but we probably don't want to use the same mechanism and republish every, I don't know, every 24 hours. We shouldn't have to.
C: So this is something I've been pondering. The main difference between the DHT and the indexer is that in the indexer there is no TTL, and what you publish is the diff of contents: "I have this now"; "additionally..."; "I no longer have this thing that I told you I had before". So the diff chain keeps growing.
C: What I've been thinking about is whether something like a TTL is needed on the indexer side to accommodate a whole bunch of new use cases. The example I often bore Turfin and the folks on the IPNI team with is: imagine I wanted to publish weather information. This is information that expires naturally, and it changes all the time; it's high-churn information. If you want to publish that using the indexer, it'll probably be somewhat noisy.
C: People would still look up old data; if my node goes down or something, they'd probably get the wrong measure of the temperature right now in the town I'm living in. But if you had a TTL in the indexer, it would basically enable you to provide something for a period of time, and then, if you don't re-provide it, it just disappears. And the rationale for having this in IPNI as well as the DHT is that you can provide this at scale.
F: Sorry, go ahead. ...Yeah, I was just thinking: it's not so much about the fact that it expires as about the responsibility, who cares about it expiring. Right now you do sort of have, if I understand, a TTL already, not per record, but you're like: well...
F: ...if you don't talk to me for, like, a week or something, I'm just going to delete all your records and garbage-collect them; or at least you're allowed to do that. (Exactly, yeah, right.) And so the question is: why do I have to go through a bunch of effort to basically call the delete function on all my records, in order that you get to save some space? It doesn't really do anything for me when I do this. The TTL is: you get to shift the burden a little bit. You say: I don't believe that you're going to call the delete function on me, because that's a pain for you to manage; instead I'm going to put a TTL on you and force you to interact with me every so often. It's a bandwidth-versus-storage trade-off, where I force you to interact with me in order to make sure I'm not wasting storage space, even if I waste some bandwidth.
C: Yeah, I would say that's largely true. The only thing I would add is that in the indexer interaction, the barrier for staying discoverable is just one connection; you don't even need to provide anything. As long as you're contactable within a week, that is good enough, whereas it's slightly different in the DHT, where you need to explicitly republish the information for you to appear.
F: What I'm saying is that you're choosing the amount of work someone needs to do to prove that they're still viable, right? If I'm doing the weather thing, what encourages me to use a TTL when I could just say "store it forever"? What's the incentive for me to choose one over the other?
C: The point of having this TTL mechanism is to reduce a whole bunch of complexity on the backend side when it comes to deletion, regardless of whether the client does the right thing or not, because you can then optimize the storage. You can have mechanisms for, what is it, worst-case reasoning: reasoning about the worst case of storage that you might need in order to store the information. Say the rate of publication of records never goes beyond this many per second in a day...
C: ...and if I have an expiry of a week, then I can have a reasonable assumption about how much storage I need. Now compare that to running an indexer in a case where publications live forever and deletion is so difficult that you practically can't delete: then it's really difficult to reason about how much storage I need if I want to run this thing for, like, five years. I basically have to arrange periodic cleanup to make sure there's no garbage, or I have to charge for it, yeah.
C: The original idea was to provide the lowest possible barrier for adoption of having IPFS nodes publish directly into IPNI, because right now, if you want to do that, you have to have a long-running process that speaks either GraphSync or HTTP, and it has to be publicly contactable, or reachable via some relay. Compare that to a system where providing a record into IPNI is the equivalent of calling provide on the DHT interface: you just say "provide this", and then it works exactly the same way.
C: Then you open up a whole bunch of opportunities for new things to be built on top of the system, because you have a scalable way to have the same sort of time-based records that you already provide on the DHT, but on IPNI, an alternative routing system. That was the idea. I'm also conscious that I'm moving away from Gui's original question; Gui, please forgive me for stealing time, and do cut me off.
D: So we could create an interface, for instance a content routing provider interface, and each content routing system needs to have one: one for the DHT, one for the indexer, and so on. And we'd have a shared interface that we need to agree upon. For instance, do we do "start providing"/"stop providing", or do we do "provide for, like, three days" in the case of the weather? But yeah, we need to agree on this common provider interface.
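(Extending the hypothetical sketch from above, the "provide for three days" variant is just a TTL argument on the provide call. Again, these names are invented for illustration and build on the ProvideSystem interface sketched earlier:)

```go
package contentrouting

import (
	"context"
	"time"

	"github.com/ipfs/go-cid"
)

// ProvideWithTTL extends the hypothetical ProvideSystem with a
// time-bounded provide, as in "provide this for three days": after ttl
// elapses without a refresh, the record simply expires, so no explicit
// delete call is ever needed.
type ProvideWithTTL interface {
	ProvideSystem
	ProvideFor(ctx context.Context, c cid.Cid, ttl time.Duration) error
}
```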
C: Yeah, I agree. I think the HTTP delegated routing is sort of doing this. You can imagine a nested layer of delegation where, when you tell it to provide, it periodically calls another layer of delegation, which provides on a periodic basis; you've essentially built the same sort of functionality. But yeah, in terms of interfaces, I would love the delegated routing interfaces to evolve such that we can have swappable pieces, you know, so we can experiment with different systems.
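(A hedged sketch of that nesting: a periodic re-provider layered over any one-shot provide call, which is roughly how a TTL-style record stays alive. Illustrative only:)

```go
package contentrouting

import (
	"context"
	"log"
	"time"

	"github.com/ipfs/go-cid"
)

// reprovideLoop keeps c alive on a router whose records expire after
// ttl, by re-issuing a one-shot provide at half the TTL. The provide
// argument could be a DHT provide, an IPNI publish, or another layer
// of delegation, which is the nesting described above.
func reprovideLoop(ctx context.Context, provide func(context.Context, cid.Cid) error, c cid.Cid, ttl time.Duration) {
	ticker := time.NewTicker(ttl / 2) // refresh well before expiry
	defer ticker.Stop()
	for {
		if err := provide(ctx, c); err != nil {
			log.Printf("provide %s failed: %v", c, err) // real code would back off and retry
		}
		select {
		case <-ctx.Done():
			return
		case <-ticker.C:
		}
	}
}
```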
A: This is a good topic to end the meeting on. I think there's a lot to think about here, and there's also a lot of potential. I'll make sure that we carry this into future discussions, and also that we summarize it and make something actionable of it, so that we can make some progress towards it.
A: I really appreciate everyone joining today. These are super valuable discussions and it's always a pleasure. Looking forward to seeing everybody in Belgium who's making it there; it's going to be a great time. Have a good rest of your day, everybody. Thanks.