From YouTube: 2023-08-08 Content Routing Workgroup #17
Description
IPNI operators feedback; go DHT refactoring well underway at ProbeLab; Bifrost is skipping Kubo v0.21 to roll out v0.22; preparing to distribute traffic for IPNI (IPFS stewards supporting ambient discovery)
A
All right, so as usual, for anybody checking in here who's not familiar with this work group: this is the IPFS Content Routing work group. We cover content routing across IPFS and Filecoin, kind of bridging across both. Thanks for throwing a link to the last occurrence's notes there. That's a clever idea; I'll keep that one going, Steve.
A
We have a list of links that are good contextual reading for a lot of the content-routing-oriented decisions we're currently at, and we start every week with an update from the contributing teams that participate in and make up this work group. I'll go ahead and start with the IPNI team updates, and then we can pass it off to either ProbeLab or the IPFS team.
A
So, over the course of the last two weeks, the IPNI team hosted an operators work group. For everybody on the call: the IPNI implementation at cid.contact is the primary implementation, which is hosted by Protocol Labs. It's the backbone of lookups for the network, at least in the sense that it's a reliable source that you can always go to, and it now covers the majority of the network.
A
Over half the network is able to be looked up on cid.contact, and it's one implementation. So we have operators who are volunteers, mostly Filecoin storage providers, who are attempting to help us distribute that network of lookups. They are running their own instances of IPNI, and they're publicly accessible; you can perform lookups on them.
A
However, we have seven of those. Currently we have a few parties that are interested in potentially also joining the crew and starting to host their own IPNI instance. So this work group is an opportunity for them to learn more about operating IPNI, and also for this community of operators to give us feedback about issues they're having with the implementation they're running, to get advice, or to understand the roadmap and where we're going with this thing, so that they can find out about...
A
...what future implementations might have, or the work that the team has done. The summary of feedback from that group: the majority of these operators are running on very old implementations, so old that I would probably call them antiquated. The conclusion we arrived at in this work group is that we're going to try to get the majority of them updated to the new version.
A
That's the stable version presently running on cid.contact, which is backed by PebbleDB. So all these folks are going to start the process of scrapping their old implementations and starting anew with a fresh, handy-dandy CLI that they can use. They're excited about it within that group.
A
We have some opportunities that we're going to take advantage of. We wanted to test a bare-metal deployment of IPNI, but we don't necessarily want to run it ourselves, because we're already running one, and we don't want to provide this kind of support to the whole network; we want the community to take ownership. So we have two interested parties that are potentially going to set up bare-metal instances, and we're working with them to understand...
A
...the scope of the hardware they allocate for this and the constraints they're working within. Ultimately we'd like to understand the cost, because we believe that running a bare-metal instance of IPNI should be pretty inexpensive, and that's a good business case to get more people on board and start distributing lookups across the network.
A
The other thing that came out of this was that we're going to use our CAR mirror to get at least one of these partners synced up with the current state of cid.contact.
A
That means we'll help them ingest all of the advertisements, up to the current ones, so that once they're through the ingestion process they have (it's a little bit nuanced of a term, but I'm just going to use it for the sake of everyone here) the whole network, so that you could look anything up on their instance.
A
That means they'll be caught up with us, and so you should theoretically be able to get results to your queries of where to find CIDs as reliably off their instance as off ours.
A
It's something that the team has wanted to do for a long time, and we've finally managed to get it to the top of our priority list. This is the advertisement protocol that we use to get advertisements from providers, and I've put a link to the doc there. I know a number of you had mentioned that you were really curious to see how we were going about it, so please feel welcome to go read that. Andrew put it together and did an exceptional job.
A
This is really well thought through. The way this will execute is that we will write this protocol and then deploy it via the provider in Boost. So there will be an update that goes into Boost, an updated provider which includes this new IPNI sync protocol, and that will enable us to get away from solely receiving our advertisement syncs via GraphSync.
A
We can now get them via HTTP. And then we have kind of an ongoing wrestling match with IPNI that people outside of the team probably don't notice too much, because it's designed so redundantly that it maintains 100% uptime, much to the credit of the team.
A
We're actually running a lot of parallel infrastructure, and we're constantly right-sizing it to ensure that we're maintaining a minimal cost in relation to reliable operation. So we're always working continuously on setting up alerts around some of our DBs, so that we can more easily recognize when we've grown beyond the... what's the saying, when your legs are too long for your rug? But we're constantly wrestling with that to increase our stability, which is one of our primary goals this quarter.
A
So that's the IPNI team's last two weeks in a nutshell. I'd love to pass it on to either IPFS or ProbeLab.
C
I'm happy to go as representative for ProbeLab, and then maybe I'll let lidel go; I know he has been filling in for IPFS. This isn't full coverage for ProbeLab, but just a couple of things I wanted to share.
As people in this working group are aware, the ProbeLab team has kind of taken on the maintainership mantle for the Go-based Kademlia DHT implementation. They've been working on that for quite a while to figure out what to do, how to refactor, and where to go.
C
There have been presentations at IPFS Thing, in sync-ups, etc. about this, but I don't think we had really put together a coherent story of "this is what we're doing, and why, and where we're going," especially since this is turning out to be a pretty sizable amount of work. So a draft blog post was put together; I've linked to it there. It's not fully public yet, but I think anyone on this call should be able to click into it.
C
We got that reviewed with Juan yesterday and got some feedback that we need to incorporate. Some of the bigger items are things like: how are we thinking beyond Kademlia in terms of DHTs? Is there really a market for a more basic DHT that's focused on peer routing, not content routing, but meant to help all of web3? It sounds good on paper, but would others really adopt it? So there's some product work there, and some feedback around...
C
...terminology: we've been using the term "composable DHT," but some worry that we're thinking about composability within a Kademlia context, and that we're kind of squatting on a term that would imply a larger degree of composability than the work we're actually doing. So we either need to think bigger in scope or use more precise language. The point is: there's some feedback work.
C
We need to incorporate that before it's ready to go out to the community, but I guess I'm just pointing all this out here: the doc's there. It's not fully polished, but people are certainly welcome to look at it and give any comments. We are aware that we aren't communicating the best outwardly and publicly; we're working on it, and here's where that's happening.
C
So that's that. Again, I don't think any ProbeLab team members are here to speak directly to some of the work that's been going on, and I haven't been super close to it, but I did link to their project board, which I know they've been keeping updated with all the specific tasks they're doing. There's some good project management there, in terms of linking to milestones which list out all the child tasks. So if you're curious about the specific refactor work that's going on in go DHT land, that's the place to find it. Thanks.
D
Yeah, a very short one, but there's work in progress on exposing a Routing V1 server endpoint in Kubo as an option for people. This would be opt-in.
D
The IPNS one is in review; that one is fairly simple, because we already had the IPIP figured out. The IPIP one is in review, both the IPIP and the Boxo implementation. I mention this because we have at least two browser-related projects which would like to use a stable, HTTP-based API for all their routing needs: collapsing the complexities related to IPNI or the DHT into a single API call, and then using that for discovering other trustless gateways.
D
So the idea here is to enable trustless IPNS lookups through the existing trustless gateway API that we have. I mention it here because this is part of paying off our technical debt, namely removing Kubo RPC from places where it's used publicly. Kubo RPC was never designed to be exposed on the internet: it does not support HTTP caching, and it's tied to a specific implementation. So we want to remove use of Kubo RPC, and this will let us do that without waiting for things like Routing V1 existing in the wild.
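As a rough illustration of the delegated-routing idea being discussed, here is a minimal sketch of how a client might address the Routing V1 HTTP endpoints. The endpoint host and the CID/IPNS-name values below are made-up placeholders, not real deployments; the path shapes follow the Routing V1 spec's provider and IPNS-record lookups.

```go
package main

import (
	"fmt"
	"net/url"
)

// routingV1ProvidersURL builds a Delegated Routing V1 provider-lookup URL
// (GET /routing/v1/providers/{cid}).
func routingV1ProvidersURL(endpoint, cid string) string {
	return fmt.Sprintf("%s/routing/v1/providers/%s", endpoint, url.PathEscape(cid))
}

// routingV1IPNSURL builds a Routing V1 IPNS record URL
// (GET /routing/v1/ipns/{name}). The response is a signed IPNS record that
// the client verifies itself, which is what makes the lookup trustless.
func routingV1IPNSURL(endpoint, name string) string {
	return fmt.Sprintf("%s/routing/v1/ipns/%s", endpoint, url.PathEscape(name))
}

func main() {
	// "https://indexer.example" and the identifiers are illustrative placeholders.
	fmt.Println(routingV1ProvidersURL("https://indexer.example", "bafy-example-cid"))
	fmt.Println(routingV1IPNSURL("https://indexer.example", "k51-example-name"))
}
```

A browser client would issue a plain HTTP GET to these URLs and verify any returned records locally, rather than talking to a Kubo RPC port.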
C
That's great. I guess we need some Bifrost engagement on that last point, right? And, I guess, have you already gotten it?
D
Yeah, I mean, it's waiting for a review. The good news is that Bifrost is already running a version of Kubo that supports this, so it's just a configuration change in nginx.
E
So, we haven't rolled out v0.21 because of a bug that was reported. There was a bug that was reported on Slack, and I have lost that thread, so we were waiting for either v0.21.1 or v0.22 to come out with that bug fix. There was a bug that I remember Adin pointed out to us, but I don't remember where, and I'm trying to find the comment.
E
Yes, that's it. That's it exactly, the _redirects thing. So we were holding off until that was fixed.
C
Okay, got it. Yeah, I know that's been fixed in main (or master); it's going out in the v0.22 release. I haven't actually checked in on it; the v0.22 release is supposed to happen today. I don't know if it already has or not.
C
I think it was intended, yeah, that's true. We did say that that was in scope to be added to v0.21.1 as well. I haven't checked in on what release has happened today, so I guess either one of those should work for you as soon as they come out.
E
The other thing is, there was this ticket opened by lidel about IPIP-351, about supporting IPNS records on the delegates.
E
My main issue with that is that the preloaders are fairly loaded already, so I'm wondering how much more load this will place on them. Perhaps we will need to scale them up, or add more nodes, or, if this is really needed, not do it on the preloaders but on separate nodes.
E
Yes, thank you very much. So yeah, that's about it right now, I think.
A
Cool. Let's get feedback. I usually check the dashboard before this meeting, but I didn't today; has anybody else checked it in the last week? Are we up on our telemetry or metrics from Bifrost now, or are we pending being fully up to date with the whole cluster until, I think, this v0.21 or v0.22 update goes out? Is that right? Okay, let me just clarify that here.
A
We'll check back in on that. So, it looks like I've been stirring this subject up a bit, I think mostly just to have people thinking about it and kind of keep it top of mind for folks. But now that we've had our operators work group and we've got volunteers itching to update, one thing that I've kept bringing up is this ambient discovery, and I think...
A
...it hasn't been a burning priority, but I think it won't be a burning priority right up until suddenly it is. The point in time that we've been indexing on is when we have multiple instances of IPNI that are caught up, especially if one of those instances is operated by somebody other than Protocol Labs. That's a big opportunity for us to start distributing traffic, and we should take advantage of it.
A
We had discussed in the content routing work group a while ago that one way we might go about this is to incorporate the ability to hard-code multiple instances of IPNI, so that we could delegate traffic between them for testing, as a kind of starting iteration. Then we'd work on the bigger problem of reputation across these instances, and how we might route traffic across them, as a much bigger, more complicated work stream that we dedicate some serious planning to.
A
Does everybody still feel like that's potentially a good approach to this problem? And secondarily, the topic we should talk about is where this fits into all of our other priorities, because obviously everybody's pretty maxed out. So we have to respect the time that everyone has, and whether or not we need to hang back on this a little bit, or whether or not that's even appropriate. This is my opinion, but I want to make sure...
A
...there's a shared opinion across this group that this is a really important milestone for starting to distribute and decentralize this traffic. So, first topic: the strategy for approaching this problem; or, potentially, we want to talk about whether or not the prioritization is right. One of those two things, I think, is what we should lead with.
C
Yes, makes sense to open. So I put some thoughts down in that document; feel free to click into it. I think, yes, it's important to have resiliency here, right? Not having a single endpoint that, if it goes down, people lose access to these records. So from an IPFS stewards regard, we absolutely want to support, and will prioritize, enabling multiple endpoints. I do think that's important to do.
C
From looking at the ambient discovery protocol, this was, I think, phase zero of it, right? You have to figure out: great, you're now given a list of potential endpoints, and you have to figure out which ones to use, and there is some basically client-side logic that has got to figure that out.
C
Obviously the protocol could be nice in that you can get information from other peers to help you figure that out, but ultimately a client has to make that decision, and it's probably going to do some of its own instrumenting and, quote, "experimenting" to figure out: hey, given this set, what decision should I make for myself? So I guess what I'm advocating for here is: let's not worry about the protocol aspect for now.
C
We can add in the three additional IPNI instances, and now we've got client code that is given a set of, let's say, four IPNI instances, and then it has to make a decision as to who it calls and when.
C
We can do some designing there, and code writing, and experimenting, without trying to make this into a protocol yet. That, to me, seems like one of the key parts of this equation that at least moves us forward, and it gets us having some resiliency. I think my only requirement is that we wouldn't, by default, be shotgunning and hitting all instances all the time.
C
So if you go down to the requirements, it was where you were, right: steady state. Sorry, scroll up a little bit. Right, steady state: we shouldn't be making extra queries; we should be trying to get some information as to who's the best to call.
C
But if that particular endpoint fails, then fallback queries for reliability are totally fine, and we should have metrics around who we call. Maybe there are some additional requirements beyond this, but these are some of the key things I think we would want to have, and then a design could be put together.
C
So I think, from an IPFS stewards regard, we're absolutely game to help further flesh out the requirements, review the design, review any implementation work that happens in Boxo and Kubo, do the relevant releases, and then ultimately, long-term, own this code. That, to me, seems like the sweet spot for moving this forward.
C
In terms of getting us some of the resilience that we want, and allowing other parties to play, while also not taking on too big of an effort given all that both teams are trying to do, that's kind of where my thoughts are landing. That's mostly what's encapsulated in this document, but feedback welcome.
A
That feels like an ideal outcome. I'd love to hear, Will and Masih, if that sounds like a good path to take. I feel like that's forward progress.
F
Likewise, I agree. I think this also maybe feels aligned with what the next conversation will be, of what Spark is asking for from IPNI. I think there's parallel work that we'll see as pretty complementary here, that IPNI is likely to want to continue on. I can talk through that if you want to turn to it now, or we can finish off on this one.
A
Also, it definitely respects the goal that we're trying to hit, so I think we'll probably find close alignment once everybody's reviewed it. This is great; thank you for putting it together. Yep, you bet, no problem.
F
Cool. So let me give a one-minute description of what Spark is doing here, and then how they're interested in making use of content routing, as we have it, to accomplish part of what they're doing. Spark is a system that is getting built by the Filecoin Station team as a way of tracking how well retrievals work on Filecoin. They want to have a distributed set of desktop users who get tasks of the form "attempt to get data from this provider."
F
They then try to do that, and report whether they were successful or not. They're working on a protocol that involves only minimal cryptography, which sort of looks like a commit-reveal protocol: a group of these desktop users will attempt to get some content, and then, once all of their successes are revealed, you can come to some conclusions about whether this provider seems to actually be providing this content or not.
F
One of the key inputs to such a system is what content exists and who has it, and so it's this content routing problem that we find ourselves in: what should we be testing? Rather than have these desktop clients somehow get to a consensus themselves on content that exists, and crawl the Filecoin blockchain state tree to find deals and so forth...
F
...it seems that some of that already exists in things like IPNI, and so there's a desire to make use of that existing index of what content is available.
F
The main method that they're looking for, and hope could be added as a feature to something like IPNI, is this first method that I wrote down, which is "get a random multihash of a deal." They would walk the state tree to find valid deals, so they'd have a sense of what the deals are, but they'd want to be able to ask for some hash that appears under that context ID, and, given a seed, they'd want to get a random one.
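One way such a seeded "random multihash" method could work, purely as a sketch (per the discussion, IPNI does not expose this today): hash the seed together with the context ID and reduce the digest to an index, so every party with the same seed and the same list derives the same pick. The seed, context ID, and multihash values below are made up.

```go
package main

import (
	"crypto/sha256"
	"encoding/binary"
	"fmt"
)

// pickRandomMultihash deterministically selects one multihash from the sorted
// list advertised under a deal's context ID. Every client using the same seed
// (e.g. a drand round or chain epoch) and the same list gets the same answer,
// which lets answers from independent indexers be cross-checked.
func pickRandomMultihash(seed []byte, contextID string, multihashes []string) string {
	h := sha256.New()
	h.Write(seed)
	h.Write([]byte(contextID))
	digest := h.Sum(nil)
	// Reduce the first 8 bytes of the digest to an index into the list.
	idx := binary.BigEndian.Uint64(digest[:8]) % uint64(len(multihashes))
	return multihashes[idx]
}

func main() {
	mhs := []string{"mh-aaa", "mh-bbb", "mh-ccc", "mh-ddd"}
	// Seed and context ID are illustrative; a new seed each epoch samples new entries.
	fmt.Println(pickRandomMultihash([]byte("epoch-1000"), "deal-42", mhs))
	fmt.Println(pickRandomMultihash([]byte("epoch-1001"), "deal-42", mhs))
}
```

Because the pick is a pure function of (seed, context ID, list), rotating the seed over time samples different entries from the deal while staying verifiable.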
F
There are security things that haven't been fully fleshed out in IPNI; I think there are stories for many of these, but, like the questions around ambient discovery and having resilience, this leads to a bunch of questions about what happens if an IPNI instance misbehaves. One of the things that would be nice is that, if there are multiple instances of IPNI, these clients can make that same request against the different IPNI instances and make sure they get the same deterministic outcome. That gives them some sense that the calculation is being done correctly, and saves a desktop client from having to fetch hundreds of megabytes of CIDs and do that work itself.
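The cross-checking idea can be sketched as a simple quorum rule over the answers returned by several indexer instances. The answer values here are invented for illustration; a real client would fill the slice with responses to the same deterministic query.

```go
package main

import "fmt"

// agreeByQuorum accepts an answer only when at least `quorum` of the queried
// instances return the identical value, giving the client some confidence
// that no single indexer computed the deterministic sample incorrectly.
func agreeByQuorum(answers []string, quorum int) (string, bool) {
	counts := map[string]int{}
	for _, a := range answers {
		counts[a]++
		if counts[a] >= quorum {
			return a, true
		}
	}
	return "", false
}

func main() {
	// Hypothetical responses from four indexer instances to the same query.
	answers := []string{"mh-abc", "mh-abc", "mh-abc", "mh-xyz"}
	ans, ok := agreeByQuorum(answers, 3)
	fmt.Println(ans, ok)
}
```

A disagreeing minority answer is exactly the signal the discussion turns to next: evidence that some instance may be misbehaving.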
F
So there's some consensus-type thing here: if we have a federation of these indexers, that gives us some ability to get to confidence that we're actually getting a good answer about some tasks to do, without having to do all that work ourselves. However, it does mean that we need some way to say, "well, this node isn't behaving correctly." So how do I challenge and say this IPNI node is misbehaving and is not doing it?
F
One thing we can do is have statements from IPNI be authenticated, so they carry a signature. Now it's disputable: there is an attestation that this is the right answer, and I could say, "well, I downloaded the data and did that computation myself, and it's different, so that means you did it incorrectly, you're acting improperly, and I would like to punish you."
F
We don't really have a way to talk about bad behavior from an IPNI instance, and I think a story, described more formally, about what it means if one of these IPNI providers is bad would be really useful. Should an IPNI operator need to stake coins that can get slashed if they don't give correct answers? Is there some other mechanism where they fall out of the consensus of allowed IPNI instances that queries get made of?
F
So there's the security of IPNI that we can talk about in this context, and we can think about what scopes of that we think are in scope right now. There's a second question, which is: how do we know that the content being advertised to IPNI by a provider is actually the right content? This is a problem we already have today; there are plenty of storage providers that advertise a lot of CIDs but don't actually provide that content.
F
How do we make sure that we only have accurate content in the indexes that we're returning to clients? Again, this is an index-quality question. If a provider lies and just says, "well, great, I've got one deal that I have available, so every time I get a deal, what I'm going to tell IPNI is that the deal contains only this set of CIDs that I have unsealed and that I'm going to serve retrievals on," then it'll only get tested on the set of CIDs it's chosen.
F
So is there a way to dispute and say: actually, for this deal, I've downloaded the contents and it doesn't match what you're claiming the CIDs are? This provider either is not actually making the content available, or the underlying deal doesn't match what they're claiming the CIDs are in their index. How do we go about challenging, and potentially de-listing, a provider as having bad information? We don't have a mechanism right now for a client to dispute a provider as not providing correct information.
F
We have some not-super-well-codified descriptions of "if we can't reach a provider for a while, we'll de-list them, because we think we can't contact them" as an IPNI instance, but formalizing what bad behavior looks like, from both providers and from IPNI instances, is going to be part of the story that a successful Spark system is going to need to have answers for.
F
So we should think about what all needs to happen there. There's a set of hardening that we should be thinking about around IPNI and content routing in general. I think that's useful for all clients, but it's particularly useful if we have an incentivized mechanism that's making use of this system.
F
The seed is there because, over time, if I sample a deal, you would like to be sampling different CIDs out of that deal, rather than just the first thing in it. I'd like to be trying different things, and so we would use something like a drand value, or the epoch, or something else as a seed that changes over time, to ask for different random sets, but a deterministic kind of random.
C
Basic question here, and sorry if it's very obvious: you said that Spark, or Station, is already going to be walking the state tree. I guess, why bring IPNI into the loop? Can it not infer some of this information already, directly?
F
Potentially. I mean, I think we were thinking about complexity, and what complexity a desktop user would be able to handle, versus what complexity we could offload. The process there: between what's in the state tree, or the list of active deals, and finding a CID, one path involves contacting that storage provider's index-provider endpoint, and then you've got questions: is it active? Is it available to everyone?
F
Do we have availability of this advertisement? Because, right, IPNI gives us sort of full availability in some sense, or we can get to the illusion that the different clients can probably all access some set of IPNI instances that are in consensus, as opposed to what happens if the SP provides its index in different ways, or only answers this index-selection process for some of the clients that are supposed to be measuring it, so they don't get to a consensus.
F
A client that then needs to download the correct advertisement still has the problem of the SP potentially not having provided the correct set of CIDs for that advertisement. It either goes with an index-provider listing of the SP's variant of what is in the payload, or it needs to download all 32 gigs, which is now a very large ask, or it gets an unverified subset, has to potentially guess whether there are CIDs in there, throw a set of heuristics at it, and then see if it can get a CID out of a random read that may be verified.
B
Another question, please: do you think we need a reverse index of piece CID to list of CIDs in IPNI? Because right now it's got an undocumented one: Boost so happens to use the context ID as the piece CID.
F
Right, so this is: if we can do that as an intermediate step, rather than providing this direct "give me a random one under the context ID, keyed by a seed," that could be a reasonable intermediate. The downside there, right, is that that list of multihashes in one deal can be on the order of a hundred megs of data, right?
F
So you're asking the client to do quite a bit more network download if you're providing it at that layer, and you've still got to solve both of the challenging edges, right? You still have to figure out both: is the index provider providing the correct list, and has the SP provided the correct source data?
F
On a deal, and then the other part, right, is that for these deals that represent a Filecoin deal, for these kinds of advertisements, we don't expect them to update. Even the fact that a storage provider is updating what CIDs it claims exist in a static Filecoin deal can then be used: that's a signed statement of it making mutable a thing that is clearly not mutable on disk, and we can turn that into at least Spark punishing their reputation for doing that.
B
I guess the cases that I was thinking about were more a deal getting slashed, or a removal happening, but...
A
Yeah, I think both of these topics are going to take a little bit of thinking and reading, yep. Well, is there any documentation around the Spark goals? Probably not this aspect of Spark yet, right? This is pretty early. Okay, this is ideation-phase stuff.
F
Yep, I think that's right. I don't think there's an immediate action, but I think it is: what needs to happen to IPNI to make it useful in cases that are incentivized? We need to think about the potential adversarial behavior that participants can have. We have a system that works well in the optimistic, non-incentivized case, but there is a bunch of hardening that needs to exist.
F
If we're able to support cases like these, in doing that hardening we also provide additional safety for our normal clients, so it's worth doing as well. I think this is a useful thought exercise, if nothing else, for what the various pieces are that need to go into place to make us feel like this is a system that we would use in incentivized situations.
A
I guess, on top of this, there's kind of a performance impact relative to this work that we would do, because we'll be generating traffic. Are there boundaries that we should be thinking within regarding what performance impact would be associated with implementing this new traffic paradigm?
A
Yeah, well, that, and I guess for the desktop clients, we're kind of assuming that it's okay to distribute effort to desktop clients, because they're participating. So I'm just wondering if there are traffic patterns that we should be attempting to accomplish, or whether we should be focused on minimizing this in a particular area of the network.
F
I think the target is that retrieval validations are aiming for something on the order of five percent of total retrievals, so you can think of maybe an additional five percent load of accesses to IPNI as a result of this system.
F
We'll see how much of this we get done. I think the hope for Spark is that they'll have prototypes and start having things happen before too long. I don't think all of these adversarial cases need to be locked down immediately.
F
I think the path towards real, significant incentivization coming through Spark is going to take a little bit longer than a working system, but as part of that growing incentivization, there will be increasing pressure to have answers for some of these various attacks.
A
Makes sense. There's a lot to think about; this is good stuff.
A
I feel like we've gotten a lot to think about, all right. This was good, very productive stuff. I'm always happy if I can give you all some time back, because it's so fleeting. Thanks, everyone, for joining the content routing work group today; super constructive session. I hope everyone has a great end, or start, of their day.