From YouTube: 2023-05-30 IPFS Content Routing Workgroup #12
Description
Various IPIPs on aspects of delegated routing, the upcoming Kubo RC and its default content routing stance. IPNI is testing distributed DBs. Also, the IPNI HTTP provider specification is up for review!
A: All right, we are recording, so everybody, welcome to the IPFS Content Routing Work Group number 12. I can't believe two weeks went by since the last time we met.
A: I think that's a result of everybody being so busy. But let me go ahead and just drop these in the notes, so that anybody that's following along can pull them up. I did prepare an agenda for today so that hopefully we can cover some important topics — we have a lot — for reference for people that are joining this call and looking at these notes.
A: Well, actually, it's not accurate to say it replaced the original function, but it performs the bridging function from the IPFS gateways to the InterPlanetary Network Indexer, which was previously being resolved by the hydras, which are no more. And also cid.contact itself, which is a great place to read about the InterPlanetary Network Indexer, along with the documentation there, until we get our specific service site up — which should be happening in a matter of the next few weeks, it looks like. We've got a few candidates from the team.
A: That's working on that site, available to start reviewing, so we'll hopefully have something a little bit more comprehensive up soon. And then kind of a discussion about our leadership's perspective on how we are integrating with Kubo, which happened quite a while ago but is still relevant reading for anybody that's trying to understand, contextually, where we've gone in the last — I would say — four months in regards to how traffic on the network routes. Okay. So, all of that aside, I'll go ahead and jump into updates from the teams.
A: If you are here from your team — it looks like it's mostly just IPFS Stewards today and IPNI folks, so we will spare Bifrost from an update today — but I'll run through our updates from the IPNI side of the house, and then Steve or Lidel, feel free to jump in and kind of get us up to date with y'all's teams. So, to keep everyone up to speed: the IPNI team has been working very hard on some — I would say — optimizations for both the read and write path.
A: We discovered some challenges that resulted from a Boost issue, but also, simultaneously, we brought the reader-privacy-enabled encrypted value store online at the same time. And due to this confluence of traffic, I think we recognized some challenges in the value store — we needed to get a value store in place that could handle these challenges a little bit better. So we've been testing some new key-value store databases. I'll let Mossy jump in, but I think we're settling into the idea of FoundationDB — but we're still testing; yeah, it's still wishy-washy.
C: The jury is still out, but we're trying different ones. The underlying goal here is to make it easier for people to run indexers, so we're trying to pick a solution that is performant, horizontally scalable, as well as operationally cheap.
A: Some of the folks that we've talked to in the community operate those kind of by default, and so I think we're looking at options where we can potentially leverage a value store that would be performant in such a scenario. So that's ongoing — we'll definitely keep this work group up to date with how that effort goes; we're still testing.
A: We are digging deep right now on caching solutions and the design discussions around them. It's very early days, I would say, in regards to the direction we go with caching solutions, but we have some great ideas. Mossy specifically has been leading that effort, and every time I talk to him about the progress he's making he sounds really positive, so I maintain positivity as well. It seems like we feel very confident about some of the solutions we're looking at. Once we have some documentation which represents the design of these caching solutions, we hope to deploy them — and, for context, the place that we hope to deploy them first would be Saturn, to leverage the content delivery network, to have kind of an edge surface on the network where we could potentially have caches of the InterPlanetary Network Index.
A: This is kind of the goal: we reduce traffic overall on the entire network, plus we add a lot of redundancy and sustainability by having the index deployed out across the network.
A: Additionally, we've been doing some deep dives on our metrics, and even considering potentially consolidating them into an API endpoint. It's a little bit early in that discussion to get too into the details about it.
A: But the important thing to understand is that, because we have such a distributed service in the InterPlanetary Network Indexer, we do a lot of combining of sources from different nodes in order to resolve the metrics that we then articulate out to our leadership, as well as to anybody that follows up with our sitrep documentation on the IPNI team. And so our idea there is that we might be able to aggregate and simplify some of these metrics in a consolidated endpoint, which makes them easier for folks to understand and also accessible externally, kind of on demand, for folks —
A: — that would want to make a call out to that service. Plus, potentially any alarming related to that that you would want to build yourself: you could use the endpoint to do that kind of stuff, which would be great for external parties.
A: I think actually I missed an item on here that I'd like to add, that I didn't have a chance to represent in the meeting two weeks ago, which is that we set up our service which checks CIDs for the Internet Archive folks, which was pretty exciting. We deployed that service over there so that they're able to use the same measurement tool that we're using internally to check CIDs, in order to look up their own CIDs. They populated it with their own list and they're very happy with it, it sounds like — which I think is a big win for the team.
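(For readers following the notes: a check like the one described boils down to asking an indexer whether it knows any providers for a given CID. The sketch below is a rough illustration, not the actual tool; the `/cid/<cid>` find path and the response field names are assumptions about cid.contact's public find API.)

```python
# Minimal sketch of a CID-availability check against a public IPNI indexer.
# The endpoint path and JSON shape here are assumptions, not the deployed tool.
import json
import urllib.error
import urllib.request

INDEXER = "https://cid.contact"  # public IPNI instance mentioned in the call

def find_url(cid: str) -> str:
    # Assumed find endpoint shape: GET /cid/<cid>
    return f"{INDEXER}/cid/{cid}"

def check_cid(cid: str) -> bool:
    """Return True if the indexer reports at least one provider for `cid`."""
    try:
        with urllib.request.urlopen(find_url(cid)) as resp:
            results = json.load(resp)
    except urllib.error.HTTPError as err:
        if err.code == 404:  # indexer has no record of this CID
            return False
        raise
    # Assumed response shape: {"MultihashResults": [{"ProviderResults": [...]}]}
    return any(r.get("ProviderResults") for r in results.get("MultihashResults", []))
```

Feeding a list of CIDs through `check_cid` and counting the misses is essentially the measurement described above.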
And it's always really exciting to me to see that we spent time on a tool that the community actually finds a lot of value in, especially something as simple but helpful as that tool is. I'll add a note here in a minute for that as well. A really important, high-level item:
A: We are experimenting with removing the unencrypted nodes within our index from the read pathway. So the reader-privacy store is deployed in production and it's functioning sustainably. We are wrestling with a little bit of wishy-washy traffic right now, so it's not a hundred-percent sunshine story just yet.
A: But the important news to take away is that we're so confident right now that we're actually at the stage where we're looking at decommissioning some of the non-encrypted nodes. We'll have to check back later today, or possibly even tomorrow, to see how that effort has gone. I'm sure that's actually probably where Ivan is right now — oh, he is here — Ivan's been wrestling with that all morning. So, Ivan?
E: I just cut access to all the unencrypted indexes from the read path. So now all of cid.contact's requests get served 100% from the double-hashed store, but I don't want to declare victory yet. I want to let it run for at least, I don't know, a few hours, and see that there are no changes to the metrics. So no victory declaration yet, but I think we're getting there.
A: So you heard it here first, everybody: keep your trumpets hidden from view, at least for the time being. We'll pull them out possibly in a couple of hours — maybe we'll save that excitement for tomorrow. But that's where we're at in this process: very close to being fully dependent on the encrypted value store and having reader privacy fully launched, such that we would actually make a public announcement that the service is now fully running, sustainable, and that's how we're operating. And then:
A: Lastly, we have some CLI tooling updates that we've been working on, to make our commands a little bit more intuitive and also to structure them in a way that hopefully improves usability. I think internally on the team we're pretty excited about what it's looking like. There is a document I can share here.
A: I'll add it to the notes afterwards — it kind of describes where the CLI is going for IPNI — but I'll share that here later. I took way more than our five minutes, but the team has been up to so much over the last two weeks that I wanted to make sure you all got to hear about it, because it's all very relevant stuff. With that said, I'd love to pass the microphone over to either Lidel or Steve for IPFS.
F: I can do the first one. That sounds good — let's divide and conquer. Kind of like updates and a heads-up: there are three IPIPs (spec improvement proposals) against the Routing V1 HTTP API that we already have in Kubo and on cid.contact, the IPNI instance.
F: Maybe all three are required to close the gap in the routing story for IPFS as a project. Content routing is only one of three types of routing we have: aside from asking for providers of a CID, we also want to find the latest IPNS record for mutable pointers, and then we need a peer routing endpoint for cases where running a full DHT client is too expensive.
F: We had a proof of concept with Helia in the browser where having peer routing as the first thing we try, and then falling back to other things, could improve the performance — the speed perceived by end users when they load a page or interact with our stack in a web browser or another low-powered or constrained environment.
F: So, in general, those three types of routing are already implemented in Kubo. So what we want to do is expose the existing routing system using this Routing V1 API, and take that as an opportunity to, one, write the missing specs and, two, write an implementation in boxo — so that's like a reference implementation of this API.
F: You already have a bunch of information about peers, so it's mostly a question of how we surface that and what the API looks like. So those are the three IPIPs. The first one, streaming, is important for exposing the DHT, where we want to return results as we traverse the DHT.
So that's kind of a heads-up. We are focusing on specs; the first two will probably land sooner rather than later. For the third, peer routing, we want to have more conversation around what the schema for a peer looks like. But that's the update here.
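(For readers following the notes: the three lookup types just described map onto three HTTP paths in the delegated routing — Routing V1 — API. This is a rough sketch only: the providers path matches the API already shipped in Kubo, while the IPNS and peer-routing paths reflect the proposals still under review at the time, and the base host is a placeholder.)

```python
# Sketch of the three delegated routing lookups discussed above.
import json
import urllib.request

BASE = "https://router.example"  # placeholder delegated routing endpoint

def providers_path(cid: str) -> str:
    # Content routing: who provides this CID? (already in Kubo / cid.contact)
    return f"/routing/v1/providers/{cid}"

def ipns_path(name: str) -> str:
    # Mutable pointers: fetch the latest IPNS record (proposal under review)
    return f"/routing/v1/ipns/{name}"

def peers_path(peer_id: str) -> str:
    # Peer routing: addresses for a peer, for clients where a full DHT client is too costly
    return f"/routing/v1/peers/{peer_id}"

def stream_results(path: str):
    """Ask for NDJSON so a server can stream results as it traverses the DHT."""
    req = urllib.request.Request(BASE + path, headers={"Accept": "application/x-ndjson"})
    with urllib.request.urlopen(req) as resp:
        for line in resp:
            if line.strip():
                yield json.loads(line)
```

The NDJSON `Accept` header is the streaming piece mentioned for the first IPIP: results can be yielded one record at a time instead of waiting for the whole DHT traversal to finish.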
F: I believe we had conversations around all the controversial or open-ended questions. We have an IPFS implementers call this Thursday, and my plan is to announce the first two as ready for final review; we will clean them up, and the third one will probably happen after that. We have a soft agreement on how the peer schema should look — we just need to write the spec for it. So, yes, there will be a spec and there will be a reference implementation in boxo.
D
Hey
thanks,
Lytle
yeah
that
that
last
one
it
was
just
coming
from
a
follow-up
from
I
think
last
week's
meeting
right.
There
is
a
certainly
more
of
a
pressing
need
to
at
least
get
some
more
bare
minimum
ability
for
node
operators
to
you
know,
block
content.
Even
if
we
don't
get
the
ideal
solution
in
place,
you
we
just
haven't
been
able
to
there's
already
been
quite
a
few
things
in
flight
for
the
Kubo
maintainers.
D
This
current
release,
so
I
couldn't
squeeze
it
in
right
now,
but
we'll
you
are
we're
planning
to
do
a
Kubo
release
like
a
release
candidate
this
week
and
do
the
final
release
next
week.
That's
that's
the
plan,
although
it's,
but
so
anyway,
the
this
filtering
work
would
be
a
prime
candidate
for
the
following
relief.
D: We haven't scoped out the specific amount of work, but, given the importance of it, I'm pretty confident we would be able to land it in the next Kubo release — 0.22, in July — is how things are kind of looking.
D: I mean, that's a good question. It hadn't been — I guess the thing that's different for you all is that you're not actually serving the content, right? This is really very applicable for gateway operators, where people are retrieving the data. So that was the pause, and I don't know the ramifications. I don't know if others have thoughts or comments there.
G: So far, with the encryption, I think we've been hoping not to have to deal with this short-term. If we can move to encrypted reads for most lookups of the IPNI index, then it becomes less of an issue, I think. In general — even for things where we're cascading, and so we don't have encrypted reads — IPNI looks a little more like a Google-like thing. Eventually we may have this as an issue, but we don't have pressure on IPNI yet.
G: As far as I'm aware, we should make sure we've got an abuse channel set up in case we do get it, but until we start seeing that as a problem, I think being reactive here is totally fine from an IPNI perspective.
A: I just wanted to jump in and kind of back that up as well, Mossy. Everything that I've heard talking to the folks over at Bifrost about this is that IPNI is not, like, the main target audience yet, and I don't think it's necessary that we do anything but be reactive, as was well described — at least currently. If that changes — because we get a letter at some point, somewhere, that shakes us in our boots — I think we'll be able to take a look at that.
A: It's not bad to think about the perspective of implementing that, but I think we're one of the last candidates that should have to worry about it, because we're kind of the final stop. All the layers before us that are passing these lookups to us actually solve this problem for us, and therefore stuff getting through to us should be pretty unlikely.
A: Cool. Guillaume, did you want to give us an update on the ProbeLab side?
B: Yeah, sure. It's gonna be a quick update. I'm still working on the go-libp2p-kad-dht implementation.
B: So yeah, that's the branch, and it's a big effort — I'll probably still be at it in two weeks. And on the IPNI latency measurement side, I don't have the latest update, but I know it's ongoing and Dennis is working hard on this. Dennis couldn't make it this time, so I'm afraid I can't give a much more detailed update on it.
A: Okay. Dennis has been very actively communicating with us; we'll do a kind of update in a little bit about where he's at right now. Dennis has been working super hard and staying in close contact with the team, and we're learning a lot by going through this process with him, I would say, because we're able to observe the perspective of someone external to the network doing a broad lookup — which also happens to be from AWS — so there are more observations than just the output that we're looking for here.
A: It's definitely a constructive exercise to go through, I would say. So that's the team updates. Let's go ahead and get to the topics to cover. I just want to remind everyone: I put these topics here on the agenda every week; however, they're not meant to be the authoritative list of the only things that we discuss. I would encourage all of you:
A: If you have anything important that you would like to see covered here, please feel confident in adding items to this list — they may be of a higher priority, potentially, than some of the items that are on the list presently. So don't be shy about letting us know if there's an important item that you'd like to make sure is covered by this group when we have these discussions; it's very welcome input. Okay: the IPNI HTTP provider specification. I just wanted to point this one out to everyone.
A: It's a specification that we'd offered up, and internally I think we've all got a good, solid read on the review, but we would love feedback from the IPFS Stewards folks — I think specifically either Lidel or Adin would be awesome candidates. If you could take a look at this, we would love your feedback on it.
F: Namely, the fact that you are using HTTPS multiaddrs — there are multiple projects within PL and the IPFS ecosystem that are looking into using them, so I think it's positive that we do that at the same time, so we can coordinate and write down that it does not mean IPNI, it does not mean trustless gateway, it does not mean libp2p over HTTP.
F: It may be one or all of those things on the same HTTP server, mounted on different paths, so I don't think it's controversial. There will be an IPIP, maybe later this week — Marco wants to write this down — so we may want to reference that from this document, just to make it clear. I think, overall, HTTP caching is probably missing, and if you use HTTP you probably want to benefit from that.
D: Cool. And the scope here is around being able to get the current status or state of advertisements — that's the scope of this IPNI HTTP provider? So I see two GET methods exposed.
C: That's right. This goes into a series of specs that we put out as a way of determining what an HTTP content provider is. Part of that functionality is discovery, and we say: as an HTTP content provider, you need to satisfy this specific specification for the discovery part. There would be other specs for retrieval, as well as deal-making and so on, which comes out of Boost. But this is just a portion of the overall functionality.
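(For readers following the notes: the "two GET methods" discussed above amount to fetching the provider's current advertisement head, then fetching individual advertisements by CID and following their previous-advertisement links. The sketch below is only an illustration of that shape; the `/ipni/v1/ad/...` paths and the field names are assumptions based on the draft under review, not the final spec.)

```python
# Hypothetical client-side walk of an HTTP provider's advertisement chain.
import json
import urllib.request

def head_url(provider: str) -> str:
    return f"{provider}/ipni/v1/ad/head"      # GET #1: the current chain head

def ad_url(provider: str, ad_cid: str) -> str:
    return f"{provider}/ipni/v1/ad/{ad_cid}"  # GET #2: one advertisement by CID

def get_json(url: str):
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

def walk_ads(provider: str, limit: int = 10) -> list:
    """Fetch the head, then follow previous-advertisement links backwards."""
    head = get_json(head_url(provider))
    ad_cid = head.get("AdvertisementCid")     # assumed field name
    seen = []
    while ad_cid and len(seen) < limit:
        ad = get_json(ad_url(provider, ad_cid))
        seen.append(ad_cid)
        prev = ad.get("PreviousID") or {}
        ad_cid = prev.get("/")                # assumed IPLD-link JSON encoding
    return seen
```

An indexer ingesting from such a provider would walk the chain until it reaches an advertisement it has already processed.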
A: All right, I'll jump into the other topic. I want to get a little bit of clarity around this — so I hope I didn't advertise too much, on the ambient discovery that we talked about two weeks ago, that anybody has definitively volunteered for any work. But what I would like to ask is this: we've broken this down into kind of two components.
A: One is the discovery aspect of identifying, from IPFS's perspective, new content routers — explicitly, in this case, they would be other IPNI nodes, but potentially we would want that capability to extend to other content routers or content routing methods. So that's one aspect that we talked about.
A: The other aspect is actually the reputational decision-making between which, in a list of those content routers, you would choose under certain scenarios. And so, specifically for the scope of this group, I think we kind of agreed — or at least, during our discussions, we leaned towards agreement — on specifically focusing, as a component of work, on this reputational decision-making tree: we have a given list of content routers; we would like to choose between them and be able to send traffic their way.
A: That's the scope of the discussion as it relates here. The reason we thought that direction would make sense is that the list of IPNI nodes, at least to start, we don't expect to be so long that we couldn't maintain it as a kind of roster of options.
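(For readers following the notes: since the roster is expected to stay short, the selection logic can start simple. The class below is an entirely hypothetical illustration of the reputational decision-making being described — track a success score per known router, prefer higher-scoring ones, and keep occasionally probing the rest — not anything specified or implemented.)

```python
# Toy reputation tracker over a small, static roster of content routers.
import random

class RouterRoster:
    def __init__(self, endpoints, decay=0.8):
        self.decay = decay
        self.scores = {ep: 1.0 for ep in endpoints}  # optimistic start

    def pick(self, rng=random) -> str:
        # Weighted choice; the +0.05 floor keeps low-reputation routers probed.
        endpoints = list(self.scores)
        weights = [self.scores[ep] + 0.05 for ep in endpoints]
        return rng.choices(endpoints, weights=weights, k=1)[0]

    def report(self, endpoint: str, ok: bool) -> None:
        # Exponentially weighted success rate per endpoint.
        self.scores[endpoint] = (
            self.decay * self.scores[endpoint] + (1 - self.decay) * (1.0 if ok else 0.0)
        )
```

The exponential decay means a router that starts failing loses preference within a handful of queries, while a single transient error barely moves its score.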
A: Does that make sense to everybody, before we jump into the "who needs to do what work" kind of discussion? Does everybody agree with the approach that we're taking — does it make sense to focus just on the reputational aspect of this? Guillaume, I know you've been working on lots of routing problems with your brain lately.
A: It makes sense — so I think, for the most part, everybody agrees this is probably a decent approach to tackling the problem.
A: I don't know that there's a specific answer yet. I think we presume, potentially — because we expect this logic to live in Kubo, and everybody tell me if I'm wrong here in this presumption, please don't hesitate to call me out — we presume that it would live with the stewards. But is that potentially an incorrect presumption? Or is this something that would go deep into a backlog, or potentially to other teams?
D: Well, I think an implementation of this needs to be in Kubo, right — where we assume cid.contact would love to get that swapped out for some ambient discovery — so, agreed on that. That definitely means, at the minimum, I feel, the IPFS stewards are involved, right? The Kubo maintainers are involved in helping make sure that the code that lands there is, you know, going to work, per the usual code review standards.
D: Etc. So I think that's for sure. Then, who is writing that code, and who's driving the spec and making sure that it moves forward — I think that's the open question area.
F: Sorry — there's also the aspect of writing something that's useful beyond a single use case, and that's why, back to the thing I mentioned at the beginning of my update, Kubo itself will be exposing Routing V1.
F: We also have partners — some people in the IPFS ecosystem, or external people wanting to use IPFS — and the usual question is: "How do I provide a router only for my content?" You know, an indexer, but only for my data: "I'm a university; I don't have millions of dollars to host the public IPNI, but I have data and I want to provide an indexer for, like, my own thing." So I feel that, for one thing, when we have the implementation in Kubo —
F: — it would be easier for us to model reputations, because we already have a routing system in Kubo and we can tweak it to behave in certain ways. And the second thing is:
F: making sure that whatever reputation system we are designing works with a router which is not cognizant of the entire CID space, because that's not realistic for all use cases. So I feel that, probably, from the stewards' side of things, until we have Routing V1 in Kubo, we won't have bandwidth — because we are all running on a skeleton crew.
D: Yeah, thanks for those points. So, first: I really like this work — I want to see it get done and landed. I'm just trying to be cognizant of not signing our team up for more than we can handle. I don't want to get into a position where it's like, "oh, you know —
D: — this is important for, I guess, some of the decentralization of indexers, and that there are multiple parties at play," and then, "oh, that's not happening, because the stewards aren't driving the spec." That's kind of the position that I want to avoid being in, because right now I don't think we can push on many more of these kinds of balls. I actually want to make sure we're not the blockers.
D: So if other people are putting work forward and it needs reviews, either at the spec level or in, you know, landing PRs, I think we'll certainly prioritize making sure that happens. It's just that, because I don't have a better accounting of what all is being asked of us right now with our current staffing, there's a general wariness to say, "yeah, we're gonna drive an additional thread like this."
A: I appreciate that, Steve, and I hope I'm not putting the stewards team too much on the spot on this call to talk about this subject. I really want to get to the root of where, logically, this work should be living, and likewise whether or not we're, you know, potentially taking the right approach to solving the problem together. I think, ideally, what we're building at IPNI —
A
We
we
perceive
as
potentially
being
able
to
be
leveraged
by
all
all
scenarios
and
so
we're
trying
to
build
something
that
could
fulfill
this
use
case
of
a
university
that
doesn't
have
a
ton
of
funding
but
wants
to
have
routing
available,
and
so
I
think
when
I
hear
that
I
want
to
like
pull
on
that
thread
a
little
bit
and
ask
if
there's
something
different,
that
we
should
be
doing
from
the
perspective
of
ipni
in
order
to
make
it
the
obvious
choice
for
people
in
that
scenario,
to
have
indexed
data
that
they
can
look
up
quickly,
that
reliably
they're
able
to
to
reference
and
I
think
there's
probably
some
qualities
of
them
having
routing
in
ipfs,
specifically
that
we
perceive
them
as
potentially
getting
that
they
wouldn't
necessarily
get
from
ipni
and
I'd
love
to
I.
A: I don't know if this is the call for it, but I'd like us to leave this call kind of thinking about this subject. I'd like to really clarify what those qualities are and how we get those qualities into IPNI, such that IPNI then becomes the answer — rather than "how do we get around the limitations of IPNI as we perceive them."
A: Is what I'm saying relevant? Do you feel like that's a target worth kind of focusing on together?
G: So, in that sense, there are always going to be cases where there are different sorts of local setups, right? When you're running a set of Kubo nodes in your data center to do data transfer between nodes totally within your little controlled network — for a big controlled network you're also going to run an IPNI — and you're going to have a totally different universe, and that is totally fine.
G: I think one thing that I would sort of think about here is: we are likely to run into this issue sooner rather than later — in particular as Rhea rolls out and we have more than one, when there's not just cid.contact, but there either are caches or there are other full instances, which are both happening in the next couple of quarters, likely. Lassie is going to have to choose which one, or ones, it talks to to do queries, and so we will come up with something as a stopgap to do that. And so I suspect that, as a result of that, what we end up with is, you know, something that, as far as I know, will follow the spec that we posited — what, a year and a half ago? — as the basic ambient discovery. And so there's been a lot of time to comment on and refine that spec; we haven't heard any problems with that basic approach, and so that then just becomes —
G: — you know, what's happening. And it's going to be a challenge — it's going to be more pain — to roll back from that once it's there. As far as I can tell, code is a thing we can write our way back around, but, you know, this is in some sense about getting ahead of it.
F: Yeah. I think the only open question — I think it's on that PR; if not, we should add it to the compatibility section — is the underlying assumption of the reputation system: do you build the reputation system in a totally different way if you assume every actor should have access to the same set of CIDs?
G: Which I'm going to push strongly on. I think, if you have something else — like your university one, or whatever — you probably are going to put that in manually, in addition, and you're not going to go through the same reputation system for your LAN or partial one. The reason I'm going to push for that is, you know:
G
So
so
we
need
the
ability
to
get
to
completion,
get
to
get
to
sort
of
feeling,
like
we've
done
a
full
search
without
in
a
scaling
of
n
lookups
needing
to
happen
from
every
client
and
the
only
way
that
I
think
we
can
get
there
is
by
having
this
assumption
that
there
is
a
global
view,
and
so
this
is
pushing
people
to
export
their
private
SIDS
as
a
way
to
allow
us
to
have
stuff
more
fast
than
oh.
D: So, just for moving this forward: this has obviously been open as a HackMD, and then as an IPIP — oh, sorry, an IPIP PR — for many months, or longer than that. So I think part of why it didn't get more looks was: was there actually going to be any push on the implementation side to get it, you know, done and landed? And I think what I'm hearing you say, Will, is, like, "yeah —
D: — this is going to be coming." Like, for example, Lassie is going to need to do this at the minimum, and, you know, if it starts getting ossified, here's the best chance to influence it. We do have some implementations lining up, so now is kind of the time to engage. So, that makes sense. And holding to what I said earlier: at the minimum, the IPFS stewards want to make sure that they're providing input, even if they're not driving something. It's like:
D: okay, now would be a good time to be making sure we're looking at it, before others start trying to implement that spec. So I think we should take that as an action. I mean, since this IPIP was written, I know we have learned some things about how to do better IPIPs, and there's, like, you know, a little bit more template in place around motivation and security concerns, etc. I don't know if this particular one is following that latest template or not.
D: That might be something worth — I think that's something we were going to want done regardless, before it would get merged anyway. So if it needs a retrofit to the latest template, that's probably worth doing, Will, if you're kind of driving it on your side. And then, I guess, on the IPFS stewards' end — Lidel, is this something you think, in the next couple of weeks, we'd have bandwidth to give someone… to kind of see where we're at and give input on?
F: Yeah, I think it's important that we make very clear, in the purpose of this discovery, that it's beneficial to the ecosystem, because people will know that they cannot use this for partial indexers — and I think that's a positive, because it gives us more focus.
F: Implementation-wise, I will mention: we will probably land it in Lassie first. Then, for it to land in something like Kubo, we would have to have an implementation in boxo also — or maybe we just start in boxo, and we have a single one.
F: That just avoids, you know, the duplicate work. But — well, I mean, the reference implementation for the client, I guess, would be like a libp2p protocol implementation or something, or it could be a separate repo, if that makes more sense.
F: It's a libp2p protocol specific to IPFS. We have those — like, we have Bitswap in boxo — so maybe it makes sense to have this in boxo as well, but it could also be a separate repo, or whatever works better.
A: Awesome. I appreciate everyone taking on that discussion — it's important, and part of the reason that I'm pressing on it is just because I do know that we're going to approach this state where we do need to start sharding traffic out to other instances sooner rather than later. We're not there yet.
But that's why I'm raising the flag now: to give us a little bit of breathing room before we rapidly approach that, because I suspect this is one of those hockey-stick-of-pressure types of dependencies — where now you've got, you know, someone ready to go do something, but they can't do it until, you know, whatever blocker is done. And I hate putting everybody in that position, where there's a fire to put out and suddenly we're trying to reorganize everything we're doing to support something. That's never a fair position for people to be put in, so I'm hoping to get the word out as early as we can —
you know, on something that's been a little bit back-burner, but it's starting to become more and more urgent over time.
F: Sorry — on the positive note: the way we have IPNI set up today, in Kubo and in places like Brave, makes it fairly transparent to migrate. There's, like, no configuration change — existing users will seamlessly switch to this auto-discovery once we ship it. So, on the end-user side of the story, I think we are pretty lucky.
A: Awesome, thanks, Lidel. We'll look for some review and kind of feedback from you, and then I think we'll talk to the team about how we're approaching this as well, and see what we can come up with. Steve, appreciate all your feedback on that.
A
Dennis is really hustling and going through a lot of setup challenges, but the lessons I think we're learning about setting up these tests to operate against IPNI will continue to pay dividends, because I suspect that once we've got it set up, we should be able to execute again in the future. I also think we're going to get some pretty unique visibility into how the network is working from this test, and I'm really excited about the results.
A
I
think
the
report
that
comes
from
this
is
going
to
be
really
beneficial
and
exciting.
So
I
I
didn't
intend
to
get
any
of
the
details
specifically
of
aspects
of
the
testing.
A
That's
going
on
I'd
like
to
raise
a
flag
and
ask
if
anybody
does
want
to
call
out
any
specific
components
of
it
that
they
think
is
relevant
to
the
viewers
you
know
potentially
outside
of
this
call
that
would
be
interested
in
it,
but
I
just
wanted
to
give
a
shout
out
to
Dennis
for
all
this
hard
work
and
commitment
and
I
know
geese,
probably
taking
care
of
other
stuff
in
the
background,
so
that
Dennis
can
focus
on
this,
and
we
really
appreciate
that
support
y'all.
A
In regards to this: whether or not we understand what the cause for that decline in traffic is, and whether it's being experienced all over the network or it's just at IPNI that we're seeing it, does anybody have any thoughts on that? I can say that the Bifrost team is looking at getting traffic measurement from the gateways to IPNI in place, but they don't have it completely in place yet, and I actually haven't gotten...
A
Potentially, it's something we'd miss with whatever this method for alarming that we're building is.
A
There was an RPC call blocker that, Lidel, you called out, which I think we determined is no longer a blocker for the measurement tool that they're attempting to deploy. But do you have any insights?
F
Like that, it's actually pretty easy. The metric is in Kubo 0.20: they need to upgrade the gateway boxes to Kubo 0.20, and the metric shows you how many requests were sent to the IPNI endpoint. So the question is: do you have insight into cache hits only, cache misses only, or both cache hits and misses? I assume you see both, so then you'll be able to compare both numbers from the gateway.
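The comparison being described can be sketched roughly like this: only cache misses should trigger a content-routing lookup, so the count of requests reaching the IPNI endpoint should track the miss count. This is a minimal illustration, not the actual Kubo metric names or any real measurement.

```python
# Hypothetical sketch: given gateway block-cache hit/miss counters, estimate
# how many lookups should have been delegated to the IPNI endpoint.
# Metric names and numbers are illustrative, not real Kubo/Prometheus metrics.

def expected_ipni_requests(cache_hits: int, cache_misses: int) -> int:
    """Only cache misses should trigger a content-routing lookup, so the
    IPNI request count should roughly track the miss count."""
    return cache_misses

def delegation_ratio(observed_ipni_requests: int, cache_misses: int) -> float:
    """Fraction of cache misses that actually resulted in a request to the
    delegated routing endpoint. Near 1.0 means the gateway is sending
    (close to) all eligible traffic."""
    if cache_misses == 0:
        return 0.0
    return observed_ipni_requests / cache_misses

# Example: 70,000 hits, 30,000 misses, 24,000 requests observed server-side
ratio = delegation_ratio(24_000, 30_000)
print(f"delegation ratio: {ratio:.2f}")  # 0.80 -> 20% of misses never reached IPNI
```

A ratio well below 1.0 would point at the kind of silent drop discussed earlier, where traffic stops flowing without anyone noticing.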
C
There's another parameter, which is the adoption rate of Kubo versions that correctly send traffic to cid.contact. So we need to normalize the numbers based on whatever version people are running, which is slightly more complicated, yeah.
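The normalization being described could look roughly like this: observed IPNI traffic only comes from Kubo versions capable of talking to cid.contact, so a whole-network estimate scales the observed volume by the adoption fraction of those versions. The function and the version shares below are illustrative assumptions, not real measurements.

```python
# Hypothetical sketch of normalizing observed IPNI traffic by the adoption
# fraction of Kubo versions that can reach cid.contact. Version shares are
# made-up numbers for illustration.

def normalize_by_adoption(observed_requests: float,
                          version_shares: dict[str, float],
                          capable_versions: set[str]) -> float:
    """Estimate the request volume the network would produce if every node
    ran an IPNI-capable version. version_shares maps version -> fraction
    of the network running it."""
    capable_fraction = sum(share for v, share in version_shares.items()
                           if v in capable_versions)
    if capable_fraction == 0:
        raise ValueError("no IPNI-capable versions in the sample")
    return observed_requests / capable_fraction

# Example mirroring the ~10% adoption figure mentioned later in the call:
shares = {"0.19": 0.037, "0.20": 0.063, "older": 0.90}
estimate = normalize_by_adoption(1_000, shares, {"0.19", "0.20"})
print(round(estimate))  # 10000: the whole-network extrapolation
```

With ~10% of the network on capable versions, observed traffic understates whole-network demand by roughly 10x, which is why the raw numbers are hard to interpret without ProbeLab's adoption data.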
A
Well, I think maybe I've been thinking about this wrong, but correct me. I've been thinking about this with respect specifically to the IPFS gateways themselves, and measuring that the gateways are sending 100% of traffic. But then there's a secondary component, which is the broader network. I've been talking to Bifrost specifically about the IPFS gateways recognizing this, but should I course-correct? Or should we be considering a second measure, which is...
C
I think there are multiple aspects here. One is the Hydras: the Hydras shut down, and the reason they shut down is that Kubo nodes now talk directly to cid.contact. So one part of this problem is: of the proportion of Kubo nodes running out there, how many have the capability to send information to cid.contact, and how many actually do? The second one is the gateway stuff, right.
C
So
engagement
again,
you
have
a
sub
category
of
problems,
which
is
Gateway
itself
runs
multiple
nodes
on
each
of
those
run,
multiple
or
different
versions
of
Kubo.
The
first
question
is:
are
they
all
running
a
version
of
kuboid
that
can
also
set
in
traffic
to
sit
with
contact,
and
the
second
question
is
how
much
traffic
is
being
sent
so
then,
the
third
thing
is,
from
the
server's
perspective,
how
much
traffic,
who
am
I
receiving
traffic
from
right?
This
is
the
setup
contact
the
script
thing
that
I
wrote
so
from
server
perspective.
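The server-side check described here might be sketched as follows: tally lookup requests per source IP at cid.contact and flag known gateway addresses that are not showing up. The IPs and the helper function are made up for illustration; the actual script referenced in the call is not shown here.

```python
# Minimal sketch of a server-side check: from the indexer's perspective,
# count requests per source IP and flag known gateway IPs that sent nothing.
# All addresses below are illustrative documentation-range IPs.
from collections import Counter

def missing_gateways(request_ips: list[str],
                     known_gateway_ips: set[str]) -> set[str]:
    """Return known gateway IPs that sent zero requests in this sample."""
    seen = Counter(request_ips)
    return {ip for ip in known_gateway_ips if seen[ip] == 0}

log = ["203.0.113.7", "203.0.113.7", "198.51.100.2"]
gateways = {"203.0.113.7", "198.51.100.2", "198.51.100.9"}
print(missing_gateways(log, gateways))  # {'198.51.100.9'}
```

As noted next in the call, this only catches gateways whose addresses are already known; an unknown sender that goes silent is invisible to the server side.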
C
I can confirm that I can see the IP addresses of gateways, but obviously we don't know what we don't know. If there's an IP address we don't know about that should be sending traffic, we won't see it. That's why it needs collaboration on both sides to roll things out on the gateway side. But on the Kubo side, I think we need input from ProbeLab to tell us the adoption rate and upgrades of Kubo nodes.
C
And Lidel, would you mind reminding me which version fixed the bug? Because, you know, cid.contact was included in 0.18, but I think it hit a bug where it wasn't sending traffic to cid.contact, right? So, zero...
G
No, 0.18 was still Reframe, I think, and then it's been up until 0.20. So 0.19 and 0.20 maybe both have an incompatibility, but only with the accelerated DHT client, so normal users will still be sending it. We believe it's just the configuration that we were running on the gateways with the accelerated client that was causing it not to get sent.
C
Okay,
so
we're
looking
at
3.7
plus
6.3,
which
is
what
is
it
10
of
total
Network
in
Kubo.
A
And
just
to
confirm,
like
expanding
on
what
will
said
we
did
manually
validate
this
fix,
that
we
did
for
the
accelerated
client
issues
was,
was
functionally
working
like
we
were
able
to
see
traffic
passing
there.
What
I'm
talking
about
specifically,
is
that
we
set
up
alarming
and
monitoring
such
that
we
would
recognize
via
alerts
that
this
traffic
wasn't
being
passed
regardless
of
like
manual
verification.
Just
so
everybody
knows
that
happens
to
watch
this.
A
Anything
else
on
that
topic.
Anybody
who
wants
to
cover
it's
kind
of
it's
a
bit
of
a
loose
end,
I,
think
we
don't
know
what
the
decline
in
traffic
is
resulting
from
or
whether
or
not
it's
relative
to
the
deployment
on
the
gateways
that
we
have.
Are
we
just
seeing
a
decline
in
the
volume
of
traffic,
gee
I,
think
that
would
be
something
to
think
about.
A
I
would
I'd
love
to
have
you
go
away
from
this
meeting
kind
of
thinking
about
what
we're
seeing
in
regards
to
traffic
and
whether
or
not
they're
like
observable
aspects
of
the
network
that
we
could
leverage
to
kind
of
get
to
the
root
of
this?
This
mystery
because
I
think
from
the
ipni
team
side
of
the
house,
I
think
there's
not
an
obvious
solution
here.
We're
really
questioning
whether
or
not
it's
us
simply
seeing
the
a
lower
volume
of
traffic
lookups
or
is
it
you
know?
Is
it
the
entire
network?
A
We don't know, and I'm not sure there are other ways we could be validating it, really. I think we've kind of looked at all the angles that we might.
B
But yeah, I can try thinking around this, see if there's a logical explanation. I don't have one yet, but yeah.
A
We've chatted a little bit with Dennis about this as well, specifically with the work that he's doing. Obviously he's measuring something very specific that's somewhat different from this, but with how involved he is, it would be interesting to have him thinking about this problem while he's going through all the trouble of doing this network virtualization and doing these lookups. We have two minutes left, so I just want to get to the last item in mind.
A
Does anybody have anything pressing that they want to get to with the group while we've got everyone here? All right, well, Adin's not here today, but he had dropped a note for us regarding some issues with lookups.
A
It
looks
like
Mossy
actually
already
jumped
on
it
and
was
tearing
it
up,
but
I
don't
think
we
need
to
to
beat
this
topic
up
unless
anyone
wants
to
I
put
it
here
just
in
case
he
was
here
and
we
wanted
to
use
the
the
wisdom
of
the
crowd
to
take
a
look
at
it.
A
Right, we like finding a bug and fixing it. That said, it looks like we'll finish on time and not drag you all into other meetings. I hope everyone has a great week; thanks for joining. As always, it's a huge pleasure seeing you all. We'll talk to you soon.