From YouTube: 2023-02-07 IPFS Content Routing WG #5
Description
IPFS Content Routing workgroup #5
Review of the DHT cascading solution implemented by the InterPlanetary Network Indexer team, Bitswap provider delay optimization tests, retrieval options for passive Bitswap peers, and double hashing support for HTTP delegated routing.
Notes:
https://www.notion.so/pl-strflt/2023-02-06-Content-Routing-WG-5-9c406256896045689de32221719f5301
For more on the IPFS Implementers Sync, including calendar information, see:
https://www.notion.so/pl-strflt/Content-Routing-Workgroup-e59fa94a9c3f48d58480b7daf15bd356
Luma meeting series:
https://lu.ma/ipfs-routing-wg
A
Let me share my screen. All right — so I'll start out with a description of the purpose of this meeting, as well as the top-of-mind documents that are worth reviewing if you're new to this session.
A
The purpose of this meeting is to discuss content routing across both the IPFS and Filecoin networks as we bridge the gap between the two — the design, the probabilities, and the outcomes that may result from attempting to cover these bases. Some of the documentation here covers the more recent design discussions that have been ongoing, as well as links to important documents relevant to this working group. So: there is a design decision about reader privacy that we have deployed, but across the DHT it is still in implementation.
A
I think we may get some updates about that today. There's cid.contact, which is the primary content router that we operate here at Protocol Labs, and then an async discussion with our leadership regarding our design pathway towards implementing a handful of features relative to content routing, as well as the double hashing.
A
The DHT spec that is currently in process — I will say, for the next part of the meeting: we found the process of having each team give an update to be really beneficial, but some of those discussions are very, very dense. So, in everyone's best interest, and to ensure that we get all of our topics covered in the small section of time that we have, we may time-box those updates just a little bit and take a parking-lot approach.
A
You know — a parking lot for potential additional details about the updated items that we're going through. But thanks everyone for participating in that exercise; I think it's been helpful. I'll go ahead and start with the InterPlanetary Network Indexer updates to give you all some time to catch up with the other stuff. And Masih, please feel free at any time to jump in and let me know if there's some latest-and-greatest I might have missed. Leaving our last meeting, we had been discussing a cascading solution for messages from the DHT, to allow lookups across both the InterPlanetary Network Indexer and the DHT without having to run a DHT client. We've deployed that to production and tested it, with a note that we've also got CloudFront caching enabled — that's in place, so anybody that wants to can take a look at it.
A
I didn't throw the command in here, but Masih put a little curl together so you can actually test it out if you'd like — I'll make a reminder for myself to add that to the notes, if you want to give it a shot and see how it works. And then we added some refined NDJSON responses to the IPNI HTTP API, optimized for batch finds.
A
So that's a new capability that we threw in here using multihashes. And then we've also submitted a PR enabling saving advertisements and entries from our datastore in the network indexer as CAR files, with the ultimate goal of saving advertisement CAR files to S3 — so you'll have the ability to package up your advertisements and compartmentalize them for mobility.
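For readers who want to try the batch-find capability mentioned above, here is a rough sketch of consuming a streamed NDJSON find response — one JSON object per line. The field names in the sample are illustrative assumptions, not the confirmed IPNI wire format:

```python
import json

def parse_ndjson_find_results(body: str):
    """Parse a streamed NDJSON find response: one JSON object per line,
    each describing the providers found for a single multihash."""
    results = []
    for line in body.splitlines():
        line = line.strip()
        if not line:  # tolerate blank lines between records
            continue
        results.append(json.loads(line))
    return results

# Illustrative response shape (field names are assumptions):
sample = (
    '{"Multihash": "Qm...a", "Providers": [{"PeerID": "12D3Koo...", "Addrs": ["/dns4/example/tcp/443"]}]}\n'
    '{"Multihash": "Qm...b", "Providers": []}\n'
)
records = parse_ndjson_find_results(sample)
print(len(records))  # 2
```

The point of NDJSON here is that each result line can be acted on as soon as it arrives, which is what makes it a good fit for batch finds.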
A
So that is the InterPlanetary Network Indexer. ProbeLab or Bifrost — does anybody want to volunteer to jump in and go first? Yeah.
B
So on the ProbeLab side, it's been mostly two things. First, the Bitswap provider delay optimization: Ian produced a report that I encourage you to visit and review, and it isn't exactly what we expected. It's kind of weird, because we get worse time to first byte when we set Bitswap's provider search delay to zero compared to one second, which is what we have now for both the DHT and IPNI — so we're still investigating.
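For context, the setting under discussion is Bitswap's provider search delay: how long Bitswap relies on already-connected peers before falling back to content routers. A much-simplified sketch of the idea follows — Kubo actually runs these paths concurrently, and the function names here are purely illustrative:

```python
import time

def fetch_block(ask_connected_peers, find_providers, provider_search_delay_s):
    """Illustrative only: try already-connected peers (Bitswap broadcast)
    first, and only after provider_search_delay_s fall back to content
    routers (DHT / IPNI). A delay of 0 means the more expensive provider
    search starts immediately — the surprising case from the report."""
    block = ask_connected_peers()
    if block is not None:
        return block
    time.sleep(provider_search_delay_s)
    return find_providers()

# With delay 0, provider search kicks in right away:
print(fetch_block(lambda: None, lambda: "block-from-router", 0))  # block-from-router
```

The counterintuitive result being discussed is that starting the provider search immediately (delay 0) measured worse than waiting one second, suspected to be a resource-contention effect rather than a routing one.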
B
But we've noticed a strong correlation with heap usage, and there's some stuff that's a bit suspicious, so we're still looking into it. We want to put the test with Bifrost on hold while we investigate — it might be related to some Kubo issues, so we'll need to see. On the other side, we've been working on the double hash DHT, and the spec is out.
B
Please have a look and give some feedback and recommendations before we validate it and, let's say, lock it. Some stuff still isn't totally defined, such as the data format and the varint, so if you have strong opinions, please write them down — it's still flexible, so it can still be changed. And the migration plan is still not done.
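The core idea behind the double hash DHT spec being discussed — with details like the exact data format still open, as noted above — is that DHT servers store and serve records keyed by a second hash of the multihash, so they never see the plaintext identifier a client is looking up. A minimal sketch, assuming plain SHA-256 with no domain-separation prefix (the real spec may well add one; that is among the open questions):

```python
import hashlib

def second_hash(multihash: bytes) -> bytes:
    """Illustrative: derive the DHT lookup key as SHA-256 of the original
    multihash, so servers only ever handle the hashed form."""
    return hashlib.sha256(multihash).digest()

mh = bytes.fromhex("1220" + "ab" * 32)  # a sha2-256 multihash (code 0x12, length 0x20)
key = second_hash(mh)
print(len(key))  # 32
```

Because the mapping is one-way, a server can answer lookups for the hashed key without learning which content the client requested — the reader-privacy property mentioned earlier in the meeting.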
B
So that's for the spec. Concerning the implementation: ChainSafe has been working on the implementation and also on evaluating it, and they just gave us a report last week, and it was disappointing. We couldn't get the information we were looking for, so we cannot really use it as it is, and we may need to reframe the collaboration, or see with them how we should go on with the testing. So this is also delayed. Yeah — that was mostly what we've been working on.
A
And I did not forget — we owe you an update. I'm hoping to bring that to group discussion as one of our topics, on the priority for some of the stuff we were talking about last week. So we'll keep that on the table; it'll come back up in a little bit.
C
There's not much to update on our side, at least for this particular group. We've been dealing with various miscellaneous stuff, but nothing to take people's time here.
A
Well, thanks, Mark — we'll jump to IPFS.
D
Yeah, I just dropped a short comment that we are working on the bifrost-gateway binary, which is a replacement for full-blown Kubo. It does not have go-libp2p at all; it delegates all the IPFS things to a remote backend, which provides blocks and CARs. I'm mentioning it here because that backend, for the time being, will be Filecoin Saturn, and Saturn will most likely use delegated content routing over HTTP — so we'll be dogfooding it via proxy that way, I guess.
A
That's good to bring up. I think we were pretty aware of that over on our side of the house, so there shouldn't be any surprises, but thanks, lidel, for bringing that up — I don't know that the broader group would have been aware of it. All right, so that's everybody's updates; let's go ahead and grab on to some of these topics. And just a friendly reminder, if you have some top-of-mind ideas that you'd like to see covered towards the end of the meeting.
A
This section here at the bottom exists for that, so you can throw stuff in there and we'll pick it up as soon as we're done with our discussion of the topics we're bringing directly to the group. So — this may or may not be where, I think, we potentially have some overlap in coverage between this and the decentralized working group.
A
So if this is being covered by the decentralized working group, just go ahead and give us that nod and we'll pass it over there, and consider it an opportunity for the folks here to catch that update. But the question came up last week about Saturn's options for passive Bitswap peers — Will brought this discussion topic up: is there a way to measure how much gateway content currently is not in the DHT or indexer? So this is a question that we had proposed to Gui.
A
The question being what the priority is for Gui to provide us with this data. I also asked Dennis Trautwein to see if he could potentially look into this as well, and we're going to meet with him this afternoon to talk about whether there's some measurement capability here — so, Gui, I'll keep you up to date on that. But from the indexer side of the house, we wanted to confirm how much of a blocker this is for us, or if it just provides us with context to proceed with our work. And I know we'll wrap this up.
A
Yeah, so that's what I was hoping maybe Masih or Ivan could help us with context on — and if you can't, we can always grab Will later.
E
In terms of timeline, probably within a week, if that's possible. But Will is the man to talk to; I think he has the most context about this one.
A
Yeah, I think the thing we probably need to clarify here is whether or not knowing this is essentially a blocker for the work we're presently doing. I gathered from the last conversation that it was not, but let's get clarification from them, and ensure — potentially this will influence some of the design decisions that they're making over there, or that we're making.
F
Like, are there other options available to us, right? Are the people who are relying on Bitswap peering breaking the network, or are they breaking user experiences? What this means is that if I go to ipfs.io it'll load the page, and if I load it up in Brave it will not load the page.
F
So the problem is that there's no — if we surface the errors, what has occurred and why, that is also a viable option here. There are ways we can hack around it and keep things going, and mimic the way they are today, but at some point the conversations that happen around "should you be advertising in IPNI or the DHT" — the reason they're not happening is because PL is papering over the things for them, right? And then PL says:
A
This is actually — Adin, this is really interesting, because the other night I was bouncing between browsers while testing stuff, and I explicitly noticed that exact scenario: I was loading the web page in Brave, and it was just incredibly slow, and I was trying to figure out why.
A
This is actually something that I can probably take partial ownership of — not gathering the data. So, Gui, I think it'll be important to continue pursuing the data here. But there's also another aspect of this, which is getting other indexers online and having a more distributed indexer approach — and that affects us.
A
Frankly, I need to sort out with folks like Pinata how they're running their indexer instance, and see if we can potentially get them to follow the approach that we're taking here. So you're right — there are multiple solutions to this problem, or multiple ways around it. Gui, the data will be helpful, but we've also got to work with these ecosystem contributors and try to get commitments from them to operate the network in a way that actually benefits the network.
F
You could do that and then effectively run it against the list of gateway URLs at a later time, and that might be something you could hand off to other groups like Pinata. They'd say, "Okay, we did this indexer thing — does that mean our data is good yet?" And you could say, "Well, run this tool against some fraction of your data."
E
I think we have that tool already, if this is the verification thing: we have a CLI that, given a detached index or even an index provider, will tell you which ones are retrievable from cid.contact or any indexer.
A
That's a good callout — I'm going to tag you here just so you and I can interface on that, because I'd actually like to take that tool over to some of these folks so that they can actually see what's happening to the network with their present implementation of indexing. That would be really beneficial to the discussions I'm having with these community members. Now, given this discussion topic — Gui, I think Masih was right about needing at least the immediate data, like within the week.
B
I'll have to chase that — I won't do it myself, but I need to check, maybe with Dennis or Ian, if they can pick this up.
A
Cool — and I'll follow up with Dennis today. Okay, we've kind of started the discussion, so I'll let you know how that goes once we've caught up.
A
Okay, we can skip this second item — I think we've already essentially covered it. And then another item that I think we mentioned, but didn't get to dive into, was identifying the "long peering connections", as we'll call them: basically, connections that have been hard-coded from our side, and then peer IDs — which I think Adin mentioned — that have been hard-coded to us, and identifying whether or not additional actions should be taken.
F
Yeah, there's maybe a side thing, which is just about: are there peers with longer-lived connections that we should be keeping alive for some efficiency purposes? But that's, you know, Saturn's or other people's problem — deciding how efficient they want to be with their connection usage, closing and opening. It's the difference between a yes/no bit and "how much CPU am I using", kind of thing.
A
That makes sense — we'll go ahead and leave that there, just so the language permeates for anybody following along. And then, kind of a shift in gears: one thing we did bring up that I think we haven't gotten a conclusive decision or direction on right now is double hashing support in HTTP delegated routing. I think that might go in the direction of Gus, probably, if we have a timeline in mind — or Ivan, I saw you thinking about jumping in; don't be shy. No?
G
No, no — you put the question perfectly. So basically, yeah: any timeline in mind, and what is it going to be — how is it going to look — or are you still thinking about it? Yes.
A
Gus, can we get you in there for that?
H
I don't have anything to add here. Okay — I don't know. Obviously we need to reevaluate what we're working on right now, so I don't know where this fits in with everything else going on, because we haven't done that yet.
F
It was me. I'd probably try and do the other things that bring us up to par with some of the areas with IPNI where there's a little bit of dissonance. So one is the "which routers am I targeting" question — an option, basically the HTTP options like Masih added in the IPNI spec — and then IPNS.
E
So the main thing is to make sure all these systems work in harmony when we have the double hashing. So I guess for the delegated routing side we'll take the first pass at putting up a specification, which would be very similar to basically how IPNI works, and then we can get the conversations started to mature that. And then I guess we also need to push forward Gui's specification on double hashing itself on IPFS to begin with. So there are a few steps to go before we actually roll it out.
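As a sketch of the starting point for that delegated routing pass: provider lookups in the HTTP delegated routing API take the shape of a plain GET against a `/routing/v1` path. Building such a URL might look like the following — the base URL and CID here are placeholders, and whether a double-hashed variant reuses this path is exactly what the spec pass above would decide:

```python
def providers_url(base: str, cid: str) -> str:
    """Build a delegated-routing provider lookup URL in the /routing/v1
    HTTP API shape. A double-hashed variant might instead put a hashed
    key in the path; that is an open design question."""
    return f"{base.rstrip('/')}/routing/v1/providers/{cid}"

# "bafy-example-cid" is a placeholder, not a real CID:
print(providers_url("https://cid.contact", "bafy-example-cid"))
```

Keeping the double-hashed lookup in the same GET-per-key shape is what would let it slot into the existing delegated routing clients with minimal change.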
A
I think that's a good approach, and it also addresses the most immediate needs first. We'll take a whack at it and then follow up with you guys once we get some detail on paper, to hopefully start that process.
A
As for prioritizing that against the remainder of the IPFS backlog — I think the best thing we can do is try to provide you with an understanding of what this might block, essentially. I think you probably already have a good understanding of that, but we'll try to frame it in that context such that it helps with the prioritization discussion.
A
We've been doing a lot of prioritization discussion lately on the IPNI team, and if you all want to talk about how we went about that, I'm happy to share the results — I think it was a pretty constructive exercise, and we've got a good method going that we're happy to review with y'all as well. Side note. And then, Mario — I think one thing that kind of came up in the last meeting:
A
I don't know that we need a decision here, but it's been a lightly mentioned discussion topic that we should check in on and consider, which is Bifrost potentially helping the indexers with monitoring and CI. And Masih — specifically, we're thinking about cid.contact, right?
C
How we can collaborate and help out with keeping cid.contact up and running, CI, monitoring, and so on and so forth.
A
Bifrost pitching in to help us run cid.contact and using their toolchain to support it, potentially? Absolutely.
C
We want to collaborate and want to help.
I
I think you summarized it nicely. I guess it was a different world the other week, and now we're having to look very closely at what obligations and things we can take on. But at the very least, I think it makes sense: we have a lot of overlapping experience there, and we could definitely put some eyes on it and help point you in some right directions, if there's anything we can see there.
A
Thanks, Masih. I think one thing I've recognized from around the organization is that there are a couple of groups who presently would benefit from an actual in-depth review of the infrastructure supporting the current implementation of the indexer. This question has come up a few times — in the discussion that Dennis and I have been having as well. What we may want to do is set up a deep dive with the different teams that are interested in this detail.
A
We've put together a bunch of blog posts — the indexer team has recently done these — which describe in depth the behaviors of the indexer, but the infrastructure itself, I think, is probably a little more mysterious to those outside of the team. So let me check with our team during our sync today; we'll discuss how we might approach educating folks across our internal PL ecosystem about how the indexer's infrastructure works.
A
So we'll brainstorm on that — that'll be our action item — and then we'll come back to this group with a proposal for how we'll support those requests. In the meantime, of course, I hope everyone's in the IPNI Slack channel; you can always ask us questions there as well. We're super responsive there.
J
I guess the one thing I will add, though, is that all of our infrastructure is set up using the GitOps standard that Bifrost previously pushed through to other teams. So it is all infrastructure as code; it lives in storetheindex. There is an ops deployment that has the Terraform and such, which then represents what our infrastructure is. So it is all there in the repo as code, in the way that I believe we've sort of recommended as best practice across PL.
A
Let me just drop that here — thanks, Will, for adding that flavor; I think that definitely clarifies things.
A
In the chat, I see the discussion going on about the surprising results of the zero milliseconds. Do we want to bring that up?
A
Did we come to any conclusions in that discussion in the chat, or is there anything we want to make sure everybody catches that came from it? I see the link to the data there — let me just ensure that that's in our notes.
F
You want negative one second instead of zero? Yeah.
F
Curious — so, in order, as I flagged in the other issue: there are two things that need to happen if you're going to go messing around with the number, but probably you want to just do the thing better. One is this broadcast business, which is what the one-second delay is about. Okay — broadcast, just to make sure we're on the same page: broadcasts should not need to exist at all.
F
The first is that our code does not use sessions properly in all of the places. For example, if I try to do a gateway request right now — and this will show up with the Rhea work, and I'll probably make a PR today about this — there are three different requests that will have three different sessions instead of one session all the way through. This is bad, because sessions are meant to tie together logical data segments and stop you from needing to do extraneous lookups.
F
So one is: if you're bad at doing sessions, then the Bitswap broadcast helps fix it for you. The second is places where you're not going to be using, you know, indexers, DHTs, whatever — like mDNS on your local network — but we could explicitly call that out. And if you have peering agreements with peers that are designed to give you blocks, you could also explicitly call out those peer IDs instead of spamming everyone you're connected to.
F
You could spam just the pre-configured spammable targets, and that's basically it — then you don't need to do the spammy broadcast anymore. And my high suspicion as to why doing these low-latency things hurts you is that we're just adding more spam on more spam, and it's eating all the CPU and choking the resources for the work you need to do.
F
Maybe delaying Bitswap would help, but the DHT is making connections, which are generally more CPU-expensive than sending out messages on already-formed connections. So basically: make it so we can turn off broadcasting, and then you should make this go away. Also, our friends at number zero and all over the world will praise our names — or at least stop cursing them as much — if we do that.
B
Go ahead — all right. Just this: with Kubo 0.18 we've reduced the number of directly connected peers, so we've massively reduced the broadcast, but still — the DHT or IPNI, or facets of both... anyway, making new DHT connections seems to be slowing things down, even though the broadcast was reduced by eight times. So, yeah, I'm not really sure what the reason is that it's still slower.
A
I think this is a good document for review. I just want to confirm, out of the parties present on this call dealing with the design decision about how we would potentially replicate some type of session management rather than broadcasting in the scenarios you described, Adin: who does that kind of decision-making ultimately fall on, and how does it get folded into the framework? Is this a Stewards type of thing to wrestle with, or is it something that a broader group weighs in on?
A
Yeah, that's a good point — I want to make sure to capture it here.
A
Awesome, that's good stuff.
B
Gui, I would say the priority going forward wouldn't be to merge this pull request or to go on with the testing, but rather to go on with fixing the bugs and making sure we don't need a broadcast anymore — that seems a more reasonable next step than merging this PR.
F
The only argument I could see someone making, if they really wanted to try and merge the PR, is to say that it's unfair — Thunderdome does like a bajillion requests and has many connections to many peers, and such. I would be much happier if we tested this out on what is effectively an IPFS Companion node or something.
B
Let's say, with only the provider search delay set to zero, compare 100 requests per second, ten requests per second, and one request per second, and see if there is some improvement — and if there isn't, then we can just go on.
F
We can also try — if it's a CPU-bound issue, as opposed to weird worker or goroutine things (which probably should also be adjusted to account for the fact that IPNI and the DHT are not the same thing in resource consumption): IPNI keeping an HTTP connection open and just sending more requests down the pipe is not that expensive in comparison to opening many, many connections to many peers.
F
So this is just additional logic that needs to happen, I'm sure. At some level, the folks working on Lassie are going to have some fun fussing around with the same kind of stuff.
F
But this is the kind of session management — taking into account that we're dealing with multi-protocol systems that each have different characteristics for us to account for. Because there was a handy interface, we wrapped them in the same place, but they're not the same thing.
F
Also, hopefully we start to close off more of the improperly-plumbed-sessions issues as a function of needing to do that anyway for the Rhea work, because it has a blockstore that wants to be based on sessions — effectively contexts — so we're going to have to do this anyway.
A
All right, good topic. Does anybody else have anything top of mind? This is the only big discussion I saw in the chat, so now's your chance before we drop off.
F
It's been a while since I last looked at this, but I'm just looking at the double hash DHT stuff, which has some keying based on peer IDs. But for anything we do with HTTP-based records, peer IDs kind of won't exist, right? So I'm wondering if there's any thought for how that plays into the rest of this, or if that's TBD and we'll do it when the records show up.
F
I mean, right now it takes like two or three round trips to do my lookup with the IPNI reader privacy specification, and part of that is because I'm protecting peer info, and then I'm using some of the peer info to protect other metadata — and it seems like that's a little bit coupled to the existence of a peer record.
E
So if you're talking about the HTTP retrieval mechanism, which would get baked into metadata in IPNI, then you effectively remove one of the round trips, because what you get back is a key with which you can then retrieve the metadata that gives you directly the HTTP address you're looking for. So I think we can make it work such that it's not dependent on peer ID — the peer ID is not special; it's just part of a key.
J
I guess the other thing is: why don't we propose what it looks like in delegated routing? Part of the reason we've got the structure we have is so that we remain compatible with DHT records, so that we can import them in. But I think we believe that the integration with Kubo should, in most common cases, not take additional round trips, and that between local caching and the sort of generalized key usage, we can make things reasonably okay.
F
Yeah — I mean, in the short term it's only one extra round trip instead of two, because Bitswap doesn't need metadata. But if we start adding protocols that do need metadata — and, you know, people have lots of good ideas — then we're going to eat one more. We don't have to do it now, but I'm wondering, given that it might mess with how you guys do indices, whether we should try to reduce the number of round trips we have to do to an IPNI instance.
E
What would HTTP retrieval look like for providers retrieved from the DHT?
F
Shrug — I mean, if the DHT doesn't support them, they don't have to. You guys are a different system; you're not the same as them, right?
E
I see what you're saying — let me think about this one; this is a good one.
F
Some examples of hacky approaches to this: Will had one hacky approach, I had a different hacky approach, and both involved shoving stuff either into multiaddrs that were a little weird, or into the metadata — using whatever works, kind of.
F
It's a key-value store; I'm just trying to figure out how we want to start exposing that to people. Is the way my example uses metadata abuse, or just regular use? And do we want that to take an extra round trip as a penalty, or is it "oh no, we actually wanted that to be fast"?
D
Yeah, I'd say by the end of the week, because there's a queue with this one, and we also got at least three for the gateways; it's just that all the other stuff had to take priority. But I want to land it before the end of the week, because we already shipped a bunch of those IPIPs in Kubo 0.18, and this one will be part of that as well. I think we just need to take a look.
D
You know — do the final look, tweak any editorials, and then most likely merge it this week. Awesome.
A
All right, y'all — that summarizes our meeting, unless anybody has any other hot burning topics they want covered. While you've got the group here, speak now or forever hold your peace. If not, thanks everyone for sharing all of your constructive and beneficial opinions on all these topics — super helpful. I appreciate everybody coming here every week. We'll see you in two weeks, or online, of course. Take care, everybody.