From YouTube: 2023-02-23 IPFS Content Routing Workgroup #6
Description
Troubleshooting a subset of queries from Lassie via caskadht; double hashing DHT / migration plan; delegated routing and naming specs.
Notes:
https://www.notion.so/pl-strflt/2023-02-21-Content-Routing-WG-6-16748ffb16a543bca6295a2d9306782a
Luma meeting series:
https://lu.ma/ipfs-routing-wg
A: All right, y'all, we should be recording, and we'll go ahead and get started. This is the IPFS Content Routing Work Group number six. Steve, it's great to see you.
A: We are working through a number of issues. Just for anybody logging into this call: the purpose of this Content Routing Work Group is, of course, to address content routing design decisions and support the efforts that are ongoing in content routing across the IPFS and Filecoin networks as we bridge those gaps, and the group of people here is contributing to those designs and implementations of content routing across both of these networks. So, to get started, for reference:
A: There's a handful of important docs. I've tried to grab the most timely ones and place them up here, relative to the things we're currently working on, for anybody that's attempting to catch up or get context about the discussions happening within this group. Then, as we dive in, we'll go ahead and grab updates from the contributing teams to get a picture of where everyone's at with the last two weeks of progress. So generally I kind of open it up.
A: I think that's big stuff, guys. Thanks for that update; I think we've all been looking forward to that for sure.
A: Yeah, yeah. Actually, go ahead, and I'll take notes.
C: Sure. So on the IPNI side, over the last two weeks we've been improving monitoring of the double-hashing query pipeline. We're continuing to monitor the population of data on the double-hashed side so that we can eventually switch all the queries over to double hashing.
C: We worked on bulk exports of advertisements in CAR files, and then on making those available elsewhere. Initially we're targeting S3 as the storage mechanism, but the general idea is that we want to enable other indexers to import advertisements in bulk and become up to date very fast, as well as providing mirroring, so there are multiple places to get the advertisement chain. This would then reduce the chances of overwhelming an index provider when multiple indexer endpoints go to fetch advertisements from it.
C: We have caskadht deployed in production, with some hiccups; I think there's an item on the walkthrough to talk about that, so I'll get back to it. We have two new specifications on IPFS to extend the nice work that IPIP-337 did on introducing delegated routing; the two specifications introduce extensions for providing records.
C: I know Gus has been working on the streaming stuff, so the next step there is to roll out NDJSON support for the delegated routing endpoints, which I think we already have in one endpoint, but not at cid.contact. I think that's about it.
B: I didn't talk about streaming, which I forgot about, so I'll just mention it since it just came up. There's a PR open now, finally, for go-libipfs, to add streaming support to the client and the server implementations there. I've integrated that with Kubo and tested it against caskadht, but not against cid.contact, because it's not rolled out at cid.contact yet. But it works with caskadht.
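For context, the streaming mode being discussed boils down to reading newline-delimited JSON records as they arrive, instead of waiting for one fully buffered JSON body. Below is a minimal client-side sketch in Go using only the standard library; the endpoint path follows the /routing/v1 convention, and the record fields are illustrative rather than the exact wire schema:

```go
// Minimal sketch of consuming a streaming delegated-routing response.
// The field names and endpoint shape are illustrative assumptions.
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"net/http"
)

type providerRecord struct {
	Schema string   `json:"Schema"`
	ID     string   `json:"ID"`
	Addrs  []string `json:"Addrs"`
}

func streamProviders(endpoint, cid string) error {
	req, err := http.NewRequest("GET", endpoint+"/routing/v1/providers/"+cid, nil)
	if err != nil {
		return err
	}
	// Ask the server to stream results as they are found, rather than
	// buffering the full result set into a single JSON body.
	req.Header.Set("Accept", "application/x-ndjson")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	scanner := bufio.NewScanner(resp.Body)
	for scanner.Scan() { // each non-empty line is one provider record
		if len(scanner.Bytes()) == 0 {
			continue
		}
		var rec providerRecord
		if err := json.Unmarshal(scanner.Bytes(), &rec); err != nil {
			return err
		}
		fmt.Println(rec.ID, rec.Addrs)
	}
	return scanner.Err()
}
```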
A: That's great, Gus. Everybody keeps asking me how to pronounce that name, and I feel like you just nailed the winner: "caskadht", is it?
F: On our side, it's been mostly working on the specs. Thank you, everyone that gave a review; it has improved the spec. We added some mermaid diagrams, but the spec still needs some attention. So if you haven't given a review yet, please write one there and we'll make sure to address it so that we can move forward. Yeah, that's mostly it.
A: Cool. It looks like we don't have any folks from the Bifrost team here right now; they're probably all busy with Saturn rollout support, I'm guessing. To be fair, they've got a lot going on over there, so we'll check back in with them. I'll try to get some updates from them for the scope of this work group and get them into the notes asynchronously, so that we have them all in one place, and we'll skip topics relating to them.
A: I was going through some of their executional topics from the last Content Routing Work Group, and they've done a lot of updates in their GitHub that merit attention. I'll mention them, but we'll obviously try to catch up with them to see where they're at, and I'll throw the links in here relative to these. First: timing for the rollout of delegated routing in v0.18.1.
A: They have a GitHub issue open right now to deploy that to all of their cluster, and it looked like they were probably going to complete it pretty soon. So I wanted to find out from them what the timeline really was: whether or not it was actually within the week, which is what they'd been proposing, or whether there's more to it. We'll skip over that and potentially come back to it later, especially since those folks aren't here right now. I put at the top of the list:
A: the caskadht NDJSON item. Lidel's been noticing some 404s, I think specifically on DHT lookups from Saturn that are hitting the indexer via Lassie, and I wanted to make sure: do we have a clear picture, or logs, of which queries are falling victim to this failure, and whether or not the indexer team...
C: Over the last week I've been observing caskadht. There was an issue with the resource manager, where even increased resources were still causing the accelerated DHT client's refresh mechanism to fail.
C: I went ahead and disabled the resource manager completely, and that rectified the problem. Of course that comes with disadvantages, but in terms of service restarts we're okay, because it's running in a Kubernetes-style environment where restarts are no problem. I've bumped the resources on it; the whole process seems to be very CPU-intensive. That's something I'd love help on from maybe Adin or Lidel: just getting a general idea of whether it's expected for lookups to be this CPU-heavy.
C: So, I figured out why the 404s are happening. There are two places where things go wrong. The first one is the circuit breaker mechanism in indexstar, which is the load balancer in front of cid.contact.
C: Whenever failures happen too often, the circuit breaker opens and excludes backends from the endpoint, and then it heals itself periodically (half-open, retry, and so on). What happens is that when queries hit cid.contact for cascading DHT lookups with a non-streaming content type (and non-streaming responses obviously take longer), the cascaded DHT lookup takes longer than the timeout. The timeout gets fed back into the circuit breaker, the circuit breaker decides that backend is no good, and it excludes it from the backends. From then on we get a lot of 404s, because no request gets routed to it. Then it half-opens, and the same thing happens again. So what I've done to fix that is to migrate all the propagation of lookups on cid.contact onto streaming internally, regardless of what the Accept header on the request side is.
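A minimal sketch of the failure loop just described; this is illustrative only, not indexstar's actual code. A slow backend whose responses exceed the client timeout is recorded as failing, the breaker opens and excludes it, and callers see 404s until the breaker half-opens and the cycle repeats:

```go
// Illustrative three-state breaker: closed, open, half-open-by-cooldown.
package main

import "time"

type breaker struct {
	failures  int
	threshold int           // consecutive failures before opening
	openUntil time.Time     // while in the future, the backend is excluded
	cooldown  time.Duration // how long to stay open before retrying
}

// allow reports whether requests may be routed to this backend.
// While open, every request skips the backend, so callers see 404s
// even though the backend would eventually have answered.
func (b *breaker) allow() bool {
	return time.Now().After(b.openUntil)
}

// record feeds a request outcome back into the breaker. A slow backend
// that exceeds the client timeout counts as a failure, which is exactly
// the interaction described for non-streaming cascaded lookups.
func (b *breaker) record(err error) {
	if err == nil {
		b.failures = 0
		return
	}
	b.failures++
	if b.failures >= b.threshold {
		b.openUntil = time.Now().Add(b.cooldown)
		b.failures = 0
	}
}
```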
C: What happens then is: if we come across a provider record that already has addresses, we immediately return it. If it doesn't have an address, a second goroutine goes and looks up the addresses and then feeds the results back. I've added extra optimizations there, like checking the local peerstore before looking up addresses: if the peer exists there, we just use that. I've also added rate limiting for peers without addresses, because a lot of lookups fail because no address is found, and those are fairly expensive.
C: So I've added a TTL caching mechanism to only retry within, I don't know, 20 minutes, not every time we see that peer ID. And the third and last thing I've done (I don't love it, and Adin has comments on this) is that I update the libp2p peerstore with the addresses I find during lookups, and I think that should improve lookup hop count as well as just the general returning of results. I've been testing it extensively; I'm hoping to roll it out today, and I'll post updates. Please, go ahead.
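A self-contained sketch of that lookup flow, with hypothetical names (not caskadht's actual code): records with addresses stream out immediately; the rest are resolved asynchronously, checking the peerstore first, skipping peers behind a negative TTL cache, and writing discovered addresses back when the request context has already been canceled:

```go
package main

import (
	"context"
	"sync"
	"time"
)

type Record struct {
	ID    string
	Addrs []string
}

// AddrResolver abstracts the pieces described above; all names are assumptions.
type AddrResolver struct {
	Peerstore func(id string) []string                      // cheap local lookup
	DHTLookup func(ctx context.Context, id string) []string // expensive network lookup
	CacheAddr func(id string, addrs []string)               // write back to the peerstore

	mu       sync.Mutex
	lastFail map[string]time.Time
	RetryTTL time.Duration // e.g. 20 * time.Minute
}

// shouldRetry implements the negative TTL cache: peers whose address lookup
// recently failed are not retried until RetryTTL has elapsed.
func (r *AddrResolver) shouldRetry(id string) bool {
	r.mu.Lock()
	defer r.mu.Unlock()
	t, ok := r.lastFail[id]
	return !ok || time.Since(t) >= r.RetryTTL
}

func (r *AddrResolver) markFailed(id string) {
	r.mu.Lock()
	defer r.mu.Unlock()
	if r.lastFail == nil {
		r.lastFail = map[string]time.Time{}
	}
	r.lastFail[id] = time.Now()
}

// Stream relays provider records: records that already carry addresses go
// out immediately; the rest are resolved in a separate goroutine.
func (r *AddrResolver) Stream(ctx context.Context, in <-chan Record, out chan<- Record) {
	for rec := range in {
		if len(rec.Addrs) > 0 {
			out <- rec // has addresses: return immediately
			continue
		}
		go func(rec Record) {
			if addrs := r.Peerstore(rec.ID); len(addrs) > 0 {
				rec.Addrs = addrs // local hit, no network lookup needed
			} else if !r.shouldRetry(rec.ID) {
				return // failed recently; skip the expensive lookup
			} else if rec.Addrs = r.DHTLookup(ctx, rec.ID); len(rec.Addrs) == 0 {
				r.markFailed(rec.ID)
				return
			}
			select {
			case out <- rec:
			case <-ctx.Done():
				// Too late to stream the result; remember the addresses so
				// the next query for this peer ID can be served locally.
				r.CacheAddr(rec.ID, rec.Addrs)
			}
		}(rec)
	}
}
```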
D: Yeah, this looks good. My suspicion for why things are taking a long time for some of the peer records
D
Is
that
the
way
in
which
like,
if
you
do
a
fine
Pier,
the
way
in
which
you
determine
you
determine
which
is
the
best
like
record
for
them,
is
basically
you
just
connect
to
them,
because,
due
to
Legacy
insane
reasons
that
maybe
maybe
we'll
get
fixed
when
gigos
and
rewrites
the
RPC
code
is
that
fine,
node
and
find
Pierre
are
the
same
RPC
call
instead
of
different
ones.
D
So
we
can
have
like
slightly
different
logic
that
that
returns
there
a
little
earlier
but
what's
happening,
is
like
you're
trying
to
connect
to
a
peer
that
may
or
may
not
even
be
around,
or
you
know
something
like
that.
So
that's
that's
I
think
what's
happening
what's
happening
there
so
doing
the
asynchronous
streaming.
The
thing
with
the
timeout
is
like:
that's
that's
the
way
to
go
in
theory.
You
could
end
up
in
a
situation
where,
like
you,
discover
more
addresses
later,
but
it
I
think
it's
probably
fine.
As
it
is
so.
C: So, when the addresses are discovered later (this goes into a bit of implementation detail): if the context is not canceled, I add them to the outgoing stream of results; if it is canceled, I've at least updated the peerstore, so the next time that peer ID is found we benefit. It's just the best we can do, really.
D: You asked about the CPU usage. The short version is: yeah, it sucks. It's not that it uses a lot of CPU that's the issue; the issue is more that it does it periodically, in a spiky way, and that spike is the thing that makes you miserable; you're very sad during the refresh. That can be smoothed out.
C: This brings up a question for me: we are using the default connection manager, which I think has, is it a 192 high-water limit? Do you think we should bump that up too?
D
Yeah
I
agree
Gus,
but
yeah
you
should.
We
should
have
a
higher.
You
should
bump
the
high
the
high
waters
and
yeah
in
low
Waters,
because
they
you
need
to
make
those
connections
anyway,
right
over
the
course
of
again.
If
you've
got
a,
if
you
get
a
thousand
queries
over
the
course
of
you
know,
X
minutes,
then
over
those
X
minutes
you
will
have
to
hit
everybody.
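For reference, raising the water marks on a go-libp2p host looks roughly like the sketch below; the numbers are illustrative, not a tuned recommendation:

```go
package main

import (
	"time"

	"github.com/libp2p/go-libp2p"
	"github.com/libp2p/go-libp2p/core/host"
	"github.com/libp2p/go-libp2p/p2p/net/connmgr"
)

// newHost builds a libp2p host with raised water marks, since a cascading
// lookup service has to dial roughly one peer per query anyway.
func newHost() (host.Host, error) {
	cm, err := connmgr.NewConnManager(
		1024, // low water: the manager trims down to this many connections
		2048, // high water: trimming kicks in above this many connections
		connmgr.WithGracePeriod(time.Minute), // spare brand-new connections
	)
	if err != nil {
		return nil, err
	}
	return libp2p.New(libp2p.ConnectionManager(cm))
}
```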
C: Yep. And on the CPU side, the main thing I'd love to get my head around is, from an operational perspective, how much to give it. I gave it two cores; I've bumped it to three cores now. It doesn't matter if it's hungry, it's just a matter of giving it enough resources. So I guess I'll monitor it with the extra resources and keep bumping until it's 80-ish percent utilized.
B: So for me, I think it should be really high priority, but I don't know how it stacks up against everything else.
D: So there's the standard client, which says: I have a fixed routing table size, and that's, you know, 20 peers per bucket, and that's it. And then there's the accelerated client, which says: I am putting everyone in my bucket. What we want is something that's just a cache: it builds its size over time, and that way it scales based on how much you use it, so you don't even have to think about it. Are you doing a thousand queries every 10 minutes?
D: If so, your routing table is going to be full all the time. Are you doing one query every X minutes? If so, you're going to have a small routing table. I think that's kind of how we save time there.
D
But
it's
it's
a
bit.
It's
a
bit
of
a
pain,
the
reason
I
didn't
do
it
initially
was
I
I
had
no
time,
and
it
was
it's
easier
to
reason
about
I.
Guess
it's
easier
to
reason
about
when
the
query
should
terminate
when
you
have
everybody
in
your
routing
table,
so
you
have
to
like
do
a
little
more
effort
but
I
think
it's
it's!
It's
good
effort.
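A toy sketch of that cache idea; this is hypothetical, not an existing go-libp2p-kad-dht mode. Keep any peer touched by a query and expire entries by age, so the table's size tracks the query rate on its own:

```go
package main

import (
	"sync"
	"time"
)

// cachedTable sizes itself by usage: heavy queriers keep a large, fresh
// table; an idle node's table shrinks as entries age out.
type cachedTable struct {
	mu   sync.Mutex
	seen map[string]time.Time // peer ID -> last time a query touched it
	ttl  time.Duration        // e.g. 10 * time.Minute
}

// Touch records a peer observed while running a query.
func (t *cachedTable) Touch(peer string) {
	t.mu.Lock()
	defer t.mu.Unlock()
	if t.seen == nil {
		t.seen = make(map[string]time.Time)
	}
	t.seen[peer] = time.Now()
}

// Peers returns live entries, dropping anything not used recently. A node
// doing a thousand queries every ten minutes keeps a full table; a node
// doing one query every X minutes keeps a small one.
func (t *cachedTable) Peers() []string {
	t.mu.Lock()
	defer t.mu.Unlock()
	var live []string
	for p, at := range t.seen {
		if time.Since(at) > t.ttl {
			delete(t.seen, p)
			continue
		}
		live = append(live, p)
	}
	return live
}
```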
B: Before we move on, I wanted to mention two things real quick. I don't know if you're aware, but there are other dragons in the accelerated DHT client. Like, if the refresh fails, for example, you'll have a stale routing table and you won't know it. There was some talk about the resource manager earlier; I'm not sure
B
If
that
was
this
specific
system
or
not,
but
like
just
be
aware
that
like,
if
you
could
throttle
or
something
doesn't
work
when
it's
refreshing,
the
routing
table,
you
could
have
stale
entries
and
or
an
incomplete
routing
table
if
it's
like
the
bootstrapping.
Like
that's
another
thing
too,
is
that
like
when
it
bootstraps
you
don't
know
when
it's
done,
you
just
kind
of
have
to
guess.
B: I think it does, yeah. Great.
A: Production, yeah; it does other things to production around here, buddy.
A: Great. Another thing that came up: we met with Dennis late last week, partly to discuss the structure of the indexer and its architecture, and likewise to get his input on the scope of how much content, as we've mentioned, is not currently in the DHT or the indexers. I think we got pretty good feedback from that; I can drop some of the data.
A
He
referenced
a
talk
that
he'd
done
where
he
had
kind
of
analyzed
a
lot
of
this
and
shared
some
graphs.
I
have
those
links
and
I'll
I'll
drop
them
in
the
notes
Here
for
anyone,
that's
interested,
but
I
wanted
to
check.
A: Masih, did we need any more data or details to support allowing indexing for retrievals, on the basis of understanding that kind of "ghost" data in the network? Do we need deeper analysis, should we take that further, or did we get enough feedback from that discussion to feel confident about our design approach?
C: The main thing is that one third of the CIDs won't be discoverable. Even if you have IPNI and DHT cascading enabled, the chance of success is still around 60%, because based on the data it looks like one third of the CIDs get discovered via Bitswap gossip and never end up on the DHT.
D: I don't know; I think we can fight this with the product folks as we push further up. But I think it's time to start making people responsible for their data. One of the classic IPFS problems is that everything looks like a "get" problem instead of a "put" problem. Because content-addressed data can come from anywhere, people say: I tried to fetch bafy-foobar and it didn't happen;
D
Ipfs
must
be
broken
when,
instead,
the
result
is
like
the
guy
hosting
it
turned
off
his
laptop
or
is
like
going
through
a
tunnel
or
didn't
advertise
his
records
anywhere
and
like
now.
We
get
a
new
Gateway
binary
with
new
error
messages
and
stuff,
and
we
can.
We
can
start
to
surface
some
of
these
issues
a
little
further
Upstream,
because
otherwise
we
end
up
routing
stuff.
It's
like
people
come
to
us.
They
say
the
Gateway
is
broken
and
we
ask
where's
the
data.
D
That's
coming
from
pinata
pinata
has
a
problem
send
them
to
pinata
like
the
more
we
can
remove
ourselves
from
being
in
the
loop
by
helping
users
figure
out,
what's
happening
the
better.
Ideally,
there
are
just
no
problems
at
all
and
we
want
that
to
happen.
But
as
long
as
problems
can
exist,
it
would
be
good
if
users
could
like
self-service
a
little
bit.
D: ...and to cut down the amount of spam. We run into people, like Infura and whoever, who say: please, could you stop Bitswap from spamming me with nonsense; my data is hosted somewhere where the IOPS are expensive or annoying, so please don't make me deal with this. And setting up distributed Bloom filters is a pain, so they have to deal with all of that.
B
Yeah,
it
would
be
nice
I,
don't
know
like
it
would
take
some
scoping
or
something
to
figure
out
if
it's
feasible
or
not
but
like
if
we
didn't
double
down
on
bit
swap
for
Content
routing
here
like
if
we
could
use
this
as
an
opportunity
to
improve
the
network
and
not
continue
to
propagate
the
use
of
bit
swap
as
a
Content
router.
That
would
be
awesome.
B
D
D: And I think the reduction of spam also relates to what we talked about last time, about dropping the time to wait before you do DHT queries. That happens when you fix sessions: when you fix the usage of sessions so they're plumbed through properly, your CPU usage goes down and you don't have to worry about this stuff so much, provided the Bitswap stuff is correctly optimistic.
A
I
think
this
there's
a
lot
about
for
for
people
who
haven't
caught.
Like
the
last
discussion
we
had
and
content
routing
or
group
number
five.
There
was
a
deep
discussion
about
this
topic
that
I
I
think
is
definitely
worth
catching.
It's
kind
of
towards
the
later
end
of
the
video.
If
you
want
to
give
it
a
go,
one
thing
that
I
took
away
from
that
is
I'm,
putting
together
kind
of
a
like
a
task
in
Road
Mappy
table
for
Content
routing
as
a
whole.
A
I
think
that
would
kind
of
serve
the
whole
group
as
like
a
valuable
visualization
of
like
our
focus
of
efforts
and
Gus
I,
really
heard
you
kind
of
mentioning
like
prioritization
in
the
backlog.
You
have
a
massive
backlog
and
like
where
do
we
insert
some
of
these
things
in
it?
A
I
wanna,
you
know,
do
the
best
support
that
we
can
from
this
work
group
to
kind
of
ease
that
burden
in
any
way
that
we
can
by
like
showing
some
goals
that
kind
of
weave
together
all
of
the
interested
parties
and
contributors
in
this
group,
so
I'm
working
on
that
it's
partially
put
together.
But
it's
not
quite
finished
enough
that
I'd
throw
it
in
front
of
everyone
just
yet
and
I'll
love
to
get
all
y'all's
feedback
on
it.
A
And
let
me
know
if
that's
an
easier
way
to
kind
of
work
through
these
problems,
I'm
hoping
to
kind
of
add
that
that
glue.
If
that
seems
like
a
good
idea
to
everyone,.
A
C
A
A: Cool. I put up the final call for comments on 373. I did want to ask: is there a line-in-the-sand date we should be looking toward with that, or is it more a line-in-the-sand type of action, relative to closing it out for comment?
D: I think, to some extent... I mean, he's joking a little, but he's also right, which is that you learn about the thing as you build it: you write the proposal for what the spec is probably going to look like, and then you realize halfway through coding it that you missed a spot.
G: To give maybe less-joking feedback: for previous IPIPs we sometimes started with just the spec and then added working code, usually against a reference implementation in Kubo, because that was the easiest; and if you ship it with Kubo, you've more or less deployed it to the majority of the network. So I'd say it's perfectly fine for that IPIP pull request to stay open until we can say, okay, in the comments we agree
G
The
spec
is
good
enough
to
try
to
implement
it.
Then
we
have
reference
implementation
and
then
we
bring
that
up
to
ipfs.
Implementers
sync
call
to
inform
like
why
their
community
that
hey
this
this
thing
it
has
a
working
implementation.
Here
we
are
waiting
for.
We
are
planning
to
ship
that
with,
let's
say
next
version
of
Kubo,
and
that
gives
people
applies
pressure
for
people
to
who
are
interested
in
this
to
to
do
the
first
or
maybe
final,
look
at
it.
G
So
historically
it's
it
worked.
Pretty
fine
I
feel
this
is
a
pretty
big
change
for
entire,
like
ipfs
ecosystem,
because
we
start
talking
about
privacy
in
some
aspects,
so
it's
perfectly
fine
to
keep
it
open
until
we
are
really
really
happy
with
both
spec
and
implementation,
and
it's
been
often
that
we've
shipped
something
with
Kubo.
We
agreed
on
the
ipfs
implementation
implementers
saying
that
yeah
this
IPP
is
ratified.
G
We
just
wrote
comments
it's
ratified,
but
we
still
kept
it
open
for
like
a
few
weeks
of
feedback
after
we
shipped
it
in
case
there
are
so
much
cases
that
require
fixing
and
then
I
think
more
resolution
to
the
spec,
so
I
feel
that's
more
or
less
how
we
would
do
it.
No.
A: That's great perspective, and I didn't mean to imply with my question that this wasn't a good way to go about it. I just wanted to make sure we were looking at it appropriately from the perspective of everyone in the group, and that is the perspective we'll look at it from. That works.
C: On the IPNI side we have an implementation of this, which I think is 90% compatible with the spec. I've left some comments on the spec, and Gui has kindly already replied; I haven't gotten back to it yet. Okay, I promise I'll get back to you! That's the only implementation I know of that exists. But also, adding to what Gus wrote in the chat: for this whole double hashing effort, a working version of it should also include the IPFS DHT, so it's a much, much bigger chunk of work.
F: So, just a word on the implementation. I totally agree that the spec can remain open for a while, until we're ready to make the migration to this new DHT, because it will be a protocol-breaking change. There is already an implementation that doesn't exactly fit the spec, but is kind of close to fitting it, that has been implemented by ChainSafe, and the collaboration has been a little bit difficult lately.
F
So
we
need
to
figure
out
if
we
decide
to
finish
up
finish
it
ourselves
or
if
we
ask
them
to
finish.
The
implementation
and
I
just
want
to
make
sure
that
everyone
agrees,
at
least
on
the
IDS
on
the
concept
before
we
go
ahead
and
we
make
them
Implement
what
we
decided.
But
we.
D
D
D: The other thing is figuring out what the ramifications are, which is going to be hard to know exactly right now. How does the performance change as a function of doing the double hashing? Things like: I know that for the IPNI spec it increases the number of round trips; last I checked it was to, like, three. Maybe we can cut that down.
D: Maybe we can cut it down. It also increases things like bandwidth, with the, whatever, the k-lookups thing that tries to protect you from being spied on that way. How much is that going to hurt people when we start scaling this?
F: Yeah. So we had a report: we hired ChainSafe to do a performance report on exactly that, but it wasn't satisfying, we couldn't get the information out of it, so they're working on it again and we're waiting for the results. But concerning the number of hops, it's expected to be the same; I'm fairly confident it's not going to change. The network load is expected to increase by just the k factor, so it's possible to compute it on average. And for CPU...
H: Hey, I'll just jump in with a couple of things on this. I obviously want to get this over the line; there's been a lot of great work and discussion to get it this far. A couple of things, realistically. You know, I don't foresee us having ChainSafe do the rest of the work, just given where we're at with budgets: we're winding down contracts and we're not extending things. So this is likely going to fall to someone in this group to do the additional implementation work.
H
So
that's
one
thing:
the
other
is
likely
early
next
week.
The
maintainers
need
to
re-huddle,
because
you
know
last,
our
last
release
got
sort
of
you
know
fairly
disrupted.
You
know
pouring
into
Ria
and
then
there
have
been
some
operational
events
that
have
come
up
and
then
there's
some
follow-up
follow-ups
from
that.
So
there's
so
many
good
things
here.
I.
H
We
just
have
to
be
honest
with
ourselves
about
what
we
can
do
when
obviously
ghee
is
very
much
owning
and
driving
double
hashing,
but
we
need
to
get
him
paired
up
with
someone
on
the
implementation
side
to
work
on
Landing.
This
I'd
like
to
do
it
soon,
I,
just
I,
don't
have
line
of
sight
today
as
to
who
that
is,
and
when
that's
going
to
happen.
H: So this one for me is a little bit up in the air. And obviously, yes, we need to get a decent report on the resource utilization;
H
I
know
we
weren't
quite
satisfied
with
what
chains
they
could
produce
so
far.
So,
like
anyways
I
imagine
this
spec
will
be
open
for
a
bit.
There
are
still
a
little
ways
off
before
rolling
it
out,
but
I
I
certainly
don't
want
to
be
in
this
limbo
state
for
for
months
on
end,
but
I
don't
have
a
plan
yet
for
how
we
get
out
of
it.
But
those
are
some
of
the
things
we've
got
to
work
through.
D
E: For IPNI, you basically need one lookup to get the encrypted peer ID, the same as in the DHT, and another lookup to get the IPNI-specific information, such as the metadata: which protocol the data can be retrieved over. And they're encrypted with different keys.
E: So yeah, that basically needs to be two lookups instead of one, because in IPFS you assume Bitswap by default, while in IPNI you can have other protocols.
D: You do a libp2p connection and you say: I speak these things, in this order; which ones do you speak? And then you take it from there. You could do that with IPNI, but we're trying to add in more information, whether it's things to support non-libp2p protocols, or just to give you more metadata about the libp2p protocols, like the hints for the Filecoin GraphSync stuff, exactly.
C
I
think
that
the
main
use
case
right
now
is
the
filecon
graph
sync
thing
which
just
gives
you
like:
PC
ID.
Is
it
verified
and
it's
not
verified,
which
is
you
know
it's
lip.
P2B
is
not
a
place
to
put
it
right,
you
see,
so
we
have
to
put
it
somewhere
and
that's
where
the
metadata
thing
in
ipni
comes
from.
C
I
think
the
main
motivation
for
it
is
to
reduce
the
explosivity
of
storage
requirement
because
all
these
records
are
encrypted.
So
what
we
are
doing
is
that
we
encrypt
the
key
and
then
encrypt
the
value
and
then
associate
the
key
to
the
value
and
allow
you
to
look
up
keys
by
values
rather
than
encrypting
key
and
value
together,
and
then
you
end
up
with
different
records
for
every
single
entry
which
gives
you,
which
needs
much
higher
storage
needs
right.
It's
optimization,
yeah.
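A loose sketch of the double-hashing shape being described; the actual key derivation and record format in the spec under discussion differ, and the derivation below is purely illustrative. The server indexes by a second hash of the multihash, and the value is sealed under a key derived from the multihash, so only clients that already know the multihash can read it:

```go
package main

import (
	"crypto/aes"
	"crypto/cipher"
	"crypto/rand"
	"crypto/sha256"
	"io"
)

// lookupKey is what gets sent to (and indexed by) the server: a second
// hash, so the server never sees the plaintext multihash being queried.
func lookupKey(multihash []byte) [32]byte {
	return sha256.Sum256(multihash)
}

// sealRecord encrypts a provider record under a key derived from the
// multihash (hypothetical derivation, not the spec's). Encrypting the key
// and value separately, rather than one ciphertext per entry, is the
// storage optimization mentioned above.
func sealRecord(multihash, record []byte) ([]byte, error) {
	key := sha256.Sum256(append([]byte("record-key:"), multihash...))
	block, err := aes.NewCipher(key[:])
	if err != nil {
		return nil, err
	}
	aead, err := cipher.NewGCM(block)
	if err != nil {
		return nil, err
	}
	nonce := make([]byte, aead.NonceSize())
	if _, err := io.ReadFull(rand.Reader, nonce); err != nil {
		return nil, err
	}
	// Prepend the nonce so the client can decrypt.
	return append(nonce, aead.Seal(nil, nonce, record, nil)...), nil
}
```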
D
Maybe,
although
I
guess
to
some
extent
the
way
I
see
it
is
that
the
DHT
is
much
harder
to
experiment
with,
because,
like
the
rollouts
of
migrations
are
hard
and
ipni
is
easier
to
experiment
with
which
means
we're
still
like.
We
can
use
ipni
to
explore
like
the
metadata
space
and
what
that
needs
to
look
like
and
once
that's
more
fleshed
out.
Then
it
becomes
easier
to
say:
okay
well,
the
DHT
should
support
something
similar,
but.
F
D
D: as much there, because at the moment it just takes us a long time to do it. I mean, there's stuff that the Fluence people, I think, want to do, where you just deploy Wasm code and it runs everywhere; but that's not the world we're in at the moment.
A: Okay, we'll keep that topic in mind as potentially a work stream for future design discussion; I think it's a good one. I put two items here that maybe we can remediate, maybe not, but I just wanted to close the loop on them from our prior discussions. One was the Thunderdome testing: I know we had a pretty good discussion about potential ways to approach the traffic responses we found there, and I wanted to check in and see, over the course of the last two weeks,
A: whether we'd gotten any additional feedback on it, or recognized anything different.
F
So
we
didn't
run
any
new
Thunderdome
experiment,
but
I
think
the
explanation
given
by
a
deal
some
sums
up
quite
well,
why
we
have
a
worse
results.
Worst
ctfp
with
a
lower
delay,
got
it
so
yeah
that
there's
a
GitHub
post.
D
Yeah,
the
good
news
is
that
at
least
some
of
the
sessions
are
going
to
at
least
for
Gateway
code
are
going
to
be
getting
fixed
likely
as
a
function
of
the
rare
stuff,
because
we're
probably
going
to
start
bundling
we're
going
to
be
bundling
everything
together,
so
that
we
can
make
like
single
requests
area
which
will
force
us
to
correctly
handle
contexts,
or
at
least
it
should
force
us
to
do
that,
and
so
that
will
handle
the
Gateway
Pathways
random.
D: Random other Kubo commands, maybe less so, but those pathways are both less critical and, I guess, a little less gnarly, so we can go find them.
A: Thanks, Gui and Adin; I think that definitely answered my question there. And then, I think we kind of covered this earlier, actually, so we don't have to beat it up more, but I wanted to inquire about the DHT migration plan. It sounds like, Gui, you're taking ownership, but there is obviously some prioritization to do, and some timeline figuring-out probably underway.
F: Yeah. We discussed this during the ProbeLab colo in Engelberg, and I suggested a plan to make a new DHT that would be upgradable, meaning it's very easy to push upgrades to the DHT. But this idea would require a lot of engineering work to get done, and we'd need to implement it in both Go and Rust, and it seems like we don't have the capacity at the moment.
F: So this idea was postponed, and we're going more for, like, forking the DHT that we have. So yeah, we'll need to plan how we want to do this; I haven't put much effort into it yet, as I've been more focused on the other solution, but once I get the capacity I'll work on it. If anyone has input, please let me know.
F: Yeah, that was Max's words, because there's other stuff that's more urgent to tackle.
F: Yeah, but that's in addition to the double hashing. Which means we can either have one large migration containing both an upgradable DHT and double hashing, so that all future DHT migrations will be easy; or we can do a quicker migration now with only double hashing, which is less implementation work, but then the next migration will be painful.
D: Yeah, I mean, I guess it just depends on how you do the multiple queries. What you could do is say: well, it's too small, I won't bother using it; I only start using it once it's big enough. And that would sort of allow the network to grow a little bit.
D
But
if,
if
everyone
has
to
you
know
it
depends
how
expensive
this
is.
If
it's
actually
fine
for
everyone
to
do
multiple
queries,
then
then
it's
fine.
C
D
C
B
B: Consider, for example: a big company a couple weeks ago had some serious outages on the network because they couldn't reprovide fast enough, and if we cut the reprovide throughput, it's not only going to be expensive for them, but there might also be, you know, operational events and stuff from doing that.
A: I hate to cut us off, but we have hit time. These are very valuable discussions, and I want to thank you all again for participating in them; this context is incredibly important. I'll get to working on that roadmap, and I'll summarize, as always, and put together a post in the Content Routing Working Group. I hope you all have a great evening or morning, depending on where you're at.