From YouTube: 2023-07-11 Content Routing Workgroup #15
Description
The team discusses content routing improvements to IPFS nodes, including bootstrap node persistence released with Kubo v0.21, the state of deployment of the new Kubo version across the Bifrost cluster, and ongoing architectural efforts on IPNI. Additionally, we address handling HTTP records to support multiple routers.
A: Thank you all for joining. I'll share the location of the notes in the chat so you can access them; it says they're publicly available notes. Hey, Dean, welcome. Nice haircut, man. We'll jump in. So I did add one document to our "stuff you should know" list: the IPNI metrics spec doc, where we describe all of the metrics that we expose through our sitreps for IPNI, as well as future metrics that we'd like to get access to, and some details about the hair on trying to expose some of those metrics, stuff we're wrestling with internally by virtue of our architecture, but also some insight into what you can understand about IPNI. If you look at the sitreps or the Bedrock metrics docs that we publish routinely, that document is there. It's a Google Doc; feel free to leave comments. I'll take ownership of responding to questions you might have about them, but those are freely available for people to take a look at, so we'll start.
A: We'll start the meeting off with our team updates. We've got a lot of stuff over on IPNI, so I'll rush through those, and then if the folks from Bifrost or IPFS would like to chime in and let us know what's going on with their teams, we'd appreciate that as well. Over on the IPNI side of the house, we're going through a FoundationDB ingestion process so that we can fully replace the production datastore, currently PebbleDB, with FoundationDB. It's at about 65% ingestion of the entire index. It's working its way through some of the bigger provider data to be fully ingested right now, and that'll represent a snapshot of all the queryable data that's been advertised to IPNI. Based on previous iterations of bringing up a new IPNI instance, I expect this will probably be done by the end of the week.
A: Once we bring a new instance of an index up, we optimize and clean up the read and write paths as we're testing it, before we fully move it over to replace production, and we'll be working on that over the course of the next few weeks. And then we've got a good, healthy, ongoing discussion right now about deprecating our count function, just to expose that to all of you.
A: About the count function that we represent in our sitrep: because the IPNI key-value store is so large, it's not something where you can simply query the entire thing and pull counts from it. Also, the ingestion happens very, very fast; we're talking hundreds of records a second, I think, and that obviously multiplies very rapidly when you get into the minutes and hours range. So counting while you're ingesting those records is also very difficult to do without damaging performance or creating costly operations, just because of the scale of ingestion that we do and the scale of records that we onboard. And so our count function pulls data from a lot of different places, because we've got multiple nodes; we have a very scalable architecture, and so in order to count we have to count from these multiple places and bring that data together, and that creates a bit of confusion around the numbers, because they end up being accurate but imprecise.
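The "accurate but imprecise" effect of combining per-node counts can be pictured with a toy sketch. All of the data here is invented for illustration; the point is only that summing each node's own count double-counts any records replicated across nodes, while an exact distinct count would require merging keys across the whole architecture:

```python
# Hypothetical per-node record sets: cid3 is replicated on both nodes.
node_a = {"cid1", "cid2", "cid3"}
node_b = {"cid3", "cid4"}

naive_total = len(node_a) + len(node_b)  # cheap: just add per-node counts
true_total = len(node_a | node_b)        # exact: requires merging all keys

print(naive_total)  # counts the overlapping record twice
print(true_total)   # distinct records only
```

At IPNI's scale, with ingestion continuing while you count, the exact union is the expensive operation the speaker describes, which is why the published number is a fast per-node sum with a known overlap.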
A: We end up with a little bit of overlap between these, and we're doing a lot of deep philosophical thinking about how we can narrow down the range of accuracy that this count represents, and possibly even isolate it to strata of the ingestion that happens, to make it a little more beneficial for folks outside of our team who aren't as used to seeing that data. You'll see a little bit of that discussion over on that metrics doc that I shared. It's an ongoing battle we're fighting, trying to make our data a lot easier to understand for folks outside the team who aren't routinely working with it. We're also doing a bit of cost optimization due to our ingestion activities; I think those of you that are on the cost optimization work group probably would have noticed.
A: We've had a pretty big uptick in our AWS activity as we're going through that, so we've looked at some of the redundancies that we maintain and whether or not those redundancies are entirely necessary, based on the current backups that we have and the way that we're launching indexes. I think we've probably pruned some overly cautious stuff that was there in our infra. Ultimately, if you looked at this week's cost optimization work group, you'd see that we'd managed a pretty good decline in demand from our AWS account. So we're trying to keep things nominal, I would say, is the best term.
A: We want to maintain, obviously, the right amount of defensiveness with our infrastructure, while simultaneously only using what we absolutely need, so it's a constant balancing act that, thankfully, the folks on our team are pretty good at. But it's always an effort, so we've been working on that. One thing I wanted to point out to y'all is that Masih was kind enough to work with the Boost team to get a dashboard together that gives us a little more visibility into the uptake of Boost across the storage provider network for Filecoin, which is really neat. It lets us understand, when Boost is updated, how broadly adopted that new version is, and that's really beneficial to the IPNI team. For those of you that aren't familiar with Boost as an operation, it's kind of a package deal with Lotus that lets storage providers manage their Filecoin deals.
A: There's one more thing I wanted to call out, which is heyfil. It's a service that we have that can be exposed via an API, which looks at snapshots of Filecoin and lets you do some queryable actions that we've associated with an API endpoint. So there's some new stuff you can look at via this: for example, the known quantity of known storage providers.
A: You can get the quantity of information stored, and list providers by their peer ID, and there are some examples of this in this link here; go take a look. There's actually quite a bit of other stuff that's been added that exposes details about Boost, and it's a lot of really neat and helpful stuff for folks that are trying to look at what's going on at any given point on Filecoin. That's the IPNI week and a half to two weeks in a nutshell. There's been a lot of other stuff, but it's all very granular and, I would say, mostly internal operational stuff that we've been wrestling with. I'd like to pass the mic over; it looks like IPFS has an update. Lidel, did you want to jump in and add some color to that one?
B: I mean, yeah, a brief thing: it's detailed there. Kubo 0.21 shipped, and I guess for routing the notable thing is that, if you turn on your node every so often, you will still be able to get to the network even if, say, the bootstrap nodes are down or inaccessible for your region, or DNS got blackholed, or any of the variety of things that could possibly go wrong when you have five nodes or so serving as bootstrap nodes. It might be that other groups interested in running routing networks like this have similar needs, you know, pubsub things like what happens with Lotus, or other people running DHT implementations or whatnot.
D: Question for Adin, please: is there an expiry for the nodes that we've seen before as bootstrap? How far back does it keep the nodes?
B: I don't recall offhand; I don't know the level. It might just keep them until you turn it back on, and then it cycles them again if it's been a while.
A: Thanks, Will, for calling that out. We'll definitely throw an item in the backlog to keep an eye on how that's going to impact the design presently proposed for ambient discovery.
A: Regarding the topic of ambient discovery, I won't dig us too deep into this hole, but it's one thing that I want to keep at the front of people's minds, just because we're a little bit delayed, in the sense that we're not working towards a lot of new features this quarter on IPNI. I suspect that we will achieve a lot of those features that demand ambient discovery pretty quickly once we start to focus on it again, and so I think we'll potentially be in a scenario where we approach the need for ambient discovery much more quickly than we may expect. It'll sneak up on us, and I want to make sure we're all aware that it's coming down the pipe; we want to keep pushing for it. Bifrost team, would you like to share an update with us?
E: So I saw the thread where you were talking with George about metrics not appearing. This is what I've added to the versions that we are currently running, at least as far as our resource control is concerned, so I am assuming that that is current, basically. We should be updating to 0.21 over the next few days, probably, at least from what I saw from George.
E: One thing is, there was this discussion that perhaps it would be good for one of the four production bootstrap nodes to not be running on Google, and possibly even half and half. So I'm working with Max right now; well, Max is just making sure that the rust-libp2p server is actually stable, so that we can actually run it in production. Once we see that, which should be this week, we will migrate the New York bootstrap node to use the Rust server, basically because there are bugs in Kubo.
A: And sharing that information: the thread that we just mentioned, just so everybody's on the same page, was a request for feedback on the versions of Kubo which are running on the Bifrost clusters. There is a work effort underway for which we don't really have a timeline established right now, I think, unless that's changed over the last two weeks.
A: Let me know if I'm wrong on that, Bifrost team, but the goal is to add this as a metric that could be viewed in a dashboard, and then potentially to recognize loss of traffic. That's another thing that we were looking for: some alerting around traffic from gateways to IPNI, from outside IPNI.
A: Those were our post-mortem takeaways. The only thing I would ask the Bifrost team is, if there are GitHub issues associated with those efforts, please share them so that people in this group can tuck in and check on them, in case they're working on something relative to those or they're concerned about the status of them; that would be helpful.
F: There's an issue already for the IPNI monitoring from the incident tracking, which is basically waiting on 0.21 to get out, and then we will make one for the other item and share it with you. There isn't one for it at the moment, the version tracking.
A: All right, so I threw a few topics up here, the first of which is an ongoing discussion that I think we should wrap up, deciding what our path of action is going to be. That's the HTTP records discussion lidel brought up to the team: IPIP-0388, routing HTTP API support for querying multiple routers.
A: There's a spec linked in that description that you can take a look at. When we last left off, we had a meeting to discuss whether or not we were going to do this and the magnitude of the impact, and I think Masih had proposed during that discussion that we come check out the magnitude of these lookups.
A: To get straight to the point, I think we can look at this, but the IPNI team is very lightly resourced right now and our Q3 is completely maxed, and there are a lot of things that I would say aren't even nice-to-haves but are need-to-haves that probably aren't going to make it into our Q3. So adding even a simple task to that stack of stuff is something we're pretty hesitant to do without really understanding how demanding it is, before we change gears and try to make sure that we implement it. I'm not saying that we shouldn't do this, or that it wouldn't be something that we would plan; it's just that we need to understand when and where we would do it.
A: This is going to be a little bit difficult now, because everything we've got is basically critical this quarter, and if this can wait till Q4, that's probably going to be a little bit easier to approach. But if it's something that really needs to be done in Q3, I think we should understand that now and then try to wrestle with that urgency.
B: So I guess the short version is, I would expect that the amount of stuff to be done to support this is pretty low, given that we could send PRs for the things that implement any of the changes here, but also because questions like "oh, how much traffic are we going to serve" are somewhat moot, given that this is the status quo.
B: At the moment, with the cascading parameters: if you want to ask somebody to do DHT queries for you, probably the easiest way is going to be to write something outside the routing V1 interface, a thing that targets the IPNI-specific API. There's already a library in JavaScript I wrote to do this for the where-is-my-IPFS-content thing, because that was the only way to get the data I needed to get, particularly for things like HTTP records and other stuff as well.
B: So this is out there, and it's in use; people are plugging it into Helia because they don't want to do DHT crawls in a browser. This is what's already happening, so the status quo is going to keep on doing its thing unless we want to make it change, and then we have to deliberately change away from the status quo, if that's what we want to do. Otherwise, this is just rearranging things a little bit.
D: The only thing that comes to mind is, you know, the current cascading in IPNI always looks up IPNI in addition to something else, right, and the main rationale for that was to stop people from using cid.contact to only look up the DHT, so it always includes IPNI because you're calling an IPNI endpoint. Is there something in the proposal, or do you think we need this type of protection in the API designs, in the dedicated one?
B: I guess the main reason, in my opinion, for allowing querying of independent routers, as mentioned in that issue, is to let you choose which of the delegates you're going to use for which types of backends, and which work you're willing to do yourself. And you can only do that by comparing results, by going to multiple delegates or doing the work yourself and asking:
B: if I ask George to do the work, is he doing it as well as if I did it, or at least good enough? And if I can't ask that question, then I'm sort of just guessing, or I'm over-asking. For provider records, I just need to get enough of them. Ideally they'd be the best records in a good, sorted order, but I'm willing to settle for getting any of them, so I get my data.
B: So let me just ask everybody I know is cool that I can delegate work to. This allows for narrowing that down, but it's still very reasonable for anyone who's providing a delegate service to say: you know what, DHT lookups are more expensive for me than IPNI lookups, and BitTorrent lookups are more expensive than DHT-only lookups, or something, and therefore the amount that I give you of each of these is limited.
D: Next question; this feature, by the way, doesn't exist in IPNI at all, so I'm just brainstorming a little bit. Do you want to also label which result came from which router, or is that relevant at all?
B: It's a good thought; I was thinking about that. We could do it; I don't know if it's strictly necessary. I'm mostly thinking that if the point of this is to help you with probing to some extent, then you can separate out the probes from the other queries, and that way you wouldn't need to know this all the time, you wouldn't need to double-count results and ask how you're getting them, and you wouldn't need to deal with duplicates where you're sending me back the same address multiple times, tagged with "I got it from IPNI really quickly, and then I got it from the DHT a little slower."
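The duplicate problem described here can be sketched concretely: if each result were tagged with the router it came from, a client merging streams would have to deduplicate per provider rather than per (provider, source) pair. Everything below, including the peer IDs and latencies, is invented for illustration:

```python
# Hypothetical merged result stream from two routers: the same provider
# arrives twice, tagged with different sources and observed latencies.
results = [
    {"peer": "12D3KooWAlpha", "source": "ipni", "ms": 40},
    {"peer": "12D3KooWAlpha", "source": "dht",  "ms": 900},  # duplicate, slower
    {"peer": "12D3KooWBeta",  "source": "dht",  "ms": 700},
]

def dedupe_fastest(stream):
    """Keep one entry per peer, preferring the lowest-latency source."""
    best = {}
    for r in stream:
        cur = best.get(r["peer"])
        if cur is None or r["ms"] < cur["ms"]:
            best[r["peer"]] = r
    return best

merged = dedupe_fastest(results)
print(sorted(merged))                     # two distinct peers survive
print(merged["12D3KooWAlpha"]["source"])  # the faster source won
```

Dropping the source tags entirely, as suggested above, sidesteps this bookkeeping for clients that only care about getting enough records.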
B: I don't know if they're separate, so I don't know if this is quite the same thing; maybe it's not. The line about HTTP over libp2p being fine, but pure HTTP implementations being up for addressing: is that specific to this IPIP, or is this a separate IPNI thing?
A: That discussion is outside of this. I pulled the relevant points from the previous discussions we've had; we've talked about it twice, and I tried to gather what stood out to me as important details from those discussions. I think the TLS thing is separate from what we're discussing now.
A: I don't know that this evolved from cascading DHT queries being a desirable behavior, so much as, I mean, we've been talking for a while about how we deprecate cascading and stop performing that function, and this is kind of another direction to take, right?
D: I agree; I think this is a separate thing. There are two separate things here: one is just the API over cascading, and the other is the semantics at the transport level, how you handle TLS in the P2P world, right? Going back to your question: pretty much, if it has HTTPS, then it just ignores the peer ID and goes and fetches the thing. That's how it works.
B: I guess I have sort of two concerns with the current state, the fact that peer IDs are sort of mandated and then we ignore them. One is that the libp2p folks have a proposal for enabling authentication of the server with a peer ID, and so you won't have an explicit signal to know: do I need this or not? Because either way, you have to make up a peer ID to put in the record, so that's a little bit awkward. The other is just that, you know, for any protocol that isn't a libp2p protocol...
D: I agree. I think largely this is just lack of specification, just because it hasn't been needed, not because we missed something in libp2p, right? So I think there's a bunch of work here for the libp2p folks to figure out, and I think they're in the best position to figure it out; it's in excellent hands.
D: As for the IPNI-specific stuff: it goes and reaches out, but in the IPNI messages that I read back there are expectations that there is a peer ID and a signature that gets verified, and there are, I think, three different variations of it across different APIs. So the peer ID is respected, but indirectly. So, going back to the main problem:
D: I think the main issue to point out is that there is a need for libp2p specifications on the handling of HTTP over libp2p. I think Marco is already on it; he's doing a lot of work, and I've seen a little PR, so I would keep an eye on that to see how it evolves and how we can then retrofit it to everything else.
B: Even if we just talk HTTP: there's HTTP where you don't have a peer ID, there's HTTP where you do have a peer ID, and there's HTTP over libp2p, and you could express all of these with multiaddresses, if you just had multiaddresses. But the assumptions that we make right now force you to choose two out of three, and so that might mean at some point that we want to see.
A: All right, so I think the summary there is that this is important; we've got pretty significant adoption. It's also dependent on work that the libp2p team is doing. For now, it sounds like on the IPNI side we probably agree that this isn't a major undertaking, so maybe our commitment from the IPNI side will be that we try to sneak this in along with some of our other work, where we see it fits.
D: We already have the addr, so there's nothing stopping you from having the peer ID inside of that, but we also have an ID, right? I think there are some negotiations that happen here which are non-technical, in my opinion; they're mostly specification, because the main thing we care about is not breaking people, right, and I think that should come from the libp2p folks. I'm happy to get involved; I think Marco tagged me in an HTTP spec thing, so probably we should talk to the HTTP people.
B: I just don't want things to fall in the gap here, because there's the specification of how one does the negotiation of a peer ID over HTTP, if you so choose to do that, and then there's: okay, cool, but how am I going to represent all of these to put them in a provider record? Those are slightly different concerns, right? Whether libp2p chooses, for HTTP over libp2p, to use .well-known or identify doesn't really impact this at all.
B: It's just that we're already in this awkward situation where we're manufacturing a peer ID only to drop it, so I think we already see the problem without any of the things the libp2p folks are doing.
A: My response to that is that it's not something we would get to in Q3. I think we actually don't have a backlog item for this one specifically right now, but I'm going to add one. I don't have any expectation that we'll be able to get to this in Q3, though. I think we can think about the problem a bit, but I don't know that we're going to have the resources to approach wrestling with it.
C: I did want to ask: the other question is, is that something that is an IPNI team action item? So maybe that's a discussion of who owns that, because that client library was something that the stewards and the Kubo developers wanted to do for the non-double-hashed variant. So I guess it is unclear to me that this immediately lands on whether the IPNI team has bandwidth.
A: That was the next thing I was gonna ask; well, that, and also: are we 100% sure this is the way we want to go about it? Is this the only solution? Do we need a client, or is this something we can handle through other interfaces?
A: I don't know that we've thought that through, but anyway, the problem is we won't be able to do anything for this in Q3. Is everyone okay with that? Is it a higher priority?
B: I mean, I guess you still get something out of the double hashing work that you've done, because you've made a commitment and said cid.contact is not going to store all the plain-text records; but you're still getting all the plain-text queries.
B: Yeah, yeah, for sure. So I think that's mostly what this is about: getting clients to not send you the plain-text requests quite as much. Otherwise, yeah, I think it's mostly just agreeing on the spec for how to do this.
B: I think Will asked, oh, where is this implementation going to live, or whatever. My suspicion is that there's no rocket science here, given that routing V1 already exists and it's going to be the same streaming API, the same everything. So yep, probably the same thing as before: the stewards will probably put a client in boxo that does this implementation, and then there'll be the server component that lives in indexstar, and so on.
A: I think we can say "later" for now; at least from the IPNI side, fitting this in would be, I don't think we have...
C: I guess the thing I'd say is, I don't have a clear sense of when you expect the double-hash DHT work to land, but that refactor in boxo, of understanding how to do double hashing in general in boxo, as you have these two different implementations, likely then drives what the concrete implementation is for how those get interleaved, and when.
C: But are you going to do it in parallel? Are you going to do the double hash first and then, if you think you got it, you're good, and otherwise have some delay of "I don't have the results, so I'm now going to go back to the non-double-hash, non-private one"? Is that bad? So it's a config option, and we just have it as a config thing of: do you fall back or not?
B: It'd function like it does right now, right? If we did this, I'd be like: okay, cid.contact is a thing that supports double hashing; therefore, if I'm just sending requests to cid.contact, just use double hashing and fail otherwise, and the status quo does this anyway. So it's basically: is there an option, is there some way for me to discover that this IPNI instance supports double hashing? If so, use it; if not, go away. And things only become more complicated from there.
D: So inside Kubo, what you're imagining, just so I understand, is basically another routing system which implements delegated routing but internally does double hashing, and then you just have multiple routers, just like we do now with the DHT and cid.contact; we would have, say, a cid.contact-double-hash or something like that. Is that what you're imagining?
B: Yeah, to some extent, and then the order in which you query things, and how long, and whatever, will have the same set of trade-offs as now: do I want to wait? Do I want to do the DHT and cid.contact in parallel, or do I wait? If I had multiple delegated routers for IPNI, would I query them all in parallel or do them one at a time? You have the same set of trade-offs you do anyway; it's just now with a slightly different capability that you might choose to prioritize differently by default.
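The parallel-versus-sequential trade-off described above can be sketched with stand-in router functions. The router names and return values here are hypothetical; the point is only the two dispatch strategies a client like Kubo has to choose between:

```python
from concurrent.futures import ThreadPoolExecutor

# Stand-in routers: each takes a CID and returns a list of provider IDs.
def query_ipni(cid):
    return [cid + "-from-ipni"]

def query_dht(cid):
    return [cid + "-from-dht"]

def find_providers(cid, routers, parallel=True):
    """Fan out to all routers at once, or walk them in priority order."""
    if parallel:
        with ThreadPoolExecutor() as pool:
            batches = pool.map(lambda r: r(cid), routers)
        return [p for batch in batches for p in batch]
    # Sequential: stop at the first router that yields anything,
    # trading completeness for lower total latency and load.
    for router in routers:
        providers = router(cid)
        if providers:
            return providers
    return []

print(find_providers("bafyX", [query_ipni, query_dht]))         # both answer
print(find_providers("bafyX", [query_ipni, query_dht], False))  # first wins
```

The double-hash variant discussed here would simply be another entry in `routers`, with the fallback-or-not question becoming which strategy and ordering the config selects.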
D: Okay, so I guess there are three different things that I see. One is the specification for delegated routing for double-hashed records; the second is the implementation of the client for that HTTP specification; and the third is the DHT implementation of it. And I think we want all of those to work in harmony, in that, for the...
B: If we start adding support for, say, BitTorrent records, or the things that number zero is working on inside of Kubo, I have no guarantee that they're going to do double hashing, or that they'll do it in the same way, and that's okay, because we're just trying to get some records and move on. The client has to make some educated choices about where they're willing to make their privacy-versus-latency trade-offs. Yep.
B: Yeah, my gut is: don't wait for somebody else to do work to show off your work. That's my initial reaction, because you probably don't want to wait for the double hashing stuff in the DHT. But I'm up for disagreement.
A: I mean, I think even in this scenario, this feels to me like something we would try to start hacking at, potentially in Q4, assuming that the implementation itself would actually live with IPNI. It sounds like, I mean, we really do need to nail down whether or not this would be...
A: I think this feels like a Q4 thing to me, just based on how busy we are right now, but I'll talk to Steve about that too, since he brought up this question, so I'll follow up with him. But I agree on the specs, Will; that's a good call. I don't think anything should stop us from hacking at specs.
A: Well, that kind of covered all the items. Did anybody else have anything top of mind that they wanted to review while you had this group here? If not, then we'll cut it short, for I think the first time in quite a while. Take care, everybody; good to see you.