From YouTube: 2023-01-24 IPFS Content Routing WG #4
Description
IPFS Content Routing workgroup #4
Sync of those involved with creating IPFS Content Routing proposals:
Notes:
https://www.notion.so/pl-strflt/2023-01-24-Content-Routing-WG-4-f8299028f94140c6a90cc604bfd7eb73
For more on the Content Routing Workgroup, including calendar information, see:
https://www.notion.so/pl-strflt/Content-Routing-Workgroup-e59fa94a9c3f48d58480b7daf15bd356
Luma meeting series:
https://lu.ma/ipfs-routing-wg
A: Cool, well, welcome to the fourth Content Routing Work Group. We've made a ton of progress, but we have a lot more folks here this time. Thanks to everybody coming back from Switzerland; that was a really productive trip. It kind of coincided with our last meeting, but I shared the notes in the meeting chat. We've got some agenda items.
A: Each team will do a brief, general, high-level update of the progress they've made since the last meeting. Each of these teams has such broad coverage, and it's all pretty much relevant to the content routing aspects of the work that everybody's doing, but I put an update area in our notes for folks that wanted to drop stuff in. Not in any particular order, but IPFS folks, would you indulge us in such an update?
E: Okay, I can take that. Gus, I can update some of it. I'm actually not fully up to speed on where we are with the release.
D: The Kubo release is out. It's not deployed everywhere in Bifrost yet, but the release itself is completed.
E: Okay, so that's the release that turns on cid.contact by default in Kubo as a content router. So that's been released; now I can go close a bunch of issues, thank you. Other than that, I've opened up something that's still a work in progress: I'm adding streaming support, the NDJSON content type, to the delegated routing implementation that we have in go-libipfs.
E: I'll go, yeah. There's just some chatter in it that I haven't been keeping up with, so I need to go see the chatter, but for the most part, now that we've launched the stuff, I think we're ready to close it out, as long as we can close off all the open threads that are in it.
D: We intend to change that with this next release, based off the updated findings from ProbeLab, but just so it's clear, we didn't merge any of those changes.
A: That's good to know, Steve. I definitely was checking the Grafana dashboard as soon as I saw this update, just to see if we had a different traffic arrangement going on. That's important. I'll go ahead and jump in with the IPNI updates; Masih and Will, if you want to jump in on anything, let me know. We've got the dhstore deployed in the dev environment and integrated with an end-to-end pipeline of privacy-enabled queries. Ivan was testing the implementation yesterday, and I think we got a full end-to-end response for double-hash queries. Ivan or Masih, did you want to add any flavor to that?
A: Cool. And then on indexer scalability: we now have the freezing logic deployed in production, so we're basically fully deployed with our scalability solution. Coming out of the last meeting, an important aspect that we want to stress to all the IPFS stewards and the folks over on Bifrost is that we're very confident the indexer has been tested and is scalable, and we're ready for traffic. And then, Masih, you wanted to bring up a topic about IPIP-337.
G: Yep, I think lidel is already on it. I'm keen to get IPIP-337 merged. I think the only thing was to reduce the scope of it, and that is moving the PUT out, so I've put up a PR that does that and sorts out the lint stuff. Once that's managed, hopefully we can merge IPIP-337. After that I'll copy over Gus's excellent work on the PUT request and capture it as a separate PR, to then further extend IPIP-337.
A: That'll happen. That's primarily the big moves from the indexer team, unless anybody else wants to jump in and drop one on us. It's been a lot of work for us; we've made a ton of progress and we're very excited about it. Can anybody from ProbeLab jump in?
C: Okay, so on ProbeLab's side, I've been working on the spec for the double-hashing DHT. I included a link with the preview, so the, let's say, protocol documentation is there. I'm still documenting the rationale and the design choices for everything, but basically the protocol is there.
C: I'll let you know once I open the PR. We've also been investigating the DHT slowness, along with the Kubo stewards. There are multiple possible reasons the DHT could be slow, so we're looking into Fleek servers and also Gala nodes, among others. And that's mostly it for ProbeLab.
D: Hey, thanks again. A couple of questions on the double hashing: we still have ChainSafe involved, under contract, right?
C: On the ChainSafe side, they would be finishing up: getting the implementation in accordance with the spec and testing everything, just to make sure that the double-hashing DHT works. Then, we discussed the transition last week in Switzerland, and there was the upgradable, or agile, DHT solution that was proposed. But as some IPFS stewards mentioned, it's a lot of implementation work to get it working.
C: So there was a consensus among the people that were there to just fork the DHT for this, so that we can ship the double-hashing DHT, and we'll revisit the upgradable DHT later. If you don't agree, feel free to add a comment or reach out to us.
C: What do you mean by community visibility?
D: I guess other people in the IPFS ecosystem besides us, if and when we fork the DHT.
B: Yeah, we're now forking the DHT, and we have given a number of talks over the last year about this, so hopefully it's not too much of a surprise. Are there other things you think would be needed?
D: I don't know; I haven't put my head into the ramifications of the decision. A lot of it depends on how this is going to affect people. I don't have something premeditated; I just want to make sure we've considered those angles.
H: Yeah, and the server side will need to hold both, so it's just going to increase the amount of resources used by the network. Also, you're going to end up with a very Sybil-attackable network: as much as the standard DHT is Sybil-attackable, a network that has essentially no nodes, because you just rolled it out last week, is going to be trivially attackable.
H: So you need to decide what you want to do about that. I will give some brief context, which is that we tried doing, we'll call it, a subtle fork of the DHT around go-ipfs 0.5: basically, we just transitioned all the bootstrappers and all the Hydra nodes to the new protocol, and we thought...
H
Hopefully,
that
will
be
enough
to
hold
the
network
and
it
wasn't
I
think
partially
because
the
like
the
hydros
weren't
operational
as
they
were,
but
also
just
because
you're
moving
like
10
000
nodes
of
traffic
to
like
one
set
of
infra
nodes
as
a
way
to
like
help
with
that
transition
and
yeah
they
they
tried.
It
was
too
much
and
they
sort
of
wound
it
back
and
didn't
end
up
doing
the
fork
that
way.
A: I think lidel dropped a question in the chat about whether or not this would be a separate libp2p protocol.
H: We may also... I don't know how this is going to end up getting implemented, but it was just for provider records. There are other consumers of the libp2p DHT code than IPFS.
H: So we may need to either more obviously separate out the things, and say: yep, this is the IPFS one, if you want to keep using it you can, and if you want to use the other one, use the other one. Or we'll need to be a little more proactive about communicating with people in the libp2p ecosystem who may not have been paying attention to the IPFS DHT, because they haven't had to.
H: I mean, I would flag that, probably on the go-libp2p-kad-dht repo, if that's the place where you're going to be making changes. Because at that level it's somebody who's choosing to use that repo, as opposed to someone who's invested in a general protocol spec; it's just a question of which implementation they want to use.
H: That makes sense. I don't know who would be the best contact for libp2p these days, but we should hopefully have some record, some understanding, of which larger partners or blockchains are using go-libp2p-kad-dht this way.
D: It's definitely a good topic to surface at the libp2p community call, because that gets recorded and shared, and it'll have a larger showing of libp2p-interested people, who might also be able to raise other organizations or groups to be aware of. But I think the meta point here is: I'm a little worried we're underestimating all that's going to be involved in doing this.
D: I'm not trying to slow us down, but I don't want to go through any one-way doors here, or trip on anything that is then going to force us into a bunch of work we weren't planning on. So obviously keep moving forward, but I really want to make sure we see the overarching plan of how we're going to execute on this, and that we're aware of it and bought off on it, before we take any of these steps.
I: Yep, so a couple of things. First of all, we had to downgrade everything back from 0.18-rc2 on the nodes where we were testing it, because of a bug report we had. It started with a Gateway response for JSON content not being encoded (there's a report in the bifrost-infra repo), but it turned out that one of the things that issue was breaking was Kubo CI.
I: So we basically rolled everything back to 0.17, and then, a couple of hours or less after we rolled out, we found that there was a release of 0.18 with that fix included. So we still have to roll out 0.18 to the test nodes.
I: We actually did everything except for two collab cluster nodes, because, due to repository incompatibility, we would have had to wipe or convert the repo, which is a big hassle. So we said, you know what, let's just leave those two nodes on 0.18-rc2 until we actually deploy 0.18. George is taking care of deploying 0.18 final, hopefully today or perhaps tomorrow, to the canary and the test nodes.
I: On the other hand, following on from last week, when we were talking about cost control and cost optimizations: I made some graphs, and interestingly, we found two very nice dips in cost. The first one coincides with when we updated to Kubo 0.17 and I enabled the Resource Manager, which for some reason drastically reduced the outgoing bandwidth of Kubo, by about 50 percent. I don't know exactly what happened.
I: Is there a way of figuring that out, short of actually digging into the logs, and digging, and digging, and digging into the logs?
H: I mean, there are some commands that report stuff out, but my understanding is that the Bifrost nodes all have that functionality turned off, basically because they didn't want the cost associated with doing all the metrics recording.
D: Fair enough. So, two things. There are other Prometheus metrics, but I was under the impression that we had disabled those for Bifrost, because they aren't implemented in a performant way until a new go-libp2p release goes out. So I thought we had disabled them in your case, but maybe I'm mixing that up with what we did for the Hydras.
I: Okay, good. Hopefully it's not something that we will have to undo, because it's very nice: we've been using 75% less bandwidth than we used to.
H: I haven't poked too much into the bandwidth stuff since last time, mostly because effort's been going into some of the other things, like moving Gateway traffic to Saturn. That will give us a number of benefits, but mostly that we get to write binaries and code that are specifically optimized for running a public Gateway.
H: Finally, instead of chasing down what happens when we take the desktop node and max out all the resources we can, we can just build the thing that is useful here.
D: A question. I agree with that idea. A question on the graph: is this Bifrost's Equinix bandwidth spend, globally, per day?
I: Yes, this is Bifrost's global bandwidth spend. Red is Kubo, blue is the load balancers, yellow is total, and there's a link somewhere above here to the spreadsheet where I probably pasted this one.
A: It looks like y'all are already jumping in, but first top of mind was the DHT expiry of 24 hours, potentially (probably) resulting in a 60x provides-to-gets ratio. Thanks, Guillaume, for adding some reading material there; I'll go back and read that. I don't know if you've seen that one yet, Will.
C: Cool, yeah. Sorry, that's a request for measurement that Michael, one of our collaborators, wrote, on analyzing the liveness of provider records: how much time they stay in the DHT before they expire.
A: Looks like we got a little homework to do. We'll follow up on that. Thanks again.
B: I mean, that's useful measurement data. I think the thing we've been thinking about from the IPNI side (it's been a conversation for a while on this subject) is that the current delegated routing API mirrors the implicit DHT behavior, because that was the simplest thing to do. There's a bunch of code in Kubo, as it exists, that republishes every 12 hours and implicitly expects that records will expire from the DHT in 24 hours.
B: That doesn't need to be what the delegated routing API behavior is. The indexers use a different mechanism, a different scheduling, where you say "I have this," and then it's on you, the publisher, to say when you no longer have it. So we just take it that there's no implicit TTL there.
B: It just stays there for as long as you, as a publisher, are alive. So we could figure out whether we want some kind of republishing interval for delegated routing.
B: But maybe we also just want the diffs, or a way to get the current state as a snapshot. There are different things we could do that would let us either sync or reduce the noise of having to do that daily republish. Ivan, I think you were working on thinking through what might make more sense on the delegated routing API, in terms of the publish side.
F: Yeah. Basically, if we had any idea about CID groupings, how they relate to each other, then we could group them into ads with groupings that make sense. Because right now they kind of expire randomly, which results in, I don't know, sometimes 70% of the advertisements that were generated for the snapshots being removed and then re-advertised again, because one CID expired from the bunch. That's one thing.
F: And obviously, a good optimization would be if Kubo, if IPFS nodes, could explicitly tell us what has been added and what has been removed, instead of publishing snapshots. So those are the two things that were primarily on my mind.
B: I guess "change" is maybe an interesting thing here, because we don't have anyone really publishing from IPFS in production. We've got a couple of sample things, but there's no wide use or anything, so it's really about figuring out the right design. I think we're still very much in an experimental phase of what we want that to be, and I don't think we even have it in the initial IPIP. Thanks, Masih; makes sense.
B: I think there are two things. One is that we aren't conveying any particularly useful semantics that the Kubo node, the IPFS node, knows about in how we send out these things. It's just a periodic, every-12-hours announcement iterating through all the keys; you don't know which ones are part of the same pin, or are grouped, or any of that behavior.
B: So you can't say "these are likely to expire together," because none of that information gets conveyed. And secondly, the schedule of this is just the implicit one that happened to come out of the DHT, so we have sidecars that just translate, where they have to watch and implement the DHT's 12-hour implicit expiry.
B: As far as I know, it's not used on either side: Kubo does not provide that, nor respond to it on the server side.
H: Yeah, I'm trying to make sure I understand. Maybe these things aren't as different as I want them to be, but there's the API question: what makes sense if one were writing a client from scratch that was sometimes using this for the DHT and sometimes for IPNI, and what would be enough to make it happy? And then there's the question of how we could mess around with Kubo internals to make it work well with this situation.
H: Yeah, okay. I suspect that kind of thing is going to be kind of painful, in that you probably want (and there are good reasons to do this anyway) to add a separate key-value store, database table, whatever you want to call it, that has a list of all the things I'm advertising and some tags for them.
H: But doing that is not super trivial, fun work for someone. It can be done, but I'm trying to separate some of the Kubo-internal pieces from the API pieces, because the API pieces may be much easier for some of the partners that are getting blocked by this to work with. Although, if we need to do it in Kubo, it can be done there too.
D: That makes sense to me on the surface, Adin. Ultimately, I'm assuming the right way to do this is that someone needs to drive the PUT side of the HTTP delegated routing API, the stuff we pulled out. Someone's got to take the lead on starting up a new IPIP for that, and that's where some of these questions will get answered. Ideally we get the spec nailed down, and then we can enable different implementations to go after it.
B: We will at some point propose a spec. I think we also see this as something where the worry is that it's going to take a long time for anyone to get to the insides of Kubo, and we don't have any other clients that are going to make active use of this. So this is really a "when we get to it," and it's relatively low down our priority list, as far as I can tell.
H: Well, they're different, but it's similar. They both have different writes and different TTLs, so you're dealing with that, but you're not dealing with the large grouping behavior; you're sort of nailing two out of three. That was the only reason I brought it up: it just moves us a little closer.
G: Yep. I think the stuff that Ivan probably looked at today, and will talk to, to me sounds like deep changes in the protocol. Because right now the entire protocol talks about CIDs as atomic units. All you have is "I provide this CID" and "I no longer provide this CID"; sorry, "I provide this CID with this TTL," and it expires and goes away. The problem comes when you have a huge number of CIDs, which means the churn in providing them and keeping them alive over time...
G: It increases significantly. So the way that IPNI has been dealing with this is the concept of diffs: instead of re-advertising everything, you just say "add one, remove two, add seven." And on top of that, you have a grouping, called a context ID, that allows you to change the characteristics of a whole group of CIDs without having to re-advertise anything.
G: For example, of all the things that you've advertised in the past, without re-listing what you've advertised, you could say: "hey, I now support Bitswap," or "I no longer support pizza." These are the kinds of flexibility that IPNI provides, and ideally we would like similar flexibility in IPFS, because we want to be more efficient about advertising the CIDs and so on. To me, that sounds like something that requires deep changes down to the libp2p interfaces, for example the content routing interface and the peer routing interface, things like that. Separate to that, we also want to change the HTTP delegated routing APIs to do this. I don't think this change is going to happen overnight, so, iteratively, we should probably first think of interfaces that enable the existing IPFS to work, like the PUT stuff that Gus wrote, for example, and then we can think of improvements on top of that which enable the extra things that IPNI provides natively.
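The diff-plus-context-ID model described above can be pictured with a rough sketch of an IPNI advertisement. The field names follow the IPNI advertisement schema, but the values here are placeholders, and real advertisements are signed dag-json or dag-cbor with content links rather than plain JSON:

```json
{
  "PreviousID": { "/": "baguqeera...link-to-previous-ad" },
  "Provider": "12D3KooW...provider-peer-id",
  "Addresses": ["/dns4/provider.example/tcp/443/https"],
  "ContextID": "deal-batch-42",
  "Entries": { "/": "baguqeera...link-to-multihash-chunks" },
  "Metadata": "transport-bitswap",
  "IsRm": false
}
```

Each advertisement is a diff (the multihashes linked under Entries are what changed), and a later advertisement carrying the same ContextID with IsRm set to true retracts the whole group, or one with new Metadata changes the transport for the whole group, without re-listing any entries.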
H: But the fact that there's no list of "here are the objects I'm advertising, and here's what I know about how they're advertised" sort of hurts you. Even with the DHT stuff, you have no way of figuring out what's been advertised recently and what's not. It's just: did it all happen? I hope so; when I ran "ipfs dht findprovs" for this CID, my thing came back, so I guess that looks good.
A: Well, we'll keep this going for content routing in the future; I think it's an important topic. I'll probably summarize it, and those that were here to be a part of this will remember what we're talking about, and if we have to bring new parties into the mix as we prioritize efforts, we can do that.
A: And we basically entirely covered the next item already: choosing a better expiry estimate for delegated provides.
A: One thing that I think came up was... I guess we didn't actually roll out on the gateways without testing, did we?
C: I guess we do. Ian will test it on Thunderdome; I think the plan is that he will be able to test it this week, or probably next week at worst, and then provide some benchmarks for how it behaves with different provider search delays.
B: I guess... so I think there are two things there. With the modeling and Thunderdome testing, you can say: okay, it's going to cause a 10x increase in queries and it's going to change the delay by this much. So you can understand the relative changes in bandwidth and the relative changes in performance.
B
But
what
I'm
asking
here
is
that
we
do
a
rollout
on
all
of
our
ipfs
gateways
which
have
a
bulk
of
traffic
in
a
rolling
deploy
that
we
can
roll
back
before
we
cut
the
final
tag
so
that
we
can
tell
if
we're
doing,
a
20
X
increase
in
DHT
traffic.
Is
this
causing
a
bunch
of
the
DHT
to
fall
over
because
it's
got
too
many
queries.
C: With Kubo 0.18, we've reduced the number of directly connected peers, so we're going to have many fewer Bitswap requests, and with the upcoming change we're going to have many more DHT requests. But the additional DHT requests essentially correspond to the Bitswap broadcast searches that we no longer make, so the DHT load is expected to increase, but not significantly. Still, it's good to test before we actually deploy it.
H: Yeah, I would be careful, just to check on that. There's also a difference between sending messages and forming the libp2p connections: with the Bitswap stuff you're not forming any new connections, you're sending messages, so the load profile we're dealing with is going to change.
H: It may be that all the resources work themselves out in the end, but yeah, good thought.
D: Yes, if I remember correctly, you're the project driver there, so I know we've asked for some extra measurements and comparisons, and using Thunderdome for some of that is great. Let's also add a step for coordinating with Bifrost to do a Bifrost deployment, and then make sure we're measuring and reviewing that properly, and then we can get it out.
D: You know, that's just a config change in the Bifrost config, and then we can properly assert what we're going to do and why within the Kubo default config. But I guess we're kind of looking to you to drive that coordination with Bifrost on the deployment: being specific about what you want and making sure that it actually flows all the way through. Yeah, cool.
B: We didn't talk about this in the decentralized gateway meeting, but there is some question of: if we have Saturn L1s retrieving content, going and trying to query both the DHT and the indexers, what does that look like? This, I guess, goes back to Gus's initial update about NDJSON, which is what we expect.
B: We've got this HTTP API, and we're going to need to update it somehow, because one of the things we expect is that the latency of an indexer response versus a DHT response is going to be different. So you don't want to wait for all of your results to be collected and return a single response; you'd like to be able to have a couple of different responses at different times.
E: Right. So, instead of an array, you get newline-delimited JSON objects, and you'd have to request NDJSON as the content type.
E: And the server would need to respond with the right content type. The client and server code itself in go-libipfs would require a breaking API change, but I don't think that's that big of a deal. Right now they return slices, and you can't return slices of streaming results.
B: And then, I guess, the only other thing you maybe have thoughts on: is that just going to work with double-hashed queries? Will we be able to do the same thing there, or is there any other thought we'll need to put in when we have both of those happening in parallel?
G: I don't see any issues with it; it depends on the body of the JSON. Right now the double-hash response is encrypted results, basically a list of bytes. As long as we can return multiple partial encrypted results, multiple lists of bytes, then I don't see why it wouldn't work.
B: Cool. We will wait for the IPIP, Gus, or the code change, either way; ping us and we're happy to review.
D: Yep. And I know Gus already has some code on this. What's the collective priority here? Is this something we need to make sure gets landed this month, particularly as it might affect the decentralized gateway work?
B: My understanding is no, because the client is likely going to be Lassie, which will be able to support a streaming thing, and so we don't need to roll out to clients of IPFS. So we're not blocking on an IPFS release schedule for this.
E: Well, I think, yeah: the server side is not used by the indexers either. They don't use go-libipfs for this stuff; they have their own implementation.
B: So it's for ongoing compatibility with IPFS. The worry is: an IPFS Kubo client queries the indexer, and we've got DHT results we could return. Are we going to delay because they're still asking us for the non-streaming variant? I think the answer is we wouldn't delay; until we get NDJSON Accept headers, we'll just give the fastest results only, on the cid.contact endpoint.
H: Makes sense. Is there any worry here about having cid.contact return DHT results transparently, as if they were other IPNI results? That could be confusing when you're like: so, you know, I'm in Australia, should I use these IPNI peers? But then, oh, actually, they have different results than cid.contact does.
H: Right now, if I could accumulate a list of, I don't know, pretend all the peers listening on the pubsub channel that people are publishing to, I could say: oh, these are all indexer folk, I can query them and they can give me results. And I could just say: yep, they all seem to be giving me the same results, I'll go to whoever's closest to me; I'm in Australia, I guess that's the nearest one. But as soon as they're not returning the same results...
H
My
ability
to
do
that
kind
of
goes
out.
The
window,
which
makes
yeah,
which
makes
the
like
choosing
different
indexers
or
choosing
the
closest
indexer
to
me
like
a
harder
task
than
if
we
just
ran
this
like
next
to
the
indexer
or
something
like
it
like
in
the
same
data
center.
So
we
don't
have
to
deal
with
any
traffic
things,
and
then
we
put
another
node
in
front.
That's
like
a
combined
thing
like
I.
Just
basically
I
don't
want
to
hurt
ipni's
ability
to
have
like
you
get
to
choose
from
multiple
indexers
in
the
future.
A: Adin, I'm going to keep that kind of concern in my back pocket as we're going through the process of these iterations. It's a good point to dwell on.
B: It's sort of the contrapositive, and maybe some explicit measurements on the gateways at some point might be interesting here. There are some periods where we do resolve over Bitswap to connected peers from the gateways, but if we were to then go and query the DHT or the indexers for that CID, we wouldn't find any providers for it, because its providers are only publishing their root.
B: So if you then ask for a sub-item, it's not in the DHT; it's only reachable because you're hopefully already connected to them. Or there are cases, like Pinata and Fleek, where they don't publish any of their content, and it's that they've maintained passive Bitswap connections with gateways, so they're able to return it over the gateways. So one of the questions we have is: if we go to DHT-slash-indexer primary content routing and don't have consistently open Bitswap connections, or if the number of gateways expands...
B
How
much
content
do
we
lose?
That's
in
this
sort
of
degenerate
case,
and
we
don't
have
a
good
sense
of
how
much
of
that
content.
It
is
in
terms
of
the
number
of
requests
against
gateways
currently.
H
One
is
the
we'll
call
them
like
long
like
peering
connections,
right,
whether
it's
the
ones
that
we
know
about
and
are
hard-coded
from
our
side
or
for
for
other
reasons,
we
have
listed
our
peer
IDs
on
the
docs.ipfs.tech
website
and
other
people
have
hard-coded
peering
to
us
as
like
I
guess
and
yeah
that
so
they're
doing
it.
H
That
way,
right,
which
is
basically
just
repeatedly
connecting
to
us,
no
matter
how
many
times
we
disconnect
from
them
as
a
feature,
and
so
there's
like
those
are
the
persistent
peering
connections
which
is
different
from
what
we'll
call
like
shorter
term
peering
like
I,
only
advertise.
You
know-
or
you
happen
to
connect
to
me
because
you
advertised
some
of
the
data
and
then
over
the
course
of
the
connection
I
pulled
all
of
it
and
I
should
have
advertised
all
of
it,
but
I
didn't,
and
that
was
good
enough.
H
The
reason
I
think
separating
those
is
is
somewhat
important.
Is
that
the
the
big
persistently
connected
peers
like
we
can
do
something
about
and
the
smaller
ones
it's
it's
kind
of
like
it
just
sort
of
Hit
or
Miss?
A
So
I'm
just
going
to
jump
in
We've,
we've
run
our
course
and
we've
got
one
minute
left
to
drop
any
last
minute
urgent
top
of
Minds
from
anybody
in
the
group
before
we
before
we
build.
A
If
not,
as
usual,
I'll
create
a
summary
of
this
discussion
posted
in
the
content,
routing
work
group
capture,
any
action
items
associated
with
the
discussions
that
have
happened
and
kind
of
communicate
them
out
to
everybody
thanks
everyone
for
joining
this
is
super
productive
and
helpful.
I
really
appreciate
all
your
contributions.