From YouTube: 2023-03-06 IPFS Content Routing Workgroup #7
Description: Bifrost v0.18.1 rollout challenges, double hashing threat modeling, roadmap planning for content routing.
A: All right, everybody, I just started recording. Welcome to the content routing work group, edition number seven, everybody's favorite event every two weeks. Let me share my screen and we'll take a look.
A: So for those of you here, I'm going to take this opportunity to divert from our normal routine and look at what long-term planning is going to look like for the content routing work group: what we're attempting to collectively accomplish as a group. I've heard from several members of the team, in many cases, that we're not sure how to prioritize some of this work against other work that we're doing. What I'm attempting to accomplish by putting together a roadmap is to establish some collective goals for the cross-functional team, so that you can apply that perspective to whatever prioritization and activities you have in place.
A: That way we can draw some lines in the sand, and hopefully it'll help with that difficult task that I think most of you have of managing priorities. So I put together this content routing work streams page. I've actually built an entire task tracking board system here, a little bit of a framework.
A: It should give us views that we can use to isolate for specific teams or by priority, all that kind of stuff. Don't worry about any of that; in fact, there isn't much expectation that you update this either. It'll be a group exercise to go through this together. The important one that I'd like to draw everyone's attention to is the one I'll drop in the notes and then in Slack later.
A: Man, I don't even know how that automatically happened when I shared the screen, thanks. Well, between the mute and the shirt, we're off to a banging start today. Here are today's notes and agenda; I'm dropping the link in the chat. I went ahead and started with the IPNI update, so I'll run through these.
A: Mossy, if you want to add anything for the group, please feel free. We are currently in progress with our double hashing ingestion: we've got a DH store, which is our double hash store, that we're ingesting the index into.
A: We ran into a few hiccups, but we expect to complete the ingestion of the entire index probably within the next week. This page that I linked here has some metrics that Ivan has expertly crafted to show the status of the ingestion versus the indexers that we're ingesting all of these advertisement chains from. We've also been helping the Lassie team: they had some metrics ingestion themselves, which we've optimized this last week to reduce overhead. It's a very high volume process.
A: There's a lot of queries going across, so optimizing that was a very high benefit activity that Mossy contributed to this last week. I also noted here that the double hash store ingestion is expected to wrap up early next week. Those are probably, from a high level, the important areas that we've been working on.
A: We've also got some of the scalability items that we're working on right now, which are to support the ability to export CAR files and to bootstrap indexer instances from those CAR files via S3. Fun note: those will be defaulting to gzip, for anybody concerned.
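As a rough illustration of that gzip default (the file names and pipeline here are hypothetical; the actual indexer export may differ), compressing a CAR snapshot before uploading it somewhere like S3 could look like:

```python
import gzip
import shutil
from pathlib import Path

def compress_snapshot(car_path: str) -> str:
    """Gzip a CAR snapshot in preparation for upload (e.g. to S3)."""
    gz_path = car_path + ".gz"
    with open(car_path, "rb") as src, gzip.open(gz_path, "wb") as dst:
        shutil.copyfileobj(src, dst)  # stream so large snapshots fit in memory
    return gz_path

# Example: write a dummy "CAR" file and round-trip it through gzip.
Path("snapshot.car").write_bytes(b"\x0a" * 1024)
out = compress_snapshot("snapshot.car")
restored = gzip.decompress(Path(out).read_bytes())
```

Streaming with `copyfileobj` matters here because index snapshots can be far larger than memory.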
A: So I'll go ahead and pass the torch. Ghee, I see you had a few items up. Did you want to jump in for us? Yeah.
D: So I'll go ahead with the implementation. I'll probably need to adjust some details, but we're going to stick with the concept that we have now, and I'll carry on with the implementation work in the coming days. On the other side, and maybe that's a topic to keep for a later discussion, I've been planning around changing the content router's provide operation.
A: Yeah, we can throw that into our discussion about roadmapping for the group; that's a great contribution for that. And then it looks like I don't see our guest today; hope he's feeling all right. Dean, did you have any updates you wanted to pass our way? I know you've been really focused on the Lassie support recently, but are you aware of any updates that are important for the group from the IPFS side?
E: No, nothing particularly from my side. I've just been following along with what the rest of you have been up to, and commenting where I can.
E: I do, I do. I just want to flag that I suspect we're going to start getting questions. Just yesterday you posted about the metadata.
E: You know, people trying to build IPFS search tools across all the stuff that's there. I wonder if we're going to start getting questions where the SPs want to make their data searchable in a lot of cases, and they have lots of data which we're then double hashing to protect for some privacy, but then also exposing all the cleartext CIDs.
E: Yeah, some publishers will choose to reveal what CIDs they have, but also the set of CIDs that are commonly used will show up in a search engine somewhere, and you could just use those. And if it happens that the same entity decides to both run a search engine index and an IPNI index, then they can use the one to defeat the other.
E: Yeah, and I don't know what's true for, say, Filecoin data versus otherwise; different groups are going to have different constraints. But I think it would be interesting to know whether a large amount of the data that we're doing extra processing work on is data we don't need to, because the SPs are going to expose it all anyway, or whether that's not the case.
A: Sure, we can definitely burrow into this; I think it's a good topic. But I was under the impression that the point of encrypting all these CIDs was to enable you to leverage the indexer without exposing who's reading those CIDs. So is it knowing the locations of the data and being able to address that content that's problematic, or is it actually just obscuring who's accessing it, which we still gain in this case, don't we? Or am I mistaken?
C: It depends a lot on the specific scenarios, and we don't have particular percentages. There's a case where you've got a CID that's just on some IPFS node and only ever appears double hashed, which is sort of the ideal situation. There you get reader privacy, in that no one else knows the pre-image CID, and so no one can see what's happening.
C: There are cases where that CID is in a public data set, and a storage provider chooses to also release it publicly. Then some malicious actor could take the set of public databases, double hash the CIDs that they find publicly, know what the double hashed version of each CID looks like, and then look at the queries and correlate them against that.
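That correlation attack can be sketched in a few lines. This is a simplified model: the real IPNI double hashing operates on multihashes with a salted construction, so the plain double SHA-256 and the CID strings here are illustrative only.

```python
import hashlib

def double_hash(cid: str) -> str:
    """Illustrative double hash: SHA-256 applied twice to the CID bytes."""
    first = hashlib.sha256(cid.encode()).digest()
    return hashlib.sha256(first).hexdigest()

# An adversary pre-hashes every CID found in public data sets,
# building a reverse-lookup table over the double hashed keys.
public_dataset = ["bafyPublicDatasetA", "bafyPublicDatasetB"]
rainbow = {double_hash(c): c for c in public_dataset}

observed_query = double_hash("bafyPublicDatasetB")  # what the indexer sees
recovered = rainbow.get(observed_query)  # correlation succeeds: CID was public

private_query = double_hash("bafySecretCid")
assert rainbow.get(private_query) is None  # unknown pre-image stays private
```

The sketch captures the point made next: reader privacy holds exactly when the pre-image CID is not already known to the adversary.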
C: So there are different adversary models and different scenarios segmenting the data and the queries. Coming up with a more specific threat model is probably the most direct thing.
D: Yep. I'd say the reader privacy depends on the secrecy of the CID that is being accessed. If you're accessing a CID that is very public, you won't have any privacy in general, or just k-anonymity for the DHT. But if you're accessing a private CID, then you will get reader privacy.
B: So if we document the list of threat models that we're defending against with the existing proposal and the DHT double hashing, that would be helpful on the differences between DHT and IPNI. I'd be more than happy to at least capture an issue on the k-anonymity exposed by the double hashed advertisements. From the IPNI perspective, I don't see any issues with providing that, going back to the initial concern, in terms of further protection.
E: Questions are going to come our way from people who are coming at this from the outside, a little less clued in, or coming at it from different perspectives, because these are things that have been talked about before. Double hashing is something that's been done in Hypercore, and when Hypercore released it there was a whole bunch of discussion in the community around should-we-shouldn't-we. Now we're doing it, so you're going to hit all those questions again.
E: So I guess this is more of just: when you're building your slides and doing all that, be prepared, because these questions are coming.
B: Yep, that's a really good point. Who is the track lead for the privacy track in IPFS Thing? I wonder if we could put up some ideas, get some traction, and get a sense of: would people find the existing double hashing useful, and what's on the wish list here? Would that be helpful?
C: Maybe. I don't know if we have a specific privacy track at the moment. I think we've got a content routing track; we can take a look at how things are set up. I think we've got one for content routing and another one for measurement stuff. I don't know if we've got a specific day or half day of talks around privacy, but we could certainly do a working group thing.
C: I guess the other thing to remember is that this IPFS Thing is much more for current implementers, closer to the community; it's not as much about presenting outwards. So I don't know if I'm expecting much more scrutiny of the double hashing than what we've already gotten from the immediate audience.
A: I think we can take an action item for threat modeling. Starting with that known place of "this is what we're trying to solve for" versus "this is what we probably won't be solving for with this exercise" helps to really shape the conversation that we continue to have and realign people's goals and intentions around it.
A: It'll make things a lot easier to approach, and Dean, I really appreciate you bringing that up. I think it's a very important point, and also one that we can lock down; it's definitely something we can do. I wondered if we could jump over to the Bifrost team and just get an update. Cameron, I actually put it down in some of our top topics.
G: Yeah, definitely. There's unfortunately not a lot to update on that. We were trying to roll that out now for, I don't know, maybe the third time, and we hit another blocker: 0.18.1 had a panic bug that we hit, and we had to roll it back.
G: We're at that currently, and if I followed the conversations right, it looks like they're trying to target the fix for that in 0.19. From what I understand, we took the first release candidate of that and ran it on one of our staging clusters the other day as well, and didn't get great feedback on that either.
G: Hopefully the issues we encountered can get squashed for the release and it can still make it out.
E: Yeah, the short version on the panic is that in the refactor to expose all of the routers equally, instead of the DHT stuff on one side and the delegated routing stuff on the other, there were some type casting things that got stuck in the middle. You actually do want to do things that are explicitly DHT operations, not generic router operations, and those weren't getting exposed properly, which is needed for the preload nodes.
E: They're not needed for most things, but the preload nodes require them. And, mostly as an FYI: to some extent the function that's causing the problems here with the preload nodes is effectively outsourcing the routing table to somebody else, which is the thing you've been hacking on, like the ipfs dht query command. You just outsource the "who are my 20 closest peers" question to somebody else, and then you do the work.
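The "who are my 20 closest peers" step is a Kademlia-style XOR-distance lookup. A minimal sketch of the idea, using toy 32-bit IDs instead of real libp2p peer IDs (names and sizes here are illustrative, not the actual DHT implementation):

```python
import hashlib

def node_id(name: str) -> int:
    """Toy 32-bit node ID derived from a name (real DHTs hash peer IDs)."""
    return int.from_bytes(hashlib.sha256(name.encode()).digest()[:4], "big")

def closest_peers(target: int, peers: list[int], k: int = 20) -> list[int]:
    """Kademlia closeness: the k peers with the smallest XOR distance."""
    return sorted(peers, key=lambda p: p ^ target)[:k]

peers = [node_id(f"peer-{i}") for i in range(100)]
target = node_id("some-content-key")
nearest = closest_peers(target, peers)
```

Outsourcing this step means asking another node to run `closest_peers` over its routing table and return the result, after which the querier contacts those peers itself.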
A: That's good feedback, Cameron; we'll keep an eye on that. I think this work group is a good place to surface what's going on with it and whether or not we need to throw support at it. I think we're all behind getting this rollout done; there's a lot of functionality that we're dependent on, actually, to get these rollouts done.
A: So it's a big priority, but I know the Bifrost team is very much in the weeds right now also supporting Lassie and our Rhea rollout, so we appreciate all y'all's hard work. I think Mario's maybe out right now, but if Mario or George or anyone needs help with specific issues we can wrestle with, we might want to consider lending that help to ensure this actually gets done. I think it's a really high priority.
G: Yeah, definitely, much appreciated. Mario is out at the moment, but George is looking after things mostly in his absence, and yeah, we'll let you know. Appreciate it.
A: Okay, we're a little tight on time. I'm going to plug two items real quickly to ensure everyone's aware of them, and then I'm going to jump over to this roadmap review so we can get as much of that concentrated discussion out of the way, because it's one of those things that could potentially draw beyond just the scope of this meeting. One: Mossy and I met with Cloudflare last week. We had a really great meeting with their team.
A: We're discussing with them; they're very interested in our reader privacy efforts, and so we'd potentially like to talk to them about how they can participate in testing. We'd like to get a little more engagement from their team and support in the decisions for network design that we're making over here.
A: They have a lot of valuable feedback to offer. They're particularly interested in some of the pathways we're taking with IPNS and what we intend to accomplish with it. I think they've run into a lot of similar problems with DNS, and they're very interested in lending their expertise and how they've wrestled with these decisions, to ensure that we have a smooth incorporation process as we move forward with IPNS. The other item I just briefly wanted to mention:
A: Caro is here with us right now. Caro, don't worry, I'm not putting you on the spot right now, but her team has this kind of work group where they want to:
A: Basically, they want to index data and make human-readable searches of our network possible, so that you can more easily find files. We'll jump into that; we've already started the conversation in Slack, and I just wanted to call it out. If we have time after this roadmap review discussion, we can take another deeper look, but I'd recommend taking a quick look at what it is they're attempting to accomplish. Caro and I reviewed it last night.
A: I think there's actually a clear defining line between the scope of efforts between these groups. However, the direction they ultimately end up taking will probably result in some dependency on the work that we do, so we'll want to look at that a little more. But the immediate goal is just to drive awareness of what it is they're hoping to accomplish over there.
F: Thanks, Torvin. I'm actually going to put myself on the spot, because I need to leave in three minutes. Hey all, my name is Caro; I work on the data programs team. I just wanted to give a very quick overview of the effort I'm driving, which is, I think, one step before what this working group is focused on. I'm going to drive a working group that's focused on getting metadata around the public data sets. So that is: what's the author of the data set?
F: What's the topic, what's the title of the data set, and also the CIDs: if the data set is large, what's the set of CIDs that are stored on the network? Torvin and I did talk about it yesterday; there's minimal overlap between the two efforts, but we must work together in the future to ensure that the end-to-end flow for data retrieval is frictionless. With that, I'm going to just leave it at that for now.
A: All right, and I'm dropping the link here in the notes so that you all can take a look at her presentation. It's pretty well done and makes it fairly clear what they're attempting to do. Additionally, there's a conversation I started in the content routing work group Slack that you can take a look at to see what this is all about.
A: All right, I'm jumping over to the roadmap. I'll preface this conversation with: there's a whole lot of stuff going on here. I've got about a dozen tables with a whole bunch of roll-ups and things like that. Don't waste your time trying to figure out what I'm doing here; there's a lot, and the goal will be to make views that ensure everybody has a very pretty picture of what specifically is for them to worry about, and not other folks. The one thing I do want you to take a look at is more immediate.
A: These two top items are where you're going to be interested. The tasks page is explicitly stuff we're actually currently doing; these are things that are in flight presently. You don't need to worry about it; I'm not going to ask anybody here to go through and update any of these or anything like that.
A: I'm going to put this all together, and the goal is for us to have a very clear roadmap that comes out of the exercise, so that you all have a clearer picture of how the work that we're doing aligns. It's not that you go in here and provide updates every week; we can gather that through the content routing work group, so this isn't a micromanagement effort.
A: I assure you, the content routing work streams page is at a higher level, something like epics: work that potentially crosses multiple teams, where collectively completing the series of tasks accomplishes a goal that we're all working towards. Whether or not all of these goals are accurate, or really should even be a priority we're thinking about, or for that matter near term (some of this we potentially don't even get to till the end of the year), this is a long tail work stream.
A: These work streams intend to identify why we're actually doing these things, in an effort to justify and challenge ourselves: is this collectively what we want to accomplish, and secondarily, why are we doing it? What is the impact going to be on the end users as we focus our efforts on accomplishing these things? I'm going to read through these real quick, and anybody that wants to drop a comment in the "why", please do.
A: We can also go through those together. Likewise, if you suspect there's a work stream that you see on the future path for content routing as a whole, this is kind of a brainstorming session, so feel free to drop it in here. We'll take all the content and chisel it down to something very articulated, but if you have ideas, throw them in here.
A: This is where you can surface a work stream to the group and get everybody's feedback on whether or not it's a valuable thing to focus our time on collectively. I'll do all the legwork on the tail end of making the associations between the work we're actually doing and this collective effort. So, starting with double hashing for reader privacy: this is an aggregate effort of the DHT, the migration of the DHT, the IPNI reader privacy efforts, the migration of the index.
A: I tried for the life of me, Mossy, to think of the true benefit we get from delegated IPNS records. I'll spit out what I think here: I think this is the extensibility of being able to incorporate searches from external name resolution services via a handy API, which kind of makes us protocol agnostic. I found myself trying to write that in a sentence and I wasn't succeeding too well. Do you have a sense that immediately jumps to mind for this effort?
B: So for me, the clue is in the name: we want to be able to use other services to resolve arbitrary strings to CIDs. Right now this is all locked into, I think, the DHT APIs. It is inevitable that we're moving towards a world where there will be alternative naming systems.
B: Ideally, we want to make those changes graceful, iterative, and so on. So the first step is to just provide an API that we can swap implementations of, and that's the idea behind delegated IPNS records. We already have a set of HTTP APIs, provided kindly by the IPFS stewards, to do exactly that: delegation of services. A name system is just another service to delegate.
B: It would be more things like peer discovery and put records. This also enables us to experiment with alternative implementations, for example on the Cloudflare side. You could just say: yep, you can write your own naming resolution system; as long as this is the API, we can hook it up to whatever service and start experimenting.
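Concretely, this is the delegated routing HTTP API shape (IPIP-337 style). A sketch of the client side of such a call; the base URL, the peer ID, and the response body below are made up for illustration, with only the endpoint path and JSON framing following the spec:

```python
import json

ROUTING_V1_BASE = "https://example-delegate.invalid"  # hypothetical endpoint

def providers_url(cid: str) -> str:
    """Build the delegated routing lookup URL for a CID."""
    return f"{ROUTING_V1_BASE}/routing/v1/providers/{cid}"

# Hard-coded stand-in for an HTTP response body, shaped like the
# routing/v1 providers result; values are invented for this example.
sample_body = json.dumps({
    "Providers": [
        {"Schema": "peer", "ID": "12D3KooWExamplePeer",
         "Addrs": ["/ip4/192.0.2.1/tcp/4001"]}
    ]
})

def parse_providers(body: str) -> list[str]:
    """Extract provider peer IDs from a providers response."""
    return [p["ID"] for p in json.loads(body).get("Providers", [])]

url = providers_url("bafyExampleCid")
peer_ids = parse_providers(sample_body)
```

Because the contract is just "HTTP endpoint returning this JSON", any service (a DHT bridge, IPNI, or an experimental naming system) can sit behind the same URL.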
E: I'll also flag, just because you mentioned alternative naming systems, that you can also do this with DNS. You generally need someone to describe the fact that you're using a different naming system, so the way we've been pointing people to do this is: if it's "you support ENS; what about random-chain NS?", the answer is: use DNS. Put in an endpoint, put it there.
E: In your implementation you could decide you don't need the endpoint and just have code that does it instead, depending on whether that's feasible for your implementation.
A: And for the sake of everybody's perspective here: Dean, I took note of that. What I intend to do with this roadmap is, for some of these efforts, there are going to be technical design docs associated with them, owned by whoever owns those work streams. Potentially some of these design docs already exist, that kind of stuff; some of them won't need design docs.
A: Potentially there's going to be a lot of that, but this will be the first step towards categorizing it, talking these things through, and then having those deeper conversations in the weeds about how we execute on some of this stuff. Most of it just goes over to GitHub anyway, so that we're not exiting the workspace where we actually do everything, and I'll just link that stuff back to this page so that it's easy for someone looking from outside the group to find much more detail about these things.
A: I'll take ownership of that. Next up is delegated content routing inputs, and Mossy, I'm again interested in your input. I suspect that once writing to the indexer can be performed simply over HTTP, we have broader integration options for easier advertisement ingestion, enabling simple routes to update the indexer. I see this as ultimately lowering the overhead of the methods for optimizing indexer synchronicity across multiple instances.
B: I'm not sure about the last bit, but before that, I agree. From a general systems perspective, this is how it occurs to me: in the IPFS world, we have the capability to ephemerally publish content, while the IPNI side is on the other end of the spectrum, which is: you tell me you provide something, and I'm going to remember that forever.
B: Until you tell me otherwise, that is. In addition to that, IPNI requires providers to be accessible via some network communication, because the IPNI indexers reach out to providers. Both systems provide different trade-offs. For example, the reason IPNI is successful is that it makes bulk advertisement of content much, much easier; you can look at the proportion of the number of CIDs that are discoverable on the IPNI network in comparison to the DHT. This is why we can integrate big providers like nft.storage and Pinata into IPNI.
B: Having said that, there is a long tail here: lots of small providers that don't want to run a Graphsync server or an accessible HTTP server, yet also want to advertise their content through the IPNI network such that it's discoverable, just like they do on the DHT.
B: To that extent, the effort of providing delegation of put records fits there, in that it again allows us to swap the implementation of how puts are handled with alternative systems, while reducing the barrier for interoperability between the two systems. So the goal here for me is to stick an API in front of the put requests and then swap the implementation, so that IPNI can also accept the ephemeral records that are being written to the DHT, and then go from there later on.
A: That's comprehensive. Let's see what kind of one-sentence summary I can come up with; we'll have a laugh at it this week. This is good, Mossy; it's great context and probably got folks around the call thinking. Thank you. Next: delegated routing streaming support.
A: This left me scratching my head. I understand why we need streaming as a capability, and I know we've talked about optimizing streaming in the long term, and I think as we support Rhea this is one area that we're approaching. But when I say "delegated routing streaming support", does anybody have a different read?
E: You mean, is it different when you're basically done already? Oh, in that perspective, yes: I would say we should accomplish it in the next year.
B: For example, you could ask the DHT: give me all the results, with a limit of zero, and it carries on searching, which could take one minute, two minutes, five minutes, beyond the timeout of the request. If there is no streaming mechanism, you're going to have to set a timeout, and things would time out and fail. But if there is a streaming mechanism, you can return results as they are found, which results in a much, much better user experience in systems where a complete lookup could take a long time.
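The delegated routing endpoints handle this with newline-delimited JSON, so a client can act on each result as its line arrives instead of waiting for the whole body. A sketch of the client side (the records below are invented; only the NDJSON framing is the point):

```python
import json
from typing import Iterable, Iterator

def stream_results(lines: Iterable[str]) -> Iterator[str]:
    """Yield each provider ID as soon as its NDJSON line is available."""
    for line in lines:
        line = line.strip()
        if line:  # skip keep-alive blank lines
            yield json.loads(line)["ID"]

# Simulated chunks arriving over time: a streaming client can dial the
# first peer immediately rather than blocking until the slow tail finishes.
arriving = [
    '{"ID": "12D3KooWFastPeer"}\n',
    '\n',
    '{"ID": "12D3KooWSlowPeer"}\n',
]
first = next(stream_results(arriving))
all_results = list(stream_results(arriving))
```

With this shape, a lookup that keeps searching for minutes still delivers its first usable result within the first round trip.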
E: I think the most obvious thing is that even if you decided everyone was fast, everyone must complete under ten seconds, everyone completed under two seconds, what would happen is that if you tried to glue two systems under a single routing request, your time would be the maximum of the latencies of the two systems. So you would be highly disincentivized to glue anything together, because your latencies would always go up.
A: That makes sense. So, I was going back through the notes from a few of our other meetings: is there an optimization effort around that streaming in our future? Is that going to be in the scope of these discussions, or something we would not even consider doing?
A: Thank you. I'm going to leave it as a no for now. If you start to think that maybe that is the case, let's tackle it later.
A: Then I did throw a big, fat, comprehensive bitswap provider search delay item in here. I think maybe the scope of bitswap provider search delay is outside the scope of this group, but simultaneously we're highly dependent on the artifacts that result from that effort. So I just wanted to get y'all's opinions: is this work stream something we consider? I mean, it's continuously going to come up.
A: It's not prioritized, but it is work that we're going to try to continue to accomplish with the members of this team. Do we consider it an artifact or a work stream that we're focused on with this group? Does that make sense to people?
B: I have thoughts here. The title of that row: I think we need to adjust it, because it doesn't read as an epic to me. I think there's a bigger issue here which is more epic-like, and that is, and I hope folks agree, that we want to rely less on bitswap gossip for content routing. We don't want to end up with a content routing system that relies on the noisiness of bitswap in order to operate.
B: This is not about solving the noisiness of bitswap; it's more about reducing that reliance, so that whether bitswap exists or doesn't exist, content routing would still work. That's the epic for me. Does that make sense? What do you all think?
A: This group should be looking at this particular work stream, which directly addresses it; I was looking at it too obtusely. I think it really is the effect that bitswap gossip optimization has on routing that we're interested in, and whether or not work that we're doing collectively contributes to reducing that reliance. Dean, did you have any opinions on this topic specifically? Yeah.
E: So I'm just going to post this: Ghee posted a thing today about the reprovider sweep, which I'm mostly on board with, but it came with a comment of "oh yeah, we can use this and we'll be able to turn off the bitswap broadcasting", and it's like: no, you can't do that, because it serves a bunch of jobs.
E: We can reduce the scope of those jobs to the minimum of what is necessary. It's obvious that people don't want to get spammed. So if I could state the why, the problem we want to solve: have everything still work, and make people hate us less. That would be the tagline of what we want.
E: You know, whatever, but that's, I think, where we want to get to, which is a little different from the provider search delay, which is sort of: how can we minimize load on the node without breaking stuff? They're two different sides: one's a focus on the server, one's a focus on the client.
E: Both good focuses. One is reduce client load, the other is reduce server load. I think if you reduce server load, you'll make more people happy, and then the client load will sort of fall out from that, because the servers will get less load as a function of the clients needing to do less work.
C: We need a content routing system that works for things like Rhea, or nodes that aren't online long enough to have long-lived bitswap swarms already. We need a system that's able to function even when there aren't these long-lived desktop nodes with well-connected swarms. So this push of: let's not just have a bitswap broadcast as our initial content routing lookup, but equally move towards promoting alternatives like the DHT or delegated routing, and let's also have a provider push to get more content into those other routing systems.
E
I
guess
I
would
I
would
counter
and
say
that
we
are
in
any
case
in
in
an
event
where
you
are
relying
on
bit
swap
for
Content
Discovery.
You
are
not
getting
fast
content
Discovery
at
all,
and
so
we
should.
There
are
things
that
you're
using
the
broadcast,
for
that
are
fast
that
are
efficient,
but
using
it
for
the
purpose
of
Discovery,
because
you
didn't
feel
like
advertising
an
ipni.
The
DHT
is
not
one
of
them.
That's
just
you
waited
around
for
two
minutes
and
hoped
you
got
lucky
and
bumbled
into
the
node.
E: So we want to push, we just want to make things fast. That's a separate thing: we want to make content routing faster for all these people, which they're not going to get this way. And then we want to take all the cases where we are actually using the Bitswap spamming and limit the spam so that it's the smallest it can be.
A
Is
tricky
I
was
thinking
about
this
last
night
there
there's
kind
of
a
contention
between
like
Dewey
as
a
group
focus
on
improving
the
like
routing
of
content
in
in
all
forms,
or
do
we
focus
on
Guiding
that
content
routing
towards
the
ideal
form
and
reducing
Reliance
on
on
potentially
the
less
less
exacting
forms
or
the
less
efficient,
less
fast
forms,
I
think,
maybe
that's
the
push
and
pull
that
we
we
kind
of
get
to
wrestle
with
here.
A: I'm gonna leave it, but I think there's a good conversation to be had there that we can maybe address more directly. As long as we're kind of thinking about it, we may come back to that or potentially do a deep dive on it.
A: That is a really good point: peer routing optimization. These overlap, right, the discussion about search delays and peer routing optimization. Do either of you, Dean or Will, feel like these are two distinct work streams, if I try to describe them that way, or are they kind of the same?
A: The same issue that we're talking about here. I describe peer routing optimization as long-tail optimization of peer routing functions, and when I think about Bitswap search delays and the discussion we just had, I feel like there's a lot of overlap between those two.
B: Sorry, what does that mean, bypass IPNI?
A: These topics I brought up from prior conversations that we've been having; we've been iterating on some of these discussions, so there are potentially work-stream items here that I've declared which overlap a little bit, or which aren't necessarily something we want on our roadmap. So let's not take these as gospel; let's take them as something up for challenging and potentially removing. But yeah, DRs.
C: For this one, I would suspect this is a mechanism thing, where the work stream is: is there a way to run something like a Kubo node without a DHT in it, so that we delegate all of the things that we currently get off the DHT? One of those is peer routing. So then we need to figure out why we have peer routing and what it is that we're actually addressing.
B: Just reading this again, and looking back at previous conversations, I think this might be a mix of two things. One is peer routing, which, to be concrete, is an interface in libp2p that maps a peer ID to addresses. The other one, the long-tail thing, I think relates to a case where only the root of a DAG is advertised, and what sits below the root.
B: Things below the root are discoverable through, for example, Bitswap spam, and are not discoverable at all through something like IPNI or the DHT, because that's something we touched on in the past. So I see two distinct things here, and I'm not sure which of them this covers.
A: Specifically, yeah, those two are right; they need to be broken up into two items, because you nailed them. Honestly, that's the conversation I got the idea for this work stream from, and you just added enough color for me to refine it. I'll put that in there. Okay, so beneath this there are a few items that I haven't actually punched up yet, but I have very explicit, detailed reasons for why we're doing these things.
A: Some of these are mostly IPNI functions that we're presently working on, like scalability past a single physical node. I'll add some details there for y'all, but the short end of the story is we're working on some efforts to make IPNI nodes scalable beyond a single instance, so that if you were yourself running an IPNI node, you could potentially set up a cluster and be able to shard across it without too much challenge. The goal being:
A: It makes it much easier to run a node, and as your node grows you've got some decomposability with it. But I'll update these and give y'all a high-level review of what these efforts are, and maybe these work streams need to be collapsed into just "IPNI improvement" or something like that.
A: I'm open to doing that as well. IPNI being able to support results from the DHT, I think we're on track for that; I don't know that we need to dig into that one. And then we've got a goal internally of adoption of non-Filecoin index providers, so I think our delegated puts contribute to this, because that would enable providers to make puts and advertisements via HTTP or potentially other methods.
A: I don't know that we've talked about that, but I'm guessing that's the consideration.
A: And then IPNI reliance on CloudFront: we'd like to minimize leveraging CloudFront by looking at caching solutions, but also, I think, by getting multiple implementations of the index publicly available that we're able to synchronize with, so that lookups aren't being made only of cid.contact.
C: It's not just a goal to minimize reliance on CloudFront, right; the goal is decentralization, exactly, and reduction of trust in us. Not having to pay CloudFront is a nice bonus.
E: To add to Will's point, I see this generally like: if you use the word decentralization with the wrong person, they start saying, oh, I don't know, you're just trying to optimize for some corner-case censorship thing.
E: However it is that you need to get it to the people: whether it's reducing the costs, or reducing the number of times that Masih gets paged on a weekend, or whether it's reducing the cost on the storage side or the CloudFront side, whichever one of those it is, it's part of what happens by spreading the load across multiple people and having redundancy, which is what we're trying to do.
A: I think it's okay; I can come up with a lot on that subject. I think for me it's this exercise, and really the way that we'll benefit from it. It's 8:30 and we need to drop, but the final parting note I'll leave with is that the benefit of this exercise is to ensure that we all feel strongly about the why behind what we're attempting to accomplish. I'll ask everyone:
A: Just take a look at this. I'll finish adding these notes from today's conversation, and then we'll take another look, hopefully more briefly, next week. Thanks, everybody, have a great rest of your day.