From YouTube: 2022-11-28 IPFS Content Routing WG #1
Description
Sync of those involved with creating IPFS Content Routing Proposals. Meeting #1
Notes:
https://www.notion.so/pl-strflt/2022-11-28-Content-Routing-WG-kickoff-mtg-89c5ab58e31f4771b98747edd3b73b26
For more on the IPFS Content Routing workgroup including calendar information see:
https://www.notion.so/pl-strflt/Content-Routing-Workgroup-e59fa94a9c3f48d58480b7daf15bd356
Website: https://ipfs.io/
Twitter: https://twitter.com/IPFS
B
And then I'd encourage everybody: if you've read through these notes already and have comments or issues that you want to bring to the table, there's a section down at the bottom here called "top of mind for team contributors". You can go ahead and punch anything in there during the meeting, and we'll try to get to all that stuff before the end of the meeting.
B
If we don't have time to get through it, we can take it asynchronously and potentially bring it up during the next two-week period. So we'll jump in. I've thrown the... Can everybody see this okay?
C
Yeah, you're coming through just fine, cool. And real quick: are you taking notes, or do you need someone else to?
B
But basically, the purpose of this meeting, y'all, is to conclude the decisions we need to make in order to design the MVP for integrating the indexer into Kubo nodes. So we need y'all's support to nail down: what are the outline items that will enable us to do this, any blockers for decisions that we need to cover, and then support work for any information.
B
And then one of the components of this that we're going to be discussing is double hashing. There are technically two documents on this right now, but I'd like to point you towards the one with the hourglass here. This document has had some pretty recent revisions, which include more detailed components of our approach and the comparison we're leveraging to decide how to proceed. So I think you all will find that document to be very relevant.
B
I threw the link to some context up in here, just in case anybody who's watching wants to follow along and isn't familiar with that stuff. And then there's this IPFS asynchronous discussion with leadership on Kubo integration, which is ultimately the decision document that we're hoping to inform, so that our leadership can provide a path forward.
C
No, that all sounds good. I think maybe there's one more tactical thing first, just to make sure we're aligned, regarding the HTTP delegated routing API: getting that deployed to Kubo. Basically, getting the bridging aspect between the hydras and cid.contact using the updated HTTP delegated routing API; that's kind of what we already had in place with reframe.
C
Obviously we're now updating to this more HTTP-friendly API, and I just want to make sure that we nail that by the end of the year, so that we can close out Milestone one.
C
Milestone one in this whole endeavor was first about getting gateways to be talking to cid.contact by default. We first did that with reframe, and shortly afterwards we discovered the problems with the reframe API, which is where all the work has been to get to the generic HTTP delegated routing API. I want to make sure we close out on that guy too.
C
So maybe we can quickly start with that, to make sure all parties are aligned, and then get into the rest of the stuff.
B
Yeah, absolutely. Thanks for jumping in and prioritizing that, Steve; we can get straight to that topic.
C
Okay, so I guess a couple of things for people to be aware of, which probably also apply to the content routing working group. I think most people are aware that we are dialing down the non-bridging portions of the hydras on Thursday. ProbeLab is leading that effort in terms of the communication and all the monitoring, and folks like Gus and Antonio will be doing the actual hydra deployments for that work. And the only change we're making here is:
C
Basically, severing the database connection between the hydras and the DynamoDB. We're not doing anything else; in particular, we're not trying to minimize responses or anything. We will still be returning results, they'll just be the empty set. So that's happening on Thursday. As a result of that, ProbeLab has requested that we not do any other hydra deployments beforehand, or afterwards for some number of days, so that they have a stable environment for monitoring the network.
C
I'm sorry, let me pull up... okay. So as a result, we are not going to deploy the hydras with the updated HTTP delegated routing API until December 8th, so that we're giving ProbeLab enough time to do their analysis.
C
That doesn't block cid.contact from doing any of its own deployments beforehand, but that's kind of the hydra timeline, and I'll pull up the issue where we've been writing this out. I guess I just wanted to see when cid.contact thinks they can do a deployment.
D
I mean, we can do it quite quickly. We have had a draft PR adding this HTTP delegated routing support for 10 days and are waiting for code review; we linked it in the content routing working group Slack channel. No one has commented on or approved that PR yet, to confirm that we have it compatible with the spec that you guys have led.
C
Cool, okay. And so, I guess, on a practical matter: obviously we have been working on a Go implementation of the spec. We're actually not planning to put that into the go-delegated-routing repo; we're going to put it into the libipfs repo, so that things like our hydras can have both deployed at the same time, because we won't hit Go minor version issues.
C
It'll actually be a separate repo. That's what we're planning to do on the hydras. I guess if cid.contact wanted to use that, they could, but obviously it's totally fine for you all to do your own implementation. I guess, what's our preferred path forward here: keep supporting your own implementation, or should we try to consolidate on the libipfs version?
D
Having multiple implementations, like having our own, means that we will debug the server side and understand it, rather than punting it back to you guys in some sense. And this is meant to be a pretty simple REST API. It was quite simple for us to take our current find endpoint and have an adapted version that was compatible with your semantics as proposed in the spec, and so, in the same way that we found that initial REST HTTP API pretty minimally painful, I think it's easier.
C
Okay, that sounds great, cool. So I think we just need to get it on our board that we make sure we look at the PR, and you guys have got it linked here in the notes.
D
It should be pretty small, which is great. I think it's under 50 lines.
C
Great. So, timeline-wise: we'll get the go-delegated-routing changes released and merged today; again, that'll happen in libipfs, not in this repo. Well, sorry: we'll make the changes in the hydras and then deploy them on the eighth. I think sometime this week we'll get the cid.contact changes merged, and then I'll...
C
Let you all do a deployment. So I'll update the dates here, but we'll assume this is happening by end of week. And then we also have to make the changes in Kubo itself, which Antonio is working on right now, and it's obviously not done yet. I don't know; I'm assuming we're probably looking at near the end of the week as well on this guy.
C
We didn't actually ask him earlier today, but, I don't know, the point is: the key thing we wanted to make sure of is that this gets out with the next Kubo release, for which we're going to be doing the release candidate on the 8th and the final release the following week, before we all break.
D
Yeah, the one other fun thing was, it turned out Bifrost didn't prioritize doing the direct connection. We pinged them at the beginning of this week again, and they're going to see if they can also get the direct connection from gateways to cid.contact actually running this week. But we'll see. You know, we would have liked that before the hydras; it's a shame that we ended up with two months where that config change didn't get rolled out.
C
Oh God, okay, yeah. Sorry, I had actually meant to start with that, to see where that was actually at, because I was looking at the config yesterday and it didn't look like any of it had it. Had it only been on one or two banks or something? Right, they hadn't pushed it everywhere. Okay, yeah.
C
But so, at this point, well, I guess, are you still going to push on them to deploy it using reframe?
D
Right, we would like that to happen before the hydra dial-down, as a way of reducing potential impact there. That would make us feel better about hydras changing configs in ways that we don't know, because if there are issues with the hydras, it means that we're not losing service during the time that we're spending figuring out what's up. So it makes the hydra changes less critical and less scary to mess with.
C
I see, okay, this sounds good. I guess there's nothing else coming to mind for me on these immediate tactical topics; I'm good to move on now, unless anyone else has any questions.
C
Okay, great. And I guess, then, the other thing that would be ideal is: when we do the RC, we get that RC deployed across Bifrost infrastructure. Ideally, as part of that, we're also converting, well, obviously they'll have to switch from using reframe to the new delegated routing config for hitting cid.contact. I think it would be great if we can go into the Christmas break where at least the link between the gateways and cid.contact is the way we ideally want it.
C
I know we won't have Kubo solved by then, as we'd like, but at least we'll have the gateways fully done.
B
All right. So another topic that's kind of high-level, top of mind, is a commitment to the double hashing design pathway.
B
Currently, I would say we're wrestling a little bit with two options. Before I get too into the weeds on this, I should give Ivan a chance to jump in and take the reins; I think he's going to be our subject matter expert on the topic. Ivan, would you care to provide a quick high-level summary for the folks on the call of these two options documented here?
F
Yeah, sure. So we started thinking about how to implement the double hashing story on the indexer's side. In the main Notion doc about the double hashing story there is a section about the indexers, and we started updating this section today. Basically, the punch line is: we've done some back-of-the-envelope calculation of how much extra storage would be required if we were to implement the spec as written, etc.
F
So if you scroll down a bit, down, down... basically, yeah, there's a bunch of thoughts there. So, and I...
D
I looked at this, and I think we need to revisit it before we try to get to consensus, because I think it's much less. We're already storing a value that is a peer ID plus context ID as the value, and the encryption here is just adding a small amount of salt in addition. So I don't think it's an additional 70 plus 12 plus 8 bytes; I think it's going to be maybe 8 to 32 bytes per record that we end up adding.
F
Basically, it's still being worked on, so let's revisit it later on, but within the next couple of days we should be up to speed with the proposal from the indexer's side. So, basically, anyway...
D
What I wanted to say here was: my sense is that the extra storage is not something that we, as the indexer, need to raise a flag on, or I would prefer not to, and just say yes, it's going to be somewhat bigger, but not so much bigger that we can't just encrypt each value. I think the main interface difference that we need to figure out is that our values are not directly peer IDs.
D
When you look at the DHT variant of double hashing, the thing you get back from your initial query is the peer ID encrypted by the CID that you're asking for, whereas in the indexer we have a combination of the peer ID and a context ID, where the context ID is giving us more specific information about the specific provider records.
D
So, for instance: you know which deal this is, which graphsync peer is advertising this CID within it. And so there's something that comes back to a client, encrypted by the CID, that is not just the peer ID. That's the addition to the interface, for delegated routing double hashing, that indexing is going to end up colliding with if we use exactly the same interface that the DHT double hashing is on.
D
So I think the question is: do we just propose what this additional piece, what the double hashing that works for indexing, looks like in the context of the HTTP delegated routing API? How do we manage this addition of some extra context in this proposal that indexing is going to want?
F
Yeah, so I'm still fleshing out the ingestion side of the story; so far I've concentrated on the client-to-indexer side. So yeah, I will aim to flesh out some thoughts on the ingestion side in the next couple of days.
B
The outcome from this is: we'll continue iterating on this. We've got a little bit of discovery to wrap up asynchronously, but the documents that give you a good general sense of the direction we're taking are available, so you all are welcome to take a look in the meantime, and then I'll take an action item to follow up once decisions are made, making sure that y'all understand clearly the proposal that we're putting forward.
H
I have a very, very quick question. Maybe it's a lot of work, but the question is: this ingestion story, is it specific to the Filecoin indexer, or is it indexers accepting provider puts from clients? Is that still on the roadmap, still planned?
D
We have a different model. It's not specific to Filecoin, in that we're ingesting indexes and advertisements from web3.storage and other IPFS nodes as well, and IPFS clusters and so forth. It's not a put-based model; rather, the indexers pull that content and expect the publisher to be online and available to act as the server that answers their requests for it.
D
That is the model right now. We own that ingest spec; we have it in the ipni/specs repo. I think we would potentially be amenable to having that become a broader IPIP-style spec, but I think it has been useful for initial iterations for us to just own it, in terms of the speed at which we've been able to modify it to figure out all the bugs and what we wanted it to be.
B
It could get confusing, okay. So, just to recap really quickly: we've got some final ideation to lock down our proposed method, and we'd like to propose a method on the basis of the research that we've done so far. We feel like that's the quickest way to a feasible outcome, so as soon as we've got a more concrete proposal put together for that, we'll inform the team and hopefully move forward.
B
All right, we'll move on from that. There was a topic for clarification on which indexers are fit to be added to the Kubo maintainers' default list. Steve, I believe you brought this up, and I think it's a really good point. So the concept is essentially:
B
What are the criteria for an indexer potentially being added by the Kubo maintainers? Do we have criteria identified at all yet for what would be deemed appropriate, like a baseline for these indexers to qualify?
C
No, no, that hasn't been specified or written out. I think part of the reason that hasn't occurred is because we hadn't gotten to the position of needing to come up with that policy.
C
You know, like, if we go an ambient discovery route, that's a decision and policy that's outside of Kubo.
C
If the decision gets made that, no, we're going to be hard-coding, we're going to be defaulting some of these into Kubo, then I think we need to come up with that. But I don't think any serious thought has been given to what exactly the acceptance criteria would be.
C
So I think it's a valid question, because, well, I know we had a little bit of back and forth on this in this cid.contact-in-Kubo document, and I don't know, you've probably since updated it. I'm assuming that if we don't have it in Kubo, we still have to answer those questions somewhere, like if we want to be able to put things in our bootstrappers list.
C
You know, so there's still a policy decision of who gets to be the default delegated routers within our bootstrappers, which kind of have a privileged position. So I think we're likely still going to cross this bridge, but I guess the Kubo team so far has been skirting around it.
B
That's fair. So, I'm still wrapping my head around a lot of this.
B
One thing I'd like to throw out there, and I'll ask the dumb question: for the criteria for baselining these, is there an association with baselining off of, for instance, security requirements, privacy requirements, reliability, or all of the above?
B
Fair enough. So what actually needs to happen here is: we need to essentially, collaboratively, come up with a baseline for what criteria would need to be established to ensure that these are met. Is this something that this group perceives we would seek community feedback on, for instance from service providers? My perspective is that that would be beneficial, but it would also probably take a long time, and potentially not necessarily bias us towards action on this discussion. Can we solve this discussion internally among our work groups?
C
Yeah, my guess is we can probably, you know, drive it and put a stance forward. I think, as long as we do all the work in the open and there's the ability for others to watch it and chime in, and we make sure relevant parties are aware of it and have the opportunity to comment on it.
C
So, any of the current indexer providers, or people that are coming on board, maybe as well. The ipfs-operators channel is a good place to expose it. Again, I don't think I would block here, but as long as we have high visibility into what's happening as we move it forward, I think that's probably okay.
B
Okay, so I won't volunteer anyone for any work on this call, but let's take this as kind of an asynchronous action item: we need to form at least a rough outline for what our baseline acceptance criteria would be to integrate indexers into the maintainers' default list. If we need to start a brainstorming session with a smaller group to kick this off, I can organize that, but I won't belabor this point too much.
E
So, if we're not going to be hard-coding these, because we're doing ambient discovery, what's the urgency of doing this, then?
D
The urgency we're hearing is that there is a desire to have indexers able to be queried by Kubo nodes on the order of two months from now, not six months, and we need to look at that ambient discovery option. But it seems very hard to promise that we could get ambient discovery done on the order of two months. Okay.
E
But so this would then be a stopgap, then, until we get... So I mean, I wouldn't spend a whole lot of time trying to work out a... Sorry, you're a little, I can barely hear you. I mean, if it's just a stopgap, then we probably shouldn't spend a lot of time working out a process and criteria and all this stuff, right? If it's just going to be something that we keep around for a few months and then throw away.
D
Do we have a template for how we select bootstrappers, or have we had this conversation in the past? I'm not sure I have the history there.
G
We were talking about, I think, location hints to the gateway. These would be part of a GET parameter or whatever, but in the URL of a gateway you would be able to specify something like a multiaddress that points to someone that has the data, or maybe an indexer that knows someone that points to the data, and maybe that could be the stopgap.
G
So, of course, for the gateways we were thinking to add location hints, which would be either a GET parameter or in the CID: something that tells the gateway where it can find the data, either directly providers that host the data or some indexers, and then we'll be able to find you someone. So we...
D
Have that, I mean... So I don't think that solves this, because (a) we're going to have the gateways always querying already; the gateways will always be looking, and that's happening this week or next week. So we don't need an additional hint at the gateway level. And then the point is: we've got more content in the indexers already than in the DHT, so that should be a default.
D
Going to location-based routing, where it's only some subset of CIDs that are somehow communicated with an additional location hint, where it's opt-in, is weird when the default place where a CID is most likely to be able to be mapped to providers is this other database that's only being opted into. The point is: we want it to be the default, not opted into.
B
If we can start with a brainstorm of criteria that, you know, gets the process started for the team, so that we understand the criteria that would potentially be selectable, I think...
B
This topic, I don't know that we'll resolve here on this call, so I'll take an action item to kind of asynchronously spur this conversation on, and we can continue this in GitHub, if everyone agrees, unless one of you has a strong opinion about persisting with a potential solution.
B
Thanks, y'all. All right, next up on the list: I brought this item up just to ensure that we had completely closed it out, but I saw a comment from Lidel in Slack, which was on IPIP-337, regarding the stats endpoint on current operators: is this consistent across all operators? I just wanted to make sure that there wasn't any additional outstanding or blocking work.
B
With that question, because I didn't see a good answer. This was in response to a question to Will about, let's see here... Lidel, do you know the question that I'm referring to here?
B
Okay, so it's not in any way blocking any decision-making or anything like that. We can skip over this in that case, then, yeah.
D
Right, but is there a value, like, for why clients would want to, for why we would put it in the spec? It's something we use for monitoring and for our web page, but it's not something that we have clients actually querying right now. So if that is the value of it, then it's not something that we would put in the spec.
B
I'm taking from this that it's not necessarily something of value that we want to include in the spec; am I reading that accurately?
D
I guess: are we reasonably happy with what that proposal for content routers is, such that the next step is implementation? Or is there more, on sort of a draft design doc, that we would want, and should we do more rounds of trying to think about what a design would look like before we move to an implementation? That is probably where we are on that ambient discovery.
D
I think the other question there that we're going to need to figure out is what an MVP is, because there is the potential for an infinite amount of work under that design doc, especially in the modeling and so forth. So figuring out how to scope that down to something that we can actually feel is tangible is going to be some work. I'm happy to try and work with...
D
Maybe Torfin is the right person to think about what to propose, what an MVP looks like in terms of work items, and to try to break that down; that seems like one thing. I think the other one is: what is the work split there? I think we know that there's probably some modeling that we want to do.
D
There's some integration into Kubo that's going to need to happen, right; most of the code is in Kubo, and so there's at least some involvement from the stewards that we're going to be asking for, if we go down that path.
H
So I think the MVP in my head is something that's in Kubo, but it's opt-in, and we would opt in on the bootstrap nodes, essentially.
H
That seems, yeah, right, yeah. That simplifies the deployment, and, you know, now suddenly the existing bootstrap nodes, the default bootstrap nodes that every IPFS node has, will already provide it, yeah.
D
And we could hard-code, in the first round: the thing that we could do in a few months, potentially, is have bootstrap nodes running and advertising some hard-coded set of indexers as the ones that they know about. So, not yet doing the gossip-like advertising of your own new indexer, which could be malicious, and then, at the same time, figuring out some of the feedback loop through monitoring or measurement.
D
So we can see clients reporting back their stats and use that as something closer to real data on what that feedback loop is going to look like in terms of tuning, before we actually end up with the full dynamics of that feedback loop potentially spiraling out of control.
H
Yeah, I don't know if it's useful, but Kubo is still always connecting to bootstrap nodes; that's been my understanding, yeah. So if we could park solving that until we have real discovery here... But this way, you know, we don't hardcode any strings in Kubo, and we don't... The problem is, you know, there are different problems, but the main problem is having clients stuck on some hard-coded string and then having to write migrations.
D
Cool. I guess I don't know if Torfin is going to come back or if his internet just went out entirely, but we can definitely try to propose some milestones of what these various levels of deployment might look like; that seems like the next step here. And I think the other thing that we'll propose is what that breakdown would be between work that we will hope to get help with, or that we think the stewards need to own, because it is in IPFS or in Kubo directly.
D
So certainly the deployment, but I need to think about, you know, which steps before that it also makes more sense for the stewards to own. But I think that's probably the right thing. I think we're reasonably happy with the basic sketch, where the thing that would change this design in my head is data more so than opinions; like, we have some uncertainty of, does this actually work? But the way that we're going to understand whether it's going to be reasonably...
D
Sane is data. So we should see if we can get some entity like ProbeLab to help with modeling, and if not, we need to figure out who's going to do that, like the simulation. And then we need to see if we think that the first milestone, the MVP of bootstrap nodes running it, would be enough: like, what data are we going to get, and what would we do to refine the design based on that data? That seems like the right question to be asking. But cool, we'll own proposing what an MVP and next steps on building look like under this design.
B
So, tracking time, we actually landed right where we were hoping, with kind of a wrap-up section for anything that's top of mind for team members. So this is more of an open floor for anybody that wants to bring topics to the work group that weren't referenced in these prior discussions on the agenda. If we could start with the IPFS stewards: did you all have anything to bring to the work group?
B
And this can be anything, topically: in reference to, you know, gathering further data, documentation on indexer node operation if you need details, or anything outstanding. Open topic for that.
H
I have one very quick, kind of additional-context note: you know, the gateways are the main use case, but then we have the actual IPFS nodes, which could be desktop nodes being run by individuals. And while we control IPFS Desktop and we can adjust its configuration freely (we can ship a release in one day and they get automatically updated)...
H
We don't have that flexibility with a partner like Brave. Brave is running Kubo, but they essentially have a hard-coded configuration there, which their security team reviewed, and they're very skittish around things like DNS privacy leaks: anything that smells like leaking user browsing history, even if there are three layers of indirection, will be problematic. Even if it finally passes, it will be very slow to go through review. So I just wanted to flag that.
H
So that this group is cognizant that it's not, like, stewards just pushing some things for the sake of pushing; it's an actual limitation we may hit with partners. So it's good to at least have answers, or like a plan on the roadmap. Just like Will said, it's fine to do some compromises, with bootstrappers having hard-coded lists and not having any reputation system yet, and then doing the next stage. But, you know, having that plan will make some discussions way easier.
H
So when we ship a new version of Kubo and we ask Brave to upgrade, their security team may ask some things, and it would be good for us to be prepared for that. So that's, like, the only meta thought I had.
B
Thanks; that's actually, I think, really valuable context for everybody on the call. It helps give the broader team an understanding of the decision-making process and what drives some of these barriers or obstacles to committing to a pathway. I think it's important for everybody to understand; I appreciate you bringing that up. Any questions about that from anybody on the team, or other items that are top of mind for the IPFS stewards?
B
Well, let's jump to the network indexer team.
B
Anything top of mind for anybody on the team, or kind of efforts that you want to make this work group aware of contextually, that you think would be beneficial to the working relationship across the teams?
D
Let's make sure to have, on the to-do: do we have additional bi-weekly instances of this working group call? If not, we should keep scheduling these. I think in two weeks we definitely should be checking in on where we are on the hydra drawdown, gateways, and subsequent...
B
It'll be every two weeks. If it's possible, I'll try to compact this to maybe a 45-minute or even a half-hour meeting as the work proceeds. I think for this kickoff an hour was beneficial, because there are a lot of balls being juggled right now by everybody on the team, but I'm...
B
It definitely should be bi-weekly. Let...
B
All right, thanks, all, for calling that out. It should be popping up as an invite for everybody. Okay.
B
All right, so, for our action items that came up during this meeting, there were two that I highlighted. Please call out any that you may have heard that I didn't recognize.
B
One was the approval on the implementation, for basically where we're going to put the code, for the IPNI team to own the code for the HTTP API migration, replacing the reframe API that I think we found issues with. I have Gus looking into that kind of by the end of the week, and, Gus, if you need any context or support from us, obviously the team's here for you.
B
And then also, Will volunteered to take some action on the ambient discovery of content routers proposal; he's going to put together an MVP for us. We'll communicate, I think, both of these things out in the content routing work group, but if we need more hands-on coordination of efforts or anything like that, I'll keep an eye out and be here to support that if we do.
B
So we finished with a little bit of spare time, which is great. I'll post a video of this in the content routing work group from the recording.
B
All right, well, we'll get a few minutes back. I want to thank you all for joining this; I think it was pretty successful. If you have any feedback for how this was run, or improvements that you'd like to see before the next one in two weeks, feel free to reach out to me directly, and we can restructure this however we need to, to be the most effective. I appreciate any and all feedback.