From YouTube: 2023-07-23 Content Routing Workgroup #16
Description
CLI updates to IPNI, IPFS /routing/v1 update to Kubo v0.23, Bifrost continued v0.21 rollout. Delegated double hashing and ambient discovery are on the horizon, but timing requires further discussion.
A
Hey Dan, welcome. Hello, all right, everybody. Welcome to the Content Routing Work Group number 16. We're, like, well on our way to 27; it's crazy. So I dropped a link to the notes in our chat. We'll drop it again, because I think I did that before recording.
A
Just so it catches the notes, and then we can just review that. At the top of this Notion page there are a number of links related to open, ongoing items that are in discussion, but also some more general content routing context: documents and discussions that happened in the past but are still relevant to a lot of the ongoing work
A
That's being done presently. We usually start this meeting every time with a quick update from the teams that participate, because we've got a really good breadth of visibility into what's going on around the network here on these teams. I'll start with an update from the IPNI team, and then, if the IPFS folks from Protocol Labs would like to jump in, we'd love to hear about y'all.
A
So, on the IPNI side of the house, the FoundationDB experimentation is ongoing, but we've hit some pretty hard obstacles. The backing database for FoundationDB has a memory leak that, I think, hit once we reached a certain amount of scale. Potentially, Mossy, correct me if I'm wrong, there may have been other factors that contributed to this, but essentially the important point to take away is that we had a very painful memory leak. We think we've got a replacement backing database that would potentially be a good candidate to resolve this issue.
A
However, the real question for us is whether or not we have the resources to continue this level of experimentation right now, and so we've got a lot of internal considering to do regarding our path forward. But for the time being, the important takeaway for the group is: FoundationDB will not be live in production anytime soon for IPNI. We're continuing with the rock-solid Pebble DB, which continues to serve us well and is rocking along heartily right now.
A
Hence the reason why probably nobody even noticed that we're having these problems: we're serving such redundant infrastructure right now. And on top of that, speaking of redundant infrastructure, we've been undergoing a pruning exercise with our infrastructure over the course of the last three weeks, and I've got to heartily give some applause to both Mossy and Andrew Gillis for really bringing down infrastructure costs on IPNI consistently, week over week, three weeks running. Big numbers, and we've done so without reducing any of our reliability or redundancy.
A
I think, from the looks of things, our performance has stayed perfectly the same, and so there's a lot to be applauded there. Additionally, there's something worth checking out: Andrew updated our CLI and included some really helpful distance commands that I think folks in this group might be a little interested in. I'm talking to him about whether we can show them off in one of our demo days that's coming up, but folks here might be interested even if he doesn't want to show them off in the demo day.
A
I might run them, but either way, I'll share a link with the folks in this group when that happens. Basically, you can check the distance any given provider is from the IPNI, but you can also get standard output in your terminal with the CLI command that shows you the running log of providers, and there are all kinds of interesting insights you can gather from this data about the current state of providers on the network.
A
So I'll drop a link here after the fact that you all can check out; it's really cool. And then, last but not least, the IPNI team has been iterating on a design decision for a little while regarding what we're calling the IPNI sync. Basically, GraphSync advertisements to IPNI are timing out with a high level of frequency across the network, and what this results in is stalled advertisements to IPNI. There's a lot of this among the storage providers that happen to store and then advertise data.
A
If they are timing out, then their data is not becoming queryable on IPNI, and so our proposal to fix this is to take a look at this data transfer protocol and clean it up. We've got a design document that we've just started iterating on with our internal team. Probably we can share that with the broader group.
A
I think maybe once folks have had a first pass on it, but we'll get that out there. It would create a dependency on the provider in Boost, so we would have to update Boost to get this out to the network, but we're in the midst of discussing what that kind of rollout would look like and how we would support both protocols for the time being, as folks update to that new version. So those are the big updates.
A
Mossy, can I kindly ask: is there anything else you wanted to add to these, or any color you wanted to add to them?
C
What's the trick in supporting more than one protocol? It seems like you have identify and whatever, so you should be able to just ask: which protocols do you speak? Let me try the one I prefer. And so then you have both code paths, and then you deprecate the one you don't like when you feel like you're able to kill it off, because enough people have updated.
B
Yeah, so we're going to keep backward compatibility, obviously. There are some complexities inside the implementation of storetheindex, which is completely low-level, but at the protocol level, like you said, there would be a new protocol identifier for the HTTP sync, which then, you know, you proceed with the, you know, supported protocols and whatever, and eventually get rid of the old one.
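The fallback approach described here can be sketched as a preference-ordered match: ask the peer (via identify or any protocol listing) what it speaks, then take the first protocol from your own preference list that it supports. The protocol IDs below are placeholders for illustration, not the real IPNI sync identifiers.

```go
package main

import "fmt"

// chooseProtocol picks the first protocol from our preference list that the
// remote peer reports supporting. If none match, it returns the empty string
// and the caller can fall back or error out.
func chooseProtocol(peerSupports, ourPreference []string) string {
	supported := make(map[string]bool, len(peerSupports))
	for _, p := range peerSupports {
		supported[p] = true
	}
	for _, p := range ourPreference {
		if supported[p] {
			return p
		}
	}
	return ""
}

func main() {
	// A newer peer advertises both the hypothetical HTTP sync protocol and
	// the legacy one; we prefer and use the new protocol.
	fmt.Println(chooseProtocol(
		[]string{"/ipni/sync/http/1.0.0", "/legs/sync/1.0.0"},
		[]string{"/ipni/sync/http/1.0.0", "/legs/sync/1.0.0"},
	)) // /ipni/sync/http/1.0.0

	// An older peer only speaks the legacy protocol; we fall back to it.
	fmt.Println(chooseProtocol(
		[]string{"/legs/sync/1.0.0"},
		[]string{"/ipni/sync/http/1.0.0", "/legs/sync/1.0.0"},
	)) // /legs/sync/1.0.0
}
```

Once nobody returns the legacy ID from identify anymore, the old code path can be deleted, which is exactly the deprecation story discussed above.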
A
Awesome, that's the update from the IPNI side. IPFS folks, did one of you want to jump in and catch us up with what's going on in IPFS world?
D
Yeah, I dropped... we spent a bunch of time doing planning, but I dropped two items; let's use those for the updates. So one is that, maybe not for Kubo 0.22 but for Kubo 0.23...
D
We will be aiming at finishing the pull request to expose /routing/v1, that is, to expose the existing routing systems in Kubo, which are IPNI and the DHT, as a /routing/v1 endpoint. That's a way of dogfooding, end to end, both the client and server implementations that we have in Boxo, in Go, and also dogfooding and identifying any gaps in our specs.
D
So we have streaming, we have IPNS, but we don't have peer routing. So there's an open IPIP, which I linked, which introduces a peer schema. We'll most likely only use a subset of that, but I'm just flagging that this will be a way of closing the functional gap of the routing API. And that's part of a bigger picture where we want to have the ability to do everything IPFS over HTTP. There should be, like, anything you can do today...
D
You should be able to delegate it in some way over HTTP, and that way people are able to black-box some parts of the stack and either implement them in a different language or delegate to different services.
D
So that's just kind of a PSA on why that PR will be an entry point for a bunch of work. And the second one is, I just published a draft of an IPNS cleanup. It's an early draft; the TL;DR is that we deprecated old signatures, but we still published records which had both of them, and even when both client and server only cared about the new signature, the lack of the old signature made records invalid.
D
So what this IPIP proposes is that if there's no old signature, that's still a valid record, because we want to move on; we ignored the old signatures anyway, and we want to give people the ability to remove them.
D
I don't believe so. I also don't think this IPIP will impact any people who have already integrated IPNS or implemented storage, because the wrapper envelope is still protobuf, and in that protobuf...
D
We have an optional public key for RSA keys, and we have a signature and a CBOR document, and the signature signs the entire CBOR document. So the surface is fairly small, and if people implemented the protobuf envelope before, the only difference will be in validation. What this IPIP documents is the change in the way we create records, to make them lean, and also how to validate things, so existing records are still valid.
D
So I think the feedback there would be around the peer schema, and I think the takeaway for cid.contact would be: if you don't know the protocol, right, in the cases where you assumed Bitswap before...
D
We would ideally like to return a record in the peer schema instead of, like, unknown, but that's kind of a cosmetic thing. Implementations, at least in Go and JS, will also support unknown, just to make a smooth transition path. But for the spec and for new implementations, we'll be pointing people in the direction that, if you don't know the specific protocol, or you don't have any protocol-specific metadata, just return a peer record with multiaddrs, and that's good enough.
E
Yeah, sure. We're in the process of rolling out 0.21 at the moment. I think we just skipped 0.20 entirely for some reason. So that's what we're hoping to get out at the moment, and then we'll have those stats for you guys. At the moment it's still isolated to that one region that we had the test deployment in, unfortunately.
C
A note about the 0.21 deployment, which might be... I think this is true going back... yeah, I think this is moving from 0.19 to 0.21: there is a bug, now fixed, around redirects, or the _redirects feature. Just grabbing the issue.
A
Awesome, thanks, Cameron. I didn't check today, but I've been checking periodically, and it seems like the stats pages from the Bifrost team have been kept up to date, so it's been helpful to be able to check the status of those things. I'm not sure if anybody else has been referencing them, but I occasionally check just to make sure we can see what's going on as stuff comes up.
E
Just so you guys aren't misled: the stats you're seeing currently are only from that test rollout we had in, I think, Las Vegas, if I'm remembering the right region. Anyway, it's not important, but it's only a small sample; it's not all of the gateway nodes. That's the main takeaway. So as soon as we get 0.21.1 out, we'll have everything properly clarified.
A
All right, so we had a few topics to cover. We don't have a ton, but maybe these topics will be lively enough.
A
There is a PR for the HTTP delegated routing for reader privacy, and I think there's been some discussion on that PR, but there's some work to be done on the stewards' side of the house as well, and I think we were wondering where this falls in, like, the priority list of the stewards. Is this going to get worked on this quarter?
D
It's a good question. So for sure we will circle back to reviewing it, but when it comes to, like... if the question is whether there will be an implementation in Boxo, I'm not sure; I can't tell you on that yet.
C
Yeah, I think we are still negotiating what the end result of the prioritization is going to look like, but my suspicion is that this isn't going to come up super high unless there's someone specifically pulling for it from us, in which case, you know... I feel like we're trying to sort of amplify our impact, and so, like...
F
The one note from Torfin's earlier update that maybe is worth thinking about is: since, on the IPNI side, we're not waiting on FoundationDB now, we expect we'll actually have some partners with other full indexers that are working by the end of Q3.
C
That makes sense, although you need both things to get this to work, right? You need some code that decides which router to use, and then you need the API for figuring it out.
C
So I would hope the API is less work than the figuring it out, but it might actually be more, given how these things tend to go. So that's a good point.
B
So it's ready for review, and then, after that, implementation in Boxo, and then integration into Kubo. But the side thing that slightly worries me is just having this feature only in IPNI until Q1 2024 instead of shipping it here, because we have it already and all that's missing is, like, the glue bit in Kubo, if that makes sense.
C
Also, like, if it comes to it, we can figure it out, right? Because if what we're looking at is that we can either implement this double hashing API, where the only consumer of it is IPNI, or we can use, like, an IPNI-specific API...
C
Those things may not be dissimilar enough for anyone to care until there's a second thing that wants to use it, right? So I guess I could see that going either way. If we don't feel like we have enough confidence in building a somewhat generic API, given we only have one backing implementation, then we say, okay, we'll use the specific API for now and make a generic one once there's...
C
That's why, I guess... that's sort of the good thing: the API here doesn't need to be the blocker, right? If we have multiple service providers that are providing data, and we have some code and some logic for determining sanely how to decide which ones to use, and sort of how synchronized everything is, then that's probably enough to get the ball rolling. Lidel, do you disagree? Does that sound about right?
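For readers unfamiliar with the double hashing idea discussed in this exchange, the privacy-preserving core of it is small: the client hashes the multihash it wants to look up one more time and sends only that second hash, so the indexer can answer the query without learning which content was asked for. The sketch below shows just that step; the real reader-privacy design layers salting and encrypted provider payloads on top, which are omitted here.

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// secondHash derives the lookup key a privacy-aware client would send:
// a hash *of* the original multihash, rather than the multihash itself.
// The indexer keys its records by this value and never sees the original.
func secondHash(multihash []byte) string {
	h := sha256.Sum256(multihash)
	return hex.EncodeToString(h[:])
}

func main() {
	original := []byte("example-multihash-bytes")
	fmt.Println(secondHash(original)) // the only value the indexer ever sees
}
```

Whether this lives behind a generic double hashing API or an IPNI-specific one, as debated above, mostly changes the interface around this derivation, not the derivation itself.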
A
That's a good takeaway. I don't know if you're thinking about it the same way, but we're in good shape. Thanks, y'all, for the perspective on that. I think it gives us something to think about, and maybe once you all have had some time to think about it, we can talk a little bit more about where it falls priority-wise, kind of based on renewed understanding.