From YouTube: 2023-01-10 IPFS Content Routing WG #3
Description
IPFS Content Routing workgroup #3
Sync of those involved with creating IPFS Content Routing proposals:
Notes:
https://www.notion.so/pl-strflt/2023-01-10-Content-Routing-WG-3-991d6b01628b4b89b2debd16b5337d5e
For more on the Content Routing Workgroup, including calendar information, see:
https://www.notion.so/pl-strflt/Content-Routing-Workgroup-e59fa94a9c3f48d58480b7daf15bd356
Luma meeting series:
https://lu.ma/ipfs-routing-wg
Website: https://ipfs.io/
Twitter: https://twitter.com/IPFS
A: All right, looks like I shared the link to the meeting minutes, so I'll go ahead and share my screen. I'm working off just a laptop screen right now, so if any of y'all have chat questions, please jump in and mention that you're looking to have a question answered, because I won't be able to see it.
A: All right, y'all, I dropped the meeting minutes in the chat; you can grab them and follow along there. At the top here are some of the current specifications we're working on, for folks that haven't been in this meeting before, along with important documents that outline some of the technical design details we're presently working on.
A: You can go ahead and give those a read on your own time, but they're pretty important docs for keeping up with where we're at in the Content Routing Workgroup. The purpose of this meeting is basically to agree on a technical implementation. The double hashing work is actually going to be discussed at the on-site; that's been the scope of these meetings over the last three sessions, but we do still touch on aspects of double hashing and its outcomes.
A
So
we'll
start
off
with
the
discussion
of
just
a
handful
of
topics
that
we
have
here:
it's
not
quite
as
voluminous
as
it
usually
is,
so
we
actually
might
be
able
to
compact
this
meeting
but
I'll.
Let
you
all
dictate
whether
or
not
we
are
able
to
cover
everything
in
the
time
that
we
have
available
since
the
HTTP
delegated
content.
Routing
discussions
are
going
to
be
happening
on
site.
The
primary
topic
that
we
wanted
to
cover
today
was
the
bifrost
implementation.
A: I think we found out over the break, or as we ramped back up, that the deployed range of versions is a little bit different than what we thought we knew, or maybe people did know that. The important outcome is that we need to align on the deployments and versions, and basically unify everything on version 0.17, with version 0.18 being the ultimate goal.
A: So, in talking to the folks on Bifrost, and it looks like you were able to join, welcome Mario, if you have anything you want to drop in here please feel free to chime in, but in talking with the folks over on Bifrost, their plan is to bring everything up to version 0.17.
B: Everything is now on 0.17, which is good. Everything except the Nitro clusters: with all the cost cutting and the changes around Nitro and so on, we're leaving those on almost minimal life support, because they're basically being phased out. But everything else, the gateways, preloads, and bootstrappers, is now on 0.17, because we had a mess of anything between 0.14 and 0.16, plus some ad hoc commits on some hosts. It was a mess, so we said, first of all, let's get the house in order. Everything is now on 0.17, and Jeff has said he would deploy 0.18-rc2 to a set of test nodes, probably between today and tomorrow his time, which is PST.
B: So that's the current status of that. One of the things we want to do from now on is keep better track of the latest Kubo release in production, so that we don't end up in this situation again, where we have 0.14 deployed when 0.17 is the current one.
C: Just a small thing: you know that there's a migration in the release, right? Which means that when you test out the deployment and do the migration, if you try to roll back, that's more complicated.
C: Between 0.17 and 0.18 the migration is a small one. It won't take very long, but it just means that if you do the migration and then want to go back, it requires a little more effort.
B: Actually, to be completely fair, first of all, that's a reason why we're doing it on a test set of machines before actually deploying it everywhere. And the quote-unquote nice thing about doing it on these hosts, and not the Nitro clusters, is that if necessary you can just wipe everything, because there is no real production data anywhere there; I mean, the gateways are basically just cache, right?
B: So the only thing that will happen is that TTFB will suffer for a time, because we wipe the whole thing; we have had to wipe the whole thing at some point before, and it shouldn't be that much of a problem. Preloads might be a little bit iffier, we might have some complaints from some users, but in the end, for this part of the infrastructure, the nice thing is that it's all deletable without losing data.
A: We'd love to see that work getting put to use, Andrew; I thought it was awesome.
B: Yeah, I was looking at the previous bullet, which says something about the Hydras having been turned off. As far as I know, part of the Hydra functionality was turned off, but part of the Hydra functionality is still there. So the Hydras are not there anymore to support the DHT, to, you know, make the DHT better, but the functionality to serve as a bridge between Reframe, or content routing, well, basically between the indexers and the DHT, I think is still there. Something like that.
D: I believe that's correct; I believe they are still partially there. But if they aren't acting as real nodes, where they sort of silently drop provides that get made to them, it would be great to transition to not needing those.
A: Did we go beyond simply severing the reporting relationship from the Hydras to the DynamoDB since the break, or is that the current state of the Hydras, so that the bridging functionality is retained? Can anyone from the IPFS side answer that?
A: Yeah, no problem. So my understanding before the break was that the current state of the Hydra functionality was that we severed the reporting relationship between the DynamoDB and the Hydras, but that the bridging functionality was retained.
A: Okay, cool, so Mario, does that clarify the current operating state for you at all? Perfect, that's awesome. Okay, does anybody else have any clarifications that they need on this topic, regarding the Bifrost, we'll call it, remediation plan that we're undertaking, for Mario or his team at all?
D: So, just to be clear, the unification on 0.17 has been a unification without Reframe. I thought that we did get Reframe onto at least some hosts before this unification. Are we going to add Reframe back in on the 0.17 hosts? What's the plan for talking to indexers?
B: Yeah, so we did have Reframe on some test nodes on 0.17, but since we were also going to be adding the resource manager to the whole thing, we thought that adding the resource manager plus Reframe was too many changes in one go.
B: So we decided to do the resource manager in 0.17, and then with 0.18 we will be using the HTTP delegated routing; we will be going to the HTTP version of, well, what was previously known as Reframe, I guess.
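[Note: for readers following along, a minimal sketch of what an HTTP delegated routing lookup could look like from Go, assuming the /routing/v1/providers/{cid} endpoint shape from the proposed delegated routing HTTP API; the endpoint URL and CID below are placeholders, not a confirmed production deployment.]

```go
// Sketch only: fetch provider records for one CID from a delegated
// routing HTTP endpoint. URL and CID are illustrative placeholders.
package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	cid := "bafybeigdyrzt5sfp7udm7hu76uh7y26nf3efuylqabf3oclgtqy55fbzdi" // example CID
	url := "https://example-router.example.net/routing/v1/providers/" + cid

	resp, err := http.Get(url)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	fmt.Printf("status: %s\n%s\n", resp.Status, body)
}
```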
A: And just to clarify for everyone on the call, is there a timeline that we can refer to, Mario, for that upgrade from 0.17 to 0.18?
B: Basically, when the PR is merged. Jeff will make the PR; again, it will be either today or tomorrow.
B: That's just for these test nodes; of course, we don't like to publish non-released versions to the whole farm, right? So we will do it for those test nodes, and it should be done either today or tomorrow.
B: I am assuming that Jeff bumped up against some problem last night, because he said that he was going to have the PR ready by this morning for me to review, and he did not, and I haven't synced with him today. So I'm assuming he bumped up against some other problem, and I really don't want to do it myself, since he's already working on it.
B: We don't want to step on each other's toes, so I hope that he will have the PR ready today, and it will be done either later today, well, this evening European time, during the day Pacific time, or tomorrow morning.
A: That's good to know. As far as that next step of officially deploying to the swarm, do we actually need to know now when we're going to be doing that, Will, or is it sufficient...?
D: I mean, so here's the thing, right: we started asking for this in October, to test it, and then a month ago you said all three banks had been deployed. What fraction was that? Because that's how we've been trying to understand how much traffic we're getting from gateways, and it sounds like we only got a fraction, not all of it, but we don't have a sense of how much we got, what's gone now, and how much we're actually going to get when this eventually does happen.
B: Okay, so one thing we can do, I would have to check exactly, because that was the other thing: during November and December we were running multiple experiments at the same time, so I'm not sure which one is which. In one of them we were running it on four banks, so I'm not completely sure which one it was right now; I don't know it off the top of my head.
D: Like, are we gonna get another 3x? Are we gonna get a 10x when we go to all of this eventually?
B: I can tell you in a moment; feel free to talk about something else while I dig that up, sure.
A: So, another topic, I don't know that everybody's here to talk about this, but, is it Guillaume Michel? Am I pronouncing your name correctly? Please, please help me.
A: I can't hear you, friend.
C: I have one question which I think sort of bridges these two topics: the delay thing, but also the stuff about how much traffic the indexers are getting. The indexers are going to double up on traffic from gateways, right? Because they're going to get it twice, once from the Hydras and then once from the gateways directly. Given that the Hydras are effectively an attack on the network, that is anti-helpful.
A: I think that's an important topic for us to keep in mind, thanks for bringing that up. And I think, or do you pronounce it Aiden?
A: Nice to meet you. Mario, please jump back in.
B: Yeah, so banks 11, 12 and 13 have six hosts each, six Kubo nodes each, so 18 Kubo nodes, and in total we have 120. So it's going to be between six and seven times.
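[Note: the "between six and seven times" estimate follows directly from the counts just given: 120 production Kubo nodes ÷ 18 test-bank nodes ≈ 6.7×.]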
B: Well, actually, one thing is that it will probably not be exactly six or seven times, because on the load balancers we are using consistent hashing, which means that which Kubo node a request ends up at depends on the hash that has been requested at that point. So it might be that those banks ended up with hot CIDs, or with not-as-hot CIDs.
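[Note: a minimal sketch of the consistent-hashing behaviour Mario describes, with made-up node names and no virtual nodes: requests for the same CID always land on the same backend, so each node's load depends on how hot the CIDs that hash to it happen to be.]

```go
package main

import (
	"crypto/sha256"
	"encoding/binary"
	"fmt"
	"sort"
)

func hashPoint(s string) uint64 {
	h := sha256.Sum256([]byte(s))
	return binary.BigEndian.Uint64(h[:8])
}

// ring is a toy consistent-hash ring mapping hash points to backend nodes.
type ring struct {
	points []uint64
	nodes  map[uint64]string
}

func newRing(nodes []string) *ring {
	r := &ring{nodes: map[uint64]string{}}
	for _, n := range nodes {
		p := hashPoint(n)
		r.points = append(r.points, p)
		r.nodes[p] = n
	}
	sort.Slice(r.points, func(i, j int) bool { return r.points[i] < r.points[j] })
	return r
}

// nodeFor picks the first ring point clockwise from the CID's hash,
// so the same CID is always routed to the same Kubo node.
func (r *ring) nodeFor(cid string) string {
	h := hashPoint(cid)
	i := sort.Search(len(r.points), func(i int) bool { return r.points[i] >= h })
	if i == len(r.points) {
		i = 0
	}
	return r.nodes[r.points[i]]
}

func main() {
	// Hypothetical backend names; real banks have many more nodes.
	r := newRing([]string{"kubo-bank11-0", "kubo-bank11-1", "kubo-bank12-0"})
	for _, cid := range []string{"bafy-example-a", "bafy-example-b", "bafy-example-c"} {
		fmt.Println(cid, "->", r.nodeFor(cid))
	}
}
```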
A: That makes perfect sense, thanks for looking into that, Mario. That's helpful for our team's planning.
A: All right, so sorry, we cut you off there, but we wanted to discuss a topic with ProbeLab.
A: We kind of recognized, I think we're hinging on the same topic, but there was some discussion of potentially 20 times the reads, issue 8807 in Kubo. Is this sufficiently covered, Will, through looking at this problem with Mario's feedback, or is this a separate issue?
D: It's a planning thing, right, it's not like a problem, but it is noting that if we get rid of the Bitswap delay to DHTs or indexers, we're going to have a lot more reads. We'd like to understand that as it gets rolled out on the gateways, and understand what that read load ends up looking like, before we, you know, release a Kubo 0.18 with the delay turned off, right.
C: There's a bunch of other things they might run into. Like, Mario, how does the consistent hashing work for the nodes? Is it based on the root CID, or is it based on the full path?
C: If you're navigating through a directory, oh, that's going to hit different nodes in the cluster, where Bitswap would generally fill in the blanks, but each of those navigation nodes will now result in a new DHT and indexer request. So the number here could be really large if we're not careful, in terms of amplification, because Bitswap is papering over a lot of stuff.
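[Note: a rough illustration of the amplification concern raised here, under the simplifying assumption that every path segment whose block is not already local triggers its own routing lookup; the path and the per-miss lookup count are made up.]

```go
package main

import (
	"fmt"
	"strings"
)

func main() {
	// Resolving /ipfs/<root>/a/b/c/file means fetching the root plus one
	// intermediate block per path segment. If Bitswap no longer papers
	// over the gaps, each miss can become a DHT and/or indexer lookup.
	path := "a/b/c/file"
	segments := strings.Split(path, "/")

	lookupsPerMiss := 2         // assumed: one DHT query plus one indexer query
	misses := 1 + len(segments) // root block plus each traversed link

	fmt.Printf("path segments: %d, routing lookups (worst case): %d\n",
		len(segments), misses*lookupsPerMiss)
}
```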
C: I remember there was a Thunderdome test with this turned down, with the delay turned off, that ended up resulting in, like, higher TTFB, I think, probably because of CPU overload. Has this been rerun in the meanwhile?
F: As far as I know, it hasn't been rerun, but the setting here will change with Kubo 0.18, because the number of connected peers will be different. So it would be interesting to run this experiment again internally, with something like 100 connected peers instead of 800, and compare different provider search delays, say 1 second and 500 milliseconds, for instance, to compare the performance. Also, we couldn't really explain the result we had last time, why it was worse with a zero delay, and yeah, maybe we'll try.
C: I don't know if that's true, both because you noted in that thing that there were a few top peers that had almost all the content, yeah, and the gateways are peered to all of those, so they're going to hit those. Those are never getting pruned, and the gateway limits are still enormous.
C: I mean, I guess there are also normal clients; I don't remember how they get pruned. I think if you are continually downloading data, they won't get pruned.
A: So, for anybody that wants to follow along with the state of progress on this analysis that's happening: is there anywhere other than GitHub right now where these tests are being planned or communicated, or should we just follow along in GitHub for that, and we can use this discussion forum to keep people updated on it?
D: I think it's more that, from the indexing side, we've got an ask: we'd like to have things rolled out on gateways, or somewhere where we can see the impact on the indexers, and track that as the load goes up, to make sure we're going to be able to handle it before things get rolled out to released clients that we can't walk back easily.
A: Yeah, it would be good if we could incorporate, I wouldn't say a gating function, but some kind of testing or awareness, and maybe this meeting, the Content Routing Workgroup, is a good way to surface when those milestones are happening, so that we can prepare and plan around them. I think that works for now.
A
As
long
as
y'all
I
intend
to
continue
joining
this
group,
we
can
kind
of
take
that
as
as
feedback,
but
so
long
as
there's
awareness
that
we
kind
of
need
to
know.
So
we
can
plan
as
well.
It
would
be
very
helpful.
E: I missed some of the conversation, because my internet's pretty crappy, but are we, so, we're blocking the rollout on the gateways pending some of this analysis? Is that the plan, or no?
D: Rolling out the querying of indexers is fine, I think; it's that, okay, we don't want to release 0.18 without the delay, so this is about that in particular.
A: No problem, thanks for clarifying that; it always helps, Gus, especially for people that are trying to come back afterwards and see what we talked about.
D: If it gives better user performance, that seems like a win, with the only caveat being that if it generates 100x more indexer traffic and falls over the indexers, that's probably bad. But that's something we can capacity plan for; we just need to know what's going to happen.
F: So I would say that we can reduce it to 500 milliseconds, or 400 milliseconds, very safely, and it will not add a lot of traffic, but the benefit isn't that huge either, so it doesn't change much. But it's still progress, I would say.
E: You know, we just did a lot of work to reduce the costs of the Hydras. Is there a cost component to this, in terms of the infrastructure that needs to be run to handle these requests?
D: Probably. Right now CloudFront is the caching layer, and that costs us money. I mean, again, this is also sort of a question of what's the current state of the world versus where we could get to in three to six months. If we manage to get caches onto, for instance, the Saturn L1s, or things like that, we could manage to offload a lot of this caching at lower cost.
D: I think, you know, we need to get the double hashing type things rolled out before we're ready for that and for having untrusted caches.
F: Alternatively, maybe having a different delay for the indexers and for the DHT could do the trick, so that the indexers don't get extra load: if the request didn't resolve within one second with either Bitswap or the DHT, then the indexer takes the load, so the indexer load doesn't change, and the DHT doesn't need to wait a full second. And this doesn't incur any cost, as the Hydras are now down, I guess.
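[Note: a minimal sketch of the staggered-delay idea described here, with stubbed lookup functions and made-up delay values; it only illustrates giving each routing source its own delay instead of one shared provider-search delay, and is not how Kubo actually wires this up.]

```go
package main

import (
	"context"
	"fmt"
	"time"
)

// Stub lookups standing in for Bitswap, the DHT, and the indexers.
func bitswapFind(ctx context.Context, cid string) (string, bool) { return "", false }
func dhtFind(ctx context.Context, cid string) (string, bool)     { return "peer-from-dht", true }
func indexerFind(ctx context.Context, cid string) (string, bool) { return "peer-from-indexer", true }

// findProvider asks Bitswap immediately, the DHT after dhtDelay, and the
// indexers only after indexerDelay, returning the first result. Cancelling
// the context stops the slower lookups once an answer is found.
func findProvider(parent context.Context, cid string, dhtDelay, indexerDelay time.Duration) (string, bool) {
	ctx, cancel := context.WithCancel(parent)
	defer cancel()

	results := make(chan string, 3)
	query := func(delay time.Duration, find func(context.Context, string) (string, bool)) {
		select {
		case <-time.After(delay):
		case <-ctx.Done():
			return
		}
		if p, ok := find(ctx, cid); ok {
			results <- p
		}
	}
	go query(0, bitswapFind)            // connected peers right away
	go query(dhtDelay, dhtFind)         // DHT after a short delay
	go query(indexerDelay, indexerFind) // indexers only if nothing came back sooner

	select {
	case p := <-results:
		return p, true
	case <-ctx.Done():
		return "", false
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()
	// Hypothetical values: 400 ms before the DHT, a full second before the indexers.
	p, ok := findProvider(ctx, "bafy-example", 400*time.Millisecond, time.Second)
	fmt.Println(p, ok)
}
```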
C: I'd say it's certainly a bug, an interface bug, that all the routing systems are grouped together and have the same delay penalty, because each system has a different cost. What the delay should be for each system, I don't know. I think, if that's the thing we wanted, if we were like, yep, it's important to us that we change that because we want a shorter delay for the indexers, or a longer delay, or whatever, we could do that.
C
Probably
we
want.
Probably
it's
like.
If
the
shorter
you
want
to
make
it
or
the
more
you
want
to
eliminate
the
bits
while
papering
over
things,
the
more
you
have
to
start
fixing
other
stuff,
whether
that
means
Plumbing
sessions
properly
or
thinking
about
how
your
info
does
consistent,
hashing
or
whatever.
C: That will emerge as a result of this, and hopefully we have enough metrics, because people have done a lot of metrics work in the last six months or so, but we might need more, just because, I mean, it's good, but it would also be sad if the way we were collecting metrics on way-too-many lookups was Will calling us up and being like, there's a lot of money being spent, can you figure out where all the money went, and we're like...?
B: Speaking of money spent, since we have so many smart people in this room, perhaps one of you can help me out, and we can go offline. It turns out, I mean, we can go offline and figure this out, but it turns out that our biggest spend in Equinix, our biggest line item, is outgoing bandwidth from Kubo, not our HTTP.
B: To see if it's possible to get it down.
C: I expect, I think, some of this, I don't know how much of this is going to be easy to resolve within Kubo with, like, more magic flags, but it certainly should be fixable, because there's not really much utility you're getting out of egress bandwidth to everybody, unless it's all coming from legitimate Bitswap wants, but that doesn't seem reasonable either.
E: But we'll see; so, Dennis is looking at this problem, and he's going to talk about it this week at the on-site, because he found some surprising numbers when looking at the Hydra data. According to that data, a huge proportion of this was just P2P overhead, like connection handshakes and stuff like that, so he might have some interesting results on this soon.
A: Awesome. I don't think we can take that topic much further, but I do have a note here that I'll add to the action items for a follow-up on this; I think that's a pretty interesting topic a lot of folks would like to see more about. All right.
A
So
these
are
all
the
topics
that
we
had
planned
to
cover
for
this
work
group
meeting
number
three,
usually
at
the
end
of
the
meeting
for
folks
that
haven't
been
here
before
we
do
a
round
table
kind
of
top
of
Mind
from
each
of
the
contributing
teams.
So
it's
kind
of
a
free-for-all
opportunity
to
bring
up
topics
that
you'd
like
discussed.
While
you've
got
this
audience.
C: All right, I have one, but that's mostly just because I've been out for a long time, which is, I guess, I see there's a PR to add asynchronous responses with ND-JSON to the delegated routing API, yep.
E: I can't remember exactly how much of this was in the spec or not. This was basically, I think, something I did over the break, just trying to hash it out. I could certainly write a spec component for it, if we want to do that.
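[Note: a minimal sketch of consuming an asynchronous ND-JSON response in Go; each line of the body is one JSON record, so results can be handled as they stream in. The URL, Accept header value, and field name are placeholders rather than the actual schema in the PR.]

```go
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"net/http"
)

// record is a placeholder shape; the real PR defines its own schema.
type record struct {
	Provider string `json:"provider"`
}

func main() {
	req, err := http.NewRequest("GET", "https://example.com/routing/v1/providers/bafy-example", nil)
	if err != nil {
		panic(err)
	}
	req.Header.Set("Accept", "application/x-ndjson") // ask for a streaming ND-JSON body

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	// Each line is one JSON object; handle results as they arrive instead
	// of waiting for the whole response to finish.
	sc := bufio.NewScanner(resp.Body)
	for sc.Scan() {
		var r record
		if err := json.Unmarshal(sc.Bytes(), &r); err != nil {
			continue // skip malformed lines in this sketch
		}
		fmt.Println("provider:", r.Provider)
	}
}
```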
A: Well, I think we've covered all of our topics here, so we can have a little bit of time back. I will do the needful thing and go ahead and aggregate the outcomes from this discussion toward the bottom of the notes here, and I'll be posting those in the Content Routing Workgroup Slack afterwards, so no need to follow up on this; you'll see that update there as well.
A
Does
anybody
have
any
final
questions
before
we
drop
off
my
favorite
jobs,
40
20.,
all
right,
we
got
a
little
a
little
morning.
Breakfast
action
in
there
good
deal
well.
A
Really
excited
about
the
direction
of
this
workshop
and
thanks
for
everyone's
participation,
it's
super
helpful.
We'll
talk
to
you
all
soon
see
ya.