From YouTube: Refactors, principles, localhost, topology visualizations - IPFS GUI and Browsers Weekly 2020-01-29
Description
About IPFS GUI and Browsers Weekly: https://github.com/ipfs/team-mgmt/issues/790
IPFS Mirror: https://ipfs.io/ipfs/bafybeiau4f6tt5kplmh6r4cy5loesf5hvo2d6mfksff2gd7ngowkgezfuq
For more information on IPFS
- visit the project website: https://ipfs.io
- or follow IPFS on Twitter: https://twitter.com/IPFS
Sign up to get IPFS news, including releases, ecosystem updates, and community announcements in your inbox, each Tuesday: http://eepurl.com/gL2Pi5
A: Welcome to IPFS GUI and in-web-browsers weekly call for the 29th of January 2020. We've got an action-packed agenda this week and people packed the gallery view on our call. So if anyone has any issues or topics you want to discuss this week, add them to the agenda. To make it faster, I'll just start with the first one, which was added by me. It's more of a PSA for the wider community that a lot of things related to js-ipfs changed recently.
A: Everyone spent time to document this, so I won't make it longer than I need to; it's pretty well written. Alan also made a guide for js-ipfs users and developers who would like to migrate to this new API. There are some best practices and some command-specific tips on how to do that. The final thought I want to end this section with is that our libraries got really small after this refactor; for example, the js-ipfs HTTP client pre-release version after the refactor got pretty small.
A: To be very specific about what is breaking: the breaking change is the programmatic interface in JavaScript. We don't change the HTTP API, we don't change the core APIs at the high abstraction level; we just changed the way those core APIs are represented in JS. So instead of using old promises and callbacks, we are using modern async/await constructs, and that was also an opportunity to do some cleanup. Some stuff got simplified. Hopefully it will be better and easier for people.
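The style change described above can be sketched with a minimal example. This is illustrative only, not the real js-ipfs API surface: a mock node stands in for js-ipfs so the snippet is self-contained, and the method names (`add`, `addAll`) are assumptions for the sketch.

```javascript
// Mock node standing in for js-ipfs so the sketch runs without a real node.
const mockIpfs = {
  // Old style: error-first callback
  add(content, cb) {
    setImmediate(() => cb(null, { path: 'QmMockCid', size: content.length }));
  },
  // New style: async iterable of results
  async *addAll(contents) {
    for (const content of contents) {
      yield { path: 'QmMockCid', size: content.length };
    }
  }
};

// Before the refactor: nested error-first callbacks.
function addOld(node, content, cb) {
  node.add(content, (err, result) => {
    if (err) return cb(err);
    cb(null, result.path);
  });
}

// After the refactor: async/await over an async iterable.
async function addNew(node, content) {
  const paths = [];
  for await (const entry of node.addAll([content])) {
    paths.push(entry.path);
  }
  return paths;
}
```

The async-iterable form is what lets a lot of intermediate plumbing (callback wrappers, stream adapters) disappear, which is one reason the libraries shrank after the refactor.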
A: Where do I stop? I think there's a PR for either Web UI or Desktop. The thing is, IPFS Web UI and Desktop will only need to switch to the new HTTP client. IPFS Companion needs to switch to both: the HTTP client, but also the embedded js-ipfs it uses in Brave and in Chromium, so that will take a bit more effort. Hopefully those are the same changes; the way Companion is implemented, it does not really care whether it's using a client or a full node, it's the same programmatic interface.
B: This is an odd quarter; we're having a meeting soon with team people in one place, but until then, knowing what the goals are for these specific areas for Q1 is going to be really important, especially given that there's some moving around of teams and priorities and such. But specifically around Desktop and Web UI, making sure that we have an OKR for Jared.
B: I guess making sure that we have the same approach that we've had for the last two quarters for these specific areas, which is ensuring that there's no breakage as new versions of go-ipfs are released. There's a very important version of go-ipfs, the new dot release, coming this week, but then preparing for 0.5 and making sure that Web UI and Desktop function correctly with 0.5 is going to be a priority. So we should probably have that.
A: For sure, part of switching to async and those newer APIs will be part of the maintenance job, and also tests. I think Hugo linked another project which could enable us to test on both Firefox and Safari, so we could bundle all this work into some maintenance OKR.
B: Sure, so I'll give quick context and then you can add some more detail. We had some changes in the November timeframe, I think, around how we handled redirects in Companion when we encountered a DNSLink site. This brought up some questions around how we do URLs generally: ongoing security model questions we've had around the path-based gateway, and the migration to CIDs in subdomains to be able to ensure origin isolation.
B: So, for example, if you have two websites that you both load through the gateway with CIDs in paths, they share the same origin, which means they can read each other's local storage; they share cookie sessions. In the web browser world, that's a very significant problem. So the short-term solution here was to move to CIDs in subdomains, which means that every CID has its own origin. This guarantees the isolation and safety of applications that are actually running, not just static content, at these URLs.
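The origin-sharing problem described here can be demonstrated with the standard URL API. The gateway host, port, and CIDs below are illustrative placeholders, shortened for readability:

```javascript
// Path-based gateway: the origin is the gateway host itself,
// so every site loaded through it shares one origin.
const pathSiteA = new URL('http://localhost:8080/ipfs/bafy-site-a/index.html');
const pathSiteB = new URL('http://localhost:8080/ipfs/bafy-site-b/index.html');
console.log(pathSiteA.origin === pathSiteB.origin); // true — shared storage, cookies, etc.

// Subdomain gateway: the CID is part of the hostname,
// so each CID gets its own unique origin.
const subSiteA = new URL('http://bafy-site-a.ipfs.localhost:8080/index.html');
const subSiteB = new URL('http://bafy-site-b.ipfs.localhost:8080/index.html');
console.log(subSiteA.origin === subSiteB.origin); // false — the browser isolates them
```

Browsers key all of their storage and same-origin checks off the (scheme, host, port) triple, which is why moving the CID into the hostname is enough to get isolation for free.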
B: But with that came a conversation around what that means for architectural lock-in: what kind of architectural decisions have been made as a side effect of moving to that kind of model? How does it affect our long-term ability towards better integration with the web? What does it mean for an application model for IPFS, from a threat standpoint, if we had IPFS as a native protocol? Does it have things like cookies and local storage?
B: Do these concepts even translate to a world where we're thinking about what native IPFS means, or is it just a static content delivery protocol? We don't really have a lot of answers for where we want to be yet. I think a lot of us are really interested in what an application model for IPFS could be, but until then we have this existing web application model and all of the world it pulls in with it, including this origin-based security model.
B: So what this prompted is a conversation around how we make changes to how Companion works, and how we make decisions around how browser integration works generally in the HTTP web when loading IPFS content. Maybe you have some thoughts around this; we just really started talking about it, but how is that actually articulated in our work here when we're making decisions about how things like Companion should work?
A: I think for a long time we've been following something a lot of people call the upgrade path, which may not be the best wording; I mean, it's technically correct. We want to upgrade the web and move it towards content addressing and other things, but that's not realistically how things tend to work. If you look back at how new technologies got adopted, the old technologies are still around, along with everything on the upgrade path.
A: All the intermediate steps are either still around or have been around for a long, long time. The thing is, now, when we are deciding how to tackle or solve problems along the upgrade path, we need to do that with awareness that those decisions will stick around. We could deprecate them, but people will continue using them, and those upgrade stages will coexist.
A: Does that pose a threat to the project's adoption? For example, in this subdomain discussion: historically, we had a very clear situation where IPFS addressing followed Unix-like conventions, so we basically had a root and all addresses were mounted on the same root, be it /ipfs or /ipns; there are some discussions about /http and addressing that way. The thing was, then we introduced HTTP gateways.
A: Now that we took the CID and stuck it on the left side of the domain, there are two things to copy: if you copy the path, it's not the full address, because the root is on the domain. That's maybe a long example, but I think it's a good example of the kind of case study which could help us define those questions. What questions do we ask ourselves when we make those decisions? Was it worth the security gain?
A: There are solutions, but the first step is to think about the actual architectural lock-ins that we introduced by making those changes. Not sure if I helped at all, but I feel like illustrating why this subdomain gateway situation was problematic raises those questions. I feel we should define some guiding principles, probably publish them in the ipfs/in-web-browsers repo README or someplace like that, just to have them at hand when we have to make a similar decision at some point in the future.
B: So, as you know, the topology that we have for IPFS in web browsers right now is really this combination of IPFS Companion and IPFS Desktop. You run IPFS Companion, which has ways to detect an IPFS URL, intercept that request, route it over to IPFS Desktop or whatever local daemon you have running, get that response, and then just play it back to the user in the browser. This topology results in a couple of things.
B: First and foremost, the URL that the person typed in, whether that's docs.ipfs.io or an IPFS gateway URL, gets redirected to a localhost URL. So from a usability and URL standpoint it's already a little funky; normal people seeing localhost is not really the state we want to be in for the long term.
B: There are all kinds of capability-detection behaviors that are both novel and not standard across browsers; there are some differences there, but browsers also don't even implement existing standards to spec. So you have both interoperability and spec-compliance issues that start to crop up. We've been trying to be aware of what those are, where they're blocking us, and where we need to be able to poke at browser vendors to get some of this stuff fixed, there and on the Gecko side.
B: That would also be: what does that upgrade path look like, and where do we actually want to be? It seems like, in the world of browsers, we want to end up in a place where it's ipfs://CID/path. What does that look like? Communicating that, again, we get back to this idea of the lack of a security model or application model, really a threat model, for what that native protocol handling looks like. But right now we can kind of use localhost as a proxy for understanding what that's going to be, as at least a midway integration point. I think where I'd probably like to get us to, at the very least, is documentation for what our expectations are right now and what the limitations of browsers are.
A: Thank you, that's a super useful introduction to the problem space. The only thing I want to add is those nuances; I linked an issue specific to Firefox, mainly because Firefox is the remaining piece of the puzzle. So, the problem of addressing the local node and local gateway: right now we are basically exposing the HTTP gateway to IPFS on localhost, basically the localhost IP and port 8080. The problem with that is it's a single origin.
A: So when subdomains land, we hope to have subdomains on localhost, so CID.ipfs.localhost; that solves the origin problem. However, there's a separate problem space related to secure contexts. A secure context is another security abstraction in web browsers that acts as a gatekeeper to some more advanced APIs and operations: either from the page itself, the things you can do from JavaScript, the types of requests you can make, or access to cookies.
A: And the local storage that you have. Secure context in general has a very formal definition, but the short, pragmatic version is that it's either a page loaded over secure transport, so HTTPS with a valid certificate, or localhost; and there's a caveat, a small asterisk, around what localhost means. In the initial HTML5 spec...
A: ...it was just stated that a secure context is a page loaded from HTTPS or a page loaded from localhost. But then people realized that localhost itself is just a hostname which gets passed to the operating system's resolver, which returns the localhost IP. In theory, some malicious software could reconfigure the operating system or provide a malicious DNS server which returns a different IP, and then you no longer know: you think you are talking to your local machine over the loopback address, but you are talking to an arbitrary IP.
A: So then, to plug that security hole, the spec was changed to explicitly state that only 127.0.0.1 and ::1, the IPv4 and IPv6 loopback IPs, count. Those are hard-coded in the spec and hard-coded in browser engines, so only those IPs are secure contexts, and the localhost hostname for a long time was not. That's why we are revisiting subdomain gateways this year: it was quite recently that Chrome and Firefox switched to this idea...
A: ...that localhost should not be passed to the local operating system resolver. We already know it should be the local loopback, so the browser itself should seamlessly translate it to the localhost IP. When that change got implemented, browsers could be sure this is the real loopback device, and then they could flip the switch and make localhost a secure context again, even when the hostname is used.
A: That's a long way of saying that the hostname and the IP are not interpreted the same by the web browser, but I feel it's useful to know that all this stuff happened in the past, and why it's now that we are making this change and not years back, when localhost was not a secure context. So the last piece of the puzzle is that localhost, the name, is now a secure context in both Firefox and Chromium.
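The decision logic described above can be sketched as a tiny classifier, loosely following the W3C Secure Contexts notion of "potentially trustworthy" hosts. This is a simplification: real browsers also handle `*.localhost` subdomains, `file://`, and HTTPS origins, which are omitted here.

```javascript
// Rough sketch of the "is this host trustworthy without TLS?" decision.
function isPotentiallyTrustworthyHost(host) {
  // Loopback IPs are hard-coded in the spec and in browser engines.
  if (host === '127.0.0.1' || host === '[::1]') return true;
  // The name "localhost" only became trustworthy once browsers stopped
  // handing it to the OS resolver and guaranteed loopback resolution themselves.
  if (host === 'localhost') return true;
  // Anything else needs HTTPS with a valid certificate to be a secure context.
  return false;
}

console.log(isPotentiallyTrustworthyHost('127.0.0.1'));   // true
console.log(isPotentiallyTrustworthyHost('localhost'));   // true
console.log(isPotentiallyTrustworthyHost('example.com')); // false
```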
A: More advanced web apps, not static pages, but web apps that use the Web Crypto APIs or access more advanced APIs like the camera, or that want to connect to secure WebSockets: you need to be in a secure context.
A: Only JavaScript running in a secure context can connect to secure WebSockets, which means you are not able to connect to libp2p bootstrap nodes if you are not running js-ipfs and js-libp2p in a secure context. So that's the context in which this is important, and how many moving pieces there are.
A: Even if we stop discussing the subdomains, there is this topic of secure contexts. We want people to be able to redirect requests for content-addressed stuff on the web to the local gateway, and we don't want people to see that a website broke because access to some API is blocked. People won't care about the fact that it's no longer a secure context.
A: We should not break stuff that people put on IPFS. We should do everything in our power to make sure that, if we redirect to the local gateway, the pages and the user experience are not broken, and we don't decrease the security isolation that people had on the regular web. I think that's it; I hope it was useful.
B: Thanks for the deep dive. For me, this really begs the question that we haven't fully answered yet, which is: what are the expectations around an application model, the capabilities of an application model, and the compatibility of the web application model for an IPFS resource? So much of IPFS is used outside of the web context, where there's no expectation of any of this actually working, because it's not being rendered in the context of a user-visible web page.
B: So you pull in all these assumptions about world A when really you're just trying to load a resource from world B, which is IPFS, where we actually have no defined requirements around any type of handling: what the expectations are, what the content is from a MIME type perspective, let alone how it will be treated in a specific rendering context. We dictate none of that; we're just a pipe at that point.
C: Looking at integration choices: obviously the upgrade path eventually gets to where you want to be, but where would you ideally want to be, versus where can you actually get? Obviously, some doors can just be shut, and then they're incredibly hard to open again. Maybe a useful brainstorming tip is to ask: in an ideal world, where would we be?
C: If you try to load a web page through the gateway that references other resources, like the Google CDN for jQuery, do we even want that? Should you need to opt in to load that stuff, rather than it being the default? If you're questioning the way the web is done, then maybe that's an interesting kind of exercise.
B: Yeah, that's a really good point. That's why a lot of times I try to couch this conversation in the context of what ipfs:// should result in, because that's a point where we get to redefine what those expectations are. We don't have to accept the old model, and we really should learn from what did or did not work, and from our vision of what we think the web in 50 years should be.
B: Again, this is a long, long question; I'm not going to answer it in this meeting. But localhost, and how it's handled and rendered in different browsers, is a really good example of where we start to hit these barriers, and where the fact that we don't have a well-defined application model for IPFS starts to pinch. We start to really feel that gap in our plan.
A: Also, I find that Andrew has a very good idea with thinking about how we would create this browsing experience if we forgot about the existing legacy situation. There's a pretty nice overlap between that line of thinking and the work that you are doing with the browser design guidelines, namely the stuff that we've already identified: that on the old web...
A: ...you got those guarantees of transport not being tampered with, but you don't actually get any integrity guarantees. In IPFS you would have to come up with some ways of communicating that there are integrity guarantees and transport guarantees; however, you have different privacy guarantees than you have in the old model. That could also be a workshop topic for the week.
B: We've taken that trust model, where you decide where to put your trust, and we put it out of band; it's out of view now. The onus of that trust is put on the user: where did I get this CID from, do I trust that person? It's all up to me now, as opposed to at least having some framework, some business incentive around it.
B: So if my job is to convince a security team and a UX design team at Google to implement IPFS, I need to be able to explain the nature of the change in the user experience with regards to how users make decisions: should I trust this URL or not? Right now, even though we have a system for that trust that is fallible, that has known problems, there's at least a system. With IPFS we've said that's out of view, not our job.
A: An observation we make every time we talk about this, the presence of IPFS in the web browser and trusting someone who gave me a link with a CID: the way I explain it to less technical people is that, just like right now you are not pasting URLs with raw IPs to people, you probably won't, or only very rarely, be pasting links with raw CIDs, unless you want a specific snapshot of a website.
A: So this discussion about human-readable names is never-ending, and we probably need to figure it out. IPNS was not an answer to this problem; it was not a full answer to the human-readable names problem. It was just a way of having a static address that points at content that can change.
A: We have, of course, a blockchain solution, which is ENS, but that requires running a light client, which is not feasible; right now we are not bundling an Ethereum client with ipfs or Companion. So basically DNSLink is the only solution for human-readable names, and it's bridging that gap from the old web to the new. The question is: is that enough?
A: It feels like the weakest link in our entire stack when it comes to browser integrations, maybe because we are not adopting DNSSEC, maybe for a reason. I think it's another topic for a longer discussion; we probably should pause on this. Also, we need to finish five minutes earlier today, so I'll put a dot on that conversation here and let's move to the next one. Thanks.
B: Yeah, I'll add the URL to the notes as well. We are beta testing the IPFS dev grants program within the area of browsers, connectivity, Web UI and Desktop. We have a number of projects where we've said we wanted to be able to do X, but we don't have time and keep not getting around to doing X. If you have ideas for projects that are micro-grant size, something that can be done for under $1,000 US, or ideas for projects that are maybe a little bigger, which might be considered for a larger grant...
B: ...that would be worth investing in, or that you think somebody else who is not part of the IPFS project might invest in; or if you just have a request, where you would like people to bring proposals to the table for a given technical problem that anybody out there might see a business reason to invest real money behind to see solved: the ipfs/devgrants repo is now open for business. It's currently in beta stage, but it's being used for things like bounties for documentation.
B: It helps us move some of these things we've had on the backburner for a long time; help us move them forward, even if they're experiments. I would encourage you to think of ideas like, one of the things I'd like: someday we'll just have a bunch of browsers all running js-ipfs in web pages, where they're connected to each other over browser transports. That might be fine for a given use case.
B: It might not be fine for use case A, but it might be perfect for use case B, especially when a bunch of people load a bunch of things at the same time; the same pages might work out great. I would love to see some experimentation around that. So if you have these types of ideas, even if it's not a fully formed idea, I would like to see someone do an experiment in that area.
A: We, as the IPFS and libp2p projects, can propose chunks of work that we'd like to see, but people can also propose their own projects. What's important is that it's not only about fully fledged projects: there are also micro grants. If you have a project which could have IPFS integration added, and you want that work to be sponsored, that's a special type of grant that can also be proposed.
B: Yeah, there's an issue; Nextcloud is a great example, where a subset of Nextcloud users were asking for exactly this. I keep meaning to port that from the IPFS mini-projects repo, and we've just moved that issue over into devgrants, because I think that may be something where people in that community would be really interested in doing that work. I love that idea of integrations as dev grants.
B: I think this next agenda item is me also. This is something that is more hand-wavy. I meet with Microsoft regularly; they're rolling out an identity platform solution using a technology called Sidetree, and their implementation of it, I think, is going to be called ION. They announced it last year. Basically, identity transactions are aggregated and written to the Bitcoin blockchain, and all of the logs of those transactions are actually hosted from IPFS nodes.
B: Currently Microsoft is running js-ipfs as a daemon on the server side, which they're using to host these CIDs; the CIDs are written to the Bitcoin blockchain for now, as their initial rollout, but I guess they say you could write them anywhere. It's interesting: they're using js-ipfs because that's the toolchain, platform, and language that their whole team and the whole area works with. They're all TypeScript up and down, so they're interested in our TypeScript support as well.
B: But also, this topology is kind of strange. Not a lot of people are deploying js-ipfs on the server, running it in production, because js-ipfs doesn't have DHT support, so it can't actually connect to the DHT. If the clients of this implementation bootstrap to Microsoft's nodes, it's great: they connect to Microsoft's IPFS nodes, they request the CID, those nodes obviously have the CID, boom, problem solved.
B: However, they need to be able to scale this network further, so they're very interested in us implementing more feature parity in js-ipfs; js-libp2p is obviously a huge part from the connectivity standpoint, and some of the DHT work as well. But also, this topology ends up being confusing to communicate. In this use case, even though js-ipfs is not directly connected to the DHT, you can still make a request for the CIDs from the Bitcoin blockchain.
B: And not only Microsoft's servers are going to be there. So we have this topology where they're not actually connected to the DHT, but the system kind of works. It's not going to work great, it might not work all the time, but communicating about this, the feature-parity thing, is hard: we don't really have a good, solid understanding of where exactly these connection types break down with js-ipfs.
B: But if they're not connected to the DHT, and someone requests that same CID from a gateway somewhere online, that gateway scenario might not work. These types of nuances in implementation and deployment, standing up IPFS solutions for whatever your needs are, in this case a global decentralized identity system, are not clear to implementers and builders. I would love it if anybody has some resources, ideas, or work you've done or know of around the visualization of these topologies...
B: ...and what these connection limitations are. I would really love to be able to point to diagrams and interactive visualizations, maybe this is even an idea for a dev grant, of how these connections actually work. So in this given scenario, where you have a js-ipfs node running on a server somewhere, connected using our default connection configuration, with a few hundred peers connected, and then you take a CID that it's hosting and ask for it through the public gateway: how does that routing actually work?
B: What are the possible ways the request can route through the system and eventually, performance concerns aside, actually connect and resolve that content? I think the default assumption for a lot of us running IPFS in high-volume, high-production scenarios is that you just have to run go-ipfs, that that's the only way it's going to work. But that's not really the case all the time.
B: There are a whole lot of reasons why people choose js-ipfs instead of go-ipfs; for example, when your entire ecosystem is built in JavaScript, all your hires and your hiring pipeline are in JavaScript, and all your tooling is oriented around JavaScript. There's a huge investment in JavaScript as an ecosystem; it and Python are the two biggest ecosystems in that regard.
B: So having really strong support for it is really important for us from an adoption standpoint. Then there are also these network visualization and deployment topology visualization things. There's a couple of things wrapped up in here, but what I primarily got out of this is: I don't have good tools to be able to communicate these deployment topologies and how that routing actually works.
D: We are working on finishing up instrumentation of libp2p, because we need this for Testground, and that is also going to be incorporated into a visualizer that we already have design comps of. But we didn't have the instrumentation to provide to the team that's going to be helping us build out that visualization, so we should have it eventually, in the near future.
D: I don't remember the timeline, but it's within the next six months: get instrumentation done so we can hand it off, so they can plug it in and actually build out a network visualizer for this. Then I think there's the second part of this, which is: hey, if you want to get content, and you end up getting content, but you're not running a DHT, how does that happen? What's going on? Maybe it's Bitswap magic, or it's delegated routing, or whatever is going on here.
A: In js-ipfs it's all the stuff we have in js-libp2p, but we also have something called preloads. You can ask some remote nodes to fetch content into their cache from you. You don't really announce anything to the DHT, but effectively, by the fact that that node fetched the content and stored it in its local repo, that node starts announcing the stuff to the DHT. That's basically how delegated content routing works, I think, in conjunction with preload nodes.
A: So those are two pieces of the puzzle. One is asking for content, and that's the delegated routing module from libp2p; and then on the js-ipfs side, when you add stuff to your local node, you can have a list of preload nodes which will automatically prefetch content from your node. So the diagram described earlier is even more complex, and we will have even more arrows and boxes there.
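The two pieces described above show up as configuration on a js-ipfs node. This is a hedged sketch: the multiaddrs below are illustrative placeholders, and the exact option names may differ between js-ipfs releases, so treat it as a shape rather than a copy-paste config.

```javascript
// Sketch of node options wiring up preload nodes and a routing delegate.
const nodeOptions = {
  preload: {
    enabled: true,
    // Remote nodes asked to fetch our freshly added content into their
    // cache, so *they* provide it to the DHT on our behalf.
    addresses: ['/dns4/node0.preload.example.org/https']
  },
  config: {
    Addresses: {
      // A delegate node that performs content/peer routing for us,
      // since a browser node cannot join the DHT itself.
      Delegates: ['/dns4/delegate.example.org/tcp/443/https']
    }
  }
};

console.log(nodeOptions.preload.enabled); // true
```

With this shape, adding content triggers a request to each preload address, and lookups that miss locally are forwarded to the delegate, which is exactly the "two stories" the diagram would need to show.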
A: If we not only talk about vanilla js-ipfs, but also add this overlay of how data flows in IPFS, it could maybe be interactive, like a toggle where you could show or hide layers. I may be working on some visuals for a Brave blog post, which has some of those things, like preload nodes and delegated routing modules for libp2p.
D: Magic. I think it's just very unclear what those stories are, right? When I add content to the network, what's the story for that content getting onto the network, and then what's the story for it coming back? The more we add to IPFS, the more convoluted it's going to get, because you have something like Bitswap, which, if it can't find the content locally or amongst its close peers, will go ask the libp2p content routing to find this thing. And libp2p content routing could be a DHT, it could be delegated routing, it could be something else in the future; it could be any series of those. So it's about plugging all of that in and saying: this is specifically what js-ipfs is going to do, and these are the two stories that get you that content.