From YouTube: Strengthening the bridge from IPFS to Filecoin
Description
Join us for Filecoin Liftoff Week, an action-packed series of talks, workshops, and panels curated by the web3 community to celebrate the Filecoin mainnet launch and chart the network’s future. https://liftoff.filecoin.io/
Events take place all week, October 19-23, 2020.
For more information on Filecoin
- visit the project website: https://filecoin.io/
- or follow Filecoin on Twitter: https://twitter.com/Filecoin
Get Filecoin community news and announcements in your inbox, monthly: http://eepurl.com/gbfn1n
A
With Filecoin's mainnet finally launched, we want to talk about this bridge between the IPFS network and Filecoin, and how it is going to be increasingly valuable as a way of fetching data from IPFS and potentially pushing data to the Filecoin network.
A
And so I want to kick us off. We've got three separate topics here, and I'll try not to spend more than 10 minutes on each. The first thing I want to talk about is storing data remotely: this idea of ingesting data from IPFS, sending data from IPFS to Filecoin, or vice versa. In the immediate term, we have our upcoming release of go-ipfs 0.8, and in this release we're adding support for remote pinning services.
B
Yeah, nice, and thanks, Juan, for the intro. Schmahmann is a rough German last name; it's just rough. So IPFS works with these IPLD graph structures: you can add them locally and people can request them. But for a long time now, people in the ecosystem have been asking third-party services to host data for them, and every one of these pinning services has sprung up with its own API and its own way of working with it. So we figured it was about time to work with all these ecosystem partners to figure out whether there was a single API we could standardize on, to make things easier for our users to work with, and then to actually include an API for working with all of these together within go-ipfs itself.

B
So this remote pinning API will work whether you want to store your data with, say, Pinata or Infura or Textile, and also if you want to set it up with your own IPFS Cluster or a Filecoin storage miner. You can do that because we're abstracting out, in a sense, how to push data around with this API, which should be powerful in allowing the community to extend upon it.
A
So right now, with this feature, we're going to have the ability to configure these services. The specification for this is HTTP-based, so we'll actually be able to make HTTP requests to the service, then coordinate connecting to it and transferring data to that pinning service to store. There's been a lot of discussion about the evolution of that API: we're using web 2 technology, which was nice and made it very easy to get this out.
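Concretely, a pin request to such an HTTP-based pinning service might look like the sketch below. The endpoint path, service URL, token, and JSON shape are illustrative of the direction described here (a service that accepts a CID over HTTP), not a definitive specification:

```python
import json
from urllib import request


def build_pin_request(service_url, token, cid, name=None):
    """Build (but do not send) an HTTP request asking a remote
    pinning service to pin the given CID on the caller's behalf."""
    body = {"cid": cid}
    if name is not None:
        body["name"] = name
    return request.Request(
        url=service_url.rstrip("/") + "/pins",
        data=json.dumps(body).encode("utf-8"),
        headers={
            "Authorization": "Bearer " + token,  # per-service access token
            "Content-Type": "application/json",
        },
        method="POST",
    )


req = build_pin_request(
    "https://pinning.example.com",  # hypothetical pinning service
    "my-secret-token",
    "QmbWqxBEKC3P8tqsKc98xmWNzrzDtRLMiMPL8wBuTGsMnR",
    name="backup-1",
)
print(req.full_url)      # https://pinning.example.com/pins
print(req.get_method())  # POST
```

After the service accepts such a request, the node and the service still have to connect to each other and transfer the actual blocks, which is the coordination step described above.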
A
But the future of this: web 2, HTTP requests, is not very decentralized. And then there's the data backing it: right now we're pinning these data objects, and early in the year there was discussion at the pinning summit about potentially looking into pin sets, or Textile threads, being able to send that kind of data. What do we think we can unlock with this pinning service for the future of sharing data, these larger data sets, and this kind of following of our data sets?
B
Right, yeah. I think there's basically a move from this model where we're instructing some peer that we're pushing to, like, "hey, please store some data for me", to instead having this more featureful, tracked, distributed object, whether it's a Textile thread or anything else of that nature, that allows me to modify this single object and allows any of my other subscribing services to watch it and then update accordingly. I think that would fit much more within the ethos of what we're working towards, which is: make it easy to swap out. It doesn't matter which pinning service you use; IPFS should work the same. It doesn't matter if you're backing up your data on Filecoin or on your own machine.

B
Everything should work just the same. So moving more towards that model, and being able to track things like which peers you care about, and peer sets, and keys, and naming these: these are all important things that make the usability of IPFS stronger and make it easier for application developers to build on top of it.
A
Yeah, I think there are a lot of conversations to be had along this line, but I think what we're really looking at is: what are ways that we can unlock moving away from this HTTP API approach, moving to libp2p and into this distributed system, where we're able to unlock those protocol-level decisions, so that people building this bridge between the IPFS network and the Filecoin network can make more intelligent decisions about how that data moves across, what data is being tracked, any contracts, how ACLs are handled, and how payment is handled across these channels? So I think there are a lot of in-depth conversations there, and I'm really interested in the future of this and what this Filecoin bridge brings with it. I want to transition now into scaling the network, scaling Filecoin: moving on to finding data in the network.
A
So: what are the challenges of scaling provider records on the network, and how might we try to solve those problems? I don't know if you want to kick that off; you've been looking at this a lot lately.
C
Yeah, so content routing is the generic term that we use to refer to the problem of how you find content on IPFS. Specifically, this means: if you have a content ID and you want to retrieve the content corresponding to it, the question is how you find the peers who can provide that content.
C
So that's the problem that's being solved now, and of course, naturally, what we're expecting is that there will be a lot more provider records in the future, both because of Filecoin and because of the natural evolution of the network. So first I want to give a high-level picture of the IPFS ecosystem.
C
There are multiple mechanisms for discovering content. The two prominent ones at the moment are: one is DHT-based, and I'm going to talk about this one in detail. Roughly speaking, people publish provider records to the DHT and look them up through this channel. But there are other, completely separate mechanisms to do this: for instance, Bitswap, which is the actual file-download protocol, also has a mechanism for discovering provider records, based on a completely different technology.
C
It's somewhat intelligent and can't be summarized in one or two words, but roughly speaking, it looks at the network of people downloading files and tries to find records through there before resorting to the DHT. And presumably there will be other mechanisms as well. So the differences between these two are roughly twofold.
C
First, from a quality-of-service standpoint, the DHT route for finding provider records is slower in terms of latency, but on the other hand it is designed to be robust, in the sense that it pretty much never fails. The Bitswap mechanism, on the other hand, works most of the time with very low latency, so it obviously has a different quality-of-service characteristic. Of course, the other thing that distinguishes the Bitswap mechanism is that it's Bitswap-specific.
C
So it's application-specific, and the right way to look at these things right now is basically that applications like Bitswap have methods that benefit from knowing what the application is, and they resort to the generic DHT mechanism when their own methods fail. What we're planning to work on in the next couple of quarters is to really get order-of-magnitude improvements in the generic, DHT-based methods for content routing.
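The layering described here, an application-specific fast path with the generic DHT as the robust fallback, can be sketched roughly as follows. The function and data shapes are illustrative only, not go-ipfs APIs:

```python
def find_providers(cid, session_peers, dht_records):
    """Illustrative content-routing fallback: first ask peers already in
    the download session (fast, Bitswap-style), then fall back to the
    slower but robust generic (DHT-style) provider-record lookup."""
    # Fast path: peers we are already exchanging blocks with.
    fast = [p for p in session_peers if cid in session_peers[p]]
    if fast:
        return fast, "session"
    # Robust fallback: generic provider-record lookup.
    return dht_records.get(cid, []), "dht"


session = {"peerA": {"cid1"}, "peerB": {"cid1", "cid2"}}
dht = {"cid3": ["peerC"]}
print(find_providers("cid1", session, dht))  # (['peerA', 'peerB'], 'session')
print(find_providers("cid3", session, dht))  # (['peerC'], 'dht')
```

The trade-off matches the quality-of-service point above: the fast path can come up empty, while the fallback is slower but nearly always answers.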
C
So now I want to briefly explain what this will entail and why the current implementation is not as optimal as it could be.
C
So, in the current implementation, somebody wants to publish data to the network. Let's go with an example: say I am Wikipedia and I want to publish the Merkle DAG of my entire website.
C
Currently, the protocol would ask the origin of the information, which is Wikipedia in this example, to proactively publish provider records for all web pages: basically for the entire tree of the Wikipedia website. And I should note that publishing always has a time-to-live, so this whole process repeats every few days; but within the time-to-live, the originator of the information is responsible for publishing everything themselves.
C
The downsides of this are that you often end up publishing information that never gets used, because in a couple-of-days window maybe not all Wikipedia pages get looked up at all, so why should we waste network bandwidth publishing them in the first place? The second problem is that it is very difficult, basically impossible, for a single node to publish a very large website, because there is a limit to how much it can publish into the network over time.
C
What we think is that the bigger problem is that we're basically publishing way too much information by being proactive about it, and most of it never gets used. So we want to kill this inefficiency, to effectively get an order-of-magnitude improvement in bandwidth and storage utilization in the network. The approach for solving this problem is a somewhat standard trick in computer science. The high-level idea is that we want to offload the responsibility of providing records for all of the web pages within the Wikipedia website, in this example, to the users who are actually reading and consuming the information, as opposed to the originator of the information, which is Wikipedia.

C
The way this would work, at a high level, is as follows. If I'm Wikipedia and I come in on day one and I want to publish my entire website, I would actually only publish a record for the root of my website, which is very easy to accomplish.
C
And then, every time a user looks up a page within the website, they would look up the root CID using the standard route, and then they would query the origin, which is Wikipedia, for the specific page that they want. Once they get the page, they also get a small responsibility: publishing a provider record for the page, or for however many pages they actually retrieved from me.
C
So in other words, if you look up something, then you do a bit of community service by providing it once, roughly speaking. As for the consequences of this, there are a few good outcomes. The first is that the network only gets provider records for web pages that have actually been looked up; and furthermore, the number of provider records is essentially proportional to how many users are looking at the page.
C
So with this, we expect to solve both of the problems that I mentioned initially: now the network is only going to carry records that are actually being used, and, secondly, everybody participating is going to chip in pretty much the same amount of provide responsibility.
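A toy comparison makes the bandwidth argument concrete. All the numbers here are invented for illustration; the point is that under the read-driven scheme the record count tracks actual use:

```python
from collections import Counter


def proactive_records(total_pages):
    # Current scheme: the origin publishes a record for every page,
    # whether or not anyone ever looks it up.
    return total_pages


def demand_driven(lookups):
    # Proposed scheme: the origin publishes only the root; every reader
    # republishes a record for each page it actually fetched.
    per_page = Counter(page for _user, page in lookups)
    return 1 + sum(per_page.values()), per_page


lookups = [("u1", "Main_Page"), ("u2", "Main_Page"), ("u3", "IPFS")]
total_records, popularity = demand_driven(lookups)

print(proactive_records(50_000_000))  # 50000000 records, mostly unused
print(total_records)                  # 4: the root plus one per lookup
print(popularity["Main_Page"])        # 2 -- record count tracks popularity
```

The `popularity` counter also previews the later point in the discussion: because records are created by readers, provider-record counts double as a protocol-level popularity signal.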
C
So we expect huge improvements in network capacity; this should be coming probably a couple of quarters from now. And it opens up some interesting future applications, because we will end up in a world where, for instance, anybody will be able to transparently see the popularity of any given piece of content, just by looking at how many provider records are available for it in the network at any given point in time. That enables all kinds of intelligent services: it enables intelligent caching, and it enables people to monitor the popularity of content over time.
C
This can enable markets for speculating on which content should be cached or not, and it also enables applications like search engines, which can now have, just like on the web, protocol-level information about how popular different pieces of content are.
A
That's great. So, talking about that kind of on-demand providing: once I get data, I become a provider, and that's my community service. But what about data sets where we have lots of data, something like Filecoin, where we're doing ingestion into Filecoin for long-term storage?
C
When users who read the data start providing it: every time you resolve one piece of data, you only become responsible for providing it for a limited amount of time, maybe one or two provides, because one provide essentially covers something like a four-day window.
C
So naturally you might say: well, if nobody is using the data for a while, it will disappear out of the network. But for a cold start, people can always resort back to the root when provider records have disappeared because nobody has used the information for a while.
C
But the point is that the whole chain of fallback remains in the provider records, and the last resort is that people end up having to get it back from the very origin. So it works very much like a layered caching system.
C
That said, there's also a lot we can do in terms of tuning parameters, in terms of how long you should provide something. Back-of-the-envelope calculations suggest that things would only be slow, meaning completely resorting to the root, when information is very cold and no one has expressed any interest in it; but once a piece of information comes into use, it tends to stay readily available, much as it does on the web.
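That layered-fallback behaviour can be sketched like this. The four-day figure comes from the discussion above; the record and lookup shapes are purely illustrative:

```python
TTL = 4 * 24 * 3600  # one provide covers roughly a four-day window (seconds)


def resolve(cid, now, records, root_cid, origin):
    """Return who can serve `cid`: live provider records first, then the
    root's providers, then the origin itself as the last resort."""
    live = [p for p, published in records.get(cid, [])
            if now - published < TTL]
    if live:
        return live          # warm: served by recent readers
    root_live = [p for p, published in records.get(root_cid, [])
                 if now - published < TTL]
    if root_live:
        return root_live     # cold: walk down from the root again
    return [origin]          # very cold: back to the very origin


recs = {"page": [("reader1", 0)], "root": [("wikipedia", 0)]}
print(resolve("page", 1000, recs, "root", "wikipedia"))     # ['reader1']
print(resolve("page", TTL + 1, recs, "root", "wikipedia"))  # ['wikipedia']
```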
A
Great. So I kind of want to tie this back to you, Adin, and maybe expand off of that. We've talked about a few mechanisms here: Bitswap at the application level, falling back to the DHT for more generic routing, and also this cold-storage, or cold-start, mechanism of having the original host as provider.
A
We
know
that
in
in
lit
p2p
loop
pdp
is
a
library
it's
fairly
easy
to
to
extend
that,
but
ipfs
like
today,
we
we
have
plug-ins
which
support
there
is
is
rough.
So
what
does
the
future
for
kind
of
these
like
alternate,
routers
or
plugability,
of
supporting
ongoing
systems
of
either
I
want
to
plug
into
ipfs
and
use
filecoin
or
pinata
or
infura,
as
as
my
long,
my
cold
storage
or
my
cold
start?
What
are
these
kind
of
like?
What
does
the
future
of
that
look
like.
B
Yeah, that's a good question, and some of it is really going to depend on community input, because there are a number of ways to do this. libp2p gets away with this because it's a library, and I feel that IPFS can get away with this because it's a library too; there are existing applications out there that use IPFS as a library. It's not as easy as it could be, but you can actually do it; it does work.
B
So
there's
there's,
there's
ways
of
making
that
easier
for,
for
other
other
applications
to
build
on,
but
then
there's
ways
of
dealing
with
we'll
call
it
plug
ability
into
the
ipvs
like
binary
itself
right
and
that
could
be
you
know
with
plugins.
Fortunately,
the
the
go
language
program,
the
go
link,
support
for
plugins
is
is
not
as
good
as
it
could
be,
but
there
are.
B
There
are
ways
to
work
around
that
you
know
sort
of
shared
libraries,
but
then
there's
also
the
kind
of
like
microservices
approach
right
where
you
could
have.
B
You
know
sort
of
a
daemon
that
handles
your
sort
of
the
pdp
networking
protocols
and
one
for
handling
your
data
storage
and
one
for
handling
your
various
data
transfer
protocols
and,
and
things
like
that
in
the
sense,
allow
various
peer-to-peer
applications
to
sort
of
build
on
top
of
each
other
in
a
natural
way,
so
you're
not
having
to
think
about.
Oh,
my
application
is
building
on
pfs,
so
that
means
I
need
to
include
some
ipfs
things
and
similar
p2p
things
like
finding
ways
to
make
it
easier
to
have
those
available
for
you.
B
This
is
especially
true
because
one
of
the
like
sort
of
really
interesting
properties
of
this
whole
like
bit,
swap
allowing
you
to
find
data,
and
also
some
of
the
you
know
performance.
Like
tweak.
You
know
performance
hacks,
you
can
do
around
dhts
involve
just
being
connected
to
more
people,
and
I
can't
run
20
applications
on
my
computer,
which
each
want
to
be
connected
to
100
peers
or
a
thousand
peers,
like
that's
just
unreasonable.
B
But
if
you
can
start
to
sort
of
move
towards
putting
these
at
like
a
sort
of
a
central
point
on
your
machine
right
and
everyone
is
sort
of
reusing
some
of
the
same
resources,
the
same
peer
connections
right
you
can.
You
can
really
start
to
scale
that
way
if
you've
seen
any
of
some
of
ones
like
high
level
talks.
There
there's
some
talks
around
like
you
know
how
ipfs
might
integrate
into
closer
to
like
the
operating
system.
B
If
you
have
opinions
about
this,
your
developer,
interested
in
building
on
ipfs
things,
you're
already
doing
so
now
and
you're
like
I
wish
this
was
better
or
easier.
I
highly
recommend
you
reach
out
on
github
or
discuss,
because
we
want
to
make
life
easier
for
you.
B
You
know
other
people's
pinning
setups
right
and
those
interact
with
ipfs,
and
you
can
sort
of
use
the
ipfs
way
of
looking
at
things
right.
Moving
around
moving
around
graphs
content,
routing
like
you
can
use
that
to
interface
with
the
world,
and
so
I
want
to
do
that
for
finding
content
too.
I
want
to
find
you
know,
mapping
who
has
this
cid
there's
multiple
systems
you
could
use,
it
could
be
a
dht,
it
could
just
be
asking
people
you
could
have
a
federated
system.
I
could
just
have
a
big
server
somewhere.
A
That's
great
so
moving
moving
on
to
the
next
topic,
I'll
cut
us
off
there.
We
talked
about
bit
swap
as
part
of
this,
like
the
difference
between
bitswap
and
the
dht
ipfs,
currently
relies
on
on
bitswap
protocol
for
this
retrieval
of
data
for
fetching
data
from
peers.
A
What
are
the
current
limitations
of
bit
swap
and
and
how
might
we
improve
that?
Just
data
retrieval
going
forward.
B
Yeah, sure. So the two big things with Bitswap are: number one, it's a latency-bound protocol. Basically, it's latency- and graph-dependent.
B
If the graph is wide, Bitswap is actually pretty efficient, but if I have a long chain, one parent, child, grandchild, and so on, Bitswap will ask for one block at a time as it walks this chain. That means that if you have high latency, this can be very problematic, certainly much more problematic than if the person sending you data would just start streaming blocks to you, because then you wouldn't be bound by this latency, just by the throughput of your connection.

B
So some folks have been working on GraphSync, which is an alternative data-synchronization protocol that works on the concept of graphs, as opposed to just individual blocks or nodes of data, and we're taking a look at how we can integrate that into IPFS better. There's already an experimental GraphSync responder, which allows people to pull data from you.
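The round-trip arithmetic behind the latency problem is easy to see. These numbers are purely illustrative:

```python
def chain_fetch_ms(depth, rtt_ms):
    # Bitswap-style walk of a linear chain: one request round trip per
    # block, because each block's CID is only learned from its parent.
    return depth * rtt_ms


def stream_fetch_ms(depth, rtt_ms, block_bytes, bytes_per_ms):
    # GraphSync-style request: one round trip to send the graph query,
    # then the responder streams every block; throughput-bound instead.
    return rtt_ms + depth * block_bytes // bytes_per_ms


# A 1000-block chain over a 200 ms round-trip link at ~10 MB/s:
print(chain_fetch_ms(1000, 200))                     # 200000 ms (~3.3 min)
print(stream_fetch_ms(1000, 200, 256_000, 10_000))   # 25800 ms (~26 s)
```

The deeper and narrower the DAG, the worse the one-block-at-a-time walk gets relative to streaming; for wide DAGs the gap shrinks, because many blocks can be requested in parallel per round trip.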
B
But
the
fetching
is
in
a
sense
where
all
the
complexity
lies,
because
it's
very
easy,
with
bitswap
to
figure
out
how
to
distribute
requests
across
multiple
peers.
You
can
just
break
up
the
graph
and
be
like
yeah
and-
and
I
ask
you
for
a
block
and
you
for
a
block
in
your
for
a
block,
but
if
you're
asking
for
things
on
the
operations
of
a
whole
graph
figuring
out
how
you
want
to
break
that
up
is
a
little
more
complicated
but
but
doable.
And
so,
and
so
that's
that's
one
avenue
forward.
B
For instance, we have things now that allow Bitswap to say: well, you're responding to me more quickly, so I'll ask you for more data than I ask somebody else. And we say: well, if I can't find any data, maybe I should call out to some other system, say the DHT, and ask if they know who else might have this data. But we could allow for more context that says: okay, I didn't find anybody that has this grandchild node in this DAG that I've been requesting, but maybe there are more providers of the root, and they might also happen to have the whole tree underneath it, so maybe I should go ask them. So the two avenues here: one is basically about injecting more knowledge into the data exchange.
B
And wouldn't it be kind of cool if IPFS worked exactly as it works now, but, if you wanted to, you could have some API, or some plug-in, or whatever, to allow you to say: okay, I have a wallet; let someone make a deal with a retrieval miner to fetch the data from Filecoin, or from any other protocol? Because IPFS and Filecoin are not the same, but they live in a similar ecosystem. There are going to be reasons for Filecoin people to want to use IPFS, and for IPFS people to want to make use of Filecoin. So, just like we have an API for, we'll say, pushing data, it would be nice if we had an API for pulling data as well.
A
We
there's
a
common
thing
here
of
extensibility
of
of
ipfs
and
lippy
to
be
because,
while
we're
kind
of
talking
here
about
a
bridge
between
file
coin
and
ipfs,
really
this
turns
into
a
bridge
for
people
to
be
able
to
build
on
these
existing
protocols
to
whatever
system
they
want
to
be
able
to
exchange
data
or
connect
to
peers,
so
whether
it's
filecoin
or
other
blockchains,
being
able
to
leverage
that
subsystem,
extend
it
and
build
on
top
of
it
without
having
to
go
through
and
redo
all
the
work.
A
That's
already
there.
So
I
think
there's
some
very
interesting
implications
here
for
the
future
of
of
applications
built
on
top
of
ipfs.
If
we
can
make
that
properly,
extensible.
B
Absolutely, and in a sense this goes for the whole web3, or peer-to-peer, space, because everybody in the peer-to-peer space has thought to some degree about: how do I make it so that the data central to my application is not locked up behind a particular provider, and not locked up behind a particular application? So getting the different projects within the web3 space to be able to talk to each other matters, especially because they're using similar concepts. I mean, the number of projects using hash links is insanely high; we're all working with similar concepts. So we want to be able to plug these things together, which you can do here, and which you couldn't have done in a centralized system, because I can't plug my Google Drive stuff into my Office 365 stuff that easily: that would require those two companies figuring out that they want to talk to each other. But between two peer-to-peer protocols, that's not how this works. We can just build the extensibility and then allow the users and application developers to build on top of it and figure out what is the best mixture that works for them and their applications.
C
Yeah, another thing, I think, is that integrating other methods for push and pull into the IPFS client, done properly, decouples application developers from having to know about these details, in the sense that I'd like to be able to build a web app, or a decentralized web application, which benefits from pushing and pulling files, IPFS or otherwise, without the application developers themselves having to deal with the complexities and differences in semantics across all these systems.
C
That is, be able to develop an application and then leave the freedom to the user to hook up their specific sources, whether that's just the IPFS network or a connection to Filecoin with a specific set of trading preferences. Because with custom sources like Filecoin, if you want to get something out of them, it's not just a matter of saying "I want to get this data": you also have to give a time frame, as well as cost constraints and so forth. But the application developer shouldn't have to know about the user's specific storage arrangements.

C
And this is just more in the direction of basically making IPFS an operating-system-like layer that application developers can use, and hopefully this can grow the ecosystem of applications much faster.
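One way to picture that decoupling: the application states its constraints, and the user's configured sources decide which one can satisfy them. Every name here is hypothetical, not an existing IPFS or Filecoin API:

```python
from dataclasses import dataclass


@dataclass
class RetrievalRequest:
    cid: str
    max_price: float  # most the user will pay, in some token unit
    deadline_s: int   # how long the user is willing to wait


def choose_source(request, sources):
    """Pick the first configured source that fits the request's cost and
    time constraints; the application never sees these details."""
    for name, price, eta_s in sources:
        if price <= request.max_price and eta_s <= request.deadline_s:
            return name
    return None


# The user configured two sources: the free IPFS network (may be slow
# for cold data) and a paid Filecoin retrieval deal (fast but costly).
sources = [("ipfs", 0.0, 3600), ("filecoin", 0.5, 60)]

print(choose_source(RetrievalRequest("QmX", 1.0, 120), sources))   # filecoin
print(choose_source(RetrievalRequest("QmX", 1.0, 7200), sources))  # ipfs
```

The application only ever constructs a `RetrievalRequest`; which source ends up serving it is the user's arrangement.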
B
And in particular, getting the remote pinning API to a place that is good and usable by lots of ecosystem members required collaboration: the pinning summit involved a lot of people talking these things out, and there are lots of GitHub issues and IRC discussions that went into this, which means this is really a community-driven process.
A
Yeah, absolutely, great points. So I think we can start wrapping up there. This has been really valuable, and I think the future here, and the work that we're already starting to look at doing to extend this, is going to be really valuable for making sure that we're building a thriving ecosystem and that people can use IPFS and build on top of the network.
A
And
so
I'm
I'm
excited
about
the
implications
here
and
the
implications
of
of
working
with
this
and
file
coin
and
other
systems.
And
again,
as
dean
said
reach
out,
the
community,
like
your
input,
is
insanely
valuable
to
helping
us
make
this
this
work.
Well,
so
please
please
reach
out
on
discuss
on
github
any
other
forum,
and
we
look
forward
to
chatting
with
you
soon.