From YouTube: IPFS Project Roadmap v0.6.0 ft. Juan Benet
A
Cool, so great to see everybody. Thanks for the chance to speak to you.
A
What I want to do now is give a longer-term, longer-arc discussion for a moment and point to the longer-term perspective for IPFS: calling out long-term goals that have been part of the project for a while, and then talking about some very big opportunities that the community has this year, so that people can orient against those specific kinds of projects and take advantage of certain things.
A
This is starting off a bit of discussion with what is, I think, called the IPFS project roadmap, which you can find on GitHub. There's a set of very large-scale goals spelled out there, in terms of the many different ways in which IPFS could be used. The project as a whole has followed many different trajectories over time; it's used in a bunch of different applications and so on.
A
There's massive scale of use around blockchains, because that's an area where content addressing makes a lot of sense, but there are a lot of other applications and all kinds of other use cases that don't relate to that, and that maybe get a lot less attention over time. So the things I want to draw attention to are ones that haven't seen a ton of scale yet.
A
Yet
is
things
like
partition,
tolerant
networks
where
you
saw
the
video
from
birdie
and
like
that's
a
good
example
of
of
that
in
use,
but
that's
kind
of
like
small
use
case.
We
we
haven't
yet
seen
large-scale
partition
networks
in
flight,
where,
like
you,
should
be
able
to
kind
of
move
around
large
swaths
of
data,
so
think
of,
like
all
the
data
that
maybe
some
community
might
use
in
in
kind
of
the
sneaker
net
type
environment
content.
A
Content addressing lets you do that. The gaps there relate to implementations: a lot of what's probably holding back those use cases is finding good implementations that target them. I think everything right now is very limited by a few implementations trying to serve all the use cases, so a really big shift that has to happen this year or next year is starting to tune implementations per use case.
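To make the content addressing point concrete, here is a minimal sketch (my illustration, not from the talk) of naming data by its hash with the go-cid and go-multihash libraries; any node that later receives the bytes, over any transport, including a sneakernet, can verify them against the identifier.

```go
package main

import (
	"fmt"

	cid "github.com/ipfs/go-cid"
	mh "github.com/multiformats/go-multihash"
)

func main() {
	data := []byte("data carried over a sneakernet")

	// Hash the content; the hash, not a location, becomes its name.
	sum, err := mh.Sum(data, mh.SHA2_256, -1)
	if err != nil {
		panic(err)
	}

	// Wrap the multihash in a CIDv1 with the raw-bytes codec.
	c := cid.NewCidV1(cid.Raw, sum)

	// Whoever receives the bytes can re-hash them and compare
	// against this string to verify integrity, with no network.
	fmt.Println(c.String())
}
```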
A
I like to describe this in terms of how the HTTP implementations had to open that up. Initially there was a binary called httpd, and you would install the HTTP client and server as programs you ran in your terminal: you would type httpd to run the server, or http to run the client. And for a while, for many years...
A
Something
like,
I
think,
three
or
four
years,
maybe
more
like
that
was
the
main,
the
main
client
over
time.
People
started
sending
around
patches
and
there
were
suddenly
multiple
versions
of
http
and
http
and
suddenly,
but
still
that
that
was
like
what
the
web
meant.
It
was
like
those
specific
binaries
later
apache
came
in
the
picture
and
was
like
hey,
look,
let's
create
something
new
and
create
apache
too
and
like
now,
you
could
install
apache
too,
but
that
also
kind
of
bottlenecked
and
created
like
one
big
web
server
that
everybody
had
to
use.
A
There
were
other
ideas
and
so
on
along
the
way.
But
what
really
opened
up
the
web
to
be
used
in
tons
of
cases
was
the
emergence
of
being
able
to
have
clients
and
servers
as
libraries
in
a
bunch
of
implementations
and
really
treating
http
as
a
protocol
and
writing
programs
and
servers
in
in
kind
of
a
large
variety
of
use
cases
where
you
weren't
reliant
on
a
specific
implementation.
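As a small illustration of that shift (mine, not the speaker's), Go's standard library treats HTTP exactly this way: client and server are libraries that any program embeds, with no canonical httpd binary in sight.

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// A server as a library: any program can embed one.
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "hello from an embedded server")
	})
	go http.ListenAndServe("127.0.0.1:8080", nil)
	time.Sleep(100 * time.Millisecond) // crude wait for the listener

	// A client as a library: no separate binary to shell out to.
	resp, err := http.Get("http://127.0.0.1:8080/")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Print(string(body))
}
```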
A
What
that
means
here
is
that
we
need
to
shift
the
use
of
ipfs
from
depending
on
a
single
implementation,
to
treating
it
as
more
of
like
a
set
of
libraries
and
tools
that
you
use
for
your
particular
application.
So,
very
concretely,
what
what
that
might
mean
is
like
a
really
great
opportunity
here
this
year
is
in
in
creating
an
executable
oriented
implementation
for
ipfs,
where
you
can
think
of
feeding
programs
into
some
runtime.
A
That
then
runs
the
runners
on
our
stack,
as
opposed
to
kind
of
like
the
the
opposite
of
that
which,
right
now
you
might
download,
go
ipfs,
install
that
locally
and
then
run
programs
like
maybe
speak
to
it.
So
that's
a
very
different
conception
and
that's
a
if
you're
interested
in
that
kind
of
thing
should
go.
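For contrast, here is a minimal sketch of today's daemon-oriented pattern that the speaker describes: a separate program speaking to a locally installed go-ipfs (kubo) node over its HTTP RPC API, which listens on port 5001 by default and expects POST requests; /api/v0/id returns the node's identity as JSON.

```go
package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Talk to the separately installed daemon, rather than running
	// the IPFS stack inside this program as a library.
	resp, err := http.Post("http://127.0.0.1:5001/api/v0/id", "", nil)
	if err != nil {
		panic(err) // no local daemon running
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Println(string(body))
}
```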
A
Go
you
know
kind
of
explore
both
this
use
case,
these
set
of
like
perspectives
and
so
on,
and
these
kind
of
kinds
of
potentials
and
talk
to
other
groups
that
are
that
are
orienting
around
these
use
cases.
I'm
gonna
like
hop
around
roadmap.
For
a
moment.
A
We
saw
this
upgrade
path
thing
before
there's
a
lot
of
like
good
efforts
in
get
kind
of
closing
that
gap.
Another
implementation
here,
I
think,
is
needed.
The
go
ipfs
today
is
already
embedded
in
brave,
but
it's
not
an
implementation,
that's
tuned
for
the
browser,
so
people
could
write
an
implementation.
A
That's
tuned
for
use
in
browsers
and
embedded
devices,
and
that's
a
really
good
opportunity
and
that'll
look
a
bit
different
than
in
os's,
so
if
people
want
to
put
bring
ipfs
into
embedded
devices
and
embedded
systems,
and
so
on,
that
ends
up
needing
a
different
kind
of
implementation.
So
one
possibility
here
is
to
write
plugins
for
kernels
and
so
on.
That
can
do
content
addressing
and
can
move
around,
ipld
data
and
so
on,
but
that
are
tuned
for
this
kind
of
use
case.
A
This
use
case
and
the
browser
use
case
are
pretty
different,
so
you
might
end
up
with
multiple
implementations.
The
thing
they
should
not
do
is
like
try
to
depend
on
go
ipfs,
specifically
or
jsipf's,
specifically
or
lotus,
or
anything
like
that.
As
solving
these
use
cases,
these
are
like
different
tools,
different
systems,
so
they're,
big
opportunities
like
these
are
probably
entire
startups-
could
be
built
around
these
kinds
of
things.
A
There's a really good start for this, which is gomobile, which I think the Berty folks have built and maintained; go check that out if you're interested in this kind of thing. One other thing that's maybe useful is to look at the IPFS network. I wish I had, and I'll make someday, a nicer graphics version of this, but this is a view into the broader IPFS network.
A
There
are
many
different
sets
of
networks
that
are
connected
by
by
the
protocol.
You
can
use
things
like
the
broadly
public
ipf's
dht
to
find
some
of
the
content,
but
there
are
a
whole
other
set
of
networks
that
don't
directly
connect.
So,
whenever
you're
talking
about
the
ipfs
network,
this
is
what
you
should
look
at
since
the
time
when
this
slide
was
made,
which
was
I
think
a
couple
years
ago,
and
now
another
massive
network
has
appeared
here,
which
is
the
falcon
network.
A
It's
in
terms
of
data
use
it's
enormous
and
I'll
describe
in
a
moment
like
how
you
can
take
advantage
of
it,
but,
but
even
so,
like
the
idea
of
like
these
partition
networks
and
large
amounts
of
content
moving
around
them
is
not
well
it's
both
there
as
a
possibility
and
not
yet
taken
advantage
of,
because
I
think
we're
missing
we're
missing
good
implementations.
A
We
talked
a
bit
about
content
routing
systems.
This
is
an
area
that
that
is
has
had
been
a
bottleneck
for
a
long
time
and,
finally,
the
the
improvements
to
both
the
dht
and
the
delegated
routing
protocol
came
in
and
so
now
getting
the
greatly
dropping
down
the
content.
Discovery
is
a
phenomenal
kind
of
outcome.
What
we'll
probably
see
more
and
more,
though,
is
loading
those
indexer
nodes
translating
into
needing
better
systems
that
have
better
properties.
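As a concrete sketch of what delegated routing looks like from the client side (my illustration; the endpoint shown is one public service implementing the HTTP routing v1 API and is an assumption, not something named in the talk): instead of walking the DHT yourself, you ask an indexer over plain HTTP who provides a CID.

```go
package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Any valid CID works here; this one is a common docs example.
	c := "bafybeigdyrzt5sfp7udm7hu76uh7y26nf3efuylqabf3oclgtqy55fbzdi"

	// Routing V1 HTTP API: GET /routing/v1/providers/{cid}.
	resp, err := http.Get("https://delegated-ipfs.dev/routing/v1/providers/" + c)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	// The response is JSON listing provider records: peer IDs,
	// multiaddrs, and supported transfer protocols.
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(string(body))
}
```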
A
So
so
one
one
range
of
problems
to
attack
here
is
around
reader
writer
privacy
in
the
system,
so
content
routing.
Once
you
move
into
content
addressing
and
content
routing,
you
encounter
a
whole
set
of
other
really
critical
properties
that
you
would
want
systems
like
this
to
do,
which
relate
to
censorship.
Resistance
relate
to
privacy,
related
security,
which
is
you
want
these
content
routing
structures
to
be
to
protect
reader
and
writer
privacy.
You
want
to
be
able
to
publish
content
on
the
network
and
have
the
network
not
know
who
exactly
published
the
content.
A
You
want
to
be
able
to
read
content
in
the
network
and
have
the
network
now
be
able
to
see
that
today
there
are
no
good
large-scale
systems
deployed
that
can
do
this,
there's
a
few
like
even
things
like
tor,
which
are
widely
used
and
a
lot
of
people
use
for
for
the
for
that
kind
of
censorship.
Resistance
can
be
spotted
like
you,
you
can.
There
are
attacks,
you
can
mount
around
tor
that
can
can
look
at
all
of
the
content
moving
in
and
out
and
you
can
correlate
the
traffic.
A
So
this
is
a
whole
area
where
the
entirety
of
web3
sort
of
has
to
go
and
work
on
these
sets
of
problems
and
solving
these
again,
it's
like
whole
start
not
only
whole
startups.
It's
like
whole
crypto
networks
are
likely
going
to
be
built
to
solve
this
set
of
problems
so
arriving
at
a
good
content,
addressed
content,
routing
system
that
can
do
that
can
preserve
reader
writer
privacy
at
scale
to
be
able
to
do
kind
of
like
sub
100.
A
Millisecond
queries,
like
that's
a
very
large
scale,
problem
that
this
community
has
to
go
solve
and
it's
a
huge
set
of
opportunities.
So,
if
you're
in
the
space
interested
in
blockchains
interested
in
these
kind
of
problems,
looking
for
a
project,
here's
a
here's,
a
really
good,
really
valuable
idea.
A
One
other
thing
I
wanted
to
talk
about,
I
mentioned
kind
of
like
the
the
falcon
network
as
one
massive
system.
So
here's
like
a
view
of
the
falcon
network
where
you
have
kind
of
a
set
of
storage
clients,
onboarding
data
into
the
network
through
a
set
of
on-ramps
things
like
nfdo
storage
web
through
the
storage,
fission
estuary,
bitbot
fleeq,
there's
a
bunch
of
these
and
then
eventually
that
data
makes
its
way
into
source
providers.
Search
providers.
A
Then
can
you
can
pull
out
the
data
from
source
providers
through
different
kinds
of
retrieval
providers,
so
virtual
networks,
the
ipfs
gateway?
There
are
a
ton
of
different
iphones
gateways,
content,
routing
content,
delivery
networks
and
so
on,
and
so
virtual
clients
are
over
here
now.
Of
course,
you
can
go
like
source
clients
directly
to
source
providers
and
back
out,
but
this
is
way
slower
like
this
is
like
a
traditional
data
center
type
operation.
You
want
to
be
able
to
go
much
faster
indexing
here.
A
This
is
kind
of
where
the
content
routing
piece
lives.
The
a
very
common
pathway
here
is
to
onboard
vast
amounts
of
data
through
here,
get
it
all
all
the
way
to
sps
index
the
content
and
then
make
this
kind
of
retrieval
loop
pretty
fast,
so
to
give
a
sense
of
scale,
the
capacity
of
storage
providers
is
in
the
you
know,
15
getting
on
16
exabyte
range.
A
That's
an
enormous
amount
of
capacity
for
storing
everything
right
so
like
the
entirety
of
the
web3
data
set
outside
of
hot
coin
is
about
like
three
to
four
petabytes
or
so
like
last
time.
I
looked
at
it
looked
at
it,
so
you
can
store
all
of
that
and
replicate
it
multiple
times
over
and
not
touch
this
capacity.
A
So
the
the
the
falco
network
is
now
onboarding.
A
...a large amount of data. There's been a massive, gargantuan effort from the whole community to start making use of that capacity, and it's getting to, I think, 70 petabytes of use. I think this figure is not deduplicated, so there are probably 5 to 10x copies in there; that would be, you know, seven to eight petabytes unique. It's still a bunch of data beyond all the web3...
A
Three
data,
that's
more
like
web
2
type
of
data
coming
online,
with
kind
of
some
large
replication
factor,
and
this
is
kind
of
like
a
sense
of
the
data
being
onboarded
per
day.
So
this
is
like,
I
think,
on
average
400
today,
like
in
the
last
week
or
two
400
terabytes
per
day,
being
added
onto
the
network
with
you
know
some
some
spikes
above
a
petabyte,
but
you
know
these
are
sort
of
like
short-lived.
A
That's
a
huge
amount
of
data
being
added
to
the
network
daily,
so
all
of
that
is
kind
of
coming
from
either
large-scale
deals
from
source
clients
directly
to
source
providers
or
coming
through
these
on-ramps
in
terms
of
the
ids
and
source
of
content
that
you
can
address
address.
This
is
an
enormous
amount
of
data
that
you
can
then
start
pulling
out.
So
what
are
people
going
to
do
with
this
stuff?
A
In
many
cases,
it's
archival
information
so
being
able
to
kind
of
back
up
a
bunch
of
storage
that
people
care
about,
to
be
able
to
look
at
it
later.
In
a
lot
of
cases,
it's
data
that
people
want
to
disseminate
out
to
users
or
they
want
to
compute
on
it,
on
blockchain,
so
being
able
to
kind
of
store
the
data
and
then
be
able
to
run
some
other
kinds
of
computation
associated
with
that
data
around
some
blockchain.
A
So,
for
example,
all
the
nfts
fit
into
this
category
today,
nfcs
are
a
very
small
amount
of
data
like
they're
they're,
a
huge
number
of
nfts,
but
most
of
them
are
2d
images.
Even
some
3d
3d
objects
still
very
small.
What
peop,
where
people
want
to
go
kind
of
like
the
frontier
of
all.
This
is
like
once
you've
created
this
like
massive
onboarding
of
data.
A
You
want
to
then
be
able
to
do
run
computations
on
it,
and
so
that's
something
that's
going
to
come
to
the
to
the
falco
network
this
year
and
next
year.
A
It's
the
ability
to
kind
of
first
run
a
set
of
smart
contracts
to
be
able
to
kind
of
operate
on
the
state
of
the
network
and
then
use
that
as
a
hook
to
introduce
computation
around
the
data,
so
then
be
able
to
run
provable
computation,
orchestrated
by
some
kind
of
like
task,
scheduler
around
fem
and
so
on,
to
then
be
able
to
compute
and
all
this
happy
fest
data
on
top.
A
So
that's
sort
of
like
the
way
that
that
looks
is
like
to
kind
of
like
run
the
jobs
in
the
in
the
sps,
because
you
can
think
of
those
those
nodes
as
being
like
this
set
of
large
scale
nodes
that
have
you
know,
petabytes
of
storage
and
a
bunch
of
gpus
that
they
can
run
run
computations
over.
So
that's
kind
of
like
so
so
how
do
you?
A
How
can
how
can
you
take
advantage
of
it
of
this
massive
amount
of
capacity
and
kind
of,
like
onboarding
rate,
to
kind
of
like
tune
it
for
your
use
case?
I
think
today
like,
if
you're
dealing
with
a
lot
of
static
data
that
where
you
either
it's
for
archival
purposes
or
for
content
delivery.
A
This
might
be
a
good
fit
for
the
content
delivery
case
you
do
have
to
the
the
kind
of
like
retro
providers
here
are
on,
like
an
improvement
rate,
but
they're
still
kind
of
not
able
to
deliver
massive
amounts
of
of
bandwidth
in
a
very
short
latency.
So
you
might,
your
mileage
will
vary
here
like
the
onboarding
pathway
has
been
greatly
improved
over
the
last
six
to
12
months.
The
retro
pathway
is
kind
of
like
the
next,
the
next
big
thing
to
improve.
A
Although
already
now
a
lot
of
people
are
getting
pretty
good
latency
for,
depending
on
the
data
set
size.
So
if
you
have
like
a
few
terabytes
or
a
petabyte
even
of
data,
you
want
to
get
close
to
the
users
you
can.
You
can
get
there.
You'll
probably
have
to
do
a
bunch
of
lifting
yourself
to
set
up
clusters
and
so
on
to
with
low
latency.
If
you
have
like
smaller
datasets
than
that,
you
can
use
the
on
the
on-ramps.
A
Those
on-ramps
already
have
very
nice
delivery,
so,
like
nsu
storage,
is
a
good
example
of
getting
sub
second
delivery
of
all
their
content
out
to
refuel
clients.
But
now,
if
you
wanna
like
serve
much
higher
much
larger
stuff,
so
like
many
petabytes,
that's
that's
still
kind
of
there's
a
lot
of
work
to
do
there
to
to
scale
that
delivery.
So,
depending
on
your
use
case,
if
you
want
to
kind
of
like,
I
think
what
this
might
work
really
well
for
today
is
video.
A
So
we
we've
like
enabled
all
this
now
to
for
somebody
to
like,
come
around
and
create
something
like
video.storage,
where
you
can
create
like
a
super
scalable
way
to
onboard
massive
amounts
of
video
into
this
network.
Take
the
subset
of
the
video
that
you
need
to
load
quickly,
so
the
index
over
the
data
in
the
beginning,
preload
all
of
that
very
close
to
the
users,
while
you
kind
of
like
amortize
the
loads
for
the
rest
of
the
video,
and
so
you
might
be
able
to
use
all
that
down
the
road.
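A minimal sketch of the preload idea (my illustration, not the speaker's design): IPFS HTTP gateways honor standard Range requests, so an edge cache could pull just the head of a large video by CID and warm it near users while the tail stays in colder storage. The gateway URL, the CID, and the byte split are assumptions.

```go
package main

import (
	"fmt"
	"io"
	"net/http"
)

// preloadHead fetches only the first n bytes of a file by CID from an
// IPFS HTTP gateway: the piece a player needs to start quickly.
func preloadHead(gateway, cid string, n int64) ([]byte, error) {
	req, err := http.NewRequest("GET", gateway+"/ipfs/"+cid, nil)
	if err != nil {
		return nil, err
	}
	// Standard HTTP range header; gateways support partial reads.
	req.Header.Set("Range", fmt.Sprintf("bytes=0-%d", n-1))

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	return io.ReadAll(resp.Body)
}

func main() {
	// Hypothetical CID of a large video; preload the first 2 MiB.
	head, err := preloadHead("https://ipfs.io", "bafy...videoCID", 2<<20)
	if err != nil {
		panic(err)
	}
	fmt.Printf("warmed %d bytes near the user\n", len(head))
}
```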
A
If
you
want
to
get
either
involved
in
building
computational
networks
or
in
building
applications
of
computational
networks,
this
is
stuff
to
look
into
towards
the
end
of
the
year
or
next
year.
So
many
groups
will
probably
want
to
get
started
now,
depending
on
what
you're
doing
but
yeah.
So
this
is
like
a
pretty
interesting
set
of
use.
Cases
cool,
I
think,
probably
the
last
thing
I'll
mention
today
is
there's
a
whole
new
wave
of
scale.
A
That's
going
to
come
to
network
networks
like
ipfs
and
networks
like
falcoin
and
many
other
blockchain
systems
where
today,
most
blockchains
are
not
able
to
deal
with
the
kind
of
computation
that
the
web
2
world
deals
with.
So
we're
many
many
orders
mounted
away,
but
this
is
sort
of
where
the
entire
entirety
of
the
industry
is
headed,
being
able
to
do
kind
of
like
billions
of
transactions
per
second
or
trillions
of
transactions
per
second,
but
it'll
take
on
the
order
of
like
two
to
three
years
to
get
there.
A
So once that starts hitting, you'll really be able to do full-on virtual-world type experiences with very high transactional throughput, and run entire games and so on through systems like this. That's where the range of web3 architectures is headed, but, you know, it'll take a while to really flesh out and get there. All right, cool.
A
I
blazed
through
a
bunch
of
stuff.
I
wanted
to
give
you
a
preview
of
the
things
that
I'm
pretty
excited
about,
that
I'm
working
on
directly
and
so
on,
but
also
happy
to
answer
any
questions
that
people
have
about
any
of
this
or
the
ipfs
project
or
whatever.
I
can
help
with
yep.
A
This
one-
maybe
this
is
like
a
better
view.
Yeah,
you
can
find
this
on
fem.falcon.
A
By the way, Sebastian here. Sebastian?
B
Hey, so I have a question in terms of what you see for the infrastructure and latency being built out here. I know that in the finance industry and other areas you still use magnetic tape decks for really long-term storage, and I sort of look at IPFS as kind of an interim state of data in between. For instance, when we have scientists with large data sets who want to get them out there for, I don't know, ten months, that might work. But is each protocol...
A
So
I
think,
right
now,
falcoin
specifically
is
not
tuned
for
tape.
Falcon
is
tuned
for
hard
drives,
not
even
ssds,
like
the
your
big
best
bang
for
the
buck
will
be
hard
drives.
My
sense
is
that
there
will
be
other
either
adjustments
to
the
core
protocol
that
might
come
in
as
fips
or
layer,
two
type
things
that
will
enable
tape
decks
to
come
online.
A
I've
had
a
lot
of
conversations
with
a
lot
of
groups
that
want
to
use
tape
to
be
able
to
to
leverage
that
a
problem
to
solve
there
like
the
the
hurdle
is.
How
do
you
do
a
proof
of
replication
over
tape
that
works
well,
like
that's
very
difficult,
because
you
don't
want
to
be
spinning
up
tape.
All
the
time
like
that'll
degrade
the
the
tape
and
the
gray
the
machines,
and
you
want
like
very
high
confidence
that
that
has
actually
happened.
A
So
you
need
some
way
to
enable
that,
if
your
goal
is
to
use
it
for
power
in
the
consensus,
you
may
just
do
tape
storage
with
like
very
infrequent
prison
storage,
but-
and
so
that
can
show
you
that
the
data
is
there,
but
it
won't
tell
you
much
about
how
quickly
you
can
access
it
or
anything
like
that.
C
My name is Han Hunterson. I guess it's not even possible to verify the source medium? For instance, somebody could pretend that they have a tape, but you would never know at the protocol level, right?
A
So it really sort of depends; different protocol choices might tune for different things. There are many environments where people might be okay with a verifiability structure where you have people running inspections, actual humans going to inspect data centers, and some set of use cases will leverage that, and that's fine. But it depends on the application: many applications won't want that, and many applications want cryptographic verifiability.
A
So
it
kind
of
really
depends
on
what
people
are
tuning
for,
like
different
use.
Cases
will
call
for
different
things,
and
so
then
certain
protocols
for
some
protocols,
it'll
it'll,
make
sense
so
a
good
example
of
this
kind
of
thing,
not
necessarily
in
storage
medium,
but
in
renewable
and,
like
kind
of
like
the
green
energy
movement.
One
big
thing
that
we're
finding
is
because
we're
trying
to
make
all
of
file
coin.
A
Not
just
carbon
neutral
but
like
verifiably
green,
we're
like
we're
net
carbon
negative
so
like
we're
like
having
falcon
in
your
region,
can
like
bring
down
the
the
amount
of
carbon
emissions
in
that
in
that
environment.
What's
that
yeah
great,
so
so
in
that
environment
kind
of
what
you
need
is
you
need
certain
kinds
of
offsets
that
have
high
verifiability
about
like
when
they
happened.
What
type
of
energy
was
it?
What
organization
is
it
and
so
on?
A
And
the
best
thing
you
can
do
is
run
protocols
where
you
have
people
actually
inspect
expecting
the
facilities,
and
you
have
devices
installed
at
the
facilities
that
can
run
protocols
that
are
cons
like
checking
how
much
energy,
for
example,
you
can
create
a
solar
plant
and
you
can
inspect
that
the
solar
plant
was
there,
you
can
have
people
go
over
and
install
machines,
and
you
can
then
measure
the
output
of
energy
coming
from
that
solar
plant
and
you
can
run
periodic
inspections
to
make
sure
that
that
is
indeed
and
remains
a
solar
plant
and
not
not
secretly
burning
coal
in
the
in
the
back
room
or
something,
and
if
you-
and
if
you
do
that,
then
you
can
end
up
with
a
verifiable
renewable
energy
credit.
A
And
so
at
that
point
you
can,
when
the
solar
is
coming
out
of
that
plant,
you
can
measure
how
much
power
that
is,
you
can
see,
say
exactly
at
what
point
in
time
it
was,
and
you
can
end
up
with
like
a
very
good
instrument
for
knowing
precisely
that,
like
solar
power
came
in
from
that
plant
in
a
particular
region,
you
can
then
aggregate
a
bunch
of
these
over
time.
A
To
then
like
say
things
about
the
the
energy
use
of
a
region
or
a
community
and
so
on,
and
that
very
much
is
like
a
verifiability
structure
like
where
you
are
sending
people
to
those
plants
to
verify
certain
things
so
like
it
really
depends
on
the
use
case
in
some
use
cases.
You'll
end
up
with
with
that
kind
of
thing,
cool,
other
questions,
thoughts,
yep
back
there.
D
Hello, hey, I'm John. Right now both IPFS and Filecoin are very much optimized for the deal flow where you have a set of data, you structure that data, you come up with your deal, you do your storage market negotiation, and then you put that on Filecoin.
D
Is
there
any
thought
kind
of
in
the
road
map
about
like
more
streaming
based
flows
for
data
being
able
to
kind
of
freely
establish
the
deal
ahead
of
time
and
then
kind
of
like
migrate
or
see
sequence,
seamlessly
kind
of
stream
data
into
that
deal?
Yeah.
A
Great
question
so
some
some
people
are
working
on
diff,
so
so
one
part
is
like
the
thing:
that's
bottlenecking.
All
of
that
is
the
fem
stuff
right.
So
sorry,
can
I
get
screen
back.
A
Once you can do that, and you have that programmability, then there's nothing preventing you from creating deals up front that don't even have the CID associated with them, and associating the CID afterwards. That stuff all has to be built, but there's nothing preventing you from going and creating that kind of thing. There's probably a bunch of re-architecting of the contracts needed to make that easy, and I know at least two or three groups that are planning to do that kind of thing.
A
That
also
enables
other
kinds
of
things
like
you
can
do.
Space-Time
futures,
like
you,
can
start
blending
d5
with
storage,
where
you
can
then
kind
of
can
start
like
really
commoditizing
digital
storage
in
particular,
regions
like
you,
probably
want
to
get
sophisticated
about
like
what
regions
you're
buying
those
futures
in
and
so
on,
and
you
want
probably
verifiability
about
the
location.
I
think
that's
one
of
the
things
that
needs
to
improve
on
a
lot
of
these
systems,
something
that
is
not
there
yet
is
like
hard
proofs
of
location.
A
So
there's
a
big
opportunity
for
somebody
to
go
and
create
a
proof,
approval,
location
network
where
you're
using
nodes
around
the
environment,
to
then
pinpoint
precisely
where
particular
nodes
are-
and
you
can
do
this-
there's
a
bunch
of
mechanisms
for
doing
this.
A
One
of
them
is
like
you
can
use
latency,
and
if
you
have
a
preview
location
network,
then
you
can
use
that
to
then
inform
where
that
data
is
so,
but
that,
but
again
like
that's
even
further
out,
so
somebody
has
to
like,
go
and
create
a
proof
location
network
to
then
you
being
able
to
use
that,
but
that's
sort
of
like
where
all
this
stuff
is
headed,
but
you
know
over
the
longer
longer
arc.
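To illustrate the latency mechanism the speaker mentions (my sketch, not an existing protocol): the speed of light in fiber puts a hard upper bound on how far away a node can be given a measured round-trip time, so geographically distributed verifiers can bound a claimed location. The constants, the address under test, and the crude TCP-dial measurement are all assumptions.

```go
package main

import (
	"fmt"
	"net"
	"time"
)

// maxDistanceKm converts a measured round-trip time into an upper
// bound on distance: light in fiber covers roughly 200,000 km/s,
// and the signal has to travel there and back.
func maxDistanceKm(rtt time.Duration) float64 {
	const fiberKmPerSec = 200000.0
	return rtt.Seconds() / 2 * fiberKmPerSec
}

func main() {
	// Measure RTT to a node with a bare TCP dial (a crude stand-in
	// for a real challenge-response protocol).
	addr := "example.com:443" // hypothetical node under test
	start := time.Now()
	conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
	if err != nil {
		panic(err)
	}
	rtt := time.Since(start)
	conn.Close()

	// A node cannot be farther away than this bound; several
	// verifiers in different regions tighten it into a region.
	fmt.Printf("rtt=%v, node is within ~%.0f km\n", rtt, maxDistanceKm(rtt))
}
```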
A
So
in
the
ipfs
realm,
the
it's
really
important
that
these
protocols
do
not
create
something
inside
of
them
that
enables
any
power
any
group
to
kind
of
like
decide
what
content
is
bad
across
the
entire
network,
like
that?
A
That's
like
a
surefire
way
to
like
end
up
with
like
digital
totalitarianism
like
straight
up
so
so
you
really
need
these
networks
to
enable
communities
to
form
together
and
decide
what
content
they
want
to
move
around
now,
very
much
so
like
in
different
regions
of
the
world
like
there
will
be
different
opinions
about
what
people
should
or
not
be
able
to
move
around.
A
What
you
want
is
policies
that
enable
node
operators
like
people
actually
running
the
code
and
so
on,
to
subscribe
to
the
different
kinds
of
things
they
want
to
not
let
go
through
those
networks.
So
the
way
it
works
today
in
in
ipfs
is
that
the
main
nodes
that
get
takedowns
are
gateways
so
http
to
ipfs
gateways
tend
to
get
a
lot
of
the
mca,
takedowns
and
other
kinds
of
like
censorship,
things,
those
are
run
by
specific
groups
and
they
respond
to
those
gateway.
A
Takedown
requests
so
like
all
of
these
gateway
operators,
you
know
kind
of
like
the
we
run
one
of
these
gateways.
We
get
a
bunch
of
these
kinds
of
takedown
requests
and
we
we
serve
them.
So
now,
in
the
in
the
there's,
probably
like
a
an
opening
here
for
people
to
write
a
kind
of
like
some
there's
been
discussion
for
many
years
about
creating
some
kind
of
like
distributed
denialist
type
of
protocol
where,
like
groups,
could
come
together
and
say:
okay,
great,
like
this
is
dmca
content.
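As a sketch of what a subscribable denylist could look like for a node operator (purely hypothetical; no such standard protocol is named in the talk): the operator pulls lists they choose to trust and checks CIDs against them before serving. Hashing the entries keeps the list itself from becoming an index of the content it blocks.

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// Denylist holds hashed CIDs, so the list itself doesn't enumerate
// the blocked content in the clear.
type Denylist struct {
	blocked map[string]bool
}

func hashCID(cid string) string {
	sum := sha256.Sum256([]byte(cid))
	return hex.EncodeToString(sum[:])
}

// Subscribe merges entries from a list the operator chose to trust;
// different operators can subscribe to different lists.
func (d *Denylist) Subscribe(hashedEntries []string) {
	for _, e := range hashedEntries {
		d.blocked[e] = true
	}
}

// Allowed is checked by the gateway before serving a CID.
func (d *Denylist) Allowed(cid string) bool {
	return !d.blocked[hashCID(cid)]
}

func main() {
	dl := &Denylist{blocked: map[string]bool{}}
	// Hypothetical feed of hashed entries from a takedown list.
	dl.Subscribe([]string{hashCID("bafy...takedownCID")})

	fmt.Println(dl.Allowed("bafy...takedownCID")) // false: refuse to serve
	fmt.Println(dl.Allowed("bafy...otherCID"))    // true
}
```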
A
Oh, they're on to us. But the point is, the broader picture here is that censorship is an extremely difficult thing. It's very clear that communities need to be able to self-censor when they need to, but you don't want to enable a macro-scale actor to then censor everything, or censor something completely. So you need this kind of balanced structure, where you enable node operators to decide what content they want to distribute and whatnot.
A
So
in
the
falcon
case,
you
can
do
that
kind
of
thing,
with
storage
providers,
so
source
providers
can
ultimately
a
deal
is
entered
into
between
the
clients
and
the
source
providers.
So
at
that
point
they
can
sort
of
decide
what
to
whether
or
not
to
store
something,
whether
or
not
to
accept
data
from
that
client.
What's
going
to
be,
what's
common
going
to
be
come
on
here
as
it
is
in
the
cloud
in
the
traditional
cloud?
A
Is
that
users,
malicious
or
not
malicious
but
like
parties
will
will
send
up,
will
send
content?
That's
encrypted
will
look
innocuous
and
then
eventually
it'll
get
revealed,
that's
bad
content,
and
so
you
need
a
you
need
a
response
mechanism
to
deal
with
that,
like
you,
you
need
to
at
that
point
be
able
to
like
flag
that
content
as
bad
in
the
falcon
case,
stop
serving
it
out
and
then
later
let
it
roll
off
once
once
it
once
it
expires.
A
You
could
also
like
delete
it
ahead
of
time
right
now,
there's
no
good
mechanism
for
updating
sectors,
so
that
th.
This
is
like
a
common
feature,
request
of
like
being
able
to
kind
of
update
a
sector
to
remove
some
data
out,
so
that
might
come
in
time,
but
that's
like
a
probably
not
a
super
high
priority
for
most
groups,
because
it's
not
that
big
of
a
problem.
If
it
became
a
much
bigger
problem,
it
would
probably
increase
in
priority
all
right,
I
think
I'm
being
told
to
wrap
up.
A
So I will stop here. Thanks so much.