Description
Juan leads us through an in-depth conversation about the Filecoin Storage Provider economy.
Watch the recap of the Storage Provider meetup in Austin, hosted by the Filecoin Foundation!
Meetups are an opportunity to meet the amazing people and teams participating in the Filecoin community's decentralized storage network. Learn more at https://sp.filecoin.io/
A
One really key thing here is that blockchains in general are large, complex systems: they involve lots of different components, so you end up in a situation where you have to get many different things to work well together, and changing them is difficult.
A
But you want to be able to improve many different parts over time, and you want to improve each of these segments simultaneously. I tend to think about it like a spiral, where you get each one of these things working well enough, then keep iterating across and keep improving things over time.
A
More and more groups start attacking different parts here and expanding it. But really, think of the whole large-scale ecosystem that the cloud companies have developed around their facilities, and think of facilities like that being developed in the decentralized cloud ecosystem. One of the key components here is, as we all know in the Filecoin community: data has gravity.
A
So think of Filecoin, over the first few years of its life, accumulating tons of really valuable data, especially public data commons and things like that, and then building a large-scale, open, permissionless cloud where you can run any kind of computing system on top of that. I'll get into this a little bit later.
A
But this is a really good L1 for decentralized computing networks to be able to do large-scale computation. Cool, so it's kind of amazing to see these kinds of stats. The 16 EiB figure, which is probably outdated at this point because it's from a week or two ago, is truly an enormous amount of storage capacity. That shows the massive power you get out of an open, permissionless, decentralized network with incentive structures and a large community of participants creating this value together.
A
So whenever we have some large-scale problem, if you can think of solving it with game theory, with mechanism design, with incentive structures, with permissionlessness, with open systems, you can get to a certain scale way, way faster than many centralized approaches. I'm sure right now all of you are encountering all kinds of programs and systems that you're designing and building, and you're falling into this.
A
Where you coordinate very publicly, you can get much more impact, much faster. So I really encourage you to think about designing your systems and your programs to be open, permissionless, and community-oriented. Probably one of the most important things here is just the number of organizations and contributors continuing to increase.
A
This is the map that I've been using relatively recently to map the Filecoin system. This is the current state; it might evolve and change over time. But I want to describe why it's structured this way. Think of storage clients as wanting to get their data to SPs, and then wanting to retrieve the data out.
A
Different kinds of verticals and different kinds of systems need a bunch of tools dedicated to that vertical. Take NFT.Storage as an example: NFT.Storage is a really useful tool for NFT creators and NFT builders, and being able to tune the product experience, the interfaces, the branding, the messaging, the developer libraries, the tutorials...
A
...all of that to that specific vertical makes it way easier to achieve product-market fit there and to build the main product that everyone will use to onboard their data. So think of these on-ramps and aggregators developing for different kinds of verticals and different kinds of systems. Right now we have very few, but there are many different kinds of verticals that can leverage this type of strategy. I'll touch on it a little bit more in a bit.
A
Now, one really big one that's missing is large-scale data onboarding. We need some kind of on-ramp (I'll touch on it a bit more later) to be able to move around large-scale data sets, so think petabytes at a time. We've quickly vaulted into that level of need, and that's a pretty awesome place to be, but it'd be great to problem-solve against that and enable the community to create it.
A
My sense is there's a very good business there: a potential for SPs in different locations to create a sort of data shuttle service. I can dig into that a little bit more later. Actually, before going on: we heard about the indexers. Think of those as the main way to address all of the content that's being added, and over time, think of developing different views on the data.
A
So right now we're indexing just the data structures that are inside the CAR files, but later, think of being able to run a process that takes a different view, indexes the data differently, generates some other data structures, and then you can index those and provide access to them. So think of being able to build databases on top of all of this, and building specific use cases on top of it, without having to duplicate the data again.
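The "views without duplication" idea can be sketched in a few lines of plain Python: several indexes point into the same stored records, so only the index structures differ, never the data. The records and fields here are hypothetical, not the actual indexer schema.

```python
# Sketch: multiple "views" (indexes) over one data set, without duplicating it.
# Records and field names are hypothetical, not the real indexer schema.

records = [
    {"cid": "bafy-aaa", "format": "GeoTIFF", "region": "us-west"},
    {"cid": "bafy-bbb", "format": "CSV",     "region": "eu-central"},
    {"cid": "bafy-ccc", "format": "GeoTIFF", "region": "eu-central"},
]

def build_index(rows, key):
    """Build one 'view': a mapping from a field value to the CIDs that carry it."""
    index = {}
    for row in rows:
        index.setdefault(row[key], []).append(row["cid"])
    return index

by_format = build_index(records, "format")   # one view
by_region = build_index(records, "region")   # a second view over the same records

print(by_format["GeoTIFF"])      # CIDs of the GeoTIFF records
print(by_region["eu-central"])   # CIDs stored with eu-central metadata
```

Each new use case only adds another small index; the underlying records stay stored once.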
A
A lot of the data sets, especially the larger-scale ones in scientific use cases, tend to have all these very specific formats, and that's a great area to think about. First you just get the data as it is, it doesn't matter, you onboard it, and then later you add these other views that can slice the data and look at it in different ways.
A
We need a lot of developers to start doing that kind of work, but it'll be very much driven by demand and use cases. So if we find specific verticals where there are specific types of data sets that people want to compute with, that will be really good fodder to develop all these kinds of indexing tools. One that I keep hearing about is geospatial data.
A
So you have all this constantly changing, large-scale, high-resolution data you want to onboard, and that's a perfect one to develop all kinds of tools and applications to talk with. Well, right now there's... actually, maybe I'll come back once more on the Linux source; I'll get to it later. And then the retrieval providers are a way to pull the data really quickly.
A
We'll dig into this a little bit more, but the important thing there is that you want to be able to deal with two main problems. One is the speed of light: you want the data to be served, ideally, locally.
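The speed-of-light problem is easy to make concrete: no network, however good, can beat distance divided by c on the round trip, which is why serving locally matters. The distances below are illustrative; real fiber paths and routing add considerably more.

```python
# Lower bound on round-trip latency imposed by the speed of light alone.
# Distances are illustrative; real paths (fiber, routing) are slower still.
C_KM_PER_MS = 299_792.458 / 1000  # ~300 km per millisecond in vacuum

def min_rtt_ms(distance_km):
    """Minimum possible round-trip time over a given one-way distance."""
    return 2 * distance_km / C_KM_PER_MS

for name, km in [("same metro", 50), ("cross-continent", 4000), ("antipodal", 20000)]:
    print(f"{name:>15}: >= {min_rtt_ms(km):.1f} ms")
```

Even in the ideal case, a cross-continent fetch costs tens of milliseconds per round trip before any server work happens, so local replicas are the only way to get consistently low latency.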
A
Cool. By the way, it's really awesome that years and years ago we imagined this, and it's real now. Super, super cool, so huge congratulations.
B
A
One of the coolest things in this graph is the recovery, right? We had that big failure of a mysterious data center that caught fire, maybe. I never heard, by the way: do we know if it really caught fire, or was that never resolved? We're never gonna know. But being able to recover all of that, and then get on the fast path with Filecoin Plus, was really, really cool. That showed the storage network in action.
A
It's actually very surprising to me. I would have expected us to see many more jaggies, things falling and then recovering, and it's been really awesome that our reliability and the verifiability of the data are so good that that hasn't happened much, if at all.
A
So this is really massive scale, and there are all kinds of things that are super interesting to point out, like when you compare this to the Bitcoin network. There, when you have significant volatility in the Bitcoin price, or large policy changes, you can usually see it in the hash rate; the hash rate responds very significantly. In the storage network, it's really critical that you don't lose data.
A
People's data, right? Losing hash power is one thing; losing people's data is a whole different thing. It's really awesome that the whole model, the structures designed into Filecoin, are working to deal with those stresses and absorb those shocks in the network. It's a really, really great indicator that the network is super strong. And like I was saying, here's a view into how much more capacity we can have.
A
It would be great to have a graph that shows that watermark, but we have an enormous amount of capacity in the aggregation. So think of it as: as more gas usage comes in through the FVM and other things, more and more of the storage proofs get pulled into the aggregation channel, and then that can go much faster.
A
...designed against that. Maybe we should have just started it that way; it would have been easier and faster, and then solved it later. Great.
A
So this is an awesome graph that shows the changes in population for SPs across different scales. Probably one of the coolest things is those ribbons. They could probably be ordered a little differently, it's a little bit hard to follow, but what those ribbons show is SPs graduating from one size to another size. This isn't deduplicated by entity.
A
So this is each miner actor on the chain and their sizes, and those growing ribbons show the number of specific SPs growing to that scale. This is not weighted by storage size, so the little lines at the very bottom are the mega miner actors that have 50-plus petabytes. If this were weighted by size, it would probably look different; we should probably make a graph like that, but yeah.
A
This is really, really cool to see: even throughout the last year of stresses, with all the China policy changes, it's amazing to see this growth happening across the world.
A
Yeah, raise your hand if you participated in the ESPA program, just so I get a sample. Awesome. Was that your first entry point, or were you mentors, or did you put it on?
A
Great, awesome. Can I hear some highlights? What was the coolest thing that you learned there, or that you found there? A couple of people, yep.
D
A
B
For tech people to learn some business, you know, learn about the economy, yep. Yeah, which is great. But I hope the ESPA people get introduced to each other.
A
Yeah, that'd be really cool. It'd be awesome to have a video reel interviewing all the participants that went through and hearing about it. It's awesome to see the talks online; there's the boot camp version on YouTube, and that's fascinating, really useful knowledge. So thank you for putting that together. Any other comments?
F
Yeah, just the scale of the community, really, and the amount of support you can get from anyone. Just walk up to anyone, ask a question, and then it's a half-an-hour conversation. It's great.
F
D
A
Awesome, yeah, and thank you for running this; super useful. So yeah, opportunities for folks: attend those cohorts. Bring your services if you're an SP, or an ecosystem provider that has some service that could be useful to SPs. ESPA is a great channel for you to talk about what you're doing, get talks recorded about your tech, and broadcast it out to the community. So think of bringing your services to those systems, and then think of starting...
A
I don't think ESPA is quite doing any kind of chapter thing yet, but this might be a really good idea: being able to expand that kind of program to different locations in the world, with a chapter-style structure. If you're an SP that has developed a lot of knowledge about how to run these systems, and you have access to extra facilities or something like that...
A
...this could be a really, really useful side business along the way: being able to train other SPs in the area and so on. Great. I think I'll get to this a little bit later, but just a quick comment: in terms of features and focus, SPs should be really focusing on excellent onboarding of data right now.
A
There are a bunch of bottlenecks there. A lot of clients are interested in pulling in tens of petabytes at a time, and being able to deal with that onboarding requirement is pretty important. So: being able to provide really good handholding of that data, good customer UX, and then good retrieval speeds for that data.
A
Once you get the data onboarded, having good retrieval speeds will matter, so that retrieval networks can work on top of that, and applications can work on top of that. The other thing I have there: there was a survey, I think, about data center links. I think upgrading the links might be really valuable.
A
At the moment there's a lot of data in the wings that wants to get into Filecoin, but they're having trouble: they don't necessarily know who to talk to, or they're following some pilot but it's kind of slow. Being able to greatly accelerate the onboarding rate of the network would be really great, and a lot of this can come through offline data onboarding as well. So: either increasing the capacity of the links, or having a pretty good way of accepting offline data.
A
Separate from that, but related: opportunities for SP funds. I would expect a number of SP funds to emerge that are going to provide investments to storage providers over the next year or two. But, you know, don't count on those; raising money in this time period will be hard. The second thing is that every downturn tends to be an amazing opportunity for those that are really well prepared. This can be a great time for growth.
A
This can be a great time to optimize operations and so on. The crypto community has been learning through all kinds of cycles; it's very volatile and cyclical, and the useful lesson is: in those downturn moments, focus on growth, focus on leveling up operations, and be well poised for shifts.
A
But again, P0 is stable operations: make sure you and your org are really well set up. Then, separately, think of what opportunities are now available to you and orient towards them. If you find good side businesses as an SP, where you've developed some expertise in something or you can help other groups, that's a great opportunity, so think of doing that. Think of leaning into added services, or different tiers of service, or things like that.
A
Great. I'll talk just a little bit about retrieval providers, since I know this is what a lot of people want. Here's a view into the IPFS gateway data streams. This is, I think, a billion requests per week and growing, and this is only the single gateway that PL runs, that the PL Starfleet runs.
A
It's also the case that when there are big spikes, when some application consumes a ton of bandwidth, they tend to get kicked out and told to go set up their own gateway, or to set up with some other service. So it'd be great to get visibility into all of the gateways, for all the applications, and how much traffic that is. But over time, this will accumulate to a lot.
A
So, good opportunities. For devs: build retrieval provider networks that beat the gateway perf; this is where Saturn, Myel, and a number of others are working. And for SPs: make your content available through those indexers. So, the indexer that Brenda mentioned: you want to get your content indexed by that, to then start getting retrieval requests. The retrieval provider networks are going to lean on that indexer to know which SPs to go to, to ask for what content.
A
So you want to advertise your content there; then, if you operate an RP, you can connect through that. And, as was mentioned, definitely talk to Patrick about the broader set of projects. There's a lot of stuff going on; many different things are coming together.
A
Some really cool highlights include the... I think... which one is this one? Is it here?
A
A really cool payment channel network already working, and that's super exciting: Titan, Magma, that's right. Maybe I don't have it here... oh, I see it now. And then Pando is also really cool: it's become this really useful metadata service where you can dump all kinds of important artifacts that you need to run other protocols. So, opportunities for devs: join one of these retrieval network projects, or start your own.
A
There are grants for all of this, and try out some of the early RP software. So talk to Asgard from the Saturn team, talk to the Myel folks, talk to Titan, and others. I think things are getting ready to deploy these networks, and I think there is a lot of data.
A
There are many data sets, coming through those aggregators, that want to find high-throughput, low-latency delivery. But what really matters there is to have a good end-to-end structure, where the sets of clients that are going to request the data can go to RPs in their region, and those RPs can find the content through indexers and can talk to SPs that have the content, and you want the latency across those jumps to be pretty good.
A
Two or three seconds might be okay, but if it's 10 seconds, that doesn't work. That might work for some kinds of data sets where you preload the data, but we'll have to figure out what tends to work better. My expectation is that we'll end up developing different approaches and testing them out in production, and then one of these approaches will work way better, and we'll learn that over time. Follow the retrieval market demo days; there's tons of really cool stuff being built.
A
One really great thing is that dashboard: it's going to start measuring all the different retrieval provider networks and how well they work across a variety of features, so you can tell from that how well the network is evolving. Cool. Content indexing: a lot of opportunities here. One is around the DHT lookup processes and so on; there's a lot of data being served through IPFS, through the DHT, over time.
A
Some amount of that can then start hitting RPs and SPs, but the more exciting thing is the network indexer, and really getting all of your content there. How many storage providers are being indexed right now, do we know?
A
100-plus? All right, awesome. And how much data?
A
Oh, two terabytes of index, got it. And how much data does that represent, stored somewhere else? Do we know? Sorry to put you on the spot.
A
We should; it'd be awesome to get a graph where we can see that, so we can see the graph of indexed data catching up to the total amount of data in the network and eventually getting to 100%. Yeah.
A
So, ideally, we want the latency here, figuring out who has what, to be super fast: on the range of 10 milliseconds when you're within the same data center or the same region, and around 100 milliseconds if you're not in the same data center. That's what's going to enable really fast retrieval, because you first have to do a lookup to figure out...
A
...who's got the content, and then from there, go. So retrieval networks are going to be huge users of this, both to figure out which SPs to go to, and between themselves, to be able to pull the data and cache it better.
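The lookup-then-fetch flow above implies a simple latency budget. The sketch below uses the 10 ms / 100 ms lookup targets from the talk; the connection-setup and first-byte numbers are assumptions for illustration, not measured values.

```python
# Illustrative end-to-end retrieval latency budget: index lookup, then
# connection setup to the SP, then server time to first byte.
# Lookup targets (10 ms same-region, 100 ms cross-region) are from the talk;
# the other figures are assumptions for the sketch.

def retrieval_latency_ms(lookup_ms, connect_rtts, rtt_ms, first_byte_ms):
    """Total time until the first byte arrives at the client."""
    return lookup_ms + connect_rtts * rtt_ms + first_byte_ms

same_region = retrieval_latency_ms(lookup_ms=10, connect_rtts=2, rtt_ms=5, first_byte_ms=50)
cross_region = retrieval_latency_ms(lookup_ms=100, connect_rtts=2, rtt_ms=80, first_byte_ms=50)

print(f"same region:  ~{same_region} ms")
print(f"cross region: ~{cross_region} ms")
```

Under these assumptions, both cases land well under the two-to-three seconds described as acceptable, which is why the lookup step has to stay in the tens-of-milliseconds range rather than becoming the bottleneck.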
A
So, yes, opportunities: get your content indexed, and so on. Storage clients.
A
I think this is probably, for me, the most exciting area for growth at the moment: getting to a point where we can bring that graph on the bottom to a much higher spot. That's where a lot of our focus should be, translating a lot of the client pilots that are interested in bringing data onto the network into successful data onboarding deals on chain, with good retrievability. Once we start doing that, then we can start building applications on top, and all that kind of stuff.
A
So this is awesome, super fast growth. We should feel super proud that this kind of onboarding rate is enormous in size. It's much larger than many successful centralized storage companies; not the biggest ones, of course, but there are many centralized storage companies that built very significant businesses storing a much smaller amount of data.
A
So that's a really great success so far. We have a lot of room to store stuff, so there's tons of capacity to keep going. We want to onboard tons of really useful data, and especially put an emphasis on public data that can then be used for all kinds of permissionless applications. If you onboard a lot of public data sets and then have hackathons on top of that, you can make that data way more useful.
A
If you need to run large computational pipelines, that's when we start getting into the computation networks around the data stored in the SPs. But to get there, we've got to focus on the data onboarding rate. First of all, we're on a really, really good trajectory, and it's growing quite a bit.
A
I think we need to get this to about five petabytes a day, which is a lot, granted, but that's sort of where the network wants to be. I can go into more detail there later, and we might end up discussing this more in the crypto-economics discussion if people are interested. This is the fastest-growing decentralized storage network by far, and it's much closer to some of the large-scale centralized storage networks.
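The five-petabytes-a-day target is easy to turn into back-of-the-envelope numbers for how long different data sets would take to land on the network. The target rate is from the talk; the data-set sizes below are illustrative.

```python
# Back-of-the-envelope: time to onboard data sets at a network-wide rate.
# The 5 PiB/day target is from the talk; data-set sizes are illustrative.

TARGET_PIB_PER_DAY = 5

def days_to_onboard(dataset_pib, rate_pib_per_day=TARGET_PIB_PER_DAY):
    """Days needed to onboard a data set of the given size at the given rate."""
    return dataset_pib / rate_pib_per_day

for name, size_pib in [("client pilot", 10), ("civic archive", 100), ("exabyte-class", 1024)]:
    print(f"{name:>15}: {days_to_onboard(size_pib):6.1f} days at {TARGET_PIB_PER_DAY} PiB/day")
```

At that rate, even an exabyte-class data set fits in well under a year of network-wide onboarding, which is what makes the target meaningful rather than symbolic.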
A
Lots of opportunities around this: sourcing more and more clients, leveraging the success case here. We should get more intel on what data sets...
A
...this onboarding represents: what customers are flowing in, what data sets are now stored with them. Then we can use those as reference clients with many other clients, and have a whole BD cycle of approaching new prospective clients with really good success stories. This is an area where, if you have had really good experiences onboarding large-scale clients, think a single petabyte or more, those might make great customer success stories and...
A
What are they called? Case studies? That's the one. So, being able to write about those, record videos about those, and push that stuff out there will help generate a lot more participation. Right now, tons of people around the world are paying huge sums of money to back up data that we could be backing up for them dramatically cheaper. They just don't know about it.
A
They don't know about it; or even if they know about it, they don't know how to use it; or if they try to use it, they don't know how to explore it, and so on. So we now have to cross that chasm, get a lot of people to understand what the potential is and what the value is, and make sure they have a really, really good experience.
A
Something we haven't done is figure out some algorithm for detecting really good customer success. We probably need some kind of coordination mechanism where we can track clients as they get interested in Filecoin, track the lifecycle of that relationship, and see where they had successes and where they had failures, because we'll get a ton of feedback from that. Right now, all of us are doing that independently in different parts.
A
Various groups have their own individual funnels, in a sense, but if we found a coordination mechanism to integrate that information, that would be super helpful, because then we could spot things like: oh, this type of client tends to run into this kind of problem, and there's either a software tool or a service that needs to be built there. Or: hey, these particular SPs tend to handle that kind of thing super...
A
...well, so let's flow more of that business to those groups. Coming up with a good tool here to coordinate that would be super useful. Then, in terms of increasing the onboarding rate: one part is fatter, bigger pipes, so upgrading the connectivity, and the other part is offline onboarding. That means equivalents of the Snowball or the Data Box...
A
...heavy-duty equivalents: we need to be able to show up at a facility with petabytes of capacity, or a petabyte of capacity, onboard the data really fast, and then take it to an SP. That, I think, is a great business that someone in that space could run, right? There are usually a lot of SPs in a given region.
A
If one of you gains experience and expertise doing this, you can sell that as a service, as a side business, to other SPs that want to onboard that data, and I think that could be really compelling. Bandwidth, moving data around, is expensive, so if you figure out a good way to do that, you can probably price it pretty well and get a significant margin there.
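The case for a data shuttle follows from simple throughput math: a network link's transfer time grows with the data, while a van full of drives takes roughly the same trip regardless of payload. The link speeds and utilization below are assumptions for illustration.

```python
# Why offline ("shuttle") onboarding wins at petabyte scale: transfer time
# over a network link vs shipping drives. All figures are assumptions.

def transfer_days(size_tib, link_gbps, utilization=0.7):
    """Days to move size_tib over a link at the given sustained utilization."""
    bits = size_tib * (2**40) * 8
    seconds = bits / (link_gbps * 1e9 * utilization)
    return seconds / 86400

PETABYTE_TIB = 1024  # 1 PiB expressed in TiB
print(f"1 PiB over 1 Gbps:  {transfer_days(PETABYTE_TIB, 1):6.1f} days")
print(f"1 PiB over 10 Gbps: {transfer_days(PETABYTE_TIB, 10):6.1f} days")
# A vehicle carrying the drives takes a few days door-to-door regardless of
# size, so its effective bandwidth grows with the payload.
```

Under these assumptions, a petabyte takes months on a 1 Gbps link and about two weeks even at a sustained 10 Gbps, which is exactly the gap a shuttle service can price against.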
A
I think there are hundreds of large-scale data sets like this. A lot of other people have probably talked about a lot of this already, I think, yeah.
A
I think there are lots of really cool, exciting use cases where these large-scale clients are interested in storing copies with Filecoin, and they're testing out the network. Think of this year and next year, and maybe even the year after, as really important crucible years for many large-scale clients, where they're going to test out the network and get a feel for the capabilities. Those are really good opportunities to get really amazing customers; but then, lean into features that the centralized cloud providers just can't do.
A
Things like the verifiability of the proofs, things like being able to attest where exactly the data is, and certain kinds of other constraints. I think this is where certain cryptographic techniques for storing the data encrypted, like fully homomorphic encryption or MPC and so on, might be super, super useful, and those might be edges over what the centralized cloud providers can do.
A
I want to highlight this use case, which I think is super cool: having these public archives of important civic data sets that have a commons-oriented use case. This is one of the things that centralized cloud providers can't do very well, but decentralized networks are perfect for, because you have data sovereignty baked into the system. You can onboard these super precious, valuable data sets for large-scale communities.
A
These tend to be, in some cases, huge data sets that are really expensive to store and maintain, that you want to run lots of computation over, and that you want to store in a public, permissionless environment where the community that cares about that data set can run these kinds of things. Have you heard about the data DAO concept? A few? Cool. So the idea here is that data sets tend to map to specific applications, or communities, or groups that are curating, maintaining, and deciding how to use specific data, and this is where the coordination tooling of DAOs can be extremely useful.
A
You can think of building a DAO for archivists, or for people that are cleaning up data and transforming it and so on, and then deciding how the data should be used, and then leaning on top of Filecoin. Sort of the piece...
A
...we're missing is the FVM, to be able to run those kinds of things on, though even right now people could probably prototype all of this and run it on Ethereum. Then, being able to decide what monetization pathways can happen over the data can make those data DAOs super successful as well. So you can have a data DAO that curates really valuable information and then decides how that data should be monetized, and so on. When initially starting Filecoin, I talked to a lot of large-scale data users.
A
Some of them were these massive corporations that generate massive-scale data sets of telemetry over the earth: think telemetry of the ground, telemetry of the atmosphere, telemetry of all kinds of factors about the earth, or a city, or all the plants, and so on. And right now, all of that data is super, super difficult to use.
A
It's not well organized, it's not searchable at all, it's hard to compute against, and so on, and it's inherently complex from a sovereignty and governance standpoint, because this is usually public data: it has a public commons orientation, and it has privacy constraints. You can't just reveal what the entire grid of a city looks like. However, it is perfectly the model to run verifiable computation over, where you can ship a function that is only looking at certain things.
A
We can tell what the output is going to be, and so these kinds of civic data sets are super promising for a network like this, where centralized providers just aren't very good at this; this could be a use case that really shines on these networks. And those data sets tend to be enormous, and they get bigger and bigger over time, because people build better and better tooling that gives you better resolution, or you need to track the data over time, and so on.
A
So take Landsat as one specific case, and think of much higher-resolution imagery, across many different kinds of sensors, for every square meter of the earth and deep into the ground. All that data can add up to hundreds of petabytes, and it needs to be generated, stored, archived, carefully curated, and carefully managed in terms of its lifecycle.
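The "hundreds of petabytes" figure is easy to sanity-check with rough numbers. Earth's surface area is about 510 million square kilometers; the resolution, bytes per pixel, band count, and snapshot frequency below are assumptions for the sketch, not actual sensor specs.

```python
# Sanity check: rough size of a whole-earth imagery archive.
# Surface area is ~510 million km^2; the per-pixel parameters are assumptions.

EARTH_SURFACE_M2 = 510e12  # ~510 million km^2 in square meters

def archive_pib(resolution_m, bytes_per_pixel, bands, snapshots):
    """Uncompressed archive size in PiB for one set of global snapshots."""
    pixels = EARTH_SURFACE_M2 / (resolution_m ** 2)
    total_bytes = pixels * bytes_per_pixel * bands * snapshots
    return total_bytes / 2**50

# e.g. 1 m resolution, 2 bytes/pixel, 8 spectral bands, 12 snapshots a year:
print(f"~{archive_pib(1, 2, 8, 12):.0f} PiB per year, before compression")
```

Even these modest assumptions land near a hundred pebibytes per year for a single such data set, so "hundreds of petabytes" across years and sensors is conservative.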
A
So I think it's going to be a pretty big deal. My sense of the timing is that these data DAOs are probably going to start hitting next year, and I think the prototypes are going to happen later this year and early next year. So if there are specific data sets like this that could be really useful to prototype as an early use case: shine a light on that, show what's possible, tell the world about it, and then within three to four months...
A
And so, yeah, there's a ton of data sets that we're doing really well with in terms of onboarding, and many more that can follow this. It would be awesome to be able to replicate the Internet Archive fully. I think we have the space for it; we just have to figure out how to move the data right now.
A
It's stuck, I think, on a data movement problem: how do you move this large-scale data around to SPs around the world, and so on? So that's a really good opportunity. Slingshot has been really successful at pulling together a bunch of data sets, but now we have to make them useful. So we have the data, great.
A
Now
we
got
to
make
it
programmable,
and
this
is
where
all
this
kind
of
indexing
over
it
different
applications
and
so
on
can
come
in
so
yeah
and
right
now,
in
terms
of
like
the
the
feature
set.
Archival
is
like
the
perfect
use
case
over
time,
as
retrievability
gets
really
good,
as
indexing
gets
really
good.
A
As
program
programmability
of
the
being
able
to
run
functions
against
the
data
and
so
on,
then
it
sort
of
opens
up
a
bunch
of
other
use
cases,
but
right
now
think
of
like
archiving
really
large
public
data
as
like,
a
really
nice
sweet
spot
onboarding.
Those
large-scale
use
cases
will
then
enable
you
to
then
offer
all
kinds
of
computation
over
those
use
cases
later,
so
you
can
sort
of
like
approach
it
as
an
archival
use
case.
First it's archival, and then, by the way, now we have computing over all of this, and so on. Cool, so storage on-ramps. By the way, a lot of this data is probably outdated, but NFT.storage has been a huge success. This model of finding a very specific onboarding pathway and designing a product tuned to that use case, so that you get really good product-market fit, with the documentation, the branding, the messaging, the SLAs, everything very well tuned to achieve product-market fit in that particular vertical, is a great strategy, and I think it can apply to all kinds of other verticals.
A
It's
great
to
see
like
see.
It
grow
a
ton
over
time.
This
stuff
is
going
to
start,
including
much
larger
nft,
so
think
of
video.
Video
nfcs
are
like
still
coming.
They
haven't
quite
hit
the
market
and
then
large
scale,
whole
3d
spaces
in
3d
rooms
and
games,
and
so
on
so
think
of
all
of
those
use
cases
as
flowing
into
into
these
kinds
of
onboarding
pathways.
A
Yeah
web3
those
stories,
the
same
same
kind
of
really
awesome
success
story,
and
this
one
tuned
more
to
like
more
general
use
cases
but
tuned
towards
developers
and
then
yeah
sure
as
well.
Another
really
awesome,
onboarding
tool
dedicated
to
larger
scale,
data
sets
and
so
on.
I
don't
know
exactly
what
the
sweet
spot
is
for
s3.
A
It
might
be
like
tens
of
gigabytes
all
the
way
to
like
terabytes,
but
I'm
not
100
sure,
but
it's
kind
of
like
maybe
having
a
map
that
shows
all
the
different
on-ramps
might
be
really
useful
to
like
know
which
one
which
one
to
go
to
this
one
recently
shipped,
which
is
big
data
exchange,
zx
here,
no
somewhere.
So
what's
really
interesting
about
this
is
like
right
now.
A
There's
a
coordination
problem
in
the
whole
network,
where
we
have
massive
amount
of
capacity
and
we
can
store
data
for
super
cheap
and
we
have
to
like
get
lots
of
people
to
help
on
board
onto
the
network.
Pay
a
high
onboarding
cost
because
they
have
to
learn
about
the
system.
They
don't
know
how
to
do
it
really
well,
they
have
to
like
deal
with
a
lot
of
tutorials
and
so
on
and
so
kind
of
I
sort
of
claimed
this
last
year,
where
I
think
that
this
is
ripe
for
an
auction
model.
A
If
you,
if
you
put
an
auction
model
on
there,
you
can
get
for
a
while
the
storage
pricing
to
go
negative
and
if,
if
in
the
world
storage
pricing,
goes
negative.
Think
of
that,
as
like
an
oil
price
goes
negative
kind
of
moment
like
it
is
super
crazy
to
hear
like
oh
you're,
gonna
get
paid
to
take
a
lot
of
oil
right.
Like
isn't
oil
like
one
of
the
most
expensive
commodities
in
the
world
like
isn't
it
everyone?
Doesn't
everyone
want
oil
yeah,
sometimes
in
the
supply
chain
and
systems
you
get
into
these
weird
spots?
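To make the negative-pricing point concrete, here is a toy uniform-price auction, my own illustration rather than an actual Filecoin mechanism. SPs bid the minimum price they will accept per TiB; block-reward subsidies can push those bids below zero, meaning the SP effectively pays the client to take the deal, and the marginal SP's bid sets the clearing price:

```python
# Toy uniform-price auction for storage deals. Each SP bid is
# (min acceptable price in FIL per TiB, capacity in TiB); subsidies
# can make the minimum price negative.
def clear_auction(sp_bids, demand_tib):
    """Return (clearing_price, filled_tib) for demand_tib of storage."""
    filled, clearing_price = 0, None
    for price, capacity in sorted(sp_bids):       # cheapest offers first
        if filled >= demand_tib:
            break
        filled += min(capacity, demand_tib - filled)
        clearing_price = price                    # set by the marginal SP
    return clearing_price, filled

# Two subsidized SPs bid below zero; a third needs a positive price.
bids = [(-0.02, 50), (-0.01, 30), (0.05, 100)]
price, filled = clear_auction(bids, demand_tib=60)
print(price, filled)  # -0.01 60  -> clients are paid to store
```

With 60 TiB of demand, the first two SPs fill it and the clearing price lands at -0.01: the "oil goes negative" moment in miniature.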
A
That's
really
useful
to
us
if
we
lean
into
it,
because
this
can
create
an
environment
where
the
rest
of
the
world
suddenly
starts
pushing
tons
of
data
into
filecoin
and
and
busting
through
all
kinds
of
bottlenecks
and
solving
all
kinds
of
developer
problems,
because
this
becomes
the
draw.
And
so
this
is
a
there's
a
way
of
using
incentives
to
bring
lots
of
data
into
the
network.
And
once
that
data
is
there,
data
has
gravity.
Then
you
have
the
the
computational
networks
on
top
and
then
you
can.
A
That
can
be
the
new
home
for
the
data.
But
you
sort
of
like
want
to
make
this
kind
of
like
a
really
successful,
onboarding
pathway.
Think
of
this,
as
kind
of
like
the
beginning
of
d5
and
nft
sort
of
worked
in
similar
kind
of
structures,
huge
kudos
to
the
falcon
plus
folks
for
like
scaling
the
operation
this
year.
It
has
been
like
a
huge
awesome
thing
to
see,
like
all
of
this
happening,
so
quick
round
of
applause
for
everyone
working
on
this,
because
this
has
represented
a
huge
huge
amount
of
work.
A
And
you
know,
I
think,
like
there
are
some
missing
on-ramps
that
we
need,
so
one
of
them
is
video.
We
don't
have
a
really
good
on-ramp
for
video
right
now
and
there's
tons
of
video
being
generated
all
the
time,
and
this
is
perfect
for
falcone.
So
this
needs
an
nft
storage
type
service.
So
it's
a
good
opportunity
for
for
devs
same
for
archives.
We
don't
have
a
good
on-ramp
for
archives
where
they
have
specific,
tooling
requirements.
A
They
they
need
tools
that,
like
map
onto
the
use
cases
that
they
already
use
the
tool
sets
that
they
already
have,
and
so
there's
another
like
really
valuable,
missing
on-ramp
here
and
one
personal
favorite
for
me
is
like
dev
assets.
So
at
the
moment
like
we
have
enough
capacity
to
store
all
of
the
dev
assets
that
people
make
and
and
use
to
like
build
software.
So
all
of
the
code
on
github
all
of
the
package
managers,
all
of
the
vms,
all
of
docker
all
the
containers,
everything
and
by
the
way
you
can
do
duplicated.
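One reason dev assets deduplicate so well is content addressing: identical blobs hash to the same identifier and get stored once, no matter how many projects pull them in. A minimal sketch, using a plain SHA-256 hex digest as a stand-in for a real CID:

```python
import hashlib

class DedupStore:
    """Toy content-addressed store: identical blobs are kept exactly once,
    the way packages, container layers, and VM images deduplicate."""

    def __init__(self):
        self.blobs = {}

    def put(self, data: bytes) -> str:
        cid = hashlib.sha256(data).hexdigest()  # stand-in for a real CID
        self.blobs.setdefault(cid, data)        # store only if new
        return cid

store = DedupStore()
a = store.put(b"left-pad v1.3.0")
b = store.put(b"left-pad v1.3.0")   # pulled in by a second project
c = store.put(b"left-pad v1.3.1")   # a new version is a new blob
print(a == b, len(store.blobs))     # True 2
```

The same package fetched twice costs one blob of storage; only genuinely new content adds to the total.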
A
So,
like
that's
cool
too,
but
we're
missing,
like
really
good
devon
ramps
to
make
that
happen,
and
unfortunately,
devs
have
the
highest
of
standards
for
ux.
So
this
is
one
of
this
is
a
tricky
one,
and
it's
going
to
need
like
a
really
good,
really
good,
developer
ux
thing,
but
if
we
make
one
then
we
can
there's
a
perfect
use
case,
because
tons
of
developer
assets
are
exactly
a
public
comments.
Type
of
problem
tons
of
stuff
is
open
source
done.
A
Some
people
want
to
be
able
to
have
good
sovereignty
of
the
data,
good
governance
systems,
good
governance,
tooling,
and
so
it's
perfect
for
the
falco
network,
but
it
sort
of
needs
like
a
really
really
good
developer,
ux
tool
on
top
of
it
and
then
think
of
like
all
of
the
all
of
the
processing
that
happens
over
code
and
vms
and
all
that
kind
of
stuff
right,
like
a
huge
amount
of
the
data
processing
in
the
world,
is
all
entirely
on
top
of
the
data
that
is
used
in
in
software
systems,
and
maybe
a
sub
subset
of
this
might
be
game
assets.
A
Game
developer
assets
might
be
like
a
really
good
one
where
the
assets
are
bigger
and
the
developer
ux
requirements
are
smaller.
You
know
people
need
a
lower
qual,
they
don't
need
they
don't
have
as
high
standards
for
for
the
quality
of
the
ux,
so
potentially
game
developer
assets
might
be
a
very
specific
on-ramp
that,
like
people
like
that,
might
be
like
an
entire
startup
cool,
so
large
data
on-ramp.
A
So
the
way
that,
like
centralized
cloud
source
writers
solve
this
problem,
is
by
building
these
specific
data
onboarding
systems,
these,
like
ruggedized,
peta,
scale,
transport
things
and
they
ship
you.
This
is
this
thing
you
like
load
it
up
with
your
data
and
you
ship
it
back
or
they
show
up
with
like
there
was
like
a
big
aws
semi
in
like
one
of
their
their
like
events
at
some
point,
eventually
we're
going
to
get
to
that
scale,
but
we
got
to
start
somewhere.
A
Azure
did
the
same
thing
like
they
follow
the
same
pathway
found
the
same
problem
solved
in
very
similar
ways.
They
leaned
into
you,
know,
building
similar
kinds
of
systems
of
these
I'd
like
encourage
you
to
look
at
the
databox
heavy.
I
think
that's
a
pretty
good
design,
but
I
think,
like
you,
can
also
lean
into
our
friends
in
seagate
who
have
like
built
this
entire
data
system
and,
like
all
of
this,
is
like
ready
to
go.
A
It
just
needs
like
th
what
what
this
needs
is
just
some
group
of
people
that
can
map
the
clients
and
sps
in
one
region
and
then
help
figure
out
the
business
there
of
like
saying,
hey,
there's
this
set
of
clients.
They
want
to
take
their
data
from
here
to
the
sps
over
here,
and
they
can
then
schedule
the
think
of
it
as
a
scheduling
problem
where
you
invest
in
some
set
of
drives
here
and
then
you
can
shut
it
back
and
forth
and
so
on.
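That scheduling problem can be sketched as a simple greedy packer: assign each client-to-SP transfer to whichever drive in the regional fleet frees up first. This is an illustrative toy with made-up numbers; a real service would also model shipping legs, sealing throughput, and regions:

```python
import heapq

def schedule(transfers_tib, drives, tib_per_day):
    """Greedily pack transfers onto drives; returns (drive, start, end) rows."""
    heap = [(0.0, d) for d in range(drives)]   # (day this drive is free, id)
    heapq.heapify(heap)
    plan = []
    for size in sorted(transfers_tib, reverse=True):  # big jobs first
        free_at, drive = heapq.heappop(heap)          # earliest-free drive
        done = free_at + size / tib_per_day
        plan.append((drive, free_at, done))
        heapq.heappush(heap, (done, drive))
    return plan

# Four hypothetical transfers (TiB), two drives, 20 TiB/day each.
plan = schedule([100, 40, 60, 80], drives=2, tib_per_day=20)
print(max(end for _, _, end in plan))  # 7.0 (days to finish everything)
```

Even this crude version shows the key lever: keeping every drive busy minimizes the makespan, which matters when an idle day of hardware is the difference between profit and loss.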
A
But
the
really
key
thing
here
is
super
good
efficiency
in
terms
of
the
use
of
the
hardware,
because,
like
one
extra
day
of
misuse
here
like
is
like
killer
when
you,
when
you're
like
running
very
close
to
the
margin
or
the
other
thing,
is
really
good,
client
ux.
A
But
I
think
over
time
like
given
the
success
of
the
on-ramps,
perhaps
there's
like
a
really
good
business
here,
where,
like
somebody
who
has
experience
with
all
the
operations
here,
could
be
nsp
in
an
area
or
something
like
that.
Builds
a
side
service
like
this
to
help
support
a
lot
of
the
sps
in
that
region.
So
in
this
time
of
like
macro
downturn
and
so
on,
this
is
like
a
potentially
really
good
side
business
for
for
folks,
and
it
can
help
with
that
data.
A
Onboarding
onboarding
problem
and-
and
I
would
encourage
so
like-
I
think
all
of
these
groups,
like
the
the
large
these
groups-
ended
up
designing
their
own
stuff
because
they
at
the
time
these
things,
I
think,
didn't
quite
exist
or
if
they
existed,
they
thought
they
could
do
like
their
own
thing
better
and
so
on.
A
I
think
there's
a
lot
of
development
time
and
cost
that's
going
to
go
into
designing
these
things,
so
I
would
encourage
you
to
either
like
look
at
some
like
consumer
hardware,
oriented
things
you
can
put
together
a
simple
design
or
just
like
get
ready-made
things
that
already
work,
because
the
amount
of
time
that
you
might
spend
designing
a
new
thing
will
will
will
cost
more
so
kind
of
like
where
we're
headed
is
something
like
this,
where
you
have
storage
clients
and
source
providers
and
like
these,
like
large
boxes,
moving
around
vast
amounts
of
business
data.
A
I
think
this
is
how
we
can
replicate
the
archive
the
internet
archive,
but
it's
not
going
to
happen
by
pumping
it
through
the
wires.
I
don't
think
just
because,
like
the
internet's
not
fast
enough,
so
if
we
want
to
make
a
bunch
of
copies
of
the
archive
soon,
we
need
a
really
good
data
on
on-ramp
like
this.
This
is
a
this
is
a
really
good
opportunity
for
for
sps
cool.
So
I
think,
there's
a
lot
that
has
been
said
about
fm
recently.
A
I
don't
know
if
here
yet
there's
a
lot
of
stuff
coming
through
here.
A
One
of
the
big
things
here
is
like
you
want
to
get
evm
compatibility
to
enable
all
the
tools
that
exist
in
the
ethereum
world,
and
you
want
to
be
able
to
have
awesome
compatibility
because
there's
all
kinds
of
applications
that
are
going
to
need
to
run
on
top
of
the
data,
there's
all
kinds
of
legacy
systems
and
and
even
new
systems
are
going
to
want
to
compile
their
stuff
down
to
wasm
and
then
run
it
on
top
of
the
data.
A
So
you
want
to
you,
want
an
environment,
a
developer
environment
that
can
tune
for
that,
and
the
other
part
is
cross
chain.
Calls
like
getting
that
to
be
worked
really
well
from
the
beginning
is,
is
a
really
key
piece.
There's
a
lot
of
stuff
going
on
here,
like
you
can
follow
the
roadmap
here.
I
think
right
now.
The
good
opportunities
include
primarily
for
developers
but
joining
the
early
developer
program
and
be
able
to
kind
of
like
ship
some
of
the
first
web
3
apps.
A
This
could
also
be
a
good
opportunity
for
people
solving
large-scale
problems
across
file
coin,
using
the
coordination
tooling
of
smart
contracts
right.
So
you
could
this.
This
will
be
very
helpful
for
retrieval
provider
networks.
It'll
be
very
helpful
for
computational,
centralized
computation
networks
it'll
be
very
helpful
for
other
kinds
of
things
like
that.
A
I
was
talking
to
some
of
the
hardware
folks
around
so
there's
this
problem
in
supply
chains,
where
sometimes
like,
there's
not
enough
matter,
there's
not
enough
atoms
of
a
particular
type,
or
so
they
get
really
expensive.
And
you
end
up
having
to
like,
recycle
and
reclaim
a
lot
of
the
hardware
that
already
exists,
and
that
is
sort
of
like
tossed,
and
my
guess
is
that
a
decentralized
network
coordinating
that
supply
chain
like
that
supply
chain
back
from
like
landfills
and
trash
and
and
discard
stuff
to
like
collect.
A
All
of
this
might
be
like
one
of
the
biggest
contributions
of
the
falco
network
to
the
world,
because,
like
think
of
like
a
reverse
supply
chain
here,
where
you're
trying
to
like
reuse,
specific
components
that
are
extremely
expensive,
otherwise
and
coordination,
tooling,
is
exactly
what
blockchains
are
for,
and
so,
instead
of
like
having
a
slow
process
with
contracts
and
structures
like
that,
you
can
get
to
like
public
public
permissionless
auctions,
and
so
this
is
ripe
for,
like
a
crypto
econ
team
to
like
design
this
and
then
ship
an
incentive
structure
into
the
network.
A
And
then
people
can
just
like
buy,
buy
particular
materials
and
then
just
receive
them
in
the
mail.
So I think that's where a lot of stuff is headed, and I keep alluding to this: with storage, you first get all the storage in place and then you create an environment for computation around it. And kudos to Charles, because he was one of the first people to demo a computational tool around data in Filecoin.

Do you guys still run this? Yeah? Awesome. Are you getting users? Some, yeah. So I think this kind of stuff, being able to do data analysis and so on over especially large data sets, really matters.
A
Most
of
the
tools
out
there
in
the
sas
world
can't
handle
large
scale
data
sets
so
being
able
to
like
do
some
of
this
kind
of
stuff
over
terabytes
or
petabytes
like
that
would
be
super
super
super
cool
and
that,
like
this
entire
massive
scale
billion
dollar
companies
designed
around
just
solving
a
narrow
problem
like
that,
like
being
able
to
run
specific
data
pipelines
over
petabytes
of
data,
like
that's
so
hard
to
do,
that,
many
kind
of
like
large-scale
companies
are
just
focused
on
that
building
towards
that,
I
think
could
be,
could
be
really
valuable.
True, if you look back, you see that there have been many projects, but all of them so far want to do everything, and I think that's fraught; I don't think it's going to work. The reason is that when you have decentralized computation, you have to add verifiability and you have to add confidentiality, and both of those introduce significant performance hits. And it turns out you have different requirements related to verifiability and different requirements related to privacy. In some cases one approach fits; in other cases you might want to use multi-party computation, where you trust a network of computational nodes even if you don't trust any one of them, or you might need to lean into something like fully homomorphic encryption. Once you deal with those cryptographic systems, you add many orders of magnitude of slowdown, and that just changes the entire design of these systems. You end up with different protocol schemes.
A
My
sense
is
that
teams
are
going
to
be
much
more
successful
by
picking
a
specific
technology,
mapping
that
to
a
specific
use
case
and
building
a
special
case
network,
moving,
really
fast
against
that,
and
then,
if
they're
successful
there,
then
translating
to
building
another
one
and
another
one
another
one
and
then
maybe
down
down
the
road.
Somebody
could
integrate
those
through
really
good
programming
languages
and
good,
compilers
and
so
on.
But
until
that
happens,
I
think
like
for
the
next
couple
of
years.
A
I
think
a
bunch
of
decentralized
computing
networks
will
emerge,
and
so
what
does
that
mean
for
falcoin?
A
Filecoin
can
be
a
great
l1
to
a
lot
of
different
decentralized
computational
computational
networks
that
can
tune
for
different
use
cases
that
can
tune
for
those
different
products,
and
so
this
is
like
you
know,
huge
opportunity
here
for
developers
to
build
one
of
these
things,
but
for
sps
like
pay
attention
to
these
things,
because
as
they
arrive
and
as
they
emerge
very
like
value
like
you
can
start
running
this
software
and
couple
it
to
all
of
the
data
on
on
filecoin.
A
And
then
this
is
like
a
great
additional
business
right
where
you
can
get
paid
for
the
storage
and
bandwidth
around
the
data
and
the
and
the
computation
around
it,
and
then
think
of
all
these
computations
as
producing
intermediate
artifacts
so
like
whenever
you
want
to
run
a
function
over
stuff,
you
end
up
generating
other
incremental
stuff
and
all
that
needs
to
get
stored,
at
least
for
some
period
of
time,
sometimes
forever
so
running
the
computation
right
there.
A
You
don't
want
to
move
all
that
data
somewhere
else.
So
where
is
going
to
get
stored
right
there
with
you
right.
So
this
is
like
really
really
good
opportunity
for
all
the
sps.
So
what
we
should
be
doing
right
now
is
like
shipping
fem,
so
we're
on
that
prototyping,
these
kinds
of
decentralized
storage,
network,
decentralized,
computation
networks
and
then
have
really
good
flows
for
for
groups
building
and
stuff.
A
You
can
follow
the
back,
allow
project
which
is
kind
of
working
on
building
a
lot
of
tooling
to
enable
this
and
have
some
like
really
cool
demos
already
and
there's
a
number
of
other
computational
networks
that
are
that
are
trying
to
do
this
and
then
sort
of
like
where
we're
headed
long
term.
We
have
to
bust
the
biggest
bottleneck
of
all
which
is
chain
bandwidth.
A
So
this
is
less
a
problem
right
now,
but
my
prediction
here
is
like
the:
it
is
easier
to
scale
blockchains
than
to
program
off-chain
things
like
vastly
easier,
so
my
senses
will
get
to
very
large-scale,
very
large
scale,
scalable
blockchains
through
hierarchy
and
there's
a
team
already.
One
of
the
labs
in
npl
is
building
this
consensus
lab,
and
this
will
come
to
upgrade
the
consensus
of
filecoin
over
time
so
yeah.
This
is
less
relevant
now,
but
but
important
long
term
developer
ecosystem.
A
The
nfc
stuff
is
like,
I
think,
super
interesting,
because
all
of
that
data
is
going
to
add
all
kinds
of
other
important
computational
stuff
and
everything
that's
happening
around
jpegs
right
now
is
going
to
happen
to
large
scale
video
later
and
so,
and
it
does
not
happen
into
video
right
now,
because
the
tools
aren't
there,
we
don't
have
those
good
onboarding
pathways,
but
once
those
are
there,
we
can
then
see
see
all
kind
of
like
systems
develop
around
that
yeah
awesome
growth
from
nfcs
storage
yeah.
A
This
is
one
of
the
things
that
we're
like.
I
think
this
is
going
to
be
really
cool
where
video
and
whole
virtual
worlds
are
going
to
be
able
to
kind
of
use
the
nft
tooling
to
pay
all
the
artists
around
the
world,
making
these
things
because
hey
it's
really
expensive
to
produce
really
high
quality
art
and
then
all
kinds
of
economic
flows
will
be
sort
of
be
built.
A
On
top
of
this,
a
cool
group
to
watch
is
mona,
like
that
there
came
through
it's
a
great
like
falcon
success
story
where,
like
they
went
through
the
one
of
the
accelerators,
I
think
takion.
A
Maybe
if
I'm
and
then
has
over
time
like
grown
with
the
network
and
and
are
scaling
and
say
over
time,
we
can
get
to
like
building
an
open,
metaverse
environment,
and
so
the
think
of,
like
think
of
building
this
in
layers,
where
you
first
get
all
the
data,
you
then
add,
programmability,
you
add
the
coordination
tooling
through
things
like
nfts
or
other
other
protocols
and
and
so
on.
A
I
think,
like
teams
that,
like
have
some,
I
think
right
now
doing
this.
A
large
scale
is
difficult,
so
teams
should
like
focus
on
specific
use
cases
and
try
to
build
composability.
So
one
of
the
things
that
I
think
is
promising
about
mona
is
like
they're
focused
on
the
composability
of
the
rooms
and
enabling
the
rooms
themselves
to
be
created
by
artists,
who
then
make
significant
amounts
of
money
by
selling
by
creating
these
rooms
and
selling
them.
A
But
then,
once
you
have
those
rooms,
you
can
enhance
those
with
programmability
and
different
kinds
of
games
or
different
kinds
of
systems.
On
top
of
that,
so
it's
like
a
really
good
way
of
like
decomposing.
What
traditionally
has
been
like
a
fully
vertically
integrated
thing
within
like
a
large
video
game?
Publisher,
but
now
you
can
start
creating.
You
know
if
you
decompose
that
you
can
create
a
whole
decentralized
economy
in
this
environment.
Cool, sorry, I'm running out of steam, so I'm going to conclude quickly. For devs, I think hackathons are really cool and valuable. For SPs, if you want to accelerate the use cases, consider hosting an event, and consider hosting hackathons around your communities. We have SPs everywhere; you could host things like FIL Lisbon, or FIL Toronto, which is going to be hosted soon, and so on.

This can really help accelerate the development of applications. Beyond that, think of also leaning into bounty-style projects that enable large-scale problems to be solved by putting up FIL as bounties; there's a lot of gig work that's going to be enabled through tools like this. Cool. I'll mention something briefly about public goods, because this might help solve some of the coordination challenges.
A
The
public
goods
ecosystem
is
by
the
way.
Let
me
know
in
timing,
am
I
like
I'm
probably
running
like
way
late?
Am
I
one
hour
in
great,
so
we
have
30
more
minutes.
Is
that
true,
yeah,
okay,
cool
great,
so
the
whole
public
good
secret
system
raise
your
hands?
A
If
you
like,
have
heard
about
this
stuff
or
and
like
vaguely
a
little
bit
okay
cool,
so
this
whole
ecosystem
is
about
using
coordination
primitives
to
organize
large
groups
of
people
and
organizations
to
build
things
that
are
valuable
to
those
communities
and
so
usually
kind
of
bringing
together
a
set
of
participants,
coordinating
them
through
a
mechanism
to
produce
some
like
valuable
components
to
for
for
them.
A
There's
a
lot
that's
possible
here
and
at
the
moment
we
need
a
lot
of
people
to
kind
of
help,
design
these
kinds
of
systems
and
over
time
we
can
get
to
solving
super
large
problems.
I
sort
of
expect
that
we'll
get
to
a
point
where
we
can
define
slas
like
service
level
agreements
and
attach
funding
to
those
slas
through
a
public
mechanism
and
have
that
coordinate
large
groups
of
people
and
organizations
against
that
meeting.
That
sla
at
the
end
of
the
day
like
this
is
how
large
corporations
work.
A
You
can
like
sort
of
look
at
large
corporations
squint
and
to
just
see
how
they
coordinate
and
then
create
public
versions
of
those
mechanisms
in
these
networks.
You'll
have
to
tweak
them.
You'll
have
to
like
make
sure
that
it
makes
sense.
But
if
you
do
that
kind
of
stuff,
then
you
can
create
a
very
very
like
powerful
comments,
oriented
environment,
so
things
like
super
promising,
but
it's
very
early.
A
So
I
think
things
like
impact
evaluators
grants
milestone.
Bounties
are
like
early
examples.
Many
like
new
public
funding
mechanisms
will
emerge.
The
ethereum
community
is
really
into
this.
Bitcoin
is
really
into
this.
Optimus
is
really
into
this
there's
a
great
talk
that
puja
gave
about
like
many
different
kinds
of
incentives,
around
maintenance.
That
kind
of
stuff
is
really
important.
Think
of
all
the
stuff
where
we're
building
incentivizing
ongoing
ongoing
maintenance
is
valuable,
yeah
all
kinds
of
like
really
useful
mechanisms.
A
This
was
kind
of
like
funding
deployment
and
yeah,
like
I
think
one
one,
that's
like
so
early
that
I
don't
even
have
a
good
graph
for
this.
Yet
the
my
sense
is
like
you
can
create
a
kpi
and
then
have
an
impact
evaluator,
releasing
value
against
that
kpi
and
have
a
public
option
and
just
kind
of
like
let
that
run,
and
then
you
have
like
maximization
of
that
kpi.
A
trivial
example
of
this
is
the
bitcoin
hashrate
auction
right.
A
So
the
bitcoin
network
is
really
like
a
currency
that
built
on
top
of
this
crazy
hashrate
auction
system.
Where,
like
at
every
round,
it's
just
gonna
auction
off
a
certain
amount
of
bitcoin
for
whoever
presents
you
know
a
ton
of
hash
rate
and
it's
like
very,
very
simple.
It
doesn't
have
like
many
other
mechanisms
and
that
has
created
one
of
the
largest
consumers
of
energy
on
the
planet
like
in
like
a
decade
right.
A
It
went
from
nothing
to
you,
know,
decade
and
a
half,
I
guess,
and
so
that's
the
power
of
these
kinds
of
systems
of
an
open,
permissionless
guaranteed
impact
evaluator
against
the
kpi.
It's
just
like.
Let's
pick
some
good
impact
because
getting
more
shot
to
v6
hashers
like
dissipate.
Energy
is
probably
not
the
best
use.
Think
of
like
this.
A
The
five
coin
block
reward
as
associated
with
like
building
out
capacity
and
now
building
out
useful
data
storage
through
falcon
plus,
but
there
are
many
other
kpis
that
we
want
to
be
able
to
like
incentivize,
so
think
of
once
ibm
lands
being
able
to
run
these
kinds
of
like
public
auction
things
where,
if
you
have
a
good
way
of
verifiably
testing
that
people
are
showing
up
with
some
resource
or
some
value
created
or
some
impact
than
just
rewarding
them
proportionately.
A
Auction
mechanisms
are
super
super
powerful
because
they
just
cut
through
a
lot
of
the
coordination
bottlenecks
like
having
a
public
reliable
auction
that
is
an
ongoing
system
and
just
so
declared
into
the
market,
is
one
of
the
fastest
ways
of
getting
large
activity
to
happen.
So
supply
chains
work
this
way
the
google
ad
model
works.
This
way
all
kinds
of
systems
are
based
on
auctions.
A
Cool
again,
there's
a
conference
coming
up
in
a
couple
weeks
about
this
kind
of
these
mechanisms,
so
maybe
see
you.
There
probably
want
to
talk
a
little
bit
about
microeconomic
climate,
because
it's
very
much
in
the
air
on
a
lot
of
conversations
and
so
on.
I
covered
a
lot
of
this
in
the
ama
last
week,
but
I
want
to
touch
a
little
bit
on
the
world
economy,
crypto
and
then
filecoin.
A
So
everyone's
probably
tired
of
like
seeing
this
but
like
the
macro
economy,
is
like
not
doing
great
everyone's
freaking
out
but
in
context
like
it's
kind
of
hard
to
like
we
had
just
an
insane
level
of
growth
over
the
last
two
years,
and
certainly
the
last
decade
so
in
context
like
this
kind
of
makes
sense.
One
important
thing
to
to
notice,
though,
if
you
like
go
and
plot
out
the
numbers,
I
don't
think
we're
at
the
bottom
yet,
but
like
so
it
could.
It
could
either
go
sideways.
A
It
could
we
could
have
like
another
v-shaped
recovery
or
or
it
could
go
down
a
lot
more
so
like
in
this
climate.
You
want
to
be
careful.
You
want
to
be
like
make
sure
that
you
have
good,
don't
like
the
only
cost
here,
and
you
can
see
tons
of
people
over
decades.
Talking
about
this,
the
only
problem
here
is
or
like
the
main
problem
for
businesses
is
being
able
to
raise
capital.
A
So
if
you
don't
need
to
raise
capital-
or
you
don't
need
to
raise
a
lot,
then
this
doesn't
impact
you
very
much,
and
so
I
think
like
this
is
where,
like
you
want
to
be
careful
with
like
liquidity
management,
you
want
to
be
careful
with
cash
reserve
and
so
on,
and
so,
but
if
you're
really
well
poised,
then
this
can
be
one
of
the
best
moments
for
growth.
For
you
in
the
crypto
world,
like
you
know,
same
kind
of
thing,
everyone's,
like
oh
crypto,
is
dead
like
every.
A
Like
you
know,
every
every
cycle
we
hear
the
same,
the
same
thing
really
think
of
like
crypto
in
terms
of
like
the.
I
think
it
doesn't
make
sense
to
look
at
it
in
kind
of
like
the
absolute
value.
If
you
look
at
it
in
the
log
scale
and
over
many
years,
like
that's,
how
you
can
see
like
the
cryptos
growth
over
time,
it's
kind
of
in
some
measures
kind
of
crazy.
A
But
at
the
end
of
the
day
like
you
want
to
be
able
to
kind
of
deal
with
the
cyclicality
by
the
way,
there's
a
lot
in
bitcoin
itself.
That
causes
this
cyclicality.
A
When,
when,
when
you
have
a
peak
in
bitcoin,
and
then
you
have
like
a
crash
and
then
a
lot
of
hashrate
disappears,
then
a
lot
of
bitcoin
starts
being
sold
for
less
value,
because
that's
effectively
what
the
auction
model
is
doing,
and
so
again
like
my
guess-
is
that
the
that
the
action
model
of
bitcoin
itself
and
a
hash
rate
drop
is
contributing
to
like
the
long
recovery
of
bitcoin
and,
unfortunately,
the
rest
of
the
crypto
spaces
were
pegged
to
bitcoin.
A
At
the
moment,
it'll
probably
take
a
few
years
to
get
get
unpacked
against
that,
and
so
we're,
like
all
of
the
other.
The
rest
of
the
crypto
worlds
are
like
swings
with
bitcoin,
which
is
great
when
it's
going
up
but
like
bad
one
is
coming
down.
A
The
other
thing
to
to
mention
here
is
like
remember
how
like
volatile
the
crypto
space
is
like
this
is
ethereum
in
the
2017
cycle,
like
it
went
from
85
to
1400,
ish
and
then
back
down
to
85
in
the
span
of
like
a
year.
It's
a
lot
right
and
you
know,
look
at
the
teamwork
today.
Look
at
eath
now
and,
like
you
know,
massive
growth
and
tons
of
applications,
and
so
on.
Other
point
here
is
like
the
last
cycle.
Things
went
down
a
lot
more.
A
The
cycle
went
up
yet
at
that
bottom
and
that
last
cycle
was
during,
like
a
private
equities
boom,
not
a
private
equities
crash.
So
it's
really
unclear
what's
gonna
happen,
but
we
we
might
see
like
lows
even
further
or
or
not
right.
A
So
it's
one
of
these
things
where,
like
the
best
thing
you
can
do,
is
be
prepared
for
all
three
outcomes,
like
things
crashing
further
things
becoming
stable
and
choppy
for
like
a
while
or
things
recovering
and
booming
again,
and
what
you
don't
want
to
do
is
get
into
a
position
where
it's
very
difficult
to
maneuver,
where
you
sort
of
commit
to
one
to
one
bet
and
it's
very
difficult
to
adjust.
A
If
the
market
turns
out
to
be
something
different,
so
acknowledge
that
there's
at
least
three
different
things
that
could
happen
or
three
families
of
things
that
could
happen
and
know
that
even
like
you
don't
have
a
50
50
bet,
if
you
bet
on
one
of
those
things
you're
most
likely
wrong
and
so
just
be
prepared
for
for
all
outcomes
and
really
lean
into
like
good,
stable
growth,
again
financial
stability
and
then
good,
stable
growth
and
yeah
bitcoin
was
bigger,
was
kind
of
like
interesting
in
the
last
cycle
by
the
way
paradigm.
A
The
whole
fund,
like
return
its
first
fund
through
that
through
that
low
cool,
so
in
as
a
cool
crypto
world,
it
says
like
this
is
the
time
to
to
build
so
for
sps,
like
here's
like
a
you
know,
focus
areas,
stable
operations,
so
manage
cash
flows
in
this
downturn
really
ensure
survival.
So
this,
like
really
matters,
it's
not
clear
what
the
macro,
recon
condition
will
be
like
six
months
from
now
could
be
worse
could
be
way
better,
but
you
definitely
want
to
take
a
look.
It's a good moment to think about growth: make sure you're really well positioned and grow in the areas you can without accumulating a lot of risk. Think of things that compound. Think about joining forces if things are difficult: look to other friendly SPs you've had really good working relationships with and see if there's some cost cutting you can do together, or something like that. This redoubles the importance of Filecoin Plus.

Now that we've prototyped that program and it's shown really high success, leaning into it to orient towards really important outcomes for the network, like public data, retrievability, and so on, might be a super successful way of helping the entire community deal with this cycle while investing deeply into improving the operations of the network, right?

If we can come out of this downturn with massive-scale public data, computability around it, and fast retrieval for all the public data commons, plus the first data DAOs exploring all this data, that would be a phenomenal place to be. I really think Filecoin Plus is such a cool system.
A
It
can
achieve
those
kinds
of
things,
but
it
really
sort
of
requires
your
participation
to
like
help
drive
that
program
in
in
different
directions,
to
make
sure
that
it's
like
a
good
structure
for
the
whole
network.
So
think
of.
Think
of
like
broader
participation,
in,
like
think
of
like
the
the
reward
structures
and
so
on.
I
don't.
A: I don't know if this was discussed earlier today, but over time a lot of people have been talking about Filecoin Plus potentially having multiple reward tiers, as opposed to just a single one, for different kinds of operations. If you bring in private data, that's rewarded one amount; if you bring in public data, which is more valuable to the network, that's a different amount, and so on. And, as I mentioned before, an important part here is that if there are good secondary business models, lean into them: as you're creating value, lean into secondary business models to augment your cash flow operations. Any questions here, by the way? I'm sure this is important and top of mind.
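The multi-tier idea above can be sketched as a small quality-multiplier table. Note the tier names and the "public" multiplier below are hypothetical, illustrating the proposal being discussed; today's protocol effectively has two tiers (raw capacity at 1x, verified Fil+ deals at 10x).

```python
# Hypothetical sketch of a multi-tier reward structure for Filecoin Plus.
# Only "capacity" (1x) and the 10x verified-deal multiplier exist today;
# the separate "private"/"public" tiers are illustrative, not protocol.

TIER_MULTIPLIERS = {
    "capacity": 1,   # committed capacity, no deal
    "private": 10,   # verified deal, private data (today's Fil+ multiplier)
    "public": 20,    # hypothetical higher tier for public data commons
}

def quality_adjusted_power(sectors):
    """sectors: list of (tier, size_in_bytes) tuples -> total QA power."""
    return sum(TIER_MULTIPLIERS[tier] * size for tier, size in sectors)

PiB = 2 ** 50
total = quality_adjusted_power([("capacity", 50 * PiB), ("public", 5 * PiB)])
print(total // PiB)  # quality-adjusted power in PiB
```

Under this sketch, a tier change is just a multiplier change; the reward logic downstream of quality-adjusted power stays the same.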
A: Other than the fact that this is live-streamed to the world, this is a small room. Maybe we pause the livestream later. Yeah, so touching a bit on the economics here, these are really good graphs; thanks to the Starboard team for putting all of this together. It's super valuable for the network to be able to look at this kind of data, and over time this will improve. It'd be really awesome to see the new committed deals in terabytes, and to be able to segment that out by region, by use case, by specific client, so you can see what's working really well and grow from those. Oh yeah, and somebody asked me about five petabytes and so on.
A: The network wants to be in kind of the 80 range; that's where the parameters are designed to be, and you hit that with that onboarding rate, the 50-60 petabyte scale of onboarding, which is the same as five petabytes of Filecoin Plus in terms of collateral ratios. That's why hitting five petabytes a day is really good for the network. Plus, five petabytes a day of really valuable public data is a great way to get Filecoin to be the largest-scale storage network, storing humanity's most valuable data.
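The equivalence above follows from the 10x quality multiplier on verified Fil+ deals: 5 PiB/day of Fil+ deals adds roughly the same quality-adjusted power (and therefore a similar collateral requirement) as 50 PiB/day of raw committed capacity. A quick back-of-the-envelope check:

```python
FILPLUS_MULTIPLIER = 10  # quality multiplier for verified (Fil+) deals

def qa_power_pib(raw_pib=0.0, filplus_pib=0.0):
    """Quality-adjusted power, in PiB, for a day's onboarding."""
    return raw_pib + FILPLUS_MULTIPLIER * filplus_pib

# 50 PiB/day of raw capacity vs 5 PiB/day of Fil+ deals:
assert qa_power_pib(raw_pib=50) == qa_power_pib(filplus_pib=5) == 50
```

Since collateral scales with quality-adjusted power, the two onboarding rates put similar demands on the network's collateral pool.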
A: So this is a really good alignment of incentives, where the goal the network wants to hit and the broader economic structures orient towards that pathway. But to hit five petabytes a day, we really need to lean into those data onboarding pipelines that are currently offline; we need to improve the links and so on. Cool. And there's probably a lot we can discuss here at CryptoEcon Day, so if you want to dive deep into this and understand more about the mechanisms, that'll be a good time; ZX and a number of the cryptoecon folks will be there, so you can dig into a bunch of questions. Cool, so, last few slides. I think: let's get to large-scale onboarding rates, six petabytes of Filecoin Plus deals. So, more than capacity: capacity is great, but valuable data is way better. Let's aim for that.
A: As a community, we should be focusing on onboarding really valuable use cases, and that really translates into making sure the current pilots go really well, and then bootstrapping the success of those pilots into something bigger. Great. And last comments for startups and developers: same kind of thing, stable operations, funding, get to profitability. And if you're just an individual contributor to the community, there are tons of people hiring through this period. While the large companies in the tech world are doing hiring freezes, most of the groups I know around this ecosystem and crypto are all hiring and growing, so this is a great time to hop over from those environments.
A: You can also find a lot of gig work through things like DAOs. Last thing: good opportunities to hang out with us. In the past we've had the Bangalore build space in India, which I heard was amazing; I really hope to go there sometime this year or next. Filecoin Orbit in Lagos. The Sustainable Blockchain Summit; I think there's another one coming up, by the way. I've gotten a ton of really positive attention and comments about this kind of stuff. The fact that Filecoin can be the first fully green storage network is a huge deal. No other storage network on the planet is quite doing this, so we can beat the rest of the world to it, we can show how it's done, and you can show how to decarbonize the planet.
A: This will be super valuable as a success case. I think finding ways of getting there might mean connecting the whole Filecoin Green REC effort to government subsidies for green energy; that might be a super good flow for bringing economic activity into the Filecoin network, giving credit to the Filecoin network for doing all this valuable work. ETHAmsterdam was pretty awesome, lots of hacking going on. Paris, PDP, and so on. We're now in Austin, tons of events happening this week; a huge thank you to the Filecoin Foundation for organizing all this, a huge undertaking. So thank you. And there's lots going on ahead. I think we have FIL Toronto, is that right, Charles? Awesome, the fourth through the sixth. HackFS is an online hackathon.
C: So this is less a question, but I'm trying to recalibrate my comprehension of the world. I would consider myself a storage enabler, not so much a storage provider, but I have worked on a number of big KV-store systems and I'm familiar with how DNS and all that works. The indexer, in terms of the retrieval providers and so on, plays a super critical role, I think, in helping us reach critical mass in terms of functionality. But I've looked at the indexer code and I don't see anything intrinsically self-balancing about it that allows it to scale. So what I'm actually looking for is a self-balancing hierarchy of the indexers, by way of hash allocations or something like that. Yet you've targeted a 10-millisecond retrieval time from the indexers to identify which providers are providing the content.
C: We can't expect that from big blobs and whatnot. And then I've got a part-two question that follows on from that. Is there a plan to get the indexers cooperating in an ad hoc network and rebalancing themselves, so that, like a DNS tree traversal, we can get to the sub-block?
A: Yep, two things here. One is we're coming from running a large-scale DHT in IPFS, which deals with all these problems when you're setting up new connections, and we're trying to move to a spot where we can do really high-throughput retrieval of records by leaning into fewer machines. One of the important pieces here is that, for most applications, you can fit all of the provider records on one machine. It turns out that we now have a lot of storage and a lot of machines, so you can fit a vast number of records for large-scale applications into single machines, say single petabyte-scale machines, and then you can replicate the entire thing and serve read-only copies of it. So that's one insight. The second insight is that there are straightforward distributed-systems algorithms to do that kind of sharding, and we can lean into those at some point; I think the indexer team is already planning for that and going in that direction.
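One standard sharding technique of the kind mentioned here is rendezvous (highest-random-weight) hashing: each key is assigned to the node with the highest hash score, and removing a node only remaps the keys that lived on it. This is a minimal illustrative sketch, not how the network indexer actually shards today; the node names and key are made up.

```python
import hashlib

def _score(node: str, key: bytes) -> int:
    """Deterministic pseudo-random weight for a (node, key) pair."""
    return int.from_bytes(hashlib.sha256(node.encode() + key).digest(), "big")

def shard_for(key: bytes, nodes: list[str]) -> str:
    """Pick the shard node with the highest hash score for this key."""
    return max(nodes, key=lambda n: _score(n, key))

# Hypothetical index shards and a CID-like key:
nodes = ["index-us", "index-eu", "index-ap"]
print(shard_for(b"bafy-example-multihash", nodes))
```

The useful property for an index is the minimal-disruption one: adding or removing a shard node leaves every key whose owner survives on the same node, so a growing indexer fleet doesn't reshuffle the whole record set.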
A: But right now the bottleneck is latency of retrieval, not dealing with too large a scale for an indexer. So you can think of us ending up at a spot where we have the same index replicated across a bunch of regions, and over time, as people add to that index, the additions diffuse through that network. The other thread here is privacy-preserving indices. Today, most distributed systems like this, especially in peer-to-peer and certainly in blockchains, are public: you can observe who's writing the index and you can observe who's reading the index, and that's an important privacy concern. There are many different cryptographic structures for doing this kind of privacy-preserving indexing, and those add about a log(n) factor to the lookup, because when you do a query you have to touch a bunch of parts of the memory to hide the access.
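The log(n) overhead can be seen in a toy tree-based sketch in the spirit of ORAM: each key is mapped to a random leaf, and a read fetches every bucket on the root-to-leaf path, so the server observes only a path, never the exact bucket. This is only an illustration of the cost; real schemes such as Path ORAM also re-randomize the key's position and rewrite the path after every access, which this sketch omits.

```python
import secrets

class ToyTreeIndex:
    """Toy tree index: lookups touch ~log2(n) buckets to hide the access."""

    def __init__(self, depth: int):
        self.depth = depth          # tree has 2**depth leaves
        self.buckets = {}           # heap-style node id -> {key: value}
        self.position = {}          # key -> leaf index

    def _path(self, leaf: int):
        """Node ids from the given leaf up to the root (node id 1)."""
        node = leaf + 2 ** self.depth
        path = []
        while node >= 1:
            path.append(node)
            node //= 2
        return path

    def put(self, key, value):
        leaf = secrets.randbelow(2 ** self.depth)
        self.position[key] = leaf
        # store the record in the leaf bucket on that path
        self.buckets.setdefault(self._path(leaf)[0], {})[key] = value

    def get(self, key):
        touched = 0
        result = None
        for node in self._path(self.position[key]):
            touched += 1            # every bucket on the path is read
            result = self.buckets.get(node, {}).get(key, result)
        assert touched == self.depth + 1   # the log-factor cost per lookup
        return result
```

A plain hash-table lookup touches one bucket; here every query pays depth + 1 bucket reads, which is the log(n) multiplier mentioned above.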
A: So, just thinking about storage density rates and the fact that you want things in close proximity, we will likely end up with a tree structure, but that tree doesn't have to be very deep; it might be just one layer deep, because there is a lot of hardware now. You can get these petabyte-scale indices and just lean on machines like that. So you can expect that the indices will be large, but you have full copies of those, or close-to-fresh full copies, in a bunch of regions, and then scale that way. The current bottleneck is around retrievability of a lot of records, so let's solve that problem first, and then later we can upgrade by adding sharding to those indices.
A: There's also another bet here, which is that the consensus scaling approach I described earlier, with hierarchical scaling, might apply to indexing too; you might be able to generate an index out of a network like that. So there are multiple paths being explored right now, and we'll see, say in a year, where the network is and what the bottom lines are. But I think the main problem right now is throughput of retrievability and being able to just store a lot of indices.
C: Second question, and it follows on from that. I see the indexers having an opportunity to monetize their role in the ecosystem, not unlike what Google AdWords does, with an anonymized client base and anonymized data, by analyzing the traffic patterns on the network. I haven't seen anybody plot a course on that, where there's a secondary benefit to being an effective indexer in the world, because now you've got relationships between consumers and producers and holders of content. The last part is on the retrieval providers. On every single one of your punchline slides you listed the opportunities, and I have a vision of specialized retrieval providers, say in NFT content, that become a portal to application-specific domains. I just wanted to sanity-check that I was reading between the lines correctly: the RPs, in the generic context as we're bootstrapping, serve one role, but they might become, in effect, like a GitHub of the Filecoin network, a specialized kind of retrieval provider.
A: Yep, that's a great point. So maybe I'll first talk about the indexers. You're totally right that that can be monetized, and there could be an incentivized protocol there to incentivize indexers to exist. I think incentivizing indexing in the clear is probably not necessary, in that it's not that expensive to run these indices, and I would rather incentivize the privacy-preserving ones once we figure them out. But I do think that, once the FVM is out, people will probably try this, and they will have really fast, comprehensive indices that maybe you pay with Filecoin to access, or there might be L2s; people might run this as an L2 network.
A: And part of this: for many years we've talked about how you can do an incentivized DHT, with really large DHT nodes, and solve most of the problems that DHTs have by having these large-scale nodes and a much smaller DHT. So you can follow that trajectory, and that does make a lot of sense, and I think the space is open for that. But given the problems these indexers have, solving the privacy-preserving ones, where you have read and write privacy fully, is a really important component. And once things get incentivized, it's difficult to change the trajectory too much, and adding privacy-preserving components in particular will be a much harder change. So I would rather use the public, in-the-clear indexes as a stepping stone while we solve the privacy-preserving indices, and then incentivize those; and those will need more storage and more computation.
A: So there's an even stronger argument as to why you really need to incentivize those, with the added caveat of: be careful incentivizing privacy-preserving things, because there are constraints there. The second point, on the RP networks: absolutely right. I think there'll first be some generic RP systems, but delivering super high-quality UX for specific verticals might mean there are some special-case ones. A video CDN functions very differently than a static-asset one. But it's likely that the primitives are much closer, so one network that does more might work. This is a little different from the decentralized computation case, because there the differences are super deep: differences in the architecture of the cryptographic protocols, in how you compose those, in how you incentivize them. You end up generating completely different techniques, which you might wrap into one product, which is difficult.
A: So that's why I think in that case we'll definitely see different networks. In the RP case we might see both: some special-case ones and some generic ones that work for most cases. So maybe a good strategy is to go for a generic one, and then, in those specific verticals where a differentiated UX really matters, create one of those at that point. Video is a good one.