Description
The Filecoin network achieves economies of scale by allowing anyone to participate as a storage provider. Currently the network is made up of more than 3,000 storage providers spread across the globe. In this session, get an update on the improvements in deal-making Filecoin is pursuing.
Jacob: Hello everybody, welcome Orbit people! I'm Jacob. I work on the Ignite team at Protocol Labs, and I'm here with Raúl, Anton, and Dirk. We're going to be talking about the work that we've been doing this year on improving deal making in Filecoin. As part of this, here's a quick overview of the agenda.
Over the past six months we've been working a lot on improving the underlying stability. Our goal is to get to a 99% deal success rate on the network. In this graph you can see deal success on Estuary over the past several months, as Estuary has kicked off.
A lot of the initial work on the DAG store, the storage provider runtime architecture, and data transfer stability has really been about making sure that the functional layer of Filecoin deal making works well and is stable. Something we've noticed after releasing the Lotus 1.11 version is that we've hit a threshold of stability and are starting to work on more; we're still fixing things every day.
But one of the things we're seeing now is a lot of usability problems: making sure that clients and storage providers are able to make deals when they're ready to, making sure that deal transfers are reliable enough and that folks are able to move data when they want to, and also performance.
Raúl: The motivation for introducing the DAG store is that we conducted extensive Testground-based profiling of the deal-making processes, on both the client and the provider side, using Lotus. We realized that there was a large performance bottleneck coming from Badger, the data store we were using to stage IPLD data as we moved it around between the client and the provider. This performance bottleneck was visible even for small data volumes.
Now, Badger is a great database, but the issue is that we were using it for the wrong workloads, workloads that it's actually not well suited for. The reality is that monolithic LSM-tree databases really do not scale well to petabytes of granular, very fine-grained data that needs to be accessed randomly. So, in short, the data store we picked was great, but we weren't being mechanically sympathetic with it in our access patterns. And then we had a key insight.
Data in Filecoin is actually stored at rest as CAR files. So could we read from and write to those CAR files directly? Could we do better? This led us to explore the design space; we came up with alternative solutions, and the one we ended up converging on is the DAG store.
The DAG store is a sharded store that holds large IPLD graphs efficiently, where each graph is packaged as a CAR file. These CAR files can be attached and detached in a location-transparent manner, dynamically at runtime; they can come and go. And all of this is performed with mechanical sympathy, which means that we work with the underlying access patterns and get help from the OS to access that data efficiently.
So what were the results? We shipped the DAG store, along with the storage provider runtime architecture work that Anton is going to be talking about, in Lotus 1.11. This was around the end of July. Here you can see a graph of successful Estuary deals made over time, and you can see that shortly after we launched this new version of Lotus, including these two features, the success rate went up in a step change. We're very happy about that.
We think we're on the right path. And just to give the curious minds a little bit of insight into how the DAG store actually works: the DAG store is an asynchronous, concurrent, event-driven component built for high performance. As I said before, it's mechanically sympathetic with the underlying OS and I/O, and it uses techniques like mmapping and caching to achieve this.
B
As
I
said,
a
key
concept
in
the
dark
store
is
the
shard
and
what
is
a
shard
in
in
falcoin.
A
shard
is
basically
a
deal.
It
is
a
package
of
data
that
is
referenced
by
a
by
unique
key
and
in
filecoin.
This
is
the
deal.
Every
deal
becomes
a
chart
in
the
dark
story.
Now
the
payload
of
the
of
the
chart
is
a
car
v2
file,
or
it
could
be
an
unindexed.
B
The
main
chart
operations
are
register
to
register,
chart
acquire
to
acquire
shot
for
for
reading,
release,
to
release
a
shard
and
destroy
and
destroy
to
destroy
shard
now
shards
are
location
transparent,
which
means
they
can
be
attached
through
various
through
various
mechanisms
like
lotus
file
system
and
and
other
other
types.
B
This
component,
the
the
state
of
the
of
the
dac
store,
survives
these
strides
because
we
persist
the
state
on
disk
and
we
also
have
mechanisms
like
gc
to
reclaim
local
disk
space
safely
and
efficiently,
and
we
recently,
we
also
introduced
a
top-level
index
on
the
dag
store
to
allow
us
to
do
short
routing,
just
based
on
a
cid
of
a
block
that
we
know
is
present
in
some
shard,
but
we
don't
know
which
one
it
is
and
now
over
to
anton
to
talk
about
storage
provider,
runtime
architecture.
Thank
you.
Anton: Okay, hello. I'm going to talk about the storage provider runtime architecture. To give a bit of background on the project, I have to explain how lotus-miner worked before this endeavor. Before the storage provider runtime architecture project, the lotus-miner process ran all of the lotus-miner subsystems within a single process, namely the sealing, proving, and storing of sectors, as well as deal making.
C
All
of
this
was
running
within
a
single
process
monolith
on
miners
systems,
so
the
ghost
of
the
spr8
project
were
to
split
the
market
subsystem
from
the
lotus
miner
monolith
and
in
the
process,
increase,
robustness
and
resilience.
C
Storage
providers
should
not
have
the
stability
of
deal
making
affecting
sealing
and
proving
of
sectors,
because,
due
to
the
economics
of
the
falcon
network
at
the
moment,
sealing
and
proving
of
sectors
is
more
important
to
miners
than
making
deals
with
plans
in
the
process.
We
also
reduce
the
attack
service
because
the
sealing
process
does
not
need
to
be
exposed
on
the
internet
anymore.
Only
the
markets
process
needs
to
be
publicly
available
and
to
allow
for
connection
from
storage
clients.
So here you can see a diagram of how the various processes of lotus-miner interoperate today, after the SPRA project got merged. The markets subsystem became a separate process, communicating with the sealing miner node and the Lotus full node. In this diagram you can see it as a light blue rectangle; it sits behind the firewall and the load balancer and is basically available on the public internet, so that the markets process can communicate with storage clients.
In the Lotus repository you can see that this is pull request 6356, merged a few months ago. So why is this important? Basically, we wanted to refactor the lotus-miner monolith as part of this project in order to increase the robustness and resilience of the system, reduce the possible attack surface, and no longer have the sealing and mining node exposed on the public internet.
C
We
also
wanted
to
reduce
the
operational
risk
that
miners
bear
nowadays,
it's
not
no
longer
necessary
to
basically
run
the
market
subsystem
as
part
of
the
lotus
miner,
so
in
case
something
goes
wrong
with
storage
or
retrieval
deals.
This
is
not
affecting
any
of
the
proving
or
the
ceiling
functionalities
of
the
miner.
C
We
also
wanted
to
be
able
to
make
to
introduce
new
features
to
the
miner
more
easily
and
independently
of
critical
functionality
like
ceiling
and
mining.
C
We
wanted
to
be
able
to
horizontally
scale
the
markets
subsystem
and
introduce
failover,
and
last
but
not
least,
we
wanted
to
spread
the
knowledge
of
the
lotus
miner
within
the
organization
and
be
able
to
onboard
new
developers
and
extend
the
code
base
further
in
the
future
faster.
Dirk: So much of this year we've spent working on performance and reliability improvements for data transfer. One of the main changes we made was for data transfers to detect when the connection goes down and to be able to restart automatically from where they left off.
D
We
reworked
the
accounting
system
for
retrieval
payments
so
that
the
way
the
payment
vouchers
were
processed
didn't
cause
blocking
anymore.
We
did
some
refactoring
in
the
graph
sync
transport.
That's
the
part
of
the
data
transfer
system
that
connects
with
the
graphsync
protocol
and
hana
did
a
lot
of
work
to
improve
graph
sync
memory,
usage
and
reliability.
D
So,
looking
towards
the
future,
as
jacob
said
at
the
beginning,
we
are
we've
sort
of
improved
performance
to
the
point
where
we're
starting
to
surface
some
ux
issues,
so
we're
going
to
be
focusing
quite
heavily
on
ux
and
extensibility
of
the
markets
process.
D
We're
going
to
make
the
deal
acceptance,
filter
more
extensible
so
that
it
can
have
more
parameters
when
it's
deciding
whether
or
not
to
accept
an
incoming
deal.
We
want
to
make
the
data
transfer
protocol
pluggable.
So
at
the
moment
we
use
graph
sync
because
it's
not
a
transfer
protocol,
but
some
groups
have
been
working
on
other
systems,
so
it
should
be
easy
to
plug
those
systems
in
and
we
also
want
to
improve
the
observability
of
the
whole
system
and
in
particular,
what's
happening
with
your
storage
deal.
We want to make it very clear when a provider has stopped accepting deals because they're running low on staging storage space, and we want to list all the ongoing deals and the states they're in, and make it easy to click into a particular deal and see what's going on at the deal level as it goes through each of the states it needs to go through to complete the deal.
Raúl: Hi again. I just wanted to talk a little bit about the dealbot. The dealbot is an automated deal-making machine that we've been using at Protocol Labs to measure the deal success rate throughout the network, and it's been instrumental in validating whether the improvements and bug fixes we've been making have led to higher success rates or not. I want to give a shout-out to the Data Systems team at Protocol Labs, because they actually built the dealbot; I'm just here to talk quickly about it.
B
Now
the
the
next
thing
that
I
wanted
to
to
talk
about
as
well
is
reputation
systems
we
have.
The
fico
network
has
over
3
400
storage
providers
by
now-
and
this
is
wonderful,
but
it
also
comes
with
challenges
from
the
viewpoint
of
the
user.
How
does
a
user
choose
which
provider
to
pick
for
a
given
deal?
B
That
is
like
one
of
the
key
key
challenges
right
now
and
and
users
can
be
interested
in
many
parameters
when,
when
selecting
a
provider
like
reliability,
latency
availability
their
region,
the
good
news
is
that
the
ecosystem
has
partially
already
solved
this
problem
and
apps
like
fill
rep
or
the
textile
bit.
Bot
and
estuary
area
continuously
probing
and
measuring
the
quality
of
service
of
providers
using
their
own
methodology,
and
some
of
them
actually
publish
course
publicly.
B
However,
these
scoring
feeds
are
not
genified
or
traceable,
and
the
falcon
community
is
working
hard
on
standardizing
an
open
protocol
for
sourcing
these
quality
of
service
measurements
and
and
observations
from
deal
makers
all
the
way
through
to
enable
reputation,
providers
to
run
their
own
models
and
post
back
scores.
B
Recording
all
of
this
information
on
chain
and
providing
endpoints
for
deal-making
clients
to
visualize
course
and
list
providers
and
short
providers
based
on
the
score
and
ultimately
choose
a
provider
for
for
a
given
deal.
This
work
is
being
led
by
the
data
systems
team
again
at
protocol
labs
now
over
to
jake
for
closing
remarks.
Jacob: Awesome, thanks everybody. It's been a great year for Filecoin, and we're super happy to be celebrating the anniversary with everyone. A few things that we wanted to touch on: if you are running Lotus and you have yet to split your miner and markets processes, we strongly encourage you to do that. You can check out the Filecoin docs for an overview of how to do it.
A
Let
us
know
if
you
have
any
issues
doing
that
it's
a
great
way
to
protect
your
miner
and
also
benefit
from
all
of
the
improvements
that
we've
made
in
the
storage
provider.
Runtime
architecture
updates
also
we're
going
to
be
reaching
out
to
folks
and
soliciting
the
community
to
get
more
feedback
on
the
markets.
V2
ux.
A
So
please,
once
you
see
that
announcement,
please
reach
out
we'd
love
to
hear
from
you
and
help
us
make
deal
making
easier
than
ever,
which
is
the
path
for
us
to
really
get
to
99
and
build
a
awesome
network
for
the
future.
So
thank
you
all
very
much
and
happy
anniversary.