From YouTube: SnapDeals: Lightweight Sector Update Protocol
Description
SnapDeals is a new extension to the Filecoin protocol unlocking the 12EiB capacity of the Filecoin network for data storage. Join and explore this new capability, including its benefits for both storage providers and clients.
Hello, my name is [inaudible], and I'm talking about SnapDeals. SnapDeals are lightweight sector updates. So let's start: why are we even introducing this? Committed capacity sectors currently represent 99% of Filecoin's 12 EiB of storage. Committed capacity sectors are not storing any data; they are just miners signaling that they are committing to storing some data in the future.
Currently, the available pathway for updating capacity sectors to sectors storing data is equivalent to sealing new sectors, so many miners choose to instead seal a new sector. This is the primary drawback of the current update system, and it's what we are trying to improve. Sealing a new sector is also a very high-latency process; we can reduce the latency in the system by using an existing replica, an existing sealed sector, instead of sealing a new one. So what are SnapDeals? SnapDeals are a lightweight way of updating a committed capacity sector with client data.
The update leverages existing properties of the committed capacity sector and uses them to create a sector with client data. Storage providers can use an existing sector instead of sealing a new replica, which is a property we have been looking for since we launched Filecoin a year ago. The planned release for this feature is Q1 2022. Let's jump into how it works, and then we'll look into what it provides in general.
If you have any questions, please leave them in the Q&A section of the Orbit event; I will look at them at the end. So, how do SnapDeals work?
In essence, we encode the client's data into the committed capacity sector with modular multiplication and addition. It's not an easy step to comprehend, but in essence it works like storing data into the sector. We don't store the plain data into the sector; we randomize it first, to preserve some properties of sealed sectors. You can read more about it in FIP-0019. So we perform the encoding process, and then we prove the correct execution of the encoding on chain.
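To make that concrete, here is a minimal sketch of the encode/decode arithmetic, assuming a per-node encoding of the shape replica = key + ρ·data over a toy prime field; the real protocol works in the BLS12-381 scalar field and the normative formula is in FIP-0019, so the names and constants below are illustrative only.

```go
package main

import (
	"fmt"
	"math/big"
)

// Toy prime modulus standing in for the BLS12-381 scalar field
// used by the real proofs (illustrative only).
var p = big.NewInt(2305843009213693951) // 2^61 - 1

// encodeNode computes one node of the updated replica:
//   replica = key + rho*data  (mod p)
// where key is the node of the sealed committed-capacity sector
// and rho is the per-node randomness.
func encodeNode(key, data, rho *big.Int) *big.Int {
	out := new(big.Int).Mul(rho, data)
	out.Add(out, key)
	return out.Mod(out, p)
}

// decodeNode inverts the encoding, which is what lets clients
// retrieve their data later:
//   data = (replica - key) * rho^-1  (mod p)
func decodeNode(key, replica, rho *big.Int) *big.Int {
	out := new(big.Int).Sub(replica, key)
	out.Mul(out, new(big.Int).ModInverse(rho, p))
	return out.Mod(out, p)
}

func main() {
	key := big.NewInt(123456789) // CC replica node
	data := big.NewInt(42)       // client data node
	rho := big.NewInt(987654321) // randomness for this node

	replica := encodeNode(key, data, rho)
	fmt.Println("encoded node:", replica)
	fmt.Println("decoded data:", decodeNode(key, replica, rho)) // 42
}
```

Because the encoding is invertible given the sector key and ρ, the storage provider keeps the ability to serve the client's data back, while the on-chain proof shows the encoding was performed correctly.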
So why does this encoding process even exist? What the randomization brings is that we want to create a replica that is incompressible. That's what gives Filecoin its security, and the update process has to preserve that property. The essential part is the ρ (rho) algorithm here; you can see it on the slide.
The ideal algorithm for ρ would be pure randomness, but we cannot use that due to constraints around decoding the data later, when clients want to retrieve it. We settled on bucketed randomness: instead of having as many randomness values as there are nodes in a sector, we limit the number of possible randomness values.
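Bucketed randomness can be sketched as follows: instead of hashing every node index into its own ρ, only the high bits of the index are hashed, so all nodes in the same bucket share one ρ value. The hash, the number of bucket bits, and the exact inputs here are assumptions for illustration; the normative derivation is specified in FIP-0019.

```go
package main

import (
	"crypto/sha256"
	"encoding/binary"
	"fmt"
)

// Illustrative parameters; FIP-0019 fixes the real hash, bucket
// count, and digest-to-field mapping.
const (
	nodeIndexBits = 30 // e.g. a sector with 2^30 nodes
	bucketBits    = 10 // 2^10 = 1024 distinct rho values
)

// rho derives the randomness for a node from phi (a value binding
// the update, e.g. a hash of the new data commitment and the old
// replica commitment) plus only the high bits of the node index,
// so every node in a bucket shares one rho.
func rho(phi []byte, nodeIndex uint64) []byte {
	bucket := nodeIndex >> (nodeIndexBits - bucketBits)
	var buf [8]byte
	binary.BigEndian.PutUint64(buf[:], bucket)
	msg := make([]byte, 0, len(phi)+8)
	msg = append(msg, phi...)
	msg = append(msg, buf[:]...)
	sum := sha256.Sum256(msg)
	return sum[:]
}

func main() {
	phi := sha256.Sum256([]byte("binds new data and old replica")) // stand-in
	fmt.Printf("rho(node 0):    %x...\n", rho(phi[:], 0)[:8])     // same bucket
	fmt.Printf("rho(node 1):    %x...\n", rho(phi[:], 1)[:8])     // as node 0
	fmt.Printf("rho(node 2^29): %x...\n", rho(phi[:], 1<<29)[:8]) // new bucket
}
```

This caps the number of ρ values a client needs to reconstruct when decoding, while still tying the randomness to the specific update.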
We have known about the possibility of updating committed capacity sectors for a while now, and we pursued that opportunity, but we hadn't released anything until now because we stumbled upon a major blocker, which was the randomness requirements of this process, as we just discussed on the previous slide. The randomness for the encoding process is very important because it preserves the security of the sector update process and of Filecoin as a whole.
So we have two needs for randomness. One is for encoding, the ρ values I showed you previously, and the other is challenge generation, which is used for generating challenges when a miner wants to prove an update. Normally we would use interactive randomness; we do that, for example, during the PoRep process. Interactive randomness allows us a smaller proof but increases the delay: we need to wait for the chain to be settled for sure before we can use that randomness value.
The interactive randomness in this case required two stages, which introduced three hours of delay, and this was judged to be a major blocker for this protocol, because it wasn't as good as it could have been. We were able to remove the requirement for interactive randomness by using, one, a grinding-resistant encoding function, which I showed you previously and which let us remove the interactive randomness from the first stage, the encoding; and two, a Fiat-Shamir transform for the challenge generation, instead of using interactive randomness for the challenges.
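The Fiat-Shamir transform replaces the wait-for-the-chain step by deriving the challenges from a hash of the prover's own commitments, so no settled on-chain randomness is needed before proving. A minimal sketch, assuming SHA-256 and made-up input names and counts (the normative choices are in FIP-0019):

```go
package main

import (
	"crypto/sha256"
	"encoding/binary"
	"fmt"
)

// fiatShamirChallenges derives node-index challenges from the
// prover's commitments instead of interactive (on-chain)
// randomness: the prover cannot grind the challenges without
// also changing the commitments it must stand behind.
func fiatShamirChallenges(commROld, commRNew, commDNew []byte, numNodes uint64, count int) []uint64 {
	h := sha256.New()
	h.Write(commROld)
	h.Write(commRNew)
	h.Write(commDNew)
	seed := h.Sum(nil)

	challenges := make([]uint64, count)
	for i := range challenges {
		// Stretch the seed with a counter into as many
		// challenges as the proof needs.
		var ctr [8]byte
		binary.BigEndian.PutUint64(ctr[:], uint64(i))
		digest := sha256.Sum256(append(seed, ctr[:]...))
		challenges[i] = binary.BigEndian.Uint64(digest[:8]) % numNodes
	}
	return challenges
}

func main() {
	commROld := sha256.Sum256([]byte("comm_r_old"))
	commRNew := sha256.Sum256([]byte("comm_r_new"))
	commDNew := sha256.Sum256([]byte("comm_d_new"))
	fmt.Println(fiatShamirChallenges(commROld[:], commRNew[:], commDNew[:], 1<<30, 4))
}
```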
What does this give us? We get the possibility of onboarding user data into a sector and proving it on chain in one hour. This is a major improvement over the five to six hours it takes to seal a new sector, and this one hour is not a hard limit: whenever we improve our proving performance, this time frame shrinks.
So if you have committed capacity sectors, you can include client data in them without resealing the sectors. We also gain a simple one-message update protocol, which is a win in and of itself, because the implementation is way simpler and the integration on the storage provider side is also way easier.
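For a sense of what "one message" means, here is a rough sketch of what that single on-chain message carries. It is modeled on the ProveReplicaUpdates method that FIP-0019 adds to the miner actor, but the field names and types below are approximations, not the normative actor code.

```go
package main

import "fmt"

// Rough sketch of one entry in the single update message a storage
// provider sends to upgrade a CC sector with deal data (modeled on
// FIP-0019's ProveReplicaUpdates; fields are approximate).
type ReplicaUpdate struct {
	SectorNumber    uint64   // the CC sector being upgraded
	Deadline        uint64   // where the sector is currently proven
	Partition       uint64
	NewSealedCID    string   // commitment to the encoded replica
	DealIDs         []uint64 // deals whose data was encoded in
	UpdateProofType int64
	ReplicaProof    []byte   // SNARK attesting to correct encoding
}

// The message batches many updates; contrast with sealing's
// two-message PreCommit/ProveCommit flow with a forced wait between.
type ProveReplicaUpdatesParams struct {
	Updates []ReplicaUpdate
}

func main() {
	params := ProveReplicaUpdatesParams{Updates: []ReplicaUpdate{{
		SectorNumber: 17,
		NewSealedCID: "bagboea4b5abca...", // illustrative CID
		DealIDs:      []uint64{101, 102},
	}}}
	fmt.Printf("updating %d sector(s) in one message\n", len(params.Updates))
}
```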
The quicker onboarding of client data unlocks new use cases which were previously impossible due to the time frames. Five to six hours is unacceptable for some clients, but now we'll have a protocol that allows us to onboard this data in an hour or less. Storage providers can also upgrade their existing sectors with Fil+ deals, without resealing, to increase their quality-adjusted power by a factor of 10, which will be an amazing opportunity for many small and medium-sized miners who focus on the client side of things.
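The factor of 10 comes from the network's quality multiplier for verified (Fil+) deals. A simplified sketch of the arithmetic, assuming the whole sector holds one kind of content for its full lifetime (the real formula weights deal space-time):

```go
package main

import "fmt"

// Effective quality multipliers: ordinary space counts 1x,
// verified (Fil+) deal space counts 10x. Simplified from the
// network's space-time-weighted formula.
const (
	baseMultiplier     = 1.0
	verifiedMultiplier = 10.0
)

// qualityAdjustedPower returns the sector's power given the
// fraction of its space filled with verified deals.
func qualityAdjustedPower(sectorBytes uint64, verifiedFraction float64) float64 {
	q := verifiedFraction*verifiedMultiplier + (1-verifiedFraction)*baseMultiplier
	return float64(sectorBytes) * q
}

func main() {
	const sector = 32 << 30 // 32 GiB sector
	fmt.Printf("CC sector QAP:           %.0f bytes\n", qualityAdjustedPower(sector, 0))
	fmt.Printf("fully Fil+ upgraded QAP: %.0f bytes\n", qualityAdjustedPower(sector, 1))
	// Upgrading from CC to all-verified deals multiplies
	// quality-adjusted power by 10.
}
```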
The current update pathway was used by some storage providers exactly because of this property, and SnapDeals expands it and allows you to do it without resealing. Many smaller miners struggle with collateral requirements, and I can tell you we're looking into storage provider lending programs and storage provider insurance, such that lending programs can become cheaper.
Another thing is that storage providers can onboard client data with higher throughput, which is a win both for storage providers and for clients. Some very big clients have a lot of data to onboard, and onboarding throughput is the major blocker for them. It will no longer be required to compute and seal a complete replica to onboard client data onto the network.
These resources can instead be used to update existing committed capacity sectors. So, in essence, committed capacity sectors become a buffer for future client data: you can imagine storage providers building up the buffer and then using it to onboard client storage. And finally, most important of all, we unlock the 12 EiB of latent storage on the Filecoin network.
It's very much possible that FIP-0019 will ship with repeated sector updates, so storage providers will be able to insert deal data into a sector, and if there is still some space left in the sector, they can run the process again to insert more data into it; or, if the deal data expires, they can redo the update with new data or even the same data.
We are looking into the next generation of proof of replication, which would allow this protocol to be even more lightweight and cheaper for storage providers and for clients, because clients, at the end of the day, are paying for proving that data. We are working on reducing proving overhead, as you have probably heard; I think there was a talk about it, or there will be one.
There was recently a CUDA prover introduced, which reduced the computation time of proofs by about 40%, which is amazing, and it will get even faster. As we take these steps, we get even faster onboarding of user data, because the primary time spent onboarding data with SnapDeals is just getting the proof done.