From YouTube: Filecoin Core Devs #55
Description
Recording for: https://github.com/filecoin-project/core-devs/issues/131
For more information on Filecoin
- visit the project website: https://filecoin.io/
- or follow Filecoin on Twitter: https://twitter.com/Filecoin
Get Filecoin community news and announcements in your inbox, monthly: http://eepurl.com/gbfn1n
Follow Filecoin!
Website: https://bit.ly/3ndAg44
Twitter: https://bit.ly/3ObND0x
Slack: https://bit.ly/3HKfFy7
Blog: https://bit.ly/3HFZFNv
Reddit: https://bit.ly/39N4Jmv
Telegram: https://bit.ly/3bkP8Ly
Subscribe to our newsletter! https://bit.ly/3Oy8J9j
#filecoin #ipfs #libp2p #web3 #nft
A: Welcome to our Core Devs call number 55. In UTC it's 00:00, the middle of the night for so many people joining us today; I'm in Toronto, and it's just about 7:02 p.m. We have a number of things to cover today. Hopefully we are able to go through them quickly, but I'm also hoping we can leave this meeting having met all our objectives. We are going to have a technical discussion, first from Luca and Irene on optimistic SnapDeals. Luca and Irene, we've already agreed that you'd have five minutes to quickly run us through that, and then Alex North will take another 20 minutes, hopefully less, to discuss FIP-0047 and built-in market cron risks and mitigations. We will close today discussing NV19 scope and timeline planning, as well as Jennifer giving us a very quick update on the NV18 timeline. So we do have a lot of things to cover, and with that I'll be handing over to Luca and, I think, Irene; I'm not sure who will...
B: ...be presenting today. I'm presenting, thank you. Okay, so let me start so that I can stay within the five minutes. Irene and I opened a discussion, discussion #645, to present the idea of optimistic SnapDeals. What is that? It's basically a variation of the protocol that gives a trade-off between SLA guarantee and computation overhead. Why did we get into that? SnapDeals was introduced last year. It's great because it allows data to be injected into an existing CC sector, but the drawback of the protocol is that the SnapDeals cost is really significant on the storage provider side, and we will see why. So with this idea we just want to give the storage provider the opportunity for a low-cost alternative for performing deals, at the cost of not having an a-priori guarantee that the data is stored in the sector. This is not something that we cannot check, but, as you will see in a second, checking has some additional cost for the client.

B: So, first of all, why is SnapDeals expensive? It's expensive because we use a SHA-256-based commitment. On the positive side, this commitment is really cheap for the client to compute and verify (next slide, please), but on the negative side it's really expensive for the storage provider to prove in the SNARK, for the extraction of the commitment itself. So the idea of optimistic SnapDeals (next slide, please) is to ask the storage provider to prove SnapDeals not using the SHA-based commitment, but using the Poseidon-based commitment. Why?

B: Because on the storage provider side this translates into a 230x cheaper proof for the SnapDeals proof overall. The drawback is that if the client needs to check this commitment, it would spend 100x more. The positive thing is that, depending on the SLA guarantee the client is willing to accept, this computation is not mandatory. Next, please.

B: So how does the protocol work? Basically, the storage provider submits both the SHA commitment and the Poseidon commitment on chain, and then snaps the data into a CC sector, proving the snap using only the Poseidon commitment. And basically that's it, if everything goes smoothly, until a later point when a client retrieves or attempts to retrieve the data.

B: So what does "optimistic" mean? (The former slide, please.) We said that the main drawback of this protocol is that we do not have an a-priori guarantee that the data is inside the sector. But what if one wants to actually verify that this is the case? You have two options. The first option is that the client checks the Poseidon commitment and, if it's not matching, submits a fraud proof; this can happen both at deal making and afterwards, at any point in time throughout the deal duration. The second option is to submit a fraud proof right away, asking the storage provider to provide a standard SnapDeals proof. These two options are also viable for third parties, so not only for the client or the data owner. Next.

B: To wrap up: optimistic SnapDeals adds, on top of the current protocol, flexibility on the client side, which can now trade off SLA guarantee against computation. If the client trusts the provider, there is no work for the client and a 230x cost saving for the provider. If the client does not really trust the provider, they can either go with standard SnapDeals, or check the Poseidon commitment, or submit a fraud proof. And if a third entity wants to audit the whole process, they can either check the Poseidon commitment or submit a fraud proof, just as the client can. That's it on my side; all comments and suggestions are welcome in the discussion. Thank you.
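The flow described above can be sketched as a toy model. To be clear, this is illustrative only: `poseidon_commit` here is a placeholder built on BLAKE2b, not real Poseidon; all names are invented; and the 230x/100x figures quoted in the talk are properties of the real SNARK circuits, which a sketch like this cannot show.

```python
import hashlib

def sha_commit(data: bytes) -> bytes:
    # Stand-in for the SHA-256-based commitment (cheap for clients to verify).
    return hashlib.sha256(data).digest()

def poseidon_commit(data: bytes) -> bytes:
    # Placeholder for the Poseidon-based commitment (cheap to prove in a SNARK,
    # but roughly 100x more expensive for a client to recompute). NOT real Poseidon.
    return hashlib.blake2b(data, digest_size=32).digest()

class Sector:
    """Toy on-chain record for an optimistically snapped deal."""
    def __init__(self, deal_data: bytes):
        # The SP submits BOTH commitments on chain, but proves the snap
        # against the Poseidon one only (the ~230x cheaper path for the SP).
        self.comm_sha = sha_commit(deal_data)
        self.comm_poseidon = poseidon_commit(deal_data)

def client_check(sector: Sector, expected_data: bytes) -> bool:
    # Option 1: the client (or any third party) recomputes the Poseidon
    # commitment from the data it expects, at extra cost. A mismatch is
    # grounds for a fraud proof (option 2).
    return poseidon_commit(expected_data) == sector.comm_poseidon

data = b"deal payload"
sector = Sector(data)
assert client_check(sector, data)            # honest SP: commitment matches
assert not client_check(sector, b"garbage")  # mismatch: submit a fraud proof
```

The point of the sketch is the shape of the trade-off: verification is optional, so a trusting client does nothing, while a distrustful client or auditor pays the higher Poseidon-recomputation cost only when they choose to check.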
A: Thank you. Any very quick questions that are absolutely important now? Yes, Stephen.
C: Okay, I have one question here: what if there is collusion between the client and the storage provider? Then the proof could be false.
B: So there are two things here. First, storage hardness is not at risk; consensus is not at risk even if the data is faked into the sector. And second, as I said, any third entity can check whether the data is actually inside the sector or not, by either running a fraud proof or checking the commitment, the Poseidon commitment, at a higher cost than the SHA commitment, of course.
C: Okay, and if we want to check all the data proofs, what about the cost? I mean, is that just like what we do for WindowPoSt, right?
B: Yeah, that's why I'm saying: eventually, incentives, or this kind of thing.
C: And what I'm thinking is that, for verified deals, I'm thinking about the security costs; there could be some there.
A: Great, thank you. I will also be sharing the FIP drafts and the discussion thread; you can add your comments after the call and conclude the conversation async. Handing it over now to Alex.
D: Hi everyone, thanks for being here and listening. So I've got two things. First thing:
D: The Filecoin network has this system function called cron, which executes some actor code at the end of every tipset evaluation and is intended to perform system-level processing. The code is executed not on behalf of any one user, and in particular this means no one is paying the gas costs for this work. We can track how many gas units are used, but there is no price for these gas units and no one pays for them. This could be viewed as either a really nice feature or as a design shortcut.
D: It's sort of a bit of both. In some ways it was used as a pragmatic tool to get us some important functionality around the network launch, but we're approaching the point where we need to find better ways to do some of those things. Right now, cron is doing three times more work than the entire target work to validate a tipset.
D: So this directly affects our block validation times. These times are hardware dependent, so they depend on the hardware you're running, but while we're targeting a 2.5-second average tipset validation time, we're actually seeing eight seconds as a median, and much larger numbers for higher quantiles. This matters because a long block validation time affects chain quality.
D: If this gets too long, then some miners will be unable to compute their tipset in time to produce a block on top of that tipset. This would affect smaller operations more than larger operations, because small operations win blocks much less frequently, so when this happens to them they lose a larger share of their income. They're also less likely to have super expensive and redundant hardware to make sure they get these opportunities.
D: It's also bad for decentralization, for similar reasons. It raises the minimum hardware and network requirements to be a fully validating node and keep up with the chain. It particularly raises the cost of syncing the chain if one falls behind the current tipset and needs to catch up.
D: The reason it's possible to catch up is that block validation takes much, much less than 30 seconds. The closer block validation gets to 30 seconds, the longer it takes to catch up if one falls behind by tens or hundreds of epochs, or is catching up from a snapshot (a daily snapshot or something) when syncing a new chain.
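The catch-up arithmetic behind this can be made concrete. With 30-second epochs, a node that validates an epoch in v seconds gains 30 - v seconds on the chain head per epoch processed, so closing a gap of n epochs takes roughly n * v / (30 - v) seconds. This formula is my rendering of the reasoning above, not something stated in the talk; the 2.5-second target and 8-second median are the figures quoted above, the rest is illustrative.

```python
EPOCH_SECONDS = 30.0  # Filecoin epoch duration

def catch_up_time(gap_epochs: float, validation_seconds: float) -> float:
    """Seconds of sync work needed to close `gap_epochs` while the chain
    head keeps advancing every EPOCH_SECONDS."""
    if validation_seconds >= EPOCH_SECONDS:
        return float("inf")  # net progress is zero or negative: never catches up
    return gap_epochs * validation_seconds / (EPOCH_SECONDS - validation_seconds)

# At the 2.5s target, closing a 1000-epoch gap takes ~91 seconds of work;
# at the observed 8s median, ~364 seconds; as validation nears 30s it diverges.
assert round(catch_up_time(1000, 2.5)) == 91
assert round(catch_up_time(1000, 8.0)) == 364
assert catch_up_time(1000, 30.0) == float("inf")
```

This is why validation time approaching the epoch duration is qualitatively different from merely being slow: the catch-up time blows up, not just grows linearly.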
D: So basically, if this gets worse, it is a very serious problem. We're currently at the point where it's uncomfortable, but we don't need to call an emergency to fix it, so long as we do fix it. Next slide, please.
D: The built-in market actor is doing most of this work. Deal maintenance in the built-in market actor today performs a little operation on every active deal every 24 hours. The main operation it performs is transferring an incremental payment from the client to the provider, but it also checks for terminated sectors and things like that; it does the cleanup work after a sector is terminated ahead of time.
D: This is a fundamentally unscalable design, but it was a compromise at network launch to get things going. This design cannot keep working if the number of deals scales much larger. Everything I've said was sort of known for a long time, but the reason this has come upon us quickly is that there have been huge advances in the rate of deal growth over the past six to twelve months, and that has turned out to be the thing causing this problem. So it crept upon us faster than was expected. The built-in market here is doing a thing that is not necessary for a market to do, this incremental deal payment. It's a nice feature, but it's not by any means a necessary part of how Filecoin works. And this cron execution, anything cron does, can be looked at as a subsidy:
D: doing some execution work that no one is paying for. In some cases, when that subsidy is towards a network good, like checking faults or checking for missed WindowPoSt or something like that, we think that's a reasonable thing to do. But in this case, this is not an activity that should be subsidized at all. And in the long term I'm very keen, and I think some others are, that other people will build other market actors that can do a better job than the built-in storage market actor in many different dimensions. But those user-programmed markets won't have cron; there's no way we're going to let cron call untrusted code, including anything written outside of a network upgrade. So this is an unfair advantage that the built-in market actor has, which will be a slight impediment to user markets in competing on functionality.
D: The short-term fix, which I guess is the main thing to discuss today: our short-term fix here is that we can increase the maintenance interval from one day to 30 days, to approximately divide this problem by 30. That will buy us enough time; it will probably be good for the rest of this year, depending on real deal growth rates, and buy us time to do a more permanent fix. So I recommend, and I've done this work with Kubuxu and ZenGround0: we recommend targeting network version 19 for this short-term fix, and I've already written the code for it. It's ready. The things that need to happen to get this into network version 19 are for me to write a FIP and for that FIP to go through governance.
D: That FIP will focus only on increasing this interval to 30 days. But the second thing I think we should do after that is another FIP, which removes automatic deal maintenance from the built-in market actor entirely and puts it on the same playing field that non-built-in market actors will be on, which is that they'll require some kind of external message to trigger the actual transfer of funds for a deal. That message would most likely come from whoever is going to receive those funds after the deal completes.
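The "divide the problem by 30" effect of the short-term fix can be sketched in a few lines. The 30-second epoch duration and the one-day to 30-day change are from the talk; the constant names and the deal count are illustrative, not the actual identifiers or figures from the builtin-actors code.

```python
EPOCH_DURATION_SECONDS = 30
EPOCHS_PER_DAY = 24 * 60 * 60 // EPOCH_DURATION_SECONDS  # 2880 epochs

# Illustrative names, not the real constants in builtin-actors.
DEAL_UPDATES_INTERVAL_OLD = EPOCHS_PER_DAY        # touch every active deal daily
DEAL_UPDATES_INTERVAL_NEW = 30 * EPOCHS_PER_DAY   # every 30 days instead

def cron_deal_ops_per_epoch(active_deals: int, interval_epochs: int) -> float:
    # Deal updates are spread across the interval, so per-epoch cron work
    # scales as active_deals / interval.
    return active_deals / interval_epochs

deals = 30_000_000  # hypothetical active deal count
before = cron_deal_ops_per_epoch(deals, DEAL_UPDATES_INTERVAL_OLD)
after = cron_deal_ops_per_epoch(deals, DEAL_UPDATES_INTERVAL_NEW)
assert round(before / after) == 30  # the fix divides per-epoch cron work by 30
```

The per-epoch load drops thirtyfold, but it still grows linearly with the number of active deals, which is why this only buys time and the second FIP (removing automatic maintenance entirely) is still needed.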
A: I can't see any hand up. Do you want to proceed with the next one?
D: Yeah, okay. Yes, I see no questions. Expect a FIP in the next few days for this, which will be very simple; it'll just be multiplying a constant. Well, it's slightly more than that, but it'll effectively be doing that.
D: Okay, thank you. Another topic: FIP-0047. This is a FIP that was proposed sometime last year, approved and implemented, but not yet activated in the network; it was scheduled for activation in network version 19.
This establishes a mechanism to be used in case we discover a vulnerability in our proof-of-replication code. The FIP has two purposes. One is to communicate to network participants what we would do in case this happens, so they can understand the risk to their operation and either plan for, cost in, or mitigate it: if this happens, this is the kind of policy we would adopt. And then, in order to make that policy possible to implement, it established an on-chain schedule for all of the active sectors that could be used to trigger their forced resealing or termination in the event that we discovered such a flaw.
D: This was accepted. It was motivated by a desire to increase the maximum sector expiration time, so people can make an upfront commitment to a very long period in a sector.
D: So everything's unblocked, but we've since had a better idea. In fact, I've had two rounds of better ideas. I wrote this slide a week ago expecting to explain our first better idea, but then we had an even better idea, which is that we realized we actually don't need to do anything. The goal here is for us to be able to institute an orderly forced resealing of all sectors in the network over some time period.
D: Last year we couldn't figure out how to do that without recording that schedule in state. We've since figured out how we can do it without any changes to state. It was the fact that we needed to record this in state that prompted us to need to do work now; realizing that we can do it without recording anything in state means we can postpone some of this work. So I have not yet written it up fully.
D: I've written a few-sentence description of what the new scheme is, and we should convince ourselves that this scheme will in fact work, but the summary is that Kubuxu and I will write a much fuller description of what the scheme should be, probably attached to a new FIP which will replace FIP-0047,
D: and I'll show you, next slide please, where we go from here. Right now, and this would definitely be up for discussion, my recommendation is that we actually implement this new scheme, including all the code that would do the scheduled expiration, but just set parameters that mean it's not doing anything. So you set the parameter for the epoch at which it should start forcing terminations to be infinitely far in the future, so that it does nothing. And later on,
D: if we discover a flaw, all we do is pull this constant back to when we want the schedule to start. That means we can write the code and test it, and then, in the event that we do need to activate it, it's one less thing we need to do in some critical, rushed period when we're trying to react to a network that's in trouble. But it's also possible for us to postpone all of this work, as long as we're confident that it can be done.
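The "ship the code but park it behind a parameter" pattern being described might look like this in outline. Everything here is invented for illustration (the sentinel value, the names, the toy scheduling rule); the real mechanism would live in the actors code and the constant would only change via a network upgrade.

```python
# Sentinel epoch: so far in the future the schedule never fires in practice.
FAR_FUTURE_EPOCH = 2**62

# Shipped with the upgrade, initially inert.
reseal_schedule_start_epoch = FAR_FUTURE_EPOCH

def must_reseal(sector_activation_epoch: int, current_epoch: int) -> bool:
    """True once the forced-resealing schedule has started and this sector's
    slot has been reached (toy rule: oldest activations come due first)."""
    if current_epoch < reseal_schedule_start_epoch:
        return False  # feature parked: the code is live but does nothing
    elapsed = current_epoch - reseal_schedule_start_epoch
    return sector_activation_epoch <= elapsed

# Parked: nothing is forced, even far in the future.
assert not must_reseal(sector_activation_epoch=0, current_epoch=10**9)

# If a flaw is found, "pulling the constant back" engages the schedule
# (in reality via a network upgrade, not a runtime assignment).
reseal_schedule_start_epoch = 3_000_000
assert must_reseal(sector_activation_epoch=100, current_epoch=3_000_200)
```

The design choice is that activation becomes a one-constant change to already-tested code, rather than new code written during an incident.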
D: I think this will be a nice first exercise for this particular governance flow: the core devs deciding not to activate something in the network. That's totally up to us; we can just decide, hey, let's not ship this code this time. Retracting the already-accepted FIP is a different thing, but we could; there's no particular rush on that.
A: A couple of comments are going on in the chat box. I think there's one from Deep around any estimates on the magnitude of active deals we can support before cron is much more likely to run out of time.
D: It's hard to give an actual number, but we feel like if the current growth rate continues for two months and then we fix it, we'll probably be okay.
A: Thank you, I think that's great. I think Jennifer also asked a couple of questions: what's the proposed resealing period, and is there any mitigation for the case where a good proportion of SPs refuse to upgrade?
D: So the default period that happens anyway right now, because of our limited sector lifetimes, is one and a half years; I guess that's a reasonable default to expect. But the actual schedule we'd need is impossible to say until we understand what the problem is, what the risk was, what the flaw in PoRep is and how critical it is. So we would certainly, as a core devs group, reserve the right to set the schedule based on the actual incident that's happened, and propose that to the network.
D: This change doesn't do anything about the fact that it's possible for SPs as a group to refuse to accept an upgrade; that was always possible in the first place as well. In any case, we still need to ship a network upgrade with a new proof, disallowing the old proof and triggering this forced-refresh mechanism. That was always the case; this doesn't change it.
E: So I do agree. Just to make sure I understand: we're proposing to introduce the reseal-period parameter to the network, having it set to a default of a year and a half first, and then we can react later on; we'll figure out a better number once...
E: ...thanks to ZenGround0, Alex and Kubuxu, who have been spending extra time reviewing what the best approach is for PoRep security, because we spent a lot of effort last year to mitigate the potential risk introduced by SDM, since it proposes extending the sector lifetime.
E: We came up with a very complicated, well, a little bit complicated, solution, and I appreciate the fact that we're now taking the design review in hand, avoiding protocol complexity where it isn't necessary.
H: I think that's really nice. I just want to say thank you to the CryptoNet team for spending this time. And from an implementer perspective, we don't have to do any sector migration in a network upgrade; I'm extremely happy about that, because our node has some challenges in such an upgrade. So thank you.
A: Awesome. Because I have you on today, Vic, I'm not sure if you wanted to speak on FIP-0056. I know that Caitlin comes next to talk about the governance aspect of it, but I'll let you say something.
F: A good summary of that analysis can be found here, but the short answer is: CryptoEconLab looked at a variety of different scenarios with regard to storage provider response to the policy, and from the perspective of introducing something positive to the network with less downside risk, we concluded that a sector duration multiplier that was lower...
F: Excuse me: lower in slope, and a little bit softer of a policy to introduce to the network, would be better from a risk perspective, as well as from the perspective of also including the opinions and sentiments of storage providers, given the current economic climate. I would encourage everyone to read
F: the document linked in the chat, which has been made public, to get a better sense of exactly the modeling and analysis that went into this decision. But overall the TL;DR is: we are proposing a softer slope, which is less, I don't want to say less aggressive, than the ones that were previously proposed.
F: Those are modeled out in the document shared, but given, let's say, more extreme responses to the multiplier, in which you have a lower amount of storage committed to the network at higher amounts of power given a maximum multiplier, there are potential scenarios in which that could be harmful to the network.
F: I'm happy to answer questions about this further, but this is the version of the policy that the authors think is better for the network, given its current status. And I think Caitlin wants to talk a little bit more about governance going forward, as we look to entering Last Call.
I: Yeah, thanks Zach. So, as we all know, governance here has been a huge undertaking. I'll go out on a limb and say we all know that there's a very clear timing requirement here, but we also don't want to rush any decision making. So really, thank you for your patience.
I: Over the last couple of weeks I've met extensively with the storage provider working group teams and the CryptoEconLab team, who I'm in person with right now, and also run a couple of working group sessions within the Foundation. As of last night, we have pretty unanimous consent about a sort of hybrid Last Call pathway forward, which will allow us to move forward in a community-minded way while still achieving explicit consent as to whether or not SDM will be accepted and is ready for implementation in NV19.
I: So again, we're moving really fast in order to get this in place in time, and I'm hoping to have a full process doc that is simple, clear, and easy to understand available tomorrow. This will be public, not just for core devs but for everyone. If you'd like to see all of the planning and the decision matrices that went into this, I'm happy to share those with you privately; just send me a Slack DM.
I: What we're effectively looking to do is our standard two-week Last Call period: gathering information, preparing final documentation about pros and cons and major community discussion points around the FIP, and then asking for soft consensus amongst core devs. This is not going to be a hard vote,
I: hopefully, if we can avoid it, and it will not be formalized through FilPoll, which is something that no one was really interested in relaunching. But I'm happy to explain more, especially once we're able to get some written materials out for everyone.
I: Jennifer is asking: how will soft consensus work? In a sense, it's the same way we always do it: we have a discussion amongst ourselves and see if we can reach an endpoint. If we cannot, and there is significant dissent about what we ought to do from about half of the group, we will levy a hard vote of yeas and nays, with the option to abstain.
I: Again, we really boiled the ocean on this and looked at all different angles for moving forward with the SDM proposal. I really feel pretty confident this is the best way forward. I know some core devs are really opposed to the idea of having to weigh in this directly on governance challenges, but in this case we're really asking you to do so.
I: It's a pretty lightweight change, pretty similar to what we've done before; it's just formalizing that role a little bit more, and it is, for now, just in the context of SDM. So again, we don't have a ton of time, and we're still writing up final documentation to share with everyone. I'm happy to answer any other top-of-mind questions, and also happy, since we don't have a ton of time this evening, to schedule some open office hours tomorrow, which I'm also happy to record.
I: All right, and it looks like CX's comment is about the actual parameters. Jenny asks: what is the start date of Last Call? Great question.
I: I know our friend Vic wants it to be tomorrow. I think that's too soon, but I think we can start as soon as Monday, using tomorrow to socialize the plan and make sure there are no major flags from anyone. But again, we have consensus from within the entire Foundation and the SP working groups, and the Filecoin Plus team has also taken a look at this proposal. So hopefully on Monday.
I: That would also put us in line with including SDM, if accepted, in the NV19 timeline as it is currently scoped; we'll talk about that in a second. Of course, the timeline is not set, but if we are looking at an upgrade in April and the FIP is accepted, it would be possible to get it in in time.
A: I don't see any hand up with a question, so okay, thank you, Caitlin. Obviously more updates will be coming regarding the next steps going forward, so expect those soon. Jennifer, are you ready? Can we hand the NV18 updates over to you?
H: This is gonna be quick. Next slide. (There's a dude shouting out Pi Day there.) So, just to quickly align with the rest of the core devs on the network version 18 Hygge upgrade, which is also the FVM launch and brings user programmability to the Filecoin network for the first time: we have a mainnet upgrade date scheduled, March the 14th at 3:14 p.m. UTC. We're calling it Pi Day, for obvious reasons.
H: For those wondering why such a specific date and time: the mainnet chain ID is actually 314, which is kind of cute in my opinion. So that's why we're having this upgrade date. So far, Lotus has released our final release candidates to support this network upgrade, we have performed the upgrade on our Hyperspace and Calibration testnets, and they're doing well. I have a quick matrix showing up there, which is: so far,
H
The
hyperspace
testnet
started
around
January
the
16th
and
we
have
36
000
contract
deployed.
So
there's
a
lot
of
testing
we
caught
a
lot
of
bugs,
but,
like
so
far,
we
feel
like
we're
ready
for
this
big
launch.
It's
been,
you
know
a
long
time
like
coming
so
yeah
March
the
14th.
That's
that
I
sync,
both
videos
and
Forest,
is
going
to
join
in
this
network,
upgrade
as
well
so
I
don't
know
if
Stephen
want
to
share
any
videos
release
update,
but
that's
it
for
me.
C: Yeah, from the Venus side, we're currently running on the Calibration network and it's running well. Comparing with Lotus, we found that the block verification time is higher than before, but...
C: ...that should be expected, given what was introduced. So yeah, it will have higher hardware requirements for those running nodes; block validation takes a few seconds now.
H: We have a one-to-one mapping in Lotus of the Ethereum API, and if there's an ecosystem tool you're used to, like Remix or MetaMask or Truffle, it should just work fine with a Lotus node on our Calibration network. So I hope this is good news for developers, and it can make your life building on Filecoin easier. Super looking forward to that.
I: Please note this as preliminary, because it is preliminary; it is tentative and subject to change. If you have significant suggestions, as always, we capture those in the discussion thread in the core-devs repo on GitHub. But right now this list should look very familiar to you all: these are the proposed FIPs that we've really been talking about for this upgrade for the last couple of months.
I: It includes the updated and revised version of the PoRep security policy that Alex presented, as well as the pending built-in market cron risk mitigation draft. We're also going to include, if possible (we may not be able to, given the timeline we're potentially working on), the synthetic PoRep updates that Luca presented to us today, and, depending on acceptance, also the FIP-0056 sector duration multiplier, which has really driven a lot of this upgrade.
I: This upgrade is already reminding me of the Chocolate upgrade we did last year, because it has lots of goodies, and likely lots of enhancements that aren't rooted in FIPs as well. But right now the timeline we're looking at is quite tight: April 4th for the upgrade on our Calibration network, and April 25th for mainnet.
I: So we'll probably only have one more core devs meeting before the upgrade, but it will be the go-to for all of the updates on FIPs as we finalize this list. I'll also be reaching out and bugging everyone in your DMs to review the drafts, respond to edits, commit changes, and otherwise make sure everything is good to go before the actual upgrade itself.
I: Any questions, flags, or ideas that you want to levy here? Otherwise, I can also quickly grab the link to the discussion thread in the core devs repo.
H: On the scope, yes, particularly on the synthetic PoRep, I just want to make a call-out here from the Lotus side. We have been reviewing this proposal, and we understand the motivation behind it and what it is doing. However, based on user feedback and our preliminary analysis, we're currently not too sure this is the top priority, or the most beneficial thing we can do in the Lotus miner to bring a better experience or to help storage providers improve their sealing onboarding rate at the moment. So we are not certain that it should be a FIP we prioritize for this upgrade, or in the next couple of weeks. I just want to share that.
G: Agreed. I'll just quickly add to that that the Lotus miner does not need to support synthetic PoRep for the FIP to be accepted. This could be an opportunity for alternate miners; the Venus miner in particular could potentially support it even if Lotus doesn't, and that would be nice.
H: Totally agree. From my understanding, synthetic PoRep is going to be more beneficial for smaller storage providers, because of what it's going to save them. From what I got, right now, for a setup with around 400k invested, there's going to be a 2.5x cost reduction on their mining operation, so anyone around that size would definitely see a benefit.
A: Thanks. Just looking through the chat: are there additional questions? I see conversations going on there; feel free to engage and take those async as well. But are there burning questions or concerns, based on the presentations, that we can take? We actually have some minutes to spare. Yes...
A: Yes, I know that Caitlin has a hard stop, but I think we can take that, given that we have some time. Ayush?
G: If you're actively producing the comms for this, I can firmly wait till tomorrow, for what it's worth, and maybe it doesn't change things. My question is basically whether, as core devs, we're expected to vote factoring in the community sentiment, or just based on what we ourselves think about it. In the past, when I voted or indicated my support, I have not factored in what the community thinks.
I: Yeah, that's a great question, and it has kind of a philosophical tinge to it as well.
I: To inadequately answer your question, I'm going to say that your frame of reference should be choosing to support or reject the FIP based on what you think is best for the Filecoin network. The materials that I will prepare for you will reflect community conversations and sentiment, really focusing on those issues which are worth considering and which seem to be in reflection of what the FIP is actually stating; these are many of the ideas that should be very familiar to you all.
I: These are the analyses and suggestions that have been floating around for months now, and they're going to be useful just to frame any resulting conversation that we may have with the group, and also to make sure, for the actual facilitation of the soft consensus (if there has to be an async component, since we probably don't want to have to wait another month to do this), that folks have the same materials to review going into it.
I: My perspective is that your personal opinion, again, should reflect what is best for the Filecoin community. But the reason we are passing this to core devs in the first place is that there are members of the Filecoin community who are fundamentally at odds in terms of either accepting or rejecting this.
I: That is why our typical technical soft consensus does not work, and so it is perfectly reasonable for you to levy your own preferences, reasonable and well thought out as they are, and to do so while realizing that you will not be able to account for every community member's preference.
A: Yeah, if you do have additional questions... I think that makes sense, thank you. Okay, great, all right, awesome. It looks like, for the first time since I started joining these calls, we actually have additional time left. Is there anything else, based on the presentations we've had today, that you'd like to ask before we bring the meeting to a close?
A: No? Okay. I'm hoping to share the notes from the call; I will be sending those out sometime next week, along with additional documentation and anything else that was referenced throughout the call today. Thank you so much for joining Core Devs 55, and I'm hoping to see you next month for Core Devs number 56. Good night, everyone!