From YouTube: Filecoin Core Devs #2
Description
Recording for https://github.com/filecoin-project/tpm/issues/3
A
Alright, today is September 25th, and this is the Filecoin core devs meeting. On the agenda for today: I wanted to give everyone a heads up that, as of at least our next meeting, we should have the folks from IPFS Force with us. The current timing is pretty terrible in China (I think it's like 1am on a Saturday or something), so we're probably going to have to reschedule these. They are taking maintainership of go-filecoin and planning to rename it as well.
A
(The puppy is so cute.) So they will be joining us going forward to talk about their implementation and participate in these calls from that perspective. Bit of a heads up: I would love to spend maybe two minutes at the very end of this session just helping make sure that we can find a spot that'll work for all groups. It's a little bit tricky to coordinate across this many time zones, but we'll figure it out.
A
So that was first on the agenda. Second is just status updates from each of us. Anyone want to start?
B
Sure, we can start. Go for it, that'll be easier. Thanks, Austin. Yeah, so it's pretty similar to the status updates that we gave last week: our main goals haven't really changed, we're just making progress towards them.
B
The number one goal is working towards syncing and interop, and our secondary goal is getting to a full node. In terms of working towards syncing and interop, the specific thing we've done since the last call is finishing up the miner actor changes; we're hoping they will land today, actually. As of the miner actor landing, we'll be up to date with 0.9.3. Otherwise, the other changes that we're matching are changes in the message pool.
B
What still has to be done for the message pool is republishing and revert logic for messages, and adding message selection logic. This will be crucial, obviously, for staying in sync. And then we are also updating conformance tests.
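As a rough sketch of what the revert half of that logic involves (the names here are invented for illustration, not Forest's actual API): on a reorg, messages from reverted tipsets must go back into the pool, and messages included in the new head must come out.

```python
# Illustrative sketch only: how a message pool might handle a chain
# reorg ("revert") event. Names are hypothetical, not Forest's API.

def handle_head_change(pool, reverted_msgs, applied_msgs):
    """Re-add messages from reverted tipsets; drop newly applied ones.

    `pool` maps sender address -> {nonce: message}.
    """
    # Messages that fell out of the chain become pending again.
    for msg in reverted_msgs:
        pool.setdefault(msg["from"], {})[msg["nonce"]] = msg
    # Messages included in the new head are no longer pending.
    for msg in applied_msgs:
        sender = pool.get(msg["from"])
        if sender is not None:
            sender.pop(msg["nonce"], None)
            if not sender:
                del pool[msg["from"]]
    return pool
```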
B
Whatever changes have landed, our tests have been updated for, and as soon as the block sequence tests are in, we'll integrate them. We're also doing a pass through the state manager and chain store to catch any changes that have happened since we implemented them, things like the fork logic and a few other smaller things. Austin, do you want to speak to the AMT issue that you were working on?
C
Yeah. So it's just making sure that everything interops. The primary goal was to test the blockstore reads and writes for the AMTs, and then move on to the HAMT to do the same, just because, obviously, gas usage is based on that. So it's making sure that all matches up, and then also testing to verify that all the CID roots are equal.
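A minimal sketch of the kind of parity check being described: wrap the blockstore so reads and writes are counted, then compare counts (and resulting root CIDs) across implementations. Everything here is illustrative; it is not the real AMT or blockstore interface.

```python
class CountingBlockstore:
    """Dict-backed blockstore that counts gets and puts, so two
    implementations' access patterns (and hence gas charges, which
    are based on blockstore traffic) can be compared."""

    def __init__(self):
        self.blocks = {}
        self.gets = 0
        self.puts = 0

    def put(self, cid, data):
        self.puts += 1
        self.blocks[cid] = data

    def get(self, cid):
        self.gets += 1
        return self.blocks[cid]

def same_access_pattern(bs_a, bs_b):
    """Interop check: equal read/write counts between two runs."""
    return (bs_a.gets, bs_a.puts) == (bs_b.gets, bs_b.puts)
```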
C
That was already tested before, but we're expanding on it a bit, and also ensuring that our functionality matches. There was a bug in the AMT that we are currently matching, because we're not sure whether or not it's going to be pruned out before mainnet.
C
So I guess we'll get an update later about what the status of that is, whether it is getting pruned out and not carried into mainnet. But yeah, we're working on that, and also refactoring the AMT a bit to handle errors a little better, just because of how out-of-gas errors happen within the VM; it's a little bit finicky when using these data structures. But yeah, that's about it for that stuff.
B
Yeah, and I guess that summarizes most of what we've been doing towards the syncing and interop goal. Otherwise, we're also working towards a full node. The integration of the storage miner should land today, and for integrating the storage and retrieval markets we're probably looking at a couple of weeks at least, just because we're finishing up the payment channel changes.
B
After that, we have to add the RPC for the payment channel, and then we can try out the go-fil-markets interface; hopefully it'll just work once we have those changes in. The other exciting thing is that next week we will finally be in a position to run our local devnet. With the miner actor changes and the storage miner, which will allow us to produce blocks, we can actually run a local network of Forest nodes.
D
Yep, sure. Hey everyone. For us, our goals are pretty much the same: we want to establish a devnet which will be able to work by itself with the Fuhon nodes as well as the Lotus nodes.
D
Otherwise, all the other parts seem to be working fine, but we'll see once syncing is fixed; maybe something else will come up for the node. On the last update I mentioned that we were blocked at the transport level because of TLS not working properly. That has been fixed, so now we have TLS.
D
We're also fixing another issue now. It doesn't seem to be a really big one, but it will take some effort to fix.
D
So, yes, for now our main goal is to wrap up the miner, as it's not really dependent on the transport, because we can use Lotus nodes and our own miner in order to test it. After that, we also have performance tests ahead of us, and we need to start on them really soon, just because our current cross-implementation testing doesn't seem to be really flexible and fast.
A
That's awesome. I'm glad that you are both leaning into those conformance tests. We have Raul here; I don't know if you have any updates on where things are, the things that folks who are investing in test vectors can take advantage of.
E
We have been running a bunch of analytics on chain to identify the latest messages that we could extract, using our extraction tooling, that would create really good coverage for folks. Basically, for each recipient actor type, each method number, and each exit code, we are extracting the 10 latest messages from chain. This is going to give us a lot of coverage with regards to the business logic of specs-actors, which will then be useful to implementers as well. So that is one thing that we're focusing on.
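The selection rule described here, the N latest messages per recipient actor type, method number, and exit code, can be sketched like this; the field names are made up, not the real extraction tooling's schema:

```python
from collections import defaultdict

def select_latest(messages, per_group=10):
    """Group messages by (recipient actor type, method number,
    exit code) and keep the `per_group` most recent per group."""
    groups = defaultdict(list)
    # Newest first, so the first `per_group` hits per key are the latest.
    for msg in sorted(messages, key=lambda m: m["epoch"], reverse=True):
        key = (msg["actor_type"], msg["method"], msg["exit_code"])
        if len(groups[key]) < per_group:
            groups[key].append(msg)
    return dict(groups)
```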
E
We are creating a two-tiered corpus that Austin already reviewed the design issue for, and a few others have chimed in as well. We'll create two corpuses, a coarse-grained corpus and a fine-grained corpus, and that will be resilient enough that if there are changes in logic that change the way state is accessed, we would be able to regenerate the fine-grained vectors from the coarse-grained vectors.
E
So,
of
course,
grain
corpus
is
basically
going
to
be
a
fallback
for
us,
then
we
are
also
looking
into
and
those
were
probably
land.
So
we
dealing
with
like
a
bunch
of
things
in
to
make
the
tooling
efficient
to
extract
the
test
vectors.
But
hopefully
this
will
the
there's
around
187
vectors
that
I
think,
will
land
as
a
result
of
this.
Of
this
of
this
effort,
probably
will
land
sometime
next
week,
then
we're
doing
a
bunch
of
we're
creating
a
bunch
of
tooling
and
doing
a
bunch
of
changes.
E
These
might
translate
into
minor
schema
changes
to
support
multiple
network
versions.
So
you
have
already
like
all
folks
in
the
school
were
probably
have
seen
that
lotus
and
spec
actors
have
made
a
bunch
of
changes
to
support
upgrading
of
spec
log
spec
actors,
logic
and
hand
logic
and
like
adt
logic
and
like
a
bunch
of
things
in
different
places.
E
So
this
translates
into
something
that's
called
network
version
and
basically
we
want
to
create
tooling
that
will
allow
us
to
run
the
corpus,
the
entire
corpus
against
an
arbitrary
version
of
the
network
version
to
see
if
the
failures
that
we
find
are
the
ones
that
we
expect
to
find.
So if we know that certain actors have changed in logic, then we know that should invalidate certain vectors, and the failures that we find should be constrained to the changes that we think we have made. We should also have tooling to very easily regenerate the test vectors that have failed against a new network version or a new set of changes. So this kind of creates tooling to validate changes as they happen. We also want to be able to express, for each vector, the network versions to which it applies.
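That per-vector applicability might be expressed as a field in the vector metadata that consumers filter on; this is a hypothetical shape, not the actual test-vector schema:

```python
def applicable_vectors(vectors, supported_versions):
    """Keep only vectors whose declared network versions intersect
    the set of versions this implementation supports, so known
    not-applicable failures are never even run."""
    supported = set(supported_versions)
    return [v for v in vectors
            if supported & set(v.get("network_versions", []))]
```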
E
So as we do this, and there are multiple network versions that we test the corpus against, we should be able to record, for each vector, whether that vector remains valid for that network version or not. This will allow implementations, based on the network versions that they support, to feed in the relevant vectors and skip the ones that we know are going to fail because they do not apply to a particular network version. So that's another thing. And then, block sequence vectors:
E
We have landed a few changes in Lotus that allow us to isolate the syncer itself, so that it doesn't communicate over the network and we don't have to manage its worker threads; that way we can isolate that component and just apply the vectors there. So there's still no relevant progress that I can report, but hopefully, if things are pretty quiet next week, we will be able to make some relevant progress there. Back to you.
A
No, thank you, super thorough, love it. Any questions for Raul on any of that?
C
I just have one quick question, as far as the version being used to generate the vectors. You said you're going to test against all the different network versions, but is there going to be a default that you're going to use to generate them all? Because I noticed before they were generated using inconsistent versioning, and I was just wondering if there's going to be a version that's going to be targeted, or any information you can give about that.
E
Yeah,
so
so,
basically
that
what
what
I
think
is
gonna
happen
here
is
that
we
will
probably
test
again
so
there's
like
several
several
lines
of
versioning.
There
is
the
spec
actors,
versioning
itself
in
terms
of
you,
know
the
commit
log
and
in
terms
of
actual
versions
that
are
tagged
in
that
repo,
and
then
there
is
the
network
version
right.
So
network
version
usually
communicates
consensus.
Breaking
changes,
spec
actor
versions
do
not
in
it
by
themselves.
E
If
they
are
not
correlated
with
the
network
version,
change
should
not
be,
should
not
imply
a
a
consensus,
breaking
change,
which
means
that
it
shouldn't
affect
the
test
factor.
So
the
test
vector
itself,
if,
like
you
know,
a
commit
on
spec
actors
or
version
of
spec
actors,
is
just
improving
the
performance
of
certain
things
or
fixing
a
bug
in
a
way
that
it
doesn't
cause
a
consensus,
breaking
change.
E
Then
it
shouldn't
unless
we
capture
that
bug
in
a
test
vector
which
could
happen
as
well,
which
the
moment
that
we
launch
mainnet,
if
it's
captured
in
a
test
vector
and
that
changes
is
consensus
breaking
right,
because
if
the
state
changes
as
a
result
of
a
bug
being
fixed,
then
it
is
a
consensus
breaking
change,
so
these
things
will
become
like
really
like
really,
you
know
strident
in
a
way
that
you
know
when
there
is
something
that
breaks
the
network
that
leads
to
it's
a
change
in
state.
It's
gonna,
it's
gonna,
percolate
back
up!
E
So
at
this
point
I
don't
think
upgrading
spec
actors
without
upgrading
a
network
version
should
pull
in
any
any
changes
on
test
factors
themselves.
I
don't
know
if
that
answers
the
question.
It's
like,
maybe
a
bit
of
a
of
a
trip,
but
there's
like
a
lot
of
factors
to
consider
here
and
we
can
take
this
offline.
If
you,
if
you
have
more
questions
around
that.
C
No, that's all good, no more questions. I was just seeing whether you were going to target, like, 0.9.3 or 0.9.8, or past those, because there were some breaking changes before, I guess, version zero of the network upgrade. But I assume you're just going to target the most recent one to generate all the vectors.
A
Well, I realize that in jumping to Raul so quickly, because it seemed relevant, we didn't create time for any questions to you, Maxim, from other people. Anyone have any questions for the Fuhon team, or anything else you want to add? We now have a Lotus representative; thanks, Vyzo, for showing up.
A
Cool. Maybe we hop over to Lotus and let Vyzo give us a quick update. I think the rest of the team got caught up in the scheduling; they're working on another big test. So, yeah.
G
I just happened to have a little bit of free time, because I scheduled social activities tonight, which is a very big change for me. So anyway, what's happening is that we are preparing for the upgrade. Step one is basically merging all the code that's necessary for the upgrade, and then we'll define the epoch at which the upgrade is going to happen.
So we're going to be testing the code without doing the upgrade yet, and then we're going to test the actual upgrade on a butterfly network before we trigger it on the real network. That's the major thing that's happening. The other interesting thing is that we're starting to work on Lotus Lite in conjunction with the gateway, so we can have lightweight Lotus nodes that work by using a remote node as a gateway; basically, these nodes should be able to run on phones and things like that.
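The shape of what Vyzo describes, a wallet-only node that signs locally and delegates all chain interaction to a remote gateway, could be sketched like this (the method and field names are invented for illustration; see the actual PR for the real API):

```python
class LiteNode:
    """Wallet-only client node: no chain store and no sync. All chain
    state and message submission go through a remote gateway node."""

    def __init__(self, wallet, gateway):
        self.wallet = wallet      # local keys only
        self.gateway = gateway    # remote full node behind a gateway

    def send(self, to, value):
        # No local chain store, so even the nonce comes from the gateway.
        nonce = self.gateway.state_get_nonce(self.wallet.address)
        msg = {"from": self.wallet.address, "to": to,
               "value": value, "nonce": nonce}
        # Sign locally; the private key never leaves this node.
        signed = self.wallet.sign(msg)
        return self.gateway.mpool_push(signed)
```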
A
Yeah. A quick note on the timeline for the big upgradability upgrade that Raul was also mentioning: we did a number of refactors within Lotus to support that specs-actors network version logic.
A
China has a big holiday coming up, and we know a good percentage of the miner community is in China, so we're trying to help ensure that no mandatory upgrades happen during that period, because it would be really disruptive for folks who are generally out on vacation. So we've decided to take the upgrade that Vyzo was mentioning, the specs-actors upgrade, and push it to after that holiday.
A
So
as
not
to
you
know,
work
the
network,
while
people
are
out
and
have
a
hard
time
fixing
it.
And
so
we
have
more
time
for
testing,
which
is
good,
because
it's
complicated
and
a
lot
of
things
have
changed.
And
so
we
released
0.7.2,
which
has
the
refactor
without
the
upgrade,
and
the
upgrade
itself
is
getting
a
lot
of
testing
thanks
to
roewell
and
thanks
to
the
rest
of
the
lotus
team.
A
H
Is there anywhere we can check out this Lotus Lite thing? It sounds pretty interesting.
G
I think it's 3532 or something? Yes, it's 3532. It's very, very brief, but the basic idea is that you have a node that has no chain store and basically does no chain operations; it doesn't give you sync or anything. It's basically a client node that just has a wallet. It can send messages and do chain operations by utilizing a remote node, which is going to be a gateway, and it can also do deals directly with miners.
A
Questions? Cool, then maybe we hop over to Yiannis for all the spec improvements, and we can push the FIP stuff a little bit later.
I
Yeah, hi, hello everyone. A quick update from the Filecoin specs side. I don't know when you last visited the spec site, but it got an upgrade. If you head over to spec.filecoin.io, you're going to see a nice new, clean design, and the process of getting content updated on the spec site has been improved a lot. So there's a great user experience, and things don't break.
I
You
have
to
just
write
simple
markdown
language.
There
is
support
for
math.
There
is
a
kind
of
single
touch,
editing
in
the
sense
that
you
can
change
the
hierarchy
of
the
document.
Add
new
files
and
the
table
of
contents
gets
automatically
updated
yeah.
It's
it's
a
nice
user
experience
which
you
can
it's
very
easy
to
set
up.
I
So
there
is
a
readme
file
there
that
you
can
see
how
to
install
basically
npm
locally
so
that
you
can
run
a
local
site
and
see
if
something
breaks
by
the
changes
you
do
you
get
to
know
it
before
you
push
changes,
and
the
cia
tells
you
that
this
is
not
good,
so
very
simple
process.
We've
integrated
kind
of
health
monitoring
tools,
so
there
is
a
dashboard
you're
going
to
see.
This
is
tracking
the
status
of
the
specs
sections.
I
So
if
you,
if
you
want
to
go
over
and
read
about
the
protocol
just
head
over
to
the
dashboard
first
to
see
if
it's
in
a
good
state
or
if
it's
an
old,
outdated
version,
you
understand
that
you
know
updates
happen
all
the
time
it's
like
kind
of
shooting
to
a
moving
target.
So
we
we
keep
updating
aids
you'll,
see
that
you
know
there
is
a
spec
status
and
there
is
the
theory
of
dates
for
those
parts
of
the
protocol
that
have
been
audited.
I
You're,
going
to
find
a
link
to
the
to
the
report
of
that
there
is
a
similar
implementation.
Yeah
thanks
a
lot
molly.
There
is
a
similar
dashboard
for
the
implementations
where
there
is
the
ci
coverage
there,
whether
ci
tests
are
passing
test
coverage
and
also
security
audit.
It's
this
implantation
status.
I
This is kind of a summary of it. We welcome lots of PRs to update content, so if you've worked on something and you want to add some more detail or correct something, just file a PR there. We still have some ongoing work: we're building a kind of API proxy to pull data from remote sources. So, for example, the tests that Raul has been talking about: we want to pull them from that repository and put them in as conformance tests near the dashboard, in their respective...

A
Oh, I lost Yiannis.
A
I hope everyone gets a quick chance to take a look at this. It's made huge progress, and now everything much more clearly indicates, on each section, what state it's in and whether or not it's dependable, accurate information, which is definitely better than where it was a couple of months ago. Thanks to... sorry.
I
All
right,
yes,
and
then
I
would
just
say
that
there's
going
to
be
an
integration
with
the
fire,
the
fifth
process,
so
it's
going,
the
the
drafts
and
the
integrated
are
fib.
Fibs
are
going
to
be
showing
there
so
that
it's
kind
of
a
time
machine
and
you
can
travel
back
and
forth
to
the
spec
and
see
the
improvements
and
yeah.
That's
all
please!
Let
us
know
your
opinion
on
features.
You
want
update
the
contents.
A
Cool. I feel like that's a decent segue into talking about auditing in general. As you can see, there are a lot of things that already have audits or have work-in-progress audits, and then, from a code perspective, a chunk of implementations have gotten, or are getting, security audits. But something that has come up in a couple of chats with you guys is talking about what things to audit, what things have already been audited, and an audit strategy, especially for a constantly evolving protocol and code base. So, and I guess I can stop sharing this, at least on the Lotus side:
A
We've.
Definitely
we
started
first
by
auditing
the
various
components,
and
so,
for
example,
we
did
a
very
thorough
audit
on
gossip
sub
when
gossip
sub
went
to
v
1.1,
which
I'm
sure
yannis
could
also
talk
to
us
about
she's
involved
in
that.
But
that's
that's
something
that
I
know
that
the
forest
team
is
also
working
on.
A
There are plans to have the Rust gossipsub implementation audited, which is awesome. Maxim, that's probably something that should also be on the docket for the C++ gossipsub implementation, to make sure that it gets a corresponding security audit. And then, from a code-base perspective, we audited libp2p, and then we've been re-auditing Lotus every couple of weeks with new audit firms.
A
Since the code and implementation keep evolving, we want to make sure that we have layered tests that people can go back to, to make sure that we fixed all of those bugs. And we separately audited the specs-actors repo; separating out the actors side and the node side helped us do those two audits in parallel, and we did a more embedded chunk of work on the actors side. So that was my quick high-level update on where the actors audit status is, but I'm curious whether anyone else has any questions when it comes to going through the process of scheduling and auditing other parts of the implementations.
A
Cool, then: audit away. It's kind of up to each group to figure out good timings, depending on your implementation, but let us know if you need any kind of nudges or introductions or anything like that.
A
Awesome. Okay, I think the last thing on our agenda for today was to talk quickly about the most recent FIP, which is always fun. We talked about the FIP process last time.
A
We landed FIP number two, I think yesterday. Yeah, yesterday. It's aimed at minimizing the fees for window posts where possible. One of the things that we've noticed from the live network throughout Space Race is that miners, sometimes through no fault of their own, can miss a window post for operational reasons: either because they're restarting their node to apply an update or something along those lines, or there's just congestion and they don't end up getting their window post message through the mempool in time. Occasionally this happens, and right now it is very painful for a miner to miss a single window post. We want to minimize that to just what is needed
A
from a security and cryptoecon perspective, to make sure that the incentives are correct to have good data storage without overly penalizing honest miners that are doing their best. So I recommend that people take a look at this. The main difference is that, instead of immediately deducting a penalty for newly faulted sectors,
A
To
just
remove
power
mark
the
sector
as
faulty
and
skip
the
penalty
for
now,
and
then,
if
that
that
that
sector
stays
faulty
for
until
the
next
proving
period,
then
it
would
get
faulted
into
the
future
and
so
kind
of
deferring
some
of
those
penalties
for
people
who
you
know
you
can
miss
it
once
and
you're
okay.
But
if
you
keep
missing
it,
then
it
ratchets
back
up
to
exactly
exactly
the
same
level
of
penalty.
A
We have an associated implementation in specs-actors that folks can take a look at, and we're actually planning to get this into today's upgrade.
A
So it's not a good representation of how lightning-fast most FIPs will be, because they'll probably be much more invasive, but we intentionally chose one that was relatively easy to implement, and we think it will make everyone happy across the board. I'm curious whether there are any questions or other things related to FIP-0002, and a general nudge for everyone to also think of things they would like to improve and write them up as FIPs.
H
I just have a quick question. I was looking at the pull request on specs-actors for this: what was the original rationale for incurring a fee after you recover a sector?
A
Yeah, I think there are very strong research analyses behind all of this, but I think that would be a fine thing to post as a comment or a question.
There is an associated issue here for discussion, so if you want to post a question there, I think that'd be a good place, even though it's not about whether or not we should do FIP-0002; it's more context on the design rationale.
I
What was the question? Sorry, I missed it.

H
On the PR for the specs-actors change for FIP-0002, it noted that one of the changes is to remove the fee incurred when a sector is successfully recovered, and I was just wondering why that was implemented in the first place: why, after a sector is recovered, is there a fee incurred?
I
Right, yeah. I have some explanation, but I don't know if it's the best one; the cryptoecon people are probably the best ones to ask. It's got to do with miners declaring a fault, because they get to know if they've got a faulty sector. A miner that has a faulty sector knows it first, and therefore they can declare it first, and then, when it's recovered, things would mess up with the reward. But yeah, I don't have the full explanation, sorry.
A
Cool. Well then, maybe we'll call it early this week. And yeah, my suggestion for moving these meetings going forward is to move them to 4pm UTC on Thursdays, which I believe is 9am PT and around 11pm Beijing time. It's not great, but better than, you know, 1am on a Saturday. Thumbs up all around; all right, we'll aim for it, then, and we'll reschedule the future meetings.
A
Cool, awesome. In that case, I hope everyone has a wonderful rest of the... well, probably the rest of the week is short, but a wonderful weekend, and see you all in a little under two weeks.