From YouTube: Filecoin Core Devs Biweekly #12
Description
Recording for: https://github.com/filecoin-project/tpm/issues/25
For more information on Filecoin
- visit the project website: https://filecoin.io/
- or follow Filecoin on Twitter: https://twitter.com/Filecoin
Get Filecoin community news and announcements in your inbox, monthly: http://eepurl.com/gbfn1n
A: All right, good morning, good afternoon, everyone. Welcome to the 10th (I think) meeting of the core devs biweekly for implementers. Today we'll be going through our usual agenda: updates from everyone, a few points of discussion, some FIPs to discuss, and we'll also start talking about how we're going to be doing network upgrades in the future. So far that's been easier to do because Lotus has been the only implementation, but now we're going to have two, three, four implementations that all need to be in lockstep, which is something we need to start preparing for, so we'll have a preliminary discussion on that. Maybe let's kick things off with updates, starting with Venus.
B: Yeah, hello everyone. We're just back from the holiday. I think the team took about one week off, and we've come back to work on the project. We've had very good progress. First, Venus has upgraded to spec-actors version 3; we ran the new code on the calibration network and it passed the tests. We'll have another release to upgrade together with Lotus for network version 10, so we should have another release tomorrow.

B: I think that's the first very important thing we've done over the last few days. Another thing is that we've started to work on distributed mining pool support, and in doing this we'll separate the modules so that many modules work together, instead of having one very, very big program.

B: For example, we have separated venus-miner and venus-sealer, we also have venus-wallet, and next we will have a messager to handle basic message pool management and basic distribution.

B: So we're working on that, and currently we have finished the wallet implementation, which has better security features than the current one we can get from Lotus.

B: For this we have completed the tests using a test network. That means we have the miner, the sealer, and the wallet together, and we combine them to run integration tests. That is our very first step toward the distributed mining pool support.
A: That's fantastic. That does mean the upcoming v10 upgrade will be our first upgrade with multiple implementations passing over the epoch together, because Venus will be there with us. Well, at least Venus will be there with us, which is great. Molly has a quick question in chat: does the web wallet support multisigs?
B: Sorry, yeah, okay. I mean that we will have the tests and then this release, which we'll do tomorrow, and we currently have one Venus node on mainnet.

B: So we'll upgrade this node within the week, and we'll have a release in case others are running Venus nodes in the network, so that they can upgrade their systems before network version 10 takes effect.
A: Yeah, Molly was just asking about the wallet and whether it supports multisigs. As an example, changing the number of multisig signers: does it have support for that kind of thing?
C: For context, this is just something there was a thread about, regarding the Glif wallet, which doesn't support multisigs with more than one signer. I know there have been a couple of requests for that, and it seems like your web wallet could potentially be a way for people to handle multisigs with multiple signers, but I was curious whether that's the case.
B: Sorry, yeah, this is a question for me. I don't think we support that yet, but we have a mobile application which supports wallets, and we will add that support after. So.
A: Cool, good work. Let's hear from Forest.
D: Hey, so our primary focus has been upgrading to network version 10, and we're almost done updating actors. I think we just finished the miner actor and we have one more left, so we're almost there on the actors front. Next week we're planning to do the changes to the VM and runtime, and also the state migration. We've already implemented the framework for doing state migrations; now we just need to do the actual state migration for network version 10.
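To make the idea of a state-migration framework concrete, here is a toy Python sketch. This is illustrative only: Forest is written in Rust, the real state tree is HAMT-backed and addressed by CIDs, and the actor names and field rename below are invented for the example.

```python
# Toy model of a state-migration framework: walk every actor in the
# state tree, apply a per-actor-code migration function if one is
# registered, and build the new tree.

def migrate_state_tree(actors, migrations):
    """actors: {address: {"code": str, "state": dict}}
    migrations: {code: fn(old_state) -> new_state}"""
    new_tree = {}
    for addr, actor in actors.items():
        migrate = migrations.get(actor["code"])
        new_state = migrate(actor["state"]) if migrate else actor["state"]
        new_tree[addr] = {"code": actor["code"], "state": new_state}
    return new_tree

# Hypothetical v2 -> v3 miner migration: rename one field.
def migrate_miner(old):
    new = dict(old)
    new["pre_committed_sectors_v3"] = new.pop("pre_committed_sectors")
    return new

tree = {
    "f01000": {"code": "miner", "state": {"pre_committed_sectors": [1, 2]}},
    "f099":   {"code": "account", "state": {"nonce": 7}},
}
new_tree = migrate_state_tree(tree, {"miner": migrate_miner})
```

The point of structuring it this way is that each network upgrade only has to supply the per-actor migration functions; the tree walk is reused.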
D: In addition to that, we are updating our RPC interface, adding HTTP support and working towards feature parity with Lotus. We know that's perhaps a moving target, but for now we're working towards at least some level of compatibility with Lotus, and figuring out how we can keep that in the future so it's easy for people to switch between Forest and Lotus, with some sort of integration testing or something like that. Otherwise, we're working towards an audit, and we're figuring out the exact date.

D: It's looking like it's going to start in a few weeks, but prior to that we'd like to productionize Forest a bit more, so we're working on some improvements to our message pool and then some improvements to syncing after that. Outside of that, we had a great chat with Megan last week, so we're now working on our vision for how we differentiate Forest going forward.

D: Up until this point we've been very focused on just becoming interoperable, but now that we're pretty much there, it's important to think about how we see Forest improving itself, or serving a specific subset of users, going forward. We will share what we've been working on in terms of our vision with the Foundation and with the Lotus team for feedback in the next couple of days.

D: I also had a chat with Jennifer about building community for Forest, so we'll be adding updates to the new Filecoin community page that's on GitHub. That's about it for us. Any questions?
A: That makes sense. It's nice to hear priorities shifting from "just build the thing" to "who's going to be using this, and how are we going to build the community around it?" Quick question: you have a node running on mainnet right now, right? Are you hoping to have the migration and v3 actors done by Wednesday, and roll over the v10 upgrade with us?
E: Ideally, yes. I've been upgrading all the actors for the past week and a half or so. Obviously the miner actor was the big change, and most of the other actors haven't changed that much; a lot of it is that some of the exit-code handling has changed a little for most of the actors. I anticipate probably getting the migrations done before then, but it's one of those things where, if we don't even have basic conformance tests, it's kind of hard to know whether we're going to crash right at the fork epoch or not. I'm obviously going to keep the node running, and hopefully we don't blow up in midair, but I'm semi-optimistic that it should work.
E: I'm writing the miner code with very open eyes, making sure the exit codes are right, because I feel that's probably the hardest part with respect to porting the spec-actors code: you sometimes forget and don't wrap an error, you throw an error instead, and then your exit codes are off even though your state roots are right.
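The pitfall described here can be shown with a toy Python model (spec-actors is Go, and the exit-code numbers below are invented for illustration): two implementations can agree on the resulting state and still diverge on the receipt if one lets an internal error escape instead of mapping it to the right exit code.

```python
ERR_ILLEGAL_ARGUMENT = 16   # made-up actor exit code for illustration
SYS_RUNTIME_ERROR = 2       # what an unwrapped error/panic surfaces as

def apply_message(state, amount, wrap_errors):
    """Return (new_state, exit_code). Failed messages leave state untouched."""
    if amount < 0:
        # Correct behaviour: abort with a specific actor exit code.
        # Buggy behaviour: let the raw error escape, so the runtime
        # reports a generic system error instead.
        code = ERR_ILLEGAL_ARGUMENT if wrap_errors else SYS_RUNTIME_ERROR
        return state, code
    return state + amount, 0

good = apply_message(100, -5, wrap_errors=True)
bad = apply_message(100, -5, wrap_errors=False)
# Same state either way, but the receipts (exit codes) differ, which is
# enough for two implementations to disagree on the receipt root.
```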
E: But yeah, I actually have one question with respect to the v10 network upgrade. From my understanding, I know everything that's changed in spec-actors for network version 10; for that you just have to upgrade the actors and get the state migration going. But is there anything happening on the Lotus side with respect to the interpreter and all of that?
A: There's one consensus-critical change in Lotus that comes to mind, which is that we're deleting the actor corresponding to the zero BLS signature. This is an actor that never should have been created in the first place, because it's not secure: anyone can spend from it. It's not really a security threat, it just shouldn't be there, so we're deleting that actor and burning any funds that happen to be in it.
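Sketched as a toy migration step (Python pseudocode, not the actual Lotus Go migration; the addresses and balances here are invented):

```python
# Toy sketch of deleting an actor at a fork epoch and burning its funds:
# remove the actor from the state tree and credit its balance to the
# burnt-funds actor.
BURNT_FUNDS_ADDR = "f099"
ZERO_BLS_ADDR = "zero-bls"   # stand-in for the real zero-signature address

def delete_and_burn(state, victim):
    actors = dict(state)
    balance = actors.pop(victim)["balance"]        # remove the actor entirely
    burnt = dict(actors[BURNT_FUNDS_ADDR])
    burnt["balance"] += balance                    # burn whatever it held
    actors[BURNT_FUNDS_ADDR] = burnt
    return actors

state = {
    ZERO_BLS_ADDR: {"balance": 50},
    BURNT_FUNDS_ADDR: {"balance": 1000},
}
state = delete_and_burn(state, ZERO_BLS_ADDR)
```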
A: I think that's the only thing that changes outside of actors and the HAMT stuff. Pretty sure we don't change anything at the VM level; pretty sure that's it.
E: Okay, good to know. Maybe I'll message you guys a little later just to confirm, and see how and where you're handling that, because I remember we talked about the zero address a while ago; I just forgot about it, I guess.
A: Yeah, that's also a good flag: from this point on, we should really start keeping a list of every consensus-critical change, even small ones. The bigger ones obviously go through the FIP process, so they're documented there, but even minor bug fixes and so on, we should probably maintain a nice little list somewhere for the implementers here, and for anyone else who wants to implement Filecoin. I'll make a quick note of that.
F: Hi everyone, thank you. So, we've made significant progress in terms of testing our solution functionally. We had quite a big job fixing the bugs and issues we faced during functional and synchronization testing.

F: Now the node is up and running on mainnet; it syncs successfully and it's in the caught-up state, so it's in stability testing mode. We will see over the next week or two whether it stays stable or not.
F: In parallel, we have prepared for reliability/stability/load/stress testing, and we are going to execute it in the next two to three weeks. This is going to be quite an extensive one: we are going to test how the node works in different kinds of environments under different loads, and we're going to crash it and smash it and see what happens.
F: So that's what we're currently working on. In terms of development, nothing really major is happening right now, as we're basically stabilizing our own solution. Actors v2 are being slowly upgraded as well: as mentioned in the previous meeting, we had a smaller refactor, which turned out to be not really a small one but quite a big refactoring, so that we can introduce a smoother transition to actors v3 later on, on top of the v2 actors.
F: This is something we are currently finalizing. Currently there are only two actors left that need to be refactored, the storage miner and storage market actors. Not so much progress was made there since the last update, as we had a different focus. For the next two weeks we are going to focus on reliability testing and see what happens with the node, and in parallel we are working on introducing a quality gate for all the code we have produced so far: basically migrating all the code through this quality gate and seeing whether it's in a good enough state or whether we need some improvements. We are hoping to have a good-quality solution, so we are making sure everything is intact.
F: Yep, that's it from my side. I'm not sure who raised the question regarding conformance tests for future breaking changes. Do you want to touch on it right now, or will we have time for it later?
A: Yeah, we can talk about that right now. I spoke with Raul, and internally we're trying to figure out what to do with that. First, we're definitely committed to creating conformance test vectors for v3; they're probably not going to be ready in time for the actual upgrade epoch, much as we'd like that. So that's the first point.
A: The second point is that conformance testing should probably become a shared component between all of us here for future network upgrades. We can talk about distributing the work of creating those tests, adding new tests and so on across the group here. Obviously it's going to be easiest for anyone in Go-based environments, but that's where we're at with it. I'm hoping to have test vectors out sometime next week, but we'll see.
F: Yeah, thanks, great. And for future breaking changes, are we going to introduce test vectors up front? Basically, the question is: how are we confirming that we are in a good enough state, and that our nodes will not stall on mainnet? Do we have some kind of process for that?
A: Not yet. The process should probably be that the test vector suites get a lot more comprehensive, now that there are four implementations moving across, so slowing down a bit. I don't know about creating them up front, but certainly getting far more vectors written, spending more time with them, and maybe splitting up the tests being written between the different teams is probably the way to go.
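Conceptually, a conformance test vector pins down the inputs and expected outputs so that every implementation can be checked the same way. A minimal Python sketch of the idea (the field names and the trivial "VM" are invented; the real vectors are CBOR-encoded and checked against state-root and receipt CIDs):

```python
def run_vector(vector, apply_message):
    """Apply each message from the pre-state and compare the results
    against the expected post-state and receipts."""
    state = vector["pre_state"]
    receipts = []
    for msg in vector["messages"]:
        state, exit_code = apply_message(state, msg)
        receipts.append(exit_code)
    return state == vector["post_state"] and receipts == vector["receipts"]

# A trivial stand-in "VM": messages add to a counter; negative amounts
# abort with a made-up exit code and leave state untouched.
def apply_message(state, msg):
    if msg < 0:
        return state, 16
    return state + msg, 0

vector = {"pre_state": 0, "messages": [5, -1, 3],
          "post_state": 8, "receipts": [0, 16, 0]}
ok = run_vector(vector, apply_message)
```

Any implementation that produces a different post-state or a different receipt sequence fails the vector, which is exactly the divergence these tests are meant to catch before the fork epoch.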
A: We should basically always be doing that, and we kind of always have been: putting it on a test network first. Whether that should just be the calibration test network, or whatever test network it is that we all go across, or something smaller first, we can see with the implementations. But we'll certainly be doing the upgrade on some test network, or networks, first.
E: That makes sense. And in particular, it would be nice to have larger intervals between hard forks that are not security-critical. That way we can first spin up something like an interop net, and then upgrade calibration net, and so on, because ideally we'd want to do an upgrade on a network that has people actually using it. Obviously conformance vectors won't cover everything, and that's something we've seen already: we could be running a node for two weeks until we find one small receipt mismatch, and it's like, god damn it. So I think a longer cadence and more long-term testing is going to be a good thing. But I guess the first step is getting the conformance vectors up, before we plan the whole hard-forking process.
A: It's interesting; I half agree on network upgrade cadence. With network upgrades there's a pain, or there's work to be done across the community, the mining community in particular, that's pretty much always the same no matter what's actually in the upgrade: they have to update all of their infrastructure, do whatever testing they do, and prepare for this big epoch. But within the actual implementations, not all network upgrades are the same. Like this one we have now: if you're introducing a new version of actors, that's actually a big upgrade, and it requires a different degree of testing, and therefore potentially a longer test window and so on, than if we were to have smaller network upgrades that do less along the way.

A: I'm not saying we should aim to do that, but it's worth keeping that distinction in our heads and not having a one-size-fits-all approach that says we always want every upgrade to sit on a test network for two weeks, or some set number of test weeks, or something like that, because sometimes we will have upgrades that change two lines of code and we're pretty sure we know what the change is.
E: Yeah, I definitely agree. There are definitely times where it's just like, "this can't break." But for sure, I think it's good to have a process while still being flexible with it, and I think that's kind of what you're getting at.
A: Great. It is good to flag that test vectors are clearly a priority for everyone here, so we'll put more importance on that.
A: Cool, the Lotus update: what have we been doing? For the most part, we finally put out our 1.5.0 release, which will be introducing network upgrade 10. This release was kind of ready back at the start of the month, but we wanted to test it a bit more, and then a large chunk of East Asia went on holidays, so we got a lot more time to test it. We're feeling really good about it. It introduces v3 actors.
A: The upgrade epoch is next Wednesday (Tuesday or Wednesday depending on where in the world you are); that's March 3rd, I think. It's running on the calibration test network right now; we also ran it on some internal test networks first. So anyone who wants to try it out can; obviously Venus already has a node on there.
A: Anyone else who wants to see whether you're syncing well with v3, calibration is the test network for you. That's been the big chunk of our work. The spec-actors team has started looking into the preliminaries of what v4 actors will look like, but it's still very much early days. Obviously for them, the big part of that work is one of the FIPs we're going to be talking about, which is in proofs world.
E: Cool. I have one question with respect to calibration: I know calibration net was borked for quite a bit, so I was just wondering if this new calibration net is a new network, or if it's building off of the old one.

A: No, it is a brand new network. It's about a week old at this point, so it doesn't have that much state yet.
A: No, you should be able to sync it fairly quickly. A couple of other notes on it: we're not having 512-megabyte sectors on it, because that was messing with some of the cryptoecon params, the pledge collateral calculations and so on. So it's only 32 and 64 gigabyte sectors, which is unfortunate if you're trying to mine. The second note is: we certainly intend for this to be a long-lived test network, but we feel we haven't hit the holy grail of test networks yet, because there are a couple of other, mostly cryptoecon-type params that we want to get perfect. So the goal is to keep this network alive for a while, learn from it, and then, based on those learnings, create the test network that basically stays alive forever, unless something goes horribly wrong.
A: But yeah, calibration is the public test network for that; it's a pretty good resource. And we threw the faucet behind ten layers of "are you a real human being?" guards so that people don't drain it, and then you get messages in the Slack like, "hey, can someone please give me fake Filecoin money on the test network?"
A: Okay, maybe let's jump into a discussion of FIP-0013 first, because I know there are folks here who want to talk about that, and then we'll talk about giving future network upgrades more lead time. Nicola, do you want to take the lead on that, or Wyatt, whoever wants to?
H: I don't know if Wyatt is around. Oh yeah, he is. Let me just give a high-level overview. FIP-0013 is the one that introduces aggregated ProveCommits: instead of committing a single sector with one ProveCommit, you commit multiple sectors with these aggregates.
H: The property here is, if we forget about proofs for a second and just think about gas: we pay a large overhead cost per operation just from loading state and doing other state operations. If operations were batched, we would save that cost; the cost of an operation for one sector or for 100 sectors should, in theory, be as close as possible to the same. So the goal is to shave off all possible fixed costs by paying them only once for multiple sectors.
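The amortization argument can be put in a tiny model. The numbers below are invented for illustration (they are not real Filecoin gas figures): if each message pays a fixed overhead plus a per-sector cost, batching n sectors pays the overhead once instead of n times.

```python
OVERHEAD = 30_000_000     # made-up per-message cost: state loads, flushes, etc.
PER_SECTOR = 5_000_000    # made-up marginal cost per sector proven

def gas_individual(n):
    # n separate ProveCommit messages, each paying the full overhead
    return n * (OVERHEAD + PER_SECTOR)

def gas_batched(n):
    # one aggregate message: the overhead is paid once
    return OVERHEAD + n * PER_SECTOR

saving_50 = gas_individual(50) / gas_batched(50)
```

With these toy constants, batching 50 sectors is several times cheaper than 50 individual messages, while a batch of one costs exactly the same as a single message; the real multiple depends on the actual overhead-to-marginal-cost ratio.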
H: That's the main goal. Now, the reason we didn't do this before is that there's a large technical challenge: this aggregated ProveCommit doesn't just aggregate state operations, it also aggregates the proofs, and that's what we've been working on for the past three to four months.
H: The work has been coming up with a proof scheme where the size of an aggregate of, say, a thousand proofs is much smaller than a thousand independent proofs. In our own initial benchmarks, for overall gas costs, if you aggregate 51 proofs you're already 2x better in terms of gas: if you were to send 50 proofs one by one versus 50 proofs with aggregation, you pay half the gas by sending the 50 proofs together.
H: So we believe in this, and this week we are going to do initial tests; Wyatt is working on it. There are some challenges.
H: It may be that we cannot realize the full 20x speedup, but it could be that we realize something close to it. The goal is to have, in the next week or so, some initial numbers on how much gas saving we get for aggregating 50, 100, 800-plus proofs. We also want small miners to take advantage of this, since it takes a while for a small miner to generate 50 sectors to aggregate, but one thing is almost for sure: the more the large miners use less gas, the more gas costs will go down overall, so it will benefit everybody. And, I may be selling a dream now, hopefully not, but if we have a 20x saving in ProveCommit gas costs, then the chain is close to empty. Even if we don't realize the 20x and we realize 5x, that's already a huge win for gas costs, because right now over one third of gas is spent in ProveCommit.
E: That's really cool. I have a question about that. I obviously don't really understand how aggregating these proofs works, so: when you're aggregating proofs together, does the size of the proof increase linearly, does it increase at all, or is it aggregated in a way where they're added together?

H: Good question.
H: So, we have a presentation that I assume is going to be out soon; it's an internal presentation that describes the theory of this, and we have a paper that describes it in more detail. But the idea is that you independently generate 50 proofs, as you do today.
H: Fifty ProveCommit proofs, ten partitions each, so 500 SNARKs in all. What you do is run a special operation on these 50 proofs which creates a new proof, and this new proof is not linear in size, it's logarithmic in size. This means that the difference in final proof size between aggregating 50 proofs and aggregating 8,000 proofs is very small.
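The logarithmic growth is what makes very large batches attractive. Here is a quick numeric sketch; the constants are invented for illustration (real aggregate proof sizes differ), but the shape of the comparison is the point.

```python
import math

SNARK_SIZE = 192          # rough bytes per Groth16 proof
BASE = 10_000             # made-up fixed size of an aggregate proof
PER_LEVEL = 1_000         # made-up bytes added per doubling of the batch

def size_linear(n):
    # concatenating n independent proofs grows linearly
    return n * SNARK_SIZE

def size_aggregated(n):
    # an aggregate grows with log2(n), not n
    return BASE + PER_LEVEL * math.ceil(math.log2(n))

# Aggregating 8000 proofs is barely bigger than aggregating 50,
# while 8000 independent proofs are 160x bigger than 50.
ratio = size_aggregated(8000) / size_aggregated(50)
```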
H: The numbers work out really nicely, and so in practice verification time is faster, because you don't verify linearly in the number of proofs, and the proof size is smaller. I am sending here a notebook for those who want to dive into a little bit of preliminary numbers. This is all public: all of this work is in the FIPs repo, apart from the doc that we are compiling on the new proof, and you can follow the FIP discussion and see Nemo's work on proofs, so you can already test these and play with them. In the link I'm sending there is the aggregate gas total, which is the total gas spent, including both the size of the proofs and the verification time.
A: Quick question, similar to the one Eric had. As you mentioned, this is kind of an obvious thing to do, and we would have liked to do it earlier, but aggregating the proofs was the challenge, so that it's not just linear, like you're just sending 8,000 proofs in one message. Was there a single "aha" breakthrough that made that possible?
H: One ProveCommit is ten SNARKs, and we were trying to find ways to reduce those ten SNARKs, to come up with new solutions for reducing proof size so we would have one instead of ten. But this technique didn't apply; it didn't work for 10 proofs, it only works for 50-plus proofs. So we shelved that line of work, it didn't work, and then we had a chat with several people.
H: We were planning to solve this very problem with Halo, and then, while talking with several people, we went back to the drawing board and said: we could use this not to aggregate 10 proofs, but to aggregate large numbers of proofs. This wasn't a problem before launch; we didn't know that ProveCommits would be so wildly popular. Maybe some people did, but the research team wasn't thinking of it as an imminent problem.
H: This builds on work produced by Mary Maller, Benedikt Bünz and others, who did 70% of the work, but it couldn't be used directly in Filecoin, so we had to spend the past three to four months adapting it for Filecoin. And it doesn't just work for Filecoin: this is a win for the SNARK space in general, for the blockchain community in general, because a lot of people use the proofs that we use, Groth16 SNARKs, and now whoever has Groth16 proofs can use the same libraries to aggregate proofs. We believe that other groups will take advantage of this.
B: Yeah, one more question about this. The aggregated proof is for one miner, and as you said it will really benefit the big miners. So my other question is: is there a way for one controller, one account, to aggregate multiple miners' ProveCommits together?
H: This is a very good question, and we believe it's going to be the next step. We first needed to come up with a way to aggregate proofs at all; once we can aggregate proofs, we can aggregate across miners. The way we are approaching this is the following: step one is to drastically reduce gas used in the network, and just having per-miner aggregation already solves a large part of the problem.
H: For example, we could in theory go from 3 million proofs per day to maybe 100,000 proofs per day, and doing operations across miners is the next step to go from 100,000 proofs down further. So it's just a matter of what was faster for us to ship, because per-miner aggregation is just building software that aggregates per miner, and the miner already has software to do proofs and aggregation.
H: There is an extra protocol that we will need to create for coordination across miners, and there will be some incentives there and slashing conditions. That's extra work that would add a delay of a few months, and we want to get this out as soon as possible, but it is the next step, and I believe there will be an economy of aggregator nodes.
B: Okay, yeah, I think that will be very great, because we're doing the distributed mining pool support, which will support multiple miners, with one owner or one worker serving other miners. That would be very cool if you can do that.

B: And another question: in one aggregated proof, if one of the proofs fails, will all the sectors' proofs fail, or can we distinguish that one and have the others succeed?
H: So, given some set of proofs, you can only generate the aggregate if all of them are valid, so there is no way for the aggregate proof verification to fail that way. There are some edge cases that could happen, though. For example, say one of those sectors was already ProveCommitted, and now you submit a ProveCommit aggregate that includes it: what happens then? I think our stance is that the entire submission will fail and you will have to resubmit, but I don't think miners will ever run into that.
H: In general, we've been working on the theory to make this work, but the protocol change is up to the community: to choose the best way forward, the best set of edge cases, and how to handle them.
A: Oh cool, this is great work, and it would genuinely transform the Filecoin network, which is amazing. Any other questions? All right. So this is certainly not one of the FIPs that we can quickly talk through; I'm sure there'll be lots of discussion, questions, and design choices that we'll have to make over the days and weeks. But for now, we've talked about it and we're aware of it; think about it some more and raise questions on the FIP as they come up.
A: This is great work, thanks Nicola. Cool. Sorry, jumping back to the agenda: I just realized that I totally skipped the folks from the Filecoin Foundation. Was there an update that either of you wanted to provide? I really apologize for that.
G: Yeah, we actually don't have any updates, but we're thrilled to hear everything. We will say that we do have a regular update that we share with our board and a few folks in our ecosystem, so if there are any top lines that you'd like to amplify, please feel free to pass them along and we'll make sure they get shared with our community.
A: Very nice, thank you. I think we're going to have some exciting announcements in the weeks ahead, so definitely keep that in mind.
G: Cool. I guess the other quick update is that we are revamping our website, and we're trying to build out a developer-specific portal. So if anyone is interested in providing insights on how we can create a great experience for any developer entering the Filecoin ecosystem, or wants to learn about some of our existing work, we'd love to set up separate one-on-ones so we can brainstorm what's useful and what's not, especially as it's relevant to this group.
I: Yeah, just to jump in with one other thing: we are also hiring, so we'll put the jobs that we're hiring for in the chat, in case anyone has people they want to recommend for growing out the Filecoin ecosystem.
G
And we have the jobs here, so I just shared it in our chat.
G
Yeah, so we actually have an alias that reaches everyone at our foundation. So, you know, if it's a broader ask, it's easier to just go to team@fil.org. For Megan and I, it's just our first name at fil.org.
A
One other thing, on a very different tangent: one thing that I wanted to mention in the 1.5.0 update that I forgot. As you know, this implements FIP-0010, which is accepting WindowPoSts optimistically. So as part of that, we needed a dispute process to dispute a bad WindowPoSt, because it shouldn't have been accepted. So we built that into Lotus, and there's a lightweight process that tries to validate every single post, and if one fails, it then sends a dispute message.
A
So just something for the implementers to keep in mind: you might want to build one of those for each of your implementations as well. There's a lot of scope for improvement too. So yeah, it's something to be aware of. It's not a core part of any implementation of the Filecoin protocol, but you also kind of might as well, because it's pretty easy to do. Cool, all right.
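The dispute flow just described can be sketched roughly as follows. This is a language-agnostic illustration in Python; every name here (`Post`, `find_disputable`, the `verify` callback) is hypothetical and does not correspond to a real Lotus API. A real implementation would call into its proofs library to re-verify and push the resulting dispute message through its message pool.

```python
# Illustrative sketch of the off-chain side of an optimistic WindowPoSt
# dispute process: re-verify every optimistically accepted proof and
# collect the ones that fail, which are candidates for dispute messages.
from dataclasses import dataclass
from typing import Callable, Iterable, List


@dataclass
class Post:
    miner: str     # miner actor address (e.g. "f01000")
    deadline: int  # deadline index the proof was submitted for
    proof: bytes   # the optimistically accepted proof bytes


def find_disputable(posts: Iterable[Post],
                    verify: Callable[[Post], bool]) -> List[Post]:
    """Return the posts that fail off-chain verification; a node would
    then send one dispute message per failing post."""
    return [p for p in posts if not verify(p)]
```

In practice the loop would run continuously over each proving deadline as it closes, within the window during which disputes are still accepted on chain.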
A
The last thing that I wanted to talk about was how we want to do network upgrades in the future, and I mostly just want to hear what people think.
A
So I was gonna say that this is the last upgrade that we do as just the Lotus team, but that's not even true, because we have other implementations that will be crossing the upgrade epoch with us on mainnet, which is fantastic. So we definitely need to change the way we plan these things and so on.
A
So this is probably the forum in which we'll be discussing a lot of these things, as well as async in the fil-implementers channel: you know, when we want to start planning for the next upgrade, what we'd want to get into it, what FIPs are landing, and so on. So what do people think? What are concerns that people have? What do you think the process should be? I want to get opinions here.
E
I mean, I feel like a lot of the pain points have kind of been touched upon already, primarily with respect to testing and changelogs and that kind of stuff. It kind of feels like a lot of the big things are obviously, you know, communicated in FIPs, but I feel like a lot of the smaller things aren't really documented.
E
So I feel like just details like that are quite nice, because, just to give some context, the way that I've been updating our actors implementation, for example, is: first of all, you know, I go into GitHub and I do a tag diff between, like, v3-point-whatever and 2.3.4, and then I look at the files that have changed a lot, and I kind of prioritize based on which actors have changed a lot and which ones haven't. And then, you know, I just fire up VS Code and I do a diff on basically every file and go line by line and see all the changes. And that's, I mean, quite a tedious process, which, I mean, it makes sense.
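The churn-based prioritization described above can be approximated mechanically. A minimal sketch, assuming input shaped like `git diff --numstat` output (added count, deleted count, and path separated by tabs); the function name is made up for illustration:

```python
# Rank changed files by total churn (lines added + deleted) so the
# most heavily modified actors can be reviewed first.
from collections import defaultdict
from typing import Iterable, List


def rank_files_by_churn(numstat_lines: Iterable[str]) -> List[str]:
    """Given `git diff --numstat` lines ("<added>\t<deleted>\t<path>"),
    return the paths sorted most-changed first."""
    churn = defaultdict(int)
    for line in numstat_lines:
        added, deleted, path = line.strip().split("\t")
        churn[path] += int(added) + int(deleted)
    return sorted(churn, key=churn.get, reverse=True)
```

One could feed it `git diff --numstat v2.3.4..v3.0.0` to get a first-pass review order, before doing the line-by-line diff pass Eric describes.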
E
It's, you know, a very core part that changes, and that's cool, but yeah, it'd be nice to have a little bit more documented with respect to what has changed, especially the small stuff, because that's the stuff where you're gonna fork, you know, when you're on mainnet. So things like that would be ideal. And, I mean, just going back to the release
E
cadence thing: it would just be nice to have a longer heads-up, I guess, so, you know, we can all kind of agree upon a launch epoch. I mean, obviously that wasn't super important for v3, because at the time, you know, all the stuff was implemented, like, not.
J
I wanna also kind of expand on Eric's point. Hey, I'm Hunter, by the way; I joined the call a little late, so I just want to introduce myself. I joined the Forest team at ChainSafe, and one thing we've been discussing is instrumenting the codebase so that we can get better insight into, you know, things that might be edge cases or things that might behave a little differently.
J
I think one fear I might have is an edge case slipping by unnoticed or whatever, and I mean, I know they're obviously probably not going to... it's going to be very noticeable, I'm sure, but you know, there's always just some worry: are we able to, you know, see everything work the same? And obviously, I mean, the consensus protocol is going to be what determines... you know, the reference implementation is going to be
J
what determines, you know, the consensus in practice. But I think, you know, just having better insight and kind of notification and understanding, you know, throughout this kind of rollout process is going to be important.
J
So, at least in Forest, we generate a lot of log output, and it's not always clear, you know, what's important and what's not. There are definitely, you know, tracing frameworks and things to investigate to see how things are working. Can we get a dashboard? You know, Grafana... I used to work at Influx Data, so I'm a little partial to that, but Prometheus is also good as a time series database and all sorts of things.
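As a rough illustration of the instrumentation being discussed, here is a minimal, dependency-free counter registry that renders Prometheus' text exposition format, the format a node's `/metrics` endpoint serves for Prometheus or Grafana to scrape. A real node would use a proper client library (the official Go, Rust, or Python clients); this sketch only shows the shape of the output, and the metric name used below is invented for the example:

```python
# Minimal sketch of a metrics registry producing Prometheus'
# text exposition format ("# TYPE <name> counter" / "<name> <value>").
class Counters:
    def __init__(self):
        self._counts = {}

    def inc(self, name: str, amount: int = 1) -> None:
        """Increment a named monotonic counter."""
        self._counts[name] = self._counts.get(name, 0) + amount

    def render(self) -> str:
        """Render all counters in Prometheus text exposition format."""
        lines = []
        for name, value in sorted(self._counts.items()):
            lines.append(f"# TYPE {name} counter")
            lines.append(f"{name} {value}")
        return "\n".join(lines)
```

Exposing structured counters like this, rather than relying on log grep, is what makes the Grafana-dashboard approach mentioned above workable across implementations.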
F
Yeah, just, you know... I think we'll write a little bit more on this. We actually have some experience, as we have been running a couple of other networks. By "we" I mean Soramitsu, and you can get in touch with us in private messages. So yeah, basically, the approaches you have mentioned, with Grafana and other monitoring tools, are quite good, at least from our experience. So this is something we are using for our different solutions, and this is something we are planning to use here.
J
Fantastic. And Maxim, where do you work, just to be clear?
A
Cool, yeah, that sounds like an interesting thread to maybe have in fil-implementers or something; I'm sure other people will weigh in or be interested in the conversation.
F
Yup, just my opinion on the question you asked: I actually quite agree with Eric. The process he described for tracking changes from version to version is pretty much the same for the ChainSafe guys; we're going through all the changes that were introduced into Lotus.
F
So if we have a calibration net or test net running for some time, we could say that, okay, when there's a version switch, every version runs on the calibration net for, for example, a week, and we have a suite of test cases that we want to run on that calibration net, and this would be our quality gate to proceed with the rollout to mainnet. I think it would be a good enough solution, at least to start from.
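The quality gate just described could be expressed as a simple predicate: proceed to mainnet only if the release has soaked on the calibration network long enough and every agreed test case passed. A sketch with illustrative thresholds; the one-week soak and the test names are taken from the discussion, not an agreed process:

```python
# Sketch of a release quality gate: pass/fail per test case plus a
# minimum soak period on the calibration network.
from typing import Dict


def rollout_gate(test_results: Dict[str, bool],
                 days_on_calibration: int,
                 min_days: int = 7) -> bool:
    """Return True only if the release soaked at least `min_days` days
    on calibration net AND every test case passed."""
    return (days_on_calibration >= min_days
            and len(test_results) > 0
            and all(test_results.values()))
```

Encoding the gate explicitly makes it auditable across implementations: each team can run the same suite and compare the same boolean outcome.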
E
Yeah, and I mean, I have no idea how the Ethereum folks do it, but I think it'd be a great idea to, you know, chat with the people who do the testing and the hard-fork coordination on eth1 and eth2. I know eth2 is going through their first hard fork right now, or is in the process of, you know, planning out their first hard fork.
E
So I think it'd be a very valuable conversation to have with some folks there; I'm not entirely sure exactly who, but yeah. Ethereum's obviously been around for so long, and they have a general-purpose VM and everything, so I'm assuming upgrading and all that stuff for them shouldn't really be that easy. They might have some good insights.
A
Yeah, that's actually a fantastic idea. Like, we're not the first team to run into this problem and won't be the last team to solve it. That's a great suggestion.
A
She always wants to let her opinion be known. Okay, cool, so quick notes that I have. Yeah, thanks for all the feedback; like I said, this is just an information-gathering exercise, and we'll kind of iterate on it and get to a proper process. A quick note that I kept was: yeah, the FIPs do a good job of capturing the big changes, but a changelog of all the smaller changes in actors code would be very helpful, and maybe flagging little things in the fil-implementers channel as well.
A
Slow down the release cadence a bit; agree on a launch epoch together, which we'll definitely be doing; run upgrades for at least a week on test networks; and have a suite of test cases that we plan on actually exercising on the network. And yeah, maybe chat with the Ethereum folks and figure out kind of what they do.
A
One thing I'll say is that we are eager to not have this be a one-man show, or a one-Lotus show. So whatever we have to do, including things like slowing down our process, always keeping security top of mind, we are more than happy to do, because, you know, a lot of work has gone into all these implementations, and we're eager for everyone to join for the good of the network itself. So we'll take everything
A
that's being said, and again, for sure. Cool, Molly flagged a question in chat: does anyone want to discuss FIPs issue 56, which is the proposal on the extension of the v1 proof sector lifecycle?
A
I'm not sure how much context everyone here has on this, but it's probably worth discussing, so maybe a little bit of background here. We launched Filecoin, we launched the network, with v1 proofs, and found a somewhat minor, or somewhat serious, vulnerability, depending on who you ask and what their priorities are, in the v1 proofs. So back in November last year, I think this was the third network upgrade post-liftoff, if I remember correctly.
A
We essentially bumped the proof version, so we introduced a fixed version that we called v1.1 and disallowed adding any new sectors with the old, vulnerable version: all new pre-commits have to be with the new, secure v1.1 version type. All of that was fine, and the network upgrade was successful. The other thing we did was disallow extending the sectors that were sealed with the vulnerable proof type, because we were like: these sectors are vulnerable.
A
We don't want them to live on the network for any longer than they need to. And so that's kind of the status quo, and that's what's been running on the network since back in November. Understandably, those miners who cannot extend their v1 sectors are unhappy and affected by this. Steven, you're one of them, or your group is one of them. So folks want to ask if they can extend those v1 sectors, and honestly, it's a tough decision.
B
Yeah, there's one thing, okay, and the other thing about the background I want to mention here is: in the miner community, there's actually lots of discussion about this, and yeah, it's understandable that many big miners want to have this extension of capacity.
B
Yeah, they had started this sealing work before the v1.1 change landed, and they have perhaps, I think, 200 PiB, yeah, maybe more. I'm not so sure, because I didn't dig into the data for the full details, but I can do that, perhaps next week, and check the chain about that. So, okay, and it is also understandable that, yeah, there is some security issue, or something about fairness, here.
B
So after thinking about this, I have another approach that perhaps could, yeah, be a trade-off. We could consider allowing the v1 sectors, yeah...
B
We would allow v1 sectors to extend to a longest expiration of 540 days only, instead of extending much longer; yeah, only one and a half years. And it will be much fairer for the miners, because some miners have these sectors with lifetimes of one and a half years, but some others only have half a year left on those sectors, so yeah.
B
If there's a security issue, yeah, it will actually affect all of these sectors instead of just a portion of them, and we also can protect against this: for example, after one and a half years, all of these will be expired. So maybe that's not a big issue with respect to, yes, security or something like this, yeah.
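The 540-day cap proposed here can be stated precisely: an extension request on a v1 sector would be granted only up to 540 days after the sector's activation. A sketch using Filecoin's 30-second epochs (2880 epochs per day); the function name and signature are illustrative, not an actual actors API:

```python
# Sketch of the proposed extension rule: cap a v1 sector's total
# lifetime at 540 days from activation, whatever the miner requests.
EPOCHS_PER_DAY = 2880  # Filecoin epochs are 30 seconds: 86400 / 30


def max_new_expiration(activation_epoch: int,
                       requested_epoch: int,
                       max_lifetime_days: int = 540) -> int:
    """Return the expiration epoch actually granted: the requested
    epoch, clamped so the sector never lives longer than
    `max_lifetime_days` past its activation."""
    cap = activation_epoch + max_lifetime_days * EPOCHS_PER_DAY
    return min(requested_epoch, cap)
```

Under this rule, a sector sealed with a six-month commitment could still extend, but only up to the same one-and-a-half-year horizon as everyone else, which is the fairness trade-off being proposed.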
C
I think that's an interesting suggestion. We should get some feedback on that one from the community as well.
C
I know it's a pretty contentious discussion where there are clearly thoughts on both sides, and so we'd love to get more of a sense from the community on where folks are landing. Another thing, just in general, from a principles perspective: where there are sectors that are broken and that we don't want to stick around on the network, avoiding having those sectors stick around, and having them move off the network as they expire, is the optimal.
C
We have been brainstorming about this, about what might be able to be done, and something that we want to investigate, and encourage the community to investigate, is: is there a way that we could upgrade v1 sectors into v1.1 sectors, which would allow kind of holding onto the committed capacity while avoiding having broken proofs on the network? That seems like a compromise as well.
C
That might be a good fit for folks who are concerned about that expiry. We haven't investigated this one yet, but I think that's another option; just, from a principles perspective, if the goal is to help get these proofs moving over to v1.1, that might be a thing to look into.
E
I think this is more of a fairness issue than anything, because, I mean, personally, from a technical standpoint, I feel like just letting these sectors die is probably the easiest thing to do. I feel like, you know, trying to convert the old proofs to the new proofs probably just requires a bunch of resealing and recomputation of proofs anyway, which is pretty much the same thing as just sealing new sectors at that point. What I am curious to know, though, is what percentage of these sectors are,
E
you know, from the old proofs and are, you know, scheduled to terminate, so we can see, like, you know, the impact on the storage power of the network and everything with respect to the security and all that.
A
Yeah, so the short answer there is: everything before November 24th, I think, was sealed with the old proofs, and basically everything after was sealed with the new proofs, and on November 24th, I think we'd just passed one exbibyte of storage. But I'm not sure, off the top of my head, what percentage of that was six-month sectors that will be expiring, you know, somewhat soon, and what percentage was... because, I think, basically everything
A
that's not a six-month sector was a one-and-a-half-year sector, for the most part, on mainnet, in which case it's expiring over in 2022, and I don't know what the breakdown of that is. I think the majority is definitely one-and-a-half-year sectors, but yeah.
A
I also agree with your first point; I do think, yeah, from a technical point of view, there isn't that much to say, because, yeah, the sectors are vulnerable, but they're not horribly broken. If it were a showstopper, then there wouldn't be a conversation to have. So it is very much a decision about fairness more than anything else. I do agree that there's also just a bit of a bad look, you know; it looks bad in code
A
if you have, like, you know, something that's re-enabling bad sectors or something like that. But yeah, I think that's very much a decision on what is the fairest thing for everyone here. It's not a great situation to be in, and, yes, I kind of think everyone gets to weigh in, and then, yeah, it's a governance decision that needs to be made.
A
I think allowing the six-month sectors to extend to the longest vulnerable sector lifetime, and then allowing them all to die out, is a reasonable thing. I think upgrading them would be great, if we somehow easily could, because it doesn't have to be a full upgrade to v1.1; it could also just be, you know, doing something to confirm that people still have the full storage for these vulnerable sectors, so yeah.
C
I'll just flag that, Eric, to your point: it's not really an implementer thing; a Lotus or a Forest or a Venus is not where the complexity is. The complexity is more on the security aspect of things, because security things are all margins of error, and how rational is this versus something else. And so, definitely, with the v1 proofs, we were not comfortable with that margin of error.
C
Otherwise, if we were comfortable, we wouldn't have gone through the work of moving to v1.1 and making that sort of a change. And so we definitely do not want those sectors persisting on the network, because they don't meet our security expectations.
B
So yeah, I have this proposal because, yeah, I discussed with many miners, and some of them actually really do have some sectors with only half a year of lifetime left. Yeah, maybe it's okay to just allow them to extend to one and a half years, because most of the others will have one and a half years, yeah.
B
If the security issue is there, well, there are many, many other such sectors, and yes, they will all be expired after about half a year more, so yes, that is a trade-off, and it also allows them to have this kind of extension instead of extending a few years. Yeah, maybe when we have more data, and I can actually get this data, we can see if we can make a decision based on that.
A
Yeah, that would be super useful, thank you. Thanks, Steven. Yeah, the other thing is: what is the right place to be discussing this? Obviously we can talk about it here, and I'm glad to get the input of people here, and there's lots of discussion in FIP issue 56, but yeah, I don't know what the right forum is, because we kind of want to get as wide a range of opinions as possible, so that's something to figure out as well.
C
Yeah, I mean, definitely the conversations here are normally about the technical implementation of things. More than that, I think the data we're surfacing here is: what are some of the options that might be feasible, bringing together the different implementations and the folks on the protocol research side for the security considerations and stuff like this; this is an opportunity to at least put information on the table.
C
But when it comes to actually trying to survey and understand: before we go forth and implement a FIP, we should have a good understanding that the community is behind it. Otherwise, you know, they're not going to accept a FIP upgrade, and we've done wasted work. And so I think we should continue to have the conversation in places like the FIP repo, and, you know, aim to pull that information from Slack. We've actually been working on a polling tool that would enable people to do signed voting.
C
I don't know if it's quite ready yet, or if it'll be ready in time for this issue in particular, but I think that could also be a channel to get some more feedback from other forums. Where, again, we don't want to be having the contentious governance decision here; this is about implementation, but we want to make that more available to people in async channels and then discuss outcomes from it in this sort of venue.
G
Yeah, and I was gonna offer, from the Filecoin Foundation side: I know there's a number of questions, like frequently asked questions, that we've kind of gone back and forth on explaining. We can add a section that covers a lot of these questions as discussed, and link to the relevant GitHub issue threads where they have been answered, because it does provide a little bit more transparency on the how versus the assumptions. And I know sometimes these calls might not always be the easiest place for some people to go back and listen to.
G
So we can definitely pull that together from our side and put it up in a very transparent way; just the top questions that might be related to technical implementation.
A
Really good, yeah. And Molly, that polling tool you described sounds great as well; I feel like that's pretty much exactly what you want in a situation like this. But yeah, we'll see if we get it ready.
A
Cool, any other questions? Sorry, we're quite over time, but any other things that people wanted to raise?
C
Oh, maybe a tiny flag: I believe this meeting is going to be changing time sometime soonish, or at least just for people in other time zones. We're gonna hit daylight savings time here in my time zone in, like, two weeks, which I believe means that not our next core devs biweekly, but the one after that, might change time for some people at some point.
A
Yeah, yeah, good flag; I totally didn't think of that. Yeah, I think keeping it at the current time, so allowing it to move one hour forward for people that don't observe daylight savings, is probably the best thing to do, because I assume earlier is better for everyone that's not in North America, just because it's closer to the actual workday.