From YouTube: Filecoin Core Devs #38 (Meeting 2)
Description
Recording for: https://github.com/filecoin-project/tpm/issues/93
For more information on Filecoin
- visit the project website: https://filecoin.io/
- or follow Filecoin on Twitter: https://twitter.com/Filecoin
Get Filecoin community news and announcements in your inbox, monthly: http://eepurl.com/gbfn1n
A: All right, hi everyone, and welcome. This is the second meeting for Core Devs meeting number 38, today on Thursday, March 10th in the United States — I think Friday, March 11th for some of us on the call. Really excited to have everyone join us today, and ready to dive right in. First, as always, we're going to give quick updates from all of the implementation teams. As we move into this new meeting format, not all of the implementation teams are joining both calls.
A: So for any teams who are not currently present, I will be giving their updates, and if there are any outstanding questions or larger conversations to be had about those, we will transmit those via the meeting notes async after the call. We'll give very quick updates from the Filecoin Foundation, and then I want to veer into a larger conversation about all three of the different network upgrades that we have been talking about over the past couple of weeks.
A: So I want to take a few minutes to do a very quick retro and offer time for anyone to give feedback about the recent v15 network upgrade that we just went through, talk about the current thinking around v16, and provide a quick update on some of the FVM FIPs that are currently open for consideration. Then we'll turn it over to Alex North to talk about early thinking around the v17 timeline and some of the FIPs that he is hoping to put forth for consideration during that upgrade cycle.
A: That being said, very quickly: do we have anyone on from the Forest team? I don't think so.
A: All right, so I will give updates for both Forest and Fuhon. Nothing really that huge again, since we did just go through the SnapDeals upgrade. The Forest team is already working pretty much in lockstep with the FVM engineering team to prepare for v16, and almost their entire team is hands on deck to refactor the Forest code base in preparation, working on faster syncs and higher test coverage. Overall, the Fuhon team is currently working with myself and some others at the Filecoin Foundation to rescope their contract.
A: All right, we do have two attendees from the Lotus team. Ayush, Jennifer — do you want to let us know what you're working on?
A: There's quite a bit of background noise — I just saw your chat message too, so yeah, I think your updates speak for themselves. Just concluded the OhSnap upgrade, and also shipped Lotus version 1.15 with retrieval improvements, it looks like. And, as everyone I think is very well aware, the Lotus team is working with the FVM team to prepare some of the specs-actors, and the v7 actors as well. So with that we will give it over to Stephen and the Venus team.
C: Okay, yeah, for the Venus team: we upgraded the other Venus components to network version 15, and it works well. After that, the whole team is focused on Venus cluster. Venus cluster is a new component for storage providers to support sector management and also computing resource management, which could help storage providers share control of their computing resources among multiple Filecoin nodes and better utilize the resources. Yeah, we're working on this.
C: This will be open sourced very soon. Currently, Venus cluster is in progress. We have done the sector sealing and setting, and also the GPU outsourcing, and we have also implemented the PoSt function with Venus cluster. Next week we're working on implementing snap deals in Venus cluster. So yeah, this is a new module for storage providers, to manage sectors and use them across multiple nodes.
C: I would think that this could be delivered perhaps in a few weeks. Okay, I think that's all for us. Also, for the FVM: the team is trying to integrate the FVM with the current version. We have done some testing with our private network, and we also keep a close eye on the builtin-actors development. Once it is complete for version 15, we will integrate that and test with the mainnet. Okay.
A: Thanks, Stephen. Also, I copied over the notes from what you had provided and shortened them, but I do want to note, under ongoing work, just to be clear: when it says "working to implement snap deals", this is in reference to Venus cluster specifically. But I do want to confirm that Venus is synced on version 15 of the network — so that's not a concern, and it's not still in test, right?
A: Perfect, great. Any other questions or comments for any of the implementation teams?
A: All right, so updates from the Filecoin Foundation, very quickly. Dudley is not able to join this version of the call, but there are no outstanding security issues to flag — no news is usually good news — and, of course, Dudley is still working with the Sigma Prime team on the Filecoin fuzzer project, and working through a full audit of the Fuhon code base. In governance, we have a lot of FIPs: both draft PRs open in the repo, as well as those that have been merged into the repo, and many discussions in our discussion forum. We're going to be talking through some of these today, but a few reminders about materials. One, for core devs: we do always recommend that for all of these drafts you take at least a cursory glance at the material that's there. Although it would be great to get feedback on all of the drafts, we know that's not always possible, but it is encouraged — and that goes even for TPMs.
A: Anyone on your team is welcome to take a look, and over the next couple of weeks in particular there are going to be a lot of very, very detailed drafts, some of which Alex will talk to us about later, and we will be specifically calling attention to those in future meetings. So if you have not had time to take a look at them, again, I highly, highly encourage that you do so.
A: Through this last network upgrade cycle, some of you — I think Alex and Jennifer in particular — have left feedback, so I'll get to that as soon as I can, so we can incorporate some of those changes and ask some of those outstanding questions, or answer some of them for you. Additionally, I also reached out to all of the implementation teams over the last week to have a conversation about capacity planning within your teams internally.
A: This was in relation to the contract scoping we're undergoing for Fuhon, but we do now have a very nice document outlining the number of staff that each team has, the internal resources at your organizations that you're relying on to help with marketing, communications, product, etc., as well as links to all of your project management resources. So I will reach out one-on-one after this call and offer you access to that. Unfortunately, all of that is collected through Notion, so it is not broadly accessible at the moment, but just an FYI that, as we talked about this, there will be follow-up for you to take advantage of. Any questions?
D: Sorry, Caitlin — do you think I could give a quick update on the Filecoin fuzzer, since it's mentioned in the security section? Cool. I haven't actually spoken with Dudley in quite some time, so hopefully I'll catch up with him next week, but we've made good progress. We now have the ability to call different functions through FFI integration — foreign function interfaces, you know, calling the Go implementation and the Rust implementation. Our fuzzer is currently being developed in Rust, just so we can leverage our internal expertise from Lighthouse.
D: The Rust implementation of the Ethereum consensus client is our baby, so yeah, we thought we'd leverage the team's expertise in Rust. That allows us to hopefully be able to spin up arbitrary traits and — sparing you the details — gives us the ability to do structural fuzzing, which should hopefully help surface some interesting behaviors.
D: At the moment we have about five or six fuzzing targets ready to go — things like block header processing, gossip blocks, election proofs, signed messages. We're catching a couple of crashes, so we will be investigating those internally before knocking on the client teams' doors and wasting their precious time. But yeah, the first phase of the project was really trying to get an understanding of the overall design of the fuzzers, and we're happy to report that we're very comfortable with the approach that we've taken. So yeah, stay tuned.
D: A lot more fuzzers will be written up over the coming days and weeks, and hopefully we'll be able to surface some interesting bugs for you all. The one question that I may have, if that's okay for everyone, is whether we can essentially lock in certain commits or tags in your repositories, just to make sure that we're hitting the exact same version of the state transition function.
D: I'm not too sure whether we want to dive into the details here, or if the actual client teams can reach out to me directly, or perhaps coordinate through Dudley. But it's very important for this project that we're hitting the exact same version of the implementations.
D: Obviously, as Caitlin was explaining, there's a bunch of upgrades in the works. I'm not entirely sure if these upgrades have been merged into the stable branches of each client team, or if you're developing this on a separate branch. We have so far chosen to go with the tip of your default branches — please let us know if we should be targeting something else instead. That's about it; happy to answer any questions people may have on these activities.
B: Yes, I can speak to Lotus and say definitely, yeah, we can work with you to pick a tag — probably our most recent stable release would be the one to use. And, as mentioned, we can talk in Filecoin Slack; there are channels where you can reach all of the implementers simultaneously, which would be good. I have a question, but I don't understand how fuzzers work — they tickle a weird part of my brain. So do you simultaneously target all of the implementations?
D: Great question. So the way it works: we are instrumenting the actual code bases that we target. We have written a piece of software that effectively calls, through foreign function interfaces — like C links — the Go client and the Rust client. So far we are only targeting Forest and Lotus, right, and this is how we're playing it. So it is white-box, coverage-guided fuzzing, as opposed to sort of black-box fuzzing, where you're spinning up nodes and hitting them with random traffic.
D: Hopefully that answers part of your question. The other thing to keep in mind, as I said: because we're leveraging Rust and Forest, we will be able to instruct the fuzzing engines to format the actual fuzzing input properly, meaning that we will not just be fuzzing these implementations with random bytes. We will actually be able to, as I said, instruct the fuzzers to parse the actual bytes into sensible, relevant consensus objects. When we're hitting the state transition functions, we will also be sending junk, so that we effectively try to fuzz the parsing or unmarshalling functions of each client. But it's very important to reach as many code paths as possible, so we find that structural fuzzing is quite key in that sense. Hopefully that answers it.
A: I also have a question, actually. So you're developing this initially for Forest and Lotus, but I'm not sure how specific the fuzzing itself actually is. The Venus team is also written in Golang — I just messaged Stephen, but I assume they have a lot of almost identical, if not the same, CLI calls, for example — would they also be able to use it?
D: The current plan, or current agreement, that we have with the Foundation is to solely focus on the two clients that I mentioned, but we did build this platform to be modular, and we can, in the near future, plug in any implementation, really. I think we deemed that Forest and Lotus were, at least then, perhaps the most mature clients for conducting this activity, but yeah, we're certainly open to adding more in the future.
A: I think I remember you presenting that as part of your roadmap a couple of weeks ago too — the modularity. That's great. I think we should have Dudley make sure to give an update on the next call at least, and if you need any help or support when the fuzzer is live or available for testing, we can be in touch.
D: So yeah, just to give a heads-up to everyone: obviously we are dealing with a live network with hundreds of millions, billions, of dollars' worth of value. These activities are usually happening on private repositories, so we will be giving access to the relevant people, but for everyone listening to this: you may not be able to run the fuzzers yourself, for obvious reasons.
A: All right. So, as mentioned — and I know we're a small group, so I think this will be a relatively quick conversation — we did just want to have space for anyone to provide any feedback about the recent version 15 network upgrade. This is the first core devs call that we've had since making the upgrade, and Jennifer was kind enough to provide this really wonderful little graph for us, to show that there have actually been some snap deals already made on the network.
A: Additionally, earlier in our first meeting of the day, she made a comment about how a lot of these deals are also being made with verified client data through the Filecoin Plus program, which does make sense. If anybody is interested in understanding a little bit more about the dynamics of that data in particular — as an aside — we'd be happy to extend invites to the Filecoin Plus governance calls, which do provide a lot of really great statistics about the onboarding of certain client data sets onto the network as well, which may be interesting as your teams think about or continue this work.
A: All right. If it's okay — I think Jennifer and Ayush are in a restaurant right now — I can just kind of copy through their comments from this morning. They mentioned that their team felt the greatest squeeze when preparing for v15, in getting the Lotus release candidate ready to actually go, and transitioning from being on a test network working through bugs to actually being ready to release, in sync, on mainnet, with snap deals implemented.
A: I think that was a challenge for a lot of people, but do know that there is an initial test plan that's been released for v16, where quite a bit of priority has been placed on actually making sure that there's enough time for test networks to be run. I think, from when Jennifer and I last spoke, there's about five to six weeks dedicated solely to testing with the FVM FIPs on the network.
A: All right, which of course brings us to our next network upgrade — of course, we're already thinking about v16. I did want to provide links, one to that test plan I mentioned — if you'd like to take a look, you're welcome to — as well as a confirmation of the current timeline. We had proposed a timeline that was actually two or three days earlier than this at the last core devs call.
A: There are three FIPs — 0030, 0031, and 0032 — and if you'd like to take a deep dive into those three FIPs specifically, as well as how they work together — and, very particularly, how the team has worked on the gas model adjustment for the non-programmable FVM, which is the newest FIP and the only one that Raul has not discussed in detail with core devs previously — I would recommend watching the recording from this morning's meeting when it's available, or checking through the core devs meeting notes, also when they're available, very likely early next week, so that you can make sure that you're up to date with all of the thinking and planning going into these three FIPs. Any questions?
A: All right. If you are in the FIPs repo, you will notice that there's actually a fourth FVM FIP. It's currently unnumbered — so it's FIP-NN — and this is also an additional gas model adjustment, but it is for the programmable FVM, which will not be implemented until the M2 timeline, occurring later this year. So it will sit open in the repo for a significant amount of time, and it is important to realize that the priority is very much on the first gas model adjustment.
A: Okay, and as a final note related to FIPs — sorry, I know this is a lot of information; again, all of the links and everything will be present in the meeting notes after today's call — we are also newly reconsidering FIP-0027, which has not yet been accepted, but which has been in the repo for a couple of months now. This is something that we had talked about when we were initially planning for the v15 network upgrade, and it has to do with changing the deal proposal label type to bytes; it's currently a string.
A: This is one of those things that seems deceptively simple but was put on hold, because it requires some pretty significant architectural changes. But there's been some additional work done, because it's believed that this is actually a really important change to make when preparing for the FVM long term. So if you look in the chat, Jennifer shared a link to an updated proposal on this FIP, and we are hoping that we can actually, potentially, plan for this FIP to be implemented during network v16 as well.
D: Any questions? I'm just curious whether client implementers will be implementing their own FVMs. So, specifically for the Golang clients: is the plan to have a Venus implementation of the FVM and a Lotus implementation of the FVM, or are you collaborating on a Golang implementation that will be shared across these two clients?
B: That's exactly right — well, only one reference FVM is the plan. And so, speaking on behalf of Lotus, which is based on Go: we're calling it through FFI. The reason we're calling it the reference FVM, and not *the* FVM, is obviously because folks are encouraged to kind of use it as a spec to go and implement their own, better versions of the FVM in other languages and so on.

D: Gotcha, thanks.

B: Right — there's only one in development by the folks on this call, let's say, to be very clear. Cheers.
A: All right. And as an additional note, again: when we talk through FIPs, planning, timelines, etc., it's a lot of information. Again, the meeting notes from this meeting are important, but beginning this Friday we're going to begin publishing long-form governance updates in the FIPs discussion repo every single week.
A: So of course there will be links to this shared in Slack, as always. But if you're hurriedly taking notes, or tuning all of this out because it is a lot of information, do know that there will be a lot of reference materials built for you, in particular in support of the v16 upgrade, which will probably be here before we all know it.
A: All right, any questions? Again — going once — great. So with that, we can shift a little bit into talking very generally about v17. This is an upgrade that we're hoping to schedule at the end of the summer. This morning, Lee from the Forest team had asked about testing.
A: But again, I would encourage everyone to take a look at the test plan that was linked for v16 — for that upgrade, as well as others going forward, there is a strong priority and emphasis placed on testing. And so it's our hope that, although this timeline is still very tentative, there should be more than enough time for everyone to implement things as needed. And know that certain FIP implementations will become significantly easier once the FVM does go live.
A: That being said, before we can get to a place where we begin to talk about the specifics of this upgrade, there are still a couple of open questions, which have been highlighted in the bottom right corner of the slide.
A: The first is sort of this chicken-or-the-egg question that Jennifer posed this week, about whether or not the priority for the first upgrade after the FVM should really be on refactoring system actors to enable eventual programmability of the FVM, or whether we should actually implement FVM programmability in that M2 timeline step. So we'll continue to think about this.
A: But I think the thing that's very top of mind for me is this discussion point that Ayush brought up a couple of weeks ago, which is linked at the bottom: how big should a network upgrade be? There are a lot of FIP proposals that we're already thinking about for v17, which Alex will walk us through in just a second, but again, we'd like to ask everyone to continue to think about and weigh in on this topic.
A: It's a little bit more of a philosophical question than a strictly technical one, but it's important for us to think about these standards for upgrades, so we know how to prioritize FIPs — especially as we expect the number of FIPs in the repo, in that backlog, to continue to grow over time.
A: You have a question before we move on?

E: Sure. Sorry — this is the first time I've seen, well, I've ingested, maybe I've seen them, but the first time I've noticed those dates for the proposed v17 and internalized them. So my commentary is: they seem very conservative. Slow is probably the word I would use. I would be hoping we can move towards a world where, by the time we launch one network upgrade — so, by the time we launch network version 16 — we can enter the final testing phases of the subsequent network, so that we can have it live on the network, you know, two months later or something. I'm not arguing we should be compressing testing times at all, but we need to be pipelining our development so that, while the code for v16 is frozen and going through its various rounds of testnets, by the time it is released on mainnet, the code for the next version is frozen and probably already on early testnets and going through its final calibration-net phases. And so, that way, we still get long testing times.
E: But we can increase the pace a little bit. And, as one part of my response to "how big should a network upgrade be": if a slow pace of upgrades forces us to upgrade only four or five times a year, then the answer must be that they are big. The only way you get to make the upgrades smaller and less risky is if you're ready to do more of them, and I think we can do more of them without compressing the testing at all, just by setting the expectation that the code for one upgrade should be ready by the time the previous one has just launched. So yeah, this proposed timeline here is sort of freezing the code in June — that's a very long time away from now.
B: Yeah, so this is interesting, right. Broadly speaking, I think I agree with Alex, at least that we could get to that world — I don't know that we should — but I agree that pipelining these upgrades, so as to make them more modular, more parallel, kind of happening at the same time, so that when v16 goes live we already have v17 on an early testnet, sounds good.
B: I think the biggest problem with that — or, when I think about trying to achieve that world today, two immediate obstacles come to mind. Number one, it's just resourcing. Speaking on behalf of the Lotus team, I know that would feel difficult for us. We've certainly fallen into a pattern of: you're focused on the one upcoming network upgrade, and then you take a week to breathe, and then you start to think about future ones. That's already changing, given that we're talking about v17 and v18 right now, but I think that's one concern, just in terms of making sure we can do everything — that we have the resources for it.
B: The other problem — and honestly the one that I think would jeopardize that idea, based on the current world today — is that we're historically not very good at sticking to our timelines. Again, something that we're getting better at, but I would be concerned about the v17 timeline essentially getting continually pushed because — and I'm just using 17 and 16 as placeholders here, not the specifics of what's in those upgrades — v16 keeps getting delayed, because complications arise, or there's scope creep, or whatever else happens.
A: That's okay, yeah. I wonder also, to Jennifer's point, rightfully — Steven, I wonder if you have any thoughts or opinions. The first thing that came to mind for me was, obviously, the resource needs. I think it was actually our plan that we were going to do this sort of parallel, pipelined development work for the v…
C: Okay, yeah, for the Venus team: currently, I think we have the resources — I think we are capable — to deal with the FVM integration and also continue Venus's own development. But here, I think the question is: I see that we actually have a lot of development in upgrading the actors, and I would expect that to take a lot of the work. But for Venus, yeah —
C: It's still kind of integration, instead of development by ourselves, so that is kind of good, based on this. And also, generally, I really like that we have incremental upgrading across multiple versions. If instead we do quite a big version upgrade, yeah, it will be very hard to handle.
C: So I would say that we still need to plan for multiple milestones, or to have multiple network versions — I'm not sure what kind of content we could put in each — but I would say that it's better to have more upgrades, more frequently than before; that will help us to keep things under control. But on the other hand, I also really hope that we have the FVM ready, and have the user-programmable FVM available.
C: For example, we have these FVM FIPs — say, making the pre-commit deposit independent of sector content, and also some others — which are also very important, including separating out the market actors to a higher level instead of the system level. I would say that perhaps we should have those implemented before our last FVM milestone-two upgrade, because we want to have a —
C: — a better fundamental foundation for our system to support smart contracts. It would be a lot to have a very big change after we have this announced, when we really have the FVM ready for users to write smart contracts, and then later on have to upgrade the fundamental layer of our system. So it's kind of complicated, and I would think that we need to think about this very carefully.
A: Thanks, Steven. I think this is a great point, and a great place to start this discussion, and, as usual, I think we will continue to have this discussion over the next couple of meetings. So I'm excited to see how that changes, especially as we're thinking about making sure we have a strong foundation, and also understanding how the work capacity for FIP inclusion actually changes once the FVM does go live.
A: Sorry about that — I'm just on one computer with Zoom, and it's switching windows. But let me share again.
E: No worries. Thank you — yeah, there you go, thanks. So hi everyone, including people watching the recording. I really appreciate that the new schedule for these meetings permits me to join in person, at least for some half of the fun, now that we're spanning time zones some more. I'm looking forward to seeing how this split meeting works.
E: I look forward to reviewing the previous call, and hope that this gives a good opportunity for everyone to get the high-resolution information from all time zones. So, since I haven't been here before — it must be more than a year since I've been on a call with some of the people here — I thought I'd give a brief intro of where the stuff that you've seen me produce is coming from.
E: I'm now part of the Product Opportunities team, which is a team within Protocol Labs, as part of our CryptoNetLab. This is a special team, which I'm rather excited to be a part of, that contains both researchers and engineers, and so has the capability to go end-to-end: from problem understanding and research — fundamental, or very close to applied, research — all the way through product and protocol development.
E: The very first things we work on, of course, are very much applied-focused. The people in this team were responsible for most of the theoretical development behind Filecoin and a large amount of the programming of Lotus and the actors, and so it's a great capability.
E: We want to bring this sort of research-heavy protocol development to Filecoin, and to other web3 protocols over time as well. Our near-term goals are around the programmability of the FVM. So obviously the FVM enables programmability in the basic sense, but we want to make storage programmable — deals and storage-market-related things, programmability that is specific to Filecoin, as opposed to just porting some DeFi stuff from Ethereum onto the FVM.
E: So that's our primary focus — well, that's sort of half the team's primary focus right now — and we very much look forward to enabling the same kind of explosion of composable building blocks and protocols building on top of each other that DeFi has seen, but with storage-related primitives.
E: Unfortunately, the state machine and the actors for mainnet launch didn't pay much attention at all to this kind of programmability. We knew we didn't have a real VM, and we didn't know how it was going to be once we got one, so it wasn't worth putting too much effort into architecting the APIs for that kind of unrestricted development.
E: But now we are doing that work. And so some amount of it is what could be classed as technical debt, or just shortcuts that we took advantage of, because we could, in order to get the Filecoin mainnet launched, from which we could then iterate and learn. And part of it is, of course, that we've genuinely learned new things from the community and from watching the Filecoin network proceed.
E: Things related to retrieval: retrieval incentives, ways to take metrics of the behavior of nodes, and ways to build marketplaces for the retrieval side of the Filecoin network. These are hard problems. There's no obvious theoretical construction whereby someone can request data from a provider and, if they don't provide it, you can prove they didn't provide it — or, if they do provide it, they can prove they did.
E: And so there's really interesting protocol development going on here and, as that takes shape, I'm looking forward to us really increasing the value of storage on Filecoin, once the retrievability side of it is much more of a high likelihood. This team also previously —
E: — did the work for snap deals, like the proof upgrade side of it. I came from this team before it was named as such.
E: I also previously worked on exponential scaling for storage, although that's a proposal that's just on the shelf until we need it, and I also worked on the Fil+ subsidy. That's also deferred, because we've had better ideas since then. It's TBA whether, after we implement all of these things for programmability, the Fil+ subsidy is still high-value enough to pursue, or becomes a thing that we decide is not worth the effort. Thanks.
E
It's a joy to be working in public, so you can follow along with our work. All things from our brains go pretty much directly into public on the CryptoNetLab notebook in Notion; you get almost the raw data from there.
E
Great, all right. So here's the set of proposals that I'm associated with that are stepping stones towards this wonderful world of fully programmable storage on the FVM. I want to leave an opportunity here for a Q&A about these; I don't know if there are any questions asked asynchronously that Katelyn is going to pass on, and of course the discussion forum and the FIP posts are a great place to have these discussions as well. But just briefly, to motivate these:
E
The first proposal, setting the pre-commit deposit independent of sector content, decouples a piece of some tight coupling between how storage-power consensus works and how deal markets work, where the verified-power, quality-adjusted power mechanism depends on the content of sectors. But we don't need to know this.
E
This is an opportunity for simplicity. It's a prerequisite for later things, where there's a much stronger ripping apart of markets and mining, but this one is simplicity: we can save gas straight away from this, and we save even more gas later on as we pull things apart. The implementation is nearly complete; I'm working with Google to finish it. This is going to be zero work for client teams to integrate; this is strictly a change that happens inside the actors.
E
There is a migration involved, so at the upgrade we do a small state migration, nothing like the SnapDeals one, but all the APIs stay intact. For almost all providers, we expect that this changes nothing about their operations, because they already need to put enough funds in their miner actor to cover the initial pledge, and the pre-commit deposit is still less than the initial pledge after this change.
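To illustrate the decoupling being described, here is a minimal sketch; the constants and formulas are hypothetical placeholders, not the actual actors code, and only show why removing the content dependency also removes a cross-actor call.

```python
# Illustrative sketch: pre-commit deposit (PCD) before and after
# decoupling it from sector content. All numbers are hypothetical.

SECTOR_SIZE = 32 << 30          # a 32 GiB sector, in bytes
VERIFIED_MULTIPLIER = 10        # assumed power boost for verified deals

def quality_adjusted_power(sector_size, verified_deal_bytes):
    """Content-dependent input: QA power depends on the deals inside."""
    unverified = sector_size - verified_deal_bytes
    return unverified + verified_deal_bytes * VERIFIED_MULTIPLIER

def pcd_coupled(reward_per_byte, verified_deal_bytes):
    # Before: the deposit depends on sector *content*, so the miner
    # actor must ask the market actor for deal weights at pre-commit.
    return reward_per_byte * quality_adjusted_power(
        SECTOR_SIZE, verified_deal_bytes)

def pcd_decoupled(reward_per_byte):
    # After: the deposit is a function of sector size alone; no market
    # actor consultation is needed, which is where gas is saved.
    return reward_per_byte * SECTOR_SIZE

empty = pcd_coupled(1, 0)            # sector with no verified data
full = pcd_coupled(1, SECTOR_SIZE)   # fully verified sector
flat = pcd_decoupled(1)              # content-independent deposit
```

The sketch only captures the structural point from the talk: the decoupled deposit can be computed without knowing what data the sector holds.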
E
So this is a simplification, a gas saving, and a prerequisite for more decoupling later on. Should I run through them all? Let me run through them all, for time, and then I'll pause for questions on all of them at the end. Supporting contracts as deal clients: this is maybe the minimum bar. If we were allowed to do absolutely nothing else before enabling user programmability on the FVM, I think we'd have to support contracts being able to make deals.
E
This change adds an alternate entry point, an alternate flow, for entities that cannot make signatures to initiate deals with the built-in storage market actor. In the long run, of course, I hope that there will be many storage market actors, and they can do whatever they like in terms of enabling contracts or accounts as clients; but until then we don't have arbitrary third-party markets, and of course they'll take time to develop.
E
Allowing contracts to make deals with our built-in storage market is what I think is the minimum capability for us to enable. This is a relatively straightforward implementation. We haven't written the code for it yet, but again, the product opportunities team plans to write the code for this. Integration effort for clients is zero if they don't want to take advantage of the new capability; but if they want to expose endpoints for triggering contracts to make deals, then there might be some new calls to plumb through to CLIs, and things like that.
E
The early ideas came out in the Fil+ subsidy premium proposal, but we've now discovered ways to break that problem up and make it smaller, so that we can solve it a bit at a time. So this decoupling of the Fil+ term from storage marketplaces is a great step here.
E
This proposal leaves all of the existing workflows intact, but changes the state representation so that Filecoin Plus datacap allocations, and claims on that datacap by providers who are storing the pieces, are stored outside of the built-in storage market actor, and so can then be made available to other market actors in the future, when they're possible.
The built-in storage market has a bunch of limitations on the term that you can make a deal for that are not technically necessary; they're just shortcuts. So by making this change, we can also enable a verified client to make a datacap allocation for five years, say, even though the built-in storage market can't do a five-year deal, and then have a provider be able to prove that piece and provide that data for up to the entire maximum lifetime of a sector.
E
It's then trivial to extend that beyond five years. From the Filecoin Plus point of view, there's no reason that we couldn't continually renew a datacap allocation perpetually; on the provider side, they'd have to move the data into a new sector before that original sector expired.
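A rough sketch of the allocation-and-claim shape being described. Field names, the term bounds, and the epoch arithmetic are all assumptions for illustration, not the proposal's actual schema; the point is that the allocation's term is independent of any one deal, and each claim is bounded only by the life of the sector holding the piece.

```python
# Illustrative sketch of datacap allocations decoupled from deals.

from dataclasses import dataclass

EPOCHS_PER_YEAR = 365 * 2880    # ~2880 thirty-second epochs per day

@dataclass
class Allocation:
    client: str
    piece_cid: str
    size: int
    term_min: int   # provider must keep the piece at least this long
    term_max: int   # a claim may not run longer than this

@dataclass
class Claim:
    provider: str
    allocation: Allocation
    start_epoch: int
    sector_expiry: int

def claim_allocation(alloc, provider, start_epoch, sector_expiry):
    # A provider claims the allocation by proving the piece into a
    # sector; the claim's term must fit the allocation's bounds.
    term = sector_expiry - start_epoch
    if not (alloc.term_min <= term <= alloc.term_max):
        raise ValueError("sector lifetime outside the allocation's term")
    return Claim(provider, alloc, start_epoch, sector_expiry)

# A five-year allocation is fine even if no built-in-market deal can
# run that long: the data can later be moved into a new sector (a new
# claim) before the old sector expires, renewing the commitment.
five_year = Allocation("f0100", "piece-1", 32 << 30,
                       term_min=EPOCHS_PER_YEAR,
                       term_max=5 * EPOCHS_PER_YEAR)
```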
E
These are all the kinds of mechanics of programmable storage that we're trying to enable here: the ability for providers to move their data between sectors while continuing to maintain all of their obligations, to decouple their commitments to clients to store data, or their commitments to the Filecoin Plus program, from the lifetimes of the particular sectors involved, and to enable them to repack their sectors to maximize the usefulness of the space that they're proving, and so on.
E
I've shared this proposal with the Filecoin Plus notary group already, and they generally like it; it solves a bunch of problems that they have. Currently, Filecoin Plus deals expire after six months, because there's no way to make one that's longer and no way to extend it.
E
So we have to recertify: the miner has to reseal. It's incredibly inefficient, and of course we lose data that way, because the default is for the data to get dropped. So they are very much on board with the potential this opens up. I have a detailed design for this written up, not yet in a FIP, but the FIP for it will be coming soon, along with an implementation.
E
Okay, and the last one is, again, a pre-FIP discussion about an architecture for programmable storage markets. There's some overlap between these last two, because I designed them in the opposite order to this. But the final one breaks the flow whereby the miner actor consults the built-in storage market actor for permission to seal a sector with particular deals, and changes it so that the miner actor can seal whatever pieces it likes.
E
It packs in some data and then informs a market actor that that data has been committed, and the market actor can rely on that, because the miner actor code remains built in and trusted. And so any third-party market, including a bounty contract, a renewal contract, a replication contract, or a data DAO, can all take datacap, be informed by providers that data has been stored, and get access to events when sectors fault or terminate, and so on. This is sort of the final piece that enables arbitrary storage-related applications to be built on top of Filecoin, and it unlocks the final bits of the flexibility on the provider side to move data around between their sectors, transfer data to other providers, and so on. For this last one, if the network 17 timeline stays as August, then maybe we've got time to do it.
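The inverted flow just described, seal first and then notify, can be sketched like this. The class and method names are invented for illustration only; the structural point is that notifications flow outward from the trusted built-in miner actor to any market actor, rather than the miner asking one built-in market for permission.

```python
# Sketch of the notification-based architecture for storage markets.
# Not a real actor interface; names are hypothetical.

class MarketActor:
    """Any market: the built-in one, or a third-party bounty,
    renewal, or replication contract, or a data DAO."""
    def __init__(self):
        self.committed = []

    def on_piece_committed(self, provider, piece_cid, sector):
        # Trustworthy because it is delivered by built-in miner code.
        self.committed.append((provider, piece_cid, sector))

    def on_sector_terminated(self, provider, sector):
        # Markets also learn about faults/terminations as events.
        self.committed = [c for c in self.committed if c[2] != sector]

class MinerActor:
    def __init__(self, address):
        self.address = address

    def commit_sector(self, sector, notify):
        # Seal whatever pieces it likes, with no permission check,
        # then notify whichever market actors the provider names.
        for market, piece_cid in notify:
            market.on_piece_committed(self.address, piece_cid, sector)
```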
E
Otherwise, if we can bring that timeline in, I'm relatively happy for this final architecture piece to land after M2. But my goal today is to get the first three into network 17, because that will enable a lot of things to happen: contracts as deal clients, for example, and these long Fil+ terms, which would be a great benefit to the network. And then we can do the final pieces for the programmable storage market architecture.
E
After that, because at least some, in fact a large amount, of useful storage-related programs will be able to be created after just the first three. Okay, that's my spiel; I'll pause for any questions here. Oh sorry, a question from ios in the chat; let me take that one.
E
Does the second FIP, supporting contracts as deal clients, deprecate any methods? No, it doesn't. It doesn't change any methods in the existing storage deal workflow; if you have an externally owned account, then it works just the same. One thing it does add is a new possibility for the off-chain ask and deal-exchange protocol. It's optional to support: the old things just keep working. But note that we wanted to support alternative ways of doing deals; I'll have to go back and check whether this comes in necessarily with contracts as deal clients, or only with the Fil+ piece. But yes, my goal is for backwards-compatible integrations here until the final one.
C
I don't have questions on this right now, but I would note that these are all consensus changes, and some of them are actually not small, especially here on the client side.
C
I would say, yes, if time allowed, I would think that this should be in the system before the FVM milestone 2.
C
That would perhaps be better for user data, to support deals by contracts, and for the long term.
C
Is it possible to provide some kind of retrieval market API support, or some standard support, for example a framework, to be prepared for the future? Okay, I don't know, because we really don't have the retrieval market defined very well right now.
E
Yeah, thanks. To your first point, I agree: it would be nice to get all of these in before M2; it would be less total work.
E
For the retrieval markets, I think currently our ideas are too early for us to know what it is that we want to build in. My gut feel is that there is probably in fact very little that we would need to change to support retrieval markets later on.
E
There are possible pieces, like some method to translate between commitments that are SHA hashes and commitments that are based on Poseidon hashes: the kind of commitment that is just the root of an IPLD object that someone might actually want to retrieve, versus the kind of commitment that is over the data of a sector. They're two different hashing schemes, and they're not easy to translate between. So that kind of thing could be useful, to prove that the data a client wants is in fact the data that's in some sector, and to be able to demonstrate that on chain.
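The difficulty being described can be demonstrated in a few lines: the same bytes, committed via merkle trees built with two different hash functions, produce unrelated roots. SHA-256 stands in for the IPLD/retrieval side; since Poseidon is not in the standard library, BLAKE2b below is only a stand-in for it, used to show that the roots diverge, not to model Poseidon itself.

```python
# Why translating between the two commitment kinds needs a proof:
# two hash functions over the same leaves give unrelated merkle roots.

import hashlib

def merkle_root(leaves, hash_fn):
    # Build a binary merkle tree bottom-up (leaf count must be a
    # power of two in this simplified sketch).
    level = [hash_fn(leaf) for leaf in leaves]
    while len(level) > 1:
        level = [hash_fn(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

def sha(data):
    return hashlib.sha256(data).digest()

def poseidon_stand_in(data):
    # Placeholder only: real Poseidon works over a prime field and is
    # cheap to verify inside a SNARK, which SHA-256 is not.
    return hashlib.blake2b(data, digest_size=32).digest()

leaves = [b"chunk-%d" % i for i in range(4)]
root_sha = merkle_root(leaves, sha)          # retrieval-style root
root_pos = merkle_root(leaves, poseidon_stand_in)  # sector-style root
# Same data, two unrelated roots: linking them on chain needs an
# explicit translation proof (e.g. a SNARK over both trees).
```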
E
But overall, I think retrieval is likely to be, in a sense, relatively independent from the storage side of Filecoin. I can imagine retrieval metrics and marketplaces and so on that work just against an IPFS gateway, or centralized computers, or some other storage, a decentralized CDN; it doesn't need to be backed by Filecoin as its root datastore. Many of the protocols that we're working on are not specifically tied to Filecoin in that way, and so it wouldn't require anything from Filecoin.
E
There will be policy questions about whether we want to start requiring a certain service level from providers, or whether storing a deal implies a commitment to make that data retrievable that the protocol could actually enforce. In that case, there may be some protocol changes to add enforcement mechanisms and new pledges, or something like that.
E
Next question: is getting rid of Fil+, the concept of verified data, something you or others are considering? No, absolutely not. Right now we're considering how to make it a better and more useful part of the system. Maybe I could say we're considering ways to incentivize other behaviors: Fil+ incentivizes the storage of useful data, and we also want to incentivize the retrievability of data. In fact, I expect over time Fil+ power will come to dominate the storage power. Over the long run, out of a total of, say, 100 exabytes of power, maybe there's only 50 exabytes of actual storage behind that, and the other 50 exabytes is power boost from verified data; maybe the ratio goes even further than that.
A
I do want to point out that we're at the end of the hour, and I want to respect everyone's time. Alex, do you have any final asks or notes for this call? Or anyone else, any final notes you think it's important for the group to hear before we presumably pick this up again in future calls as well?
E
Earlier on, it's very helpful to read these proposals, because the earlier that your ideas or your challenges come in, the better a solution we can design. But certainly by the time one is in a FIP draft, I would really appreciate you reading it. This first one I expect to be in Last Call by this time in two weeks, and so we'll be looking to move it to Accepted at some point, to justify the final implementation in Rust. And similarly, the earlier we get feedback and understand what any challenges are going to be, the better we can direct our implementation work towards these.
A
Yeah, and we can talk about the Last Call status offline, or publicly in a Slack channel. I think the focus right now is going to be on scheduling the FVM ones and doing them as a group, just for community communications, and since this is going to be scheduled for implementation in network 17, we should have plenty of time. But we can look at the calendar together and figure out a timeline that works best for what you expect for this also.
A
All right, great. Well, thanks again everyone. This is our first iteration of this meeting format; I hope it worked. If you have any feedback, ways that you think it could be more effective, do let me know. Again, many links and resources always come from these meetings, so do be sure to check them out when they are linked in the different Slack channels. And I will see you all again in two weeks' time. Thank you.