From YouTube: Filecoin Core Devs #59
Description
Recording for: https://github.com/filecoin-project/core-devs/issues/142
For more information on Filecoin
- visit the project website: https://filecoin.io/
- or follow Filecoin on Twitter: https://twitter.com/Filecoin
Get Filecoin community news and announcements in your inbox, monthly: http://eepurl.com/gbfn1n
A
Hello, good evening, everyone; well, it's good evening in my time zone. It is 8:01 p.m. Eastern time, and today is Thursday, 6 July 2023. Welcome to Filecoin Core Devs number 59. We have quite a number of things to discuss today, and we will do a bit of a switch of things before we continue.
A
You know, the piece gateway, and then we will talk about Watermelon, our upgrade nv21, and then the Filecoin master plan and community engagement by Fatman and Stephen Lee, and then we take questions and answers afterwards. Just to confirm that there are no more people in the waiting room; okay, all right, without further ado.
A
Just
if
you
have
skipping,
we
have
just
one
hour
together.
Let's
keep
our
conversations
tailored
to
you
know
the
subject
matters
we
have
now
feel
more
than
you
know,
free
to
take
the
conversations
async,
especially
where
we
cannot
get
through
them
all
today.
B
Thanks, Lucky. Yeah, so I've got a couple of related things to talk about, all in the general scope of retrievability. This first one is really somewhat of a position piece, or an argument that we do need to be thinking with a little bit more nuance and that we can't sort of avoid retrievability. There's a whole lot of data out there, and when we think about sort of the original use cases of Filecoin, some availability of data is just a core part.
B
That's right: we have to both store and then get back access to data to make that data useful, and that goes through many of the layers that we have, pretty deep in terms of the things that we want to enable with Filecoin.
B
If we're going to extend deals, we need to be able to get the data from SPs, so all of that stuff about data DAOs on FVM, things like that, is all conditioned on having some expectation of retrievability. Even, you know, even sort of archival or offline data needs some way to eventually get back off, and if you're storing stuff as a client, just as a basic thing, you want to be able to get back that data that you've stored. I think one of the main things we've missed so far, and that makes this hard, is that we generally talk about retrieval as "retrieval should succeed", or whatever, and we're not thinking about how the economics of retrieval end up being a little bit different: bandwidth and operations have real costs, and different data is going to have different costs on that retrieval side.
B
So it's not a one-size-fits-all problem, and it's not something that we can just lump in and say: oh, you need to make this retrievable. Because what that means, when we talk about retrievability, needs to have another layer of nuance if we're going to partition that and make it a proposition that is attractive to SPs, so that they can actually live up to what they're agreeing to and know what it is that they're agreeing to. Because if you unknowingly store something that turns out to be very popular, the cost to you, if we just say "oh, you need to keep it available", is potentially unbounded, and that doesn't work, right? So we need this sort of thing of: okay, what does it mean when we say it's available? Does that mean that you've dedicated it a gigabit connection? Like, what is it you're actually saying, in terms of the cost that you're expected to have and the sort of service level that you're expecting to sign up for? And right now, all of that is negotiated through external contracts.
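To make the missing nuance concrete, here is a small illustrative sketch of what machine-readable retrieval service levels might look like. Nothing like this exists in Filecoin deals today; the tier names, the thresholds, and the `meets_tier` check are all invented for illustration.

```python
# Hypothetical sketch: Filecoin deals cannot currently express retrieval
# service levels. Tier names and thresholds below are made up.
from dataclasses import dataclass


@dataclass
class RetrievalTier:
    name: str
    min_mbps: float          # minimum sustained throughput the SP signs up for
    max_first_byte_s: float  # ceiling on time-to-first-byte


TIERS = [
    RetrievalTier("archival", 0.0, 3600.0),  # sealed-only; slow unseal allowed
    RetrievalTier("standard", 10.0, 60.0),   # unsealed copy expected
    RetrievalTier("hot", 100.0, 5.0),        # dedicated bandwidth
]


def meets_tier(tier: RetrievalTier, bytes_served: int, elapsed_s: float,
               first_byte_s: float) -> bool:
    """Check one observed retrieval against a tier's stated promises."""
    if first_byte_s > tier.max_first_byte_s:
        return False
    mbps = (bytes_served * 8 / 1e6) / elapsed_s if elapsed_s > 0 else 0.0
    return mbps >= tier.min_mbps
```

The point of the sketch is the shape of the agreement, not the numbers: an SP who signs up for "archival" has a bounded obligation, while "hot" data carries an explicit bandwidth commitment instead of an unbounded one.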
B
It's not something that we can express in a deal, and it's not something that Filecoin is helping us with, and so that leads to sort of small tools. So, yeah, you can move on to this middle one. There's a couple of things that we're going to work on, that I think I'm pushing on, to try and get some of this nuance into the conversation.
B
So there are some efforts that Fil+ is trying to do, through notaries and so forth, but I think it's really limited in the nuance it can have by not having that be a thing that the network knows. And so, on having that expectation of what's expected there: we have one bit right now, which is "fast retrieval", which is the expectation that there's an unsealed copy available, but that's really not actually signing up for, well, how much bandwidth are we expecting the SP to provide and make available for this data. And so we need some more nuance to get out of this unlimited, unbounded downside to SPs. The second one is to start thinking, and keep moving forward, on incentives, which is: SPs need to be remunerated for the cost that they're incurring for retrieval. I think initially we can expect this to be experimenting with layer-two-type solutions, so we can imagine contracts that can release collateral; there have been experiments with retrieve.org and others around this.
B
How
do
you,
you
know,
pay
back
the
cost
of
retruple
and
make
it
incentivized
for
someone
to
provide
retrieval,
but
eventually
we
think
that
this
is
a
thing
where
you
know
in
in
the
way
that
Phil
plus
is
already
you
know,
a
reward
multiplayer
of
the
network
providing
an
incentive
for
things
that
benefit
the
network
having
public
data
sets
that
you
know
then
amplify
the
Network's
presence
and
visibility
are
also
things
that
are
important
for
the
network,
and
so
we
think
that
you
know
living
up
to
these
expectations
of
retrievability.
B
That
a
client
is
expecting
is
part
of
what
would
be
expected
in
order
to
get
that
so
initially,
this
is.
This
is
something
where
we
want
to
think
about
it.
As
you
know,
what
works
make
sure
we've
got
something.
That's
solid,
that's
that's,
you
know,
meets
expectations
in
terms
of
you
know
the
rewards
you're
getting
correlate
to
you
actually
doing
the
receivability,
and
once
everyone's
like
agreed
that
we've
got
a
system
that
works,
then
we
can
think
about.
How
does
the
network
subsidize
this?
B
So
that's
sort
of
like
the
the
two
tiers
there
that
that
you
can
expect
that
that
when
we
think
about
actually
incentivizing
this,
a
lot
of
the
work
is
going
to
be
enabled
experimenting
with
that
at
a
at
a
higher
level
and
and
gaining
confidence
before
we
push
for
for
protocol
changes
directly
for
incentives.
B
Okay,
so
let
me
talk
about
the
the
more
concrete
thing.
That's
happening
right
now,
so
on
the
next
slide,
there
is
an
FRC
up
for
discussion.
This
is
I,
think
709,
and
this
is
talking
about
what
does
it
mean
to
have
HTTP
retrieval
as
a
provider
of
data?
B
We
think
that
there
are
sort
of
three
parts
that
form
that
one
is
making
the
content
of
the
deals
available
on
ipni
for
indexing
and
so
that
the
network
can
learn
what's
in
the
deals
so
that
they
can
ask
for
retrievals
there's
an
iPhone
spec,
describing
what
that
particle
looks
like
that,
that
you
provide
there's
another
one
which
is
around
you
know
providing
partial
retrievals
and
the
data
in
the
retrievals
there.
B
There
is
a
gateway
spec
that
I'd
give
us,
as
that
looks
like
that,
but
that
describes
what
the
semantics
of
providing
individual
blocks
of
data
out
of
the
deal
that
gets
made,
look
like,
but
then
there's
this
third
one
that
we've
actually
been
using,
as
probably
our
primary
form
of
retrieval
and
we
haven't
specified
that
anywhere,
and
so
what
this
is
is.
Okay,
you've
got
a
piece
of
data
right
like
you,
you
you
send
it
right
now.
B
Many
deals
that
are
online
deals
are
getting
made
over
HTTP
and
it's
a
32
gig
of
or
64
gigs
piece,
that's
getting
sent
over
HTTP,
and
we
have
also
a
Gateway
endpoint
for
getting
that
same
piece
back
as
a
whole,
just
HTTP
piece
of
what
that
car
file
is
and
that's
getting
used
for
things
like
Evergreen
and
and
other
places
where
we
pull
the
data
back
and
it's
not
a
particularly
complicated
spec.
You
ask
for
slash
piece,
slash
P
Sid,
and
that
is
an
HTTP
request.
B
That
gets
you
back
that
object
of
the
piece
data.
This
is
specifying
what
those
semantics
are,
so
that
we
have
a
clear
definition
and
inner
compatibility
between
different
implementations
of
markets
for
what
we
expect
when
we
ask
for
a
piece
of
data
back
so
that
we
can
replicate
it
welcome,
continue
discussion.
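As a rough sketch, the whole-piece endpoint described above could be exercised like this. The host name and PieceCID are placeholders, and the exact response headers and error semantics are what the FRC discussion is meant to pin down; this only assumes a plain HTTP GET on `/piece/<PieceCID>` as stated.

```python
# Sketch of a client for the whole-piece HTTP retrieval endpoint discussed
# above. The base URL and PieceCID used anywhere below are placeholders.
import urllib.request


def piece_url(base: str, piece_cid: str) -> str:
    """Build the whole-piece retrieval URL: GET <base>/piece/<PieceCID>."""
    return f"{base.rstrip('/')}/piece/{piece_cid}"


def fetch_piece(base: str, piece_cid: str, out_path: str) -> None:
    """Stream the CAR-file bytes of a piece to disk.

    No verification is done here; a real client would recompute the piece
    commitment over the downloaded bytes and compare it to the PieceCID.
    """
    with urllib.request.urlopen(piece_url(base, piece_cid)) as resp, \
            open(out_path, "wb") as out:
        while chunk := resp.read(1 << 20):  # 1 MiB chunks
            out.write(chunk)
```

For example, `piece_url("http://sp.example.com", "baga6ea4seaqexample")` yields `http://sp.example.com/piece/baga6ea4seaqexample`, which is the request shape the speaker describes for tools like Evergreen pulling pieces back.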
B
There
I
think
the
main
discussion
we've
had
so
far
is
that
we
may
want
to
have
this
endpoint
support,
retrieval
of
both
the
piece
that
was
you
know,
sent,
but
also
of
the
sealed
instance
of
that
piece,
so
that
you
can
ask
for
a
Comm
R
to
get
back
the
sealed
data.
B
This
would
enable
cases
for
unsealing
to
be
offloaded
unsealing
as
a
service
type
things
where,
instead
of
paying
an
SP
to
unseal,
you
could
have
a
market
there
for
the
unsealing,
but
also
right
like
if
there
is
space
dedicated
to
those
sealed
pieces
that
should
be
available,
that
you
should
check
that
there
actually
is
this.
B
You
know
unique,
sealed
copy
of
the
data
available
if
you
really
need
to
so
so
that
that's
been
added
in
the
subsequent
iteration
of
the
conversation
around
what
the
specification
looks
like,
but
but
any
other
discussion
there
I
welcome.
I.
Think
that's
what
I've
got
for
now.
Thank
you.
C
Oh, here. So I think we briefly discussed this, like, offline once, but I am kind of really interested in whether there's any progress on the incentive side of things, because I know you're considering smart contract solutions, but it may also fit into, you know, protocol rewards, just to make sure that retrievability either is decentralized or is guaranteed. Any thoughts you can share with us right now, or is it too soon to talk?
B
I
I
still
am
very
much
of
the
opinion
that
the
sort
of
the
north
star
of
where
we
eventually
want
to
be
is
that
retrievability
should
be
a
a
thing
that
we
can
subsidize
with.
Filecoin
block
rewards
right
like
that.
That
is
the
the
public
goods
subsidy
of
how
we're
you
know,
saying
we
want
the
network
to
grow
and
I.
B
Think
one
of
the
things
we
would
want
the
network
to
support
is
that
data
is
retrievable,
because
that
enables
a
bunch
of
use
cases
and
leads
to
a
ultimately
longer
term,
successful,
Network
and
so
I
think
I
think
we
need
to
get
there
and,
and
then
the
question
is,
is
a
what
is
the
path
to
feel
comfortable
that
that
we're?
Not
you
know
that
there's
no
foot
guns
that
we're
not
doing
something
you
know
mistaken
in
getting
there
and
that
we've
got
something
that
actually
is
reflecting.
B
What
we
want
to
be
incentivizing
right,
like
that's,
always
the
the
trick,
so
so
I
think
we
we
want
to
make
sure
we've
got
incentives
that
are
incentivizing
the
right
thing.
We
can
do
all
of
that
experimentation
through
contracts
and
have
a
level
too
and
and
then
once
we
are
convinced
that
we
actually
are
incentivizing
the
right
thing.
That's
when
it's
the
right
time
that
we
push
more
strongly
for
saying
now.
The
network
should
subsidize
this
thing
that
is
valuable
to
the.
C
I totally agree with that. I feel like it will be an interesting discussion on whether that should be an additional incentive, other than what we already have in the storage incentive today, because, like, for me it's also a formula: a storage service should guarantee retrievability by itself. So when a thing is incentivized for storage, I feel like retrieval comes with it. So I think it's going to be very interesting whether we're leveraging the existing system or we're going to have new ideas. Stephen also has a hand.
D
Yeah, okay, thank you. This is very important, but I don't think we have a good way to do this right now, because we don't have a proof of retrievability, or a proof of the data transmission; there isn't a cryptographic way to do this. This is one thing I want to mention. And also, I'd add that a storage provider who provides the retrieval service will incur some costs: for example, the bandwidth, and also the storage and maintenance of the original copy.
D
So
I
think
that
if
we
have
a
platform
or
application
which
yeah,
if
the
client
actually
can
pay
the
fee,
which
could
cover,
cast
and
also
make
the
storage
provider
have
some
paint
face,
so
I
don't
think
yeah.
There
will
be
one
issue.
We
just
need
to
encourage.
D
Some
kind
of
service
to
do
this,
so
the
key
thing
is
that
the
real
requirement-
okay,
and
also
the
cost
so
that
to
compare
the
okay,
the
the
let's
play
different
over
the
cast.
So
I
said
this
is
the
key
and
fully
agree
that
we
need
to
have
some
incentives:
yeah,
because
yeah
it's
a
website
and
platform
or
application.
We
could
have
incentives
to
initials
this
kind
of
thing.
Yeah.
For
example,
we
could.
D
To
join
this,
but
I
don't
think
if
we
use
look
I'm,
not
so
sure,
okay
yeah,
if
this
can
be
integrated
into
a
falcon
plus,
because
Falcon
plus,
is
using
the
deity
token
because
they
felt
cool
to
incentive
the
Dual
data,
your
storage,
that
is
yeah
phototability.
That
will
be
a
problem
because
we
do
not
have
a
way.
As
I
mentioned,
we
don't
have
a
decentralized,
correct
or
graphic
way
to
prove
this
okay,
so
you
cannot
use
that,
but
another
two
yeah
that
will
be
very,
very
yeah.
D
Yeah,
you
could
use
your
token
yeah
to
do
this
and
you
you
even
can
do
something
like
a
centralized
the
way
and
to
develop
and
on
some
states
and
then
to
have
some
more
development
and
yeah
slowly
to
to
go
through
the
decisional
way.
So,
okay,
so.
B
Definitely
do
do
look
I,
think
I
think
we're
basically
in
agreement
do
watch
what
we're
what
we
come
out
with
we'll
have
more
of
a
roadmap
soon
and
I
think
you
are
already
starting
to
see
parts
of
it
in,
for
instance,
the
fill
plus
notary
world
today,
where
validation
Bots
are
expressing
what
retrievability
they've
observed
on
each
new
client
application
against
the
the
data
that's
being
provided
there.
B
So
there
are
starting
to
be
validation,
Bots
that
are
monitoring
and
tracking
the
retrievability
of
the
Plus
data,
and
so
you
can
imagine
that
becoming
first
at
a
smart
contract
where
that
becomes
on
chain
and
then
eventually
becomes
a
decentralized
view
into
retrievability,
there's
a
bunch
of
gaming
stuff
and
making
sure
that
that's
really
a
trustworthy
thing,
but
I
think
what
you'll
see
is
that,
as
it
happens,
into
a
contract
layer
that
looks
a
lot
like
this
secondary
token
layer
that
you're
describing
and
then
once.
A
Yes. Scott, before you proceed: Jennifer is also asking where we can follow your work, just so that we can keep in touch; maybe you can share more about that in the chat box. Deep, I can see your hand is up; I'll give you two minutes, if you can quickly run through.
F
Through
I
will
say,
Scott's
hand
was
Scott's.
E
Just a quick question here, because, like, I think Stephen touched a lot on it, like needing a second token for retrievability; I'm going to leave that foxhole kind of alone. When it comes to the tiering, is the idea that, like, these retrieval bots are timing it, and then we trust them to enforce the tiers? Cryptographically, it's very difficult to prove that you got it at the right speed, and so do we expect that complexity to go into consensus, or is this just trust that we're putting in these retrieval bots?
B
So I think there are a few different things, and there's a lot of complexity here, and so that's what's getting worked through, right: what's actually something that we think has a real ability to be difficult to game. So there are baselines: when you look at what retrieve.org has already experimented with, you nominate a set of sort of neutral third parties that could potentially check on violation claims.
B
So
if
someone
says
I
can't
retrieve
it,
you
send
it
to
to
some
neutral
Auditors
to
attempt
to
retrieve
it
in
lieu
of
that
I
think
what
we're
really
expecting
is
a
couple
things
with
what
these
tiers
give
us.
One
is
there's
a
set
of
deals
and
data
that
are
that
are
ACL,
where
it's
not
like
the
the
thought
is
some
random
person
can't
get
them,
because
the
client
doesn't
want
that
to
happen,
and
so
in
that
case,
what
does
retrievability
means?
B
It
means
that
the
client
gets
that
data
back
when
they
want
it
right
like
because
they're,
the
only
ones
who
are
supposed
to
get
it,
and
so
you
end
up
having
to
say
well,
did
the
client
actually
get
the
service
they
wanted,
and
so
that's
in
some
ways
a
much
riskier
thing
for
an
SP
to
take,
because
if
the
SP
or
or
if
the
client
that
you've
said
then
gets
mad
at
you
and
says
they
can
claim,
and
so
you
don't
have
that
sort
of
backup,
Assurance
of
like
well
but
I'm,
doing
it
and
so
figuring
out,
you
know:
does
the
SP
want
to
take
it
only
with
the
Assurance?
B
There
are
some
agreed
upon
neutral
third
parties,
where
it's
good
enough
for
them
to
send
it
to
those
neutral
third
parties
to
to
meet
a
claim
that
they
are
making
that
content
available
so
figuring
out
what
what
those
sort
of
dispute
on
retrieval
processes
look
like
is
more
than
just
you
know.
Did
you
meet
your
bandwidth,
but
it's
also
making
sure
that
we've
got
sort
of
a
clear
protocol
in
place
for
this.
B
I think, for what that validation looks like, to return closer to your original question: right now, what we have is something that looks in some ways like a drand-like network, where it's a federation of a set of partners that are running validation bots, and they will, I think, with tiers, be able to more regularly attempt data that should be available.
B
You
know,
as
public
data
sets
versus
these,
like
archival
data
sets
right
where
you
might
only
have
the
the
piece
Gateway
exposed
and
not
you
know,
partial
data
I
think
longer
term.
We
expect
to
be
able
to
supplement,
or
or
have
parts
of
that
also
done
with
things
like
station
or
Saturn,
where
there's
large
decentralized
networks
that
are
actually
either
because
they're
just
doing
this
as
part
of
their
business
or
otherwise
are
attempting
retrieval
and
Reporting,
whether
it
works
or
not,
and
their
their
schedules
will
also
then
be
modulated
based
on
maturing.
E
Okay,
interesting,
thank
you,
yeah
I,
guess
the
one
the
last
comments
I
can
see
where
like.
If
you
do
have
this
second
token
layer
that
retrievability
could
be.
E
You
know
somebody
who
is
storing
the
data
claiming
that
they've
stored
to
data
on
this
chain
and
then
the
consensus
power
for
that
chain
then
actually
comes
from,
let's
say
other
nodes,
downloading
the
data
and
verifying
that
it's
retrievable
and
that's
what
produces
finalization
or
something
along
those
lines
like,
and
so
things
like
speed
and
some
of
those
other
things
don't
come
into
play,
and
then
the
you
know
the
economic
forces
of
me
producing
a
good
SP
to
store
this
data
is
going
to
make
others
ability
to
validate
it
better
and
then
you
can
actually
get
trustless
there
so
definitely
interested
in
retrievability
and
how
we
bake
that
into
consensus
and
I
like
Stephen,
where
Stephen
went
with
that.
A
Great, thanks. Deep, over to you.
F
Okay,
I
will
actually
try
to
give
this
shot
too.
I
just
want
a
flag
for
people
that
are
following
along
the
discussion
and
also
you're
interested
in
seeing
what
we're
doing
on
the
flip
plus
side.
F
I
generally
agree,
directionally
with
all
this
same
Pages,
most
of
the
thoughts
being
shared
we're
working
with
Will
on
some
of
these
things
and
our
philosophically
aligned
in
terms
of
like
the
direction
this
is
going
in
and
are
interested
in,
seeing
an
ability
to
trustlessly
verify
or
at
scale
verified
or
drivability
of
content
from
a
field
plus
standpoint
where
we
are,
is
it's
a
little
bit
trickier
to
make
those
kinds
of
objective
statements
today?
F
So,
yes,
we've
got
retrieval
testing
we're
interested
in
seeing
more
sources
of
data
come
in
for
retrieval
testing,
so
we
can
build
as
unbiased
of
you
as
possible.
It's
not
a
different
problem
than
like
the
reputation
problem
generally
in
the
scope
that
it
exists
today,
and
so
we've
got
some
reasonable,
like
short-term
direction
that
we
have
confidence
in,
but
I
do
want
to
call
out
that
there's
a
distinct
difference
between
like
the
testing
that
we're
doing
right
now
and
the
bus
land
and
how
retrieval
testing
should
go
at
scale
in
that.
F
However,
open
data
constitutes
a
very
small
percent
of
of
the
data
in
the
world
and
so
I
think
figuring
out
ways
in
which
we
can
actually
simulate
client
behavior
and
get
get
data
from
clients
that
are
actually
needing
the
data
and
should
be
getting
to
the
data
like
with
Integrations
on
the
Gateway
side
or
Saturn
and
stuff
I
think
will
be
a
much
more
useful,
Direction
longer
term
and
relying
Less
on
data
sampling
and
testing
and
more
on
real
world
results
will
probably
yield
better
upside.
A
Finally, we're excited nv21 is coming along; that is called the Watermelon upgrade, and we're beginning to think about how that would look for all of us. And there are a couple of open questions, obviously, that we need to, you know, debate or decide today so that we can move forward, one of which is, you know, talking about Synthetic PoRep, which I believe is FIP-0059, asking if it's sufficient to warrant an upgrade at this moment, based on our preferences or, you know, other constraints that we have. And then, you know, if we are anticipating having two more upgrades this year, then we need to make that quick enough and move along in the coming weeks.
A
Caitlyn
I
see
your
hand
up.
Do
you
want
to
continue
the
discussion.
A
G
Thanks, Lucky. So, from my perspective, I don't know if we have to hard-agree that this is what we want to do and the direction we're going, but I think we need to move the conversation along. For six to eight weeks now, we've been talking around nv21, and, as Lucky pointed out, there's this conversation about whether Synthetic PoRep, as well as a few other smaller FIPs, are significant enough to justify the cost of upgrading.
G
On
the
governance
side,
we
don't
have
a
strong
opinion.
Our
only
preference
is
being
very
careful
about
which
fips
we
consider
for
inclusion
and
the
three
that
we
have
now
are
already
accepted.
So
this
works
really
well
for
us.
G
My perspective, for what it's worth, is that we have typically done sort of a late-summer upgrade, so that we have room for one more at the end of the year, and if that is sort of the schedule of upgrades that we're looking at, it's time for us to sort of commit to a mainnet upgrade date, so that we can begin to organize resources. But I wanted to move this into this meeting to provide a little bit of clarity.
A
Okay, thanks. I don't think the comments are... oh yeah, oops, I'm not updated. Are there comments? Anyone willing to share immediate thoughts or concerns?
C
You know how much effort you put into that, so I strongly push back on anything that says shipping for the sake of shipping. I think we should only conduct a network upgrade when the ecosystem, the community, and the network need it, instead of just, like, for the sake of doing it. That's why I feel like the scope conversation is very interesting, and I would love to hear what other core devs think about the current nv21 scope.
H
Yeah,
hey
sorry,
I'm
outside,
so
maybe
some
background
noise.
Unsurprisingly
I
agree
with
almost
everything
Jennifer
said
in
particular,
yeah
I
think
Network
upgrades
have
to
be
warranted
and
I
do
think
warranted
because
of
you
know
that
there's
major
improvements,
we're
delivering
or
something
critical
that
the
network
needs
most
of
the
tips.
I
think
I
personally
feel
the
fips
in
that
we
have
in
interview
21
so
far.
H
Do
not
make
me
especially
motivated
to
have
an
upgrade
sooner
rather
than
later,
and
my
understanding
is,
none
of
them
are
urgently
needed
by
the
community.
There
is
one
thing
that
I
think
could
motivate
a
sooner
upload
rather
than
later,
which
is
we
do
have
some
protocol
bug
fixes
that
we're
excited
to
get
out
there.
Some
of
these
were
originally
scope
for
nv19,
but
b-scoped
due
to
the
expedited
timeline.
Others
are
new
protocol
bug
fixes
that
we've
discovered-
and
you
know
the
fixes-
are
all
ready.
H
We
just
have
to
ship
them
so
I'm
more
sympathetic
to
an
argument
about
along
those
lines
and
I'm
interested
in
having
that
conversation,
but
otherwise
yeah
I'm
not
jonesing
to
get
this
upgrade
out.
G
Yeah
I
hear
you
I
think
this
all
makes
sense.
I
do
want
to
say,
since
he
was
not
able
to
join
this
call
today.
I
have
also
been
asked
to
represent
Nemo's
perspective.
He
obviously
worked
and
was
an
author
on
the
synthetic
Pro
rep
FIP.
G
It will also allow storage providers to use less space overall, which is a benefit if they choose to upgrade. And it's a preference given that, if we only have one more upgrade at the end of the year, there will likely be a push to get a lot of other FIPs into it, and it may then be more difficult to also include Synthetic PoRep.
A
Thanks
can
I,
we
can
take
Jennifer,
then
Steven
and
Scott.
C
So
there
are
a
couple
bugs
that
that
there
are
a
couple
things
from
nv19
postpone
the
first
one
is
the
Snapdeal
activation
bug,
which
we
are
overriding
and
the
activation
Epoch
for
sector
wrongly,
and
that
can
let
people
to
create
it's
a
sector
has
time
here
maximum
lifetime.
If
they
do
the
Snapdeal
one
The
Five-Year
maximum
sector
lifetime.
Sorry
when
they
extend
the
sector
or
snap
the
sector
upon
the
end
of
a
existing
sector.
C
We
we
should
fix
this,
but
we
do
have
until
2020
for
February
to
fix
this
I'm,
not
saying
we
should
wait
until
then.
Once
we
get
a
chance,
we
should
fix
it.
So
that's
one
of
the
protocol
bugs
there's
another
one
that
we
are
looking
internally
but
like
that
one
has
a
timeline
risk
because
we
will
have
to
come
up
with
some
FIP.
So,
according
to
the
fit
process,
the
current
proposed
timeline
likely
won't
work
already,
so
Caitlyn
I,
probably
should
sync
with
you
offline
sometime
soon
as
well.
C
We
also
have
another
FIB,
that's
accepted
and
should
be
finalized
in
the
last
upgrade,
which
is
extending
the
deal
sector,
maximum
lifetime
sector,
commitment,
Lifetime
and
the
deal
maximum
lifetime
to
3.5
years,
which
I
know
is
a
thing
that
could
be
really
nice
for
people
to
have
longer
deals
slowly.
So
that
is
another
thing.
That's
a
pending
in
the
pipeline
we
are,
we
are
also
fixing
the.
We
are
also
doing
a
lot
of
optimization
since
addicts
and
addicts
too,
they
have
been
optimizing.
C
The
deal
activation
costs
a
lot
which
are
introduced
by
of
545.
We
increased
the
the
cost
there.
We
are
optimizing
it,
so
it
will
be
really
really
really
nice
to
get
those
optimization
shipped
as
well,
but
I
do
also
want
to
call
out
a
people
store
provider
can
actually
get
a
cheaper
cost
if
they
just
simply
use
proof
commit
right
now,
instead
of
using
proof
commit
aggregation
because
nothing
can
be
three
percent
free
and
full
commit
right
now
is
free
when
it
comes
to
deal
Activation.
C
D
To say: so, yeah, for me, for upgrading the network version, as I mentioned before, I'd really like us to have a fixed pace, meaning a fixed cadence, okay; for example, a new version every quarter. That would be good for the community to follow, right, and to plan ahead for this kind of upgrade.
D
But I don't think we have really settled all of this. Anyway, I still prefer that we have more upgrade events, especially at the current stage, because we have many, many protocol changes and improvements here. So I will also support that we have nv21, okay.
D
The
network
could
be
upgraded
and
yeah
in
this
quarter,
because
that's
what
would
be
good
so
we
will
try
anything
we
could
put
it
in
yeah,
for
example,
there's
some
bug
fix
and
just
as
exchange
information
yeah,
because
we
know
that
we
we
have
something
we
need
to
do
anyway.
Okay,
yeah,
not
right
now,
maybe
maybe
there
are
so
yeah,
but
anyway
what
we
could
have
done
and
we
put
it
in
so
it
will
also
Bill
Gates
the
risk
for
the
next
upgrade,
because
you
have
done
something
right
yeah.
D
This
is
one
thing
I
want
to
mention
so
and
about
the
details,
change
information,
so
I,
don't
think
there
is
any
problem
for
yeah
bring
us
to
to
follow
all
this.
D
Fix
them
as
soon
as
possible
yeah.
This
is
the
thing
I
want
to
mention,
and
I
have
one
question
actually
here
because
well,
we
are
thinking
about
the
next
upgrade
and
with
you're
still
thinking
about
the
long
term,
yeah
non-term
development.
So
the
question
is:
what's
the
status
of
the
Native
FBA
development?
B
Yes, yeah, so native FVM is probably a few upgrades away, basically because we need a couple of upgrades to harden the runtime so that users can securely deploy custom contracts. It's probably not going to be this year, looking at our current roadmap; yeah, we're hoping early next year, but we're pushing to get it out.
C
I
would
actually
pass
a
question
back
to
Stephen.
Why
do
you
think
Native
fvm,
yes,
should
be
prioritized?
Why
is
it
so
important?
What
kind
of
use
cases
and
product
features
are
you
looking
forward
to
getting
abled
with
it?
I
feel
like
a
fem
team
has
been
looking
forward.
Like
has
been
looking
for
feedback
on
that.
D
So, okay, so, yes: the first point is that, yeah, the native FVM was in our schedule before, right, and in all the communication it is a big thing, okay, and we haven't heard updates for some time; I think the community needs an update on that, so yeah, this is the first thing. And the second is that we also expect the native FVM to really help us write contracts which could be more efficient.
D
Okay,
yes,
because
if
even
you
know
that
you
have
to
yeah
interpret
it
into
the
eBay
and
then
go
to
the
native,
what's
up
right,
so
the
gas
cost
will
be
higher.
B
Gas
costs
of
native
would
probably
be
better.
It's
a
bit
tricky
because
it
might
not
be
as
good
as
the
current
native
contracts
for
security
reasons.
Unfortunately,
because
for
now
we
have
to
assume
the
the
consuming.
The
current
contracts
are
are
not
malicious,
but
with
arbitrary
native
contracts
you
might
have
to
charge
more
gas.
B
It's
an
outbreak
is
going
to
be
pessimistic,
so
it's
still
it's
a
fight
player
with
the
exact
performance
of
beat
truly
like
high
performance
would
or
would
require
more
parallelism,
which
is
also
something
on
the
roadmap,
but
that
gets
more
complicated.
E
Thanks
so
I
guess
just
back
on
to
the
mv21
conversation,
I
guess
one
one
additional
kind
of
Light,
which
has
been
covered
to
some
degree,
is
like
I
agree
that
this
you
know,
if
you
look
at
kind
of
the,
how
big
at
the
fem
upgrade
was
and
like
how
critical
nv19
was
that
you
know
nv21
would
be
kind
of
in
a
class
of
its
own.
You
know
compared
to
those
two.
E
So
you
know,
inside
of
it
is
not
something
that
is
necessarily
super
urgent
or
you
know
super
big
and
transformative,
and
so
then
you
kind
of
say
why.
So
if
you
play
it
forward
a
little
bit
and
you
skip
this
one,
then
you
know
the
nv21
that
rolls
around
in
October
or
November
will
be
the
the
most
meaningful
shipping
vehicle
that
you
have
all
year
outside
of
fvm.
E
All
focus
is
going
to
be
on
the
November
release
and
we're
gonna
we're
gonna
cram
it
you're
gonna
have
bugs
we're
gonna
cut,
Corners,
potentially,
and
then
you
know,
Steven
talked
about
you
know
native
actors,
early
2024,
so
we're
going
to
turn
around
and
do
a
native
actors
upgrade
three
four
months
later.
E
What about that February bug? February 2024 is when this thing needs to be fixed. That means we're doing it now, or we're doing it in November. And so then we take a problem that we have time on, and less urgency, and breathing room, and we turn it into something that must ship in November or with, you know, native actors.
E
We put ourselves in a tough situation where we have no other option, by not releasing this stuff. So, working backwards from that: should we? It's a good conversation to have within that context of where things might be six to eight months from now. It seems like, unless we're gonna tie ourselves to the mast on it, we should ship what we have, because we have a good opportunity and a good window to do it.
G
No, I am inclined to agree with Marva Scott, but Jennifer is the one who is planning resources for the Lotus team. We always run into this problem with difficult upgrades, of there not being, like, a surefire way to say yes or no one way or the other. I think this conversation has moved the needle quite a bit. That is all I have to say.
A
Great, thanks. Obviously, the conversation continues; I'll drop the link to where we have the discussion on nv21, so continue to drop your thoughts there. Is that another hand? Yes.
H
Okay, good, yeah. I just wanted to say one thing that's not really gonna help drive us to a decision here, but I do think:
H
One of the reasons why Jennifer and I are a little reluctant to kind of jump on board here is just because of the amount of work that network upgrades require from the Lotus, built-in actors, and FVM teams, as well as just the broader community. There is one avenue that we really should be exploring more, and there is some preliminary work going on there.
H
Maybe we could share that in the next Core Devs meeting: how to make it easier, how to make network upgrades easier, how to make them less painful, how to make them safer, and just generally how to reduce the overhead. Because right now, at least speaking as someone on the Lotus team, the overhead feels like a lot, and so that might be something really worth pursuing, even though it is very much a longer-term investment. You know, it's not necessarily transformative, but I.
H
I think this conversation, at least, is likely delivering some impetus to why that's an important optimization to pursue, and I know that's something that's kind of on the horizon for the FVM and InterPlanetary Consensus teams. So just something to think about, and a takeaway that I'll have from this meeting.
C
And, Casey, it's also not just Lotus's decision, because, as Venus and Lotus, we're always open to support whatever the network needs. I think the pushback here is, again: you coordinated the last network upgrade, so you have a better understanding. Now, it's not just the implementation teams that have to put in the effort, but also thinking about the ecosystem impact: API service providers, exchanges, all the users, all the storage providers, all the node operators have to update and support testing in this process.
C
So it's not just development work, but also, you know, user impact as well, and that's why I think we should think a little bit harder when it comes to what to ship and how often we ship. But again, if we can make the process easier for node operators to do upgrades, that would be amazing; I think there are things coming in the future that can enable that, which just don't exist today.
G
Totally. Also, sorry, that was too colloquial: I was using Lotus sort of as shorthand for the fact that you are kind of representing that opinion. We can also discuss what other timelines through the rest of the year might look like for one or two upgrades, but what it's sounding like to me is that where we're at is that there's not really a bunch of impetus to move forward immediately with planning for nv21, and so we can continue to have this conversation over time.
C
So, basically, at a certain time, only the FIPs that already have a proposal in a considerable draft state will be considered for nv21, and we can take that as a starting point to finalize the scope of the upgrade and then commit to a timeline and to the upgrade, so that can help us prevent continuously, you know, slipping the timeline, expanding the scope, and all those things. Wondering if you agree.
G
There
are
also
a
few
other
kind
of
tricky
governance
dependencies
right
now.
That
I
think
we
can
also
talk
about
offline
that
make
that
a
little
bit
harder,
I
think
the
best
thing
to
do
also
I
think
we
only
have
10
minutes
left
and
we're
getting
into
that
territory
of
sort
of
discussing
these
big
trade-offs
that
as
core
devs
we've
always
kind
of
had
to
negotiate
and
haven't
had
a
really
great
policy
for
moving
forward
on
them.
G
A
Thanks. With that, we are moving quickly to Fatman and Stephen Lee on the Filecoin protocol master plan. Over to you; I'm not sure who will be presenting today.
I
Hello, can you hear me? Hello? Oh, cool. So hello, everyone, I am fatman13. It is a pleasure to be invited by Lucky to give Core Devs a quick overview of the proposed FIP discussion 725, the Filecoin protocol master plan. What is being proposed is not final, and we would very much appreciate community input on the discussion. So, before we dive into what the protocol master plan is, let's see some of the problems it tries to address.
I
It is very hard to set up a definitive process for how we can reach consensus, to understand which groups should or should not weigh in on decisions, or to increase governance engagement in general. Then we have the problem of slow protocol iteration stalling innovation on user-defined DeFi and storage solutions.
I
Many DeFi solutions have to incorporate CeFi elements due to limitations of the protocol, and the user-defined storage market is being out-competed by the network's built-in subsidies. Lastly, we have the problem of FIPs with no clear strategic goals: a lack of priorities and principles to guide which FIPs should be discussed and implemented. Next slide, please.
I
So we think the solution to the aforementioned problems is a layered approach to protocol architecture design, using an application-stack analogy. To support generic applications running on computers, the OS layer focuses solely on managing processes, exposing hardware resources, facilitating communication between apps, etc. So, by defining clear layers and the roles of each layer within the protocol architecture, we think the Filecoin network as a whole will then be able to evolve into its next stage. Next slide,
I
Please. The layered approach only became viable after the FEVM launch in March of 2023. For the longest time since mainnet launch, the Filecoin protocol itself has been the product for data storage, so it has had to couple with all the kinds of complexity a storage product requires. However, now we have the option to clean up the core protocol layer and push the storage product to layer two, so that the Filecoin core protocol can actually transition to a platform.
I
Just like the OS layer in the previous slide. Next slide, please. Many of the details in the proposal will be omitted due to time constraints, but what we would like to stress again is that, on the left, before the FEVM, the Filecoin protocol itself is the storage product, and everything is tightly coupled: as the old Chinese saying goes, pulling one hair moves the whole body. So, on the right,
I
The core protocol layer, like rewards, miner, consensus, and proof of storage, is completely separated from the user layer, where the products and applications are actually built. So the layer division effectively makes the Filecoin core protocol a platform. As in the title of this slide, the proposed protocol master plan transforms Filecoin from a product to a platform.
I
Next slide, please. So, finally, let's see how some of the problems presented at the beginning could be resolved by this proposal. First, we will be having a scoped-down core protocol, which in turn allows FIP governance to reach soft consensus
I
Much faster. For example, the core concepts proposed in FIP-0056 could actually bypass FIP governance to define whatever multiplier they see fit in the user/application layer. And, secondly, we will have genuine competition amongst storage markets for SP storage, storage markets that could potentially evolve on their own without waiting for a core protocol change to bring new innovation to the network.
I
So, lastly, we can establish priorities and strategic goals to realize the protocol master plan FIP by FIP, to meaningfully take the network to its next stage. So that is all. Thank you for listening; looking forward to some of your input on FIP discussion 725. Thank you.
G
I mean, this is a huge proposal, which is why I think it's awesome that Fatman could come and introduce it to everyone. I'm gonna grab the link real quick, if anyone wants to read it in greater detail and provide some comments. Something like this is something that has to marinate with folks for a really long time and is very incremental. So.
A
Great. Thank you all so much for joining us for this very interesting Core Devs call number 59. I'm looking forward to seeing you all at Core Devs number 60 next month. Do not forget to join us for the next governance call, which usually happens every last Friday of the month; do register there, and join our fil-gov Slack channel to ask all your questions and join, you know, other discussions happening. Jennifer, yes.
A
We
should
definitely
be
in
touch
I
think
to
have
that
conversation
and
then
move
all
of
it
publicly,
so
everyone
else
can
join
the
conversation.
Thank
you
all.
So
much
for
joining
have
a
lovely
rest
of
your
day.
Bye.