From YouTube: FF Coredevs61 FinalCut
Description
Recording for: https://github.com/filecoin-project/core-devs/issues/146
For more information on Filecoin
- visit the project website: https://filecoin.io/
- or follow Filecoin on Twitter: https://twitter.com/Filecoin
Get Filecoin community news and announcements in your inbox, monthly: http://eepurl.com/gbfn1n
Follow Filecoin!
Website: https://bit.ly/3ndAg44
Twitter: https://bit.ly/3ObND0x
Slack: https://bit.ly/3HKfFy7
Blog: https://bit.ly/3HFZFNv
Reddit: https://bit.ly/39N4Jmv
Telegram: https://bit.ly/3bkP8Ly
Subscribe to our newsletter! https://bit.ly/3Oy8J9j
#filecoin #ipfs #libp2p #web3 #nft
A
So, welcome everyone. This is Filecoin core devs call number 61. Today is Friday, September 1st, or Thursday, August 31st, depending on where you are in the world.
A
We have a really busy agenda today. So first we're going to go through a couple of quick housekeeping items for core devs, and then we're going to turn it over for a few different technical presentations, including an analysis from the CEL team. At the end, we'll round it out by discussing the upcoming network upgrade, and we will also flag an additional cryptonet research item at the end of this call as well. Again, a really packed agenda, so we're going to try to facilitate speakers really tightly. If you have questions that can be handled async, please signal those in the chat, but please be aware that if you go too far over time, I may interrupt as well.
A
That being said, we want you to know that we have been discussing having pre-scheduled upgrades starting in 2024. It seems like a lot of people are really on board with this idea. We do have an open issue in GitHub; if you'd like to discuss, you're welcome to add your thoughts there. Jennifer has suggested a cadence for these upgrades, starting with three per year: late February or early March, July, and October.
A
The point of flagging this here is just to let everyone know that there hasn't really been any follow-up on this for a few weeks, so it seems like we have consensus to move forward with it. If you disagree or have an alternative proposal, please let us know, and we will also flag the discussion link for this in all of the follow-up materials that Lucky usually shares.
A
In addition, as mentioned, the cryptonet team has a new report that they have just put out that they'd like us to flag for everyone. I'm sorry, I forgot I'd moved this up rather than keeping it at the end; I wanted to make sure we referenced it.
A
There is a recording that I put together for us, and there's also a link to the doc as well. I highly recommend everyone take a look, and, as before, we'll make sure to share this with all of our post-meeting materials as well. All right, so without further ado, let's jump into the first section. Alejandro, are you here to walk us through your proposal?
B
That is because the confidence in a transaction not being reverted comes from waiting an amount of time, as a number of blocks is appended on top of the block where this transaction is, so that the probability of a reorganization decreases over time. In other longest-chain types of protocols, like Bitcoin, the system itself doesn't prescribe a particular finalization time. This is not so much the case in Filecoin, because the power table is part of the state itself and needs to be considered final.
B
It can be lower for some applications; for example, Coinbase takes two hours for a transaction to be considered confirmed, but there is, of course, a risk involved in lowering finalization times. In any case, any reasonable time for important state updates will take tens of minutes for any longest-chain type of protocol. This is a huge pain point and a product blocker for the Filecoin ecosystem, and it doesn't just hinder user experience for the core usage.
B
Storage providers must wait these 900 epochs to see any update to their power reflected, or any storage deal reflected, but also simple transactions like deposits into an exchange take hours to be confirmed. Waiting this long is a pain point for many applications and exciting projects that are building on top of, or in, the Filecoin ecosystem.
B
We have many examples here, but at ConsensusLab we're focused on IPC, on offloading work from mainnet into dedicated subnets depending on applications. These subnets require somewhat frequent interactions with mainnet and with other subnets, and waiting a number of hours for subnet-to-mainnet interactions is just embarrassing; it's a no-go for IPC use cases. Can we go to the next slide?
B
Thank you. So the solution we propose is what we call a finalization module, with which we target finalization within a small number of epochs.
B
It's a simple module that has only two components. One is what we call Granite, which is a novel BFT consensus protocol that we have designed with open systems with a lot of participants in mind, like the case of Filecoin. The other is something that we call the finalizer, which has a funny name, but it's just a very simple algorithm that uses Granite to finalize tipsets. The implementation of the finalization module can be done completely modularly, and it requires minimal changes to the Filecoin client codebases.
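To make the finalizer's job concrete, here is a toy sketch (my own illustration, not the actual Granite or finalizer algorithm) of the quorum rule a BFT-style finalizer relies on: a tipset counts as final only once strictly more than two-thirds of participants have voted for it, which is what lets such a protocol tolerate just under one-third faulty participants.

```python
from collections import Counter
from typing import List, Optional

def finalize_tipset(votes: List[str]) -> Optional[str]:
    """Toy quorum rule: a tipset is final only if strictly more than
    two-thirds of the participants voted for it. Returns None when no
    tipset has reached a quorum yet."""
    if not votes:
        return None
    tipset, count = Counter(votes).most_common(1)[0]
    return tipset if 3 * count > 2 * len(votes) else None
```

In the real protocol this decision is reached through rounds of message exchange rather than a single tally, but the two-thirds threshold is the same.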
B
In fact, in the FIP discussion that we have recently opened, and I invite you all to participate in it, we have recently realized that the two operations that are required to be exposed by the primary chain structure of Filecoin clients are actually already there. So we don't strictly need any modification to the existing code there; it's just the addition of the finalization module. But we're still discussing some design choices that might be wise while we're doing this.
B
We believe Granite will tolerate the number of participants that currently exist in Filecoin. But suppose the number of block miners in Filecoin, instead of being on the order of three thousand, is multiplied by 10 or 100: there will be a practical limit to how many participants can take part in one instance of a BFT consensus protocol, and that's why we are preparing early on with support for committees.
B
That is, subsets of these participants executing the consensus protocol, and of course support for rotating the committees. So I have gone over a little bit of why, for longest-chain types of protocols, when you want low finalization times, just modifying the specific consensus is not the way to go: without something like a quorum-based consensus protocol, you cannot obtain finality within seconds or within a few minutes. But why Granite, right? Why not just an existing consensus protocol?
B
Well, as I said before, we have designed Granite with open systems in mind, with huge participation, and we believe it is a very good fit for that. The main feature of Granite is that it's completely leaderless, so every participant in the consensus protocol executes exactly the same code. There's no designated subset that has a specific role, which makes it as resilient to network denial-of-service attacks as a vanilla consensus protocol can be.
B
It's also not vulnerable to network disruptions: we assume no synchrony for safety whatsoever, and we even have a variant where no synchrony is required for liveness either. The main goal of Granite, which is to provide fast finality, is obtained by only requiring the exchange of a linear number of gossipsub messages of constant size for a Granite instance to terminate with a finalized tipset. And in the case that we decide to execute with constant-size committees, it would only require exchanging a constant number of gossipsub messages.
B
So, what's next? Well, of course, I've already mentioned it: please engage in the existing discussion, the FIP discussion that we have at the moment. We have most of the spec out, but there are still some rough edges, and your opinion will be super relevant for that.
B
We are also detailing the specification; we expect to have a... Are you still able to hear me? Okay, I guess. Sorry, there was something in my headphones; people were talking. Yeah, we expect to have a full specification by the end of September, and once that is done, we'll be working to test an MVP in Q4 this year, with the aim to upgrade Filecoin with this fast finality by Q1 next year. That would be all.
B
Please, if you think you have valuable input, participate in the discussion. We will also be at FILDev in Singapore and Iceland, giving some talks on Granite and on the finalization module, so let's have a chat if you have any interest in this project. And if you want to know a bit more about what we've been doing, have a look at the Granite index that gathers all the docs that we've been working on over the past few months.
A
Thank you, Alejandro. Any questions for Alejandro, or about fast finality in general?
A
Likely more once we get the FIP drafted up for people to read and provide some more thoughtful commentary there too. But this was great, thank you, and, as always, we will make sure that links to any of this, including your discussion post, are also included after the meeting.
A
All right, next up we have Ayush talking about FVM randomness changes. Oh, Steph, do you have a question?
B
Yeah, so that is a good question, and it is part of why we want to incentivize participation in the FIP discussion; that's one of the things where the opinion of core devs will be very relevant. My opinion is that what Granite brings to the table for Filecoin is such a huge improvement in terms of finalization, at such a low cost in terms of extra messages and signature verification, etc., that...
D
Oh, we'll just do it here, yeah, if possible: what fraction of the network needs to participate? (Say that again?) What fraction of the network needs to participate? Yeah.
B
So we are targeting full participation, but of course, once we have an MVP and run some tests, we will know exactly the limits at which Granite can perform at the rate that we want it to.
B
So Granite has optimal resilience without assuming synchrony, which means that 33% of the participants running Granite can go offline, or do anything weird, whatever they want, and behave Byzantine, basically.
G
Yeah, I guess to Steven's point, and maybe just to double-clarify, Alejandro: you're saying that there isn't a specific fractional consensus reward that goes specifically to groups who are participating in this consensus process and providing, through that participation, the fast finality service, but that...
B
That's what I'm inclined to, so that's my opinion, but the discussion is still there, and there are other things that one could do, right? The easiest would be to just reward the same fraction to everyone, but that still doesn't incentivize providing a partial signature, a particular vote, because we only need a fraction of votes in order to terminate. And at the same time, if you want only a threshold of the votes to be rewarded, only the ones who voted, there should also be consensus on that.
B
So, in my opinion, most consensus protocols, like Cosmos or Cardano, don't explicitly provide rewards for single votes, but if that is something that is required, there are definitely options there, and that's why the discussion is a great place to continue this talk.
H
Hey everyone, this will be fairly quick. I want to present a FIP that we're calling "Improvements to the FVM randomness". The draft PR for the FIP is open, and thanks to the folks who reviewed it. This is very much about the way things behave today, and we're making some slight tweaks to how it operates, so I wanted to start by discussing what's changing and how it currently works.
H
So the FVM today exposes syscalls, called get_chain_randomness and get_beacon_randomness, that actors or contracts can trigger in order to fetch randomness, and the source of the randomness itself is the Filecoin blockchain: you just walk up the blockchain and retrieve the randomness from the blocks themselves.
H
Today, what these calls do... yeah, I should add: the FVM itself has no access to blockchain history, so this is very much a request. It just forwards to the client (Lotus, Forest, Venus, whatever) to say: okay, I need you to fetch me randomness that satisfies these restrictions. So today, these syscalls specifically receive the epoch from which you want to draw randomness, as well as some personalization parameters, two in particular.
H
The FVM then forwards that to the client, which walks up the blockchain until it finds the requested epoch. There's no limit to how far back it's willing to walk, at least not at the syscall level. The client finds the randomness of that deep block, performs a hash that introduces the personalization into it, and then returns that. So that's how this works today, and there's a fixed cost charged for this operation.
H
So it doesn't really matter (I'm slightly waving my hands here) what epoch you request randomness from: whether it's the current epoch or all the way back to genesis, it costs the same, and this is a little insecure. So if we can go to the next slide, I want to discuss how we're changing this.
H
So here's what the change proposes to do. You'll still receive an epoch to draw the randomness from, but there are no longer any of these personalization parameters, so the FVM doesn't have to receive them and obviously doesn't forward them on to the client. The client still walks up the blockchain until the requested epoch; we're still not enforcing any limits.
H
So if you wanted to go all the way back to genesis, you can. The client will find the randomness of the epoch, but there's no hashing of the personalization parameters required; it just returns that to the FVM, which forwards it on to the actor or contract. We are proposing changing the cost, however, from just being this one fixed cost to essentially linear in how far back up the chain you're walking. So the further back you walk, the more you're going to pay for it.
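As a rough sketch of the proposed pricing shape (the constants below are invented for illustration; the real gas schedule is defined in the FIP), the cost would look something like a flat base charge plus a term linear in the lookback distance:

```python
# Assumed constants for illustration only; not the real gas schedule.
BASE_GAS = 1000        # flat charge for the syscall itself
GAS_PER_EPOCH = 10     # incremental charge per epoch of lookback

def randomness_gas(current_epoch: int, requested_epoch: int) -> int:
    """Gas grows linearly with how far back up the chain the client
    must walk to reach the requested epoch."""
    if requested_epoch > current_epoch:
        raise ValueError("cannot draw randomness from a future epoch")
    lookback = current_epoch - requested_epoch
    return BASE_GAS + GAS_PER_EPOCH * lookback
```

Drawing from the current epoch pays only the base charge; walking all the way back to genesis is allowed but pays proportionally for every epoch traversed.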
H
But if you want to walk all the way back up to genesis, that's fine; you can do that. There's an implicit assumption here that clients can efficiently support that walk-up operation in linear time. Clients can certainly do better than that: if a client wanted to implement some constant-time lookup for this operation, they'd just be doing better, but the pricing is based on the assumption that it's linear. And just quickly on motivation.
H
So why are we no longer taking these personalization parameters and doing the hashing for you? In short, "you" being the actor, because you can do it yourself: the FVM already provides this functionality. This is kind of following the logic of: let's have the system, in this case the FVM and the client, do as little as possible, and just ask users to do this work. This is a bit more secure.
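For illustration, the kind of personalization an actor could now do on its own side might look like the sketch below. This is schematic only: the field layout and encoding are assumed, and Filecoin's actual randomness draw uses its own specific encoding, though it is also based on blake2b-256.

```python
import hashlib

def personalize(raw_randomness: bytes, domain_tag: int, epoch: int,
                entropy: bytes) -> bytes:
    """Mix a domain-separation tag, an epoch, and caller-supplied entropy
    into the raw chain randomness. Field encoding here is illustrative."""
    h = hashlib.blake2b(digest_size=32)
    h.update(domain_tag.to_bytes(8, "big", signed=True))
    h.update(raw_randomness)
    h.update(epoch.to_bytes(8, "big", signed=True))
    h.update(entropy)
    return h.digest()
```

The point of the change is simply that this mixing step moves out of the client and FVM and into the calling actor or contract.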
H
It's a bit more modular, and in particular it's less logic in the FVM, which is kind of what we always want to go for. Say there's some critical bug in the code that implements this logic: it's now no longer a system or FVM problem, it's just a problem for whatever contract it is in. So it simplifies things a little bit.
H
The change to the pricing is probably fairly evident: the further back you walk up the chain, the more time that takes, so the more it should cost. If you're wondering how come we've been doing it with the fixed cost today...
H
It's because these randomness functions can only be accessed by very specific callers, built-in actors basically, so user-defined contracts can't make you walk all the way back up to genesis, and so it's not too much of a problem today. This is a future-proofing, hardening change.
H
We expect minimal disruption to any users with this, but yeah, those are the changes that we're proposing to make as part of a hardening milestone that the FVM team is working on. So feedback welcome, and if you have any concerns in particular, please raise them now or later.
H
It should not, no; any contracts deployed will continue to function in exactly the same way. The operations might start costing differently, but nothing breaks as a result.
A
All right, thank you. So we're next going to turn it over, I think, to Wavey and to Arthur. I think Arthur is on the call; is that correct?
E
Okay, thank you for inviting me. I'm Arthur; I am a storage provider in the APAC area, and I'm also one of the co-authors of discussion 774. I'm sharing an overview of discussion 774 and the process and feedback we got from the community so far. I think it's very important, so I have basically written down every point on the slides, in case the connection is unstable. So, to be short: in discussion 774, the community is concerned that the continued fall in price and RBP has shown that Filecoin's consensus has been weakening.
E
Many believe that the weakening comes from the unsustainability of the current tokenomics of Filecoin; for the long-term success of Filecoin, it needs to be changed. Many concerns that have been mentioned in the discussion can be solved by a change to the current multiplier in the Filecoin core protocol. At the same time, since Filecoin is a resilient network, the risk from the change is controllable. That is how the discussion went so far. Next page, please.
E
Okay, so why did we start discussion 774 in the beginning? Filecoin's mission is broader than Fil+ or "real data". In our belief, Filecoin should be able to accommodate all kinds of data, as long as there is a willingness to use the network, instead of just focusing on data that is recognized by only a few people.
E
Yes, it's actually the second point. Page... yes, this one.
E
In the overlap, we only capture the small piece that contains the red dot, but we could have captured the whole value of the big storage circle in green. Of course, it would be the best, I know it would be the best...
E
...if we could capture all the value in all three big circles, but in fact, when we're planning strategies, limited resources are a precondition. Filecoin hasn't done well enough even in the storage area itself; we shouldn't expect we can do greatly in all three areas at the same time. Yeah, next page, please.
E
So the community also believes that ending the multiplier privilege in the Filecoin core protocol is not the end of the current Fil+ vision; it's just a strategy calibration for Filecoin. Instead of betting all the resources on storing humanity's most important data, Filecoin moves to focusing on providing a performant and secure storage blockchain for anything that the market needs. The FVM, subnets, and L2s could be used to keep dealing with Fil+, computation, retrieval, and so forth.
E
There will be infinite imagination based on the potential storage capacity and the interoperability of the infrastructure provided by the Filecoin ecosystem. Next page, please.
E
What could potentially push the change forward? Given the far-reaching impact of crypto-economics on a blockchain, we believe that we need to get hard consensus by building solid tooling that can get buy-in from two groups of people: first, token holders, and second, storage providers. We can expect these two groups to drive the best interest of the community, given that their capital is at risk; it's also very objective. Note that we're not denying other roles from voting.
E
They can always vote by holding Filecoin, or by providing hardware to the Filecoin network. Next page, please.
E
So the conclusion is that there is a division happening in the community, and it's in Filecoin's favor to retain all the people who have faith in decentralized storage.
E
The situation is that the community has failed to abide by the precautionary principle in the Fil+ multiplier incentive design. The damage to the consensus has been done, and the people who support Fil+ and the multiplier need to provide a better reason for its existence, because most of the arguments in favor of Fil+ and the multiplier have already been dismissed by the community. If we don't make a change, it's very likely that the Filecoin community will face a bigger division; it has already started.
E
The RBP and the huge price decline are very clear appearances of the division. Unless certain ecosystem participants violate the laws of civilization, I think it's for Filecoin's best that we retain everyone here together. Next page, please. So, after that, we know that there will be questions like: how will removing multiplier incentives affect the current data onboarding? In our view, by removing the multiplier privilege, data onboarding...
E
...will in fact be encouraged, because besides what Fil+ has today, data that doesn't fit the Fil+ criteria will be able to be stored on Filecoin in a fair market, so long as people put in the pledge and pay gas to store the data; this data benefits the Filecoin network. That sounds more attractive than just sticking with what we temporarily think is useful and real.
E
So if we're specifically talking about the kind of data that falls under the Fil+ criteria: I think in the short term the Fil+ onboarding rate will decline, maybe, but in the long term, if we restore integrity and free competition to the network, data under the Fil+ criteria will also flourish. So that's basically what I have today. I hope it helps the community understand the situation and make a quicker decision before it's too late.
I
So I think the main benefit of the third option in CEL's report, which is allowing anyone to mint datacap, is continuity in terms of collateral requirements and power multipliers. Almost all the sectors that joined the network over the past year have had the 10x power multiplier and the 10x collateral requirement versus CC; with option three, all new sectors that join in the future will also have the same collateral requirement and the same power multiplier. Next slide, please.
I
So, our motivation for removing human-gated power multipliers. Number one: we want Filecoin to be fair. Human gating of the power multiplier on Filecoin creates an uneven playing field and a high barrier to entry for miners. Those miners who are notaries, or who have good relationships with notaries and the governance team, have a big advantage when it comes to securing the power multiplier: it's easier to get applications approved, and they can buy in bulk at lower prices.
I
On the other hand, small miners and new miners, especially those with no connections in the Filecoin ecosystem, are at a big disadvantage when it comes to getting the power multiplier. Compared to mining on other chains, mining on Filecoin Plus involves a lot of non-technical, off-chain work and has less predictable ROI, because a miner's access to the power multiplier is gated by capricious human beings rather than by mathematical mechanisms like those used on other chains.
I
Next slide, please. Reason number two: we want miners to focus on business, not politics. Empowering non-users to define usefulness has been a source of endless conflict. Notaries and the governance team are not users of Filecoin: they do not pay for storage or transact on the chain. We have asked them for an objective measure of usefulness other than willingness to pay, and no one has given us one. It is impossible to get consensus around non-quantitative, non-verifiable systems of power multiplier allocation.
I
Therefore, the rules of Filecoin Plus will always be changing. Miners and notaries will argue that the kind of data they have easy access to is useful and that other types of data are not. They will argue against SLAs that are difficult for them to follow, and in favor of services that they can easily provide. There will always be dirty competition to win the favor of notaries and the governance team: bribes, fraud, framing other miners, etc.
I
This is a bad use of miners' time and energy. Arguing over these rules is pointless, because in 99% of cases the miner is just buying a power multiplier, and there is no real user anyway. Very soon, when the whole network is 10x, following these extra rules will not even provide a subsidy, but the burden will remain. So instead of forcing miners to be legislators and lobbyists, allow them to spend all of their time committing capacity, making deals, and building apps. Next slide, please.
I
The third reason is that we value willingness to pay more than PR. No data or client is valuable in and of itself; it doesn't matter how famous the client or how important the data is. Only paying users drive value in Filecoin. I do not believe that the governance team and notaries are able to predict what the real use cases and paying users of Filecoin will be, especially because the capital being used to subsidize potential use cases is not their own.
I
Last slide, please: the way forward. After eight years in the wild, the number one paid service on Ethereum is just trading wrapped ether for USDC on Uniswap, yet no one predicted the invention of the AMM when Ethereum launched in 2015, and it didn't require any subsidies to succeed after it was deployed on chain. This is a very underwhelming application, but it and other simple applications, like over-collateralized lending and non-fungible tokens, have been sufficient to get Ethereum to a 200-billion-dollar market cap.
I
Thank you, and I'm a token holder, by the way.
A
All right, thanks to both of you. Quite a bit of information, but I think we should pause there and start to answer some questions. I know Kieran has a follow-up that goes along with this, to talk about sort of CryptoEconLab's perspective on this and the analysis that they've done.
A
If it seems like there's a good segue, I will move over to including that analysis as well. But to kick it off, Vik, should we start with your question? You referenced that you'd like to better understand one of the slides; I think it was the second one that Arthur spoke to, perhaps.
A
What data is being stored? What data is being blocked from being stored now that would be stored if Filecoin Plus was removed? There is no block on regular deals.
E
Yeah, because, in my personal opinion, basically all the data outside of the Fil+ criteria are blocked, because the multiplier makes the other kinds of data uncompetitive compared to the data that fits the Fil+ criteria.
H
Focused, like, large-scale?
E
Yeah, one thing that directly comes to my mind was David Cason's client, because his client is not willing to put more than one times the multiplier into the sector. So it feels like discrimination against different data on Filecoin with the multiplier: the ones who are not able to take the huge risk of the Filecoin price will be left out.
G
Sorry, this is: you have a storage provider who doesn't want to put up collateral?
E
I feel, in the end, it's the same, because the clients will have to take the risk as well: the Filecoin price directly affects the business of the storage provider. If the price swings too much, storage providers will be unsustainable, and their data will not be stored; it will be in danger.
G
I think you guys are the ones being asked questions right now; we're trying to discuss your point, and I don't want you guys to not have the space and time in this venue to respond to those questions. I know Stephen has one as well.
D
Sorry, I really want to respond to that previous point. That's not correct: the collateral is proportional to the power you get on the network. So, regardless of whether or not you use Fil+, you will put down the same collateral to get the same amount of power on the network, so that doesn't really have an effect.
E
Sorry, about the collateral: with ten times the collateral, the multiplier will give the storage provider ten times the reward compared to a storage provider at one times the multiplier. So the storage providers who can serve ten-times DC have a comparatively lower cost.
E
So that will make these kinds of storage providers more competitive than the storage providers who provide service at one times, like the normal deals, I...
D
I think that doesn't make sense. Sorry, I'm confused, because, again, it's proportional: you put down X FIL of collateral, you'll get back X reward. If you use Fil+, there's a 10x multiplier, but it's on both the collateral and the reward; it just means you have to seal less data. So basically there's no loss for taking a Fil+ deal here: yes, you probably have more collateral, but you're also getting back proportionally more reward, so there shouldn't be any difference there.
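This proportionality argument can be illustrated with toy numbers (the constants below are invented; the real pledge formula depends on live network conditions): a 10x Fil+ sector carries ten times the pledge and ten times the expected reward of CC sectors with the same quality-adjusted power, so the reward per FIL of collateral is unchanged.

```python
# Invented unit constants; real pledge/reward depend on network state.
PLEDGE_PER_QAP = 5.0   # FIL of collateral per unit of quality-adjusted power
REWARD_PER_QAP = 1.0   # FIL of expected reward per unit of QAP per period

def sector_economics(raw_power: float, multiplier: float):
    """Both pledge and reward scale with QAP = raw_power * multiplier."""
    qap = raw_power * multiplier
    return PLEDGE_PER_QAP * qap, REWARD_PER_QAP * qap

# One 10x Fil+ sector with 1 unit of raw power...
fplus = sector_economics(1.0, 10.0)
# ...has the same pledge and the same reward as 10 units of raw power at 1x.
cc = sector_economics(10.0, 1.0)
```

In this toy model the 10x multiplier changes how much data must be sealed for a given reward, not the return on collateral, which is the point being made above.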
A
Yeah, I will say that this is the core devs call, and the purpose is to talk about the technical specifications of this proposal. You are more than welcome in community discussion forums to have conversations about, like, the ideological or visionary direction of Filecoin, but that's not what this forum is for. So as long as it relates to your proposal, we welcome it, but we don't want to go down a rabbit hole. So just a fair warning.
E
Yeah, and we're not trying to convince anyone; it's just sharing the current status as we see it, so yeah.
A
Jonathan, I think you have a question first. Do you have your hand up for...
F
A while, yeah. This is less technical, more just economical. I think what everyone's saying is kind of true at the same time, but just at different sort of economic snapshots of the network. And so I think one of the problems is that what Steph is saying is 100% true, especially if everyone is pledging 10x sectors; it makes a lot more sense then. At least that's how I understand it.
F
But one of the problems that I think is happening right now is that what was once a subsidy is now a majority. And so for you, as a storage provider, you can't compete on unverified deals, even if you're actually getting paid storage deals, because there's so much datacap in existence. You're actually more profitable to go spend business-development hours right now trying to game Fil+ rather than to go out and find paid storage clients. And so I think that's kind of, like, it's...
F
It's a problem with the pollution of datacap and the current state of the network, rather than... or at least that's how I see it.
G
Yeah, I think there's actually really strong agreement around quality, and around making sure that subsidies go towards storing data and bringing data to the Filecoin ecosystem, which is critical, and not towards incentivizing the gaming of mechanisms. We need to improve our mechanisms to make sure that they are not gameable and not...
G
...you know, incentivizing the opposite of what we're trying to do as a network, which is to be a useful storage network. But I think something that's really important, which one of the earlier slides (we kind of went through two presentations) was talking about, is this: hey, we expect token holders and storage providers to have the say over the network governance and about this direction forward, effectively writing out clients and storage clients, and ecosystem developers, and all of the others. We have five pillars; I...
G
That's who Filecoin serves, and so I just want us to make sure that we're clarifying that as we go out sourcing community feedback on proposals, to make sure that we are talking to the people who are actually storing their data on Filecoin today and bringing their data into this ecosystem, because they are providing a critical service that we absolutely want to accelerate and support and be prepared for.
I
Molly, you mentioned quality. This is something that I talked about in my presentation: the definitions that we have for quality are extremely subjective. At the end of the day, people need to pay in order for Filecoin to have value. If you think that a data set is really valuable for humanity but the user isn't paying, that's not going to help the price of Filecoin.
G
I disagree with you entirely. I think there are a lot of really good counterexamples to that proposal, or that way of framing it. Again, I don't want us to go down rabbit holes here; I'm sensitive to time and I know we have more presentations, but I strongly disagree. I think there are many high-quality data sets. You'll notice that AWS, for example, offers credits and hosts data sets that are then used frequently for compute jobs.
G
There are many types of data that are valuable for a network to store and offer, because they attract additional usage; they very much do have an impact on the utility of the network, things like that. So I disagree with your framing.
A
Right, so again, this is the point: we're not going to rabbit-hole down on these things when there are fundamental disagreements that don't really relate to the context of this proposal in this environment. Jonathan and Steph, you both have your hands up. Do you want to ask your questions, or should we give Karen time to also speak to this with their analysis?
D
I do feel like it is important to note here that there does need to be money coming into the network. Someone needs to be using the network and paying for something happening. We have not seen anything except people potentially willing to pay for storage; basically, the inputs right now are speculation, gas, and storage. This is an economic question, but I don't know; I think we would need more evidence that people were willing to put money in there.
D
I think there is a valid argument that the fact that the system is not charging for storage is limiting the income from the network, regardless of Fil+, I guess.
G
Yeah, layer-two networks are an example of that; demand for chain space, for example, is also a totally valid use case. We could go on with this; again, I don't want to rat-hole us here. I agree that I want to see lots of groups paying for lots of things on top of Filecoin, but I would not say that storage is the only one.
A
Karen, keeping in mind we have about 10 minutes left, and we do want to make sure Jennifer has some time to touch base quickly about nv21: are there any.
J
If you're asking me, I can go through my slides in about two to three minutes, because this is simply an analysis of the different options that were proposed in discussion #774, and I've laid out the options here on the right. Option one is to drop the multiplier from ten to one immediately; this is the QA multiplier that's currently given to deal sectors. So option one is to cut the multiplier away, and option two is to ramp it down.
J
Options two and four are pretty similar in the sense that they both ramp down, just on different schedules, and then option three is to give everyone the 10x multiplier. Before going on, I just want to state two caveats. One: this is an extremely large cryptoeconomic change, with significant impact on Filecoin, so it's not something that, in my opinion and I.
J
Think in CEL's opinion as well, it's not something that we can rush through. The other aspect is that we simply did an analysis of the different options that were presented; we're not endorsing any of them.
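For concreteness, the four options under discussion can be written down as multiplier schedules. This is only an illustrative sketch, not CEL's model: the change date, the linear ramp shape, and the ramp length are made-up parameters.

```python
def qa_multiplier(option: int, day: int, change_day: int = 0,
                  ramp_days: int = 365) -> float:
    """Illustrative QA multiplier applied to a verified sector sealed on `day`.

    Option 1: drop 10x -> 1x immediately at change_day.
    Options 2/4: ramp 10x -> 1x; in this sketch both ramp linearly and
                 differ only in ramp_days.
    Option 3: everyone gets 10x.
    """
    if option == 1:
        return 1.0 if day >= change_day else 10.0
    if option in (2, 4):
        # Clamp progress through the ramp to [0, 1], then interpolate 10 -> 1.
        t = min(max((day - change_day) / ramp_days, 0.0), 1.0)
        return 10.0 - 9.0 * t
    if option == 3:
        return 10.0
    raise ValueError(f"unknown option {option}")
```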
J
So, with those caveats in mind, I'll go through the analysis in detail, but just as a quick TL;DR comparing all the different options and how they affect the network performance metrics, or performance indicators: option three is the one that is the least disruptive to the network, in the sense that it continues to offer benefits for new SPs to join, because the multiplier enables you to dilute costs; but the downside of option three is that right now there exists an explicit incentive mechanism.
J
If we go to the next slide, I can talk through these. This panel, and I apologize that it's a little bit busy, goes through each of the main network performance indicators for each of these options, compared to no change. The column on the far left represents no change to the current cryptoeconomics: the 10x stays for Fil+, and 1x for CC.
J
Option one, remember, is the option where everybody goes to 1x, and the vertical dotted line in each of these plots represents the simulated date when we change the multiplier. In option one we can see that, because we removed the multiplier, network QAP starts dropping, and eventually, after all of the old QA sectors expire, QAP and RBP converge. What happens here is that you see a drop in locking, and also in the FIL-on-FIL returns.
J
You see a significant drop in the FIL-on-FIL returns as soon as this policy change is made, but then they climb over time. Keep in mind that this policy change only affects the QA multiplier; it doesn't change anything else, specifically with regard to how the rewards are distributed. So you get less dilution, and because the pledge is going down, you see an increase back in the FIL-on-FIL returns. I don't need to go through options two and four; they basically exhibit the same qualities, but because they reduce the multiplier from ten to one on different schedules, you see slightly different trajectories, though the conclusions basically remain the same.
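The QAP-converges-to-RBP dynamic described above can be reproduced with a toy cohort model. This is a sketch under made-up assumptions (constant daily onboarding, a single fixed sector lifetime, a 90% verified share, and an arbitrary change date), not CEL's actual simulator.

```python
from collections import deque

SECTOR_LIFE = 540      # assumed fixed sector lifetime, in days
ONBOARD_RB = 10.0      # assumed constant raw-byte onboarding per day (arbitrary units)
VERIFIED_SHARE = 0.9   # assumed share of onboarding that is Fil+ verified
CHANGE_DAY = 365       # simulated date the 10x multiplier is removed (option 1)
HORIZON = 1500

def simulate():
    """Track raw-byte power (RBP) and QA power (QAP) day by day."""
    alive = deque()    # (expiry_day, rb, qa) for each daily onboarding cohort
    rbp = qap = 0.0
    rbp_series, qap_series = [], []
    for day in range(HORIZON):
        # Expire cohorts whose lifetime has elapsed.
        while alive and alive[0][0] <= day:
            _, rb, qa = alive.popleft()
            rbp -= rb
            qap -= qa
        # Onboard today's cohort; verified bytes get 10x before the change, 1x after.
        mult = 1.0 if day >= CHANGE_DAY else 10.0
        qa = ONBOARD_RB * (VERIFIED_SHARE * mult + (1 - VERIFIED_SHARE))
        alive.append((day + SECTOR_LIFE, ONBOARD_RB, qa))
        rbp += ONBOARD_RB
        qap += qa
        rbp_series.append(rbp)
        qap_series.append(qap)
    return rbp_series, qap_series

rbp, qap = simulate()
# QAP exceeds RBP while pre-change 10x cohorts are still alive, then converges
# to RBP once every pre-change cohort has expired (day CHANGE_DAY + SECTOR_LIFE).
```

Since a sector's pledge scales with its share of QAP, the same toy shape is what drives the returns plot described above: returns dip at the change and recover as pledge requirements fall.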
J
In terms of option three: because everyone is now getting the 10x multiplier, you get a slightly higher network QAP than in the no-change scenario. The reason it's only slightly higher is that currently 90% of onboarding is Fil+, so we're basically making that 100% in this case; otherwise there's not too much of a change between the 10x option and the current status quo. And that's really all I had; it's been five minutes.
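The "only slightly higher" point follows from back-of-envelope arithmetic. The 90% figure is the one quoted in the call; everything else here is derived from it.

```python
# Average QA multiplier per onboarded raw byte today, assuming 90% of
# onboarding is Fil+ verified (10x) and the remaining 10% is CC (1x):
avg_now = 0.9 * 10 + 0.1 * 1      # = 9.1
# Under option 3, every sector gets 10x:
avg_opt3 = 10.0
# Relative uplift in QAP per onboarded byte:
uplift = (avg_opt3 - avg_now) / avg_now
print(f"QAP uplift per onboarded byte: {uplift:.1%}")  # roughly 10%
```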
A
Yeah, and I also just typed up a note to say that the preliminary FIP draft, as long as we're talking about #803, does look pretty good in its current state, but FIP editors, including myself, have done a preliminary review and asked for additional details, specifically related to the security and incentive changes, for this FIP.
A
I think, as you mentioned at the beginning, this is actually a huge network change, and speaking just for myself, we recognize the sensitivity and the interest in this, but it's really important that we be collaborative and continue to flesh this out; this is a good part of it. It's great that CEL was able to put this together so quickly, but it might also be worth returning to this conversation again once we get to a more final version of this draft. In the interest of time, Jennifer, I'm.
A
Sorry, you have four minutes, but nv21: it's all yours.
C
Yeah, I'm starting to get used to speaking at 2x anyway. So, quickly: nv21, code name Watermelon. The whole idea is that it's supposed to be very juicy. We thought about "Summer" as well; however, Alex pointed out that on the other side of the world it's actually winter, but we are staying a juicy upgrade. So, quickly, through the key dates: the code freeze is going to be September 28th, the calibration-network upgrade is going to be October 10th, and the mainnet upgrade is going to be in November.
C
Moving on to the next slide, I will go through the scope. I think it was earlier last week that the implementers had a great call talking about how much capacity we have; we reviewed the FIPs that were proposed to be scoped, and we ended up with a scope that includes all the post-nv19 FIPs that got postponed because of the emergency upgrade.
C
So we are introducing FIP-0052, which increases the sector and deal max lifetime to 3.5 years, and we're also fixing the SnapDeals sector activation epoch bug; this FIP is already accepted, so we're delivering that. We are also working on direct data onboarding, a new storage feature, meaning that a storage provider does not have to use the built-in storage market to onboard data into sectors. There are a lot of other good benefits from this FIP; the draft is open, so please go take a look.
C
We are also considering supporting allowing storage providers to move partitions, so that they can reduce the manual operations involved in managing their proven storage. We are also introducing some fundamental FVM changes; the only thing I want to call out here is that the FVM team has decided to heavily descope a lot of the tracks that were discussed last week.
C
So right now we're focused on fixing some security concerns and getting the foundation in place to be ready to support awesome actors, with more to come in nv22, hopefully; so take a look at the latest scope at that link. We are also hoping to fix the long-term market cron issue and introduce a long-term market cron fix. We put in a short-term stopgap in nv19 and it's doing pretty well; network validation time is going really well.
C
However, we do want to make sure the network is scalable, and that's why we are hoping to introduce this fix as well. We have decided to actually postpone the drand switch to nv22, just because the spec and implementation testing are not going to be ready in time, and we are also postponing the SuperSnaps proof, as suggested by the proofs team. A couple of notes from the implementers: we do have a hard deadline here for the FIPs that are being considered today.
C
If the FIPs are not accepted by September 27th, they will be descoped from nv21; FIPs that are not fully implemented and tested by September 28th will also be descoped from nv21. As for any FIPs that are not on the list today: we are still open to the ideas, it's just that the implementers don't have the capacity to support the implementation. However, if authors can work with open-source contributors to actually implement the code and make sure it is tested and production-ready, and the FIP is accepted, we are open-minded.
C
We would consider those for nv21 inclusion. Quickly, on the next slide, to stress the timeline again: all FIPs must enter Last Call so that there are at least two weeks before they are accepted, and then comes the code freeze, which I already covered. Moving on to the next slide: you have heard me talk about "implementers"; what is that? Last week, a couple of us from the Venus team, Lotus, Forest (there's a typo there), and also the actors and ref-fvm implementers.
C
We got together and started to discuss priorities and our engineering capacity, and how much we can support in nv21, and we really enjoyed our call. It was very productive and efficient; we got decisions made and the scope aligned, which is amazing. And we realized: oh my god, we should have Filecoin implementers as a group; it's a subset of core devs.
C
So there's going to be a call scheduled, and we also have an open public channel, #fil-implementers, that we are hanging out in these days, so you're welcome to lurk there if you want. Also, if you are an implementer, please join the channel. I think that's it; two minutes over time, but you know.
A
That's okay; you spoke really quickly and it's a lot of important info. Thank you, Jennifer, and thank you to everyone else who spoke today: Alejandro, Ayush, Arthur, who joined us, and Wavy, who joined us. Thank you, everyone, for your questions. As usual, we will share the slides and all of the links to engage in these conversations. We appreciate you being here, and we will talk to you next month, unless you are in Singapore or Iceland.