From YouTube: Filecoin Core Devs #60
Description
Recording for: https://github.com/filecoin-project/core-devs/issues/144
For more information on Filecoin
- visit the project website: https://filecoin.io/
- or follow Filecoin on Twitter: https://twitter.com/Filecoin
Get Filecoin community news and announcements in your inbox, monthly: http://eepurl.com/gbfn1n
A
It is about 12 or 1 p.m. where I am in Ontario, Canada, and welcome to our Filecoin Core Devs call number 60.

Today we have a number of things to discuss, starting with switching to a new drand network, by Patrick. We have Beyond FVM by Steven and his team. We also have a discussion on FIP-0067, which is on PoRep security policy and replacement sealing enforcement.

Then Molly will do a quick shout-out, and perhaps announcements on the Filecoin Dev Summits coming up. We'll then hand over to Caitlin to discuss and clarify the nv21 Watermelon upgrade. If we can, please mute your mics if you are not speaking. Thank you very much. After the nv21 Watermelon discussion, we will take questions, if we do have time, and take it from there.
B
Hello, everybody. Thank you very much, Lucky. I'm Patrick from the drand team, for those of you who don't know. drand, obviously, is the randomness beacon used in Filecoin for leader election and a bunch of other proof stuff.

Since March of this year we have had a new network running in parallel with our existing drand network, one that emits randomness every three seconds instead of every 30 seconds. We have raised a FIP for the transition of the Filecoin network to this new network, which has been merged but not yet accepted. Essentially, we've been running it for a few months now, and at some point in the future we plan to sunset the default drand network.
B
A small note: one network that's been running in drand mainnet, fastnet, is being sunset for a very small compatibility change, to be compliant with an RFC. That's literally one character in a domain separation tag, so it doesn't really affect any of the availability guarantees that we've seen running it since March 1st.
B
Some things that this new network enables: timelock encryption, for a start, which is wonderful. We've been shilling it everywhere for the past year, and hopefully we can see it on FVM sometime in the future, if the FIP is accepted. It also enables stateless round verification. In the default network right now, you need to look back in the chain of randomness to verify each new beacon, but in quicknet the beacons are standalone, which enables easier verification, possibly in some form of light client in the future.
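As a side note on the stateless verification point above: the difference comes down to what message the drand nodes sign. Below is a minimal sketch of the two message constructions, following drand's documented scheme; the actual BLS signature check over the message is elided, since it needs a BLS12-381 library.

```python
import hashlib
import struct

def chained_message(prev_sig: bytes, round_num: int) -> bytes:
    # Chained (default) network: the signed message commits to the previous
    # round's signature, so verifying round N requires walking the chain.
    return hashlib.sha256(prev_sig + struct.pack(">Q", round_num)).digest()

def unchained_message(round_num: int) -> bytes:
    # Unchained network (e.g. quicknet): the signed message depends only on
    # the 8-byte big-endian round number, so any beacon verifies standalone.
    return hashlib.sha256(struct.pack(">Q", round_num)).digest()

# A stateless verifier only needs (round, signature, group public key):
msg = unchained_message(1234)
# verify_bls(group_public_key, msg, signature)  # elided: needs a BLS library
```

A verifier of the unchained network needs only the round number, the signature, and the group public key, which is what makes lightweight stateless clients plausible.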
B
There have been discussions of faster block times for Filecoin; obviously, using this new drand network would enable, at its fastest, a three-second block time for the Filecoin network. In terms of changes to Lotus or other implementations, it would require updating the drand client dependency, which, at least in the Go world, kind of manages everything for you out of the box.
B
It would also require using a new chain hash to connect to the new network, although the endpoints and other stuff all remain the same. I suppose the biggest change is that the Filecoin network would have to consume every 10th drand round, rather than every round, although, at least in the Lotus codebase, that seems to be kind of handled already. And then, finally, we need to set a block height for the switch, of which there is already prior art.
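For illustration, the "every 10th round" relationship falls out of the round arithmetic. This sketch uses made-up genesis timestamps and only mirrors the shape of the real lookup (Lotus has similar logic in its beacon schedule handling); it is not the production code.

```python
# Hypothetical timestamps, purely for illustration; real values come from the
# drand chain info and the Filecoin genesis block.
DRAND_GENESIS = 1_600_000_000   # assumed drand genesis (unix seconds)
DRAND_PERIOD = 3                # the new network emits every 3 seconds
FIL_GENESIS = 1_600_000_300     # assumed Filecoin genesis
FIL_BLOCK_TIME = 30             # Filecoin epoch duration in seconds

def max_drand_round_for_epoch(epoch: int) -> int:
    """Latest drand round available at the start of a Filecoin epoch."""
    t = FIL_GENESIS + epoch * FIL_BLOCK_TIME
    if t < DRAND_GENESIS + DRAND_PERIOD:
        return 1
    # Standard drand round numbering: round 1 at genesis, one per period.
    return (t - DRAND_GENESIS) // DRAND_PERIOD + 1

# With a 3s beacon and 30s epochs, consecutive epochs are 10 rounds apart:
assert max_drand_round_for_epoch(5) - max_drand_round_for_epoch(4) == 10
```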
B
For the first, I want to say, 5,000 blocks of the Filecoin network, it was running on the drand testnet; now it's running on drand mainnet. So it would be re-implementing that sort of work to switch to a new drand network. Some outstanding questions I have on that, I guess: what does the testing pipeline look like, how do we test it, and whether calibnet and butterfly need to be upgraded, with a view to the default network being sunset sometime in the future?
A
Great, thank you. Just to clarify very quickly: this is considered, well, it's one of the FIPs that we're considering in scope for nv21, I believe. And yeah, at the moment we're still using the soft consensus method for, you know, processing FIPs, and so we are not going to be voting on this. It will just go through the Last Call period once the time is set. I believe that's Jennifer or Aayush; I'm not sure who will be speaking.
C
Thanks for presenting. I think all of this makes sense, and I'm very excited to have this, with an eye towards FIP acceptance. My only question is kind of around the correctness of this, so I'm wondering if you can quickly speak to testing and auditing: you know, how is it we know it has been working well for us?
B
Sure. In terms of testing, obviously we've been running it on mainnet for quite a while, and in fact on testnet even further before that, since late last year. In terms of auditing, we haven't specifically had the drand codebase re-audited, but we have had the timelock encryption audited, which we built on top of this.
B
That did involve a cursory look into our new schemes by Kudelski Security, who, I guess, are one of the partners in the League of Entropy and, I think, have done other stuff with PL. They identified some faults in the timelock encryption stuff, which we remedied months ago also. So I hope we're in good territory there.
C
One other thing about this change: once it's all integrated, we'd upgrade the test networks, butterfly first, then calibration, and at that point, if there are any issues, they emerge. So yeah, in general we just try to follow the same kind of upgrade lineage between mainnet and calibration net, at least, and any number of people can run their own devnets for kind of preliminary testing on this. The other thing I'd say is, and this is very much an issue for Lotus and the client:
C
We've not really done a switch like this, at least in Lotus, in a long while, arguably ever. We haven't made this sort of change in the beacon schedule. As you mentioned, at least in Lotus, we do have a lot of this logic already, but it's never really been tested. You know, the confidence we have in it is at the level of a couple of unit tests, maybe. So I think, as you were saying, Lotus will have some cleaning up to do, or some confidence-gaining to do.
C
But that's unrelated to the core FIP. We very much have some work to do and we'll do it, but otherwise I see no problem with this, and I think we're all generally excited.
A
Thanks. I think we have a couple of comments in the chat box, one from ZenGround0: is it correct that this means that there will be no increase in bytes stored in the block header?
B
So actually, it would mean there is a reduction in the number of bytes stored in the block header, because we changed our signature scheme slightly. For the technical details: the signatures are now on G1 rather than G2, a group in BLS. So that would actually reduce the size. But potentially we could do some padding or something, if that is an issue.
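For reference, the arithmetic behind that answer: compressed BLS12-381 points are 48 bytes on G1 and 96 bytes on G2, so moving signatures to G1 shrinks each beacon entry.

```python
# Compressed point sizes on the BLS12-381 curve, in bytes.
G1_BYTES = 48   # the new drand network signs on G1
G2_BYTES = 96   # the current default network signs on G2

# A beacon entry in a Filecoin block header carries the round number plus the
# beacon signature, so swapping the signature group saves 48 bytes per entry.
saving_per_entry = G2_BYTES - G1_BYTES
assert saving_per_entry == 48
```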
A
Great. I will hand it back to Steven, and then I'll come back to Jennifer.
D
If I try to use this encryption to decrypt some message, is that going to take me on the order of milliseconds? Nanoseconds? Where does it land?
B
Yes, milliseconds. You have to do a pairing operation, so it is quite expensive. Although our implementation limits the amount of data you can actually timelock-encrypt and decrypt, the preferred method is really hybrid encryption: you encrypt just a key for something, and then use some other scheme to encrypt your real payload.
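A hedged sketch of the hybrid pattern Patrick describes: timelock-encrypt only a small symmetric key, and encrypt the arbitrarily large payload with a cheap symmetric cipher. The `tlock_encrypt`/`tlock_decrypt` callables are placeholders for a real timelock library, and the SHA-256 keystream is a stand-in for a real AEAD cipher; neither is secure as written.

```python
import hashlib
import os

def keystream_xor(key: bytes, data: bytes) -> bytes:
    # Stand-in for a real symmetric cipher (use an AEAD such as
    # ChaCha20-Poly1305 in practice); kept stdlib-only for the sketch.
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, out))

def hybrid_encrypt(tlock_encrypt, payload: bytes):
    key = os.urandom(32)                      # small, fixed-size secret
    locked_key = tlock_encrypt(key)           # expensive pairing-based step
    ciphertext = keystream_xor(key, payload)  # cheap symmetric step, any size
    return locked_key, ciphertext

def hybrid_decrypt(tlock_decrypt, locked_key, ciphertext: bytes) -> bytes:
    key = tlock_decrypt(locked_key)           # one pairing-cost decryption
    return keystream_xor(key, ciphertext)     # the rest is cheap

# Placeholder timelock (identity), purely so the round trip runs:
lk, ct = hybrid_encrypt(lambda k: k, b"hello filecoin")
assert hybrid_decrypt(lambda k: k, lk, ct) == b"hello filecoin"
```

The point is that the expensive pairing operation happens once, on 32 bytes, regardless of payload size.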
D
Is it possible to share this process across multiple encrypted ciphertexts, like decrypting multiple ciphertexts at once with one operation?
E
To answer your question: I discussed this a while back with Nicolas. In essence, the time of decryption is equivalent to signature verification for BLS. There's some precomputation you can do; a block producer could provide some precomputation, but it gets harder to implement then, because you have to assume that precomputation might be done wrong, and so on.
F
My question is more around the potential rollout. For this one, all the implementations have to switch to the new network upon the upgrade epoch. I would love to ask Forest to take a look at the FIP soon and just let us know if there are any challenges to implementing it in Forest, because I think back in the early days we did a test with Lotus switching between different drand networks; we might even have to revisit that ourselves.
F
However, back in the day, Forest was not in the network yet, so this is going to be the first time for Forest to do that, and I think it's very important for us to coordinate and test this in a shared testnet before we even deploy any changes to calibration, for this particular one. So yeah, any comments from Forest will be appreciated.
A
I just wanted to circle back to Molly's question. She said: "I think we made a drand network switch to the current mainnet back in September 2020." Can you clarify that for us, Molly?
G
We did this once with Lotus, but we need to make sure that other networks, or other implementations, get this nicely tested, so that we don't run into any issues.
I
Oh yeah, thanks. Just with an eye towards getting this accepted and rolled out: I think the text itself is great, but not quite specific enough. So I think it's not quite at the stage where an independent implementer could just sit down, read the FIP, update their implementation, and be sure they're going to match.
I
You know, it describes the kind of changes that need to happen, but it doesn't specify exactly how those changes must happen. Anyway, we only have a few implementations to coordinate, so we don't have to perfectly sequence this, and you might need help from one of the implementations to actually do the work once. Then you've answered all the design questions, and you can write them down in the FIP forever, for everyone else to replicate. But I think that's necessary for us to get to accepting this FIP.
I
We probably need to actually do it first, and that sort of ties in with what Jen was saying about actually testing this change on a network that's not one of our long-running testnets, so we're confident it's going to work when we do that.
F
I'm going to call out that Patrick has kindly offered to help us prototype the Lotus, or Go, version. So once that is out, I think Forest can take a look at that, sometime in August, and treat it as a reference implementation, you know, just to reduce some of the work and analysis for the Forest team. And then, obviously, it's possible the Go implementation has something wrong, whatever.
F
Ideally, the Forest team can help us call those potential bugs out, but yes, just to let you know: there's expected to be a Go implementation sometime in August that we can check out.
B
Yes, that's correct. Also related to that: we do have a community Rust library for everything drand-related. It's not strictly maintained by our team, but there's at least some prior art for the Forest folks, so they can pick it up and modify it, or use it directly, depending on their appetite.
A
Great, thank you. We can definitely continue the conversations async. The FIP draft is ready; anyone can always weigh in on that. Let's see, over to you, Steven, for the Beyond FVM discussion.
D
Okay, thank you. So, sorry, this should be "Beyond FEVM": beyond the FEVM, we're still sticking with the FVM, at least for the moment. Basically, we launched the FEVM, but we still don't have native actors, that is, native Wasm actors, that anyone can deploy, and we're currently working towards this for a few reasons. One is that it enables any target, or any language that can target WebAssembly; that's not well phrased, but yeah.
D
So basically, we can target any language that targets WebAssembly, such as Rust. These actors will get direct access to IPLD blocks, so they can create their own IPLD data structures, and, importantly, they get copy-on-write semantics for their internal state, which is something you don't get in something like the EVM. And finally, WebAssembly should have improved performance, depending on how we do it.
D
These are kind of the reasons why we're looking at enabling more native WebAssembly actors. Next slide.
D
So, unfortunately, this is not easy; we've been at this for a while, trying to deal with all the interesting problems. There are three core problems. One: the current Wasm VM we use has an expensive compilation step, where it compiles the WebAssembly module to your native architecture. It doesn't interpret the actors; it actually compiles them down to your native architecture, like x86 or whatever, and then executes that. And this is an optimizing compiler that takes an arbitrary amount of time.
D
So if we want to do this safely, in a way where we can actually charge the correct amount of gas for this kind of operation when we're compiling an untrusted actor, we would probably need to use either some kind of single-pass compiler, which basically just takes one pass over the code and doesn't do anything fancy, or potentially interpret the Wasm module instead of actually compiling it. The second tricky part here is actually charging gas for running these Wasm modules.
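To make the trade-off concrete, here is a toy version of why a single-pass compiler helps: its compile time is linear in module size, so installation gas can be priced up front from the module length alone. The constants are invented for illustration.

```python
# Invented prices: with a single-pass (linear-time) compiler, compilation cost
# is proportional to module size, so it can be charged before any work happens.
GAS_PER_BYTE = 100
GAS_BASE = 50_000

def charge_install(gas_available: int, wasm_module: bytes) -> int:
    """Charge for compiling an untrusted module; returns remaining gas."""
    cost = GAS_BASE + GAS_PER_BYTE * len(wasm_module)
    if cost > gas_available:
        raise RuntimeError("out of gas before compiling untrusted module")
    return gas_available - cost

# A 1004-byte module (wasm magic plus padding) costs 150_400 gas here:
remaining = charge_install(10_000_000, b"\x00asm" + b"\x00" * 1000)
```

With an optimizing compiler, by contrast, compile time depends on the code's structure in ways that are hard to bound, which is exactly the problem described above.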
D
One of the tricky things is that, because Wasm is so close to native, because it compiles to native, things like cache misses in your instructions, or branch mispredictions, and stuff like that, really affect your runtime. So if you wanted a gas model that was actually secure in a world where people can deploy arbitrary and potentially malicious native actors, then we have to be a lot more conservative.
D
Instead of making the gas model based on average or expected execution times, we need to make the gas model based on worst-case execution times, and we have some concerns that this could drastically increase gas fees. So currently, in both these cases, we're looking at kind of a two-world scenario, where we have the built-in actors that get charged one set of gas fees, and then the user-deployed actors that may run a bit slower and cost a bit more.
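A toy sketch of that two-world pricing, with invented numbers: trusted built-in actors can be priced near expected execution cost, while untrusted user-deployed actors carry a conservative worst-case multiplier.

```python
# Invented numbers: built-in actors are trusted, so they can be priced near
# average execution time; user-deployed actors must be priced toward the
# worst case (cache misses, branch mispredictions), hence a multiplier.
BASE_PRICE = 1          # gas per wasm instruction, average case
WORST_CASE_FACTOR = 4   # conservative factor for untrusted code

def exec_gas(instructions: int, builtin: bool) -> int:
    price = BASE_PRICE if builtin else BASE_PRICE * WORST_CASE_FACTOR
    return instructions * price

# The same workload costs more when deployed as an untrusted actor:
assert exec_gas(1_000, builtin=True) < exec_gas(1_000, builtin=False)
```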
D
But this is still complicated and kind of out there. And the final part of this is that there aren't any other IPLD-based blockchains that are also Wasm-based chains. There are Wasm-based chains that do some interesting things, and there are many different types of blockchains, but there's nothing that looks exactly like ours.
D
With the FEVM we could just sort of copy the EVM, and that was easy. But with the FVM itself, once we start letting users deploy native actors directly on top of the FVM, we have to really think through and be careful about the APIs we expose, because it becomes very hard to change these kinds of things once you've exposed them to users. So yeah, those are the current challenges we're facing. Next slide.
D
Basically, we have a few options, a few ways to deal with this. The shortest-term approach for native actors is: don't. It's basically saying, okay, don't deploy these native actors to mainnet; deploy them elsewhere instead, for example via IPC. The upside is we can do this potentially immediately, with no upgrades. The downside is there's no direct integration, and IPC is still very alpha, in early stages.
D
A lot of users want to deploy on the main network itself; the alternative might be more risky for users, and it won't cover all use cases. A soon-ish approach is the one that I actually want to talk about here. This is what I wrote up in the FIPs discussion, if you've been paying attention to the ongoing discussions, and it is the idea of allowing users to contribute actors that still get deployed through FIPs.
D
Right now we have this whole FIP process where, even for a change like this, you have to make an extensive proposal explaining why you want to make the change, how the change works, and everything related to it. What I'd like to do is propose a slightly more simplified process, where you still write a FIP, and you still have to describe why you want this change and stuff like that, but you don't have to get into quite as much detail about exactly how your actor works.
D
At the protocol level, it will basically just be adding a new actor into the system. So that's the soon-ish approach I'm proposing here. There's a more midterm approach, which would probably ship around Q1 or Q2 of next year, and that is basically just to deploy a WebAssembly interpreter in the network. This is going to be slow, but it's an option for some users who don't care about performance and just want to target something like Rust, some language that is not Solidity, basically. And then the long-term approach is fully permissionless, fast WebAssembly actors, which would hopefully ship in 2024, but it's still complicated, so there are a lot of unknowns. So these are the four sorts of paths that I see here, and the one I'm currently proposing is the soon-ish one, the permissioned contributed actors. The other one I think is somewhat viable is the sort of midterm approach of shipping an interpreter. Also, none of these approaches are exclusive.
D
So we could just do all four, but the question is: do we want the intermediate steps as well? That's what I want to talk about here.
D
We do have the potential first user here on the call, if they're there.

J
Yeah, we're right here.

D
I was hoping, if you have time, you could give a short intro: hey, this is what we're building, and this is why we want to build an actor.
J
All right, hi, I'm Bernhard, with Fluence. Some of you I've met, some of you I haven't. Thanks for having us. So, we're Fluence; we're decentralized serverless compute, and we are off-chain. We decided quite some time ago that our on-chain home for marketplaces and verifications should be the Filecoin network, and part of that is related to the prospect of having a WebAssembly runtime.
J
So we've talked on and off to the team, in various capacities at various times, and we are currently on testnet. We've put a milestone in the ground on our roadmap for a Q4 minimal viable mainnet on Filecoin. As part of that drive, we are interested now in exploring the use of the WebAssembly runtime for one of our WebAssembly modules, called AquaVM. That module, on chain, would be acting as a verifier for some off-chain compute, so it wouldn't be running compute.
J
It would be a verifier for off-chain compute. We'll go into that in detail, and we're asking if we could start building on it, and what the possibility is of a Q4 deliverable of that capability.
J
Yeah, so we actually synced on the Q4 mainnet release, in part, I believe, with what Steven proposed in that Beyond FVM, additional-runtime-actors discussion FIP, and so here we are. We fundamentally believe in this; obviously, it's a pretty significant commitment from our side towards the Filecoin network, so the earlier we can test and play with it, the better.
J
The
better
we
have
Alternatives
and
I
can
briefly
speak
to
them
in
a
little
bit.
But
how
many
of
you
know
influences
I,
don't
want
to
bore
anybody,
otherwise
we
can
run
briefly
through
what
fluence
does
and
is
and
why
it's
important
what
we
want
to
do.
J
Okay, I only see five people at any given time, so one or two people know. Okay, let me run through it real quick; if I bore you, just say stop and we'll accelerate the whole thing. So, Fluence is a decentralized, stateless, serverless compute protocol. It's basically a decentralized Lambda, which is very, very different from other solutions, including Bacalhau, which are more on the managed, EC2-type container side. We run on Wasm containers, not Docker, and, as I said, the compute is off-chain.
J
We
have
multiple
components
in
our
reference.
Peer
one
is
called
Marine,
which
is
our
own
wasm
runtime,
which
is
built
on
Watson
time
at
this
point,
and
we
have
Aqua,
which
is
distributed,
choreography
and
composition
engine.
Basically,
what
you
do
is
the
developer,
writes
some
business
logic
in
Rust,
compiles
it
to
Wazi,
deploys
it
to
one
or
more
peers
out
in
that
Network
and
then
uses
Aqua
to
compose
these
Services
into
whatever
compute
Solutions
applications.
Even
protocols,
these
Services
influence
are
not
rest
or
Json
RPC
accessible.
They
only
P2P
accessible.
J
Therefore,
Aqua
functions
also
as
a
a
composer
at
that
P2P
level.
So
this
is
all
of
chain.
However,
we
have
a
very
significant
on-chain
component,
and
that
includes
multiple
marketplaces,
including
the
marketplaces
that
match
developers
with
capacity
providers,
peers
peer
providers,
operators
and
marketplaces
to
to
run
capacity,
provisioning
and
we'll
get
into
that
a
little
bit,
and
we
also
want
to
run
a
bunch
of
verifiers.
J
We
have
a
variety
of
proofs
proof
of
execution,
proof
of
correctness,
proof
of
capacity,
blah
blah
blah
and
one
of
those
verifiers
actually
would
be
the
very
active
rvm
we
have
on
every
Pier,
enabling
the
distributed
choreography
and
composition
to
quote
unquote:
re-run,
probabilistic
sample
subset
of
the
execution
traces
on
chain
which
drives
payment.
The
important
part
here
is:
it
drives
payments
settlement
between
developers
and
capacity
or
capacity
consumers
and
capacity
providers.
As
I
said,
we
want
to
be
on
Falcon
with
a
minimal,
viable
mainnet
in
Q4.
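A toy sketch of the sampled re-execution pattern Bernhard describes, with everything invented for illustration: the "trace execution" is just a hash, and the sampling seed would in practice come from on-chain randomness.

```python
import hashlib
import random

def run_trace(trace: bytes) -> bytes:
    # Stand-in for deterministically re-executing a trace in an on-chain
    # verifier; here we just hash the trace bytes.
    return hashlib.sha256(trace).digest()

def verify_sample(traces, claimed_results, k: int, seed: int) -> bool:
    """Re-run k randomly sampled traces and compare to the claimed results;
    payment settles only if every sampled trace checks out."""
    rng = random.Random(seed)  # seed would come from on-chain randomness
    for i in rng.sample(range(len(traces)), k):
        if run_trace(traces[i]) != claimed_results[i]:
            return False
    return True

traces = [bytes([i]) for i in range(10)]
results = [run_trace(t) for t in traces]
assert verify_sample(traces, results, k=3, seed=42)
```

The design choice is the usual one for optimistic verification: re-running only a random sample keeps on-chain cost low, while unpredictable sampling makes cheating on any trace risky.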
A
I can give you just two minutes to round up, so that we can move on.
J
Okay, all right. So let me just move to the next slide. As part of our move to Filecoin, we believe we bring a lot of benefits to the community, and part of it hinges, though not entirely, on AquaVM being able to run on chain. We are in heavy discussions with multiple SPs to provide excess CPU capacity to the Fluence network, and I can't say exactly who it is, but those of you who were at ESPA probably saw who I talked to.
J
They
should
know
who
I
talked
to
and
who
we're
talking
to,
and
we
we
do
believe
we
can
reveal
some
of
the
partners
very
soon,
but
the
latest
in
October,
at
the
espa
Slash,
Phil
Vegas
event.
J
We
also
bring
in
some
other
partners
in
with
a
variety
of
NPC
Solutions,
which
we
also
feel
is,
is
very
beneficial
to
the
community,
as
well
as
the
the
Vodka,
Network
and
fluence
has
been
I.
Think
an
active
partner
with
many
many
Falcon
events
and
the
Falcon
Community
for
several
years
on
a
global
scale,
and
obviously
we
continue.
We
would
like
to
continue
this
commitment
and
actually
accelerate
and
extend
this
commitment,
so
that's
sort
of
in
a
nutshell.
From
an
alternative
perspective,
there
are
alternatives.
J
We
currently
run
test
net
on
near
and
Aurora
I.
Don't
really
like
it
for
a
minute
solution:
So
within
the
five
coin
ecosystem,
it's
IPC
as
an
L2
IPC
itself,
isn't
particularly
it's
it's
early.
We
are
considering
at
an
l24
for
another
part
of
the
solution.
However,
since
Aqua
VM
drives
settlement,
the
security
and
the
responsiveness
of
you
know
with
L1
real
time,
certainly
would
be
a
tremendous
benefit
for
the
solution.
A
So
much
if
you
could
make
those
slides
available
as
well,
so
that
I
can
include
them
when
I
am
sending
the
the
slides
out
yeah.
Thank
you
very
much.
E
A few minutes; it depends how many questions I get. So, hello, everyone. I would like to introduce "PoRep security policy and replacement sealing enforcement". It's a FIP that replaces FIP-0047, which was the PoRep security policy, because we noticed a significant simplification that we can make to that FIP.
E
The primary change is that, instead of explicitly scheduling a replacement sealing for every sector that an SP has, we require the storage providers to replacement-seal their sectors over the period of one and a half years. Also, at a high level, because I can't assume everyone knows it: FIP-0047 addresses the risk of the PoRep construction becoming broken in some shape or form, either through computational advancements, or some development in cryptography, or something like that.
E
So
in
that
case,
currently
the
default
policy,
as
we
have
right
now,
is
that
a
sector's
commit
a
sector
of
commitments
are
at
maximum
one
and
a
half
years
and
in
case
of
breakage
for
a
breakage.
We
would
prevent
disallowed
participants
in
the
network
to
extend
the
commitments
on
their
sectors.
Thus,
the
vulnerable
sectors
would
expire
after
one
and
a
half
years.
E
So, as I mentioned, the primary change is that we don't schedule the replacement sealing explicitly. In case of a PoRep vulnerability, we would just require storage providers to replacement-seal over time. This brings some benefits: storage providers are not required to refresh sectors periodically, as was described in FIP-0047, and they have complete freedom over the sequencing of how and when they seal which sectors, as long as they follow the linear discharge trajectory in aggregate. So, essentially, they have to offload their old sectors linearly over time, at least linearly and not slower, and onboard anew via the replacement sealing mechanism. Failure to follow the replacement schedule initially leads to an inability to produce blocks and recoverable faults, but after some time we have to terminate those sectors, or at least some chunk of the sectors; otherwise there are issues with imbalance between what's in the power table and who can produce blocks. Next slide, please.
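A sketch of what the aggregate linear schedule could look like; the names, granularity, and window length here are illustrative, not the FIP's exact state variables.

```python
# Illustrative constants, not the FIP's exact parameters.
EPOCHS_PER_DAY = 2880            # 30-second epochs
WINDOW = 540 * EPOCHS_PER_DAY    # ~1.5 years to replace everything

def max_old_sectors_allowed(initial_old: int, epochs_elapsed: int) -> int:
    """Old sectors must decrease at least linearly to zero over the window."""
    if epochs_elapsed >= WINDOW:
        return 0
    remaining_fraction = (WINDOW - epochs_elapsed) / WINDOW
    return int(initial_old * remaining_fraction)

def compliant(initial_old: int, still_old: int, epochs_elapsed: int) -> bool:
    # An SP is free to choose which sectors to reseal and when, as long as
    # the aggregate count of old sectors stays on or below this line.
    return still_old <= max_old_sectors_allowed(initial_old, epochs_elapsed)

assert compliant(1000, 500, WINDOW // 2)       # exactly on schedule
assert not compliant(1000, 900, WINDOW // 2)   # too slow: faults, then termination
```

Note how only two aggregate quantities per miner (the initial count of old sectors and the current count) are needed for the check, which is the state reduction the FIP is after.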
E
So what does FIP-0067 bring us in comparison to FIP-0047? The schedule is no longer necessary, which significantly reduces the amount of additional state and bookkeeping we have to do. For the same reason, there is no need for an immediate code implementation of the FIP, although having that code implemented would make enabling it, when we need it, much easier, because we would already have the code. And let me remind you that the case we are thinking of here is PoRep breaking for some reason.
E
Thus, there will already be a lot of things to do, create, and write. It introduces only two aggregate state variables per miner instance, in comparison to the amount of state in the previous implementation, where we reused the expiration queue, which led to very complex code and additional state. So the minimal thing we can do, I think, is introduce placeholders for those state variables in whatever the nearest upgrade is, such that at least we don't have to migrate the core state, although we'll still have migrated the shape of the state and thus increased the complexity of changing the tooling. We'll nevertheless have to perform a migration when we want to enable it, to populate those entries, especially one entry, the initial old sectors, which we populate with the count of sectors with longer expiration than one and a half years; that is a very small migration in comparison.
I
Thanks. Another question, because I understand this, but a question for everyone: what's our next step with this proposal, which has now been written up and published for months? How do we get it to the point where everyone knows that this is the plan in case we do have a problem, and we know what the steps are? I would propose, and I think our current process actually suits this quite well, that we can make any amendments and put this FIP through the acceptance process without mandating that we do any work straight away, because the process is currently, you know: get governance acceptance, and then core devs decide when to implement it.
I
So if we accept this FIP, it doesn't mean we're committing to doing any of this work right now; it means the core devs can then decide when the right time is for us to do this implementation work, which sets us up for the easy path should this ever happen.
I
I don't think we should withhold accepting this FIP based on not wanting to do the work right now. Those two steps are decoupled in our current process, on purpose.
A
I think, in my opinion, that's pretty much correct. The FIP author can continue to follow the process as normal, and then core devs can decide, you know, based on engineering requirements, resources, or interest, when to schedule it for implementation. I don't think it makes sense to wait until we're ready to implement before we take it through the acceptance process.
A
I mean, if it's ready for Last Call, we can do that; otherwise... well, that's my opinion. I'm not sure if anyone else wants to add on to that.
A
Great, okay. Are there additional questions on this before we quickly move on? Nope. Thank you so much. Before we do this, I'll let Molly jump in for her shout-outs, and then I'll hand over to Caitlin.
G
Thanks, Lucky. I can't present, but I also don't need to, if you want to just make it easy. I wanted to give folks a heads-up about the Filecoin Dev Summits that we're organizing with the Filecoin Foundation in September.
G
The aim is that we're going to have two regional summits, one in Asia and one in Europe / North America. Asia is going to be Singapore, the 12th through 14th of September, and Europe / North America will be in the thing that borders the intersection of those two continents, which is Iceland, September 25th through 27th. The aim here is that these are going to be protocol development venues.
G
They are conversation venues: talking about improvements to the tech stack, getting good alignment on how we see the upgrade path for implementations, network scalability, data onboarding, tooling, etc., improving over time. We're still in the planning stages right now, but are starting to reach out to folks like core devs and others to join these venues and host tracks. There is a website. We're trying to keep this a limited venue; it's not trying to be a big, you know, FIL Bangalore-style conference.
G
It's
it's
focused
on
developers,
protocol
Engineers
tool,
Builders
Etc,
who
are
talking
and
aligning
about
how
we
build
this
awesome,
Network
together
going
forward
and
so
we're
through
that
not
just
opening
up
attendance
to
everyone.
G
So
if
you
I
expect
and
hope
that
everyone
here
attends
this
venue
and
if
you're
paging
into
the
core
Dev
calls,
that
probably
means
that
you
are
engaged
in
protocol
development.
So
if
you
see
the
recording
of
This
nudge,
nudge,
we'd,
also
love
to
have
have
you
apply
and
hopefully
come
to
one
of
those
two
events.
The
aim
is
that
we'll
do
Asia.
G
first: have a really good, open conversation about how we see some of these areas improving, with a little bit more of a focus on data onboarding and storage-provider tooling, so like the SP stack, and then build towards, in Iceland, having a little bit more of a concrete vision about protocol development and scaling opportunities, with a deeper focus on retrievals, clients, layer-two networks building on Filecoin, and Filecoin scaling solutions, like retrieval incentives and retrieval checking and things like that. And so we're working on getting that
G
all into the website, but for all of you here today: I encourage you to just hit that apply button if you're an engineer in the core devs working group, and start booking time on your calendars for one of those two slots, fingers crossed. Really looking forward to getting together in person to jam on some of these awesome design ideas and plan not just the current open FIPs, but what we expect to see one to two or three years from now.
A
Thank you. Caitlin?
H
Sure, thanks Molly, thanks lucky. I just wanted to come in and speak briefly about what we have more or less collectively agreed to as the upgrade timeline for nv21, which we're calling the watermelon upgrade. We originally thought that it would land mid-summer; it has been pushed out a little bit, but we think probably for the best. We are expecting the next mainnet upgrade to be on November 7th. I'm going to be working together with Jennifer and lucky to coordinate all of the FIPs for this.
H
While we also work to unblock some of the stickier FIPs-process issues that we've been talking about for the last couple of months as well. So the three of us will be coordinating communications in the coming weeks. But one thing I wanted to specifically flag is that a lot of teams are working under pretty tight capacity at the moment, and we have had a pretty set scope for FIPs that are going to be included in nv21 for the past several months. Yep, I can let lucky speak to this if he wants to, because he'll be helping to facilitate ensuring everything reaches last call by an appropriate time, etc. But please know that the cutoff for FIPs entering last call is going to be September 15th. Even though the upgrade is going to land on chain in November, we are not going to be accepting any FIPs proposed or entering last call after this point.
H
So if your team is working on something that you really think needs to be in this network upgrade — that it's really going to cause a lot of downstream issues if it's not — but for which there is not yet a draft open, please be sure to flag this to us ASAP, because as we get closer and closer to this upgrade deadline, we're going to have fewer and fewer resources available to accommodate the work that you may have in progress, particularly if it's not fully visible to us right now.
H
So, just to reiterate, we do use this public channel for all of our planning purposes, as we always have, same one as usual. We will also begin to share these dates more publicly
H
across Slack and on Twitter as well. One thing I want to also flag is that this was a particularly difficult upgrade to schedule, and I think this difficulty is only going to increase as more teams want to join core devs and as more folks come into the network; things themselves become more complicated. And so we've sort of re-raised this issue of potentially moving to pre-scheduled network upgrades in 2024. This is just a discussion; like everything else in core devs, this is a group that we all kind of work together to structure and run.
H
If you have opinions, ideas, or thoughts on this, please raise them so that we are aware of them and can take your needs into account. There is a discussion forum that has already been opened for the pre-scheduled network upgrade scheduling suggestion as well, so please take a look at it if you have not, if you have associated opinions. Otherwise, I think there's lots of good Q&A we can also delve into today, so I'm going to stop for now, but likewise happy to answer any questions that anyone may have.
F
I just want to call out, though, beyond this existing list, there's one other FIP effort that the Lotus team, along with some other teams, are closely tracking: Alex's work on direct data onboarding, which allows committing data directly to sectors, either verified or non-verified, without directly interacting with the market actor. Many of the teams, including the Lotus-Miner team and teams on the client side of the tooling, are actually tracking this work, and we are hoping to support this in
F
nv21, just because of the overall benefit it brings to the network and the opportunities it enables for users of the storage market, but also because it significantly reduces the cost for storage providers to onboard data into the network. We believe there's a positive impact on the network; that's why we are hoping to get this supported in nv21. I don't know if Alex has anything to add on this topic.
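To make the distinction concrete, here is a minimal conceptual sketch in Python. This is not the real actor interface: `Sector`, `onboard_via_market`, and `onboard_direct` are hypothetical names invented for illustration; the sketch only shows which onboarding path touches the market actor.

```python
from dataclasses import dataclass, field

# Conceptual sketch only: NOT the actual Filecoin actor APIs.
# Classic path: every piece is routed through the built-in storage market
# actor (a published deal) before landing in a sector. Direct data
# onboarding: the storage provider commits the piece (verified or not)
# straight into the sector, never touching the market actor.

@dataclass
class Sector:
    number: int
    pieces: list = field(default_factory=list)

def onboard_via_market(market_log: list, sector: Sector, piece_cid: str) -> None:
    """Classic flow: publish a deal with the market actor, then activate."""
    market_log.append(("publish_deal", piece_cid))
    sector.pieces.append((piece_cid, "deal"))

def onboard_direct(sector: Sector, piece_cid: str, verified: bool = False) -> None:
    """Direct flow: commit the piece without any market-actor interaction."""
    sector.pieces.append((piece_cid, "verified" if verified else "unverified"))

market_log: list = []
s1, s2 = Sector(1), Sector(2)
onboard_via_market(market_log, s1, "piece-1")
onboard_direct(s2, "piece-2", verified=True)
print(len(market_log))  # the market actor was touched only by the classic path
```

The per-piece cost reduction mentioned above comes from the direct path skipping the deal-publication step entirely.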
I
Sorry, yeah. For that one, we also need to make a long-term fix to the market actor for the cron problem that we short-term fixed in the last upgrade.
I
There's no FIP for that yet, but it's well understood what we need to do, and we're working on it. I don't want to spend a long time on this, but there was obviously some confusion on the scope for this upgrade. Initially the date was more like now, in which case the scope as proposed made sense; but when the date was pushed back, and we're only getting one upgrade this year, I think we somehow missed the, like, okay:
I
so what should our new scope be, given these two very significant things are missing from it? I'll go and write them in the appropriate GitHub discussion as well now, but I'm not sure there was a very clear call for what our new scope should be.
D
One is already up, and hopefully it'll be merged as a draft very soon, around IPLD reachability; I'm going to present that in the next core devs call. Another one is basically trying to clean up a lot of the FVM syscall APIs, to make them a little bit safer for user-deployed actors to use when we eventually allow those. Another one is around cleaning up a few more security things that don't interact correctly with user-deployed actors. I think those are the main ones, but yeah.
D
Basically, we have, I think, three or four FIPs that we're hopefully going to be proposing. We can try to shrink them down into fewer, bigger FIPs, because they're actually not very complicated, most of them at least.
F
Yeah, I was going to echo what both Alex and Stephen mentioned on the clarity of the timeline stuff. I have been mentioning that, beyond the FIPs, we should really try to decouple the FIP process from the network upgrade. The ideal world would be: you have all the FIPs accepted, and then they get prioritized and scheduled into the upcoming network upgrades. However, that's not how we work today, because of the nature of development in the network.
F
These days, however, I think it's very important from an implementation perspective to have a cutoff — basically a scope date — rather than just the FIP last call, because as implementation teams we plan monthly, quarterly, and things like that. We need to allocate engineering resources to support the FIP authors and to actually implement the FIPs to be finalized in the network upgrade. These days, the last-call deadline is always two weeks before the code freeze.
F
That makes it really, really hard for implementations to be reactive, and to support the planning of our work and resources, with the current timeline. So what I would really like in the pre-scheduled network upgrade work is: if we're expecting, say hypothetically, an upgrade scheduled next February, I would love all the potential FIPs to be prioritized in that upgrade to be registered with the implementation teams no later than December, so we have at least two months to implement all the FIPs that are important to the network.
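The back-scheduling being asked for here can be sketched as simple date arithmetic. The offsets below are taken loosely from this discussion (last call closing two weeks before code freeze, FIPs registered about two months before the upgrade); the exact lead times and the February date are hypothetical illustrations, not agreed policy.

```python
from datetime import date, timedelta

def back_schedule(upgrade: date,
                  freeze_lead_weeks: int = 4,
                  last_call_lead_weeks: int = 2,
                  registration_lead_days: int = 60) -> dict:
    """Work backwards from an upgrade date to the earlier milestones.

    The lead times are illustrative assumptions, not fixed policy.
    """
    code_freeze = upgrade - timedelta(weeks=freeze_lead_weeks)
    return {
        "registration_cutoff": upgrade - timedelta(days=registration_lead_days),
        "last_call_cutoff": code_freeze - timedelta(weeks=last_call_lead_weeks),
        "code_freeze": code_freeze,
        "upgrade": upgrade,
    }

# Hypothetical February upgrade, as in the example above.
milestones = back_schedule(date(2024, 2, 15))
for name, day in milestones.items():
    print(f"{name}: {day.isoformat()}")
```

With these assumed offsets, a February 15 upgrade puts the registration cutoff in mid-December, which matches the two-month implementation window requested above.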
F
I think that cutoff date is very important and needs to be aligned across implementation teams and core devs. Welcome to hear feedback from the other implementation teams as well.
A
Before you go, Steb — Caitlin, you had your hand up. Do you want to jump in?
H
I'm not sure; I want to think about Jennifer's comments a little bit more. I think, if the implementation teams want to suggest a cutoff date, then that's great; we can help incorporate that as part of sort of the cutoff for starting last call.
H
I think Alex's note — that there wasn't really a transition between talking about different timelines and potentially rescoping — is a good one. And if there are more, smaller FIPs and implementation teams feel like they have the resources to allocate to those, then that's great. One thing I was going to suggest publicly, but also as a note for you, lucky: it might be really helpful, as we move closer and closer, potentially, to a scope-assertion cutoff date — I don't know what that's going to look like
H
yet — if, in the weekly governance update, there were a very explicit list of what we're currently considering in scope. Then there is something that, historically, every single week, we can reference to see when those things have changed; and when there are changes, those can be flagged specifically for core devs. Things move really, really quickly and it's easy to get lost.
D
I just want to note that all of this really depends on what type of FIP you have. There are some FIPs that require extensive changes to clients, and some FIPs that require basically a set of implementation work by the FIP author first, before they come and post the FIP. So, for example, a lot of the FIPs from the FVM side: unfortunately, at the moment there is only one FVM implementation, so we do all the implementation work first to figure out, okay, is this actually going to work?
D
How is this going to work out? Then we write the FIP, then we try to get people to accept it, and at that point there's little integration work left. This is where it gets tricky. I don't know if we can pick a good cutoff, because in some cases it'll be very much like: okay, someone proposes a FIP and there's a bunch of implementation work afterwards. In other cases, all of that will be done, and it will basically be: hey, this is done.
D
Can we ship this thing? So I don't think we can impose hard cutoffs here. I think, basically, once the FIP is accepted, then you talk to the core devs to see, okay, how much time will it take to actually integrate this, and that's when you decide whether or not it actually gets launched.
F
Totally, I totally agree with you there; like, you know, different FIPs have different natures of development. I think what I'm asking is that at least we have the ideation — like a FIP discussion — started early, and then the FIP proposal discussion early on, talking to the implementation teams, suggesting: hey, this is the effort I'm working on, and I'm hoping to get it accepted by the Filecoin network; please keep an eye on this, we might be working closely over the next couple of weeks or months to implement this and potentially finalize it in the next network upgrade.
K
Yeah, I'm wondering — just an idea here — whether, to make this process effective as well, it would make sense to commit to having a global pipeline, task ranking, or whatever, that all implementation teams, and teams that are working on specific components, commit to keeping updated on a specific cadence. It could align with the core devs meetings. It would also list things that they see on the horizon that might not yet have reached a discussion, might be in the discussion
K
stage, or might be in an implementation stage. And really, this instrument would serve — like, I think it would provide greater alignment, and it would allow teams to flag more publicly to the community the things that they're working on, such that when a specific cutoff date gets placed, we could align our work better and really understand
K
well, which of these things are likely in, you know, a finalization stage when it comes to implementation and very close, and whether that aligns closely with a potential cutoff date or a potential pre-scheduled upgrade. Does it make sense to give some slack, maybe? Because I don't think having very hard dates and fixed dates for these
K
upgrades is super feasible. I think there needs to be some slack, some give and take, because things arise, and potentially, you know, some network upgrades do require more testing ahead of time, and the path towards the network upgrade might be more complicated in terms of the global actions that core devs need to take — testnets and so on.
K
So I'm wondering if having greater visibility into the implementation pipeline might kind of bridge the gap here.
A
I think these are things that, in my opinion — you know, we're constantly thinking about trying to improve and make this more workable, especially in terms of the visualization and the visibility of the work that teams are doing.
A
I personally have a problem with cutoff dates. I know that it helps implementation teams, but I'm not sure if it's good practice in governance — that's my personal opinion — to have these things set and then say, after this time, you know, we can't accept any more. I don't know if that's good practice, but I guess it's usually helpful for implementation and engineering teams to plan resources well in advance. I just wanted to drop my personal thoughts on that.
A
Alex, since we have a few minutes.
I
Yeah, I think both you and Raul make some good points about this.
I
Our ultimate goal, I would claim, is to, you know, deliver improvements to the network at a good pace — to deliver good improvements as fast as we can, but obviously not move too fast. That's the outcome
I
we want. I would sort of question — I'm not quite sure what's so broken about our current process. Like, I agree there's some friction, there's some confusion, we're not exactly sure; but it doesn't seem that broken to me, you know, not having a cutoff date.
I
We need this acceptance cutoff date — I totally understand why we have that, and it seems like a reasonable distance ahead of the network upgrade. I think more transparency from implementation teams, including, you know, things like the FVM and the actors and so on, about what they're working on — publishing more — would greatly help engineering teams plan their work. But I'm not sure that adding more process to it is going to get us to ship more stuff faster; I'm not at all convinced.
F
I can share what's broken today, at least from the Lotus perspective — and again, I want to hear from Venus and Forest as well. So, for example, we already did our quarterly planning early in the summer.
F
We have two or three months of work that we had identified as important for Lotus and Lotus-Miner to work on right now, and, as you shared the direct data onboarding project with us last month, we had to drop other priorities that we had already planned — based on our user requests and things like that — to support this effort. We are supporting this project because we do see higher impact from it compared to the other things in our backlog.
A
Raul, is that a new hand or an old hand? No? Okay. Okay, again — I believe most of our FIP editors are on this call — just another appeal: for us to ship these things quicker, I would rely on you to process these PRs and drafts as quickly as possible, rather than having them sit endlessly in the repos. So please — I'll be sending nudges and reminders, constantly asking us to process these reviews as quickly as possible, so that we can move quickly with the timelines.
A
I mean, September 15th is around the corner. Otherwise, thank you all so much for joining us again. Register for our community governance calls and join the #fil-gov Slack channel to continue the conversation. All of the links and materials referenced — I'll put them in the notes and share them around as soon as possible. Thank you all so much, and have a great rest of the day. Bye.