From YouTube: Filecoin Core Devs Biweekly #20
Description
Recording for: https://github.com/filecoin-project/tpm/issues/45
For more information on Filecoin
- visit the project website: https://filecoin.io/
- or follow Filecoin on Twitter: https://twitter.com/Filecoin
Get Filecoin community news and announcements in your inbox, monthly: http://eepurl.com/gbfn1n
A
All right, good morning, good afternoon, good evening, everyone. Welcome to the 20th Filecoin Core Devs meeting; it is Thursday, June 17th. I'm going to drop the agenda in the chat. It might be a fairly quick meeting today: it's mostly updates from the teams, and then some more discussion around network v13 and actors v5, the timeline, and some testing and simulation work that we've been doing. Obviously, if there are other agenda items, we will take them on, but first off, let's jump into updates from the various teams.
B
Hi everyone, this is Steven from the Venus team. In the last two weeks we had some progress on Venus. The first thing is that we successfully integrated specs-actors v5 and rolled the code out to the calibration network, and it works fine; so far, so good. Another thing is that we completed the first version of another component we call venus-gateway. As you know, we have some common services.
B
For Venus we have common services, for the wallet and for the distributed message pool, and each of these common services needs an access point for each component. So we have another component called the gateway to combine all of them, so that all the miners have a uniform access point. It makes things easier. That's another thing we had done, and we also put the gateway into the live mining pool.
B
Yeah, the live mining pool on mainnet; so far, so good. Another big thing is that we are working on upgrading the docs.
B
If you go to the Venus Filecoin documentation, which is already polished, it's much better than ever, and I can put it in the chat window if you want to check. We also have a new README file to make things clearer for people who go to GitHub to see the Venus repo. We're also working on an FAQ, which is in progress, because we're preparing for the Venus mining incubation center; we plan to launch it in July.
B
We want small miners to know the details of how to connect to our common services. I think that's what we were working on in the last weeks. In the next two weeks we will do more testing and review the new code to make sure that the upgrade to the new network version is successful. Another very big thing is that we want to complete all the fixes against the issues found in our audit.
B
So we want to complete all of those; we want to have the final audit report next week. I think that's all, thanks.
A
Excellent, that's a lot; you have a lot of parallel tracks going on at the same time, which is great. Very cool to hear that Venus nodes are running along on calibration net and are ready for the hyperdrive upgrade on mainnet. Also very nice to see the docs work and the user-experience work. For the benefit of anyone viewing the recording, the docs link for Venus is venus.filecoin.io, so check that out.
A
That's very cool. Any questions for anyone about it?
C
Yeah, so some exciting stuff going on. We got a preliminary report for our audit last week, or a couple of weeks ago, something like that. We've been working hard on fixing a bunch of the findings; it doesn't look like anything requires any big refactors, thank god. So we're getting those fixes in, and we've started working on the actors v5 changes: currently updating the runtime and integrating the new proofs libraries, and then we're going to start implementing the FIPs, which should be done relatively soon. We should have started this a little bit earlier, but it was unclear when the preliminary report for the audit was going to drop, so when we got it, we dropped everything and started working on that right away.
C
So there's that for v5. We've also got metrics landing in probably a couple of days or so, and some cool little Grafana dashboards going, just to get more insight into the nitty-gritty of the performance. We've successfully refactored a lot of our RPC code, so now we're moving quite fast on implementing a bunch of CLI commands, and that leads to us starting to write user documentation and prepping for our first release. I don't know when that's going to happen, but it will be after we get all this stuff in. I think that's it.
A
Nice, sounds good. It sounds like you're in the "putting the finishing touches" phase, which is great. Let us know if you have any issues with the actors v5 and network v13 upgrade; there's a bunch of stuff, like test vectors, which I'm sure Zen will talk about, but also interop net, and it's now running on calibration net as well, so you have lots of resources to test against. Cool, any questions for the Forest team? Okay, let's hear from Fuhon, please.
D
Thank you. Good morning, evening, day, everyone. We finally have some great news: we were able to resolve all the memory management issues, and now the memory consumption seems stable. Well, small deviations of about 20 to 30 megabytes are happening, but that's totally okay, and it comes back down. It has all been running for days straight now, no issues, no memory management issues, no segfaults.
D
We will let it run for at least one more week, and if something happens, we will try to fix it as soon as possible. Otherwise, I will ping someone from the Foundation to set up the security audit, and we will be preparing for the testing of the miner on interop net; this is something we had to postpone due to the investigation of the memory management issues. We have also introduced a couple of improvements internally.
D
We have introduced proper tipset caching for certain components, and we have refactored the API. Actually, the main memory leak was related to the API, specifically how the messages are transferred to the API, so this is something we had to change. I think that's mostly it for the changes we have been doing over the past two weeks.
E
Would you mind if I interrupted a bit? This is Dudley from Filecoin Foundation security here. Did I understand correctly that you guys are ready for an audit now?
D
Yes, we are.
G
Yes. So I'll link you to the v5 rc3 tag in the Zoom chat, and if you're watching this, you can just go to the specs-actors repo and click on tags. There have been very few changes since the last time we spoke; there were a few bug fixes, which you can take a look at, and I won't get into them. The biggest tweak to behavior is checking the caller addresses on the ProveCommitAggregate method, based on some security concerns. Other than that, all behavior should be the same.
G
The only other thing I have to report on from the specs-actors side is the generated test vectors work stream. Now, on specs-actors master, we have auto-generated test vectors from the scenario tests, and I'm linking here the README in the repo, which explains how to get these vectors running against existing test harnesses in your own projects. I'm also available on Slack to help make sure things are working and to figure out how to use these. I'm really excited.
A
Fantastic. I'm sure anyone re-implementing actors is going to be very happy to have those test vectors. From the Lotus side of things, we have consumed the latest actors v5 rc3 and are implementation-complete in terms of the hyperdrive upgrade. We did a bunch of testing on internal test networks, and it's running on interop net; we didn't find anything too concerning, so we put it on calibration net. Calibration net has now upgraded to v13 and is using v5 actors, and it's running well. We did find one proofs deadlock, actually a deadlock in bellperson, which was causing some nodes to stall.
A
That has since been fixed, with potential performance ramifications that we're still investigating, but we're not too concerned. So if you're at all interested in trying out the functionality unlocked in FIPs 8 and 13, you can do so on calibration net. The latest Lotus release candidate is rc5, but most of the release candidates have involved very small changes, like minor fixes; no scary stuff has been uncovered, which is great. Miners are really testing calibration net pretty hard, so we're waiting to see the data that we get from them, and that will be the final piece of confidence-building that we want for hyperdrive.
A
If things continue to look good, we'll probably be shipping a release sometime next week, with the network upgrade itself being one to two weeks after; we're still figuring out the exact timeline. That's what we're thinking. There have also been some interesting results from some "predicting the future" work, in terms of simulating what the world will look like after hyperdrive, but Stebalien will get into that after updates; it's the first item on the agenda. Cool, sounds good. All right, any updates from the Foundation or from the community side of things that we'd like to share?
H
Oh hi, sorry for joining a few minutes late. I did want to quickly share that we launched the miner working group this week. For those that are not as familiar, this is something where we have miners representing North America, Europe, and also Asia, with a forum for them to talk about some of the challenges. We're also hiring someone who will help think about programs and additional incentives that might address challenges in the ecosystem. Let me go ahead and also share the blog post for those that may not have seen it, and then I'll also let Sonya fill in on other things from the grants side. Give me one sec; here we go.
H
Yeah, so this is really something that is led by the community, not us at the foundation. We just had a kickoff meeting yesterday, and we're going to really hear the top priorities, but a couple of areas that we've heard so far range from client deals to more compliance questions around data storage, and then there have been a lot of discussions around other issues like retrieval mining; there are a lot of people wanting to learn more there. So we definitely want to make sure that everyone in all geographic areas can talk about top priorities, because obviously there's a huge size difference, and from there we can narrow down how we think about being able to propel some programs that fit those needs. So thank you.
I
Thanks for the great update, Clara. From the community side, a lot of what I'll be focusing on, in collaboration with a lot of you, is trying to find better ways of incorporating community feedback into the FIPs process itself, while we're still waiting to get someone who can focus on governance itself. I know this is something that has been brought up in previous conversations. I already mentioned that I did go through all 19 of the FIPs, and I know there's the whole conversation about whether the community should be involved in deciding the timelines for network upgrades, etc.
On the ecosystem side as well, something else that I will be having conversations around is the developer portal that we'll be trying to build: getting your feedback and your opinions on what should go on a developer portal that can make it easier for developers to get started with each of these implementations. I'll reach out to you, Aayush, and then we can take the conversation from there and spread it out to all the other implementations.
A
Fantastic, that sounds very interesting. Thanks a lot for going back and trying to draw insights from the protocol changes and FIPs that we have made on the network so far, because this group moves very quickly and we want to get things out there, but there are probably lessons we could have drawn, or insights we should be drawing, that will be very valuable. I think that's really cool. Thank you.
E
From the security side, not too much. One thing to keep in everyone's mind: one of the audits I would like to get done in the future is a live network audit. Basically, making a test net with nodes from every implementer and then attacking it live, because most of the audits we have so far are code audits. Those are great, but as we've seen in the past, some of the bigger bugs we have had come from live network behavior rather than anything a code audit would catch. I think every project is probably already capable of this, but it's not going to be soon; the auditors I have in mind for it will probably be booked for about three months. But in the future, look forward to contributing to a test net where things are intentionally broken.
A
That sounds good; that sounds like a good thing to have running in the background.
Fantastic. So I think we've covered the network v13 and actors v5 stuff; obviously we can take questions at the end as well, but I do want to get to some of the simulation work that we've been doing.
A
The motivation behind this is basically that hyperdrive really changes a lot of things about the Filecoin network by significantly increasing the storage onboarding rate, so we wanted to see what that would look like and make sure we weren't getting ourselves into any trouble. Stebalien took on this project, and he can describe both the work he did and the results we've gotten so far, and how we're feeling.
F
Okay, so basically I created a simulation system that generates a fake blockchain, assuming that everyone on the network continues to do what they're supposed to do and seals as fast as possible. It takes any network, forks it at some point, and starts to perform window posts as necessary; it will create as many prove-commits as possible and pre-commit as frequently as possible, and it runs the chain at max bandwidth.
F
The idea is: okay, in a month, two months, three months, six months, whatever, what will the capacity be, what will the state growth on chain be, assuming everyone is doing what we expect them to do? Will anything break? That kind of stuff. So now I'm going to share my screen with results.
F
Here we go. I ended up creating this document, which describes a lot of results. In terms of datastore size, this is purely Lotus-specific, so you might not care too much: we noticed that before the upgrade, the datastore was growing about 21 gigabytes per day; afterwards it will grow around 23 to 24 gigabytes per day. So we're not actually increasing the churn too much in this new network version.
F
The headline is the state tree size. I actually have two versions of the simulation here. One assumes that the network is optimal: basically, five blocks per tipset, completely full of messages, no gas overestimation.
F
This is not realistic, but it shows the worst-case scenario, and the answer there is that the state tree grows 1.7 gigabytes per day. This is rather a lot, but we do have other improvements on the horizon to improve the situation before it gets too far out of hand. We figured we should share this with you.
F
Down here, I modified the simulation to take into account the gas overestimation we're seeing on chain and the actual block efficiency, or tipset efficiency, we're seeing on chain. You can see the lines are tapering down a bit, and for state tree growth we're getting about 1.1 to 1.2 gigabytes per day; still large, but not too terrible. From there we can project and see how big state trees will be, and also how big the actual snapshots will be.
F
As you probably know, most people will restore their nodes, or start them off, by loading a snapshot. In basically six months, the state tree size, and the snapshot, will be about a quarter of a terabyte. We're hoping to have improvements by then, or at least by the end of the year, because there are people working on improving this. The biggest contribution that we've seen here is actually the sector infos, and the biggest part of that is the CommR.
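The projection behind those numbers is simple arithmetic; a minimal sketch using the figures quoted in this call (the ~16 GB current state tree size is mentioned later in the discussion, and the 1.2 GB/day rate is the "realistic" post-upgrade estimate):

```python
CURRENT_STATE_GB = 16      # approximate state tree size at the time of the call
GROWTH_GB_PER_DAY = 1.2    # realistic post-upgrade estimate from the simulation

def projected_state_gb(days: int) -> float:
    """Project the state tree (and roughly the snapshot) size `days` out."""
    return CURRENT_STATE_GB + GROWTH_GB_PER_DAY * days

# Six months out, this lands around a quarter of a terabyte, as stated above.
six_months = projected_state_gb(180)
```

The same linear model is what makes the later "half a terabyte in a year" and "shrink by half" statements comparable.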
F
So
the
the
only
real
way
to
tackle
this
is
to
move
call
mars
off
chain,
which
is
a
goal.
It's
theoretically
possible.
It's
just
a
bit
tricky
because
it
means
we
have
to
change.
How
would
oppose
proofs
work,
but
it
is
still
theoretically
possible.
There
are
other
changes
we
can
make.
For
example,
we
can
probably
slim
down
the
the
second
dose
by
quite
a
bit
there's
a
lot
of
duplicate
information
in
there.
F
That's
kind
of
present
in
all
sector
infos,
where,
if,
instead
of
storing
everything
as
kind
of
like
one
like
separate
objects,
we
kind
of
turned
everything
into
a
column
store
effectively
where
we
have
like
one
one
column
of
commars,
and
then
we
like
bit
fields
for,
like
everything,
expiring,
the
certainty
box
and
all
this
kind
of
stuff.
We
could
probably
shrink
the
state
by
I'm
guessing
about
a
half
which
is
not
going
to
really
fix
the
problem.
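As a rough illustration of the row-to-column rotation being described, here is a toy sketch; the field names are simplified stand-ins, not the actual sector info schema:

```python
# Today's layout, roughly: one object per sector, every field repeated.
sectors = [
    {"number": 1, "comm_r": b"\x01" * 32, "expiration": 1000},
    {"number": 2, "comm_r": b"\x02" * 32, "expiration": 1000},
    {"number": 3, "comm_r": b"\x03" * 32, "expiration": 2000},
]

def to_column_store(sectors):
    """Rotate per-sector objects into one array per field.

    Shared values (e.g. a common expiration epoch) are stored once, with a
    bitfield of sector numbers attached, instead of once per sector.
    """
    comm_rs = [s["comm_r"] for s in sectors]
    expirations = {}  # expiration epoch -> bitfield of sector numbers
    for s in sectors:
        bits = expirations.get(s["expiration"], 0)
        expirations[s["expiration"]] = bits | (1 << s["number"])
    return {"comm_r": comm_rs, "expirations": expirations}

columns = to_column_store(sectors)
```

The CommR column itself cannot be deduplicated (each value is a distinct hash), which is why this only buys a constant factor.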
F
As
you
can
see,
that
means
basically
in
a
year
it
would
be
we'd,
be
at
a
quarter
of
terabytes
that
are
half
a
terabyte,
but
that,
like,
if
necessary,
that's
a
really
good
system
to
improve
the
situation.
We
also
did
a
bit.
G
F
F
So, for example, we said: okay, if we wanted to target 100-gigabyte snapshots, how would we do this? We'd need a slow-down factor of about 4.5x on the new chain. We looked at different targets; if anyone's interested in discussing this, I can do that later, but it's not too interesting. Mostly we looked at increasing gas costs and/or reducing batch sizes, and one of the most effective approaches we found was reducing the batch size. At the moment, though, we're not planning on doing any of this. We're planning on just saying: okay, we've discussed this, full steam ahead. The state tree is going to grow, but we have plans for improving the situation in the future, and it does meet our goal of faster network growth.
F
So things are working quite well. We can also look at onboarding rates; these are all projections here, but let me get down to the actual simulation results.
I still need to import this into the document, but it's more like 540-ish petabytes per day, and that's basically what we're expecting. Currently we're seeing about 50 petabytes per day onboarded, so we're expecting about a 10x or 11x increase in onboarding bandwidth.
Yeah, take a look at the feat/lotus-sim branch in Lotus; you can find the current work there. It's something you can probably replicate in other systems and other languages. This is specifically designed for Lotus, so we can test how Lotus holds up, but if you're interested, you can probably take this code, generate a chain in Lotus, and then effectively copy the chain into your own system and see: can you validate that chain?
F
There
are
a
couple
of
hacks
that
I
have
here
where
like,
for
example,
I'm
not
signing
messages,
because
I
don't
have
the
keys
I'm
sending
everything
as
second
messages
instead
of
second
bls,
because
I
need
like
a
deterministic
execution
order.
I
only
have
one
massive
block
per
chipset
instead
of
the
actual
five
blocks
for
tip
set.
So
there
are
a
couple
things
that
aren't
quite
correct.
F
You
would
need
to
fix
if
you
actually
want
to
like
do
full
chain
validation,
but
it's
enough
to
actually
like
basically
test
your
actor's
implementation
and
like
test
performance
and
that
kind
of
stuff.
A
Yeah, I've got a few questions, and I'm sure others have too. Right from the start, you talked about the difference between state tree growth and churn, and the change in churn. Do you want to elaborate on what that difference is?
F
State tree growth is state that's permanently added to the chain. Churn is state that's added and removed, or just changed. A lot of the churn comes from WindowPoSt, where basically every deadline you submit a post, the window post proof gets stored for some period of time, the deadlines change, a little bit of state is added, and then the state disappears; that kind of stuff. We're seeing that at about 20 to 21 gigabytes per day of just rolling churn.
F
The
like
this
is
also
what
can
be
effectively
garbage
collected
or
like.
If
you
restore
from
a
snapshot,
you
don't
necessarily
need
all
of
the
historical
states.
You
can
throw
this
away.
That's
what
the
term
is
the
the
sanctuary,
growth
and
specifically
like
the
state
you
need
in
a
single
state
trade
yeah.
That's.
That
is
the
the
only
part
about
this
and
it's
like.
If
you
want
to
have
like
the
full
state,
you
want
to
validate
blocks.
F
You
need
the
full
state
tree
and
then
you
need
basically
sort
of
the
turn
back
through
1
800,
f
box.
It
doesn't
current
snapshot
size,
but
you
don't
need
to
determine
beyond
that.
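In other words, what a node has to keep is roughly one full state tree plus the churn over the snapshot lookback window. A back-of-the-envelope model, using the 1800-epoch lookback and 21 GB/day churn mentioned here (the 30-second epoch time is Filecoin's):

```python
EPOCHS_PER_DAY = 24 * 60 * 2   # 30-second epochs, so 2880 per day
CHURN_GB_PER_DAY = 21          # rolling churn quoted above
LOOKBACK_EPOCHS = 1800         # churn a snapshot keeps, per the discussion

def min_snapshot_gb(state_tree_gb: float) -> float:
    """Full state tree plus churn across the lookback window."""
    lookback_days = LOOKBACK_EPOCHS / EPOCHS_PER_DAY
    return state_tree_gb + CHURN_GB_PER_DAY * lookback_days
```

This is why the snapshot size is dominated by the state tree term as it grows, while the churn contribution stays roughly fixed.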
A
So even if I were to say I only care about the last three days of what happened on the chain, this is still going to impact me significantly, because it's the state tree size itself that's getting a lot bigger. You also mentioned that a lot of this new state is CommRs in sector infos. What's the CommR?
F
The CommR is the sealed sector CID, basically. I think it's 36 or maybe 38 bytes, something like that; I don't know the exact size. When you're sealing a lot of sectors per day, this starts growing pretty quickly; that's the problem. And it's a hash, so it's big even compressed.
Yeah, so the state tree growth is not Lotus-specific; I was actually measuring the size of the state tree itself. There will, of course, be overhead in your datastore, so my measurements are unfortunately probably significantly lower than the actual values; probably not quite 2x, but you have to store the extra metadata, the CIDs, whatever else you need to actually index the data. The datastore growth, on the other hand, is somewhat Lotus-specific.
F
But
again
you
should
expect
the
same
order
magnitude.
Any
other
system
you
have
yeah,
the
the
power
growth
is
is
going
to
be
the
capacity
growth
is
giving
the
same
everywhere.
I
actually
also
have
numbers
on
window
post
submissions,
we're
trying
to
project
and
see
like
how
much
of
chain
bandwidth
would
oppose,
take,
as
we
add
more
and
more
state
to
the
system.
Currently,
this
is
these
are
kind
of
the
numbers
I've
seen
over
time,
where
every
day
it
was
growing
by
about
0.05
percent.
F
So
like
it's
something,
but
it's
not
too
much,
and
we
have.
We
have
ways
of
improving
this,
like
there
are
optimizations
we
can
make
the
windows
specifically.
We've
also
been
discussing
again
ways
of
of
just
sort
of
super
linearly
optimizing
with
the
posts,
so
like
the
the
the
basically
the
the
big
constraint
I
want
to
post
right
now.
Actually
is
the
fact
that
to
verify
a
windowpost,
you
need
to
load
up
a
bunch
of
sector
infos
from
the
chain.
G
F
They're not verified for normal submissions, but this still effectively limits the maximum number of sectors we can load up, because we can't load up too much more in a single block, and we want a window post to fit in a single block.
F
Actually
we
are
increasing
the
the
number
of
window
posts,
we're
allowing
in
a
single
block
or
sorry
it
would
have
the
number
of
partitions
we're
allowing
for
window
posts
which
should
increase
the
or
to
decrease
the
bandwidth
used
by
these
window
posts,
but
but
like
there's
kind
of
hard
limit.
There,
however,
like
for
example,
if
we
were
to
be
able
to
not
store
the
the
the
car
mars
on
chain,
part
of
this,
this
process
would
be
making
the
windowpost
not
need
to
load
the
call
mars.
F
G
F
Basically, to submit a window post, it wouldn't need to load any state, except maybe some root CommR, an aggregate CommR for the entire partition, and just verify that. That way, we can make window posts scale with the number of partitions instead of the number of sectors.
A
Thanks. Who else has questions?
C
I have some questions with respect to the growth of the state. You're saying that currently, right now, the state tree growth is around a gig a day?
F
No, no; after the upgrade it'll be around 1.2 gigabytes per day. Currently it's significantly less. I haven't actually measured it yet, but the current state tree size is about 16 gigabytes, and that's the accumulation from the entire network over its whole history so far. So it's significantly less than a gigabyte per day.
C
Yeah, that makes sense. It's a little bit concerning if it's going to be a gig a day; I'm not sure how people are going to download these snapshots a month from now.
F
Yeah, it's concerning, but in terms of download time, you can still download the snapshot, especially if you're trying to roll your datastore. We have a couple of things here. One: if you've seen the splitstore in Lotus, it's a way to allow you to do local garbage collection. Basically, you keep the state you actually need in your hot store and everything else in your cold store.
F
And
then,
if
you
run
out
of
space,
you
just
delete
your
cold
store.
This
should
reduce
the
amount
of
times
people
actually
need
to
download
the
the
state
free
in
terms
of
actually
like
spinning
up
a
new
node
yeah
you're
gonna
need
to
download
basically
like
10x
the
amount
of
data
within
six
months
by
or
actually
more
like,
eight
months.
That's
not
like
that's
not
great,
but
it's
not
the
end
of
the
world
with
most
people's
bandwidth.
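A toy model of the splitstore idea described above; the real Lotus splitstore tracks reachability from recent tipsets and compacts on a schedule, so this only shows the shape of the hot/cold split:

```python
class SplitStore:
    """Hot store holds blocks still reachable from recent state;
    everything else migrates to the cold store, which is disposable."""

    def __init__(self):
        self.hot = {}
        self.cold = {}

    def put(self, cid, block):
        self.hot[cid] = block

    def get(self, cid):
        return self.hot.get(cid) or self.cold.get(cid)

    def compact(self, reachable):
        # Local "garbage collection": demote unreachable blocks to cold.
        for cid in list(self.hot):
            if cid not in reachable:
                self.cold[cid] = self.hot.pop(cid)

    def prune(self):
        # "If you run out of space, you just delete your cold store."
        self.cold.clear()
```

The key property is that pruning the cold store never touches state needed for validation, which is what lets nodes avoid re-downloading snapshots.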
F
The one thing we'll definitely need to do is make sure that these snapshots get distributed around the world, so that no matter where you are, you can download the snapshot without having to cross countries and firewalls. Beyond that, I think this really motivates light client work; it's already really difficult to run a Lotus node, or any other Filecoin node, locally.
F
Just because of the chain verification and storage requirements, this makes it even more important to have a light client system. I agree this is scary, but we do have planned optimizations and ways to deal with it, and we'd rather not get into a world of arbitrarily limiting sealing capacity. That was basically the decision there: full steam ahead.
B
This is Steven. This is a fantastic analysis; I think it's a very, very good simulation. Is it publicly available? Would you please share a link to it, if it is?
F
It's not currently publicly available, but I think I can add the people who are here. This is kind of a living document; it's not really ready for lots of random people to come in and ask questions, but I can share it with the people in this room.
G
Can you share it with the whole Filecoin project?
F
Yes.
A
For implementers in this group: we will definitely be including some of the results that come from this, once they're cleaned up and formalized a little bit, in release notes and the general comms around hyperdrive, because we definitely want people to be aware that this change is going to be happening. No one should be caught by surprise if hardware requirements and so on are going to go up; we'll try to be as explicit about that as possible.
B
Understood. I have another question: as you mentioned, the size of the state is growing very quickly, and you mentioned that we could have the CommR data off chain. I'm thinking about this, because every miner needs the CommR data to verify.
F
Currently, yes; that's why it's tricky. The goal here would be to make it so that the CommRs are off chain, but there's some form of witness on chain, or an aggregate witness. The idea is that when I submit a window post, instead of a verifier having to load up all the CommRs from state and use them to verify the window post...
F
Instead,
they
would
load
up
like
some
kind
of
merkle
tree
root
that
includes
all
these
comm
mars
and
then
the
the
windowpost
would
actually
prove
using
this
merkle
cheap
root
instead
of
the
individual
commars,
or
something
like
that.
That
is
just
an
idea
of
how
to
make
this
work,
but
you,
basically,
you
can
have
a
system
where
even
a
verifier
of
one
of
these
proofs
doesn't
actually
need
all
the
local
commars.
F
They
just
need,
like
some,
some
witness
of
all
of
them,
some
like
chain
of
events
showing
that,
yes,
this
witnesses
was
built
correctly,
that
kind
of
stuff.
Now
the
miner,
creating
the
proofs
will
still
need
all
the
commands.
So,
like
miners
will
need
to
store
their
own
sector
infos
on
their
own
local
disks.
This
becomes
a
bit
of
a
problem
for
like
if
you
lose
data
restoring
things,
so
this
would
need
to
be
like
well
kept
and
backed
up.
F
I think the CommRs are probably already stored redundantly inside the sector data anyway, or you can regenerate them if you need them. But yeah, hopefully that answers your question.
A
Oh yeah, I also just want to say that this is certainly something of a gnarly problem, but I think there's still room for a lot of creativity in coming up with clever solutions to address it, so folks here might want to think about it.
A
If you have a cool idea, let us know; protocol implementers are open to FIPs. The problem is hard, but the range of solutions is very wide, and the more solutions we can think about, the better. It will be a group effort over the next few months, I think.
F
Yeah, and again, there are also shorter-term solutions that involve shrinking the size of these sector infos, because they include a lot of fields, things like expected rewards and penalties and so on, and a lot of these fields can probably be linearized into their own arrays. If we rotate the entire data structure in the datastore, we could probably save half. Unfortunately, those are going to be linear factors rather than super-linear factors, and in order for this to really continue to grow properly, we'd like the on-chain state to be sublinear in the number of sectors.
A
But there is enough scope for linear improvements that I think we can stay ahead of the problem moving forward.
F
So actually, one of the nice things about the simulation is that it runs everything forward. I would like to use it to test things like expiring sectors: what happens if a miner just goes off the network? What does cron do when it has terminated everything? How does the network deal with terminations, faults, and stuff like that?
F
The
cool
thing
is,
you
can
kind
of
take
a
network
at
any
point
and
fork
it
off
perform
whatever
upgrades
you
want
and
then
just
sort
of
continue
running
it
forward.
So
this
gives
a
lot
more
information
than
we
usually
have
we're
trying
to
do
an
upgrade
yeah,
the
the
like
the
main
limitation
is
like
this
is
mostly
built
within
lotus,
because
the
goal
was
was
really
to
test
like
out
or
one
of
the
the
key
goals
to
test
account.
Lotus
would
react
to
this
upgrade,
but
it's
it's
pretty
pluggable
so
like.
F
Basically,
you
define
it
with
this
pipeline
where,
in
this
case
I
have.
The
first
thing
is
the
funding
stage
is
basically
like.
I
need
to
be
able
to
fund
myself
when
I'm
running
simulations,
so
I
like
to
have
a
stage
here
that
kind
of
just
like
takes
money
from
large
wallets
and
pulls
them
into
a
single
wallet.
So
I
can
actually
fund
messages
and
fun
miners.
The
next.
F
Just
goes
through
and
like
figures
out
any
window
posts
that
are
like
that
can
be
submitted
at
this
epoch
and
then
try
to
submit
them.
F
Then, with the remaining gas, the prove-commit stage prove-commits any pre-commits that are now ready, meaning they've passed their 150-epoch delay. Finally, the pre-commit stage takes the rest of the chain bandwidth available and submits as many pre-commits as possible. But you can change this: for example, we can add additional stages that send arbitrary messages, or a kind of chaos-monkey stage that randomly restores, removes, and faults sectors.
F
At the moment, unfortunately, when you run the simulation, you run a separate binary and say "run"; if you want to analyze it, you shut it down and run your analysis. I would like to eventually have this running as a daemon, so you can have it running like a normal Lotus daemon and make normal queries against it, but we're not there yet. It should be pretty extensible, though.
F
The ordering is a bit different, so it's not perfectly realistic. It might actually be slightly underestimating the churn, because usually you'd have pre-commits on chain for a longer period of time, although actually those wouldn't increase churn.
F
Otherwise, I'd basically need a bunch of separate agents, all pretending to be different services and all trying to reprice things, and I just wanted it to be very deterministic and easy to analyze and understand. So yes, it's not 100% accurate, but I think it is pretty close, and it does show you, if the network were behaving optimally, how much bandwidth would be used for window posts and how much for each of these other things.
A
And that is a good flag to bring up: at the end of the day, the simulation is making a lot of assumptions about how much demand there actually is for storage onboarding and so on. We will see what actually happens post-hyperdrive; we don't know what the increased onboarding rate will be, because of real-world factors.
Cool, nice, thumbs-up and clap emojis. Any other questions or points of discussion that people wanted to bring up about anything Filecoin- and hyperdrive-related?
A
Okay, sounds good. I think we can wrap up here; take any questions or thoughts async in the Slack channel. But for now, bye everyone, thanks for joining.