From YouTube: Filecoin Core Devs #1
Description
Recording for https://github.com/filecoin-project/tpm/issues/1
A
All righty, welcome everybody to the first Filecoin Core Devs meeting. It is September 11th, and we're going to be talking a little bit today about the Core Devs meeting itself.
A
Along with talking about the Filecoin improvement process — the FIP process — we're going to have opportunities for updates from each of the different Filecoin implementations that are under way right now, and then also talk a little bit about two topics: one is cross-implementation conformance testing, which a number of groups are using to help make sure our implementations are interoperable; and then also, very briefly, the upgrade process and some of the logic there.
A
So that's what's on the agenda so far for today, but feel free to add any additional things, maybe at a very high level.
A
I'm taking notes, and I will PR them to the filecoin-project/tpm repo, which has a record of all of our meetings and how to propose agenda items. And then each agenda itself is going to be an issue there, so that people can add to it ahead of time, with the relevant information in it. So that is the meeting.
A
The main purpose of this meeting is really for groups across the different implementations and core protocol developers — including folks from the security side, the crypto-econ side, and others who might be proposing FIPs and improvements to the protocol itself — to meet and discuss some of the finer aspects of those technical issues. It gives us an opportunity to share status, make decisions across the protocols, and also discuss some of the nitty-gritty of making sure we stay interoperable, because that is —
A
That is our aim here. These meetings are currently scheduled biweekly, and we'll see what the demand is in the near versus longer term; we can always adjust from there if we feel like we have more things we want to talk about — or fewer, though that would surprise me. So that is the main purpose of these meetings. So far, is there anyone else who has questions along that topic, or any requests around these meetings and how we can use them?
C
Awesome. Hey folks, I just wanted to point out that I'm going to have to leave in like 10 minutes. So if there are any topics regarding the conformance tests and so on, if we can cover them earlier, that would be great.
A
Awesome, thank you for the heads up, Raúl. That is on our agenda a little bit later, but let's move that one up to right now, so that we can get it in while you're around — and thank you for also demonstrating good protective face covering.
A
So this is just the first of many core dev meetings, and we have folks here from each of the different Filecoin implementations right now, so it's a good opportunity to highlight how the conformance tests are being used by the different implementations, what exists so far and the best ways of utilizing it, and then what's coming in the near future that teams should be utilizing to help make sure we stay interoperable.
C
Awesome, all right. So I think you all are probably part of the test vectors channel in the Filecoin Slack. We have been giving a lot of updates there, and there's been a lot of conversation around the tests.
C
Basically, the goal of the test vectors effort is to provide a repository — a corpus — of interoperable test vectors. And sorry, I'm going to have to pick up my dogs.
C
Just give me a second. Yeah — are you guys recording this?
C
This is fantastic. All right, I've just named this. Do you want me to show it to the camera? Oh no, I think they're okay. All right, so.
C
Basically, there are three kinds of vectors. All of them are interoperable; they're defined in JSON. There is a JSON schema as well, which serves as a good validation checkpoint that every single vector we commit to the repo is verified against. We have created a builder API with which we can craft these test vectors, and there are two methods to create test vectors.
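As a rough illustration of what a JSON-defined vector plus a schema check buys a downstream implementation, here is a minimal Go sketch of decoding one vector; the field names are illustrative stand-ins, not the exact schema of the test-vectors repo.

```go
// Minimal sketch of loading a conformance test vector from JSON.
// Field names are illustrative stand-ins, not the exact schema used
// by the real test-vectors repository.
package main

import (
	"encoding/json"
	"fmt"
)

type TestVector struct {
	Class          string          `json:"class"`          // e.g. "message" or "tipset"
	Preconditions  json.RawMessage `json:"preconditions"`  // state before applying
	Postconditions json.RawMessage `json:"postconditions"` // expected resulting state
}

// decodeVector parses one vector; a real harness would additionally
// validate the raw JSON against the repo's JSON schema before use.
func decodeVector(raw []byte) (TestVector, error) {
	var v TestVector
	err := json.Unmarshal(raw, &v)
	return v, err
}

func main() {
	v, err := decodeVector([]byte(`{"class":"message","preconditions":{},"postconditions":{}}`))
	if err != nil {
		panic(err)
	}
	fmt.Println(v.Class)
}
```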
C
There is the generative method, which basically consists of using the implementation. Right now there is a builder API for the Lotus implementation, which we consider the reference implementation, but we can consider ways to plug the builder API — or an equivalent of it — into other implementations, so that we can generate vectors from other implementations as well. So there is the generative approach, and there is the extractive approach. The extractive approach consists of basically: hey, we have a live network.
C
There are events of interest — messages, tipsets, and sequences of blocks — that we want to capture into a test vector. So basically: pointing a program at the JSON-RPC of a client to capture a set of events that have happened in a real network, so that we can then capture them into test vectors and use them as regression tests for the future, for every single implementation.
C
That is the second approach. So you've all seen the repo — it is maintained by a bot. We never touch the JSON test vectors manually; we just commit changes to the scripts and changes to the dependencies, and the bot comes along and sends in a PR that updates the vectors, so the vectors are never edited by hand. Currently, Forest has been a great adopter here.
C
They jumped on this the very day that we released it and made it public, so kudos on that — this has been amazing work, and I can't tell you how pumped the team is to have seen such immediate traction for the work that we're doing. I know you guys were looking for it, so it's great. You've been providing a constant stream of feedback, which is amazing and has helped us shape many things — keep doing that.
C
That's great. And at the moment I think you guys have like 666 vectors already that you are conforming with. There is one thing: as we create test vectors, we are finding circumstances where Lotus or specs-actors do not conform to our expectation of how things should behave.
C
Unfortunately, the written, documented spec is not really in a great state right now, so for some of these circumstances it can be difficult to delineate and say: hey, the theory is correct, or the practice is correct. So what we're doing with the test vectors where we have some form of conflict is to explicitly tag them as incorrect or broken.
C
So these vectors are committed as well. They can be tested every single time against the implementation, or they can be excluded from the tests that are effectively executed. If they're tested against the implementation, then the assertion needs to be flipped. So basically, what you need to do is: hey —
C
I'm going to run this test vector, and I'm going to check that the output of the test vector is not what is expressed in the test vector itself — that could be considered a pass. Or you could skip the test vector, because it's not strictly a pass. So those are a few touches on what's coming.
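The skip-or-flip rule just described can be sketched in a few lines of Go; the `Vector` type, the `Broken` flag name, and the stubbed runner are all illustrative assumptions, not the harness's real API.

```go
// Sketch of how a conformance harness might treat vectors flagged as
// incorrect/broken: either skip them, or run them with the assertion
// flipped. Types and field names are illustrative.
package main

import "fmt"

type Vector struct {
	ID     string
	Broken bool // tagged "incorrect"/"broken" in the corpus
}

// runVector reports whether the implementation's output matched the
// vector's recorded expectation (stubbed here for illustration).
func runVector(v Vector) bool { return v.ID == "msg-ok" }

// passes applies the flipped-assertion rule for broken vectors.
func passes(v Vector, skipBroken bool) (pass, skipped bool) {
	matched := runVector(v)
	if v.Broken {
		if skipBroken {
			return false, true // excluded from the run entirely
		}
		return !matched, false // broken vector: a mismatch counts as a pass
	}
	return matched, false
}

func main() {
	p, _ := passes(Vector{ID: "msg-bad", Broken: true}, false)
	fmt.Println(p) // mismatch on a broken vector counts as a pass
}
```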
C
Next: we have two classes of test vectors right now — the message-class vector and the tipset-class vector — and there's going to be a third one that we're adding. Ideally it's not going to happen today, but it's probably going to happen at the start of next week. It's called the block sequence vector, and it basically addresses syncing. What this vector expresses is a set of raw blocks arriving from the network at given points in time, which are expressed as offsets from genesis.
C
So the preconditions of this test vector are going to be a genesis — a CAR, a state tree, a chain history — and the genesis will obviously contain the genesis timestamp. From there, the applies of this test vector are going to be a sequence of tuples of timestamps with raw blocks. And for simulating things like failed signatures, failed randomness —
C
— whatever you name it: notice there are a set of interfaces that basically encapsulate functions to verify signatures, to get randomness, and so on — these are called things like the FFI verifier and syscalls. So we're going to spec out a set of pure functions for each of these: one that basically says, when you receive a signature that contains this sequence of bytes, you have to fail; if it contains this other sequence of bytes, then you have to succeed. This way we will be able to control what the actual behavior of these functions has to be within the test, without actually having to incur the cost of generating the right input for a signature verification to succeed, and things like that. So yeah, I think I've been talking for too long now.
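The "pure function" syscall mocks described above might look like the following sketch: a verifier whose pass/fail behavior is fully determined by the bytes it receives, so a vector can force failures deterministically. The sentinel byte prefixes are made up for illustration.

```go
// Sketch of a deterministic mock signature verifier: no cryptography,
// only inspection of controlled sentinel prefixes, as a test-vector
// runner would do. The prefixes are invented for this example.
package main

import (
	"bytes"
	"fmt"
)

var (
	alwaysFail = []byte{0xde, 0xad} // vector embeds this to force a failure
	alwaysPass = []byte{0xbe, 0xef} // vector embeds this to force success
)

// mockVerifySignature ignores real cryptography entirely; behavior is a
// pure function of the sentinel prefix in the signature bytes.
func mockVerifySignature(sig []byte) bool {
	switch {
	case bytes.HasPrefix(sig, alwaysPass):
		return true
	case bytes.HasPrefix(sig, alwaysFail):
		return false
	default:
		return false // unknown sentinel: treat as invalid
	}
}

func main() {
	fmt.Println(mockVerifySignature([]byte{0xbe, 0xef, 0x01}))
	fmt.Println(mockVerifySignature([]byte{0xde, 0xad, 0x01}))
}
```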
A
Anything from anyone who's using the test vectors downstream — anything you're enjoying about them so far, or requests or feedback for Raúl?
E
Just echoing what Raúl said: we're definitely very excited for the vectors. As you mentioned, it's been something we've been asking about for a while — that's why we were so quick to jump on it — and yeah, we're passing all the vectors except for the ones that are marked or flagged as invalid. So things are definitely looking good.
E
I just have one question about the extraction vectors. How are you deciding what is valid to pull or extract from? Or are you just picking stuff that's early on in the chain, so you don't have, you know, a bloated CAR file or something like that?
C
Yeah, that's an excellent question. So basically, right now —
C
— for the message that we want to extract, we get the state tree at that point, and then we prune it — we shake it — to retain only the actors that were effectively touched by that particular message. It is not perfect: some actors, like the storage power actor, the market actor, and a few others, do contain a lot of data right now.
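The pruning step just described can be sketched as a simple filter; here the state tree is modeled as a flat map from actor address to state, which is an assumption for illustration — a real implementation walks a HAMT-backed state tree.

```go
// Sketch of state pruning for extracted vectors: given a full state
// (modeled as a map from actor address to serialized state) and the
// set of actors touched by a message, keep only the touched ones.
package main

import "fmt"

func pruneState(state map[string][]byte, touched map[string]bool) map[string][]byte {
	pruned := make(map[string][]byte)
	for addr, st := range state {
		if touched[addr] {
			pruned[addr] = st // retain only actors the message exercised
		}
	}
	return pruned
}

func main() {
	state := map[string][]byte{
		"f01":    []byte("power actor state"),
		"f05":    []byte("market actor state"),
		"f01234": []byte("miner state"),
	}
	touched := map[string]bool{"f01234": true}
	fmt.Println(len(pruneState(state, touched)))
}
```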
C
The lowest we have gotten to is like a 15-megabyte vector, which is not ideal. One potential solution would be to dive into the state of those particular actors that we know contain a lot of data and explicitly prune what we don't think is relevant.
C
But doing any manipulation like that has a huge handicap, which is: if you alter, at any given point in time, the logic of specs-actors — or the logic of the equivalent in an implementation — in a way that expects data that was present in the network and in the state tree back then, but that you decided to prune because it wasn't needed with the previous version of the actor, then you basically have no way out.
C
You have basically lost the vector. One thing we could do is save traceability information, so that we identify the message and the network from which the vector was extracted and can re-extract it again. But yeah, it's a tricky question.
C
We're going to have to iterate on this a little bit more.
C
This is under active work — one of our developers is focusing on this. He's doing another thing right now, but he's going to continue working on this particular problem as of mid next week. So maybe I can connect you with him and you can discuss other ideas, if you want to get involved.
C
One thing that I wanted to say is that — probably not this week, but the week after — we're going to start a little project to create a dashboard. I don't know if you've ever seen node.green: it's this huge, wide, matrix-style dashboard that basically contains data and illustrates, for specific versions, what's passing.
C
I can't really remember off the top of my head exactly what it shows, but we're very inspired by that format — to be able to pick up conformance data from all implementations that are exercising themselves against the vectors. So at some point we're going to initiate conversations with you guys to figure out the best way to extract that metadata, to populate a real-time dashboard of client conformance.
A
Awesome, yeah — so we can test ongoing against all these conformance tests, which is also great for catching things like regressions with changes. Any other questions for Raúl? I remember he had to leave.
A
All right then, awesome. Maybe let's move on to the FIP process — since we hopped down to conformance testing, moving back up to the improvement proposal process. If anyone is watching the filecoin-project repos like a hawk, you may have noticed that FIP-0001 got merged yesterday. Big thanks to Andrew for picking up whyrusleeping's PR and pushing it forward; we now have an initiated repository for Filecoin Improvement Proposals. Andrew?
B
This was definitely a complete team effort, but it's modeled very closely off of the Ethereum improvement proposal process, so ours is very similar. I think the bulk of FIPs will end up being technical, but we're also creating other categories — for improving the FIP process itself, so there are avenues for you to do that.
B
If you think there are ways we could be running the project better, there's a category for that — and then also another category for recovery FIPs, which we will work with the community to define in the future, but which is basically a forum for discussing, raising, and receiving consensus on a very limited set of fault-recovery or change actions that you might want to take. And then, obviously, this forum, I think, is the avenue through which a lot of these FIPs will be discussed and aligned on. So I'm looking forward to everyone's feedback on proposing FIPs, commenting on them, and reaching alignment, so that we can move forward and improve the network.
A
Awesome, yeah. So for FIPs we have three subcategories at the top level — FTPs, FOPs, and FRPs: technical, organizational, and recovery — but they're all FIPs, kind of like how you have EIPs, which have ERCs as a subcategory; it's similar to that model. The repo has some guides on writing FIPs, has a template for FIPs, and then has —
A
This is the main repository where you get to add new FIPs, including FIP-0001 — and we zero-pad the numbers for lexicographic ordering reasons, so that we don't have the really annoying Ethereum problem where you go 1, 10, 100, then all of the one-thousands, and then you finally get to the 200s and then the 2000s. Anyway, this will help us stay organized as well.
A
This is the one that whyrusleeping got to push on, which lays out the main process — and we can improve this over time as well — but as you'll see, it's very heavily influenced by the EIP process, and it also defines the high-level formatting and contents for the important pieces.
A
So — and I know there are already a number of potential FIPs sitting in various groups, on the radar or in a backlog, to start proposing to the group — we should expect to see more of these start popping up.
A
You know, from any of our teams, but also from the community as they have ideas and suggestions for how we can continue making Filecoin better, which will be awesome. We'll also create a couple of forums through which we can have better discussion: of course, the issues here are a good opportunity for that, but also using discuss.filecoin.io as a place where we can have more community discussion about these things. Andrew?
B
Yeah, thanks Molly. I'll just add that we definitely view this as a community-owned process. What Jeremy, Molly, and I have put in here is meant to just be the seeding of the FIP process, if you will. You'll notice as you go through that there are components that are still to be defined, like the FRP process.
B
I think we have thoughts, but it definitely should be owned by the community — same for the FOP process. And then there's a section that contains the mission and the governing principles for FIPs, which will help guide our decisions; but again, those principles should be owned and defined by the community as a whole. So please add comments and thoughts if you're like, hey, we should really add this or tweak that — things like that.
A
Now that this is up, I think we do expect to start drafting FIPs even before, you know, mainnet launch. So we'll expect to see a backlog here of things in that slightly longer-term future that we want the community to start thinking about and commenting on, and that we want to look at from an implementation perspective across groups.
A
Cool — feel free to look at that further and comment on it as we go. I'm terrible at taking notes and talking at the same time, so I'll add notes for that later. Cool, then moving a little bit into high-level status updates from the major implementations: what the high-level status is, what you're working on right now, and what's coming in the next couple of weeks for each one. So, Omar, do you want to go first?
F
Yeah, sure. So we're from ChainSafe — my name is Omar; it's nice to see everybody. We also have Austin here on the call, who's the technical lead on the project (I'm the project manager), and Eric, who's one of the developers, is on this call too. Our main goal right now is working towards syncing and open interop, so the vast majority of the developers on the team are working towards that.
F
We focused in the last couple of weeks on implementing the conformance tests and, as you heard, they're all passing except for the ones that are marked as invalid. We've also been updating the VM and actors to match the latest changes — only the miner actor is left, so we're almost done there.
F
Originally, when we implemented our chain syncer, it was a proof of concept, because we thought we'd be using GraphSync in production. Since then we realized we'll be using ChainSync in production, so we're updating that to make it production-ready; otherwise, we're making other updates, such as the changes to the message pool.
F
All with the goal of syncing with the network and interoperating. Our secondary goal is getting to a full node, which is lower priority compared to syncing and interop. So in the near future you'll see mostly changes towards syncing and interop, but following that we're also working on integrating the other Go components: the storage miner, and the storage and retrieval markets.
F
For the storage miner we have a PR in; we're almost done integrating it and are just working out the final details. And for the storage and retrieval markets — for those of you who are unaware — there's another project within ChainSafe to implement an interface on top of go-fil-markets (what Lotus uses), so that both Forest and any other implementation can use it. That's pretty much done; there's one remaining pesky bug, but I think we're almost done.
F
Fixing that — Hannah's been super helpful in getting us there. And then the only other thing we really need in order to integrate the storage and retrieval markets is to finish implementing payment channels; so, once that's done, we add the RPC calls for the payment channel stuff.
G
Yeah, a couple of things. First off, that sounds great — sounds like y'all have made great progress; I'm very impressed. When you said you were integrating the latest actors, is there a particular version that you're using?
E
So right now we're pinned to 0.9.3, but obviously we're just going to update to whatever is coming next. The reason we've been sticking to 0.9.3, even though I think there have been some new minor versions out, is just that that's what the test vectors are pinned to right now. The plan is we'll just target whenever you guys have finalized — not the reset that happened, or was planned —
E
— it happened last night, but the one that's going to happen next week. Once that one comes in — I think Molly might have mentioned 0.9.10; I'm not sure if you guys have figured out which version — we'll just update our actors to that. But yeah, for right now 0.9.3 is the commit, and the only thing that hasn't been updated is the miner actor.
G
Okay, sounds good. Yeah, we're running 0.9.8 in production right now, and I think next week's will just be the 1.0 — I think that's the plan. But that should not be anywhere near as big a jump as the recent one that you all are making: it's a big migration, with lots of types moving around and that kind of thing, but the logic changes are much smaller, which is what's important at this stage. Yeah, sounds good.
A
Awesome. In that case, do you want to talk about Lotus?
G
Yeah, sure — what the heck does Lotus even do these days? So, Lotus is our reference implementation of Filecoin. We are currently in week three of our big Space Race competition, which is where we've got a lot of attention from miners, because there's money on the line. In terms of recent changes that you folks would care about —
G
I think it would mostly be around network upgrade logic, which will be needed if you need to sync the current network — so there's stuff in there, and the information there might need to be conveyed to you. In terms of immediate priorities: basically all of our attention is on landing the specs-actors migration from 0.9.8 to 1.0.
G
This will definitely be a complicated thing, but it's something we really want to nail down before mainnet — because we might have to do it during mainnet, and we might have to do it somewhat frantically and chaotically during mainnet in the event of a crisis. So if we can get good migration logic done beforehand, that'll be great, and I would recommend that other teams do the same as well, using syncing the testnet —
G
— our current test network — as a baseline challenge. That works very well, because then you know that you can do it. So that's far and away our biggest focus in terms of consensus-critical stuff for the next little while; our aim is to have it next week, and we're making progress towards that.
G
I don't think there's anything else going on that's of much relevance to interoperability or consensus criticality — I think that's kind of where we're at. I guess I will mention the new go-state-types repo; y'all will run into this when you do make the jump up from 0.9.3. Basically, this is something that was created while we were addressing the question of upgrading actors. Essentially, actors had a bunch of types, some of which we expect will really never change.
G
BigInt is a good example of that — and some of which have a much higher probability of changing, say the miner state. So we tried to take all of the more long-lived types and move them into a different place entirely. So there's a new go-state-types repo that y'all will discover when you move up — I think 0.9.7 introduced it. That's something to be aware of, but it should be a one-time manual migration of changing your imports and dependencies.
G
That's about it. The other thing I'll mention quickly, again within the context of upgrading: within the actors code there's now the concept of a network version. Essentially, a Lotus node — sorry, anything running the Filecoin protocol — can inform the actors code what protocol version they're at. This is through the runtime interface.
G
So essentially, actors can ask, "hey, what's the current network version?", and a node should respond. This allows for thinner fork logic to be executed: if there's some method that shouldn't be called before a certain epoch, you can easily gate that through one of these checks. More complicated migrations — or more complicated upgrades — obviously have to be handled in various places.
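The gating pattern just described can be sketched as follows; the `Runtime` interface here is a simplified stand-in for the real specs-actors runtime, and the method name and version numbers are illustrative.

```go
// Sketch of network-version gating inside an actor method: the actor
// consults the runtime for the current network version and rejects
// calls that arrive before the version that enables the method.
package main

import (
	"errors"
	"fmt"
)

type NetworkVersion int

// Runtime is the slice of the runtime interface relevant here;
// the real interface carries much more.
type Runtime interface {
	NetworkVersion() NetworkVersion
}

type stubRuntime struct{ v NetworkVersion }

func (r stubRuntime) NetworkVersion() NetworkVersion { return r.v }

// someNewMethod is only callable once the network is at version 1+.
func someNewMethod(rt Runtime) error {
	if rt.NetworkVersion() < 1 {
		return errors.New("method not available before network version 1")
	}
	return nil // normal method body would follow
}

func main() {
	fmt.Println(someNewMethod(stubRuntime{v: 0}) != nil) // gated off
	fmt.Println(someNewMethod(stubRuntime{v: 2}) == nil) // allowed
}
```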
G
Yeah, I think that's all folks need to be aware of. Obviously we're still fixing bugs and so on, but nothing that has affected consensus lately — and I know that we've got a lot of stuff pointed out by you folks. Austin, I think you in particular have raised a lot of good questions along the lines of "hey, why is Lotus doing this thing? It doesn't seem right" — so I'm sure there's more of that.
G
Please keep that coming: file issues and yell at us aggressively on Slack when you discover them, because anything touching consensus is what we really want to figure out right now, before mainnet. Yeah, that's where Lotus is at. I don't know if I missed anything — Jeremy? Magik?
H
I think that was a pretty good recap. I'd point out one other thing we're also working on, which is performance in general, on a lot of different fronts. I don't know if people saw, but I mentioned in chat that we have some code going internally that should do PreCommit 1 in two hours, which is, you know, double the performance of our current code — so we're working on getting that integrated and shipped out.
H
We also have — we didn't merge it in the 0.7 update, but we merged it right after, so it's in master — some new performance stuff making verification of SNARKs much faster using the blst library; that's like three or four times faster on a lot of critical chain things, which is very exciting. And then also on the pubsub and sync front there are continuous little things going on.
A
Awesome. And I think an area for future work as well — something that's been coming up throughout the community, now that we have a live network with a lot of people onboarding onto it —
A
— is, I don't know if it's quite chain throughput, but message selection: making sure that it's easy for people to get their WindowPoSt messages onto the chain, and the prioritization logic around that. So that's also an area we're looking into, to make sure the community is able to continue maintaining storage really effectively, in contrast to onboarding new storage — super important. Cool, anything else on Lotus, or any questions?
E
Yeah, I've just got a couple of quick questions about that. One — it wasn't mentioned in this call, but just for anyone watching on YouTube — someone mentioned the BlockSync-to-ChainExchange rename, the protocol name change. I'm wondering what your plan is for the migration.
E
Are you going to have basically both protocols registered, and then just deprecate BlockSync once, I guess, mainnet comes around? Or what's the plan with that?
H
Yeah, I think the way we usually do these — at least, we've gone through a number of these on the IPFS side of things — is you just support both for a while, and then once you detect that enough of the network supports the newer version, you just drop the old one. It costs almost nothing to support both.
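The dual-registration transition described here amounts to serving the same handler under both the old and new libp2p protocol IDs; in this sketch a plain map stands in for the libp2p host (a real node would call `host.SetStreamHandler` for each ID), and the protocol ID strings are illustrative.

```go
// Sketch of running a protocol rename transition: register one handler
// under both the legacy and the new protocol ID, so peers speaking
// either keep working until the old ID is dropped.
package main

import "fmt"

type handler func() string

// host is a stand-in for a libp2p host's stream-handler registry.
type host struct{ handlers map[string]handler }

func (h *host) setStreamHandler(proto string, fn handler) { h.handlers[proto] = fn }

func main() {
	h := &host{handlers: map[string]handler{}}
	serve := func() string { return "served chain data" }

	// Same handler, two names; illustrative IDs for the rename.
	h.setStreamHandler("/fil/sync/blk/0.0.1", serve)   // legacy "blocksync"
	h.setStreamHandler("/fil/chain/xchg/0.0.1", serve) // new "chain exchange"

	fmt.Println(len(h.handlers))
}
```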
E
Yeah, sure — of course; I was just curious to see what the plan is. One other question, about that runtime call for getting the actors version: isn't that kind of a circular reference? Because the actors code that would depend on the version would be phased out based on the version of the actors code — I don't really see how that gets you anything, because on the new version, say if you're upgrading from 0.9.6 to 0.9.7 —
E
Yeah — the runtime method that would give you the actors version.
H
Yeah, so the use case of that is to tell the actors about forks. This allows Lotus to set the fork schedule, so we can say that, you know, from zero to height thirty thousand it's network version zero, from thirty thousand to fifty thousand it's network version one, and then beyond that it's network version two — and that —
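The fork schedule just described boils down to a mapping from chain epoch to network version, which actors query via the runtime; the heights here come straight from the example above, and the function shape is a simplified illustration.

```go
// Sketch of a fork schedule: the node maps a chain epoch to a network
// version, using the example heights given above (30000 and 50000).
package main

import "fmt"

func networkVersionAt(epoch int64) int {
	switch {
	case epoch < 30000:
		return 0
	case epoch < 50000:
		return 1
	default:
		return 2
	}
}

func main() {
	fmt.Println(networkVersionAt(10000)) // before the first fork
	fmt.Println(networkVersionAt(40000)) // between the two forks
	fmt.Println(networkVersionAt(90000)) // after the second fork
}
```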
A
Awesome. Then, last but not least: Max, do you want to give an update on Fuhon?
D
Yep, thanks. My name is Maxim; I'm a team lead for the Soramitsu team implementing Fuhon, the C++ implementation of the Filecoin protocol. Our current status is: we're pushing our version to be, well, mining and syncing in a net of Lotus-based nodes at the moment.
D
On the other hand, we still need to check the validity of the interoperability of our version — and of our virtual machine, basically — against Lotus. What we're doing to get there is: we are downloading the blockchain via Lotus, exporting it, and then reapplying it on our own node.
D
So it's kind of tricky, but it works — this approach works. This is the way we're testing interoperability at the moment.
D
What we have also faced, as we are trying to push really fast to deliver a node which is able to sync and mine: our last interoperable version — again, against the whole virtual machine — was against Lotus, which was 0.4.2.
D
I don't remember the exact version of actors. And we have realized that to make it interoperable with the current Lotus version, we would have to make significant enough changes that it would take some time — so we decided to import specs-actors directly from the Go code: basically make specs-actors a library callable from C++, and use calls to the virtual machine actors from the specs-actors library.
D
And then we are going to start testing it, probably against the testnets.
D
As for payment channels and the markets — the retrieval and storage markets — they were ready for quite some time; maybe we'll need to improve them if there were changes, since we haven't touched them for, I think, a couple of weeks now. When we have the full working node, I will get back to this.
H
No, that's really cool — excited to see it. Definitely send us a ping when you've got something connected to the testnet; I want to see that.
A
Awesome, cool. Well, I had a line here to talk more about the actors upgrade — I just have a quick async update from anorth, whose time zone does not overlap well with this meeting — a little bit more on the actors upgrade and what the actors team is working on, but we've talked about it —
A
A good bit already, as part of the Lotus update. The main aim here is: make it easy to version and upgrade specs-actors, so that as we need to make changes, or eventually improvements that come out of FIPs and things like that, we're able to version and improve specs-actors much more easily. And so we're doing the upfront work now of making that simple, instead of pushing that out far into the future. One thing Alex did call out was, you know,
A
A
An opportunity to minimize that, and also minimize the chain syncing costs (because right now, you know, we do have a high-throughput, fast-growing chain), is to also have checkpointing. And so this is something that we've got some basic formats for now, but combining that with state pruning will also allow us to effectively have much smaller checkpoints that folks can rebase from over time. And that, to some extent, potentially, depending on what points in time we choose, can minimize the time frame under which other implementations need to have versioning implemented for those specs-actors. Because if we choose a time when we checkpoint and throw away the past, or archive the past (maybe a better way of saying it), then other groups would be like: great.
A
G
I know, I think we covered, well, all of the information that I can think of to share, I've shared; but I would not be surprised if y'all have more questions about it, so raise them now or bring them up on Slack.
A
G
A
So Noise in particular. Generally, TLS is the main one that we use, but switching to Noise, which is like the cross-language, interoperable one, let's say: we also have a version in JavaScript that works in the browser, and so all of the implementations are moving in that direction. We're actually working on deprecating secio right now in IPFS, and we'll have more to release once we do that and move the network off of secio entirely.
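As a sketch of what the switch looks like in go-libp2p: the `libp2p.Security` option replaces the default security transports with the ones given. Package paths here are from later go-libp2p releases and may differ from the versions in use at the time of this meeting:

```go
package main

import (
	"fmt"

	"github.com/libp2p/go-libp2p"
	"github.com/libp2p/go-libp2p/p2p/security/noise"
)

func main() {
	// Construct a host that negotiates the Noise security transport
	// instead of secio. libp2p.Security replaces the default set of
	// security transports with the ones passed in.
	h, err := libp2p.New(libp2p.Security(noise.ID, noise.New))
	if err != nil {
		panic(err)
	}
	defer h.Close()
	fmt.Println("host with Noise security:", h.ID())
}
```

Peers advertise the security protocols they support and negotiate a common one, which is what lets the network migrate off secio gradually rather than in one flag day.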
A
But we think that using generalized security transports that are well used in many places and well maintained and audited by other groups is really good, and a good best practice. And so that's part of the motivation for the move away from secio.
A
A
Yeah, so I believe, like with IPFS, pretty soon it'll be fully gone from most of the major deployments of libp2p.
E
A
A
If folks have proposals for time changes or any other sorts of things like that, feel free to make them in the repo; it's a great place to coordinate. And also, I'll start a new issue right now for the agenda for two weeks from now, and folks can add things over time if anything comes up on your radar as an opportunity to sync on. And if anything comes in through the FIP process, that'll also be the sort of thing that may end up on the agenda.
G
Cool, one quick question. Molly, this is largely for you, I suppose, but it does sound like, you know, well, these implementations aren't ready to use yet; they're getting pretty close. I was wondering what plans we have to kind of seed folks to start using these, whether they're miners or other projects that we have going on.
A
Yeah, that's great. I would be very, very happy for a good distribution. Maybe we can get the Filfox dashboard to not just have versions but also have different implementation percentages, and then we can push the mining community to make sure that we spread out across different implementations, so as to increase the security factor that comes from, you know, many different interoperable versions. I think probably v0.
A
There it's like: let's get them all up and running on testnet and broadly interoperating really nicely, and then probably the thing that would be useful from there is presenting them, highlighting them to the community, and also highlighting the variable benefits. It's like: oh, with this implementation, you get these sorts of benefits.
A
This part is faster, and we have these sorts of tests to demonstrate that. So that would probably be the next step from there: demonstrating to the community why they might want to be using which implementation in particular, and then encouraging some groups to do so. I think we would absolutely, on the few nodes that we run, be very happy to be running.
A
All three would be great; it helps us all stay in sync with each other. And I know that ChainSafe also has a mining operation, so I imagine having that would also be big: Chungus running Forest would be snazzy. And then we have a, yeah, I think we're still a little bit... I don't know the exact status.
A
It's a question for the ecosystem team on, like, seeding mining communities with, you know, especially small miners, and encouraging them through grants to, you know, continue building up. But that could be a good opportunity to put out some support for groups that adopt other implementations and help push them forward and contribute back to them as well. It's a good, a good nudge; I'll add it to the notes.
G
Sounds good, yeah. And I think whatever product efforts we have going on as well, in terms of not just random, like, dev communities and so on, but hackathons, whoever might be interested in trying out different things: that's the right experimental atmosphere for trying out brand-new implementations too, I think. So, just some.
A
Thoughts. That's great. All right, anything else? Big Chungus, TM, very important. I love Big Chungus, it's phenomenal! I missed that meme somehow, never heard of Big Chungus before, and now, oh, this video, so many videos, it's great. Cool. Well, it sounds like that's it for this week's, our first of our core dev meetings. Thank you so much, it's been really fun, and I'll see you all in two weeks.