From YouTube: Filecoin Core Devs Biweekly #25
Description
Recording for: https://github.com/filecoin-project/tpm/issues/59
For more information on Filecoin
- visit the project website: https://filecoin.io/
- or follow Filecoin on Twitter: https://twitter.com/Filecoin
Get Filecoin community news and announcements in your inbox, monthly: http://eepurl.com/gbfn1n
A
All right — thanks, Aayush. Hi everyone! If you haven't had the chance to meet yet, my name is Caitlin, and I recently joined the Filecoin Foundation as the TPM for governance. As Aayush also mentioned earlier, we're beginning a transition where I'll begin helping to facilitate these meetings going forward. So today, if there are any glitches, I appreciate your patience, and in the next week or so you may begin to see updated meeting invitations, etc. But insofar as the meetings will continue on a bi-weekly basis, there shouldn't be any procedural changes.

A
So with that, I think we can just jump into team updates. Lotus, do you want to go first?
B
Yeah, so not so much going on in Lotus. We just released v1.11.2-rc1, which, as I mentioned last time, introduces the dagstore, and we are officially launching the stable version to the community. So right now we are doing the testing process. It also introduced the latest proofs release, and I think there are some bugs in the GPU — in the GPU for C2 and WindowPoSt. We are working with Miner X to figure it out, but that work is ongoing, and hopefully it goes well. That's pretty much it.
A
Great,
I
shouldn't
even
ask
anything
to
add
all
right
great,
so
thanks
so
much
jenny,
we
can
jump
down
to
the
forest
team.
C
Hey,
hey
yeah,
so
I
mean
there's
not
too
much.
You
know
heavy
lifting
that
we're
doing
in
the
next
few
weeks
or
so
or
the
last
few
weeks.
We're
mainly
you
know,
putting
on
the
cherry
on
the
top
and
trying
to
get.
You
know
a
little
release
going
a
little
bit
soon,
so
you
know
we're
making
a
push
on
documentation
all
the
user
facing
stuff.
I
feel,
like
I
said
this
last
week
as
well
or
last
time
we
met
but
yeah,
it's
user
face
and
stuff.
C
There's a lot of stuff there. So we're doing that, and debugging a few consensus issues here and there, but working with the proofs team and with the specs-actors team we figured out some stuff, so that's really nice. So yeah! Oh, I see Dudley has his hand raised — do you have a question, Dudley?
D
I
do
indeed
thank
you
very
much,
so
I
think
I
missed
the
last
meeting
here.
How
are
you
guys
doing
with
the
audit?
I
noticed
some
people
from
least
authority,
or
was
it
super
prime
sorry,
too
many
audits,
I've
requested
access
to
the
document.
Have
they
been
reviewing
the
remediations.
E
Hey — he has his hand up. Yeah, I want to know more about this consensus issue that you're debugging — proofs, specs-actors — what's going on? I'm sure others will be interested as well.
E
So is the difference that Lotus and the other implementations are trapping that panic and moving forward? What was breaking consensus there?
C
So
the
breaking
consensus,
I
think
it
has
to
do
with
some
of
the
error
handling
that
goes
on
when,
when,
like
the
previous
crosses
ffi
boundaries
with
so
in
between,
let
go
and
and
the
rest
implementation.
So
there's
like
some
differences
in
how
we
handle
the
the
battery
seals
assist
calls
yeah
sounds
good
thanks.
A
All right then — Fuhon, you're up.
F
We have also been able to resolve the issue with a Lotus miner plus a Fuhon node. The issue was that a Fuhon node was not able to properly propagate a block generated by the miner; we have discovered the source of the issue and resolved it. Currently we are in the process of testing deals — storage deals with real deals, and so on. We are also quite actively working on testing on Interopnet as well as on local machines. This is something we want to finish as soon as possible.
F
Our
internal
date
for
the
release
is
currently
established
for
september
10th,
but
it
might
be
a
little
bit
later
if
the
scope
of
the
audit
will
change
or
something
seriously,
we
will
be
found.
So
that's
how
our
plan
and
the
stops.
A
Sounds
great
and
we
can
follow
up
with
you
closer
to
the
september
10th
deadline,
also
to
get
a
more
accurate
picture
of
when
the
release
is
likely
to
actually
be
scheduled
so
that
we
can
update
the
documentation
as
well.
A
It
is
yeah
it's
coming
all
right!
Wonderful,
thank
you
and
now
on
to
the
venus
team.
G
Currently we have received more than 50 applications. We now have seven clients that have been connected to the network, and they are growing their power; another six are works in progress. We are still communicating with more and more users to bring them into the network in the next weeks. This is one part we're working on.
G
Regarding
the
development
we
are
working
on
two
parts,
one
is
about
the
storage
market,
implementation
implementation.
We
yeah,
we
get
many
feedbacks
actually
from
the
community
and
to
and
ask
us
to
support
the
storage
market
as
soon
as
possible.
So
we
invested
more
resources
into
this.
Now
we
plan
to
have
this
module
be
available
in
three
or
yeah
four
weeks
and
yeah.
We
hope
that
will
be
working
well,
yeah.
G
You
know
that
because
the
architecture
of
the
vedas
is
not
the
same
as
lotus,
so
the
current
notice
market
module
cannot
work
with
the
yeah
with
a
software,
so
we'll
design
a
new
one
and-
and
we
hope
we
can
consider
to
leverage
the
storage
pool
yeah
for
the
storage
market
and
yeah,
for
example,
to
because
in
one
storage
pool
we
have
a
multiple
miners
right,
so
we
may
in
the
future
we
yeah.
G
We
could
have
the
poor
to
get
the
yeah
deal
and
then
disability
to
yeah,
twist
manners
and
perhaps
randomly
or
based
on
the
reputation
and
yeah
that
is
under
consideration.
G
We
have
designed
the
mechanism
and
we're
working
on
this
reward.
Yeah
sharing,
algorithm
and
also
we're
handling
open
blocks
and
other
issues
to
identify
this
kind
of
issues
is
because
of
the
platform
or
yeah
because
of
the
individual
minors,
so
that
we
could
yeah
share
reward
fairly
yeah,
that
is
in
progress.
We
yeah
we
and
also,
we
also
expect
that
could
be
available
yeah
about
one
month.
G
Okay,
I
think
that's
all
any
questions.
B
Yeah,
I
think
I
have
two
questions.
First
of
all,
like
great
news
like
I
know,
the
community
has
been
asking
for
market
featuring
venus,
so
I'm
super
looking
forward
to
that.
I
just
have
like
two
questions
regarding
it.
The
first
one
is
to
the
reward
mechanism,
I'm
wondering
if
you
can
share
more
about
the
design
with
us,
like
the
other
implementers
or
like
how
you
are
going
to
do
the
reward
distribution
a
little
bit
more
and
also
for
the
market.
It's
great.
B
I,
like
you,
mentioned
that
you're
going
to
design
a
new
field
market
instead
of
using
the
module
that
you're
using
so
like
I'm
very
interested
in
how
you
are
going
to
integrate
that
in
the
storage
pool
mechanism
like
venus
have,
especially
if
your
node
will
be
distributing
deals
to
the
miners.
I'm
wondering
is
the
venus
pool
going
to
maintain
a
reputation
system
among
the
miners
like
to
choose
the
client
to
choose
the
storage
provider
for
the
client
like?
What's
what's
the
design
so
far,.
G
The design is not revealed right now; we'll release it with the implementation very soon. All of it will absolutely be open-sourced, and we'll put it on GitHub.
G
Because
there
are
still
some
under
discussion
so
yeah,
I
will
not
do
this
right
now.
I
will
discuss
with
this
with
the
team
and
say
yeah,
if,
if
we
can
release
that
in
one
or
two
weeks,
okay,
that
yeah-
that
is
for
the
first
one
and
the
second
one
about
the
marketing
module,
I
would
think,
are
the
first.
We
need
will
not
implement
and
all
the
things
I
mentioned,
including
to
leverage
the
poor
and
to
disbuild
the
deals
to
different
manners.
G
We
will
have
a
very
first
yeah
same
conversion.
G
People
could
make
a
deal
with
the
miner
and
specified
actually-
and
you
know
so-
the
pool
will
not
manage
all
the
miners
and
to
help
your
clients
to
get
to
issue
the
deals,
and
that
will
be
the
next
step
and
yeah.
It's
not
the
the
current
one
will
be
available
in
september,
so
yeah
for
that
one.
We
just
have
a
rough
idea.
We
need
to
have
more
discussion
and
to
have
a
yeah
design
in
more
detail.
G
So
yeah,
if
you
have,
if
anyone
here,
have
some
ideas,
yeah
you're,
very
welcome
to
to
to
to
data
slow,
and
we
will
yeah
well
incorporate
that
into
our
design,
yeah
but
anyway.
Yes,
that
is
our
next
step,
not
not
at
this
version.
E
Yeah — honestly, I don't have many good ideas right now, because it is a fairly tricky thing, which is why I think many of us are very interested in seeing what the initial design is. Then we'll be happy to provide feedback, or try to see if it makes sense, if it's fair, if it works — that kind of thing.
G
We
may
okay,
we
may
submit
an
issue
in
github
and
have
some
swap
ideas
and
try
to
get
some
feedback
and
information
from
the
community.
Okay
sounds
great.
A
Great
any
other
questions
or
immediate
feedback
for
us.
The
venus
team.
D
Hello,
so
I
don't
have
too
many
updates.
I
wanted
to
thank
steve
from
protocol
labs
for
jumping
in
quickly
to
help
assess
a
bug,
a
potential
blood
bounty
the
other
day
I
also
wanted
to
mention
that
auditors
are
good.
Quality
orders
are
quite
booked
up
we're
looking
at
six
months
on
average,
we're
looking
at
eight
months
for
trails,
bits
we're
looking
at
need
to
pull
in
favors
to
even
get
consensus
to
speak
with
us
I'll
reach
out
to
you
guys,
but
otherwise,
not
too
much.
A
For
my
end,
a
very
quick
update
as
well
as
we
transition
the
foundation
to
facilitate
this
meeting.
Once
again,
you
may
begin
to
see
some
updated
meeting
invitations
in
the
future.
A
So
please
accept,
if
you
plan
on
continuing
to
join
this
meeting
on
a
bi-weekly
basis,
and
the
other
thing
is
that,
as
we
look
at
our
broader
governance
policies
and
the
way
that
we
manage
these
workflows,
we
are
very
focused
on
trying
to
find
ways
to
optimize
sort
of
this
communication
and
workflow
loop
between
different
implementations
and
any
potential
technical
fits
that
may
come
down
the
pipeline.
A
So
there
is
nothing
at
the
moment
that
will
need
to
be
worked
into
or
append
any
work
that
you're
currently
doing,
but
should
there
be
large
enhancements
in
the
future
which
we
expect
just
as
the
course
of
things
develops,
we
are
again
trying
to
think
about
how
best
to
communicate
this
with
everyone,
and
there
may
be
opportunities
in
the
future
for
this
group
in
particular
to
weigh
in
and
sort
of
share
their
thoughts
about
what
would
be
most
productive
for
them
and
their
team.
A
All
right,
so
it
looks
like
there's
no
outstanding
questions
but
interrupt
if
there
are
also
boris
wanted
to
join
us,
but
was
unable
to
today.
Is
there
anyone
else
from
fishing
on
the
line
who
wanted
to
introduce
themselves
real,
quick.
H
Hi,
as
you
said,
boris
wanted
to
be
here,
but
he's
actually
hosting
danny
o'brien
on
our
thursday
hall.
So
hi,
I'm
brooke,
I'm
the
ctoc
vision,
we're
doing
a
few
things
in
the
file
coin.
Space
right
now
mainly
around
accounts,
but
we're
also
starting
to
poke
around
the
vm
discussions
as
well.
Boris
not
come
out
of
the
ethereum
core
dev
space.
A
I also know, as an aside, that Boris and I are both very excited that Fission has been able to create a FIPs website for us — it is a much nicer display than GitHub for actually reading through implementation specs. I don't want to steal his thunder; I think he'll be interested in quickly presenting it in the coming weeks, but that's in progress as well. Cool — all right, great. So if there are no questions for Brooklyn, then we can jump into our final presentation.
A
For
today
I
think
kubo
is
going
to
take
over
and
present
on
the
newest
snap
deals
proposal
which
I'm
quite
excited
about.
I
I will quickly share my screen with just the topic — there is also a draft open. So, SnapDeals. Two weeks ago — I think it was two weeks ago — we presented possible versions of lightweight sector update protocols. Back then we knew 100 percent that a three-message protocol could work, we were pretty sure that the two-message protocol works, and we were investigating the one-message protocol.
I
Now
we
are
security
wise.
We
are
very
convinced
that
one
message
protocol
works,
which
would
allow
us
to
perform
deal
updates
without
any
interaction
with
the
chain.
Apart
from
one
message,
publishing
the
the
publishing
these
deal
updates
for
a
sector
which
which,
on
its
own
is,
is
amazing,
and
it
was
a
lot
of
hard
work
from
from
lucca
nicola
rosario
on
our
side
and
so
on.
I
So
the
protocol
would
is
lightweight
to
the
encoding
process
is
very
lightweight
the
verification
chain.
We
managed
to
also
make
it
very,
very
lightweight.
I
Unfortunately,
in
in
recent
days
we
had
a
bit
of
a
bit
of
a
snag
with
with
the
con
circuit
construction,
where
data
deal
data
commitment
calculation.
Today
is
done
with
shah
hashes,
where
most
of
data
commitments
in
the
circuits
are
done
with
poseidon.
The
the
the
issue
with
sha
is
it's
much
more
expensive
in
the
circuit,
so
currently
we're
looking
at
ways
to
make
the
sharp
data
commitment
work
in
the
circuit
on
the
scale
we
need
for
for
snapdeals.
I
There
are
ways
to
work
work
around
it.
It
will,
of
course,
increase
the
proofing
costs
somewhat
somewhat
significantly,
but
we
are
optimistic
to
say
the
list
and
and
if
this
it
doesn't
work,
we
have
other
options
also
for
it,
which
would
include,
for
example,
changing
the
way
we
do.
Data
deal
data
commitments-
that's
the
high
level
updates
update
on
that
it's
bit
of
happiness,
better
of
more
sour
news,
but
yeah.
E
No,
mostly,
we
want
to
clarify
in
the
call
yeah
it
was
just
two
weeks
ago,
the
status
was
like
three
three
message
was
517.
Two
message
was
a
maybe
but
optimistic,
and
one
message
was
very
hopeful
is
how
do
we
feel
about
two
messages
right
now.
I
So
two
message
can
like
we
are
100
so
to
message:
it's
a
different
two
message
vertical.
Then
it
would
be
too
different
to
message
protocol
than
it
was
two
weeks
ago,
but
we
are
sure
it
can
work
and
the
circuit
would
be
comparable
with
one
or
two
or
two
window
post
circuits,
like
they're,
improving
proving
costs
for
for
miners
performing
the
the
updates.
The
cost
goes
significantly
a
bit
more
like
goes
up
significantly
for
for
one
message:
protocol
yeah,
that's
what
you
were
asking.
B
I'm
wondering
like
the
new
proof,
design
how
much
requirements
would
get
put
on
the
storage
provider's
hardware
or
they
can
just
use
the
existing
one.
They
have
right
now.
I
To
generate
all
those
proofs
and
challenges
so
the
proving
step
it's
on,
like,
apart
from
the
proof,
proof
itself,
it's
very,
very
cheap.
On
the
order
of
few
minutes
of
cpu
computation
on
the
on
the
and
then
there's
computing,
the
proof
itself,
which
is
equivalent
like
it's
the
same
type
of
computation
that
that's
done
for
window
post
or
power
up
right
of
window
post
being
the
benchmark
of
like
the
proof
size,
because
it's
exactly
one
circuit
for
us.
E
Yeah,
so
I
think
we're
in
an
interesting
place
here
where
you
know
we,
so
it's
it
sounds
like.
We've
got
two
good
designs:
basically
a
non-interactive
one
message
version
and
an
interactive
two
message:
version
that
have
you
know
different
pros
and
cons.
We
are
saying
security,
wise,
we're
happy
with
both
right
so
yeah.
So
it's
almost
like
a.
E
How
do
we
decide,
which
is
better,
because
I
don't
have
a
good
sense
of
that,
and
maybe
this
comes
down
to
storage
providers
and
we
probably
we
might
want
a
survey
to
see
kind
of
like
which
works
better
for
them,
like
is,
is
two
messages,
a
deal
breaker
for
them,
or
is
that
something
they
really
really
want
to
avoid?
Or
is
that
better
than
with
the
with
you
know,
some
time
just
spend
waiting
as
opposed
to
more
time
spent
on
computation
and
using
those
resources?
I
And
it's
also
like
also
market
fit
discussion
right
because
one
of
them
might
may
give
better
like
the
customers
generally
want
faster
deals.
Yeah
jennifer.
I
It's different, but mostly it's about the proving overhead — so proving computation on the miner side.

B
I see — yeah, thanks.
E
Okay, yeah. What do people think instinctively? I do think this comes down more to storage providers — and obviously some of us have insight here — but instinctively, which feels better: spending a bit longer waiting and having to send two messages, but for the most part not dedicating massive resources; versus only one message, but you're computing more? I don't know which is better. I'm curious to know what people think.
C
I'm
I'm
curious
about
like
verification
time
for
like
people
who
are
not
miners,
they're
just
running
nodes.
Have
you
guys
done
any
benchmarks
on
the
effects
of
two
messages
versus
the
one
message
protocol
so.
I
The
proof
proof
is
so
for
too
much
protocol.
We
would
have
to
verify
multiple
partitions
of
the
proof,
but
the
proof
itself
is
verification
times
wise
accepts
so
proof.
Verification
start
with
verifications
generally
scales
with
the
number
of
inputs
you
have
into
the
circuit
and
currently
for
a
big
proof
like
for
powerapp
or
windowpost.
We
are
accepting
like
more
than
4000
inputs
into
each
circuit
for
verification.
I
In
this
case,
we
would
be
looking
probably
at
less
than
10
inputs,
which
makes
those
proofs
much
cheaper
to
verify
so
verification
overhead.
There
isn't
big
difference
like
significant
difference
between
the
two,
I
would
say
the
the
overhead
of
the
train,
bookkeeping
and
train
will
be
hired
like
or
I
would
guess,
order
than
a
magnitude
higher
than
done.
Verification
of
the
proof.
B
Yeah,
I
think
another
thing
we
should
consider
is
like
it
was
too
much
such
approach.
It's
like
one
message
like
overheads
and
one
message
approach
right
so
like
miners
have
to,
but
if
the
let's
say,
if
the
network
is
congested
for
some
reasons
and
minors,
storage
providers
have
to
worry
about
like
humanity,
to
manage
two
messages
in
a
way,
and
you
know
in
the
code,
I
would
assume
we
have
to
handle
different
situations
as
well
as
like.
I
Yeah — the biggest difference is the proving overhead. Just rough numbers — we haven't evaluated all possible circuit optimizations yet — roughly, we are looking at the difference of one WindowPoSt proving time, so about five minutes of GPU compute, versus 30 to 40 minutes of GPU compute for the one-message protocol.
B
I probably should go read the proposal myself — sorry, I'm so bad — but one more question: are we considering being able to upgrade CC sectors with deals with aggregation, that sort of thing? If we do the two-message approach, there's a wait time, which means the hardware might be idle. At the same time, can the storage provider choose to upgrade other CC sectors and somehow aggregate these several messages together — is that under consideration?
I
If,
if
we
went
with
the
two
message
protocol,
we
would
probably
allow
for
like
how
we
have
currently
batch
become
it,
so
we
would
allow
for
something
like
batch
pre-commit
and
then
either
batch
or
aggregate.
It
update
itself
after
that,
and
that's
also
true
bachelor
or
aggregated
upgrade
itself
for
one
message:
protocol
too
yeah
like
there's
the
miner
can
just
start
start
another
upgrade
concurrently.
They
don't
have
to
wait
for
one
finish
so.
I
I'll think about it, because we are still investigating some solutions. We have these two options — the one-message versus the two-message protocol, not changing anything else — and we also have the possible solution of changing how we do data commitments, the piece CID, or CommP, calculations, which requires bigger changes on the user side of things and in the deal-making protocol itself, off-chain.

I
So we are still considering our options there, but yeah, I'll probably contact you at the start of next week, when I know more.
A
Sure
yeah
you're
welcome
to
come
to
one
of
the
working
groups
or,
if
you'd
like,
if
you
actually
I'm
looking
at
the
the
discussion
right
now,
you
probably
don't
even
need
to
add
many
more
details.
I've
already
circulated
this
amongst
the
minor
working
groups
or
storage
provider
working
groups,
but
if
you
want
me
to
bring
attention
to
anything
in
particular,
we
could
sort
of
gauge
some
preliminary
feedback
that
way
too
so.
I
Awesome
yeah
I'll
update
it
when,
when
we
know
more
because
yeah
we
it's
the
the
issue
of
of
proving
overhead
is,
is
still
quite
fresh
and
yeah,
we'll
start
still
discovering
stuff
around
it.
A
Yeah,
that
makes
sense.
I
initially
thought
that
the
discussion
post
was
quite
short
and
something
maybe
omit
it,
but
it
looks
like
it's
just
summarized
quite
nicely,
so
I
think
we're
good
for
now.
E
Thank
you.
Thank
you
all
quick
question.
Katelyn
actually
do
we
have
similar
storage
client
working
groups.
I
ask
because
a
lot
of
this
decision
making
does
come
down
a
lot
of
how
this
decision
should
be
made
comes
down
to
what
clients
want
like.
Is
there
a
burning
need
to
get
deals
active
faster
so
to
reduce
the
window
between?
When
I
decide,
I
want
to
store
some
data
to
when
it
goes,
live
or
you
know
or
yeah
what
the
size
of
the
deals
involved
will
be.
A
Yeah,
that's
a
great
question
and
as
far
as
I'm
aware,
there
is
no
like
standing
work
group
for
those
types
of
things.
My
interpretation
of
this
is
that
a
lot
of
the
larger
storage
providers
will
work
directly
with
like
their
identified
clients,
but
obviously
they
keep
those
conversations
private.
A
It
might
actually
be
worth
looking
in
to
seeing
if
they
would
be
willing
to
bring
even
like
a
single,
large-scale
client
that
they'd
want
to
work
with,
but
yeah
we
can
follow
up
offline.
If
you
want
to
do
this,
I
think
we
can
put
it
together,
but
there
are
some
challenges
yeah,
but
also.
B
Like
we
should
consider,
you
know
when
we
say
like
a
client
working
group,
we
should
like
come
in
brokers
as
well
like
webster,
storage,
estuary
and
other
applications
too,
because
eventually
they
might
be
the
main
platform,
like
the
deals
coming
from
right.
Yet
in
a
way.
So
that
would
be
an
interesting
conversation.
A
Yeah,
I
think
so
also
we
could
create
like
a
storage
provider.
Client
like
technical
working
group
or
summit,
and
if
we
do
we'd,
probably
want
to
aggregate
as
many
potential
proposals
into
like
yeah
session.
As
especially.
B
Since
so
many
of
them
touch
on
similar
topics,
so
there
is
a
channel
set
up
in
the
file
coin,
slack
called
hashtag
onboarding,
which
we
will
be
adding
a
lot
of
like
brokers
and
clients
into
that,
so
that
they
can
like
on
board.
So
storage
provider
can
help
on
boarding
their
like
data
and
deals
into
the
network.
So
I
think
we,
it
would
be
great
to
start
a
conversation
there
as
well.
A
Sure,
and
it
looks
like
this
channel
only
has
well
now
12
numbers,
but
just
a
notice
to
everyone
else
in
this.
Call
that,
if
you're
curious
and
following
up
on
this,
that
you
might
want
to
join
as
well,
because
you're,
probably
not
already
in
it.
A
All right — well, thanks again, Kuba. Since we do have about 10 or so extra minutes left on this call, I think it would also be great — not to put Brooklyn on the spot; I know you did want to lurk in this first meeting — but since we do have the time, if you'd like to discuss EVM integrations, the floor is yours.
H
I mean, yeah, sure. Most of my thoughts, or at least the high-level thoughts, were in that post. I guess the big major points are: the EVM does seem to be the de facto standard, increasingly, despite any flaws. I still think there are possible ways of doing interoperability, or we might even go to bytecode transpilation. In the original post, Juan had mentioned potentially having multiple VMs over time. I think that's a very scary idea, especially when you have so much security involved — and scoping down —
H
The
semantics
of
a
vm
to
the
exact
needs
of
filecoin
are
potentially
makes
the
security
problems
a
lot
more
tractable.
There's
a
lot
of
things
that
filecoin
could
do
that.
A
lot
of
other
chains
can't
do
like
memoizing
or
or
hotspot
optimizing
everybody
else's
data
or
keeping
the
results
of
computation
so
that
you
never
have
to
run
them
again.
You
can
just
essentially
do
a
large
memorization
table,
but
you
know
at
web
scale
that
are
interesting,
but
that
don't
necessarily
become
just
directly
evm
compatible
out
of
the
box.
H
If
the
evm
is
the
direction
that
filecoin
wants
to
go,
there's
a
lot
of
work
there
to
be
done
around
exactly
how
to
do
the
integration.
Should
there
be
any
additional
op
codes
or
pre-compiles?
H
You know, battle-hardened, blessed contracts that everybody calls into — and make those composable, rather than having people re-deploy potentially broken versions of the same contracts over and over again. That works very well with the actor-model view that is in Filecoin right now. But I realize that's pretty broad. It's mainly a question of where this community wants to go with it from here, to narrow it down.
J
I
guess
to
give
you
some
of
my
perspective
here,
like
at
the
moment,
I'm
currently
leaning
towards
wasm,
with
some
form
of
evm
compatibility,
either
translation
or
or
absolutely
compiling,
but
we
haven't
made
any
final
decisions
here.
The
current
plan
is
basically
implement.
Something
then
compile
our
current
actors
to
that.
J
But
some
of
the
interesting
tidbits
we
have
is
like
we
use
ipld
instead
of
there's
a
flat
memory.
This
gives
a
lot
of
cool
features,
but
it
also
introduces
complexity,
so
the
biggest
hurdle
I'm
hitting
right
now
is
like
okay.
I
need
to
be
able
to
like
sort
of
decode
from
this
gag
and
then
write
data
back
and
make
sure
that,
like
all
the
data
is
still
reachable
or
is
it
able
to
do,
I
need
to
still
reach.
J
On
the
other
hand,
it's
really
cool
because
it
means
like
I
can
at
least
I
leave
and
send
dags
from
one
actor
to
another,
which
means
I
can
do
very
efficient,
like
data
passing
without
actually
having
to
copy
all
of
the
data
which
could
open
up
the
room
for
things
like
like
sandbox
calling
or
if
I
call
some
other
actor
or
some
other
static
method
with,
like
basically
all
of
my
state,
and
I
can
do
whatever
it
wants
with
all
my
state,
but
it
can't
actually
write
back
to
it
and
it
can
give
me
like
a
transformed
version
that
can
then
like
work
over
and
make
sure
it's
correct.
J
So
yeah,
that's
my
current
picking.
There
is
a
channel
in
you
know
the
bitcoin
slack
called
fpm
and
that's
where
I've
been
posting,
some
updates
and
some
docs.
I've
actually
just
now
made
the
dock
public,
but
I
think
the
link
there
should
yeah
the
link
that
should
still
work.
So,
if
you
look
like
you
should
get
access
to
the
document
or
the
current
design,
like
brainstorming
blurb,
that
I
have
right
now.
So
it's
it's
not
very
well
organized!
I'm
sorry.
C
But
I
really
I
have
a
question
actually
with
the
the
kind
of
the
paradigm
that
we
are
thinking
of
using
or
not
but
like
basically
like,
we
have
a
system,
that's
working.
C
We
have
like
all
these
actors
and
stuff
and
we
have
like
vm,
so
I
guess
like
kind
of
maybe
this
is
a
little
bit
far
down
the
line,
but
like
kind
of
what
is
the
way
that
we're
gonna
implant,
this
vm
into
the
system
like
is
the
vm
gonna,
be
like
a
separate
execution
layer
as
in,
like
you
know,
another
actor
like
a
vm
actor
or
something
like
that.
That
is
like
sort
of
sandbox
but
sort
of
not
sandbox,
but
has
an
interface
to
the
existing
vm.
J
Both of these — I think a Google Doc is also linked here that goes over the options. Basically, that's the multi-VM versus the single-VM world. We've talked about a world where we have multiple VMs: okay, we have the native or built-in VM and actors, and then we have an EVM, and an FVM, or Wasm, or whatever, all floating around. That's where we started, but there are concerns there.
J
But
at
the
moment
we're
looking
more
towards
it,
like
okay,
just
like
take
exist
actually
like
not
even
implement
these
actors,
but
literally
take
existing
actors,
for
example
the
rest
ones,
and
then
just
compile
them
into
awesome
like
in
the
ideal
world
that
would
be
possible
and
those
would
be
viable
actors.
We
would
need
to
modify
the
run
time,
but
I
know
like
in
go
at
least
like
the
runtime
is
fairly
well
sort
of
separated
out
from
the
actors,
and
I
believe,
my
from
what
I've
seen
in
invest
as
well
yeah.
J
They
appear
to
be
pretty
well
separated
as
well,
so
it
should
be
possible
to
kind
of
like
say.
Okay
here
is
the
new
like
wasm,
run
time,
shove
that
in
take
the
actors
with
very
little
modification
rest,
compile
them
together
and
then
just
ship
that
as
a
set
of
actors.
So
that's
kind
of
where
I'd
like
to
be
at
the
moment.
That
makes
sense.
J
So
it's
a
really
wonderful,
beautiful
deal.
It's
like.
I
read
the
specs
and,
like
it
yeah
it's
it's.
Basically,
it
feels
like
it's
really
designed
for
cases
like
this
actually
but,
like
you
can
see
where
how
like
the
control
flow
is
tight,
you
even
have
tight
loops,
so
it's
very
easy
to
analyze
code
and
to
to
actually
like
statically,
verify,
effector
properties
and
optimize.
J
It
has
wide
supports
there,
so
many
vms
that
already
exist,
so
people
kind
of
pull
off
whatever
vm
they
happen
to
like.
That
is
really
one
of
our
goals
here,
like
we're
not
like.
I
at
least
do
not
want
to
modify
the
vm
in
any
way.
J
I
want
people
to
just
take
it
off
the
shelf
vm
if
I
need
to
do
any
like
any
instrumentation
that
will
be
done
at
the
bike
yield
level
and
it's
kind
of
leaving
that
yeah
as
as
brooklyn
was
noting
the
evm
has
better
crypto,
actually
because
the
the
wide
numbers
this
is
unfortunate,
but
I
think
we
can
probably
work
around
this
by
basically
providing
runtime
libraries
for
a
lot
of
use.
J
Cases
because,
like
most
of
these
cases,
will
just
be
limited
in
one
time
so,
like
I
would
come
in
personally
internet
movement
as
well
like
I
would
love
to
have
something
like
built-in
snark
library.
That
would
be
really
good
yeah.
So that really is my leaning: the fact that a lot of people are looking at the EVM, that there are a lot of implementations, and that it was sort of well designed from the ground up based on many, many years of VM research.
H
About
those
runtime
libraries,
were
you
thinking
about
plugging
them
in
essentially,
as
precompiled
addresses
or
like?
How
would
you
actually
get
that
out
of
not
just
compile
down
to
wasm
but
to
like
maybe
get
more
native
performance.
J
I
I
don't
know
it's
like
I.
I
do
expect
there
to
be
a
lot
of
of
runtime
libraries
like
ian
bosom
like
actually
I
love
to
use
a
lot
of
imports
for
this,
because
everything
is
conscious
addressed
with
cids.
You
can
kind
of
just
like
import
a
cid
and
as
long
as
in
the
system,
your
logic
like
you're,
allowed
to
upload
your
actor
so
also
like
just
a
lot
of
scattered
static
importance,
but
that's
not
necessarily
gonna
fix
this
one
like
one
way
of
dealing
with
this
is
like
we
could
say.
J
Well,
I
see
you're
trying
to
import
this
awesome
thing.
We
actually
have
this
faster
thing.
I'm
just
gonna
use
the
faster
thing.
The
problem
here
is
gas
because,
like
like,
if
you're
doing
that
like
yes,
I
guess
you
could
just
make
the
changes
run
magically
faster,
but
you
would
still
end
up
charging
more
gas
if
you're,
just
charging,
not
you
correct
about
gas
yeah.
So
the
other
way
is
to
say
well,
like
at
some
point.
J
Sort
of
the
network
agrees
that
calling
into
this
into
this
sort
of
blessed
library
should
cost
as
much
it's
less
and
then
like.
If
you
decide
not
to
want
that,
okay,
you
could
use
the
wasan
implementation.
You
might
like
your
your
validation
times
might
be
slower,
but
that's
your
choice.
I
think
that's
probably
the
best
trade-off
there,
but
I
don't
know
like
I
I
do
like
for
for
some
of
these.
J
— they might end up just being specialized syscalls. For example, we might need special syscalls for specific types of hashing and things like that. I'd like to avoid that as much as possible, but it might be necessary for some performance — for example, to avoid having to make a call from one Wasm module into the runtime and into another module.
H
Yeah, it's super tricky, especially with wasm, because, as you mentioned, with the modules you have to load and unload external data, which is expensive. Ethereum has taken this approach many times, exactly as you said: well, when I call into this address, I'm actually going to hop out of the VM, execute this code, and that's going to cost this much gas. So it becomes essentially an extremely inelegant external opcode, but it works, because then you don't have to modify the VM at all.
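The Ethereum pattern described here can be sketched as a dispatch table keyed by well-known addresses: hitting one of them hops out of the VM into native code with its own gas formula. The address and cost formula below follow the commonly cited SHA-256 precompile schedule, but treat the details as illustrative rather than authoritative.

```python
# Sketch of EVM-style precompiles: calls to a few well-known addresses run
# native code with a fixed gas formula instead of executing VM bytecode.
import hashlib

PRECOMPILES = {
    # address 0x02: SHA-256, priced as base cost plus a per-32-byte-word cost
    0x02: (lambda data: hashlib.sha256(data).digest(),
           lambda data: 60 + 12 * ((len(data) + 31) // 32)),
}

def call(address, data):
    if address in PRECOMPILES:
        fn, cost = PRECOMPILES[address]
        return fn(data), cost(data)   # native execution, statically known price
    raise NotImplementedError("would dispatch to VM bytecode here")

out, gas = call(0x02, b"abc")
assert out == hashlib.sha256(b"abc").digest()
assert gas == 72
```

Because the cost is a closed-form function of the input, the VM itself never needs to meter the native code, which is exactly why this works without modifying the VM.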
H
Doing that directly in the ISA makes sense, but if you don't want to modify the VM, then that's kind of the trade-off, right. Yeah, there are a lot of cases where you'll be able to do automatic gas calculation, which is essentially just hot-spot optimizing your cost dynamics. You know that calling this contract, or this code, will always take this much gas, so you can just say: well, we don't actually have to calculate that at run time.
H
We can just do that up front and not have to meter each individual thing. Having to write that code then means you have to do it in Solidity, in this case, for these precompiles. Also, when compiling down, if you have gas calculation you're going to have to inject the gas charge per instruction, so it's not just a direct compile, and you won't be able to use off-the-shelf tooling, which is also kind of frustrating, but you still get it.
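The per-instruction gas injection mentioned above, reduced to a toy metering pass over a list of instructions. Instruction names and the `charge_gas` pseudo-op are made up; real metering rewrites the wasm binary itself.

```python
# Toy illustration of per-instruction gas injection: a metering pass rewrites
# an instruction list so a `charge_gas` step precedes every original
# instruction. Real tools operate on the wasm binary, not Python tuples.

def inject_gas(instrs, price=1):
    out = []
    for op in instrs:
        out.append(("charge_gas", price))  # inserted by the metering pass
        out.append(op)
    return out

metered = inject_gas([("i32.const", 1), ("i32.const", 2), ("i32.add",)])
assert metered[0] == ("charge_gas", 1)
assert len(metered) == 6
```

This is also why standard toolchains get awkward: the code that runs on chain is not the code the compiler emitted, so debuggers and disassemblers see the instrumented module.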
H
They didn't get super far, mainly because it doesn't fit their use case as well as the EVM does, but with wasm for eth2, maybe there will be more tooling, yeah. All that is to say this is a tricky problem. Doing precompiles at well-known addresses, and maybe giving them names in the compiler so people don't have to remember and paste hex all the time, can be a helpful way of doing it.
E
Yeah, now the other kind of complicated thing over here, obviously in Filecoin, is the idea of system contracts, right, where consensus is going to require calling into some of these contracts, these actors, to figure out whether something is valid, whether block production is valid, and so on. Which is fine; it's just kind of a weird handshake that happens. But there are other protocols that have similar things, among staking blockchains.
J
I assume in some cases this will be, like, instead of directly looking at the state... Because currently the node would inspect the state of the miner actor and stuff like that. I assume we won't be doing that; instead, you'll basically have some kind of local read-only call into probably the reward or the power actor (it's the power actor) and say: hey, power actor, could this miner mine this block? And then it will tell you yes or no. That's my assumption.
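That read-only "ask the power actor" check might look roughly like this. The `PowerActor` class, its state layout, and the eligibility threshold are all invented for illustration; the real rule involves minimum power, faults, and more.

```python
# Hedged sketch of a local, read-only eligibility query against a power actor.
# State layout and the eligibility rule are hypothetical, not Filecoin's.

class PowerActor:
    def __init__(self, claims):
        self.claims = claims                 # miner address -> raw byte power

    def can_mine(self, miner, min_power=10 * 2**40):
        # Read-only query: no state is mutated, so consensus code can call it
        # locally without putting a message on chain.
        return self.claims.get(miner, 0) >= min_power

power = PowerActor({"f01234": 32 * 2**40})
assert power.can_mine("f01234") is True
assert power.can_mine("f09999") is False
```

The appeal of framing this as a call, rather than reaching into the actor's state directly, is that the actor's internal layout can change without breaking the consensus code that asks the question.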
H
You
know
another
approach
to
this
would
be
to
create
a
higher
level
intermediate
representation
ir
that
compiles
nicely
down
to
bosom
and
not
go
all
the
way
down
to
wasm
as
the
spec.
H
For
the
entire
system
and
put
in
there
all
of
the
system
calls
that
you
need
right
and
then
compile
it
down
or
any
other
target
right
like.
H
Yeah, so one of wasm's big design goals was to be a good back end for LLVM. Most of the reason everything compiles down to wasm is because of LLVM, yeah. So if you have something that goes from your language to LLVM, it'll then go down to wasm very nicely, and you'll get all of the LLVM tooling and all the tooling above the wasm bytecode, yeah.
J
I
I'm
still
hesitant
to
do
that
just
because,
like
the
lvm
by
code
is
not
stable,
but
it
really
wasn't
designed
for
that
yeah.
It's
like!
I
don't
want
to
be
this
he's
like
okay,
the
network
accepts
llvm
code.
That
was
one
of
the
options
we
looked
at,
but
there
was
concerns
there
about
that.
It
also
means
I
could
be
kind
of
like
acquiring
the
oven
for
everything,
okay,
but
we're
effectively
requiring
it,
but
I'd
still
prefer
like.
I
guess.
The
question
really
is
is
like
what
what
would
the
goal
here
be?
J
I
guess
I
don't
know
why,
like
so,
I
guess
yeah
fine.
So
what?
What
is
your
goal
here?
Why
do
you
think
we
need
to
how
about
something.
J
Oh
sorry,
I
missed
the
last
sorry.
I
guess
my
question
here
is
like:
why
do
you
feel
that
we
need
to
compile
that
some
other
intermediary
and
not
just
directly.
H
Because
you
get
a
lot
more
control
over,
what's
in
your
semantics,
what's
in
your
world
right
rather
than
having
to
say
well,
we
do
wasm,
except
for
these
exceptions
right
and
having
to
you
know
inject
codes
through
all
of
the
other.
You
know
awesome
calls
and
you
know
check
that
those
are
actually
there.
You
do
something
at
a
higher
level
that
maps
really
nicely
down
below,
and
so
you
still
get
all
of
the.
J
Yeah, some of them... I guess I don't think that's gonna be a problem. Now, obviously you probably have more experience here, but from what I've read, it doesn't look like it's too hard to do that, because wasm is pretty easy to just parse and instrument. So it should be pretty straightforward to just go through and say: okay, which instructions are here? Oh, we don't allow this instruction, so reject the module. Okay, this is a branch here. All the branches and loops and everything are typed.
J
So
I
can
very
quickly
like
say:
okay,
this
is
where
the
branch
ends.
This
is
where
it
begins.
This
is
how
much
gas
section
will
use
and,
just
like
add
instructions
throughout
the
entire
system.
It
should,
I
think,
make
that
easier
like
I.
I
think
it
should
effectively
be
the
same
as
llvm
bytecode
in
that
respect,
from
what
I
can
tell,
but
again
I
haven't
really.
I
haven't
actually
tried
to
implement
that
part.
Yet
so
I
don't
know.
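The validate-and-meter pass sketched in the last two turns, in miniature: walk a toy instruction stream, reject disallowed opcodes, and emit one bulk gas charge per straight-line block, i.e. a run of instructions ending at a branch. Opcode names and the allow-list are illustrative only.

```python
# Toy validate-and-meter pass. Because wasm control flow is structured and
# typed, block boundaries are easy to find, so one charge can cover a whole
# straight-line run instead of charging every single instruction.

ALLOWED = {"i32.const", "i32.add", "br_if", "end"}
BRANCHES = {"br_if", "end"}

def meter(instrs, price=1):
    if any(op not in ALLOWED for op in instrs):
        raise ValueError("disallowed instruction: reject the module")
    out, pending = [], 0
    for op in instrs:
        pending += price
        if op in BRANCHES:                 # block boundary: emit one bulk charge
            out.append(("charge_gas", pending))
            pending = 0
        out.append(op)
    return out

metered = meter(["i32.const", "i32.const", "i32.add", "br_if", "end"])
assert ("charge_gas", 4) in metered   # one charge covers the 4-instruction block
assert ("charge_gas", 1) in metered   # the trailing `end` forms its own block
```
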
J
I guess, but again, the reason I would like to target wasm directly, instead of settling on something else as an intermediary, is that that something else could change. Basically, wasm is being adopted by a lot of browsers and stuff like that, so I'm expecting it to sort of ossify a bit, which will actually be helpful for us. LLVM really is very much an intermediate representation used by compilers, and it changes over time.
J
It
is
very
specific
to
that
compiler
as
well
and
as
far
as
understanding
is
also
actually
quite
a
bit
wider
than
than
muslim,
like
because
it
sports
like
arbitrary
with
ins
and
a
bunch
of
other,
like
optimization
specific
things
like
that.
E
Yeah,
I
think
your
first
point
there
is
well
taken
about
the
llvms
you're,
really
repurposing,
something
else
here,
but
I
still
kind
of
like
the
idea
lots
going
on
here.
This
conversation
could
probably
go
on
for
a
long
while
sorry,
yeah.
J
But
yeah,
I
recommend
you
guys
jump
into
the
fem
slacker.
It's
currently
a
bit
dead,
but
I'd
love
to
have
more
discussions.
There
also
the
comment
on
the
current
design
exploration
document:
it's
not
very
fleshed
out,
so
I
apologize.
If
you
have
difficult
to
figure
out.
Where
is
what
and
what
is
where.
A
That's
great
also
too,
as
we
begin
to
like
continue
to
work
on
this
issue.
We
can
also
carve
time
out
of
future
meetings
to
revisit
it
in
brooklyn.
Of
course,
you're
welcome
to
join
with
boris
or
on
your
own
anytime
in
the
future
as
well.
So.
A
Great
well,
if
no
one
else
has
anything
to
add.
This
has
been
a
great
meeting.
I
appreciate
you
guys
letting
me
step
in
and
help
facilitate
today
and
going
forward,
as
always,
you'll
see
materials
posted
online
in
the
tpm
github
and
if
there's
anything
between
now
and
our
next
meeting,
you're
welcome
to
reach
out
to
myself-
or
I
think
irish
still
for
the
mean
time,
and
we
can
make
sure
that
it's
moved
over
to
the
agenda
wherever
else
it
needs
to
be.