From YouTube: Filecoin Core Devs #51
Description
Recording for: https://github.com/filecoin-project/tpm/issues/120
A
Today we — yeah, I'm hoping that we have more people join us. If not, that's fine. We just have two main discussion points today. We're covering Interplanetary Consensus, formerly HC — I think that's hierarchical consensus — which is going to be led by Alfonso, as well as the FIP discussion on improving EC security with consistent broadcast. Alfonso has volunteered to lead that conversation as well, and then we can take questions and answers afterwards.
B
Perfect, cool. Okay, so today what I want to present is Interplanetary Consensus, or hierarchical consensus, which was a research project for the past year at ConsensusLab. Now we're starting to feel comfortable deploying it over the Filecoin network, so we wanted to start socializing the design and sharing with everyone what we're trying to build, to get as much feedback from all of you as possible and to start discussing the key points of the design.
B
So I guess many of you may already be convinced that one of the key bottlenecks we have right now in blockchains is the consensus layer. If we look at Bitcoin or Ethereum, we see that these consensus layers are on the order of dozens of transactions per second, and that's why we see proposals like the Lightning Network and layer-2 platforms to improve scalability. But even for the ordering of transactions with the best consensus protocol — any BFT or any specific protocol — if we don't have a way of horizontally scaling these protocols, we are limited by the single validator that is proposing the block. Next slide, please.
B
With hierarchical consensus, what we are proposing is a way of trying to achieve these targets, because if we want to start accommodating Web 2.0 loads into Web3 infrastructure, as everyone has been saying through LabWeek, we really need a way of horizontally scaling the consensus layer and horizontally scaling blockchains. The properties we are really targeting with HC are these: we want high throughput and fast local finality — so being able to run a consensus in a data center and still communicate with the rest of the network — and we want to make it as flexible as possible for everyone, because not every application will have the same requirements in terms of performance and security.
B
So we want developers to be able to configure the underlying protocol to whatever they need. And one property that I really like is, of course, horizontal scaling and partition tolerance, because up until now the current proposals for scalability don't tolerate partitioning.

B
With our protocol, what we are hoping is to have these local consensuses be able to support potential partitioning between some of the subnets of the system, and still be able to propagate all of the changes eventually, when the connection is recovered. Next slide.
B
Okay, yeah, so these are the requirements — cool, and you can click once again so that we have the whole thing. So with hierarchical consensus, in the end, since we think there's no one-size-fits-all consensus for every use case, what we want is to give a framework that allows applications to fine-tune the consensus layer to whatever their use case needs.

B
So we will be able to accommodate the more DeFi-like, fully Web3 applications, as well as the more Web 2.0 or computation applications, where we may not need that level of security but still need verifiable computation and SMR properties. That's the idea behind this: to have on-demand horizontal scalability for Filecoin. We can think of this proposal as an alternative to sharding and sidechains.
B
It's a mix between both of them, and what we want to give is full configurability, so that users and applications are able to fine-tune the underlying consensus and the underlying blockchain infrastructure to whatever their use case needs. Next slide — yeah, thank you. In the end, what we do with hierarchical consensus is that we have a rootnet.
B
We can think of the Filecoin mainnet as that rootnet — this is the high-level architecture of the protocol, or what we're hoping to build. With the Filecoin mainnet as the rootnet, we have 30-second blocks and a finality that's way longer, so it may not be perfect for certain use cases. Especially after FVM, where we have programmability, these block times and this finality may not suit every use case.
B
So the idea with hierarchical consensus is to have a way of deploying different subnets. Let's imagine that a set of users or developers want to deploy their own use case, and they need higher performance and higher throughput. What they'll be able to do with hierarchical consensus is deploy a new subnet, and this new subnet is just an independent network with its own state, its own consensus algorithm, its own mempool.
B
So the whole stack is independent, built from scratch, and the idea is that the subnet will validate transactions in parallel while still being able to interoperate by design with all of the other subnets of the hierarchy. And this property is recursive: in the same way that from the rootnet we were able to deploy new subnets — to accommodate new use cases with a new consensus algorithm and so on, validating transactions in parallel — we are able to do the same from a subnet.
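As a rough illustration of that recursive property, subnet identity can be modeled as a path from the rootnet, so that any subnet can spawn children exactly the way the rootnet does. This is only a sketch — the class and the path-style IDs are illustrative, not the actual IPC data structures:

```python
# Illustrative model of the recursive subnet hierarchy (not real IPC code).
class Subnet:
    def __init__(self, name, parent=None):
        self.name = name        # local name of this subnet
        self.parent = parent    # None only for the rootnet
        self.children = {}

    def id(self):
        # A subnet is uniquely identified by its path from the rootnet.
        if self.parent is None:
            return self.name
        return self.parent.id() + "/" + self.name

    def spawn(self, name):
        # Any subnet can spawn a child: the hierarchy is recursive
        # by construction, just like deploying from the rootnet.
        child = Subnet(name, parent=self)
        self.children[name] = child
        return child

root = Subnet("/root")              # think: Filecoin mainnet
region = root.spawn("eu-west")      # a higher-throughput subnet
app = region.spawn("game-app")      # a subnet spawned from a subnet
```

Each node in this tree would carry its own state, consensus, and mempool, as described above.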
B
So now let's look at how this is implemented at a low level, in terms of the logic. We saw that we had the rootnet and then we were deploying subnets. The two core components that orchestrate all of the logic are two user-defined actors in each subnet that handle all of the logic for these interactions. What we are hoping to do is to deploy in the Filecoin mainnet a user-defined actor
B
that is the IPC gateway, and this gateway in the end is the one that will handle all of the logic for IPC in the rootnet and in every subnet. If you can click a few times in the keynote so that everything appears — thanks — you see that every subnet has its own IPC gateway, and this IPC gateway is the user-defined actor that enforces all of the logic for IPC. For example, it will handle the collateral that users need to put up in order to deploy a subnet.
B
It will handle all of the orchestration of the cross-net messages. And because subnets are able to validate transactions in parallel, they're able to interoperate with other subnets in the system, and they can also anchor their security to the security of their parent by leaving proofs of the state of the subnet in the parent — that is, by periodically sending checkpoints to the parent.
B
As long as the interface is implemented, anyone could come up with their own governance policies for a subnet. If we want to deploy a new subnet, we first deploy this subnet actor, and from there on we can deploy the subnet and start mining — but it won't be able to do much more than that on its own.
B
In order for this subnet to be able to anchor its security — to create checkpoints and send cross-net messages — it will need to register with the IPC gateway and put up some collateral, so that we have some insurance, because we don't assume and we don't want to guarantee any security in the subnet. So, in order to deter potential attacks and to be able to report potential misbehaviors in subnets, we enforce a collateral for users in order to register the subnet. Next slide.
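The register-with-collateral flow just described can be sketched as follows; the threshold, method names, and slashing rule are assumptions for illustration, not the real gateway actor's API:

```python
# Hypothetical sketch of the IPC gateway's registration logic.
MIN_COLLATERAL = 10  # illustrative stake threshold, not the real value

class IPCGateway:
    def __init__(self):
        self.collateral = {}  # subnet id -> staked amount

    def register(self, subnet_id, amount):
        # A subnet can mine on its own without registering, but to anchor
        # checkpoints or send cross-net messages it must stake collateral.
        if amount < MIN_COLLATERAL:
            raise ValueError("insufficient collateral")
        self.collateral[subnet_id] = amount

    def can_anchor(self, subnet_id):
        # Only registered subnets may checkpoint to the parent.
        return subnet_id in self.collateral

    def slash(self, subnet_id):
        # A reported misbehavior forfeits the stake: the "insurance"
        # mentioned above, since subnet security is not guaranteed.
        return self.collateral.pop(subnet_id, 0)
```

The collateral is what lets the parent punish a subnet whose security it never guaranteed in the first place.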
B
Okay, so we've seen the architecture and how it's implemented at a low level. Now, the way in which we implement this at a client level: right now we have an IPC node, which in the end is a client — the client for IPC is based on Lotus — and the way in which we implement synchronization with new subnets is just by replicating, by spawning the full stack for a new blockchain. So let's go step by step.
B
So here in the IPC node we see the representation of two subnets and how a node syncs with those two subnets, and we see that we share the transport layer. In the transport layer we just have GossipSub, with different topics for the different subnets, and this is the way in which we interact with the different subnets — we share messages through the different topics. From there, each subnet has its own message pool and its own consensus.
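A toy version of that shared transport makes the idea concrete: one pub/sub layer, one topic per subnet, with each subnet's stack only seeing its own topic. The topic naming scheme below is invented for the example and is not Lotus's actual one:

```python
# Toy shared transport: a single pub/sub with per-subnet topics.
class SharedTransport:
    def __init__(self):
        self.subs = {}  # topic -> list of handlers

    def subscribe(self, topic, handler):
        self.subs.setdefault(topic, []).append(handler)

    def publish(self, topic, msg):
        # Only stacks subscribed to this subnet's topic see the message.
        for handler in self.subs.get(topic, []):
            handler(msg)

def blocks_topic(subnet_id):
    # Hypothetical naming scheme; real GossipSub topic names differ.
    return "/blocks" + subnet_id

net = SharedTransport()
seen_eu, seen_root = [], []
net.subscribe(blocks_topic("/root/eu-west"), seen_eu.append)
net.subscribe(blocks_topic("/root"), seen_root.append)
net.publish(blocks_topic("/root/eu-west"), "block-42")
```

One transport instance serves every subnet the node follows; isolation comes purely from the topic names.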
B
So here we have, for instance, Mir, and proof-of-work on the right; each has its own state tree, its own FVM, its own API, and they are validating and creating blocks in parallel. And then we have this special cross-message pool, which is a message pool that is syncing with the parent. Here we can see the parent — in the Filecoin case we'd be syncing to the Filecoin mainnet, or to whatever other parent; eventually this could even be Ethereum.
B
The framework is general enough to support any kind of network. And what this cross-message pool is doing, in the end: the traditional message pool is listening for unverified messages that need to be proposed in the network, while this cross-message pool is listening to the parent, in case there are pending messages from other subnets that need to be proposed in the current subnet. Next slide.
B
Yeah, I'm not going to go in depth into the checkpointing protocol. In the end, we can think of the checkpointing protocol as a way to propagate information to the upper layers of the hierarchy, and also to anchor pieces of the state of the subnet into its parent. Eventually, as we send checkpoints with some information about the subnet, we can build fraud proofs and other kinds of mechanisms to report misbehaviors.
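The anchoring idea can be sketched as a child periodically committing a digest of its state to the parent; the fields and the period here are placeholders, not the checkpoint format under discussion:

```python
# Sketch of periodic checkpointing from a child subnet to its parent.
import hashlib

CHECK_PERIOD = 10  # illustrative: checkpoint every 10 child epochs

def make_checkpoint(subnet_id, epoch, state):
    # Commit only a digest of the child's state, not the state itself.
    digest = hashlib.sha256(repr((subnet_id, epoch, state)).encode()).hexdigest()
    return {"subnet": subnet_id, "epoch": epoch, "state_digest": digest}

class ParentLedger:
    def __init__(self):
        self.anchors = {}  # (subnet, epoch) -> digest held by the parent

    def accept(self, cp):
        if cp["epoch"] % CHECK_PERIOD != 0:
            raise ValueError("not a checkpoint epoch")
        # Once stored, the digest is the reference a fraud proof about
        # the child's history at this epoch could be checked against.
        self.anchors[(cp["subnet"], cp["epoch"])] = cp["state_digest"]

parent = ParentLedger()
parent.accept(make_checkpoint("/root/eu-west", 20, {"balance": 7}))
```

The parent never replays the child's transactions; it only holds commitments that misbehavior reports can be checked against.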
B
And for the way in which we propagate cross-net messages, we have two ways. If we can click a few times, so that all the animations come in — okay. As you saw in the architecture of the node, we are always listening to events in our parent, to see if there are pending messages, or new events, or something happening in the IPC gateway or in the subnet. So propagating messages top-down is quite easy, because we have the assumption that child subnets necessarily need to sync with the parent.

B
The way in which we propagate cross-net messages through the whole system — how we propagate this information between subnets — is using what we call top-down messages and bottom-up messages. Top-down messages are quite straightforward, because as we're syncing with the parent we can see that there are verified messages that need to be proposed in the subnet. But for bottom-up messages:
B
if we send a cross-net message from, for instance, the Filecoin mainnet that we want to go to one of the subnets, we would send a message to the IPC gateway in the parent, and the children are listening to events in the IPC gateway.

B
They would see that there's a new cross-net message that needs to be propagated further, so the cross-message pool, through the parent bridge, will detect that there's a new message, and it will be proposed like any other message in the network. So cross-net messages, in the end, are proposed in the subnet as if they were any other message in the current network, but the validation they undergo is different: they get additional validations, by checking the state in the parent.
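The routing rule behind top-down versus bottom-up can be reduced to a comparison of the path-style subnet IDs. A minimal sketch, assuming the hierarchical naming used earlier in the talk:

```python
# Sketch: decide the propagation direction of a cross-net message from
# the sender's and destination's positions in the hierarchy.
def direction(src, dst):
    # IDs are paths like "/root/eu-west". A message travels top-down only
    # when the destination lies strictly below the source; anything else
    # must first climb toward a common ancestor, i.e. start bottom-up.
    if dst.startswith(src + "/"):
        return "top-down"
    return "bottom-up"
```

Messages between sibling subnets therefore start bottom-up, get committed at the common ancestor, and finish top-down.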
B
So if you click a few times, a few arrows should appear explaining the whole flow — okay. And when we were figuring out how to do this horizontal scaling — I went a bit deep into the tech there — this is how we were figuring out how we could do it. Yeah, if you can go to the next slide. Okay.
A
Sorry if I'm going too fast. I'll put them here — there you go. Yes.
B
So, a few of the things that we were exploring, and where we got the inspiration for IPC: the two ways we could approach scaling for blockchains were either vertical scaling, where we try to come up with a better consensus algorithm — and we've seen several attempts at that not working; if we really want to scale a lot, we should use horizontal scaling — or horizontal scaling itself.

B
For horizontal scaling we explored sharding, payment channels, sidechains, rollups — everything that people are doing — and in the end we came up with this approach, which is a mix between sidechains and maybe sharding, but without explicitly partitioning the network. For instance, the Ethereum approach was that there are clear shards and a clear protocol for how to partition the network — or at least that was the case, because now they're going more toward a layer-2 rollup approach.
B
But what we wanted is a way for users and developers to be able to influence the infrastructure according to their needs. So instead of explicitly partitioning the network, what we want is that, according to demand, the infrastructure adapts to the needs of the users.

B
Hopefully we'll be able to support rollups too, because we are trying to come up with a flexible and configurable framework thanks to the FVM capabilities. That's why we have the subnet actor, which is user-defined, and the ability to implement different consensus algorithms: this way we could, for instance, decentralize the sequencers in a rollup. Since checkpoints are also configurable, you could configure checkpoints as a rollup and then implement a consensus where the validators in your subnet are the sequencer.
B
There are a bunch of discussions on how hierarchical consensus is different from other layer-2 solutions. We may not have the right answer, but please jump in if you're interested and want to give some input — we would love all of your feedback. Next slide. And really briefly, what are the key modules that make up IPC? Because actually we can think of IPC as an application over the FVM and Filecoin, and we don't expect to change much in the underlying protocol.
B
So we want to try to make this an application and write it as an FRC, so that we can have different implementations of certain actors — even of IPC gateways — and common interfaces that allow the community to come up with their own. For instance, we are dividing it into different actors: we have the subnet actor.

B
We have the IPC gateway as a different actor, and we have the atomic execution protocol — because we support atomic executions between actors — as a different actor again. The idea is to have these actors as reference implementations, and then improve the protocol and agree on common interfaces, so that we create a whole community around it. Hopefully this is not a change that requires a change in the underlying protocol. And these are the main modules that make up IPC.
B
We see that we have the IPC gateway and the actors that I mentioned. We have a consensus interface for anyone to implement — we are trying to upstream it to Lotus — so that different consensus algorithms can be implemented for subnets and we don't have to subscribe to a single one. We also need a new type of address, and I guess this is something that we really want to start discussing, because we need a way of uniquely identifying an address in a subnet.
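One way to picture the problem: the same raw address can exist in many subnets, so an IPC address has to carry the subnet ID alongside it. The separator and layout below are made up for illustration and are not a concrete proposal:

```python
# Sketch: qualifying a raw actor address with the subnet it lives in.
def qualify(subnet_id, raw_addr):
    # e.g. ("/root/eu-west", "f1abc") -> "/root/eu-west:f1abc"
    return subnet_id + ":" + raw_addr

def split(qualified):
    # Recover the subnet ID and the raw address it qualifies.
    subnet_id, raw_addr = qualified.rsplit(":", 1)
    return subnet_id, raw_addr
```

The point is simply that identity becomes the pair (subnet, address), not the address alone.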
B
This could have a lot of value for the Filecoin ecosystem. The same goes for the checkpointing protocol and the cross-net message execution: these are general, configurable protocols, but it would be great if we could agree on — or at least start discussing — the interfaces for these checkpoints, the different fields that we want to add to them, and so on.

B
So we're hoping to start a discussion on each of these parts, especially the ones you're most interested in, so that we can start getting clarity and, as we're moving to production, get more feedback from the community. Next slide.
B
Okay, and to give you a glimpse of what we have so far: we have an MVP implementation of IPC in Eudico. In the end, Eudico is a fork of Lotus that we use at ConsensusLab to do all of our testing and innovation. Usually what we do is push code into Eudico and try a lot of things there, and then, once things are ready and we see that they work at an MVP level, we try to move them into more production-ready code in the Lotus codebase. So right now we have an MVP, but we want to release — by the end of this month we should have the first release of Spacenet, which is a testnet where we are going to test, first, the Mir protocol and FVM. It's like the rootnet for an eventual IPC testnet.
B
Actually, the first MVP used the legacy VM, because FVM wasn't a thing yet. Now our actors are targeting FVM, and that's why we're moving from a lot of protocol changes — spec-actors and all of that — to actually going to userland and trying to make all of our actors user-defined. There's also a FIP discussion and a paper — a high-level paper that gives a bit of an overview, a more academic overview, of how we came to this protocol and how it works at a high level.
B
Next slide. Yeah, and there are a bunch of resources: if you want to check the code for the actors, in these links we have the subnet actor reference implementation — it's being changed a bit, because we are trying to catch up with the changes in FVM — and we also have the IPC gateway and the uptime checker. Because, as part of subnets, for many of the use cases where we think hierarchical consensus and IPC can be useful,

B
it would also be useful to know whether the validators are online — not only participating in the consensus, but actually online. For instance, when we were thinking about using IPC to handle the different geographical areas for Filecoin Saturn, we needed an uptime checker. This is an actor that checks that, in a given area or a specific subnet, the validators and the participants in the consensus are actually online. So there you can see it — it still needs documentation.
B
We're trying to have everything up and running for the Spacenet launch, but the idea is to start deploying this on the testnet and start the discussion, so that we can build a proper FRC and get a sense of what the community wants from the protocol.

B
Next slide. Yeah, and I think I mentioned this: our focus right now is that we're trying to build this testnet, and we want to upstream some of the changes that we are making into Lotus, if possible. We're going to start running Spacenet, this long-running testnet, by the end of November. It won't have support for subnets yet.
B
So there we will only be running a Filecoin testnet with FVM and a new consensus algorithm, the Mir consensus algorithm. And we are spending a lot of time on this now: we went from the legacy VM, to targeting the FVM as a built-in actor, and now we're trying to move everything to user-defined actors. So this is what we're focused on right now — moving IPC from protocol-land to userland — so that we can start deploying it and testing it over this network that has a new consensus, and that way test all of the pieces while we discuss the low-level details of the implementation with the community. And I guess that's all from my side for IPC. I hope it was reasonably clear, but I'm happy to keep discussing — feel free to open any discussion that you may have. Thanks.
D
Excuse me — I'm still a little sick from Lisbon, and sorry if I missed this during your presentation, but to try and close the loop on some of the open questions you stated: you mentioned that there may be a new type of address class necessary to optimize usability here, and it looks like someone previously flagged that the f4 address class proposal that's being worked on may be a solution. Have you followed up with that thread, by chance?
B
Yeah — so initially I thought that f4 — and actually I have it in the presentation that f4 addresses could be useful — but then... I would love it, Alex, if you could follow up, because I think you had some concerns about f4 not being what we need. I haven't gone into it since our conversation.
C
These are different problems to solve. I don't think f4 is going to solve the problem that you have — or, you know, the opportunity that you have to introduce addressing that can address a particular chain. F4 addresses, like every other blockchain address everywhere, are implicit about what chain they're supposed to be resolvable on. I think there's a great proposal in here for a universally resolvable blockchain address, which could solve issues broader than Filecoin.

C
So I'm a little excited about this part. I think there is still work to be done, but it could be really good.
A
Gotcha — thanks. Great, thanks, Alex, thanks so much. Any other comments or questions?

A
No? Okay, then we'll go back to you, Alfonso, for the EC security conversation.
B
Okay. For this one I hope I do a good job, because I'm covering for Guy, for whom it's even later in his time zone. So here we wanted to present a FIP. What we're trying to achieve with this FIP is to prevent a security vulnerability that we found in Filecoin's EC: we've realized that it's possible to perform a network-split attack on the Filecoin consensus.

B
In this attack, what happens is that a malicious storage provider can game the cutoff time in order to equivocate to other storage providers. When the cutoff time is approaching, it sends different blocks for the same ticket to different storage providers, in this way splitting the network.
B
In the meantime, as it splits the network, the malicious SP itself will start building its own local chain, in order to have a heavier chain than whatever it has split off.

B
It's true that this is a low-likelihood attack, but it could really damage the network, because it would affect the safety of the protocol. There's a security analysis that Sarah and Guy have been doing that we will eventually share — they're going to publish it soon. So that's the attack, and the proposal in the FIP is to use consistent broadcast. We went through several options to prevent the attack.
B
The first one is removing blocks from tipsets — that is, having a single block per epoch — but of course this is not an obvious change, and it would require an upgrade to Filecoin. The second approach was to use asynchronous consistent broadcast: with consistent broadcast, we can ensure that for the same ticket from the same miner, all of the storage providers receive the same block.

B
So with consistent broadcast we get this property, and we thought about using an asynchronous consistent broadcast, but the problem is that rolling it out in the network would again require an upgrade. Then we came up with the idea of using a synchronous consistent broadcast, relying on the magic number of six seconds. This is configurable, but one of the properties of GossipSub is that within six seconds a block has been broadcast to the whole network.
B
So our proposal is to use synchronous consistent broadcast, where storage providers delay the delivery of a block for six seconds — and by delivery I mean actually accepting a block for a specific ticket. This way, if there are malicious storage providers trying to equivocate — since we no longer have a best-effort broadcast but a consistent broadcast — we are delaying, and if we see more than one block from the same miner for the same ticket, the storage provider will reject both of them, because it means the miner is equivocating.

B
So that's the idea behind this synchronous consistent broadcast: we delay the delivery of a block for six seconds, or whatever other number we come up with — these six seconds are just because of the GossipSub property. It's true that storage providers play with their cutoff times and so on, so it may require some tweaking. But by delaying delivery we can keep listening for new blocks for the same ticket, and eventually, as storage providers keep rebroadcasting the blocks that they've seen,
B
we would realize whether someone is trying to equivocate or not, in this way removing the ability to split the network by equivocating — by forcing different blocks for the same ticket onto different storage providers. We already tested this; I mean, we wrote up the discussion.
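The delayed-delivery rule can be sketched as a small state machine: hold each (miner, ticket) block for the delay window, and reject the pair outright if a conflicting block arrives in the meantime. Timing is left implicit here (the caller decides when the window has elapsed), and everything is illustrative rather than the Lotus branch's actual code:

```python
# Sketch of synchronous consistent broadcast via delayed block delivery.
GOSSIP_DELAY = 6  # seconds; the GossipSub propagation bound discussed above

class DelayedDelivery:
    def __init__(self):
        self.held = {}       # (miner, ticket) -> block awaiting delivery
        self.rejected = set()

    def on_block(self, miner, ticket, block):
        key = (miner, ticket)
        if key in self.rejected:
            return "rejected"
        if key in self.held and self.held[key] != block:
            # Two different blocks for the same ticket: equivocation.
            # Reject both, so the split cannot take hold.
            del self.held[key]
            self.rejected.add(key)
            return "rejected"
        self.held[key] = block
        return "held"

    def deliver(self, miner, ticket):
        # Called once GOSSIP_DELAY has elapsed without a conflict.
        return self.held.pop((miner, ticket), None)
```

An honest miner's single block is held for the window and then delivered unchanged; an equivocating miner loses both of its blocks.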
B
I think Sarah and Guy are releasing the analysis of the attack soon. We would love to move this FIP quickly to draft and on to implementation, if that's possible. We have an implementation of this synchronous consistent broadcast in a branch of Lotus; we've tested it, and we also did some testing with the attack itself. But we would love to have some support here, because we tried to actually run the attack over a test network to see if our fix actually worked.

B
We would love some support from people more knowledgeable about Lotus and Filecoin to confirm that this is the case. But yeah, we already have the code for the synchronous consistent broadcast, and we have candidate code for an attacker showing how this would work. We would love feedback, and any support we can get to test that this works and to start rolling it out to storage providers would be great.
E
Yeah, actually I have some comments — a few questions here. The first thing is that this approach, as described, is good, but it's actually not mandatory: you cannot detect, you cannot confirm, whether a storage provider has adopted this or not, right?

E
So the first thing is that I'm not sure whether this should be a FIP or an FRC — this could be discussed from the process point of view. That's one thing. Another thing is that — and I actually discussed this with storage providers — different storage providers may have different cutoff time settings. So I'm not so sure whether the majority of storage providers will take this approach, because it isn't mandatory, right? That is my concern: we have to make sure that the majority of storage providers will adopt this, so that it can actually take effect.

E
So I'm not sure whether we should take a survey about this, because we need to know whether people prefer this or not, to make sure that it really works, right?
B
Yeah — and I hope I'm not making a mistake here; again, I'm not the one that did all of the analysis, I just did the implementation. But I think that the cutoff time is independent from this, right? Right now, what you do is accept every block that comes before the cutoff time.

B
What we're saying is that, eventually, if you see two blocks for the same ticket within that cutoff time, you could detect it even without this. But here, what we are proposing is for storage providers that don't want to be vulnerable to this attack.

B
So in the end, the fact that we're using the synchronous approach improves our probability of preventing the attack, but, as you mentioned, it doesn't completely prevent it: if a majority of the network doesn't support this, the attack may still be possible.

B
But it's true — I think it's independent from the cutoff time, because if you increase the cutoff time, what you're doing is giving yourself more time, making it harder for someone to game you into accepting a block right at the end of your cutoff time. An attacker would have to know your cutoff time in order to force a block on you at the last moment.
E
Right, I understand. But I mean that the storage providers may not really care about this kind of attack.

E
Do they really care about this? And also there's the consensus fault report mechanism. This is why I have this concern.
B
I mean, I completely hear you. Just one point: I think they will care, because the thing is that the private chain the attacker is building will be the heavier one, so it will push a fork. If they're splitting the network — I'm cutting up the network so that the others don't have all of the blocks, while I'm building the heavier chain myself.
B
So we can start whenever we start aligning on the design. That's why we wanted to open the discussion: as soon as someone says, okay, we're happy with the design — as soon as we're comfortable with the design — we can start writing the FIP.
D
Sure, that makes sense, and I think this is how a lot of teams think about it. But it might be helpful, if there are open design questions, to write the FIP draft first and mark those questions as notes. It will help with discussion, and it will also give you space for diagrams or alternative design choices as part of the technical review and feedback process.
A
Can anyone hear me, or are we still speaking? No, I can hear you. Oh good, okay, great. It looks like we'll have additional conversations on this going forward, but again, Alfonso, let us know if you need additional help in drafting this FIP, or in doing an initial survey or analysis to gauge people's interest or feedback. Do we have any other comments or questions? This was our last discussion point on the agenda for today.
E
Yeah, I have a question about IPC, the first topic. That is very exciting, and it's very big. It's also very important for Filecoin and its ecosystem.
E
There's proof of storage, but right now there isn't much bandwidth left for smart contracts, for FVM smart contracts. So it's very cool, and your subnets, or sharding, or IPC would be very important. But I was thinking about it this way: it's a very big effort, and also very important.
E
It would be very good if we could have milestones and different phases to show to the community, for example over a few years. I'm not so sure, but while we work on this, we could, for example, move some of the current consensus work to a testnet as part of some milestones.
E
For example, we could move Filecoin Plus to a testnet first, I guess, or we could even improve the proof of storage first, or anything. The basic thing is to set out the milestones. If we have that plan, it will be very helpful for the whole community, and also for the ecosystem, to understand Filecoin's vision. That's it from me.
B
Yeah, we have preliminary milestones that we can publish. I think the roadmap is public, but we should probably come up with a more visual one; that's a to-do for George and me. We'll probably have it on the website. We're going to release a website soon explaining IPC and our efforts to scale Filecoin, and we're going to publish the milestones there, similar to the milestones that FVM now has on its website.
D
I agree, Stephen, that what you're raising is a great point, and I think this is a challenge for a lot of teams, because even once they have these resources, there's a lot of information fragmentation. Lucky's primary task right now is to work on launching a public FIPs roadmap, so that we can coordinate this information, collate it in one place, and hopefully share it.
D
So if there's a place where you think this would be most accessible, that would be helpful for us to know too, so that we can draw attention to resources like the website for IPC, the FVM roadmaps, things like that. The challenge, again, is that they usually exist; they're just really hard to find.
E
Yeah, right. Also, this kind of milestone doesn't come from just one team; for example, the IPC team and the FVM team and other teams each have their own milestones. But from the community's point of view, they want to see the whole of Filecoin.
E
As a big picture, we should have a direction and milestones, for example for the next five years, something like that, so we don't lose our focus.
E
That might change, you know, because it's a long-term plan. But as I said, it's okay as long as we have something.
E
Yes, it is better for us, for the whole community, to have more confidence in Filecoin, because we have a long-term plan and we are working on it to make it happen. Yeah.
A
Are there additional points? I'm just mindful of time. Is there anything else anyone would like to bring up, ask, or discuss?
A
Great, it looks like we are done for today, and I'm looking forward to letting all of us go; it must be brutal for some of us in different time zones as well. So if you have additional thoughts, comments, questions, or anything you'd like us to discuss next month, please do send them over to us and I'll be happy to pop that into our agenda for Core Devs #52. I will be sharing the recording, as well as the meeting materials.
A
If anyone would like to read further. Otherwise, see you all next month.