From YouTube: Filecoin Core Devs Biweekly #19
Description
Recording for: https://github.com/filecoin-project/tpm/issues/41
For more information on Filecoin
- visit the project website: https://filecoin.io/
- or follow Filecoin on Twitter: https://twitter.com/Filecoin
Get Filecoin community news and announcements in your inbox, monthly: http://eepurl.com/gbfn1n
A
Okay, all right, good morning, good afternoon, good evening, everyone. Thank you for joining the 19th Filecoin Core Devs biweekly meeting. We have a fairly packed agenda today, so we'll see how much we can get through, and in the interest of time maybe we can keep updates a little more brief. Maybe I'll kick things off with a quick Lotus update.
A
So basically our key focus has been working on the Hyperdrive upgrade, network v13. We put out our first RC for the upgrade yesterday, which incorporates the actors v5 RC as well as the latest release. We've mostly shared it with miners right now so they can kickstart their integration process. There will be a couple more updates, or things that have changed, mostly tweaks in terms of some of the FIPs that are in v13.
We will talk about those outside of our implementation updates. Apart from that, Lotus v1.9.0 has been out for a while now and has generally been well received. I think that's about it from the Lotus side of things. Let's see, Forest, do y'all want to go?
B
Yep, Forest here. Yes, over the last two weeks we're still going through the audit. Things look pretty nice so far, just some configuration bugs that were found, or I guess misconfigurations, on the libp2p layer, but we fixed those pretty quickly and everything seems to be running really nicely. We're still getting metrics up.
B
We have most of the groundwork done now; we're just building out Grafana dashboards and stuff. Just looking at rough metrics using htop, our performance and RAM and all that look pretty nice. We also made big improvements to our RPC stuff, which enables us to rapidly build out our CLI, which is ongoing right now, and we are starting to implement network v13 next week. It should be fairly straightforward.
B
In my opinion, anyway. So yeah, I think that's kind of it. Oh yeah, we're also getting our state migration stuff in. We have everything done; we're just doing some benchmarks on the kind of parallelism we want, just to make it so that it doesn't choke the CPU but is also fast enough. That's it.
A
Sounds good. One quick question I did have: is there an end date to the audit that's going on?
B
I believe from our talks with the auditors, and with Deep before, it was supposed to be around six weeks, so I think we're approaching the tail end of it, if I understand correctly. Yeah, sounds good, cool.
A
Thanks. All right, maybe we'll get an update from Venus.
C
Can you guys hear me? Oh okay. Hello, everyone, I'm joining from IPFSForce, as Steven is on a business trip, so I will share the information from the Venus team this time. In the last two weeks we focused on the following things. First, we kept up with the Hyperdrive upgrade.
C
So we brought several features and optimizations into Venus, including the Venus Messager and Venus Wallet, and we refactored some pieces of the code and added more tests and documents. In the ongoing sprint, we are implementing a full-node coordinator, which can do real-time selection from a number of upstream nodes based on their chain heads, and then proxy the request to one of the selected nodes. And we are implementing a component hosted by miners, which will join the Venus project to provide WindowPoSt computing and signing functions.
A
Okay, very cool, thanks. I think Venus is really doing some very innovative things in the mining space of the Filecoin ecosystem, so it sounds really cool, and I'm excited to see more and more people pick those things up. All right, let's hear from the Fuhon team, maybe. Thank you.
D
Good evening, everyone, good day. So from our side, not so many updates, actually. We had a decent amount of our team on vacation this past week, but we did prepare some interesting stuff beforehand.
D
What we did was, we bootstrapped the libp2p node living its own life, and it has been running for, I think, more than eight days now, and memory consumption on the libp2p level seems really, really stable. It deviates by about 30 megabytes over the eight days, so it goes up, goes down, and nothing really major. We have also launched Fuhon nodes themselves, and we have prepared some core dumps.
D
So we are doing some other interesting work on finding the root cause of the memory consumption and what the reasons for it might be. It will probably take some more time, as there are lots of parts which could be influencing the consumption, so unfortunately it takes a lot of time. So that's it on our side.
A
Well, I hope you folks had a nice vacation, and yes, you have my sympathies. I think we've said this before, but fixing memory leaks is not easy, and I hope you guys have some luck with it.
A
Cool, thanks for the update, though. All right, next up, from the community side of things, or the Filecoin Foundation side of things, there's an update we'd like to share.
E
Hi, yeah, I can jump in from the Filecoin Foundation side, just a little bit on some bug bounty news and how we'll be organizing that among the different implementers. A few things still need to be ironed out, but Filecoin Foundation has sort of taken over the coordination aspects, at least, of handling bug bounties in the Filecoin ecosystem. Obviously, with implementers, we kind of have to work out exactly how we're going to rank awards.
E
It's probably going to end up that the higher the percentage of nodes you have in the network, the higher the rewards, but it's always going to be somewhat ad hoc to developers how severe something actually is, whether it affects the protocol as a whole or just one implementation, etc. But what I do need, and I'll be reaching out to every implementer team, is one person from each team that I can use as my go-to contact. By the way, I'm Dudley.
E
I was at this meeting a few weeks ago; I'm working on security at the Filecoin Foundation now. I probably should have started with that. But I need one point man or point woman from each team, and essentially they're going to be the ingress for bugs from everywhere. If you've dealt with security reports in the past, you'll know that you're going to get them in Slack rooms, you're going to get them in personal emails, you're going to find them just online. Sometimes they'll come to the correct location; sometimes they won't.
E
So essentially, if your team gets a bug report, whether it's on GitHub or elsewhere, and you consider it significant, something that might be worth rewarding, you can contact me. All of this is going to be formalized into a much easier way to do it; for the time being, you can either just reach out over Slack or security@filecoin.org, whichever works.
E
So yeah, one point person for each team. This will also probably be similar when we're organizing audits, for example for Fuhon in the future, and hopefully that makes sense for everyone. Again, this is still something that's kind of coming together by osmosis at the moment, but eventually we'll have a formal plan for how to deal with these sorts of things and how rewards will be coordinated.
A
Sounds great. Glad to hear this stuff is in progress; it's obviously critical to the broader Filecoin ecosystem. And for the benefit of folks watching the recording, Jennifer flags in chat that Dudley should be added to the implementers channel, which I think totally makes sense, so I'll make that happen.
A
Sounds good. Okay, so let's talk network v13 and actors v5 stuff. A couple of quick little housekeeping things. There's an issue in the tpm repo that tries to concisely summarize all of the protocol changes that are being landed in this upgrade. We'd like to make this a consistent thing going forward, where every time we have a network upgrade we detail which FIPs are in it and any other non-FIP changes.
A
These are protocol tweaks or minor bug fixes that we may have in the network upgrade. That way, even folks who aren't part of this call or in the core implementers groups, say someone who wants to implement the Filecoin protocol from scratch, can keep up with mainnet.
A
So that's one thing to flag. The second thing, quickly, is that Interopnet is live, or going live as we speak, and it will be starting from v13 directly, which I know is important to some people here, or at least at one point was important. Supported sector sizes will probably be 2 KiB, 8 MiB, and 512 MiB, I think, and we'll share how to connect to it. This will be a test network exclusively for this group, essentially.
A
The funds will be fairly tightly controlled, but if anyone here needs some funds to test it out, we can just send some transactions. So that's finally happening, and hopefully it will be helpful.
A
I think the next thing to talk about, in terms of network v13 and actors v5 stuff, is the final set of changes that have landed in FIP-0013. Zen or Juan, I don't know which of you wants to go over them, but I think it'll be useful to brief the rest of the group.
F
Hey, how's it going. Do you want me to go through just the batch discount and batch balancer and so on, or do you want me to go through more?
F
Yeah, totally. And then, if you want to, I don't know if you want to do a bit of an overview beforehand around the gas decisions and so on, but I can also take it.
F
Great. So Zen and I and the others have been looking at the cryptoeconomic considerations of adding aggregation of proofs. There's a whole bunch of different incentive considerations with a change of this magnitude, adding this much onboarding capacity and so on. We summarized a lot of those in the incentive considerations section of the FIP, which I think everybody's probably read through.
F
So I'm going to maybe just talk about the mechanism here, instead of going through the incentive considerations, but I'm happy to dive into those if you find them useful. The mechanism in question adds a gas charge applied to aggregations, and the way we calculate it is with a batch balancer target and a discount. The goal of that structure is to create a gas charge that applies to aggregate proofs at basically all times. It effectively creates a gas lane for the aggregate proofs, so that they're not in the same gas lane as the rest of the block, but it balances the incentive considerations between being able to aggregate and get this massive cost saving, versus putting things directly in the normal gas lane.
F
The parameters are set to balance between these, such that we can push a bunch of the gas savings to a lot of the other messages on the network, things like adding deals and sends and all kinds of other things that people will want to use the chain capacity in the main lane for, while also keeping the gas charges for aggregation pretty reasonable.
F
The batch balancer works with a discount in this structure where, while the base fee is below a certain target, the batch balancer acts as a minimum cost for aggregation.
F
If you look at the graphs in the incentive alignment section, you can see how, when the base fee is below a certain target (that's the first graph), the cost of just putting a proof directly on the chain, a single proof instead of an aggregate, is the cheapest way to do it. That's basically the case while the chain is not saturated, while the base fee is low. Then, as the chain gets fuller and fuller and the base fee increases, at some point it becomes much more rational to switch over to aggregating proofs, and at that point most of the activity shifts.
F
If people want to keep adding more and more onboarding and storage, then they'll shift most of the proofs towards aggregation, because that's the much cheaper way to do it gas-wise, and as the base fee keeps going up, that gets relatively cheaper and cheaper. So this should greatly increase the onboarding capacity, which is great, but it also balances incentives: achieving miner fairness (this is described in the FIP), sharing the gas costs, paying the network for onboarding this much greater amount of storage, and making base fee spiking attacks harder.
F
There's also this nasty attack where, if you created a very, very cheap way to onboard storage, a strategy might be effective where you spike the base fee up in the normal lane, and then you can keep aggregating very cheaply while not being affected by that base fee spike.
F
If you have questions about the mechanism, I can take them. The change itself is relatively simple: there's just this gas cost computed whenever there's an aggregation of proofs. We figure out the gas cost for all of the proofs going into the aggregation, in addition to the actual message on the chain, and then that gas cost is applied separately, with an expression in the FIP.
F
Yes, it's within the batch gas charge section, which I'll link here. So maybe I'll walk through this function. This batch gas charge is where most of this is set up, and you can think of it as taking two parameters: the number of proofs that are batched and the current base fee. The batch discount, batch balancer, and single proof gas usage are basically network parameters, and you can think of those as constants.
F
Then we just calculate the batch gas charge by taking the number of proofs being batched in this particular proof aggregate, and we compute the batch gas fee for this proof, which is the max of the balancer and the current base fee. The gas charge is then that batch gas fee, times the single proof gas usage (which is that network parameter), times the number of proofs being batched, times the discount.
F
So in practice, when the base fee is above the target, this makes the aggregate proofs much cheaper, and that's passing the gas cost savings on that way.
F
But when the base fee goes lower, which means there's plenty of capacity in the chain, then most of the usage should move back into the main lane. The way we pay the gas fee is to just do a message send to the burn actor, f099. So it's a fairly simple change, but how the mechanism works and what it's intended to do is complex, so I'm happy to talk through it.
B
Is there a PR associated with this? I feel like this was added after FIP-0013 was implemented, if I understand correctly. I just kind of want to understand which parts of Lotus and specs-actors have changed, because it doesn't sound like gas calculation has changed in the gas table itself, but rather in the VM during message execution. Am I understanding that correctly?
A
The PR is FIPs PR #100, for the benefit of someone watching the recording. While Zen pulls up the PR: the thing to note is that this change happens essentially entirely in actors. From an integration perspective, the new thing is that the runtime interface now exposes the network base fee to actors, which is information that was always available; it just wasn't exposed before. So that's the integration aspect that this mechanism requires.
A
This was easy to do in Lotus, but hopefully it is easy to do for the other implementations as well.
H
Yeah, I have a question. I'm from 1475, by the way. Based on my understanding of the gas charge equation, the gas charge amount increases linearly with the number of proofs included in the aggregate proof message. So as a miner, I don't feel motivated to include more proofs in the aggregate proof, because whether I include 10 proofs or 1000 proofs, I am charged equally per proof. Is that understanding right?
F
Well, there's also the charge for the message itself. So, for example, if your goal is to add, say, a thousand sectors, and you are choosing between sending one message with those thousand, or, say, 10 messages with 100 each: sending one message with a thousand is going to be way cheaper, because you're not going to pay the gas cost of the actual message on the chain nine additional times; you're only going to pay it once. So it is still cheaper and rational to aggregate more. The way the balancer works is that you shift to not aggregating when the base fee is below the target; that's the only case when it becomes rational to not aggregate proofs. Otherwise, adding an additional proof to an aggregate is just cheaper for you.
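The amortization argument above can be made concrete with a rough cost model. All constants here are assumed placeholder values, and the model ignores everything except the per-message overhead and the linear batch charge, so this is a sketch of the reasoning, not the protocol's actual pricing:

```python
# Rough model of choosing between one big aggregate and several small ones.
# ASSUMPTION: all three constants are illustrative, not real network parameters.
BATCH_BALANCER = 5 * 10**9         # floor on the effective gas fee
SINGLE_PROOF_GAS = 50_000_000      # estimated gas of one un-aggregated proof
MESSAGE_OVERHEAD_GAS = 30_000_000  # on-chain cost of one aggregate message

def batch_gas_charge(num_proofs: int, base_fee: int) -> int:
    # linear in num_proofs, as discussed above (1/20 discount for illustration)
    return max(base_fee, BATCH_BALANCER) * SINGLE_PROOF_GAS * num_proofs // 20

def total_cost(num_messages: int, proofs_per_message: int, base_fee: int) -> int:
    # each message pays its own on-chain overhead plus its batch charge
    per_message = (MESSAGE_OVERHEAD_GAS * base_fee
                   + batch_gas_charge(proofs_per_message, base_fee))
    return num_messages * per_message

base_fee = 10**9
one_big = total_cost(1, 1000, base_fee)    # one message carrying 1000 proofs
ten_small = total_cost(10, 100, base_fee)  # ten messages of 100 proofs each
```

The batch charge comes out identical either way, since it is linear in the total proof count; the ten-message plan pays the message overhead nine extra times, which is the cost difference Juan is pointing at.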
A
And a kind of related question, and this goes to the motivation, just to reiterate: the cost charged through this mechanism does not affect the overall block limit, right? It's not going to shrink the capacity, the number of messages that can actually go into a block. Is that correct?
F
Yeah, so this batch gas charge does not add gas costs toward the block itself or its limit. If you hold all the other messages equal and just add proofs into an aggregate, the proof going into the proof message is still the same size, and so it's not going to increase the block's gas usage.
A
So fundamentally it doesn't affect capacity, but instead allows these benefits to spill over onto those other messages, as you said.
A
Sounds good, cool. Any other questions about this mechanism before Zen jumps into the other parameter tweaking that happened?
G
Yes, that's the full change for everything in FIPs PR #100. It includes the changes we just talked about, and it also includes a few parameter tweaks, so I can whip through those pretty quickly. They correspond to what we talked about two weeks ago, I believe, and also in the implementers channel, though with maybe a little bit of difference.
G
So we settled on a max aggregate batch size of 819, and did some final gas estimation to motivate both the gas charge function and the minimum batch size for aggregates that's rational.
G
The gas charge function turns out to be a little bit more complex than we initially thought. By doing empirical measurement of how long it takes to verify aggregates of different sizes, we settled on a logarithmic major component with a minor linear component. You can look at the whole table and the spec for the function in FIPs PR #100, and it's also up in FIP-0013 now. Next, the aggregate batch size changes, which we went through.
G
The other thing we added: we added headroom to the pre-commit batch size, because there didn't seem to be much reason to not allow greater savings with greater batch sizes, so we increased it from 32 to 256 in this PR. And finally, we extended the max proof commit duration to 30 days, from our original extension of six days, in order to allow miners with smaller onboarding capacity to get the full benefit of batching all the way up to the aggregate batch size.
G
That is, if they're willing to wait the long amount of time it takes to fill up aggregates of that size. I guess the last thing is that we set the maximum proof aggregate size to double what we think it needs to be. But that's everything right there.
A
Excellent, thank you. Okay, who has questions about these changes around v5 actors and the FIPs going into them?
A
Okay, sounds good. Feel free to take some time digesting the latest changes and bring up any questions in the fil-implementers channel. A couple of other things to flag: we're including FIP-0015 in the Hyperdrive upgrade. This reverts FIP-0009, which exempted direct, successful WindowPoSt messages from base fee burn. That was a stopgap we put in back in December, and we've had, like, three major improvements since, in particular optimistic WindowPoSt acceptance, that kind of obviate the need for it at this point.
So I guess the last thing to talk about in terms of network v13 and Hyperdrive stuff is the actual execution. Like I said, the Lotus RC that implements all of this was released yesterday, and hopefully the mining community in particular begins integration work on it. We'll put it up on some private test networks, and Interopnet, for this group of people, is live or going live shortly, so we'll do some testing there, and then next we'll go to Calibration.
A
That's the public test network, probably sometime next week or so, depending on what we find and how testing goes, with an actual mainnet network upgrade probably, as we've been saying, in the second half of June; June 20th or so is roughly what we're looking at right now. We're going to need to flesh out a bit of a testing plan to make sure the lotus-miner side of things is good, so those are kind of our immediate next steps, but we'll see how things go.
G
Okay, I'll jump in really quick to say that the test vector work generated from actors tests is not totally there yet, but it's expected to land early next week. So I hope that helps with implementing v13 for the other implementations.
A
The details themselves will be coming out shortly, but if folks in the community have questions about it, they're welcome to join or pre-submit questions. Also, if folks in this group would like to attend, so that we have other implementations in the AMA, that would be great too. We'll reach out in the fil-implementers channel to schedule some of that stuff.
F
A great place to discuss a lot of the questions around gas usage and the gas charge and aggregations and how they stack up. I think that would be a really good place for miners and other users to ask about how that would work, so definitely be there.
A
Sounds good. You can always take questions async in the implementers channel. All right, so I wanted to welcome representatives from 1475 to talk about these two PRs that we have received. It's nice that more and more of this is being driven by the community.
H
Sure. When we created this PR, we realized that there was already a discussion regarding that topic, very similar, back in January of this year. I think we share the same motivations with that discussion. There are two motivations here, based on my understanding.
H
The first one is: if miners want to pack real data in CC sectors, they should be allowed to do so. But currently the verification part for a CC sector requires that zero data be used for sealing CC sectors.
H
The second motivation is making it possible to deal with this data post-seal, which is not included in this FIP, by the way. If it's okay, I will talk a little bit further, for another one or two minutes, about the extension part of this PR. For this PR itself, again, we want to allow miners to pack any data they want in a CC sector. The detailed steps involved: first, of course, the miner has to have their data file in their file system one way or another.
H
Then we should provide the CLI command for them to select the data file to pack into the CC sectors. When the sealing state machine has that information about the data file, it will change the current AddPiece logic to pack this particular data file, or multiple data files, into the CC sectors. And one thing missing in the PR itself is that we also should add a key-value mapping to the key store, like sector to piece CID and start and end offsets.
A
Sorry, just a quick interruption. In the interest of time, and for the FIP process itself, I think it's good to focus on the protocol-level things that need to change. Stuff that's just a change in Lotus or some other implementation is kind of an implementation detail.
H
Makes sense, yeah. So in the consensus part, we shall add the unsealed CID in the pre-commit information, so that later, when we prove sectors, we can leverage it. Then we will also need to add the unsealed CID in the SealVerifyInfo structure, as well as the sector on-chain info.
H
So with that, the verify-info method in specs-actors, when getting the verify info, will just check if we have deal IDs for such a particular sector.
H
If we have deal IDs, we go back to the market actor and fetch the piece CIDs array, and construct the actual unsealed CID and verify against that as before; if not, we propose the change to add an unsealed CID in the SealVerifyInfo. If it's not empty, the logic will choose it to verify against as the CommD in the last part; if it's empty, then we do things as they are today. And I think the last part is that we would like to put the unsealed CID as an optional field in the sector on-chain info. That's probably extra overhead for storage.
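The verification fallback being proposed can be sketched as follows. The names and types here are hypothetical Python stand-ins for the actual specs-actors structures, and the CID values are placeholder strings, not real CIDs:

```python
from dataclasses import dataclass, field
from typing import List, Optional

# ASSUMPTION: placeholder for the well-known unsealed CID of an all-zeros CC sector.
ZERO_COMMD = "commd:zero"

@dataclass
class SealVerifyInfo:
    sealed_cid: str                                           # CommR
    deal_piece_cids: List[str] = field(default_factory=list)  # pieces from on-chain deals
    unsealed_cid: Optional[str] = None                        # proposed new optional field

def derive_commd_from_pieces(piece_cids: List[str]) -> str:
    # stand-in for the real piece-tree (CommD) computation
    return "commd(" + ",".join(piece_cids) + ")"

def commd_for_verification(info: SealVerifyInfo) -> str:
    """Pick the CommD the seal proof is checked against, per the proposal:
    an explicitly supplied unsealed CID wins; otherwise fall back to the
    existing logic (deal pieces if present, else the zero-data CommD)."""
    if info.unsealed_cid is not None:
        return info.unsealed_cid
    if info.deal_piece_cids:
        return derive_commd_from_pieces(info.deal_piece_cids)
    return ZERO_COMMD
```

The design point is that today only the two fallback branches exist, which is why a CC sector must be sealed over zero data; the optional field opens the first branch for miner-chosen data.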
H
But again, I think that will pave the way to the next part, in which we can verify a piece CID.
H
Such that a client can propose off-chain, verify the piece CID against the unsealed CID, and the notary can determine if this is worth making verified data.
A
Yeah, thanks a lot for providing the PR too. I think it'll be good to maybe split them the way you have: a first phase, which is what's in this PR, and then a second phase of possibly future improvements where, say, you can upgrade to be verified and so on. So, kicking it off: I'm personally a fan of this idea.
A
I've thought it would be good to have for a while, and I'm glad the discussion is moving forward a little bit; I know not everyone feels that way. In terms of thoughts, I think there are three factors here that we need to think about. First off, whether it's good to have, whether it's a benefit for the network in general. Second, some of the implementation impacts.
A
For example, how much does a new field impact state size, and does it affect the time it takes to process a pre-commit or prove-commit message? And the third thing to consider is security: does this change for some reason make it easier to fake data or fake storage? But on the whole, I personally think it's a good idea.
B
I don't think I quite understand this. So is the idea adding, inside of a pre-commit message, a new field that has a list of piece CIDs, but those pieces don't actually get deals, or those files don't actually get sealed? I don't really understand that part.
G
I can try to rephrase it, to make sure I understand too. I think the idea is: currently a CC sector with no deals has to have a certain precise data value. The idea is to specify piece CIDs that are not associated with deal IDs, which can then be used to create a CommD which is unique to this sector, and then seal over that. Currently it has to be all zeros in the data, I think. Does that line up with what you're saying, am I understanding correctly?
F
Good question. Why is producing one single unverified deal not good enough? Is it just the complexity of having a client producing a deal and so on, or is there something about the initial sealing process that makes that pathway too hard, and this one more attractive?
F
So a different pathway to achieve the same thing you're trying to do is to make one very large unverified deal for the whole sector and add that at the very beginning. Is that too complex or difficult here? I guess maybe I don't fully understand why that pathway is too expensive or too hard to follow, because it seems to arrive at a similar, if not the same, outcome.
H
Right, that's of course a possible approach to achieve the same thing. The only thing is that we think a deal is probably expensive, and with more and more verified data and deals...
H
Yes, we can hear you well. Yeah, I think, moving forward, although with this coming upgrade to v13 the blocks will become emptier, with more and more deals the block limit will gradually be filled, and considering that in the future we will probably onboard smart contracts, which will also consume a lot of the gas limit, I suppose this is just about saving some space in the block.
A
Yeah, and I'd kind of add that there are the additional restrictions that come with a deal, I suppose, where there's the cost of the message itself, and then you do have to put up some collateral. Maybe you set the storage price to zero, but those are still some additional things you have to think about. So I think I see an argument for why this is better.
F
Yeah, I think just making that clear in the FIP would be very useful for everybody else, to maybe compare the two pathways: here's the pathway of doing an unverified deal, and here's how expensive or problematic it is, and therefore why this other thing would be useful. Because this is effectively an implicit deal.
F
So even if it's trying to bypass the deal machinery, it would be interesting to see if we can classify it as a deal somehow on the chain and still achieve your goal of making it cheaper and less annoying, because I think the goal of allowing more data onboarding is great.
F
I think having an implicit deal that the chain doesn't know is a deal until later is a little bit painful, though, and it can create other annoyances, because a lot of people are tracking the number of deals as a proxy for data usage. So it would be good to get that information on the chain, but if that's too expensive, then understood. Either way, I think describing all this in the FIP would be useful for people.
H
All right, I will do some quantitative research on that.
A
Okay, yeah. Also, I don't know if I agree that this is strictly an implicit deal; it's a deal that you're making with yourself, in some sense. But the argument I find most persuasive is that if I, as a miner, want to store the text of my diary in my sector,
A
then that is something I should be able to do, which I currently cannot do without making a deal with myself and taking on those complications. So that's what I see motivating this, mostly.
B
Yeah, I mean, one thing that I think is also missing from the FIP, touching on what you just talked about, Ayush, is the incentive considerations. That portion of the FIP says there's no impact on existing incentives, but clearly there is if a miner is able to make deals with themselves. Going back to the whole implicit deal question:
B
maybe, instead of going through the storage market mechanism, as a client I can go directly to a miner, and then they'll be storing all this while bypassing the whole market machinery, as was said before. So I think there are potentially impacts on incentives, and I'd be a little more confident in this if that was analyzed a bit further.
A
Yeah, agreed, and I feel the same way about the security considerations. I think there is some thinking, some work, to be done before we can feel completely good about it. But we can write down all the things that we're flagging as action items, bring that to the FIP, and iterate on it there. I think it's a really good idea, and thanks again for opening it.
H
Right, sorry. Go ahead.
G
Yeah, sorry, one last thing to think about when updating the FIP, in response to the question of being able to track data that's on chain in this way: in order for this to work at all, we're going to need to include a piece size along with the piece CIDs. So while it will live in a different place on chain, all this information will still exist. I think that strengthens the argument for using this. Just a thought.
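The record described here, a piece CID paired with its size, might look like the following minimal sketch. The type and field names are illustrative assumptions, not from the FIP; the padding arithmetic follows Filecoin's usual relationship between padded and unpadded piece sizes (unpadded = padded - padded/128).

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PieceInfo:
    # CID of the piece (CommP), kept as a string here for illustration.
    piece_cid: str
    # Padded piece size in bytes; in Filecoin this is a power of two >= 128.
    padded_size: int

    def __post_init__(self):
        # Reject sizes that are not a power of two or are below the minimum.
        if self.padded_size < 128 or self.padded_size & (self.padded_size - 1):
            raise ValueError("padded piece size must be a power of two >= 128")

    def unpadded_size(self) -> int:
        # Drop the 2 bits of padding per 254-bit block: padded - padded/128.
        return self.padded_size - self.padded_size // 128
```

For example, a 32 GiB padded piece (2^35 bytes) yields an unpadded payload of 2^35 - 2^28 bytes, which is the quantity a tracking service would sum to estimate data stored this way.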
H
Okay, yeah. I think we can definitely add the piece CID to the on-chain info, unless you all feel that would be too expensive for miners to store. Anyway, about that last argument regarding the incentive and security considerations, I think we will have to consider them.
H
From my perspective, and I might be wrong, the security and incentive impact of miners self-including data is trivial, because there is no mechanism to allow them to upgrade these sectors into verified deals. So that should be,
H
you know, trivial, from my perspective. Of course, like I said, the second motivation is to pave the way for the follow-up FIP, which would make this part, the piece, upgradable, by a notary or something; that one should be taken seriously, to keep the chain secure. So that's just my perspective on this particular FIP itself.
H
I think it's just not such a serious impact on the security side. Again, that's just my thought on this.
A
Yep, I agree. I think we can include these in the action items for the FIP. I think it would be really useful to introduce a new "future improvements" or "what this unlocks" section in the FIP itself, in which we can talk about this idea that you bring up, and the idea that pawn brings up, which is the ability to upgrade an unverified deal to a verified deal after the fact; that's related to this conversation.
A
So on that FIP I'll compile a list of the action items that we're talking about, and then we'll take this conversation to the FIP. Thank you, cool. And yes, as Jennifer flags, I think the community in general will be interested in this, so we can definitely discuss it in the fil-fips channel in the Filecoin Slack, which is the right place for this conversation to keep happening. Cool. In the interest of time,
A
I do want to get to the second FIP PR that you have, so maybe let's put a pin in the conversations around packing arbitrary data into CC sectors for now and move on to the alleviate-penalties discussion that you started.
H
Right. So in that PR I actually included two FIPs. The first one is an emergency upgrade for the network for when a government bans mining across a country, but from recent updates it seems very unlikely to go to that extreme.
H
So we can just put that first FIP on hold, I guess, unless things change dramatically in the coming days or weeks. But the second one, which allows a miner to flexibly choose a start date and an end date to stop submitting Window PoSt, I think we still need to consider, because besides policy or government changes there are other
H
possibilities: a natural disaster could cut off the infrastructure supply for a while, and although that's rare, in those circumstances I think it's still nice to allow a miner to choose a maintenance window. So basically, for the consensus part of this FIP,
H
we will add two methods, one to stop and one to resume, indicating that a miner wants to pause their regular operation and later resume it. As soon as a stop message is submitted, all the miner's power will be zeroed out, and the only operation the miner is then allowed is to submit a message to the resume method. Other operations, like pre-commit, prove-commit, or Window PoSt, will be prohibited.
H
That's the basic idea of this, and we can also allow miners to put up more pledge to extend the 14-day window before mandatory sector termination. So that's basically what is proposed in this FIP.
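The stop/resume mechanism described above can be sketched as a small state machine. Everything here is an illustrative assumption: the discussion names only the two methods, the power-zeroing on stop, and the prohibition of regular operations while stopped, not any concrete implementation or method signatures.

```python
class MinerState:
    """Toy model of the proposed maintenance-window mechanism (hypothetical)."""

    def __init__(self, power: int):
        self.power = power          # current claimed power
        self._saved_power = 0       # power remembered across maintenance
        self.paused = False

    def stop(self):
        # Entering maintenance zeroes the miner's power immediately.
        if self.paused:
            raise RuntimeError("already in maintenance")
        self._saved_power, self.power = self.power, 0
        self.paused = True

    def resume(self):
        # Resume is the only message accepted while paused; it restores power.
        if not self.paused:
            raise RuntimeError("not in maintenance")
        self.power = self._saved_power
        self.paused = False

    def submit_window_post(self):
        # Regular operations (pre-commit, prove-commit, Window PoSt) are
        # prohibited during maintenance.
        if self.paused:
            raise RuntimeError("operation prohibited during maintenance")
        return True
```

A real version would also need the fee schedule and the pledge-backed extension of the 14-day termination window discussed above, which this sketch deliberately omits.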
A
A quick question. I find it very persuasive that, yes, any number of crises could make something like this necessary. Is the proposal that you can send a message to these pause and resume methods at any time, or is the proposal that we have these methods in the background and also put in some trigger that the network can pull, so that,
H
I think I currently don't propose any automatic trigger mechanism here, because the assumption is that the bad event happens locally, so the miner can still submit messages. Even if their IDC has been cut off, they can still submit the stop message from elsewhere, as long as they have their private key.
A
Yep, yeah. In that case, I think my chief concern is: this is an interesting proposal, and there are definitely cases where it's useful, but it somewhat undermines one of Filecoin's underlying principles, that your data is secure and being proven every 24 hours, if there's now a mechanism where miners can declare that they're not going to be proving their data but want to be exempt from the usual penalties.
H
Well, for the deal part, I think the payment channel will still be running, and serving the data, of course, will not be working, because whether you submit the stop message or not, your infrastructure has presumably been cut off, so retrieval will definitely not work. But on the good side,
H
if the cutoff lasts longer than 14 days, under the current mechanism the client loses their data on that miner permanently, but with this there's still a chance for the miner to bring the client's data back.
B
I have a question with respect to this. I feel like this is a real problem, say if an earthquake hits a big area or something, but I'm wondering what kind of security considerations we should take here. I think it's quite dangerous to take off, say, half of the network's storage power; I can see weird attacks happening from this. So I'm just wondering whether this is the best solution, or whether there's another alternative for
H
this. Well, I think the assumption is that the disaster event happens locally, not across a continent or something like that. With that said, maybe there is a better way to do it, but currently this is our proposal to handle it, and of course the security parameters need to be considered.
H
So during the maintenance window, I think penalties still need to be applied to that miner, but at what level can the fee still maintain security, that is, a sufficient negative incentive against miners abusing this mechanism?
F
Yeah. I think, in general, being able to help miners through disasters is a great idea; there are a lot of different kinds of events that can happen really suddenly, and having structures that can help in those situations is really good. But the ability to turn off a huge section of the network at will, with no fee or a very small fee, is definitely a security and economic consideration. There are all kinds of potential problems there.
F
As Eric was saying, there are potentially a bunch of security attacks that become viable at that point, so I think we need to do analysis on that kind of stuff. I also think there could be something like a verifiable disaster oracle, where there can be a network-wide check that there is indeed some direct problem affecting miners in specific regions.
F
I think the infrastructure for making that kind of thing possible is being built right now, and it's going to take a while, but with things like bridges connecting to other chains, getting contracts, and the FIPs and proposals to add the EVM and other things like that, having an oracle that declares disasters in regions would then make this much easier.
A
Okay, yeah. I think we all agree that it's well motivated; the use case is very real here. But it's not an easy problem to solve, it's a tricky thing, it's easily exploitable, and there's a whole bunch of considerations. So I think we'll keep the discussion going for now, for sure, and again summarize the conversation that we had here today and get input from the community. But this one, I think, is a little harder to move on for now.
A
Fine, that's good! Yes, so we are past time; I want to be mindful of that. But if folks have questions, or things that are blocking them, feel free to bring them up now, and also feel free to drop if you need to.
A
Cool, all right. Yeah, as always, feel free to bring things up async in Slack. Thanks a lot for your time. This was a meaty call; I like how much stuff we discussed. And thank you very much for joining from 1475.