From YouTube: September 2023 OpenZFS Leadership Meeting
Agenda: 2.2 release; Fast Dedup; scrubbing specific txgs; reducing pool import time; BRT for file concatenation; etc.
full notes: https://docs.google.com/document/d/1w2jv2XVYFmBVvG1EGf-9A5HBVsjAYoLIFZAnWHhV-BM/edit#
A: All right, let's get started. Welcome to the September 2023 OpenZFS leadership meeting. It looks like we're pretty light on the agenda today, at least in the doc, so I'll just mention briefly: the OpenZFS Developer Summit is going to be held in October, pretty soon now, not too far off. I've received the talk submissions; we'll be getting back to the speakers by the end of the week. All right, so then I'll open the floor to other topics of discussion.
A: I see that Brian and... yeah, Brian and Tony are both here. If anybody has questions about the upcoming release schedule, they'd be good folks to answer them.
C: Of course, I want to thank you for providing the code walkthrough last week, and I'm looking forward to getting feedback from both Mark and Brian. It's been rebased; I rebased yesterday and it's looking good. So after we get some feedback, I think it'll be good to go.
C: I did note that the FreeBSD build bot is failing again in the configuration phase, unrelated to my changes. I don't know what that's about.
D: Take a look and see if it's failing for an existing reason or something with your PR; I don't know. Did you get a chance to look and see? Yeah.
D: Well, I can talk a little bit about the upcoming 2.2 release. We've put together four RCs for it now that have been released; the last RC was cut last week. That's hopefully close to the final RC; things are looking pretty good there. There may be some additional patches we end up pulling in, but hopefully not too many, and we're looking to cut the final release in the next couple of weeks.
A: So this could be very exciting timing, with the release, and then RAIDZ expansion going into the main branch right before the conference. Exciting new stuff to talk about.
F: Partially related to the release: I mentioned it last week on another call. FreeBSD was updated recently to follow the ZFS 2.2 branch pretty quickly, and several people who are doing massive package builds found several issues. A couple were probably mine, in my ZIL work; those should be fixed now, as far as I know, and those are working okay. There was also a report for block cloning that also hit a problem; it's also already patched, and so far it looks okay.
D: I know we've systematically been cherry-picking things back into the RC for issues that we've seen, so it's been stabilizing, it sounds like.
F: Yeah, there's one more report, specifically somehow happening only on EC2 with pre-made images of FreeBSD, and it happens only with ZFS. It looks like some memory corruption, but it appears immediately on boot, completely quickly, and nowhere else. I saw only backtraces; I don't know what to blame, and how can it be specific to EC2 and nothing else? I have no explanation so far, and I don't have an EC2 instance to try to look into it.
F: I invited today Nadolski, my colleague from iXsystems, who is working on a parallel sync implementation, just to improve it: to not sync too much, but also not sync too little. It's also related to the multiple allocators of ZFS, and it addresses a few issues. So I'd like to bring some attention to his work, because it's definitely interesting, I think, and it would benefit from some eyes on it.
G: I just wanted to put in a plug to see if anybody had a chance to look over the PR. Like I was saying, we'd just like to get some more eyes on that, and I'm happy to go over it in more detail, either here or offline, whatever you like.
A: Yeah, so this is: you're syncing, like in spa_sync, you're syncing a bunch of datasets in parallel, is that right?
A: Cool, yeah. That sounds kind of related to the work that I did a while back to have the dnodes within a dataset get synced in parallel; this is kind of extending that to work across multiple datasets. So that's cool; yeah, I might be able to take a look at that. And then, you said, how does it relate to the allocators?
G: So a couple of other parts of that change are to basically increase the number of write issue taskqs used to match the number of allocators, and also to match the number of sync threads. The idea of all those changes is to reduce the lock contention that can occur in a given allocator, as that's been shown to be sort of a bottleneck during sync.
A: So is that, like, effectively increasing the number of write issue threads to the number of allocators, which is normally four, is that right?
G: Four, so it's currently creating four sync threads, and then it's increased the number of write issue taskqs for the standard priority, from one in the ordinary case, to match the number of allocators. So it's increasing the parallelism at that point too.
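(A rough way to picture what G describes, as a minimal Python sketch. The class, function names, and counts here are illustrative stand-ins, not the actual OpenZFS code: the point is that each allocator has its own lock, so spreading write issuers across one worker per allocator reduces contention on any single allocator's lock.)

    import threading

    NUM_ALLOCATORS = 4  # the default count mentioned in the discussion

    class Allocator:
        """Stand-in for a metaslab allocator guarded by its own lock."""
        def __init__(self, idx):
            self.idx = idx
            self.lock = threading.Lock()
            self.allocated = 0

        def alloc(self, size):
            # All writers funneled into one allocator would serialize here;
            # spreading writers across allocators reduces this contention.
            with self.lock:
                self.allocated += size

    allocators = [Allocator(i) for i in range(NUM_ALLOCATORS)]

    def issue_write(block_id, size):
        # Writes are spread across allocators by block id, so concurrent
        # issuers mostly contend on different locks.
        allocators[block_id % NUM_ALLOCATORS].alloc(size)

    threads = [threading.Thread(target=issue_write, args=(i, 8192)) for i in range(16)]
    for t in threads: t.start()
    for t in threads: t.join()
    print([a.allocated for a in allocators])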
H: Rob Norris from my team can find some time to look at those changes. I know he'd looked at Alexander's ZIL v2, or whatever it was called, from last month, and he's been doing a bunch of work on our side on some ZIL-related stuff, so he's got a lot of context that he can apply to your review.
H: So first up is the work on dedup. Rob and I have been applying some of what you talked about last month, Matt. Basically, the concept we have right now is that there are two AVL trees, an active one and a syncing one. We accumulate log entries that we've logged to the DDT and keep that AVL sorted by hash order, and then, once it reaches something like 50% of the amount of memory we want to spend on the DDT log, we switch that one to be the syncing one and make a fresh one active to start taking any new writes, and we can start flushing out those log entries in hash order to the ZAP.
H: So we're getting the maximum amortization, but we're able to append checkpoints to the log, you know, "we got up to this hash", while walking through that AVL. Eventually, once we get to the end, we can free the AVL, truncate that log, and move on to the next one, and do that in a way that solves the problem...
H: ...we had with the previous design, about how to checkpoint and not sync an entire shard of the AVL at once. This way it allows us to be more incremental, without having to reapply the log in kind of odd order, and it focuses on, like you mentioned, having it in hash order, so that we're maximizing the amortization of the writes to the indirect blocks and so on in this ZAP. Rob, did you have any more to add, maybe some flavor on what you've been doing there?
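(A minimal Python sketch of the two-tree scheme just described. The names and the tiny threshold are made up for illustration; the real fast-dedup code uses AVL trees keyed on the DDT hash: entries accumulate in an active structure sorted by hash, the full one is frozen for syncing, and it is flushed to the ZAP in hash order with a checkpoint recording how far the flush got.)

    import bisect

    MEM_BUDGET_ENTRIES = 4  # tiny stand-in for "50% of the DDT log memory budget"

    class DedupLog:
        def __init__(self):
            self.active = []        # hash-sorted log entries taking new writes
            self.syncing = []       # frozen log being flushed to the ZAP
            self.checkpoint = None  # last hash flushed, so flushing can resume

        def append(self, entry_hash, refcount):
            bisect.insort(self.active, (entry_hash, refcount))
            if len(self.active) >= MEM_BUDGET_ENTRIES and not self.syncing:
                # Swap: freeze the active tree for flushing, start a fresh one.
                self.syncing, self.active = self.active, []

        def flush_some(self, zap, n):
            # Flush up to n entries in hash order, amortizing ZAP writes,
            # and record a checkpoint so we can resume next txg.
            batch, self.syncing = self.syncing[:n], self.syncing[n:]
            for h, rc in batch:
                zap[h] = rc
                self.checkpoint = h
            if not self.syncing:
                self.checkpoint = None  # log fully flushed; truncate it

    zap = {}
    log = DedupLog()
    for h in [7, 3, 9, 1, 5]:
        log.append(h, 2)
    log.flush_some(zap, 2)
    print(sorted(zap), log.checkpoint)  # flushed in hash order, resumable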
I: I have some questions, and I would appreciate some ideas on how to do a couple of things, but I'll give anyone a chance to speak to what you just said first.
I: Sounds good, yeah. So the thing we've been looking at a bit is basically when to actually do these things, like when to swap the syncing list to be the flushing list, or truncate the log.
I: There are too many words in there. Which, for the most part right now, is just sort of based on size, and whether we haven't done it for a while; it's not fully fleshed out. But the real thing I was trying to figure out this week was how much to flush each transaction.
I: We initially thought to do something a bit like what scrub and resilver and other scans do, where they say: this is the amount of time we can spend. But those things have the advantage that they do their own I/O, so they can, you know, have a little carve-out on the sync thread to be able to take some of that time, whereas here we're just writing to ZAPs, so we're writing to DMU objects.
I: So obviously the actual calls to, you know, zap_update or whatever are instant, because they're all memory-backed, and we're a long way away from when the I/O is actually done, and that I/O is just mixed up in everything else. Also, one zap update at the top, or n zap updates at the top, does not necessarily correspond to some amount of I/O, so it's kind of hard to see. So the ideas we sort of had were: we could time...
I: ...you know, we could attach timestamps to how long it takes to write a single dnode out, kind of thing, and then look at that and adjust how much we're putting in each transaction until we find a sort of nice point. But it's all action at a distance, and I haven't really had a great idea. I've done some experiments, but I haven't really had a great idea that I liked; there's a lot of threading stuff up and down the stack.
I: So I'm just curious if anyone has a particular thought about that, or another really great strategy for how to decide how much to ship out at a time.
A: So yeah, I definitely hear the difficulty there: you're creating work that will be done later, and you don't know when to stop adding more work onto the pile.
A: So you could just kind of follow that. Say we're going to use, whatever, a gigabyte of RAM for this; that translates to X number of entries, say a million entries in our AVL tree. So we're populating some new AVL tree and we're syncing out the old AVL tree, but we only need to sync out some of it this txg, not the whole thing. So you could look at it and say, okay...
A: ...what's the sum of the number of entries in my tree plus the live one? However much over a million that is, that's what I'm going to sync. So we let ourselves get over a million by, like, one txg's worth or whatever, and that would kind of naturally throttle it, right? Because you're creating work as you go, and then when you actually do those I/Os, that's going to slow down whatever it is that's causing more changes to be accumulated.
A: If you can get something along those lines to work, I think that would probably be the best, and you can have on top of that some kind of minimum, where if the workload is low, then we don't need to really use up the whole gig of RAM; just always push out, like, a thousand or ten thousand...
A: ...some constant, where every chance we get we push out, like, ten thousand of these things, because we know that's not a significant amount of I/O. So that would probably be the best approach, assuming that it kind of works out.
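(That throttle idea reduces to a couple of lines of arithmetic. A tiny sketch, with hypothetical names and numbers: flush whatever the combined logs are over budget, with a cheap minimum per txg.)

    ENTRY_BUDGET = 1_000_000    # e.g. roughly 1 GiB of RAM worth of log entries
    MIN_FLUSH_PER_TXG = 10_000  # cheap constant so an idle pool still drains

    def entries_to_flush(active_entries, syncing_entries):
        """How many log entries to flush this txg.

        Flushing the overage means the logs only exceed the budget by
        about one txg's worth of new entries, which naturally throttles
        writers: the flush I/O slows down whatever is generating entries.
        """
        total = active_entries + syncing_entries
        overage = total - ENTRY_BUDGET
        return max(overage, MIN_FLUSH_PER_TXG)

    print(entries_to_flush(900_000, 80_000))   # under budget: minimum only -> 10000
    print(entries_to_flush(700_000, 350_000))  # 50,000 over budget -> 50000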
A: The other thing that you could look at doing is, because you're doing the zap updates and the actual writes, like syncing out that dnode, in syncing context, I think there's nothing that would prevent you from making some zap updates, then calling dnode_sync, then making some more zap updates and calling dnode_sync again, within the same transaction group.
A: So you could do something like that, where you do, you know, 10,000 zap updates, then do the sync, then do 10,000 more updates and then the sync, and keep doing that until you've done it for, like, five seconds or whatever. That would let you get behavior kind of similar to the time carve-out that the scrubbing does, but I don't really love that; I think it would be acceptable.
A: But it's not great. I wouldn't say that the time carve-out thing of the scrubbing is really great either. There's a similar kind of thing for background freeing, and that carves out one second or whatever. None of those mechanisms are really good; they're just kind of good enough.
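(The time-budget alternative just sketched, as a hypothetical Python outline. The do_batch and do_sync callbacks are stand-ins for the zap updates and the dnode_sync call in syncing context, not the real APIs.)

    import time

    TIME_BUDGET_SEC = 5.0
    BATCH = 10_000

    def flush_with_time_budget(pending, do_batch, do_sync):
        """Apply updates in batches, syncing after each, until the
        wall-clock budget is spent; leftovers wait for the next txg."""
        deadline = time.monotonic() + TIME_BUDGET_SEC
        while pending and time.monotonic() < deadline:
            batch, pending = pending[:BATCH], pending[BATCH:]
            do_batch(batch)  # e.g. ~10,000 zap updates
            do_sync()        # push the dirty dnode out so the I/O time counts
        return pending

    left = flush_with_time_budget(list(range(25_000)), lambda b: None, lambda: None)
    print(len(left))  # 0: three batches fit easily within the budget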
I: That one's interesting, yeah. The max memory, and sort of just having it monitor the ingest rate and that sort of thing, yeah.
I: Yeah, all right, interesting. Thank you for that, very good suggestions. I don't have anything specific to add to it, Alan, but no.
H: And then Don's been looking into the issue you raised last month, Matt, about doing the prune, where we're going to delete some entries from the unique ZAP, and there was concern that that would result in them not getting scrubbed, because of how the scrub works.
C: Yeah, it looks like during a scrub there's a dsl_scan_ddt that scans the duplicate blocks and ignores the unique ones, and then in the top-down scan we have this function called class-contains, and it asks: does the duplicate class contain this block?
C: Yeah, it calls this class-contains, and then, as far as what you brought up about transitioning blocks: when a block transitions, like, as it's transitioning, if a scan or scrub is going on, we'll go ahead and cover it.
C: But as far as scrub goes, yeah, the unique blocks, so yeah, you can delete away.
H: Although I guess that raises questions about Pavel's idea of whether we actually want to keep two separate ZAPs or have just one big ZAP, because then we...
H: Yeah, in particular, like you said, during the scan it's just checking if it's on the duplicate list or on the unique list. Given that, if we store it all in one ZAP, I think we'd still have to have some way to tell them apart, whether it's just the refcount or explicitly storing a class, although I don't know that that makes sense. But yeah, we should make sure that we don't break anything if we look at actually getting it all into one big ZAP.
A: What layer of the cache it's present in wouldn't be that different for the different classes, and so avoiding the second lookup would, in theory, double your performance on those kinds of workloads, because you only have to look at one ZAP versus looking in two.
A: Whether it's the duplicate one or not, but yeah.
A: Yeah, I mean, for writes and frees you have to look in both, unless you hit in the first one that you look in, so reducing the number of ZAPs would help there.
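(The lookup cost being discussed, in sketch form. These dict-based lookups are a hypothetical model, not the real DDT code: with separate unique and duplicate ZAPs, a miss in the first always costs a second lookup, while a single combined ZAP is one lookup either way.)

    def lookup_two_zaps(key, duplicate_zap, unique_zap):
        # Check the duplicate class first, then the unique class: a write
        # or free that misses the first ZAP pays for two lookups.
        if key in duplicate_zap:
            return duplicate_zap[key], 1
        if key in unique_zap:
            return unique_zap[key], 2
        return None, 2

    def lookup_one_zap(key, ddt_zap):
        # One combined ZAP: always a single lookup; the entry itself
        # (e.g. its refcount) says whether the block is unique or duplicate.
        return ddt_zap.get(key), 1

    print(lookup_two_zaps("h1", {}, {"h1": 1}))  # hit, but it took 2 lookups
    print(lookup_one_zap("h1", {"h1": 1}))       # hit in 1 lookup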
H: The other pull request we posted this week was incremental scrub: being able to scrub just a specific transaction range, and having scrub record the last transaction it did scrub. So if you do a regular scrub, it will record the transaction number when it finished, so that you can then, you know, just scrub all blocks that changed since then, but also arbitrary ranges.
H: So you could split a scrub up into smaller chunks. In particular, we're looking at a case where there's a failure of a JBOD, and maybe some data got damaged because of that, and we just want to scrub, you know, 100 transactions on each side of that event, and quickly be able to tell whether the data that was born in this small time range is all okay or not.
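(The core of that idea is just a birth-time filter. A schematic Python sketch, not the PR's actual code: walk the pool and only verify blocks whose birth txg falls in the requested window, e.g. around the JBOD failure.)

    def should_scrub(block_birth_txg, start_txg, end_txg):
        """Incremental scrub predicate: only blocks born inside
        [start_txg, end_txg] are read and verified."""
        return start_txg <= block_birth_txg <= end_txg

    # e.g. a JBOD failed around txg 41_500; check 100 txgs on each side
    blocks = {"a": 41_420, "b": 39_000, "c": 41_580}
    window = (41_400, 41_600)
    print([b for b, birth in blocks.items() if should_scrub(birth, *window)])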
A: Yeah, I wouldn't recommend doing that. Txg numbers are not something that's exposed to users, so, you know, there might be some hidden property or something, maybe, but if I'm a systems administrator and I'm...
A: ...you know, looking at what I have in the man pages, and then somebody says, oh, you can do a scrub with these two txg numbers, it's like, well, what is that? It's some new concept that doesn't really correspond to anything that I know about. It seems like it would be better if it was either, like, wall-clock time, or time...
A: ...since, you know, some amount of minutes in the past, or, you know, maybe based on snapshots or something, where you could say, you know, I took a snapshot, then take another snapshot.
A: Yeah, I don't know if there would be billions, but there could be a lot. But it still might be reasonable to record those; you don't necessarily have to record every single txg, but you could, say, record one txg per minute. Then you have at most one entry per minute, and you can kind of do the math of, well, if this pool lasts for 100 years, how big is this log going to get? And maybe then you adjust it; yeah, not every minute.
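(That "one txg per minute" bookkeeping, sketched in Python. The names are hypothetical; nothing like this exists in ZFS today. The idea is a small map from wall-clock minute to the txg open at that time, so a user-facing "scrub the last 30 minutes" can be translated to a txg range. At one entry per minute, even 100 years is only about 50 million entries.)

    import bisect

    class TxgTimeLog:
        """Records at most one (unix minute, txg) pair per minute."""
        def __init__(self):
            self.minutes = []  # sorted timestamps, minute granularity
            self.txgs = []

        def record(self, now_sec, txg):
            minute = now_sec // 60
            if not self.minutes or self.minutes[-1] != minute:
                self.minutes.append(minute)
                self.txgs.append(txg)

        def txg_at(self, when_sec):
            # Latest recorded txg at or before the requested time.
            i = bisect.bisect_right(self.minutes, when_sec // 60) - 1
            return self.txgs[max(i, 0)]

    log = TxgTimeLog()
    for t, txg in [(0, 100), (60, 240), (120, 390)]:
        log.record(t, txg)
    # "scrub everything written since t=90s" becomes a txg lower bound:
    print(log.txg_at(90))  # 240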
H: For the kind of incremental version, or whatever, it is straightforward: you just do, like, zpool scrub -C, for continue or whatever, and it just scrubs everything that's changed since the last scrub completed. But it does currently offer you the ability to manually specify a start and end transaction group, and maybe we can come up with a decent way of keeping time, or maybe even just kind of an RRD-style thing, you know.
H: I think most ZFS events know the transaction group where they happened. In the case that prompted this, the monitoring system happened to grab the txgs, so we just knew the last hundred transaction groups around when the problem happened, and that was enough for their system to be able to do this. But I think a better user interface on it would be much nicer.
E: Even if the number is, you know, not super meaningful, okay, we would have some sensible default. It's like: scrub 100 txgs on either side of whatever this event was, and internally it knows what the txg is. That might be a more approachable user interface for sysadmins who are just like: something bad happened to my disks, I know that there's an event associated with that, so I'll scrub around that.
H: Yeah, because I think every event has, like, an event ID, and maybe even just being able to specify that and a range or whatever; yeah, that would be a much nicer interface for the user than this kind of implementation-detail magic number that doesn't mean anything.
A: I mean, I don't know what number to type in there, so you might as well just have the default be, well, we'll scrub a little while around there, where we're not going to tell you how long "a while" is, and maybe you can say, like, a tiny while, a little while, or a long while. You know, like you guess; yeah, you can do that.
A: It's a little hokey, and we'll just translate that to, like, 10 txgs, 100 txgs, or a thousand txgs, which again, we don't know how much data was written in that time or how much wall-clock time passed. Yeah, it's very hokey. I feel like either amount of data written or wall-clock time are the things that are relevant to users.
A: So I would try to make it based on those, and if we can't, then, you know, maybe the txg thing is just not a first-class feature; it's a tunable or something.
H: And Matt, you had an idea about using this to defer the import-time data verification, yeah?
F: Until not so long ago, on every pool import, ZFS always scrubbed the last three transaction groups. For metadata, it at least checks that the metadata is intact, and it blocks the import if there's an error, because then it may make sense to not import, or to import a previous txg manually, or as an import option, if something is unusable. But the data scrub was just run opportunistically, just in case it could fix anything; the pool was imported always. And to save some time...
F: ...I practically disabled the data scrub; it's configurable now, but by default it's disabled. But I was wondering whether, if we still want a scrub to be done during import for data, not just metadata, we could maybe use this mechanism to run it, if we don't have any other scrub running right now, or just for explicit requests. I don't know whether we really need it that much, but maybe somebody thinks we do.
A: It's hard for me to say how often that would happen. I could definitely believe, you know, disks lying about things, being unstable storage, or people configuring the system so that that doesn't really take effect. But you would think that there are plenty of pieces of metadata that are still getting verified, and that some of those would have been hit by that problem as well.
F: Oh yeah, it would be good, of course, to do the metadata check asynchronously somehow, or at least not to delay pool import. But as I said, in case of metadata corruption we may want to not import. Yeah, production would not be happy, but at least it will not be game over, as it would be if we import and start updating something that we can no longer maintain.
F: Actually, speaking about pool import time: the DDT log flush was mentioned, but I was actually thinking not about the DDT log flush, but about the spacemap log. I was thinking about maybe adding some flag to zpool sync, or some other subcommand of zpool, to forcefully flush all those logs. I was thinking about the metaslab log, but it could also be applied to the DDT log: just before we're doing some failover or reboot, or anything where we know we will soon need to import the pool.
F: A pool export is generally a slower process, because it destroys things in memory: it tries to free the ARC, and simply freeing half a terabyte of ARC in eight-kilobyte blocks just takes forever. We don't have that time during the export; zpool export actually flushes the spacemap log already, but the question is the case where we cannot afford a full export.
F: So it would be good just to be able to do a quick, parallel, non-disruptive log flush, and after that we could kill the system for a failover and import it faster.
A: Yeah, I think that's a good idea: to have a function that's like "please minimize pool import time". And yeah, zpool export would do that always, reboot would do that always, and then you'd also have a manual trigger for this. Yeah, that sounds super useful.
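(What that manual trigger might amount to, schematically. This is entirely hypothetical; no such zpool subcommand or these method names exist today. The point is to flush the deferred logs while the pool is still running normally, skipping the slow parts of a full export, so the next import has nothing to replay.)

    def prepare_fast_import(pool):
        """Hypothetical 'minimize next import time' hook: flush deferred
        logs ahead of a planned failover or reboot."""
        pool.flush_spacemap_log()  # stand-in: condense the metaslab log
        pool.flush_ddt_log()       # stand-in: drain the dedup log too
        # Deliberately no ARC teardown here; freeing a huge ARC is what
        # makes a full export slow.

    class FakePool:
        def flush_spacemap_log(self): print("spacemap log flushed")
        def flush_ddt_log(self): print("ddt log flushed")

    prepare_fast_import(FakePool())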
F: Probably pretty small. If nothing else, I've noticed, while looking at the last block cloning crash, that I may theoretically get a situation where block cloning is used for file concatenation, which FreeBSD actually does: you can use the cat command to concatenate several files, and it all gets mapped to block cloning by ZFS. But if you have a 512-byte file and try to concatenate it with another 512-byte file, you will get a file with two 512-byte blocks.
F: That's probably not what we want. I was just wondering: is there any threshold where we would limit it? Would it be the dataset's record size, or do we want something in between?
A: The dataset's record size, because you could have something where it's like, well, I changed the recordsize property, but I created a bunch of files before I changed it, and all those files have the same record size, so it makes sense to splice them together, since that's not adding any additional restriction. The thing that's annoying is: I have two files that happen to be 512 bytes long, but the record size is 128K, and if I were to append to either of those files, then their block size would grow.
A: But if I put them together, now I have a multi-block file, and it's going to be stuck with, you know, block size 512. So you just want to avoid that situation where you're taking a single-block file, whose block size is less than the recordsize property, and turning it into a multi-block file with a small block size.
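(The threshold being described, as a sketch. This check is hypothetical, not the actual block-cloning code: refuse to clone when the source's only block is smaller than the destination's record size, since cloning would pin the destination to that small block size; fall back to an ordinary copy instead.)

    def can_clone_for_concat(src_block_size, src_num_blocks, dst_recordsize):
        """Clone-for-concatenation eligibility.

        A single-block file whose block size is below the destination's
        recordsize must be copied, not cloned: cloning it into a file
        with other blocks would pin the whole file to the small block
        size (e.g. two 512-byte blocks instead of one larger block).
        """
        if src_num_blocks <= 1 and src_block_size < dst_recordsize:
            return False  # fall back to a regular copy
        return True

    print(can_clone_for_concat(512, 1, 131072))     # False: 512B tail into 128K file
    print(can_clone_for_concat(131072, 8, 131072))  # True: full records clone fine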
B: I'm now relying on native encryption, and getting it working required some trial and error on a mix of flags and force, and it eventually worked. But the error handling is rough: often it's, like, asking for a password once when it should be a new password, but it says "enter password" and just fails, and typically gives error 255, though I...
A: Yeah, I don't think that we are trying to have the process exit with different numbers to indicate different kinds of failures. I think, no.
H: There are a number of them, especially some newer errors about encryption, that are outside the normal errno range and so all map to 255, maybe. But it's a similar problem I've seen before: like, you know, the ioctl returns EINVAL for, like, 17 different possible errors, and then libzfs sometimes tries to guess which of those errors was more likely.
H: ...down to even just determining what text will be returned, because that all happens on the user-space side, not the kernel side. And I think I did a pull request a couple of months ago to change the order in libzfs for one of those in particular, because the more likely error was never displayed; it was always saying it was some unrelated thing that was, you know, one of the 17 cases where it might return EINVAL.
A: Yeah, but even if it does give any different error codes, any different process exit codes, I wouldn't try to rely on those. I mean, unfortunately, you've got to parse the error messages.
H: Okay, I think there's some of that: we have extended and added our own error codes to be able to give more meaningful messages, but it's definitely not been plumbed through for a lot of the existing error messages, and there's a lot of just switch-case guessing based on the errno now, like: oh, it got EINVAL; is this value set? Then it means this error, otherwise it means that error.
A: Well, other questions or topics?
A: All right, well, I hope to see you all at the conference, which is just five weeks from now, I think. Yeah, five weeks from now. We have the next call here four weeks from now, October 10th. I'll keep that meeting, even though it's just one week before the conference; I know not everybody will be able to attend the conference, so I'll hold that meeting, but I understand if folks want to...
A: ...you know, skip it, especially if they're coming to the conference; you'll get to see enough of us the next week. Looking forward to that, and we'll have the talk announcements out next week.