From YouTube: July 2023 OpenZFS Leadership Meeting
Description
Agenda: RAIDZ Expansion; Gang ABD; BRT; ZED slow I/Os; metaslab_unload
full notes: https://docs.google.com/document/d/1w2jv2XVYFmBVvG1EGf-9A5HBVsjAYoLIFZAnWHhV-BM/edit#
A: Meeting for July 18th. We've got a couple of things on the agenda for today. Up first, RAIDZ expansion.
C: Yeah, at the last meeting I was just starting and hadn't put together the pull request yet. That has happened now, and it wasn't too bad. Brian, there was a collision in the dRAID section. You had added some code — I think if you have sectors that are uninitialized there was a potential for corruption — and that pull request was dependent on some structure fields that were removed as part of the RAIDZ expansion, so I had to put those back.
C: I mean, it was basically determined that they weren't needed anymore — they had no consumers — but obviously in the future they did have a consumer. So I put that back. And there's a similar one in a different section of the code, where a zio was being passed through a function and then was removed because no one was consuming it. But it turns out that was some of the changes you had made.
C: There was a consumer. So, changes like that. And then over the past couple of years there's been a lot of cleanup in the code. Trivial things like ASSERT(x == 0) turning into ASSERT0(x), and various cleanups where, if you're not consuming an argument, there's a new convention where you (void) it in the function. A lot of things like that, so I'm hoping I'm catching them all, but I've probably missed a few.
C: Yeah, and then, to my sort of astonishment if you will, I found that in the code we weren't actually enforcing the feature flag, and it was corrupting your pool. Most people just take the current code base and create a pool with it, but if you actually don't have the feature enabled, it did bad things and hung. So anyway, I added that last night.
C: But in doing so I noticed the comments for spa_vdev_attach are talking about mirrors, so I need to go clean up the comments there. And I'm working on a big theory comment for the RAIDZ expansion code, so I'll be posting that shortly. But the current focus right now is on testing. The ZFS Test Suite I think is in good shape — it's passing. Then, on ztest:
C: There were sort of two modes of running it. One where you can just specify the kinds of vdevs and then it would throw an expansion into the mix. Then there was a more dedicated approach, which I think Stuart maybe did, where you actually just have a dedicated sort of test — sort of like the MMP test.
C: It's like a one-off test inside of ztest where you're not messing with all the craziness — you're just doing a focused test. I kind of had to rework it to get it working, but it's working now. For that one you pass in a -X and it'll create a pool — a raidz pool — fill it up with a lot of data, about 20% of the capacity of the pool, then attach your disk and then interrupt the reflow to confirm that we can survive.
C: You know, a crash or unexpected termination in the middle of the reflow. So that test is in there. I haven't added it to zloop yet, but I plan to do that, so it'll get mixed in with the normal regression testing that we do. On the ztest side, the other mode — the one I first mentioned — I'm seeing about five-ish failures consistently.
C: It runs like eight zloops, and then the eighth or ninth one fails. I go look at it, and there's a handful that I'm trying to figure out. It's an unexpected zio error — "device not configured" — which the zio layer itself will set in certain cases if it can't determine the status of the vdev; but there's no vdev attached to it.
C: So that one's a little problematic to figure out, but I'll keep working on it. And then there's some code in there — I think someone else added it — where on every import we sanity-check the scratch area that we're using, and it's failing asserts that I don't quite understand. There's a code setup so that at discrete points, where we're writing out the stuff, you can have it stop at that exact point and then do a validation, and some of those are failing.
C: I don't know if they're false positives or not, but I kind of think they are. But anyway, there's a handful of ztest failures. That's the fun part of working on ztest — you have to go chase down all the problems it uncovers. Yeah, I'm in that loop right now, where I get a whole bunch of them.
C: I look for the pattern and see if I can figure out how to diagnose it further. Hopefully I'll wrap that up this week and then move on to kind of the final push, which would be getting reviewers. I've asked a few people — I think I pinged you, Brian, on that — but if you wouldn't mind looking specifically at all the dRAID changes. Because I think they're fine, but we need more eyes on it.
C: Yeah, unfortunately I haven't seen any outside people weighing in other than Fedor, but I've been doing tons of manual testing with real data and real arrays, and it seems fine, yeah.
C: But no test tests a pool not having the feature enabled. I don't know if it'd be worth it to add that test. I mean, you could certainly set compatibility equal to some earlier thing, test that, and just make sure that everything returns an error.
B: I think mostly we could probably go through all the feature flags to make sure they actually don't work when they're not enabled, yeah.
C: And the other unfortunate thing I've noticed is that on stock master, zloop is failing — not as frequently, but constantly — in dRAID and raidz. The most common one is just hangs, and I haven't evaluated those.
B: Alexander, is that the same one you were seeing — the one you put on Slack earlier last week, I think — where a bunch of tests were timing out because they were hanging? Do you remember what I'm talking about?
D: I haven't seen zloop hang in our jobs — usually it's crashing. I think I saw it on some other tests, like the regular tests. Yeah, I think it was during ZIL suspend. We have some tests of removal — removing the ZIL, or somehow else triggering a ZIL suspend — and the problem is with that suspend. I'm not sure how it worked before; as written in the description, I haven't created an issue or investigated it myself. I should probably do one or the other before it gets lost to history. Yeah.
C: So this is just on stock master — I was just sort of trying to baseline the current state of it. And I'm running these longer, 900 seconds; I think they normally ran for like 500 or 600.
A: The ZIL suspend issue — I kind of suspect that's a recent regression. I don't recall seeing it, you know, a month ago, so that might be bisectable, at least for that particular issue.
D: I haven't seen it before either, but as I said in that Slack message, I don't see how it would be new, or mine. Like, I tried to improve some issues around suspend, but it seems orthogonal to what I fixed.
B: I just noticed it had made some of the tests in the BRT pull request fail, because they timed out after getting hung in the same spot. Yeah.
C: Okay, cool. Yeah, that's all I had — it's basically an update. Making steady progress; optimistic that we can land this thing.

A: It sounds like great progress.
A: Yeah, so how old is what's in the PR right now? Did you push it recently, or is it from a week ago with the original?
C: Push as in based off master, or just in terms of commits?
C: I just made a commit this morning, I think. Okay — I didn't rebase. If I rebase it, can I retain my commit history? I think I can, but yeah. But I've been spot-checking, and there haven't been significant changes; I mean, things kind of slowed down when we were trying to release the final version, but yeah.
A: Good. All right, what else do we have on the agenda for today?
B: I think the next one was mine. We saw kind of an interesting thing starting last week if you have gang blocks. In this case the pool is like ninety-something percent full, and there's just, I think, a lot of fragmentation and not a lot of free space. So it's resorting to using gang blocks, and we create a gang ABD to send down to the bio.
B: And so when we prepend the gang header — the 512-byte gang header — we end up with an unaligned write that the 4K-native drive refuses.
B: When we prepend the 512 bytes at the beginning, we add the right amount of padding to the end of the thing, but then, when we're issuing it down to the bio, it's too big, so we're splitting it. And because we split it on a 4K page boundary, we end up with a 512-byte chunk and then a bunch of 4K pages. Now it's not 4K aligned, and the device will give you a write error. And then the second half has the same problem, because it starts 4K aligned.
B: But then it's got this unaligned bit at the end, and so it's throwing errors. It didn't do this in 0.8; it was introduced in 2.0.0 as part of the gang ABD concept. In 0.8 we'd copy and make a new linear buffer that was properly 4K aligned, but when we're doing the gang ABD thing to avoid all the memory copies, the problem is when it has to get split across bios because the aggregation was too large. It's causing write errors.
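The alignment arithmetic B is describing can be sketched numerically. This is a hypothetical illustration of the failure mode, not OpenZFS code: prepend a 512-byte gang header to a run of 4K pages, pad the tail out to a 4K multiple, then split the buffer on a 4K page boundary and check each half's alignment.

```python
# Illustrative sketch (not OpenZFS code) of the gang-header alignment problem:
# a 512-byte header prepended to 4K pages, padded to a 4K multiple, then
# split at a page boundary, leaves both halves misaligned for a 4K drive.

SECTOR = 512   # gang header size
PAGE = 4096    # 4K-native device logical block size

def pad_to(size, align):
    """Round size up to the next multiple of align."""
    return (size + align - 1) // align * align

def split_alignment(npages, split_at_page):
    """Return (total, first, second): the padded buffer size, and the sizes
    of the two halves when the buffer is split after the header plus
    split_at_page whole pages."""
    payload = npages * PAGE
    total = pad_to(SECTOR + payload, PAGE)   # header + payload, padded out
    first = SECTOR + split_at_page * PAGE    # header + some whole pages
    second = total - first
    return total, first, second

total, first, second = split_alignment(npages=8, split_at_page=4)
# The padded total is a clean multiple of 4K...
assert total % PAGE == 0
# ...but the first half ends mid-page and the second half starts mid-page:
# both writes are unaligned, so a 4K-native drive rejects both.
assert first % PAGE != 0 and second % PAGE != 0
```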
D: Yeah, I was going to say that I was hit by this problem — not by the bio problem, not the gang-block problem — when I was trying to optimize the ZIL to avoid memory copies. I was trying to create ABDs not aligned to anything — just arbitrary memory chunks of whatever length I wanted — and while the FreeBSD disk code, the GEOM dev code, was mostly happy to handle that, just by creating a copy on demand at the end if it wasn't aligned, Linux doesn't do that. It just tries to chunk it, and the problem appears as you describe it.
D: Making it a multiple of 4K after it was 512 just can't work — not at that point. We would either have to fix it somewhere at the dev level, or just do the copy originally, at gang block creation, instead of appending — one or the other. But in my ZIL case I gave up on that; I went with a different approach.
B: But yeah, we looked at possibly making it so that when we're using the gang ABD offset stuff, it will make sure it returns something that's 4K aligned — it would split some other page in the middle and leave the second half of it in the second block, so that the second block will then be 4K aligned as well. But that seemed a little gnarly, and so far we've mostly just narrowed down what was causing this and managed to recreate it on purpose.
D: There may be one more problem. Even if we try to make that function you mentioned cut it properly, then again at the block level the HBA or the controller may be unable to handle alignments that aren't a multiple of the page size. That's another threshold: aside from ashift we also have page size, and right now every block that is bigger than the page size is always aligned to the page size, and every discontinuity is always a multiple of the page size.
D: In theory NVMe should support arbitrary scatter/gather, and some HBAs support arbitrary scatter/gather, but it's not a hundred percent. I'm not sure how exactly Linux handles that — I tried to look, but if I remember correctly it doesn't handle it properly; it still kind of expects the file system to do that. It just opens such a big can of worms that maybe making gang blocks aligned originally is the easiest way.
B: Well, in this case we had this nice scatter/gather list of pages, and then we needed to stick an unaligned header on the front of it, and other than a memcpy there's no way to make that aligned again, right?
D: A list of blocks — what's the problem with allocating the list of blocks with the first one prepending the 512 bytes? It's not that we are copying data; the data will remain in the ARC. It's usually just a temporary buffer used to allocate that gang header. I don't remember that code off the top of my head, but that's what I would think.
B: We came across another, only slightly unrelated bit. vdev_bio_max_segs, or whatever the function is that figures out how many segments we should put in a bio, basically uses the defined limit of 256 of them, but by default on Linux all the regular disk devices have a max segments of 128. So we have a patch to actually ask the disk how many segments it'll take.
B: So ZFS will split it up and issue it to the disk so that the Linux block layer isn't going to split every bio we send in half anyway. I don't know if that will help much — I haven't had a chance to measure it — but it's just another thing I noticed there. Because, basically, we found that by reducing the aggregation and never sending something big enough that it has to get split, we avoided this problem.
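A rough model of why the segment limit matters (the numbers come from the discussion; this is an illustration, not the actual vdev_bio_max_segs() logic): if ZFS packs bios up to a compiled-in limit of 256 segments but the device queue only accepts 128, the block layer has to re-split every full bio, whereas packing to the device's own limit issues the same number of bios without the extra split pass.

```python
import math

def bios_issued(nsegments, zfs_limit, device_limit):
    """Count the bios the device finally sees: ZFS packs segments into bios
    of at most zfs_limit segments, then the block layer re-splits any bio
    that exceeds the device's max_segments. Illustrative model only."""
    total = 0
    remaining = nsegments
    while remaining > 0:
        this_bio = min(zfs_limit, remaining)
        # The block layer splits an oversized bio into device-sized pieces.
        total += math.ceil(this_bio / device_limit)
        remaining -= this_bio
    return total

# 1024 pages of data: packing to 256 segments and letting the block layer
# split down to 128 ends with the same 8 bios on the wire as packing to
# 128 directly -- but every oversized bio cost an extra split in between.
assert bios_issued(1024, zfs_limit=256, device_limit=128) == 8
assert bios_issued(1024, zfs_limit=128, device_limit=128) == 8
```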
D: A question: what should the split be? As I said, some devices and controllers support arbitrary scatter/gather, while others require aligning on page sizes. I'm aware Linux will split, but if I remember correctly I saw it was splitting on a sector boundary, and in this case we would like to go both below the page and below the sector boundary.
B: Yeah. But we'll be digging more into that this week and we'll share what we find.
D: Yeah, of course it would be great to have arbitrary scatter/gather support — arbitrary ABDs. I found that we have some alignment requirements for ABDs in encryption, in checksums, and in some other places. It's not big — somewhere 64 bytes, sometimes 32, something like that — and we could handle that more or less; in some places it's already handled. But it's definitely not handled at the vdev layer, which is worse, I saw.
A: Yeah, and this is a bit of a corner case — like you say, gang blocks don't usually kick in. So when we do fix this, it would be great to pull in the test case you put together to make sure it stays fixed.
B: Yeah, I think it's just a matter of making the percentage chance of ganging a tunable as well, so we can say any blocks bigger than this become a gang block — exactly — so we can add a test case for this and make sure it doesn't pop back up in the future.
B: So we have a pull request that hooks up copy_file_range, clone_file_range, dedupe_file_range, FICLONE — all the various things that Linux user space can use — to Pavel's BRT work.
B: So you can do it all with the built-in tools on Linux. And then, separately, we're also working on a Samba module — a vfs_zfs for Samba — so Samba will know to use BRT for server-side copies.
B: Pretty good, yep. It's there, it's got a bunch of tests for all the different ioctls and syscalls, and it's just looking for some review.
B: Well, even looking at the previous thing — there's a nice helper function for "here's a bdev, how many segments will it take", but it's not available on Ubuntu, like, 20.04, only newer versions. If you get the queue, then there's a function that can do it, but not if you're on really old kernels. It's just, yeah.
D: Cool, yeah. It should also help NFS, which also has server-side copy. According to what our services teams have told us, both Samba and NFS should work; we are investigating how it will go. One of my teammates — Ameer Hamza... oh sorry, no, a different teammate — is now playing with that, trying to run some tests. Maybe you'll see his feedback on the PR.
A: I was going to say, this still doesn't support copying between datasets right now, right?
B: I think one of them can do it, yeah. But the default for, like, cp reflinks is: if you force it, yes; if you don't, the other one, no — or something, yeah.
D: --reflink=always — like, one of them uses one syscall and the other uses a different syscall. Practically, if you demand it to be a clone, cloning is supported only within the file system. But if you say you're okay with an optional copy — which most people should be okay with; I don't know why somebody would require a clone — it usually works across, too. And as I was told, both Samba and the NFS server support it; both use the copy_file_range syscall.
B: And then the other thing we're working on is expanding ZED, based on the work we did — I don't know, some number of months ago — that lets you configure ZED with vdev properties to control after how many write or checksum errors we replace the disk with a spare. Expanding that so that if there are too many slow I/Os on a disk, it will auto-spare it. Since I had yet another disk that decided to go out to lunch but not actually die.
B: It's like, it'll do one I/O every three seconds or something and drag the performance of the whole pool down. So this will auto-offline it or replace it with a spare, and let you configure that in the same way we do checksum and disk errors at the moment.
A: And in this case, a slow I/O is whatever internal limit we have in ZFS at the moment for when it decides an I/O is slow — it's, I don't know, 30 seconds or something like that. Really long, right? Yeah.
B: You can see a count of them with one of the flags for zpool status. So this would just be teaching it — you'll be able to set a threshold count and a threshold time and say: if it does this, then try to replace the disk and keep my pool healthy, yeah.
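The threshold-count-within-threshold-time check B describes could be modeled as a sliding window. This is an illustrative sketch only — the class and parameter names here are hypothetical, and the real ZED/vdev-property interface may differ:

```python
from collections import deque

class SlowIoTracker:
    """Illustrative sliding-window check (hypothetical names): act on a
    vdev once more than threshold_count slow I/Os are seen within
    threshold_seconds."""

    def __init__(self, threshold_count, threshold_seconds):
        self.threshold_count = threshold_count
        self.threshold_seconds = threshold_seconds
        self.events = deque()   # timestamps of recent slow I/Os

    def record_slow_io(self, now):
        """Record one slow-I/O event; return True if the disk should be
        offlined / replaced with a spare."""
        self.events.append(now)
        # Drop events that fell outside the time window.
        while self.events and now - self.events[0] > self.threshold_seconds:
            self.events.popleft()
        return len(self.events) > self.threshold_count

t = SlowIoTracker(threshold_count=10, threshold_seconds=600)
# Nine slow I/Os spread over nine minutes: under the count, no action.
assert not any([t.record_slow_io(60 * i) for i in range(9)])
# The tenth inside the window still doesn't exceed the count...
assert not t.record_slow_io(540)
# ...but the eleventh slow I/O within the 10-minute window trips it.
assert t.record_slow_io(600)
```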
B: So that should be coming relatively soon. And I had one other question. I'm helping somebody with a 0.8 install of ZFS, looking at performance, and they were getting a huge amount of CPU time spent loading and unloading metaslabs, because in 0.8, if you don't use a metaslab for three transaction groups, we unload it and then have to load it again. Whereas I think in 2.0 —
B: — or 0.8.3 even, we added the kind of cool-off time where, once we've loaded it, we don't unload it for at least 10 minutes or whatever, to avoid this problem. But they're not in a position to change ZFS versions right now, so we set the metaslab_debug_unload parameter, which just doesn't unload the metaslabs. Are there consequences to leaving that on? Is there stuff that only happens when it unloads a metaslab? I think condensing normally happens on load, not unload, right?
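The difference between the policies B mentions can be modeled simply. This is an illustration of the behavior as described in the discussion, not the actual metaslab code: 0.8 unloads after three unused txgs, newer versions additionally require a minimum loaded residency, and metaslab_debug_unload disables unloading entirely.

```python
def should_unload(txgs_unused, loaded_seconds,
                  max_unused_txgs=3, min_loaded_seconds=600,
                  debug_unload=False):
    """Illustrative unload decision (not the real metaslab code).
    Old behavior: unload after max_unused_txgs idle txgs. Newer behavior
    adds a cool-off: also require the metaslab to have been loaded for at
    least min_loaded_seconds. debug_unload=True (metaslab_debug_unload-
    style) never unloads."""
    if debug_unload:
        return False
    if txgs_unused < max_unused_txgs:
        return False
    return loaded_seconds >= min_loaded_seconds

# 0.8-style thrashing: idle for 3 txgs gets unloaded even if it was
# loaded just 10 seconds ago (no cool-off).
assert should_unload(3, 10, min_loaded_seconds=0)
# With the cool-off, the same metaslab stays loaded...
assert not should_unload(3, 10)
# ...until it has been resident for the minimum time.
assert should_unload(3, 700)
# metaslab_debug_unload keeps everything loaded regardless.
assert not should_unload(30, 10000, debug_unload=True)
```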
B: Oh yeah, they'd be fine with tens of gigabytes as long as it's not fifties of gigabytes, probably. But yeah, that kind of gets back to another thing I think we talked about before but I want to get back to: we probably do need to change some of the tuning so that creating a 14-wide raidz3 doesn't result in 15,000 metaslabs when the target was 200.
B: It works great for small VMs, but one thing that we talked about before is maybe changing the tuning. I think what was said before is that the default maximum size for a metaslab is 16 gigabytes of space, and then it will keep increasing the count until the count gets too high; then it'll bump the size to the next power of two. But the max count is 128,000.
B: You know, we created a 10-petabyte pool and it ends up with 98,000 metaslabs of 64 gigs each, or something like that. So what we were looking at is maybe, every time we hit the limit, alternating between bumping the size and the count. We aim for 200; if that's not enough we go up to 400, and if that's not enough then we double the size, and then we double the count, and double the size.
B: Double the count, or something, to try to not penalize smaller pools, but also have a way of scaling to large pools that isn't just having lots and lots and lots of these.
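The alternating scheme described above might look like this. The numbers and policy come from the discussion, not from merged code, so this is purely illustrative: start at the 200-metaslab target with 16 GiB metaslabs, and each time the vdev outgrows the current count × size budget, alternately double the count and the size.

```python
GIB = 1 << 30

def choose_metaslabs(vdev_size, base_count=200, base_size=16 * GIB):
    """Alternately double the count and the size until count * size covers
    the vdev. Illustrative sketch of the proposal discussed, not merged
    OpenZFS code."""
    count, size = base_count, base_size
    bump_count_next = True    # 200 -> 400 first, then double the size, ...
    while count * size < vdev_size:
        if bump_count_next:
            count *= 2
        else:
            size *= 2
        bump_count_next = not bump_count_next
    return count, size

# A 1 TiB vdev stays at the 200-metaslab target.
assert choose_metaslabs(1 << 40) == (200, 16 * GIB)
# A 10 PiB pool lands at 12,800 metaslabs of 1 TiB each under this scheme,
# instead of the ~100,000 metaslabs mentioned above.
assert choose_metaslabs(10 * (1 << 50)) == (12800, 1024 * GIB)
```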
B: It does give you more fragmentation, although not that much more; it just seems to spend a lot more time cycling, especially. The other thing is, if you're using a large record size: if your metaslabs are limited to 16 gigabytes and you're allocating 16-megabyte chunks, then that's only a thousand segments for the whole metaslab, and maybe, depending on your use case, you want more. We made most of these things tunables already.
B: I think we already merged all those, yeah, so that gives people a bit more control. But we'll want to look at the defaults a bit more and do some analysis to see if there are advantages one way or another. Because we don't want to just change the numbers because we don't want a big number, but to actually —
D: Yeah, too many metaslabs would require us to sync them separately. And if we do it — you know, first we write all the updates into the global log, but then we need to flush that log sometimes, and for that we need to distribute all the differences between all the metaslabs, so the number of I/Os grows. And if we don't do it dynamically, then we would have to do it on pool import, which is also a lot of I/Os.
B: Yeah, and I think, to your and other people's points, especially in failure-type scenarios we want an unclean import to not be massively delayed by these. So those are some of the cases we'll make sure to test as we figure out how best to approach this.
B: Especially if we start seeing, you know, 128-terabyte SSDs — then suddenly you could make a pool where one vdev is a lot bigger.
B: That was the one we talked about — the ZIL suspend hanging tests — because we noticed it was reporting test failures on the BRT Linux pull request, and it's like, we didn't do that. So I remember Bob saying he had the same one, so that was nice. Yeah.
A: There are still a handful of other test failures we see in the test suite too — not entirely reliable, you know, annoying failures: right, you hit it one run in ten.
A: They can be annoying, but yeah, I'm all for writing those down. I guess I'll just make one other call for testing: a week or two ago we tagged the 2.2-rc1 release, so that's out there for people to start kicking the tires on and doing a little bit of testing. We'll pull some fixes back for an rc2, I hope shortly, and get that out there for some more testing. So just a plea for additional test coverage on that — manual testing and the automated testing.
A: I'll take a look and pull them in. We also probably want to make a pass to see if there's anything else we missed that we want to pull back — anything that's been merged to master that makes sense for 2.2 or 2.1.
D: Well, yeah, we want it at some point; it's just a question of whether we want it right now or we postpone it until the next release. We'll see how stable it is.
B: I think Pavel got all the problems out of it, but yeah, the bits to make copy_file_range use BRT on Linux are in the pull request that we just opened.