From YouTube: January 8, 2019 OpenZFS Leadership Meeting
Description
We discussed the sharenfs property; persistent L2ARC; and TRIM support.
Agenda and meeting notes: https://docs.google.com/document/d/1w2jv2XVYFmBVvG1EGf-9A5HBVsjAYoLIFZAnWHhV-BM/edit?ts=5bb3b66c
B: Okay, it's time, so let's get started. We had a very good and productive discussion last month that centered around the organization: what are the goals there, do we all have the same sort of goals, and hopefully we're on the same page with that. I'm not really looking to revisit that here, but we can continue it on the mailing list.
B: If folks want to, we can continue that in future meetings. We're pretty behind on the technical items that folks wanted to discuss, so I'd like to make sure that we stick to getting through the items on the agenda this week, maybe leave some time at the end, and then we can go off script. But let's get started with the sharenfs property compatibility stuff. I think George wanted to bring up a problem and get input on what's the right way to address it.
F: Yeah, thanks. One of the things that we've been wanting to start a discussion about is: how are people really using the sharenfs property? When we originally implemented this in Solaris, it was very closely tied to the Solaris NFS sharing functionality. It's probably worth noting that sharenfs is not the only such property; we also have shareiscsi, which has been pretty much dead as far as I know on most platforms, and sharesmb might have similar issues. But the idea that I wanted to get some thoughts around is that this is an on-disk component, so how does it behave when you have portable pools, as we've discussed on the mailing list?
B: Is it fair to say that there's a tension between, on one side, having this just be a pass-through to the platform-specific NFS sharing functionality, which is easy to implement but not portable, and on the other side having something that is a ZFS-specific format that each platform needs to decode and apply on its own, with maybe a defined list of available options?
F: I think that's a very good way to define the range: everything from everybody implementing some common specification, which maybe doesn't cover every NFS option. So maybe we try to solve 80% of the use cases out there and make it functional, but there might be some cases where you can't use the sharenfs property, or some things that don't map correctly to certain platforms. I think there's a lot of investigation that we'd have to do there.
F
That's
if
we
kind
of
went
down
the
generic
path
or
we
say
it's
kind
of
a
free-for-all.
Everybody
implements
their
platform
component,
but
now
we
need
to
make
ZFS
smart
enough
that
if
you
do
poor,
you
know
import
this
on
another
pool
that,
like
it
knows
how
to
deal
with
these
properties
and
kind
of,
does
the
right
thing
to
text
it
like
I
can
import
the
pool
but
I'm,
obviously
not
doing
anything
beyond
yeah.
You
can.
G: You could presumably also store a shadow property that contains, say, the uname value at the time the property was set, because presumably it made sense when it was set. Then, if the uname doesn't match at import time, you'd, I don't know, print a warning, but not apply the rules.
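A minimal sketch of that shadow-property idea, using an ordinary user property; the property name, dataset, and option string here are invented for illustration and are not an agreed-on interface:

    # record which platform the share configuration was written on
    # (the sharenfs value here is illumos-style syntax, purely as an example)
    zfs set sharenfs='rw=@192.168.1.0/24' tank/export/home
    zfs set org.openzfs:sharenfs_origin="$(uname -s)" tank/export/home

    # at import time a platform could compare its own uname with the recorded
    # value and print a warning, rather than apply the rules, on a mismatch
    zfs get -H -o value org.openzfs:sharenfs_origin tank/export/home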
I: Like with NFSv4 on FreeBSD, it works quite nicely: you just set the root you're going to share, share it, and then just have sharenfs on or off to mask datasets. But with v3 on FreeBSD, yeah, George has the doc there: basically you have the entire line you would have put in your exports file, except for the path, as the contents of the property. On FreeBSD that's done very simply; it's literally writing that line into a text file that's basically appended to /etc/exports, except it's a separate file. Our mountd daemon knows to read both /etc/exports and /etc/zfs/exports, which is just the sharenfs property from every dataset where it's not off, concatenated into a file. It's really lame.
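A rough illustration of that FreeBSD NFSv3 behavior; the dataset name and exports(5) options are made-up examples:

    # the property value is an exports(5) line minus the path
    zfs set sharenfs='-maproot=root -network=192.168.1.0/24' tank/export/home

    # mountd reads /etc/exports plus this generated file, which ends up
    # containing roughly: /tank/export/home -maproot=root -network=192.168.1.0/24
    cat /etc/zfs/exports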
J: It ends up being quite awkward on Linux too, for those reasons, and for flags that aren't yet supported; IPv6 is a good example of something that isn't really well handled. We also have the complication on Linux that you don't necessarily have just one NFS implementation or one Samba implementation, so which one are you talking about, and which flags should it take? I think there may be a good argument for not doing this in the filesystem at all, or at least making it optional.
B: I mean, it depends on your definition of site, right? You could imagine having a pool that moves between systems within a data center. In that case, maybe you do want to continue providing those NFS services as you move it. But I think maybe what you're getting at is some of the examples here where you're specifying what subset of the network is allowed to access it, or what privileges, and those don't make sense everywhere on the Internet.
K: My question was whether this property, and the sharesmb one I guess, are basically hints to the operators. That kind of thing, whether something is a hint or an actual filesystem property, should impact how the various implementations are going to deal with it. That was my entire point.
L: From a FreeBSD standpoint, and from a FreeNAS standpoint, it was always convenient to think of those properties as Solaris-specific and act as if they didn't exist. They didn't do anything on FreeBSD previously, and even after a while, when they did start to do something, it was just easier to ignore them.
L: Yeah, they do now, but when ZFS was originally ported to FreeBSD they weren't hooked up; even sharenfs wasn't. So when it did start to work, there was no compelling reason to try and use it. It was just: that's a Solaris-specific property, pretend it doesn't exist.
L: You can imagine it being a nightmare to maintain: if you create an NFS export on FreeBSD, then you have to look up the correct export syntax for Linux and put that in as well, even though you're not using it. You might very well just skip that, and then, two years later, when you move your pool to Linux, the NFS shares don't work, and why is that again? I mean, it just...
F: Along those lines, Brian: if it were more Linux-specific, do you think it would actually have more value? Would people use it if we actually had sharenfs properties per platform, so that it would match the implementation on the platform you're using it on?
J: I think there's a question of usability there. Like with NFSv3, where you have to share each individual filesystem individually: if you have 200 filesystems, being able to set sharenfs once on the parent and have it automatically inherit and configure all of them is useful, versus having to manually write three hundred lines of exports file.
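A brief illustration of that inheritance point; the pool and dataset names are invented for the example:

    # one property on the parent; child datasets inherit it, so every
    # home directory is exported without writing per-filesystem exports lines
    zfs create -p tank/home/alice
    zfs create tank/home/bob
    zfs set sharenfs=on tank/home
    zfs get -r sharenfs tank/home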
A: I think that was Joshua's point, and I think he was right. I bet if you import an illumos pool on FreeBSD with the different syntax, it will cause mountd to refuse to reload its config file because of a syntax error, and possibly take down other working NFS shares, just because you imported a pool from illumos that contains invalid syntax.
B
I
mean
I
think
that
that's
why
judge
BOTS
up
to
be
you
know
it
is
like
the
current
way
that
it
works
is
wrong.
Right,
like
it
is
not
right
to
be
putting
your
the
specific
things
in
this
property
and
then
like
having
Linux
or
FreeBSD,
try
to
interpret
them.
So
we
need
to
like.
We
need
to
decide
if
we're
gonna
like
we
need
to
say,
hey
Sharon.
B: Figuring out how to get there from here is, I think, another question. Okay, all right, yeah, that makes sense. It could be: sure, we leave this behind as deprecated, just don't use that property, and there's a new sharenfs.illumos and sharenfs.linux, or a new sharenfs.zfs that has its own new syntax that each platform has to figure out how to implement. Or it's...
G: ...not a property anymore; maybe it's a top-level command, like zfs share or something. You could create a whole new surface area for abstract-level things that might be stored in properties underneath, but it could provide a UI for doing that, if that was important to somebody. I think it's probably a mailing-list-level thing. At this point, though, it sounds like we all agree that this is not correct today.
J: Sorry, go ahead. I was going to say, having looked at the translation code before, I don't think that's practical. There are lots of different options, and I noticed some on the different platforms that just don't exist elsewhere, so doing any kind of translation is going to be spotty at best. My feeling is it would be good to steer clear of that. Sure, that's what we do on Linux at the moment, but it wasn't a good idea.
F: Given our model, we actually relied on the sharenfs property very heavily in our appliance, and we're now in that mode of going from an illumos to a Linux engine: what do we do with this type of migration? How do we actually address that? We've actually considered saying, okay...
F: ...well, let's at least make the properties that we use as robust as possible on Linux, so that it just works and there is no migration that needs to happen. Or do we go off and create some migration utility that is going to slurp in the properties and then write them out to /etc/exports? So we're kind of in that world of: how do we deal with this?
B: Cool, so let's move on to the next item, which is persistent L2ARC status. F requested that we talk about this, but when I went looking for what the status is in my email, I had to look a long way back. This is a feature that I would love to see in ZFS everywhere, and I spent a lot of time personally doing code reviews for it several years back, but I couldn't find any activity on it in the past two years.
D: We at Datto have talked about this a little bit, maybe not in the context of actually planning on working on it, but more about the limited helpfulness that the L2ARC provides in general right now. I think one of the big things preventing us from putting it on our list of things we might want is that, since the L2ARC itself just isn't that helpful at the moment, making it persistent doesn't really change that, I think.
G: It's a minimum required feature for it to be useful, though, because the reason to have the L2ARC, in theory, is that without it you are not able to fit the working set of the thing you're trying to use on your giant pool into memory that has a low enough access latency for reads. And if a reboot or a panic or a device replace, or device removal or whatever, can just kick out the entire contents of that cache, then the cache is cold again.
D: But my one thing is, it seems like the metadata allocation classes already help alleviate this for a lot of use cases. Maybe it's not perfect; like, if you have a bunch of actual user data that's accessed over and over again, and that you have on your regular spinning disks as opposed to your SSDs, that's a separate thing. But I think that some of the people who have this problem, where they need the L2ARC to make their stuff work...
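For context, a hedged sketch of the allocation-classes alternative mentioned above, roughly as the feature is used in ZFS on Linux; the pool, dataset, and device names are invented:

    # a "special" vdev keeps metadata (and optionally small blocks) on SSDs,
    # while bulk data stays on the slower disks
    zpool add tank special mirror sde sdf
    zfs set special_small_blocks=32K tank/vms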
B: I don't know; tiered storage is still a very popular thing I'm seeing. Of course, I'm mostly dealing with pools that are up to petabytes now, with tens of terabytes of flash, and often that's done as two separate pools. That's why I've been interested in having per-pool and per-vdev properties, to be able to tune those rather than system-wide. But I don't know that we can just punt on the concept of L2ARC and say, if you need the speed, you'll just have your entire pool be SSDs.
G: I think it depends on how big the thing that you have is, how big the working set is, and how much money you want to spend on it. There are a lot of economic knobs there. I think it's true that two-and-a-half-inch spinning disks are probably not long for this world and that that form factor will probably be SSD-only, like, there won't be two-and-a-half-inch spinning drives because they just don't exist anymore, so that specific economic case will probably go away. But if your data set is stored on three-and-a-half-inch disks, or SMR disks, or some other kind of storage that's slower because it's cheaper at the capacity you want, I think the hybrid stuff is still going to be useful for something.
M: I work with a customer who is gradually moving to very large amounts of storage, so they're still using spinning disks at the highest capacity available at the time, so 10 or more terabytes, not sure what that's up to now, but they're running with a couple of NVMe devices as the L2ARC. This is using Oracle Solaris at the moment, but it was absolutely essential for their workload. If somebody wanted to do something similar on illumos, I would guess you'd need the same sort of layout.
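That layout is just an L2ARC cache vdev; a minimal sketch, with an invented pool name and device names:

    # add a pair of NVMe devices as L2ARC (cache) for an existing pool
    zpool add tank cache nvd0 nvd1
    # confirm the cache devices and watch how they fill
    zpool iostat -v tank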
B
Is
anybody
like
thinking
about
potentially
making
like
making
this
a
priority
to
to
get
it
in,
or
should
we
just
kind
of
you
know,
continue
kicking
continued
status
quo
for
for
time
being.
B: That's fine; everybody's got priorities. Cool. So let me reorder slightly and go to TRIM support; several folks have asked me about TRIM and to have a discussion about it here. As background: there's a TRIM implementation in FreeBSD that is not in any of the other implementations, and then, separately, there's a TRIM implementation that I think is originally from Nexenta on illumos and has been ported to ZFS on Linux as well.
B: So I think folks wanted to talk about both. Should we have both of these? Should one of them become the standard? And also, what's the status and what's the path: what are the next steps to get one of these actually integrated onto the rest of the platforms?
B: Jerry, you go ahead.

O: Oh yeah, the one that Nexenta did: I ported it over to the latest and I emailed out about it last fall, and it's working. I thought that was going to be lined up with the ZFS on Linux work, but I'm not sure, because it sounded like Tim Chase was maybe thinking of going off in a different direction now for TRIM. So I don't really know if that's going to... I mean, I want to stay in alignment with whatever ZFS on Linux is doing.
J: So TRIM has an open pull request against Linux; we've had one open for a couple of years now that keeps getting rebased, but it has yet to be merged. So I think this is a really good question, because we want one of these. I've kept it from being merged because we want to do the same thing as all the other ports; we didn't want to diverge in our own direction. So if there's general consensus that the TRIM implementation from Nexenta is the right way to go, then I would like to get that integrated for Linux.
A: I think on the FreeBSD side we'd like to see that as well. We're in the process of moving to being based on the ZFS on Linux repo, and TRIM is one of the features we're hoping we won't lose as part of that transition. So if we pull in the Nexenta one via ZFS on Linux, that makes our transition easier.
C: Indeed. It's been in our tree for a long time, Matt; we use it extensively. But we're all for making sure that we've got the right resources around getting it into ZFS on Linux and then into FreeBSD, and making sure that we're all on the same page on that one. I definitely don't want to be supporting two different implementations moving forward. How we actually do the migration between the two is obviously up for debate, but focusing everybody's efforts on one consistent implementation is definitely the way we've all got to go.
P: What happens with Linux on non-ZFS filesystems is that there's an fstrim command you can run arbitrarily. So instead of running continuous trim, which on some devices hurts your performance significantly, fstrim hooks into the filesystem, finds large chunks of free space consistent with filesystem activity, and then runs trim on the empty space, so you can run it whenever you want. Does that work with the Nexenta one?
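The Linux mechanism being referenced here is the standard fstrim utility; for example:

    # trim free space on all mounted filesystems that support it
    fstrim --all --verbose

    # on distributions that ship util-linux's timer, run it periodically
    systemctl enable --now fstrim.timer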
B: My understanding is that the initialize code could be repurposed to do trims, but it would be a one-time-only kind of thing: you find all the free space and trim it out. The Nexenta trim code can do that, but it can also do ongoing trim, where it keeps track of recent frees, aggregates them together, and then issues those more or less continuously to the disk.
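For reference, the interface that eventually shipped in OpenZFS exposes both of those modes; a rough sketch (the exact commands and defaults were still under review at the time of this meeting, and the pool name is invented):

    # one-shot: walk the pool's free space and trim it now
    zpool trim tank

    # ongoing: automatically trim recently freed space as transaction groups complete
    zpool set autotrim=on tank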
B: Sorry. I agree, and it sounds like folks are behind that. I have some high-level questions about that code and how it's working; Jerry, you replied to my emails about it. There's a lot of complexity there about keeping track of frees, aggregating them, and how that interacts with allocations.
B
So
I'd
be
interested
to
hear
like
anybody's
experience
with
performance
of
this.
Like
ideally
I'd
like
to
see
like
somebody
say,
okay
like
if
I
don't
have
trim,
we
get
myth
performance
and
it's
bad.
If
we
do
have
the
this
trim
code,
we
get
great
performance
and
if
we
have
the
trim
code,
but
we
turn
off,
you
know
all
the
fancy
complicated
stuff
then,
and
it
may
be
like
we're
just
trimming
every
txt
what
was
freed
up
to
extreme
then
we
get
you
know
better
than
baseline,
but
not
nearly
as
good
as
the
full
implementation.
O: We're not running this code yet in our source, because we're waiting to get some clarity on where things are headed. I don't know if anybody's available from Nexenta, or maybe we could ask again on the mailing list, because I assume they would have the most long-term experience with its performance. But I can't really say what the different trade-offs are, because we're not running it yet.
L: From my standpoint, a lot of modern SSDs don't need trim to keep their performance up anymore. While it used to be vital to trim SSDs to keep their performance from absolutely tanking, these days it's more about reclaiming space than it is about performance. So from an enabling-trim-for-performance standpoint, you really only stand to lose, because there are a lot of devices, and this is just in the locally connected block device arena, that get very unhappy if you start to trim them too aggressively.
L: As soon as you start talking about network block devices, like a SAN or something: well, VMware turned it on for a little while and then went, oops, we'd better turn that off, because you can really make SANs unhappy if you issue trims to them. So this, to me, isn't about performance at all; it's more about reclaiming space, and from that aspect I would be very conservative if I had...
P: I don't know that I would say trim is not a performance issue. I know that in the testing we've done on our Intel NVMe devices, and we did it with ext4 and we did it with ZFS, having trim enabled lets you keep consistent performance over a much longer run, basically steady state. Without trim, after the device has been overwritten once, your performance starts to decline: it still has reserved space in the FTL, so it can find erase blocks and things, but it has to do a lot more work to do that. If the filesystem is telling it that space is unused, it doesn't need to do work to reclaim it, right? So it is still a performance issue, even with the fastest devices today.
K: It's a fairly significant impact for me; this is from experience at Apple. The big issue with trim that we ran into (this is several years old, so things have changed) was that trim has particular overheads: you need to be smart about how much you issue and how you integrate it with the rest of the I/O flow. I think Apple limited it to eight simultaneous commands at a time and queued any further ones, waiting for other I/O to be done.
P: Sorry, I was just going to say there's definitely a danger. I know ext4 has the online trim, and my understanding is that not many people use it. We do have aggregation; the initial approach of trimming everything that's deleted was a disaster, and so in ext4 there's a tree that aggregates things like that. But even that is still, I mean, devices have gotten better, but I think the more common case...
P
You
need
yeah,
I,
guess
that's
true
too
I
mean,
but
if
you're
very
close
to
the
edge,
then
you
know
you
want
to
free
up
the
space
to
avoid
waiting
on
the
erase
block
hacking
right.
But
if
there's
lots
of
space,
then
you
do
it
periodically.
You
get
minimum
performance
impact
right,
and
so
there
is
a
question
in
my
mind
whether
the
online
is
beneficial
versus
just
having
a
periodic
trim
out
unused
space
in
the
file
system
to.
C: To give some historical information for people: we were behind the original FreeBSD trim work, because of performance issues that we were actually seeing. Obviously the disks and vendors have improved significantly in that space, but in all of our deployments, even since those improvements, we've found online trim was beneficial in 95% of cases. Now, I don't know what other people's experiences are, but it's been defaulted to on in FreeBSD.
Q: I can say from my own experience that over the past years, consulting for FreeNAS users, a number of times I recommended disabling trim to users of various consumer SSDs, because in many cases, when rewrites were combined with synchronous trim, FreeBSD performance degraded and disabling trim actually helped. On the other side, just in recent days our performance engineers ran some tests, maybe not very deep, but at least with a couple of SSD stripes, and for the Western Digital SSDs in the test, trim gave no visible benefit, probably due to an efficient enough FTL in the device. But on much cheaper Micron SSDs, trim actually improved the situation a lot, at least in the scenario of sequential file rewrite; maybe on random access or other workloads the results would be different. But at least on sequential rewrite, trim significantly improved performance, or at least reduced the degradation which the Micron SSD otherwise showed. So I would definitely like to have the ability to do trim.
O: The thing with the Nexenta code is, if we go down that path, we have the choice of all these things, right? We don't have to use it; we can have it used on an ongoing basis; we can have it used when you initiate the command, however you like, either by hand or in a cron job or whatever. So we have all these choices if we go with the Nexenta code, but right now we have no choices.
B: Yeah, so to be clear, my request for performance information was not that it needs to improve performance in all situations. I totally agree, Jerry, that the feature set of that code is large. I guess I was looking for somebody to say: the feature set is large, and in this use case, on these drives, we really need it, so the implementation complexity is justified, at least in some cases.
K: A question: would it be possible for someone to come up with a short write-up of how the Nexenta and the FreeBSD implementations of trim in ZFS work? Then we can try to figure out a way to compare them and decide which one, if either, would be the way to go.
A: My point is about the added complexity cost and so on. I know the FreeBSD version of trim has been the source of five or six bugs as we've integrated new features like device removal: there were assertions in there that said a device would only ever receive a read or a write, and it didn't understand what a free operation was, and there were a couple of other places where trim had assumptions that didn't apply elsewhere. I'm sure the Nexenta version has the same kind of complexity, and I mean, Matt's point...
J: To your point about performance and complexity, Matt: yeah, we definitely have had situations where the trim functionality was needed to make a pool perform properly. It depends a lot on the exact hardware being used, but we have had people test the pull request and verify that, yes, with this my performance doesn't crater after several full drive writes, and without it, it does.
J: Okay, yeah, we've been carrying it for a long time, and it's probably our most requested bit of functionality. People keep picking it up and testing it, and we get a lot of anecdotal information back that yes, this does solve my problem. But the test space is so large, right, because it depends on exactly the bit of hardware you're testing with.
C: It was very specific to the hardware, and not only the hardware, it was very specific to the access patterns you were pairing with it. But it did make the difference between 'I need to take these SSDs and hardware-erase them every three months to get them back to a reasonable amount of performance' and 'we can just forget about it': once you've got the FreeBSD trim implementation in, the disk gets trimmed as it runs, and 99% of the time, or 100% of the time until we actually started losing disks, it was perfect.
L
There's
no
doubt
in
my
mind
that
the
older
the
SSD
is
or
the
lower
end.
It
is
the
more
trim
helps
it.
I
mean
in
in
2007,
Erin
tell
710
or
310
was
unusable
without
trim.
It
would
turn
into
a
disk
drive
very,
very
quickly.
You
know
2016
or
2017
era.
As
SSD.
It's
almost
more
trouble
than
it's
worth.
You
have
a
greater
chance
of
walking
up
the
firmware
because
you
sent
a
too
big
a
trim
than
getting
a
performance
improvement
out
of
it.
You.
L: Those are going away anyway as NVMe takes over, but my point is that, definitely, whatever implementation we use, you need to have the option to turn it on or off. So I think the complexity is warranted, if you're going to have it, because you're going to want to be able to do both online and offline trim.
K: One of the reasons you would want trim with an SSD, with what we do: if you're doing writes constantly, even if they're sequential, it's still going to have to remap, especially once you start filling up the device and reusing blocks. So the trim, then, is to allow it to do that more quickly.
G: One of the other features of, I think, both trim implementations is to delay trimming a block for a little bit of time, so that if we do overwrite it, we don't have the waste of trimming it and then immediately writing it again; we could have just overwritten that LBA and the flash wouldn't have had to do anything.
B: My impression was that the goal of that logic was more to aggregate the trims, so that if you free adjacent things in subsequent transaction groups, you get to issue one big trim. But that's also where a lot of the complexity comes in: we need to keep track of things that were freed in the past, and then, if a range is allocated later, we have to remember not to trim it out.
B: Yeah, so, kind of. Sorry.
C: I did have an additional pointer: one point is around the actual device support. In the FreeBSD stuff, where it comes down to the device, we did end up having to manually ensure that individual trim requests didn't get too large, because the performance on various drives just plummeted. So, just to make people aware: the OS at that layer needs to be able to say, you know what, I'm not going to issue a trim greater than this size.
B: Yeah. So, I'm going to give one last chance: if anybody that I cut off would like to continue their thought, go ahead; then I'm going to conclude. Okay, so to summarize trim: it sounds like there is a need for both the periodic trim and the ongoing trim, perhaps for different needs.
B: The code does that, and people like that. I think my concern about the complexity still remains, but it's good to know that it does improve performance in actual test cases, so that's good. I still think that keeping track of all the trims across a bunch of txgs and whatnot is a lot of complexity that may or may not be needed to achieve that good performance.
B: I'm glad that we're a little bit more on the same page as to what functionality the code actually offers. I think in the future, when we're discussing these projects, it might make sense to have somebody lined up to give an overview, since obviously not everyone is on the same page about what functionality there is in the features that are under review, just so that we can get on the same page more quickly next time, rather than kind of haphazardly.
B: All right, great. Well, we're on time and we stayed on the agenda, so I'm glad about that. Next time we'll talk about redacted send/receive and the default pool features that we started discussing last time. Also, just so I can remove this from the agenda: Allan asked that anybody using LSI HBAs with a lot of SSDs or NVMes on FreeBSD contact him; his email address is in there, and he's also on Slack, Allan Jude. Was there anything you wanted to add to that, Allan? No.