From YouTube: January 2021 OpenZFS Leadership Meeting
Description
At this month's meeting we discussed: release roadmap; RAIDZ expansion; marking vdevs non-allocatable; zpool import performance; drive vendor SCSI priority bits.
meeting notes: https://docs.google.com/document/d/1w2jv2XVYFmBVvG1EGf-9A5HBVsjAYoLIFZAnWHhV-BM/edit
A
We didn't have anything on the agenda, so today is going to be a pretty open, discussion-led meeting. So I will open the floor to folks that have questions or projects that they want to talk about.
B
What is the roadmap for OpenZFS releases: do you have some perspective? What features should be available in the next release, or this year? It is of interest to me, because we are trying to port OpenZFS to DilOS, and I'm interested in your perspective on when you are planning to release something.
A
Yeah, so Brian would be able to answer that better than I; he's not on right now, that I see, so I'll do my best. The main projects that are in flight are, well...
A
I mean, dRAID is the main one. That's in master now, and the next release, 2.1, will have it, so that's kind of the big exception to our guidelines about minor releases not having new features. Another thing that I know Brian was hoping to get into 2.1 was the forced pool export, so being able to export a pool that's hung because a device is gone or something like that. Yes, when we'll suspend it, right, yeah, exactly.
A
And we would expect those, I think, on a roughly quarterly basis. And then we're hoping to get OpenZFS 3.0, the next major release, out hopefully by the end of the year. That would be the fastest major release ever in the history of ZFS on Linux, so, you know, I wouldn't count on that. But the goal is to get it out this calendar year; realistically, I think the cadence has been more like 18 months than 12 months.
B
Okay, thanks for your update. And what about RAIDZ expansion?
A
Sure. So, actually, before that: it looks like Brian just joined, so let's just ping him on that, ambush him on that, to make sure that I...
A
...accurately described it. So, Igor was asking about the release plans, and I mentioned 2.1 having dRAID and hopefully the suspended pool expansion, sorry, export, the force export, and that coming out this quarter.
A
That's the plan. Okay, and then roughly quarterly minor releases that would have mostly just bug fixes after that.
A
And then 3.0, aiming for the end of this year, but maybe, realistically, you know, the first half of next year.
F
What about the Direct I/O implementation? What's the status of that? I've dropped off of it the last few weeks.
E
Yeah, so that's still in progress; we want to get that into the 2.1 release as well. Brian Atkinson was going to refresh it again. There was some feedback on it, and I think it's moving in the right direction. We got some feedback on the FreeBSD side as well, and it's still moving forward. That's what I'm helping him work on for the next few weeks or months, just to get it in. So, yes, we'd like to get that into 2.1 and get it out in the first quarter.
A
Yeah. Anything else on those releases before I talk about the RAIDZ expansion?
A
Okay, so Igor, you also asked about RAIDZ expansion. I'm...
A
...working on that, and I have continued to make progress. I gave a bit of an update at last month's meeting, which I think you probably saw; for other folks who want more details, I went into a lot of detail last month.
A
I didn't make my goal of getting a new PR, a beta release of it, out by the end of last year, but I am still working on it and actually putting time in on it, so expect to see progress on it soon. Right now, specifically, I'm working on breaking out some of the ABD-related changes to some of the ABD infrastructure.
A
Oh yeah, I mean, I imagine you might have to make some changes there, but it should be very straightforward. What I'm doing is making it so that the ABD struct itself can be allocated by...
A
I don't remember, but it's one of the two of those, so you can look at either the FreeBSD or the Linux code and, you know, copy that.
A
Yeah, sure thing. Though I have to ask you: your port for DilOS, it's based on the OpenZFS code, right, not the illumos ZFS code? Is that right?
B
Let me explain. At this moment, the ZFS code on DilOS is specific to OpenZFS, but I have not ported all features yet, because I tried to do it step by step, and with covid I was missing some parts, because I was infected.
B
I will try to merge some additional features, like dRAID and the persistent L2ARC and a little bit more, and I'm interested in RAIDZ expansion, because it can be of interest to our business: our company works with big storage, and we would like to port first the features that are of interest to our business.
B
You will see some additional changes to the ZFS tests code, so that we can use the same userland parts on different platforms. It is our goal to make the DilOS platform easier for ports of userland applications from the Linux/Debian mainstream, something like that.
B
And at this moment, the ZFS code on DilOS and on illumos is different, because I have ported additional features that are missing on illumos, and illumos contains, for example, the persistent L2ARC, which I have not pulled into DilOS yet, because I am trying to stabilize some changes; we have to be more stable in the ZFS-specific code for our business.
B
I want to use the OpenZFS code on DilOS and try to prepare a special build environment where we can make changes only in the ZFS code, and use an additional build bot where we can test additional changes from OpenZFS on DilOS. Yes, it's a long-term project that I intend to work on. That is why I am asking you about features and perspectives, to be more informed about the direction I want to take, so that our specifics are more compatible with OpenZFS.
A
Yeah, so long term, using the OpenZFS code base on illumos, that sounds really cool. I know that Josh Clulow was also interested in that; it sounded like it was getting more real in his mind the last time I talked to him about it. So you might want to, you know, coordinate with him if you're not already.
B
We would have to find a common way; that is why I have my own additional way with ZFS, because we have not found agreement on what way we can use for illumos and DilOS.
A
All right. But on this specific stuff of, you know, wanting to get RAIDZ... yeah.
A
For the RAIDZ expansion stuff, the ABD changes will be easy to port if you have the abd_os.c / abd.c breakup, and you have, you know, your own illumos abd_os.c. Then I think the changes I'm making will be straightforward.
A
Yeah, gotcha. And then the other big thing that you would want to port in order to get RAIDZ expansion would be dRAID, because a bunch of the groundwork that RAIDZ expansion builds on was laid in by dRAID, and I had to do that rebase, and it was a lot of work.
B
I understand. I need to try to port dRAID first, and look at RAIDZ expansion a little bit later, because...
A
Yeah. So I'm behind schedule, but, you know, at least you still have something you can do to get yourself closer to where you want to be by porting the dRAID stuff over, and that'll probably keep you busy for a few days at least.
B
Thanks for your information, and I hope an additional platform can help find additional issues. We have found an issue with an infinite loop in the receive code when a checkpoint is present; I will provide additional information once we minimize the reproduction steps so the issue reproduces with small drives.
B
For example, we are using for testing a storage of about one petabyte. That is why I can find additional issues that are hard to find with small drives: some issues are not reproducible with small data, and can be reproduced only with big data, about 5000 terabytes or so.
B
I can try to help with testing, because we have our environment where we can test some specific things. That is why I am interested in communication with OpenZFS, where we can report some issues, try to understand them and fix them, and OpenZFS will be more stable. We are all interested in that.
B
Another issue, for example: if you are using local drives connected to an LSI HBA, you can see specific device behavior where the drive can respond; but if you use a network device, for example over iSCSI or something else, you can see network delays, and you will be wondering how they can impact how ZFS works and where we need some retry operations.
B
Also, an additional issue with a network drive: if you have a stripe of network drives and one drive is offline, when you try to do an import, it retries that drive over the network again and again. So an additional feature would be of interest to me: a way, when doing the import, to specify a particular vdev and move it to the offline state.
H
I'd find that handy as well. I ended up removing the /dev node to hide the disk temporarily, so that, with the dying disk hidden, the rest of the pool could import with that disk offline. So a way to exclude a disk like that would be helpful.
A
Yeah, that's interesting. It kind of ties in with some of the ideas of zpool properties, sorry, vdev properties, that have been discussed, and we have some work that, I think, Mark Maybee...
A
...to mark devices as ones that we don't want to allocate from. So you could think of this maybe as sort of an extension of that, although being able to do it at import time is another special thing and whatnot. So it's not exactly the same, but it's kind of closely related. Yeah, exactly.
I
That's interesting, Mark. Could you elaborate on that a little bit? Because I have implemented maybe exactly what Matt just said: a device that isn't used by allocation, well...
I
It doesn't get allocated from, and I've added assertions in the ZIO pipeline and so on, so that we crash if we actually try to use it. The nice thing is that we don't do these assertions for the vdev label space, so all the normal zpool management stuff works, and that code is pretty much done. I just felt it was too hacky to upstream, but maybe...
A
So that sounds very similar to what we need to do: just mark devices as ones that should not be allocated from. For a little bit of background information, the goal for us is that we want to, we may want to...
A
...the customer may want to remove several devices with top-level device removal, and we want to mark all of them. Because today, what you do is: you remove the first device, then you remove the second device, then you remove the third device. While you're removing the first device, it's copying data onto, you know, the fourth and fifth devices, where you want it, but also onto the second and third devices that we're just trying to remove anyway. That's one problem, and then another problem is...
A
So the idea is that when you start this whole procedure, you would mark the second and third devices as "don't allocate from them," and then you would remove the first device. That way, when you're trying to do the whole thing, you get all the error checking, and you also get better performance and such, because we aren't trying to allocate on those devices. There are also some other corner cases where you might want to use this to compensate for, say, badly performing devices, or super fragmented devices, or things like that.
A
...the class, to kind of move its metaslabs off to the side, where we're not going to allocate; it sounds like a way to implement that. There's also additional logic that we need, like the space accounting, to make sure this will not run you out of space, kind of similar to what we do with device removal.
A
So, in the current implementation that's in the PR, we don't try to move them between classes; that is a lot simpler. So, yeah, maybe that's not the right way to go about it, in terms of moving metaslabs between classes. I have to check the design doc, but I don't think we had envisioned doing that.
A
I think there's another mechanism for removing whole vdevs from the allocation rotor, which happens when a vdev is offline. I don't think that code is very well exercised, but I think it exists. Serapheim, do you remember any of that? You had a little prototype of this, right?
A
Again, for the noalloc stuff, did we do that by changing the metaslab class, or by some other mechanism? What was the mechanism that we used?
G
We are just taking the metaslab group out of the rotor, so yeah, it wouldn't be part of the normal class, but yeah.
A
We would never visit that group; yeah, we just wouldn't. So it's kind of a different way of doing it from what Christian is talking about, but we should definitely take a look at your code and see if there are any better ideas there than what we have. Yeah, definitely. Although I'm not really convinced that what I have is a better idea, because this is the first time I heard that metaslabs still carry the attribution to an allocation class; my impression was that it's something that is only relevant before the allocation is done, but apparently I misunderstood. So, yeah, I don't know; I can share the code and somebody can take a look. Okay.
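The mechanism described here, taking a metaslab group out of the rotor so the allocator never visits it, can be sketched as a toy model. This is not the actual OpenZFS allocator; the names `Rotor`, `MetaslabGroup`, and the `allocatable` flag are invented for illustration:

```python
# Toy model of a round-robin allocation "rotor" that skips vdevs marked
# non-allocatable, in the spirit of the noalloc discussion above.
# NOT the real OpenZFS allocator; all names here are invented.

class MetaslabGroup:
    def __init__(self, vdev_name, allocatable=True):
        self.vdev_name = vdev_name
        self.allocatable = allocatable  # hypothetical "noalloc" flag (inverted)

class Rotor:
    def __init__(self, groups):
        self.groups = groups
        self.index = 0  # current rotor position

    def next_group(self):
        """Return the next allocatable group, rotating past noalloc ones."""
        for _ in range(len(self.groups)):
            group = self.groups[self.index]
            self.index = (self.index + 1) % len(self.groups)
            if group.allocatable:
                return group
        raise RuntimeError("no allocatable vdevs left")

rotor = Rotor([
    MetaslabGroup("vdev0"),
    MetaslabGroup("vdev1", allocatable=False),  # e.g. pending removal
    MetaslabGroup("vdev2"),
])

# Allocations rotate over vdev0 and vdev2 only; vdev1 is never visited.
picks = [rotor.next_group().vdev_name for _ in range(4)]
print(picks)  # ['vdev0', 'vdev2', 'vdev0', 'vdev2']
```

In this sketch, taking a group "out of the rotor" and setting a noalloc property are equivalent: both make `next_group` pass over the vdev without ever handing it to the allocator.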
H
And I will try to upstream it. I have working code for the vdev properties. Most of the properties it has right now are read-only, but there are a couple of read-write ones, and, you know, noalloc was always what I envisioned as the way to control it.
H
You could just set the property on the vdev, saying "hey, don't allocate from this anymore," for your use case of the vdev removal stuff. But also, like you're saying, if you have a really fragmented vdev and it's one of, you know, 40 vdevs, being able to say "just leave that one alone" would be useful.
A
Yeah. I think, the way we designed it, if you get that PR out before then, we can take advantage of it for the user interface. But if not, then we'll have, like, a new subcommand or something, and then we can map that back onto the properties when they integrate.
H
Yeah, so I'll try to get that done ahead of time, to save you guys a bit of effort on having to build an interface for it. Cool.
B
Yeah, we can see an additional issue, with the performance of the pool import command when we have a scrub or receive in progress, if you try to import the pool after a crash. As a workaround right now, we are using an external variable with which we can stop the scan operation, import the pool, and start the scan operation again.
B
I mean, for example, if you have an active receive or scrub operation on the pool, and you have a crash.
A
It should be proportional to the number of TXGs that go by before the pool is, you know, fully available for use; it should be like the TXG timeout, 30 seconds or whatever, times the number of TXGs, and not proportional to any sort of performance. But it sounds like my understanding may be wrong, so something else is going on there that we should investigate.
B
Yeah, your understanding is wrong, because if you have a big pool of about one petabyte, with eight thousand terabytes of data, with compression and deduplication, you can wait about one hour or more on the pool import, even without the scan.
A
So why don't you file an issue, and we can discuss it there, to get more details about the numbers and try to understand why it would take hours to do that.
B
Yeah, but you cannot reproduce this issue if you have a small amount of data; you can reproduce it if you have big data and some network devices. For reproducing this issue, I can probably provide additional information, or access to our test environment where you can try to play with it.
A
My suggestion is to get some additional data during the import, like, you know, using DTrace to see what's being read, or things like that.
A
That's the question I was trying to ask. I'm guessing that this only happens when you have a scrub in progress, and it doesn't happen if you are just doing that couple-of-TXGs check. But another thing that I think could be at play here is, like you said, looking at all of the datasets; so it would be interesting to know how many datasets are on Igor's system: file systems and snapshots and zvols.
E
We have seen similar issues like this, and we put some minor mitigations in place: there's a delay before the scrub starts again during import; it delays, I don't know, 10 or 20 transaction groups, or something like that, just to keep it out of the way initially before it starts. I think we might have made it tunable; you could try to increase it for testing. But nothing quite this severe.
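The mitigation described, holding the scrub off for some number of transaction groups after import, amounts to a simple gate on the TXG sync path. A minimal sketch in Python; the tunable name, class, and method names below are assumptions for illustration, not actual OpenZFS module parameters:

```python
# Toy model of delaying scrub restart after pool import by N TXGs,
# as in the mitigation described above. All names are invented.

SCRUB_IMPORT_DELAY_TXGS = 10  # hypothetical tunable: wait this many TXGs

class Pool:
    def __init__(self):
        self.txg = 0
        self.import_txg = None
        self.scrub_running = False

    def do_import(self):
        # Remember the TXG at which the pool came back online.
        self.import_txg = self.txg

    def sync_txg(self):
        """Advance one transaction group; maybe let the scrub resume."""
        self.txg += 1
        if (not self.scrub_running
                and self.import_txg is not None
                and self.txg - self.import_txg >= SCRUB_IMPORT_DELAY_TXGS):
            self.scrub_running = True  # scrub stays out of the way until now

pool = Pool()
pool.do_import()
for _ in range(9):
    pool.sync_txg()
before = pool.scrub_running   # still delayed after 9 TXGs
pool.sync_txg()
after = pool.scrub_running    # resumes at the 10th TXG
print(before, after)  # False True
```

Raising the delay constant for testing, as suggested above, simply widens the window in which import-time work proceeds without competing with scrub I/O.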
B
If we have a scrub in progress, with the scan or something else... I think it's specific to our use of the vdevs, but it can show where we can see some issues with imports.
A
Yeah, it sounds like something is going on there that's causing it to do an amount of work that is not, you know, bounded by those time limits, obviously. So we need to figure out what that work is that it's trying to do.
A
Yeah. And then, Muhammad, did you have a question? I see you raised your hand.
J
Yeah, hey guys. I don't see mav on here, Alexander. He and I were exchanging some emails, and he thought it would be good to bring this up to this forum and see if there are other people interested.
J
What he was trying to do was to use the ATA and SCSI priority scheduling that is part of the spec to do some background work and offload some of it at the device level: send commands with a specific priority, so that not all commands are treated equally in the system. He was having some issues with some drives, seeing inconsistent behavior, and he wanted to get an explanation.
J
So we dug a little bit deeper, and what we found out from the firmware team here at Seagate is that it totally depends on the implementation of that product and the firmware being used. Most of the time, what we do is work with the ecosystem. I won't name names from the customer side, but, you know, the West Coast people that are huge and taking on our nearline drives.
J
They basically request and test a certain firmware a certain way, for the way they're using the drive and the priority bits in both SCSI and ATA, and then the firmware is released to them. If we want to put it in a generic firmware, the firmware team was saying they need to know a little bit more about how the ZFS ecosystem would be using it. The generic case isn't implemented, and that's why it's not turned on...
J
...if you just go buy a nearline drive from Amazon. And so Matt was like, "hey, yeah, I'll definitely be interested in talking to you" about what I'm trying to do and what I'm seeing in the drives that he had. And the question to this forum is: if this is of interest to the general body, I would much rather have one discussion where we set some agenda, and then I'll bring in the core firmware folks.
J
So if anybody is interested in being part of that discussion, you can send me an email, send me an IM, or whatever. As for mav, I promised him that I'd come to this forum and open it up, see if I can get some interest, so I can just have this one meeting rather than having multiple ones.
J
So there are some priority scheduling features that are part of the SCSI and ATA specs, both T10 and T13. But the thing is, they are not always worded the same way, so some of it is open to interpretation; but also, it's not implemented in all the drives. So you have to go figure out: hey, do my drives actually support it? And if they do, then you actually send that command. But then it differs even between us and WD and Toshiba and others.
A
We'd have to know that you're using those drives. Like you said, your customers, who are designing a whole hardware and software stack, are buying specific drives; they set it up because they know that's what's going to happen, right? But for a generic software solution like ZFS...
A
You know, without any special knowledge, it's hard for us to take advantage of that, because it's like: well, why would I set these special bits? I don't even know what's going to happen when it goes down to the drives, because I don't know what kind of drives are there. But in theory, I guess we could build in something like: we query the drives, and if the drives are running the right firmware, whose behavior we know, then we take some special behavior, right?
J
So all you would need to do is look at the IDENTIFY data, or whatever bits, and say: oh yeah, that feature is enabled. That means that drive, regardless of which firmware it is, would actually behave the way the spec says it would behave, from that perspective. But I think there's some ambiguity in the spec, and then there's some other stuff that needs to happen. So I was offering a forum.
A
It would be great to get just a high-level description of what the drive can do based on that, and then we can think about it. Like, let's just assume we knew that we were always using these drives, or that a significant portion of the time we're using these drives.
A
How would we take advantage of that, and how much benefit would we get? And then try to figure it out, because there are two problems. One is: how do we know that we're in this situation, and is it worth knowing that we're in this situation? And then it's like: okay, if we are, then what are we actually going to do about it? How is ZFS going to take advantage of that information?
K
Oh yeah, sure, sorry, I got delayed; I just connected, I missed the calendar event. So, as I've written in my email, and as I said at a previous monthly meeting, ah, two months ago, my goal was to separate different priorities of requests.
K
In particular, the biggest step was to separate background activities, such as scrub, resilver, and others, from interactive ones initiated by userland. The first have acceptable latencies of up to minutes potentially, or, who knows, hours, forever, as long as they do not cause command timeouts, while the second still have to be in seconds and milliseconds. So obviously it could be...
K
It could also potentially be interesting to separate even synchronous requests from asynchronous ones, but that is more complicated, since, I think, asynchronous reads may potentially be promoted to synchronous: if we first issued a prefetch request, it then has to be escalated to an actual synchronous request when the event for which we prefetched has arrived, when the actual application request has appeared, and that makes it maybe complicated. But for writes we really have two kinds: one is synchronous writes, and the other asynchronous writes. I don't know whether the drive can differentiate...
K
...those two within its write cache. Is that generally possible? Because what I saw with WD is that, as soon as a request hits the write cache, all requests become the same; it seems like that flag is lost.
K
That could be interesting potentially, but that topic of cache on the card is more applicable to RAID cards, and we, as...
K
A RAID card? No. If there is a significant write cache, and if it's battery-backed, it's a more significant investment, practically a different class of cards; none of our systems use that. I agree that that information could be useful for a card, but hardly would buying an additional card with additional memory...
K
...compensate for things that could be done by the drive directly, since the drive is practically the only place in the system that knows where its head is right now, and in what order requests should be scheduled and executed. To better handle that, the drive should have as deep a queue as possible; but as soon as we submit more requests to the drive, latencies start growing, and we have no control over those latencies. Supplying priorities to the drive is a way to say at which point latencies are acceptable and at which they are not.
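The separation being described, background I/O (scrub, resilver) tolerating long latencies while interactive I/O must stay fast, can be sketched as a two-class dispatcher where high-priority commands always go first; the drive-side equivalent would be tagging commands with the SCSI/ATA priority bits. A toy model in Python, with all names invented, not any actual ZFS or driver API:

```python
# Toy model of two-class command scheduling: interactive (high-priority)
# commands are dispatched ahead of background ones, mirroring the intent
# of the SCSI/ATA priority bits discussed above. All names are invented.

from collections import deque

class PriorityDispatcher:
    def __init__(self):
        self.interactive = deque()  # user reads/writes: latency in ms/seconds
        self.background = deque()   # scrub/resilver: latency may be minutes

    def submit(self, cmd, high_priority=False):
        (self.interactive if high_priority else self.background).append(cmd)

    def dispatch(self):
        """Pick the next command: interactive first, background otherwise."""
        if self.interactive:
            return self.interactive.popleft()
        if self.background:
            return self.background.popleft()
        return None

d = PriorityDispatcher()
d.submit("scrub-read-1")
d.submit("app-read-1", high_priority=True)
d.submit("scrub-read-2")
d.submit("app-read-2", high_priority=True)

order = [d.dispatch() for _ in range(4)]
print(order)  # ['app-read-1', 'app-read-2', 'scrub-read-1', 'scrub-read-2']
```

The point of pushing this logic into the drive itself, per the discussion, is that a host-side dispatcher like this one must keep the queue shallow to bound latency, whereas a drive that understands priorities can keep a deep queue and still serve interactive commands first.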
B
If you compare using a single drive with using RAIDZ, or several RAIDZ vdevs in a stripe, you can see different IOPS going to the individual drives, depending on how you split them.
B
For example, if you have a pool with one RAIDZ, you can see one latency; if you use different RAIDZ vdevs striped in the pool, you can see a different latency for operations, because the first write operations will be moved to the faster RAIDZ vdev...
B
You are talking about priorities to an individual drive; I am trying to say it can be different in different configurations.
K
Okay, okay, but to each specific drive we're sending some mix of commands. Some of those commands are high priority, some of those commands are low priority, but the drive has no idea about that; the drive executes them all evenly, while obviously we don't need them to be treated evenly, no matter how we distribute load between drives.
K
There are still concurrent I/Os from one side and from another. I am trying hard to reduce that concurrency to keep latencies in check, but reducing queue depth reduces performance of the drive, since only the drive knows how to schedule it properly. So I'm trying to bring the knowledge to the point where it's usable.
A
I agree with what you're saying, Alexander. I think we're almost at the ending time of our meeting, so why don't we try to wrap up this discussion. And, Muhammad, what forum are you going to use to gather folks together? Is that going to be on one of the mailing lists, or on Slack?
J
Yeah, that's what I was looking for your guidance on. I'm just trying to get an agenda together, so that my folks know what they need to come prepared with, and so that somebody from your side, say, even if it's just Matt, can explain the actual need and how ZFS would actually use what is currently out there. That's the kind of guidance I'm looking for. So maybe it's just me mailing the mailing list, saying: hey, this is the agenda, if you're interested.
A
Yeah, I think it might help if you have some information that you can send out beforehand about, you know, the details of what's going on under the hood, and maybe Alexander can add some ideas about how he intends to use it with ZFS.
A
Then you might get more folks interested in attending the meeting, if they can see some of the details.
J
I'm trying to figure out which way would be better, because I think if it's too big a forum, then it's probably not going to go anywhere. So I'm looking for, hey, the four or five people that are interested in it.
A
Yeah, I think that if you coordinate it on Slack, that's probably the narrowest focus. The folks who are on this call are probably mostly all going to be on the OpenZFS Slack. So, you know, you could...
A
...and then, maybe, once you've decided what to do, send out an email just to catch anyone else who might be interested. I would do that on the developer at OpenZFS mailing list, since that's the one most focused on developers, rather than, you know...
A
All right then, let's... I think we're at our time.
A
The next meeting will be February 2nd, and it'll be at the later time, one o'clock Pacific. Thanks, everyone.
A
With nothing on the agenda, we somehow managed to fill the hour with useful discussions, so I think that's a sign of a healthy community. I enjoyed everyone's ideas and discussion, and I hope that we're able to get to the bottom of some of the strange import performance stuff that Igor brought up. So let's please continue that offline. Thanks a lot, thanks.