From YouTube: VDEV Properties by Allan Jude & Mark Maybee
Description
From the 2021 OpenZFS Developer Summit
slides: https://docs.google.com/presentation/d/10JXk6Rmvee86eoTLr-SNy13pwKknO7UU
Details: https://openzfs.org/wiki/OpenZFS_Developer_Summit_2021
A
We've done work previously in ZFS development, and today we're going to talk about vdev properties. As anyone who's used ZFS for a while knows, properties are a great way to express administrative intent: they're how the administrator tells ZFS how it should manage things, whether that's what compression to use, what the quota should be, or whatever other aspect of the file system, volume, or pool you're trying to control. The user interface for properties is also very rich, letting you extract data from those properties both as human-readable output and as nice machine-readable output that you can hook up to a metrics system or something to that effect. After working with that for a while, I decided that vdevs should have properties in that same vein, so that you can control them and also get information out of them in both human- and machine-readable ways.
A
The first thing I found when I started looking was that in the struct vdev that each vdev has, there's a whole slew of counters that provide a lot of really interesting data about how many reads, writes, frees, and other operations have been done on that vdev, and those even percolate up to the top-level vdevs. So there's a lot of useful stuff there, and even more besides.
A
There's more built in that's not necessarily exposed, like the enclosure path or the physical path to the device, rather than just the vdev's name. By exposing all of those, you're now able to get all that information about one or more vdevs in this nice human- and machine-readable output. Then we added some writable properties, so you can actually control how the vdev works, and then user properties, so you can store whatever arbitrary key-value data you want about a vdev.
A
In my case, I actually store the partition scheme as a property, so when I'm replacing a disk, the automation can pull that out and apply the same partitioning scheme. The other thing I've always wanted: there's a lot of data in the zpool status output, including a bunch of counters for each vdev about how many read, write, and checksum errors it's had, but the output format of zpool status is not very conducive to machine reading, so exposing those as properties helps there too.
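As a minimal sketch of that kind of automation (the pool name "tank", the vdev name "da1", the user property "org.example:partscheme", and the exact error-counter property names are illustrative assumptions, not taken from the talk):

    # Store an arbitrary key/value user property on a leaf vdev.
    zpool set org.example:partscheme=gpt-boot-swap-zfs tank da1

    # A disk-replacement script can read it back in machine-readable form
    # (-H drops headers, -o value prints only the value column).
    scheme=$(zpool get -H -o value org.example:partscheme tank da1)
    echo "partition the replacement disk as: ${scheme}"

    # The per-vdev error counters that zpool status shows can be queried
    # the same way, which is much easier to feed into a metrics system.
    zpool get -H -p -o name,property,value \
        read_errors,write_errors,checksum_errors tank da1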
A
So to start, the number of properties you can actually set on a vdev is pretty small. What we have hooked up so far is a comment, where you can put some arbitrary free text about the vdev, and the path. The path is normally something like /dev/sdb or a disk-by-id path.
A
But sometimes you want to change that, and you don't necessarily have the ability to export the pool and re-import it with the -d flag to force it to pick up the vdevs from a certain directory. Setting the path property basically lets you change the path in the zpool config that it will try first when opening that vdev, so the next time you import the pool, it should pick it up from the correct path. And then, in a few minutes, Mark will talk about the allocating property and how that works.
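A sketch of how those two writable properties might be used (device names and paths here are placeholders; the syntax follows the vdev-property form of zpool set/get described in the talk):

    # Attach some free-form text to a vdev.
    zpool set comment="bay 7, replaced 2021-10" tank sdb

    # Point the cached config at a more stable device node, so the next
    # import tries this path first instead of needing 'zpool import -d <dir>'.
    zpool set path=/dev/disk/by-id/wwn-0x5000cca264eb0000 tank sdb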
A
And then, as I mentioned, we have the user properties, so you can store whatever extra information you want about a vdev in there. And then for the future: one of the original ideas we had was moving some of the per-vdev queuing information, like the maximum number of asynchronous reads that can be outstanding, from what is currently a system-wide tunable into a per-vdev tunable. So if you had, say, a pool that mixed some hard drives and some SSDs, you could have different values for those different vdevs.
A
In addition to it being easy to get the device's GUID and its ID, you can get the physical path or the enclosure path, the serial number of the disk, its parent vdev and its child vdevs and how many children it has, how big it is, how much of it is free, how much of it is allocated, the bootsize property, and so on.
A
So you can query a random vdev and find out whether it's raidz3 or raidz1, how fragmented it is, how big it is, its current state, and whether it's in the process of being removed. Basically, anything that's kept track of in the vdev_t is now added as a property so that you can inspect it easily.
A
So if you do zpool get all, then the pool name, and then one or more vdev names, or the special keyword all-vdevs, it'll print out a bunch of this information. You can see I have a vdev here that's 64% full, it's got 512-byte sectors, and it's a 16-gig virtual disk.
A
You see the original device name, its actual devid, the path, and so on, and you see that its parent is the pool itself, so it's a top-level vdev, and you can see the number of read and write operations and so on.
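A sketch of those queries (pool and vdev names are placeholders, and the specific property names are examples of the ones mentioned above; exact spellings may differ by release):

    # Everything tracked for a single vdev, human-readable.
    zpool get all tank ada0

    # The same for every vdev in the pool.
    zpool get all tank all-vdevs

    # Or pick out specific read-only properties in script-friendly form.
    zpool get -H -p -o name,property,value size,free,fragmentation,state tank raidz1-0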
A
The properties are persisted in a per-vdev ZAP. That ZAP already existed; it was added as part of the device removal code, so vdev properties doesn't require any new feature flags. And top-level vdevs, so the interior vdevs like mirror or raidz, can have properties as well, meaning you can apply properties to the top-level vdev rather than just to leaves.
B
All right, so I'm going to talk about the allocating property, which is one of our writable properties. The allocating property, as Allan said, turns allocating off. It applies only to top-level vdevs, because those are the only allocating vdevs you have in your pool, and a vdev has to have allocatable space in order to be allocating or not allocating.
B
The default, of course, is on: all top-level vdevs can be allocated from by default. Setting it off prevents future allocations to the vdev, which means the remaining space on that vdev is now unavailable. This obviously is going to impact the total available space in the pool; I'll talk some more about that in a minute.
B
One of the constraints with this property is that you must have at least one allocatable device in your pool. This isn't about read-only pools; this is about a device allocating or not allocating, and if you were able to mark every device non-allocatable, that would pretty quickly lock up your pool.
B
So the primary use case for this new property is device removal. The current device removal process doesn't expose any notion of allocating or not allocating directly, but it does it implicitly. When you request a device removal right now, before this property, what happens is that the device removal code verifies that there's space available for the removal and then marks the device not allocating internally. So it's an internal trigger.
B
That allows it to set non-allocating at a per-device level, and then it removes the device by moving the contents of that device to the other devices in the pool. Then you could remove another device, et cetera, in a one-device-at-a-time sequence.
B
Now that we have this property, and this is the reason we have it, you can actually do this more efficiently. If you want to remove multiple devices, you can set allocating=off explicitly on all of those devices, and that will immediately verify that there is enough space to remove all of them from the pool. Then you can issue your removals one at a time for those devices and not worry about putting new data onto a device you're about to remove a little bit later.
B
So let's go through an example of how that works. Here's a simple scenario: you have a config with a couple of old four-terabyte drives, and you want to replace them with a bigger, faster eight-terabyte drive.
B
Let's assume each of your two drives is half full at the moment, and you want to replace those two drives with device c. The first thing you do is add your new drive to the pool, so dev c is our new eight-terabyte drive, and we have dev a and dev b as our two existing four-terabyte drives, each half full.
B
The first step is that you're going to replace your two drives, and the scenario I'm going to go through first is without the properties; this is the traditional sequence you'd go through right now if you were to remove two drives. When you issue the zpool remove tank command, you see that the device becomes non-allocatable, and you see that it moves its data from that device onto the existing devices.
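A sketch of that traditional sequence, with hypothetical device names, using the standard zpool add/remove commands:

    zpool add tank c        # add the new 8 TB drive
    zpool remove tank a     # a is implicitly marked non-allocating and
                            # its data is evacuated onto b and c
    # ...wait for that removal to finish...
    zpool remove tank b     # b's data, including whatever just landed on
                            # it from a, is now evacuated onto c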
B
Now we're going to do the same scenario, except this time we're going to apply the allocating property, the allocating=off technique. We add the drive and then we set allocating off for a and b. We can do this because we have drive c, so we still have capacity in this pool, and drive a and drive b are now non-allocating, but no data has been moved yet.
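The same scenario with the property, as a sketch (device names are hypothetical; "allocating" is the property name used in the talk):

    zpool add tank c                   # add the new 8 TB drive
    zpool set allocating=off tank a    # the space check happens up front,
    zpool set allocating=off tank b    # and no new writes land on a or b
    zpool remove tank a                # a's data is evacuated to c, not b
    zpool remove tank b                # then b's data goes to c as well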
B
So this is an optimization, and you can imagine it could get pretty significant if you're doing a larger transformation on your pool, if you're moving or replacing a bunch of drives with newer drives. Potentially you could end up copying a lot more data around if you have to do it piecemeal, so for configurations with many drives in them, this can be a huge optimization.
B
So why "allocating", though, instead of "read-only"? Mostly because this does not make a device read-only. It's still possible to modify a device that's marked non-allocating: ZFS still updates its labels, and you can even still do scrubs on it. In-place data manipulation is still possible; we've simply disabled the ability to allocate new space on that drive.
B
It also impacts the pool's space in a way that's different from what you would expect for read-only. The expectation for read-only would be that only the unused vdev space is removed, that only the unused space from that drive would be subtracted from the available space, meaning in our previous example, with half-full four-terabyte drives, you'd expect only the unused two terabytes per drive to be subtracted.
B
But more importantly from our perspective, this is because we're actually using this as a precursor to device removal, so we need to make sure there's not just the used capacity of the drive but a full drive's worth of capacity available. When we move that drive's contents, we want to make sure we have enough space for them elsewhere in the pool.
B
I think that's all I really had to talk about specifically for the allocating property. Allan, do you want to come back and talk about your last couple of slides here?
A
Yeah, so the question is: what else could we do with this? The things that I've thought about but haven't done yet: obviously, having support for channel programs to change the properties, like you have for the existing properties. A very commonly discussed idea is this concept of having a mirror read bias. If you have a mirror where one of the members is actually remote, over a metro link or something, you'd want to try to have all the reads actually come from the disk that's local and not the one that's far away.
A
But rather than just saying "in this mirror, prefer reading off the first member," you might actually want to say that each mirror member has the host ID of the machine that should prefer to read from it, so that when you actually fail over to the other side of the metro link, it knows: oh, I should actually read from what is now my local disk, not what was the local disk of the original machine before we failed over.
A
Another thing: currently libzfs does caching of all the pool properties, but does no caching of vdev properties. I don't know if that makes sense, especially since for pool properties there's kind of one list and there's only ever one pool per libzfs handle, so the amount of memory is fairly fixed; but with vdevs it could be a lot to try to cache, or to automatically load all those properties as soon as you open the handle.
A
So it might not make sense, but it might be interesting. We've talked previously about having inheritance for properties; so far there's not a use case where it's worth the hassle, but there might be. Another idea we had: currently the counters like the number of reads and writes and so on are not persistent. They just come from the vdev_t, so they only persist while the pool is imported. If you export the pool and import it again, the counters go back to zero.
A
There might be some counters where we'd actually want to persist those and keep track of them long term: how much has been written to this SSD, or how much have I been using this cloud storage, so I can compare it to my cloud bill. But we'd also have to balance how frequently we want to persist those, because we don't want to create more write load just by keeping track of how much write load there was.
A
But the other question, the kind of takeaway, is: what other settings could be vdev properties? There are quite a few tunables where we've started to think maybe these could be per-pool or per-vdev, but what other things about a vdev would it be interesting to be able to control? One that I've thought of is the special vdevs, the metadata allocation class stuff.
A
Being able to control, if I have multiple special vdevs, that this special vdev only holds metadata, and this one only holds small blocks, or this one only holds dedup, or whatever, with a bit more granularity than you have now. Or maybe, rather than one setting that says it takes this type or that type, it ends up being something like special metadata on or off, special small blocks on or off, or whatever.
A
Another question that I've brought up a couple of times is: how many properties is too many properties? Do we end up wanting some kind of flag on a property that's not necessarily hidden, just not displayed by default? So you'd have to explicitly get that one, or we'd have a second keyword after "all", like "really-all" or something, to display them. There are a bunch of them where maybe it doesn't make sense to display them all the time.
A
But if someone wants it, they should be able to get it. And then Powell had asked about being able to set vdev properties at pool-creation time. I could definitely use some input on what the command-line syntax for that should look like: if I'm specifying a pool that's going to have four different vdevs, and I want to set this property one way on one of them and a different way on a second one.
A
You can find me on Twitter or whatever if you have questions afterwards. Jorgen, would you like to ask a question?
C
A
Correct. The physical path is a bit more OS-specific; I think on Linux it ends up being literally the PCI path or whatever, so it's a bit different. But the path in this case is the /dev/whatever that you're opening.
C
Yeah, the odd one out here is obviously Windows, because there the path is cosmetic only: it's purely a nice string that the users see and it's never actually used; the physical path is where we hide the ugly thing. But on the other hand, I'm not sure I want users to be able to change, you know, the partition offset in the path name.