From YouTube: March 2021 OpenZFS Leadership Meeting
Description
At this month's meeting we discussed: vdev properties; zpool import; code cleanup.
meeting notes: https://docs.google.com/document/d/1w2jv2XVYFmBVvG1EGf-9A5HBVsjAYoLIFZAnWHhV-BM/edit
A
All right, let's get started with the first March 2021 OpenZFS leadership meeting. Because of the way the month falls, we'll have another meeting four weeks from today, which will also be in March.
A
There's not too much on the agenda for today, so there'll be time for Q&A or other discussions. Brian, do you want to just mention that we're still working on the release numbering planning?
B
Yeah, I don't have a concrete proposal for today, so there's nothing much to share. That's about it.
A
All right, so we'll aim to get that together for the next meeting. The other thing that's been on the agenda: I asked Alan to share some more about the vdev properties work that he had done. I thought that it was years and years ago, but then I saw it was only 2019 — the 2019 Developer Summit.
C
Or maybe that was the idea. We spent a bunch of time on it at the 2018 hackathon, and then in 2019 I presented what I'd come up with based on that.
A
So I think we mentioned this a couple of meetings ago, but Mark is maybe going to be working on the vdev noalloc stuff that we need for our product, and we basically wanted to figure out what the user interface for that should look like. It seemed like we could make it so that the user interface was the same as the grand plan for vdev properties.
A
So I wanted to have a discussion about what that user interface would look like. I was also curious about where the code is at, and whether it would make sense to complete the real vdev properties work in the short term, versus making something that has the same user interface for the one thing we care about and then retrofitting it back into the infrastructure when the whole thing lands.
C
Yeah, I'll give it another once-over tonight, but I should be able to push a pull request for it this week so that people can see it.

And hopefully that means it can go that way. One of the things that drove me to the idea of doing these vdev properties was actually talking with Matt about this queued vdev removal thing at the ZFS user conference at Datto in — what was that, 2017, was it?
A
Gosh, we've had that on the plans for so long. Yeah, and that was one of...
C
...the ideas. And then — I know it was Richard — someone put comments on the slideshow, based on the mailing list post earlier this week, about also using it when you have something like a metro setup, where you have mirrors but some of the disks are actually remote: being able to express that you should bias towards reading from the local mirror. He had a better way of doing it than I had thought up, which he put in the comments and which I'd like to look at too. So yeah, as it exists right now...
C
It's mostly read-only properties. It exposes a bunch of stuff from the vdev stats that are just counters and so on — kind of like the per-dataset kstats that we exposed not that long ago, which I ported to FreeBSD, but at a per-vdev level. That's also somewhat interesting, especially if you have mirrors: seeing how much more work one of these mirrors is doing than the other, and so on. But it does have...
C
It takes advantage of the per-vdev ZAP that the device removal code added (to track its status and so on) to be able to store stuff. So you can have comments, and I created user properties, so you can put whatever arbitrary information you want about a vdev on the vdev.
C
One of the ideas we had talked about at a FreeBSD developer summit was kind of our version of the ZED tool, the event daemon: being able to apply a partitioning scheme to a replacement disk before starting the zpool replace or whatever.
C
So if the original disk is partitioned a certain way, and we store that information in a vdev property, then when we go to replace that disk we can apply the same layout to the new disk. Or — one of my pet peeves is that I always want to know the serial number of the disk that's not there anymore.
C
Yeah, and for any settings that made sense there, it made sense to try to hook them up the same way you set settings anywhere else in ZFS, which is zfs set whatever, or zpool set whatever. Originally I had tried a couple of different syntaxes — at first it was zpool set property@vdev, then the pool name, or whatever — but after playing a bit with the stuff ZFS on Linux had done to disambiguate pool names and vdev names on the command line...
C
...it made more sense to just use that. So the current form is zpool set property=value, the name of the pool, and then the name of the vdev, and that allows you to set a property on a specific vdev. Then on a get, you can specify multiple vdevs if you want, or the keyword all, and get that property from that list of vdevs, or from all of them. — Can you supply multiple vdevs on the set?
C
I don't know if I allowed that, but it could, if that makes sense. And yeah, it probably does make sense, because you'd want to set that atomically on a bunch of them at once.
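A rough sketch of the syntax as just described — the property and vdev names here are illustrative, and the feature was still an unmerged branch at this point:

```shell
# Set a vdev property on one specific vdev of pool "tank"
zpool set comment="front row, slot 3" tank mirror-0

# Get a property from a list of named vdevs...
zpool get comment tank mirror-0 mirror-1

# ...or from every vdev in the pool, using the keyword "all"
zpool get comment tank all
```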
A
Yeah, I think this pretty much makes sense. One idea that I had: when you're doing zpool set, there are a lot of positional parameters there, right? So you have zpool set...
A
...property, pool, vdev — which is more than we typically have. So I was thinking about how this works for dataset properties: there you basically have zfs set property=value pool/dataset.
A
And if you want to set it on multiple things — multiple datasets — then you have to do pool/dataset2, pool/dataset3, et cetera. The pool name gets repeated there, and we have checks to make sure the pool name is the same every time.
A
So I was thinking that, by similarity to that, we could have a way to fully specify the vdev with one word, by doing pool:vdev or something like that — using some other special character to combine the pool name and the vdev name together, just like we have pool, slash, and then the dataset name. That would maybe make it a little bit syntactically nicer. I think it's — yeah, it's definitely arguable.
C
Well, you still end up with the same number of positional arguments that way.
A
Yeah. And at least with zfs — most people don't know this, and I always forget it — you can actually omit the set keyword, so you can do zfs compress=on pool/fs, and presumably we could allow that here as well. I mean, regardless, in either syntax you could do zpool noalloc=on pool:vdev, or pool, space, vdev, which does make it a little bit more palatable, right? But yeah.
A
I was curious what folks thought about trying to combine them into one token. Obviously we haven't done that before. So, say, zpool offline: it's zpool offline pool, space, vdev, and we could change that to also accept the new syntax if we wanted — zpool offline pool:vdev, or whatever other separator.
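A sketch of the two forms being weighed here; the combined pool:vdev token is hypothetical, not an accepted syntax:

```shell
# Existing positional style: pool and vdev as separate arguments
zpool offline tank mirror-0

# Hypothetical combined-token style, by analogy with pool/dataset
zpool offline tank:mirror-0
zpool set noalloc=on tank:mirror-0
```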
A
It's a trade-off, I think. In the common use case — which is what Matt and I were discussing, the company's case — it could just be a single vdev, and so it's a simplification. It's not non-positional now, but you're right: in the less common case, with multiple vdevs, it's now more verbose, because you have to have that full name in every instance.
A
Yeah, and I think the same kind of thing applies to the zfs command. It's just that we bit the bullet very early on of combining the pool name with the dataset name as a concept, and that seems to have turned out pretty well there.
C
Yeah — writing something that fits in zpool list, zpool iostat, and the couple of other things that use it.
G
I mean, I think that inheritance becomes an interesting thing in the set command — if you can actually do this on devices, like identify a top-level vdev and then have it apply to all the leaves underneath it. I guess I don't see a whole heck of a lot of value in adding this just so that we don't have the positional nature of it, given where it is in all the other commands. That would be my only comment. All right.
A
Yeah, I mean, especially if folks don't see value in reusing that pool:vdev in other commands — in having a generalized and standardized syntax for specifying the vdevs in a pool. Do you think that would be useful for other cases? — What other cases do you think it'd be useful for? — Any time you want to specify a vdev: if you want to offline it, if you want to check the status of just that one. Having a standardized syntax strikes me as a useful thing.
B
I mean, all of the zpool commands are like that: zpool initialize, zpool trim. It's kind of the convention in all of the CLI commands at the moment, so I'm not sure. I'm not against having a new one, but if we were to do that, I think we'd also want to apply it to all the zfs commands, right? Yeah.
G
I think if we came up with a new naming convention, we may also want to make it so that the vdevs are self-describing, because we have cases where you want to identify it as a log, or as a mirror, and so on. Would you want to have, like, mirror, space, pool:vdev, pool:vdev? I mean — I think we may want to think about that too, if we're looking at a new naming convention.
A
zpool add — yeah, for zpool add. I mean, with zpool add I think you would keep that the same, because you aren't specifying a vdev within a pool, you're specifying what to add to the pool, right. So, for example, when you add a new mirror, you're doing add, pool, mirror, device, device. But what about attach? Right — when you attach, you specify the device that you are attaching to.
A
Because of the way that you can specify a vdev — I think what I was trying to make clear there is that a vdev that is part of a top-level vdev has a name like mirror-1 or raidz1-0, or it has a GUID, and that's how you specify the top-level vdev. But when you're adding a new top-level vdev, you don't know what the number is, right — ZFS decides.
A
Yeah, it's a relatively minor point, but I still definitely see — maybe just be consistent with the existing way, even if it's maybe not the best thing we ever came up with, and just do zpool set.
E
I think there's enough precedent in the other commands to say that should be supported, whether it's the ideal way or not. And then, orthogonally, I think there is some potential future thought — something we could do in terms of saying: all right, let's come up with a naming convention which would apply across all the zpool commands that name vdevs, and these commands, or something.
G
Yeah, and dRAID is also using the colon for some of its description, right? Oh yeah.
G
All right, well, I think it's a good thought exercise for sure, and we can see if there's something that we can improve on.
A
Yeah, well, I mean, I'm fine with sticking with the syntax that Alan outlined there — I just stole what Brian...
C
...pool@vdev, or whatever. You know, colon sounded good, but like you said, dRAID probably makes it complicated. I can see a bunch of places where that'd be useful, but at the same time also a bunch of places where it'd be slightly redundant — having to put the pool name six times on the command line.
C
I guess for set, when you're doing vdev properties, it would be a list of all the vdevs with the separator, and so you wouldn't have the pool name by itself on the command line, right? You'd do zpool set noalloc=on pool@vdev1 pool@vdev2 and never specify just the pool name, and that would make the syntax more different from setting a pool property, to make it less confusing.
C
Currently it's an error — it'll say...
C
...there is no property called that for a pool. Okay. I think the only one that overlaps is comment: it's the only pool property that has a corresponding vdev property, and the syntax to set it on all the vdevs is to use the vdev called all. — I see, okay.
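So the overlap between the two comment properties would be resolved roughly like this — a sketch of the behavior just described, not a settled interface:

```shell
# Sets the existing pool-level comment property
zpool set comment="main storage pool" tank

# Sets the vdev-level comment property on every vdev, via the keyword "all"
zpool set comment="see rack inventory" tank all
```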
C
Where that comes from is that it's awfully close to setting just the pool property.
A
I mean, the only things I could imagine that you might want to set on all are things that you can't do nowadays — like changing my preferred ashift, which is not a thing now, but I think George has an also-many-years-old prototype for it — or user data, such as "this vdev was added on such and such date" or "modified on such and such date."
C
Yeah, it'll be whatever zpool list -H -o name shows.
A
I'm okay with the syntax that Alan originally proposed, and I'm okay with getting rid of all. It's great that you're so far along that you'll be able to open a PR soon — and then maybe, Mark, can you take a look at it and help get it across the finish line if it needs some more work?
C
Yeah, because when I thought about it, I figured: well, you've got to make that free space disappear from the available free space, and who knows what else. Yep. I had also thought about trying to do read-only, but I don't think that makes sense on a per-vdev basis.
C
But it's very similar to read-only, because the other use case we had thought of was that there might be stuff in the special vdev type where it made sense to have a vdev-level setting rather than a dataset setting — like, "this SSD is just for the dedup table and nothing else."
A
Yeah, well, I think that you can — you could imagine using the properties to change the types after the fact. It probably wouldn't be super trivial, but I think you could do it by saying something like set class=normal, special, log, et cetera.
C
Okay, but it'll be good enough for Mark to take a look at and see if it's going to make sense for what he wants to do.
A
Yeah, well, maybe now we have a real use case that can help us get it across the finish line, right?
C
Yeah, for me the main use case was the stats stuff. The per-dataset stats that we've got in the meantime — I wanted that at a per-vdev, per-disk level.
A
Yeah, I think you could surely persist those every txg; it's just a matter of doing the work, right?
C
That's another pet peeve of mine that we might need to look at at some point: both zpool clear and import with the force flag are often very broad. Like, zpool clear is what you're supposed to do...
C
If
you
know
your
jbod
disconnects
and
you
reconnect
and
it
it's
kind
of
somewhat
similar
to
z,
pool
reopen,
but
it
gets
everything
unsuspends
the
pool
and
gets
it
all
running
again,
but
you
know
sometimes
you
want
to
do
one
of
the
side
effects
of
z
pool
clear,
but
not
all
of
them
like
I
want
to
reset
the
counters
without
doing
this,
or
I
want
to
you,
know,
get
the
pool
working
again,
but
not
reset
the
error
counters,
and
you
know
like
same
for
zpool
import
dash,
f,
the
dash
f
overrides,
I
think
six
or
eight
different
safety
belts,
and
sometimes
I
feel
like
you
should
be
able
to
do
force
equals.
D
Yeah, maybe with regards to that: there is this zpool import policy struct that is tracked, and I think somebody threw it in there for a metadata validation count or something. As far as I can tell, the only code that instantiates that struct is the zpool main, and it sets some default values. Is anyone actually using that struct from somewhere else and setting values other than the defaults?
A
Is it not used by the import -FX stuff — capital F, capital X?
D
What I was driving at: it would be cool if we had one struct that we could maybe represent as JSON or some other form of structured data, and then we could just pass that to zpool import, put it in an nvlist, pass it to the kernel, and do it that way — instead of, I don't know, inventing some complicated command-line flag for every single thing like that.
A
Yeah, definitely having a richer way of expressing that — via the import flags or whatever — would be nice. I mean, importing is one of the most complicated things; there's so much stuff going on there and it's really hard to follow. Pavel did a whole lot of work on that several years back, when he added all the debug statements and all that kind of stuff, but making it more admin friendly would be pretty cool too.
A
...succeeded. It's not that hard, especially since you don't even have to do crazy things. You don't even have to do all that much work, because it's all associated with the spa: you could just have the spa hold a giant string — or an array of strings, a list of strings — that are the debug statements that we accumulated...
A
...while we were importing, and send those back to userland at the end. That would really not be that hard, compared to the more general case of "I'm doing some arbitrary ioctl and I want to accumulate all the messages from all the layers" — for that, you've got to pass your message buffer down everywhere and let everybody append to it, and you have to change function signatures in a lot of different places. Whereas with this, it's like: we're importing.
A
I think, as a first pass, you really could just take all those debug messages that are in the import path, that Pavel added, have them also append something into the spa, and then package that up into an nvlist and return it back to userland.
C
The one I was going to mention: during a recovery some months ago, we came across the fact that if you disable the spa verify (or whatever) tunable while an import is running, it actually skips the verify, but it stays inside the loop iterating over every object in the pool and doesn't break out of the loop. So it saves you some time, but not the amount of time you were hoping for to actually get the import to happen.
C
You know, when you're doing a rewind. I meant to get back to fixing that, but I think there are only some comments on an old bug report — I don't think there's even a pull request for it. I know Brian and I and some other people talked about it when it happened, but I never got back to it.
A
A great one- or two-line fix, exactly.
C
But I guess the other thing I found in my git tree, while digging up this vdev property stuff yesterday, was that I have pool user properties, so you can set arbitrary user properties on the pool. That one, I think, turns out to be only about this big of a patch. Is that interesting to anyone? I think I wrote it originally because Paul needed it for something, and it ended up being a weekend's work.
C
It's distinct because I didn't want this property I'm trying to set on the pool to show up on every dataset in the entire pool, which is what happens if you set a user property on the root of the pool.
F
We kind of had a use for one, although we're just doing user properties on the root dataset instead, with how we deal with managing the keys for a zpool — because when we use pool encryption, we're doing the whole pool, since we do a lot of snapshotting and cloning, and trying to manage that otherwise would be tricky.
F
So that would have been nice for some things there, and there are a couple of other little bits where it would have been nice to be able to set a user property for the pool.
F
Also, on most of the systems that we quote-unquote support, there's one pool with all the storage, but sometimes people will try to do multiple pools. So being able to identify — and there's some historical stuff where the main pool doesn't always have a set name — being able to identify which pool is actually the main pool, the one that has all the stuff we're looking for, with a user property would be useful, though there are ways around it.
A
Yeah, I guess today you can shoehorn it into the pool comment property, or you can use a user property on the topmost file system and kind of deal with the clutter that causes further down. But yeah, it seems like a nice thing. There might not be a ton of use cases, but if it's just a little bit of code, then it sounds good to me.
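The two workarounds mentioned do exist in ZFS today; the user property name in the second command is an illustrative example, since user property names are user-chosen:

```shell
# Stash pool-level metadata in the pool's comment property
zpool set comment="primary pool for backups" tank

# Or set a user property on the topmost dataset; note that it is
# inherited by every child dataset, which is the clutter being discussed
zfs set com.example:role=main tank
```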
A
Cool. Other folks — are there other things that folks would like to discuss?
D
I've started factoring out the platform-independent parts of zvol write, discard, and read into common code, and one thing I stumbled across is the zvol GEOM BIO strategy, which is in the FreeBSD-specific bits and doesn't seem so easy to factor out, because it doesn't use zfs_uio_t yet. The question is whether somebody familiar with FreeBSD could add support for converting a struct bio from the FreeBSD kernel into the zfs_uio_t. I won't be able to do that, and I don't know if there is any desire on the FreeBSD side to do it.
D
And there are some interesting small semantic bits, which I think are just code drift over time, but might not be, and in some places I'm just not sure what the desired semantics are. So somebody during the PR review has to give that a hard look, so that there are no semantic breaks between the platforms.
D
If there are, they shouldn't be there under a unified codebase. But anyway.
A
Yeah, I agree, and I think I clarified a couple of those that you'd asked about.
A
Then our next meeting is going to be at the earlier time, nine o'clock Pacific, on March 30th, four weeks from today. It looks like the calendar invite time needs to be changed, but I'll ask Karen to take care of that. Thanks.
A
...which will be in the fall, but we're still trying to figure out the logistics.
C
So I had a question — I don't know if it's been fixed yet; I don't know that I've tried a build in the last week or so — but the compatibility.d directory symlinks cause problems if you do a second make install after it's already been done. Is somebody looking at that, or is it fixed already? — That got fixed? Okay.
A
Yeah, another one that was a super long time coming. I mean, one thing that is sometimes frustrating about my work on ZFS, and the community's work on ZFS in general, is that we have so many good ideas that feel like "it's not that hard, we just have to get it done, and it'll be so great" — and then it takes years and years, and years later it's still not done yet and we talk about it again at another OpenZFS summit.
A
You
know
the
next
year
so
that
that
can
be
frustrating
sometimes.
But
then
I
think
about
the
fact
that,
like
you
know,
actually
like
a
lot
of
those
good
ideas,
have
gotten
done
eventually.
A
Those
are
a
lot
of
things
that
have
been
in
progress,
for
you
know
what
seemed
like
way
longer
than
the
time
that
they
could
have
been
done
in,
and
I
think
that
they
were
all
kind
of
like
worked
on
in
fits
and
spurts
right
yeah,
but
in
in
the
end,
I
think
it's,
it's
really
nice
to
see
that
those
important
things
people
do
care
about
them
and
they
do
make
the
time
for
them
eventually,
and
a
lot
of
that
I
know,
is
brian.
A
Yeah, well, you all know that I'm working on RAIDZ expansion and have been for years, but that's getting closer as well. Cool — well, thanks everyone, and I will see you again in four weeks.