From YouTube: April 2020 OpenZFS Leadership Meeting
Description
At this month's meeting we discussed: libshare changes; updates on dedup changes; OSX in common repo.
Details and meeting notes: https://docs.google.com/document/d/1w2jv2XVYFmBVvG1EGf-9A5HBVsjAYoLIFZAnWHhV-BM/edit#
A
Cool, it's one after the hour, so let's get started. We have a few items on the agenda, and we might have time for some additional items at the end of the meeting. Welcome to the April 2020 OpenZFS leadership meeting. So, let's get started with George, who had a topic about libshare that he wanted to discuss.
B
So I've kind of brought up some of these issues that we've seen in the past with libshare and the implementation that we had seen on Linux. We've done some performance analysis, and we were seeing that there were some inefficiencies. I think overall the Linux community hasn't really embraced using the sharenfs property, but at Delphix we use it quite a bit. So we kind of decided that we would look to see:
B
Could we simplify this, revamp it? So I've been doing some work on this, and effectively what I'm proposing is to combine some work that I think FreeBSD has already done, in what they're calling fsshare, with the work that I've done, which is pretty much to burn down the old sharetab. So I'm still keeping something called libshare, but it's vastly different than what it was before, and much simplified.
B
But the idea would be that we would have an OS-specific libshare, so we would have a Linux version and a FreeBSD version. I would effectively have rewritten the logic that's in the FreeBSD fsshare code to now be a libshare for FreeBSD, and what I've been doing is testing out some of the functionality, specifically with Linux, to see what kind of performance gains we can get, and they're pretty dramatic.
B
The simplification of libshare has given us like a 99% improvement. Right now I've been testing with something relatively simple, like a thousand filesystems, and I create a bunch of multi-threaded processes all trying to share different filesystems at the same time, because we were seeing that as the worst-case scenario when you ran with the old sharetab and old libshare logic. With those cases we've gone from instances where it would take us
B
you know, several minutes to share out those filesystems, to now about twenty seconds. So we were seeing improvements across the board on sharing, unsharing, and inheriting the sharenfs property. I can kind of go through some of the numbers if people are interested, but the proposal here is really to simplify libshare. It would still allow consumers like illumos to come in some time later and have their own libshare logic, or utilize
B
the old libshare logic. But it simplifies the current structure and removes a lot of the quadratic cases where we were running over multiple lists several different times, trying to determine if something was shared or if it was mounted. So it's mostly removal of code. There are some additions, but it's mostly a lot of burning down of old code. For what it's worth, I'd appreciate...
B
Yeah, so I modeled it after the illumos logic, so it uses kind of the same entry points; the libshare logic kind of does the same thing. If at some point in time we get to where we have a combined code base, because we have the same entry points, we should be able to still have the same vectors. So today, the logic I have written, you know, requires the consumers to create an NFS,
B
you know, an NFS component and an SMB component, even though FreeBSD doesn't have SMB. So it should be relatively easy for illumos to just come in and kind of build the same logic; we would just store it under an OS-specific libshare, and then the rest of the code remains common. So the logic that's in libzfs_mount could be used by all the different distributions.
B
So the original code, or the existing code, kind of relies on maintaining a sharetab file, which is not used very often, and then calling exportfs explicitly to export each individual filesystem. What I've created now leverages some logic that exists in exportfs, which allows us to create a file within the exports.d directory in Linux. So I now have a zfs.exports file that we maintain for NFS shares; for SMB shares there's no longer a common file.
B
So the sharetab logic in the past used to have one file that could hold both the NFS and SMB shares for Linux, and FreeBSD is kind of a similar beast. But for Linux there is no common file anymore; there are two different repositories that are used. For SMB it's /var/lib/samba/usershares, and for NFS it's /etc/exports.d/zfs.exports, and the advantage of using that for NFS is that the NFS server logic knows to go look there.
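The mechanism being described, one generated file that the NFS tooling already knows to read instead of one exportfs invocation per filesystem, can be sketched roughly like this. This is an illustration only: the helper name and the option string are assumptions, not what libshare actually writes; only the /etc/exports.d/zfs.exports path comes from the discussion.

```python
import os
import tempfile

# nfs-utils' exportfs reads /etc/exports plus every "*.exports" file under
# /etc/exports.d, so a single generated zfs.exports file can describe all
# shared datasets instead of running exportfs once per filesystem.
def write_zfs_exports(directory, shares):
    """shares maps mountpoint -> export options; returns the file path."""
    path = os.path.join(directory, "zfs.exports")
    with open(path, "w") as f:
        f.write("# maintained by libshare\n")
        for mountpoint, opts in sorted(shares.items()):
            f.write(f"{mountpoint}\t{opts}\n")
    return path

# A thousand filesystems (the test case from the discussion) become a
# thousand lines written in one pass.
d = tempfile.mkdtemp()
path = write_zfs_exports(d, {f"/tank/fs{i:04d}": "*(rw)" for i in range(1000)})
lines = [l for l in open(path) if not l.startswith("#")]
print(len(lines))  # 1000
```

Writing one file and letting the NFS server pick it up is what removes the per-filesystem exportfs calls that made the old path quadratic.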
B
So Michael asked if we had iSCSI or Fibre Channel on the radar. We haven't looked at that. And this holds true for illumos too: there is some dead property for, like, the sharing of iSCSI directly from ZFS. As far as I know, nobody has looked at kind of revamping that and bringing it up to date to make it usable, either with the Linux iSCSI server or the illumos one.
B
Yeah, I think that's even the case with SMB on Linux. I think the only thing that it currently supports is turning sharesmb on and off, so it's a limited implementation there. And I don't know if anybody on the call is, you know, using the sharesmb functionality on Linux or is more familiar with kind of some of the properties, but that would also be a place where there could be some improvements going forward.
A
All right then, let's move on to the status updates. I think that was the only discussion topic that we had on the agenda so far. Some of these are holdovers from last week, or last month, excuse me, that we didn't get to. Josh, could you give an update on the Panzura dedup improvements?
F
Sure. So initially the people that did that dedup stuff said it was incredibly self-contained and would apply to any version of ZFS, and we very quickly discovered that's not the case. So we've been sort of working on trying to get that untangled so that we can get a PR posted that applies to the current state of the world. Panzura has been caught up in a bunch of funding and business stuff, and so they've frozen that whole thing.
F
So for right now, nothing is happening, but once the business drama gets sorted out, we hope to zoom back in on getting those changes untangled from some other things and getting them to apply to upstream. The end goal, of course, is that Panzura has a bunch of ZFS technology we'd like to see make its way out into the wild, and that would benefit us by allowing us to take advantage of some of the newer OpenZFS features as well. So we're kind of stuck; well, as you can imagine, we've diverged.
A
Thanks. That sounds like a great goal, at least, and you know, we'd love to see Panzura take advantage of the upstream changes and then also contribute their changes back to the community. Yeah.
E
The DDT limit stuff is coming along nicely. We just finished addressing another round of the review feedback on the dedup limit, which was a bunch of nice changes. We've managed to get the diff down a bit and to rename the properties, I think it's dedup table quota now, to fit the naming with everything else.
E
Instead of what we had before, which I think is very important, because one of the things I always liked most about ZFS is how consistent all the naming and everything is, so getting that right was important. So that one, we think, is almost finished. We had one open question about, in between syncs of the DDT, when we're estimating how much we expect the DDT is going to grow from these new writes.
A
Yeah, I mean, we can probably take this to the pull request. But originally you were using some number that was based on something, and I was basically suggesting we make it more explicit that this is a number which is just kind of pulled out of the air, and that it's related to, you know, we don't really know how much space that's going to use on disk, right?
A
So just having, like, a macro that explicitly says this is an estimate of how much space each new record in the dedup table will use on disk. But yeah, I'll need to do another round on the code review there. Do you have any thoughts on how any of this will interact with the dedup stuff that Josh was talking about, or is it like, we don't really know exactly, since we haven't seen the code yet? Yeah, we don't know exactly.
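The macro being asked for can be sketched like this. The 512-byte figure below is exactly the kind of pulled-out-of-the-air constant the discussion describes, not a number from the actual patch; naming it is the whole point.

```python
# Hypothetical per-entry estimate of on-disk DDT growth. The real cost
# depends on ZAP overhead, compression, and pool layout, so any single
# constant is a guess; giving it a name makes the guess explicit.
DDT_ENTRY_ON_DISK_ESTIMATE = 512  # bytes per new dedup-table entry (assumed)

def estimated_ddt_growth(new_unique_entries):
    """Bytes the DDT is expected to grow between syncs, given how many
    pending writes will each add a new (not yet deduplicated) entry."""
    return new_unique_entries * DDT_ENTRY_ON_DISK_ESTIMATE

print(estimated_ddt_growth(100_000))  # 51200000
```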
E
Mostly this was to put out things that were on fire today. Yeah, you know, ideally the Panzura dedup solves the problem better and we don't have the problem, but what we've mostly been doing is, you know, customers that already have dedup tables that are taking over 200 gigs of RAM, and they need to make something more manageable. So I don't know if anybody's interested in the other patch we have, which is basically a bulk purge from the DDT.
E
It's not really that user-serviceable. It's kind of an extra flag to zpool import that says, you know, on import, just drop that whole ZAP and recreate an empty one. So it's not really something I think makes sense upstream, but if people have a use case for it, I suppose we could put it somewhere.
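In miniature, the purge described amounts to dropping the whole table object and starting empty, rather than deleting entries one at a time. Everything here is a toy model; the flag name is hypothetical, since the discussion doesn't name the real option.

```python
# Toy model of pool import with a (hypothetical) DDT purge flag: the
# dedup table's backing object is discarded wholesale and replaced with
# an empty one, which is far cheaper than per-entry deletion.
def import_pool(pool, purge_ddt=False):
    if purge_ddt:
        pool["ddt"] = {}  # drop the whole ZAP, recreate it empty
    return pool

pool = {"name": "tank", "ddt": {f"cksum{i}": i for i in range(100_000)}}
print(len(import_pool(pool, purge_ddt=True)["ddt"]))  # 0
```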
E
And then for the DDT log, they've made a little progress there, but not that much. It looks like the DDT log is mostly meant, from my understanding, to solve the problem of dedup using a lot of IOPS with all the updates, and the trade-off is to use more memory. And their biggest problem, being a backup provider, is they need to use less memory. So the DDT log doesn't look like it's actually going to solve the problems they were hoping it would.
E
The problem with limiting memory is that it's even less predictable than the on-disk size, because of the compression and so on. And in particular, it also doesn't necessarily shrink just because you removed some entries, and that was causing some usability problems. It's like, oh, we've hit the dedup limit, and then we delete a hundred thousand entries and the table doesn't get any smaller and you still can't write to it. Yeah.
A
I know. I'd hate to see, like, you know, if you're using that with 4K disks, you're barely going to get any compression at all, if anything, and then, if you use that on RAID-Z, you're going to get a huge inflation in the disk space used by your dedup table. Mm-hmm. So yeah.
E
No, you go ahead. As I say, it's later in the agenda, but I would like to get some eyeballs on the other one, the DDT load command, that's been out for a while, which basically allows you, after a reboot, to trigger loading the entire DDT. Basically it just, you know, reads it all into the ARC, because, you know, in the case where you have a DDT that was 200 gigs...
A
Gotcha. Yeah, I think actually for both of those it would be good, if you haven't already, to send a message out to the developer at open-zfs.org mailing list, just since, you know, these are new features, new user interfaces, property names and so on, to make sure that we can get feedback from the broadest set of folks. And also included in those folks, maybe Josh or anybody else that has outstanding improvements to dedup, to take a look at those and validate them.
F
I mean, I can write that up. We basically have a... so Data Domain, you know, kind of made their name on their dedup implementation. They released a series of white papers about that, and we did a clean-room implementation of that dedup from their white papers. So it's essentially giving OpenZFS the Data Domain deduplication.
A
Yeah, I guess my thought was that the concept of pre-loading the dedup table, or the concept of putting a limit on the disk space used by the dedup table, seems like it should be applicable to any implementation, whether it's the current one, the DDT log, or this very different one that you're talking about, Josh. So it would be nice to, like, validate that, and then be able to continue using those properties to set limits.
F
I will say our implementation, you know, depends on having the dedup table on SSD, and I think that OpenZFS has the capability now to put metadata on specific vdevs. So that's going to be a prerequisite for this, because keeping the dedup metadata on spinning disk is just a game-over scenario, no matter what technology the dedup uses.
A
Yeah, so it sounds like it would all work together very well, but I just want to make sure you guys are aware, and, like, you know, this would be the good time to give feedback if that's not the case. You know, if it seems like it wouldn't work with your implementation, or if we would want to do things differently, then it would be good to know that now, so that you don't run into more speed bumps down the road.
F
Yeah, I think it should be fine. Cool. Yeah, I mean, we're using this with multi-petabyte datasets at this point, and not really having a huge issue with the dedup. But of course, our use case is everything is cloud-out, so, you know, the WAN link to the cloud becomes the choke point, not dedup itself. And even for people doing on-prem object store stuff, you know, typically speaking, a 10-gigabit link is about the fastest.
D
I can talk about that for a minute. Can you hear me? Yeah, okay, good. So yeah, the persistent L2ARC stuff went in. There are actually a couple of follow-up patches that could use reviewers for that too; I can point you at them if you want, but basically it means that everything merged, and at least in master you'll have a persistent L2ARC. This was the modified version of the persistent L2ARC that came from Nexenta originally. I'm not sure there's much to add to that.
A
It kind of just works: the first time you boot up on the new stuff, it'll be writing out metadata onto the L2ARC device to keep track of what's there, and then, when you open a pool that has that metadata on it, it'll suck it back in and basically recreate the in-memory state, in terms of, like, knowing what blocks are in the L2ARC. Yeah.
D
That's right. I think the only minor noteworthy thing is that it's a new feature, but it didn't come with a feature flag, because you can lose the L2ARC at any time, so it's not really a problem. Where there's a log header on disk and everything looks good, it'll rebuild it, and if it doesn't, it'll just discard it, so no problem there. So it didn't require a feature flag.
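The rebuild-or-discard behaviour just described can be caricatured in a few lines. None of these names exist in the real code; the sketch only illustrates why no feature flag is needed: a damaged header is simply not trusted, and losing the cache is always safe.

```python
import hashlib
import json

# Toy persistent cache: alongside the cached blocks we keep a small log
# header describing them, with a checksum so a torn or stale header is
# detected on import and discarded rather than trusted.
def write_log_header(entries):
    body = json.dumps(entries, sort_keys=True)
    return {"cksum": hashlib.sha256(body.encode()).hexdigest(), "body": body}

def rebuild_from_header(header):
    """Return the rebuilt in-memory map, or {} if the header is damaged."""
    if hashlib.sha256(header["body"].encode()).hexdigest() != header["cksum"]:
        return {}  # discard silently: losing the cache is always safe
    return json.loads(header["body"])

hdr = write_log_header({"blk-1": 0, "blk-2": 8192})
print(rebuild_from_header(hdr))  # the full map is recovered
hdr["body"] += " "               # simulate on-disk damage
print(rebuild_from_header(hdr))  # {}
```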
A
All right, the next one I wanted to mention, and I don't think we covered this at last month's meeting, is the dedup send/receive work. We proposed, I think it might have been three or four years ago, to deprecate the dedup send and receive functionality. This is, confusingly, actually not at all related to on-disk dedup, and that confusion is one of the reasons that we wanted to get rid of it.
A
This is the zfs send -D, the capital-D and --dedup flags, and those are now deprecated. They're deprecated in the 0.8.3 release, and the code is now removed on OpenZFS master, so that functionality isn't there anymore. And, you know, we sent out a bunch of emails about that and got very little feedback; it did not seem like people were using it, which is what we were hoping and expecting. The folks that were using it seemed to be confused about what it actually did, which is also what we expected.
I
Well, since we got FreeBSD in the Linux repository, I figured I would try to do the same, if I would eventually, you know, be allowed; we'll see. So I figured the best thing was to start from the beginning and just take the Linux repository, or the FreeBSD repository, whatever we call that shared one, and apply the OS X changes again, and it's given me a great chance to redo some of the things that are a little bit old and wrong.
I
We have come across a few things that were not entirely right, and I think we've even found one or two things for Linux and FreeBSD to look at in their code. Nothing major, but they are on my project page, with notes about all of that, which I will eventually share with Brian and whoever is in charge of FreeBSD.
I
Yeah, so I'll bring it up with him. But it's nothing too exciting there. Whether or not it will be accepted, we'll see. Some of the big changes, it will be interesting to see what you guys think of them, particularly the uio changes that I had to do; not too many, but there are a few. And I guess, yeah, the goal is to have it join with the repo, if it can, yeah.
I
A lot of it was already done, by Matt Macy, and it's done really well, so it was not a problem there. Obviously they don't do assembler the same way; they could have kept going the way they have done things the whole way through, but they stopped at the assembly, so I just kind of continued that, to make sure it was sort of done the same way.
A
I'd also love to see it combined with the common repo. I think the only real thing that I would be concerned about holding it back is testing. Like, you know, we want to make sure that whatever is supported by the common repo is tested by automated testing when you open pull requests, because obviously we can't expect everybody to, like, have an OS X machine to test their changes on manually.
A
A question I had was: you mentioned you're trying to do some things better the second time around. Are you looking at, or considering, changing any of the on-disk format decisions that you made? Because I vaguely recall that there were some things, like in the ZPL, like, you know, assumptions about, I forget what now, file locations and extended attributes, and things like that, that we had discussed, like, oh yeah, that's kind of not quite the ideal way of doing it.
I
This is actually a great time for you to come in and give your comments on those changes, because I consider it to be a new port, which means that there are definitely some changes that the user would have to get used to. In the existing version we have a property called mimic_hfs, because we have to lie and say we're HFS for all the applications to allow us to write to it. But since HFS has been dropped, it was kind of a bad name, so we just call it...
I
We will just call it mimic, and then you can set it to what you want to pretend to be; you now have to pretend to be APFS. So right, that's exactly it: we're going to have to support that, so rather than having a mimic_hfs and a mimic_apfs, we figured we'd just call it mimic, and then you set it to be either HFS or APFS.
I
We check the version of the pools before we kind of import them, but there are definitely things that we can't handle, like the older ACLs, and the extended attributes, and how the normalization was done. But it would also be nice if there was a path for them to come to the new version. But that's something we can discuss.
H
All right, so just a quick note: I would love to see all the ports in one repo, and what we could do is what FreeBSD does with architectures that are not used very much, and just use a tier-one and tier-two categorization for them. So basically, there are different guarantees for those tier-2 ports: they're in the repo, but maybe they don't have to build all the time, they don't have to pass all the tests, they don't have to be complete. But they are there.
A
Yeah, I think that's something that we could consider. Thankfully, FreeBSD is mature enough that there wasn't that much development work between "it works in the common repo" and "it works in the common repo and it's totally solid." Obviously there was velocity...
E
I think on the last call this came up as a solution to some other problem as well, and actually Pavel was asking me about it earlier this week. Since I did a lot of the plumbing around it for the vdev property stuff, which is almost finished and which I hope to upstream soon, I was just wondering: does anybody else have a use for user properties on zpools, and are there any considerations that we might want to take into mind before I, you know, finish plumbing it out and create a pull request?
H
But of course, properties are very, very simple, just a name and a value. But I could make use of, like, hidden properties: properties that keep some kind of secret, so not every user can just ask for the value, just like with some configuration file that has some secret. I can use permissions, but it doesn't have to be permissions, just a differentiation between a regular property and some hidden or sensitive property.
H
Well, it depends. For example, if I would like to, let's say, do a backup of the entire pool and, for example, be able to have the secret somehow within the backup itself, so I don't have to keep it separately. Or, just another idea: pool-wide properties, where I don't want the properties to be inherited to every single dataset, and basically just to be able to have this option in case I need it for something like encryption.
H
Agreed, and I'll think about this more; I just haven't spent much time on it, because it's not there yet. But if we would have to go with all the, let's say, allow protocol that we have now, for all the kinds of who can read the property, then probably that would be too complicated for some.
A
To allow system administrators and normal users... I agree with what's been said so far, and I think that what you're asking for could be done. It's not, like, super simple, but I think there are some other examples of properties that have been added that are a little bit special, and I think that the user and group, and I think now project, space accounting is not readable by everybody.
A
So I think that you have to be privileged to get those, and there are, like, different ioctls for getting them than for getting normal properties. So you'd kind of go down some kind of road like that: like, you'd add a new ioctl that would only be accessible by root, that would get this set of special properties, and then all the userland stuff has to be taught about, like, you know, there's this new class of properties that it has to ask for.
D
So we're putting it together now. There was a pull request opened with just a bunch of bug fixes; I can look up the number quick, it's one of the most recent pull requests. If you have any additional patches, you know, absolutely critical bug fixes you want to see in there, let me know, but we're hoping to get it together real soon. It'll be 0.8.4, and it's just a bug-fix release, so nothing too exciting there. It's a much smaller patch set than last time, I think 30 patches, something like that.
A
And I don't think we've really talked about this, but with FreeBSD having been integrated, I think that we have all of the features that we want for OpenZFS 2.0, which will be the next major release. We haven't really talked about what exactly the timeline is from here. Obviously we want to give a little bit more time to make sure that things soak in, that we've shaken out any bugs in FreeBSD as well as the other new features.
D
Maybe we're at the point where we could start making release candidate tags or something like that for people to start testing, and, you know, it may still be a long time before you do a final release, but putting something out there that people can start testing maybe would be useful. I'd be open to that idea; does anyone else have any thoughts? Actually tagging something, well, tagging a release candidate, you know, call it OpenZFS 2.0-rc1 or whatever, and to have, perhaps...
A
There's no semantic meaning to... so we're just, first, the name. Yeah, I mean, we also changed the name from ZFS on Linux to OpenZFS, so yeah, it's just like a major release, just like the difference between 0.7 and 0.8. And in fact, I think ZFS on Linux has a track record of changing which numbers denote major and minor releases, yeah.
C
Just that, if the intent is for it to be, like, a time-based release, which seems fine, that's fine. Like, obviously semver style, where we can just bump the number and make breaking changes, doesn't work so well for a file system. But if the intent is to do them on a time basis, a time-based version might help clarify that, perhaps. I don't know, yeah.
A
Yeah, I mean, it is a little tricky, because I don't think that we are in a position to be like, it shall be released in this month. You know, it's still pretty wishy-washy, and it's still like, we're gonna let it bake until we've got all the bugs shaken out, and we are not as well-oiled a machine as Canonical, with their, I don't know how many, but I'm assuming at least hundreds of employees, compared to our zero.
A
So, we're almost out of time; I had one last question, about the website. We are still working on moving the website to OSU, which I think I mentioned a couple meetings ago. I actually just previewed that today; it is working, and we just need to make the DNS changes to switch that over, so that's well on its way. It should be transparent to everyone, with the exception of: we are going to try to change the main URL to be openzfs.org, without a hyphen, so, like, when you go to the other domains...