From YouTube: September 2019 OpenZFS Leadership Meeting
Description
We discussed: ZoL EOL of RHEL 6; xattr cross-platform compatibility; relaxed quota semantics for improved performance; zpool replace of a log vdev; temporal dedup
Detailed notes: https://docs.google.com/document/d/1w2jv2XVYFmBVvG1EGf-9A5HBVsjAYoLIFZAnWHhV-BM/edit
A: All right, it's one after the hour, let's get started. We have a lot of interesting things to talk about today, so we'll see if we get through the whole agenda or not.
A: Let's start with the RHEL 6 topic. Brian, are you on?

D: Yep, I'm here.

A: So I'd started talking with Brian in passing about the various platforms that are supported by ZFS on Linux, and whether a change I was hoping to make would be possible. Brian mentioned that RHEL 6 is probably old enough that we could drop support. So, Brian, do you want to talk about it?
D: Okay, well, as Matt was saying, we were looking at dropping support for RHEL 6, which is pretty darn old now. The motivation, of course, is maintenance: it costs us time and effort to keep updating the code for that old kernel, and it's been released for a long time. Historically, what we've done is end support for something when it goes end-of-life, but RHEL 6 technically doesn't go end-of-life until November 30th, 2020, which is a long time; it's ten years.

At the moment, RHEL 6 hasn't been a big problem for the most part, just because all that compatibility code is already in place, so you just have to kind of avoid breaking it, but at some point we do need to take it out. So I personally think it would be pretty reasonable to end support for RHEL 6 now. I don't think many people are using it, but I would be interested to hear what other people have to say about it.
A: I'll keep my comments brief, then. You know, we'd stop supporting it in master, but 0.8 would continue to work on RHEL 6 for as long as it's around, which would be, you know, years, and that branch would also still be receiving updates.
D: So Red Hat officially ends support for it in November of 2020, so technically they're still supporting it for another year. It was released back in 2016, or no, 2010, sorry, so they're coming up on ten years now, which is just what you get with Enterprise Linux, right? It's a really long support window.
B: Yeah, I expect what we would want, as a project, is just, you know, at least this much warning period before it goes away, which gives people time to get upgraded, and the announcement also means they get a little bit of time to object, if there are that many of them still using RHEL 6 or whatever.
E: I will also say that at Datto we have an additional kernel driver for a separate product, and we had to add support for RHEL 5 for one of our customers; that was, I think, a year and a half ago. So, you know, these things happen. A lot of companies set up these systems, and they kind of just work, so they never really want to update them.
D: So making it happen isn't too hard. Basically, it's going through the build system and the source tree and removing all the compatibility code for anything older than kernel 3.10, I suppose, because I think that means our new oldest supported kernel will be a 3.10 kernel, which is what's in RHEL 7. That should be pretty straightforward, and then we'll just remove the RHEL 6 bot from the CI.
G: All right, so, big picture: I work primarily with file services at iXsystems and with the SMB protocol. Clients can write xattrs, and really, clients can write alternate data streams. In the Solaris kernel SMB implementation, it looks like alternate data streams get written as xattrs with a SUNWsmb prefix. On FreeBSD and Linux with Samba, xattrs get written as xattrs, and, depending on configuration, alternate data streams may also get written as xattrs with a DosStream. prefix in front of them.
These are all written in the user namespace on FreeBSD and Linux. The user namespace, though, is implemented slightly differently in ZFS on Linux and on FreeBSD. In ZFS on Linux, it appears that there's a user. prefix in front of user-namespace xattrs; FreeBSD interprets all xattrs that don't have a FreeBSD system prefix as being in the user namespace. So what this means is that xattrs that were written on Solaris are visible on FreeBSD as being in the user namespace.
But what happens is, say I have a Samba server, and I have macOS clients writing their metadata to an SMB server on FreeBSD. It ends up that if I export the pool and import it in ZFS on Linux, none of those xattrs are visible; the metadata just disappears, and users lose their color tags, because only xattrs with the user. prefix are interpreted as being in the user namespace. Does that make sense?
D: I've got one quick question, actually. My understanding, and I could be misremembering this, is that we do require the user. prefix in the namespace. Well, my question is: do you know what other file systems do, or is this not really a problem because we don't have very many other file systems that are portable between FreeBSD, Linux, and Solaris, so this really hasn't come up? So the fact that they all use different conventions hasn't historically mattered, or does, like, the Samba code handle this in some way?
A: I was gonna say, it sounds like, I mean, my understanding, and you can correct me if I'm wrong, is that the application is doing the same thing on Linux and, for the most part, on FreeBSD, but the way that ZFS is storing it is, like, adding or removing this user. prefix internally, so that, like, the on-disk...
D: Oh yeah, I was gonna say, it depends on your point of view. I mean, it depends how it's stored on disk, right, because that's how Linux requires them to be; well, at least that's how the VFS wants to see them. I guess it doesn't matter exactly how they're stored on disk; we could be adding that in dynamically or something. But, all right, the namespaces on Linux: there are like four or five different ones, and they all have their own unique prefixes.
So, like, I guess you could assume that if it doesn't have one of the other ones, then it is user, which sounds to me like what FreeBSD is doing, if I understand correctly. Yeah, so I guess that might be an option: we could assume that if it's not in one of the other privileged namespaces, then it's user.
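For reference, here is a minimal sketch of how those Linux namespace prefixes look from user space, together with a FreeBSD-style fallback rule like the one just described. The file name, attribute name, and the fallback helper are hypothetical and for illustration only; this is not how ZFS itself resolves names.

```c
#include <stdio.h>
#include <string.h>
#include <sys/xattr.h>

/*
 * Linux exposes xattrs under a few namespaces, each with its own prefix:
 * "user.", "trusted.", "security.", and "system.".  The helper below
 * mimics the FreeBSD-style fallback discussed above: any name without a
 * privileged prefix is treated as a user-namespace attribute.
 */
static const char *privileged[] = { "trusted.", "security.", "system." };

static const char *
effective_namespace(const char *name)
{
	for (size_t i = 0; i < sizeof (privileged) / sizeof (privileged[0]); i++) {
		if (strncmp(name, privileged[i], strlen(privileged[i])) == 0)
			return (privileged[i]);
	}
	return ("user.");	/* assumed default, mirroring the FreeBSD behavior described */
}

int
main(void)
{
	/* On Linux, a user-namespace xattr is written with an explicit prefix. */
	const char value[] = "example";
	if (setxattr("testfile", "user.DosStream.afpinfo", value, sizeof (value), 0) != 0)
		perror("setxattr");	/* "testfile" is assumed to exist */

	printf("namespace for DosStream.afpinfo: %s\n",
	    effective_namespace("DosStream.afpinfo"));
	return (0);
}
```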
C: It's, in a sense, filesystem-specific, or actually, in this case, OS-specific, because the application may not care, but if you then moved the pool from, say, Linux to FreeBSD or vice versa, and then ran Samba, it's going to get different results. The application will not be getting what it expects, because the OS has done different things behind ZFS's back.
A: So it sounds like maybe you're saying that it's okay that the names of these change when you go from system to system, because it's the operating system that's adding those prefixes, not ZFS, but we want ZFS to be able to expose xattrs that were created on different systems, kind of regardless.
A: Well, there are kind of two possible solutions. One solution would be that new stuff is written in some new format, all the systems write it the same new way, and those pools or file systems are portable, but maybe existing stuff still continues to be busted the same way it is already. And you could optionally take it one step further and say that for existing file systems, you know, the code will be updated to be able to access them in some way.
I mean, in my opinion, having stuff just disappear seems like a bad idea, so being able to update the code so that you can at least access those, even if they have different names, would be really nice. But yeah, ideally we would be able to move forward with something that doesn't switch the names around.
But I think, as for the next steps for this project, somebody needs to put together a proposal of what exactly that would look like, and, you know, what would happen in these various scenarios, and whoever does that can decide which of those features are most important to them.
C: It's an incompatibility between the different operating systems if you wish to share data or migrate data between them; that's the big issue. Everyone has been doing their own implementations of a lot of this stuff for so long. We're now trying to get everyone to talk together, and we're discovering things like: oh yeah, when we created xattrs, we just put this prefix on some of them, but not others.
D: It's probably also different across the platforms what they expect. I was gonna say there's one more wrinkle here, which is that I'm sure that across the various platforms there are different restrictions on xattrs. I know on Linux they're, in general, pretty small. So even if we wanted to make something portable, you might not be allowed to have big xattrs; I think Linux caps out at 64K for individual xattrs. So if we want to be really portable, do we want, like, the minimum set of functionality?
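A minimal sketch of that 64 KiB limit as seen from user space on Linux; the file and attribute names here are made up:

```c
#include <stdio.h>
#include <stdlib.h>
#include <sys/xattr.h>

/*
 * The Linux VFS caps an individual xattr value at 64 KiB (XATTR_SIZE_MAX),
 * so a value just over that limit is rejected no matter what the
 * underlying filesystem could hold.  "testfile" is assumed to exist.
 */
int
main(void)
{
	size_t too_big = (64 * 1024) + 1;
	char *buf = calloc(1, too_big);

	if (setxattr("testfile", "user.small", "ok", 2, 0) != 0)
		perror("small xattr");
	if (setxattr("testfile", "user.big", buf, too_big, 0) != 0)
		perror("oversized xattr");	/* expected to fail on Linux */

	free(buf);
	return (0);
}
```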
A: Well, I don't know the answer to that. In my opinion, having some way of accessing these matters. Like, assuming that the problem really is just outside of ZFS, and the ZFS implementations on the different platforms are all basically the same (they're all just: you give me a string, that's the name, and I store it in that object), if every ZFS is doing that, then it does kind of seem like it would be reasonable to punt on the problem of different operating systems handing us different strings, and say that when you move your pool from one operating system to another, things may show up with different names, but we should still try to make it so that everything shows up somehow. Does that make sense?
C: The big question is: what do we actually want? I think what we really want is for the same set of xattrs to show up the same way, if at all possible, on multiple platforms, so I can take a pool, or do a send of a pool, from, say, FreeBSD to Linux, and have things continue to show up, have all the data be visible. And with that as a specific goal, trying to decide the limitations and possible solutions is the next step.
A: Yeah, if that's the way we want to go, then probably ZFS needs to be interpreting stuff a bit more than it is now, so that it would know: oh, you know, this is ZFS on FreeBSD, and these kinds of attributes have, you know, the prefix freebsd.whatever, but what I'm going to do is strip off that prefix and put on, like, zfs.whatever, and then all the platforms know that the user namespace of xattrs is zfs.user.whatever.
I: One quick idea is to tag each xattr name with a platform name, basically like: this xattr name was produced on Linux, this one was produced on FreeBSD, and hopefully there would be some code to take that information into account and, you know, do some additional translation. But maybe this will not be possible anyway, because the OSes will still do their own translation, and it's hard to fit all of that into a working system.
D: My understanding is that's only because there is no analog to alternate data streams; there are no interfaces for that. I could be mistaken, but I haven't found them in my travels. So what we did on the Linux side was a little bit different: we just took our xattr interfaces, which are different, and then mapped them onto that alternate data streams concept. It seemed like a good idea at the time, so you could get access to them on other platforms, but clearly there were complications. But yeah, we don't support alternate data streams.
G: In that case, no. In Samba, you can configure it to use vfs_streams_xattr, in which case it'll write the alternate data streams as xattrs with a DosStream. prefix on them. And if you also enable vfs_fruit, well, usually the place where you have to worry about large xattrs is macOS clients with resource forks, so there's an ability to write the resource forks out as separate files in that case, but the other alternate data streams still get written as xattrs.
D: That must be a little problematic on Linux, because they're usually very, very small there; like, most of the other standard file systems, ext4 and so on, will only allow a couple of kilobytes for xattrs.
A: So I think we've probably gone around on this a bunch, and I'm glad that we did, because I think folks are a little bit closer to a shared understanding. I would like to give some time to the rest of the items on the agenda. Andrew, would you, or someone else, like to take the lead on kind of writing this up?
B: You know, just conceptually, in my mind I had, you know, quotapolicy=strict or loose, or something like that, and it would let you have the option to keep the current behavior of making sure you never go over the quota, or the new, less strict behavior of: you know, if you're over the quota you can't do any more writes, but we might let a little bit more data be in flight, to avoid slowing to a crawl.
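A hypothetical sketch of that distinction, just to make the two policies concrete; the quotapolicy spelling and both policy values come from the discussion above, and none of this is actual ZFS code:

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/*
 * "Strict" refuses any write that could push usage past the quota, even
 * transiently; "loose" only refuses new writes once usage has actually
 * crossed the quota, allowing some in-flight data so throughput doesn't
 * crawl near the limit.
 */
typedef enum { QUOTA_STRICT, QUOTA_LOOSE } quota_policy_t;

static bool
quota_allows_write(uint64_t used, uint64_t inflight, uint64_t write_size,
    uint64_t quota, quota_policy_t policy)
{
	if (policy == QUOTA_STRICT)
		return (used + inflight + write_size <= quota);
	return (used <= quota);		/* loose: block only once actually over */
}

int
main(void)
{
	/* Values in KiB: 10 MiB quota, 10,100 KiB used, 200 KiB in flight. */
	printf("strict: %d\n", quota_allows_write(10100, 200, 200, 10240, QUOTA_STRICT));
	printf("loose:  %d\n", quota_allows_write(10100, 200, 200, 10240, QUOTA_LOOSE));
	return (0);
}
```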
A: That's pretty different; yes, that's kind of a different thing. A lot of times you run into that first, before running into the other thing, but yeah, I think the mechanism for kind of making you slow down is basically the same, because quotas and being out of space are basically treated the same way.
B: As a separate item, at some point we might want to revisit the defaults for the slop space. I think right now it's like 1/32 of the pool, which scales nicely, but when you have a petabyte pool, do you really need, you know, that much? There are, like, tens of terabytes of space that you don't have access to.
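A rough sketch of that sizing, assuming the 1/32 default (spa_slop_shift = 5) and the 128 MiB floor that ZFS used around this time; treat the exact constants as illustrative rather than authoritative:

```c
#include <stdint.h>
#include <stdio.h>

/*
 * Slop space is roughly 1/32 of the pool, with a small floor so tiny
 * pools still keep some headroom.  For a 1 PiB pool this works out to
 * about 32 TiB that ordinary writes cannot use.
 */
static uint64_t
slop_space(uint64_t pool_size)
{
	const uint64_t min_slop = 128ULL << 20;		/* 128 MiB floor */
	uint64_t slop = pool_size >> 5;			/* ~1/32 of the pool */
	return (slop > min_slop ? slop : min_slop);
}

int
main(void)
{
	uint64_t one_pib = 1ULL << 50;
	printf("slop for a 1 PiB pool: %llu TiB\n",
	    (unsigned long long)(slop_space(one_pib) >> 40));
	return (0);
}
```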
A: It would have to be a special kind of pool where you'd really be able to make effective use of that last couple of percent, just because of fragmentation and other stuff, but certainly those use cases do exist. And as long as we can make sure that ZFS doesn't write to all of the space and end up not being able to do anything more, you know, that's the real thing that the slop space is trying to protect against.
I: Yes, basically, this comes from a report from a user. For some reason they had to replace a log device, and instead of removing the old device, which was still operational, and adding a new device, they executed the zpool replace command and noticed that it took a very long time. In the end, the pool status reported that, like, zero bytes were resilvered, but many, many thousands of gigabytes were scanned.
To be honest, I haven't researched this much, but my impression is that there is no special logic to determine what kind of device has been replaced. So it's always the same code, and basically the default behavior is to scan the whole pool, because, like, if it were a data device, it could potentially have data from any txg.
That makes sense, but for a log device, which has, like, only some, let's say, in-flight data or some very recent data, it really doesn't make sense; yet it seems that the same thing still happens. So I was wondering if there is anything smart that we can do here, or if we could just basically prohibit the replace command for anything but data disks, and force users to use remove and add instead.
A: So, my understanding... I haven't looked at this in detail, but what you described is not surprising to me, and I think everything you've said kind of makes sense, except for the fact that log devices can have old data on them, so you can't always just swap them out. What you could do is say: if you're doing a zpool replace with a log device, try just removing the old one and adding the new one.
But the thing is that removing the old one might not actually work, because you can have old log blocks that haven't been claimed, that haven't been replayed yet. So say you crash and you have stuff in the log: when the pool comes up, it claims all the blocks, that is, it marks all the log blocks as being allocated, and then, when you mount the filesystem, it replays the log and then deletes that ZIL.
A: And this kind of thing might become even more common in the future because of things like encryption, where, you know, you might need different keys for different file systems, and you can't mount one and replay its logs until you have the key. So you might open the pool, and then whoever owns that filesystem doesn't come around until next week to enter their key or whatever.
So we do need to retain the ability to do the, you know, scrub-based replace or attach, but what you could do is improve the performance of that by saying: well, it's a log device, so we know that it's only log stuff that's on there, so just go look at all the logs rather than looking at every block of the whole pool. By looking at all the blocks of all the logs, that should be enough to, you know, restore everything that could be on that device.
I: Yeah, I honestly don't know that code very well. I'm just concerned about, like, what happens if, let's say, we implement this optimization, and we do a replace of a log device, and some moments later, let's say, a data device needs replacement: will there be any conflict or any confusion, any bad interaction between those things? I don't know.
A: ...that's inside the file system. Then, if somebody else comes along and they need a scrub, they're gonna have to either restart the scrub from the beginning with the new parameters, or, like, restart the resilver, or they're gonna wait, which is some new code that says: oh, you know, we're doing one resilver, so whoever comes along and needs to do another one has to wait for the first one to complete. So I think that logic would all work.
A: So, I think we have time to get through the rest of the things on the agenda. I'm gonna go quickly on the next one, renaming bookmarks. I'm not sure if there are any kind of gotchas there; I implemented bookmarks originally, and I'm pretty sure that it would just work. Bookmarks are, you know, sort of like files, sort of like snapshots conceptually, although not in implementation; I think that renaming them would be just like that.
K: So, just some quick background: Panzura has been using ZFS since the days before the lawsuit between NetApp and Sun, and they definitely tried to stay under the radar while that was in process, and it resulted in some divergence between Panzura's ZFS and Sun's ZFS, and then later the OpenZFS project. They've really been trying to find a path back to convergence, and so I finally got agreement to start open-sourcing some of the technology.
I actually had a meeting with one of the co-founders about two minutes before this meeting started and finally got the go-ahead, so I'm not giving him a chance to change his mind. Because Panzura is, you know, politically an old appliance company, not very open-source friendly or aware, they're kind of viewing this as a trial run, and we will see how it goes. But we have a fairly self-contained temporal dedup implementation that we're going to put up on GitHub.
I think it will apply pretty closely to modern OpenZFS, and it's performant, and, you know, it's usable for online dedup for tier-one storage applications. So, not giving them a chance to back out: Damon Andre kind of owns the OpenZFS code here at Panzura, and I didn't even have a chance to talk to him, but I really wanted to get this announced before there were any, you know, any second thoughts. So we're gonna open source our dedup code on GitHub, and it should be something that can be taken and integrated into OpenZFS.
A: Awesome, well, that's wonderful to hear; it's great that another company has been active with their work. Two questions: one is, you mentioned the effort to integrate it upstream; is that something that Panzura is planning to take on, or are you looking, or hoping, that somebody outside of the company would do that?
K: You know, as Andrei will attest, time at a start-up is always very difficult to come by. I don't know that anyone here at Panzura could commit 100% to making it work, but I would like to think that we would at least be available to help with it, if not do the work. So, all right, that's a firm maybe; how about that?
K: We're FreeBSD-based right now, and we're mostly based on, like, version 21 Sun ZFS. We had a lot of consultants do a lot of things, so we're not really even v21. I believe feature flags was at least an idea that came, you know, out of Panzura's participation in the early days of the OpenZFS project, and then, ironically, we never implemented them ourselves or took the implementation at all. So, you know, we've done several forklift upgrades of ZFS, and it ends up being a million-line diff, and then it's hopelessly risky.
But we are trying to get back to, you know, some kind of state of the art. We have a lot of things, aside from our dedup implementation, that would be very interesting for OpenZFS, and there are a lot of things in OpenZFS that would be very interesting for us, and so we're trying to get back to that sort of state. So that's it.
I: Yeah, the very short explanation is that the dedup records are grouped by the time they were created. So the idea is that the records, basically the data that's born together, is often changed together or removed together, and that gives a performance advantage over scattering the dedup records basically in a random order based on the hash, which is, of course, what you use to look up entries.
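A hypothetical sketch of that keying idea, just to illustrate the grouping; this is not Panzura's actual on-disk format, and the struct layout and bucket granularity are made up:

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/*
 * Entries sort by a coarse birth-time bucket first and by checksum only
 * within a bucket, so records for data written around the same time stay
 * adjacent in the table and can be loaded or evicted together, instead of
 * being scattered purely by checksum order.
 */
struct ddt_key_temporal {
	uint64_t birth_bucket;	/* e.g. birth txg divided by a bucket size */
	uint8_t cksum[32];	/* block checksum; unique within a bucket */
};

static int
ddt_key_compare(const struct ddt_key_temporal *a, const struct ddt_key_temporal *b)
{
	if (a->birth_bucket != b->birth_bucket)
		return (a->birth_bucket < b->birth_bucket ? -1 : 1);
	return (memcmp(a->cksum, b->cksum, sizeof (a->cksum)));
}

int
main(void)
{
	struct ddt_key_temporal older = { .birth_bucket = 100 };
	struct ddt_key_temporal newer = { .birth_bucket = 200 };

	/* The older record sorts first regardless of its checksum value. */
	printf("compare(older, newer) = %d\n", ddt_key_compare(&older, &newer));
	return (0);
}
```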
A: We're about out of time, so thanks to everyone for getting through the whole meeting, the whole agenda, today. The next meeting will be four weeks from now, October 15th, and it will be at an earlier time; it starts two hours earlier than this one. And I think that's all that we have. Oh, the talks for the OpenZFS conference are announced, so take a look at that, and I hope that you can all make it to the conference, which will be November 4th and 5th in San Francisco. So thanks, and we'll see you in four weeks.