From YouTube: OpenZFS Office Hours with Brian Behlendorf
Description
Chat with Brian Behlendorf, lead engineer of the ZFS on Linux project. January 29th, 2014
B
Hey, welcome everyone. I'm Matt Ahrens, and I'm here with Brian Behlendorf, the lead engineer for ZFS on Linux. We're here to talk about OpenZFS and ZFS on Linux and whatever questions you have for us. So hopefully we'll get a few people joining the Google+ hangout. The link should be live now on the OpenZFS office hours page, and I can see quite a few people are on YouTube as well.
B
So Brian, first let me ask you: what have you been up to lately? Last time we talked was at the OpenZFS Developer Summit, which was back in November. Actually, why don't we start off with you telling us what you worked on for the OpenZFS Developer Summit hackathon and how that's been going?
C
Sure — but let me take a step back first and introduce myself, since I'm not sure everybody knows who I am. My name is Brian Behlendorf; I'm a computer scientist at Lawrence Livermore National Laboratory. As a little bit of bio: I've been working on large distributed systems for about 10 years now, particularly parallel file systems, so I've got a background in Linux file systems.
C
I spent many years working on Lustre, which was a lot of fun, and then moved into ZFS in conjunction with work on Lustre. So that's a little bit about me. I guess around 2007 I started porting ZFS and really got into that.
B
So
before
you
were
working
on
zfs
when
you
were
using
luster
was
that
kind
of?
How
did
you
see
problems
with
luster
on?
Was
it
on
ext3?
Is
that
right.
C
Yeah. The way Lustre is built is it's basically a bunch of clients running on compute nodes, and they talk to servers, and those servers need a back-end file system — because the Lustre developers didn't want to write a back-end file system for the distributed file system and take all those problems on,
C
in addition to the challenges of building a distributed file system. The thinking was: we'll just take ext3's code — it's been out there forever, it works pretty well — and we started building on it. That worked really well for a bunch of years, but at some point it wasn't enough, for the same reasons that you guys worked on ZFS and that we're all working on OpenZFS now.
B
So was it scalability, or performance, or snapshots, or reliability — all of those things?
C
I would say all of the above, but in particular my interest was drawn to it for the scalability and the data integrity guarantees. It's worth mentioning that many of the large distributed computing jobs are scientific applications, so they care a lot about accuracy. Getting the wrong data —
C
that's a big deal, right? It's not like you're streaming a movie to somebody who's watching it and they'll get a little pixelation or something like that. They need the right answer when they run the simulation, so keeping the data intact all the way from the server to their app was really important to us. So yeah, the integrity — and just the scalability. ext3 still doesn't scale particularly well.
C
It doesn't take too many terabytes before you start taxing an ext3 file system. I have to say, too, that we might have looked at btrfs if it had come along earlier; it just wasn't quite ready when we needed to be ready. I still kind of follow btrfs development — I'm interested in that.
B
Yeah, I think they have some pretty interesting technical ideas. From what I could tell, it seems like they didn't really have the staffing to get it to a stable version and out the door in a short amount of time.
B
So we have a question from fox on IRC. He's asking: why not present writable snapshots to users that are in fact clones masquerading as snapshots? It seems like the functionality is identical to writable snapshots.
B
I
think
functionality
of
clones
is
identical
to
radical
writable
snapshots
and
you
know
conceptually.
He
thinks
that
you
know
writable
snapshots
mix
is
makes
more
sense
conceptually
than
clones.
C
Well,
maybe
we
should
take
and
explain
some
of
your
thinking.
I
know
butterfest
did
the
other
thing
right
where
they
have.
Every
snapshot
is
basically
a
writable
snapshot.
A
C
Kind
of
like
the
model,
the
cfs
adopted
personally,
I
like
the
distinction
between
there
being
an
immutable
snapshot
of
something
and
then
a
writable
version
of
it
or
many
versions
of
the
clones.
I
think
that's
a
nice
distinction
to
make,
but
it
does
seem
somewhat
like
a
policy
thing,
so
I
can
see
people
arguing
the
other
side
of
that.
B
Yeah
so
I
mean
I
am
kind
of,
maybe
I'm
biased,
but
I'm
with
you
on
like
liking,
having
being
able
to
have
as
a
policy
like
these
things
are
immutable,
and
these
things
are
changeable
so
that
you
know
that,
like
okay,
I
can
go
back
to
this
point
in
time
in
the
past
and
nothing
will
mess
with
it.
But
you
know
you
could
imagine
having
it
both
ways
right,
like
with
the
writable
snapshots.
That
butterfest
has,
I
mean
they
could
just
have
some
thing.
On
top
of
that.
That
says
like.
B
Oh,
I'm
creating
this
snapshot,
but
it's
not
going
to
be
a
writable
snapshot
and
it
just
like
administratively
prevents
you
from
writing
to
it.
Even
though
you
know
the
implementation
could
do
it.
So
the
reason
that
that
we
did
did
the
way
that
we
did
in
zfs
is
that,
like
the
way
that
it's
actually
implemented
is
very
different
than
in
butterfest,
so
it
it
relates
to
like
how
we
figure
out
when
to
free
space.
B
So
if
you
take
a
snapshot
and
then
create
a
clone
from
it,
and
the
clone
like
totally
eventually
totally
diverges
from
the
snapshot,
it's
not
actually
using
any
or
you
know,
maybe
only
a
small
number
of
blocks
from
the
snapshot,
and
maybe
the
file
system
that
the
snapshot
was
based
on,
has
also
totally
changed
and
has
totally
different
blocks.
In
that
case,
you
know
you
could
have
like
say
you
had
a
terabyte
file
and
then
you
overrode
it.
You
created
a
clone
and
you
totally
overwrote
it.
So
you
have.
B
We
would
have
like
three
terabytes
of
information,
the
snapshot,
the
file
system,
that's
based
off
of
and
then
the
clone,
and
because
this
makes
it
a
lot
easier
to
implement,
because
we
can
know
very
s
with
very
little
pieces
of
information,
basically
just
the
birth
time.
That's
in
the
block
pointer,
we
can
know
if
a
block
is
part
of
a
snapshot
or
not,
and
so
basically,
if
you
have
a
clone
and
the
block,
the
block's
birth
time
is
before
the
snapshot
that
we
created
it
from.
B
Then
we
know:
okay,
that
block
is
being
shared
with
the
snapshot,
so
we
don't
really
have
to
worry
about
it.
So
when
we,
when
we're
not
using
it
anymore,
we
can
just
ignore
that
we
don't
need
to
do
anything
about
it
versus
the
way
that
butterfest
implemented
it
from
my
understanding
is
that
they
actually
have
like
back
pointers
from
every
block,
so
that
whenever
you
create
a
new
reference
to
block
through
snapshot,
then
you're
creating
pointers
back
to
all
the
things
that
reference
it.
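To make the birth-time test concrete, here is a minimal C sketch of the check Matt describes; the type and names are simplified stand-ins, not the actual OpenZFS code:

```c
#include <stdbool.h>
#include <stdint.h>

/* Simplified stand-in for a ZFS block pointer: every block records the
 * transaction group (TXG) in which it was written -- its "birth time". */
typedef struct {
	uint64_t blk_birth;
} blkptr_sketch_t;

/*
 * When a block is freed from a clone (or a live file system), this one
 * comparison decides whether its space can be reclaimed: a block born at
 * or before the origin snapshot's TXG is still referenced by the snapshot,
 * so the free must not reclaim it.
 */
static bool
block_shared_with_snapshot(const blkptr_sketch_t *bp, uint64_t snap_txg)
{
	return bp->blk_birth <= snap_txg;
}
```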
B
That allows you to have true writable snapshots, where if I create a writable snapshot and then modify it, I could end up with only two copies of, say, two terabytes of data — one that's the new version in the file system and one that's the new version in the clone. There is no unmodified snapshot that I need to keep the data for. That is, I think, a bit more powerful in certain circumstances, allowing you to save that space, but the implementation is more complex and potentially has more performance issues: you're going to notice a tax when you're heavily using this kind of feature. Whereas with ZFS, you can have as many snapshots and clones as you want, and they all have the same performance as the original file system.
C
Let me chime in along similar lines. I think one of the really cool features that btrfs and some other Linux file systems have now is reflink, which relates to this — being able to make a thin copy of a single file. I think that's very cool, and I would like to see that functionality come to ZFS, because there are tons of use cases for it.
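As a rough illustration of the reflink-style thin copy Brian mentions — assuming a Linux system with btrfs and its userspace headers; this is not a ZFS interface — a whole file could be cloned with the BTRFS_IOC_CLONE ioctl:

```c
#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/btrfs.h>	/* BTRFS_IOC_CLONE */

int
main(int argc, char *argv[])
{
	if (argc != 3) {
		fprintf(stderr, "usage: %s <src> <dst>\n", argv[0]);
		return 1;
	}
	int src = open(argv[1], O_RDONLY);
	int dst = open(argv[2], O_WRONLY | O_CREAT | O_TRUNC, 0644);
	if (src < 0 || dst < 0) {
		perror("open");
		return 1;
	}
	/* Ask the file system to share src's blocks with dst; both files
	 * stay independently writable and diverge copy-on-write. */
	if (ioctl(dst, BTRFS_IOC_CLONE, src) < 0) {
		perror("BTRFS_IOC_CLONE");
		return 1;
	}
	close(src);
	close(dst);
	return 0;
}
```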
B
Yeah,
I
talked,
and
I
think
they
have
rough
links
on
freebsd
too,
is
that
oh
cool,
I
recall
correctly
and-
and
I
was
talking
to
somebody
from
from
that
community
about
this
depending
on
the
exact
semantics,
I
think
it
could
be
implemented
in
zfs.
B
Potentially,
can
you
what,
if
you
create
a
ref
link,
then
can
you
change
both
the
old
and
new
copies
like
independently
yeah
and
okay.
B
Other
file
systems
for
others.
So
how
do
you
know
how
that
is
implemented
like
on
ext3?
Is
it
like
ref,
counting.
C
Yeah. For ZFS, I was thinking you could do a relatively easy implementation of it if you did something like per-file dedup, maybe, and just created it that way.
C
You'd get most of that property for free. All you would need is a cheap way to create that deduped object, and as long as you can do it per object instead of on every object in the dataset, you might not pay the cost of dedup, I believe.
B
Yeah — you could also do something a little bit fancier, because you know the relationship of all these blocks. It isn't like any block can reference any other block. It's: this file's block number 23 can reference that other file's block number 23,
B
end
of
story
right,
so
I
would
think
that
you
could
avoid
some
of
the
pain
of
having
the
giant
hash
table
for
dedupe.
By
taking
that
into
account.
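Matt's point — that a reflink only ever shares block N of the source with block N of the copy — can be sketched in a few lines. This is an illustration of the idea, not code from any ZFS patch:

```c
#include <stddef.h>
#include <stdint.h>

/* Illustrative stand-in for an array of block pointers in a file. */
typedef struct {
	uint64_t blk_birth;	/* TXG of allocation */
	uint64_t dva;		/* on-disk address */
} blkptr_sketch_t;

/*
 * A reflink maps block i of the source to block i of the destination,
 * end of story. General dedup must instead find matches between arbitrary
 * blocks pool-wide, which is what forces the giant hash table.
 */
static void
reflink_blocks(blkptr_sketch_t *dst, const blkptr_sketch_t *src, size_t n)
{
	for (size_t i = 0; i < n; i++)
		dst[i] = src[i];	/* share now, copy-on-write later */
}
```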
B
I
actually
implemented
something
kind
of
similar
to
this
for
a
dell
fix
project
which
I'll
talk
that
I'll
talk
about
that
later.
If
we
have
time
because
it's
kind
of
a
tangent
and
we've
already
gone
on
a
tangent
from
fox's
original
question,
he
has
a
follow-up
here,
he's
asking
so
if
it
with
a
snapshot
promoted
to
to
be
a
clone.
Oh,
I
see
so
like
when
you
create
a
clone
from
a
snapshot.
B
You
never
get
a
space
back,
even
if
the
original
data
set
yes,
so
because
with
zfs,
that
snapshot
always
exists,
so
the
snapshot
is
holding
onto
the
space.
B
That
would
be
somewhat
opaque
and
have
surprising
you
know,
space
usage
characteristics
if
we
didn't,
you
know,
make
it
part
of
the
interface
that,
like
there
is
this
snapshot,
and
it
is
the
thing
that
continues
to
hold
on
to
this
space.
Given
that
that's
how
we
we
implemented,
it.
B
Interesting,
so
someone
is
saying
row
roh
he's
using.
I
guess
this
is
solaris
10
update
10
and
he
has
a
a
deep
tree
of
clones
with
up
to
100
clones.
Deep,
I
assume
that's
like
a
a
clone
of
a
kind
of
a
clone
of
the
clone
of
a
clown.
B
Ask
him
if
he's
talking
about
performance
issues,
so
he's
asking
what
is
the
deepest
country?
That's
considered,
okay,
as
opposed
to
stretching
the
limits
so
have
you
do
you
have
any
experience
with
this
brian.
C
I don't, actually. We've tried it a little bit on Linux, but we rarely go more than a couple levels deep. Our problems tend to be — and I'd expect the issues here to be — stack related. A lot of these functions in ZFS are implemented recursively: you traverse all the parents up the tree, recursively.
B
And
that's
only,
for
I
mean
that's,
that's
like
the
in
the
tree,
like
hierarchical
tree
of
of
data
sets
right
so
like
yeah
pool,
slash
home,
slash,
mat
videos
has
to
traverse
that,
but
you
wouldn't
I
mean.
If
you
have
a
bunch
of
clones
you
could
arrange
them.
You
know
totally
flat
if
you
wanted
right,
cool.
C
C
B
Yeah, me too. So I would be interested to find out what those issues are — I assume performance issues. I would assert that there are no performance issues with the read and write code paths, the main data path.
B
I
would
be
extremely
surprised
if
you
saw
different
performance
between,
like
just
the
file
system
versus
the
clone
of
the
clone
of
the
clone
of
the
clone
of
the
clone,
etc,
because
you
know
all
of
those
clones,
they
have
exactly
the
same
data
like
they
have
same
block
pointers
same
tree
of
blocks
birth
times,
like
all
that
stuff
is
the
same
as
a
regular
file
system.
The
place
that
I
could
imagine
there
possibly
being
some
performance
issues
is
with
management
operations.
B
So,
like
I'm
in
the
middle
of
this
tree
of
clones,
and
then
I
promote
the
one,
that's
in
the
middle
or
you
know
deleting
one
of
their
snapshots
or
something
like
that.
Now
I
think
you
know
we
really
tried
to
design
all
those
algorithms
so
that
there
wouldn't
be
scalability
issues
there,
but
it
is
possible
that
there's
something
that
we
overlooked.
B
So
he
got
back
to
us
and
said
with
performance
issues
and
kernel
panics
every
once
in
a
while
yeah,
as
as
george
followed
up
there
on
irc,
you
know
we
do.
We
definitely
want
to
see
those
criminal
panics,
because
you
know
that
could
be
any
number
of
things
it
could
be.
It
could
be
that
there's
some
stack
issue
like
you
mentioned
brian
or
you
know,
who
knows
who
knows
how
much.
C
B
Yeah
I
mean
either
but,
like
I
said
it
should
it
should
totally
work.
So
if
not,
then
definitely
let
us
know,
and
he
says
he
also
said
that
their
name
space
is
flat,
so
they
aren't
running
into
the
like
recursion
issue
with
deeply
nested
names.
B
B
B
I did this slightly differently this time. Before, I was using Hangouts on Air, and this is also Hangouts on Air, but it's pre-scheduled, which I thought would work better because then people could have the URLs a month in advance. But maybe there's some problem with actually getting people to join it. Also, I can invite people from in here, which is how Brian joined.
B
So if you — I guess, tell me your name on Google+, then... No? Great. George says that it's not letting anybody join.
B
Well, all right. I apologize, guys, that you aren't able to join this Google+ hangout — that's really annoying. I wonder if people can find us and try to join it.
C
So, something that's been of interest to me, and I think many others, for ZFS has been large block support. I would like to see ZFS be able to handle blocks larger than 128k. Fundamentally, in the code, it's a constant. I don't know exactly why you guys picked 128k way back then — it probably seemed like a good choice at the time — but it became clear that it's a little bit too small for certain workloads.
C
There are cases where you would want to have bigger blocks. In particular, the RAID-Z case is of interest to us, where you have big files and you want to spread them over lots of vdevs.
C
You don't want to be carving a block up into 4k or 8k or 16k chunks, so it would be really nice to be able to have one-meg blocks, two-meg blocks, eight-meg blocks — I don't know why you'd need to stop at one meg, necessarily. So for the hackathon at the developer summit, I picked up some work on large block support that Matt had started and pushed it, at least as a prototype patch, to get it working.
C
It was pretty cool to see it working. So there's now a patch floating out there — at least in a pull request on the Linux port — that's kind of a prototype patch that does work. I know Matt picked up that work and pushed it farther, so it keeps advancing. So I think there's hope of us seeing large block support in ZFS once we iron it out for those use cases.
B
Cool. So I took some of your changes and actually ported them over to illumos, and did some of the things that were on your to-do list.
B
So
I
went
through
and
I
looked
at
all
of
the
places
that
were
using
spa
max
block
size.
So
by
the
way
you're
like
your
changes,
did
they
totally
worked
on
a
lumos
with
very
little
with
very
little
work
effort.
Oh
george
joined,
hey
yay
george,
okay,
I'm
gonna
post
that
url
to
the
okay.
B
Okay,
great,
so
that
was
if
anybody
was
hosting
this
in
the
future.
That
was
the
url
that
actually
my
browser
is
looking
at
now
to
to
to
participate
in
this.
B
Cool. Yeah, I went through and took kind of a conservative approach of changing —
B
Basically,
all
of
the
places
that
we
use
spa
max
block
slides
to
be
spot
old,
max
block
size,
so
in
other
words
128k,
so
that,
basically,
unless
you
change
the
property
on
the
file
system,
the
record
size
property,
then
you'll
get
like
exactly
the
old
behavior
of
just
going
up
to
108k
and
and
then
I
also
added
support
for
the
like
perf
per
file
system
feature
flag
kind
of
thing,
where
it'll
keep
track
of
which
file
systems
are
using
the
large
blocks.
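For readers following along in the code, the change Matt describes amounts to something like the following sketch. SPA_OLD_MAXBLOCKSIZE and the 128k value come from the discussion; the new upper bound shown here is an assumption for illustration, since the prototype was still settling on one:

```c
/* Historic limit: every block in a ZFS pool was at most 128k. */
#define	SPA_OLD_MAXBLOCKSIZE	(128ULL << 10)

/* Larger upper bound for pools with the large-block feature enabled
 * (16M is an assumed value). Existing call sites were switched to
 * SPA_OLD_MAXBLOCKSIZE, so behavior only changes when a file system's
 * recordsize property is raised above 128k. */
#define	SPA_MAXBLOCKSIZE	(16ULL << 20)
```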
B
I
know
like
for
for
database
stuff,
we've
actually
at
delfix
been
interested
in
using
it
for
compression
potentially
as
well.
Do
you
have
any
do
you
guys
use
compression
at
lawrence,
livermore
or.
C
That was an open question for us: would we benefit from compression or not on our systems? A lot of the applications do their I/O through libraries, and those have compression and whatnot built into them. So it depended on what the users did — whether we were already storing pre-compressed data in the file system or not. We really didn't know, but it turns out that, yeah —
C
we took a conservative approach at first too, because the LZ4 support was new, so we just turned on LZJB. But even with that, we're seeing remarkably good compression ratios in the file systems. If you have a 50-petabyte file system and you get 1.7-to-1 compression — yeah, that's pretty good.
C
That's
huge
extra
tens
of
petabytes
is
a
nice
win,
so
we're
thinking
that
we
may
move
to
lz4
because
it's
proven
itself
in
the
last
year,
I
would
say
I
haven't
heard
of
any
issues
with
it
since
it
was
merged.
So
I
I
think
we've
got
some
trust
in
that
working
right
now,
yeah.
There
was
another
thing
actually
that
pulled
over
really
easy
for
alumnus.
That
was
great.
B
Oh,
that's
great
yeah
we're
at
delfix
we're
also
using
lz4
in
production.
All
of
our
customers
are
on
lz4.
Now.
C
B
What
what
what
would
you
think
about
changing
the
default
of
if
you
just
set
compress
equals
on
changing
that
to
be
lz4,
assuming
that
they
have
enabled
the
lz4
feature
changing
that
to
be
lz4
rather
than
lzjb?.
C
I
think
I'd
be
okay
with
that
now
that
enough
implementations
have
picked
up,
the
ld4
feature
right:
it's
in
illumos,
it's
in
freebsd,
it's
in
linux,
so
I
don't
think
that
would
cause
people
too
much
trouble
now
that
it's
most
places
what
was
initially
merged.
That
was
a
little
scarier
right
because
you
could
make
an
incompatible
pool
really
yeah,
but
now
I
don't
see
a
good
reason
not
to.
B
Cool
yeah,
that's
something
that
we
should
think
about
doing.
If
anybody
wants
to
this,
that
seems
like
a
pretty
easy
change
to
make
you
just
need
to
like
check
is
if
it
is
the
feature
enabled
and
then,
if
so,
you
know
set
the
compression
algorithm
based
on
that,
so
that
would
be
pretty
easy
work
to
do.
If
somebody
wants
to
pick
up
a
small
bug.
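A sketch of the check Matt outlines — the names here are illustrative, not the exact OpenZFS identifiers:

```c
#include <stdbool.h>

typedef enum {
	COMPRESS_LZJB,	/* historic default for compression=on */
	COMPRESS_LZ4	/* candidate new default */
} compress_alg_t;

/*
 * Resolve what "compression=on" should mean for a given pool: LZ4 is
 * only safe to hand out when the pool's lz4_compress feature flag is
 * enabled, since older implementations cannot read LZ4 blocks.
 */
static compress_alg_t
resolve_compress_on(bool lz4_feature_enabled)
{
	return lz4_feature_enabled ? COMPRESS_LZ4 : COMPRESS_LZJB;
}
```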
B
Were
there
any,
it
looks
like
there's
just
a
lot
of
issues,
people
having
issues
with
the
hangout,
I'm
looking
to
see
if
there
have
been
any
more
actual
questions,
richard.
C
— had a question about large block support. Maybe you looked at this, maybe not: he's asking if it's compatible with the 1-meg block support that went into Solaris. I don't know; I have not tried. I suspect they made the changes in a very similar way to the way we did, so maybe, but I don't think any effort has been expended yet to make sure that it's compatible.
B
Yeah
I
haven't
looked
at
that
either
I
would.
I
would
suspect
that
it
would
just
work,
but
the
issue
would
be
like
figuring
out
like
they
don't
understand
that
feature
flags,
and
we
don't
understand
you
know
that
version
whatever
it
is
means
large
block
support.
So
assuming
that
we
implemented
it
the
same
in
terms
of
like
using
the
higher
bits
of
the
you
know,
size
and
the
block
pointer
and
whatnot,
it's
still
might
be
a
little
bit
tricky
or
hacky.
B
To
tell
you
know,
tell
the
other
code
that
you
can
read
this.
It
might
be
something
where,
like
you,
would
have
to
have
like
a
transition
where
you
said.
Okay,
I
have
this
thing.
That's
using
large
blocks,
it's
only
it's
only
using
large
blocks
in
open
zfs
and
it's
not
using
lz4,
and
it's
not
using
these
other
things.
So
I
can
go
and
safely
like
downgrade
the
the
version,
the
full
version
number
from
version
5000,
which
is
the
feature
flags
version
to
version
31
or
whatever
it
is
that
oracle
zfs
uses.
B
So
richard
yao
was
suggesting,
I
think,
suggesting
that
we
try
lz4
for
metadata
as
well,
that
that
definitely
seems
like
something
we
should.
We
should
investigate
sure
or
that
he
should
investigate.
C
Actually, one of the nice things about all the compression code is that it's very pluggable — it's kind of self-contained, and it's easy for anybody to work on. You just need to find the right entry points, which are pretty well documented, and you can try those sorts of changes out pretty straightforwardly. So that might be a bite-sized project: someone could try using LZ4 for the metadata.
B
I think Saso — the guy who actually implemented LZ4 for ZFS — looked at it, and he put up some performance numbers. LZ4HC is a lot slower, right?
B
There's another question about compression. This is fox, saying it would be useful to have quotas on the amount of uncompressed data, because otherwise the interaction between compression and quotas, you know, gives surprising results.
B
There isn't any quota that counts only the uncompressed size, but I did at least add a property that tells you the uncompressed size — logicalused, yes. I was about to go look it up.
B
So at least you have that property available, so you can see the uncompressed size. We didn't need quotas on that, but it seems like it wouldn't be that hard to implement, given that we already have that information.
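For anyone who wants to read that property programmatically rather than via the zfs command, a minimal libzfs sketch follows; "tank/data" is a placeholder dataset name, error handling is trimmed, and the program links against libzfs:

```c
#include <stdio.h>
#include <libzfs.h>

int
main(void)
{
	libzfs_handle_t *g_zfs = libzfs_init();
	if (g_zfs == NULL)
		return 1;

	/* "tank/data" is a placeholder dataset name. */
	zfs_handle_t *zhp = zfs_open(g_zfs, "tank/data", ZFS_TYPE_FILESYSTEM);
	if (zhp == NULL)
		return 1;

	char buf[64];
	/* logicalused reports space consumed as if uncompressed. */
	if (zfs_prop_get(zhp, ZFS_PROP_LOGICALUSED, buf, sizeof (buf),
	    NULL, NULL, 0, B_FALSE) == 0)
		printf("logicalused = %s\n", buf);

	zfs_close(zhp);
	libzfs_fini(g_zfs);
	return 0;
}
```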
B
So cburroughs had a question, I think probably mainly for you, Brian. He's asking about the list of milestones on the ZFS on Linux GitHub page. Some of the commentary on them is "maximize performance" and "missing advanced features" — I'm not sure what that means exactly, but I —
C
I don't either, exactly. The way we track it: when a release gets kicked out on the Linux side, we make a milestone, and basically there's a list of things that we need to get fixed before we release it. So there's a judgment call made on that stuff at the moment.
B
I'm
not
sure
I
think
maybe
he
was
asking
I
I
brought
up
the
page
and
it
says
like
zero:
nine,
zero
sev,
the
the
description
is
significant
zfs
restructuring
to
maximize
performance
and
then
zero
eight
zero,
it
says,
adds
missing
advanced
features.
Maybe
he
was
just
asking
like
what
features
those
are
and
what
performance
improvements
are
needed
to
get
to
those
milestones.
C
Usually each milestone contains a mix of backports of features from illumos, performance improvements for Linux, bug fixes — whatever the critical stuff is — and we try to make sure no regressions sneak in, that kind of stuff. So there's a mix of performance improvements and bug fixes that we expect to be in the next tag. I could probably go over some of those, if that's of interest; a lot of them are backports from illumos.
C
Well, let me briefly touch on some of the more interesting stuff in the next tag. Along with the bug fixes, there are some of the new features people are going to want: we finally get POSIX ACLs when we make our next tag. That'll be nice. They were implemented on the Linux side — not in terms of the NFSv4 ACLs.
C
Well, it's easier to make sure it's right, too, because unlike some of the other kernels, Linux provides us all the helpers and all the functionality for POSIX ACL support — they're all in the VFS.
C
For a long time we wanted to unify the two in the common ZFS code base, so that we could store, say, a POSIX ACL and then convert it to an NFSv4 ACL — basically preserve all of it — and then convert it back when Linux wanted it as a POSIX ACL. But we ended up punting on that, and instead said: okay, we'll just store this as an xattr. So they'll be visible on other platforms as POSIX ACLs stored in extended attributes, but they won't be enforced or anything — only on Linux.
B
So
when
you,
when
you
have
or
can
you
have
both
these
kinds
of
permissions
as
well
as
the
you
know,
nfsv4
or
sifs,
smb
style,
permissions
or
or
do
you
like,
and
how
do
those
interact?
I
guess.
C
Exactly — so we preserved everything as it was in illumos, pretty much. If there's a POSIX ACL set in the xattr, it takes precedence: the Linux VFS will read that and enforce it as appropriate. And if all those checks pass, then we use the normal code path through ZFS — convert the ZFS NFSv4 ACL to mode bits and whatnot — and then those get enforced like a normal Unix file system.
B
It's
like,
basically,
you
have
two
checks.
You
have
to
pass,
one
is
the
posix
axles
and
then
one
is
the
nfs
slash,
smb
style
axles
that
are
implemented
in
zfs
proper,
yes,
sketches.
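In outline, the two-stage check being described looks something like this; the struct and names are hypothetical stand-ins for illustration, not the real kernel types:

```c
#include <stdbool.h>
#include <stdio.h>

/* Illustrative stand-in for the state consulted on each access check. */
struct access_sketch {
	bool has_posix_acl;	/* POSIX ACL stored in an xattr? */
	bool posix_allows;	/* result of the VFS POSIX ACL check */
	bool nfsv4_allows;	/* result of ZFS's NFSv4 ACL / mode check */
};

/* The POSIX ACL, enforced by the Linux VFS, gets the first chance to
 * deny; only then does the request reach ZFS's own ACL path. */
static bool
may_access(const struct access_sketch *a)
{
	if (a->has_posix_acl && !a->posix_allows)
		return false;
	return a->nfsv4_allows;
}

int
main(void)
{
	struct access_sketch a = { true, true, true };
	printf("access %s\n", may_access(&a) ? "granted" : "denied");
	return 0;
}
```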
C
No — in fact, all of the milestones created there... I needed to make milestones because I had a lot of open issues, and I had to make some judgment call about when I thought they might possibly be fixed. So I created a bunch of milestones, and based on how hard an issue was, or how important it was, or how quickly I thought we were going to get to it, I put them in certain buckets. I just happened to stop making 0.6.x milestones at 0.6.5.
C
There's nothing particularly special about that. I'm not even sure we're going to get there; we might just jump to 0.7 when some major changes come in, and move all those issues forward. What I'll often do is just punt issues from one milestone to the next based on whether we're going to get them in or not, because at some point we have to make a tag, right? So we'll just move an issue if we don't think it's going to get into a milestone.
B
Cool
here's
another
softball
one
for
you,
f
d
ion
is
asking:
is
there
a
list
with
easy-to-follow
steps
for
the
various
linux
distros
mint
and
debian?
In
particular,.
C
Yeah, I wish there was better documentation, but every distribution is a little bit different, so we've kind of deferred to the distributions and the enthusiasts for those distributions to put out documentation on the right way to do it on their platform. We tried early on in the project to support as many distributions as possible, but it just became unwieldy, and frankly the best people to write the documentation describing how it should be done on a particular version of Linux
C
are the people who are most familiar with it. So we've taken a step back and focused more on just making the core ZFS code behave well on all Linux distributions — get the kernel compatibility right — and leave the integration issues to those people more familiar with each platform. The Linux ecosystem is diverse, and we don't have the time or resources to do the right thing for everybody's platform.
C
Maybe it makes sense to put a section on the OpenZFS page with at least links to the distribution-specific documentation where it exists. But no, it's not always clear. I think you'll probably do pretty well if you stick to some of the more common distributions: support for Ubuntu is good, support for Fedora is pretty good, support for Gentoo is really good. If you're running more obscure distributions, it's not just going to be there — you might have some trouble installing and configuring it.
B
So
see,
there's
another
question
that
I
thought
you
would
like.
Oh
so
richard
yao
was
asking
related
to
posix
apples,
people
in
mac
zfs
we're
asking
why
ackle
mode
hasn't
been
brought
back
yet
I
think
this
is
I'm
not
super
familiar
with
this.
This
is
this
is
something
that,
like
it
was
in
sli
aqua
mode
was
in
solaris
and
then
they
removed
it
and
then
like
we
added
it
back
in
illumos,
so
I'm
not
sure
what
the
state
is
on
linux
or
if
you
thought
about
that.
C
So we pulled the code after the latest drop — from whatever it was, build 134, the last version of OpenSolaris. I think at that time aclmode had been reverted from the code base, and subsequently you guys added it back into illumos. We saw those upstream commits go in, but at the time they weren't really of interest to us, because we didn't have any ACL functionality on Linux, so we kind of decided to skip doing that work. We'd need to get our heads around how it worked.
C
So I don't know that we're opposed to bringing back aclmode. We just need to figure out how it makes sense for Linux — or if it makes sense for Linux — and what the behavior should be there. I would say we just haven't given it a lot of thought.
B
And
probably
I
mean
the
for
the
max
dfs
folks,
you
know
if
they
have
given
it
that
thought
and
it
does
make
sense
for
them.
Then
I
think
you
know
adding
it
back
in
max.
Dfs
would
also
be
totally
fine.
C
B
So
fox
has
a
question
about
milestones
that
maybe
you
wanted
to
read
on
the
irc,
because
it's
a
little
long,
it's
a
little
long.
It's
just
the
last
comment.
C
I would generally agree with that. He's asking for release candidates, basically. Originally we had hoped — I had hoped, perhaps naively — to make tags more frequently and put them out there for people to use, but in practice the tags we've been cutting as stable releases for Linux have been less frequent than I would like.
C
So
the
idea
is
just
to
drop
more
tags
that
people
can
run
known
points
in
in
the
code
base
without
running
just
the
latest
master
source,
I'm
okay
with
that
it
just
hasn't
been
a
priority
for
us
to
do
it,
but
I
I
think
it's
a
fine
idea
as
long
as
we
don't
have
to
go
through
any
more
advanced
testing
on
those
particular
points
and
develop
more
resources
to
that.
If
it's
sufficient
just
to
tag
the
master
source
to
the
point
where
we
feel
it's
in
good
shape,
then
that's
fine.
C
We don't let things slip in, and it is stable and safe to run, but sometimes things get missed, right? It's just that the stable tags do see more testing, and maybe making these additional point-release tags would get more testing for those points, which would make a better final official release. So yeah, I'm okay with that, but you'd have to decide on a policy for the frequency — how often that's going to happen, what they're going to be called,
C
how tags and problems get fixed. I guess there are some more questions about how this is done on Gentoo — Richard Yao takes care of this, the downstream integration for Gentoo.
B
Yeah — actually, Richard, I think, is listening to this. I'd be interested to see whatever scripts you guys are using to pull changes from illumos into ZFS on Linux, because it's not just as simple as git cherry-pick, right? You have to —
C
Yeah, so the process is pretty manual at the moment. There are some scripts that were written that do basic things like path substitution, to translate the illumos paths into the way they're laid out in the Linux tree. Long term, I'm not even opposed to restructuring the Linux tree, or coming to some agreement on an easier way to make that happen — to get the paths the same. I know FreeBSD, I think, pulled over the paths exactly as they were.
C
Well, when doing the port to Linux, I couldn't bring myself to do that — I just couldn't. It does not need to be this bad, right? We live in our own repo, so I moved things around — it's my fault that things are in different places. I moved them at the time to what I thought were reasonable places. So now we have this.
B
What
you
did,
I
don't
think,
is
necessarily
a
bad
idea
if,
like
if
you
have
like
automation
and
scripts
for,
like
you,
know,
changing
the
paths
between
them.
The
reason
I'm
asking
this
is,
as
I'm
thinking
about
doing
the
open
zfs
code
repository
like
how
we
should
lay
out
the
code
there
and
how
and
like
you
know,
we'll
need
like
scripts
to
change
the
paths
to
be
able
to
like
easily
get
changes
merged
in
because
we
want.
B
We
want
to
make
it
as
easy
as
possible
to
get
changes
from
linux
into
open,
zfs
and
then
on
to
other
platforms,
so
you're
having
scripts
in
place.
To
do.
That
would
be
great.
Oh.
C
Great. I would say the biggest thing is just the path substitution. Beyond that, there are just going to be conflicts in the code for things that have subtly changed, and unfortunately they just need to be manually worked out.
C
Yeah, it would be nice if we did push more of our changes back upstream somewhere, to a common code repo, so there were fewer differences — because a lot of the differences are trivial, right? It's just context fuzz or that kind of thing.
B
Yeah,
so
that's
you
know,
for
those
of
you
that
that
haven't
already
heard
me
give
this
spiel.
B
That's
exactly
what
you
know
we're
trying
to
do
with
creating
open
zfs
code
repository
I've
been
behind
on
my
organization
work
on
that,
but
I'm
hoping
that
in
the
next
few
months,
I'll
have
some
more
time
to
devote
to
that
and
I'll
definitely
be
looking
for
volunteers
in
particular,
one
of
the
first
things
that
we
need
to
do
is
create
make
files
that
will
be
able
to
build
this
code
in
userland
on
all
the
different
platforms.
B
So
we
want
to
have
the
opencvs
code
repository
basically
be
all
the
code
that
is
truly
platform
independent
and
create
like
posix
wrappers,
so
that
this
code
can
be
compiled
in
userland,
similar
to
what
we're
doing
today
with
lib
zpool
and
z-test.
But
we
want
to
be
able
to
have
like
one
code
repository
with
one
set
of
make
files
that
you
can
actually
build
on.
You
know,
run
make
or
you
know,
configure
or
whatever
and
have
it
get
built
on
linux
or
freebsd
or
illumos
at
least
so.
B
Somebody
who
knows
these
kind
of
makefile
tools
would
be
very,
very
valuable
and
much
much
appreciated.
C
It would be interesting to try this with the current Linux repository. We have configure options in place now to build just everything in user space, or everything in kernel space. So you could probably take the current repo, drop it on FreeBSD, say "just build everything in user space," and see how it goes. It might not be so bad.
B
And is that the autoconf stuff? Is this about finding what packages are installed where, and that kind of thing?
C
It's a framework for being able to write basically generic tests for anything you need to be able to do conditionally in your code. So on your platform you need to figure out — well, a trivial example — where awk is installed, or something like that. There's a canned test for figuring out where awk is installed, and you can get that kind of thing. But you can also use it to do things like check the behavior of a specific C function on a platform, something that you know behaves differently on FreeBSD versus Linux.
B
Just like between different Linux distros that have differences you need to take into account?
C
There
are
some
actually
not
too
many
between
the
distributions,
because
a
lot
of
them
run
similar
versions
of
the
libsy
and
the
kernel
takes
a
very
hard
line
on
not
changing
any
kind
of
abi
behavior
to
user
space.
That's
considered
a
serious
offense
if
you
change
the
behavior
as
it
exposed
user
space
at
all,
but
we'll
see
things
like
changes
in
the
pilot
or
the
gcc
right
now.
C
There
was
recently
an
issue
with
gcc
48
on
linux,
where
it
started
overly
aggressively
optimizing
some
loops
by
default
and
that
led
to
some
problems
in
zfs
those
newer
versions
of
gct48.
We
have
to
turn
off
the
aggressive
loop
optimization,
so
there's
an
auto
check
in
there
now
in
the
new
code.
That
says:
does
your
version
of
gcc
support
aggressive
loop,
optimization
and,
if
so,
turn
it
off
until
we
can
rework
the
zfs
code,
so
this
doesn't
cause
a
problem,
but
it
handles
all
of
those
cases.
It's
a
framework
for
handling.
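As a rough illustration of the class of code this optimization can bite — the general hazard pattern, not the specific ZFS code that was affected:

```c
/* A pre-C99 idiom: 'data' is declared with one element but really has
 * 'len' entries allocated after the struct. */
struct vec {
	int len;
	int data[1];
};

static int
sum(const struct vec *v)
{
	int s = 0;
	/*
	 * Under GCC 4.8's aggressive loop optimizations, the compiler may
	 * assume the index can never exceed the declared bound of data[]
	 * (anything else would be undefined behavior) and rewrite the trip
	 * count, breaking code that relies on the over-allocation. Building
	 * with -fno-aggressive-loop-optimizations disables that assumption.
	 */
	for (int i = 0; i < v->len; i++)
		s += v->data[i];
	return s;
}
```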
C
Autoconf handles all of those cases — it's a framework for handling all of that stuff. I would say our most extensive use of autoconf on Linux is for kernel compatibility. We try to support lots and lots of Linux kernels, and while Linux makes a very strict guarantee about not changing system call behavior as exposed to user space, anything below that is fair game — they are completely willing to, and commonly do, restructure interfaces.
C
So
there's
a
lot
of
auto
code
on
linux
to
detect
what
interfaces
your
kernel
supports
and
then
use
them
appropriately,
but
all
that
kind
of
framework
could
totally
be
extended
was
originally
designed
for
compatibility
and
portability
between
various
platforms.
So
it's
very
cool
for
the
job.
It's
just
work
to
get.
It
done
all
right.
B
All right. Well, I'll definitely investigate that, and I'll probably bug Richard, since it sounds like he knows about some of this stuff.
B
Cool. So I think we're almost out of time. I have one last question for you: last time I saw you was at the OpenZFS Developer Summit, which was a great time, and I was wondering what feature or change to ZFS you are most looking forward to. We talked about lots and lots of new features and things that we're wanting to do, or planning to do, or hoping to do.
C
So, I would like to see things like — oh, we made a good pass at this: Chris Dunlap at Livermore just posted a pull request for a first cut at FMA support for Linux, and infrastructure to build that up — the ability to properly kick in hot spares and that kind of infrastructure.
C
I'm
I'm
excited
to
see
that
finally
come
to
fruition
because
we've
been
knowing
we
need
something
better
for
years,
so
there's
basically
something
very
analogous
to
what
was
on
solaris
written
for
linux
and
I
think,
even
a
little
bit
more
flexible
where
the
kernel
generates
events
and
they
get
consumed
in
user
space
by
demon
and
the
right
actions
happen
right.
So
I'm
I'm
excited
to
see
that
stuff
happen.
B
Do
you
know
how
portable
that
code
would
be
like
to
freebsd,
because
I
know
we
I
think
we
talked
about
this
at
the
developer
summit
like
they
have
similar
problems
of
of
not
having
that
functionality.
C
I
think
it'd
be
great
to
talk
to
for
the
freebsd
guys,
so
we
wrote
it
originally
with
the
idea
of
making
sure
it
was
portable
as
portable
as
possible,
so
it
was
written
to
avoid
using
as
much
as
possible
linux
specific
interfaces
or
anything
like
that,
all
the
scripts
and
whatnot
or
either
c
or
straight,
born
shell.
It
was
designed
and
written
with
portability
in
mind,
so
our
hope
is,
if
there's
interest
from
the
freebsd
guys
or
even
the
mac
community.
C
It
was
also
written
to
be
hopefully
as
extensible
as
possible,
so
that
kind
of
policy,
information
and
users
can
modify
and
adjust
and
easily
hack
on
without
having
to
get
really
deep
into
the
guts
of
zfs.
So
we
tried
to
make
it
as
open
and
usable
by
end
users
as
possible.
So
I'm
looking
forward
to
that.
C
There's
an
initial
pull
request,
open,
that's
completely
functional
and
it's
basically
time
for
review
and
comment
on
it.
It
works.
It's
got
the
basic
functionality
there.
It
needs
a
little
bit
more
code
cleanup,
but
I
would
say
it's
actually
in
pretty
good
shape.
We've
been
working
on
it
for
months
now.
So
because
it's
something
like
you
say
all
the
platforms
know
they
need,
but
I
think
only
a
lumos
has
right.
B
Yeah,
this
sounds
that's
cool.
We
should
send
this
over
to
like
justin
gibbs
and
the
other
freebsd
guys,
so
they
can
at
least
take
a
look
and
and
see
if,
if
they
want
to
help
boarding
that
freebsd.
C
Yeah,
it
should
be
relatively
straightforward,
as
for
other
stuff
that
I'm
excited
about
this
year
for
sure
on.
My
agenda
is
to
get
on
top
of
the
memory
management
issues
that
are
still
lurking
in
the
linux,
port
and
zfs.
That
needs
to
happen.
I
think
that
opens
up
a
lot
of
doors
for
us,
so
I'm
excited
to
get
down
and
work
on
that.
Hopefully,
that
will
come
to
fruition
this
year,
at
least
that's
the
plan
cool.
B
That'll
be
great.
Well,
I
don't
see
any
more
questions,
so
I
should
probably
wrap
it
up.
It's
been
great
chatting
with
you
cool.
Hopefully
we
get
to
do
this
again.
Sometime
yeah,
oh,
and
I
should
mention
before
we
go
any,
do
you
have
any
events
that
you
want
to
evangelize
or
promote?
C
You
have
a
couple,
I'm
still
involved
in
the
lester
community,
so
I
advise
going
to
log
always
illustrator
user
group.
I
think
it's
in
miami
this
year.
That's
always
a
good
time
I'll,
be
there
cool.
B
So
I
have
a
few
events
in
february
february
11th,
just
a
few
weeks
from
now,
I
will
be
speaking
at
the
ieee
computer
society
of
silicon
valley.
This
is
a
free
event
open
to
the
public.
It's
in
san
jose,
I
think
so,
if
you're
in
the
bay
area
and
it's
in
the
evening,
it's
like,
I
forget,
seven
o'clock
or
eight
o'clock
on
a
tuesday
evening.
B
So
if
you're
in
the
bay
area,
it's
a
great
opportunity
to
stop
by
and
I'll
be
giving
a
talk
kind
of
in
similar
to
what
I've
given
at
some
other
conferences,
but
a
little
bit
more
in
depth
and
there'll
be
lots
of
time
for
a
q
a
there.
B
So
that's
the
ieee
computer
society
of
silicon
valley,
february
11th
and
then
the
next
conference
that
I'll
be
at
will
probably
will
be
I'll,
be,
I
hope,
I'm
not
scooping
them,
but
I'll
be
giving
the
a
keynote
talk
at
the
freebsd
asia,
or
was
it
asia
bsd
con
conference,
which
is
in
tokyo
in
in
march.