From YouTube: OpenZFS Office Hours w/George Wilson
Description
OpenZFS Office Hours w/George Wilson, December 12, 2013
B
So, just a little bit about myself: I've been working with Matt for quite some time now on ZFS. It started when we were both working at Sun Microsystems, and I've continued doing that now that I'm at Delphix. I spend most of my time working on SPA-related issues, topics, and features, so pretty much the lower half of ZFS.
C
Okay, I guess I'll correct you: it's Joseph.
A
Where is... where's Max? He said that he would send you a softball question.
B
And Jason just indicated that he does support at Nexenta. Oh, cool. He indicated that on the chat.
A
Do you want to tell us... I know in the last hangout you fielded several questions about ZFS performance, so I don't want to bore you with that same thing. Oh, we'll get there eventually. Can you tell us maybe a little bit about what you've been... let's see, tell us a little bit about the hackathon project that you worked on at the OpenZFS Developer Conference? Okay, yeah.
B
So myself and Robert from Joyent started looking at a way to have a compressed ARC. The idea here is that, as blocks start to age, instead of just simply evicting them from cache, you compress them down first so that, hopefully, if somebody were to access them again, you could simply decompress them. So you've freed up some memory and allowed the ARC to continue to field new blocks, but if somebody happened to actually access that block, then you would get it back: we just simply uncompress it and hand it out.
B
So we've kind of staged it on the way to eviction. You have uncompressed blocks that are available for anybody who's referencing them at the moment, or new blocks that were just read in, and then, as things start to get to the point where they're going to get evicted, we stage them out: first compress them, and then eventually kick them out completely.
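A rough sketch of that staging, for illustration only. The arc_buf_t fields and the lz4_compress()/lz4_decompress()/arc_evict() helpers below are hypothetical stand-ins, not the actual OpenZFS code:

```c
#include <stdlib.h>

typedef struct arc_buf {
    void    *b_data;    /* uncompressed or compressed bytes */
    size_t  b_size;     /* logical (uncompressed) size */
    size_t  b_csize;    /* compressed size; 0 when uncompressed */
} arc_buf_t;

extern size_t lz4_compress(const void *src, void *dst, size_t len);
extern void lz4_decompress(const void *src, void *dst, size_t slen,
    size_t dlen);
extern void arc_evict(arc_buf_t *buf);

/* Eviction path: keep the block around in compressed form if it shrinks. */
static void
arc_evict_stage(arc_buf_t *buf)
{
    void *cbuf = malloc(buf->b_size);
    size_t csize = lz4_compress(buf->b_data, cbuf, buf->b_size);

    if (csize > 0 && csize < buf->b_size) {
        free(buf->b_data);      /* reclaim the large buffer */
        buf->b_data = cbuf;
        buf->b_csize = csize;
    } else {
        free(cbuf);             /* incompressible: evict outright */
        arc_evict(buf);
    }
}

/* Hit path: transparently restore the uncompressed data. */
static void *
arc_access(arc_buf_t *buf)
{
    if (buf->b_csize != 0) {
        void *dbuf = malloc(buf->b_size);
        lz4_decompress(buf->b_data, dbuf, buf->b_csize, buf->b_size);
        free(buf->b_data);
        buf->b_data = dbuf;
        buf->b_csize = 0;
    }
    return (buf->b_data);
}
```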
A
You mentioned determining if it was lucrative to do so. How will you figure out what the benefit of this work is? Yeah.
B
So I think the initial thought was to come up with a typical workload that would let us measure how many cache hits and misses you'd get for that workload, make the assumption that the workload is larger than memory, and then look at having the compressed component in there and see if we get a higher cache hit ratio.
B
I think we're also going to be looking at having DTrace probes, so we can track how many times we're actually going through and coming out of the compressed phase. And if it looks like it's low overhead and high impact, then we can move to trying to do this fully for the entire ARC.
A
So it sounds like there would be some increased CPU overhead from doing this compression and decompression. How will we figure out if that's acceptable, or is it going to be something you have to figure out yourself and tune, or what?
B
Yeah, that's a good question. Since we plan on using LZ4, which is really good about decompression, there's low overhead there. For customers or people that are already using compressed L2ARC, it really wouldn't be much of an impact to them, because they're already paying the penalty of doing compression. But yeah, I haven't really thought about how we would actually expose this out, to see...
...if this is the right fit for, say, somebody running on a dual-core processor or some kind of older PC that may not have a lot of CPU to give. I think we'll definitely have to maintain some kind of counters that people can query. I don't know how many people actually look at the overhead of compression, but that's definitely something that we need to think about.
A
Cool. So there are a couple of questions on IRC. The first one, from dhe, is asking about a compression algorithm, LZ4HC, which I believe compresses like LZ4 but with a higher compression ratio, using some more CPU. He was asking whether we might see that in the future as a new compression algorithm, or something in the style of gzip, where you can specify the level of compression, like lz4 -9 or -1 or something like that.
B
Yeah, and I seem to recall that Saso had done some initial work there. I'll have to go back and look at the results, but from what I remember they were comparable to what LZ4 was, and so it wasn't added. But with the compression mechanism within ZFS, it's very easy to plug new things in, so there's no reason why we couldn't extend this and have LZ4HC or, as was suggested, some kind of LZ4 variant with a compression value like -9 or -6 or whatever. For anybody who's interested in doing some analysis and wants to tackle that project, it's actually a pretty simple thing to do. There's a lot of boilerplate that's already in ZFS.

So it's easy to pick up the compression algorithms you use today, find all the places where they're referenced in the code, and then figure out where to plug a new one in. All the compression algorithms are standalone: there's a table that we simply reference every single time we want to compress a block. So it's easy to do, and a good intro project, in my opinion.
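A sketch of that table-driven design. The struct layout and entries below are illustrative, loosely modeled on how ZFS dispatches compressors rather than copied from the source; one thing that is real is that LZ4HC output is decodable by the plain LZ4 decoder:

```c
#include <stddef.h>

/* Hypothetical compressor vtable; each algorithm is a standalone entry. */
typedef size_t (*compress_fn_t)(const void *src, void *dst, size_t len);
typedef int    (*decompress_fn_t)(const void *src, void *dst,
                                  size_t slen, size_t dlen);

typedef struct compress_info {
    compress_fn_t   ci_compress;
    decompress_fn_t ci_decompress;
    int             ci_level;   /* e.g. gzip 1-9, or an LZ4HC level */
    const char     *ci_name;    /* property value: "lz4", "lz4hc", ... */
} compress_info_t;

enum { COMPRESS_LZJB, COMPRESS_LZ4, COMPRESS_LZ4HC, COMPRESS_FUNCS };

extern size_t lzjb_compress(const void *, void *, size_t);   /* assumed */
extern int    lzjb_decompress(const void *, void *, size_t, size_t);
extern size_t lz4_compress(const void *, void *, size_t);
extern int    lz4_decompress(const void *, void *, size_t, size_t);
extern size_t lz4hc_compress(const void *, void *, size_t);

/* Adding an algorithm is one new row; callers index by the enum. */
static const compress_info_t compress_table[COMPRESS_FUNCS] = {
    [COMPRESS_LZJB]  = { lzjb_compress,  lzjb_decompress, 0, "lzjb"  },
    [COMPRESS_LZ4]   = { lz4_compress,   lz4_decompress,  0, "lz4"   },
    /* LZ4HC decompresses with the plain LZ4 decoder. */
    [COMPRESS_LZ4HC] = { lz4hc_compress, lz4_decompress,  9, "lz4hc" },
};
```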
A
There's a follow-up question about using LZ4HC for L2ARC compression. He notes that the L2ARC is usually filled relatively slowly — we have controls on how quickly it can get filled — so if the decompression is just as fast, would it be worth exploring LZ4HC for the L2ARC?
B
Again, I think it's probably worth taking a look to see if we can get more benefit out of LZ4HC. One of the reasons that we've gone with LZ4 is because it is significantly better than what we've seen with LZJB.
B
So as new things come along, there's no reason not to just take them on. I know that to date we haven't actually changed the default compression for everything to be LZ4, but there's no reason why we couldn't do that, and if LZ4HC is that much better, then we can also consider it being the default compression whenever you simply specify compression=on. The biggest thing would be handling pools that are older and are upgrading to new bits, but yeah, I think that makes total sense.
A
I think at this point LZ4 has been in there for a while and it performs very well. I think it would be a good time to go ahead and change compression=on to mean "use LZ4". We've kind of threatened this for a long time, and I think it would be good to finally make good on that. So if somebody is interested in implementing that, I think it would be a great, relatively small intro project to ZFS development.
B
Most of the data that I've looked at has been internal data, so right now it's all fabricated. So I'm looking for people that are actually using this out in the community: feel free to send me the histogram output, or post it on the mailing list or whatever. But I will say that the pieces I've been looking at specifically have been to try to figure out what the distribution of space really is.
B
We have a couple of servers inside Delphix that we use very heavily for development, and so I've been doing some analysis on those most recently. I'm also looking at, like I said, a kind of fabricated benchmark that we use to see how we can make adjustments, and so I actually have some code that we've been testing.
B
Matt's looked at it. It takes advantage of the histogram information and makes newer decisions based on it, and I was just running some benchmarks the last few days: for pools that are less than 80% full, we're seeing a significant performance improvement. The 80% mark seems to be that special place where things really get rough, especially for us, where we're testing primarily with 8K blocks.
B
Once you get to that 80% mark, everything that's out there in the pool happens to be 8K fragments scattered all over the place, so you pretty much run out of contiguous space. At that point in time we'll probably have to spend some more analysis figuring out: okay, once you get to that point, what really should we do?
B
Maybe, instead of using a very large extent of contiguous blocks, go fill the little holes and fragments that are out there, to try to leverage that space that's already fragmented. So I think there's a lot of room for improvement, and I welcome people to send in some histogram data. If you don't know how to capture the histogram data, I'm more than happy to share that.
B
Cool. And I think there is value in that: just from some testing that I've been doing, I'm seeing that there's a benefit in not trying as hard. Since we have some feedback available to us as writes are coming in, long before we actually go to sync them out, we can make some decisions ahead of time: okay, we have a low trickle of writes coming in, so don't worry about...
...lining all these things up contiguously; go ahead and do some random writes in that instance, to fill those blocks, and leave the large pieces for when we actually need them. But it is very interesting to see that, at least, I'm getting some significant improvements — on the order of 50 to 60 percent — in cases where the pool is actually pretty empty.
B
So 80% seems to be the cliff point that I've been seeing. It's still early in our analysis, but there's a lot more work that we want to do in this area.
B
Yeah, like I said, this data is preliminary, but it's kind of remarkable. I attribute the big difference here to the fact that we're able to actually select better metaslabs; the current logic within ZFS is to really try to fill the entire metaslab all the way.
A
Yeah,
did
you
see
andre's
question
about
the
metaslab
weight
factor
enable
on
the.
B
A
So,
do
you
just
want
to
clarify
the
difference
between
that
tunable
and
the
new
stuff
that
you're
working
on
yeah.
B
So
metaslab
weight
factor
enable
was
kind
of
an
initial
stab
at
trying
to
figure
out
weightings.
It
was
very
ad
hoc,
that's
gone
away,
so
the
changes
that
I've
been
working
on
have
now
are
introducing
a
new
fragmentation
metric
so
that
we're
exposing
fragmentation
for
the
first
time
out
to
the
user.
So
you're
going
to
have
the
ability
to
see
fragmentation
at
the
pool
level.
Each
device
will
report
its
fragmentation
and
then
each
meta
slab
actually
has
an
internal
fragmentation
metric.
B
So
we're
using
that
now
as
the
ability
to
go
and
get
some
of
this
enhancements
on
when
when
we
want
to
select
the
right
meta
slab,
we
can
base
that
on
how
fragmented
that
metaslab
is
so.
The
the
nice
thing
is
that
the
metaslab
or
the
fragmentation
metric
is
a
totally
in
core
ask
you
know
component.
So
as
we
get
more
data-
and
we
want
to
tweak
this
thing-
it's
easy
to
go
in
tweak
it.
And
then,
when
your
pool,
you
know
when
you
reboot
your
pool
comes
back
up
it
simply
recalculates
fragmentation
metric.
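To make the shape of such a metric concrete, here is a minimal sketch: a 0-100 score recomputed from an in-core histogram of free-segment sizes, where space sitting in small segments counts as highly fragmented. The bucket layout and weights are invented for the example; the actual metric in this work may well differ:

```c
#include <stdint.h>

#define HIST_BUCKETS 17   /* bucket i: free segments of ~2^(i+9) bytes */

/* Made-up weights: small free segments score as highly fragmented. */
static const int frag_pct[HIST_BUCKETS] = {
    100, 100, 98, 95, 90, 80, 70, 60, 50, 40, 30, 20, 15, 10, 5, 0, 0
};

static int
fragmentation(const uint64_t hist[HIST_BUCKETS])
{
    uint64_t space = 0, weighted = 0;

    for (int i = 0; i < HIST_BUCKETS; i++) {
        /* total free space contributed by this bucket */
        uint64_t s = hist[i] << (i + 9);
        space += s;
        weighted += s * frag_pct[i];
    }
    /* space-weighted average: an empty metaslab reports 0 */
    return (space == 0 ? 0 : (int)(weighted / space));
}
```

Because the score is derived entirely from in-core state, changing the weights needs no on-disk format change, which matches the "easy to tweak, recalculated at import" property described above.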
A
Cool,
hey
luke,
for
those
of
you
who
don't
know
luke
marston
of
hybrid
cluster
is
responsible
for
the
hosting
the
opengfs
web
website.
So
we
have
him
to
thank
for
the
opengfs
website
hosting,
which
is
donated
and
also
funding,
to
create
the
awesome
open,
zfs
logo,
which
is
on
all
on
our
t-shirts.
D
Yeah, that's right: nice new t-shirts. We actually had Andre at our developer conference last week do a talk on OpenZFS, which I might be able to publish in case anyone's interested in it. Oh...
D
He was wearing his OpenZFS t-shirt, but he forgot to take off his jumper, so the video doesn't show it. Anyway, that's really exciting stuff that I just heard the tail end of, in terms of performance improvements on fragmented pools. So good work, good work, George.
A
Yeah, thanks. It sounds like not just on what we traditionally consider fragmented pools, but even on kind of moderately full and fragmented pools, you've seen a lot of performance improvement. Is that right?
B
Yeah, that's correct. We have kind of this artificial benchmark we use internally — I think some people may have even seen blog posts from Uday where he describes a little bit about this tool — but the intention behind it is that you specify how fragmented you want the pool to be by creating some series of files that are large enough...
B
So
you
can
you
consume
that
much
space,
and
then
you
fragment
the
crap
out
of
those
files,
and
you
then
measure
based
on
that
like
how
fast
are
you
actually
able
to
do
writes
for
those
files
that
you
that
are
now
very
fragmented?
So
you
kind
of
let
this
thing
run
for
like
half
an
hour
kind
of
gets
to
like
a
stable
state,
and
you
can
take
some
measurements
off
of
that.
So
it's
it's
kind
of
an
interesting
benchmark.
B
It's probably a worst-case benchmark. When you really look at a pool that you fill up to 90% and let this thing run, you look at the fragments that are left, and it really is like your entire pool is nothing but — if you're using 8K blocks — 8K fragments or smaller, with very little contiguous space anywhere on the pool.
D
That's really interesting, and I know that in our experience we've had some really painful performance around even as low as 65-70% fullness of a pool, so I guess it sounds like this work should really help us there. I have a question around that, which is: to what extent is this just delaying the inevitable performance pain until you get to a certain level of fragmentation that you literally — laws of physics — can't do anything about, right?
B
It kind of is. I mean, it's making things faster when there still are larger segments to be had, and one of the things that we've been looking at is trying to figure out how we make better choices about dealing with the small fragments that exist. I don't know if you heard earlier, but we described a way of taking some feedback from the upper layers, like the write throttle code.
B
That code knows how much is being ingested into the system, and based on that ingest rate we can make a decision at the allocator: if the rate is low, then don't try very hard — go use a bunch of the fragments and fill those up first; if the rate is high, then try to keep up with the system and use the very free space.
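A minimal sketch of that feedback loop, with an invented threshold and policy names (nothing here is the actual allocator code):

```c
#include <stdint.h>

/* Hypothetical allocation strategies for the two regimes. */
typedef enum {
    ALLOC_FILL_FRAGMENTS,   /* light load: soak up small free segments */
    ALLOC_LARGE_EXTENTS     /* heavy load: grab big contiguous regions */
} alloc_policy_t;

static alloc_policy_t
choose_policy(uint64_t ingest_bytes_per_sec)
{
    const uint64_t busy = 100ULL << 20;  /* 100 MB/s: made-up cutoff */

    if (ingest_bytes_per_sec < busy) {
        /* A low trickle of writes: don't try hard, fill holes first. */
        return (ALLOC_FILL_FRAGMENTS);
    }
    /* Keeping up matters more: preserve throughput, spend free space. */
    return (ALLOC_LARGE_EXTENTS);
}
```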
B
Now, once we get that code in place and working, it's probably going to benefit people that have a lot of free space to begin with. But if you're already in a bad spot, it's tough to get out of it without freeing up some space and kind of re-laying-out your blocks at that point in time.
D
That makes sense. Awesome.
A
Any other questions for George on IRC, or...?
C
I have a question. I'm not really sure where I'm going with it, but in general I'm drawing on my knowledge from log-structured file systems, where a lot of the time you want to separate the hot and the cold data: the hot data will get freed, and then you end up with all sorts of fragmentation, while the cold data sticks around forever.
B
I would say probably the closest thing to something like that is some work that's being done at Nexenta, where they're looking at having what is almost, in a way, a tiered storage policy: you want data that is very hot to potentially go to faster devices or something like that, and data that has kind of cooled off to migrate to longer-term, kind of archive storage.
B
So
so
there's
been
some
work
that
has
been
done
there
and
I
know
that
they're
they're
still
kind
of
finalizing
that,
but
the
kind
of
like
the
compression
there's
a
very
compression
within
zfs,
there's
a
very
nice
plugable
way
to
create
these
classes
of
storage
that
allow
you
to
do
things
kind
of
like
what
you're
mentioning,
for
example,
if
you
wanted
to
separate
meta
data
and
data,
and
you
wanted
to
have
like
flash
storage
for
all
your
metadata,
because
you
knew
that
that
was
always
going
to
be
like
you
know,
it's
not
all
that
big,
and
it's
always
you
know
you
always
want
fast
access
to
it.
B
You
could
actually
create
within
cfs
a
new
meta-slab
class.
That
is
specifically
only
for
metadata
and
then
when
the
rights
go
through,
part
of
the
allocation
is
the
first
thing
that
you
do.
Is
you
define
what
class
you're
writing
to?
That
class
has
is
going
to
have
a
series
of
devices
associated
with
it
and
we
refer
to
those
as
metaslab
groups
and
so
once
you've
defined
the
class
and
so
like
in
this
case
it's
metadata.
You
could
then
say:
oh,
I
have
four
devices
off
of
that.
B
You could define your own block allocator for a specific metaslab class — for metadata, say — which doesn't necessarily have to be the dynamic-fit one; it could be something you write, or one of the other pluggable allocators that actually exist in the code today that you could use, and potentially get better performance. So there is kind of a similar concept to what you're referring to. It's not necessarily hot/cold...
B
In
that
sense,
in
in
the
sense
of
like
the
data
will
eventually,
you
know,
be
trans
transient
will
go
away
in
this
case,
it's
more
of
like
you
can
define
the
type
of
storage
based
on
the
access
patterns
that
you
would
need,
typically
need,
and
so
having
that
facility
within
zfs
is
again
kind
of
a
simple
thing
to
to
do.
I
a
classic
example
that
I
give
is,
if
somebody
really
like
dedupe,
has
always
been
kind
of
a
problem
child
because
you
always
run
out
of
space
in
the
arc.
So if you had, you know, a device that you always write all your dedup table to — it's easy to plug those kinds of things in. It does take a little bit of knowledge to figure out the underlying structure, but it's nice that there is that pluggable form.
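A sketch of that class-based routing. The structures and the fallback-to-normal behavior below are simplified stand-ins for metaslab classes and groups, with hypothetical helper functions:

```c
#include <stddef.h>
#include <stdint.h>

typedef struct metaslab_group metaslab_group_t;

typedef struct metaslab_class {
    const char        *mc_name;     /* "normal", "metadata", "ddt", ... */
    metaslab_group_t **mc_groups;   /* one group per member vdev */
    int                mc_ngroups;
} metaslab_class_t;

extern metaslab_class_t normal_class, metadata_class;   /* assumed */
extern int metaslab_group_alloc(metaslab_group_t *, size_t, uint64_t *);

/* First allocation decision: which class is this write destined for? */
static metaslab_class_t *
pick_class(int is_metadata)
{
    return (is_metadata ? &metadata_class : &normal_class);
}

static int
alloc_block(int is_metadata, size_t size, uint64_t *offset)
{
    metaslab_class_t *mc = pick_class(is_metadata);

    /* Rotate over the class's vdev groups. */
    for (int i = 0; i < mc->mc_ngroups; i++) {
        if (metaslab_group_alloc(mc->mc_groups[i], size, offset) == 0)
            return (0);
    }
    /* Special class exhausted: fall back, like the slog does. */
    if (mc != &normal_class)
        return (alloc_block(0, size, offset));
    return (-1);   /* ENOSPC */
}
```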
A
It
sounds
like
that.
That's
very
similar
to
the
stuff
that
niccenta
presented
at
the
open,
zfs
developer
conference,
exactly
we're
creating
this
additional
sub
class,
so
it
sounds
like
they
they've
implemented
a
lot
of
what
you
just
described.
If
anyone
from
nicenta
has
more
info
on
when
that
might
be
available
for
other
people
to
take
a
look
at,
let
us
know
there
were
a
couple
other
questions
on
irc,
so
sasha
was
asking
about
encryption.
This
is
something
that
we
were
actually
having
some
discussions
around
the
office.
A
Yesterday,
george,
you
weren't
there,
but
let
me
maybe
put
the
question
to
you
this
way
since
I
know
like
we
are
not
really
tasked
with
with
doing
encryption
at
our
full-time
jobs.
If
somebody
wanted
to
implement
zfs
encryption,
how
would
you
recommend
that
they
go
about
it.
B
First
of
all,
I
assume
that
somebody
that
wanted
to
do
this
is
pretty
knowledgeable
in
the
different
types
of
encryption,
algorithms
that
are
out
there
and
the
thing
that
I've
noticed
is
that
a
lot
of
them
tend
to
be
tied
to,
like
you
know,
they're
kind
of
like
platform
specific.
So
the
one
thing
that
I
would
like
to
see
is
kind
of
a
generic
encryption
component.
I
know
that
when
this
was
done
at
oracle,
there
was
a
lot
of
ties
to
like
the
spark
architecture
and
stuff
like
that.
B
But
what
I
would
recommend
is
to
actually
get
involved
with
the
opencf
open,
cfs
development
community,
I'm
more
than
happy
to
kind
of
help
like
kind
of
guide.
Somebody
where
you
would
plug
this
into
the
pipeline
and
there's
also
other
ramifications
of
like
the
key
management
that
you
would
need
for
encryption,
which
have
a
lot
of
close
ties
into
say,
like
the
dmu,
and
even
you
know.
How
do
you
want
this
to
be
represented
in
the
arc?
B
If
you
don't
have
a
key
and
so
there's
a
lot
of
pieces
there,
and
I
think
really
the
open
cfs
community
is
where
that
comes
in
very
you
know,
it
becomes
a
very
useful
asset
because
you
know,
if
there's
questions
about
how
do
you
plug
into
the
pipeline,
I'm
happy
to
help
get
somebody
kind
of
started
there.
Questions
about
how
you
plug
into
like
the
dmu.
I'm
sure
that
you
would
probably
you
know
chime
in
so
to
me.
That's
that's.
B
The
way
somebody
would
get
started
is
become
a
you
know,
an
integral
part
of
the
open,
cfs
community
and
somebody
that's
really
passionate
about
trying
to
go
off
and
and
do
this
I
think,
there's
there's
a
lot
of
people
that
are
passionate
about
trying
to
to
do
this
work,
but
don't
necessarily
have
the
time.
So
I
think
I'm
more
than
happy
to
help
people
get
get
a
start
on
it,
and
I
know
at
one
point
in
time.
A
I
don't
know
either,
I
think,
as
a
couple
people
said
on
the
irc.
The
first
task
for
that
for
any
kind
of
encryption
work
would
be
to
define
what
the
use
cases
are.
So
you
know
the
use
case
that
was
defined
for
the
encryption
project
at
sun.
Slash
oracle
was
extremely
broad,
so
that
implementation
is
relatively
complex.
B
Yeah — and I know that other platforms have implemented encryption, not necessarily within ZFS but kind of around ZFS: some of them have it at lower layers, some of them at upper layers. But I think it's really going to take somebody who has the need and desire for encryption to interface with the OpenZFS community, and I think it's just a matter of time before it gets in.
A
Yeah
I
mean,
I
think
one
one
of
the
really
compelling
use
cases
for
encryption
is
on
your
personal
devices,
like
you
know,
laptops
and
cell
phones,
and
things
like
that,
where
you
want
to
be
able
to
transport
it,
or
you
know
that
if
somebody
else
gets
their
hands
on
it,
then
they
won't
be
able
to
decrypt
your
data
right,
but
you
know
that
isn't
that
isn't
really
zfs's
strongest
use
case
right
now
on
laptops,
I
think
that
we're
going
to
see,
but
I
think
we're
going
to
see
more
and
more
inroads
into
that
as
the
linux
adoption
picks
up.
B
Yeah-
and
I
think
that's
what's
great
about
the
community-
is
that
it?
You
know
something
like
this
could
totally
come
out
of
the
linux
community
and
and
get
pushed
into.
You
know
other
platforms
just
because,
like
you
said
they
have
a
very
compelling
use
case.
A
Cool — looks like Luke just posted a question.
B
They're kind of defining that as, I think, a tunable or a property of some sort, but yeah, you could definitely do that. The advantage, again, of having the metaslab class is that when you go to do the allocation, the first decision you make is which class you're writing to. So as long as you have enough data in hand when you're in the pipeline — we know which object sets, and we can pass down properties and things like that —
B
Then
you
simply
say:
okay,
this
is
going
to
go
off
into
you,
know
this
file
system
special
and
I
think
even
the
next
hunter
guys
right
now
have
come
up
with
kind
of
a
special
as
their
new
meta
slab
class
they're,
trying
to
figure
out
exactly
how
to
how
to
term
it.
B
But
if
you,
if
you
haven't
seen
the
presentation,
there's
actually
a
really
good
presentation
posted
on
the
open,
cfs
website
from
the
developer
conference
that
was
done
by
boris,
so
it's
worth
taking
a
look
at
because
it
does
kind
of
describe
how
you
you
know
what
they've
done,
what
state
you
know
they're
in
and
if
you
have,
if,
if
anybody
has
interest
in
this,
you
know
definitely
contact
me,
I'm
happy
to
kind
of
guide
you
in
in
the
direction
like
I
said:
it's
not
that
difficult
to
kind
of
create
your
own
type
of
of
metaslab
class
and
and
then
figure
out
how
to
really
leverage
it.
B
Maybe
you
should
just
go
and
watch
the
presentation.
Well,
it's
kind
of
a
decision
that
you
know
you
would
make.
If
so,
let's
say
you
were
actually
doing
this
and
you
were
writing
your
own
meta
sub
class.
You
may
decide
that
if
that
ever
happens,
that's
not
a
situation
you
want
to
be
in,
but
so
you
you
may
actually
want
to
stop
all
activity
just
for
whatever
reason,
but
most
cases
it
turns
that
it
simply
is
just
a
fallback
to
the
normal
data
pool.
B
So
if
you
can't
allocate
from
your
high
speed
device
or
whatever
special
device,
you
have
and
just
simply
rely
on
the
data
pool
being
the
backing
store
for
everything
else,
it's
very
similar
to
the
way
that
we
do
log
blocks
today
so
for
separate
intent
logs.
It
does
the
same
thing
if
you
can't
actually
allocate
from
the
separate
intent
log,
it
just
falls
back
to
the
pool
and
does
the
intent
log
right
there
very
cool.
A
So
I
have
a
question
for
you,
george,
this
is
kind
of
you
might
have
to
think
about
it
for
a
minute,
but
yeah
the
open,
zfs
developer
summit.
We
we
heard
about
a
lot
of
different
ideas.
You
know
everything
from
like
new
tools
for
examining
the
ondisk
format
to
like
this
special
device,
storage,
tiering,
stuff,
test
coverage.
You
know
performance
stuff.
What
what
did
you
see
that
you
are
most
looking
forward
to
seeing
implemented
in
the
future.
B
Wow
yeah,
so
there's.
A
I
think
that
I
can
give
you
a
few
more
reminders
if
you'd
like
of
some
of
the
hackathon
projects
and
other
things
so
like
yeah.
No,
I
you
know
channel
programs,
one
meg
block
size,
the
compressed
arc
stuff
that
you
that
you
worked
on
the
the
the
dmu
changes
that
justin
gibbs
and
will
anders
worked
on
at
spectrologic
for
improving
performance
of
sub
sub
block
rights,
lots
of
different
cool
ideas.
B
Yeah
there's
a
lot
of
stuff
out
there.
I
I
would
say
that
the
the
the
channel
program
thing
is
a
very
kind
of
interesting
concept
in
many
ways,
because
it
simplifies
a
lot
of
our
code,
which
I
think
one
of
the
things
that
we've
done
well
is
periodically.
We
kind
of
go
back
and
after
we've
added
a
bunch
of
features,
we
kind
of
go
back
and
and
revisit
things
and
kind
of
clean
things
up
a
little
bit.
B
It
makes
it
easier
for
the
next
person
to
come
along
and
kind
of,
enhance
it
and
make
future.
You
know,
add,
add
features
and
so
forth
going
forward.
But
so
I
think
that
the
channel
programs
is
one
of
the
things
that
I'm
very
interested
in
seeing
going
in.
It's
also
probably
one
of
the
more
controversial
things
going
in
or
potentially
going
in.
B
Well,
I
I
should
the
ability
to
actually
pass
in
kind
of
a
scripted
program
down
into
zfs
that
would
run
in
an
atomic
and
consistent
fashion,
so
part
of
the
problem
just
to
kind
of
explain
the
problem
behind
it
is
like
you,
take
something
and
I
think
snapshots
recursive
snapshots
tends
to
be
one
of
the
ones.
That's
probably
a
good
one
to
like
look
at,
but
a
lot
of
the
work.
B
So,
by
the
time
you
get
into
the
kernel,
you
don't
know
if
the
view
that
you
had
you
know
when
you
first
looked
at
this
is,
is
the
same
one
that
you're
going
to
get
at
the
very
end,
and
it
makes
it
very
difficult
to
do
things
like
error
handling
and
pass
up
the
right
errors
and
figure
out.
You
know
make
sure
that
everything
gets
done
correctly.
B
So
there's,
I
think,
that's
the
big
problem
behind
it,
the
and
so
channel
programs.
The
idea
is
that
if
you
could
kind
of
define
the
script
that
gives
you
the
the
functionality
that
you
need
like
you
want
a
script
that
says:
okay,
traverse
all
data
sets
and
snapshot
them.
You
know,
and
all
that
can
be
done
in
syncing
context.
B
So
that's
kind
of
the
scripting
portion
of
it
and
the
controversial
part
of
it
is
that
the
way
that
it's
currently
being
pursued
is
by
taking
lua
and
adding
this
interpretive
language
to
the
kernel.
B
I guess it's not an unheard-of thing to do, but I think that's probably why some people were like, "well, I'm not so sure about having this." There's also the issue that, if you were to open up these little Lua scripts that you would pass down to the kernel...
B
Then
it
has
the
the
impact
that
you
could
actually
do
things
like
create
an
infinite
loop,
and
if
you
pass
that
down
to
the
kernel,
then
now
some
safety,
you
know
some
barriers
and
guards
have
to
be
put
in
place.
So
there's
still
some
investigation
work,
but
the
reason
that
it's
actually
very
cool
is
that
now
it
would
allow
potentially
allow
administrators
to
kind
of
come
up
with
things
that
they
all
you
know,
want
done
and
administer
zfs
in
a
way
that
you
kind
of
define
how
you
want
this
to
behave.
B
You
want
to
take
a
snapshot
and
you
know
destroy
a
file
system
all
in
one.
You
can
now
do
this
as
one
transaction
create
a
little
script
pass
this
down,
and
it
all
gets
done
for
you.
So
it's
kind
of
cool
and
and
gives
you
a
lot
of
flexibility,
but
it
also,
you
know,
has
to
have
the
appropriate
guards
around
it.
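For a sense of what that could look like from userland — purely hypothetical here, since the interface was still being designed at the time; the lzc_channel_program() entry point and the zfs.sync.* script functions are invented for this sketch:

```c
#include <stdio.h>

/* Hypothetical libzfs_core-style entry point. */
extern int lzc_channel_program(const char *pool, const char *script);

int
main(void)
{
    /* Snapshot one filesystem and destroy another, as one transaction. */
    const char *script =
        "zfs.sync.snapshot('tank/home@backup')\n"
        "zfs.sync.destroy('tank/scratch')\n";

    if (lzc_channel_program("tank", script) != 0) {
        fprintf(stderr, "channel program failed\n");
        return (1);
    }
    return (0);
}
```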
A
Yeah — and there are some questions on IRC about creating recursive snapshots and how that works today. I think creating recursive snapshots actually works very well today, because we handle all these various weird error cases — we've designed the whole system to know about, like, okay:
this is how it works, and if anything goes wrong, then — oh, if we try to take a snapshot but the file system doesn't exist, that's fine, just ignore it and drive on; but if it fails for some other reason, then we stop the whole thing. So I think one of the benefits of the channel program is that it would allow the user program to define these semantics, by defining the error handling: create a snapshot; if it fails for this reason, then do this; if it fails...
A
Fails
for
this.
Other
reason,
then
do
that,
in
addition
to
like
you
said,
you
know,
eliminating
a
lot
of
those
error
cases,
because
it's
all
happening
in
the
same
transaction
group
really
allows
the
code
to
be
simplified.
A
lot
both
conceptually
and
in
terms
of
the
implementation,
so
I
think,
like
most
of
the
stuff
that's
implemented
now
like
recursive
snapshots,
it
works.
A
It's
you
know,
it's
fine
and
channel
programs
would
allow
it
to
be
implemented
in
a
little
bit
simpler
way,
but,
like
you,
said
the
potential
for
creating
new
kinds
of
operations
compound
operations
that
can
happen
atomically
in
the
kernel.
I
think
it
is
really
is
really
exciting
and
it
simplifies
a
lot.
It
simplifies
the
code
every
time
we
add
a
new
type
of
thing,
like
you
know
recently,
I
think
actually
just
like
yesterday,
it
was
integrated
into
lumos,
the
zfs
bookmarks.
A
Every
time
we
do
a
new
thing
like
that,
like
if
you
wanna
you
know
you,
we
have
this
primitive,
like
create
a
bookmark,
but
then
you
want
to
be
able
to
do
all
these
compound
operations
like
create
a
bunch
of
bookmarks
at
once
and
then
destroy
a
bunch
of
bookmarks
at
once
at
once,
or
you
know,
get
me
the
list
of
all
the
bookmarks
at
once,
there's
so
much
boilerplate
code
involved
with
just
like
the
iterating
over
the
list
of
things
that
the
kernel
sent
down
and
figuring
out
like
what
should
the
arrow
semantics
be?
A
If
you
know
they
asked
me
to
create
a
bunch
of
bookmarks
and
one
of
them
couldn't
be
created
and
you
know
doing,
making
changes
in
the
kernel
is
obviously
a
lot
more
complicated
than
making
changes
in
user
land
so
being
able
to
like
have
simple
primitives
that
the
kernel
implements
and
have
new
zealand
define.
A
You
know
the
specific
error
handling
semantics,
that
it
wants
and
compound
operations
and
how
to
do
that,
I
think,
is
going
to
create
a
framework
that
will
allow
us
to
continue
like
extending
zfs
with
all
these
new
kinds
of
operations.
You
know
much
more
easily
than
than
we
have
been
able
to
in
the
past.
B
Yeah
and
matt-
I
don't
I
don't
know
about
you,
but
I
can
I'll
speak
for
myself
is
like
whenever
I
have
to
whenever
I'm
I've
made
a
change
that
has
a
user
land
component
to
it.
That's
the
last
thing
I
want
to
do
yeah
it's
like
like
I'll.
I
will,
you
know,
spend
a
bunch
of
time
like
getting
like
the
kernel
pieces
together,
but
then
having
to
deal
with,
like
the
all,
like
you
were
saying,
all
the
error
handling
and
all
the
cases
in
the
user
land.
A
Yeah,
I
think
if
you
look
at
the
diff
for
the
zfs
bookmark
stuff
like
creating
and
destroying
and
listing
bookmarks
like
each
of
those
is
probably
like
less
than
100
lines
of
code
actually
down
in
the
kernel,
but
getting
there
involves.
You
know
like
five
additional
layers
above
it
and
each
of
those
layers
is,
you
know
not
a
ton
of
code.
A
You
know
each
of
those
layers
is
maybe
20
or
30
lines
of
code,
but
it's
just
a
lot
of
places
to
keep
track
of,
and
I
think
it's
for
people
who
don't
know
all
the
layering
of
zfs
as
intimately
as
we
do.
I
think
it's
probably
a
little
bit
daunting
to
be
like
okay,
so
I
created
the
sync
task,
and
now
I
can
just
like
run
that
and
then
I'll
have
my
feature
right,
and
they
don't
realize
that.
A
Oh
now
I
have
to
like
create
a
new
iactyl,
and
then
I
have
to
create
a
new
like
live
zfs
quarter
routine.
Then
I
have
to
create
a
new
libsifest
routine.
Then
I
have
to
add
a
new
command
line
option.
Then
I
have
to
like
parse
the
options
from
the
command
line.
Then
I
have
to
handle
all
the
errors
from
you
know
like
this
is
just
this
long
list
of
other
things
that
are
kind
of
ancillary
to
the
core
of
what
you're
doing
so.
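As a skeleton of that layering — every new operation needs a stub at each of these levels; the function names below are illustrative, chosen to echo the bookmark example rather than copied from the tree:

```c
/* 1. kernel: the ~100 lines that do the real work in syncing context */
int dsl_bookmark_create_sync(void *arg, void *tx);

/* 2. kernel: ioctl entry point that unpacks arguments and dispatches */
int zfs_ioc_bookmark(const char *pool, void *innvl, void *outnvl);

/* 3. libzfs_core: thin, stable wrapper that issues the ioctl */
int lzc_bookmark(void *bookmarks, void **errlist);

/* 4. libzfs: user-friendly layer with validation and error reporting */
int zfs_bookmark(void *hdl, const char *source, const char *bookmark);

/* 5. the zfs(1M) command: subcommand registration and option parsing */
int zfs_do_bookmark(int argc, char **argv);
```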
A
Reducing that, I think, is going to allow us to more easily implement all this stuff — like the ideas that we have for doing stuff in channel programs. We could do it all today; there's nothing that we can't do with the current infrastructure. It's just a big pain, and there are a lot of things that are single-purpose, so why should I go through the effort of doing that unless it's going to be a big benefit? Versus:
A
If
we
create
this
infrastructure,
then
it'll
be
lightweight
enough
to
create
these
new
compound
operations
that
we
can
do
it
for
things
that
have
you
know,
each
of
them
has
like
a
small
amount
of
benefit,
but
obviously
adding
it
all
up
together
will
make
cfs
more
powerful
and
more
flexible.
A
Okay, enough ranting about awesome future features. More questions for George? On the IRC there's been kind of an ongoing discussion about encryption.
B
You
know
that
would
almost
be,
and
maybe
we
should
take
one
of
our
office
hour
openings
and
invite
some
of
like
people
that
are
really
passionate
about
it
to
come.
Talk
talk
about
it
because
I
think
that
there
is
a
lot
of
you
know
interest
in
it.
It's
it.
It's
really
finding
the
people
that
are
really
excited
about
it
to
just
kind
of
lead
that
effort
yeah.
A
So
hope,
powell
and
sasho,
I
hope,
you're
listening,
and
I
hope
that
we
can
get
you
to
join
us
for
future
open
zfs
office
hours,
and
you
can
tell
everyone
how
they
should
implement
encryption.
A
Because
you
know,
I
mean
honestly
george
and
I
are
not
encryption
experts
by
any
means
we're
kind
of
like
yeah.
If
you
want
you
want
to
do
this,
okay,
we
can
tell
you
how
to
do
it,
and
that's
kind
of
you
know
that's.
That
was
our
role
in
the
the
encryption
that
was
implemented
at
sun
and
in
oracle.
Where
you
know
we
were
basically
like.
A
...the last office hours. But bookmarks, basically, is a mechanism that allows you to — well, maybe I'll back up and tell you the problem. If you're using zfs send and receive for remote replication, then you have this loop where you're basically saying: okay, create a new snapshot, then do an incremental send from the previous snapshot that I sent to this new one, and then, when the send is done and I've received it on the other side, delete the previous snapshot.
A
So
now
I'm
leaving
around
this
one
snapshot
for
next
time.
I
come
then
I'll
be
able
to
say
create
a
new
snapshot,
send
it
from
the
previous
snapshot.
So
we
kind
of
have
this
like
okay,
I
have
this
one
snapshot,
that's
there
and
then
I
create
a
new
one,
send
between
them
and
then
delete
the
old
ones.
Then
I
saw
this
one
next
time
I
make
around,
create
a
new
one,
send
between
them
and
then
delete
the
old
one.
A
The
issue
is
that
you
always
have
to
have
this
one
snapshot
on
your
system.
So
if
you
have
no
other
need
for
it,
then
it's
basically.
You
know
locking
down
some
data
unnecessarily
so
with
zfs
bookmarks
you're
able
to
do
zfs
send
from
kind
of
the
point
in
time
that
that
old
snapshot
represents
without
having
to
actually
have
the
old
snapshot
lying
around.
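The loop Matt describes, sketched in C with hypothetical helpers — snapshot_create/send_incremental/snapshot_destroy stand in for zfs snapshot, zfs send -i, and zfs destroy; this is illustration, not a real libzfs API:

```c
extern int snapshot_create(const char *snap);
extern int send_incremental(const char *from, const char *to, int fd);
extern int snapshot_destroy(const char *snap);

/*
 * One pass of the replication loop: take a new snapshot, send the delta
 * since the previous one, and only then drop the old anchor.
 */
static int
replicate_once(const char *prev, const char *next, int fd)
{
    if (snapshot_create(next) != 0)
        return (-1);
    if (send_incremental(prev, next, fd) != 0)
        return (-1);
    /*
     * 'next' must now stick around until the following pass — that is
     * exactly the cost a bookmark removes.
     */
    return (snapshot_destroy(prev));
}
```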
A
So you can create a bookmark that represents the point in time that the snapshot was created, then you can delete the snapshot, and then you can do a zfs send from the bookmark rather than from the snapshot.
C
So
this
this
bookmark
is
basically
just
the
transaction
group
number
yeah.
A
Right:
yeah,
that's
that's
fundamentally
what
it
is,
there's
a
little
bit
more
info,
so
there's
a
transaction
group
number
and
then
there's
also
a
good
which
is
just
like
a
random
number
that
identifies
random.
That
uniquely
identifies
this
snapshot
and
that's
used
to
verify
that
you're
doing
this
and
from
the
correct
point
in
time.
So
we
don't
want
to.
Let
you
like
shoot
yourself
in
the
in
the
foot
by
you,
know,
saying,
send
from
this
other
random
snapshot
and
then
receive
it.
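So a bookmark only needs to record that much. As a sketch — the field names are illustrative, not the on-disk layout — the txg anchors the incremental ("send every block born after this txg") and the guid lets the receiving side verify the starting point:

```c
#include <stdint.h>

/* Illustrative only: just enough state to anchor an incremental send. */
typedef struct bookmark_phys {
    uint64_t bm_guid;           /* guid of the snapshot it came from */
    uint64_t bm_creation_txg;   /* the point in time: a txg number */
    uint64_t bm_creation_time;  /* wall-clock time, for display */
} bookmark_phys_t;
```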
B
So, again, this is some work that one of our colleagues, Alex Reece, is going to start taking on, and the idea behind it is to leverage some work that Nexenta has done in defining vdev properties. They're using it for a totally different reason, but we learned at the hackathon that they were doing this, and we figured that, rather than implement everything new, let's go ahead and leverage it. The design is that you'll now be able to specify...
B
When
you
actually
create
your
you
know
your
your
pool,
you
can
override
the
default
action
of
discovering
the
a
shift
or
the
sector
size
by
specifying
as
a
property.
So
if
you
already
know
that
you
happen
to
have
a
4k
sector
device,
then
you
would
simply
say:
okay,
zip
will
create.
You
know
4k
sector,
comma
sector,
size,
equals
4k
pool
name,
and
it
would
force
it
to
take
that
property
on
there's
a
one
of
the
reasons
that
we
wanted
to
kind
of
make.
B
This
generic
across
all
platforms
is
that
there's
limitations
that
have
existed
in
every
implementation,
that's
out
there.
So,
like
you
know,
the
initial
one
was
like
having
oa
shift
that
worked
great
because
you
could
create
you,
know
a
pool
and,
like
everything
took
on
that
a
shift.
But
now,
if
you
wanted
to
do
things
like
add
devices,
I
don't
think
they
had
it
initially
for
ad.
B
Then
there
were
cases
of
like
you
want
to
be
able
to
attach,
and
it's
like,
okay,
how
do
I
specify
it
on
the
attach?
So
it's
like
we
wanted
it
kind
of
a
generic
form
that
follows
in
line
with,
like
everything
that
zfs
has
tried
to
do
with
the
administration
model,
of
keeping
it
consistent
and
simple.
So
that's
the
main
reason
that
we're
really
tackling
it.
B
I
think
the
problem
is
it's
still
kind
of
rampant
out
there,
but
it
has
improved
somewhat
with
newer
devices
that
are
now
at
least
telling
the
truth
more
or
less
so
you
know,
but
there
are,
there
still
are
issues
that
that
come
up,
so
it'll
be
good
to
actually
have
that
as
a
common
mechanism.
A
Cool
all
right,
one,
a
couple
more
questions,
we're
from
irc
that
we'll
take
and
think
about
any
last
questions
that
you
might
have
as
we
near
the
end
of
our
hour.
B
Yeah,
it's
it's
definitely
possible
to
have
kind
of
a
mechanism
where
you
could
say
I
write
to
this
preferred
side
because
it's
the
fastest
and
the
other
one.
I
issue
the
right,
but
maybe
I
don't
necessarily
you
know,
have
to
wait
for
it,
the
mirror
logic
and
have
the
ability
to
actually
have
these
these
dtls
dirty
time
logs.
So
you
could
actually
specify
like
let's
say
you
wrote
to
you
wrote
to
the
mirror
and
as
a
result,
you
issued
two
rights,
one
two,
the
preferred
side
and
one
to
the
non-preferred
side.
B
As
long
as
one
of
them
completes,
we
consider
the
right
to
be
good,
so
the
other
one
could
just
end
up
with
a
dtl
and
then
have
kind
of
a
background
component.
That's
periodically
checking
to
see.
Is
there
still
a
you
know,
a
dirty
time,
log
that
exists
for
this
device
and,
if
so,
almost
doing
like
a
perpetual
re-silver
in
in
a
way
of
like
catching
those
rights
up,
so
that.
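A sketch of that write path — simplified and synchronous where the real thing would be asynchronous, with hypothetical vdev_write()/dtl_add() helpers:

```c
#include <stddef.h>
#include <stdint.h>

typedef struct vdev vdev_t;

extern int  vdev_write(vdev_t *vd, uint64_t off, const void *buf,
    size_t len);
extern void dtl_add(vdev_t *vd, uint64_t off, size_t len);  /* mark dirty */

static int
mirror_write(vdev_t *preferred, vdev_t *remote,
    uint64_t off, const void *buf, size_t len)
{
    int ok0 = vdev_write(preferred, off, buf, len);
    int ok1 = vdev_write(remote, off, buf, len);

    if (ok0 != 0 && ok1 != 0)
        return (-1);        /* both sides failed: the write failed */

    /* One side made it; log the other as dirty for the resilver pass. */
    if (ok0 != 0)
        dtl_add(preferred, off, len);
    if (ok1 != 0)
        dtl_add(remote, off, len);
    return (0);
}
```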
A
Would
be
like
for
like
loosely
coupled
thing
where
you
don't
even
it's
just
like
it's
it.
That
sounds
more
like
a
replacement
for
like
cfs,
send
and
receive
where
you're
like
okay,
just
like
get
it
to
the
other
side,
eventually
yeah.
What
about
something
where
like
we
can?
We?
It
needs
to
get
written
to
both
sides
synchronously,
but
we
just
want
to
have
a
preferred
side
to
read
from.
B
Just
you
know
not
the
preferred
place
to
go
reads.
So
I
think
that
there's
there
there's
a
lot
of
extensions
again
from
the
vdf
property
that
you
could
do
that
or
we
could
do
it
intelligently.
Just
simply
by
looking
at
how
many
things
are
you
know
queued
up
and
and
how
much
activity
does
the
device
have
to
determine
what
the
preferred
read
side
would
be.
A
Yeah,
I
think
those
are
both
really
cool
ideas.
I
really
like
the
idea
of
doing
things
automatically
for
the
user
when
we
know
that
it
is,
you
know
strictly
better,
so
I'm
really
looking
forward
to
seeing
the
you
know
better
mirror,
read
selection
algorithm
that
I
think
sasha
is
working
on
pulling
over
from
freebsd
into
lumos
stephen
right.
A
Oh
stephen,
that's
right,
yeah,
sorry
steven's
working
on
pulling
over
from
from
fbst,
but
it
sounds
like
this
property
for
the
v
dev
property
infrastructure
should
make
it
really
easy
to
also
express
this
administratively
exactly.
B
I
think
that
you
know
once
that's
in
place.
It
kind
of
opens
up
finer,
I
guess
kind
of
an
advanced
level
of
control
for
for
administrators.
If
they
you
know,
if
they
don't
want
to,
let
the
automatic
logic
kick
in.
A
...is writing a new edition of his FreeBSD design and implementation book, and the new edition is going to have a chapter on ZFS. So I'm working with him to advise him on how ZFS works and what parts are relevant. I'm really looking forward to seeing that; I'm going to be reviewing the first draft next month.
A
So
I'll,
let
you
guys
know
how
it
looks,
but
I
have
really
high
hopes
for
that
and
I
think
that'll
be
a
really
good
introduction
for
people
that
are
very
new
to
cfs.
I
don't
think
that
there's
a
really
good
overview
of
like
here
are
all
the
different
components
and
how
they
all
fit
together
from
a
high
level
point
of
view.
A
So
I
think
that
this
hit
this
chapter
of
his
book
is
going
to
have
that,
and
also
some
details
of
like
the
on
disk
structure
and
what
the
different
pieces
are
on
disk.
George,
do
you
have
any
any
pointers
to
things
that
people
might
you
used
to
start
learning
about
zfs
today
that
are
not
vaporware.
B
Yeah,
I
think
you
know
if
somebody
is
interested
in
kind
of
getting
their
hands
on
it.
I
think
one
kind
of
interesting
learning
tool
is
just
you
leveraging
like
d
trace
and
watching
some
of
the
administration,
administrative
actions
that
happen
within
zfs.
B
It gives you a firsthand look into all the layers something goes through. Looking at a read, for example: you can follow a read system call down to the disk and get an idea of all the different pieces that come together. And I would advise anybody who starts doing things like this: definitely try to keep notes and put together an article to post on the OpenZFS developer resources.
B
We
have
some
things
up
there.
That's
also
another
good
place
to
kind
of
go,
read
and
and
see
what's
there,
but
those
that
are
kind
of
just
getting
into
it
anything.
You
learn
definitely
post
it
up
there
and,
if
you're
looking
for
people
to
kind
of
like
validate
what
you've
learned,
send
it
out
to
the
developer,
alias
and
say
hey,
I
was
looking
at
it.
This
is
just
the
way
kind
of
things
work.
A
Yeah,
I
think
that
the
the
resources
that
we
have
available
today
are
mainly
a
kind
of
fragmented,
detailed
information
like
on
the
website.
Like
you
mentioned,
george.
A
Most
of
that
is
not
like
overview
like
how
does
zfs
work
it's
more
like
how
do
snapshots
work,
or
how
does
you
know
the
I
o
pipeline
work
very
specific
things,
and
then
the
other
resource
that
we
have
is
just
the
people
that
are
participating,
so
you
know
being
able
to
post
on
the
mailing
list
and
get
answers
participate
in
events
like
this
hang
out,
like
the
open,
cfs
developer,
some
developer
summit,
which
we're
planning
to
have
another
one
and
we're
also
looking
at
planning
some
lower
key
like
local
meetups,
so
like
we'll,
probably
have
like
a
bay
area.
A
All
right,
so
we've
done
about
an
hour
any
last
pressing
questions
before
we
let
georgie
go
tony.
I
saw
you
just
joined
you.
You
want.
E
So we've been seeing — yeah, sorry, I joined... I had another meeting, so I just jumped on at the very last minute here. I'm looking at a case with unmap operations — SCSI UNMAP operations from Windows, for example, right?
E
We ended up spending a lot of time inside of dmu_free_range — dmu_free_long_range, right — and I see a lot of the time spent basically walking through the list of DMU buffers. And we see a similar problem when we are deleting a big file, too, so this bottleneck is also present in the plain file system scenario.
B
Yeah — because, in the case, Tony, that you're mentioning, when the unmap comes through, I'm assuming that within the iSCSI stack that all gets handled synchronously? Exactly.
E
So that's one thing that I was wondering about from COMSTAR's perspective. Looking at the specs last night — the T10 specs — it sounds like there might be a way for us to tell the initiator how many LBA blocks we support for each UNMAP command at a time, so that the initiator doesn't send us a big range and then we spend a lot of time handling it synchronously.
A
So I think what George is proposing was to do it asynchronously. I'm not sure that we would exactly be able to use the async destroy code path.
A
At least, I think the reason that the freeing of ranges is implemented synchronously today is because, for POSIX files, it also has the semantic that those blocks become zero-filled.
A
So
if
that
were
to
be
done
in
the
background,
then
there
would
need
to
be
some
kind
of
interlock
to
make
sure
that,
like
as
we
are
bringing
it
in
the
background,
then,
if
somebody
comes
in
with
the
right
or.
A
I mean, I think this is totally doable, though. If you look at the way that free range is implemented today: it creates a record that's processed in the next transaction group, but it still has to handle writes that come in in the meantime — then we have to take that range and break it up into two ranges, so that the part that we're writing is no longer marked as being freed.
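A sketch of that range bookkeeping — a simplified pending-free list and the split when a write punches a hole in it (allocation error handling omitted; this is illustrative, not the DMU's actual structures):

```c
#include <stdint.h>
#include <stdlib.h>

typedef struct free_range {
    uint64_t fr_start;
    uint64_t fr_end;             /* exclusive */
    struct free_range *fr_next;
} free_range_t;

/* A write at [ws, we) must not stay marked as freed: split the range. */
static free_range_t *
range_punch(free_range_t *fr, uint64_t ws, uint64_t we)
{
    if (we <= fr->fr_start || ws >= fr->fr_end)
        return (fr);             /* no overlap, nothing to do */

    if (ws > fr->fr_start && we < fr->fr_end) {
        /* Write in the middle: keep the left piece, add a right piece. */
        free_range_t *right = malloc(sizeof (*right));
        right->fr_start = we;
        right->fr_end = fr->fr_end;
        right->fr_next = fr->fr_next;
        fr->fr_end = ws;
        fr->fr_next = right;
    } else if (ws <= fr->fr_start) {
        fr->fr_start = we;       /* trim from the left */
    } else {
        fr->fr_end = ws;         /* trim from the right */
    }
    /* A fully-covered range ends up empty (fr_start >= fr_end). */
    return (fr);
}
```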
A
Cool. All right — thanks, everyone, for participating, and thanks, George, for hosting this. Yes, thank you guys. We'll be looking to line up another OpenZFS office hours, probably in late January, so stay tuned for who I will be able to rope into hosting it.