From YouTube: MattAhrens
We've got Matt here, the co-founder of OpenZFS. He's going to give a lovely presentation for us about OpenZFS and a little bit about illumos and what's going on in illumos, followed by Andre doing a talk about FreeBSD, as well as Luke Marsden from HybridCluster doing a little bit about Linux, and then we'll go for coffee. So, without further ado. I'm just making sure I'm well in view of the camera; it's a good thing I'm not seeing the livestream.
You can get it in this nice light blue color.
Cool. So, I'm Matt Ahrens. I helped to create the ZFS project back at Sun Microsystems in 2001, and also helped to create the OpenZFS project. More recently I work at Delphix; I'll be talking some more about what we do at Delphix later on today. This talk is just about OpenZFS. I think most of you probably know this, so I'll fast-forward a little bit through the recap of what ZFS is and why you should care. As you probably know, it's pooled storage, so you can create...
What else do I want to mention here? I think you're familiar with this. One thing that I like to say is that when we created ZFS, one of the main goals was to end the suffering of system administrators. We saw how hard it was for people to administer separate file systems, volume managers and storage products, so we wanted to create a unified interface and make that interface easy to use.
Another thing that I think really feeds into ease of administration is actually a lot of things that you don't see in the administrative model. By removing limitations that existed in previous storage products, like the number of files in a directory, the total amount of storage, the number of disks, things like that, we make it easier for the system administrator to design a system that works for them, rather than having to architect a system that works around the file system's limitations.
This slide shows how ZFS fits into the overall software stack. On the left we have an example of an old-school software stack with a separate file system and volume manager. A lot of information gets lost along the interface between the file system and the volume manager, because it's just a simple block interface. In ZFS, by contrast, we've connected the insides of the storage software stack and separated it out into three main layers.
The upper layer deals with the POSIX semantics for files, like file owners, permissions, things like that, and also with virtual volumes. The middle layer deals with just providing atomic transactions on objects. So, for example, in the POSIX layer, if we need to rename a file, we need to remove the entry from the old directory, add an entry to the new directory, and maybe change some information about the file itself. This layer only needs to worry about the things that it needs to know about.
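As a toy illustration of what that middle layer provides (this is not ZFS's actual DMU code, just a sketch of the idea), a rename can be expressed as one atomic group of updates:

```python
# Illustrative sketch: the transactional layer groups the directory-entry
# removal, the new entry, and the file-metadata update into one atomic unit,
# so no observer ever sees the file half-renamed.
class Txn:
    def __init__(self, state):
        self.state = state
        self.ops = []

    def update(self, key, value):
        self.ops.append((key, value))   # buffered, not yet visible

    def commit(self):
        # All buffered updates land together; before commit(), none apply.
        for key, value in self.ops:
            if value is None:
                self.state.pop(key, None)
            else:
                self.state[key] = value


fs = {"dir:/old/a.txt": "obj-7", "meta:obj-7": {"name": "a.txt"}}

tx = Txn(fs)
tx.update("dir:/old/a.txt", None)           # remove entry from old directory
tx.update("dir:/new/a.txt", "obj-7")        # add entry to new directory
tx.update("meta:obj-7", {"name": "a.txt"})  # maybe update the file's metadata
tx.commit()

assert "dir:/old/a.txt" not in fs
assert fs["dir:/new/a.txt"] == "obj-7"
```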
This is most analogous to the interface between a file system and a volume manager, but you can see that it's actually very different. In the traditional model the file system chooses where to write something; it needs to worry about allocating space, all that kind of stuff. And the volume manager worries about... well, it doesn't worry about as many things as it should, but it worries about writing it to two places if it's mirrored, for example. In ZFS, the DMU just knows:
"OK, I have this chunk of data that I need to get written to disk, and then I need to get it back at some point later on." The SPA worries about everything to do with how that data is stored. It takes the data, it might compress it, then it's going to actually find space on disk to allocate for it. It might need to allocate a little bit of extra space if you're using RAID-Z.
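A rough sketch of that division of labor, with made-up classes standing in for the DMU and SPA (zlib stands in here for whatever compression the SPA might apply; none of this is the real ZFS code path):

```python
# Toy model of the layering described above: the DMU asks "store this chunk,
# give it back later", while the SPA decides how it is stored (compression,
# placement on disk).
import zlib


class ToySpa:
    """Stands in for the Storage Pool Allocator: owns placement and compression."""

    def __init__(self):
        self.disk = {}       # offset -> stored (compressed) bytes
        self.next_free = 0   # trivially simple allocator

    def write(self, data: bytes) -> int:
        payload = zlib.compress(data)   # the SPA may compress...
        offset = self.next_free         # ...and chooses where the data lands
        self.next_free += len(payload)
        self.disk[offset] = payload
        return offset                   # a "block pointer" handed back up

    def read(self, offset: int) -> bytes:
        return zlib.decompress(self.disk[offset])


class ToyDmu:
    """Stands in for the Data Management Unit: objects, not disk layout."""

    def __init__(self, spa: ToySpa):
        self.spa = spa
        self.objects = {}    # object id -> block pointer

    def put(self, obj_id: int, data: bytes):
        self.objects[obj_id] = self.spa.write(data)

    def get(self, obj_id: int) -> bytes:
        return self.spa.read(self.objects[obj_id])


dmu = ToyDmu(ToySpa())
dmu.put(1, b"hello" * 100)
assert dmu.get(1) == b"hello" * 100   # round-trips regardless of how SPA stored it
```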
By the way, any questions at any time here, just raise your hand or shout out. We're going to be getting into a little bit more of the philosophy of ZFS now, so if you have any technical questions, or just things you want to talk about, let me know. So, a little bit of the history of ZFS: how did we get to this point, where we're having this conference? Back in 2001 it was just me and one other engineer.
Jeff Bonwick and I started working on ZFS at Sun Microsystems, and as I mentioned, we saw all the problems with existing file systems and how hard it was for people to use them. We weren't industry veterans of storage systems; we were mainly outsiders seeing how hard it was for people to use existing storage solutions, and we wanted to come up with something that was better. So, in 2005... and by the way, there were a ton of people working on it by then, not just two guys.
But then there was this big event that was a little bit disconcerting, for the industry as a whole, but more specifically for people who are fans of ZFS: Oracle acquired Sun Microsystems and basically stopped contributing to the OpenSolaris project, and stopped contributing any source code to ZFS.
The reason this is such a big problem is that up until this point in time, ZFS was open source, but the vast majority of all source-code contributions came from Sun, and all changes needed to go through Sun. Basically, they were the gatekeeper and the arbiter of what was ZFS, and perhaps rightfully so, given that they were doing the contributing.
So thankfully, some people who were creating products based on OpenSolaris got together and formed the illumos community. illumos is basically a continuation of the OpenSolaris project, but under a truly open development model, meaning that there are contributions from many different companies; there's not just one company controlling all the changes. More recently, we've seen the ZFS on Linux port become much more mature.
Then, just in the past year, we saw that there were all these different ports of ZFS: illumos, Linux, FreeBSD. They're all doing great on their own, but we saw that there was starting to be a lot of duplicated effort between the different platforms. So we created the OpenZFS community as a way to try to unite these different platforms.
I'll talk some more about the specific things that we did later. There's also a new member of the OpenZFS family, which is the ZFS for Mac OS X port. It's much less mature than some of the other ports, but it's gaining a lot of momentum, and it's really cool to see a new platform join the OpenZFS community. Any questions about the history and how we got here? Yeah.
Audience: This license... I understand there may be some complexities with using ZFS. I mean, this license is kind of unknown to everybody; it's kind of like, "ooh, what's this?" I think one of the problems that the community faces is effectively explaining what the implications are. I hear a lot from people saying, "oh, it's not really open source."
B
What's
so
the
so
I'm,
not
a
lawyer,
so
this
is
not
like
a
legally
binding
anything,
but
my
understanding
of
the
license
terms
of
the
CD
dl
are
that
if
you
make
changes
to,
if
you
make
changes
to
specific
source
files
that
you
received
under
that
licence,
and
then
you
should
a
if
you
and
then
you
ship
binaries
that
our
result
of
those
changes,
then
you
need
to
release
the
changes
to
those
particular
source
files,
so
you
can
think
of
it
as
kind
of
somewhere
between
the
GPL
and
the
and
the
BSD
licenses
so
kind
of
like
the
GPL.
...you need to contribute changes that you make, but it's more explicitly defined as being on a per-file basis. So, for example, take the port to FreeBSD. FreeBSD is generally under the BSD license, so they have separate files for things that are BSD-licensed and separate files for things that are CDDL-licensed, and when they make changes to, say, the ZFS code, which is CDDL-licensed, they release those changes under the CDDL license as well.
So I don't think there should really be any question about whether it's open source; it has the open-source stamp of approval from the people who give that stamp. And I would say that it's more open than the GPL, in that it clearly allows you to make proprietary extensions in separate source files, and a little bit less open than the BSD license, in that it does require you to share changes to those source files. Thanks.
All right, so I mentioned OpenZFS. What is OpenZFS; what was the point of it? As I mentioned, it's a community project.
The point is to bring together open-source developers from all the platforms that are building on top of OpenZFS. We want to make sure that people are aware of the fact that open-source ZFS is alive and well. ZFS is not just a proprietary technology that belongs to one company; it's open-source software that belongs to all of us.
We want to make sure that developers in these different communities are talking to one another, and that they aren't duplicating work that's happening on Linux as well as on FreeBSD. We tend to come from these specific communities where we're used to doing things a certain way on a certain operating system, and we each have our own development model. From talking to people who are working on these different ports, I realized that they're facing a lot of similar problems.
So what did we do towards those goals? Like any good open-source project, the first thing we did was create a website with a wiki and a mailing list. The point of the mailing list is mainly developer discussion, so it isn't intended as a replacement for the platform-specific mailing lists, where people can ask things like: how do I use ZFS on this platform? I ran into a little problem, can you help me out?
This is an online event where an expert in the community, like a ZFS developer, hosts a call-in Q&A. People can call in via YouTube video chat and IRC to ask about what's new in ZFS, what are you working on, I ran into this problem, can I get some one-on-one help? This has been pretty popular, and I think it's time to schedule another one, because it's been a couple of months, so I'll probably be bugging some of you about that.
So, all these ports to all these different platforms are really very active. These numbers are a little bit out of date, but you can see that there are about a hundred people contributing source-code changes across all these platforms. The less mature ports, you see, have many more commits, as they're just starting or are in the middle of the porting process. And the stability of ZFS on these different platforms has enabled a lot of companies to create products based on OpenZFS.
We have people from several of these companies here today. I want to ask, maybe, for a show of hands: how many people are using ZFS on illumos? Cool, about a third, maybe. How many people are using ZFS on FreeBSD? Another third, cool. How many people are using ZFS on Linux? Wow, this is amazing; it's almost exactly split into thirds. I think this is probably the first event where that's been the case.
Cloudius Systems has created a new operating system called OSv, and it's designed specifically for use in cloud virtualized environments, so it only runs on hypervisors. It actually has no userland components. Basically, they observed that in modern cloud deployments there are a lot of different layers: you have an application running on an operating system, and there's usually only one application running there; the operating system is running on a hypervisor; and the hypervisor is actually running on the bare metal.

So basically they removed a bunch of those layers in the operating system and hypervisor to create a very thin kind of container around applications. They wrote this operating system totally from scratch, which is kind of amazing, and one of the few components that they took from the open-source community was the file system, which is ZFS. So I'm really curious to see where that project goes.
The question is, basically: how can we increase the number of contributors on illumos, to get to the level where they are on Linux, for example? I think that would be great; I'm not sure that it's necessarily a requirement for success on a given platform. I mean, look, there are even fewer contributors on FreeBSD, but we saw that about a third of the audience is creating products based on FreeBSD, even given so few contributors.
So I think a relatively small number of people can make a big impact on ZFS on a given platform, and I think these numbers show the diversity, but they don't show the depth, right? To my mind, if you have 24 people, or let's say even 19 people, working full time on ZFS contributions, that's huge.
That said, I think it would be great to attract more people to the illumos platform, but I think that that is primarily an illumos platform issue, and not so much specifically a ZFS issue. I think that OpenZFS is going to be a success because of what OpenZFS offers, and because it's available on the platforms that people want to run it on, right?
So maybe someday someone will dethrone Linux, and Linux won't be the number-one server operating system in the world. I think that's not necessarily a good or a bad thing, and I hope that file-system technologies like OpenZFS will be able to find a home on whatever platform people want to use them on. I know that's kind of a non-answer to your question.
I definitely see where you're coming from, in terms of wanting to use a product, or software, that someone stands behind enough to at least say, "hey, this is 1.0." You know, that would be a good start.
Most people don't even want to use a 1.0 product, right? So there are a lot of people with that sentiment, and I think that the people who are kind of in charge of the ZFS on Linux port, from Lawrence Livermore National Labs in America, definitely have a pretty thorough engineering culture. So I think they don't want to mark it 1.0 until it's really something that there are no known issues with.
So, mainly what I can say in terms of ZFS on Linux stability is that there are a lot of people here today who are using ZFS on Linux in production, and we're going to hear some more about their specific use cases later today. But I would also love to see a 1.0 release.
Yeah, so I would actually disagree with that a little bit. I guess, for the livestream, let me repeat the point.
So I guess the part that I would disagree with is just this: Brian and the guys at Lawrence Livermore aren't only tasked with supporting Lustre. My understanding, at least, is that they've actually been given some pretty broad independence to make ZFS on Linux be great for everyone.
They definitely started with just the Lustre support, and there wasn't even any ZPL, so you couldn't actually use it as a real file system. But given the success of that, they were given the responsibility to actually go and make this a real thing in the community and finish the ZPL port. So I think a lot of the stuff that they're working on nowadays is actually oriented more towards the community, and making ZFS work well for the community at large.
Did you have... okay, so you were just raising your hand. Okay, cool, thanks. Okay, cool. I think I'm going to fast-forward over some of these specific things, unless people have questions about them, and get to some more general points about the OpenZFS community and the code base.
This is some performance-improvement stuff that I worked on recently. It's actually not intended to be strictly an overall throughput improvement, but a way to make sure that you get consistent performance: ensuring that latency is roughly the same from operation to operation, rather than having the huge outliers that we saw with previous releases of ZFS. We went from outliers being multiple seconds to only 30 milliseconds. The outliers here mean, basically, that 99.99 percent of all the operations took less than this amount of time.
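To make that outlier figure concrete, here's a small illustration, with made-up latency numbers, of how a p99.99 value is read off a set of operation latencies:

```python
# Sketch of what the "outlier" metric means: the p99.99 latency is the value
# that 99.99% of operations complete under. All numbers here are invented.
def percentile(samples, pct):
    ordered = sorted(samples)
    # nearest-rank method: smallest value with at least pct% of samples <= it
    rank = int(-(-len(ordered) * pct // 100))   # ceil(len * pct / 100)
    return ordered[rank - 1]


# 10,000 mostly-fast operations (seconds) with a handful of slow stragglers
latencies = [0.002] * 9995 + [0.010, 0.020, 0.030, 2.0, 4.0]

p9999 = percentile(latencies, 99.99)
assert p9999 == 2.0   # a tiny tail of stragglers dominates the p99.99 figure
```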
If you'd like some more info on this, my colleague wrote up a couple of blog posts explaining it. Also, in OpenZFS we have LZ4 compression. The really cool thing about this is that it's faster and better than the previous default of LZJB: it compresses and decompresses using less CPU, and it actually gets a little bit better compression ratio.
The thing that's really cool is that because it uses so little CPU, and it's able to cheaply reject things that are incompressible, you can almost always just enable compression with LZ4 and actually get better performance on almost all workloads, because by compressing the data, there's less data to read and write to the disk.
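That "cheap rejection" behavior can be sketched like this. zlib stands in for LZ4 here, and the 12.5% threshold is an illustrative choice (ZFS applies a similar minimum-savings test before it will store a block compressed):

```python
# Sketch: try to compress each block, and store it uncompressed unless
# compression actually saves a meaningful fraction, so incompressible data
# costs almost nothing. zlib is a stand-in for LZ4, purely for illustration.
import os
import zlib


def write_block(data: bytes, min_saving=0.125):
    compressed = zlib.compress(data, 1)  # fastest setting, LZ4-like in spirit
    if len(compressed) <= len(data) * (1 - min_saving):
        return ("lz4", compressed)       # worth it: less data hits the disk
    return ("off", data)                 # incompressible: store as-is


text_block = b"all zeros compress wonderfully " * 64
random_block = os.urandom(2048)          # already high-entropy, won't shrink

assert write_block(text_block)[0] == "lz4"
assert write_block(random_block)[0] == "off"
```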
One of the ways that we're going to do this is by creating a platform-independent code repository. Let me go to the diagram for a second. All right. Currently, the status quo is that there are lots of changes to ZFS being made on all these platforms, but the changes are only really being pulled in one direction.
So
when
we
make
a
change
in
Lumos
its
pulled
down
into
freebsd
and
linux,
and
then
you
know
from
linux
to
Mac
os10
but
yeah,
you
know
if
somebody
makes
an
enhancement
in
Mac,
OS
10
that
really
a
plot
could
apply
to
all
the
other
platforms.
There's
no
common
mechanism
to
get
those
changes
into
the
other
platforms.
B
Part of the reason for this is that everyone only wants to work on their own platform, which is totally understandable, right? It's really hard to figure out how to upstream stuff when you have to get a new operating system, figure out the development model, get a development environment, figure out how to compile it...
...and figure out what mailing lists to talk to. Like I said, if we make an enhancement in FreeBSD that fixes part of the generic ZFS code, it's really rare that it gets picked up in the other operating systems. So the goal is to create a platform-independent code repository, the OpenZFS repo, and all of the operating systems will be able to pull changes from it, and also push changes to it, much more easily than they can upstream changes today.
So how are we going to make that actually easier? The goal is that we'll be able to test all the code in the OpenZFS repo in userland.
That means that if you're developing changes for the Linux kernel, you'll be able to test them out in your Linux kernel module, and then also apply those changes to the OpenZFS repo, compile that code as a userland library, and test it out in userland on your Linux system. Once you're sure that it works on Linux, you can upstream those changes into the OpenZFS repo without having to have any other operating systems involved: the code is sandboxed.
This is going to include, as I mentioned, the code that is actually platform-independent, which is most of ZFS, but, at least initially, it's not going to include the ZPL, which is the POSIX layer. The intent is not that you would use the code in the OpenZFS repo directly for anything other than testing, but rather that it would be easy to pull that code into the kernel implementations on the different platforms.
No, so there isn't any FUSE support or anything like that, so there won't be any way to interact with it by, say, creating and removing files. Today, this kind of userland implementation does exist, in the form of libzpool, which is used by ztest, which is a test program, and also by zdb, which is the debugger. So this is already true to some degree, but we're working on creating mechanisms so that you can use the zfs and zpool command-line tools to interface with the libzpool userland implementation as well. This will enable us to test a lot more of the code in userland, because, actually, I think more than twenty-five percent of the ZFS source code is in the userland libraries, like libzfs and the zfs command-line tool. That will enable us to basically port the test-runner test suite to run against this userland implementation, which will give us much broader test coverage.
Do I have another slide on that... no, I don't have a slide on that. Okay. So the idea is, as I mentioned, there's a sandbox that we'll put the code in. Basically, that means that all of the interfaces that the ZFS code uses will be pretty strictly defined by a porting layer. I've been tentatively calling it the ZKI, for ZFS Kernel Interfaces. So, rather than calling mutex_enter, or nanosleep, or things like that directly, we would create wrappers for those, like zki_mutex_enter, and then ZFS would call those routines. Every platform, including illumos, would have a small shim layer that translates those ZKI routines to whatever the native platform does, and part of the OpenZFS repo would be a userland implementation of those ZKI routines that uses just the POSIX interfaces, like POSIX mutexes and things like that.
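A toy model of that shim idea (all names here are placeholders taken from the talk, not a real interface): the platform-independent code only calls ZKI-style routines, and each platform supplies its own small implementation of them.

```python
# Sketch of the ZKI porting-layer idea: "ZFS-side" code never touches the
# platform directly, only zki_* wrappers, so the same code can run under a
# kernel shim or under this userland shim for testing.
import threading


class UserlandZki:
    """Userland shim: ZKI routines backed by POSIX-style threading locks."""

    def mutex_init(self):
        return threading.Lock()

    def mutex_enter(self, m):
        m.acquire()

    def mutex_exit(self, m):
        m.release()


def platform_independent_counter(zki, n):
    # Stands in for platform-independent ZFS code: it works with whatever
    # shim it is handed, never with the native platform primitives.
    lock = zki.mutex_init()
    count = 0
    for _ in range(n):
        zki.mutex_enter(lock)
        count += 1
        zki.mutex_exit(lock)
    return count


assert platform_independent_counter(UserlandZki(), 1000) == 1000
```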
We're not really trying to attack that with the OpenZFS repo, because that is platform-specific. For this, we're just trying to attack the lowest-hanging fruit, which is the code that really is platform-independent: the SPA, the DMU, the DSL, most of the libzfs stuff.
It should be fairly straightforward to extend that to also run against the kernel implementation, which will hopefully make it much easier for people doing, for example, FreeBSD development to run a full test suite against their kernel implementation, including the ZVOL support, to catch problems like the one that you saw earlier in the development cycle.
Cool, so we're reaching the end of the talk. I'll mention a few things about how to get involved, and then take questions about the open projects that we're working on. I think most of you probably already know this, and you are involved in the community, but if you're making a product with ZFS, then get in touch with us. Let us know; we'd love to put your logo on our website.
...with a description of what your product is and how ZFS helps to make it successful, and also on the back of the t-shirts that we print up. I apologize to some of you if your logos aren't on there; these t-shirts were actually made for a FreeBSD-specific conference that I was at last week, so it's mainly FreeBSD companies on the back, which is not intended to be representative of all the companies using ZFS.
If you're using ZFS as an admin or a user, let people know; help us spread the word about the technology that you're using. There are a lot of people I've talked to who say to me, "hey, ZFS is still around? I used it, you know, seven years ago, when it was part of Solaris, but Solaris seems kind of..." Well, I don't want to finish that on the video stream, but, you know, they aren't using Solaris anymore.
If you're working with the ZFS source code, please join the mailing list, and get help with your projects and advice on your source-code changes. And this week we're announcing the second annual OpenZFS Developer Summit. This is going to be a two-day conference in November, November 10th and 11th this year, in San Francisco.
The conference last year was really successful; you can see video recordings and slides from all the presentations last year on the OpenZFS web page. We're hoping to have a similar conference. It's probably going to be a little bit bigger; we're not sure if it'll actually fit in the Delphix office in San Francisco, but we're hoping we'll be able to squeeze in. We're going to do the talks a little bit differently this year.
I just made that trip, so I definitely sympathize with not wanting to deal with the jet lag, but it would be great to see some of you there if you can make it. We'd especially love to hear about how you are using ZFS, or enhancements that you've made to ZFS, so please submit your talk proposals. Cool.
Thank you. We've been working on that for quite a while. This is actually a project that was started by a new hire at Delphix, I want to say, like, two years ago, and we've kept putting it off and putting it off. Finally, it is actually on the roadmap for the next release of the Delphix product, so it does actually have to get done. We plan to finish it by the end of the calendar year. Yeah.
So the issue is, especially when a storage pool gets fairly full, the free space can become very fragmented. Essentially, when we write data, ZFS is copy-on-write, meaning that we can write wherever we want, but we can only write to where there's free space, not just literally anywhere, overwriting your existing data. So ideally we're going to have a big chunk of free space, and we're going to take all of your writes, which might be to random logical locations...
...and write them out nicely, in one contiguous chunk on disk. That's going to get really, really good write performance. But if the free space gets very fragmented, so that it's just little bits and pieces of free space throughout the whole storage pool, then when we're writing data, we have to write to all those little bits and pieces. So if you're using a disk, it has to actually move the disk head all over the place.
It's much, much slower, up to, like, a hundred times slower, to do random writes versus sequential writes. So what's being done to address this? We've run into this problem a lot at Delphix, because we host databases, which use small block sizes, which tends to exacerbate the problem, because the little bits and pieces tend to be even littler.
There were problems with the space allocator, where we were not finding big contiguous chunks and using them when we should have been. I think I mentioned this somewhere on this slide; it says something about it. Anyway, we added some more tracking information on disk that gives us, essentially, a histogram of the chunk sizes of the free space. That lets us tell, say: on this part of the disk I might have a megabyte free, and on this other part of the disk I also have a megabyte free, but I can tell that over here the megabyte is all in one-kilobyte chunks, like a thousand one-kilobyte chunks, versus over there it's actually two 512K chunks. So it's much preferable that I use that part of the disk: even though it's the same amount of free space in both of them, it's more contiguous over there, so I want to use that.
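The histogram idea can be sketched like this (the region contents are made up; real ZFS keeps a power-of-two space-map histogram per metaslab):

```python
# Sketch: track each region's free-segment sizes as a power-of-two histogram,
# and have the allocator prefer the region whose free space is most
# contiguous, even when the totals are identical.
def size_histogram(free_segments):
    hist = {}
    for size in free_segments:
        bucket = size.bit_length() - 1      # log2 bucket, e.g. 1024 -> 10
        hist[bucket] = hist.get(bucket, 0) + 1
    return hist


def most_contiguous(regions):
    # Prefer the region with the largest free segment (highest bucket).
    return max(regions, key=lambda name: max(size_histogram(regions[name])))


regions = {
    "metaslab-a": [1024] * 1024,        # 1 MiB free, but in 1 KiB pieces
    "metaslab-b": [512 * 1024] * 2,     # 1 MiB free, in two 512 KiB chunks
}

assert sum(regions["metaslab-a"]) == sum(regions["metaslab-b"])  # same total
assert most_contiguous(regions) == "metaslab-b"                  # but b wins
```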
Another part of that work is the fragmentation metric. When you do zpool list -v, it actually shows you how fragmented each device is, so that can at least help you understand: I'm seeing bad performance; is it because of fragmentation, or is it because of something else?
Another thing that we're working on right now is the ability to preserve some of those chunks of contiguous free space. We look at how busy the system is, essentially, and when there aren't a lot of writes going on, the system will actually proactively seek out those little bits of free space and fill in those holes. This makes those writes slower, but it preserves the chunks of contiguous free space for when you actually need them, when there's a high-write-throughput workload.
I don't know that anyone's working on disk defragmentation. I think that, while it might be theoretically possible, it's not actually a practical solution: defragmentation is probably really slow, and because of that, it's probably not practical to run in the general case.
So yeah, I'm definitely not intending to re-implement the mythical BP-rewrite feature, which is kind of related to defrag, and where the performance implications are really kind of extreme. So, the use cases for device removal: there are two main use cases I'm thinking of. One is basically people who make mistakes. If you accidentally add a device to the storage pool, there's no way to undo that right now, even if you meant to add it as something else...
But in all seriousness, I'll talk a little bit more about our particular use case, how people use Delphix, in the next session. Briefly, though: they're using Delphix, and OpenZFS underneath it, as a virtualization layer, and depending on the workload, they might need a lot of storage for a given project; then, when they're done with that project, they don't need that storage anymore. So they want to be able to remove that space from ZFS.
Having that flexibility is kind of the other main use case, yes. Because it's virtualization, they aren't literally going and pulling drives out of one machine; we're running on top of, say, LUNs on a SAN or something. So it's pretty easy for them to do that in general, but they can't do it with ZFS, because they can't remove the devices.
So the idea is that we're going to take all the space that's allocated on the device that we're removing, and you can think of it as: allocate a big file that represents that device, and copy the space into that file. So essentially we're adding another level of indirection: when we look up a given offset in that file, the file maps it to a given location on disk. It's actually a little bit trickier than that; we're not using a literal file, but using, like, a mapping.
F
B
That mapping is stored in the SPA layer. The reason it needs to be a little bit trickier is that we want to make it so that, at least for file systems, when you read that block you have to go through this additional translation to find that it's on this indirect vdev, but that part of the indirect vdev is actually stored on this real, concrete vdev. Once we've made that mapping, then we can actually tell the file system: here's where it actually is on disk.
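The extra level of indirection described here can be sketched as a table mapping offsets on the removed device to locations on the surviving devices. This is a minimal illustration only; the class and method names are hypothetical and do not correspond to the actual SPA data structures:

```python
# Sketch of device-removal indirection: segments allocated on the
# removed vdev are copied elsewhere, and an offset-to-new-location
# map translates reads of the old block pointers. Illustrative only.

class IndirectVdev:
    def __init__(self):
        # old_offset -> (new_device, new_offset)
        self.mapping = {}

    def copy_segment(self, old_offset, new_device, new_offset):
        """Record where a segment from the removed device now lives."""
        self.mapping[old_offset] = (new_device, new_offset)

    def translate(self, old_offset):
        """Translate a read of the removed device to its new location."""
        return self.mapping[old_offset]

iv = IndirectVdev()
iv.copy_segment(0x1000, new_device=2, new_offset=0x8000)
print(iv.translate(0x1000))  # (2, 32768)
```

Once a block pointer has been rewritten to the new location, as described next, a read no longer needs to consult this mapping at all.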
B
So you can replace that block pointer in that one file system, and the next time you read it, it will be able to go directly to the location on disk. And then we can proactively make those translations, you know, if that's needed. OK, thanks. Any more questions? Yes, what about encryption?
B
So a lot of people ask about encryption. Actually, I think at every event at least two people ask me about encryption, but as far as I know, nobody is working on it. There are a lot of different ways to implement encryption; obviously it depends a lot on what the use case is, you know, what attack you're trying to protect against. There are products that work below the level of ZFS by encrypting the whole disk underneath.
B
But like I said, as far as I know, nobody is actually working on this. I think that the way that Oracle implemented ZFS encryption is probably overkill for the vast majority of use cases. A common use case, well, probably the most common use case, is: I need to check this box on the product that I'm selling to my customers, and for that, any kind of encryption works. But there's another compelling use case as well.
B
So for that, whole-pool encryption would be much simpler to implement than the per-dataset key management stuff that's in Oracle ZFS, and I'd be happy to work with anybody that's interested in implementing that. We don't have any need for it right now at Delphix, so we aren't going to be working on it in the foreseeable future, but I'd be happy to work with anyone who does need it.
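The key-management difference between the two designs can be sketched like this. This is not real ZFS cryptography; the key derivation is a hypothetical illustration of why one key per pool is simpler than wrapped keys per dataset:

```python
from hashlib import sha256

# Sketch of the key-management difference, not real ZFS crypto:
# whole-pool encryption uses a single key for every block, while
# per-dataset encryption derives/wraps a distinct key per dataset
# that can be loaded and unloaded independently.

pool_key = sha256(b"pool passphrase").digest()

def whole_pool_key(dataset):
    # Same key regardless of dataset: one unlock step for the pool.
    return pool_key

def per_dataset_key(dataset, master=pool_key):
    # Distinct key per dataset (illustrative derivation only).
    return sha256(master + dataset.encode()).digest()

print(whole_pool_key("tank/a") == whole_pool_key("tank/b"))    # True
print(per_dataset_key("tank/a") == per_dataset_key("tank/b"))  # False
```

The per-dataset scheme buys finer-grained control (unlocking one dataset without another) at the cost of key-wrapping, inheritance, and change-key machinery the whole-pool scheme never needs.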
A
B
Yeah, so the embedded block data. The idea is actually a little bit more generic than that. In ZFS we can compress data: we take any block size, compress it down, and then store it in a smaller block. Well, the smallest block that we store it in is just one sector of the disk, so 512 bytes or 4K.
B
This is targeting things that are very, very compressible, that can actually compress down to about 100 bytes or less. We take that data and put it into the block pointer itself, so rather than the block pointer pointing to the compressed data, it actually just contains the compressed data directly.
B
So this saves space, though not so much space in the 512-byte case, because it's only, like, 100 versus 500 bytes. But it saves a lot of I/Os, because you don't have to do the I/O to read and write that data, and it helps with fragmentation, because you don't have all these tiny little blocks. So yeah, if you have a small file... sorry, the dnode has a block pointer inside of it.
B
Normally that points to a block, but if the file is very compressible, then we'll just store the data in the dnode directly. This is especially good for directories, because most directories only have a few entries in them, so they're just using, like, one 512-byte sector to store all those entries. And the entries are really, really compressible, because they're just text, so most directories can actually be compressed away.
B
B
No, so in this case... the question was about where the checksum of that embedded data is. Normally the block pointer has a checksum that covers the data it points to. In this case, because we're putting the data in the block pointer itself, that block pointer, along with all the other data in that indirect block, is checksummed by the block pointer that points to it. So it's kind of the same question as: who checksums the checksum?
B
So even if you aren't doing embedded data, it's like, OK, I have this checksum in this indirect block; how do I know that the checksum is right? And you know it's right because the block pointer that points to it has the correct checksum. So basically this whole tree forms what's called a Merkle tree, where the topmost checksum verifies that everything below it is correct.
B
A
B
B
As you know, there's no simple solution to just say, like, oh, add a device, take my RAID-Z stripe, add one more device, and then be able to use that, because each block in RAID-Z is spread out over all the devices. So you can think of it like this.
B
The logical ordering of the data is: if the devices are columns, the blocks are numbered like 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12. So if we add another column here, we could, through some tricks, know that this new part is free. But then you essentially have all these single free sectors by themselves: if I have, like, four allocated sectors and then one free sector, then four allocated and one free, how can I actually use those singleton sectors?
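The row-major layout in the 1-to-12 example can be sketched like this. It is purely illustrative; a real RAID-Z layout also interleaves parity sectors, which this ignores:

```python
def layout(num_sectors, num_cols):
    """Number sectors row-major across columns, as in the 1..12 example."""
    rows = (num_sectors + num_cols - 1) // num_cols
    return [[r * num_cols + c + 1 for c in range(num_cols)]
            for r in range(rows)]

# Four columns: sectors 1..12 fill three full rows.
for row in layout(12, 4):
    print(row)
# [1, 2, 3, 4]
# [5, 6, 7, 8]
# [9, 10, 11, 12]
```

Widening this to five columns leaves one lone free sector at the end of each old row; those scattered singleton gaps, rather than contiguous free space, are what makes naive RAID-Z expansion awkward.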
B
F
B
D
C
B
B
Then, when the second write comes in, it has to wait for that first batch of writes to complete before it, and everything else that's come in during that time, can be written out. So in practice, you know, I've done a little bit of performance testing on this, and it looks like this essentially halves the performance that we could get, because the ZIL is essentially a linked list of blocks.
B
So once we've issued that first write, we have to wait for that to complete before any future writes can logically complete. But we could theoretically start the next write immediately and then wait for both writes to complete. That's also on the roadmap of what we're looking at doing at Delphix.
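The latency effect being described can be sketched numerically. The timings are arbitrary illustrative units, not measurements:

```python
# Sketch: with batched ZIL commits, a write arriving while a batch is
# in flight waits for that batch to finish before its own batch is even
# issued, so worst-case latency approaches 2x one device write.
# Pipelining (issue immediately, enforce completion order) removes
# that wait.

DEVICE_WRITE = 1.0  # time for one log-block write, arbitrary units

def batched_latency(arrival):
    """Write arrives `arrival` into an in-flight batch [0, DEVICE_WRITE)."""
    wait_for_batch = DEVICE_WRITE - arrival   # finish the current batch
    return wait_for_batch + DEVICE_WRITE      # then write our own batch

def pipelined_latency(arrival):
    """Issue immediately; only completion ordering is enforced."""
    return DEVICE_WRITE

print(batched_latency(0.0))    # 2.0: worst case, double the latency
print(pipelined_latency(0.0))  # 1.0
```

This is consistent with the observation in the talk that batching roughly halves the achievable performance, and that pipelining the log writes should roughly halve the latency under concurrent load.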
B
I'm not going to make any promises about when it's going to be done, but it is something that we're looking at in the short to medium term. And from what I saw, it looks like this would approximately double the write throughput, sorry, halve the write latency, when you have a lot of concurrent writes going on to the file system.