From YouTube: OpenZFS Developer Summit Part 2
Description
Boris Protopopov of Nexenta on multi-tiered storage. Channel Programs (Chris Siden & Max Grossman)
Moderator: ...about the OpenZFS code repository, which we can do over lunch and breaks through the rest of the day. But I wanted to give other people a chance to talk, so Boris is here from Nexenta, I think, to tell us about what you guys are doing with ZFS and multi-tiered storage.
Boris: Yep, that's right. Hey guys, I'm Boris from Nexenta Systems. I didn't need to make this look very formal or anything, so please ask questions; it's just the way I figured I'd like to present. Here's what I'd like to cover in this presentation, quickly: why have multiple storage tiers at all, what we use them for and what we do with them, and multi-tiered storage in ZFS in particular.
I'll go through those parts pretty quickly so I have more time for discussion. Generally speaking, we get questions and requests about this all the time from our customers, because our product addresses a fairly general-purpose use case and a lot of enterprise customers.
F
You
know:
have
this
checkbox
placed
on
this
side.
Okay,
you
do
this,
you
do
that
you
do
this,
and
entering
is
something
that's
sort
of
a
pretty
common
thing
in
the
vocabulary.
So.
F
And of course, to my knowledge there might be a bunch of other reasons to use multiple tiers. People use them to try to meet the needs of varying, mixed workloads, and generally to try to achieve good performance at a fraction of the cost, and so on.
Also, different storage devices have very different characteristics, and some applications and use cases want those specific characteristics. From the management perspective, of course, it seems easier for the end user to deal with one, quote-unquote, unified or consolidated pool of storage that can be managed with policies and so on, with the ability to do differentiated placement: basically, to place different data (different payload data and metadata) on different storage for various reasons.
So that's why we were thinking about multiple tiers of storage. And what is interesting about a multi-tiered zpool in the data path? What we would like to achieve is placing the payload onto different storage based on some set of goals.
Mixed workloads that access the same data are notoriously difficult, to my knowledge, on hard drives, because they introduce all sorts of undesirable disk head movements; in the worst case they may cause head thrashing and so on, and they degrade performance, especially for access patterns that are random and that, unfortunately, correlate with background activities such as metadata-intensive scans that read a lot of metadata.
So that's why I think it's important to keep that in mind. And of course, even if there are two tiers with similar characteristics, say high IOPS and low latency, they can still serve two different purposes: for example, one dedicated as a ZIL device and one as L2ARC.
The existing tiers in a zpool are the quote-unquote normal tier, following the name of the metaslab class, and then there are logs, and there's cache. If you type zpool status or something similar, you'll see those listed separately.
But if we were to consider introducing a different tier, yet another tier along with, say, normal, logs, and cache, something quote-unquote "special" (or whatever the name ends up being), a number of questions come to mind from a management perspective. First of all, how would we do that: would we choose a reserved name for it, or would we allow users to define a name for the tier that makes sense to them and makes it a little more manageable? And what kind of placement and data movement policies, that is, migration policies, would we want to associate with those tiers?
Migration is only relevant, of course, if the placement was transient in the first place; if it's permanent, then it's not necessary.
It might also be interesting to associate specific redundancy and replication parameters with a particular tier: in ZFS terms, assigning a specific redundancy geometry to the top-level vdevs in a given tier, and playing with how many of those vdevs are associated with that tier.
And finally, it would probably be necessary to introduce some tier-specific spare management, because at this point in time there is only one, quote-unquote (I don't want to use the word pool) collection of spares. I mean, the collection of spares is in the pool, and if the tiers are not homogeneous, then obviously there needs to be some logic that understands that: if this is an SSD tier, let's replace a failed drive in it with an SSD, not with a hard drive. So basically, the devices in the tiers have to be matched with the appropriate spares.
So, the challenges in the multi-tiered pool situation: the general case of multi-tiering generally requires data migration. There are some use cases, though, that do not; in particular, if we decide we want to place something in a given tier and it's just going to stay there. In my mind, there are three separate issues with migrating stuff on ZFS.
That's correct, right, but I mean, I agree that this is a somewhat undesirable side effect; it will not break any kind of fundamental laws, but yes, the sharing would definitely be broken. And then there's deduplication.
Deduplication is something that, even with an immutable dataset, requires extra consideration, because there is a degree of indirection: if you change something in the dedup table, it will affect a whole bunch of other datasets. And then, of course, there is the classic, difficult problem of modifying block pointers that are part of a snapshot, and so on.
The reason I split it up this way is that the first case is pretty easy, quote-unquote, modulo the side effect of breaking the sharing of the blocks; deduplication is harder, but not as hard as snapshots, which are the hardest of all.
And of course, if there are challenges, we need to come up with opportunities to balance them out somehow. The short-term opportunities in this space are basically the use cases without migration: placement of immutable payload.
If we try to come up with a general statement, it covers, as I might have mentioned earlier, anything that does not require migration in the first place, plus anything that does not mind the side effect of breaking the sharing. And the big opportunity, of course, is to enable general-purpose payload mobility, which requires solving the harder problems.
So yeah, I realize there is more to decide about the management aspect of it, so we might need to rewind to that at some point; I apologize for breaking the flow.
So what we thought about was an SSD-based dedicated tier.
There are no contributions yet to the open-source code, and therefore it's kind of hard to talk about what has been done there, exactly. So we are planning to open source this work fairly soon, correlating it with the release of the next version of our product in early February.
Early February, so that's a couple of months, two and a half months, no, three months from now; sorry, my arithmetic gets a little fuzzy around that time frame. And besides, I'm not announcing the release here; I'm just basically telling you we're going to open source it, and that's the approximate time frame.
So why do we want, well, what do we want to use this SSD-based tier for? We wanted to store the metadata, and in this case we're basically looking at permanent placement; we're not planning to migrate it, at least at this point in time. We're just going to place it there, and therefore we're not really concerned with any of the issues around migration or anything of that nature.
There is a benefit from placing metadata on something that's low latency and high IOPS, because some workloads like it, and also because there is a whole bunch of activity inside ZFS, related to management and background tasks, that tends to read a lot of metadata.
So what we've considered is basically just placing metadata there, and also classifying metadata into different classes. As you guys might know, there's a whole bunch of different types of metadata in ZFS, and the DMU layer has an enumerator that describes a whole bunch of different object types. Depending on your application, the pool properties, and the dataset properties, different types of metadata can benefit from being placed in the special tier; and because the special tier is presumably smaller than the normal tier, you want to be selective about what goes there.
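(For concreteness, here is a minimal standalone sketch of that classification idea. The enum mirrors a few entries of ZFS's dmu_object_type_t from dmu.h; the is_special_eligible() helper and the policy flags are hypothetical illustrations of per-pool or per-dataset placement control, not code from the talk.)

```c
/*
 * Standalone sketch: deciding whether a given DMU object type should be
 * eligible for the SSD-backed "special" tier. The type names mirror a few
 * entries of ZFS's dmu_object_type_t; the helper and policy flags are
 * hypothetical.
 */
#include <stdbool.h>
#include <stdio.h>

typedef enum dmu_object_type {
	DMU_OT_OBJECT_DIRECTORY,	/* pool-level metadata */
	DMU_OT_DNODE,			/* dnode blocks */
	DMU_OT_SPACE_MAP,		/* metaslab space maps */
	DMU_OT_DDT_ZAP,			/* dedup table */
	DMU_OT_PLAIN_FILE_CONTENTS,	/* user payload */
	DMU_OT_ZVOL			/* zvol payload */
} dmu_object_type_t;

/* Hypothetical per-dataset policy bits. */
#define	META_TO_SPECIAL	(1 << 0)	/* place metadata on the special tier */
#define	DDT_TO_SPECIAL	(1 << 1)	/* place the DDT on the special tier */

static bool
is_special_eligible(dmu_object_type_t ot, int policy)
{
	switch (ot) {
	case DMU_OT_DDT_ZAP:
		return ((policy & DDT_TO_SPECIAL) != 0);
	case DMU_OT_OBJECT_DIRECTORY:
	case DMU_OT_DNODE:
	case DMU_OT_SPACE_MAP:
		return ((policy & META_TO_SPECIAL) != 0);
	default:
		return (false);	/* payload stays on the normal tier */
	}
}

int
main(void)
{
	int policy = META_TO_SPECIAL | DDT_TO_SPECIAL;

	printf("dnode -> special? %d\n",
	    is_special_eligible(DMU_OT_DNODE, policy));
	printf("file data -> special? %d\n",
	    is_special_eligible(DMU_OT_PLAIN_FILE_CONTENTS, policy));
	return (0);
}
```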
There are two tiers already, L2ARC and the logs, that are also traditionally SSD-based in ZFS, and we need to manage the interaction between those and the special tier, in particular how it interacts with the L2ARC. If a piece of metadata is already placed on something that's as fast as the L2ARC in the first place, it would seem unnecessary or wasteful to place it in the L2ARC as well. In terms of options, we were thinking about special-only placement on that special tier, dual placement, or normal-only placement, and we control it with properties: for pool-level metadata those are pool properties, for dataset-level metadata those are dataset properties, and there are feature flags that go with those as well.
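(A sketch of how those placement options might be resolved. The enum values, property names, and resolve_placement() function are assumptions made for illustration; the talk only says the choice is controlled by pool-level and dataset-level properties plus feature flags.)

```c
/*
 * Illustrative resolution of the placement options mentioned in the talk:
 * special-only, dual (special plus normal), or normal-only placement of
 * metadata, with a dataset-level property overriding the pool-level one.
 */
#include <stdio.h>

typedef enum placement {
	PLACE_INHERIT,		/* dataset defers to the pool setting */
	PLACE_NORMAL_ONLY,
	PLACE_SPECIAL_ONLY,
	PLACE_DUAL		/* write to both tiers */
} placement_t;

static placement_t
resolve_placement(placement_t pool_prop, placement_t dataset_prop,
    int feature_enabled)
{
	if (!feature_enabled)
		return (PLACE_NORMAL_ONLY);	/* feature flag not active */
	if (dataset_prop != PLACE_INHERIT)
		return (dataset_prop);		/* dataset overrides pool */
	return (pool_prop);
}

int
main(void)
{
	placement_t p = resolve_placement(PLACE_SPECIAL_ONLY, PLACE_INHERIT, 1);

	printf("effective metadata placement: %d\n", p);
	return (0);
}
```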
In terms of spare management, we also have to come up with some way of figuring out which spare to use when we need to replace a given device. Our approach here is kind of simplistic: we basically just add labels to disks, both to the spares and to the storage devices in the tiers, and a slight enhancement to the replacement logic looks at the labels and says, well, if we need to replace this device, pick a spare with a matching label.
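(A minimal sketch of that matching step, assuming a simple string label per device; the vdev_info_t structure and pick_spare() are illustrative, not the shipped code.)

```c
/*
 * Match a hot spare to a failed device by a "class of storage" label
 * attached to each vdev. Purely illustrative.
 */
#include <stdio.h>
#include <string.h>

typedef struct vdev_info {
	const char	*vi_name;
	const char	*vi_cos;	/* class-of-storage label, e.g. "ssd" */
} vdev_info_t;

/* Return the first spare whose label matches the failed device. */
static const vdev_info_t *
pick_spare(const vdev_info_t *spares, int nspares, const vdev_info_t *failed)
{
	for (int i = 0; i < nspares; i++) {
		if (strcmp(spares[i].vi_cos, failed->vi_cos) == 0)
			return (&spares[i]);
	}
	return (NULL);	/* no suitable spare: surface a management event */
}

int
main(void)
{
	vdev_info_t spares[] = {
		{ "c3t0d0", "hdd" },
		{ "c4t0d0", "ssd" },
	};
	vdev_info_t failed = { "c2t1d0", "ssd" };	/* special-tier device */

	const vdev_info_t *s = pick_spare(spares, 2, &failed);
	printf("replace %s with %s\n", failed.vi_name,
	    s != NULL ? s->vi_name : "(none)");
	return (0);
}
```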
In this case we actually use another feature that we introduced, called class of storage. Class of storage is basically a slight generalization of properties. At this point in time in ZFS you can only tweak a tunable, which is global for all the pools in the kernel, or you can change properties at the pool or dataset level. What we've added are properties per tier, that is, per class of storage, per collection of vdevs, plus a set of properties per vdev, just for extra flexibility. So things like queue depth, for instance, or how good a device is for reading, say in a mirror configuration, can be controlled by vdev-specific properties. And we leverage the vdev-specific properties to attach the labels that make the spare replacement work.
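(A sketch of the class-of-storage idea as I understand it from the talk: a setting such as queue depth resolved per vdev, falling back to the tier, then the pool, then a global tunable. The structures and lookup_queue_depth() are illustrative only.)

```c
/*
 * Resolve a property with per-vdev, per-class-of-storage, per-pool, and
 * global scopes, most specific scope winning. Illustrative only.
 */
#include <stdio.h>

#define	PROP_UNSET	(-1)

typedef struct prop_scope {
	int	queue_depth;	/* PROP_UNSET means "not set at this scope" */
} prop_scope_t;

static int global_queue_depth = 10;	/* stand-in for a kernel tunable */

static int
lookup_queue_depth(const prop_scope_t *vdev, const prop_scope_t *cos,
    const prop_scope_t *pool)
{
	if (vdev->queue_depth != PROP_UNSET)
		return (vdev->queue_depth);
	if (cos->queue_depth != PROP_UNSET)
		return (cos->queue_depth);
	if (pool->queue_depth != PROP_UNSET)
		return (pool->queue_depth);
	return (global_queue_depth);
}

int
main(void)
{
	prop_scope_t pool = { PROP_UNSET };
	prop_scope_t ssd_tier = { 32 };		/* class-of-storage override */
	prop_scope_t one_vdev = { PROP_UNSET };

	printf("effective queue depth: %d\n",
	    lookup_queue_depth(&one_vdev, &ssd_tier, &pool));
	return (0);
}
```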
And let's see, something else we've been looking into, using this tier as a write cache, although at this point in time it's more of a research project, work in progress, and we're not quite ready to put it out into the open code base.
Therefore, basically keeping in mind what I mentioned in the challenges section of this presentation, we limit it to the case where we don't need to move snapshots and so on, because we don't yet know how to do that; we're working on it at this point in time. The reason to do this is to try to absorb transient spikes in the workload, assuming that this SSD tier is provisioned appropriately and can actually do so. It's also for flexibility and manageability, so that, for instance, if the customer doesn't want to, they don't have to have multiple separate SSD-based tiers.
We can configure how the tier is used per dataset, so it can be used as a log device (a slog), as a metadata device, and as a write cache. All these things are kind of inclusive in our current implementation: if it's a metadata device, it's also used as a log, unless there is a separate log device, in which case that one takes the ZIL. So, say the customer wants to use the tier for metadata but isn't happy with its performance as a slog: they can still have their special tier for metadata, and their ZIL is going to go into that separate slog.
The way we're thinking about the write cache is that there will be a background task that senses how intense the workload is, that is, how much data is being written by the user, based on the telemetry that's available in the write-throttling framework. When the amount of incoming writes drops, we basically start moving the data off.
Otherwise the applications are going to experience performance drops. We also have high and low watermarks, and there are events to alert people to the fact that they need to add more storage. Going back to the metadata: normally it's not very hard to provision for metadata appropriately so that it doesn't overflow, with the possible exception of deduplication, because the dedup table is also metadata; so it matters that we can provide the management events for people to act accordingly.
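(An illustrative sketch of the high/low watermark behaviour described here; the thresholds and function names are assumptions, not the actual implementation.)

```c
/*
 * When used space crosses the high watermark, start draining to the normal
 * tier and emit a management event; stop once below the low watermark.
 */
#include <stdbool.h>
#include <stdio.h>

#define	HIGH_WATERMARK_PCT	80
#define	LOW_WATERMARK_PCT	60

typedef struct wrcache_state {
	unsigned long long	used;
	unsigned long long	size;
	bool			draining;
} wrcache_state_t;

static void
wrcache_check(wrcache_state_t *wc)
{
	unsigned pct = (unsigned)(wc->used * 100 / wc->size);

	if (!wc->draining && pct >= HIGH_WATERMARK_PCT) {
		wc->draining = true;
		printf("event: write cache %u%% full, start migrating\n", pct);
	} else if (wc->draining && pct <= LOW_WATERMARK_PCT) {
		wc->draining = false;
		printf("event: write cache drained to %u%%\n", pct);
	}
}

int
main(void)
{
	wrcache_state_t wc = { .used = 85, .size = 100, .draining = false };

	wrcache_check(&wc);	/* crosses the high watermark */
	wc.used = 55;
	wrcache_check(&wc);	/* falls below the low watermark */
	return (0);
}
```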
So that's basically what I've mentioned about what we're working on. There is the pretty much complete piece of work that deals with this storage pool tier as a place to put metadata and as a write cache, plus the associated work on class of storage and vdev-specific properties. And then there is the future work; as I mentioned previously, there are two big categories that need to be addressed:
deduplication, and the general case of cross-tier payload migration. The first case, I think, is a little easier than the general-purpose one because, from a high-level standpoint, it would seem it could be addressed with a transient DDT class. As you might know, the current deduplication framework in ZFS uses different classes for the dedup table; at this point in time they're basically broken down based on the frequency, that is, the reference count, of a block.
And there is already machinery to fall back and look things up in multiple dedup classes. So if we were to think about migrating stuff and introduce a transient dedup table, we could move things, on the assumption that we also change the read path so that it says: well, if we can't find this block where we expect it, go and look in the transient dedup table as well.
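(A sketch of that lookup fallback. The ditto/duplicate/unique class names below mirror ZFS's real ddt_class_t; the extra DDT_CLASS_TRANSIENT value and ddt_lookup_all() are hypothetical, just to illustrate the idea.)

```c
/*
 * Simplified model of a read path that falls back to a transient DDT class
 * while blocks are being migrated between tiers.
 */
#include <stdbool.h>
#include <stdio.h>

typedef enum ddt_class {
	DDT_CLASS_DITTO,	/* highly referenced blocks */
	DDT_CLASS_DUPLICATE,	/* refcount > 1 */
	DDT_CLASS_UNIQUE,	/* refcount == 1 */
	DDT_CLASS_TRANSIENT,	/* hypothetical: entries being migrated */
	DDT_CLASSES
} ddt_class_t;

/* Toy per-class lookup; in real ZFS each class is a separate object. */
static bool
ddt_class_contains(ddt_class_t cls, unsigned long long blk)
{
	/* Pretend only the transient class knows about block 42. */
	return (cls == DDT_CLASS_TRANSIENT && blk == 42);
}

static bool
ddt_lookup_all(unsigned long long blk, bool migration_active)
{
	int last = migration_active ? DDT_CLASS_TRANSIENT : DDT_CLASS_UNIQUE;

	for (int cls = DDT_CLASS_DITTO; cls <= last; cls++) {
		if (ddt_class_contains((ddt_class_t)cls, blk))
			return (true);
	}
	return (false);
}

int
main(void)
{
	printf("found without migration flag: %d\n", ddt_lookup_all(42, false));
	printf("found with migration flag:    %d\n", ddt_lookup_all(42, true));
	return (0);
}
```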
And we would do that only when a special global pool flag is set, while we're transitioning stuff, that sort of thing. This is a very raw idea; obviously there's not enough detail yet, but it looks like it would work. Generally speaking, it would seem a little bit easier than dealing with block pointer rewrite, so we will try that first and see whether it works, although block pointer rewrite is something that's very intriguing.
I don't know, I don't think it's a significant change. I mean, in ZFS there is already a framework for going through and classifying the type of each write. There is a function named dmu_write_policy that basically looks at what is being written and fills in the ZIO properties (zio_prop_t, I think it's called), and that basically says what the block is.
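(For reference, dmu_write_policy() in the OpenZFS code fills in a zio_prop_t describing how a write should be issued. Below is a simplified standalone sketch in that spirit; the tier_t enum, the my_write_policy() name, and the idea of a target-tier field are assumptions about how tier-aware placement could be expressed, not the real interface.)

```c
/*
 * Simplified write-policy hook: classify a write and record how many copies
 * it gets and which tier it should land on.
 */
#include <stdbool.h>
#include <stdio.h>

typedef enum tier { TIER_NORMAL, TIER_SPECIAL } tier_t;

typedef struct write_props {
	bool	wp_is_metadata;	/* classified from the object type */
	int	wp_copies;	/* number of block copies to write */
	tier_t	wp_tier;	/* hypothetical: where to allocate */
} write_props_t;

static void
my_write_policy(bool is_metadata, bool meta_to_special, write_props_t *wp)
{
	wp->wp_is_metadata = is_metadata;
	wp->wp_copies = is_metadata ? 2 : 1;	/* ZFS keeps extra metadata copies */
	wp->wp_tier = (is_metadata && meta_to_special) ?
	    TIER_SPECIAL : TIER_NORMAL;
}

int
main(void)
{
	write_props_t wp;

	my_write_policy(true, true, &wp);
	printf("metadata write -> tier %d, copies %d\n", wp.wp_tier,
	    wp.wp_copies);
	return (0);
}
```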
Right, so if I have a little more time, I just wanted to ask the audience: who would be interested in multi-tiering?
Audience: Can you give an overview of what you mentioned you're working on open sourcing? What specific features are done and ready for you to work on open sourcing, versus future ideas? What do we have to look forward to, specifically: what new properties, what new settings?
Boris: Without making an explicit commitment to open source everything initially; there has to be some work done to actually clean it up, refactor it, and open source it, since it was developed internally.

Audience: Which code is that?

Boris: I was just getting to that; yes, that was a general statement. The code we have already committed to open source is the class of storage and vdev-specific properties, and the ZFS metadata tier,
which includes configuring the tier, provisioning it, the whole set of usable functionality. At this point in time, due to my limited imagination, this class is called "special".
It's for differentiated storage, basically, but if you guys have a better idea as to what to call it, I'm open. Part of the complexity here is that we don't want to make this too special-purpose, right? The best and most general use case is to define the tier at the pool level, as is done for the other tiers in the zpool, and then use dataset-level properties to define how each dataset uses it, much like, say, the L2ARC works, where every dataset can choose what to do with it, that sort of thing.
Audience: And for moving data, that is, moving data from file systems between tiers: is that also part of this?
Boris: That's work in progress right now. It's sort of available in raw form, not necessarily raw in terms of stability, but in terms of what to do with it, how to configure it, and how to get things going, that sort of thing. But to address the question of how it could be done: it really is not that hard.
Keeping in mind that ZFS is a copy-on-write system, and assuming there is already this general framework leveraging the DMU write policy that lets you control where to place a particular block, then all you need to do is read the block, mark it dirty, and basically tell the policy where it should go.
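(A toy model of that flow, purely illustrative: read, mark dirty with a desired placement, and let the next transaction group sync write the new copy where the tier-aware policy says. None of these names exist in ZFS.)

```c
/*
 * Migrating a block in a copy-on-write system: mark it dirty with a target
 * tier, and the next sync rewrites it there.
 */
#include <stdio.h>

#define	MAX_DIRTY	16

typedef enum tier { TIER_NORMAL, TIER_SPECIAL } tier_t;

typedef struct block {
	int	id;
	tier_t	tier;		/* where the current copy lives */
} block_t;

typedef struct dirty_entry {
	block_t	*blk;
	tier_t	target;		/* placement requested by the policy */
} dirty_entry_t;

static dirty_entry_t dirty[MAX_DIRTY];
static int ndirty;

/* "Read" the block and mark it dirty with the tier we want it to land on. */
static void
migrate_block(block_t *blk, tier_t target)
{
	dirty[ndirty].blk = blk;
	dirty[ndirty].target = target;
	ndirty++;
}

/* Transaction group sync: copy-on-write rewrites land on the target tier. */
static void
txg_sync(void)
{
	for (int i = 0; i < ndirty; i++)
		dirty[i].blk->tier = dirty[i].target;
	ndirty = 0;
}

int
main(void)
{
	block_t blk = { .id = 7, .tier = TIER_SPECIAL };

	migrate_block(&blk, TIER_NORMAL);	/* evict from the SSD tier */
	txg_sync();
	printf("block %d now on tier %d\n", blk.id, blk.tier);
	return (0);
}
```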
Audience: It sounds like the first code that you're working on putting back would be basically the class of storage, so you'd be able to configure special or fast devices, and then say this type of data goes onto the fast devices versus the slow devices, and it just stays where it lives forever.

Boris: That's right, yeah.
There is also a feature flag that basically prevents importing the pool on software that doesn't know how to handle it, because if you don't do that, then if you import the pool and start messing around with it, you can violate all the constraints. It would be readable and writable, but that's a problem, because then the metadata that changes would go to the wrong place.
The metadata and write-cache usage form a kind of inclusive relationship. There are three values for this property: one value means the dataset only uses the tier as a slog device; then there is the metadata value, which says this particular dataset uses the special tier for metadata and ZIL; and then there is the write-cache value, which we're not ready to open source yet, because we're still polishing things like understanding the performance implications and so on.
The write-cache value is inclusive: it says this particular dataset writes data into the special tier when it's appropriate, meaning when there is space there, and uses it for metadata and ZIL as well. But there is always an option to override the use of the special tier for the ZIL.
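(A sketch of that inclusive, three-valued per-dataset setting; the names and the zil_override flag are my paraphrase of the talk, not a shipped interface.)

```c
/*
 * Inclusive usage levels for the special tier: each level implies the ones
 * below it, unless the ZIL is explicitly redirected to a separate slog.
 */
#include <stdbool.h>
#include <stdio.h>

typedef enum special_usage {
	SPECIAL_USAGE_ZIL,	/* tier used only as the slog */
	SPECIAL_USAGE_META,	/* ... plus metadata placement */
	SPECIAL_USAGE_WRCACHE	/* ... plus payload write caching */
} special_usage_t;

typedef struct dataset_policy {
	special_usage_t	usage;
	bool		zil_override;	/* send the ZIL to a dedicated slog */
} dataset_policy_t;

static bool
uses_tier_for(const dataset_policy_t *dp, special_usage_t what)
{
	if (what == SPECIAL_USAGE_ZIL && dp->zil_override)
		return (false);
	return (dp->usage >= what);	/* inclusive semantics */
}

int
main(void)
{
	dataset_policy_t dp = { SPECIAL_USAGE_META, true };

	printf("metadata on special tier: %d\n",
	    uses_tier_for(&dp, SPECIAL_USAGE_META));
	printf("ZIL on special tier:      %d\n",
	    uses_tier_for(&dp, SPECIAL_USAGE_ZIL));
	return (0);
}
```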
Audience: The rest of the stuff, I think, will integrate pretty easily, because the framework for setting up metaslab classes is already there, so it's just a matter of defining it. The reason I was asking was to try to come up with what that name should be; "special", yeah.
Boris: We've also been a little worried about the failure characteristics of SSDs and so on. Given that this thing holds metadata, it's a little unnerving: if you lose it, basically the whole pool is gone, as far as I can tell. So it's probably a good idea to have some sort of redundancy and backups there.
Audience: I think the one thing that you may want to consider, especially for metadata, is having one copy on this class of storage and one on the regular storage, because you were mentioning the failure rate of SSDs.
Boris: Yeah, like the L2ARC, I guess. I mean, I agree with you that there are a lot of commonalities, and we thought about it. But our reservation about using the L2ARC for this purpose is that it's hard to control what stays there and what goes out, because things just get pushed out at some point, right?
Yes, and dedup was kind of important here, because DDTs are the biggest consumer of the metadata space on this device. So we also added some tweaks to control the number of ditto copies for the DDT, and the reason we did that is that the DDT is, in a sense, rebuildable; it's redundant information, unless you really lose it. So you can tweak the number of ditto pointers for the DDT with a property and have an easier time. It's not the full solution, it's kind of a half measure, but it's fairly effective, because in fact the DDT is the biggest consumer of the space and, as I mentioned, it's more or less redundant data in the first place.
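(A sketch of the ditto-copies idea; the names and numbers are illustrative, not the real tunables.)

```c
/*
 * Keep extra "ditto" copies of DDT blocks, independently of the copies used
 * for ordinary metadata or payload, since losing the dedup table effectively
 * loses the pool's deduplicated data.
 */
#include <stdio.h>

#define	MAX_COPIES	3	/* ZFS caps block copies at three */

typedef enum block_kind {
	BLOCK_PAYLOAD,
	BLOCK_METADATA,
	BLOCK_DDT
} block_kind_t;

/* Hypothetical per-pool settings. */
static int payload_copies = 1;
static int ddt_extra_copies = 2;	/* keep DDT blocks extra-redundant */

static int
copies_for(block_kind_t kind)
{
	int n = payload_copies;

	if (kind == BLOCK_METADATA)
		n += 1;			/* ZFS adds a copy for metadata */
	else if (kind == BLOCK_DDT)
		n += ddt_extra_copies;
	return (n > MAX_COPIES ? MAX_COPIES : n);
}

int
main(void)
{
	printf("payload copies:  %d\n", copies_for(BLOCK_PAYLOAD));
	printf("metadata copies: %d\n", copies_for(BLOCK_METADATA));
	printf("DDT copies:      %d\n", copies_for(BLOCK_DDT));
	return (0);
}
```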
Moderator: This was like the mega version of the lightning talk, I guess, for Nexenta. For the others, maybe we can get one person from each company that's here to just talk about what product you're creating with ZFS and any cool stuff that you've done that you're also looking to push back into open source at some point. Cool, thanks.