From YouTube: Storage Multi-Tenancy For Containers by Allan Jude
Description
From the 2022 OpenZFS Developer Summit https://openzfs.org/wiki/OpenZFS_Developer_Summit_2022
Slides: https://docs.google.com/presentation/d/1fZEQNoJJhz6pW3M8S6cg-XKrBmHSDxyc/edit?usp=sharing&ouid=112595186103367032517&rtpof=true&sd=true
A: Hi everyone, I'm Allan from Klara Systems, and today we're going to talk about making ZFS work better for multi-tenant situations, where you might have a pool that's shared with multiple customers and you need to manage how they have access to it, and all the fun things that happen when you have multiple people who aren't necessarily on your side accessing your pool.
A: It's a machine that's basically being parceled out to a bunch of different users who aren't necessarily from your organization, so you need to be able to control what they have access to see, and to isolate them so that they don't impact each other. You need to keep customer A from being the noisy neighbor that makes customer B's system slower, but also to make sure your customers don't know about each other and can't see things they're not supposed to be able to see.
A: ZFS already lets you say, with zfs allow, that this user or this group of users on the system has access to manipulate this file system or chain of file systems, and on FreeBSD and Solaris it was possible to delegate control of a subset of the datasets to a container, a jail or a zone or whatever you want to call it. The root (or pseudo-root) user inside that container would then have the ability to create their own datasets, manage snapshots, change the mount points and properties, and everything else.
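For readers following along, a minimal sketch of that FreeBSD-style delegation; the pool name tank and jail name c1 are assumptions for illustration, and the jail's configuration must also permit ZFS mounts:

    # Host: create a subtree for the tenant, flag it so the host will
    # not mount it, and attach it to the running jail "c1" (assumed name).
    zfs create -o mountpoint=none tank/tenants/c1
    zfs set jailed=on tank/tenants/c1
    zfs jail c1 tank/tenants/c1

    # Inside the jail, root can now manage the delegated subtree:
    zfs create tank/tenants/c1/data
    zfs snapshot tank/tenants/c1/data@daily
    zfs set mountpoint=/data tank/tenants/c1/data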
A: Linux does have a container system, but it doesn't map very cleanly to what Solaris and FreeBSD look like. On Solaris or BSD you had this top-level object, the jail or zone itself, that would have a name or an ID, and everything belonged to it; whereas on Linux the namespaces are much more amorphous, and they're separate namespaces: the mount namespace, the PID namespace, the network namespace, the user namespace, and a container can have some or all of these.
A: So this made it a bit more difficult to map onto the existing code in ZFS, because that code assumed you would just delegate the dataset to this object that had an ID or a name, and on Linux you can have layers and layers of these; there's not really the same concept.
A: So we had to find a way to solve that. The use case for this was one of our customers that provides a hosted CI platform: their customers provide a Docker container that's going to go do some workload, and they wanted to run that in unprivileged LXD containers on Linux rather than on the host. But because they're going to run a Docker container...
A: They wanted to use ZFS, and so Docker, during its setup phase, wanted to run a bunch of ZFS commands to create all the datasets and clone things, but it couldn't do that in an LXD container by default on Linux, and our challenge was to solve that.
A: And in this case the end user doesn't actually have access to run any ZFS commands; it's just what Docker needs in order to set up and download the customer's workload. The inner container, the second container that's actually provided by the end user, doesn't have access to ZFS. It's only the container that our customer controls that has access to ZFS, to do enough setup to make Docker happy, instead of having to use the vfs passthrough storage driver or something else in Docker.
A: That turns out to have a large negative impact on performance, and so we wanted to make the basic ZFS commands available to the root user in the outer LXD container.
A: To get started with that: everywhere in ZFS there's a check called INGLOBALZONE. It's just a macro that decides: are we in the host system, or are we inside a container? On FreeBSD that just checks what our jail number is; if it's zero, we're on the host, otherwise we're in a jail.
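From userland the same distinction is visible; a small illustration on FreeBSD (the sysctl is standard, though the kernel-side macro is what ZFS itself consults):

    # On the host (the "global zone"):
    sysctl -n security.jail.jailed    # prints 0
    # Inside any jail the same query prints 1, which is the userland
    # view of the jail-number check the INGLOBALZONE macro performs.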
A: It's similar for zones. On Linux it's a little more complicated, but you can find out which user namespace you're in by checking this value, and there's a well-known number that means the host, so we were able to make something analogous. So now, when you're inside a container, if you did have access to /dev/zfs, it would tell you that you don't have access to do this because you're inside a container.
A: The interesting thing is the user ID and group ID mapping that the Linux container system supports, where you have a user namespace. If you're running the LXD container as a regular user on the host system that has, say, user ID 1001, you get a range of user IDs in a higher range, like 256k plus 64k, and all of those user IDs in the end actually map back to your unprivileged user on the host; but from inside the container it looks like you have the full range of 64k user IDs.
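To make the arithmetic concrete, a hedged sketch of such a mapping on an LXD host; the user name, the range start of 262144 (256k), and the length of 65536 (64k) follow the talk, but check /etc/subuid on a real system:

    # Host: the subordinate-UID range granted to the unprivileged user
    # that runs the container (format is user:first-uid:count):
    grep '^user1001:' /etc/subuid     # e.g. user1001:262144:65536

    # So container UID 0 is host UID 262144, container UID 1000 is host
    # UID 263144, and all of them ultimately belong to "user1001".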
A: There was some code to try to do the mapping back and forth with that, so that as you crossed the boundary out of the kernel back into ZFS, it would look it up and figure out the user ID for inside that container. But that turned out to actually be the wrong thing, because on disk in ZFS we always want to store the raw user ID, and it turns out the Linux kernel will translate it to the right thing for the view inside the container anyway.
A: So when you create a file inside the container, it gets owned by the user that only exists in that container; but when it comes back through VFS to ZFS, once we have the flag saying we support this, it maps back to the really high user ID, and that's what we want to store on disk so that it can be read from the inside.
A: Without this, what happened is that if you touched a file as root inside the container, then ran ls and looked at it, it would be owned by nobody, because it was a user ID outside of the range that was allocated to your container, and you wouldn't be able to access the file you had just created. So we had to remove a little bit of code that was added in 2016, because it effectively did the translation twice, and that resulted in all the files being inaccessible.
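The symptom is easy to picture from a shell; a reconstruction of the failure mode just described (the path is illustrative):

    # Inside the unprivileged container, before the fix:
    touch /data/newfile
    ls -l /data/newfile
    # -rw-r--r-- 1 nobody nogroup 0 ... /data/newfile
    # The double-translated UID landed outside the container's allocated
    # range, so even the creator can no longer access the file.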
A: On the superblock in Linux we had to set the FS_USERNS_MOUNT flag; that's how a file system tells Linux that it supports being mounted in a user namespace, and that the kernel will take care of all the user ID mapping for it, so that the file system doesn't have to do any extra heavy lifting.
A: This led to the biggest challenge: because there isn't just a jail name or a zone ID or whatever, how does a user specify which user namespace they want to delegate a dataset to?
A: That was not nice, but the bigger problem is that, given that ID, there's no interface in the Linux kernel to turn it into a reference to the actual struct namespace, and that led to the problem of: okay, we've delegated this dataset to that namespace...
A: What if they destroy that namespace and create a new one, and it gets assigned the same ID? Now some unrelated namespace has access to this dataset that the user didn't actually delegate it to, and that was a problem.
A: So what we had to do was make the interface take a path: you actually give it /proc/<pid>/ns/user, and it will open a file descriptor to the user namespace and pass that through the ioctl into the kernel, where it can be converted into a reference to the actual user namespace. That way we essentially have a hold on that namespace, and it won't be reclaimed until we stop delegating a dataset to it.
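This is the interface that shipped as the zfs zone and zfs unzone subcommands on Linux; a minimal sketch, where the container's init PID and the dataset name are assumed illustrative values:

    CT_PID=12345                      # PID of the container's init (assumed)

    # Delegate: hand the dataset to that process's user namespace.
    # (The zoned property marks it as delegated; depending on the
    # version, zfs zone may manage it for you.)
    zfs set zoned=on tank/tenants/c1
    zfs zone /proc/$CT_PID/ns/user tank/tenants/c1

    # Undelegate: this also releases the hold on the namespace.
    zfs unzone /proc/$CT_PID/ns/user tank/tenants/c1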
A: And this way we know it's not going to get reused, and the dataset won't randomly be exposed to some other container. In Linux itself, they've changed the API for all of this namespace stuff a lot; it kind of grew organically over a couple of years, and so at one point the struct had different elements depending on the type of namespace it was, and then they switched to having this common union...
A: ...that would hold it all. And so the code is littered with ifdefs, because depending on whether you have a 3.x or 4.x or 5.x kernel, all the interfaces for dealing with namespaces are different, and it also depends on which distro of Linux you have and in which version they enabled any of these things. So it gets a little messy, but it works very nicely out of the box on Ubuntu and new enough Red Hat.
C (Richard): This is just a very simple question, but did anyone think of basically having an ifdef to detect ns.inum or proc_inum, defining based on that, and then using, I want to say, a compatibility inum in place of it? Because it looks like you could push the ifdef soup into a header.
A: Or into a macro. Yeah, we just made an accessor function for it that figures it out, because it's probably going to change again. We tried to avoid putting the same ifdef in many, many places where we could, by making wrapper functions for it, because otherwise it would get nasty every time you had to go and update all the ifdefs. So yes, we did avoid just making a soup of it.
A: Now I want to talk a bit more about what else we can do in ZFS for this use case of multiple competing customers all sharing one pool. How do we make sure we don't have noisy neighbors, or people doing a denial-of-service attack against our system by just doing some expensive operation over and over again?
A: One thing we want to talk about is overall improving support for the container workload; there's a bunch of open PRs that we're going to go through, and other things ZFS can do to support that case, plus other features, not necessarily related to containers, that could make sense for multi-tenant use cases, and some other interesting bits that we're looking at. The first one for containers, which I know mav and the people at iXsystems are interested in, is renameat2. This has a couple of flags.
A: One of them is RENAME_EXCHANGE: atomically swapping two files, so renaming A to B and B to A as one atomic operation. That's used a lot in the container stuff, especially with overlayfs. And there's a second one, RENAME_WHITEOUT, that does a rename and leaves a whiteout for the old file name. Again, if you have a layered file system and you rename a file, you don't want the file with the same old name from the lower layer to show through just because you renamed it, so it has to create a whiteout behind it as it renames the file away. There's a PR for both of those, and it's making some progress, although there's an open question: these new operations are going to be a new type of ZIL record, and what do we do on systems where there is no renameat2 syscall? How do we deal with the compatibility?
A: We don't want to make the pool incompatible just because somebody used this and it might be in the ZIL, and so currently the plan is to import the pool anyway, but print an error that the ZIL didn't get completely replayed, because the different OS you're importing it on doesn't support that ZIL transaction type.
A: For whiteout support there are some interesting cases, just in the way whiteouts are done on FreeBSD. Some of the existing file systems support a specific whiteout inode type, so there's an object type in the file system that is a whiteout; whereas on Linux, a whiteout is done by just creating a character device with device number 0:0, and there's special handling for that. So one of the questions was: should ZFS have an explicit object type for a whiteout?
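The Linux convention mentioned here is easy to see from a shell; a short illustration:

    # In an overlayfs upper layer, a deleted lower-layer file is masked
    # by a character device with device number 0:0 (requires root):
    mknod deleted-name c 0 0
    ls -l deleted-name
    # c--------- 1 root root 0, 0 ... deleted-name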
A: We have object types for files and directories and devices and so on; should we have one for whiteout, like other file systems such as UFS do? But if we do, then we'd have to bump the version number of the ZFS file system layer, which I think is currently five and hasn't been bumped in a long time, and we never did feature flags for the file system version number because we haven't changed it since way before v28.
A: Especially since different operating systems are going to have different levels of support for overlay and other types of file systems where a whiteout really makes sense; and, considering we have the pull request open for macOS now, what's the right answer that's going to work nicely across three or more operating systems?
A: The ID-mapped mounts one: this is another flag you can set in the Linux superblock for ZFS, saying that it supports ID mapping, and this actually got merged three or four days ago, so that's done now and you have access to it.
A: There's an interesting older pull request that somebody at least got started on: porting Joyent's zone I/O throttling to OpenZFS. This would basically allow you, when you delegate a dataset or a chain of datasets to a container, to apply throttling to it, saying this container can only have this many IOPS, or whatever.
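Since that port is still an open pull request, there is no settled syntax; purely as a speculative sketch, if it surfaced as dataset properties it might look like the following, where both property names are invented for illustration and are not real ZFS properties:

    # Hypothetical, invented property names; the real interface may
    # differ entirely (e.g. per-zone configuration rather than this).
    zfs set iopslimit=2000 tank/tenants/c1     # hypothetical
    zfs set bwlimit=50M    tank/tenants/c1     # hypothetical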
A: There's also the case where you just have a bunch of datasets for different use cases or different departments, or just different NFS exports, and we want to be able to throttle those individually without having to make them into containers. And for straight throttling, where you can have at most this many IOPS, maybe with a burst, or this many megabytes per second: is that the right answer? How do we deal with the fact that certain types of I/O are more expensive?
A: If there is spare capacity in the system, we don't necessarily want to slow anybody down; we just need to deal with the case where we're oversubscribed. We want to make sure that we're being fair with how the I/O is being parceled out, and it kind of comes back to some of the stuff Matt was looking at with the non-interactive I/O.
A: How do we make sure that one user doing an easier type of request isn't able to cause a different type of workload to be pessimized, not even maliciously? Certain operations are easy, and ZFS will do those, batch them together, and get them done, and then the harder workload will suffer. Can we manage that?
A: So yes, we want to look at extending that concept from the zone throttle further, and maybe doing more than just hard limits, something a little more like quality of service, but in a way that doesn't get overly complicated. And what does the inheritance for that look like? Is it like quotas, where the limits of everything going up the tree should apply? And what happens if you have an inversion, where you've got the limit on a sub-dataset higher than the parent's?
A: And how do we decide what's allowed and what's not, and what would that look like? Another idea is a dataset that's read-only with another ZFS dataset mounted on top of it, somewhat analogous to a clone, except that you have a base dataset that doesn't change and then an empty ZFS dataset you map on top of it to store the changes. Doing that, but all within ZFS, means you don't have to deal with nearly as much complexity in the locking at the VFS layer as you do with anything like overlayfs or unionfs; instead it's all done in ZFS, and it could even be something that supports swapping out the bottom layer for a different one.
A: And then another idea I've had, based on Pavel's work: could we actually use something like BRT to reconcile a diverged clone and re-share some of the blocks, and do that without too much jumping through hoops, given that we're going to rewrite a bunch of the blocks but not actually rewrite them, instead replacing them with clones of, or BRT references to, the original dataset?
A: But then that raises obvious questions, which hopefully we can look at in the hackathon today: what happens when you end up with an empty directory, should we destroy the dataset, and how do we deal with all the questions that come up there?
A: Another interesting one: the zfs allow system is actually quite good, but there are some interesting things we could do with it. For example, a lot of people have expressed interest in an option where, instead of just zfs allow send, after which the user can send whatever they like, you could say: you can only send the raw, already-encrypted stream of this dataset. So for this unprivileged user that does my backups for me: I want them to be able to replicate my encrypted backup, but I don't want them to be able to get the plaintext version of all my files. Should that be a separate permission, and how do we deal with that?
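Today the send permission is all-or-nothing; a sketch of the current behavior with assumed user and dataset names, showing the two stream types a delegated user can choose between:

    # Grant an unprivileged user permission to send the dataset:
    zfs allow -u backup send tank/home

    # The backup user can emit the raw, still-encrypted stream...
    zfs send -w tank/home@monday > /backups/home-raw.zstream
    # ...but nothing currently stops them from sending the decrypted
    # form as well (assuming the key is loaded on the sending system):
    zfs send tank/home@monday > /backups/home-plaintext.zstream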
A: And currently, in order to resume a ZFS receive, you have to have the permission to release the hold and destroy the old clone that's hidden in the background, and generally, if I'm resuming a replication, I think it's weird that I require the destroy permission to do that. Maybe that specific part should have a different permission, or be included in the receive permission.
A: Another thing: if I want to allow a user to delete snapshots, they have to get the destroy permission, which means they can destroy child datasets too. What if I only want them to be able to delete snapshots? And do we want zfs allow to have even more things than it has now? The list of permissions you can give somebody is already more than a whole screenful; adding a whole lot more seems like it makes it pretty messy.
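Concretely, the granularity gap looks like this (names assumed; note that per the zfs-allow documentation, destroying snapshots also requires the mount ability):

    # Intent: let alice prune her own snapshots.
    zfs allow -u alice destroy,mount tank/projects/alice

    # Effect: the same grant also permits destroying child datasets.
    zfs destroy tank/projects/alice@old-snapshot     # intended
    zfs destroy -r tank/projects/alice/child         # also allowed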
A: Another problem I've had in the past is that as we add new properties, existing users that had permission to everything don't have permission to the new property, and that can break your replication scripts, especially if you're using -R or -p. So for some of the properties, should we have a meta-permission that's just "all properties"?
A: This person is allowed to read or write all the properties; that would be future-proof as new properties get added. Or maybe groups of properties, or something. The other one is that right now, if you delegate a dataset to a container, inside that container they can only see the datasets you've given them access to.
A: Might we want that for plain unprivileged users on the host too? Right now, if you're running as nobody and you run zfs list, you can see every dataset; that's been the default. We might not want to change that, but should there be some way to say these datasets aren't visible, or only these datasets are visible, or that if you're not root you can't see the datasets at all? And then another idea, which I think originally came from Joyent, is about what happens when you do delegate something to a container.
A: We have to expose all the parents of what you delegated. So if you delegated, say, mypool/customers/<customername>/containers/foo, they can see foo and everything under it, but they also see read-only versions of all the parents back up to the root of the pool, and they had the idea of basically creating an alias that would cover up most of this, to make the name a lot shorter.
A: Partly so it's less typing every time you run a command, but also because the customer doesn't need to see your whole organizational system for how you've categorized their stuff. Can we make some kind of alias to hide a lot of that complexity from the end user inside the container, so that they don't see as much of it or have to deal with it?
A: And lastly, I was wondering, in the case where you have a bunch of these different containers or tenants on a system, what about limiting how much of the ARC each of them can use, or at least being able to account for how much of the data in the ARC is there on behalf of each of those tenants?
D: When you delegate a dataset to a container, what happens at the host level?
A: At the host level it's still visible, but it can never be mounted once the zoned property is set, because inside the container the root user can control the mount points, and if you mounted it back on the host, they could have pointed it at /etc and overridden your password file or whatever. So they're visible but not mountable: once the zoned property is turned on, it's not possible to mount them on the host, but they do still show up in zfs list.
A: The property is per dataset, yes, and it's on or off. Once it's on, it's possible to run zfs zone with the namespace and the dataset and give it to them, but that zoned property is what protects the host from that dataset ever getting mounted, because the user it was delegated to could have set the mountpoint to /etc or anything else that could cause all kinds of havoc on the host. That's the protection that was inherited from the original Solaris design for this, and it's exactly the same on FreeBSD, except that zoned was renamed jailed; it's the same thing.
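The host-side protection is visible directly; a short illustration with an assumed dataset name (the exact error text varies by platform):

    # On the host, after delegation:
    zfs get -H -o value zoned tank/tenants/c1    # prints "on"
    zfs mount tank/tenants/c1
    # fails: the host refuses to mount a zoned/jailed dataset, since
    # its mountpoint is under the tenant's control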
C: On the ZIL records: you mentioned having systems that don't support them print that part of the ZIL was skipped. But that only tells me what was skipped on systems new enough to know about the record type; we can't go back in time to the older systems that don't know anything about this at all. They don't even know that they don't support it.
C: Something occurred to me: since the ZIL is replayed essentially atomically, in the sense that you don't just stop in the middle, and as things go forward you're going to go through every single record, why not just break it into multiple existing records, like two regular rename records?
C: Yes, agreed: use regular rename records for compatibility purposes, and then maybe have a preference for people who don't care about compatibility across multiple OSes, so that if I'm not giving anything up, it just uses the more efficient approach.
B: Actually, the ZIL does not quite guarantee that, in the sense of how far we replay: those records may appear in different ZIL blocks, and some of them may not reach the disk, or be corrupted or anything, and it will replay the first part, as much as it can.
A: Yeah, the risk with decomposing it is that it's supposed to be one atomic operation, and splitting it into two or three means you run the risk of it not turning out to be atomic. And then, for the files you were trying to swap, A and B: now A is suddenly called some name you don't control, and that would be bad.
B: It's the same situation when you're doing writes: for POSIX semantics of atomic writes, I want to be able to say that this 64-kilobyte write will be atomic, or writes up to some amount; but again, depending on how it's implemented, it may be chunked into several pieces, and you may replay one of them but not the other. Maybe we could introduce something for that.
D: There are some problems actually managing that, because sometimes I want to mount it from my host system, and we had some problems; I don't remember the details, but it's kind of tricky.
A: Yeah, it was quite a pain to unwind it all to be able to fix the ownership on the files, when we hit the case discussed earlier where the file wasn't owned by the user that created it because of the invalid mappings. I had to stop the container, fix the mount point, set zoned off, mount it, fix the files, and then set it all back up again, and that was a bit of a pain. I think on FreeBSD, mount -t zfs makes doing temporary mounts easy; on Linux it doesn't, since that seems to just end up calling zfs mount, which doesn't let you override the intended mount point as easily.
C: Maybe one last question. Oh, this isn't a question, it's just a follow-up to that: on Linux, if we were to pass zfsutil as an option, usually we can just bypass the mount.zfs helper and go straight into the kernel and...
A: Yes. If you need development or support for ZFS, you can reach out to us at Klara Systems. Thank you.