From YouTube: June 2020 OpenZFS Leadership Meeting
Description
At this month's meeting we discussed: OpenZFS Developer Summit; Block Reference Table for file cloning; Fix for bug related to toggling "-L" flag to zfs send.
meeting notes: https://docs.google.com/document/d/1w2jv2XVYFmBVvG1EGf-9A5HBVsjAYoLIFZAnWHhV-BM/edit#heading=h.7pnof6czteeh
A: So the conference is going to be free, it's going to be online, and there will be registration. The registration is mainly so that you can get notified: we'll send an email blast to remind folks about it, and also to track who's attending. Folks who register to attend will get invites to the Slack channel and things like that. But more importantly, for today and for the folks who are on this call, we are calling for presentations.
A: We're still working out the details of the technology that we'll use for that, and kind of seeing what other conferences have done. But we've seen some things that worked pretty well in terms of combining live spoken audio with text for getting questions from the audience, so I think we'll be able to work out something that'll work pretty well. And one of the things that we would really like to do is to try to replicate the...

A: ...hallway-interaction feel that we have in person, because I know that's a lot of the value of the conference. The talks are great; we get a lot of info from the talks, and that's like disseminating information. But it's really great to see other people who are working on things, to be able to have those serendipitous interactions, or to overhear somebody talking about something that you're interested in and then be able to come up and listen in, and get more information about something that you might not have known about...

A: ...you know, that other people were working on, or that other people were thinking about the same way you are. So we'd like to figure out how to do that, and we're still kind of working on it. But if you have seen other things that work well at other conferences, let us know, and we'll investigate all the different technologies and modes that we have for doing that.
A: People on IRC, right? Oh yeah, okay; well, I didn't say too much after that. All right, I don't know what happened there; my internet cut out or something. Yeah, so we'd like to find some kind of technology that's not just a hundred people talking in the same channel on IRC, but something with kind of breakout rooms, where you can go find out what people are talking about in different rooms.
A: So, just to reiterate, we are looking for folks to give talks about what they've been working on with ZFS, or what they've discovered about ZFS, or how ZFS works. People really love those kinds of deep dives on, say, how does the ARC work, or how does the ZIO pipeline work, because those are really helpful both to people new to ZFS development and people who are just using ZFS, as well as to people who are more experienced but might not know the nitty-gritty of that particular subsystem.

A: So the website is live; we'd like you to let us know by July 20th if you're giving a talk, so you have about a month to think about it, and then the conference will be at the end of September. Any questions about the call for presentations, or the format, or anything else? We'll also have a hackathon, a virtual hackathon that we'll try to arrange, and then have kind of ad hoc presentations at the end of it online as well, on the second day.
A: The conference does still cost us something to put on, despite being online, so there are still sponsorship opportunities for this year. So, if your companies are interested in sponsoring, we would love to work with you. One of the things that we would like to do (we kind of started thinking about this last year) is to build up a bit of a buffer of funds, so that we aren't starting from zero every year.

A: That would allow us to be more confident about making monetary commitments to venues and things like that before all of the sponsorships for that year have landed. So if your company does have the opportunity to sponsor, it would really help to set us up for continued success.

A: Ideally, we'd like to be able to sponsor people to come and give talks at the conference in person, in future years when it's in person, who wouldn't otherwise be able to; people like students who don't have companies that are going to pay for their travel. And we'd also like to have the money to cover some smaller...
A: Well, I mean, we're open to using new kinds of sponsorship mechanisms. I know that GitHub has some kind of sponsorship mechanism that I haven't really looked into. I'm not that familiar with Patreon; I know it's used by a lot of artists, and I don't know if that proposal was made in jest or not, but...

A: You know, we don't want it to be the kind of thing where you have to give money in order to get your stuff considered, and that hasn't been the case at all in the past, so we would want to be pretty careful about that. But it's definitely a way for people to show their support, and maybe there's some kind of exclusive stuff that we could give people: some exclusive pieces of swag, or a special chit-chat call with the developers, or something like that.
A: Sure, I'll mention that. For folks who don't know, OpenZFS is part of Software in the Public Interest (SPI), which is a fiscal sponsor, so they handle all of our finances. They are a 501(c)(3) tax-advantaged organization in the United States, and we accept donations from the public; there's a link on our web page.

A: Yeah, so donations of all sizes are welcome and appreciated and helpful. I do kind of like the idea of ongoing sponsorships, which is kind of the Patreon model, as opposed to what we have now, where you can go out there and click a button and give us X dollars as a one-time deal.
E: Can you hear me, guys? Yep? Okay. So I've been pondering this idea, which is not new, but many people think that it should be pretty easy to implement for ZFS, and it just feels like ZFS should support something like this. You may know this functionality from Linux, where there is reflink for making a copy, or the clonefile system call on macOS.
E: Of course, large-file cloning is an obvious use case: you can easily clone VM images or other large files like that, and it should be a very fast operation that takes almost no additional space. You still have to pay for the indirect blocks, but that's, of course, a much, much lower cost than the actual data blocks. Another interesting use case is being able to recover files from snapshots without paying the price in additional space.

E: But the problem is, of course, that if we have a data block on the disk, and we have a block pointer pointing at this block, we cannot have something like a reference counter in this block pointer, because of course we cannot modify a block pointer. So my idea was to create something similar to dedup, but...
E: Bear with me before you start not liking the idea. So what's the difference between this and dedup? With dedup, every block that we write goes into the dedup table. That's one of the problems, and the large cost, of dedup: you cannot have it always enabled, because every single write creates an entry in the dedup table.
E: Yeah, so one of the ideas you could think about (and I'm sure it was discussed) is to optimize the DDT by removing entries that have a count of 1 and are older than some threshold. Of course, there's always some chance that another matching block will come along and you could have reused the entry somewhere in the future. But going back to the block reference table: yes, with dedup we pay the cost of a DDT entry for every single block we write; even if the reference counter equals 1, we have to pay the price.

E: Another problem is that in the dedup table the key is the checksum, so we cannot really sort it by location. The dedup entries are scattered through the whole table, and we have to go look one up whenever we write a block that can reference a DDT entry. But with the block reference table...
E: You only create entries when the reference counter is greater than one. So you have your block of data, and you have a block pointer pointing to the data, and we cannot modify that. But once you call this clone-file-system call, or whatever you want to call it, we read the existing block pointers, we create new block pointers by just copying the data from the original block pointers, we store them in the new file, and we create entries in this block reference table. So an entry is only created when the block is actually cloned.
E: When you free a block, we cannot really tell whether the block has more than one reference, so for every single free we have to consult the table. In dedup it's different, because every single write goes to the dedup table, and every block pointer that points to a block which is in the dedup table has the dedup bit set.
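As a toy sketch of that bookkeeping (my own illustrative model, not the proposal's actual on-disk format): an entry exists only while a block has more than one reference, and every free consults the table.

```python
class BlockRefTable:
    """Toy model of the proposed block reference table (BRT). Blocks
    are keyed by location; an entry exists only while the block is
    referenced more than once."""

    def __init__(self):
        self.refs = {}  # (vdev, offset) -> total reference count (>= 2)

    def clone(self, blk):
        # Cloning adds one reference on top of the original block pointer.
        self.refs[blk] = self.refs.get(blk, 1) + 1

    def free(self, blk):
        """Return True if the caller should actually free the on-disk
        block, False if other references remain."""
        count = self.refs.get(blk)
        if count is None:
            return True            # never cloned: free immediately
        if count == 2:
            del self.refs[blk]     # back down to a single reference
        else:
            self.refs[blk] = count - 1
        return False
```

Unlike dedup's D bit, there is no per-block-pointer marker here, which is exactly why every free has to do the lookup.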
E: Another fortunate property is that we actually can sort those entries: because we reference by the vdev and offset, we can sort those entries on disk. So if we clone a file, it is likely that all the entries are nearby, and we may get all the entries by just doing a single read, yeah.
E: So, for example, on the idea of trying not to pay the price on every single free: one of the ideas would be to have a toggle on the dataset saying that this dataset supports block referencing. This would allow us to put, let's say, a bit into the block pointer which tells us that this block was created in a dataset that supports block references, and then on the free we check whether this bit is set; if it is, then, and only then, do we consult the block reference table. But of course...
F: You could do that more easily using the per-dataset feature-flag thing, like we do for "this dataset has born some blocks that used SHA-512" or whatever, so that you wouldn't even have to do it in every block pointer; just "this dataset has used reflink", so every free from this dataset has to consult the table.
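That per-dataset gate might look something like this (a sketch of my own; the feature name "brt" is hypothetical):

```python
def free_block(dataset_features, brt_refs, blk):
    """Free one block; return True if the on-disk block is really freed.
    'brt' is a hypothetical per-dataset feature flag: a dataset that has
    never cloned a block skips the table lookup on every free."""
    if "brt" not in dataset_features:
        return True                     # flag never activated: free now
    count = brt_refs.get(blk)
    if count is None:
        return True                     # clone-capable dataset, but this
                                        # particular block wasn't cloned
    if count == 2:
        del brt_refs[blk]               # last shared reference removed
    else:
        brt_refs[blk] = count - 1
    return False
```

The win is the first branch: the common case (datasets with no clones) never touches the table at all.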
E: Yes; whenever we clone a block, for both the source and the destination dataset, we would have to mark the dataset as, let's say, contaminated, and then from now on... Well, we could... I would like to discuss possible optimizations, because that's the biggest concern, but let me just finish, because there is something that I didn't expect, and it wasn't obvious to me initially: there is no way this could survive ZFS send and receive. With dedup, we send the block, we recalculate the checksum, and we can dedup it.
A: Okay, the only way... yes, you could. It does seem not great to have to require this post-processing, which would be very expensive, right? Because the post-processing would basically be generating a dedup-style table of all the stuff that you think might have data in common, and then when you find something that...
A: But to get kind of full functionality, because you're saying that you want to be able to use it not just for cloning files within one file system, but for cloning files across file systems, or, after the fact, doing what dedup does, where we go and find anything anywhere in the storage pool that's the same as anything else and then basically add those to the block reference table: to do that, to get a similar result on the receiving system, you would have to do this big...

A: ...global post-processing, right? Like, I could have a block that was just received which has the same contents as something in some other file system that seems to be unrelated, so you have to go look at everything: go read all the block pointers in the whole pool and put them into a giant table. So this...
D: I think the thing that terrifies me is that, with the way it sounds, if you experience a problem, if you get yourself into the state where it doesn't fit in memory anymore and now you have spilled to disk, you would think, okay, well, it's okay, I'll just clean it up and turn it off; but then the cleanup in that failure mode is extremely expensive, takes possibly forever, and probably affects pool operation. That's the worrying thing.
F: Yeah, you know, building it with a quota to start with doesn't seem like a bad idea. The advantage this has is that if the data structure is more sorted, it'll be a little easier to load the range you're trying to free, rather than each free being a random read from somewhere in the data structure.
E: When you compare it to dedup, the entry is much smaller, like five times smaller. But if you think about it, it's effectively another two times smaller when you go further, because the worst-case scenario for this is that every single block in the pool is referenced twice, right? That's the worst-case scenario. In dedup...
E: ...the worst-case scenario is that every single block is referenced once. So by definition this table, even if an entry took the same amount of space as a DDT entry, would already be half the size of the dedup table, because a block referenced only once cannot get into this table. So, with this in mind, the in-memory cost is around ten times lower than dedup. It is still a cost, I'm not denying that, and...
E: And I recognize that you can get past the point... Of course, you are more aware of what you are doing: with dedup, the blocks are just written and you have no idea that the table is growing, while here it is a more conscious process, because you clone the file. But still, you don't expect the cost on free; that is not intuitive. But also, moving files between datasets?

E: That's practically free, because when you move blocks between datasets, you only create those entries temporarily, and you free them once you move the file. So if you don't clone a file but just move it between datasets, that's free. But that's, of course, a small win; we want to address the bigger problem.
F: So, when the kind-of-nextboot feature was originally created, it was named nextboot, but that actually conflicts with the name of an existing feature in FreeBSD, which is mostly a loader feature that lets you choose a different kernel or something, rather than the dataset. So we would like to change the name to bootonce, because we will also be adding support for nextboot, and it would be confusing to call the one...
B: About a year ago, the topic came up of how to deal with properties that are actually OS-specific; sharenfs was one example that was given. The other, very similar, example I use is mount options, because that's my particular passion. I created a PR, and other than some nits about which characters to use, it hasn't gotten much feedback. Is this still something people care about?
A: ...of volunteer effort. It doesn't necessarily come to the top of mind when it should, or when it could, so I think kind of weekly reminders or check-ins; it can be really lightweight, just send them a message on Slack or an email. Or send me a message; I'll do that today.
A: The last thing that I had was something from last week; this will hopefully also be short: the send capital-L large-blocks bug. I think we meant to talk about this last month; the fix has since been integrated. For folks who aren't familiar, you can read more about it in the pull request, but there was a bug where, basically, if you have a file system that has blocks of more than 128K (you can change the recordsize property to be more than 128K)...

A: ...then, when you send those, you really probably should always have been using send with capital L to send those large blocks. If you don't use capital L, then the send tries to split the large blocks into smaller blocks, which was kind of a bad idea that I implemented back in the day. And this resulted in the bug where, if you are using send and you have large blocks and you switch between sending with capital L and without capital L, so you switch between actually transmitting large blocks and splitting up the large blocks, or vice versa...

A: ...then you could get the wrong contents on the target: it would incorrectly zero out that file's contents. This applies when you're updating the file incrementally; so, say you have an existing file and then you're changing some part of it; then it could zero out the other contents of it. So the fix...

A: The main thing that people need to be aware of is that now, if you are switching from sending large blocks with capital L to not sending large blocks, without capital L, then the receive is going to fail. So, rather than giving you the wrong data on the receiving system, it's just going to say, no, you can't do that. If you switch from not using capital L to using capital L, then the receiving system will now handle that correctly.
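A tiny sketch of the post-fix decision as described above (my own model of the behavior, not the actual code): dropping -L after a large-block receive is refused rather than risking wrong contents, while adding -L later is now handled.

```python
def receive_incremental(fs_received_large_blocks: bool,
                        stream_has_large_blocks: bool) -> str:
    """Model of the fixed behavior for an incremental receive into a
    file system that previously received large (>128K) blocks."""
    if fs_received_large_blocks and not stream_has_large_blocks:
        # Splitting blocks that were received whole used to silently
        # corrupt the target; now the receive fails instead.
        return "fail"
    return "ok"

print(receive_incremental(True, False))   # -> fail
print(receive_incremental(False, True))   # -> ok
```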
A: That is a great question. The reason that we don't want to do that is that it could exacerbate the problem for existing systems. If all along you've not been using the flag, and then all of a sudden we make the default be to use the flag, then we put you directly into this case where you're going to hit the bug. Now, if the receiving system has been upgraded, then it'll know how to do this and it'll handle it properly.
A: But if you upgrade the sending system and you don't upgrade the receiving system, then we've basically automatically triggered this horrible bug for you, so we don't want to do that. But I kind of paved the road to being able to make capital L the default, or the only, case by adding a new send stream format flag. Basically, this will allow us to say that, when we do switch to this new default, you can only receive those streams on newer systems. So, you know, at some point...

A: ...next year, or a couple of years from now, or whatever, depending on releases, we can change the default to be to send the large blocks, and there'll be some rule like: well, that's the default, but you're not going to be able to receive those on those ancient systems from pre-2020. To handle that correctly, the receive will give you a message, and then you'll be able to say "split up the blocks into small blocks" or whatever, to go back to the old behavior.
A: So, yes, I agree with your sentiment. I would love to get to that world where that's the default; I wish now that I had just done that from day one. But I think that this change will let us get there, once we can be confident that we aren't going to inconvenience a huge number of users, by either triggering the bug for them or making their receives that used to work fail.

A: That's the idea: today (and this is kind of why I did this functionality to begin with), if you just use zfs send and you don't give any options, it's giving you the base send stream format that can be received on the widest variety of systems, and the idea is that, yeah, in the future...
A: I mean, we could definitely do that; those things are much more straightforward. We could look at that today, because, like, turning on embedded block pointers or compressed sends or whatever: if you do that, then the receiving system has always correctly detected whether it can handle that or not, right? So it's not going to trigger any horrible bugs either way; it's just a matter of, you won't be able to receive it on systems older than whatever, 2015. So, yeah.

A: We probably kind of mis-designed this, or we did not have an eye to future enhancements, and I think that, ideally, we probably should have made it so that the send uses all the features by default, and then you can opt down to whatever your receiving system is going to have; kind of similar to what we've discussed for the pool format, with the "compatible with the feature flags of 2019" idea, or whatever, where we default...
F: It was definitely helpful during the transition, like when we went from v15 up to v28 and the beginning of feature flags and stuff. It was definitely helpful that it didn't default to doing that, in particular so that the script that manages the replication doesn't have to try to specify, you know, the backwards-compatible flag that the older system doesn't know exists, or something, at the same time.

F: You know what I mean? Like, if you've configured a replication tool, like zrepl or something, or maybe a simpler one like zxfer, where it's going to call the zfs command line: if it has to figure out whether your version is new enough to know about this new backwards-compatible flag, then that can get... well.
A
I
mean
if
you're
reading
one
of
those
tools,
then
presumably
you
would
be
smart
enough
to
think
about
this
and
you
and
say,
like
minimum
features.
You
know
to
get
the
behavior
that
we
have
today
or
say
you
know,
compatible
version
equals
2010
like
I.
Don't
really
care
about
people
receiving
this
on
systems
from
before
that
are
more
than
a
decade
old
right,
so
giving
people
the
option
of
those
kinds
of
things
is
what
I
would
like
to
see
and
then
we
can.
We
can
bike
shed
the
default
to
death
like.
E: My concern would be for people who are not using ZFS send and receive directly, but actually putting the streams aside. Because this is what we were thinking about: we wanted to send ZFS streams to a file and store them somewhere, just doing regular full backups and incremental backups and keeping them somewhere, because we cannot rely on the receiving system actually having ZFS, and we still need backups. So that was our idea; but then we would only learn about the problem when we tried to restore.
F: Single-threaded reads are a lot slower with it, because they have to decompress. Well, it's mostly because you don't start decompressing until you request each block, so there's no decompress-ahead. There's...

F: So, yeah, prefetch would have decompressed blocks into the ARC for you, I bet. In general, you're still getting gigabytes per second; it's just that, in the one benchmark I did in the PR, it meant three gigabytes a second instead of eight gigabytes a second, reading from files fully cached in the ARC.
C: That would make me think that being able to disable it is useful, because I certainly know people who design their working sets to fit in the system RAM, with the understanding that they're doing it for performance; or maybe a better way of putting it is that they design their system RAM to fit their working set, I'm not sure. But yeah, it's a two-and-a-half-x difference; like, I could totally understand those people wanting to shut it off. Although...