From YouTube: October 2021 OpenZFS Leadership Meeting
Agenda: Block Reference Table; Encryption minor format change; Hackathon ideas
https://docs.google.com/document/d/1w2jv2XVYFmBVvG1EGf-9A5HBVsjAYoLIFZAnWHhV-BM/edit#
A
Let's get started with the October 2021 OpenZFS leadership meeting — the last leadership meeting before the developer summit conference. Looks like we have a couple of items on the agenda. Pawel, if you're ready, why don't you take it away? I'd love to hear an update on the block reference table stuff.
B
So it's been a year since I presented the idea — actually almost exactly a year.
B
A sense of guilt, I guess, dragged me back to this project, because Matt and George spent a lot of time with me explaining some stuff, and I didn't want all of that to go in vain. I'm really happy that I recorded our session, so I could go back to it and get back to the project. The progress I've made is that the block reference table works.
B
The ZFS write-override trick works for this purpose. Currently the data structure is pretty much the same as dedup's, so that's something that still has to be worked on — I guess we want a more optimal data structure. But when we clone a file, I create a kind of log in memory that keeps track of the references I add to a given block and the transaction group it's happening in, in open context.
B
So we don't need an additional step in the ZIO pipeline for writes. We do need an additional step in the ZIO pipeline for frees. That's how it currently works.
B
I have a demo to show, but there are still a few remaining issues and a few interesting observations. For example:
B
It does work with dedup, so you can use both — there's one issue remaining with that. It works with embedded blocks; it doesn't work with holes yet — I get a panic. I'm not sure I understand how holes are created. I don't think we create a dedicated block pointer when we have a hole; I think it's handled in the parent block, but I haven't checked that yet. When I just copy the block pointer to the new file, I get a panic somewhere.
B
So I need to investigate that further. Let me take a look at my notes. The whole interaction with dedup is pretty interesting. Say we have a block that was cloned using the block reference table, and the same block also has an additional reference in the dedup table — so we have additional references in both tables.
B
So, for example, if we're happy with the dedup implementation and basically don't want to get rid of the dedup table, we'd like to have the BRT in front of the DDT: we keep the dedup references in that table, so regular writes can bump those references, but you can also use the BRT to, let's say, hijack or replace dedup.
B
We can simply clone a block, which creates a reference in the BRT table, and then, when we free the block, it will be freed in the dedup table. So we can shrink the dedup table this way: by cloning blocks we keep the references in the BRT table, and when we free, we get rid of a reference in the dedup table. Once the dedup table has no more references, we fall back to the BRT and start freeing BRT references.
B
Just an observation. Also, in terms of interaction with dedup, I was wondering how we can address — I'm not sure if you remember, but the BRT is not sendable. We cannot send information to the remote host that this block has additional references, because the block pointer is simply totally different. But we could send a hint to the remote server that this block has additional references, and the remote server could put this block into the dedup table.
B
That's the only idea I came up with so far to address sending, because BRT and dedup are, of course, very similar to some extent, but also very different.
A
Yeah — could you remind me how the BRT works? I remember it's like you're basically cloning files, and then there's some table that keeps track of the blocks these files all share, with a ref count. But can you remind me: is that table specific to the file, or is it—
B
So the BRT table is global, just like the dedup table. We are not cloning files per se; we are cloning blocks. Internally we could clone any block: we can go to a file, locate a block, and just clone that block into the same file, into a different file, or into a different dataset.
B
Because the table is global, we simply copy the block pointer to the new file and just update the physical and logical birth times of the blocks.
B
It's different — it's much, much smaller. In the block reference table the key is the vdev id and offset. That's a single vdev id, because it will be the same in every block pointer that we clone. And the data is only a ref count. That's it — and there's no entry at all when the block is not cloned.
B
So, to remind you: if you don't use the BRT, there is no cost. But once you start using the BRT, there is a cost on every free, because there is no special flag in the block pointer — we have to consult the BRT table for frees.
A
If I compare this with dedup — you could use dedup to do the same thing, right? Presumably you have some higher-level operation that's like, "hey, I want to copy this file, but make it fast," or "I want to copy this part of this file into this other file," or whatever. And you could do the same kind of thing using dedup: you go find all the block pointers, you bump—
A
You add the block pointers over here, you bump up the ref count in the dedup table. The downsides — or rather, the advantage — of the block reference table is that all the critical operations are basically the same kind of thing: they all have to look up in this potentially giant table, but the entry size in the table is smaller, and you never have entries for things with ref count one, which could potentially be huge. Correct?
A
Two things are going to be the same, so you dedupe them, but of course there's copy-on-write: if you write to one, then that one gets its own copy, like normal.
C
Okay, so yeah — I think the big benefit you talked about, Pawel, was, for example, copying a file from a snapshot back to the host file system, say to revert a change or whatever. Now you're referencing the version already on disk, so (a) it's faster and (b) it doesn't use any space. Compared to now: if you're reverting one file from a snapshot, you're actually rewriting that whole file.
A
I mean, that's what you're doing for your use case, right? You have some system call that's like, "hey, copy this file," and you're going and reading all the block pointers and then adding them to a table, correct? So you could do the same kind of thing with dedup, if you had dedup enabled.
C
Yeah, but you know, with the new syscall copy_file_range — that's in BSD, and I think it's in Linux as well — copying files between ZFS datasets, or moving a file between two ZFS datasets, could be a reference-only metadata operation, basically, instead of actually having to copy the entire file. That would speed things up and just make ZFS feel a lot nicer, too.
B
Yeah, unfortunately. But also, internally, as I said, I'm cloning the blocks — the interface I came up with for now, though, is a clone-file system call that basically clones the entire file. The reason for that is that with copy_file_range you operate on offsets, and here you cannot really operate on offsets.
B
You could integrate that with, like, a copy tool or something like that, to figure out where the block boundaries lie: basically clone where you can clone the whole block, and just copy the data when it's only part of a block.
A
You mentioned better data structures — were you thinking of something like the same basic idea of mapping from DVA to ref count, but being able to do runs of DVAs or something? Where, if the file happened to be contiguous on disk, you could say, "oh, this whole range — this whole gigabyte of the disk — has ref count two, because I copied the file and the file was laid out as this one gigabyte."
B
I'm most concerned with the cost on every free. So I wonder — we need some kind of data structure that can tell us quickly that the block is not there.
B
So, yeah. Right.
A
Something that can tell you whether a block is maybe in the table — at a cost of roughly 10 bits per block, if you want to have this in memory.
B
Yeah, so maybe something like that. But in the end we may need to limit the size of the BRT — just like you were thinking about changing the data structure for dedup so that it also fits into memory.
B
If we're going to do that — make the table fit into memory — then I think we'd probably want an additional system call for moving files between datasets, because that's also an interesting operation: when we just want to move a file between datasets, we bump the references, but only for a moment; then we remove the file on the source and we're just left with the destination.
A
Yeah, I think you could. I mean, the data structure you're talking about sounds like it has maybe three words — three eight-byte words — per entry: the DVA is two words, and then the ref count, so you're at 24 bytes total per entry. So, you know, you could do that, yeah.
A
It's got to all be in memory. Or you could do the Bloom-filter technique, and then instead of 24 bytes it's like one and a half bytes or something, keeping a Bloom filter that would tell you if it's possible that it's in the table. But then you have issues like, "oh, I've got to rebuild the Bloom filter every so often," and that's definitely annoying.
A
Say I want to clone some VMDK files, right, and I'm going to be cloning a lot of them; they're going to have lots and lots of random accesses and whatnot. In the former case it's so small that nothing really matters; in the latter case performance could get very, very bad if it evolves into being the same as dedup. I think you'd be better off saying this is a new thing — like the DDT log thing that was prototyped — where we can make more guarantees about the performance. You might not be able to BRT your thing if you don't have enough memory, but if you do, then the performance is going to be blazing fast; and if you don't like that, feel free to fall back on dedup.
B
One idea of mine to make the entries smaller was to keep separate tables per vdev, so we don't really have to keep the vdev id in the entry.
A
So instead of 24 bytes you're smooshing it down to — maybe you can get to 12 bytes or something like that. I think that's a good idea, but it doesn't fundamentally change the problem; it just reduces the size of the entry. So now your RAM is twice as effective, or whatever.
A
The hard problem is: I'm rm-ing this file, the file has a billion blocks, and obviously they're spread out over the entire pool. That's the problem that matters, I think. If you're looking at just one single block, then yeah, the performance of doing that one block doesn't matter at all. It's when you're doing a gajillion of them, and they're likely spread out over everywhere on the whole pool.
C
I guess that raises other questions: does it make more sense to try to group them by, say, birth time or something? Is there a certain ordering or structure we could use that would mean that more of the time we'd have to search less of the space? But everybody's use case and usage pattern is different, so I don't know that you'd be able to say that this big file you're going to delete will necessarily have most of its blocks' birth times close together, yeah.
A
Tricky. Versus, you can't do those corner cases like, "I just want to copy a file from a snapshot," or "I just want to copy a file — I'm going to move the file from one file system to another and there are no snapshots involved." Those are cases where you could get by with even less cost than the block reference table, because those are special.
C
But it sounds like the block reference table would work nicely for my idea of a kind of rebaseable clone, where you basically have an overlay file system: if you access a file that doesn't exist in the overlay, it goes to the underlying file system and finds the file there; but if you modify it, we have to copy the file up to the higher layer. If we can use the BRT to do that, then we get clone-type semantics — where, just because you change one block in this giant VMDK, the new file system doesn't need to contain the entire VMDK; it has to reference it, but it doesn't show up as used.
D
I'm wondering if there's a mode of this where, as the table starts to get larger, instead of just saying "okay, I can't clone a file," it creates entries that are temporary — so that I get the performance win of doing the clone, but eventually I'm actually going to copy and overwrite the blocks. I'm going to create a new file under the covers, but outside of the performance path, so that you—
D
—the last 10%: once I get to that point, it's like, I'm going to stick files in here, and kind of in the background I'm going to copy. In the performance path I give you a reference, but that reference is going to go away, so my table never gets big enough — instead of just saying "I can't do this."
A
You're doing something like maybe reserving some of your table space for these types of operations. Like, okay: I'm dedicating a gig of RAM to this, but an eighth of a gig is reserved for these allegedly temporary uses. And if you're mv-ing a file between file systems and there are no snapshots, then you could say, "oh, I would like to add stuff to the BRT."
B
But in the background you still do the copy, right? So from the user's point of view it's hard to predict whether you'll need, you know, additional—
B
I think the system call is not the place to do the copy. I think we should just use—
B
We might have a chance to put a flag on the call or something, so that if it can't clone, it fails outright — yeah, yeah — or, if not, it just does the copy.
A
I think you would just say — normally I wouldn't worry too much about running out of memory and having a reserved or super-reserved space or whatever. I would just say: you have this much memory, we're going to do our best with it. And then ideally add some observability tools, so people can see how much of the memory is being used — "oh, you're at 99%; maybe you want to be aware of that."
A
The ones that are kind of holding the references recorded in the BRT — maybe even have some kind of log. Because to do that for real you'd have to go iterate over all the metadata in the whole storage pool; but maybe you have something that says: here's a list of all the datasets and object IDs that have ever participated in the BRT.
B
Yes, that was also one idea I was considering: to store information about the source file somewhere. That could help us with the free case as well. We cannot change source block pointers, but maybe we could remember the object id somewhere.
A
People could think of it as: hey, if I'm willing to devote some amount of RAM to this, then I can get a decent performance boost for these kinds of operations and not have to worry about it tanking anything else. So I would design it more around: yeah, every free is going to check this, but it's all in memory — it's a big, giant, in-memory hash table, and I'm going to make that hash table really fast.
A
Maybe you do super-optimizations to make it really compact, so that you get really good memory efficiency, and you structure that hash table in some optimal way to get really good performance as long as everything fits in it. And then, if you can't put something in the hash table, that's fine — we fall back on copying that block.
B
Yeah. With trying to keep structures like this in memory, I always wonder: what if you want to import this pool on a system that has less memory, and you cannot really fit this into memory? We would probably need some kind of — okay, we will—
A
I think the issue is — what I fell back on was: obviously you wouldn't add anything to the table, and you could log decrements. But then you have the problem that you need to do a lookup even to know whether you should decrement it, so you don't want to have to fall back on logging every free — which does make this a little challenging.
B
Because in essence the BRT table could be treated like, I don't know, space maps or something like that — something we basically just touch in syncing context and update there. But yeah.
C
Because they're indexed by DVA, does something like the space map per metaslab make sense, where we have a limited size for each one — the amount of space it can cover, or whatever? I don't know that we'd have to load and unload them in quite the same way, but by limiting the size of each one, while we'd expect to have all of them loaded in memory, it's more tractable in the case where you don't have that much memory to only have the ones you're actually servicing loaded.
A
Yeah, potentially. It could just become very difficult, because the problem is every free needs to check: am I in the BRT or not? But in most cases, if we're going to have—
A
Right — because you'd be like: oh great, in one txg I've decided that I'm freeing these million blocks; they're spread out over the entire pool; they might or might not be in the BRT — probably 99% of them are not in the BRT — but I need to go do the lookup. So maybe I can only cache 10 out of my 100 BRT tables, and you end up with, on every free—
A
—of all the frees, I sort them, then: okay, these ones are in the first table; load that table, do the manipulations on it, write it back out, unload it. So basically every txg you might be loading, rewriting, and unloading the entire BRT, chunk by chunk, yeah.
A
Yeah, I mean, you can do it with one table by just iterating over it multiple times, right? The advantage — yeah, you can do that, but you're iterating over the whole thing while only building an in-memory table of, say, a third of it, because that's what you can fit; then writing that out, then reading the whole thing again, building the next third, et cetera.
B
And this loading on demand is also nice, because even if you have a giant BRT table, you may only be touching some datasets — but on the other hand, you're always freeing, so you probably need to load it anyway.
A
Maybe this is where you use a Bloom filter, where you say: look, if you don't have enough memory to load the whole thing, I'll read the whole thing and generate an in-memory Bloom filter. Hopefully you have enough memory for that — I mean, I guess if you don't, it's just going to be—
A
—smaller, and you'll get more false positives, right, because the Bloom filter can be some constant size. Then you look up all the frees in the Bloom filter; hopefully most of them say, "nope, definitely not in the BRT," so you don't have to do anything; and the ones that say, "maybe I'm in the BRT," you just log and say: yeah, that space is kind of leaked until you can load this on a machine with more RAM.
A
I mean, you're going to be in a very degraded mode; it's just a question of how you want to take that degradation. Either the performance of all writes is extremely awful, or it's "we can't reclaim some of your space until you load the pool on a bigger machine." I feel like the latter is more palatable.
B
Yes, we would definitely need to let the user know that this is the reason the performance sucks.
A
Yeah, yeah. I mean, the alternative of saying "you can do everything, but the performance of processing your frees is a thousand — a million — times worse, I don't know," versus saying, "we can process pretty much all the frees of things that are not related to the BRT, but you don't have enough memory for the BRT, so we're not going to do those frees right now."
C
That's super easy: here are two reviews that are ready for review. I saw, Matt, you looked at the vwn one yesterday — I have to go through that — but for the Linux namespace one, I think all the outstanding issues are covered, and hopefully that can land soon.
C
I think the only open question is: what if the namespace goes away and ZFS is still referencing it? It doesn't cause a problem for ZFS, but it means that if everything that was running in the namespace closes, the namespace goes away; and if somebody created a different namespace and they happened to get the same pointer from the kernel or whatever, then that dataset is now associated with this potentially unrelated namespace and can be viewed from inside there. I don't know that there's anything—
E
Quickly, since I didn't make it to the last meeting or the one before that — I was on vacation: there's that issue with encrypted ZFS and the dnode hashes, where, between macOS and illumos, there was an incompatible format change that happened during the ZFS encryption work. So anyone running an intermediate OpenZFS could potentially hit this as well, because of that incompatible format change.
E
Two bits, kind of — the idea is: one to say whether it's been quote-unquote "upgraded," and the other to indicate the state of what's there. So if it's zero, it's "okay, try both," so that what happens today on both platforms stays the same; but then, after upgrading, the other bit indicates the state. That way an operator can switch it at their convenience, versus: "oh, I imported it and it changed all this, and now I can't—" you know, "oh, you want to go—"
E
—"you want to go back? Well, too bad." You want to try to avoid that, and so that was the thought: use that same upgrade mechanism and basically use the bit to record what it is, so that both sides, when they see it, can just consult the flag to know what it should be — as well as being able to change it through a property. And then — oops, sorry, I hit my — moved my—
E
Yes, it went over to the hot corner and started to lock the screen. But anyway — so that way you could then set a property that would change which one is used. That's kind of the basic idea.
E
Just so that the current state on existing things stays the same, but anyone with the fixes on either side gets the upgrade, which would then just say: okay, that other bit tells which way it is, and the property reflects how it is today without changing it; and then they could go in and change it whenever they want. I just haven't had time to finish coding it up, but that was my thought on it. I don't know.
A
I think what you described — I think I understand it, and that sounds great; it sounds like a really good, robust solution. In my opinion, though, I'd probably be fine with a much simpler approach of just saying we try both checksums forever: you don't need any feature flag, you don't need anything — we just accept both of them forever. I think the security implications of that are essentially zero. Yeah.
E
Where it gets a little hairier is: which one do you use on the write side, then? Because if you're worried about compatibility with whatever your previous version is, that's where it gets a little messy.
E
Yes, that's true, yeah — and that's where it gets a little more complicated, because I know someone actually has quite a few datasets impacted by this, and so yeah. That's why.
E
Just a simple "oh, we accept both and then write out the new one" — that works; I tried that, and it works fine. But then all of a sudden, if you want to — you know, the—
E
Yeah, and that's why I was thinking: well, we could just have a property that reflects that, and when we do the quote-unquote upgrade, it gets set on the first import to reflect what it is. That way, like I said, on an older system they'll just ignore those bits and do what they do today.
E
But I just wanted to ask, since, like I said, it impacts not just illumos but macOS as well. That way everyone else will be able to get the fix everywhere, so that everyone will be able to interoperate on those datasets.
A
All right — can you give a five-minute demo, or is it going to take longer?
B
Okay, so I have a pool. The feature flag is already there — it's an early build, so the BRT is not in use yet. I have a single four-gigabyte file on the pool — so that's the file; it's of course a virtual machine, etc. — but just to show you the performance difference we may expect by using the BRT, I will first do a regular copy of the file, then export the pool to make sure everything is synced.
B
That's weird, because I was sure it did work — at least the refer was not increased. That's weird. But the difference I saw between this and dedup was that available space is decreasing, whereas in the case of dedup it doesn't decrease properly. That's—
A
Well, a couple of things. First, let's talk about the refer. I think the way you're handling the refer space is the way it has to be done, because imagine I have two file systems, and I do the copy, and then I do the copy into a different file system. So now there are three references. Now I'm going to delete some of them, and depending on what order I delete them in, I might need to change the space referenced by a different file system. If you're saying that when I make the copy it doesn't increase the refer — then I make the copy, it doesn't increase the refer; now I make the copy into another file system, and that presumably also doesn't increase the refer of—
A
You have to handle it the same way as dedup, where everybody gets charged, because we don't know when to uncharge people — and you don't want the surprise where I change my file system and then you suddenly get charged more. So instead we charge everybody the full cost all the time, and then there's extra magical free space. So dedup has some—
A
I forget how it works, but something that makes it seem like there's more space — and you'd want to do something similar, so that the avail doesn't go down the way it does now.
A
If you look at zpool list — yeah, yeah, they did it right: zpool list only has the eight gigs allocated. It's just that now you have a bunch of free space that can't be used by the DMU, which is why you need to do this hack on the avail space. You can look at how dedup does that — you'd want to do basically the same thing, I think, and there's some—
B
Yeah, there is some logic I still don't understand about dedup. There is some repair mechanism, and I'm not sure how it interacts with scrub and resilver — why is it there?
A
That might be a lot harder to do for this. I think you'd have to have some auxiliary data — like some bit in the BRT, or some other table, that tells you what's been scrubbed yet — because with dedup the scrub is done via the DDT, right? The DDT has all the info, so it's: well, we just iterate over the DDT, scrub everything in there, and then, when you're iterating over the indirect blocks, skip anything that has the bit set.
B
But yeah, we could, if you—
A
Yeah, all right. Well, this is really cool, and I'm always excited about things that make dedup less necessary. It gives me hope that one day we can deprecate dedup in favor of something that has slightly less — well, massively less — horrible performance. But that's probably a long way off.
A
Still, I hope so. Next meeting: the next meeting falls on the dev summit days — it's exactly four weeks from now — so I was going to propose that we cancel the next meeting, and we'll meet eight weeks from now.
A
In December, at the earlier time, 9 a.m. So we'll just skip this one, the November one. The conference is coming up in four weeks, right? I don't know about you guys — we're busy preparing talks. Looking forward to seeing you all there for the hackathon.
A
I really like the idea that was raised — I think it was last meeting, by Allan — about trying to focus on bugs. So if you have any favorite bugs that you'd like to work on, or would like to see fixed, let's try to collect those. There's a spreadsheet that I'll send out in email soon, or something like that, to try to track proposed projects for the hackathon, and we can put the bugs that we want to address in there.
A
The other thing was just finding those bugs — there's a huge backlog of bugs, so I think it'd be super helpful.
A
Oh — thanks, Allan, for posting the link. It would be super helpful to go over the backlog of bugs, and even if we aren't fixing them or anything, I think the big benefit we could get from that is finding the bugs that are really, really critical, so that they don't fall through the cracks — because I think that may have happened in some cases. Obviously it'd also be nice to close out issues that are no longer actual bugs, investigate stuff.
A
But those are all kind of nice-to-haves. The real big thing is: hey, let's improve the quality of the software by making sure we find and fix those really critical bugs — data corruption, or panics that can happen easily. I know we don't have much of a process for identifying those normally, but I think it would be really helpful to do that as part of the hackathon.
A
Cool. All right then, I hope to see you all in four weeks. Please register for the conference — the link is on the OpenZFS website. Register for the conference so that you can join the Zoom and participate in Q&A, and we will see you all in four weeks.