From YouTube: June 2022 OpenZFS Leadership Meeting
Description
Agenda: DirectIO PR; ZIO taskqs for multiple pools; multiple encryption keys
detailed notes: https://docs.google.com/document/d/1w2jv2XVYFmBVvG1EGf-9A5HBVsjAYoLIFZAnWHhV-BM/edit
A
All right, let's get started. Welcome to the June 2022 OpenZFS leadership meeting.
A
All right then, why don't we get started with Brian's update on Direct I/O.
C
Hey, so the last time I think anybody got an update on this was the developer summit, and the good news is I think the Linux side is complete now. First and foremost, I'm here to solicit reviews on the pull request.
C
Sorry, apparently my internet connection is unstable at the moment, and I don't know what the last thing you guys heard was, but there was one part we had talked about: doing stable pages. That was the concept of, say, we've taken a user's buffer, we've mapped or pinned it in the kernel, and we're going to directly write it out. Well, they could still change the contents of that buffer.
C
Unfortunately, after going down many kernel rabbit holes, it doesn't seem you can actually do that through the kernel. The main trouble comes in with the user's pages possibly being anonymous pages, which is commonly what they would be. That just means the pages aren't backed by a file or anything, so, like, if they malloc a buffer, fill it up, and send it down...
C
There's no way in the kernel to write-protect the page table entry of an anonymous page, at least no way I could figure out how to do so. That's very unfortunate. So what I wound up doing instead is I added a module parameter, specifically a zio direct write verify option, and what it does is, if it's enabled, it's another stage in the ZIO pipeline: right before it goes to say, okay, let's update the block pointer in the done function, because we've successfully committed this out.
C
If you get a checksum failure there, it's like, no, don't do that, this has failed. What I wind up doing is sending a zevent, you know, so it can be seen that way, and I also added a zpool status lowercase-d option, so you can see these checksum verify failures that happen there. But it's never committed to disk in that case. It's just like, no, something changed, so we're not going to do this anymore; it just bails out and returns EINVAL.
C
The good thing is that stable pages do work on FreeBSD, so unfortunately it's only on Linux that we're kind of stuck.
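The verify-before-commit stage being described could be sketched roughly like this (a toy model; the struct and function names are hypothetical stand-ins for the real ZIO pipeline code): recompute the checksum of the user's pinned buffer right before the block pointer would be updated, and return EINVAL if it no longer matches what was issued.

```c
#include <assert.h>
#include <errno.h>
#include <stddef.h>
#include <stdint.h>

/* Toy Fletcher-style checksum standing in for the real ZFS checksums. */
static uint64_t toy_checksum(const uint8_t *buf, size_t len)
{
    uint64_t a = 0, b = 0;
    for (size_t i = 0; i < len; i++) {
        a += buf[i];
        b += a;
    }
    return (b << 32) | (a & 0xffffffffULL);
}

/* Hypothetical mini ZIO: just what the verify stage needs to see. */
struct toy_zio {
    const uint8_t *user_buf;  /* the mapped/pinned user pages */
    size_t len;
    uint64_t issued_cksum;    /* checksum taken when the write was issued */
    int verify_enabled;       /* the module parameter from the discussion */
};

/* Verify stage: runs right before the block pointer would be updated in
 * the done path.  Returns 0 to commit; EINVAL if the application changed
 * the buffer underneath us (the real code would also post a zevent and
 * surface a counter via zpool status). */
static int dio_write_verify(struct toy_zio *zio)
{
    if (!zio->verify_enabled)
        return 0;
    if (toy_checksum(zio->user_buf, zio->len) != zio->issued_cksum)
        return EINVAL;
    return 0;
}
```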
A
The alternative would be to not require that they abstain from writing, or rather from storing, to that memory while the write is in progress, but the only way to do that would be to copy the data somewhere...
C
...else, exactly, yeah. So I mean, it's at least a safety net in there. And again, the cool thing is, I know one concern we had was: if a user does unintentionally manipulate it and it gets written out, there is the idea that, well, you have a checksum failure, you go to read it, it's like, hey, something's all wrong here, and there's no real way to correlate that back to what caused it.
C
I have it off for now, because I kind of view it as, well, you're doing an extra checksum on every write it runs.
C
Then again, they also added that zevent, so it's there as well. I put a test case in there to test this out for Linux as well as the FreeBSD side. Again, the good news: we can write-protect on FreeBSD, no problem, so I've added a case for that.
C
That seems to be working fine, and the only other thing that's kind of lingering at this point is this one FreeBSD error, which I'm also soliciting help for here as well.
C
For some reason, just with reads: if you're doing multiple direct I/O read requests one after another, the first one, when it goes to fault and hold the pages, seems to fail quite often, and I have no idea why. There's just a failure that comes up from vm_fault_quick_hold_pages on FreeBSD, and in that case, because it's like the Linux-compatibility FreeBSD code, we return EFAULT, which is what I do in that case; it percolates back up to the VFS layer, and then it sends it back down.
D
Can the copy be opportunistic? Like, can you detect at the time that somebody's actually making the second modification, and only do the bcopy then?
A
Yeah, I mean, I guess I have two questions. One is around what the default behavior should be, and the other is: if we are verifying by default, then is the right behavior really to give them EIO or EINVAL, or is it to just, like...?
A
Yeah, yeah. I mean, I'm sure that's more complicated to implement, but I guess one of the questions is which is the right data to write, the original version or the version they just modified? Either version is correct.
A
You could have the same checksum before and after, but in the middle it had some other value that is not the value that was checksummed. So maybe, given that, the EINVAL does make sense, because we can't guarantee correctness in the face of concurrent stores; we can just try to detect some...
A
...modifications. So maybe, given that, for the question of whether it should be on or off by default: maybe it's neither. Maybe it's probabilistic, and you say the default is we're going to check one percent of all of your writes, or some percent, so we're going to double-check some...
A
...you know, one percent of your writes, and then let you know if you're doing the wrong thing. The requirement still continues to be that you must not modify the buffer, but we're just going to give you a little bit of a double check. If you get checksum errors later on and you see that the counter was bumped, then you know that the application probably is doing bad things; and if the counter is not bumped, you don't really know for sure either way.
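The probabilistic double-check being proposed could look roughly like this (all names hypothetical): a percentage knob decides which direct writes get the extra checksum pass, and a counter records any mismatches so an admin can later correlate unexplained checksum errors with a misbehaving application.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical knob: verify this percentage of direct writes (0..100). */
static int dio_verify_pct = 1;

/* Counter an admin could inspect later: non-zero means the application
 * really was storing to its buffer while the write was in flight. */
static uint64_t dio_verify_failures = 0;

/* Tiny deterministic LCG so the sketch is testable without srand(). */
static uint32_t toy_rand(uint32_t *state)
{
    *state = *state * 1664525u + 1013904223u;
    return *state >> 16;
}

/* Decide whether this particular write gets the extra checksum pass. */
static int should_verify(uint32_t *rng)
{
    return (int)(toy_rand(rng) % 100) < dio_verify_pct;
}

/* Called on a mismatch: bump the counter that would be surfaced to the
 * user, e.g. alongside zpool status output. */
static void note_verify_failure(void)
{
    dio_verify_failures++;
}
```

Setting the knob to 0 recovers the "I promise I'm using direct I/O correctly" mode discussed below; 100 recovers always-verify.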
A
Yeah, because you don't have the cost of it being on all the time, but you also don't have the advantage. And if we have it off by default, no one will ever turn it on. The performance impact of doing it one percent of the time would probably be negligible, and that way you have at least some chance with people who are just using it naively. The concern is naive users, right? Naive users are like, oh great, O_DIRECT...
A
Let's go direct, that'll make things go faster, right? Everybody on the internet says use O_DIRECT to make things go fast. They don't know what they're talking about; they may not even know that you're using ZFS or whatever, but: I read some advice on the internet that says O_DIRECT means go fast, so I'm going to use O_DIRECT. And then, you know, a month later: oh, ZFS is just full of checksum errors, my disks are fine.
A
Everything is all broken, right? And so we want to have some kind of safeguard for those naive users, which is why I think having some kind of verification, maybe not a 100-percent-of-the-time double checksum, maybe it's one percent, maybe it's 10, I don't know, some kind of verification for those naive cases, so that at least then, if we get inundated with "ZFS sees unexplained checksum errors" reports, we can be like: hey, check this count.
A
Oh, the count of wrong double checksums is non-zero? Okay, you're using O_DIRECT wrong. So, yeah.
E
The question is how much absolute overhead you get with a checksum, because if you look at it in a latency histogram, then, say with a 50/50 split, you'll have one peak at the latency without doing the extra checksum and one peak at the other one. So if you're doing something latency critical, you're going to end up always assuming that you're checksumming, right? So on average, Matt, you're right, but if you care about the p90 or whatever...
A
I agree that that is true. I hope, and assert, that it is not important, but if people know that it is important, then definitely contradict me. They can promise to use direct I/O correctly and set it to zero percent, yeah.
A
Since, you know, the tunable doesn't control any kind of correctness behavior or what the expected interface is. It's just: you're always required to not do stores while you have a direct I/O write in progress, and we're just doing a little double check for you.
A
If you get it wrong, then what's going to happen is the write will succeed, and when you go to read it you'll get a checksum error. So: you told us to write this data, but you can't actually read it back. We're just trying to help you know when you might be in that circumstance.
E
Okay, the problem is ultimately that we don't attach to the block on disk, or to the block pointer that points to the block, whether the block has been written by O_DIRECT or not. If we had that bit of information, then we could surface the checksum errors in the correct way and differentiate between all that.
A
Yeah, and if we wanted to do that, I suppose the way we could do it is the per-dataset feature flag activation thing like we did for zstd, where we flag a dataset as soon as any block has ever been born that used direct I/O for write, and we'd be like: if you get a checksum error here, maybe it means something. But I don't know if that makes sense.
A
In my opinion, I think it'll be sufficient to do that, because, like we're saying, applications wouldn't have deterministic behavior if they were doing this concurrent modification today, so they are very likely not to be doing it, and we can just kind of assume that that is fine. But, you know, in ZFS we try to go above and beyond. We have checksums.
A
We assume that you have the checksums on, and so part of doing better than other file systems is that we're going to check and make sure and record.
A
I had another thought, but I've lost it. I think this approach seems good, and better than what other file systems are doing. Oh, the other thing I was going to mention: if we wanted to go even further, another thing that we could do to help folks would be to have a tunable...
A
...that's like: don't verify checksums on read. That would let you, if you ran your application and it did these writes with the concurrent modifications, and now you're like, oh, I'm getting checksum errors, my data isn't there, I'm so sad; we could say, well, you can set this tunable to just ignore the checksum error and give you the data that is inconsistent, which is just as good as what other file systems are doing.
E
Throwing this in: there is actually a practical concern. For example, if you use send and receive somewhere. I think if we encounter a checksum error doing a zfs send, then we stop the send, right? I don't know whether there is a flag to tell it to send corrupted data... there isn't... there is? Yeah, yeah.
A
Anyways, I think that's kind of not required, but it would be something nice, if this is even at all relevant. I mean, hopefully none of this is relevant and we're just going the extra mile beyond what is really required. But I think, especially given that you've already implemented this, the double-checksum checking and all that stuff, keeping it in and having it on, you know, one percent or a few percent of the time would be nice.
C
Okay, yeah, and the count makes sense; we can just set a default of, I don't know, 100 or something, yeah. And I'll have to go back and update the code, obviously, to increment the counter, but that's not hard, and change the module parameter. I mean, just to make sure it makes sense, yeah.
A
Like, does it make sense to do zero-copy direct I/O writes if you're doing compression at all? Maybe if you have compression on, we can just bcopy your data first and then compress it. My concern is, if you're concurrently modifying it...
A
...then I'm checksumming the compressed data, which is obviously correct, like the checksum will match. I write it out, I read it back in, I'm like, great, the checksum is correct, but then when I go to decompress it, the decompressor is like: this input is garbage.
A
Right, like, you could imagine a compression algorithm where it's building some table that's like: I noticed this string at this offset. And then later on...
A
...you know: is the string at the current offset the same as the string at that earlier offset? And...
A
And, I mean, maybe the output would be decompressible but just give you the wrong results, right? Yeah, and the checksum would be right, and you'd be like, great, it checksums correctly, here are the results, here's your data back. And it is no version of the correct data. It's not like we got your data as of some point in time; it's just completely mangled, because the compression and decompression, you know, lost its brain.
A
I think that we should just change that to say we always make the copy, and maybe the way to do that is, in this code path, rather than using abd_borrow_buf_copy, you use the abd_iterate function and do the copy ourselves. Then we definitely know that a copy was made, because we allocated the buffer...
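The always-copy idea could be sketched like this (the helper below is a hypothetical stand-in for an abd_iterate-style walk, not the real abd API): copy the user's buffer chunk by chunk into memory we allocated ourselves, and only then checksum and compress the private copy, which the application can no longer touch.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Copy the user's buffer into a freshly allocated private buffer,
 * walking it in chunks the way an abd iterator would.  Returns the
 * private copy (caller frees), or NULL on allocation failure. */
static uint8_t *copy_user_buf(const uint8_t *user, size_t len)
{
    uint8_t *priv = malloc(len);
    if (priv == NULL)
        return NULL;
    const size_t chunk = 4096;  /* page-sized chunks, like abd pages */
    for (size_t off = 0; off < len; off += chunk) {
        size_t n = (len - off < chunk) ? len - off : chunk;
        memcpy(priv + off, user + off, n);
    }
    return priv;  /* checksum/compress this copy, not the user's pages */
}
```

Once the copy exists, a concurrent store by the application can no longer desynchronize the checksum from the data that actually lands on disk.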
C
Yeah, but still, I mean, I'm glad we talked about all that, that was good. But the return buf could still trigger it also.
C
So yeah, I think those were the only major things I had with it, and obviously, in reviews, people can look at it and take it for a test drive. I mean, again, I feel pretty good about the Linux side; making these modifications makes 100 percent sense. And yeah, if we can just get anybody to look at that one lingering error; and again, I have a branch, my own branch, I can point them to. It's like, hey, run this; there's message output, at least, to show where the failure is starting to occur.
A
Yeah, thanks for bringing up these issues here. Hopefully none of this really matters, because applications aren't doing it, but I definitely feel better knowing that ZFS is doing the best we can with these kinds of tricky semantics. Yeah, absolutely.
A
All right, next agenda item: ZIO taskq scaling for multiple pools. Yeah, that was mine. So we have a tunable, zio_taskq_batch_pct, which defaults to 75 percent and defines how many threads we do the ZIO processing on; the bulk of that is compression and checksums and things of that nature.
A
So it defaults to 75 percent of your CPU cores, the idea being to leave some system time left over, so that if you're writing to a highly compressed dataset or something, your system isn't going to be unresponsive because all the threads are busy doing compression. The problem is we create that many task queues for each pool.
A
So if you had two pools, and say you were zfs sending from one and receiving to the other, you'd be using a hundred and fifty percent of your CPU cores for decompression and compression. I think the user was reading from a pool that was gzip compressed and writing to one that was zstd compressed. Or, in other examples, you know...
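The oversubscription arithmetic being described can be made concrete with a small sketch (zio_taskq_batch_pct is the real tunable named above; the sizing helpers are illustrative only):

```c
#include <assert.h>

/* The tunable from the discussion: percent of CPUs per pool's batch
 * taskq, defaulting to 75. */
static int zio_taskq_batch_pct = 75;

/* Worker threads one pool's batch taskq gets. */
static int threads_per_pool(int ncpus)
{
    int t = ncpus * zio_taskq_batch_pct / 100;
    return t > 0 ? t : 1;
}

/* Total workers across all imported pools.  This is the problem: the
 * total scales with the pool count, not with the machine, so multiple
 * busy pools oversubscribe the CPUs. */
static int total_threads(int ncpus, int npools)
{
    return threads_per_pool(ncpus) * npools;
}
```

With two busy pools that is 150 percent of the cores; with the seven-pool customer below it is 525 percent.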
A
...has seven pools; then that's a lot more threads than they actually have CPUs. Did you have a solution proposed for this? I was curious what other people thought the right thing to do there is. Like, with the dynamic mode that we have, we could try to lower the number of threads per pool when you add more pools, but sometimes people have some pools that aren't busy and some that are, and then how does...
D
I mean, that seems like the reasonable approach, but then I wonder about things like: when you have your pool and then a really busy pool, could you get into that deadlock scenario where the really busy pool is just starving out your pool?
A
I mean, there's work stealing and whatnot, but more or less it's FIFO, so I would think that the fairness would be reasonable. My concern is with dependencies, because right now, within one ZIO type, say reads, you can't have a read that depends on another read; you can have a read that depends on a write, or a write that depends on a read, because those are different task queues.
A
I guess the other question is: inside the task queue, does it have the information about which pool it's for, or does it avoid that because it knows the whole task queue is for one pool? Like, do we...
A
So, in the two different cases: one case was just a user doing the send and receive from one pool to another, and the system was basically becoming almost unresponsive because they were using 150 percent of the CPU cores for compression and decompression, because of two pools each using 75 percent. And the other case was the customer that has seven separate busy pools, noticing that they're starving each other for CPU time, or just, overall, we're creating...
A
But because that is a tunable you can only set before you import any pools, it gets hairy to try to adjust it as you import more pools and so on.
D
Yeah, and for a customer like that, did you get a sense for what they would like to see? Would they want all the pools to reduce their number of write threads, or do they want something like: oh, I have these three pools that are more critical, so I really want them to have more CPU than the other four pools?
A
In this seven-pool case, all the pools are of equal importance to them, but I can definitely see that being a thing for other people. Yeah, I'm almost thinking: should we have a zpool property that controls this, and then, if it's not set, we default back to the system-wide tunable or something? Does that make sense, or does that not make sense?
A
Yeah. And this is just the ZIO batch task queue; that's not even mentioning the read normal and read high-priority, issue and interrupt, and then all of those for write, null, ioctl, etc. Yeah, I mean, I think there's not an obvious solution here. You might try making it a system-wide taskq rather than per pool and see if that seems to work.
A
Yes. The other option is making that tunable more dynamic, so you can change it at runtime, and just lowering it: when the dynamic taskq code goes to ask, should I spawn more, it would instead say, actually, I'm over my quota, I should shut down task queues as they become idle, or before...
A
A great hackathon... yeah, it feels like a great hackathon-type project, even for either of those options, right? To go to a global ZIO taskq, or a global taskq implementation, and then one that allows you to adjust it.
A
Cool, let's go to the next topics, just so we can try to give five minutes to each of these other two topics. The next one: multiple user keys for encryption.
B
Yep, so I was looking at basically the user-facing documentation and whatnot, and it looks like the cleanest way to do this from the user's perspective would be to use the at notation on the properties and have multiple slots. So you would have keyformat@0, keyformat@1, et cetera, for slot zero, slot one, for different user keys, and obviously all that would be stored in the metadata, and we'd just wrap the encryption key multiple different times, which is similar to what we're doing now.
B
Just do it more times. And then you'd add a command, zfs add key, which would allow you to add an additional key; zfs remove key, which would prompt for a key, find the slot that is using that key, and remove it; or remove slot, which would just clear that slot. That's the high level of it. It's largely modeled off of what LUKS is already doing with their multiple key slots.
B
I believe they offer eight key slots. I was going to go with that unless I see a technical reason otherwise, but that's the high-level view of it.
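A toy model of the proposed slot scheme, assuming eight LUKS-style slots: one master encryption key, wrapped once per user key. XOR stands in for the real key-wrapping cipher, and the remove path here verifies the unwrap against the known master key where a real design would use a stored MAC or check value; all names are hypothetical.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define KEYLEN 32
#define NSLOTS 8   /* mirrors the eight slots LUKS offers */

struct key_slot {
    int     in_use;
    uint8_t wrapped[KEYLEN];  /* master key wrapped by a user key */
};

/* Toy wrap/unwrap: XOR is its own inverse, standing in for AES keywrap. */
static void wrap(uint8_t *out, const uint8_t *master, const uint8_t *user)
{
    for (int i = 0; i < KEYLEN; i++)
        out[i] = master[i] ^ user[i];
}

/* "zfs add key": wrap the master key into the first free slot.
 * Returns the slot index, or -1 if all slots are full. */
static int add_key(struct key_slot *slots, const uint8_t *master,
                   const uint8_t *user)
{
    for (int s = 0; s < NSLOTS; s++) {
        if (!slots[s].in_use) {
            wrap(slots[s].wrapped, master, user);
            slots[s].in_use = 1;
            return s;
        }
    }
    return -1;
}

/* "zfs remove key": find the slot this user key unlocks and clear it. */
static int remove_key(struct key_slot *slots, const uint8_t *user,
                      const uint8_t *master)
{
    uint8_t m[KEYLEN];
    for (int s = 0; s < NSLOTS; s++) {
        if (!slots[s].in_use)
            continue;
        wrap(m, slots[s].wrapped, user);  /* XOR unwrap */
        if (memcmp(m, master, KEYLEN) == 0) {
            memset(&slots[s], 0, sizeof(slots[s]));
            return s;
        }
    }
    return -1;  /* no slot matches this key */
}
```

The point of the structure is that every slot unwraps to the same master key, so data never needs rewrapping when user keys are added or removed.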
A
Sounds reasonable, I think, yeah. If you want to get feedback before it's ready for a code review, you could maybe open an issue or post to the mailing list with the man page changes.
F
And what can happen is that you can have an ARC write that happens and dumps out the buffers while something is going through arc_read; well, it's dbuf_read, which calls arc_read, but anyway. So you can be trying to do a read on one of the buffers while somebody has temporarily set it to null, and your life is very sad.
F
The b_pabd and b_rabd in the ARC buf header get set to null temporarily while going through the path of arc_write, arc_write_ready, arc_write_done, and they don't really check much before they do that; they just set it. It sets it to null in, I think, ready, and then fixes it up in done.
F
The two solutions that I have at the moment: I have something that refactors the arc_write code path so it doesn't temporarily set it to null. It's somewhat invasive, but it works.
F
The one I think I prefer is: the thing that does the decryption in place doesn't really actually need to reference that buffer, so I just added handling that notices if the buffer is null and, if it is, just allocates a temporary buffer.
D
What I'm thinking about is: if the header is already in a write, then how is it discoverable? It should be an anonymous header, right, and we would have released it as part of the write.
F
I can go into detail on how this happens, but basically, if you trigger the quota recalculation code that goes and dirties everything, there's a window where this can happen. I don't have a good explanation for why, honestly.
F
I would prefer a better solution than this, but I will take not panicking as a starter.
D
And then maybe I can provide some better alternatives, but yeah, I'd like to understand the scenario first. But I'm cool with, you know, if you're doing a temporary copy, if that's a solution to keep us from panicking, maybe that's what we need.
F
...your write-up, or point me at it. I just linked it in the notes. It's not everything, but that's most of it.
A
Cool, it sounds like we need to dig into this a little bit more, at least to understand it. Thanks, Sturge, for volunteering to take a look at this, Rich.
A
I think we're almost at the end of the meeting time. I wanted to just mention that we're still working on planning the OpenZFS conference. We're looking at dates; it looks like it might be in early November. We're still trying to plan an in-person conference, as folks indicated at the last meeting, and the opportunities are still available to help out. If anybody can help with the planning and logistics, that would definitely be appreciated.
A
So, looking forward to seeing you all in the fall, and hoping that covid will be calm then. In the meantime, the next meeting will be in four weeks, which is July 19th. We Delphix folks have a company event during that time, so some folks might not be able to attend. Brian, or Brian Behlendorf, would you be available to lead the meeting at that time, on the 19th?
A
Great, well, thanks everyone, and goodbye from sunny San Francisco, where it's reached 95 degrees on the solstice today, which is why I'm on a different camera: my camera overheated, so I had to fall back to the MacBook camera. So, have a great... enjoy your summer, and we'll see you again in four weeks. See ya.