From YouTube: September 2022 OpenZFS Leadership Meeting
Description
Agenda: OpenZFS DevSummit talks; partitioning; encryption bug status
A
Welcome to this September 2022 OpenZFS leadership meeting. Not too much on the agenda this time, so plenty of time for other folks to add some agenda items. The one thing I had was the OpenZFS Dev Summit. Thanks to everyone who submitted talk proposals; we have some great talks lined up. Let me share my screen. So I have nine great talks here from a bunch of different members of the community, and we'll have the conference in person in San Francisco in just six weeks. So now would be a great time to register for folks who are interested in attending. We'll also be live streaming the conference, so folks who can't attend in person will be able to watch online, and we'll be posting the talks to YouTube after the conference, as usual. All right, that was the only topic I had. What other things would folks like to discuss?
A
Yeah, so I think it varies depending on platform, but my recollection is that, at least on illumos and Linux, when you specify a whole disk (say you do zpool create and then, on illumos, something like c1t1d1 with no s-number on the end, or on Linux, /dev/nvme1n1 without the s1 on the end), ZFS will partition it with one big partition and a GPT label. Internally, ZFS will also record that it was added as a whole disk, and then it'll let you do autoexpand and things like that on it.
C
And illumos, as I understand it, goes about creating the partition tables too, and my understanding is that on FreeBSD none of that happens. There is no option currently, on Linux at least, to not create the partition tables if you give it a whole disk. It will figure it out: if you pre-partition the disk and point it at a partition of your choosing, it will just use that, though of course without the whole-disk property. But yeah.
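The whole-disk versus partition behavior described above can be sketched with a couple of commands. This is an illustrative sketch only: the pool and device names are hypothetical, and it requires root and ZFS installed.

```shell
# Whole disk: on Linux and illumos, ZFS writes a GPT label with one large
# partition and records the vdev as whole-disk (enabling autoexpand, etc.).
zpool create tank /dev/nvme1n1

# Pre-partitioned: point zpool at an existing partition and ZFS uses it
# as-is; no partition table is created and whole-disk handling is skipped.
zpool create tank /dev/nvme1n1p1

# The whole_disk flag recorded for each vdev shows up in the pool config:
zdb -C tank | grep whole_disk
```

On FreeBSD, as noted in the discussion, neither form causes ZFS to create a partition table.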
A
…use case, you know what I mean? If there's an alignment thing, that's like: hey, look, just always make sure there's one-MiB alignment, man. I think that's kind of a no-brainer to just hard-code, and say: yeah, when ZFS partitions for you, we're going to follow best practices, which includes one-MiB alignment of everything, or whatever the alignment is that we should have.
C
That's
in
fact
exactly
the
behavior
today
it
will
do
at
least
the
linux.
I
don't
know
about
the
other
platforms
per
se,
but
on
linux
it
will
create
the
prediction
tables
and
it
will
make
sure
things
are
one
mega
lined
at
the
beginning
and
end
of
the
partitions
right,
because
it's
just
better
behaved
right.
There
was
some
discussion
about
adding
additional
flags
to
force
alignments
or
a
partition
size,
or
something
like
that
and
various
pull
requests.
But
again
it's
just
been
discussions.
C
Nothing's been merged there, because there were some use cases where, at least early on with SSDs, maybe you wanted to partition the disk but didn't want to use the whole thing for some reason; you wanted to just use half of it, to leave more room for wear leveling. But at the end of the day, you guys said there are better tools for that. Well, maybe not better tools, but there are non-ZFS tools for that, if you want to go down that route.
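For anyone who wants to verify the alignment behavior mentioned above, the partition table that ZFS creates on a whole-disk vdev can be inspected with standard tools. The device name here is hypothetical, and the commands require root:

```shell
# Print the partition table in sectors; on a ZFS-labeled whole disk the
# first partition should start at sector 2048 (1 MiB at 512-byte sectors).
parted /dev/nvme1n1 unit s print

# Ask parted whether partition 1 meets the device's optimal alignment.
parted /dev/nvme1n1 align-check optimal 1
```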
A
I think that the other devices, like cache and log, follow the same partitioning rules as the main devices. So it will partition them: if you give ZFS the whole device, it'll partition it on Linux and illumos, and it will not do that on FreeBSD.
A
…really, really know what I'm doing, just use this raw thing and don't try to partition it, or whatever; that would be fine.
B
I appreciate that; thank you so much for discussing this, because there's no one answer. It was encouraging and discouraging to see that OpenZFS has one of the few cross-platform partitioning tools, and maybe it needs to be ripped out and used externally, because it's cross-platform, unlike everything else out there. So kudos for that.
C
Do you know, Brian? There is a pull request with that work in it that was being pushed forward; I think "maybe not died, but definitely stalled out" is probably a better representation of its status. Let me see if I can dig up the PR number. I know a bunch of folks were interested in that.
C
I think it got to the point where it was largely working, but we were trying to work through the issue of making sure we could run the test suite, make sure everything behaved properly, and confirm it was behaving as intended, and I think we ran into some problems there.
C
So there are definitely some open, outstanding bugs in encryption that we've had people working on, but the code is generally convoluted, and it's been hard to get engagement from the folks who are actually really familiar with the code to get it tested. So there's been, I think what you're getting at, a bunch of lingering bugs with encryption that mainly, I think, revolve around send/receive and corner cases. So, yeah.
C
I think some of those issues are being worked on, but not in a very focused way. I don't think there's been a concerted effort, unfortunately, to run all those things down and get them fixed. Again, I'd appreciate any help in that area.
B
Okay. I brought up this topic a few months ago, and I took a bunch of notes from all the links people gave. I'm happy to either paste those in the chat or mail them to you directly.
C
There are definitely a couple of issues that we've seen on Linux with encryption and send/receive in particular, and there have been a couple of attempts at fixing them. There was one fix that actually went into one of the point releases that we ended up backing out, because we discovered some side effects that weren't originally caught. So we fixed the bug, then discovered that there was some other case that wasn't quite handled, and rolled it back.
D
Yeah, if someone does have a list of those. Because I think I mentioned on the Slack channel, there's one thing I know we see on the illumos side, which may or may not show up in OpenZFS. I mean, I remember when we first supported the encryption feature, we were held up for months by a bug that, it turns out, was just a lot easier to trigger on illumos but was still there.
D
But, you know, when you were doing a send/receive of what turns out to be an encrypted dataset, it causes some bogus ZIOs to be generated, which the metaslab code basically freaks out about, and it basically disables the pool until you export or reboot while you're receiving. And so, unfortunately, it's been extremely hard to make any progress on it, just because we can't get access to the crash dumps.
D
So we have to actually walk the person through the debugging steps back and forth, so it's a very slow process.
D
And with that one, we haven't been able to reproduce it internally, which makes it all the harder, because the hope was that if we could, then we wouldn't have to worry about not having access to the dump.
A
All right, if there aren't any, then our next meeting will be in four weeks, I think October 13th; it'll be at the same time as this, and that'll be the last meeting before the conference. Do I have those dates right? Oh, sorry: October 11th will be the next of these calls, and then just a couple of weeks after that, October 24th and 25th, will be the conference, and I hope that I see many of you there in person.
F
When it suspended because too many drives went offline or whatever, and when it comes back and you resume with zpool clear, sometimes you end up with files that were supposedly written before the pool suspended but didn't actually make it to disk before the disk went away, and ZFS doesn't seem to ever actually finish those writes.
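For context, the resume path being described is roughly the following. The pool name is hypothetical, and the commands require root and ZFS:

```shell
# A pool suspends when too many vdevs fail; once the devices return,
# zpool clear resumes I/O on the suspended pool.
zpool clear tank

# Persistent errors after the clear can indicate writes that were
# acknowledged by a disk's volatile cache but never made it to media.
zpool status -v tank
```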
F
I know those files are gone, yeah. Part of it looks to be because zio_flush doesn't propagate errors, and so if you did a bunch of asynchronous writes and then a flush, and the flush failed, ZFS just continues anyway; and while it doesn't make sense to retry the flush after we resume from being suspended…
A
Maybe, but I mean, even if you did that, I don't know. The problem is, I'm not sure that ZFS has any mechanism to reissue a write once the disk says "yes, I completed your write."
A
I don't know that there's a way to say: oh, but the flush failed, or this pool got suspended, or whatever, so let me reissue that write even though the disk said it was done. Because the problem is, we sent the write to the disk, the disk said "yep, I got your write," but it's actually in a buffer which…
A
Yeah, so a workaround, or maybe a real solution to this, would be: when the pool gets suspended and then resumed, or if the disk has certain kinds of errors that indicate its cache may have been lost, then we do a forced pool export, and you've got to reimport it in order to resume operation. You can't, actually; the code doesn't exist to correctly suspend, lose some of these writes, and then resume. But I guess my question…
A
I'm not sure if that's true. For example, ignoring the ZIL, if we're just talking about the main writes to the pool: that's writing the whole txg, and then at the very end it does a cache flush to the disks, and I'm pretty sure that those writes…
A
Yeah, I fear that it may be difficult to do the optimal thing of: oh, we did a bunch of writes, we did the cache flush, oh, the cache flush failed, or we got some SCSI error that indicates the hardware write cache is gone; great, well, just let me reissue all of the writes that happened since the last flush, which might be the whole txg. It might be hard to implement that.
A
Yeah, I mean, it's probably still in the ARC, but anyway, as soon as we…
A
You could say, because the flush is part of it, you know, the ZIL knows about that, and it could choose not to ignore the flush error, and we reissue the write and the flush.
A
However, if you don't also solve the problem of making the main pool writes persistent, then you would still potentially have corruption or data loss where the writes didn't actually take effect. That case might be less common, but having it not actually work seems not good.