From YouTube: April 2023 OpenZFS Leadership Meeting
Agenda: Block Cloning update; 2.2 release;
full notes: https://docs.google.com/document/d/1w2jv2XVYFmBVvG1EGf-9A5HBVsjAYoLIFZAnWHhV-BM/edit#
A: All right, it's one o'clock, I guess we should get going with this. Welcome everybody to the April 25th OpenZFS meeting. I see we've got a couple of things on the agenda today. Let's see, there's... he mentioned he wanted to go first, so if you're ready, maybe we can launch into that.

D: Yes.
D: Okay, so let me start. I will share my screen; I want to show you guys something and also talk about block cloning.
D: So when block cloning was merged to FreeBSD, there were some issues.
D: There was one data corruption with embedded blocks: we were overwriting eight bytes when we had very little data, so between about 60 bytes and 112. That is where we use an embedded block, and this specific region was being overwritten.
D: So that was a bit unfortunate. This was fixed, but there were some other issues that were found. Those were fixed as well, except for one, and this one turned out to be a bit harder to fix. It made me step back from the implementation and think about it more, because the dbuf life cycle is extremely complicated and there are a lot of corner cases that we need to handle.

D: This specific corner case was when we cloned a block and then wanted to partially overwrite the clone in the same transaction group. That, for example, is a totally different case from cloning a block and fully overwriting the cloned block; those are different cases. There are many corner cases like this, so I started working on adding tests to the ZFS test suite, and there will be a lot of tests.
D: Unfortunately. But I really want to test everything properly. I can show you, for example, some of the things I'm going to test, and this is really not the full list, but I want to test all the interactions. For example: the destination block is not cached, we clone it, and then we, for example, overwrite it.
D: Then we read it in the same transaction group, and then, after exporting and importing the pool, we read it again; and this is done in a loop. There are multiple cases like this, and basically I have four loops and try all those cases one by one, so this one loop is around 100 tests. Then we start from overwrite, then we clone and then read; this is a different case. Then we have clone, overwrite, clone again, and then read, and stuff like this.
D: So there are many cases that are not obvious, and this is, I would guess, like a tenth of the stuff I want to test. I have different tests for things like embedded blocks and holes.
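The combinatorial test matrix described above (clone, overwrite, and read in different orders, with the destination block cached or not, and with an optional pool export/import before re-reading) can be sketched as an enumeration. The operation names and loop structure here are illustrative assumptions, not the actual ZFS test suite code:

```python
from itertools import product

# Hypothetical enumeration of block-cloning test cases: each case applies two
# operations to a destination block that starts out cached or uncached, and
# optionally exports/imports the pool before the final verifying read.
first_ops = ["clone", "overwrite", "partial-overwrite"]
second_ops = ["clone", "overwrite", "partial-overwrite", "read"]
cached_states = [True, False]   # is the destination block already in the ARC?
reimport = [True, False]        # export/import the pool before reading again?

cases = [
    (cached, op1, op2, reload_pool)
    for cached, op1, op2, reload_pool in product(
        cached_states, first_ops, second_ops, reimport
    )
    if op1 != op2  # skip degenerate repeats in this sketch
]
print(len(cases))
```

A real test run would execute each tuple against a scratch pool and verify checksums after every read; even this reduced sketch yields 36 combinations, which is consistent with one loop of the actual suite being on the order of 100 tests.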
D: It changes a lot; we need to handle those cases, because normally, when the block is cached, we assume that we have the data, but if we have a clone, we already have a dirty record yet we have no data. So we need to special-case all those situations to read the data from the clone if needed. If we want to read a cloned block, or we want to overwrite or partially overwrite a cloned block, then we have to read it.
D: We have to change the dbuf state under it, dirty it again if needed, and stuff like that. So I spent a few days trying to understand the dbuf life cycle, and I think I am much closer to understanding it; not sure if fully yet, but I think it's much, much better, and I will be sending the pull request soon.
D: Once I finish the tests I want to publish this, and hopefully that will help. But another thing: block cloning was enabled by default, and this caused some problems, because people started to upgrade pools and there is no truly going back.
D: But I think, well, the problematic code was when we cloned the blocks. So what I would like to do, and what I implemented in FreeBSD, is a sysctl that turns off block cloning.
D: Even if the pool was upgraded and we cloned some blocks, we can still disable it, and in FreeBSD we will keep it disabled by default for now. But we can still free cloned blocks, so people don't have to worry about, I don't know, going through something like restoring the data from a backup taken before the zpool upgrade, and stuff like that.
D: So I think it's a good solution in case people already upgraded pools, and it's still good when people want to try block cloning: they can just enable it, try it out, see if it works, and disable it when they want. The freeing code, I think, is much less complex, so that should be okay.
D: So that's block cloning; not sure if you guys have any questions.
D: It's a FreeBSD sysctl, so it's only for FreeBSD for now; it was committed to FreeBSD. I'm sure there is an equivalent in Linux for configuration like that, but this was FreeBSD-specific and it's only in FreeBSD. I didn't send a pull request with this one yet, yeah.
A
So,
on
the
Linux
side,
this
is
one
of
those
things
we
haven't
wired
up.
Quite
yet
all
right,
so
I
don't
think
anybody
who's
used
the
black
cloning
functionality
there
yet
and
it's
not
wired
up
today.
This
is
calls,
so
that's
also
on
the
to-do
list
for
us
to
work
once
we
get
all
the
other
bugs
nailed
down,
but
it's
great
to
see
the
careful
test
Suite
being
put
together
for
it.
That's
excellent.
D
Yeah
I
think
I
underestimated
the
Buffalo
cycle,
initially,
unfortunately,
but
hopefully
now,
I
have
a
better
understanding
and
okay,
so
we
discussed
rate
limiting
I
think
on
the
last
call,
and
there
was
a
lot
of
pushback
to
to
implement
rate
a
meeting
to
be
hierarchical,
so
in
order
so
to
be
able
to
to
configure
rate
limits.
For
example,
some
like
a
single
limit
on
one
of
the
top
data
sets
and
then
basically
just
divide
this
limit
into
some
lower
data
sets
and
initial
implementation.
D
Didn't
do
that
and
and
again
I
had
to
but
I'm
grateful
for
for
the
motivation
for
the
pushback,
but
I
I
had
to
like
yeah
think
about
this,
some
more
because
one
one
of
the
concerns
was
performance
and
log
contention.
D
If
we
decide
to
do
this
because
what
we
want
is
to
work
for
this
to
work
exactly
as
quotas,
so
we
can
configure,
as
you
can
see
here,
higher
limits
below,
but
we
still
want
this
higher
limit
to
be
enforced
and
not
the
bottom
one
so
I
think
with
quota
it
works
the
same.
You
can
configure
higher
quota
than
your
parent,
but
still
you
have
to
you
won't
be
able
to
go
over
the
Department
Squad
and
so-
and
we
also
so
this
is.
This
is
prototype
of
the
new
design.
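The quota analogy can be made concrete: a child dataset may be configured with a higher limit than its parent, but every ancestor's limit still applies, so the effective limit is the minimum along the path to the root. A minimal sketch with hypothetical dataset names and limit values (this is not the OpenZFS property API):

```python
from typing import Optional

# Hypothetical per-dataset rate limits in bytes/s; None means "not configured".
limits = {
    "pool": None,
    "pool/foo": 10 * 1024 * 1024,      # 10 MB/s on the upper dataset
    "pool/foo/bar": 50 * 1024 * 1024,  # higher than the parent: still capped
}

def effective_limit(dataset: str) -> Optional[int]:
    """Minimum configured limit over the dataset and all of its ancestors."""
    parts = dataset.split("/")
    ancestors = ["/".join(parts[: i + 1]) for i in range(len(parts))]
    configured = [limits[a] for a in ancestors if limits.get(a) is not None]
    return min(configured) if configured else None

print(effective_limit("pool/foo/bar"))  # the parent's 10 MB/s wins
```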
D: It also includes using separate properties for all the limits. So now, in theory, we could do this in a more granular way; for example, include separate rate limiting for metadata operations.
D
That
would
request
that
I
didn't
want
to
pack
everything,
because
the
the
previous
implementation
had
like
a
single
rate
limit
property
and
every
all
the
limits
were
squeezed
into
this
one.
So
I
didn't
want
to
overload
that
I'm,
not
sure
I'm,
not
yet
convinced
if
we
want
to
have
separate
rate
limits
for
metadata
operations,
but
it's
definitely
now
much
easier
and
much
cleaner
to
do
and
with
this
new
design
and
the
performance
I
think
I
found
a
way
to
to
avoid
log
contention.
D
I
would
I
will
use
readlock
around
the
structures,
so
all
the
iOS
shouldn't
should
be
able
to
to
run
concurrently,
no
matter
on
which
data
set
you
you
work
on.
There
is
a
bit
much
more
work
to
do
because
we
have
to
update
the
structures
at
every
single
level
that
is
involved
in
rate
limiting.
D
So
if
you
configure
like
three
rate
limits
on
three
different
data
sets
going
down,
then
we
have
to
update
those
three
structures,
so
it
will
be
a
bit
more
expensive,
but
if
you
have
like
two
three,
it
shouldn't
really
matter
from
my
tasks.
If,
if
you
have
Freight
limiting
configured
to
very
high
number,
there
shouldn't
be
a
difference
between
having
create
limits
and
having
having
no
rate
limiting
and
having
great
limiting
configured
to
really
high
numbers.
D
Yeah
so
10
megabytes,
then,
if
I
read
from
like
bus,
I
will
get.
D
I
will
get
six
megabytes
right
and
but
if
I
read
concurrently
from
Buzz
and
Foo,
one
have
six
megabytes
one
half
eight
both
should
do
around
five
megabytes
because
the
higher
limit
is
10
megabytes
right.
So
let's
try
to
do
this.
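The arithmetic of that demo, where one child is limited to 6 MB/s, another to 8 MB/s, and the shared parent to 10 MB/s, can be sketched as a water-filling division: concurrent readers split the parent's budget evenly, and a child capped below its even share returns the surplus to the others. The dataset names and the division scheme are illustrative assumptions, not the actual OpenZFS algorithm:

```python
MB = 1024 * 1024

def share(parent_limit: float, child_limits: dict) -> dict:
    """Divide a parent rate limit among concurrently active children, never
    exceeding any child's own limit (water-filling)."""
    alloc = {c: 0.0 for c in child_limits}
    active = set(child_limits)
    budget = float(parent_limit)
    while active and budget > 1e-9:
        fair = budget / len(active)
        # Children whose own limit binds below the fair share take their cap
        # and return the unused portion to the remaining children.
        capped = [c for c in active if child_limits[c] - alloc[c] <= fair]
        if not capped:
            for c in active:
                alloc[c] += fair
            break
        for c in capped:
            budget -= child_limits[c] - alloc[c]
            alloc[c] = child_limits[c]
            active.remove(c)
    return alloc

print(share(10 * MB, {"baz": 6 * MB}))                 # lone reader: full 6 MB/s
print(share(10 * MB, {"baz": 6 * MB, "foo": 8 * MB}))  # ~5 MB/s each
```

This reproduces the expected demo result: a single reader from the 6 MB/s dataset gets all 6 MB/s, while two concurrent readers each get about 5 MB/s because the parent's 10 MB/s limit binds first.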
D: So that's that. And what we also gain with those hierarchical limits is that we can treat each limit separately: we could configure, say, read limits here and operation limits higher, or something like this. We could mix and match, which wasn't possible with the initial implementation.
D
Yeah,
so
that's
pretty
much,
it
I
think
this
prototype
works
pretty
well
and
I
think
it
meets
the.
It
means
the
feedback
from
the
from
the
last
call
any
questions.
D: So there will be lock contention going up when we update the structure; but when we go up and look for those structures, there is only a read lock to go through the entire hierarchy, because it can be complex, I guess.
C: So, for those who don't use any rate limits, there will be a read lock acquisition for every parent dataset, which is not exactly free, but...
D: But we need a write lock when we do renames, when we configure limits, and stuff like that, so those are rare operations; we could definitely use read-mostly locks for that. If we still have a little bit of time, we can try to do a simple test.
D
Okay,
so
now
this
is
basically
Spurs
file,
so
it
will
be
quick,
but
now,
let's
try
to
configurate
limits
to
some
High
number.
D
So
we
can
use
Terra
yeah,
it
should
be,
it
should
be
close,
but
of
course
yeah.
If,
if
you
have
like
complex
hierarchy-
and
we
have
to
update
like
we
have
a
rate
limits
configured
on
the
top
data
set,
we
will
need
to.
There
will
be
some
log
contention,
updating
the
structure
from
all
dials
coming
coming
up.
D
But
of
course,
even
if
you
have
all
those
like
a
single
data
set
and
all
the
all
the
files
in
a
single
data
set,
you
would
still
have
the
exact
same
log
contention
on
this
structure,
because
every
I
o
would
need
to
update
the
structure.
D: No, so actually I think there is a shortcut, because each dataset points at the rate limit structure of the dataset that has the rate limits configured; so, for example, this dataset has a pointer to this one. Sorry.
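That shortcut can be sketched as each dataset caching, at creation time, a direct reference to the nearest dataset (itself or an ancestor) that actually has a limit configured, so the hot I/O path does not search the hierarchy. The class and field names here are hypothetical, not the actual OpenZFS structures:

```python
class Dataset:
    """Toy dataset node that caches a pointer to the closest rate-limited
    dataset (itself, or the nearest ancestor with a configured limit)."""

    def __init__(self, name, parent=None, limit=None):
        self.name = name
        self.parent = parent
        self.limit = limit
        if limit is not None:
            self.rate_root = self              # enforces its own limit
        elif parent is not None:
            self.rate_root = parent.rate_root  # inherit the cached pointer
        else:
            self.rate_root = None              # no limit anywhere above

pool = Dataset("pool")
foo = Dataset("pool/foo", parent=pool, limit=10)
a = Dataset("pool/foo/a", parent=foo)
b = Dataset("pool/foo/a/b", parent=a)
print(b.rate_root.name)  # "pool/foo": found without walking the hierarchy
```

On a rename, or when a limit is set or cleared, these cached pointers would have to be rebuilt, which fits the earlier point that renames and limit changes are the rare, write-locked operations.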
D: Okay, thank you for letting me go first, because I have to leave early. So thank you, guys; take care.
A: Let's see, next on the agenda, what do we got? I'll probably hop back to the top here: an update on the GitHub Actions runners, right, and some work being done there by Tino.
A: Maybe not. So, just a little bit of background, since I know a little bit about this: there's been some work to, you know, stand up some additional CI infrastructure to get better test coverage on some other architectures, PowerPC, arm64, that kind of thing, and there are some resources available through OpenStack. I'm sure Tino can talk about it more, but the hope is to be able to take advantage of that kind of functionality and, you know, set up some new runners and just get better test coverage there, because right now we're still pretty x86-centric in the test coverage.
A
So
I
don't
know
about
any
more
of
an
update
than
that
other
than
as
a
work
in
progress.
So
maybe
maybe
we
can
Circle
that
back
to
that.
If
he's
jumps
on
the
call
later
so
something
I
do
know
something
more
about.
Is
the
upcoming
2-2
release?
Let
me
talk
about
that
for
a
little
bit,
so
we've
got
pretty
much
all
the
core
functionality
in
now
in
the
master
Branch
for
the
2-2
release.
A
There's
a
lot
of
good
stuff
in
there.
A
bunch
of
cool
new
features,
I
mean
we've
been
accumulating
them
for
for
quite
a
while
now,
so
the
plan
is
to
cut
a
2-2
release
Branch,
hopefully
in
the
next
couple
of
weeks,
the
sooner
the
better
so
expect
release
candidates
out,
for
that
shortly
to
start
stabilizing
and
testing,
they
should
be
in
good
shape.
A
They'll
just
be
branched
off
the
master
Branch
as
a
2-2
release
and
I'll
put
out
fairly
frequent
RCS
and
when
we're
go
through
a
couple
of
those
and
are
happy
with
the
stability
we'll
tag,
a
final
2-2
release,
I
think
things
are
in
pretty
good
shape
at
the
moment,
but
you
know
we'll
give
it
a
little
bit
more
testing
all
right
on
a
stable
release
branches.
So
if
people
are
available
to
help
test
that
and
make
sure
it's
really
completely
solid,
that
would
be
great.
A
Are
there
any
questions
about
that?
Maybe
I
can
address.
A: So, following the 2.2 release, or at least making that branch, the other noteworthy thing is getting in some major new features, and one of those, which you guys talked about last week (sorry I missed that), is getting the OS X support in. So once we cut that branch... I watched last week's call and I agree, it sounds like a good plan forward to get all those changes merged and stabilized, and that will be a good time to do it.
A
After
that
release
branches
cut,
and
then
we
can,
you
know,
just
move
forward
with
that
and
get
it
integrated
and
you
know,
stand
up
any
testing
infrastructure.
We
need
eventually,
but
it'll,
be
great
to
see
that
kind
of
stuff
get
integrated,
so
that
should
be
coming
up
too
I.
Just
wanted
to.
You
know
mention
that
briefly,
as
an
update.
A: The other thing here on the list, and I don't have much to say about it either, is RAID-Z expansion. I'm not quite sure what's going on with this one; maybe others have some more insight, but I know this is still, you know, work that Matt did that needs more code review and more testing, and it needs to be rebased. So there's still a bunch of work there to do to get it finalized, I know.
A
People
have,
you
know,
done
some
testing
with
it,
but
it's
still
not
quite
wrapped
up,
but
it
would
be
great
to
push
this
work
forward
and
get
it
rebased
and
tested,
because
I
know
this
is
a
feature.
A
lot
of
people
are
keen
on.
It.
Just
hasn't
quite
come
together
yet,
but
it's
a.
A: So I don't know if there are other things people want to talk about, or questions. I guess we've got some.
E: Brian, I want to touch base very briefly about the issue I opened last week, the PR for the deadlock issue. Yeah, given that you have the revert lined up, and it looks like Richard thinks it is his issue, are you going to go ahead and revert that for now?
A: Yeah, I think we should revert that, and whatever else may inadvertently depend on it that we didn't notice. All right, so I will try to take a look at that, or, I don't know, Richard might take a look at it too, or if you want to take a look at it, that would be great. But yeah, I re-ran ztest a couple of times and they're failing pretty reliably, but I haven't had a chance to pull down the exact traces and look at why.
A: I don't know of any other significant issues in the master branch, but if you guys know of anything, you know, open bugs or issues that are major, please let me know. We'll want to get those fixed and, you know, obviously get those fixes folded into a release, but I think we're in a pretty good spot for that.
A
Well,
I,
don't
think
I
have
anything
else.
Unless
there
are
other
topics
here,
it
might
end
up
being
kind
of
a
quicker
meeting
this
week
or
months.
A: Like, if we...

B: We fixed the other two over the weekend, and there's just the one where, when we cancel the zio and it invokes the done callback, it's expecting there to be a reference left and there wasn't, and we're just trying to debug that. But we solved, thanks to your help actually, the one where it was never actually getting into the force-unmount code, because the unmount happening first was hanging because of the suspension and wasn't doing the right thing. Yeah.
A: I think the fix you guys settled on there was a good idea, adding the additional axles. I was going to mention suggesting that on the PR at the time, but then I forgot, so I'm glad you guys ended up in the same spot. Yeah.
B: It was because we originally designed this on FreeBSD back in, was that, like, 2018 or 2019, where force-unmounting a file system is a thing you can just do, so it didn't have this particular problem. But making it work on both has always been a goal; it's just we hadn't managed to figure out where it was getting stuck. But now we have, and so, yeah, that's just that last one, and hopefully between Mariusz and myself we can get that sorted out very shortly. Yeah.
A: If we can get that merged, then I think we can branch the 2.2 release right away. Okay, please keep an eye out for it, folks. I know that's, as you said, a feature that's been worked on for a long time now, and I know a lot of people want it; it's on my short list of killer features too.
B: Yeah. Alexander, did you already talk about what we were looking at, or...?
A: I mean, I think a month, maybe, would be reasonable, if we put out RCs weekly or something like that; you know, we get whatever fixes we need in there. But if we cut an RC branch and let it marinate a month or so, I think we're probably in good shape, because the master branch has seen a lot of testing and it's pretty solid. And once we've got an RC, people again can jump on and kick the tires there, and it always helps build confidence.
A: I guess the only other thing on the release front worth mentioning is we put out a 2.1.11 release just a week or so ago as well, with a bunch of backports and stuff, so the stable-release kind of update; I expect that to roll out places.
B: Our homework. Great, yeah. That's, I think, the hardest thing about testing it: hardware always behaves differently than non-hardware, yeah.
B: Because, yeah, actually, the thing we wrote it for originally was for emulated disks, kind of an S3-like thing, like what the Delphix people were working on.
A: Well, if we don't have anything else, I guess we can call it a meeting, and you guys can get an hour or half hour back.