From YouTube: March 2022 OpenZFS Leadership Meeting
Description
Agenda: compression; review requests
https://docs.google.com/document/d/1w2jv2XVYFmBVvG1EGf-9A5HBVsjAYoLIFZAnWHhV-BM/edit#
A
All right, let's get started. This is the March 2022 OpenZFS leadership meeting. I want to start off just by saying that I've been following the news in Ukraine, and I know that there are a number of developers in our community from Ukraine and from Russia. I just wanted to say, on a personal level, that it's a real shame what's happening there, and I hope that all of you and your families are safe.
A
It's a real shame, and I hope that you're all safe.
A
All right, on to OpenZFS business. I saw there were a couple of things added to the agenda. Rich — I don't see Rich here. Rich has an update on compression that he put in the agenda; it looks like he's working on modular compression versions, which is cool.
B
He and I were working on LZ4 on by default as well, which was a minor change, although it seemed to tickle a bunch of things in the test suite that he had to poke at. I don't know the status of that off the top of my head either, but I know that was another area he was poking at.
C
Yeah, I was looking at that myself. That actually came along pretty nicely. It ended up being a lot of little bits of test suite fix-up, but I think it's in good shape now. There's a pull request open, and it's just these final reviews, probably, and it should be good to go in.
B
The first one was a small one that a customer did, I think, back in the summer. It's basically a startup service script to be able to load and unload encryption keys automatically at boot for their database and so on, and I think it was just a matter of that ending up in the contrib directory, or wherever startup scripts normally go.
A
Let me see — we're a little light on attendees today. Let me see if I can check who needs to take a look at these; I'm bringing them up.
A
All right, so the ZFS keys script.
A
I can ping Tony on doing another review, or finding another reviewer.
A
The write throttle smoothing — I should probably take a look at this, since I wrote a bunch of the write throttle stuff way earlier.
A
I haven't really had a chance to look at it at all, but I will put this on my short list. I love your little graphic there — the graph.
B
I think Alexander's point on that one was that he was surprised that the amount of dirty data goes back down very sharply, all at once, rather than gradually as things are flushed out.
B
So once the transaction group completes, the dirty data drops down sharply all at once, and that seemed to be, I think, maybe the root cause of why we would see the VM being able to write to ZFS faster than the disk could possibly go, up until we hit the brakes because we were running into the dirty data max. Then we'd see a stall for a couple of seconds where basically the VM could do no writes, and then it would jump back to being fast, and then a stall, and fast, and a stall.
B
And so we basically said: once we've had to apply the write throttle, keep the throttle on for a couple of seconds so that we get something smoother instead of this kind of spikiness.
D
Oh yeah, that's exactly what I was told, and I remember, Matt, you were working on that before, so that every completed write should decrement the amount of dirty data. So it should be pretty smooth. That means either something got broken, or some kind of traffic isn't accounted for there — maybe dedup, or I don't know what.
A
Yeah, I mean when the transaction group syncs, while it's in the middle of syncing, as each item completes it should be getting decremented.
A
Yeah, that's exactly right. I think there are some known corner cases that are eluding me right now, but I don't think it would be related to zvols.
A
So if you're seeing that all the time, then we probably want to look into that, because that would be a bigger improvement. Once that's working, you should only have the remainder, like Alexander said, that drops quickly, and then there are definitely still fluctuations, but they're much more minor. Normally, at the very end of the txg you're not syncing as quickly, because you're waiting for the last few things to happen.
B
Well, I think when it's hitting the top there, it's having to completely stop, because there's no dirty data buffer left at all.
B
Other than that, the max delay is 100 milliseconds.
A
Yeah, so one problem might be that, once you hit the max delay, it can take a little bit for the workload to be allowed to ramp back up, because the max delay is so big — 100 milliseconds or whatever.
B
Yeah, like it should kind of be moving back and forth along that curve, like we should have in the...
A
Like zfs_delay_min_dirty_percent...?
A
60, okay. So yeah, it's not even until you get to 60 percent dirty that there should be any delay, but depending on the time scale there, it may be that all the application's threads are waiting for I/Os to complete. Do you know, in your graph there, what the time scale is — the kind of period of that oscillation?
A
Yeah, so if that's the case, then it could easily be the max delay being 100 milliseconds. It could be: okay, all your applications are in there, they've hit the max delay of 100 milliseconds, and now they can't do anything for 100 milliseconds, even though stuff has been flushed out. You know, when we wrote this originally, it was in the context of spinning disks.
A
If you're running on SSDs, it could easily be the case that that 100 millisecond delay is just ridiculously long. I've seen this a lot on SSD pools, where the txgs can sync very quickly, because the max dirty data is four gigs, and it doesn't take that long to write out four gigs if you have a bunch of SSDs. So you can probably work around that by decreasing the max delay as kind of a stopgap.
A
So instead of 100 milliseconds, make it like 10 milliseconds or 1 millisecond or something, and then you may also look at increasing the max amount of dirty data from four gigs to, you know, 16 gigs or something.
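For readers following along, the curve being tuned in this exchange is, roughly, the one described in the OpenZFS write throttle comments: no delay until dirty data passes a percentage threshold of zfs_dirty_data_max, then a delay that ramps up toward the 100 ms cap as dirty data approaches the limit. The sketch below is a simplified illustration under those assumptions — the names mirror the tunables mentioned in the discussion, but this is not the actual OpenZFS code.

```c
#include <stdint.h>
#include <stdio.h>

static uint64_t zfs_dirty_data_max = 4ULL << 30;          /* assumed 4 GiB default    */
static uint64_t zfs_delay_min_dirty_percent = 60;         /* no delay below 60% dirty */
static uint64_t zfs_delay_scale = 500000;                  /* ns; steepness of ramp    */
static uint64_t zfs_delay_max_ns = 100ULL * 1000 * 1000;   /* the 100 ms cap discussed */

/* Per-write delay, in nanoseconds, for a given amount of dirty data. */
static uint64_t
delay_for_dirty(uint64_t dirty)
{
	uint64_t min_dirty = zfs_dirty_data_max * zfs_delay_min_dirty_percent / 100;

	if (dirty <= min_dirty)
		return (0);                 /* below the threshold: no throttling */
	if (dirty >= zfs_dirty_data_max)
		return (zfs_delay_max_ns);  /* at the limit: writers are stopped  */

	/* Hyperbolic ramp: blows up as dirty approaches the max, hence the cap. */
	uint64_t d = zfs_delay_scale * (dirty - min_dirty) /
	    (zfs_dirty_data_max - dirty);
	return (d > zfs_delay_max_ns ? zfs_delay_max_ns : d);
}

int
main(void)
{
	for (int pct = 0; pct <= 100; pct += 5) {
		uint64_t dirty = zfs_dirty_data_max * pct / 100;
		printf("%3d%% dirty -> %llu ns delay\n", pct,
		    (unsigned long long)delay_for_dirty(dirty));
	}
	return (0);
}
```

Printing the table makes the shape of the conversation concrete: the delay stays at zero until the threshold, grows gently for a while, and then climbs very steeply in the last few percent before the hard stop.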
B
We were making it smaller, but yeah, we were seeing it get to the point where the application was just stalled completely, even when we lowered the point where we start inserting delay from 60 to like 30 or something like that. I think basically what we saw is that the application would go full throttle, it would end up running into the dirty data max and having to slow down, and then the throttle would stay on for the writes after that, as it started to flush stuff out.
B
...with a small number of writers, and then it would just run into the limit again. So our solution was: once we've had to apply the brakes, don't go completely unthrottled again until we've stayed below the threshold for a number of seconds.
D
Alan, it just came to my mind: since you say you have so many threads, it may happen that if all of the threads come in at the same time, each of them increments the wait more and more, and the last one gets queued in a situation where the first hasn't even completed yet. So practically your delay is not increasing gradually; it just goes all the way to the max, and then, as Matt said, you get that 100 millisecond delay where really everything is blocked.
A
I think, Alexander, that the fix would probably be something like this. Probably what we're doing is, when you check how much delay you should have, it's based on the amount of dirty data, right? But, like you said, threads might have been allowed in that haven't yet actually dirtied their data. So we probably need something where it's like: okay, you're coming in, you're checking the amount of dirty data, and you want to dirty some data — however much data you want to dirty.
A
It's like you don't get to go until you've incremented this reserved dirty, and then when you go do the actual dirtying, you clear out your reserved dirty as you increase the actual dirty. That way, when you're checking the amount of dirty data, you're adding in this reserve, right?
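A minimal sketch of the "reserve before you dirty" accounting being described here, assuming two counters — one for data already dirtied and one for admitted-but-not-yet-dirtied work. The names and helpers are invented for illustration; this is not the OpenZFS implementation.

```c
#include <stdatomic.h>
#include <stdint.h>

static _Atomic uint64_t dirty_bytes;     /* data already dirtied in the open txg */
static _Atomic uint64_t reserved_bytes;  /* admitted but not yet dirtied         */

/*
 * Called before a writer is allowed in: the throttle decision would be made
 * on dirty + reserved, so threads that have been let in but have not yet
 * dirtied anything still count against the limit.
 */
static uint64_t
writer_reserve(uint64_t nbytes)
{
	uint64_t reserved = atomic_fetch_add(&reserved_bytes, nbytes) + nbytes;
	return (atomic_load(&dirty_bytes) + reserved); /* feed into the delay curve */
}

/* Called when the data is actually dirtied: move the reservation to dirty. */
static void
writer_dirty(uint64_t nbytes)
{
	atomic_fetch_sub(&reserved_bytes, nbytes);
	atomic_fetch_add(&dirty_bytes, nbytes);
}
```

The point of the reservation is to close the window discussed next: between a thread checking the throttle and actually dirtying its data, the work it is about to do is already visible to every other thread's check.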
A
Yeah, I think that would be the right solution to having a lot of threads. There are other things in there that are supposed to handle having a lot of threads: once you start delaying, the delay should work irrespective of the number of threads. If you have a lot of threads and the delay is one millisecond, then the first thread waits it out...
A
If all the threads come in at the same time, you wait a millisecond, then one thread gets to do something, then after another millisecond another thread gets to do something — so you can compute it like that. But the window there, like you're saying, between when you check and when you go and dirty it, could allow a lot of threads in.
D
If you have a hundred threads, you'll get a hundred waits scheduled in advance, so your latency will grow even much higher than it would with a single thread. So the moment of the dirty data increment may be right, but maybe that's only half of the problem.
A
...the issue of it being ridiculously long. I think that when we designed it, what I observed a long time ago was that you almost never hit that maximum, because it ramps up as you get towards the end — it's ramping up exponentially.
A
That's good, at least. But 100 milliseconds could still be a long time on the time scale of the graph you're looking at; in a world where you can sync out every txg in like half a second, 100 milliseconds is still a long time to wait. Oh yeah.
D
And in the context of the delays being sequential, I was thinking that maybe it would be great to have them work in parallel. For example, if you have one thread doing large writes and another doing a lot of small writes, right now they all get serialized into the same line. In the end, the bandwidth of the thread doing large writes will be much higher, because they are all waiting — practically all of them get the same IOPS.
D
No, it could slightly reduce the amount of dirty data in the counter, so it slightly reduces the delay, but since they're all going sequentially you'll still get exactly the same number of IOPS through all the threads.
D
Yeah, on one side it all works in the context of bytes, while the fairness happens in the context of IOPS.
D
It's a bit weird, so I haven't got good ideas for what to base the delays on, because if we compute them all from the current time, not from the last one, then it tips slightly the other way — more threads will be able to push more. But maybe that's actually okay: they will just put more into dirty data and then all get throttled at the end as a result.
A
Yeah, I mean the obvious way of deserializing them runs into the problem that if you have more threads, you're letting more I/O go through, right? If every thread gets the same delay from the starting point, then having more threads means you get to do more work at that amount of dirtiness.
B
Yeah, I think what we're trying to do is avoid having a lot of threads end up in the last ten percent or so of that chart, where the delay goes up really sharply. You should hopefully be slowing down sooner than that and coming...
A
Yeah, exactly. So yeah, I would definitely look at whether we're hitting this.
A
...the graph. So, can you talk a little bit more about it? I mean, that might be the right solution, but maybe smoothing more would add a little bit of niceness as well. Could you talk a little bit more about what the PR is proposing? I see this graph with the yellow line, right?
B
Down, yeah, with the minimum turned down. But the idea is that there's no delay at all, and then you hit that threshold where the delay kicks in, and then we have that exponential curve or whatever, which increases the delay as your ZFS dirty data value gets higher. So once you've gone up that line, what this patch does is that, after you drop below that threshold, instead of going back to zero delay, it does the yellow line of keeping some amount of delay on until you've...
B
We're still going to apply some amount of delay to make sure that when a bunch of threads show up, we don't run back up to 90 again — that we slow them all down, and hopefully more gradually.
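As a rough illustration of the smoothing behaviour being described here — and explicitly not the code in the pull request — one way to express "keep some throttle on after the brakes have been applied" is a floor delay that only decays after dirty data has stayed below the threshold for a hold-off period. The constants are assumptions.

```c
#include <stdbool.h>
#include <stdint.h>

static uint64_t throttle_floor_ns = 1000000;      /* assumed 1 ms floor            */
static uint64_t holdoff_ns = 2000000000ULL;       /* "a couple of seconds"         */
static uint64_t last_over_threshold_ns;           /* when dirty last exceeded min  */

static uint64_t
smoothed_delay(uint64_t raw_delay, bool over_threshold, uint64_t now_ns)
{
	if (over_threshold)
		last_over_threshold_ns = now_ns;

	/*
	 * The stock curve returns 0 below the threshold, which lets a burst of
	 * writers run straight back up to the limit.  Keep a small delay on
	 * until dirty data has stayed below the threshold long enough.
	 */
	if (raw_delay == 0 && now_ns - last_over_threshold_ns < holdoff_ns)
		return (throttle_floor_ns);
	return (raw_delay);
}
```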
A
But I feel like this is kind of a hack around the real problem: if you had this high workload, why did the amount of dirty data ever go down that low? It should have been able to keep it high, and the reason we didn't keep it high is that we applied too much of this delay.
A
...when you slammed into the limit. So this is a way of saying, okay, well, when you get out of that we don't want to oscillate — but really...
A
Yeah, if there wasn't this issue with a gajillion threads, then when you get down to zero — if we said, okay, fine, you can jump up to 30 percent dirty instantaneously, that's fine, and then you start having a delay — there shouldn't be a need to delay from zero to 30; you should be able to get to 30 and then start delaying. Then you have more or less the same behavior.
A
...from zero to eighty percent dirty — and I think that something like this reserve would prevent us from doing that jump.
D
Maybe you could add to the same graph the different states of transaction group processing — sync processing, when it started, when it finished, when it started cache flushing, maybe — and also the average delay, with some sampling or something like that, to see whether it's a delay problem or a problem of ZFS going through some slow stages and then having to accumulate more data, which is what actually causes the throttling.
B
Right, I think there we were setting it to about 1.2 gigs, okay, so...
B
Alexander's idea of looking at that — there's the kstat that gives you the last hundred transaction group syncs and how much time they spent in each state. That, and maybe trying to also profile or sample the delays, might shed more light.
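One way to do the correlation being suggested here is to capture the per-pool txgs kstat alongside the delay graph. The snippet below is a small assumed example for Linux; the path layout and the pool name ("tank") are placeholders for the reader's own setup.

```c
#include <stdio.h>

int
main(void)
{
	/* Path and pool name are assumptions; substitute your own pool. */
	FILE *f = fopen("/proc/spl/kstat/zfs/tank/txgs", "r");
	char line[512];

	if (f == NULL) {
		perror("open txgs kstat");
		return (1);
	}
	/*
	 * Each row describes a recent txg: how many bytes were dirtied and how
	 * long it spent open, quiescing and syncing -- enough to line up the
	 * delay spikes on the graph with what the pool was doing at the time.
	 */
	while (fgets(line, sizeof (line), f) != NULL)
		fputs(line, stdout);
	fclose(f);
	return (0);
}
```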
D
I meant more like trying to correlate the delays with exactly what ZFS is doing at that point — whether it's writing data, whether it's starting flushes, or the vdev updates in the last stages of sync, the updates of labels or uberblocks, whatever — and just seeing on that graph where exactly it starts throttling. Maybe it's valid pulsation; maybe it just has a very long sync time or cache flush time or something.
A
If you only have 16 threads, I would think that the amount of dirty data goes up and up and up, and then you get to like a one millisecond delay — and shouldn't one millisecond of delay be plenty to bring the incoming amount of dirty data very low? Because you only have 16 threads, and with a one millisecond delay you're really only accepting one operation every millisecond.
A
It would be nice if we could find a better solution that isn't as dependent on tunables and dynamic behavior, and maybe we can find a real bug in the existing stuff.
A
...rather, just because piling on more and more dynamic behaviors makes the system harder and harder to understand. We should definitely do that if we have to, but if we can avoid it, then that would be better, right?
D
Just a last comment: I'm looking now at the graph, on its right side, and it's a pretty small amount of dirty data that grows the delay from 10 milliseconds to 100 milliseconds — really only about five percent. Just looking at the graph, I don't see numbers; they're pretty tiny there. And maybe that five percent is actually what 16 threads can drop in at the last moment.
A
And then, because the delays compound if you have multiple threads, it shouldn't matter that you have a bunch of threads. The delay is basically: we only take one operation every 10 milliseconds, regardless of how many threads you have — and that should make you really, really slow. But maybe there's some way to circumvent that if you have a lot of threads, and it's like the check isn't working right or whatever.
B
We're definitely seeing the part where the writes get really, really slow — practically zero going through — and then it recovers and goes back to going faster than the disk can go, and then quickly ramps up and runs into having to slam on the brakes again.
A
I
I
would
try
to
figure
out
if
that
is
kind
of
behaving
as
intended
and
the
re
and
it's
just
like
the
workload
and
the
ramp
and
the
memory
size
are
not
meshing
well
or
is
it
like
somehow,
the
ramp
has
been
circumvented
by
you
know
like
maybe
it's
enough
that
only
even
only
16
threads
come
in
and
they
check
and
then
they
all
see,
oh,
like
we're
at
0.1,
millisecond
delay
and
then
they're
able
to
like
jump
it
up
enough
that
it
hurts
the
performance
a
lot.
A
All right, let's take a look at the next one that you mentioned: spa asize inflation. Yes, I think we talked about this, or I was going to talk to you about this, a while ago.
B
Part of what we saw there was that if there was a quota or something on a dataset, we'd end up having to stop and wait for the currently pending stuff to flush out before we could write more data to it, and it would get really, really slow, because whatever amount of data you were trying to write, it was just blindly multiplying by 24.
B
So if you had, say, 500 megs of free space or a gig of free space, and you tried to write 100 megabytes to it, it would say, actually that could be 2.4 gigabytes, so we don't have enough room for that, so we're going to stop you after writing 20-something megabytes and make you wait for a transaction group flush before you can write more. That change goes and looks at each vdev that you have, finds the worst inflation that that vdev could cause, and uses that value instead of 24.
B
So the first change was just that, because we don't have ditto blocks anymore, we didn't need the extra multiply-by-two that was in the formula, so the default value could go down from 24 to 12. But yeah, I wasn't sure there — we basically don't use the tunable anymore, so we should probably garbage collect it.
B
So each vdev advertises its worst case, and then the pool takes the worst of all the vdevs and uses that for everything.
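Conceptually, the change being described replaces the flat 24x factor with the maximum of the per-vdev worst cases. A hypothetical sketch of that selection, with names invented for illustration:

```c
#include <stdint.h>

/* One entry per top-level vdev; the factor computation itself is elided. */
struct vdev_info {
	uint64_t worst_inflation;   /* e.g. raidz parity, copies, overhead */
};

/* The pool reserves space using the worst factor any vdev advertises. */
static uint64_t
pool_worst_case_inflation(const struct vdev_info *vdevs, int nvdevs)
{
	uint64_t worst = 1;

	for (int i = 0; i < nvdevs; i++) {
		if (vdevs[i].worst_inflation > worst)
			worst = vdevs[i].worst_inflation;
	}
	return (worst);
}
/*
 * A 100 MiB write near quota is then checked as 100 MiB * worst, rather than
 * the flat 100 MiB * 24 that was stalling writers in the example above.
 */
```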
B
If,
if
it
yeah,
if
it
has
a
pointer
to
the
data
set
and
can
check
the
copies
property,
it
uses
that
and
if
it
can't
get
it,
then
it
assumes
the
worst
case.
If
copies
equal
three
yeah.
A
Cool. Yeah, why don't you garbage collect that tunable, assuming it really is no longer used, and then ping me and I'll take a look at it?
B
It's basically the same way that you can delegate a dataset to a zone or a jail on illumos and FreeBSD: you can do that to a user namespace on Linux, so that root in that user namespace can run zfs commands on that subset of datasets. It takes care of the UID mapping stuff that Linux supports, where each namespace has its own range of UIDs that map to the typical range.
A
Cool, and this hooks into the zoned permission check stuff?
B
It basically hooks up what counts as the global zone, so that it can tell if you're in the parent namespace, where everything is normal, or if you're in some other namespace, where you can only see the datasets you'd normally be able to see. It just hooks up to all the existing stuff that illumos and BSD use, so it's following all the same rules.
A
That's cool. So we need to figure out who can code review this.
C
So I made a couple of passes at it, and I think it's looking pretty good. We had a couple of open questions that I don't know that we came up with any good solutions for, but they're probably not big deals and things we can live with. It'd be great to get additional eyes on it, but I think it looks pretty good.
B
Yeah, it adds a new command — zfs zone, or zfs jail. Did we end up changing it to actually use "zone"?
A
Okay, maybe for review purposes — if you'd like to have folks review the command line interface, the user interface, which seems like a good idea — could you write that up in the first comment? I see the first comment just says something like "zfs userns attach", some random number, and then a dataset name. It would be nice if you could put, say, the rendered man page, or a description of...
A
...here are the new subcommands, here are the arguments, here's how you would use it. Then probably more people will be able to understand it and give meaningful feedback on it than on the details of the code, right?
A
Especially since it's kind of related to zfs zone or whatever — the folks most familiar with that might have thoughts based on their experience with it. I don't know, right? Maybe they'll say, oh, that command is actually really hard to use, you should have done it this other way — and maybe you can fix that or something, I don't know.
C
I'd just mention that this is probably not an exhaustive list of things that need reviews. If people have time or interest, there are more than enough other issues and open pull requests that could use some feedback and eyes on them — it's more and more...
A
Every day, yeah. And thanks a lot, Brian — I know you've been doing a lot of the gatekeeping stuff. The Delphix team has been pretty distracted; we're working on getting the ZFS on object store product out the door.
A
But
hopefully,
in
the
next
few
weeks
that
will
happen,
and
then
you
know
we
should
have
some
more
bandwidth
from
the
other
maintainers
and
hopefully
creative
viewers
as
well
yeah,
so
feel
free
to
maybe
not
this
week,
but
like
starting
next
week
feel
free
to
more
aggressively
bug
people
and
remind
them
of
the
responsibilities
that
they've
been
putting
off
myself
included
for
sure.
C
And anybody can do the reviews too, right, if they want to comment on things and look at stuff? I know Alexander has had some PRs outstanding for a while now — he was looking for reviews months ago — and there are lots out there like that, for cool features and cool functionality. They just need more eyes on them, right? Yeah.
D
Say, I have 12789, "Improve log spacemap load time", which has been hanging for about three months. We are already including it in our releases, and obviously I would like to have it upstream — or at the very least some eyes on it, just in case I missed anything. It works for us, but extra eyes are always good to have.
A
Yeah,
let
me
let
me
find
the
link
and
then
I'm
gonna
at
least
ping
seraphim,
because
I
think
that
he
might
have
time
to
look
at
this
soon.
Have
him
take
a
first
pass
and
then
I'll
take
a
look
at
it
at
some
point
as
well.
A
Thanks. We do have just a couple of minutes here — Alexander, do you want to highlight what the change is?
D
The general problem is that there is no upper limit on the number of dirty space maps — well, there is an upper limit, but it scales with the number of vdevs. So if we have a huge pool and the number of vdevs goes to hundreds, many hundreds, or thousands, then the number of dirty space maps can reach, I don't remember, dozens of thousands.
D
It practically requires a head seek for each of them to read them all on pool import, and then CPU time to process them, and there's a question of whether it makes any sense to have so many of them before flushing from the space map log into the main space map storage. In my patch I'm addressing it from two sides: from one side, I'm limiting the maximum lifetime of the log to 100 transaction groups, so that it never accumulates too much.
D
From the other side, I've implemented a parallel read of the log, so that I can amortize all the read latencies. It practically becomes CPU bound even on hard disk pools. For SSDs it's not so dramatic, but for hard disks the latency is substantial, and there I see something like 10x improvements and more — partially because of the latency reduction, partially because of the reduction in maximum history length. So both sides help, and to me it doesn't look controversial or questionable; it's more a matter of checking in case I missed anything.
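A conceptual sketch of the second half of what Alexander describes — issue all the log reads up front, then process them — with hypothetical helper names that stand in for whatever the patch actually uses; they are not OpenZFS functions.

```c
struct lsm;                          /* one on-disk log space map            */

void lsm_read_async(struct lsm *);   /* start the read without blocking      */
void lsm_wait(struct lsm *);         /* block until that read has completed  */
void lsm_process(struct lsm *);      /* CPU-side work on the loaded map      */

static void
import_log_spacemaps(struct lsm **maps, int n)
{
	/* Issue every read up front so the device latencies overlap... */
	for (int i = 0; i < n; i++)
		lsm_read_async(maps[i]);

	/*
	 * ...then consume them in order.  Import becomes CPU bound instead of
	 * paying roughly one head seek per map on spinning disks.
	 */
	for (int i = 0; i < n; i++) {
		lsm_wait(maps[i]);
		lsm_process(maps[i]);
	}
}
```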
A
Yeah, I think the idea of tying it to the number of vdevs was that, as you have more vdevs, you can do more I/O in parallel, so in theory...
A
You
know
all
else
being
equal.
You
add
more
v
devs
you
have
more
more
logs,
but
then
like
assuming
you're
reading
them
all
in
parallel.
You
could,
you
know
you
could
load
them
in
the
same
amount
of
time,
because
your
pool
can
do
more
iops.
D
The parallel part — that should be helped now by the parallel reading, but the problem now is that it's getting CPU bound, because...
A
Okay, yeah, that makes sense. I was just trying to understand whether you were hitting that. That totally makes sense to me. I'll take a look, and see if Serapheim could take a look at that too.
A
All right, I saw somebody posted something in chat — this will be the last one that we have.
E
Hey, I threw two small review requests into the agenda notes. The first one is on supporting incremental receive of clone streams — I think we talked about that recently. The other one is this one; it's getting fairly old now. It's a pretty tricky race condition between zfs_zget and zfs_znode_dmu_fini.
A
You can ping them, for sure. All right, this is probably a good place to end the meeting. Thanks to all the contributors for their work. I know the leaders and other project contributors have been a little slow on getting these things reviewed and merged; hopefully we can pick it up a little bit in the next few months. But thanks, thanks everyone, see ya. Thank you.