From YouTube: July 2022 OpenZFS Leadership Meeting
Description
Agenda: OpenZFS conference; quota enforcement; ZED config; 2.2 release
full notes: https://docs.google.com/document/d/1w2jv2XVYFmBVvG1EGf-9A5HBVsjAYoLIFZAnWHhV-BM/edit
A
All right, it's a couple of minutes after the hour, let's get started. Welcome, everyone, to the July 2022 OpenZFS leadership meeting. I have to run after a couple of minutes, so Brian is going to be leading the meeting.

A
It's going to be October 24th and 25th. I'll send out an email today with some more details, but the website is live, so if you go to the openzfs.org website you'll see some info there.

A
Right now the request for proposals is open, so folks, we'd love to see a lot of great talks this year. I know people have been working hard on ZFS, so email me with your proposals of what you'd like to talk about.

A
We also have sponsorship opportunities. Obviously, in-person conferences are much more expensive than remote conferences. We've raised the ticket prices only slightly, to $150, which does not come anywhere close to covering the cost of putting on the conference, so we rely on sponsorships. We've had a lot of great sponsors in the past and we hope to get some solid financial sponsorship this year, so that is open as well.

A
If you're interested in sponsoring, email us, or if you're interested in speaking, email me, and I hope to see you all there. You have until the end of August, August 29th, to submit your talks, and we will be having COVID protocols, including masking and hopefully testing and vaccination requirements, if we're able to set up contact with a third party to do verification of that info.

A
All right, well, I'm pretty excited to see all of you back in person. I haven't been to an in-person conference maybe since the 2019 OpenZFS one, so it's been quite a while for me. I hope to see everyone who feels comfortable traveling in October.

C
I see we've got a little bit of an agenda this month; things got added. Looks like the first item on the list here is relaxed quota enforcement for performance. I don't know who put this one on the agenda, but I assume, yeah.

C
I assume this is talking about our long-standing issue where, when you get really close to the quota, performance kind of just gets throttled back severely right as that starts.

B
Yeah, especially if you're overwriting something that's there, you'll end up with: okay, you're going to exceed the quota, so it does a wait for a transaction flush, and then the performance, yeah, like with a benchmark.

B
You can even see the performance dropping to single-digit megabytes per second once you start hitting that. It's a little bit less bad with a lower inflation setting for how badly ZFS assumes it's going to lay things out on disk; that helps a little bit, but you still run into that quota and it gets pretty bad. This came out of the work we were doing on write smoothing, and it turns out the problem is actually that the same quota thing applies to zvols, especially since in that case you're always overwriting. You can't exceed the quota on a zvol, because (a) it doesn't have a quota and (b) you can't write to LBAs that don't exist, and so you can actually see that same performance problem on zvols, where basically everything is an overwrite once it's relatively full. It turns out that's what was actually causing the write stalls, where there would just be a big chunk of time where you couldn't write anything more because the thread was stuck waiting for the transaction to flush.

B
So to address the version for file systems, we looked at basically adding a new property, like "quota strictness" or something; we'll need to come up with a good name for it. When set to lax, it would let you go up to about one transaction group over the size.

B
Instead of being strict to exactly the quota, you can go over a little bit so that the performance won't suck as bad, and then either use that or just outright make zvols not respect the quota.
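A minimal sketch of the relaxed check being discussed, assuming a strict/lax policy property and roughly one txg worth of dirty data as the allowed overshoot; the names below are illustrative, not actual OpenZFS identifiers:

    #include <stdbool.h>
    #include <stdint.h>

    enum quota_policy { QUOTA_STRICT, QUOTA_LAX };

    /*
     * Hypothetical relaxed quota check: zvols skip the check entirely
     * (volsize already bounds what can be written), and a "lax" policy
     * allows roughly one txg worth of dirty data past the quota.
     */
    static bool
    would_exceed_quota(uint64_t used, uint64_t requested, uint64_t quota,
        bool is_zvol, enum quota_policy policy, uint64_t dirty_data_max)
    {
        if (quota == 0 || is_zvol)
            return (false);
        uint64_t slack = (policy == QUOTA_LAX) ? dirty_data_max : 0;
        return (used + requested > quota + slack);
    }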
C
I was going to say, it seems like the thing to do there for zvols is just not enforce the quota, because you've kind of got one already implied by the size of the volume.

B
Yeah, and you can't actually set a quota proper, though in that case it might actually make sense, because a quota includes the snapshots as well. But if you try to set a quota on a zvol, it tells you that doesn't apply to this type; it turns out, though, that the actual quota is still set internally and used in all these cases.

B
So yeah, we're thinking of basically just saying, if the object type is a zvol, it's okay, but we might as well fix it for file systems while we're in there. So we expect to...

F
Considering that zvols can be thin provisioned and oversubscribed beyond whatever quota may be set there, I'm not sure it should be dropped completely, but I agree that some relaxing could be there. Just speaking about your case, had you set quotas there explicitly, or on a parent dataset, or somewhere?

B
It looks like the value is just getting populated with the volsize, which I think would be the same thing in the case of the thin-provisioned version, but we should test that and make sure we don't cause explosions. So yeah, we've got most of a prototype together; we're just working on adding the test cases to show what the problem is and how we fix it, and, like mav says, I want to make sure we don't do the wrong thing in the case of a thin-provisioned quota. Because yeah, I was very surprised to see, on the DSL dir there, dd_quota being set to a value for a zvol, because you can't set a quota on a zvol, it won't let you, so I was wondering where it was coming from.

B
VMware at least does something semi-sane: it will just pause, like suspend the VM, and say fix your disk or get some space or something. But yeah, a lot of applications will not know what to do if they get back an out-of-space error when writing to a block device.

F
No, VMware officially declares that it's one of the things it handles; I forget what it is, but yeah, VMware should handle that. It should freeze VMs, it should do that, you should have some warnings. And TrueNAS on FreeBSD reports it at the iSCSI layer, like quota usage; you can see from the initiator side how close you are to the quota. So it's supposed to all work, yes.

B
Yeah, I think strict quota. Because the other question I had was, right now we're looking at just an enum of, you know, strict and lax or whatever, but does it make sense to instead maybe have you specify how much they can go over, or something?

E
Yeah, can we be intelligent about it? If we say, like, the quota size is larger than X times the dirty data max, then we are always lax, and otherwise we scale it back proportionally. Can we do something like that, or does that not make sense? Do we always have to allow at least one txg over for it to work?

B
Just based on what the quota is, so that, yeah, if you set a quota to a gig, they can't go more than like 100 megabytes over, but if you set it to a terabyte, they can go some gigs over, and in the end, as long as it's never more than, I don't know, six percent or whatever number we decide, then maybe that makes sense.
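A worked example of the proportional idea, assuming a hypothetical 6% ceiling capped by one txg worth of dirty data (the percentage and the function name are illustrative, not decided): a 1 GiB quota could run roughly 61 MiB over, while a 1 TiB quota would hit the dirty-data cap long before 6%.

    #include <stdint.h>

    #define MIN(a, b) ((a) < (b) ? (a) : (b))

    /*
     * Hypothetical slack calculation: scale the allowed overshoot with
     * the quota, but never let it exceed one txg worth of dirty data.
     */
    static uint64_t
    quota_slack(uint64_t quota, uint64_t dirty_data_max, uint64_t slack_pct)
    {
        return (MIN(quota / 100 * slack_pct, dirty_data_max));
    }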
B
Yeah, that's really the big thing, especially since it can affect things pretty badly. If you're waiting for the transaction group, you can end up making other writes to the pool that aren't affected by the quota slow, and then it's like...

B
So yeah, we'll look at that and get something together.

C
Sounds good. So let's see what else we have on our list. Looks like we have one about the zed: how best to store configuration for the zed? Yes, so somebody...

B
...wants to change, like, how many errors in how many seconds before it kicks the device out of the pool. Right now on illumos those come from some system properties or whatever, but on Linux they're just hard-coded as, like, 10 errors in 15 minutes or something, and yeah, we'd like to be able to change that. But what would that look like?

C
Yeah, so this is actually a bit of work that's been outstanding since the zed got implemented on Linux. Originally we always had an item to come back and make this tunable and configurable, and then we never did it; it never bubbled up to the top of the list. But I think that's a great idea. The original thinking was to provide some kind of config file in /etc or whatever that could tune various zed parameters, but that was just an initial notion.

B
Because I asked, in my other presentation when I did the vdev property stuff, how many properties is too many properties, and how do we decide?

C
Yeah, that's an option too. My recollection is there were only maybe three or four things, something like that, that were obvious to tune, and what the right values for them were wasn't obvious either, which is how we ended up with the current defaults: well, this seems reasonable for most cases, but it's really something you should be able to dial in. I think a flat config file would be fine, or something along those lines.

C
I'd say that fell apart because we didn't want all the work involved in pulling in a parser to do that; it seems unnecessary for three or four values.

C
So the goal is to put them somewhere where they're easier to tune. I think it would be fine if we just want to make them command-line options; that seems like the easiest thing to do if we're only talking about two or three. Does anybody know offhand where they're set on illumos, or where they come from on illumos?

A
I had to hit the mute button. I would need to look; they're probably coming from the FMA system, because that's really what handles that. There's a whole separate infrastructure on illumos.

E
Well, then, if you wanted to use it in production with a different value, you'd have to edit the service file for the zfs-zed service or run it manually. We could still have the service file source an environment file and then have defaults for that; I think that's just the practice for daemons that do not support config files.

C
The defaults are in the C file, and not in just one more zed script, because the zed doesn't source any of the zed scripts when it runs, right? It's just a binary launched as the long-running daemon, so it never looks at any of those values, other than values that are supplied to it as command-line options.
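One low-effort possibility, in the spirit of the environment-file idea above, would be to have the daemon fall back from an environment variable to its current hard-coded default; this is only a sketch, and the variable name, helper, and default below are illustrative, not existing zed configuration:

    #include <stdlib.h>

    /*
     * Hypothetical helper: read a zed threshold from the environment
     * (e.g. set via the service's environment file) and fall back to
     * the existing hard-coded value when it is not set.
     */
    static unsigned long
    zed_tunable(const char *env_name, unsigned long hardcoded_default)
    {
        const char *val = getenv(env_name);
        return (val != NULL ? strtoul(val, NULL, 10) : hardcoded_default);
    }

    /* e.g. zed_tunable("ZED_IO_ERROR_LIMIT", 10); 10 errors, as today */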
C
Yeah, there's a whole fault management infrastructure that we got from Solaris that's kind of partially built into the zed, where it does things like opening a case for any kind of failure from the disk, associating the events with it, and eventually resolving it by doing something like faulting the disk or whatever. So there's actually quite a lot of state that gets managed internally.

D
Here, is it a huge complication to make these pool properties? Because the pools are always going to be associated with a family of devices, and as I'm pondering this, I'm wondering: if there are several families or tiers of devices, each making up a different pool, how would I apply a different set of settings to each pool, depending on this attribute that we want to have some control over?

D
Well, there's a level of coupling that has to be enforced manually.

D
I hope I'm not stepping on any territory here, but I'm the individual who reached out to Allan with these questions. I work for Seagate, and we've noticed a couple of things with some Broadcom IOCs, where their mpt3sas driver is generating info-level messages. There isn't a genuine error, but we've got no way around it; it's completely random, quantum mechanics or something like that. It's deeply embedded in Broadcom's IOC, and no other file system that we've ever worked with has ever had an issue with it, by the way.

D
This is specifically looking at some of our RAID-like products, in particular Corvault, which has had huge success in the Web3 space and decentralized storage space; we're doing 10-plus-petabyte direct-attach file systems. But there's this annoying thing that comes around about once every couple of days: we get several instances of these info messages coming up, and the zed daemon doesn't know the difference between this being a write error or an information message coming up from the mpt3sas driver. It's a loginfo message, and basically what it says is that we have a write operation in progress, a CDB WRITE(16), and occasionally the target detects something, like a checksum issue or something, and asks for a retry. The driver does it, and it always succeeds; it's not a write retry but a re-transmission.

D
However, that info message that the event occurred is percolated back up the stack. It doesn't happen on the 9400-class IOCs; it's the 32-something, I forget what the exact part numbers are, but it's limited to a handful of Broadcom IOC chipsets, and we can't fix it from the Seagate side. Broadcom has been scratching their heads over it for five months, and again, it's not indicative of an error per se; it is a simple thing that happens.

C
Yeah, if you don't mind, I've got a couple of questions about that. My recollection is that nothing should percolate up through the zed unless, at least on Linux, we actually get an error from the low-level block device. So either we got an error returned back from the block layer, as an EIO or something like that, or we legitimately got the wrong data back from a device and then we generate a checksum error.

D
Right, there's no data that comes back. This is an event that takes place completely at the driver layer, because the host is doing a write.

D
The target, in this case the Seagate SAS target, says something wasn't right about that: I'm missing something, or the checksum didn't match, please resend. The driver does it, and when it completes, the mpt3sas driver notifies with this loginfo message. I can give you the exact screen and give you a live demonstration of it right now.

D
I've been testing it all day. But it's also interesting that Broadcom has a number of tools, like storcli, and if anybody wants to duplicate this in a benign context: if you simply use storcli to retrieve error counters on a running Broadcom HBA, it will generate these error messages. If you request the error counters 10 times in less than five minutes, you will fault the device if it's in a RAID-Z-type config, and no device error has occurred.

C
Yeah, I'd be interested to dig into that a little bit more, because, like you say, I don't think that should percolate up beyond the driver itself. I don't think the higher levels of ZFS should, as I remember the code right now, even be notified of it; nothing should notice that. So I'd be curious to know where the zed is picking up the errors from.

D
It's interesting, because XFS, ext4, Btrfs, none of them pay attention to this info message that's percolating up, and I can reproduce the same thing using nothing but dd; the events still take place and never a byte of data is lost or requires a retry from dd itself, or anything like that. So in the simplest use case possible, we're streaming writes out there, and it happens depending on the position of the planets or something like that.

D
But again, this is an issue that is limited to a very small subset of Broadcom chipsets. Unfortunately, Seagate uses a lot of them, so I'm going back and asking how to best make this a tunable, because it would probably be on a per-device or per-pool basis.

D
The bad behavior is, if more than three events happen in something like a five-minute window, again, these relatively benign events, if we get more than three of them in a finite period of time, the device will be evicted from the pool, provided we are running in a RAID-Z or dRAID configuration where there are enough devices. Okay, issuing a zpool clear brings it back and everybody's happy. But if we are running with a single device, say we just created a ZFS file system on a single SCSI target...

D
I can create a stack of 800-terabyte LUNs, and I'm giving the Web3 community, whether we're talking Chia or Filecoin, or anything in the Storj or IPFS world, for the first time ever, direct-attached ZFS file systems in the 10-plus-petabyte range, and it works.

E
Yeah, so it sounds like the zed does have a lot there. As Brian said earlier, the zed has logic to do the device-fault processing, but I'm with you, Brian, I don't know where this information is coming from. We probably need to do some debugging with the zed to find out how these events are being handled by the zed, because it sounds like maybe this is just a bug.

D
Well, I've stared into the abyss and I can't make sense of it, because there's notification between different aspects of the zed, so there's not a clear line-of-sight communication where an event is noticed and a counter increments.

D
Well, I knew I was outside of my league and I ran to Allan, but we want to do the right thing for ZFS; I don't want to do a cheat in any kind of way. I spent some time at Spectra Logic working on ZFS several years back, so I want to do the right thing by the community, and I understand the importance of not fudging this to make one shady product work.

D
I don't think we've got a shady product, and I certainly know that Broadcom cannot figure out why they're sending a NAK sometimes on some of these writes, and they've been instrumenting the hell out of it.

C
So I think one easy thing to check initially might be to run the zpool events command. Basically, there are two streams of data the zed is picking up on. There's what it gets from the kernel modules, and you can see all of those events in the zpool events command; if you run it, it'll show everything the kernel modules detected and the zed picked up on. But it's also got a stream of events coming from udev for the block devices. So that would probably be a spot to start, to see which one of those sources is causing the zed to be aware of the issue; that would at least give us a spot to help narrow down whether it's a bug.

D
Yeah, and like I said, I'm turning to Allan on this stuff. I want to do what's right by the community, but so far it's taken us a lot of time just to start setting things up; Seagate's a big company and whatnot, and just getting the door open so we can start peering into the void. I am not qualified to make these decisions; I looked at it a little bit and I got all tangled up in knots trying to figure out what goes into and what goes out of the zed daemon's traffic.

C
You said you had a relatively easy-to-run reproducer.

D
And I hope within a day or two I'll have a Teleport infrastructure set up, so I'll be able to get Allan inside my private labs and give him access to six Corvault systems and twelve controllers.

F
Maybe I already asked a couple of times, but what are our plans for 2.2, or do we have any schedule to plan ahead? Because we're doing the release engineering for TrueNAS, we wanted to know what to expect, and in one very narrow case, what shall I do now with the scrub performance optimizations?

C
You know, I've seen quite a bit of interest from people in the need for a longer-term maintained stable branch, like 2.1 was going to be, or is intended to be at the moment, where we just cherry-pick more stuff into it and have a stable base people can run for a long time. I don't know that that stops us from taking a new 2.2 release, though, if people think there's enough interesting stuff in there to warrant us making a new release and maintaining it.

F
If you look back into history, the releases are roughly periodic, so you can just about pronounce when the next one should happen, even if it's not published. It would be good to have at least some idea: if it's feature-based, stating that is one useful thing; if it's time-based, then at least some forecast, because we're already planning things through at least the end of the year with TrueNAS SCALE. We already have two branches in development right now; they are both going from 2.1, but one of them could already potentially benefit from it.

C
I mean, if we stuck with our previous cadence, it would be about November or something like that when we would probably cut a 2.2. We should probably talk about whether that's a reasonable plan, if that makes sense to people, if there's enough stuff there, if that'll be worth cutting a branch and maintaining it.

C
My feeling is there's actually quite a lot of good stuff in the master branch that hasn't been backported yet, and probably won't be backported to 2.1, but there's a lot of interesting features in there. I don't have a list handy, but...

B
There's the Linux namespace stuff, vdev properties, and how that applies to Mark's device evacuation and queuing stuff, right.

B
I guess, what timeline would be good for you, Alexander? I know we also want to think about, in the future, figuring out what that timeline is going to look like for all the different things that consume it, whether that's the next Ubuntu LTS or FreeBSD and TrueNAS, and so on.

F
Well, it's impossible to predict everything. For SCALE, which is now in nightlies, we plan a beta, I think, in August or just pretty soon, so November is maybe already not so far, pretty late again in the release cycle, but still probably acceptable. I don't remember when the release is planned, but somewhere in Q4, so I think November would be not bad.

C
I mean, it would also be worth taking a look at what outstanding features we have that are pretty close to completion, and whether any of those are worthy of getting some effort to get them wrapped up and merged. I know there's some outstanding container work too; there's quite a lot of open PRs with significant features in them, right? The RAID-Z expansion work, for one; it would be great to wrap up a whole bunch of things that fall in this category.

C
I know it's a never-ending list, but I guess we'd have to take a real look at what's still outstanding to figure out what we could reasonably include in that time frame, and who might be able to work on it if we want to get it in.

C
So I guess, what would next steps be there? I could go through and make a list of major outstanding features and kind of assess what's currently in the master branch. That would at least give us a better idea of what's already ready and what might need a little more work.

B
Pawel's BRT stuff, I think, is almost to that point as well, right?

C
Yeah, I believe so. It'll be great to be able to get both of those things in. And speaking of the direct I/O stuff, I know it would really benefit from a review on the FreeBSD side too. I think it's been heavily tested on Linux, but there might be one or two open issues still with FreeBSD; we're not quite sure it's behaving right, so if people have a chance to look at it, that would be great. I know we're looking for reviews for that.

C
Yeah, that would be great, because I think that is ready to go too, and I think we've got a couple of other features that are really close to ready to merge that we could wrap up. That might give us a really nice 2.2 release with enough, but not too many, features; we've got to be careful about piling too many in.

C
All right, well, I guess we've got next steps there and something to talk about a little bit more. Do we have anything else on the agenda?

B
My review request for the SPA inflation thing, where, instead of just statically using 24 times your asize, it actually looks at the vdev type and figures out what the inflation is actually going to be, so that it doesn't look like you're going to go over the quota by 24 megabytes when you write one megabyte.
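A rough sketch of the per-vdev estimate being described, assuming a simple parity-plus-copies model in place of the flat spa_asize_inflation factor (default 24) referred to above; the function name and factors are illustrative only, and a real change would presumably have to handle more cases:

    #include <stdint.h>

    /*
     * Hypothetical worst-case asize estimate driven by vdev layout
     * rather than a flat 24x multiplier: parity overhead for a raidz
     * vdev, times the number of copies being written.  Assumes
     * raidz_width > raidz_parity.
     */
    static uint64_t
    estimated_worst_case_asize(uint64_t lsize, uint64_t copies,
        uint64_t raidz_parity, uint64_t raidz_width)
    {
        uint64_t data_cols = raidz_width - raidz_parity;
        /* round the parity expansion up, per copy */
        uint64_t per_copy = (lsize * raidz_width + data_cols - 1) / data_cols;
        return (per_copy * copies);
    }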
C
All right, so I could take a look at some of those things. Looks like also on the agenda here is directory scaling. Yeah, so that is something that I've looked at in the past, and my experience has been that it actually scales quite well. Once you get to fat ZAPs, at least the insertion and removal time is constant; we've scaled up to billions of entries in a directory, and it does scale at least that far.

C
That's been our experience, but maybe you're driving at something a little different here.

C
Yeah, but if you saw that transition at a thousand entries or so, it's maybe that ballpark, and maybe things have changed, but my recollection has been that we saw pretty good scaling once it did that conversion. Once you started making fat ZAPs it scaled pretty well, but if you've got data that shows differently, it's something we should probably look at.

E
I think the other thing is, if you delete files from a directory, the ZAP doesn't shrink. So even with an empty directory that used to have a lot of entries, you can run into some performance problems that way. Yeah, that's a known issue. Wasn't there a PR for that? There's...

B
...the illumos one for being able to shrink the ZAP, and it's been on my radar for a while to look at, even just for the dedup stuff, because you have the same problem when you drain your dedup table and end up with a big DDT even though you don't have any entries in it. In particular, that was throwing off all the math for trying to figure out how to do the DDT quota stuff.

B
Yeah, more on that soon, I guess.

C
Does anybody else have anything they want to cover? Let's see, we've got about 10 minutes left. Any other topics we should cover?

C
Sounds good to me, so maybe we can wrap this up a little bit early.

C
Hearing no one else, I guess that was good; we'll call it a wrap. Thanks, everybody, for showing up this week and talking through this stuff. All right, thanks.