From YouTube: August 2020 OpenZFS Leadership Meeting
Description
At this month's meeting we discussed: DevSummit Hackathon; dRAID; OpenZFS 2.0 dates; semantic versioning; L2ARC cache feed policy
Full meeting notes: https://docs.google.com/document/d/1w2jv2XVYFmBVvG1EGf-9A5HBVsjAYoLIFZAnWHhV-BM/edit#heading=h.fzej8k12trha
A: So, welcome to the August 2020 OpenZFS leadership meeting. Let's start off with an update about the developer summit. We announced the speakers for the conference this morning (or last night, or whenever), so we have 11 great talks lined up. The conference is going to be October 6th to 7th; the first day is going to be mostly talks, the second day the hackathon, and you can see the speakers and the talks on the website.
A: But I think that's going to require a little more advance planning than usual. In particular, the hackathon has been very freeform in the past, but I worry that that amount of freeform might result in a lot of newcomers not participating — and I think that one of the great things about the hackathon in the past has been that we get folks who are first-time contributors to ZFS to start writing code or documentation and to work with other, more experienced participants.
A: So, to that end, we're going to try to do a little bit more organization around the hackathon, and we're going to ask for volunteers — folks who want to lead groups at the hackathon.
A: The idea would be that you have an idea for a project or an area that you would like to work on, you'd like some help with it, and you're willing to, you know, help other people to help you. This could be writing code — "I want to implement some feature, and I want some folks to help me with that" — or it could be working on documentation. There are so many aspects of documentation, so the leader there might create a list of subtasks: update this webpage with the latest info —
A
Go
you
know,
delete
all
the
out-of-date
info
on
this.
Other
page
consolidate
it
from
here
to
there
work
on
a
work
on
a
man
page
or
something
like
that
or
it
might
be
like.
I
have
an
idea,
but
I'm
not
ready
to
start
coding
it,
but
I
think
other
people
might
have
valuable
input
on
how
we
should
design
this
or
or
where
it
will
be
useful.
A: But if you have something that you would like to help other folks work on at the hackathon, then I'd love to hear from you. We can also make time available at the beginning of the hackathon day for folks to pitch their idea to the whole audience and find people who are interested and willing to help out.
A: Cool. Registration is open. You don't need to register to view the live stream, as usual, but you will need to register to get the Zoom links to join live — there will be time for Q&A and hallway-interaction type stuff, so you'll want to register for the conference to attend that way. And registration is free this year, because we don't have any t-shirt or food type costs to cover from those tickets. Cool.
C: Hello — just a second, I need to swap rooms.
B: Sorry to put you on the spot. So, yeah, we've been working on extending the boot-once feature, which is currently called nextboot: it was originally upstreamed from FreeBSD, and Delphix then extended it to work with GRUB. It turns out that name actually conflicts with a different feature in FreeBSD called nextboot, which allows you to change which kernel is booted, or provide additional kernel environment variables, for the next boot rather than during boot. So we think the name "bootonce" fits better, and we will be implementing that, along with support for nextboot, in ZFS for FreeBSD. We started with the work that Paul Dagnelie did at Delphix to broaden and improve the format: he wrote the stubs for an nvlist-based implementation, and Toomas has extended that and completed it.
C: What we have done is switch the libzfs interface to use only nvlists. We assume that the data we receive through libzfs is already packed in an nvlist, so we can propagate that data to the pool label; in that, we attempt to provide support for different systems and backward compatibility.
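
For a concrete sense of what "already packed in an nvlist" means, here is a minimal sketch (not the actual patch) of packing a boot-once payload with libnvpair before handing it to libzfs; the key name "command" and the command string are hypothetical.

    /*
     * Minimal sketch: pack a boot-once payload as an nvlist (libnvpair).
     * Key name and command string are hypothetical, not the real keys.
     */
    #include <libnvpair.h>

    static int
    pack_bootenv(char **bufp, size_t *sizep)
    {
        nvlist_t *nvl;
        int err;

        if ((err = nvlist_alloc(&nvl, NV_UNIQUE_NAME, 0)) != 0)
            return (err);
        /* The one-shot boot command the loader should consume once. */
        err = nvlist_add_string(nvl, "command", "zfs:rpool/ROOT/new-be:");
        if (err == 0) {
            /* XDR encoding keeps the packed bytes byte-order neutral. */
            err = nvlist_pack(nvl, bufp, sizep, NV_ENCODE_XDR, 0);
        }
        nvlist_free(nvl);
        return (err);
    }
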
C: For reading the label, we are trying to handle both the old FreeBSD zfs nextboot case and the Delphix raw-data-string case; but for writing, we are pretty much enforcing the nvlist-based approach. The idea is that by using nvlists in the label's pad2 area we allow more flexible data structures, and we allow support for different kinds of systems, because with nvlists we can actually handle different systems side by side.
C: The problem with the current OpenZFS implementation, and the work we have done, is that we changed a few function signatures.
B: The concern there is getting any of the libzfs API changes in before 2.0, so that it stays stable. We talked about this — I think it was two months ago, when we first started — about whether we could manage to quickly change that interface before anybody started using it.
C: I did actually have a little conversation with Paul, and he already gave a few bits of feedback, but I don't think it's too comprehensive right now. It will probably need some more attention from the reviewers' side as well, but I'm remaining quite optimistic, in the sense that at least so far all the tests I have done seem to support this idea — and of course this is a very initial implementation.
C: I did actually drop the version word from the boot environment data structure, but later on we agreed to reinstate it, to help recognize the need to translate the old material — to support the Delphix bits and the old FreeBSD boot and nextboot bits.
B: One thing we noticed was an issue there: the integer that was encoded at the beginning of the label, to mark what type it was, was host-order encoded, not network-order encoded.
B: So it would actually be read differently depending on the machine — yes, exactly.
C: There was a little catch there, and the catch was about the value: the current OpenZFS implementation is pretty much only using the value of zero, for raw material, and zero is zero on every system. Since we started to use the next value, number one, for the nvlist data structure, it became obvious that we should actually use endian-specific conversions there — and yes, we are storing it in network order.
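
To make the byte-order point concrete, a minimal sketch of the idea; the constant and function names are assumptions, not the actual patch.

    /*
     * Sketch: store the payload-type marker in network (big-endian)
     * order, so a pool labeled on a little-endian host decodes
     * correctly everywhere. Names are illustrative only.
     */
    #include <arpa/inet.h>
    #include <stdint.h>
    #include <string.h>

    #define BOOTENV_FMT_RAW     0  /* legacy raw string; 0 on any endianness */
    #define BOOTENV_FMT_NVLIST  1  /* new nvlist format; needs conversion */

    static void
    bootenv_set_format(void *buf, uint32_t fmt)
    {
        uint32_t be = htonl(fmt);       /* host order -> network order */
        memcpy(buf, &be, sizeof (be));
    }

    static uint32_t
    bootenv_get_format(const void *buf)
    {
        uint32_t be;
        memcpy(&be, buf, sizeof (be));
        return (ntohl(be));             /* network order -> host order */
    }
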
A: Yeah, in terms of backwards compatibility, I think that from Delphix's point of view we can probably deal with it either way: once we have the new bits, we can just say, okay, now update it — write it out the new way and ignore the old way. So if you didn't want to support reading the old format, I think that would probably be okay with us, but there might be other folks who do have concerns there.
B: Yeah, on FreeBSD the raw C string has been in use for five or more years now, so we need to catch that — although we can catch it easily, because it always starts with "zfs:". So it's easy enough to detect.
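
In other words, the read path can fall back based on a prefix check; a rough sketch of that logic (function name hypothetical):

    /*
     * Sketch: distinguish the legacy FreeBSD raw C-string payload from
     * a packed nvlist on read.
     */
    #include <string.h>

    static int
    bootenv_is_legacy_string(const char *buf, size_t len)
    {
        /* The old FreeBSD nextboot payload always begins with "zfs:". */
        return (len >= 4 && strncmp(buf, "zfs:", 4) == 0);
    }
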
C: Yep. And to provide a little bit more support for encoding the incoming or outgoing data to and from nvlists, I did actually create a basic — very basic — library for that purpose, in the hope that those interfaces might be useful. Of course, at its current state that library is built based on the needs we have seen on FreeBSD systems.
C: So I assume that if that specific interface is useful for other people, it will probably get extended to some extent, and perhaps it is a good idea to integrate it into the OpenZFS code base as well, as a prototype or a base.
A: Cool. So I know you talked about wanting to get stuff in before 2.0; maybe this is a good transition to hand off to Brian, to talk about the 2.0 branching timeline and ideas. And also, Brian, if you could talk a little bit about dRAID and where that is in the process.
D: Yeah, sure, I can talk about that. The plan originally was to branch in mid-August, and we are still, I would say, close to making that branching date. There are a few things that are really close to wrapping up that we would like to get in, so we're going to drag our feet for a week or so and hopefully get a 2.0 branch out after that.
D: That'll give us some time to get in any last remaining FreeBSD changes we need, and some other features that are really close — Zstandard (zstd), for example, is pretty much ready to go and just needs to be merged. So that's the plan for the 2.0 branch, and once it's cut we'll have something stable to base off of. Related to that, one of the other changes we hope to get into 2.0 will be dRAID, which is a feature I think people have heard about for a long time.
D: Well, it's finally, finally a thing, after many years. This is a distributed-parity implementation for ZFS — our first new vdev type in a long time — and it is wrapping up as well. I would like to get that into the 2.0 release, and it's in need of reviewers; there's a pull request open. So if people want to go look at that, kick the tires, and look at the data, that would be really good.
A: Cool. I'm going to continue reviewing that code, and Mark Maybee is going to have a talk at the conference about dRAID — finally. I think it's going to be kind of an overview of what dRAID is and the recent changes, and hopefully the status, which will hopefully simply be that it is in master by that time.
A: Michael Dexter had a question about RAID-Z expansion in 2.0. I mean, the blocker is that the code is not done. For the 2.0 release and subsequent releases, the intent is that these releases are time-based and that they wait for no feature —
A: — we're kind of stretching the definition of that a little bit with this first one. But, you know, if it is only a few weeks, that'll still be a huge improvement — a huge change from what we've been doing in the past.
A
The
reads
the
expansion
is
for
folks,
who
don't
know
we.
We
have
resumed
work
on
that
in
earnest,
so
progress
is
being
made
on
that,
but
it's
not
you
know
it
is
not
weeks
away
from
integrating
by
any
stretch
of
the
imagination.
Oh
thank.
A
You
other
questions
about
duvet
or.
A
2.0,
all
right
cool
so
on
to
our
next
question
about
versioning
gabriel,
are
you
on
you?
You
mission
wanted
to
talk
about
semantic
versioning
with
gfs.
E: Yeah, I'm here. Well, so obviously people have probably taken a look at the definitions for semantic versioning and such before, and I just wanted to bring up the topic, since there was a little movement here on branching off this patch-series branch early — which was lovely to see, thank you very much. But I wanted to kind of trigger a discussion on what we merge into a patch branch like that.
E: Where are we going to draw the line between bug fixes versus improvements versus new features, in the context of ZFS? I'm not a developer of ZFS in any earnest — I'm mostly a user — but I wanted to express my interest in having a reasonably stable version that just updates to support new kernels and such, from a user perspective, for deployments.
H: If we were to do semantic versioning as it is written, under what circumstances would we have a backwards-incompatible major release? To get to 3.0 in that scheme, we would be saying that a 3.0 release would not be compatible in some way with the 2.0 release — which doesn't feel like something that we would like to do too much, I would think.
E: I'm not disagreeing that semantic versioning doesn't necessarily map perfectly onto the way a file system like this works, but I just wanted to discuss what goes into this patch-series branch. For example, the one that's just been opened up: is a performance improvement, or something that's fixing a performance regression, a minor patch or not? I don't know the answer to that; I just wanted it to be discussed in a larger forum.
A: Yeah. Just to set the frame a little bit: the idea of semantic versioning is a bit of language lawyering, you know, and it's easy for folks to say, "oh, but we aren't going to make incompatible API changes, so let's change the definition of what that is" or whatever — which I think gets us further from doing things that are actually useful. And I also hear you asking about what patch releases contain.
A
When
would
we
bump
this
version
versus
that
version?
I
think
that's
a
very
that's
like
definitely
a
useful
discussion
to
be
to
have
or
a
more
useful
discussion
than
like,
let's
adopt
symmetric
versioning,
because
you
know,
in
my
opinion,
like
it
does
matter
like
it.
Like
the
question
of
what
we
put
into
patch
releases
and
what
branches
do
we
maintain
in
parallel
for
how
long
etc?
Are
I
mean
those
are
fundamental
to
the
project
and
having
releases
at
all?
A: So I think maybe it would be helpful to first discuss that. Brian, maybe you want to talk about what the approach has been to the patch releases and stuff?
D: Sure. So the policy up to now has basically been that we have major release branches — we've had 0.6, 0.7, 0.8, and we're about to jump to 2.0 — and the policy for them has pretty much been: no new features, no on-disk format changes if we can avoid it, if possible. So within a major branch you're compatible — you should be able to move forward or back within a branch — and then fixes for kernel build issues —
D
That
kind
of
thing
go
in
performance
changes
if
they're,
really
important
or
critical,
and
not
that
invasive,
but
a
lot
of
it's
kind
of
been
on
a
something
either
a
case-by-case
basis,
sort
of
thing.
You
kind
of
look
at
a
change.
So
if
you
can
make
a
good
case
for
why
something
needs
to
be
in
a
release
branch-
and
it
doesn't
break
something
important,
it's
something
we've
considered
in
the
past
and
I
I
think
that's
kind
of
the
policy
we'd
like
to
have
going
forward.
At
least
I
would
like
to
see
going
forward.
D
We
can
have
a
2.0
branch
and
it
can
be
open
to
pretty
much
everything
but
major
on
disk
format.
Changes
right.
That's
going
to
break
compatibility
there
we'd
like
to
keep
the
changes,
small
and
simple
as
possible,
but
you
know
if
there's
a
reason
for
something
to
be
there,
it's
something
we
can
consider.
D
Yeah
people
want
something
stable
that
they
can
run,
that
they
know
they
can
upgrade
and
it
won't.
You
know
it's
an
easy
thing
for
them
to
track,
and
then
you
know
the
cadence
of
those
things.
That's
we've
been
shooting
for
a
year,
but
that's
all
stuff
for
discussion
or
exactly
like
what
the
subversions
mean.
I
don't
know
that
we've
thought
beyond.
You
know
two
dot
whatever.
What
those
second
and
third
digits
exactly
mean
we
could.
We
could
define
what
that's
going
to
be
in
like
the
cadence
for
releases.
D
If
people
have
thoughts
about
that,
we've
been
doing
it
about
every
three
months
at
the
moment,
which
maybe
isn't
quite
frequently
enough,
based
on
how
fast
the
curl
moves,
but
it
is
a
file
system,
so
you
know
we
want
it
to
be
stable
and
well
tested.
So.
E: You know, I have a generalized feeling that the most minor of releases could be more frequent; I think we just kind of mentioned that in some passing way. Particularly when bugs creep up in, say, not the core code but the systemd stuff or something like that, that kind of patch could probably come out a little bit quicker, relative to the core code, and be released and fixed.
A: That is just a bunch more work. But if we wanted to basically say there's one branch for 2.0 — and, if the current release is 2.1, the next release might be 2.2, or maybe 2.1.1, depending on whatever content happens to be in there, but it's just one train of increasing numbers — I mean, that might be reasonable.
D: I think that could be manageable. Going back to the branch we just created against OpenZFS — maybe I should talk about that a little bit more. We did it in order to make this process easier for us. I mean, the reason we've had such long delays between point releases is that it takes a lot of work to pull all those fixes together, backport them, get them tested, and make sure everything's stable. So it's been a couple of months between them.
D
I
was
talking
to
tony
hutter
about
this,
and
we
created
a
staging
branch
going
forward,
we're
going
to
try
to
use
for
people
in
the
community
if
they
want
to
see
a
patch
in
the
next
point,
release
to
open
a
pull
request
against
that
staging
branch,
and
then
we
can
review
it
and
get
it
tested
and
merge,
but
having
being
able
to
help
allowing
the
community
to
help
us
get
those
patches,
ported
and
merge
should
help
speed
up
the
process,
at
least
what
we're
thinking.
We're
willing
give
a
try.
D
So
I'm
hopeful
that
we
may
be
able
to
see
a
little
faster
cadence
and
then
we're
also
working
to
automate
our
process
for
testing
and
building
packages
and
more
of
that
kind
of
thing.
E
I
think
you'd
have
quite
a
bit
of
interest
in
on
those
branches
to
especially
for
getting
for
supporting
the
newer
kernels
people
to
port
that
stuff
early.
D
And
I
think
merging
it
all
into
existing
branch,
even
if
it's
not
tagged
is
helpful
for
people
too,
because
they
can
see
what's
queued
up
what's
merged,
they
can
start
testing
that
stuff.
It
may
not
be
exactly
that
the
staging
branch
gets
merged
and
becomes
an
exploit
release,
but
it
will
probably
be
pretty
close
right.
We
may
add
a
few
things
to
it,
or
testing
may
turn
up
some
things,
but
that's
the
intent.
H: I think SemVer's focus makes a lot of sense for library APIs, where there's only compatibility in one direction — that is, for the person who's linking against the library and making function calls. But with a whole operating system, or a file system like ZFS, you've got compatibility facing down, like the on-disk format, and compatibility facing up, with the library.
H: Get things out once a year as a stable major — I would probably just call them "the April 2020 release" or something — and then have clear, established policies for each surface. It seems that we really do have a pretty tight idea for disk format changes: if you're going to add something incompatible, it needs to be a deferred update that you adopt on purpose as part of an upgrade, so that switching to new software doesn't just foist it on you.
H
So
I
think
releases
of
this
many
different
kinds
of
software
that
are
all
kind
of
shackled
together.
It's
it's
difficult
to
put
assemble
tag
on
that.
I
think,
as
at
least
as
simva
is
written,
so
I
kind
of
agree
with
matt's
assertion
that
this
would
result
in
as
much
like
language
lawyering.
I
guess
as
like
trying
to
fit
that
mold.
I
think,
but
I
do
agree.
It's
extremely
important
to
have
like
policies
on
all
of
those
different
things
and
the
way
that
they
can
evolve.
B: On the topic of versioning and stuff: do we have any plans or ideas on how we could come up with release notes more easily, and so on? A couple of months ago I was writing an article for the FreeBSD Journal about what stuff is coming in OpenZFS 2.0, and there's not really a concise list already built anywhere — and there's a lot of commit history since 0.8.
D: Yeah, this is a struggle I always have. Every time a new major release comes out, I usually spend days writing release notes and going back through the commits. I would love a better system for this, but the only one I have at the moment is the commit history, and proposals for how to do it better would be welcome.
D: In general it's not been too bad with OpenZFS. It takes time to do, but the commit messages are detailed enough that usually you can go through the commits and figure out why a change is there, what it does, and how important it is. So it's just a matter of spending the time to do it.
B: Yeah, I was wondering if there's something we can do, even just going forward — maybe using the GitHub projects interface or whatever to just keep a running list, so that once a month, maybe, somebody can go through and say: here are the things that happened this month that we'll want to mention in the next release notes. That would amortize the cost of going through the entire commit history.
J: You know, a project sounds good. systemd has the NEWS file in the master branch, which is updated every time a feature is merged to master. So whenever you go to master, you have the list of features added since the last version, and every time they release a new version, they just copy and paste that portion of the NEWS file.
J: It's a simple text file; everyone with commit rights to master has the ability to update it, so it doesn't require too much extra work to maintain.
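
For illustration, a hypothetical OpenZFS NEWS entry in that style (no such file exists in the repository today) might look like:

    CHANGES IN THE NEXT RELEASE:

        * zstd: Zstandard added as a new compression algorithm.

        * dRAID: new distributed-parity vdev type.

    (Entries would be appended as features merge to master, then copied
    into the release notes when the release is tagged.)
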
D: So then the idea would be that, with any major new feature or change, we require an entry in the NEWS file, basically describing it.
B: Yeah, we do something kind of similar in FreeBSD with the UPDATING file, although it's usually only for things that are going to cause you trouble when you're upgrading or something. But yeah, that concept maybe makes sense as a way to do it, and, like Matt said, the fact that we can touch it up afterwards means it's not a big deal if something gets missed.
A: Yeah, cool. I think we've maybe strayed a little bit from the last topic, about semantic versioning, but any more questions on this topic before we move on to the next one?
A: The next one is from — I don't want to mispronounce your name, but, okay —

K: Yes — can you hear me? You can just call me —
K: So, there was some discussion recently on the GitHub issue list for OpenZFS. The thought was that right now the L2ARC caches both most-frequently-used and most-recently-used data and metadata, and the user is basically given the choice, through a dataset policy or a feature flag, to select between data-and-metadata or just metadata — but this still covers both most-frequently-used and most-recently-used buffers.
K: So the question was whether it would be useful, as a feature, to give the user the choice to select between MFU and MRU, or only most-frequently-used, data — based on the fact that sometimes the user might copy a large file and they wouldn't want it to be cached in the L2ARC; or, in the case of a backup via zfs send, similarly, perhaps you wouldn't want the L2ARC to be filled with that type of data and metadata.
K: So, given the interest on the draft pull request, I would like to hear some thoughts from the members that are present right now, or some additional feedback. It is interesting that Richard Elling mentioned something I was not completely aware of: in a specific scenario, if we are not caching MRU data in the L2ARC, we might have the case that the MFU data there will gradually decline over time — which is a very interesting aspect.
A: I think the question about whether it's a module parameter versus a dataset property kind of comes down to how we would explain to end users, or developers, or sysadmins, what to set it to and why — like, when would I want to change this from the default setting?
K: The most appropriate scenario I have in mind is that you have a dataset that you back up pretty often — presumably by zfs send — and you wouldn't want data from that backup to be cached onto the L2ARC. This would eventually mean that the most reasonable way of implementing this would be as a dataset property.
A: Well, in that specific case there are kind of two other solutions. One is, if you're using zfs send: it is not included in the L2ARC, and it is not included in the ARC either, as of a couple of months ago when I made that change — zfs send doesn't add things to the ARC or the L2ARC anymore. But for other cases — you could be scanning the data for other reasons, or using different backup software that does — that could potentially cause it to go in there.
A
That
we
would
need
to
be
able
to
clearly
explain
what
that
nuance
is,
so
that
people
can
make
a
good
decision
about
it
in
general,
like
if
it's,
if
it's
kind
of
hard
or
impossible,
to
explain
to
somebody
like
here's,
what
you
need
like.
A
If
I
can't
ask
you
a
simple
question
that
you
can
answer
that
tells
me
that,
then
I
can
tell
you
what
what
setting
you
should
use
for
the
property.
Then
it
probably
shouldn't
be
a
property.
It
should
probably
be
something
that
either
the
system
controls
automatically
or
we
just
provide
a
good
default
and
then
like
there's
a
module
parameter
for
people
that
are
that
can
put
in
the
effort
to
like
understand
all
the
details
about
how
the
code
works,
to
figure
out
what
works
best
for
them.
That's
my
perspective,
at
least.
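
If the tunable route wins out, a declaration in OpenZFS's ZFS_MODULE_PARAM style might look like the sketch below; the l2arc_mfuonly name is an assumption taken from the draft pull request discussed here, not settled API.

    /*
     * Sketch: a module-level tunable rather than a dataset property.
     * The name l2arc_mfuonly follows the draft PR under discussion and
     * should be treated as illustrative.
     */
    static int l2arc_mfuonly = 0;

    /* ZFS_MODULE_PARAM(scope_prefix, name_prefix, name, type, perm, desc) */
    ZFS_MODULE_PARAM(zfs_l2arc, l2arc_, mfuonly, INT, ZMOD_RW,
        "Cache only MFU data from ARC into L2ARC");
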
L: I agree. I think the problem is that it's hard to reason about from an end-user perspective. If there's a heuristic or something that we would want to employ within the ARC to say, hey, we always kind of favor this type of behavior —
L
To
avoid,
like
you
know,
caching,
just
randomly
you
know
cache
data
that
may
never
be
seen
again.
That's
something
that
we
can
do,
but
I'm
not
sure
that
it
would
make
sense
to
kind
of
expose
that
out
because,
like
matt
was
saying,
I
don't
know
how
you
would
say.
Oh
yes,
I
want
to
make
sure
that
you
know
that
this
type
of
query
doesn't
end
up
in
my
l2
arc,
but
it
will
end
up
in
the
arc.
You
know.
So
it's
it's
hard
to
kind
of
like
differentiate
between
the
two.
A: "Oh, you know, most of the L2ARC is being used for this type of thing — it's used for this dataset, and it got written because it was on the hit-once, you know, MFU list, but I don't want it to be in there; so, therefore, I can turn that property to something else." But I think we don't even have the observability today of which datasets' data is in the ARC or the L2ARC — which is, you know, not great.
A: It's hard to match to the business case, for sure. Intellectually, I can dream up stuff, but matching it to a business case is different. It's great that you're looking at this — I mean, I would love to see more work on ARC and L2ARC caching policy.
A: I think that, kind of like Richard was implying, it's really hard to know what impact a small change will have on the overall behavior of the system, and then communicating that to the user is even harder. So we've kind of sidestepped that by just saying: the ARC is just going to do what's best for you, don't worry about it, it'll do something nice. And because of that, I think, people haven't realized, for example, quite how poor the L2ARC eviction policy is.
A
So
like
I,
I
would
love
to
see
more
work
in
this
area
and
and
more
investigation.
I
think
we
would
need
like
a
bunch
of
data
before
we
could
really
tell
users
what
they
should
be
doing
with
this.
I
think
that,
if
we
can
kind
of
show,
if
we
just
want
to
change
the
default
to
be
like
yeah
like
now
they'll
to
work
only
feeds
from
the
things
that
have
been
hit
more
than
once,
I
mean
that
that
might
be
reasonable.
A: But aside from that, observability today is just: you can observe the hit rate, you can observe how much stuff is in there, but you don't know what's in there, or why, or why it's being evicted, or anything about that. So with changes to any of those kinds of policies, whatever we do, we do the best we can.
B: Yeah — I wonder if we could even just start with some new kstats that track, you know, this many bytes came from the hit-once versus the hit-many list when they were fed into the L2ARC, or something.
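
A sketch of what such counters could look like; the counter names are hypothetical (no such kstats exist in the code being discussed), and C11 atomics stand in for the kernel's atomic helpers in this user-space sketch.

    /*
     * Sketch: hypothetical counters for how many bytes fed into the
     * L2ARC came from the hit-once (MRU) vs. hit-many (MFU) lists.
     */
    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stdint.h>

    static _Atomic uint64_t l2arc_feed_mru_bytes;   /* fed from hit-once */
    static _Atomic uint64_t l2arc_feed_mfu_bytes;   /* fed from hit-many */

    static void
    l2arc_count_feed(bool from_mfu, uint64_t size)
    {
        if (from_mfu)
            atomic_fetch_add(&l2arc_feed_mfu_bytes, size);
        else
            atomic_fetch_add(&l2arc_feed_mru_bytes, size);
    }
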
M: And just out of interest, what is the CPU budget that we can spend on making the caching policy for the L2ARC more intelligent? Did anyone do measurements on that, or is it just okay?
A: The CPU that we're spending on it now — I mean, it's just a bunch of lists, right, and all the operations on them are constant time; and now that they're multilists, it fans out over CPUs — it scales with CPUs pretty well.
A
The
amount
of
like
algorithmic
complexity
is
like
almost
zero,
but
the
you
know
the
per
operation
costs
can
be
quite
significant
in
some
cases
like
if,
if
the
cash
eviction
rate
in
terms
of
like
blocks
per
second
is
high,
then
you
know
we
definitely
burn
a
lot
of
cpu
doing
that,
but
it's
mostly
not
like
determining
what
to
evict
it's.
Just
like
the
mechanics
of
doing
the
eviction
and
dealing
with
like
the
arc
buffers-
and
you
know,
kim
and
malik-
came
in
free
that
kind
of
stuff.
So.
A: But there's probably some room for spending some more CPU on that.
M: I just had a thought: if users want to have a customizable policy, then, I don't know, maybe just insert some Lua code that says, "okay, everything in this directory, I want to do that." I was just thinking — is that possible, or would it add so much CPU overhead to the read, write, and eviction paths that it wouldn't be feasible at all?
H: I think step one is some kind of scalable statistics-gathering mechanism, which would inform subsequent decisions on whether we need to do anything else at all — and also, if we do do something else, would let us test it to see if it's better. So I think that's probably the first thing to do.
B: Yeah, I think we don't have enough tests for the L2ARC currently. I think I found a bug where the L2ARC threw away everything you read from it — pretty badly.
B: Right — once you had something fed out to the L2ARC and you'd try to read it back in from the L2ARC, it would compare the checksum incorrectly. Even though the data was right, it was using the wrong size, calculating the checksum not on the block you were trying to read but on only a fraction of it — or more of it than you were reading — and then it would throw the data away and read it from the real vdev.
K: Although I think that was only in the case where the ARC was not compressed, right? — Yes, exactly; it was only in that case. So it wasn't a common case, but —
A: Yeah, let me just wrap this one up by saying: observability would be great, and I think that applies to the ARC as well as the L2ARC — obviously everyone is using the ARC and not everyone is using the L2ARC. So, you know, even just starting with observability of the ARC: how big is the ARC, what is in it, when are we getting hits on it?
A: Joshua, go ahead with the dnode range lock stuff — range trees.
I: Yeah, I've been seeing crashes on the debug kernel, and on further investigation it looks like the range tree destroy path has been unsafe since forever. Essentially, as the range tree is being torn down, we have to drop the lock on it, which lets other threads come in and observe its state — and nobody is supposed to touch it once you start tearing it down. I was hoping to get someone to take a look at it.
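
The problematic shape, roughly, is a teardown loop that drops its lock mid-destroy. A condensed user-space sketch of that pattern (not the actual range-tree code):

    /*
     * Dropping the lock mid-teardown opens a window where another
     * thread can lock the structure and observe, or use,
     * half-destroyed state.
     */
    #include <pthread.h>
    #include <stdlib.h>

    struct node { struct node *next; };
    struct tree {
        pthread_mutex_t lock;
        struct node *head;
    };

    static void
    tree_destroy(struct tree *t)
    {
        pthread_mutex_lock(&t->lock);
        while (t->head != NULL) {
            struct node *n = t->head;
            t->head = n->next;
            /* BUG: unlocking here lets a concurrent reader in. */
            pthread_mutex_unlock(&t->lock);
            free(n);
            pthread_mutex_lock(&t->lock);
        }
        pthread_mutex_unlock(&t->lock);
        pthread_mutex_destroy(&t->lock);
    }
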
A: Yeah, that's probably me, or maybe George. I can take a look at this.
A: Yeah — you just have to add it to the doc; there's one open.
A: Cool — yeah, we're almost out of time. The next meeting will be four weeks from now, and I think it will be at the earlier time —
A: — the 15th, at nine o'clock Pacific time, I think. And the conference is coming up; I hope that you can all make it, and get in touch with me if you would like to lead a hackathon session.
L: Hey Matt, could I steal like 30 seconds from this group? I just opened up an issue on OpenZFS — I wonder if other people have seen kind of weird crashes. I think it affects only Linux, but it's an old commit that is in master today; it's been around since late last year. It seems like there's a race accessing the inode field of file structures, and we've kind of been seeing it for a little —
L
While
I
did
finally
find
the
commit
that
kind
of
introduced
this,
but
I
don't
know
if
others
have
seen
kind
of
strange
crashes
in
these
cases.
It's
not
even
like
an
ncfs
code.
It's
like
you,
know,
kind
of
the
kernel
kind
of
accesses
and
I
know
it
and,
and
it
blows
up,
but
it
presumably
like
in
our
case,
because
we're
running
cfs
root,
obviously
that
there's
zfs
is
involved
in
some
way,
but.
L: Yeah, we'll keep digging into it. At least we know when it came in, but it's something to keep in mind if we have a release being planned that might be impacted by this. Okay, thanks.
B: Was the code actually introduced there, or was it just moved? This looks like when a bunch of code got moved into the os/linux directory.
L: I could not reproduce it before this at all, so I think it really does seem like a race, given that there are instances we're seeing where the inode structure is null, and other instances where the inode structure was null at the time the instruction was executed but, when you look at it in the crash dump, it's been filled in. So it's almost like —
L: — you know, we need some kind of memory-access barrier there in the code, one that maybe just happened to exist before and got missed as part of the restructuring of some of this logic. But I haven't gotten to the bottom of that yet.
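
If that theory holds, the bug class is a missing publish/consume ordering on the pointer. A self-contained sketch of the acquire/release pattern that prevents it — purely illustrative, using C11 atomics where the kernel analogue would be smp_store_release()/smp_load_acquire(); this is not the actual ZFS fix, which is unknown here:

    /*
     * Sketch: publish a pointer only after the data behind it is
     * initialized, so a reader never sees the pointer before the data.
     */
    #include <stdatomic.h>
    #include <stddef.h>

    struct inode_like { int filled; };

    static _Atomic(struct inode_like *) shared_ip;

    /* Writer: initialize the object first, then publish the pointer. */
    static void
    publish(struct inode_like *ip)
    {
        ip->filled = 1;
        atomic_store_explicit(&shared_ip, ip, memory_order_release);
    }

    /*
     * Reader: acquire-load; NULL means "not published yet" and must be
     * tolerated rather than dereferenced.
     */
    static struct inode_like *
    consume(void)
    {
        return (atomic_load_explicit(&shared_ip, memory_order_acquire));
    }
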