From YouTube: January 29, 2019 OpenZFS Leadership Meeting
Description
We discussed status updates on several features, and also the future of the OpenZFS github account.
Agenda and meeting notes: https://docs.google.com/document/d/1w2jv2XVYFmBVvG1EGf-9A5HBVsjAYoLIFZAnWHhV-BM/edit?ts=5c34d12f
A: All right, welcome. We're at a new, different time today, two hours earlier than usual, and we're three weeks after the previous meeting due to some schedule changes, so I'm glad so many folks could make it. Hopefully this works better for folks in Europe. I think the next meeting will be back at the regular time; we'll try to alternate or switch between the times, since I know this one doesn't work for folks in Asia. The next meeting is going to be on February 26th; see my emails about that as well. Cool. So for today's agenda, we have a bunch of items; most of these are updates on previous projects and things that are in progress.
B: It's been broken up into smaller commits a couple of times to try to make review easier, and right now I have divided the work into chunks which can be independently reviewed, each on the order of a couple thousand lines. We still need people to review these chunks. A couple of people have taken a partial look at it on GitHub, and I've talked to some people on Slack who said that once release stabilization is done...
C: The goal is to get 0.8 out as soon as possible, so I would think redacted send/receive would miss it, because we want to get it out and we wouldn't want to block waiting on it. The hope is to get 0.8 out this month or early next month. The only thing holding it up is the TRIM work that we would like to get in there for FreeBSD, so that they aren't carrying those patches. So, soon, I would hope.
A: A year and a half, something like that. I think the issue was that, because both this and encryption made major changes to the send/receive code, restructuring it, we thought we were being polite by rebasing our code, the redacted send/receive, on top of encryption, so encryption could go in first and not have to deal with the restructuring that we were doing. But encryption is taking a lot longer than expected to get into illumos. I think the version that's in that pull request is epsilon from being ready to be integrated; it would just need the same sort of code reviews that are happening on Linux, and the code is pretty similar, pretty much the same. How much difference is there between the two, once you're on top of encryption?
B: The differences are very small once you're on top of encryption. There are, I think, a couple of small changes where it has to interact with some platform-specific stuff involving threading and the like, but those changes are dozens of lines; it's not very much. Cool.
C: A month or two, I would say, tops. I would say I would like to have it out by the end of February, but probably it will be March, because it is pretty much ready. There were some other changes we talked about, some bug fixes; there's that one encryption issue Tom was working on that we need to get in. But, okay.
C: I picked up the work that Tim Chase was doing on TRIM, which was based on the work that Nexenta did, and basically I've just spent the last week pushing it forward, bringing it up to date with all the new stuff that we've added to ZFS in the last couple of years. The original implementation was written before a lot of this functionality existed, so it just needed to be updated to take advantage of a lot of the new stuff, like the new-style I/O throttle and the per-leaf-vdev queues, that kind of thing. I've done most of that mechanical work to get it ready, and I'm hoping to have something ready for review that takes advantage of all that by the end of the week. So I'm beating the bushes for reviewers; if anyone is interested and can help with that, I can point them at the pull request when it's ready.
C: As part of that restructuring, I've tried to make the code basically consistent with what was done for vdev initialize, as much as possible, so as to not have duplicate functionality. So this is in the spirit of what Nexenta did, but cleaned up to be realigned with the latest stuff.
C: Basically, all the basic functionality is still maintained in the same way. The only thing I did that would really be user-visible is that I aligned the CLI options and the status reporting with how vdev initialize works, because they're basically the same operation; it seemed reasonable to me that they should be presented the same way to the user. Well...
D: Yeah; I'm in Silicon Valley with hideous Internet, and I'm using a phone to dial in, and it turns out that I unmuted myself in the app and that didn't follow through to the phone. So, we did a bunch of back and forth. There have been a lot of different proposals about what the UI for that feature would look like, and a lot of different proposals about how it should be implemented. I think we have collapsed down to a reasonable proposal that most people are either happy with or willing to sign off on.
A: I would say: we had a big discussion about it when we first brought it up in this meeting, so why don't you send it out in an email, which will give folks a chance to take their time reading it, and then, if there are questions or contention, we'll have a discussion about it at the next meeting. Okay.
D: One thing that didn't really get considered is boot pools. Somebody on the OpenZFS mailing-list thread brought up the point of boot pools and it never really went anywhere, but at least in releases past there has been a fairly significant divergence between what you could boot off of and what you could import, and I don't know what sort of monkey wrench that throws into the proposal, if we want to have separate handling for boot pools.
A: I mean, you're basically talking about mostly relying on the bootloader, and what the bootloader supports, right? Yeah, I think that's an interesting idea. I assume the idea would be to have some kind of way of saying: I want to create a pool that doesn't have any features enabled that can't be used by this version of the bootloader, or whatever. Yeah.
D: It's a subtle change, but I don't even know what the Linux landscape looks like, and of course the FreeBSD landscape changes quite rapidly, because you might think that not being able to boot off of a feature is kind of a bug, so people are consistently adding that functionality. Yeah.
A: In my opinion, not solving that problem right now is okay; I think that we should move forward with this even without a real solution to that. But it does sound useful, so if you, or someone else, wants to work on adding that to the proposal, then I think that's great as well. Okay, yeah.
A: [Someone showed] me [a pool] that can boot from both illumos and Linux, so it is possible; but booting is definitely kind of a special case, because there's so much operating-system-specific stuff going on there. For that reason, I don't think it's a core requirement that we solve it here. Okay.
F: Like, I know on FreeBSD previously we had a warning if you tried to enable a feature [the bootloader couldn't handle]: having it enabled but not active was fine, but if it was active then it was a problem, or whatever. And so if you set it on a system that had bootfs set, it would give an error or something. I don't know if that's something we could have a more formal version of, to warn a user when they try to do something [that would break booting].
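A minimal sketch of the kind of check being discussed, assuming we can read a pool's feature states and compare them against an allowlist of features a given bootloader understands. The feature names and the BOOTLOADER_SUPPORTED set below are illustrative assumptions, not any real bootloader's list:

```python
# Hypothetical allowlist of pool features a particular bootloader can read.
BOOTLOADER_SUPPORTED = {"async_destroy", "empty_bpobj", "lz4_compress"}

def check_boot_pool(features):
    """features: dict of feature name -> state ("disabled"|"enabled"|"active").

    In this sketch, enabled-but-not-active is treated as fine (as described
    above for FreeBSD); an *active* unsupported feature is what would make
    the boot pool unreadable, so those are what we warn about.
    """
    problems = []
    for name, state in features.items():
        if state == "active" and name not in BOOTLOADER_SUPPORTED:
            problems.append(name)
    return problems

pool = {"async_destroy": "active", "large_blocks": "active", "edonr": "enabled"}
print(check_boot_pool(pool))  # ['large_blocks'] -> warn before proceeding
```

A real implementation would hang this check off the feature-activation path rather than a one-shot scan, but the shape of the decision (active vs. merely enabled) is the same.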
A: So, the next two things I have on the agenda are exciting feature removals. I proposed both of these, so I'll give you some background. These have both been posted on the mailing lists, but I want to give folks another chance to chime in. The first is removing dedup ditto blocks. When you're using dedup, by default you can have a million copies of a block and we store one block for it, so you get, like, a million-to-one space savings. You can optionally set the dedupditto property on the pool (zpool set dedupditto=100) and that basically tells ZFS: if there are more than a hundred references to this deduped block, then actually keep two copies of it on disk, rather than just the one. And actually I think there's some exponential scale-up, so if you have 100 times 100 references to the deduped block, you get a third copy.
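A rough sketch of the scale-up just described: with dedupditto set to N, a second on-disk copy is kept once the refcount reaches N, and a third once it reaches N squared (ZFS caps block copies at three). The exact thresholds here reflect my reading of the described behavior, not a line-for-line port of the ZFS code:

```python
def ditto_copies_needed(refcount, dedupditto):
    """How many on-disk copies a deduped block gets, per the dedupditto rule."""
    if dedupditto == 0:          # property unset: always a single copy
        return 1
    copies = 1
    threshold = dedupditto
    # Each time the refcount crosses the next power of dedupditto,
    # one more copy is kept, up to the 3-copy cap.
    while refcount >= threshold and copies < 3:
        copies += 1
        threshold *= dedupditto
    return copies

print(ditto_copies_needed(5, 100))      # 1 copy
print(ditto_copies_needed(150, 100))    # 2 copies (crossed 100)
print(ditto_copies_needed(10000, 100))  # 3 copies (crossed 100*100)
```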
A: Most folks are probably unaware of this functionality, and it turns out that it didn't actually work right, because the extra copies that were kept as a result of the dedupditto setting were not correctly scrubbed. So if you happened to read the block, you would get it, and it would do the correction then; but if you did a resilver or a scrub, say, then you could lose it, and it wouldn't actually restore the dedup ditto block.
A: So thanks to Brian for figuring that out, and then also actually fixing the issue. But when this issue was brought up, I was reminded of how this code is very little used, and apparently it hadn't been very useful until Brian made that fix. So my proposal was to leave the property intact.
A: So if anybody has scripts that are setting it, or whatever, that will continue to, you know, exit zero, but it wouldn't do anything anymore: we wouldn't create any new ditto blocks, but the software would still know how to deal with existing ditto blocks, and it would free them correctly when we're done with them, et cetera. It's a rare pull request that actually removes core kernel code, though it's not a huge amount of code removal; but I thought it was probably worth doing, just because we've seen the bugs and test failures it has caused, and we'd rather not have to deal with that on a continual basis. I think most of the feedback that I got so far was positive. I got what I would classify as neutral feedback, maybe he would disagree, from Tom Caputi; I don't know if you're on the call, I don't see his name here. But I just want to give anybody else the opportunity to thumbs-up or thumbs-down: if anybody is actually using this and thinks that it's useful for them, I would love to hear about that.
A: [The second removal is zfs remap, a command] that would let you say: hey, I want to proactively go and touch all the indirect blocks in this file system, causing them all to be rewritten to point directly at the new locations; and then, in theory, once nobody is using the mapping anymore, we can get rid of it and reduce the memory. In practice, for this to happen, first off you have to not have any snapshots, or you have to remove all the old snapshots, because we aren't modifying snapshots; we're only modifying the file system.
A: And with some later changes that I made to device removal, we're now able to unmap big chunks: even if you have a lot of fragmentation, we're able to store the mapping with, usually, a much smaller number of entries that cover larger sections of the disk. So the end result is that you use a lot less memory for the mappings, and the problem this is solving is less of a problem. And second, because you're mapping several logical blocks with one entry, we may want to look at... I think that on ZFS on Linux we kept most of the code there, but we did totally remove the command-line option. So if you type zfs remap, it'll just say: that's not a command, here's the usage message, and exit nonzero. So, yeah, we could consider doing something that's a little bit more backwards-compatible.
A: Do you folks have thoughts on this? I guess there are kind of three options, for illumos and FreeBSD. One is: leave it in there, status quo. Two is: remove the functionality but leave the command-line option in, so that any scripts or anything will still accept it, and if people type it in, they'll get a message telling them, hey, it's deprecated and doesn't do anything. And three is: just totally remove it, like it was never there.
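The difference between options two and three can be sketched as a tiny command dispatcher; the subcommand names and messages here are illustrative stand-ins, not the real zfs utility's code:

```python
import sys

# Option two: keep accepting the subcommand so existing scripts don't
# break, but make it a warning no-op that exits 0. Option three is the
# fall-through case: an unrecognized command, usage error, nonzero exit.
DEPRECATED = {"remap": "the remap subcommand is deprecated and does nothing"}

def run(argv):
    cmd = argv[0] if argv else ""
    if cmd in DEPRECATED:
        print(f"warning: {DEPRECATED[cmd]}", file=sys.stderr)
        return 0                      # scripts that call it keep working
    if cmd in {"list", "get", "set"}:
        return 0                      # stand-ins for real subcommands
    print(f"unrecognized command '{cmd}'", file=sys.stderr)
    return 2                          # as if the command was never there

print(run(["remap", "tank/fs"]))  # 0, with a deprecation warning on stderr
print(run(["frobnicate"]))        # 2, usage error
```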
A: The caveats are: it would be pretty hard to get a big benefit if you have small block sizes and fragmentation. If you're using larger blocks, certainly like one megabyte or above, then it would be easier to see a benefit; even with the default 128K and large files you can see a benefit, with the caveat that you have those to begin with. So, with two caveats: if you have large blocks, or mostly-128K blocks, and your pool is not that fragmented, then probably the mapping is just not that big to begin with. So, yeah, it can do its job and save you the memory; it's just that the memory you were using was not that bad to begin with. Okay.
A: That's why, yeah, you can save, like, tens of megabytes of memory; I think it's something on the order of tens of megabytes per terabyte removed, in the kind of good, normal case. So you could save that memory if you fall into this category, which is not implausible: not using snapshots and using the default record size. But it was really designed for an older version of device removal with super-fragmentation, where we were using lots of memory.
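A back-of-envelope version of that "tens of megabytes per terabyte removed" figure: mapping memory is roughly (number of entries) times (bytes per entry). The entry size and average mapped-segment sizes below are illustrative assumptions, chosen only to show how the condensed mapping changes the arithmetic:

```python
def mapping_memory_mb(removed_tb, avg_segment_bytes, entry_bytes=24):
    """Estimate indirect-mapping memory for a removed device, in MiB."""
    removed_bytes = removed_tb * 2**40
    entries = removed_bytes // avg_segment_bytes
    return entries * entry_bytes / 2**20

# Condensed mapping, ~1 MiB average segments: tens of MB per TB removed.
print(round(mapping_memory_mb(1, 2**20)))       # 24

# One entry per 8 KiB block (the old, highly fragmented case): ~3 GB.
print(round(mapping_memory_mb(1, 8 * 2**10)))   # 3072
```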
A: And, yeah, obviously with the caveat that there's no cost to using it; which, unfortunately, there is, given that there's a known bug that can, I think, deadlock or panic if you do the zfs remap while you're doing a bunch of other operations on the file system. I think it's linked to receive: it was like, you do your remap, then you do a receive, and they both hit at the wrong time.
E: So I just want to give an update about the upstreaming of the log spacemap feature. For the people that are not familiar with it, there are two OpenZFS talks, one from last year and one from the year before; but, basically, the feature improves random writes on very fragmented pools.
E: Not just probably; I've actually started opening some PRs in ZFS on Linux, basically breaking the change down into multiple, you know, self-contained commits, and I've probably gotten the attention of some folks so far. But basically the update is that I'm aiming to have a pull request for the feature itself probably by the end of this week or the beginning of the next month.
E: Actually, I initially was working on opening the PR for OpenZFS, but then I decided to do [ZFS on Linux first]. Yes, so I can open one PR for the whole picture, together with the side changes, for OpenZFS, but it's really easier for reviewers to actually, you know, break it down into different commits; so, yeah, I was actually going to do that work afterwards.
A: Cool, all right; why don't you keep doing that, then. Any more questions on log spacemap? All right, then. Next, I'm going to add a last-minute agenda item; I didn't realize you were that far along. Sara, can you talk about fast clone deletion? It's another new feature that has an on-disk format modification; it's a performance improvement. Yeah.
I: So, yeah, kind of similar to Serapheim's: this is a feature that was initially developed here on illumos, so I've opened a pull request on OpenZFS, and then I'm also working on a pull request that I'll soon open on ZFS on Linux; there is not a huge amount of change between the two. Basically, it's a performance enhancement for clone deletion: it improves the performance of deleting clones, tying the cost back to the number of writes to the clone, as opposed to the size of the underlying snapshots.
A: The fast clone deletion is kind of neat in that it's all new code, right? It doesn't really interact very much with the existing code. It's just: oh, you're deleting a clone, and you have this new data structure on disk; great, we're going to do it this new way. And there's a whole bunch of code to manage that, but it doesn't really interact that strongly; there's just this little bit of code that hooks in, like: oh, you're freeing a block, look into this other data structure. Yeah.
A: It sounds like he might not be here, so I guess I'll just briefly say: I know Tom is working on a fix for encryption that I think is the last issue that we were seeing on the illumos encryption pull request. The issue is: if you mix raw receives and non-raw receives, and then raw receives again, then the MACs are wrong, the MAC being the cryptographic verification of the block.
A: So you would get, like, an invalid-checksum error. So he's working on a fix for that. It's a little bit involved, with some on-disk format changes, and he's working on how to make folks that are already using encryption aware of the change that's being made here; but I think for new pools going forward it's definitely very straightforward.
A: All right, so we're doing so great on time, I'm amazed. Two remaining items on the agenda. One, this is from the backlog: the question was about event and fault management frameworks, and I guess the question is, are we doing all different things on each platform, and should we unify them somehow? I'm not sure who added this, but I'm pretty sure that there are different frameworks on at least illumos, Linux, and FreeBSD.
A: At least on illumos, the existing framework is system-wide, so it's not just ZFS; so I think it would be challenging to make changes to that, and that may or may not be the case on the other platforms. So I'll leave the question, which was also raised in the chat, which was: should we port FMA to the others? I'm not really in a great position to answer that, but I'll leave it for other folks from Linux and FreeBSD to talk about.
H: I could speak to that, Matt. I think, yeah, Justin and I gave a talk like two years, maybe three years, ago, so that's a good point of reference. In the Linux code we emulated enough of FMA to get the functionality that the modules needed, like the diagnosis engine and a few things like that; there's a document, a readme, in that code.
G: For some context: the rough architecture of FMA is that there are a bunch of sources of ereports, which are basically little events that contain things like "this vdev had a block with a checksum error", or "this disk had an I/O error, but it recovered", things like that. And that all gets funneled through the sysevent pipeline into a daemon process, fmd, that runs as a user-mode process, not a kernel module, and that also collects events from, as Matt says, lots of other parts of the system. The thesis of that is that we can then correlate errors from, say, the disk subsystem, which is obviously not in the ZFS codebase, with things that happen at the vdev layer. But then there are, as Don says, a bunch of modules that respond.
G: So we tie the events together into cases and diagnoses, and then modules can take actions to automatically resolve cases, in response to groups of fault telemetry that are all correlated together; so they might spare in a device, they might offline a disk, things like that. But trying to do all of that just inside ZFS would mean that you'd leave out the opportunity to correlate with lots of other sources of telemetry in your operating system, which are often necessary for certain kinds of fault diagnosis. One thing that we've hit recently: you might have disks that are presenting I/O errors or checksum errors, but maybe you've got eight disks in your system that have those errors, and they're all behind one SAS expander, and it might actually be the SAS expander that's the problem. You don't necessarily want to wildly fault out a bunch of disks, which might cause serious pool redundancy problems, when the problem is somewhere else; and it's difficult to be able to make that determination automatically without correlating fault telemetry from multiple sources, of which ZFS is just one. So that's why FMA as a broader system is interesting to us. Some context.
A: Well, so I will close out the meeting with those. Those were a bunch of great, pretty much non-contentious updates, and I think that's good; that's a great use of this meeting. But I also want us to have more contentious, you know, more lively, discussions as well going into the future. And the last item that I wanted to throw onto the agenda today is the OpenZFS GitHub repo.
A: So, a couple of questions have come up around this and the future of it, given the context of development moving more towards ZFS on Linux and the other platforms taking changes from there. Let me give some background information: there were really two reasons that we created that repo; I created that repo. The second motivation was kind of the publicity value: for folks who aren't necessarily developers to see, oh, OpenZFS has a GitHub repo, so it must be really a thing, and I can see code there, and I can see activity, and that's good. And I think both of those are valuable, but really the first one, you know, helping people make technical contributions, is by far the more important, to my mind, for the project.
A
So,
in
my
opinion,
like
that
that
the
at
least
the
technical
need,
there
is
still
the
distal
value
like
people
are
slow,
making
changes
and
illumos
we're
talking
about
pulling
changes
from
other
platforms
even
more
in
the
future.
So
I
think
that
there
is
still
potential
value
there.
If
people
want
to
use
it.
A: You know, because Delphix is moving to doing more of our work on Linux, we probably won't have as much time to put into maintaining the OpenZFS-on-illumos stuff going forward. So we'd love to start a discussion with people about: how useful is this, and who, if anyone, would be willing to put in the work to keep it going, and to keep, you know, kind of greasing the contributions from outside of the illumos community straight into illumos.
G: For what it's worth, we're looking at taking on, within the project itself, some of the things that you're doing for us, this year. So, like, we're going to go get Jenkins infrastructure, or whatever it is, to do more visible, externally accessible, automated testing of things. We've got some bugs to go fix to make it easier to run certain kinds of guests on different virtualized platforms, to make the testing experience easier to orchestrate, I think. But, you know, we've got kind of a plan to get that stuff done in the first half of this year. So while we appreciate having the thing, and all the work that you've been doing, I think that it's on us to take that over in the first half of this year, and provide the facilities you're providing, and try to get the engagement to be sort of more directly within the people leading the illumos project itself, rather than this thing that you're sort of having to push in from the side. Basically, that's definitely something that we're looking at.
A: That's awesome; I mean, I think that's the ideal solution, that the illumos project itself becomes easier to contribute to. Yeah, and that's great to hear. Do you know if there's any more to that plan? I only heard you mention the testing stuff, which is great, but I know that the familiarity that people have with GitHub is also beneficial. Yeah.
A: Yeah; probably, you know, since that's like a publicity-value kind of thing, I would probably wait until illumos has made some progress on the things that you talked about, and we've made progress on making sure that changes that go into Linux get out to the other platforms and all that, so that we have more of a complete story there, before making kind of largely visible changes. But that's good to know. I wonder...
A: I think that we had a discussion about that a couple of meetings ago. At least at the time, the decision was basically: hey, let's make sure that everybody knows about this stuff, by sending email to the OpenZFS developer mailing list. I sent some emails to all the various lists, kind of letting folks know that that's where it would happen, and I did see a slight uptick in subscriptions to the OpenZFS developer mailing list after that.
E: What I wanted to emphasize, mostly, was people doing porting work, and making sure that, maybe even if they don't want to, or don't have time to, port the feature, they put some padding or reserved fields in things like the uberblock or other on-disk structures, just to make sure that, you know, we don't end up with some hard incompatibility between existing implementations. That's what I mostly wanted to emphasize, because I actually had to do some of this work.
E: When I was upstreaming the pool checkpoint, you know, we had added some fields in the uberblock, but there were also some changes from ZFS on Linux, for MMP, adding fields there; and it turned out fine, and everything was good. It would just be good to keep an eye out for things like that in the future. Yeah.
A: I think letting folks know, and then also making sure that we open pull requests on the other platforms too, yeah, to reserve fields and stuff like that. I mean, I know Delphix has been, like, almost burned by this several times, and we've also, like, almost burned other people on this several times; so we can definitely do better as well with this.
A
Don't
know
if
I
mean
in
terms
of
like
something
that
we
would
keep
it
up
in
a
repo
I
think
the
email
you
know
it
like
email
communication
is
like
that's
really.
The
first
step,
I,
don't
know,
maybe
maybe
makes
sense
to
have
something
where
some
of
these
structs
or
descriptions
of
them
or
like
and
like
a
text.
A
Description
of
the
on
this
format
is
kept
in
some
common
repo
and,
like
the
expectation,
is
you
modify
that
before
you
push
to
any
platform
you're
pushing
the
oddness
for
machines
to
any
platform
that
that
could
be
an
option
and
that's
something
that
we
could
like.
We
create
a
repo
for
that
under
the
opens.
The
efest
account.
You
know
any
time
that
somebody
is
willing
to
do
the
work
except
populate
it
to
begin
with
yeah.