From YouTube: December 2021 OpenZFS Leadership Meeting
Description
Agenda: ZFS Interface for Accelerators; Native encryption; https://docs.google.com/document/d/1w2jv2XVYFmBVvG1EGf-9A5HBVsjAYoLIFZAnWHhV-BM/edit#
A: All right, let's go ahead and get started. Welcome to the first OpenZFS leadership meeting post dev summit; I hope everybody had a great time at the development summit four weeks ago. We have a couple of great ideas on the agenda today.
A: The first one we were just discussing: folks who attend regularly may not benefit from this much, but there are a lot of people who can't attend, or attend sporadically, and the meeting notes are really useful to them to see what was discussed and decide when they need to go look and watch the video for the details of something that was interesting. Thanks a lot to Serapheim for taking those notes for the past several years.
A: I think he needs a little break; he's been at it for quite a while. So we're looking for volunteers, and thanks to Christian for volunteering to do that; that's much appreciated. And I'll send out those meeting notes, for folks who haven't seen them already; I send them out to the developer mailing list after the meeting.
A: All right, cool. Thanks a lot, Christian. So, the other thing: Jason Lee had some things to discuss about using ZFS with hardware accelerators, and I think, Jason, you said you had some slides you wanted to share to talk about your idea.
C: Alrighty, so hi everyone. Like Matt said, I'm Jason Lee from Los Alamos, and the plan for today is just to talk about the ZFS Interface for Accelerators (ZIA) and a companion piece called the Data Processing Unit Services Module.
C: This is going to be super high level and is meant more as a feeler to get people interested in our work, to see how many people are interested, and maybe get comments on it when I eventually open a pull request in the OpenZFS repo.
C: The motivating reason for why this work exists can be summarized by this graph, graciously provided by Brian Atkinson. At the lab we like to run RAIDZ2 pools with compression and [unclear] enabled; specifically, we like to run gzip as our compression algorithm. On this graph you can see that with no compression we get about eight and a half gigs a second of throughput across a large number of I/O threads targeting a single file, and slightly under that line we have LZ4 compression, which is at about 7.5 gigs a second.
C: Well, there's a massive drop-off, which we would prefer not to have, and the basic reason for this is that compressing data on a general-purpose CPU is just really, really slow.
C: Fortunately, there are devices called computational storage devices and computational storage processors which are currently being developed. They have hardware implementations of operations such as compression, checksumming, and erasure coding, which happen to overlap incredibly well with the features that ZFS provides; that makes ZFS an ideal candidate for offloading operations.
C: As a side note, I do know that there is code for Intel QAT offload. Unfortunately, that doesn't run on my...
C: So here is a diagram of the write pipeline with ZIA enabled. At the top is a highly simplified diagram of the ZIO write pipeline: on the left we get a ZIO plus its ABD; we send it to compress, do a bunch of checks, do the transform, and then pass the compressed data to checksumming and RAIDZ, and the vdev disk or vdev file code issues the I/O.
C: Data lands on storage, which in this case is NVMe. So, with ZIA enabled, when data enters the I/O write compress function, the ZFS shim layer will see: hey, ZIA is enabled, ZIA compression is enabled, this data is not offloaded; so let's offload the data onto the hardware accelerator and compress it. When this data is offloaded, the ZIA interface returns a handle for ZFS to keep track of. This handle will be stored in the ABD as a void star, so it's opaque: ZFS has no idea what it is and can't do very much with it except pass it around.
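To make the handle idea concrete, here is a minimal sketch in C of how an opaque accelerator handle might ride along in the ABD. The field name abd_zia_handle and the helper below are hypothetical illustrations, not the actual ZIA patch:

    /*
     * Hypothetical sketch only. NULL means the data lives in host
     * memory; non-NULL means it is offloaded, and ZFS can do nothing
     * with the token except pass it along the pipeline.
     */
    typedef struct abd {
        /* ... existing ABD fields ... */
        void *abd_zia_handle;   /* opaque token returned by the offload */
    } abd_t;

    static inline boolean_t
    abd_is_offloaded(const abd_t *abd)
    {
        return (abd->abd_zia_handle != NULL);
    }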
C: The ZIO is then passed to the checksum stage, and the handle is passed into the ZIA checksum function, which in turn calculates the checksum of the data. Similarly for RAIDZ: the data is split into chunks and the parity columns are calculated, and for RAIDZ, in addition to the ABD data itself, the RAIDZ row struct will also hold another ZIA handle.
C: And finally, the data is passed into the vdev disk or file pipeline and written out to storage.
C: So, during normal operations, data will never be brought back into memory, other than minor stuff like return codes or checksums (checksums are needed in block pointers); in general, the actual data in the ABDs will never return to memory. So there will not be the cost of constantly transferring data back and forth on the PCIe bus.
C: However, if there are errors in the operation, for instance the data is passed into the checksum function when you have requested Fletcher 4 and this hardware accelerator for some reason doesn't support Fletcher 4, then yes, the data will be passed back into memory and then offloaded later. But hopefully this is not a normal occurrence, so you shouldn't incur that cost too often.
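The fallback just described might look roughly like this sketch; every zia_* name and the abd_zia_handle field are assumptions for illustration, not the real interface, while zio_checksum_compute() is the existing ZFS entry point:

    /*
     * Hypothetical sketch: use the accelerator when it supports the
     * requested checksum; otherwise bring the data back into host
     * memory and fall through to the normal CPU path.
     */
    static int
    zia_checksum_or_fallback(zio_t *zio, abd_t *abd, enum zio_checksum alg)
    {
        void *h = abd->abd_zia_handle;

        if (h != NULL && zia_supports_checksum(alg))
            return (zia_checksum(h, alg, zio));    /* common case */

        if (h != NULL) {
            /* Rare case: copy accelerator memory back into the ABD. */
            int err = zia_onload(h, abd);
            if (err != 0)
                return (err);
            abd->abd_zia_handle = NULL;
        }

        /* Normal in-memory checksum path. */
        zio_checksum_compute(zio, alg, abd, abd_get_size(abd));
        return (0);
    }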
A: This is really cool. Can I ask a question about that previous slide? (Sure.) It looks like in the diagram here the ZIO pipeline is still the thing that's driving this, so every stage of the pipeline is still executed as normal, and it's just passing this handle along after the compression.
A: Then we go back to doing the pipeline; it goes to the next stage. Maybe the next stage is checksum, and then it's going to pass the handle to the ZIA callback or whatever. Do I understand that correctly?
C: Correct.
A: Okay. And how integrated is this with the ABD stuff? For example, let's say we add some other stage into the pipeline that needs to process the data, and maybe it needs to process it on the main host CPU by just calling the ABD iterate or ABD copy kind of functions. Is the ABD code then going to know that, oh, I have this ZIA handle, I need to copy it back from the external board memory into main memory so that I can do it? Or do we risk looking at the main memory, which is now out of date, because we don't have the compressed version or whatever?
C: For the ABD: if you want to bring data back, you pass a fully allocated ABD into the callback and, assuming this is implemented correctly, the data will be unloaded into the ABD according to whatever ABD type it is, whether it's linear or scatter.
A: Cool. Yeah, I guess my main concern is just that the normal case is going to be not having this external processing unit, and we want to make sure that we don't break the use of the external processing unit by adding something into the ZIO pipeline that you weren't thinking about.
A: And so it would be nice if there was a way, at the ZIO layer, to have some kind of error checking to make sure that, if there is a handle and you're trying to get the data, you have actually copied it back; some assertion that you already copied it back, or some way of checking that. You know what I mean?
C: So the current check I do is simply checking whether or not the ZIA handle is null: if it's null, it was not offloaded; if it's not null, it is offloaded.
A: Okay, so yeah: basically have some check where it's like, oh, you're trying to do abd_copy or abd_iterate or whatever; if the handle exists, then we explode, or we explicitly copy it back, or something like that.
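A sketch of that safeguard, building on the hypothetical abd_zia_handle from above: a check at the top of the CPU-side ABD accessors that either trips an assertion or explicitly copies the data back before anything touches host memory. ASSERT3P and VERIFY0 are existing ZFS macros; zia_onload() is the assumed copy-back call:

    /*
     * Hypothetical guard for CPU-side accessors such as abd_copy()
     * or abd_iterate_func().
     */
    static void
    abd_require_onloaded(abd_t *abd)
    {
        /* Option 1: explode in debug builds. */
        ASSERT3P(abd->abd_zia_handle, ==, NULL);

        /* Option 2: transparently copy the data back. */
        if (abd->abd_zia_handle != NULL) {
            VERIFY0(zia_onload(abd->abd_zia_handle, abd));
            abd->abd_zia_handle = NULL;
        }
    }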
A: Yeah, I'm sorry, I don't want to rattle on too much about this, but maybe just take the high-level point: this sounds really cool, and it sounds like you've integrated it really slickly with the ZIO pipeline at a high level. Just additionally think about how we can add safeguards to make sure that we don't accidentally get confused about where the data is and then operate on stale data or something because of that.
C: But, right, this presentation is just super high level: here, I did a thing.
A: Yeah, this is really neat; I like it. Feel free to go on to the next slide if you want.
C: Data... each stripe, or I guess row in ZFS terminology, is read in, and then the data is reconstructed using the data that was read and the checksums are verified; if that is successful, the parity is regenerated.
C: Eventually we issue any repairs that have to be made to storage through ZIA, instead of through the ZFS vdev disk or file functions.
A: Very, very cool, and very tricky. Just background info: the RAIDZ reconstruction is non-trivial and non-standard. Is this data processing unit like an FPGA that you've programmed to know how to do the reconstruction, that RAIDZ math stuff, in parallel? How does that even work?
A: Well, they're not, I mean, they're not like something that was in use before ZFS, as far as I know.
A: Oh, okay, so it's not... let's see. So you're just going to get the vendors to implement it? Like, you're going to send them vdev_raidz.c and be like: hey, put this math in hardware?
A: Okay, oh, that's cool. Yeah, I mean, that math, it's certainly not impossible. I know that vectorized routines have been written for a bunch of different vector instruction sets to accelerate the RAIDZ reconstruction, which is, you know, pretty complicated, especially when you get to RAIDZ3.
A: You know, with three disks dead, it's pretty involved.
C: Cool, so, right. The original plan was to write this shim and then have it link into an existing bit of glue code that talks between accelerators and the ZIA shim. Unfortunately, that was just not a good design. It worked to get me started, but it wasn't extensible: for instance, if we have multiple accelerators and we want to switch between them, we couldn't do that by linking ZFS directly to the accelerator glue code.
C: Instead, we created the Data Processing Unit Services Module, which acts as a registry. Vendors then implement what we call providers, which is the glue code that talks to their accelerators, and that allows us to turn accelerators on and off, switch between them, or do any number of other operations.
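As a rough sketch of the registry idea (every name below is a hypothetical illustration; the real DPUSM API may differ): a provider is a table of operations that the vendor glue code registers with the module, and ZFS calls through the registry instead of being linked against any one accelerator:

    /* Hypothetical provider interface. */
    typedef struct dpusm_provider_ops {
        void *(*offload)(const void *buf, size_t len);  /* returns a handle */
        int (*onload)(void *handle, void *buf, size_t len);
        int (*compress)(void *handle, int alg, size_t *c_len);
        int (*checksum)(void *handle, int alg, uint8_t *out);
        int (*raidz_gen)(void *row_handle);
    } dpusm_provider_ops_t;

    /* Vendors register providers by name; ZFS looks them up at runtime. */
    int dpusm_register(const char *name, const dpusm_provider_ops_t *ops);
    int dpusm_unregister(const char *name);
    const dpusm_provider_ops_t *dpusm_get(const char *name);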
A: So yeah, like, you might have one accelerator that does gzip and one that does RAIDZ, or maybe you have two that do gzip and three that do RAIDZ, and you need to multiplex over them, right?
C: Oh, so that could work, but that was not the plan. We were hoping to keep data offloaded on the same accelerator throughout its lifetime, so there wouldn't be crazy copying around.
A: So the idea is that you'd have multiple accelerators, each of which provides all of the functionality that you need?
C: Yes... probably not, I guess; we haven't really thought about that. We're still trying to get all the issues ironed out.
C: So this diagram just shows the general ecosystem of how ZIA will be used with the Data Processing Unit Services Module. Commands come from user space on the left into ZFS with ZIA enabled, which in turn sends data and commands down into the DPUSM, which talks to the glue code, the providers (for instance a software provider or a BlueField-2 provider), which in turn communicates over the PCIe bus to wherever the accelerator is,
which then finally writes to the backing storage. So the DPUSM has two interfaces.
C: So that's all I really had.
A: Yeah, this sounds really neat. How would this relate to the Intel, I forget the acronym now, the QAT offload? Could that become one of these providers that plugs into the DPUSM, or would that have to remain kind of a separate thing?
C: I believe so. I've only seen the QAT code in a few places, but the modifications weren't terribly big.
A: Yeah, I don't remember exactly how the memory management with the QAT worked, in terms of providing the buffer by copying it, and whether you get a handle that lives across multiple invocations, or what. But I guess even...
A: I like this; this is very interesting to me, but it's not really in my personal use case, so I'd definitely be interested to hear from other folks, both on the technical side, how this would work, and also: is this interesting to you? And I imagine people might be curious about how they could get their hands on these fancy accelerator boards.
F: I might have a question about the memory model. So the idea is that all the accelerators will be attached to PCI Express; there's no plan for tighter co-processors, which have appeared in more recent ARM processors, for example? There are ARM processors with pretty tightly integrated FPGAs and so on that could be used.
A: But you could imagine that maybe the handle is just a pointer to main memory, and the copying to and from the handle is just: here's the pointer. So it seems like the general thing that's being designed here would probably work just fine for a more tightly coupled accelerator.
F: Yeah, I mean, if I interpreted the first slide correctly, or one of the first slides, then the idea was to copy directly, like, tell the NVMe device to copy from the accelerator onto the NVMe device where possible. In that case we have to make sure that it's DMA-copyable memory on the outside, and so on. A few implications there, but in general, yeah, it should be flexible enough.
G: I guess my question would be: how far in the future do you see this being minimally viably usable? Because it sounds like a bunch of it is still at least pending "let's go tell the hardware vendor to go implement it", which sounds like a longer timetable.
E: Just one other comment: I know with the QAT there was maybe a minor issue with gzip, in that the way they implemented it meant that the same block compressed with the CPU would have a different checksum than the block compressed by QAT. It was decompress-compatible, but it wasn't actually byte-for-byte the same as if you had compressed it on the CPU; they used a slightly different table for the gzip that resulted in that.
A: Does it matter only in the case where compressed ARC is turned off and you have an L2ARC, so not in most deployments?
E: We looked at it, but there are enough use cases where it makes sense.
E: Yeah, I'm looking at other ways to try to solve that problem.
D: One question for the project, Matt: assuming that we see vendors coming into the space and creating these hardware platforms, how does the project ensure that, as we make changes, we're not breaking these things?
A: Yeah, I think that's a great question. I see on the slide that's up now you have something called a software provider; I'm hoping that maybe that's something that can be a provider that hooks into this interface but is implemented in software and doesn't require any special hardware.
A: If we could have something like that, where it's like: yeah, technically it's just in software, technically you don't need to copy it off anywhere else, but for testing purposes we're actually going to copy it to some other memory, so that we can catch any bugs related to copying or not copying from the main ABD buffer.
C: I will have to check with our legal people about that.
A: Well, presumably it's GPL; by making it GPL, aren't you intending that all the providers have to be open source?
C: Well... oh, how does this work? I...
A: But talk to whoever you need to. It sounds like that would be a great way to enable...
D
You
know
to
even
like
get
some
of
the
stuff
into
into
the
main
repo,
because
otherwise,
like
I
can
just
imagine
you
guys
you're
going
to
be
like
fixing
bugs,
as
somebody
comes
out
with
some
new
thing
in
the
pipeline
and
all
of
a
sudden
like
this,
this
stuff
just
starts
blowing
up
and
unless
you're
tracking
those
changes
closely,
you
may
not
recognize
them,
and
then
you
know,
god
forbid,
like
you
end
up
with
some
corruption
or
something.
D: Yeah, it would be nice if this becomes kind of a generic framework, and we do have something like a software provider or a test framework or something. Then we could get rid of the QAT code, bring it into this framework, and provide ourselves, and the consumers of it, some protection there.
A: Yeah, I could imagine that this becomes, like, there is no non-DPUSM path anymore: the DPUSM is just the interface for doing bulk operations on data, all the current software implementations move to the other side of that API as part of the software provider, and then all you have is a switch that's like: oh, are software providers operating in place, or are they going to be copied out to another buffer for testing purposes?
A: Yeah, I think that getting it accepted upstream... if the only use case of this was "you're a special entity that can purchase this special hardware, so you can use a special thing", then I think it would be a hard sell to get it integrated upstream. But if either you can hook into software providers, or hook into the QAT that's generally available, or this special hardware is purchasable by other entities, then I think it makes sense to get this upstream so that people can use it.
A: And, you know, maybe if you find out that they're not going to sell it generally, then that means you need to put some more emphasis on the software providers, or on hooking into other hardware accelerators that are generally available; and of course it also just happens to work for your use case as well.
A: I mean, I see you have a GPU provider; obviously there are a lot of GPUs you can buy on the open market. If somebody's going to write, like, RAIDZ vectorized math for GPUs, that would be really cool too.
A: Cool. Do you have any other questions or things you want to get feedback on, Jason?
F: So, as I understand it, this DPUSM will be a separate Linux kernel module, and ZFS would just use its exported interface, just like any other file system would use it. Will this DPUSM do any kind of resource multiplexing, or will it expose... like, what will the interface look like for ZFS?
C: I haven't really tested that out because, as I mentioned before, we're still getting things to work, but I don't believe there's any reason why we can't use a provider but have multiple...
E: Well, I know, I think with QAT it only made sense if your buffers were big enough. So if you were trying to checksum only 8K, the overhead of copying out to PCIe and back wasn't worth it; but if you were checksumming 128K, then it was. And it might make sense to be able to decide which ones to offload and which ones not to, on some threshold.
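That threshold decision could be as simple as the following sketch, where zia_offload_min_size is an assumed tunable for illustration, not an existing module parameter:

    /*
     * Hypothetical: only offload buffers big enough to amortize the
     * PCIe round trip (e.g. 128K yes, 8K no).
     */
    static boolean_t
    zia_offload_worthwhile(size_t psize)
    {
        return (psize >= zia_offload_min_size);
    }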
A: Cool, thanks a lot for this presentation, Jason. I really appreciate you coming and bringing these ideas to the team, getting feedback early on, and engaging with the community; I think there will be a lot fewer surprises later on, when you want to get it integrated. So, right, next steps: could you send a link to your slides to me, just so that we can...
C: I don't have this online; can I just email it?
A: Yeah, yeah, you can just email a PDF or a PowerPoint or whatever to me, and then I can put it up as part of our meeting notes. And then, yeah, feel free, when you make progress on this and you have more questions, to come back to this meeting and have another discussion, or open a pull request whenever you're ready.
A: Cool. Looks like we have a few more things added to the agenda. Let's see... Rich, are you here? Do you want to talk about the encryption stuff? And then we'll close out the meeting with the requests for code reviews. (Works for me.)
G: So it turns out encryption has bugs. Surprise! I know, it's shocking, right? I went through, and I can share the spreadsheet I produced, all the bugs tagged encryption, and I gave them a first pass, and it turns out that they all fall into one of, like, three categories, more or less. There are a couple of outliers, but broadly, the big one is...
G: I wish I'd made slides, thinking about it; it didn't occur to me that that would be useful here. Whoops, sorry. The big one is a bug where you end up with somebody not refcounting dnodes correctly.
G: Yes, and as a result this can produce all sorts of different stack traces, because one of the threads is in the middle and has already checked that this is all valid, and then it comes back and suddenly it is not.
G: I've been looking into this for a bit. I have had some trouble reproducing it on anything that's useful: for some reason I've only been able to reproduce it on random non-x86 architectures. All the reporters are on x86 architectures, but, you know, I have a ZFS test suite loop that will do it inside of a day on...
G: So, you know, software. Very annoying. But, all right, I should have led with this: I mostly came here to ask if anyone would be willing to spend a bit of time and offer insight, because I am not at all familiar with this code. I have tried reading it, but it's somewhat difficult to run down what might be going wrong, and all the bugs that I'm looking at seem to go back to when encryption was first integrated, like the one I'm talking about right now.
A: Yeah, and is this bug that you're talking about related to send and receive, or is that not in the picture?
G: I think maybe; I haven't seen any proof that it has to be send/receive to get this bug, but I have only seen it reproduce with send/receive. And, you know, the other one that I'm going to mention is a set of bugs with send and receive explicitly.
G: Not too surprising. But a bug which seems to be the same thing on illumos ate someone's data, so, you know, that's good.
G: We keep going, yeah. The other one, well, the other one is a set of bugs where you do an incremental send/receive and it nukes your dataset, so you can't open it anymore.
G
One
of
them
is
the
user
object,
counting
problem
where
it.
If
I
understand
it
correctly,
it
copies
the
user
object,
counting
data
that
it
got
from
the
send
stream
in
and
because
I
think
it's
the
mac
doesn't
match,
then
when
it
goes
to
try
reading
it,
it
gets
upset
later.
G
There
was
a
pr
that
fixed
this,
but
it
was
broken
for
some
git
release
or
some
git
versions
that
never
made
it
into
a
release.
So
it
got
reverted.
A: I think this is a great opportunity for someone, or some company that's using encryption, to step up and be a leader in this area. It's hard for me personally: I know a little bit about encryption, and probably more than I would like to about dnodes and send/receive, but I have a lot of other stuff to do, and I don't use encryption at my job. So there are probably other people who are using encryption at their jobs.
G: Oh, I would hope so, but I'm not actually convinced anyone is using it at a large company. I don't know about that either; that's the problem.
F: Yeah, Rich, we talked about this a few days ago, and I didn't catch whether you got a working reproducer, for at least... oh.
G: Any PPC64 VM; that appears to be the case right now. I do not know why, but it definitely reproduces very readily on one and not the other.
A: Well, printf debugging, here you come!
G: Yeah, that's where I've been.
G: Less memory, I did, because it does seem to be related to free memory: on one VM I have, it only reproduces right before the kernel proceeds to permanently scream and yell because it has run out of memory for higher-order allocations. I have reduced the amount of RAM on the VMs, and Linux has a nice flag on real hardware where you can say, I have this much RAM.
G: I promise it does not appear to significantly impact the outcomes; it might make it faster on the ones that reproduce it normally, but I don't... the sample size is hard because the variance is high.
G: It's cumbersome, which is why I was hoping to find somebody who might be more familiar with the code to look at it, or at least help me look at it. I should say: I'm not trying to punt it all off to someone.
I: Yes, I can say that everything is great there. We saw some rough edges, but it's not terribly broken, I would say.
G: Yeah, there are some people, if you look in the issues, who just come back and say: yeah, did it again, today, tomorrow, the next day. But, you know, I've tried on my x86 hardware and it doesn't.
G: Reproducing it on FreeBSD... I think all of the ones I've seen reproduce it are on Linux. Well, that's not entirely true, right: all the ones that I was discussing before were on Linux, but I did mention the person, or the people, reproducing something that looks very similar on illumos, for the one where the buffer gets the wrong refcount.
G: So I don't know if they just migrated off and didn't keep looking, or what. Do you know what the illumos ticket is, offhand? I think it's 14003.
J: No, I think they may have just run out of time, or were busy with other stuff. But Alex is in Australia, so I'll ping him this evening, North America time. Thanks.
A: We're almost out of time for the meeting, but thanks a lot for bringing this up, Rich. In the absence of anybody standing up to really take ownership of this, I think we should continue to bring it up at future meetings, just to keep advertising this opportunity for more engagement (yeah, exactly) and, you know, to make folks aware of the work that you're doing and that people are still hitting this.
A: Yeah, cool. I think in the last minute that we have, let's let folks solicit code reviews; I think there are a few things in here.
E: Everyone... basically, instead of using a static allocation inflation value of 24 times the logical size of the block, each vdev type advertises what its inflation would be.
E: So, you know, RAIDZ3 is four times the ashift, and it uses that value, from the worst vdev in the pool, to calculate using the same math as before; and since we have access to the objset, it also looks at what copies is set to for that objset. With that, you can generally get the inflation down to...
D: Ellen, is that inflation dynamic as you add vdevs and vdev types?
E: Yeah; every time you open a vdev, it checks, and it updates the worst vdev if the new vdev is worse.
E: So I just keep track, pool-wide, of the worst minimum allocation size for any vdev in this pool and use that value, and it changes the math to round up to the nearest that many sectors, rather than just straight-up multiplying the logical size by the worst case. Because if you end up needing seven sectors and the RAIDZ is going to bump that to 10, you don't actually want to multiply the size by 10; you want to round it up to the next 10.
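A sketch of that math, where spa_worst_alloc is a hypothetical field holding the pool-wide worst minimum allocation in bytes (the actual patch may structure this differently):

    /*
     * Hypothetical: round the logical size up to the next multiple of
     * the worst vdev's allocation granularity (which need not be a
     * power of two), instead of multiplying by a static factor of 24.
     */
    static uint64_t
    worst_case_asize(spa_t *spa, uint64_t lsize, uint64_t copies)
    {
        uint64_t g = spa->spa_worst_alloc;
        uint64_t asize = ((lsize + g - 1) / g) * g;  /* round up, not multiply */
        return (asize * copies);
    }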
E: And I saw that in the older Delphix fork the static 24 was just dialed down to, I think, two, because you're like: we're never gonna have RAIDZ, and so on.
H: Yeah, and I have an update: last time I talked in one of these calls, we were talking about the corrective receive patch and a test that was failing; I was trying to figure out what was going on there. All of that has been addressed, so it's ready for review again.
H
For
I
don't
know,
people
that
may
not
remember
corrupt
corrective
receive
enables
us
to
heal
permanent
data,
corruption
using
the
consent
file
or
something
and
there's
a
talk,
that's
linked
in
the
pr
for
more
context
on
this.
You
can
go,
listen
to
it
and
watch
it,
but
yeah
any
help
in
moving
this
forward
or
or
anybody
that
wants
to
review.
It
is
appreciated.
A: I guess it gets hit enough that GitHub doesn't want to hide it. Yeah, I don't know; I think it might just be like: oh, when there's more than however many comments, you get the first three and then the last seven, and anything in the middle is just hidden. Not explicitly hidden by somebody; they just don't want your page to be more than, you know, 20 meters long or whatever. All right, cool. Well, thanks everyone, and we'll see you four weeks from today, at the later meeting time.