From YouTube: March 2020 OpenZFS Leadership Meeting
Description
At this month's meeting we discussed: DIRECTIO PR; ZFS repo move; FreeBSD progress.
Details and meeting notes: https://docs.google.com/document/d/1w2jv2XVYFmBVvG1EGf-9A5HBVsjAYoLIFZAnWHhV-BM/edit#
C
So GitHub should have put redirects in place for all of the previous links on the ZFS on Linux GitHub page; they all redirect to the OpenZFS tracker. That's where pull requests, issues, comments, all that stuff should get redirected, but you shouldn't have to change anything locally. GitHub provides some instructions if you want to update your remote.
C
Just a quick update, then: over the last few weeks we've been systematically getting all the tests shipshape for the test suite and getting them running on FreeBSD and Linux, and most of that work has been wrapped up at this point. There's one big pull request still outstanding that adds the rest of the FreeBSD code into the tree, and that's getting close to ready to merge.
C
Last I heard, they just wanted to make sure that all the tests were passing on FreeBSD, or at least the majority of them, or that they understood why some weren't passing. So the hope is to be able to go ahead and merge the rest of that code relatively soon, maybe this week or next week; things should be in pretty good shape.
A
Cool. And what do we expect as far as code review on that? Has it already been reviewed by FreeBSD folks, so we don't really need any more eyes, or do we need more people to look at it?
C
My understanding is that it's been reviewed by FreeBSD folks, and a lot of it is taken from the existing FreeBSD implementation where needed. So there's the common code, and then, where FreeBSD needed it, the existing code from FreeBSD was pulled in and updated as needed. More reviewers are always welcome, so if anybody wants to look at that, that would be great.
A
Cool. Oh yeah, I think we probably need to get the FreeBSD folks to comment on their release process and when it would fit in, but I think the idea is that once this integrates into the common repo, you'll be able to get all that from GitHub on FreeBSD, do a make and make install, and then you'll be running the bits from the common repo.
A
All right, we're zooming right through the agenda, but the next one's a big one, so I asked Mark Maybee to come and talk a little bit about the direct I/O pull request that's out there. So, if I understand correctly, you implemented most of this code originally, and then...
E
...and verified that things seemed to be working through the direct I/O code path. A little bit of background here: I work for Cray, and Cray was very interested in achieving some performance improvements with ZFS on NVMe hardware. What we were seeing in our performance analysis was that ZFS would drive a small number of NVMe drives to near their hardware performance levels, but as you ramped up the number of drives, the performance didn't ramp with it. It would ramp, but not at the same rate, and so with two or three or four drives...
E
Large block reads and writes. So after doing some analysis, we discovered that there were some bottlenecks, and one of the clear bottlenecks was the memory management: specifically, the process of copying data in and out of the kernel was an issue, as well as some other memory copies that were happening inside.
E
You know, that's specific to us, but there's still some cost involved there, although it avoids the in-and-out-of-kernel copy cost. We were looking at the actual process of moving data in and out, all the way from user land into the kernel and back, so we implemented, as I said, a quick-and-dirty version of direct I/O just to see how that would work.
E
These tests were just raw, simple data, so there was no compression and no encryption happening. We are interested in both of those, and I'll talk a little bit about that later, because it does come into play when you're talking about direct I/O: what does it mean to use compression or encryption, for example, right?
E
So what we were doing was the simple, fastest code path, and the basic implementation, for the read side, of course, is pretty straightforward; a read is already pretty much an immediate I/O path. The main difference in a direct I/O read was to avoid doing the copies of the buffer from the user-land uio into an ARC buf or a dbuf and then stashing it in the ARC; instead the data goes straight into the user buffer.
E
The buffered write path is really architected to use a pipeline to gather together lots of writes and push them all out at once at the end. So the direct write implementation essentially leverages the sync write code path, although it's a variation on it; this is actually, Matt, derived from work you had done some time ago, sort of an initial implementation of the concept, and I fleshed it out a bit more. So, back to your original question: did I write the code? Most of it I did write.
E
There have definitely been contributions by others. One colleague in particular is the person I've been relying on to shepherd things upstream to the community, and he has also been contributing to some of the software development as well, and Brian Behlendorf has also contributed now and again to the project. So, largely, I'm responsible, or I'll take the blame at least, for the basic concept, based off of your initial design.
A
Yeah, I think that speaks a lot to the motivation of what you're trying to achieve with it, which is cool. I had kind of a side comment, but I'll leave that aside. So you talked a bit about the motivation; I want to make sure we cover the implications of direct reads and writes, and then kind of take that and talk about the user interface, and how we could simplify it from what was originally proposed.
E
And as it turns out, again, in our use case it's actually somewhat of a benefit, and in general for high-bandwidth use cases like this. This is obviously being used in a Lustre environment, where you want to push, say, a checkpoint down to disk as quickly as possible, and you don't really need that data to be cached anywhere, and so you don't want it to be cached. So the fact that it isn't going through the caches is not an issue.
E
The fact that we're layered under Lustre also means that a lot of the object storage caching isn't that important to us, so the fact that we're not caching is similarly a bonus, because it means we can configure the storage to be less needful of cache, and it allows us to say: all right, we don't need a lot of memory, really, because we're going direct I/O.
A
But we do have the existing primarycache property that lets you choose whether a particular dataset is cached in the ARC. So the big new value-add for your use case is the performance improvement from less copying, right? Yeah. So I took some notes, or added some notes, before the meeting into the agenda document.
A
If folks want to follow along, that's just to kind of structure the conversation, because I think it's a little bit tricky and there are a lot of related things. I was also curious to hear from other folks who might want to use direct I/O, in terms of what their motivations are. So we talked about less copying; there's also less memory being allocated temporarily, since we don't have to keep dirty data in the dbuf cache.
D
I've been punched in the face repeatedly by the read-modify-write behavior, even in the ostensibly go-fast path. It's kind of like when a SQL database will do an un-indexed query for you, which is fast for a while, until it's not, and then it's just really awful. And it's because it's magical and it's doing the extra work, or the slow thing, on your behalf, and there's no direct feedback.
E
It's up to the filesystem, and in fact it's usually stated in the documentation that it's up to the filesystem, to choose its specific semantics. So I think you may want two modes, or maybe three. Perhaps the default is, as Matt suggests, that it fails on an unaligned I/O, if it's not block-aligned, but maybe there is also a mode, for compatibility, that says: you will not actually fail, but it will fall back to buffered I/O.
E
It's a good question. The way it's currently implemented in the code is that it is zero-copy. The checksum is computed on the fly, because the I/O is actually written as part of the I/O call itself. You can sort of have the expectation that no modification is made to that block while the write is in flight, and so therefore it's simply mapped and computed and written, all using the mapped pages, and there's no copy made to protect against modification.
C
I think this is the normal expectation for direct I/O on other file systems: while the pages are in flight and you're in the system call, no user-space code should touch that mapped memory for the duration of the system call. I don't think that's too abnormal for applications to deal with.
D
That makes sense for applications, but what if it starts to show up elsewhere? Suppose I'm an administrator of a system, I don't really understand anything about direct I/O, and I do a scrub with my tool and it reports a bunch of checksum errors. How do I tell the difference between checksum errors from application shenanigans and checksum errors from actual corruption?
D
If you get a checksum error, there is something wrong with your pool, right? And I mean, from a fault management perspective, if the system that you have for doing fault analysis says: you know what, that's 10,000 checksum errors, I'm going to spare out the disk, that's not necessarily an unreasonable posture. So perhaps, at the very minimum, we need to write some kind of flag for a direct-I/O-written block.
C
Yeah, Lustre struggles with a similar problem when it sends buffers over the network. Its solution was to checksum before and after the buffer was sent, because, since the buffers are directly mapped, they might be modified. Maybe we could do something similar: re-check the checksum after we've written the pages, relatively inexpensively. But what if...
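To make that recheck concrete, here is a minimal sketch, not the actual OpenZFS implementation; the fletcher-4-style helper and the collapse of its four accumulators into a single word are simplifications for illustration, and, as the next comment points out, the recheck can still be fooled if the buffer is modified and then restored before it runs.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Simplified fletcher-4-style checksum; assumes len is a multiple of 4. */
static uint64_t
sketch_fletcher4(const void *buf, size_t len)
{
	const uint32_t *w = buf;
	uint64_t a = 0, b = 0, c = 0, d = 0;

	for (size_t i = 0; i < len / sizeof (uint32_t); i++) {
		a += w[i];
		b += a;
		c += b;
		d += c;
	}
	return (a ^ b ^ c ^ d);	/* collapsed to one word for the sketch */
}

/*
 * After the device write completes, checksum the still-mapped user pages
 * again.  A mismatch means the application touched the buffer while the
 * write was in flight, so a later checksum error on this block is the
 * application's doing, not a hardware fault.
 */
static bool
direct_write_buffer_was_stable(const void *user_buf, size_t len,
    uint64_t cksum_at_submit)
{
	return (sketch_fletcher4(user_buf, len) == cksum_at_submit);
}
```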
D
The point is, if it's a 16 meg record, the processing of that I/O is probably going to take long enough for me to game it: it gets checksummed once up front, then I change it a little bit while it's being copied into the storage device, while the storage device is doing DMA, and I've timed it such that at the end I put it back, so you checksum it again and it checksums the same.
E
I think the concept of saying this is a special type of checksum, or a special type of write, that is open to this particular attack vector, let's call it, sounds all right. We can decide what to do with it; you can build out the infrastructure if required, but it doesn't preclude the ability of users to leverage this facility. So you say: all right, we can treat these blocks in a different fashion. Yeah.
A
Continuing to let them get the false checksum error... if applications could generate false checksum errors, then I think we can't make the default be to honor direct I/O writes. The default has to be that you cannot cause us to get checksum errors, and the system administrator has to go and do, you know, zfs set directio=enabled or whatever.
E
You know, it would be a true checksum error. Sorry, yeah, it's a true checksum error, but what I'm saying is that it doesn't imply a hardware issue, which is what you normally attribute a checksum error to. That's really the only issue we have here. I mean, still, if you fail a checksum, all the other aspects of the code stack still seem relevant.
D
I mean, also, for mirror writes there's probably not a way to atomically issue the two or three write bios to different devices. Because you're effectively mapping that user buffer, you're going to have maybe as many as three different HBAs do their own DMA activity to get those bytes onto the storage; they're not necessarily going to do it at the same point in time, and they might not see the same values either.
A
So maybe for now, why don't we table this and kind of leave it to the implementers to come back with a proposal of how you want to handle this, which might involve, you know, multiple passes or some new checksum mode or whatever, and then document what the proposed semantics are and what the implications are, and then we can continue the discussion on the PR or at the next meeting. Does that sound reasonable?
A
One of the things I wanted to get input on from other folks who might have experience using direct I/O on other systems is what the expected semantics are, in terms of caching and how it works with sub-block writes especially. So I wrote some notes, kind of a design, in the meeting notes. I think that we need some kind of mode where it's the same as the current behavior, right: you request direct I/O, we ignore that, and we do...
A
...the old thing, just so we have compatibility with all the existing ZFS behavior. We want some new mode that would be the new default, which does, at least in some cases, the requested operations directly; and then maybe we also want some mode where some I/O is done directly even when you didn't request it, for the case where you know it would be better to do it this way but the application doesn't know to make direct I/O requests.
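As a rough summary of the three behaviors being discussed, a hypothetical per-dataset property might distinguish the modes as in the sketch below; the names are placeholders for illustration, not an agreed OpenZFS interface.

```c
/* Hypothetical modes for a per-dataset direct I/O property (placeholder names). */
typedef enum {
	DIRECTIO_DISABLED,	/* compatibility: O_DIRECT is accepted but ignored */
	DIRECTIO_STANDARD,	/* proposed default: honor O_DIRECT for suitable (aligned) requests */
	DIRECTIO_ALWAYS		/* perform I/O directly even when the application did not ask */
} directio_mode_t;
```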
A
But the key things we need to figure out are exactly that: like I said, there were a lot of wiggle words there, like "some I/Os will be performed directly", so which ones exactly are we talking about? A lot of that comes down to the partial-block reads and writes. Shawn, you're saying we should fail those? Yes; the thing is, if you're doing direct I/O, you're generally not doing something as a very generic application.
E
So I just think it's not uncommon that you want to do this in a database application, for example. We saw this at Oracle a lot, where customers wanted to be able to do database scans rapidly, so they wanted a relatively large block size for their database, but at the same time they wanted efficient read access to it, and so they were interested in direct I/O.
E
You know, small block reads, 8K or 4K reads, and they want those to perform well along the way, and so it seems to me that you could get some performance out of direct I/O by saying: sure, you can just do those and we won't cache. Direct I/O is pretty clean in a database example where it's random I/O and you don't really need the sequential stuff, so the caching doesn't really give you anything.
E
Yeah, and so there's obviously a decision that anybody makes with direct I/O, which is whether that's the right thing or not. The counter to that is that, if your working set is small enough, then you'll pay a price of a little bit slower reads for a period of time, but once it's all cached you're getting things faster, because you're pulling them from cache all the time. But that's an application-specific, or context-specific, decision.
A
Note that in a lot of other file systems that do direct I/O, the block size is effectively 4K, so your reads and writes only have to be page-aligned to get the direct I/O behavior. With ZFS, the default is a record size of 128K, so in the default case you may have a naive application...
E
Well, it's actually a little bit of a middle ground, right? It's saying: all right, what they care about in direct I/O semantics is that it's not caching, and it's not so much about the lack of copying. But, as Matt said earlier, you can actually set that with a property as well, so you can get the same feature.
E
I think, to circle around to your question a little bit, Matt: you might want to put out a request to those who are more directly interested in leveraging direct I/O as a feature, asking whether they have any issues with this, or whether they have any particular use cases that make sense. As I said, from my perspective I can say, well, it doesn't matter to me, because I have a very constrained environment.
E
So a couple of comments here, to give you some context, from a coding perspective, of how time influenced some of this stuff. I actually tried to implement, and have implemented, sort of a fully generalized implementation of direct I/O here, and I can tell you that there is significant complexity in supporting the partial block reads and writes, particularly if you want to be a purist about direct I/O.
C
Okay, what I've seen for most file systems that implement this is that they push all of these problems up to the application, so they don't try to keep coherency or anything like that. You will bypass the cache, and you'll get the not-cached version, at least for the Linux file systems I'm aware of. So they kind of punt on a lot of these issues and leave them to the application to deal with.
A
...a regular write to the same file: if you modify the block with a regular write and then you do a direct read, the direct read just fails, right, rather than letting them see two different versions of the data. It just seems like it's not that hard to provide an API where, if you choose this...
G
Mark, I had a question for you: have you seen, or had to deal with, the case where the alignment might be fine at the record size but not at the sector size?
E
You can declare your buffer, beginning from the application layer: you can declare an alignment constraint on the buffer you're using to write your data, to guarantee the alignment there, and then once you get below that into ZFS you should probably be fine, if you have a larger page and a smaller buffer that you're writing out. Yeah.
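For illustration, here is a minimal sketch of what that looks like from the application side on Linux, assuming O_DIRECT semantics; the path and the 1 MiB size are placeholders, and in practice the alignment would come from stat or the dataset's recordsize.

```c
#define _GNU_SOURCE		/* for O_DIRECT on Linux */
#include <fcntl.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int
main(void)
{
	size_t blksize = 1 << 20;	/* illustrative alignment and length */
	void *buf;

	/* Declare the alignment constraint on the buffer itself. */
	if (posix_memalign(&buf, blksize, blksize) != 0)
		return (1);
	memset(buf, 0xab, blksize);

	int fd = open("/tank/ds/file", O_WRONLY | O_CREAT | O_DIRECT, 0644);
	if (fd < 0)
		return (1);

	/* Offset and length are block-aligned, so no read-modify-write is needed. */
	ssize_t n = pwrite(fd, buf, blksize, 0);

	close(fd);
	free(buf);
	return (n == (ssize_t)blksize ? 0 : 1);
}
```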
C
Part of what's kind of problematic from a user interface perspective is that there's no real easy way at the moment to get the block size for an application for a particular file, right? You can check what the dataset property is set to, but that doesn't necessarily tell you what the block size on the file is. You can get it by running stat.
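A sketch of that stat-based approach, for illustration: ask the file for its block size and use it as the direct I/O granule. As discussed next, st_blksize reflects the preferred (recordsize) value, which may differ from an already-written smaller first block.

```c
#include <sys/stat.h>

/* Return the value to align direct I/O offsets and lengths to, or -1 on error. */
static long
query_io_granule(const char *path)
{
	struct stat st;

	if (stat(path, &st) != 0)
		return (-1);
	return ((long)st.st_blksize);
}
```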
A
But yeah, I think that, for files with only one block, it's probably returning the preferred record size property rather than the size of the potentially smaller first block, which could potentially confound direct I/O. But, I mean, come on, don't do direct I/O on files that small.
C
Right, but maybe for some really fast NVMe devices you really don't want the copy at all, and you want to cut as much out of the path as possible; I don't know, maybe there's a use case there as well. Anyway, I'm just saying that we should have some interface where we can actually reliably get the block size for applications. If that's stat, that's great, but if we have to change the behavior there to make sure that's true, is that going to cause any problems on illumos?
A
Let's see, what would be the problem that you're creating? Yeah, the problem only applies to small files. Let's say you have a file where the actual block size is 64K, but stat is saying that it's 128K, because the preferred block size is 128K, since that's what the property is set to. Then you do a read...
A
...there in the written region. I mean, it seems like a less-than-ten-line kind of wrapping function to say: oh, you wanted direct I/O and you're trying to read more than the single block that's here? Great, do a direct read of the beginning, passing in the beginning of your buffer, and then handle the remainder.
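A hedged sketch of that wrapper, under the assumption that the caller holds two descriptors for the same file, one opened with O_DIRECT and one without, that the offset is block-aligned, and that the buffer satisfies the direct I/O alignment constraint:

```c
#include <sys/types.h>
#include <unistd.h>

/*
 * Read the block-aligned prefix with direct I/O and pick up any tail that
 * is smaller than a block through the ordinary cached path.
 */
static ssize_t
read_mostly_direct(int fd_direct, int fd_buffered, void *buf, size_t len,
    off_t off, size_t blksize)
{
	size_t aligned = (len / blksize) * blksize;
	ssize_t done = 0;

	if (aligned > 0) {
		done = pread(fd_direct, buf, aligned, off);
		if (done < 0 || (size_t)done < aligned)
			return (done);
	}

	if (len > aligned) {
		ssize_t tail = pread(fd_buffered, (char *)buf + aligned,
		    len - aligned, off + (off_t)aligned);
		if (tail < 0)
			return (tail);
		done += tail;
	}
	return (done);
}
```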
A
It relates to the per-block overheads of doing this stuff. When you have a small record size and you want to read or write a lot of data, there are a lot of per-block overheads that I'm trying to address. You might have seen my pull request about ZFS send bypassing the ARC, which you could also call direct I/O, because it doesn't use the ARC; it does the reads directly. There are some similarities to what we're doing here with direct I/O.
E
Specifically, I think the main problem was around the checksumming issue; I don't know if we've come to any specific conclusions in some of these other areas. Would you like a proposal on some of the user-interface questions that came up?
A
Yeah, I think at least on the partial-block questions we should put something in the pull request saying: we had this discussion, the proposal is that partial block reads will do X and partial block writes will do Y, and this is how much consensus we got on the call for each of these. If anybody has problems with those, let us know, and if we don't hear anything, then change the implementation based on that.
A
Cool, thanks a lot, Mark. You're welcome. Thanks, everyone, for staying a couple of extra minutes. The next meeting is scheduled for four weeks from now; that's the 31st. We could potentially push it out a week so that we're doing it in the next month. Do we want one meeting per month, rather than one every four weeks? I'm fine with either, but it's supposed to be every four weeks.