From YouTube: December 2019 OpenZFS Leadership Meeting
Description
At this month's meeting we discussed: saved send feature; ZSTD; Encrypted dedup
Details and meeting notes: https://docs.google.com/document/d/1w2jv2XVYFmBVvG1EGf-9A5HBVsjAYoLIFZAnWHhV-BM/edit
A
All right, it's one after the hour, so we'll get started. We have a pretty light agenda. I asked a couple of folks to give status on their projects, so Tom, do you want to start off talking about the saved send feature? Then we'll also talk about the Zstandard work, and then there will probably be time after that if folks have other topics they'd like to discuss.
B
Tom, can you hear me? Yep, okay. Reinstalling Windows messes up Zoom, but anyway.
So basically, I wanted to talk a little bit about a feature which I originally called "partial send" but which, for a few reasons, has now been renamed to "saved send". I'll just start out with a basic overview of what the feature is and why we wanted it. Basically, the idea was: we have our data center.
B
We have production storage nodes which all have a bunch of datasets on them, and they back up to a secondary data center somewhere across the country. Traditionally we have done this backup with ZFS send, which we do to files over SSH: we basically pipe zfs send to a compressor and send that over SSH to a file on the remote side, and then we uncompress that file and receive it. The reason we do it like that is because in the past we haven't had resumable sends.
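The traditional send-to-a-file pipeline described here can be sketched roughly as follows; pool, dataset, and host names are placeholders, and the exact compressor is an assumption:

```shell
# Sending side: serialize the snapshot, compress it, and ship it over
# SSH to a plain file on the backup host.
zfs send tank/data@monday | gzip | \
    ssh backup-host 'cat > /backups/data-monday.zfs.gz'

# Receiving side: decompress the file and feed it to zfs receive.
# The -s flag saves partial state so an interrupted receive can resume.
gunzip -c /backups/data-monday.zfs.gz | zfs receive -s backuppool/data
```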
B
Basically, if you do not send an END record in a send stream, and on the receiving side you receive with the resumable-receive flag, as presumably you were going to anyway, you basically end up with a stream that's resumable, even though you haven't gotten everything.
B
So basically, we originally did all that in kernel space, but after looking at some of the edge cases and interactions with redacted sends, what we realized is that it would be a lot cleaner if we did pretty much all of the processing in user space. So basically the way the implementation works right now is: the user-space code goes and gets.
B
The resumable receive token that normally would be used to tell the remote side how to resume the send; instead, it just restarts that back at zero, it just kind of rewinds it, and then it changes the name of the dataset back to whatever that partially saved dataset is, and then we just basically do the send from there. In the kernel, the only real change is small.
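For contrast, a normal resume works by fetching the token from the partially received dataset and handing it back to zfs send on the original sender; the saved-send feature instead sends the partially received state itself. A minimal sketch with placeholder names, where the `--saved` flag is the name used in the pull request and is the one assumption here:

```shell
# Fetch the resume token from the partially received dataset.
TOKEN=$(zfs get -H -o value receive_resume_token backuppool/data)

# Normal resume: hand the token back to zfs send on the original sender.
zfs send -t "$TOKEN" | ssh backup-host zfs receive -s backuppool/data

# Saved send (this feature): send the partially received state itself
# onward to a third system, without involving the original sender.
zfs send --saved backuppool/data | ssh third-host zfs receive -s pool/data
```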
B
The one other bit of it that I will mention, for answering questions that anybody might have: also included in this patch is the ability to resume a saved receive from a bookmark, which is very confusing to think about. But basically the idea is that if you were in the middle of this saved-send procedure and that got interrupted, you can still resume it as well. That's really all it means: you can resume a saved send from a bookmark.
B
That's kind of all I had, other than I wanted to get a feel for the room in terms of whether or not people were okay with it being implemented in user space versus kernel space. Like I said, we originally did it in kernel space and then we redid it in user space. But I was wondering if anybody had any specific concerns or thoughts about that.
A
So, just to make sure I understand the use case: you're doing a receive onto your secondary system, and then you realize, oh, this system that I'm receiving onto is actually dying, so I want to replicate everything from this system to another one, a third system. And you want to do that before the receive that's running on the secondary has completed.
B
The famous example we have within Datto is that we have one partner who is on a boat; they run their appliance on a boat, and when they get to dock, that's when they back everything up. Other than that, it goes over a satellite modem, so it's very, very slow. We kind of designed for that use case. So the idea is, we didn't want to make the partner resend everything because we had a node dying on our end.
A
Because in that model, the primary that you're doing the send from is actually on the boat, right? (Exactly, that's right.) And the secondary is the machine that you're actually operating, that is in your data center, but it might be failing, and so you want to be able to ZFS-send everything off of that system to a third system.
B
So, did anybody have any questions or thoughts about that? I can put the pull request link up on GitHub. If anybody wants to take a look at it, please feel free; we're looking for a couple more reviewers, although it's been approved by Brian, I believe, at this point.
A
Yeah, in terms of the kernel versus userland question and where it's headed, I'll have to take a look at the code, but it seems like, on the one hand, having userland do it feels a little bit like userland is tricking the kernel into doing what it wants; but on the other hand, the implementation is probably nice and simple, with minimal kernel changes, which seems valuable too.
B
The other thing, I think, from my perspective, is that we kind of want it to trick the kernel. That's kind of the goal, because we want a send which basically looks and feels like it's coming from that original system. We want the exact same send file, basically.
B
We want to make sure nothing gets reinterpreted or anything like that, so I think tricking the kernel isn't necessarily a bad thing, even if it wasn't originally designed for that; it seems to fit this pretty well. I do see a couple of questions in the chat which I didn't notice before, like "replicate partial receives?" Yeah, that's pretty much exactly it: we want to take a partially finished receive and replicate that to somewhere else.
B
The question, for those of you who aren't looking at the chat, is basically: can I use this to start sending part of the dataset to someplace else before it's finished being received? Again, I don't really know; we haven't really tested that case.
B
Okay, well, that was my spiel.
A
Cool, thanks Tom. And by the way, can you send a message to the OpenZFS developer list, just to let folks know, and make sure that folks on other platforms know about this?
A
I think we talked about this at one of these meetings several months ago. This is probably a little bit less of a concern now with FreeBSD joining together with ZFS on Linux, but just to make sure that folks on illumos and other platforms are aware of all the upcoming features.
A
We were asking folks to just send an email to the OpenZFS developer mailing list, saying, okay, we're doing a new feature, here's a link to the pull request, to make sure that all of the relevant experts have a chance to weigh in, even if they're not necessarily following every OpenZFS pull request. Okay.
B
Yeah, I can definitely send out an email tonight or tomorrow about that.
A
All right, any more questions about that? All right. Next on the agenda: I'm probably going to mispronounce your name, so I apologize in advance, but Kjeld, why don't you tell us how to pronounce your name and then talk about the Zstandard work? I'm very interested in hearing what the status of that is and what you think we need to do to get it completed.
A
Sebastian, whose handle is BrainSlayer, has completed the majority of the design, and a lot of testing has been done. Sebastian and also Michael, who goes by c0d3z3r0, are working on finishing it: cleaning it up, restructuring, documenting, and some work on compliance with the Zstandard library itself. If you have any input or want to do some testing before the final PR, please speak up; the PR number is 9673.
A
I guess I was interested in, and maybe Brian can give us this background, the history of it: we've had several different ports, or forks of forks of forks, and several different PRs. If you or anybody else has some background on that, and on what we need to do to move this forward.
D
So I have some of that background, at least; I can try to summarize my understanding. Like you say, there have been a couple of versions of this out there, and I believe the current work is to try and bring those existing versions together in one pull request. That work is ongoing in the new pull request, and it's mainly in need of reviewers and feedback at this point. I know they've been iterating on it a lot, but I'm not sure if Allan and BrainSlayer are both commenting on it.
A
So the current restructuring work takes the best parts of the two major forks, Allan's one and BrainSlayer's one, and I guess combines them together. That sounds great. I guess I and other interested folks should take a look at the final PR when you have that ready; I think you mentioned that should be in a few weeks.
A
Kjeld and Allan should have an offline conversation; Kjeld said that he tried to set that up but wasn't able to, so we'll try to make that happen, so we can make sure everyone's on the same page, and then hopefully we can move forward with this new version as soon as it's ready. It should be very soon.
B
I had one thing. I apologize, because I haven't been at the last meeting or so, so maybe this is stuff that we've already talked about, but a feature request came in today, or rather, it's been around for a little while.
Basically, the feature request is to allow encryption to work with dedup across multiple datasets. Currently, encryption works with dedup within what I call a clone family: basically a dataset, its snapshots, and any clones of those snapshots. And that's all nice and everything.
B
But some people have asked for the ability to also do that across different datasets. That is possible, but the drawback is that it is then not cryptographically possible to separate those datasets in the same way that datasets normally are separated. Normally, each dataset is encrypted on its own, and the fact that they share a user key is kind of just a management layer.
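That management layer is visible today through the `encryptionroot` property: children that inherit a key all point at the same encryption root, but each dataset still has its own independently generated master key underneath. A quick illustration with placeholder names:

```shell
# Create an encrypted dataset and a child that inherits its user key.
zfs create -o encryption=on -o keyformat=passphrase tank/secure
zfs create tank/secure/projects

# Both datasets report the same encryption root (the shared user key),
# but each wraps its own separate master key, so they remain
# cryptographically separable.
zfs get -r encryption,keyformat,encryptionroot tank/secure
```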
B
The two things I wanted to get out of this were, one: what are people's opinions on this kind of thing? Because there are a couple of different takes on it. One is: if you don't ever want them to be separated cryptographically, and you want them to live like that, why not just put all of the data in the same dataset? You could ask that question.
B
No, because cryptographically, if the internal key, or what I call the master key, is the same, then it doesn't really make sense to allow people to have different user keys for it, because then, cryptographically, if I know the password to one dataset, I can decrypt the other. It would be crazy if we set it up so that you could do that, because, I mean, we could.
A
So I guess, first of all, just in case it wasn't clear: the clone family that you're talking about is all of the datasets that are related by cloning and snapshotting. So if you create a clone of some snapshot, then that clone and the thing that you created it from are part of the same family, because they share a master key.
A
Yeah, and you could tie this in with the inheritance, where you can do something like: when you create that first dataset with encryption on, there's some encryption property that says everything under it, all my child file systems, gets exactly the same key. Not only are they encrypted, but they have to use the same master key.
B
That might be the easiest way to do it, and the way to do it without breaking a ton of code that currently exists, because if we allow you to link to arbitrary datasets, that can get very complicated in terms of when you go to move stuff. Right now there are a bunch of checks that make sure that you're not moving anything outside of its encryption root, and those checks could get really complicated.
B
I think it would be a new property, and again, I'm literally coming up with this right now as we're talking, but I think it would be a new property, kind of like what was being suggested, where you have a property that's something like shareskey=true. That gets a little complicated as well, because then you have new inheritance rules for that property that you have to account for. You know, none of this individually is all that crazy.
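Purely as an illustration of the idea being floated here, not an existing interface, the hypothetical property might look something like the following; the `shareskey` property name and the datasets are made up on the spot, just as in the discussion:

```shell
# Hypothetical sketch only: create a second dataset that reuses the
# master key of an existing encrypted dataset, so dedup could match
# blocks across the two. Neither this property nor this behavior
# exists in ZFS today.
zfs create -o encryption=on -o shareskey=tank/images/base tank/images/vm2
```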
A
I think it sounds like there is interest in this, and if we can design this interface in a way that makes it so that people can understand what's going on and not shoot themselves in the foot and lose their protections, and also not make undue restrictions on what things they can do with the datasets, then it sounds good. We probably aren't going to be able to totally design that out in this meeting, so maybe.
B
I know we spent a good amount of time talking about what it would actually look like, but the thing that I wanted to get coming out of this was: do we think it's a good idea to allow this? It sounds to me like we think it's a reasonable idea to allow this and to add this feature.
A
The use-case question was asked; I assume at least one use case would be something like: I have a bunch of disk images that are in different zvols, and each of them has similar contents, because they're all, you know, Windows installs or whatever, or backups of the same system into different zvols, something like that. And they're already using dedup today to dedup between these different zvols that are totally independent from ZFS's point of view.
B
Yeah, the guy who filed the feature request was saying that basically, for his use case, you have a lot of images which look really similar, VM images that look similar, and he gets better performance by using dedup because they can share the ARC memory. So when he tries to boot all of them at once, they're not each doing all the I/O.
B
Yeah, okay. Well, I'm not hearing any significant pushback, then; we should think this through.
B
I mean, I think the easiest way to do that is just make it behave the way it currently does by default, but with a setting for, again, I'm going to say this one guy, but really anybody who's currently using dedup across multiple datasets and wants to add encryption to that.
D
Oh yeah, I've got one thing, in case we have other people who are interested in this functionality: there has been work, now ready for review, to port the persistent L2ARC to Linux. There's a pull request open for that now that has test cases and is ready for reviewers. So we should probably mail the list to let people know it's out there for review; it's a port of the work done for.
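For anyone who wants to try the port: the feature is about having the L2ARC survive a reboot instead of starting cold. Roughly, with placeholder device names, and assuming the `l2arc_rebuild_enabled` module parameter name from the port:

```shell
# Add a cache (L2ARC) device to an existing pool.
zpool add tank cache /dev/nvme0n1

# With the persistent-L2ARC work, cache contents are rebuilt from the
# device after reboot; the port exposes a module parameter to control
# this (parameter name is an assumption from the pull request).
echo 1 > /sys/module/zfs/parameters/l2arc_rebuild_enabled
```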
A
We're moving the regular meeting time to 9 a.m. Pacific. This one was at 1 p.m. Pacific, so the new time is 4 hours earlier than this meeting was, in whatever time zone you are in. Cool. So thanks everyone, have a great holiday and New Year, and we'll see you in January.