From YouTube: January 2020 OpenZFS Leadership Meeting
Description
At this month's meeting we discussed: ZoL 0.8.3; checksum feature flags; zfs change-key; changing default to GCM
Details and meeting notes: https://docs.google.com/document/d/1w2jv2XVYFmBVvG1EGf-9A5HBVsjAYoLIFZAnWHhV-BM/edit#
A: Let's get started. We have three items on the agenda, and then hopefully a lot of time after that if folks have other things they'd like to discuss. The first question came to me via IRC from somebody whose name was firesnake. Their question was two-part: when will 0.8.3 ship, and will it include the large dnode zfs diff fix? So, Brian, are you on, or somebody else who has more familiarity with the release schedule?
B: Yes, I'm here. At the moment there is an outstanding patch set for 0.8.3, in a pull request that has been put together by Tony Hutter, and we're working on testing it now. It does include the diff fix, along with a bunch of other patches. It's there for review; the pull request number is 9776, for anyone who wants to take a look at it.
B: If you have a particular fix that's not in there, definitely let us know before we cut the final tag, and I will make sure that it gets included, or at least looked at. As for when it gets released: I would say when we wrap up our testing. We think everything's in there that we need; we're just making sure everything works, and that we get packages built and tested for everything. So, soon, I would say. And take a look at the pull request.
B: The freeze date looks like mid-February, but yeah, this month, I would say, is the target.
A: Okay. The next two items are about encryption, and I just heard from Tom Caputi that he's going to be joining us towards the second half of the meeting. So if there's other stuff, maybe we'll do that first and come back to these two encryption questions, since he may be able to understand them better than I can. I see that Allan is typing a question into the document as we speak; if you want to go now, go ahead.
D: Yes, so during the testing of the zstd work, we realized something slightly interesting: if you set the checksum or compression algorithm on a dataset to a newer algorithm that isn't included in older versions, but you don't write any blocks to the dataset, and you then export that pool and import it on an older version, it will fail an assert that the compression or checksum algorithm is out of range.
A: And that's the case even... you could imagine bumping the ref count when you set a property, but then you can unset it, you can change the property to something else later, and when we go to delete the dataset, you don't know whether you should decrement it or not.
A: Certainly we could be more sophisticated about this. We could change it so that the ref count is essentially: we add one if the dataset contains blocks of this type, which is what we're doing currently, and we add one if the property is set to that value. So you might typically have the ref count bumped by two for each dataset that's using it. But if you unset the property, or when you change the property, then we would decrement the ref count immediately, while still probably leaving the ref count up by one due to the blocks in there. Another workaround would be to change the assertion on the import side to say: this one's out of range, I'm just going to treat the property value as "unknown" or "unrecognized value", or something like that, and kind of ignore it.
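The two-part ref-count scheme described above can be sketched in a toy model (hypothetical Python structure for illustration only; the real ZFS feature ref counts live in the pool's on-disk feature data, not in anything like this): one increment while a dataset's property is set to the new algorithm, plus one increment while the dataset actually contains blocks of that type.

```python
# Toy model of the proposed per-feature refcount scheme:
# refcount = (+1 per dataset whose blocks use the algorithm)
#          + (+1 per dataset whose property is set to it).
# An older reader that doesn't know the feature refuses the pool
# whenever the refcount is nonzero.

class Pool:
    def __init__(self):
        self.refcount = {}          # feature name -> refcount

    def _bump(self, feature, delta):
        self.refcount[feature] = self.refcount.get(feature, 0) + delta

    def set_property(self, ds, feature):
        old = ds.get("prop")
        if old:
            self._bump(old, -1)     # decrement immediately on change
        ds["prop"] = feature
        self._bump(feature, +1)

    def write_block(self, ds, feature):
        if not ds.get("has_blocks"):
            ds["has_blocks"] = True
            self._bump(feature, +1) # first block of this type: +1

pool = Pool()
ds = {}
pool.set_property(ds, "zstd")       # refcount["zstd"] == 1
pool.write_block(ds, "zstd")        # refcount["zstd"] == 2
pool.set_property(ds, "lz4")        # zstd drops to 1: blocks remain
```

Note how this captures the behavior discussed: changing the property away decrements immediately, but the count stays nonzero while blocks of that type still exist, so an old reader is still (correctly) refused.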
A: So we would have to look at where we could hook into the property code in syncing context to do that. I'm not sure; I'd have to look at it to see how complicated that would be, but it might be straightforward. I think the tricky thing is what happens when you set a property and then it gets inherited down by a bunch of file systems; presumably we want the ref count bumped for everyone that inherits the property, via the change callback.
A: Or you could say that we don't bump it for every inherited file system; we just bump it for every set point, essentially, because that would give us the same result. All we're trying to do is say: if this property is set to zstd anywhere in the pool, then we need to have the ref count bumped, so that you can't open the pool on older systems.
B: I still think it would be prudent, though. I like the idea of, if we're settled on the identifier for the feature, adding it into a point release prior to this; that might head off a lot of trouble. I assume that if we don't hit the assert, it's harmless, right? It fails gracefully some other way, or I guess it doesn't fail at all, because none of these blocks are actually in use.
A: Well, the problem is: what happens when you go to write something to that file system that has the property set, that has compress=zstd, and there's no zstd support in the running code base? We'd have to make it fall back to some other... we'd make it fall back to... yeah.
A: Yeah. We know what happens if you have assertions on, but what about production builds, which is what most people are on, right? I suspect that when you try to write a block, you get into syncing context and you're saying: please compress this with compression algorithm number such-and-such. Well, I don't even know what would happen, because it has to translate it; the property value is actually a number, not the string. So, yeah.
A: I mean, I think that making that decision could be pretty tricky, especially with the checksum, because you may add a new checksum algorithm which is a strong one, dedup-capable and whatnot, or you might add a new checksum value that's lightweight and fast, and, you know, the behavior...
B: You could accidentally poison certain datasets, right? Make them get the reference count set accidentally, just by having one write of a block that dedupes. Could you intentionally exclude one, to say: this one should never get the zstd feature enabled, because it's my root dataset or whatever, and I never want this to happen?
G: Yes. So I'm sharing this primarily because anything that Ubuntu does with regard to ZFS support is high-profile. There isn't necessarily an action item here, so I'm really just soliciting any additional feedback that I haven't already provided to them. Because ZFS encryption must be enabled at dataset creation, the Ubuntu developers proposed that their installer should always enable encryption, using a fixed, known passphrase; I believe in the example it's something like "ubuntu-zfs". The value doesn't matter. The key part is that it's a fixed, known passphrase.
G: So I assume this is some combination of a stopgap proposal until that is fully implemented, and/or a proposal for how to handle users who do not enable encryption. I've made some specific proposals about how they might do the installer UI, and I think that should be their priority, personally. But regarding security, which is really, I think, the important thing here: I had some concerns with this. As I understand it, the master key is stored on disk, wrapped with the user key, which in this case is derived from the passphrase.
G: The user key is thus known. So when the user key is changed, the old wrapped master key will still be present on disk. Therefore, it seems to me that an attacker with access to the disk can read the old wrapped master key, decrypt it using the known passphrase, and then decrypt the whole pool, including newly written data, because the master key doesn't change. I discussed this with Tom a bit; one area for future ZFS development might be to overwrite the wrapped master key on disk when it has changed.
G: That may be a good idea independent of whether or not... you know. I guess I've already shared my concerns with them about that. If anyone has any additional thoughts, I'm happy to pass those along, or you may wish to comment directly on the bug report, which I have linked in the document.
C: Right, and just to elaborate on that a little bit: back when we were writing encryption, we kind of knew that TRIM was coming at some point in the future, but it wasn't quite around yet. We'd actually thought about this problem, and what we had said was: when TRIM gets added, it will be easy to do things like secure erase. And we just never really revisited that. So, in terms of making a solution, because what they're...
C: The general proposal, from that kind of UI standpoint, is not altogether unreasonable or anything; it's just that, from a very strict security standpoint, this won't really work without being able to zero out, or securely erase, the old encryption key. So I think, at least from our perspective...
C: In the case of a zpool checkpoint, and this is one of the things that we need to figure out, those definitely would go back in time. We don't really have a way to prevent that, because, from a security standpoint, if you have a checkpoint, we need to allow the user to be able to go back that far.
C: If we delete the encryption key... the key needs to live on the disk somewhere. Even if we decide to have some special mechanism for it, it's really kind of the same thing from a security perspective, because the key still needs to live somewhere on disk, wrapped by the old password, or the old wrapping key in general terms.
C: So, as far as checkpoint is concerned, maybe we just need to have some kind of documentation. There are already some warnings in the zfs man page for things that encryption won't protect you against, some little caveats and things like that. So once we add some kind of secure-erase feature for the encryption keys, that's definitely something we should add to the docs, but I don't think there's any way around it, probably.
C: In terms of extreme rewind, there's some stuff we could kind of do there... but again, I think in general the correct solution here is going to be more a matter of documentation than anything else, and it will probably also take a little bit of us making a decision in terms of when we need to delete a key, or...
C: Personally, what I'd probably advocate for is just rewrapping the key immediately, and then extreme rewind wouldn't work for that data; but I think extreme rewind is a really niche use-case kind of thing, right? That's only a couple...
C: Exactly, and it's the kind of thing where it's very much a desperation thing, whereas this is more a matter of the general integrity of the file system. That's kind of my personal opinion right now; we can definitely talk about it more, but those are my current thoughts on it.
A: Okay. The security aspects of this seem very questionable to me, even with secure erase. If somebody has access to the disk before you change the key, then they can get the master key that's wrapped with the known user key, and then your encryption is worthless, right, regardless of changing the key or whatever.
A: We can issue some set of TRIMs, but we still don't know what's really happening in the hardware. We don't know if the hardware has remapped some block and there's some way to access the old mapping. It seems very questionable for us to be saying: oh, don't worry, we trimmed everything, so now there's no way for them to get your old key; or, we wrote over everything. So, you know.
A: If the problem they're trying to solve is that the user might want to encrypt their own data after installation, without setting it up at install time, then it would be easier to just create a new file system for the user's data with encryption on; or take the file system that has the user's data, do a send and receive to a new one that has encryption, and then delete the old one. Barring something like that... maybe we go down...
A: I think the closest thing to what they're proposing that might be security-wise reasonable would be to say that we'll change the master key, so that at least everything written with the old user key can be read with the old user key that everybody knows, while stuff that is written after that has a new key, a new master key, and people can't read that. I think that is, like...
A
It's
tricky
it's
tricky
to
explain
exactly
what
that
means
because,
like
how
do
we
tell
them?
What
is
the
old
data
and
what's
the
new
data
like
after
you've
done
this,
you
don't
have
any
visibility
into
what
what
data
is
secure
and
what
it
is
not
secure,
which
is.
You
know,
I,
think
why
Tom
chose
originally
to
not
let
you
change
the
the
obtaining
existing
dataset
and
say
now.
A
So
another
point
I
think
it
I
think
that
another
point
against
having
a
known
user
key
ever
is
that
you
could
imagine
use
cases
where,
like
you,
install
some
a
bunkie
machine
and
it
has
the
known
key
and
then
you
you're
gonna,
rely
on
this
mechanism
that
changes
the
key
later
on.
Protune
is
the
ranges,
the
user
key
that
are
on,
but
you
take
this
new
system
and
you
like
make
a
bunch
of
copies
of
it.
A
So
now,
like
your
whole
fleet
of
machines,
is
using
the
same
master
key,
which
now
anybody
could
like
I
get
one
of
these
machines
and
now
I
know
the
master
key.
That's
for
all
of
the
machines.
So
even
if
you
and
then
even
if
you
change
the
user
key
like
it's,
not
just
this
machine,
that
I
can
read
it's
like
every
machine
that
was
a
clone
of
that
because
they
all
have
the
exact
same
identical
master
key.
A: Even if that's not the intended use case, it seems like a really rough gotcha that would be left in there as part of the design, because of this known user key. You'd have to be careful to avoid that use case, and this is designed for kind of naive users, I assume; so adding this additional requirement on top of naive users' use cases means, you know, they aren't going to get that security benefit.
C: So one of the things I did want to say: I have a different opinion regarding the idea of trimming the stuff. The idea there was not that we would be issuing TRIM commands. Previously, before TRIM or initialize existed in ZFS at least, as far as I'm aware, we didn't really have a way to zero out freed data, and so the idea there wasn't to use a TRIM command.
C: Right, and also, instead of TRIM, the actual TRIM command, there's a different one, at least in Linux; I'm not really a hundred percent sure about the other operating systems, but in Linux they have a different command called secure erase, which is advertised for exactly this purpose of: I want to overwrite the old data.
C: Now, you brought up the point that we don't really know what the hardware is doing, and that is a hundred percent true. We don't know what kind of remapping is happening under the hood. We don't know, if somebody gets out a soldering iron and gets into the firmware of the actual hard drive, what they might be able to find. But at the same time, you know, as a file system...
C: It shouldn't be that, because we're not a hundred percent sure this will work all the time, we therefore don't want to make any promises. Personally, the way I'd like to look at it is more: we have issued the command and done the best we could to make sure that this key won't be recoverable.
C: Whether or not that's actually good enough could depend on your hardware, and on how they've decided to implement some of these kinds of ideas; but the idea, at least from my perspective, is that we should at least do our best to clean up where we can. From that perspective... sorry, did somebody have a question?
A: No, go ahead, sorry. So I do kind of agree with you there. In that case, though, we are not changing the master key; is it correct that we're not changing the master key, and therefore, if your old password is compromised, then even after changing the password somebody can compromise the newly written data? Is that the case, assuming...
C: If they can find the master key, either by remembering it from when they knew the password and had found it themselves in memory, or by knowing the old passphrase, finding the old wrapped encryption key in the freed data on disk, and then decrypting it: yes.
C: The problem is... okay, so there are two points there. One of the things that I kind of wanted to avoid when we originally set out to do this was the idea that your data is only partially protected, if that makes sense. If you have an encrypted dataset, I kind of wanted that to mean the whole thing. It's the same reason why you have to choose encryption at create time, because theoretically we definitely could have made it so...
C
So
you
know
your
data
is,
you
know,
what's
not
encrypted,
and
then
you
turn
them
on
and
then
it's
encrypted.
But
then
you
have
no
visibility
into
what
data
is
protected
and
with
you
know,
or
if
you
change
the
algorithm,
you
know
and
then
one
of
those
algorithms
had
a
security
problem.
So
that's
kind
of
why
we
made
the
decision
to
do
that
in
terms
of
changing
the
master
key
I
do
not
believe
and
I
need
to
go
through
it
and
try
to
figure
something
out
like
give
it.
C: At minimum, and again without having put too much previous thought into it, we might be able to do something by creating a completely separate object that's encrypted completely separately, or completely new entries in the same ZAP with the new master key. But then you end up with all kinds of issues, like: how do we locate which master key to use?
C: What if you keep doing this command over and over again, because you don't realize what you're doing, you just see "master key rotation" and decide to do that over and over; and now, all of a sudden, you have to search through all of your old master keys to find the one that you actually need for this block? So there are some issues there.
A: It would be possible, but not easy. It seems like the current design kind of gives you the worst of both worlds, because you're saying: no, you can't change the master key, because I don't want to give you data that's partially protected, but I'll let you change the user key. What that means, essentially, is that anybody who has any of the user keys can access all the data. So I think saying that you can change it...
A: Saying we want to handle the use case of "your key was compromised, so we'll let you change the user key", when actually it doesn't give you any protection, because the old user key is still valid for accessing it, assuming you can get a copy of the old wrapped key: that seems a little bit...
C: Well, my one counter-argument to that, and I'm sorry for interrupting, is: if at one point in the past you had the user's key and you got into the system, you have everything you need at that point to get to the master key, right? And as soon as you have the master key, any amount of changing the user key, or doing anything like that, all of a sudden doesn't matter.
C: Right, but the only way to deal with that, the only way for you to get back to the point where your data is protected again, is to re-encrypt all of the data, by doing some kind of send and receive. And at the same time, if we're assuming that whoever is doing this attack has already gotten this far, then by extension your data could already have been compromised anyway.
C: There are more kinds of attacks, but at the end of the day, the only way to really make sure that your data is secure is to not have it written down anywhere, which... that sounds very defeatist, I realized when I said it, and I don't mean it like that. But what I mean is that...
C: For every level of attack like this... sorry, I'm not sure that preventing all of these kinds of attacks necessarily makes sense, because at some level, if you can get to one of these attacks, you've already won, if that makes sense.
A: My point, I guess, is: let's not give the users the false idea that changing your passphrase actually does anything. Changing your passphrase is for "I don't like typing that old thing anymore, I'm going to type some new thing", not for "somebody knows my old passphrase, let me change it to one that people don't know." You know what I mean? And if that's what we're saying, then this proposal by Ubuntu of having a known passphrase just doesn't make any sense, right?
C: I do have to head out, but could we talk about this a little bit more offline? Because I think, at the very minimum, we have some thinking to do here. Okay.
A: So, yeah, I think we'll need to follow up with them more. By the way, is there anybody from Canonical on this call? All right, I guess not. Well, we'll make sure to follow up with them on their bug tracker once we come to some sort of conclusion as to what we want to recommend to them; obviously with the knowledge that Canonical can do what they want, but we can give them the best advice that we can.
G: So I'll cover this a little bit, briefly. Before we can take any action, I think we'd want to make sure that Tom has a chance to review it and comment and so forth, but basically: currently, encryption=on means AES-256-CCM. My proposal is that we change that to AES-256-GCM. In my view, there are two main factors to consider: security and performance. I'm not an encryption expert, so I can't make a rigorous argument, but I will say that GCM is widely used these days. It's that...
F: Okay, so you're right, basically: CCM is great in small quantities, but in order to do any verification you have to decrypt the entire thing first and then compute your hash, whereas with GCM you compute the hash based on the encrypted data. This means that you can have larger values, larger encrypted streams, and get better performance even when you're only trying to verify. With CCM, when you go to decrypt, you have to decrypt the entire block, compute the hash, and then decide whether it's okay or not.
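The contrast F describes, authenticate-over-plaintext versus authenticate-over-ciphertext, can be illustrated with a toy sketch (stand-in keystream cipher and HMAC only, NOT real AES-CCM or AES-GCM; all names are invented for the example): a CCM-style scheme MACs the plaintext, so a reader must decrypt everything before it can check the tag, while a GCM-style scheme MACs the ciphertext, so a corrupted block can be rejected without any decryption work.

```python
# Toy contrast of the two authentication orders.
import hashlib, hmac, os

KEY = os.urandom(32)

def keystream_xor(data: bytes) -> bytes:
    # Deterministic keystream: applying this twice returns the input.
    ks = hashlib.sha256(KEY + b"stream").digest() * (len(data) // 32 + 1)
    return bytes(d ^ k for d, k in zip(data, ks))

def ccm_style_seal(pt):
    tag = hmac.new(KEY, pt, hashlib.sha256).digest()      # MAC the plaintext
    return keystream_xor(pt), tag

def ccm_style_open(ct, tag):
    pt = keystream_xor(ct)                                # must decrypt first
    if not hmac.compare_digest(hmac.new(KEY, pt, hashlib.sha256).digest(), tag):
        raise ValueError("bad tag")
    return pt

def gcm_style_seal(pt):
    ct = keystream_xor(pt)
    return ct, hmac.new(KEY, ct, hashlib.sha256).digest() # MAC the ciphertext

def gcm_style_open(ct, tag):
    if not hmac.compare_digest(hmac.new(KEY, ct, hashlib.sha256).digest(), tag):
        raise ValueError("bad tag")                       # reject pre-decrypt
    return keystream_xor(ct)

msg = b"some file data"
assert gcm_style_open(*gcm_style_seal(msg)) == msg
assert ccm_style_open(*ccm_style_seal(msg)) == msg
```

In the GCM-style path, a bad tag is detected before a single byte is decrypted, which is the verification-performance advantage being discussed.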
G: Okay, yeah, so there's an argument in favor of a change. The other thing I was going to point out is that it's best practice to use GCM in TLS these days, according to sources like Mozilla. OpenSSL supports both modes for TLS 1.3, but only enables GCM by default. So I think that's more evidence that GCM is the way to go, really.
G: The other argument here is about performance. GCM should be faster in theory, and from my understanding that seems to hold true in practice. On pull request 9749, just one random test showed that GCM in ZFS was about 15 percent faster than CCM; and that's before the changes in that pull request, which improved GCM performance on modern Intel processors by twelve times. That speed-up comes from porting...
G: ...the assembler routines from OpenSSL, and the author said that if the CCM routines were also ported, which they are not in that PR, GCM would still be three to four times faster. So in general it feels like, in 2020, GCM is both more secure and significantly faster, and I guess that's the impetus for me suggesting that it should be the default instead of CCM.
H: Both points are very brief, so I'll start with the bookmark cloning one and then let's see how far we can get. I think the other points were already addressed; they're just in the wrong order in the agenda notes. Okay, so, the bookmark cloning PR: I implemented bookmark cloning, and support for channel programs to create bookmarks. It's a work in progress; it's open on GitHub, because there is one remaining design issue to be solved.
H: If you click on the link in the indented enumeration item, you will see that there is a FIXME comment in the document, and that is about what we should do when we clone a redaction bookmark. Bookmark cloning in general is, I think, fairly straightforward: you have a bookmark already, and you want a second bookmark that has the same contents, so you just run the bookmark command with a bookmark as the target instead of a snapshot as the target.
H: So, from a user's perspective, it's very simple. The problem is: what do we do if the thing that we are cloning is, in fact, a redaction bookmark? Because a redaction bookmark has a huge amount of data attached to it, in the redaction object, and I don't really know what to do here. At the moment, I error out and don't support cloning redaction bookmarks, but that might break user expectations or something, I don't know. So I'd be happy for any input on that, or questions about the feature.
A: I think you can make arguments either way. For practical use cases, I think probably just copying it into the new bookmark would be fine. On the other hand, you probably could create some artificial use case where you make it arbitrarily large, because it's... remind me, is it a list of blocks, or is it ranges? It's...
H: There's a whole other angle to that, from the user-experience or user-expectation perspective: you do the bookmark cloning by running zfs bookmark, and zfs bookmark always creates a lightweight bookmark. We probably wouldn't expect it to create a redaction bookmark, at this point, yeah.
H: At least... I think I've gotten this far, but right now I have no idea how to properly do the object copying in this syncing context. I didn't find code that does something similar, so I couldn't find my way around in the code base. And keep in mind, this is all work in progress; that's the second bullet point on the to-do list. I need some review on this, I think.
E: I think that at least a valid preliminary implementation would be: you can pass in a redaction bookmark to copy, but it won't attempt to copy the redaction list. It'll just create a new bookmark with the same birth TXG and related fields, and just set the attached redaction object to zero, so it isn't a redaction bookmark; but the command-line utility would warn you before allowing you to do that. Is that okay?
A: The way it works today is: you run "zfs bookmark target new-bookmark-name", and that creates a new bookmark that references the point in time of the target, where today the target is always a snapshot. Obviously, the bookmark is not a copy of the snapshot; it's just telling you the time that the snapshot was created. So I think it's a valid argument to say: well, now the target can also be a bookmark, and it's...
A: So I think going that way kind of makes sense to me. The phrasing of "bookmark cloning" runs a little counter to that, I think, but that's only the title of the PR, and not actually anything about the actual change. So if we just describe it more clearly, like "you can create a bookmark from another bookmark", in the documentation, and don't use the term "cloning" or whatever, then I think it could be just fine to do that.
A: Okay, then I think that sounds like a reasonable way forward with your pull request. I think Paul and I, at least, need to take a look at the review of it as well. All right, thank you. Well, thanks for including this, Christian, and thanks for driving it forward and bringing up the outstanding issues in the meeting.
H: Maybe one last question: should I separate out the channel program support into a separate pull request, or should it be the same pull request? The channel program support builds on top of the bookmark cloning feature, so it would be hard to split them up into two. They're separate commits, but I think...
A: Cool. Sorry that we did not get to everything today. The encryption discussion was very interesting, and I look forward to continuing that. I think we got good input from everyone; and, to be clear, I don't think that I necessarily have the one right answer on that encryption stuff, so I definitely look to other folks for continued input and discussion on both whether we should be implementing something different from what we have, and what should we...