From YouTube: January 2022 OpenZFS Leadership Meeting
Description
Agenda: Encryption bugs; Compression algorithms
A
Hey everyone, let's get started with the new-year OpenZFS leadership meeting. Welcome, everyone. I saw that there were a few things added to the agenda, and then maybe we'll have time for open-ended Q&A.
C
Okay, because I did get additional reports of that via the, what's it called, syncoid maintainer: he's getting a bunch of reports in his bug tracker about these encryption bugs as well, from one of the replication tools.
A
Hey Rich, were there things that you wanted to discuss about the encryption bugs? It looked like maybe you added a few to the list since last time.
D
It does use dedup. It's not clear to me if that's related; my guess, without having more information, is that using dedup just makes it slow enough that it can lose a race, and then it loses the same race that send/receive can lose and frees something while it's in use.
D
I don't have that much to update on encryption, actually. I did spend a fair bit of time working on it before the holidays, and I found a lot of things that don't help.
D
But I wrote up a summary of the workflow that I think is happening in one of the bugs, 11679. Basically:

D
The dbuf write gets called and the dbuf read gets called; the dbuf write finishes and frees the buffers temporarily at the end of the write, and then the dbuf read, which is in the middle of running, suddenly tries to dereference one and is very sad with its life choices.
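The race described there can be sketched as a toy model. This is illustrative Python, not actual OpenZFS code; the class and field names are invented for the example, and setting a field to None stands in for freeing a buffer:

```python
# Toy model of the suspected race in issue 11679: the write path frees a
# buffer that the read path, which is already running, still holds.

class Dbuf:
    def __init__(self, data):
        self.data = data

buf = Dbuf(b"encrypted block contents")

# Read path starts and grabs the buffer.
in_flight_read = buf

# Write path finishes and "temporarily frees" the buffer's storage.
buf.data = None

# Read path resumes and dereferences what it grabbed earlier.
# In C this would be a use-after-free crash; here it is just a None.
assert in_flight_read.data is None
```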
A
All right, well, it's great that you have an idea of what's going on there. It sounds like the next step is you're doing some testing; it looks like your tests may have passed.
D
I tried something; it made it less likely to happen on some systems, but it still happens. So what I was going to do next, but haven't finished doing yet, is trying to figure out what makes the encrypted received buffers behave differently here than everything else, or if it's just that really anyone could lose that race and encryption is just slightly slower. But that seems unlikely, since I don't think I've seen anyone report the same crash without encryption.
A
All right, well, it sounds like maybe the next step, then, is for folks who are familiar with the DMU internals to take a look at your analysis there. That would be nice; that makes sense, myself probably being included among those.
A
It
looked
like
you
had
another
topic
about
compression
that
you
wanted
to
bring
up.
I.
D
I did, yeah, because it's been exciting some people for a bit, and I'm also slightly interested because I have a PR outstanding about it. That particular bug I linked is one somebody reported requesting a solution to the problem of, you know, never updating the compressors, ever, because things will break; some of the discussion has been slightly heated, though not terribly.
A
So
maybe
we
could
get
on
the
same
page
about
what
exactly
can
break,
because
I
think
that
some
of
the
some
of
the
heat
there
may
be
coming
from
you
know
maybe
misunderstandings
or
disagreements
about
what
exactly
can
break
if
that,
if
the
compression,
if
the
compressor,
compresses
things
differently.
C
Because if the copy in memory is already compressed, it can just write it that way to the L2ARC, and it will match the checksum that's in the block pointer when it reads it back from the L2ARC. You can check the block pointer and see if the checksum is the same. But if the copy in the ARC is not compressed, then, if the block pointer has compression enabled, I think it has to recompress it before it writes it to the L2ARC.
C
Although, you know, it's not going to panic your system or something if that does happen; it's just going to falsely report that your L2ARC is failing.
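That L2ARC behavior can be sketched roughly as follows. This is an illustrative Python model, not ZFS code; two genuinely different codecs (zlib and lzma) stand in for two versions of the same compressor whose output differs:

```python
import hashlib
import lzma
import zlib

def cksum(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# Stand-ins for the old and new compressor versions.
old_compress = zlib.compress
new_compress = lzma.compress

logical = b"the same logical data, over and over " * 100

ondisk = old_compress(logical)   # block as originally written
bp_cksum = cksum(ondisk)         # checksum recorded in the block pointer

# Compressed ARC: the cached copy IS the on-disk bytes, so writing it to
# the L2ARC and reading it back verifies against the block pointer.
assert cksum(ondisk) == bp_cksum

# Uncompressed ARC: only the logical data is cached, so the L2ARC write
# must recompress; with a changed compressor the bytes no longer match
# the block pointer checksum, and the read-back looks like an L2ARC error.
recompressed = new_compress(logical)
assert cksum(recompressed) != bp_cksum

# Both still decompress to the same logical data; nothing is corrupt.
assert lzma.decompress(recompressed) == zlib.decompress(ondisk) == logical
```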
A
Yeah,
so
that's
one
problem
is
this:
if
you,
if
you
disable
compressed
arc
and
you
have
l2
arc,
then
it
doesn't,
it
might
not
totally
really
work
right,
but
the
the
kind
of
fallback
is
relatively
graceful.
C
With the smarts we added for zstd, and could add for lz4 in the future, we would be able to say: oh, if the versions don't match and we see this problem, we can silently throw the data away and not use the L2ARC, or never make it eligible for the L2ARC in the first place, instead of reporting a checksum error and making people think their SSD is failing.
A
Yeah, I mean, there are a bunch of workarounds that could be done there. You could check the checksum before you write it out, for example. Or you could just say: well, you have uncompressed ARC, so we're not going to write stuff to L2; sorry, that just doesn't happen anymore. Or you can try to compress it and then checksum it, and see if the checksum matches what we think it should be.
A
If it doesn't match, then don't write it out. We could also just say no more uncompressed ARC. So there are a lot of workarounds for that one. The other potential problem is with nopwrite where, if you're writing the same logical data over top of itself and you have a strong enough checksum, it'll skip the write, because we know that your write is not changing the data; with a different compressor that wouldn't be effective anymore.
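The nopwrite interaction can be modeled the same way: an illustrative Python sketch, not the real dmu code, with zlib and lzma again standing in for the old and new compressor versions:

```python
import hashlib
import lzma
import zlib

def cksum(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def nopwrite_possible(bp_cksum: bytes, newly_compressed: bytes) -> bool:
    # nopwrite: if the checksum of the newly compressed block equals the
    # checksum already in the block pointer, the on-disk data would be
    # unchanged, so the write can be skipped entirely.
    return cksum(newly_compressed) == bp_cksum

logical = b"same logical data rewritten in place " * 100

bp = cksum(zlib.compress(logical))  # block written by the old compressor

# Same compressor: identical bytes, so nopwrite skips the write.
assert nopwrite_possible(bp, zlib.compress(logical))

# Changed compressor: same logical data, different physical bytes, so
# nopwrite falls back to a real write. Correct, just no savings.
assert not nopwrite_possible(bp, lzma.compress(logical))
```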
A
So those are the two problems that I know of, the uncompressed ARC plus the L2ARC, and then the nopwrite. Are there other problems?
D
The only other one I can think of, which is sort of in the same vein as nopwrite conceptually, to me at least, would be dedup, because you're hashing against the compressed blocks there. So you have a similar problem, where you don't save your space anymore.
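The dedup concern follows the same pattern: the dedup table is keyed by the checksum of the compressed block, so a changed compressor stops new writes from matching old entries. A toy Python model, not the real DDT implementation:

```python
import hashlib
import lzma
import zlib

def cksum(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

# Toy dedup table: checksum of the compressed block -> reference count.
ddt = {}

def dedup_write(compressed: bytes) -> bool:
    """Returns True if the block deduped against an existing entry."""
    key = cksum(compressed)
    if key in ddt:
        ddt[key] += 1
        return True
    ddt[key] = 1
    return False

logical = b"a block that appears many times in the pool " * 50

assert not dedup_write(zlib.compress(logical))  # first copy: stored
assert dedup_write(zlib.compress(logical))      # second copy: dedups

# After a compressor change, the same logical data hashes differently, so
# it no longer matches the existing DDT entry: the space savings are lost,
# but nothing breaks; the block is simply stored again.
assert not dedup_write(lzma.compress(logical))
```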
A
So, in my opinion, all three of those problems are very minor. Certainly, if somebody wants to solve updating the compressor more elegantly and more completely, that would be great; but personally I'd be fine with just having all three of those very minor problems, which is what would happen if we just updated the compression algorithm and didn't do anything else, basically, didn't do any mitigation at all.

A
I think the semantics that you would get are not that bad; that's my opinion, but I certainly would welcome others'. I see Brian joined. Brian, did you hear that summary of the potential problems with changing the compression?
E
Thanks, yeah, that lines up with my understanding too. I know I originally was concerned there might be some corner cases here with encryption too, but based on all the discussion and whatnot, I looked at that a little bit more and actually couldn't come up with anything in particular, no new corner cases for encryption. So I think those are probably the three cases, and I think they're all pretty minor too.
A
So, Rich, how does that match up with your thinking, or what you were trying to achieve?
D
That all seems pretty reasonable to me. Personally, I don't use dedup, except when people have bugs with it; I do use nopwrite, but I don't really care about a one-and-done cost for it, honestly.
D
Oh, I had been under the impression that not having an elegant solution to this was what, for example, derailed the zstd PR for updating that, ultimately.
A
Yeah, so for zstd, the L2ARC thing would be handled more elegantly, because you could know: oh, it was originally compressed with compressor version two, and now I don't have compressor version two, I only have compressor version three, so this particular block I can't put in the L2ARC when I have the uncompressed ARC; but other blocks that were written with compressor version three, I can do this trick of recompressing them and getting the exact same thing.
A
That's why it's not, you know, a property; it's just a tunable.
C
You know, I think our other concern was: well, this pain once isn't a big deal, but if we're going to do this frequently, maybe it is. But I don't really see us updating the version of zstd more than, you know, once a year, once a major release, or whatever; so I don't see us moving the target on people constantly, or something.
C
The only other thought, which Paul Dagnelie and I were talking about: we thought, you know, would it make sense to do something like we do for encryption, where we basically split the checksum in two? Basically, some of it over the logical block and the rest over the physical block, so that we could detect when the logical block was the same but the compression meant the physical block was different.
C
There are some free bits in there, I think, possibly; or, you know, do we just use some of the undefined space to store an extra, smaller checksum of just the logical block, just for a quick-check kind of thing? Because we don't necessarily want, in the default case, to suddenly make everybody's checksum only half as strong as it was before by, you know, making it only 128-bit everywhere instead of 256-bit by default.
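That split-checksum idea might look roughly like this. This is a hypothetical Python sketch of the proposal, not anything in the tree, using the 128-bit halves mentioned, with zlib and lzma standing in for two compressor versions:

```python
import hashlib
import lzma
import zlib

def split_cksum(logical: bytes, physical: bytes) -> tuple:
    # Hypothetical layout: half the checksum bits cover the logical
    # (uncompressed) data, half cover the physical (compressed) bytes.
    logical_half = hashlib.sha256(logical).digest()[:16]    # 128 bits
    physical_half = hashlib.sha256(physical).digest()[:16]  # 128 bits
    return logical_half, physical_half

data = b"logical contents " * 100
old_bp = split_cksum(data, zlib.compress(data))   # old compressor
new_bp = split_cksum(data, lzma.compress(data))   # recompressed, new one

# Logical halves match while physical halves differ: the block holds the
# same data, merely compressed differently, so nopwrite or dedup could
# still recognize it, at the cost of each half being only 128 bits strong.
assert old_bp[0] == new_bp[0]
assert old_bp[1] != new_bp[1]
```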
A
Yeah, I think that if somebody wants to implement that, it's obviously a lot more complicated, but that could be good too.
C
But, you know, in the end, all it's doing is mitigating those pretty rare corner cases. It might be able to make nopwrite less of an issue, because you could check and say: okay, the logical block is the same, and so we can skip it altogether and not have the pain, yeah.
C
But I agree that the uncompressed ARC plus L2ARC case requires you to be doing two slightly odd things together for that to even be an issue, yeah. And I guess the other thing that Rich has found is that, you know, it turns out we've already been doing this a bit: we discovered a couple of years ago that the QAT offload for gzip actually results in slightly different output than the software version, and then Rich just noticed that illumos and FreeBSD's gzip is actually slightly different than the one in the Linux kernel.
A
And not that many people are using gzip, and probably not very many people are using QAT at all.
A
Those that are, are probably not going back and forth between QAT and non-QAT; so I wouldn't necessarily take those, by themselves, as license to do this to everybody. But I still think that the downsides, you know, these problems, are so minor that it's just not that big of a deal.
E
I mean, if the Linux kernel ever does update their implementation of gzip on us, it might just happen to us without us knowing it, right? One day the new kernel comes out, they've updated gzip, and now we're using a new one. So getting ahead of that might be nice; but it seems unlikely they would do that, for the same reasons that we don't want to touch it, right?
D
Actually, so, I mentioned this in the bug that I made about it, but it turns out they did. They did something like what I proposed with lz4, where they updated the decompressor but not the compressor; not because it changed the output, but because the performance was slightly worse on some architectures back in, like, 2006, and they've never touched it again.
A
Yeah, I mean, as far as I'm concerned, if somebody wants to open the PR to update the decompressor, that's a total no-brainer.
A
If somebody wants to open a PR to update the compressor as well, then I would say: test those corner cases that we think are problematic, and make sure that they're handled in, you know, non-horrible ways. Like nopwrite: it's just, oh, you don't get the nopwrite, the data gets written again, and that's fine; same with the uncompressed ARC.
A
No, I'm just saying, basically, you would... oh.
C
I think, yeah, it's not going to be that hard, and I can help Rich with that, because I think, in the long run, that's probably a little easier for us maintenance-wise than trying to maintain multiple different versions of zstd.
C
Like, you know: oh, we're gonna keep the compressor at 1.4.5 but update the decompressor to 1.5.0, and then what happens when, you know, 1.6.0 comes out, and so on. We don't want to be carrying around multiple versions. And, you know, the one downside to the way we've imported zstd, using the embedded, one-giant-C-file version of it, is that it compiles a lot slower, because it doesn't have...
A
Brian, what are your thoughts on, like, the acceptability of those potential PRs that I mentioned?
E
I mean, that sounds fine to me. I think it's a good way to move forward on this, and it's all pretty reasonable. Like I say, we can refine the heuristics later if we want to make it smarter, but it's a corner case of a corner case almost at this point already, so I think that's all pretty reasonable.
A
Cool. Any objections to this plan?
A
All right, then. I would encourage someone, Allan or Rich, to write up the PR, and then bug Brian and me as soon as it's open, so that we can get it reviewed and integrated before, you know, it accumulates a lot of comments. It's not that we don't want those people to show up.
A
I welcome people to come and say: hey, isn't this going to break whatever, you know? We do want that input. What we don't want is for the amount of FUD to exceed our ability to defend it.
A
We don't want a bunch of people coming and saying: hey, I think maybe something about this might not work, I'm not going to say what it is, but I'm scared of it; and then for the PR submitter to not have the time or energy to go respond to every one of those and say: I'm pretty sure it's fine, you know, what's the problem? That's how these things get kind of mired down in bad feelings that are not specific problems.
A
I would guess there are probably a lot of people using dedup; they may not be developers that are on this call.
D
I was thinking lz4 and zstd, because I actually have PRs to do one of these and, as it turns out, someone else has written the other one. So...
A
Yeah, I mean, I think let's open them; lz4 definitely would be impactful, since that's the default.
A
So, you know, for people that use dedup: yeah, Adam says that he uses it and regrets it, which I think is a lot of folks' experience; but I know there are people that use it and do not regret it, or, you know, that do appreciate the space savings that it gives.
A
Yeah, I mean, I would guess... it would be really nice if we could do it without changing the on-disk format, without needing a feature flag. You might be able to do that: if you look at it, there's already a header in the block that has, right...
C
For the... it's the compressed size, because the allocation is always going to be shift-based, and so we don't feed the slack into the decompressor and break it, yeah.
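As I understand it, ZFS's lz4 blocks carry a 4-byte big-endian compressed-size header, which is what keeps allocation slack out of the decompressor. A Python sketch of that framing, plus the idea floated here of stashing a compressor version number in the header; the version field is purely hypothetical, not the current on-disk format:

```python
import struct

def frame(payload: bytes, version: int = 0) -> bytes:
    # Real format (as I understand it): a 4-byte big-endian length, then
    # the compressed payload. Hypothetical tweak: borrow the top byte of
    # the size word for a version number, which works as long as
    # compressed sizes stay under 2**24 bytes.
    assert len(payload) < 1 << 24
    return struct.pack(">I", (version << 24) | len(payload)) + payload

def unframe(block: bytes) -> tuple:
    word, = struct.unpack(">I", block[:4])
    version, size = word >> 24, word & 0xFFFFFF
    # Anything past `size` is allocation slack and is ignored.
    return version, block[4 : 4 + size]

blk = frame(b"compressed bytes", version=2) + b"\x00" * 10  # slack
ver, payload = unframe(blk)
assert (ver, payload) == (2, b"compressed bytes")
```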
C
But, like you said, we haven't found a use for the version number yet; so maybe we don't suffer a bunch of pain to future-proof something that might not turn out to be useful. Although, you know, I just made the case for not carrying around multiple versions of these things; but I suppose we do mitigate the pain for people upgrading if it is a feature flag, where they can opt not to enable the feature until they're ready for it, rather than, just when they upgrade their version of ZFS, it starts not deduping anymore, temporarily.
E
Yeah, I mean, that works as long as you update lz4, like, once, right? If we're ever going to do it twice or three times and make it a regular thing, that's going to become more problematic, but...
C
Well, once we've done it once, we have the version number information in the header, and we can update it without having to change the thing. Although we don't want to set the expectation that every time we upgrade the compressor, we let you choose whether you upgrade or not; because, yeah, again, like I said, we don't want to be carrying around multiple versions forever. Although lz4 is a lot smaller to carry around than...
E
You might have a rude surprise, though, right? If you were expecting it to dedup, but suddenly you need twice the capacity you thought you did and you don't have it, right; that's a thing that could happen if you're potentially writing a lot. You're expecting everything to dedup, and suddenly it's not, after an update.
E
lz4? Not much, yeah, that's pretty cheap. zstd is huge, though, right?
C
A little bit of maintenance, I think, because you have to rename all the functions so they don't conflict; that's mostly all scripted, and so on. But that's a one-time kind of thing.
E
I think the maintenance burden is pretty low if we're talking about one or two or three versions of it. I think where it gets problematic is if we end up with ten versions of zstd in there, right? It gets harder to maintain, and makes the compile time longer. I'm less worried about the compile time, but, yeah, I'd prefer not to carry too many versions of it around.
A
I think, for lz4, yeah, the hard part is writing the code to do the, like: there are different versions, I've selected the most recent version, and I have either a tunable or feature flags that tell me which one I want to use. I feel like that stuff is tricky; but you do that once, and then the fact that you have a bunch of different C files for different versions of zstd or lz4, like, we never touch those.
A
It's not a big deal. As far as I know, we're never like: oh, we have to go into the depths of every compression algorithm and change something about it because we're refactoring something. That code just gets reviewed once, and then it sits there, and we never edit it, we never modify it. So, from that point of view, it seems like keeping around the old versions has very little cost.
E
I'd say the only caveat to that is, if we start doing that, we're rapidly not going to be testing the older versions; we'll just end up testing the newer version properly in all the automated testing. So, are we okay with never testing that older code, because we never touch it, right? But if we do have a need to touch it, it's going to be painful if we have to go back and touch all the implementations to update them to handle some kernel change, or a compiler flag, or who knows what.
D
I mean, I wouldn't suggest keeping an indefinite number of versions around. Like I said, I'd keep maybe one around, depending on how frequently we ever need to update anything; if we need to update it again, punt the old one, and put big flashing notices up for, like, a major version release or two before you do it.
D
Sure, which is why it unfortunately suggests defaulting to the conservative option of, like, using the older one if they have an existing data set.
B
The thing I'm asking myself is: even if you put up this warning, will the warning be actionable in any sort of way? If you rely on your pool being deduped and you haven't provisioned twice the capacity, like the case Brian mentioned, what are you going to do about it, in the end?
D
I mean, at that point you would have to stay on a version that supports your version of the compressor for the lifetime of the system, if you don't have any way to migrate your data around.
F
There's a step in the logic there, because, like, let's say I'm on version 1.0 and we only keep the last two versions of lz4; so when I upgrade ZFS to, you know, the next version, there's lz4 1.0 and 2.0 in the code base. As soon as we go to lz4 3.0, it's only 2.0 and 3.0 in the codebase, but I'm using lz4 1.0. So if I'm doing dedup, I'm getting burned right as soon as I upgrade my bits.
F
That is true, yeah. So, to me, I feel like maybe it delays the pain point, but that cliff point is coming; and I feel like we either have to commit to saying we're going to keep all the versions around, or, like, why keep any around?
C
Or at least, you know, we keep the lz4 we've always had, with a feature flag; and you can opt out of that feature flag and just continue using the old one forever, or you enable the new feature flag, which is: this is the new lz4, and it's going to keep updating, and you agree to deal with it.
D
But yes, we do have a new lz4 PR in the queue; it's mine. Unless you mean that you just opened one too?
A
You know, it's like: there's lz4 v1, and this is lz4 v2, and, you know, we're going to change the default compressor to lz4 v2, but only for new data sets or something, or only for new pools, I don't know. I think you could kind of mitigate that, and then say: sure, we'll keep around the old versions of lz4 forever; but it's a different story for zstd.
A
I feel like there's a lot of flexibility in the design space here, yeah. So, really, it's up to whoever wants to implement this. It seems like a lot of people want to see this get done; so however much effort you want to put in to do all these various mitigations, that's up to whoever implements it.
A
And up to the reviewers: if you're introducing, like, a lot of new mechanism, you know, I imagine a lot of the comments are going to be about that new mechanism and how that works, and the properties and the feature flags and whatnot; as opposed to, you know, if your PR is: I'm keeping the old file, the old lz4.c, I'm adding a new lz4 v2.c, and it's just a new compression algorithm.
A
That seems easy to review. Likewise, if you're just deleting the old file and adding a new one, that also seems easy to review. The more mechanism we add, the harder it is to review; but I think that hopefully we can get it done, if somebody wants to implement it. So I wouldn't, you know, discourage somebody who's motivated to do that.
F
What advice would we give Rich, since he has an lz4 PR out now?
A
Is your PR, Rich: is your PR just replacing the old lz4 algorithm with the new algorithm, and that's it?
D
The PR I currently have open is just replacing the decompressor of lz4 with the new one from, you know, the last year; yeah, it just punts the decompressor into a separate file that's mostly untouched, just upstream lz4 code.
C
Yeah, yes; we might do it slightly differently if we're planning to pull in the entirety of 1.9.3, or whatever, of lz4. We might want to keep the file in the shape that it is upstream, instead of breaking out the decompressor into its own file; but other than that, I think that...
D
That'd be the only thing that would change the way we would lay out that PR. I mean, the only problem with that is that I think the function names conflict with the existing ones we have, right? So you're going to end up needing, like, giant commented sections, which we can do, but...
D
So I also have a companion PR, which I haven't opened, that substitutes the compressor; it just replaces the compressor, right. It just does the same thing as the decompressor, but puts it into a different file.
A
I mean, my advice would be: let's see if somebody wants to do more than that, let's find out. If nobody is going to volunteer to do more than that in, say, the next month, then we should just take what you have, replacing the lz4 compressor with the new version; that's what people were willing to implement, and those are the semantics that we'll have, assuming that there are no dire objections from dedup users.
C
Yeah, and it would mostly just keep the old one hooked up as the lz4 compressor, and the new one hooked up as lz4-new, or whatever naming we come up with; but they can both use the faster decompressor, so we get that performance gain on the decompression either way.
A
All right, well, that sounds great to me. Let's see if we can see some progress on that before the next meeting; and if not, then we'll reevaluate, like falling back to a simpler way of doing it.
A
Cool. Well, I know we've kind of gone around and around a bunch with this, and I'm glad that everybody was able to kind of get their thoughts out. Even though it feels like we're spending a lot of time on a minor thing, it's a minor thing that people have been, you know, talking about and spinning a lot of wheels on for a long time. So if we can come to some kind of conclusion and get something integrated, then that'll be a great outcome.
C
I agree; it's a no-brainer. It's just the typical thing of: well, if we're going to do that, we could do all this, and then it quickly becomes bigger, right? So maybe getting something is better than getting lost in the weeds. So I think Matt's plan makes sense: if we can come up with something before the next meeting, then great; and if not, then we just go the easy route.
A
Cool. We're almost out of time; I think this is everything on the agenda, at least. Were there any other things that people wanted to discuss today?
D
For the gzip example, for example, right: it was more like 95 percent of the test blocks I used, the...
A
Well, I'm going to go look at Rich's decompressor PR, and I think that we can all look forward to some progress on this before the next meeting. Allan and Rich, I definitely appreciate you putting in, you know, the hard work to actually implement stuff and test it; and Adam, on chat, for living dangerously.
A
I
guess
with
the
by
testing
the
bits
in
the
pr
in
these
pr's
if
anyone
else
is,
is
able
to
help
with
that
effort,
then
then
get
in
touch
with
rich
and
alan
on
slack
or
in
the
prs.
A
Yeah, I hear you there; I'll try to take a look at your analysis of the dbuf stuff. Thank you. Cool. Well, our next meeting will be four weeks from today; I think it's the same time, 1 p.m. Let me check... the previous one was 1 p.m., oh, so the next one will be the earlier time; no, hold on, let me double-check. No; all right.
C
I have one other question, about the spa_asize_inflation thing. Somebody raised a point that it looks like maybe my assumptions about what the formula needs to look like for that are wrong. How do we actually... is there existing code somewhere that actually calculates how much space a block, a logical block, is going to take to write out to a raidz3?
A
Is that the answer? When could it be more than that?
A
I mean, if gang blocks are involved, then it could be more than 4x, right? It could be... you could have to gang multiple layers of ganging down to single sectors, and then, you know, plus the gang block header blocks and all that stuff.
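On the question of existing code: the allocated-size calculation for raidz lives in vdev_raidz_asize() in vdev_raidz.c, I believe. A simplified Python model of just the parity overhead, ignoring the skip-sector rounding the real code does and ignoring gang-block headers entirely:

```python
import math

def raidz_asize_sectors(psize_sectors: int, ndisks: int, nparity: int) -> int:
    # Simplified model of raidz allocation: every row of up to
    # (ndisks - nparity) data sectors carries nparity parity sectors.
    # The real vdev_raidz_asize() additionally rounds the result up to a
    # multiple of (nparity + 1) sectors, which this sketch omits.
    rows = math.ceil(psize_sectors / (ndisks - nparity))
    return psize_sectors + rows * nparity

# raidz3, 10 disks: a single-sector logical block needs 1 data + 3 parity
# sectors, the 4x inflation case; larger blocks amortize the parity better.
assert raidz_asize_sectors(1, 10, 3) == 4
assert raidz_asize_sectors(7, 10, 3) == 10
```

Gang blocks sit on top of this: each layer of ganging adds its own header blocks, which is why the worst case can exceed the plain 4x figure.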