From YouTube: February 2022 OpenZFS Leadership Meeting
Agenda: Block Reference Table; Encryption bugs; Blake3 checksum.
https://docs.google.com/document/d/1w2jv2XVYFmBVvG1EGf-9A5HBVsjAYoLIFZAnWHhV-BM/edit#
A
All right, welcome everyone to the February OpenZFS meeting. We have a couple of interesting things on the agenda for today: an update on the block reference table; encryption bugs (let me see, Rich said he might not make it, so we'll see who would like to go over that); and then a question about the Blake3 checksum.
B
Okay, let me get my notes.

B
Okay, so just a reminder: BRT is the block reference table. It is in some ways similar to dedup. It allows us to manually clone specific, given blocks; it's basically similar to reflink from Linux. We can clone individual blocks or we can clone whole files, but the idea is that we only create block pointers that point at existing data.
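To make the reflink idea concrete, here is a userspace sketch using Linux's copy_file_range(2). On a filesystem with block-cloning support (the facility BRT provides), the kernel can satisfy this call by creating block pointers to existing data instead of copying bytes; elsewhere it degrades to an ordinary copy. This is illustrative only, not OpenZFS code, and the fallback path is my addition.

```c
/* Reflink-style clone sketch: copy_file_range(2) with a plain-copy fallback. */
#define _GNU_SOURCE
#include <assert.h>
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

/* Clone (or copy) src_path into dst_path; returns bytes transferred or -1. */
static ssize_t clone_file(const char *src_path, const char *dst_path)
{
	int src = open(src_path, O_RDONLY);
	if (src < 0)
		return (-1);
	struct stat st;
	if (fstat(src, &st) < 0) {
		close(src);
		return (-1);
	}
	int dst = open(dst_path, O_WRONLY | O_CREAT | O_TRUNC, 0644);
	if (dst < 0) {
		close(src);
		return (-1);
	}
	ssize_t done = 0;
	while (done < st.st_size) {
		ssize_t n = copy_file_range(src, NULL, dst, NULL,
		    (size_t)(st.st_size - done), 0);
		if (n <= 0) {
			/* Unsupported here: fall back to an ordinary copy. */
			char buf[65536];
			ssize_t r = pread(src, buf, sizeof (buf), done);
			if (r <= 0) {
				done = -1;
				break;
			}
			n = pwrite(dst, buf, (size_t)r, done);
			if (n != r) {
				done = -1;
				break;
			}
		}
		done += n;
	}
	close(src);
	close(dst);
	return (done);
}
```

Whether the bytes are actually shared depends entirely on the underlying filesystem; the caller cannot tell from this interface alone.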
B
Okay, so I think the code is, I would say, 98 percent ready. I got sidetracked a bit with a small security issue.
B
There have been some changes since my last update. For example, there was some interaction between dedup and BRT.
B
If we had a block that existed in both tables, we had to choose which table it should be freed from first, and both choices have some implications. Alan came up with an interesting idea, which we implemented: if we are going to clone a block that already has the dedup flag set, we will just increase the counter in the dedup table, and we won't put this block into the BRT at all.
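The rule Alan suggested can be sketched as below. The struct and field names here are invented for illustration, not the actual OpenZFS types: when the block being cloned already carries the dedup bit, its DDT refcount is bumped instead of creating a BRT entry, so no block is ever tracked by both tables at once.

```c
/* Toy model of the clone-time decision between the DDT and the BRT. */
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

struct toy_blkptr {
	bool dedup;          /* dedup bit set in the block pointer */
	uint64_t ddt_refcnt; /* references held in the dedup table */
	uint64_t brt_refcnt; /* references held in the BRT */
};

static void clone_block(struct toy_blkptr *bp)
{
	if (bp->dedup)
		bp->ddt_refcnt++; /* already in the DDT: just add a reference */
	else
		bp->brt_refcnt++; /* otherwise the clone is tracked by the BRT */
}
```

The payoff is that on free there is exactly one table to consult for any given block, so the ambiguous "which table frees it first" question never arises.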
B
So I wanted the code to be ready for that, and I implemented some missing bits to try to complete the implementation: stuff like enforcing the file size limit, range locking of the source files, enforcing quotas, passing I/O flags so we can tell if we should use the ZIL or not, updating atime on the source file, and stuff like that.
B
The ZFS clone-file function initially was in the FreeBSD-specific code, but I rearranged the code to have as much system-independent code as possible. So most of the code is now available for everyone to use.
B
I implemented an fclonerange syscall for now. The cloning code itself allows cloning blocks, but the syscall I implemented initially allowed cloning only entire files, because that is probably the most common use case. But because it was simply easy to do, I also added the fclonerange syscall, which allows cloning any sets of blocks, even within the same file, as long as the ranges don't overlap.
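The non-overlap rule mentioned above reduces to a tiny standalone check (illustrative only, not the actual kernel validation code): two half-open byte ranges overlap exactly when each one starts before the other ends.

```c
/* Overlap test for two half-open byte ranges [a, a+alen) and [b, b+blen). */
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

static bool ranges_overlap(uint64_t a, uint64_t alen, uint64_t b, uint64_t blen)
{
	return (a < b + blen && b < a + alen);
}
```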
B
Cloning blocks is a bit different from all the other system calls and operations, because we can have the source and destination files in different datasets. So for the ZIL, I didn't want to reference a dataset that maybe, I don't know, is not yet available for some reason, and I didn't want to use file IDs from different datasets, so I had to implement replay totally differently.
B
So now I log just the BPs that I want to clone, and I don't reference the source file at all; so the ZIL support is implemented. I wanted to thank Christian for his reviews, comments, and discussions. So thank you, Christian, for that.
B
Apart from that, I think some tests would be nice to have, so I would appreciate it if someone pointed me in the right direction on how we could test this, or at anything similar, maybe how deduplication is tested now, so that I could work on something similar. There is also one more item I'm thinking about implementing: right now the block reference table is a single large table, and the key into the table is the vdev ID and offset.
B
So we have those two 64-bit values as a key, and the counter, of course, is the value of the element in the table. But the vdev ID repeats often, right? If we have just a handful of vdevs, then most of the keys share 50 percent of their content.
B
So we have 24 bytes of data, right: 16 bytes for the key and eight bytes for the data, and 33 percent of that repeats in almost every entry. It's probably not a huge space saving, but it just bothers me that we are repeating everything, so I'm considering splitting the BRT per vdev, so we won't repeat the vdev ID.
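The space math above can be spelled out with two toy entry layouts (these are illustrative structs, not the on-disk BRT format): a single global table keys each entry by (vdev ID, offset) plus a refcount, 24 bytes in all, while a per-vdev table can leave the vdev ID implicit and store 16 bytes per entry.

```c
/* Entry layouts being compared: global table vs. one table per vdev. */
#include <assert.h>
#include <stdint.h>

struct brt_entry_global {  /* one table for the whole pool */
	uint64_t vdev_id;
	uint64_t offset;
	uint64_t refcnt;
};

struct brt_entry_pervdev { /* one table per vdev: vdev ID is implicit */
	uint64_t offset;
	uint64_t refcnt;
};
```

Dropping the repeated vdev ID is exactly the 8-of-24-byte (33 percent) saving mentioned in the discussion.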
B
Not really. I'm not sure if I mentioned this in the past, but one of the big problems was that when we free a block, we have to look at the BRT table, because we don't have a specific special bit like deduplication has. For deduplication, you only consult the table when the dedup bit is set.
B
For BRT we don't have that. But what I implemented to make this work better than deduplication, and it can work better than deduplication, relies on the key not being random, so it's not spread across the entire pool.
B
I now split the vdevs into ranges, like every few gigabytes, and I keep track of how many references are within each range. This array of ranges and the counters within the ranges is very small. I think it was like...
B
One megabyte per one terabyte, or even smaller, something like that, so it's very small. What it allows us to tell is whether a given block might be in the BRT. If it might be, then we go and consult the block reference table, but we can immediately tell if it's not. And if we clone even a very large file, chances are all those blocks will be within a single range, so that will be just a single entry. With deduplication, because we use cryptographic checksums, this would be spread across the entire pool, so for dedup it wouldn't work, but for BRT it works very, very well.
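The coarse per-range summary described above can be sketched like this (the range size and all names are invented for illustration): the vdev address space is divided into fixed ranges, each with a count of BRT entries inside it. On free, a zero count proves the block is not in the BRT, so the full table lookup can be skipped; a nonzero count may be a false positive, but never a false negative.

```c
/* Toy per-range summary filter for BRT membership. */
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define	RANGE_SHIFT 30 /* 1 GiB ranges, purely illustrative */

struct brt_summary {
	uint64_t *counts;  /* number of BRT entries per range */
	uint64_t nranges;
};

static void summary_entry_added(struct brt_summary *s, uint64_t offset)
{
	s->counts[offset >> RANGE_SHIFT]++;
}

/* May return a false positive, never a false negative. */
static bool block_maybe_in_brt(const struct brt_summary *s, uint64_t offset)
{
	return (s->counts[offset >> RANGE_SHIFT] != 0);
}
```

This is the same trade-off as any summary structure: a tiny always-in-memory array filters out the common case (blocks that were never cloned) before touching the big table.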
B
But yes, if someone uses BRT very heavily and has clones across the entire pool, then yes, that would impact memory as well, I'm sure, because of course we want to cache as much as possible. If I implement this tweak, each entry will be just 16 bytes per block.
A
Yeah, you might even be able to get away with, like, a 32-bit refcount, saving even more space there.
B
Yeah, the offset too: instead of using a byte offset, we could use a block offset. It won't save us a lot; we'd save a few more bits, but...
A
Yeah, you could do that, and then either use those bits for the refcount; there might not be enough bits there to use for the refcount, but you could use them in case the refcount overflows the 32 bits, or maybe the refcount is 16 bits plus the bits that you have left over in the offset. You know, something like that, to squeeze as much as you can to fit stuff into memory.
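One way to shrink the entry along the lines being discussed, purely as an illustration: since block offsets are multiples of the minimum allocation size, the low bits of a byte offset are always zero and can be dropped, and the refcount can start at 32 bits with some escape mechanism for the rare overflow. The shift value and names below are assumptions, not OpenZFS code.

```c
/* Packed-entry sketch: store offsets in allocation units, refcount in 32 bits. */
#include <assert.h>
#include <stdint.h>

#define	ASHIFT 9 /* 512-byte minimum allocation, for illustration */

struct brt_entry_packed {
	uint64_t offset_blocks; /* byte offset >> ASHIFT */
	uint32_t refcnt;        /* 32 bits instead of 64 */
};

static uint64_t offset_pack(uint64_t byte_off)
{
	return (byte_off >> ASHIFT);
}

static uint64_t offset_unpack(uint64_t blocks)
{
	return (blocks << ASHIFT);
}
```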
B
I think that's probably right. With those counters within ranges, I think it's no longer a huge problem; I think the problem was solved with that one. Before, it was a concern of mine that we would basically end up with the same problems as deduplication, but...
A
B
I was actually surprised by the calculations, but for me even one gigabyte was very small, because normally, when you clone a file, you probably want to clone a pretty large file, right? A one-gigabyte file is not that big, and the whole file should fit into a single range.
A
Yeah, I guess I was thinking that people might use it to clone a bunch of smaller files. You know, it's not just my one disk image; once this is available and easy to use, people might say, oh, I'm going to cp -r all of my photos that are each a couple of megabytes, and cp is just using BRT under the table to not have to actually copy the data.
B
I hear what you're saying, but just to put it into perspective: we can have a one-megabyte record size, right? With this space optimization, for blocks that are one megabyte and ranges that are one megabyte, it's a matter of whether the range table takes eight megabytes per terabyte, or the BRT takes like 16 megs per terabyte.
B
So it's not... I think one...
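The back-of-the-envelope numbers in that comparison can be spelled out (a sketch; the 16-byte entry and 8-byte range counter are the sizes from the discussion, not measured values): one terabyte of one-megabyte blocks is about a million entries.

```c
/* Size of a table that stores one fixed-size entry per unit of data. */
#include <assert.h>
#include <stdint.h>

static uint64_t table_bytes(uint64_t data_bytes, uint64_t unit_bytes,
    uint64_t entry_bytes)
{
	return ((data_bytes / unit_bytes) * entry_bytes);
}
```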
A
If you have those big blocks, then you probably aren't having problems with it at all, right? Like, the whole BRT will be cached in RAM, right?
A
So I guess maybe the argument that you're alluding to, which I think I agree with, is: either you're cloning files that are laid out with a large record size, in which case the BRT is small, or you're cloning files that have a small record size, in which case the files are likely very large, right? If you're cloning a large file like a disk image or a database or something, those are the cases where you have a small record size; medium-sized files wouldn't have a small record size. And so that's kind of why your solution might work out really well, because either you're cloning these medium-sized files with large record sizes, in which case the number of entries in the BRT is small, or you're cloning...
A
...these huge files that may have a small record size, but they're laid out in sections of the disk such that you can use this summary info to at least reduce the cost of the BRT manipulations for frees of things that are not actually involving it.
A
So then you're using that to reduce the cost of the frees when the BRT is not involved. You still have to worry about the size of the BRT for those small-record-size...
A
...small-record-size cases when you are actually freeing one of these cloned files, right, because you're going and decrementing the refcount, then.
B
Still, we have some savings there, since we are not really freeing the data, just decreasing the reference. But apart from trying to get the entry in the table as small as possible, I don't think there is anything we can do.
A
I think that makes sense. I don't know if you've already done this, but the table has an entry for each logical block, is that right?
B
I know we could do something like that, but I suspect it would be a huge complication to the whole code when you start cloning just ranges.
B
I don't think that if we go with this code, it prevents us from doing that in the future; we are not committing to anything that would prevent us from doing it later. But I would definitely prefer not to do it now, because I expect huge complications.
A
Sounds like you're really far along; this is getting really real.
B
Yeah, I think it's pretty much almost ready. I regret that I stumbled upon this security issue, because that took a while, but yes, the code is pretty much ready.
A
And for the logging that you mentioned: I assume the overall clone operation might include some giant file, and I assume you're splitting that into smaller log records, like when you do a huge write system call we split it into one-meg-or-something log records. Are you doing something similar?
B
Yes, I pretty much mimic how write does it, so I split those up, of course into larger chunks than writes use, because I'm just storing the BPs; each chunk is stored in a separate transaction.
E
I haven't had a chance to look at the latest revision, so which syscalls are hooked up to this mechanism right now? You talked about fclonerange or whatever; I think that's a FreeBSD thing.
B
I'm not really planning to do that myself. I hope it will be easy enough for someone to do, based on the syscalls I implemented for FreeBSD, but I think that on Linux it's either an ioctl or some other way to use reflink; it's not really a syscall. Okay, I remember.
E
So for reflink, I think, yeah, that should be it.
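For reference, on Linux the reflink entry points are indeed ioctls rather than dedicated syscalls: FICLONE clones a whole file and FICLONERANGE a byte range. A minimal sketch follows; on filesystems without block-cloning support the ioctl fails (typically EOPNOTSUPP), so a caller usually falls back to copying.

```c
/* Linux reflink via the FICLONE ioctl (whole-file clone). */
#include <assert.h>
#include <errno.h>
#include <fcntl.h>
#include <linux/fs.h>
#include <sys/ioctl.h>
#include <unistd.h>

/* Returns 0 on success, -1 with errno set otherwise. */
static int reflink_file(int dst_fd, int src_fd)
{
	return (ioctl(dst_fd, FICLONE, src_fd));
}
```

FICLONERANGE works the same way but takes a struct file_clone_range describing the source range and destination offset.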
B
macOS has a separate system call to do that. But the difference between macOS and what I did is that they have a clonefile system call where you provide paths to the files, not file descriptors, and I decided that I don't want to replicate the entire logic of copying, because clonefile on macOS tries to copy all the permissions, the ACLs, and stuff like that to the target file, and I decided that I don't really want to duplicate that logic.
B
Yes, I think that when I looked, reflink and the BRT equivalent in btrfs were implemented using an ioctl. It wasn't a system call, but maybe something has changed, I don't know.
C
Well, a small question: you mentioned it's implemented for files. What about zvols? Is it general enough to be used there?
B
But yes, it can work on zvols just as well. I didn't come up with any interface to do that, but you can definitely do it on zvols too. We just need some kind of system call or some interface to get at it, but it can be done.
A
Yeah, so it sounds like the interface is a new interface that you've implemented for FreeBSD.
A
So maybe, when you're ready, we can try to find who would be interested and able to work on hooking that up. I think that would increase the utility of this a lot.
B
Yes, for FreeBSD there is nothing like that as far as I know, so we will still need to work on the tools to be able to use it. Although I was also wondering, and maybe you guys have an opinion about this: what about implementing some kind of daemon that can work in the background and find stuff that can be deduplicated, of course with snapshots in play?
B
The idea for me was that the daemon would walk through file systems and just read block pointers and checksums, so it should be much faster than reading all the data. We would need an interface for that as well, and then, when we find duplication in the data, we could use BRT to clone the older block over the newer one, and even if we have a snapshot, the other block would stay.
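The daemon's core idea, reduced to a toy: remember checksums already seen and flag later blocks with a matching checksum as candidates for BRT cloning. All names are invented; a real daemon would read block pointers through a kernel interface and, as noted below in the discussion, verify the actual data whenever the checksum is not cryptographic.

```c
/* Toy offline-dedup scan: count blocks whose checksum matched an earlier one. */
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* O(n^2) scan, fine for a sketch; a real scan would use a hash table. */
static size_t count_clone_candidates(const uint64_t *cksums, size_t n)
{
	size_t candidates = 0;
	for (size_t i = 1; i < n; i++) {
		for (size_t j = 0; j < i; j++) {
			if (cksums[i] == cksums[j]) {
				candidates++;
				break;
			}
		}
	}
	return (candidates);
}
```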
G
Could you go through it now and find any blocks that have changed since I cloned it, but are the same again now? Like, I installed the same Windows update in two VMs or whatever, or especially things that were actually ZFS clones, like dataset clones, and they've just diverged a lot. Can we find blocks that are common between them and turn them into references instead of taking up the extra space?
B
Yeah, well, the daemon would be nice, because I think there are some systems already that use offline deduplication like that: when the machine is less busy, the daemon would just start, walk, and try to find what can be deduplicated, especially if we come up with an interface that can read only block pointers. So this should be much, much quicker than just reading all the data, and based on the checksums...
B
If it's not a secure checksum, it will of course compare the data as well, but still, this daemon could be really quick. And it would also be nice if we could come up with an interface that allows us to swap blocks atomically, because of course we are working on a live file system, so the file can be in the middle of changes.
B
I know you would like me to open a PR as soon as possible. Well, first I would like this security issue to be solved, because I depend a bit on the fix; this is how I found it. I needed it for BRT, so I want that to be merged first.
B
I
want
to
merge
it
to
my
branch
and
then-
and
I
think
it's
pretty
much
ready
to
open
up
pr.
It
will
be
still
missing
tests,
but
I
think
it
would
be
good
to
start
a
discussion
and
and
see
what
people
think
about
the
code
itself.
B
I don't know yet; I know that Christian was taking a look, whether at the entire thing or just the ZIL part, but it needs much, much more review. So I think the code is pretty much ready for a PR; I just need this security fix in first. That would be great, but no promises, who knows.
A
Thanks, Christian, for taking notes; I see you typing there in the doc. Next, should we talk about encryption bugs? Is Rich here? I don't see your name. He said he might not be able to make it. Is there anyone else working on this who would like to speak to a summary of what's going on with the encryption bugs?
H
I could take a shot. Can you hear me? Yeah? Thank you. This is George Amanakis. I was actually involved in solving two bugs; one of them I see Rich mentioned in the document.
H
That was a problem mainly when raw sending snapshots to a target and then trying to send them back to the originating file system. That was not possible because of the user accounting being present, so we decided to work around that by resetting the flag in the objset of the file system.
H
Yes, it's actually indicating that it's not complete, so when we're sending it back, it will bypass comparing the local MAC, and so it now enables sending snapshots back to the originating system. As soon as the key is loaded and the file system is mounted, it will go on to complete the user accounting...
H
...by including the new snapshot that was sent back in the user accounting, and the flag will be set again. So that's how we solved that: the user accounting is always there, and the dnodes are not touched.
H
It's just a flag in the objset that indicates that the user accounting is not complete, so we don't go ahead and check the local MAC when raw receiving. It was only a matter of skipping the comparison of the local MAC in the case of raw receiving. Cool.
H
There was a second one that's not mentioned. It manifested as an unclear error when, again, raw sending and receiving snapshots; that was, I think, issue 12720, where it showed up during raw sending. But the issue only surfaced because the encryption code path checks for discrepancies between the dnode bonus length and the dnode spill pointer flag; it actually goes back to very early releases of OpenZFS.
H
So I went ahead and filed a pull request, 13014, where, when we encounter such faulty dnodes, whose bonus length is actually greater than the predicted one when accounting also for the dnode spill pointer flag, we report an error. This is done during normal raw receiving and also during scrubbing.
H
So now, when we do a scrub and we pick up such faulty dnodes, we report an error, and I also added an assertion at the end of dnode sync, so that it can check for this discrepancy there, too.
A
I saw that there are a few things mentioned here in the notes that Rich put in about works in progress. Do you know the status on those? Do you know if he's looking for help on them, or if he's making progress?
H
I think he thinks that there is a problem with the refcount accounting, either in the ARC code or in the dbuf code paths: the refcount accounting is not done correctly, so we can release a dbuf, and then a code path, for example a dbuf write followed by a dbuf free, goes on to access it and it's no longer there, so it panics with a null pointer dereference. I think he has also submitted a draft PR, but I don't think it's final at this point, so I think he's still looking into it.
H
Let me quickly check... it's a draft PR, as far as I can remember.
H
As I said, I think this is only a draft PR. He's actually going over the locking to see if the panic can be bypassed by manipulating the locking, but I'm not sure that we actually know the problem is there; it could also be a problem in the ARC code, with the refcount accounting.
A
All right, well, it sounds like it would be great to have more folks who are familiar with the DMU take a look at that PR and the analysis he's done.
A
All right then, let's go to the next agenda item, which is the Blake3 checksum. Tino added this item, if you're on the call.
D
That's the micro-benchmarking, like it's done for Fletcher; it has the same meaning: for a 1k block, for 4k, 16k, and so on, up to a four-megabyte block, with the throughput in megabytes per second. Cool. I even implemented a chart tool for this, but it's not in the pull request yet. I would open that once Blake3 is in, and then I would also do some more stuff: different implementations, for AVX, the generic one, other instruction sets, and so on.
D
That's just the Blake3 stuff and the benchmarking of the implementation. In a later pull request I would maybe also add more assembler stuff, and then also some benchmarking, and then the fastest implementation is chosen when the module is inserted.
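The load-time selection described above can be sketched like this, in the spirit of the existing Fletcher-4 benchmark (all names are invented; the real code times each candidate implementation over test buffers): benchmark every implementation, then pick the best score.

```c
/* Toy model of choosing the fastest checksum implementation at load time. */
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

typedef uint64_t (*cksum_fn_t)(const void *buf, size_t len);

struct cksum_impl {
	const char *name;
	cksum_fn_t fn;
	uint64_t mbps; /* filled in by a timed benchmark run */
};

static const struct cksum_impl *
cksum_pick_fastest(const struct cksum_impl *impls, size_t n)
{
	const struct cksum_impl *best = &impls[0];
	for (size_t i = 1; i < n; i++) {
		if (impls[i].mbps > best->mbps)
			best = &impls[i];
	}
	return (best);
}
```

The per-block-size numbers from the benchmark table would feed the score; here a single throughput figure stands in for that.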
A
I see. And does your PR include the AVX versions of Blake3? Okay, cool. Yeah, I mean, those numbers look great; it's way beyond. The AVX-512 one is, what, three or four times faster than Edon-R and like 10 times faster than SHA? That's really impressive.
A
Now, in your PR, are you allowing it to be used with dedup without verification?
A
Sorry, I mean, that all sounds good to me. Any thoughts or objections from other folks?
B
I think we should really talk to some crypto expert about whether we can use this without verification, if it's not a standard.
E
The credentials are pretty good; I posted the link in the chat. But I don't know what the bar was for the other checksums that were used, like who vetted them.
A
Well, I think that, I mean, probably me or Brian gave our stamp of approval, but I think the idea with the other ones was that they had been vetted extensively by third parties. And the goal here is basically just that you could use it for dedup and nobody can generate a collision, like, nobody can generate another block that checksums to the same value as a given block.
G
Right, there's no known way to craft a collision on purpose.
E
So, Blake3: I think that is the definition of a cryptographic checksum, and Blake was a contender in the SHA-3 contest, by a quite renowned cryptographer. I can ask someone who is doing a PhD in cryptography to weigh in on this. He is also a previous OpenZFS contributor, so he has some domain knowledge. I can reach out to him.
B
Note that Blake3 is two years old, almost exactly, so it's still very young.
B
The more peer review it has had, the better, but of course it would be great to have a very fast, cryptographically strong checksum.
A
But yeah, given the performance, I'd love to see it in, even as an alternative to Edon-R, which we don't accept for dedup but which can be used for...
A
I think we use it for nopwrite detection, and it's obviously much, much stronger than Fletcher. So if we weren't going to use it for dedup, then it seems like the bar is already met. I guess the question is: are there any concerns about unknown vulnerabilities that might make it not appropriate for dedup?
A
I think that was the concern with Edon-R, and that's why we didn't make it... what?
A
Well, with nopwrite you don't really have to be concerned about an attacker; it's only accidental matches that we have to worry about, because if you can write to that location, then obviously you can write whatever you want. So we weren't really concerned with the case where an attacker can control what's written.
A
But it seems like, I don't know how you would use that to actually cause anything bad to happen to anyone but yourself.
B
Yeah, it's definitely a different level of threat, but it's also important: you definitely don't want to leave a vulnerable binary in place only because the updated binary has the same checksum. So we definitely don't want it to be possible, even accidentally, to generate something that collides. I also give this example...
B
...sometimes, in terms of reproducible builds: OpenSSH had a bug where the binary of the secure version and the vulnerable one differed by only one bit, because the fix was only a matter of changing, I think, a less-than to a less-than-or-equal, and the resulting binaries differed in just a single bit.
B
But yes, you're right, it's much, much harder to exploit than that.
A
Yeah, I think we're almost at the end of the time; I have a meeting right after this. So let's conclude this by figuring out the next steps needed to get this integrated. I think one of them is to have this person who's working on their PhD in cryptography take a look at it and just tell us what they think, based on kind of the consensus of cryptographers.
A
Cool, thanks. And then on the code review front: it looks like a few people had started looking at it. Do you need more reviewers, or do you need those reviewers to take another look?
D
From my side, it seems to work. This is the Blake3 checksum, so sign-off from cryptographers would be needed, of course, but I cannot speak for them. Yes.
A
I guess my question was about the reviewers, like Rich and another person (Ha6, was it?). Do you know if they're planning to do a review of the ZFS parts of this?
A
I don't know. Okay, we might need to coordinate. And Mark, I see you're on, but Mark is the...
D
Mike, yeah, Mike, of course, should maybe say something about it. Rind just tested the ARM stuff and also did the implementation of the NEON code, which is some assembler stuff.
A
Sorry, guys, I need to drop, but Mark, could you just give your thoughts and finish up the call for George?
I
Yeah, thanks; yep, I have to drop too. So I'll definitely follow up with the code reviewers and make sure that they're happy with things, and I'll probably even take a look at it myself. Was there another final item on the list, is that right?
G
I just have a bunch of code reviews pending, okay?
G
I think at least one of them is from someone other than you.

G
One is attaching a dataset to a namespace on Linux, basically implementing something similar to what FreeBSD can do with jails or illumos with zones. That one's been sitting there a while. I think the only thing, you know, there was some idea: we wanted to somehow take a reference to the namespace, but there doesn't seem to be an API for that.
G
So we need to figure out what we want to do about that one. Then there's the SPA inflation one, just figuring out what that actually means. And then there's the write throttle one, which I think Alexander gave us a good review on; we made some changes based on that and found a braino in there, but we'd like to see what's required to move forward with it.
I
Okay, all right, I'll follow up with the ones, at least, that are assigned to me, and I'll see if I can ping whoever is in charge of the ones that aren't.