From YouTube: May 2022 OpenZFS Leadership Meeting
Description
Agenda: Block Reference Table, FIEMAP; Blake3; Write Throttle Smoothing
Details: https://docs.google.com/document/d/1w2jv2XVYFmBVvG1EGf-9A5HBVsjAYoLIFZAnWHhV-BM/edit
A: All right, welcome everyone to the May 2022 OpenZFS leadership meeting. The months keep flying by and we're getting closer to the next OpenZFS conference; I'll have a few notes on that, but first we'll take a look at the agenda. There's not too much on it, so there's lots of time for other topics that folks would like to discuss today.
B: I think there was just some recent interest in getting that change cleaned up and merged, which I agree with, and I think we might have some interested folks picking up that work. But we'll see. Cool.
A: Yeah, I think I spurred that on years ago with a proposal to do kind of a subset of that with a custom ioctl (we're still using that in our product, by the way), and I think you probably saw I opened a PR to make some minor improvements to that, performance-wise.
B: Which I wish I could finish up, yeah. I think the FIEMAP stuff could be wrapped up. Just to summarize, there were some outstanding review comments about things we could do better. I know you made a pass at an initial review, and then I never got back to cleaning it up and getting it all tested. There's still some work to be done there, but I think it's all very doable; it just needs to get wrapped up.
A: All right, that was the only thing that was in the agenda doc, so: other topics that folks would like to cover today?
B: I'd just make a call for additional reviewers of the BLAKE3 changes. I made a pass over them, and things are looking really good to me. I think they're in good shape, and I'd like to go ahead and merge them this week if possible. So if people have additional feedback or comments or concerns, please take a look at the pull request and post your thoughts.
A: Can you remind folks: it's adding a new checksum algorithm. Is there anything unique or special about how that's implemented compared to the existing checksum algorithms?
B: No, not really. It's implemented the same way as the Edon-R and Skein stuff; it's plumbed in exactly the same way, in all the same places. It's got good test cases, and it's got test vectors for user space and the test suite. There are a couple of assembly implementations for non-x86 architectures, but those are all pulled from upstream, so if you've got a PowerPC or an ARM box and you want to test those out, that would be really good.
B: We've done some of that, but any additional testing would be welcome. I put it through its paces locally myself, at least on x86, and it held up quite well. So it looks like a hands-down win for a better checksum.
C: The assembly implementations aren't all just from upstream, and I know because some of them came from me.

B: I'd misunderstood that, sorry.

C: No, I should be more precise: a bunch of them, like the x86 ones, all came from upstream, but I believe the aarch64 one and the PowerPC one were both programmatically generated from the x86 intrinsics, because there wasn't an upstream one that was faster. And it works fine, mostly; I submitted a fix for that.
C: No, I tested the ppc64 one, on little-endian at least, and after the fix that I made, it works. The aarch64 one I originally did because upstream's native aarch64 one is much slower. So no, those didn't all come from upstream; if you're trusting them for that reason, just be aware.
A: Is it using the salted checksum stuff that we introduced with those other algorithms?
B: Not yet, but we should; we're really close. We've got the Graviton stuff in AWS, but at the moment we're just building there, not running the full test suite.
A: Okay, so we do a build on arm64, but we're not running the test suite. Yeah, that would be cool. I'm specifically interested in the Graviton processors, so if the project ran the test suite there, that would be interesting.
C: Cool. One thing I would mention the BLAKE3 PR adds is a generic interface for checksumming benchmarks. Like the Fletcher 4 benchmark breakdown in /proc on Linux, it has one of those, but for all the implementations.
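As a rough illustration of how such a benchmark breakdown might be consumed: the snippet below parses a per-implementation throughput table modeled on the existing Fletcher 4 kstat output. The sample text, its column layout, and all the numbers are invented for illustration; check the merged PR for the real file name and format.

```python
# Hypothetical sample, shaped like a /proc kstat benchmark table:
# one header row of block sizes, then one row per checksum implementation
# with throughput figures (MB/s). All values here are made up.
SAMPLE = """\
implementation     1k     4k    64k   256k
fletcher4       13029  24820  28991  29287
edonr            2319   2911   3041   3051
skein             562    582    587    588
blake3-generic    964   1024   1038   1040
blake3-sse41     2387   2966   3094   3102
"""

def parse_bench(text):
    """Parse the table into {implementation: {block_size: MB_per_s}}."""
    lines = text.strip().splitlines()
    sizes = lines[0].split()[1:]          # header row: block sizes
    results = {}
    for line in lines[1:]:
        name, *vals = line.split()
        results[name] = dict(zip(sizes, (int(v) for v in vals)))
    return results

bench = parse_bench(SAMPLE)
# Pick the implementation with the best throughput at 256k blocks.
fastest = max(bench, key=lambda n: bench[n]["256k"])
print(f"fastest at 256k blocks: {fastest} ({bench[fastest]['256k']} MB/s)")
```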
A: Yeah, I know you'd like a review from me. I'm not sure how much time I'll have to spend on it; I would like to spend some time on at least the high-level aspects of it, but I'm not sure how much time I'll have.
D: I know it's hard for people to find time. That's why I also proposed to meet for some time, or even meet after this call, so I can walk anyone interested through the code, at least to give you a high-level overview of what's going on there; maybe that will make it easier to figure out. Alan already volunteered to stay after this call.
A: Yeah, sounds good.
A: The only other topic I had was the OpenZFS Developer Summit. Based on the feedback that we had from this group last month, we're going ahead with trying to plan an in-person event that'll be in the Bay Area. We would love to have some help with the organizational aspects of it, as I mentioned last time.
A: So if you, or someone you know at your company or in your community, could help with organizing, in terms of the logistics, interfacing with the venue, and maybe helping on the day of with all the vendors and registration and things like that, we could really use some help with it. But yeah, we're planning to go ahead with in person, and we'll investigate the venues that we've used before.
A: We'll keep folks attuned to that; hopefully in the next month or two we'll have the details nailed down. Matt, did Denise come up with a clear position from that company?

Not really; we're talking with her, but haven't really heard back on any specific commitments of what they can do.

A: Understood.
A: So yeah, any thoughts on that? We'll have a call for presentations soon, once we have the date and everything, and have details so that speakers can know when they would need to be available.
A: I'd love to hear about BRT, which will hopefully be closer to integration, or integrated, by that time. But I think there's been a lot of other interesting work that would also be great to hear about at the conference.
E: Hello, everyone. I believe two months ago Allan raised some issues with ZVOL performance, and we proposed a PR to smooth the throttling mechanism. Maybe you recall this case.
E: We are writing a lot of data, and the data aren't flushed until some level, and then suddenly they are all flushed at once. What we can see is that the flushing is done in one step, let's say, and that no additional data is accepted until the flush is finished. Because of that, we can see the bandwidth dropping to zero megabytes per second. We had a little bit of a hackish way of handling that: adding an additional delay to every write after some level of dirty data.
E: One of the ideas was to add additional accounting, adding the size of the dirty buffer into the delay calculation. How it works right now is that we first decide how long we should delay for the transaction group, and then we add the dirty data to the counter. Because of that, we assumed that we weren't able to delay the writes enough to keep from filling up the dirty buffer.
E
We
test
that
and
it
turns
out
that
it
doesn't
help
it's
still
behaving
the
same
in
the
same
manner.
We
also
try
to
just
add
the
accounting
much
earlier,
so
before
actually
doing
the
delay,
but
this
didn't
work
as
well.
We
looked
into
the
accounting
of
the
dirty
data
and
it
seems
to
work
fine.
I
mean
we
couldn't
find
any
place
that
actually
was
missing
some
this,
where
the
substitution
wasn't
done
actually
yeah
and
basically
we
are
a
little
bit
out
of
ideas.
What
we
can
do
else.
E: I have also seen that Alexander proposed a patch, and pointed out another PR that may be the cause of the issue, but unfortunately neither the patch nor reverting the commit that he pointed out helped in this case. We still see the same behavior.
A: All right, yeah. I see the last couple of comments are from Alexander, kind of saying, oh, we should do it this other way, but it sounds like you're saying that didn't work.
A: All right, yeah. It would probably be good to mention that in the PR so that Alexander understands.
A: Yeah, I'd kind of assumed that folks were going to take this other approach that Alexander had proposed, but it sounds like that's not going to work, entirely at least.
A: Yeah, I don't have any specific ideas based on that. I know last time when we discussed it, there was concern about the heuristics in here.
E: There was also one more theory, that we allowed too many threads to write at the same time. But we also limited the writers to one thread, and it still behaved the same.
A: Just to summarize for other folks following along: I think the observed behavior is that the system gets to having 100% dirty data, and then the delays are super duper long, and then you're kind of stuck in that state until the txg finishes syncing. And that's the mystery.
A
You
know
the
this
is
showing
like,
as
the
dirty
data
increases,
the
delay
increases,
and
this
is
like
a
log
graph
log
scale
of
the
delay.
So
you
know
when
you're
getting
like
close
to
having
all
the
dirty
data.
You
know
the
delay
here
is
10
milliseconds,
which
is
a
long
time.
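The curve being described matches the write-throttle formula documented in the comments of OpenZFS's module/zfs/dmu_tx.c, where the per-write delay grows hyperbolically as dirty data approaches the limit. The sketch below is a simplified illustration, not the kernel code; the tunable names follow the OpenZFS module parameters, but treat the exact defaults here as assumptions.

```python
# Simplified model of the OpenZFS write-throttle delay curve:
#     delay = zfs_delay_scale * (dirty - min) / (max - dirty)
# per the comment block in module/zfs/dmu_tx.c.
ZFS_DIRTY_DATA_MAX = 4 << 30          # 4 GiB limit, as in the discussion
ZFS_DELAY_MIN_DIRTY_PERCENT = 60      # throttling starts above 60% dirty
ZFS_DELAY_SCALE = 500_000             # nanoseconds (default tunable)

def write_delay_ns(dirty_bytes):
    """Per-write delay in nanoseconds for a given dirty-data level."""
    min_dirty = ZFS_DIRTY_DATA_MAX * ZFS_DELAY_MIN_DIRTY_PERCENT // 100
    if dirty_bytes <= min_dirty:
        return 0                      # below the threshold: no throttling
    if dirty_bytes >= ZFS_DIRTY_DATA_MAX:
        return float("inf")           # at the limit: writes block outright
    # Hyperbolic growth: the delay explodes as dirty approaches the max.
    return (ZFS_DELAY_SCALE * (dirty_bytes - min_dirty)
            / (ZFS_DIRTY_DATA_MAX - dirty_bytes))

for pct in (50, 70, 90, 99):
    dirty = ZFS_DIRTY_DATA_MAX * pct // 100
    print(f"{pct}% dirty -> {write_delay_ns(dirty) / 1e6:.3f} ms delay")
```

Because the denominator shrinks toward zero, a graph of this delay is only readable on a log scale, which is why the curve looks flat for most of the range and then shoots up near 100% dirty data.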
E: Yes, we do this. The idea is that after reaching the maximum level, after some period, we still add some delay; this is the yellow delay that we actually keep adding after reaching it. It doesn't solve the issue completely, but it still smooths the bandwidth.
E: I also tried adding that count of bytes before actually doing the delay, so that if we have a big batch of writes we'd still account for it. Right now the accounting is done after the delay, but I also tried doing it at the beginning.
A
And
it
still
didn't
work
that
would
matter
if
your
rights
are
very
large
compared
to
you
know
the
whole
dirty
data
limit
right
like
right.
This
is
four
gigabytes.
If
you're
like
oh
I'm,
coming
in
with
a
one
gigabyte
right,
then
you
know
it
would
really
matter
whether
you're
doing
the
accounting
before
or
after,
but
if
you're
coming
in
with
like
a
a
kilobyte
right,
then
it
doesn't
really
matter.
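The point about write size can be made concrete with a toy model (this is not OpenZFS code; the linear delay curve and the numbers are stand-ins): charging a write against the dirty counter before versus after computing its delay only changes the result noticeably when a single write is a large fraction of the limit.

```python
# Toy illustration of accounting order vs. write size.
DIRTY_MAX = 4 << 30  # 4 GiB dirty-data limit, as in the discussion

def delay_for(dirty):
    """Stand-in delay curve: proportional to how full the dirty buffer is."""
    return dirty / DIRTY_MAX  # arbitrary units

def account_after(dirty, write_size):
    """Current behavior: compute the delay from the stale count, then charge."""
    d = delay_for(dirty)
    return d, dirty + write_size

def account_before(dirty, write_size):
    """Alternative: charge the write first, then compute the delay."""
    dirty += write_size
    return delay_for(dirty), dirty

dirty = 2 << 30  # buffer half full
for size in (1 << 10, 1 << 30):  # a 1 KiB write vs. a 1 GiB write
    d_after, _ = account_after(dirty, size)
    d_before, _ = account_before(dirty, size)
    print(f"write {size:>10} B: delay after={d_after:.6f} before={d_before:.6f}")
```

For the kilobyte write the two orders give effectively the same delay; for the gigabyte write they differ substantially, which is why the ordering change was not expected to help with the 16 MB writes in the test.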
A: Do you know what size the writes are in the test that you're doing?
A: Then, I mean, it can't be more than... I think it's like 16 megabytes; yeah, a little bit more than that. It's very surprising, especially with just one thread, that this yellow part would matter. Okay, maybe it's smoother to do this yellow part, but it shouldn't...
A: You're going to get to this middle part where the behavior is the same, and then why would that matter? You're still here in the middle doing a little bit of delay, and then you get close to here and do a bunch of delay, and then something either goes wrong or doesn't go wrong, causing us to get up to 100%?
E: I can share one more graph, actually, which maybe will be more illustrative here. I will try to share my screen, then.
E: Yes, something like that; it's somewhere around the maximum of dirty data. I believe we have one gigabyte of buffer right now.
A: When you run into the bug of getting to 100% dirty data, the delay is so great that once you hit that 100-second delay, you're kind of stuck there for 100 seconds; then it can all drain out, and after the 100 seconds you can come back in.
E: I would need to double-check to be 100% sure whether we are actually inserting the delays, because I did so many tests that I don't remember, actually.
E: But I remember we were also surprised about the amount of these delays that we actually insert.
A: Cool. It's definitely an interesting mystery here. Do you have steps to reproduce this?
E: Yes, it's 100% reproducible. I can also put the steps into the PR; I guess that will be the best place.
A: Okay, then I guess you can meet up with him in half an hour. I don't think this Zoom will work for that; once we close it out, I don't know if you can rejoin it. Do you want to send out a Zoom link for folks that are interested in joining in half an hour to do that?
G: But if anybody knows anything, I am all ears. I make no guarantees on producing a usable artifact by the end of the summer, but I would love to hear anything anybody has to say. Like I said, I think my questions were already answered in Slack.
A: Cool, well, welcome to the community. Maybe if you come up with a design, in terms of what the user interface would look like, you could share that with the team, either on Slack or in an issue on GitHub; but also, if you want to, make a little presentation about it here next month.
A: All right then, we'll see you all in four weeks, when we'll have the meeting at the later time; that will be the 21st of June. I'll see you all then. Bye.