From YouTube: OpenZFS 2021 Hackathon Presentations
From the 2021 OpenZFS Developer Summit
Details: https://openzfs.org/wiki/OpenZFS_Developer_Summit_2021
A: Cool. So we'll have two classes of presentations: the open class, and the ones within the theme of the hackathon, which is quality. Go ahead and use this spreadsheet to add your name and your project to the list. And Paul, I know you need to go soon, so let's go ahead.
B: The last time I was talking about BRT, we had one pretty big pending problem: how to avoid overhead when freeing blocks. With BRT we cannot rely on the dedup bit set in the block pointer, so on every free we would have to go and check whether we need to update the BRT table. I found a solution to that some time ago.
B: I'm dividing all vdevs into regions, one-gigabyte regions, and I keep a reference counter for every region. The reference counter for a region is the sum of all BRT counters for every block in that specific region, and I can keep those counters in memory because they take very little memory.
B: It's eight kilobytes per one terabyte of top-level vdev.
B: So it's very little, and I can keep those in memory. Then, when I have to free a block, I just consult the region's counter. If it's zero, there is nothing to do. If it's more than zero, then the given block is possibly in BRT, so I have to go look up the entry and update it. This pretty much fixes the problem.
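The region-counter fast path described above can be sketched roughly as follows. This is a toy illustration, not the actual OpenZFS BRT code; the structure and function names are hypothetical. A comment also shows where the quoted memory figure comes from.

```c
#include <stdint.h>

/*
 * Hypothetical sketch of the per-region reference counters described
 * in the talk -- names and layout are illustrative only.
 *
 * With 1 GiB regions and one 8-byte counter per region, a 1 TiB
 * top-level vdev needs 1024 counters * 8 bytes = 8 KiB, matching the
 * "eight kilobytes per terabyte" figure quoted above.
 */
#define	REGION_SHIFT	30	/* 1 GiB regions */

struct brt_vdev {
	uint64_t *bv_refcounts;	/* one counter per 1 GiB region */
	uint64_t bv_nregions;
};

/* Return nonzero if a free at `offset` may need a BRT table lookup. */
static int
brt_maybe_in_table(const struct brt_vdev *bv, uint64_t offset)
{
	uint64_t region = offset >> REGION_SHIFT;

	if (region >= bv->bv_nregions)
		return (0);
	/* A zero counter means no cloned blocks in this region. */
	return (bv->bv_refcounts[region] != 0);
}
```

On the free path, a zero counter lets us skip the table lookup entirely, which is the common case on pools with little cloning.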
B: What I implemented today was just a few cleanups, plus an idea Alan had, which is that deduplication and BRT serve a similar purpose. We could have blocks that are both deduplicated using the mechanism we have now and also cloned using BRT.
B: So we could have the same block in both tables at once. Alan's idea was to check the bit: if the dedup bit is set, the block is already in the deduplication table, so there is no need to create an entry in the BRT table; we can just bump the reference counter in the DDT. This basically means that using BRT we do manual deduplication.
B: If the bit is not set, then we simply go to the BRT table and increase an entry there. We cannot do away with the BRT table entirely, because we also want to clone blocks that don't have the bit set.
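Alan's DDT-bit idea can be captured in a minimal sketch. The types and counters here are stand-ins, not the real OpenZFS interfaces; it just illustrates the decision described above.

```c
#include <stdint.h>
#include <stdbool.h>

/*
 * Illustrative clone-path decision -- all names are hypothetical.
 * A block whose pointer already carries the dedup bit lives in the
 * DDT, so cloning it just bumps its DDT refcount; any other block
 * gets tracked in the BRT.
 */
struct blkptr {
	bool dedup_bit;
};

static uint64_t ddt_refs;	/* stand-in for the dedup table */
static uint64_t brt_refs;	/* stand-in for the BRT */

static void
clone_block(const struct blkptr *bp)
{
	if (bp->dedup_bit)
		ddt_refs++;	/* already deduplicated: reuse the DDT */
	else
		brt_refs++;	/* otherwise track the clone in the BRT */
}
```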
B
So
we
just
want
those
two
functionalities
to
work
nicely
together.
So
yeah.
One
thing
I
was
hoping
to
to
resolve
today
was
to
make
sure
that.
B: So it seems that I don't really have to do anything special, but I will be confirming that on Slack later. Now I can show you a quick demo. Let's say we have dedup turned on on the pool and there are no blocks yet, so I will copy one block.
B: I will copy one block with dedup on.
B: Yeah, so now let's turn off dedup.
B: Yeah, so we are just reusing the dedup table in that case. So that's pretty much it. I did some initial performance testing yesterday; there is more or less two orders of magnitude of difference between copying a file and cloning it. So that's it. Thank you.
B: Yeah, I think so. There are just a few, I hope minor, details to flesh out, but they might turn out to be pretty hard, like compatibility with vdev removal. I didn't touch that yet, so we'll see.
C: Hey, so I'm going to go ahead and share my screen, because I have a very quick demo, and while the demo is running I will explain the feature. Everyone can see this? Okay, all right.
C: As we talked about a little bit yesterday during the hackathon planning session, the automated test runs that get started when you file a PR against the OpenZFS repository are a good idea in theory, but in practice they're basically always red. The reason is that we have a number of flaky tests, and a number of issues that crop up from time to time, and one of the things we talked about improving was trying to get those automatic test runs to be more useful.
C: So what I'm doing here is running a single test suite, the case normalization suite, and it has a few known failures in it. You can see now we have a couple of tests coming up that have failed, and once this initial set finishes, we automatically rerun all the failing tests, along with the necessary setup and cleanup steps. Then at the end you get the results summary, and it informs you: hey, these are expected failures, and we didn't have any unexpected failures in this run.
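The rerun-and-classify step can be modeled as a small pure function. This is a toy model of the logic described, not the actual ZFS test-suite harness: a test only counts as an unexpected failure if it failed the initial run, failed the rerun, and is not on the known-failure list.

```c
#include <stdbool.h>
#include <stddef.h>

/*
 * Toy model of "rerun failing tests, then report only unexpected
 * failures". Each array has one slot per test:
 *   first_pass[i]    - passed the initial run
 *   rerun_pass[i]    - passed the automatic rerun (only meaningful
 *                      if the initial run failed)
 *   known_failure[i] - on the expected-failure list
 */
static size_t
count_unexpected(const bool *first_pass, const bool *rerun_pass,
    const bool *known_failure, size_t n)
{
	size_t unexpected = 0;

	for (size_t i = 0; i < n; i++) {
		if (first_pass[i] || rerun_pass[i])
			continue;	/* passed at some point */
		if (!known_failure[i])
			unexpected++;	/* failed twice, not expected */
	}
	return (unexpected);
}
```

A run whose failures are all either flaky (pass on rerun) or on the known list would report zero unexpected failures, i.e. a green result.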
C: This work is partially based off of an existing PR, with some slight tweaks to make it a little more usable for the automation workflow. There are also some changes to the GitHub workers to make them automatically use this flag, and then another change that needs to be made in the buildbot repository to make all the different builds use the flag. But a pull request has been filed for this and it is visible on GitHub, so if people want to take a look at the code, they can. That's it, thanks.
D: Cool, so my name is Trevor, and at the Los Angeles lab we've been experimenting with certain NVMe drives and ZFS rebuild performance. The drives we've been using are the PM1725bs from Samsung, and something interesting that we noticed is that when we emulate a rebuild workload, like one that would come from ZFS on a single NVMe drive, the read and write bandwidths diverge. This graph is emulating an eight-to-one read:write workload against a PM1725b, and we see something similar when emulating a three-reader, one-writer case, also on the same drive, where the reads are quite a bit higher than the writes.
D: Those are the Samsung PM1725bs. Like I said, something interesting that we see with a different NVMe drive, certain Kioxia drives, is that the read and write bandwidths stay pretty close to each other as we vary the percentage of jobs that are writing versus reading. With ZFS we'd probably be around the 10 to 20 percent of writing jobs here.
D: On these graphs, the left-hand side of the dotted line is the rebuild performance; the whole left graph is the read bandwidth and the whole right graph is the write bandwidth. We still see around 350 MB/s of rebuild write bandwidth, which was the same thing we saw with the Samsung PM1725bs. So something else that we tried was running reads on the NVMe drive but specifying a single write every N seconds or microseconds. Here we varied the time between writes from one microsecond all the way up to two seconds, and we see that the light green line on the left graph, which represents a write every 250 microseconds, is about the same bandwidth we saw for the rebuild writes on the PM1725bs and the Kioxia drives. So we went a step further.
D: Instead of doing one write every N time units, we said: hey, let's write a certain amount of data, like two gigs or eight gigs, per N time. When we tried that, we saw that the write bandwidth actually rose a lot; the peak there in the light red graph is over two gigs a second.
D: So what we thought to do for the hackathon is to try to emulate that functionality in ZFS. Instead of issuing rebuild writes somewhat haphazardly, so to speak, we said: let's group a bunch of them up together, with that amount being adjustable by a parameter, and then issue that to the device. To explain what we did there, I'll hand it over to Brian, if he's there.
E: Yeah, so as for what Trevor and I experimented with today: Trevor actually put a lot of work into this prior to today. He took the time to split up the vdev rebuild queue; originally in ZFS the reads and the writes were being placed in the same queue, and in previous work he split those two apart. So today, since we had the separated queues, we basically say: okay, if this is a write, let's first check whether we've met this maximum amount before we actually issue all these writes out at once. With this quick hacky thing we did today, we just added a module parameter, and if we haven't met a certain threshold of counts for the writes, we don't issue them out; we just leave them there pending, and then eventually, once we hit that write count, we flush them all out at once. The idea is to try to get to about two gigs; right now we're just doing actual counts of the I/Os themselves, and unfortunately we didn't get this all working today.
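The batching hack described above amounts to the following sketch. The names and the default threshold here are made up for illustration; the real change was a quick module-parameter hack in the vdev rebuild code, not this structure.

```c
#include <stddef.h>

/*
 * Toy sketch of count-based write batching: hold rebuild writes in
 * a pending queue until a tunable count threshold is reached, then
 * flush them all at once.
 */
static unsigned int rebuild_write_batch = 64;	/* "module parameter" */

struct write_batch {
	size_t pending;		/* writes queued but not yet issued */
	size_t issued;		/* writes flushed to the device */
};

/* Queue one write; flush the whole batch once the threshold is met. */
static void
batch_write(struct write_batch *wb)
{
	wb->pending++;
	if (wb->pending >= rebuild_write_batch) {
		wb->issued += wb->pending;	/* issue them all at once */
		wb->pending = 0;
	}
}
```

A byte-based threshold (e.g. flush every two gigabytes, as in the fio experiments) would work the same way, counting accumulated I/O size instead of I/O count.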
E: Can we just group a lot of these things together, issue them all at once, and get closer to saturating the NVMe device for these rebuild writes? Because at present, with the Samsungs, and even with the Kioxias, we're at a point where we're almost stuck, and better off doing RAID-Z resilvers just to get the full write bandwidth of a single NVMe. So hopefully by tomorrow we'll have it tested out and we'll see.
E: If we match the results we were seeing, then we can, I don't know, maybe open a pull request and start a discussion about it, although this version is extremely hacky at the moment. We just want to see if we can get it to work. That's pretty much where we stopped.
A: So I was working on triaging PRs and also putting together the videos from yesterday. Brian and Mark helped me with this: we went through the oldest PRs, and we got through one page, the last page of PRs. We looked at all of them, and we found some that probably just need to be closed.
A: We added a few to this spreadsheet of almost-done ones that need a little bit more review, and contacted folks about those, and that's about it. And then I also worked on the video from yesterday, remembering how to use iMovie and what my workflow was from last year. I'll show you.
A: You know, this is the first talk, the one that I did yesterday.
G: Yeah, so if you have looked at the command-line options for zdb, you would notice that it has a ton of options, 37 to be precise.
G: I went and checked this morning, and if you want to add a new option, which I've been trying to do since a little while back, it's a pain, because all of the intuitive options are taken, and you would like to have long options. So that's what I did: I added long options to zdb, so that you can do stuff like --dump-debug-message or --overblock.
G: I updated the man page as well, and that's what I did.
A: Cool, any questions for Manush? I think that's a very useful, somewhat mechanical thing that'll make our lives easier.
F: Yeah, I think the main function in zdb could use some organizing, but I'm very glad to have long options, at least as a starting point.
A: Cool, thanks. Next up we have Tony and John.
H: That sounds good. Give me one second here, let me share my screen. So what we did today was work on adding zvol performance tests. I'm sure everyone is aware that we actually don't have any zvol performance testing, and I didn't think about it until the block-mq change went in, and I realized it's a gap for us. I brainstormed with John, and we thought it should be pretty straightforward for us to add the support, because for the performance tests we already have a lot of the common code and infrastructure.
H: We just need to modify a lot of the fio parameters and how we build up the zvols instead of datasets, and so that's what we spent time experimenting with today. You saw the command I just ran; I just kicked it off, basically giving a couple of disks to the test, and what we got working is the sequential writes and sequential reads to zvols.
H: Yeah, so I think the idea is that we were able to prove out that we can get this done pretty easily for writes and reads, and we can go through the rest of the other tests: the random reads, random writes, and then the cached reads and such. Hopefully we'll be able to wrap this up and then have a PR out, so that we can have zvol performance coverage. I can show you what the results look like.
F: So, what I added... I don't know if we want to come up with a different name for the property, so it doesn't get confused with the pool altroot, but for now I just created a property called altroot. You set it on a dataset, it's inherited by all of that dataset's children, and that altroot gets prepended to the mountpoint property when it's generated. So it actually takes the pool-wide altroot, then the dataset altroot, then the dataset mountpoint, and concatenates those together to make the mountpoint. The main idea behind this is that if you're zfs receiving datasets from another machine that might have the mountpoint hard-set to something like /var, you want to make sure that all those datasets under your backup dataset, or whatever, are not over-mounting the rest of your system or otherwise interfering with it.
F: But you don't want to just not set the properties, because you expect those to be able to work, and, you know, you want to be able to chroot into that, or use whatever other mechanism you might want.
F: I'm still puzzling a bit about how the inheritance should work. Straight normal inheritance seems to do what I want at this point, but it'll be interesting to see what other people think and what use cases there might be.
A: All right, then we're coming to the last person that has signed up, James. If anybody else is thinking about doing a presentation, now would be a good time to add your name to the list.
I: Thank you. Today, I will just describe this as wasting your time efficiently. I've done a bit of programming, but nothing on ZFS, so this was kind of a leap into deep waters. So I figured, in my foolishness, I would take the simplest thing to start with. I looked at the PR 11711, "introduce vdev properties", and figured I'd try to add something simple. I had a few test cases, and of course...
I: So I manually tried the commands and they worked. Then I looked at the code to see which other properties would be gettable and settable, and I tested each one, and of course, on the third one that I tested, I had some great excitement, and by excitement I mean I wish I did not have that excitement: if you set the path to something random, just as a test, it accepts it, and if you reboot while the path is something random, your pool will not load.
I: And then I had the excitement of mentioning this to Alan, and he suggested: hey, boot from the ISO file and try to mount it from there. After two attempts, it did not ever successfully mount; on the first attempt it did load, and then the path was auto-corrected.
I: At this point I could boot back into my regular FreeBSD VM, and it worked again. He identified where in the code it was checking: it was only looking for a prefix of /dev. So that confirmed it was not a serious problem making the pool unbootable.
I: So after I manually tested all the other parameters, I didn't find anything else that was incorrect, so that was good. At that point I asked if the man pages also needed to be updated, and of course Alan said: go ahead and do that, that sounds like it should be easy. In the back of my mind I'm thinking otherwise, not knowing that I would have some excitement with mandoc being a format I hadn't looked at. So of course it's like Chinese to me. I mean, I've done HTML, so I know annoying prefixes and suffixes and all that nonsense, but it was a fun thing to edit; let me just say "fun", by which I mean not fun.
I
So
I
added
edited
the
a
very
rough
update
for
the
man
pages.
I'm
sure
it
will
need
to
be
tweaked
and
I've
checked
that
in
and
for
forwarded
the
link
to
that
to
allen
and
I've
made
a
list
of
all
the
manual
things
I
did
for
the
testing
the
properties
and
I
think
I'll,
just
separate,
like
the
list
of
properties
into
the
ones
that
are
read
only
and
just
make
a
test
of
can.
I
Can
I
read
it
and
can
I
not
write
it
and
then
for
the
remaining
two
properties
that
are
remaining
properties
that
are
writable
I'll,
just
make
a
right
modify
test
and
this
in
a
perfect
world
I'll
have
that
done
tomorrow
and
realistically,
maybe
sometime
this
week,
and
that's
it
if
you
want
to
jump
into
a
project
like
this
from
xero?
It's
it's
it's
a
very
steep
learning
curve,
but
I
would
have
to
say
extreme
thanks
to
allen
for
his
patience.
So
thank
you
very
much.
A: Yeah, I know it's not easy to get started, so thanks for that.
A: All right, I posted the URL in the chat. You should be able to click on that and then choose from two categories: the best quality hack, and the best hack in the open class.
A: The prizes are going to be puzzles, as I mentioned earlier today: wooden puzzles with cool cutout shapes, with a theme of different insects and bugs and cool nature stuff. We'll have two prizes, or two winning entries, from the quality category and one from the open class. And we need a few more votes, because right now we have a tie; a few more votes for the quality category. Okay, now we're not tied.
A: All right, so let's see. Now I guess what I'll do is share my other screen, and you can all see the results.
A: Oh my goodness, all right, we've got to get one more vote. Somebody vote; there's a tie for the quality hack for second place.
A: Oh, all right! Well, I'll disqualify the one that I did; that's how we'll decide. All right, here are the results. For best quality hack, and winner of the hackathon: automatically re-running failing tests. Paul, congratulations!
A: And tying for second: pool and vdev properties, James and Alan, and triaging old PRs. I will disqualify the team led by myself and give second place to the pool and vdev properties, tests, and man pages. So congratulations, James and Alan.
A: For the best open-class hack: the Block Reference Table. Congrats, Pawel and Alan.
A: And in a close second, a very close second, trailing by just one vote, was long options for zdb. So, almost congratulations to you, Manush. Sorry.
A: We'll do this again next year, hopefully in person, and hopefully an in-person and online combo next year will be really fun. We'll send a survey to folks with questions about what you think we could do better at the conference, which you can submit anonymously, and hopefully we'll put that together in the next day or two.