Description
Projects presented:
Richard Yao - zio_flags, etc
Brooks Davis & co - CHERI nvlists
Tony Hutter - force O_DIRECT
Allan Jude - unsticking spa_namespace_lock (🏆 1st place)
Patrick & co - metadata kstats (🥉 3rd place)
Alexander Motin - uncached prefetch
Paul Dagnelie - zstream recompress
Serapheim - ZAP shrinking (🥈 2nd place)
https://openzfs.org/wiki/OpenZFS_Developer_Summit_2022
A
All right, welcome back, and thanks to everyone. We had a busy day of hacking and helping and discussing, so now we have time for presentations. If you worked on something today, I'd love to hear from you. It doesn't have to be a demo, and it doesn't have to be completed code.
A
Yes, you have a question?
B
No, I decided to volunteer, even though I'm probably cheating by saying I did things today.
A
So everyone will have a chance to share. We'll try to keep this to one hour, and I think we'll have prizes; I need to double-check and make sure they're here, but yeah. You want to go first? Wonderful.
B
I actually could give a brief demo for something, but okay, there were three things that I really did today. The first was reviewing a lot of other people's code: the CheriBSD stuff, Matt's RAIDZ expansion, and some minor things related to what other people were working on. The other two things I did were two pull requests. One was something I had done in a private branch a while back that apparently Brian wanted, which is:
B
We had run out of bits in the zio flags, and the problem is basically that the enum type only supports 32 bits. So what I had done in that private branch was convert the typedef to a uint64_t, and that's in the pull request right now. I hadn't expected anyone to really want it without something that actually used it, but Brian wanted it, so you have it now. And for the last thing:
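As a hedged illustration of that change (not the actual PR, and with made-up names), going from an enum-backed flag word to a fixed-width 64-bit typedef looks roughly like this:

```c
#include <stdint.h>

/*
 * Illustrative only: a C enum is only guaranteed to represent values that
 * fit in an int, so an enum-based flag word tops out at 32 usable bits.
 */
typedef enum old_example_flag {
	OLD_FLAG_A = 1 << 0,
	OLD_FLAG_B = 1 << 1,
	/* ... at most bit 31 ... */
} old_example_flag_t;

/*
 * Switching the typedef to uint64_t and defining the flags as shifted
 * 64-bit constants leaves room to grow past 32 flags.
 */
typedef uint64_t example_flags_t;
#define	EXAMPLE_FLAG_A		(1ULL << 0)
#define	EXAMPLE_FLAG_B		(1ULL << 1)
#define	EXAMPLE_FLAG_EXTRA	(1ULL << 40)	/* now possible */
```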
B
It's
something
that
so
people
might
know
that
I've
been
doing
static,
analysis,
tough,
weekly
and
something
that
I
had
recently
found
and
experimented
with
a
bit
on.
Saturday
was
GitHub
acquired
a
company
that
does
stack
analysis
and
it
made
their
static
analyzer
we
open
to
everyone.
It's
called
code
ql
and
integrate
with
GitHub
actions,
so
on
Saturday
I
had
experimented
a
bit
with
it
and
then
today,
just
so
I
can
say:
I
actually
did
something.
B
During
the
hackathon
aside
from
code
review,
I
went
through
a
trouble
of
making
a
well
I
I
made
a
pull
request,
so
we
can
actually
get
that
into
the
repository.
So
in
terms
of
doing
a
demo,
I
I
can
actually
show
you
what,
if,
as
far
as
how
it
works
using
that
or
my
GitHub
CFS
Repository.
B
Okay,
so,
unfortunately,
when
you
do
a
pull
request,
it
doesn't
actually
put
the
results
of
its
attack
analysis
there.
It
puts
it
on
your
security
tab
in
code
scanning,
and
we
can
see
here.
There
was
a
branch
scanned
a
few
days
ago
and
if
I
change
this,
it
tells
us
everything
they
had
found
and
there
are
four
different
types
of
issues
that
it
found.
B
This
has
a
very
low
cost
positive
rate.
None
of
these
are
false
positives,
although
this
one
I
mean
it's
one,
that
could
be
a
false,
positive,
but
I
think
it's
something
we
could
try
and
fixing
with
this
asset
print
out.
But
this
one
looked
like
a
false
positive.
It
turns
out
it's
not.
B
We
have
this
thing
here
in
the
lower
code
which
well
the
Channel
program
is
called
where
we
have
this
verify,
and
if
this
fills
or
what
what
happens
is
we
will
call
a
function
called
SPL
panic
and
because
of
the
variable
we
actually
are
going
to
print.
What
is
said
in
here,
a
seismograph
is
up
greater
than
SN
printer
and
because
this
is
all
concatenated
a
format
string,
we
actually
will
end
up
with
this,
which
is
a
format
specifier,
which
causes
expects
three
arguments
from
given
two.
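To illustrate the class of bug being flagged (a hedged sketch of my own, not the actual channel-program code), the danger is using data-derived text as the format string itself:

```c
#include <stdio.h>

/*
 * Hedged illustration of the bug class, not the actual OpenZFS code: if
 * data-derived text is used as the format string itself, any '%'
 * conversion inside it expects arguments that were never supplied.
 */
static void
report_failure_bad(const char *msg)
{
	/* WRONG: msg is the format string; any "%d" inside msg is interpreted. */
	fprintf(stderr, msg);
}

static void
report_failure_good(const char *msg)
{
	/* RIGHT: a constant format string; msg is passed purely as data. */
	fprintf(stderr, "%s", msg);
}

int
main(void)
{
	report_failure_good("value was 100% of limit");
	/* report_failure_bad() with the same string would misbehave. */
	(void) report_failure_bad;
	return (0);
}
```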
B
Unfortunately,
I
can't
expand
the
macro
here,
just
so
you,
but
it's
when
you
check
it
out,
it
actually
becomes
clear,
although
it
really
does
look
like
a
false
positive
until
you
dig
into
it
and
this
information
here
explaining
exactly
what's
wrong
and
it
makes
a
recommendation.
It
gives
an
example
of
what
not
to
do,
and
actually
you
know
this
is
an
example.
What
not
to
do,
and
then
it
has
references
on
well
different
things
like
the
documentation
for
printf
the
common
weakness
enumeration.
B
This
is
more
of
a
security
focused
instead
of
animal
Azure,
rather
than
just
a
bug
finding
one.
Let
me
just
go
through
one
more.
B
This
one
was
actually
someone
surprising
to
me,
so
we
have
this
SN
printf
here
in
the
ray
GMAT
code
and
well
basically,
we
iterate
and
add
the
offset,
and
this
is
telling
us
that
they
we
can
get
an
overflow
and
the
reason
for
that
is
that
well,
SN
printf
won't
right
past
the
bounds.
What
it
was
told.
B
Given
the
return
value,
what
it
would
have
written
if
it
had
a
space
which
causes
our
offset
to
go
past
the
boundary
and
is
well
grown,
or
at
least
it
can
cause
incorrect
behavior,
and
it
explains
this
to
us-
it
gives
us
an
example
of
code
Blackness,
which
has
almost
the
same
exact
code.
It's
just
the
loop
on
trees
are
a
bit
different
and
then
it
gives
us
its
recommendation,
an
example
of
how
to
fix
it
using
the
previous
example
and
then
some
more
references
and
other
things.
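A hedged sketch of that pattern (my own example, not the raidz code itself): the return value of snprintf() is the length it would have produced, so naively accumulating it can push the offset past the buffer.

```c
#include <stdio.h>

/*
 * Hedged sketch of the bug class, not the actual raidz math code.
 * snprintf() returns how many bytes it *would* have written, so on
 * truncation the accumulated offset can exceed the buffer size, after
 * which `size - off` underflows and `buf + off` points past the buffer.
 */
static void
append_items_buggy(char *buf, size_t size, int count)
{
	size_t off = 0;

	for (int i = 0; i < count; i++) {
		/* BUG: off can grow past size once output is truncated. */
		off += snprintf(buf + off, size - off, "item-%d ", i);
	}
}

static void
append_items_fixed(char *buf, size_t size, int count)
{
	size_t off = 0;

	for (int i = 0; i < count && off < size; i++) {
		int n = snprintf(buf + off, size - off, "item-%d ", i);

		if (n < 0)
			break;
		/* Clamp to what actually fit, so off never passes size. */
		off += ((size_t)n < size - off) ? (size_t)n : size - off;
	}
}

int
main(void)
{
	char small[32];

	append_items_fixed(small, sizeof (small), 100);
	printf("%s\n", small);
	(void) append_items_buggy;	/* shown for contrast only */
	return (0);
}
```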
B
So
by
hooking
this
into
well.
By
merging
that
into
the
repository,
we
will
actually
be
able
to
see.
I
think
it's
about
how
you
get
results,
but
every
pull
request,
or
if
there's
any
sort
of
regression,
we
will
be
able
to
find
out
at
least
any
sort
of
regression
in
the
100
things
that
code
QR
checks,
and
this
actually
supports
false,
positive
marking.
So
if
there
was
a
false
positive,
we
could
dismiss
it,
but
Island
series
to
be
forced
positive.
So
we're
not
it's
missing
them,
even
though
this
is
very.
B
This
is
probably
not
really
a
bug
into
cleaner
playing
commodity.
The
queen
developers,
one
two-
we
have
time-
extract
time,
reviews
that's
really
in
the
tests.
Cfs
test.
Three,
it
even
says
test
here.
It's
actually
about
us
more
than
identifying
which
ones
are
in
the
DNS
test.
Suite
I
didn't
do
anything
to
do
the
experience
to
it.
Just
figured
it
out.
B
So
you
know
that
is
my
contribution.
During
the
hackathon,
as
I
said,
I
did
this
so
I
could
say:
I
did
something
more
substantial
in
this
code
review
and
hopefully
get
some
nerd
credit,
as
I
considered
out
to
be
a
win
cool.
Thank.
C
All right, so I sat down with Paul and Richard this morning, and we talked about ZFS and CHERI issues, these in particular.
C
The first thing we talked about is the XDR on-disk format for nvlists. The problem is that the format implicitly encodes some details of the in-memory version of the structures, which change with pointer size. It was previously handled by just always using 64 bits for every pointer, which worked fine, but it doesn't work with CHERI.
C
The solution we arrived at is that we're going to leave it alone, and the CHERI code is going to have to learn how to compute what the size would have been had it been on a 64-bit system. It looks relatively straightforward, but I'm going to need a test suite to not get it wrong; I'll get to that, it's my last slide.
C
So CHERI platforms, when they're decoding, will compute what the size should be. The obvious first solution is to just get it roughly right: double the size and perhaps disable a few checks, but only in the CHERI case. A later version could actually go ahead and parse the whole XDR stream to figure out what the right sizes of everything are, use that number instead, and trade CPU for memory. But these are all transient.
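A hedged sketch of the general idea (hypothetical structure, not the CHERI patch itself): when accounting for decoded sizes, substitute the fixed 64-bit pointer size rather than the local sizeof, so the numbers match what a 64-bit host would have produced even when local pointers (CHERI capabilities) are wider.

```c
#include <stddef.h>
#include <stdint.h>

/*
 * Hedged illustration, not the actual nvlist/CHERI code: the encoded size
 * historically assumed every pointer is 8 bytes.  On CHERI, pointers
 * (capabilities) are 16 bytes, so computing the size from local sizeof()
 * values would disagree with the legacy format.
 */
#define	LEGACY_PTR_SIZE	8	/* pointer size the format implicitly assumes */

/* A hypothetical in-memory element that contains a pointer. */
typedef struct example_elem {
	uint32_t	ee_len;
	char		*ee_name;	/* 8 bytes on LP64, 16 on CHERI */
} example_elem_t;

/*
 * Size the decoder should account for: count the members explicitly and
 * substitute the legacy pointer size, instead of using
 * sizeof (example_elem_t), which changes on CHERI.
 */
static inline size_t
example_elem_legacy_size(void)
{
	return (sizeof (uint32_t) + LEGACY_PTR_SIZE);
}
```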
C
If anyone thinks that's wrong, I'd be happy to know, but that's the general gist of that. The other issue we have is that we use native encoding in the ioctls, and that literally encodes the exact binary representation of everything. So when you pass in a string array, you actually pass in zeroed space for the pointers and push that through the buffer, which is annoying; repacking that would be incredibly painful.
C
I think the solution is that we just switch to using XDR encoding. The receiver shouldn't care, because the receiver gets a stream that's typed and says "I'm native" or "I'm XDR". I did a quick smoke test where I just changed all of them, and the system is able to create zpools.
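A minimal sketch of what that smoke test amounts to (my own example; the helper name is hypothetical): callers that pack an nvlist for the ioctl boundary ask libnvpair for XDR encoding instead of native. The unpack side reads the encoding tag from the packed buffer, so it handles either.

```c
#include <libnvpair.h>

/*
 * Hedged sketch, not the actual diff: pack an nvlist with NV_ENCODE_XDR
 * instead of NV_ENCODE_NATIVE.  XDR encoding avoids embedding the local
 * in-memory (pointer-sized) layout in the buffer, which is what breaks on
 * CHERI; nvlist_unpack() accepts either encoding, since the packed buffer
 * records which one was used.
 */
static int
pack_for_ioctl(nvlist_t *nvl, char **bufp, size_t *sizep)
{
	return (nvlist_pack(nvl, bufp, sizep, NV_ENCODE_XDR, 0));
}
```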
C
I believe none of the code paths are so performance-critical that this is going to be noticeable, but it would be good to know; we'll probably have to find out by running the performance tests. And if we switch all the senders, then the problem sort of goes away. We continue to understand native encoding, so we could still use old tools.
C
Everything
works
except
cherryhost
can't
talk
to
old
tools,
so
the
sooner
we
make
the
switch
the
sooner
we
don't
have
to
worry
about
it
on
charity
systems
in
the
future
when
Hardware
arrives,
because
hopefully
many
of
the
systems
that
are
using
Guild
tools
will
be
dead
by
that
and
not
stuck
in
a
container
or
in
a
container
or
any
container.
C
If we do it sooner rather than later, then the problem just sort of goes away, and the transition is not a problem in four or five years or something. We also walked through the CheriBSD PR and made a spreadsheet of what to do with all the various things; some of them are ready for commit, and some of them aren't.
C
I don't know if anyone knows of tests for the packed XDR, the packed nvlists; I think we're relying on the fact that the code never changes, at least that's what it appears to be.
C
So we need some simple round-trip tests, and then we need to generate binary references and compare, to make sure that we can generate the expected results and that we can unpack them correctly. In particular, once that's written and done and, you know, tested on amd64, we can check on CHERI and make sure we can still read and write the streams correctly. Matt was mentioning that, yeah, that sort of thing would also be handy if, for instance, you were to write a native Rust serializer/deserializer.
C
...or LD_PRELOAD, and the cool thing about this is that I added this output, and the LD_PRELOAD persists for bash and then all of its child processes, so you could spawn off bash and then everything would use this. It's actually a little bit smart, because not all things will work with it; reading directly from /dev/zero doesn't work, so if it tries to open something with O_DIRECT and gets EINVAL, then it'll just try opening it regularly.
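A hedged sketch of such a shim (my own minimal version, not the demoed code): interpose open() via LD_PRELOAD, OR in O_DIRECT, and fall back to a plain open when the kernel rejects the flag with EINVAL.

```c
#define	_GNU_SOURCE
#include <dlfcn.h>
#include <errno.h>
#include <fcntl.h>
#include <stdarg.h>
#include <sys/types.h>

/*
 * Hedged sketch of a force-O_DIRECT LD_PRELOAD shim, not the actual patch.
 * Build as a shared object and run e.g. LD_PRELOAD=./force_odirect.so bash;
 * the override is then inherited by every child process of that shell.
 */
typedef int (*real_open_t)(const char *, int, ...);

int
open(const char *path, int flags, ...)
{
	real_open_t real_open = (real_open_t)dlsym(RTLD_NEXT, "open");
	mode_t mode = 0;

	if (flags & O_CREAT) {
		va_list ap;

		va_start(ap, flags);
		mode = (mode_t)va_arg(ap, int);
		va_end(ap);
	}

	int fd = real_open(path, flags | O_DIRECT, mode);
	if (fd < 0 && errno == EINVAL) {
		/* This file doesn't support O_DIRECT; retry without it. */
		fd = real_open(path, flags, mode);
	}
	return (fd);
}
```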
G
So this is kind of related to all of the stuff I've been dealing with lately, where if a pool runs into certain types of failures and gets suspended, you have the problem that all the other pools on your system become unusable as well. That can be really annoying when, you know, you have a boot pool and one or two data pools, and then one of the pools gets hung trying to do something when it gets suspended, and the process that was in the middle of doing something is hung with the spa namespace lock held, which is not what you wanted it to be doing, especially if you happen to have a lot of scripts running or something.
G
Things get hung and you're not able to keep working. So I've just been kicking around with people over the last couple of days the idea that, especially for interactive commands run by users, if they're going to get stuck because they're waiting on the spa namespace lock that might be held by someone else for a really long time, we'd like to do something about that. The idea was to basically return EAGAIN from the ioctl.
G
So if you try to run zpool status and you can't actually get the spa namespace lock, because somebody else is holding onto it and not giving it up, if that ioctl returned EAGAIN and userspace would just try again and again, it would mean that you could now Ctrl-C out of the stuck interactive process, instead of it just being stuck until you fix the broken pool, which sometimes you can only do by rebooting (until we finish the forced export stuff, that is).
G
I know some people have been nagging us to hurry up and finish that. So to demo this, I've compiled a version that does it, although I found that instead of using EAGAIN, I used a new ZFS error code, because EAGAIN already produces an error message from the ZFS error-processing stuff saying that the pool is suspended, which isn't always the case. So I set up a new error, "ioctl trylock failed", and then dealt with that in userland.
G
So now, if you run zpool clear on our device under test, it works. But if somebody is stuck... so I made my own new tunable that runs a function that, when you set it to one, takes the spa namespace lock and just sits on it.
G
And now, if you try to run zpool clear, you'll see it gets back that the trylock failed. This debug message is just for now; I wouldn't actually print it in the patch, but it says we're stuck. But I can always Ctrl-C out of it now, because the ioctl returns, and we sleep for a little bit and try again. And then, if I just use my little hack to release the lock, suddenly the command completes successfully.
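A hedged userland sketch of that retry loop (my own example; the error value is a made-up placeholder for whatever the patch actually returns): because each failed attempt comes back to userspace and sleeps before retrying, signals like SIGINT are delivered and the command stays killable instead of blocking in the kernel on spa_namespace_lock.

```c
#include <errno.h>
#include <sys/ioctl.h>
#include <unistd.h>

/*
 * Hedged sketch, not the actual patch.  ZFS_ERR_TRYLOCK_FAILED here is a
 * hypothetical placeholder for the new error the kernel returns when it
 * cannot take spa_namespace_lock without blocking.
 */
#define	ZFS_ERR_TRYLOCK_FAILED	2000	/* placeholder value */

static int
zfs_ioctl_retry(int fd, unsigned long request, void *arg)
{
	for (;;) {
		if (ioctl(fd, request, arg) == 0)
			return (0);
		if (errno != ZFS_ERR_TRYLOCK_FAILED)
			return (-1);	/* a real failure: give up */
		/*
		 * The lock holder is stuck; back off briefly and retry.
		 * Ctrl-C works here because we are in userspace, not
		 * sleeping uninterruptibly in the kernel.
		 */
		usleep(100 * 1000);
	}
}
```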
G
So now all the commands we do are actually killable, at least, instead of being stuck forever, giving the user a chance to, you know, get control of their terminal back in those situations where the servers have jammed up and a bunch of commands are freezing all the time. Sometimes you just really appreciate being able to cancel something.
G
Although
doing
this
and
other
testing
I
thought
it'd
be
really
handy,
maybe
to
have
a
secret
module
parameter
that
you
set
it
to
zero
and
it
would
drop
the
spawn
names
face.
Lock.
I
think
it's
a
really
bad
idea,
but
it's
happened
enough
where
it's
like.
If
it's
this
or
reboot,
maybe
I'm.
Okay
with
that
or
maybe
you
have
to
say,
I
promise.
This
is
not
for
evil
or
something,
but
whatever
it
might
be
useful.
But
anyway,
I
only
got
Z
Pool,
clear
working.
It
turns
out.
G
I
had
to
already
do
three
different
eye
octals
to
make
that
work,
because
it
tries
to
get
stats
about
all
the
pools
and
then
it
tries
to
open
the
pool,
and
then
it
runs
the
clear
Apple
and
so
is
that
his
evil
list
is
almost
working
but
I.
Think
there's
one
or
two
more
octals
I
have
to
fix,
still
and
teach
you
to
do
this.
G
Originally, what we looked at was, instead of doing, you know, mutex_tryenter in all these places, whether we could have a version that would just time out after 10 seconds, so that you had a bit more of a chance to wait, because on a very busy system it feels like going back all the way up to userspace for each retry might be a bit of a pessimization. But the regular mutexes don't have that option right now, so I think this is good enough.
E
All
right,
awesome,
yeah,
so
I
was
just
pulling
this
up
to
give
Sarah
from
some
credit
there.
E
He
previously
introduced
a
patch
for
k-steps
per
data
set
for
reads
and
writes
and
that
it
was
something
that
we
mentioned
or
sitapi
mentioned
yesterday
in
her
presentation
that
we're
interested
in
getting
something
similar
but
for
the
different
metadata
operations,
and
so
it
ended
up
being
a
pretty
straightforward
thanks
to
Alan,
Tony
and
Sarah
for
helping
out
with
this,
but
they
helped
to
get
the
the
groundwork
run
in
there.
But
I've
just
got
a
super
simple,
easy
pool
setup
here,
and
this
is
some
of
the
existing
case
stats.
E
Likewise there. And then, just as a start going forward, I added the create and the open operations.
E
Let
me
file
F2
and
then,
when
we
print
that
out
for
now,
we
made
the
decision
to
to
add
a
create
counter,
as
well
as
the
open
counter.
And
then,
when
you
open
a
new
file
with
the
Opry,
we
go
ahead
and
we
bump
both
of
those
counters
there
so
that
that
added
for
twice.
But
then,
if
you
did
anything
to
to
Uber,
write
that.
E
Then
we're
just
going
to
bump
the
the
open,
so
I'm
sure
there'll
be
similar
scenarios
that
we
run
through,
as
we
add
all
of
the
other
metadata
operations,
but
yeah.
That's
what
we've
got
for
now
see.
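A hedged sketch of that counting rule (hypothetical names, plain counters instead of the kernel's kstat plumbing): an open that actually creates the file bumps both counters, while an open of an existing file bumps only the open counter.

```c
#include <stdbool.h>
#include <stdint.h>

/*
 * Hedged illustration, not the merged patch: per-dataset metadata
 * operation counters.  The real kernel code would use its kstat/atomic
 * primitives; plain increments are used here to keep the sketch small.
 */
typedef struct dataset_meta_counters {
	uint64_t dmc_creates;	/* files created */
	uint64_t dmc_opens;	/* files opened (includes creates) */
} dataset_meta_counters_t;

static void
meta_count_open(dataset_meta_counters_t *dmc, bool created)
{
	if (created)
		dmc->dmc_creates++;	/* new file: count the create too */
	dmc->dmc_opens++;		/* every open bumps the open counter */
}
```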
F
I haven't produced much, just a few related lines, but my idea for today was to investigate a couple of optimizations. As I've said, it's critical to have prefetch in many cases, and many people ask for a configuration like speculative prefetch for uncacheable data: when you set primarycache=metadata but still want data prefetch. I investigated whether it's possible, because I have a feeling that many years ago it was me who partially disabled it; maybe it was broken, I don't remember where it all started, but the fact is that we are not prefetching data when we're set not to cache it.
F
If
we
set
not
to
catch
them
and
it
seems
like
it
should
be
possible,
but
there's
one
Edge
case:
if
data
are
not
claimed
not
read,
they
will
not
be
expunged
from
Arc.
They.
F
There,
the
problem
is
that
all
that
handling
for
primary
cash
is
done
on
divular
layer.
Art
doesn't
know
anything
about
it,
so
we
need
some
other
eviction
methods
to
handle
it
so
I'll
continue.
Thinking
about
it.
Just
haven't
succeeded
yet
just
just
mostly
look
how
it
works.
My
second,
even
more
crazy
idea,
was
to
avoid
one
memory
copy
during
read
process
from
cachable
data,
because
if
we
are
reading
data
which
uncashables
they
probably
not
going
to
stay.
F
And if they are uncompressed, it makes no sense to make them an ABD; if we read them into linear buffers, we may be able to share that same buffer, and instead of two memory copies we get only one. The problem, again, is in the details.
F
That's
first,
something
else
may
try
to
read
data
in
parallel
and
from
different
clone
and
again
they
will
stay
in
Arc,
but
in
this
case
they
will
be
linear
buffer
will
make
it
out
of
KVA.
That's
not
good
and
another
question
or
like
when
to
eat
and
how
there
are
still
some
details,
but
I'm
still
not
dropping
the
ideas.
I
will
continue
investigated.
H
All
right
so
after
working
on
the
Cherry
stuff-
and
you
know
talking
about
the
envelist
goat
and
all
that
stuff,
I
decided
I-
wanted
to
do
some
actual
hacking
and
write
some
code.
So
I
went
to
the
spreadsheet
and
found
example
project
which
was
Z
stream
recompress.
So
we
have
the
Z
string
utility
that
lets
us
do
various
manipulations
on
zfsn
streams,
and
you
can
use
it
to
analyze,
send
streams
you
can
use
it
to.
H
I set up a small file system with some compressible data, I wrote the stream, you know, I created a send stream that was compressed, and then I ran this command, which is, you know, cat the stream into zstream recompress. There's a lot of debugging output because it mostly doesn't work, but in this very specific test case it does. So the original stream is on the right, and you can look here and see we've got a write record for an object.
H
Raw
and
the
checksums
validate
and
the
whole
thing
looks
good
I've
never
tried
receiving
the
Stream
So,
it
probably
explodes
horribly,
but
at
least
for
the
case
of
going
from
one
compression
type
to
uncompressed.
It
works
well.
If
you
try
to
compress.
Instead,
it
breaks
because
you
need
to
actually
initialize
the
compression
algorithms
and
I
didn't
do
that
step
yet,
but
the
core
Loop
of
the
code
does
seem
to
work
correctly.
So
this
is
a
good
jumping
off
point
for
the
rest
of
the
the
package.
H
Yeah,
you
certainly
could
this
would
be
for
like
if
you
wanted
to
continue
storing
it
as
a
stream.
For
some
reason,
you
know
as
a
backup
file
so.
G
This
is
based
on
the
existing
pull
request
for
Z
stream
decompress.
H
You would need to add the separate processing steps for that and do the key management and stuff like that, but there's no reason I can think of that it shouldn't work.
D
After talking more with some folks, I was reminded of a problem with shrinking the ZAP. Specifically, if you have a directory in a dataset and you, you know, create like a million files and then delete almost all of them, there are still entries in the directory: for every new file there's a new ZAP entry, and even though we remove those files, the ZAP doesn't actually shrink.
D
So
after
talking
with
Madden
looking
around
on
the
web,
I
was
I
managed
to
produce
the
issue
and
also
revive
some
multiple
requests
from
illumos,
where
we
are
basically
slowly
shrinking
this
app
when
zap
entries
are
removed,
so
over
here
I
have
a
directory.
That's
there
where
I
basically
created
half
a
million
files
and
then
remove
them,
and
just
taking
the
real
world
clock
time.
We
can
see
that
of
the
system
we
spent
around
0.2
seconds.
D
While
on
the
clean
director
with
like
two
files,
we
basically
spent
almost
no
time
so
after
applying
the
parts
and
removing
out
of
both
files
and
basically
trying
to
reproduce
the
issue.
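A hedged reproduction sketch (my own program, not the presenter's script): create a large number of files in one directory, unlink them all, and then time a full directory scan; without ZAP shrinking, the directory's ZAP keeps its expanded size and the scan of the now-empty directory stays slow.

```c
#include <dirent.h>
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

/*
 * Hedged reproduction sketch: populate a directory, empty it again, then
 * scan it.  Time the scan (e.g. with time(1)) on patched and unpatched
 * kernels to compare.
 */
#define	NFILES	500000

int
main(int argc, char **argv)
{
	const char *dir = (argc > 1) ? argv[1] : ".";
	char path[4096];

	for (int i = 0; i < NFILES; i++) {
		(void) snprintf(path, sizeof (path), "%s/f%07d", dir, i);
		int fd = open(path, O_CREAT | O_WRONLY, 0644);
		if (fd >= 0)
			(void) close(fd);
	}
	for (int i = 0; i < NFILES; i++) {
		(void) snprintf(path, sizeof (path), "%s/f%07d", dir, i);
		(void) unlink(path);
	}

	/* The directory is empty again; time how long a full scan takes. */
	DIR *dp = opendir(dir);
	if (dp == NULL)
		return (1);
	long entries = 0;
	struct dirent *de;
	while ((de = readdir(dp)) != NULL)
		entries++;
	(void) closedir(dp);
	(void) printf("entries remaining: %ld\n", entries);
	return (0);
}
```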
D
We can see that the system time is reduced by a lot, but there's still some work to be done after applying this patch. While reproducing the issue, I tried to casually look at how the ZAP works and what the PR is doing, and I think there are certain things we can do slightly differently, or we could completely change the way we go about this, but I just wanted to showcase the problem here with flame graphs. So this is the ls command that I ran on the machine where I reproduced the issue, and as we can see, we spend the majority of the time in the fzap_cursor_retrieve code, where we go through the ZAP. You can see it's around 1 million samples here, and the ZAP shrinking code doesn't exist there at all. But here is the same system with the patch, where we still have the fat ZAP, and we see far fewer samples, so there is definitely a big speed-up just with this patch. So what I ended up doing is I have my ported code up on a branch, and I'll probably create a draft PR for this.
D
The
code
is
really
not
that
much
but
I'm,
not
sure.
If
you
that's
exactly
what
we
want
to
do,
but
I
think
it
will
be
a
good
starting
point
to
just
try
to
talk
about
this
issue
again.
A
I
worked
on
showing
the
reads
the
expression
code,
some
folks,
so
hopefully
we
can
move
that
year
by
year,
a
little
a
little
closer
to
being
done.
So
we're
almost
done
with
the
day.
Let
me
send
out
a
link
with
a
survey,
so
you
can
vote
on
your
favorite
hack
for
today
and
we
do
have
prizes
that
are
not
here
right
now,
but
we'll
get
them
shipped
to
you.
So
for
the
for
the
three
favorite
ones.
You
have
a
question.
B
We
can
actually
write
a
check
for
that
and
add
that
to
the
repository
so
that
we
can
try
to
catch
other
instances
like
future
recurrences
or
other
existing
instances.
It
I
think
NASA
development
is
going
to
be
a
code,
but
anyway
it's
just
another
possibility.
This
opens
up
and
I
it
occurred.
To
me.
I
probably
should
have
mention
that,
but
yeah,
that's
my
agenda.
A
Great
while
people
are
voting,
I'll
tell
a
story
that
may
or
may
not
influence
your
voting,
so
I
joined
delphix
like
to
almost
12
years
ago,
and
one
of
our
colleagues
set
up
a
home
directory
server
that
was
based
on
actually
the
I
think
the
sun
fishworks
appliance
that
uses
the
fs
inside
of
it
and
it's
all
administered
by
ite
and
like
we
don't
have
any
access
to
it
or
anything.
I
just
have
my
home
directory
and
then
like
maybe
three
or
four
years
into
it.
A
I
was
doing
some
I
was
writing
some
tests.
The
test
was
to
test
like
how
is
the
fast
performers,
large
directories
and
so
I
was
creating
lots
of
files
and
to
see
how
you
know
how
fast
can
you
create
like
10
million
files,
if
I
had
what,
if
I
use
lots
of
threads
and
I
thought
I
was
running
this
on,
like
a
test,
I
was
running
on
a
test
system,
but
I
thought
I
was
running
it
in
a
test
pool
that
actually
I
was
running
in
my
home
directory.
A
So
then
I
got
to
test.
How
long
does
it
take
to
remove
10
million
files
over
NFS?
The
answer
is
a
long
time,
but
which
is
fine,
but
for
the
past,
like
eight
years,
every
time
I
go
into
my
home
directory
and
try
to
tap
complete
something
or
LS
it
takes
like
60
Seconds,
and
it's
not
because
I
have
a
lot
I.
A
In
my
home
directory,
but
it's
not
that
many,
the
problem
is
like
the
NFS
server
is
running
ZFS
and
it
has
this
fast
app
that
has
tons
and
tons
and
tons
of
at
least,
and
it's
like
iterating
over
all
of
it,
because
at
one
point
I
had
10
million
injuries
in
there.
So
that's
why
I
voted
for
seraphim.
A
In
first
place,
Alan
Jude
unsticking
the
spot,
namespace
lock.
Congratulations.
A
Oh
it's
second
place.
Maybe
I
did
sway
the
vote.
It's
a
hit
place
seraphim
with
zap
shrinking.
Congratulations.
A
So
I
will
contact
you
all
to
get
your
shipping
addresses.
We
do
have
multiple
prizes,
so
you
know
everybody
that
works
together
with
you.
Let
me
know
we
should
have
enough
for
everyone
who
worked
on
those
three
projects.
A
It's only maybe five of us, so yeah. It's been a great 10 years, and hopefully we'll have 10 more years of great OpenZFS development. Thanks.