Hackathon Presentations and Awards - OpenZFS Developer Summit 2014
A: So I think that we only have about five minutes for each group, so this is going to be extra lightning style. I think the main points that you want to get across are: what were you trying to do, how far did you get today, and what's left to be done. And it would be great to hear from everyone, regardless of whether they actually got much done or they just have one or two sentences; I think it's still great stuff to share.

A: Could I get a volunteer to write down what we've done on the webpage? Or I might have to call on someone; it's going to be someone from Delphix. ... Thanks.
A: Team name: okay, I will go first. My team name is... my team name is Ghetto Send.

D: And what I did was I implemented

A: a ZFS send of compressed data, where it takes the data that's already compressed on disk and sends it over the wire, and then, when you receive it, it decompresses the data and writes it again, and then maybe compresses it again. And it basically works. Next things I would need to add: send stream flags — that's not done yet — and new command-line flags to enable this, since it's not backwards compatible, obviously. And that's it, so... who's next?
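(A minimal editorial sketch of the idea just described, not the presenter's code: the sender puts the already-compressed on-disk payload into the stream record, and the receiver decompresses it before handing it to its normal write path, possibly recompressing later. The record layout and the decompress() stub are illustrative assumptions.)

```c
/*
 * Toy sketch (not ZFS code) of a compressed send stream: records carry
 * the on-disk (compressed) payload plus logical/physical sizes, and the
 * receiver decompresses before writing the data normally.
 */
#include <stdio.h>
#include <string.h>

struct send_record {
    int      compressed;        /* payload is the on-disk compressed block */
    size_t   lsize;             /* logical (uncompressed) size */
    size_t   psize;             /* physical (payload) size */
    unsigned char payload[256];
};

/* Placeholder: a real receiver would call the matching decompressor (lz4, gzip, ...). */
static int decompress(const unsigned char *src, size_t psize,
    unsigned char *dst, size_t lsize)
{
    if (psize > lsize)
        return -1;
    memcpy(dst, src, psize);            /* stand-in "decompression" */
    memset(dst + psize, 0, lsize - psize);
    return 0;
}

/* Receive path: recover the logical data, then write it out as usual. */
static int receive_record(const struct send_record *rec, unsigned char *out)
{
    if (!rec->compressed) {
        memcpy(out, rec->payload, rec->lsize);
        return 0;
    }
    return decompress(rec->payload, rec->psize, out, rec->lsize);
}

int main(void)
{
    struct send_record rec = { .compressed = 1, .lsize = 16, .psize = 5 };
    memcpy(rec.payload, "hello", 5);

    unsigned char buf[16];
    if (receive_record(&rec, buf) == 0)
        printf("received %zu logical bytes\n", rec.lsize);
    return 0;
}
```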
A: Would anyone like a beer while you're waiting? We need to leave here at 5:40 in order to get to the dinner on time, so if you want a beer before we go, there's some in the fridge.
E: But one of them, which is in the test phase, is to simplify the scrub code. So there's logic today in scrub that throttles it back and does all sorts of funny things, but with the right throttle that isn't quite as necessary, and there might actually be some performance benefits to not using that logic. So the idea behind this is that it changes the way that we do scrubs: now it's based on, effectively, how many bytes you want to push down, and if you want to push down, say, 10 meg, then it sorts them and does them in a pseudo-LBA order.
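(An editorial sketch of the mechanism described above, not the hackathon code: gather scrub work up to a per-pass byte budget, sort it by on-disk offset, and issue it in that order. Offsets, sizes, and names are toy values.)

```c
/*
 * Conceptual sketch of a byte-budgeted, offset-sorted scrub pass.
 */
#include <stdio.h>
#include <stdlib.h>

struct blk {
    unsigned long offset;   /* on-disk offset (sort key) */
    unsigned long size;     /* bytes to read for this block */
};

static int blk_cmp(const void *a, const void *b)
{
    const struct blk *x = a, *y = b;
    return (x->offset > y->offset) - (x->offset < y->offset);
}

/* Issue up to 'budget' bytes of scrub reads, in ascending offset order. */
static unsigned long scrub_pass(struct blk *blks, size_t n, unsigned long budget)
{
    unsigned long issued = 0;

    qsort(blks, n, sizeof (blks[0]), blk_cmp);
    for (size_t i = 0; i < n && issued + blks[i].size <= budget; i++) {
        printf("scrub read: offset %lu size %lu\n", blks[i].offset, blks[i].size);
        issued += blks[i].size;
    }
    return issued;
}

int main(void)
{
    struct blk blks[] = {
        { 9000, 4096 }, { 1000, 8192 }, { 5000, 4096 }, { 3000, 16384 },
    };
    unsigned long done = scrub_pass(blks, 4, 10 * 1024 * 1024 /* 10 MB budget */);
    printf("issued %lu bytes this pass\n", done);
    return 0;
}
```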
H: ...about reaping some of the caches that have external fragmentation. The other thing that we saw was the ARC — well, it's better to go to something really big, but we have a huge L2ARC. So what I'd like to do is basically prevent L2ARC reaping, as well as checking for external fragmentation on the zio buffer caches, and then see if we can switch that earlier.

E: And then we also started looking at trying to come up with a way that we can create fragmentation in a very fast fashion on a pool. So we made some changes to zhack to allow you to pass in... right now it doesn't pass in much of anything, but eventually it'll pass in a histogram distribution, and then we'll walk through all the metaslabs, actually fragmenting them in that fashion. So you'll end up with a pool that has the level of fragmentation that you're looking for, and you can run tests on top of that.
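(A toy editorial sketch of the fragmentation-injection idea just described, not the zhack changes themselves: given a histogram of desired free-segment sizes, walk a "metaslab" and carve matching free holes. The bitmap model and sizes are assumptions for illustration.)

```c
/*
 * Toy fragmentation injector: the metaslab is a bitmap of 4K chunks,
 * and histogram[i] asks for that many free segments of (1 << i) chunks.
 */
#include <stdio.h>
#include <string.h>

#define CHUNKS 1024                 /* toy metaslab: 1024 x 4K = 4M */
static char allocated[CHUNKS];      /* 1 = allocated, 0 = free */

static void fragment_metaslab(const int *histogram, int buckets)
{
    memset(allocated, 1, sizeof (allocated));   /* start fully allocated */
    int cursor = 0;

    for (int b = buckets - 1; b >= 0; b--) {
        int seglen = 1 << b;
        for (int n = 0; n < histogram[b] && cursor + seglen + 1 < CHUNKS; n++) {
            memset(&allocated[cursor], 0, seglen);   /* punch a free hole */
            cursor += seglen + 1;   /* leave one allocated chunk between holes */
        }
    }
}

int main(void)
{
    int histogram[] = { 8, 4, 2, 1 };   /* 8x1, 4x2, 2x4, 1x8-chunk free segments */
    fragment_metaslab(histogram, 4);

    int free_chunks = 0;
    for (int i = 0; i < CHUNKS; i++)
        free_chunks += !allocated[i];
    printf("free chunks: %d of %d\n", free_chunks, CHUNKS);
    return 0;
}
```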
A: So, for those of you who don't know, the open-zfs.org website and mailing lists are hosted by Hybrid Cluster. Yes — thanks to Luke and his company.

F: Cool, yes, so we are currently hosting open-zfs.org, which consists of — at the moment it's a Hybrid Cluster deployment, which is our previous platform that was built on top of OpenZFS and FreeBSD — and what I've been working on today is taking the first steps towards moving that to a Docker-based deployment that we can run on top of Flocker, which I showed everyone yesterday.

F: I think about three quarters of those things are stateful components, and so it's a natural fit for Flocker, because you need to be able to handle data. So I'm just working on writing Dockerfiles for each of them, composing them using a Fig application, and then I'm going to deploy them onto a host on EC2.
G: Yeah, so we worked with Justin on some changes — but it's not new code; the code has already been developed by Spectra Logic. So I reviewed some of the changes, transformed them to be able to go into the illumos code base, and submitted review requests and opened issues for those changes.

G: One is really a small bug fix, and another change is a performance enhancement, which was to aggregate — currently, synchronous reads and writes are not aggregated; asynchronous ones are — but in some cases it's useful to aggregate them, and the code allows that. It also allows aggregating synchronous and non-synchronous operations of the same kind, reads or writes.

J: So a couple of times a second or something like that they would just do a commit, and we would get, like, a 98% aggregation hit on that synchronous write stream.

J: So for us, at least in that workflow, it made a pretty huge difference in terms of overall throughput. So together we were able to get...

J: ...things into the queue: five new issues and five new review requests. Actually, I think one or two of them might have been previous review requests, but at least now they're issues and everything, and we're working on number six. I was hoping to get a little bit further, because we have changes from, like, two and a half years ago that do the...
A: So you guys should also see that you have those reviews in your inbox from developer@open-zfs.org, and, you know, comments are appreciated, but I think that your teams are pretty supportive.
L: ...idea, and the big upside, for people who are really trying to understand performance in big complex systems, is on a per-dataset basis: how you are using the ZIL, how you are using throughput — and doing those measurements cheaply. And then, of course, that translates right into the ports to BSD and Linux quite nicely, so: more visibility. The more you see, the more you can fix. And also, I think it'll fix some of the issues where right now you would have to either...

L: ...if you don't have DTrace on the system, it might be really hard to get some of this information. We've been kind of spoiled with DTrace on illumos for a long time, and so we wouldn't think twice about "I'll just write a DTrace script for this," but putting it in kstats makes it much easier to be consistent across these ports.
A: Team Joyent. And I'll just remind you again that there are prizes for the best hackathon project. This is going to be determined by popular vote or something, so keep in mind — think about — which hackathon project you want to root for, for best in show.
M: Yeah, so this was the second of the two problems that I talked about yesterday in my talk, which was that we have this race between — it looks like — zil_close, zfs rollback, and possibly spa_sync, where there were two failure modes. One was a panic, where, you know, one of these times we do this rollback to the system when you reconnect, and the...

M: ...system gets stuck in a taskq destroy, constantly looping in taskq_destroy.

C: So that was the problem. A while back we had determined that this was a race and had written a D script — we actually do this; I don't know if... we use D scripts. So we write these scripts when we find that we can't make progress on an issue: we will at least come up with a D script to capture some hypothesis, which will let us bisect the space a little bit — it's usually an exercise in frustration — and then deploy that into production, and that way we'll know when we see it again.
C: We actually could use a Manta job to locate it, and then from that we had to catch up to our previous selves. So all three of us have looked at this at various times, and I think it took us all a couple of hours each to even get back to where our previous selves were, and then we...

N: ...was basically mounted with the default standard synchronous behavior — normal operations don't do this. So what appears to have happened is that you basically have cases where you have things appended to this asynchronous list; therefore the ZIL log is actually dirty. So when spa_sync comes around, it will try and clean that up. However, if you're tearing down the zfsvfs, because it's not one of the synchronous operations, when you go and do a wait on all the synchronous ones — you just do your spa_sync wait — you're not going to actually be able to... you're going to...
N: ...if it happens to not be the thing you care about, you go around again. In the panic case, we've actually continued going on through the zfsvfs teardown, so we destroy the taskq. So therefore we're trying to do a taskq_dispatch to NULL and blow up. In the race, the assumption is...

N: ...basically that in taskq_dispatch — there's a taskq_destroy, there are no entries in it, and so it's trying to add an entry and remove an entry, because we try to add one token entry to kind of clean up its internal state, and because...

N: It is dirty, so we know that's... awesome, yeah. Basically, we just panic the system right after zfs_write returns. So therefore, you know, you've basically done this asynchronous write that will be sent down, and you have the asynchronous itx, but the actual synchronous entries are all basically zero. Yeah.

C: We know this because... we do believe that the D script — actually, the D script made it much less likely to hit us, and we know that it pushed the window out on the order of 100 microseconds. So...

C: It's just kind of... it's super tight, we've only seen it a handful of times, and the question is: is Dave ultimately going to be satisfied?
A: All right, so we have here a system set up with a pool called test2 — don't ask about this one; don't ask me about that. So there's some I/O running on this pool, you know, writing a bunch of...

A: ...and that's it, creating file systems. While we're waiting for that, we're gonna go over here and we're gonna do sudo ptime zfs channel test2 demo1... and this is also going to create 500 file systems, and we're done with the...

A: Do step two: let's say that this was, like, a bunch of data that, you know, was either internal or customer data, and you didn't feel like keeping track of which was which very well. The only thing you did was you set a property on it. So we're gonna do sudo zfs set customer_id=giant... I think they want to see the script too, but we'll get to that. Oh yeah — test2/fs127.
A: All right, so now let's say what we want to do is: we want to snapshot everything that has a customer_id set and destroy everything else. Normally, you know, you'd have to be like, "oh, okay..."

A: This one took almost two minutes to complete just the creation. Destroying is a lot slower — like, much slower — and Chris is going to talk about the... are you going to talk about the...?
P: ...I/O? Oh yes, I also — it's really boring to demo, but I changed the snapshot ioctl to be backed [by a script]: I basically deleted all the C code, so as soon as you get into the part that's called zfs_ioc_snapshot, I deleted that and replaced it with, like, a 30-line script that doesn't quite get every edge case — but the edge cases...

P: ...we can put out a review for stuff that already exists now, and people can look at it. It's not ready — it probably won't be ready to push for a couple of months, just because of when we'll have time to work on it — but the main thing left to do is to write tons and tons of tests and then fix all the bugs we find. Yeah.
P: Yes, so the zfs channel command should be part of it — it should be part of it, although it's only going to let you run it if you're root. That's the subject for a different one, by the way — it's not as simple.

P: ...there is a dry-run-ish option. So the property... you can't dry-run an entire script, because it's pointless: unless you're actually making the changes, if you do two dependent operations, your script is just going to behave differently in dry-run versus not dry-run.
P: But you can do what Paul's about to do. So here, creating all the file systems isn't dependent — none of those operations are dependent on each other. So, instead, there are, like, two versions of every syncfunc that you can run: there's a version that actually does the operation, and a version that returns the same error or success that would be returned if you ran it in the current state.
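(An editorial sketch of the "two versions of every syncfunc" pattern just described, not the actual channel-program code: each operation has a check half that reports the error it would return in the current state, and a do half that applies the change; a dry run calls only the check half. The names and the toy "dataset" state are hypothetical.)

```c
/*
 * Check/do pattern for dry-runnable operations.
 */
#include <stdio.h>
#include <string.h>
#include <errno.h>

#define MAX_DS 8
static char datasets[MAX_DS][32];
static int ndatasets;

static int dataset_exists(const char *name)
{
    for (int i = 0; i < ndatasets; i++)
        if (strcmp(datasets[i], name) == 0)
            return 1;
    return 0;
}

/* Check half: report success/error without changing anything. */
static int snapshot_check(const char *ds)
{
    return dataset_exists(ds) ? 0 : ENOENT;
}

/* Do half: actually apply the change (here, just record a "snapshot"). */
static int snapshot_do(const char *ds)
{
    int err = snapshot_check(ds);
    if (err == 0)
        printf("snapshot taken of %s\n", ds);
    return err;
}

static int run_op(const char *ds, int dry_run)
{
    return dry_run ? snapshot_check(ds) : snapshot_do(ds);
}

int main(void)
{
    strcpy(datasets[ndatasets++], "test2/fs1");

    printf("dry run:  %d\n", run_op("test2/fs1", 1));   /* 0: would succeed */
    printf("dry run:  %d\n", run_op("test2/none", 1));  /* ENOENT: would fail */
    printf("for real: %d\n", run_op("test2/fs1", 0));
    return 0;
}
```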
P: ...over, and, like, pre-checking that some things were going to work — but then things like snapshot limits have to be enforced, you know, based on how many snapshots you actually created. So the way that the new Lua code does it is a bit different, because it just goes and starts creating everything, and if it eventually hits the limit, it just goes back and destroys everything that it created, and nobody ever sees the intermediate state, because it was all done within one txg.

P: Yeah, yeah — Matt doesn't...

P: Kind of. It'd be really hard to make that generic, because right now we're just calling the same syncfuncs that exist, and they just — like most of ZFS — go out and update everything; they just change whatever state they want. So you'd have to do some sort of staging of the changes that doesn't get committed until the script succeeds.
R: Waiting for an entrepreneur to see it — it's about 192 DB, with fresh TVs from a brand new cluster, and scripting it so that it can go out between... once the...
O: People who do OpenZFS work on non-illumos platforms — since illumos is still the upstream of OpenZFS, they need to build on illumos, and zero-to-nightly takes an hour with good hardware and more without. So Delphix had a nice script that was quick and dirty and would build just the OpenZFS bits — still some code, but it can be measured in minutes instead of an hour. So I ported it into usr/src/tools for illumos, I had some people try it, and I wrote a man page for it.

O: It is now up for public review — if you've read your mail, you'll see it on the illumos list, the ZFS-on-illumos list, and the OpenZFS list — and I had some people try it. Was it terrible? Was it okay? It seemed to work okay: I got a FreeBSD thumbs-up, and it works on both OpenIndiana and OmniOS with stock illumos-gate, because building all of illumos-gate on anything other than OpenIndiana is still painful.

O: I've got diffs under review — you know, most people in the audience, please review them — but just the ZFS bits work pretty well. I suspect they work on SmartOS too; we had some bad luck getting them going on there, but that's probably bad luck and nothing more. So yeah, it's a good litmus test for somebody's changes before they go upstream, and if you can get past that, then you can engage in that community knowing that, hey, zmake worked. Thanks.
A: But I did pay attention, and I learned a lot about illumos, which is very good — how it works. I got distracted by a minor bug fix that got brought up: it turns out that if you have a suspended pool and you're trying to export it, you will deadlock the system with the current code, because the pool check function is not implemented — or, not deadlock, it hangs.

A: So if you... yeah, and then if you go subsequently — so the deal is: if you have a suspended pool and you run zpool export, it will run through — it'll run the handler on the kernel side and...

A: ...which is really unfortunate, because any other command — like, say, zpool clear — will attempt to, you know, acquire the namespace lock before it can clear and unsuspend the pool. So you end up in a very unfortunate state. So I made the even better one-line change to actually put the right handler in place, so you now get an error from zpool export if you try to export a suspended pool — and now hopefully you have the skills to be able to upstream that to illumos. I do, actually.
A: Alex, are you guys ready back there? Yeah — you need more time for the demo? Okay, we'll give you some more time. There are some more people standing over here — Richard, or...

A: Yes — other, other news: I think someone has...
B: ...keep a count of the number of objects under a certain user ID or group ID, in addition to the number of bytes, which is already accounted today. So it's very easy to do it incrementally; the biggest problem is the initial pass — when you upgrade the pool, you have to go through every object in the dataset to, you know, count stuff. And I looked at the old code that does the byte accounting for user and group.

B: I believe — if I understood it correctly — what it does is write a bit into every single dnode in the dataset to mark that, you know, there's a countable amount. But we were trying to avoid doing that, because that's going to be quite expensive for a big dataset. Also, I don't think you can easily do it when there is some clone chain for the dataset. So we looked at a couple of ways to do it and decided that the easy one — well, I mean, easy for my skill...

B: ...skill level to implement — is to just take a snapshot of the dataset and, at that point, divide the job into two parts.

B: One part is to do the regular incremental accounting at txg sync time — which basically means you can end up with negative counters, because you haven't counted the original, the snapshot, yet. The other part is to iterate over the objects in the dataset in the snapshot, which is read-only, and update the counters accordingly. And the nice thing about that is we can limit this amount of work per txg, and also there can be ongoing changes, which are accounted by the incremental counting, while the scan's updates merge into the existing counters.
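(A toy editorial sketch of the two-part scheme just described, not the team's code: an incremental delta maintained as objects are created and destroyed after the snapshot point, plus a bounded background scan of the read-only snapshot that fills in the baseline; the reported count is baseline plus delta, and the delta alone may be negative before the scan finishes. All structures here are hypothetical.)

```c
/*
 * Per-user object counts: incremental delta + bounded snapshot scan.
 */
#include <stdio.h>

#define NUSERS 4
#define SNAP_OBJS 10            /* objects that existed at the snapshot */

static long baseline[NUSERS];   /* filled in by the background scan */
static long delta[NUSERS];      /* updated incrementally at "txg sync" */
static int  snap_owner[SNAP_OBJS] = { 0, 0, 1, 1, 1, 2, 2, 3, 3, 3 };
static int  scan_cursor;        /* how far the background scan has gotten */

/* Incremental part: called as new objects are created or freed. */
static void account_delta(int uid, int nobjs) { delta[uid] += nobjs; }

/* Background part: scan at most 'budget' snapshot objects per txg. */
static void scan_some(int budget)
{
    for (int i = 0; i < budget && scan_cursor < SNAP_OBJS; i++, scan_cursor++)
        baseline[snap_owner[scan_cursor]]++;
}

int main(void)
{
    account_delta(1, +2);       /* user 1 creates two objects post-snapshot */
    account_delta(3, -1);       /* user 3 deletes one pre-snapshot object */

    while (scan_cursor < SNAP_OBJS)
        scan_some(3);           /* bounded work per "txg" */

    for (int u = 0; u < NUSERS; u++)
        printf("user %d: %ld objects\n", u, baseline[u] + delta[u]);
    return 0;
}
```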
B: So we could do something like write a throttle that looks at the amount of work that's at hand, to decide how much, you know, we want to do in each [txg]. So yeah — basically that's it, and the last bit is about destroying the snapshot when the scan is done. Cool, so yeah, that's it; we haven't done any coding...

B: ...coding yet, because, well, basically that's my paper — so yeah, we just spent some time exploring some alternative solutions. Also, Brian suggested that, instead of reusing the existing ZAP objects,

B: we should create two new ZAP objects for this, so that if we want to disable the feature, we can simply remove the ZAP object instead of going through the existing ZAP to find out which entries belong to the object counts. Okay, so that's it — hopefully we can have a patch for ZFS on the [list], plus I have some zpool sub-commands and man pages. — Oh, cool, cool, that's great!
A: ...through the [source] for what they call the GRUB driver, and see what...
P: So what I worked on was actually picking up something I started about a year ago. So basically there are a number of ways the space maps can get out of sync with the actual state of what's actually been allocated — you know, it's written out as the frees and allocations coalesce every so often — but there have been, it looks like, multiple bugs that can cause this, and...

P: ...because it can prevent you from importing the pool. And so the workaround, you know, covers the case where the space map is internally inconsistent — where there are overlapping [ranges] and stuff like that — and sort of trying to...

P: ...what's already constructed, if you just traverse the entire... the entire [pool]. So zdb kind of does that to try to look for leaks, and we can also just do that ourselves — or, more importantly, do it exactly: you can use the zdb traversal and reconstruct. So what zdb does is it loads the space maps and then walks [the pool] and tries to do this comparison.

P: It turned out that I ended up re-implementing a lot of that — so zhack got larger, more than twice as large as it currently is — before I realized that I probably should just try to get zdb to open the pool in read-write mode. So I spent... I guess I ran out of time before I got that working. And there are also some other changes that I didn't think about when I did this originally — there are additional changes needed to make the new space map...
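(An editorial sketch of the reconstruct-by-traversal idea just described, not zdb or zhack code: visit every reachable block, mark its range as allocated, and treat whatever remains unmarked as free space — the content a repaired space map would carry. The fixed-size "vdev" and block list are illustrative.)

```c
/*
 * Rebuild free ranges from a block traversal.
 */
#include <stdio.h>
#include <string.h>

#define VDEV_CHUNKS 64          /* toy vdev: 64 allocation units */
static char alloc_map[VDEV_CHUNKS];

struct blkptr { int offset; int len; };

/* Pretend traversal of the block tree: every reachable block is reported here. */
static void traverse(const struct blkptr *bps, int n)
{
    for (int i = 0; i < n; i++)
        memset(&alloc_map[bps[i].offset], 1, bps[i].len);
}

/* Emit the reconstructed free ranges. */
static void dump_free_ranges(void)
{
    for (int i = 0; i < VDEV_CHUNKS; ) {
        if (alloc_map[i]) { i++; continue; }
        int start = i;
        while (i < VDEV_CHUNKS && !alloc_map[i])
            i++;
        printf("free: [%d, %d)\n", start, i);
    }
}

int main(void)
{
    struct blkptr bps[] = { { 0, 4 }, { 10, 2 }, { 20, 8 }, { 40, 1 } };
    traverse(bps, 4);
    dump_free_ranges();
    return 0;
}
```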
S: So — and so — we, team hash: my goal was originally to get SHA-512/256 in, but after some discussion with...

U: I've been sitting on that for about a year now, and essentially we decided to anger the gods of the mailing lists, and we are going to be trying to propose a new hash once again — probably BLAKE2 — which, if all goes well, people will accept without too much grumbling. It's secure, it's fast, and...

U: Yeah, yeah — and yeah, so the initial version that we were working on is probably not going to be using those crazy SSE and AVX optimizations.

U: We are basically trying to get a foot in the door, getting everybody to accept the thing, and then we are going to be spending our time probably optimizing the hell out of it.

S: So yeah, yeah. Basically I took his code that he had done on various other hashes and some other stuff, got it into a git repository, and actually used Dan's zmake.sh script for the first time to install on the OS and actually build — and got compiler errors, because the changes that I did weren't complete. So...
U: ...did — so that's mostly what John did, yeah. I... I was kind of just consulted for that, yeah. My work was kind of resurrecting my long-discussed UNMAP patch from a while back and getting it to finally not panic on me right away on boot.

U: Unfortunately, I did spend some time wallowing around in sd.c writing a new SCSI command implementation, but so far I don't have any real hardware to test it on, so that's going to be a doozy for later, but...
I: So basically, I was working on data corruption when we lose power — when the pool loses power and then we resume the pool after the power comes back. So what happens, basically, in ZFS is that whenever your pool was suspended, when you resume it, you resend all the zios that failed. The problem with that is that some of these...

I: ...were actually not successful, because we didn't send the flush command — we only send a flush command when we finish the whole [txg] sync. So basically what I was working on is trying to make a batched flush for all these zios which are sent to a vdev, so...

I: ...it's not a zio-by-zio flush, but there is a bunch of zios which are being flushed all together, so it can be done, like, every 100 or 200 milliseconds — it's a configurable sort of parameter — and yeah. So...
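(A toy editorial sketch of the batched-flush idea just described, not the presenter's patch: instead of flushing the device cache per I/O, completed writes are collected and a single flush is issued for the whole batch at a configurable interval. All names and timings are assumptions.)

```c
/*
 * Batched cache flushes: one flush covers every write since the last one.
 */
#include <stdio.h>

static int flush_interval_ms = 100;   /* configurable batching window */
static int pending_writes;            /* writes completed since last flush */
static long last_flush_ms;

static void device_flush(void)
{
    printf("FLUSH covering %d writes\n", pending_writes);
    pending_writes = 0;
}

/* Called when a write completes; 'now_ms' is the current time. */
static void write_done(long now_ms)
{
    pending_writes++;
    if (now_ms - last_flush_ms >= flush_interval_ms) {
        device_flush();
        last_flush_ms = now_ms;
    }
}

int main(void)
{
    /* Simulate write completions every 10 ms for half a second. */
    for (long t = 0; t <= 500; t += 10)
        write_done(t);
    if (pending_writes > 0)
        device_flush();   /* final flush so nothing is left uncovered */
    return 0;
}
```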
S: ...on statistics — so yeah, the idea here is: this is a graph showing time on the x-axis and, on the y-axis...

A: ...is they keep a sample of the points they've seen so far, and they are more likely to throw away sampled points that are farther away in time than points that are more recent. And as a consequence, when something like this happens, you notice it almost immediately in your sample. So our plan was to add this to ZFS...
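(A standalone editorial sketch of the recency-biased sampling just described, not the hackathon code: keep a fixed-size sample and, once it is full, evict entries with a bias toward the oldest ones, so a change in behavior shows up in the sample almost immediately. The bias parameter and eviction rule are illustrative assumptions.)

```c
/*
 * Recency-biased reservoir: older points are preferentially evicted.
 */
#include <stdio.h>
#include <stdlib.h>

#define SAMPLE_SIZE 64

struct point { long t; double value; };
static struct point sample[SAMPLE_SIZE];
static int nsample;

static void add_point(long t, double value, double recency_bias)
{
    if (nsample < SAMPLE_SIZE) {
        sample[nsample++] = (struct point){ t, value };
        return;
    }
    /* Pick a victim index skewed toward 0 (the oldest point). */
    double u = (double)rand() / RAND_MAX;
    int victim = (int)(SAMPLE_SIZE * u / (1.0 + recency_bias * (1.0 - u)));
    if (victim >= SAMPLE_SIZE)
        victim = SAMPLE_SIZE - 1;
    /* Shift left to keep the sample in time order, append the new point. */
    for (int i = victim; i < SAMPLE_SIZE - 1; i++)
        sample[i] = sample[i + 1];
    sample[SAMPLE_SIZE - 1] = (struct point){ t, value };
}

int main(void)
{
    /* Simulated latency jumps from ~1.0 to ~10.0 at t = 900. */
    for (long t = 0; t < 1000; t++)
        add_point(t, t < 900 ? 1.0 : 10.0, 4.0 /* prefer recent points */);

    int recent = 0;
    for (int i = 0; i < nsample; i++)
        recent += sample[i].value > 5.0;
    printf("%d of %d sampled points reflect the new regime\n", recent, nsample);
    return 0;
}
```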
A: ...whatever we want to be paying attention to. And so we created an infrastructure called "astat" — for Alex's stat-aggregation stats, or something like that — asats. And the way it generally works is: we have an asat we can print, and we get the histogram, and this histogram is generated at the time of the mdb command from the sample. But the cool thing — and hopefully this will cooperate with me — is we provided a parameter for how recent the points it keeps, or prefers, are. And so... is this cooperating?

S: So this... this is actually generated...

I: Are you gonna push this?

A: Well, when we first...
A: ...it — you can do that — and when I tried to do things with the notes in this [area], that didn't work out quite so well. And after working with Dan, it finally occurred to me: it's not that there's some thing that I'm missing; it's just that if you do not have everything done exactly like someone else has done it, or exactly...

A: ...tracking down bugs in GRUB 2 as well, plus two patches. So I know that GRUB 2 actually is, theoretically, capable of booting off multiple vdevs. In fact, I not only know it, but I tested it in the past with a FUSE file system driver — basically using a pool backed by two files; it's in the code — and I actually repeated that earlier today, verifying that it does indeed work in principle. So I threw together a script to automate setting up GRUB 2 on a multiple-vdev [pool]...

A: So, in principle, if we were to do that by hand — or, we'll first do it by hand and see why it won't work, and I believe it will work — we should be able to boot off multiple vdevs. Well, you can put a root pool [on a mirror], but not with more than one vdev. And, quite frankly, all that's blocking that is fixing the config file and then the config file generator, and then it will just work.

A: So I guess that's about it. I really wanted to demo the first boot off multiple vdevs.
V: ...does the naive thing, where it lets you specify a property that specifies the new size, and I said: okay, we'll just set the bonus length of the dnode based on the current property value, we'll squirrel away our, you know, our bonus buffer in that space, and see what breaks. And, lo and behold, lots of stuff breaks, because I, you know, don't know enough about ZFS internals to find all the sharp edges and assumptions that are, you know, spread throughout the code. So if I didn't...

A: Thanks. Is there anyone left, except for John? Did you guys have any more projects, then? Any...
T: So originally Don and I started out just adding some tests to the test runner — some tests that he had been using at [his company] — and when we were done with that, Don split off to work with Brian. So after that I added a sub-command to zpool that lists disks that aren't yet being used by ZFS, so you can do something like: zpool create poolname `zpool available`, and then just use the rest of your disks. It works, but it probably needs a lot more testing.
A: So stand here for a second. We have three awards to give out here, for three different things. The second one is going to be for the best hackathon project, so think about which team you want to vote for for best hackathon project. The first award is the Matt Ahrens handshake of appreciation, for writing up all of the hackathon ideas that you guys just presented — I've seen John just typing away furiously for the past hour.

A: The second award that we have is donated by our sponsor Nexenta. It is up to three wireless sound systems for your phone or other things. I think it can also charge your phone, so you can probably, like, turn it into a robot or something — I think it has, like, seven functions in one.

A: So I think the way we should do this is: point to, or yell out, the name of the team that you think should be awarded the prize for best...

A: ...who we could not have made this event happen without, and she's sitting over here.

A: ...organizing everything and making this event possible. Without you, you know, we could never have done this. You made all the food happen and the event happen, got everybody to show up here, and got all the sponsors so that we could do all this. So thanks.

A: So our dinner reservation is 30 minutes from now. We need to leave here in about 15 minutes to walk over to the dinner. So you have 15 minutes to put any finishing touches on your hackathon project, and then we will leave at 5:45 sharp. Thanks, everyone.