From YouTube: OpenZFS Developer Summit Part 5
Description
http://www.beginningwithi.com/2013/11/18/openzfs-developer-summit/
Data-Driven Development of OpenZFS (Adam Leventhal)
So Max wasn't here when we started, and he wanted to gauge interest in another special topic for this afternoon. Do you want to just give a couple of minutes' overview of what you want to talk about?

A little bit of it is that there's a tool in the illumos source, so let's call it zdb, and there's another tool called mdb. I'll be careful about... I'm going to turn this way, and then come back.
zdb gives you all or nothing, and so I started it when Aaron was talking. I was trying to find a particular ZAP object, because I needed some information about it. It's zdb with eight d's and the pool name, and it started giving me space map information. You've seen this: well, it ran for an hour and a half giving me space map information, and then I lost the connection. So I mentioned this to Matt; that was actually on Solaris 11, which I happened to be on not too long ago.
In fact, it's basically terrible. If everything is fine, it's great, because it does everything: you can see everything that's there, you can figure out where everything is, and that part's great. But as a debugger I want something that's interactive, and several years ago now I modified MDB so that I could actually run MDB on the raw disk, then look at uberblocks and actually walk the data. Now, this has other problems.
It's very much step by step, by hand, looking at everything, but you can see everything, and it helps you understand what it is you're looking at and what the layout actually is. What I wanted to do is show this a little bit, show what I've done. The root issue with using MDB is that I haven't figured out, or haven't gotten around to figuring out, how to do decompression.
This is the real headache. So what I want is something like ::zprint, or ::print -z, or something like this, where I can say: okay, here's a compressed block, and you print it out as an objset_phys_t, or a dnode_phys_t, or a dsl_dir_phys_t, or whatever. So that's what I hope to open up a little bit. If you want, I can go through some examples, but if you had other things planned...
This is kind of a work in progress, very much a work in progress, and there's a bunch of... especially now that I'm working on a real problem. We had a pool basically get wiped out; we can't import it. If you run zdb on it, it refuses to go forward; it complains about a couple of metadata objects. But I'd like to look and see what's there, interactively, exactly.
Given a particular object ID, we'll look at this data structure. I'd like to be able to say: here's the block pointer, show me the whole chain of block pointers, and if there are indirect blocks, actually walk all of them and display them, like ::blkptr does. And for those who don't know anything about MDB in some of this audience...
So, my stuff: I'm kind of a very active ZFS dabbler, perhaps. Matt had suggested this title for the talk, "Data-Driven Development of OpenZFS," but I think "ZFS was slower, now it's faster" captures it a little better. There's going to be a choose-your-own-adventure component to this talk, so stay tuned. A second ago Matt gave his version of this; I'm also using this as a platform, and George is going to give his version of some of it as well.
And then in the third stage we actually shipped the product, and that's when we really learned where it was broken. That's where all of those features were really being used very extensively; they were used in performance-critical environments, and that's where we started to build in stability. Talking to George earlier: there have been many of what felt like finish lines with ZFS, but I think we've realized that they've all been starting lines, in fact. And so this is really the first age of OpenZFS.
At some point you might say, well, what about this other data? The data I have is the data I have, and I'll try to go get more data; I wish I had more of it as well. So this is the choose-your-own-adventure part. The two ways this presentation can go are either the snippets that I have collected in...
What I'm going to do is a curated version, and then I will go into the text files, or something. Actually, no, we'll start with the text files, and apologies to folks looking at the screen, because there's absolutely no way to be able to see this. People in the back, if you start squinting, come on up front. Chris, George: can you read the text on the screen? Can you turn off the camera?
I will share my presentation instead. So let me give some of the background. Matt already explained about Delphix: Delphix uses our own version of ZFS as part of the product, and we had a customer who was very unhappy with the performance they were getting on their system, so we started just by looking at that performance. Delphix works as a software appliance: we connect up to any storage, in this case an EMC SAN, and connect up to their Oracle database.
Cached reads look great. Uncached reads kind of show the same sort of distribution we've seen before. But once we get to the writes, we're starting to see really awful stuff. So these are ours: on the top are async writes, and these are synchronous. We see this kind of trimodal distribution: some writes are super fast, like microseconds; then we have a little hump around eight milliseconds; and then another hump around two seconds. Two seconds of latency per operation.
So looking at that, we understood why the customer was cranky. I'm going to start there: I'm going to go over to the curated slides just because, but I promise I will get back to the ugly mess at some point. So this is the picture I just showed: reads look fine; all the reads seem fine.
The first problem to talk about is the write throttle. We looked at it with this DTrace script, to see how long spa_sync was taking. In spa_sync we counted the amount of time, and we keyed it by the number of space map loads that happened along the way, and saw that spa_sync was taking an enormous amount of time. This is the sort of thing that focuses your attention: spa_sync was taking one to two seconds, or more. Now I'll get into an explanation of what's going on.
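A minimal sketch of that kind of measurement, assuming the illumos function names spa_sync() and space_map_load(): time each sync and key the latency distribution by the number of space map loads it did.

/* time spa_sync() in milliseconds, keyed by the number of
 * space_map_load() calls that happened during that sync */
fbt::spa_sync:entry
{
        self->start = timestamp;
        self->loads = 0;
}

fbt::space_map_load:entry
/self->start/
{
        self->loads++;
}

fbt::spa_sync:return
/self->start/
{
        @[self->loads] = quantize((timestamp - self->start) / 1000000);
        self->start = 0;
        self->loads = 0;
}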
Then we looked at where spa_sync is giving up the CPU. The thought here was: spa_sync is taking an awfully long time to run, and spa_sync is the thing that writes out all of your data to disk, so we wanted to understand why it is giving up the CPU. So every time the thread went off the CPU we took a timestamp, and every time...
...it came back on the CPU we took another timestamp, to see where we were giving it up most frequently, and we saw that where we're giving it up most is in dsl_pool_sync, on zio_wait. Actually, sorry, I'll press on; I'll get to that in a bit more detail in the data in the text files.
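A rough sketch of that off-CPU measurement, using the sched provider and flagging the thread while it is inside spa_sync(); off-CPU time is summed by the stack at which the thread blocked.

fbt::spa_sync:entry  { self->insync = 1; }
fbt::spa_sync:return { self->insync = 0; }

sched:::off-cpu
/self->insync/
{
        self->off = timestamp;
}

sched:::on-cpu
/self->off/
{
        /* sum off-CPU time by the stack where the thread gave up the CPU */
        @offcpu[stack()] = sum(timestamp - self->off);
        self->off = 0;
}

END
{
        trunc(@offcpu, 10);     /* the ten biggest contributors */
}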
So, this write throttle: has anyone looked into the ZFS write throttle? Yes, a couple of people, maybe, from before Matt pushed the better ZFS write throttle.
Had anyone looked into it? A couple of people, great, the same people. So the old ZFS write throttle worked roughly like this, and Matt and George should keep me honest here. We want to keep transaction groups to a reasonable size and limit the amount of outstanding data. It did this by targeting a fixed amount of time: on some systems it was one second, on others five seconds. It basically worked like this.
Let me estimate how much data I can write out in one second, or write out in five seconds, and I'm going to figure that out based on some sort of notion of history that was not particularly well formed. Once I've gotten that much data, stop: that transaction group is done, we need to not make it any bigger, and sync it out. If there's someone already syncing, then everybody needs to stop. And here's the really wacky thing: when we get to seven-eighths of that limit, it inserts a 10 millisecond delay.
There's no... no idea where, like, why? If you look on the web you can find some rationalizations for why this is a good idea; it's basically, when we're getting close to the wall, start to slow down the incoming writes, or something. But you can see it in the data: you can see that we have a bunch of stuff that is neatly split. We have this pile of stuff...
...at about ten milliseconds, because we've inserted some random delay, and this pile of stuff down here which takes multiple seconds. The stuff that takes multiple seconds is there because we've reached our limit and we're not allowed to go above it, and the previous transaction group is taking several seconds to get through. You can run this DTrace script, and you can kind of parse it on your own, but it looks at what the write limit ZFS was computing.
The write throttle was trying to figure out how much data it could push through the system in the amount of time it had, and it was doing this based on some kind of notion of hysteresis. Measuring this, it would vary widely: on the left you have seconds, how many seconds since we started running the script, and on the right the number of megabytes that it thought it could write through in that sample. It varies anywhere from 470 to 670 megabytes a second on this system.
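One way to watch that moving target is to print the limit at the top of each pool sync; this sketch assumes the old code kept the computed value in a dsl_pool_t field called dp_write_limit, which may be named differently in other trees.

/* print the old write throttle's computed limit for each txg sync;
 * dp_write_limit is assumed from the pre-rewrite dsl_pool_t layout */
fbt::dsl_pool_sync:entry
{
        printf("%Y  write limit = %d MB\n", walltimestamp,
            (int)(args[0]->dp_write_limit / 1048576));
}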
Just anecdotally, I saw it range anywhere from 100 to 800 megabytes. So what that means is that our notion of how big a transaction group should be is deeply flawed; it's ranging all over the place. It means we'll try to make a big transaction group, then swing the other way and make a small one, and sometimes we'll be blocking, sometimes delaying; transactions were all over the place. Reads also impacted this. It was a mess.
So in this case we were looking at I/O queue times. We saw that there was a bunch of work sitting in the I/O queue for synchronous writes, work that ZFS was queuing and not pushing to the backend storage. If we look at async writes, it's an even sadder picture, where we see that the peak for time spent in the queue is around 250 to 500 milliseconds.
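A sketch of how that queue time can be measured, assuming the illumos vdev queue entry points vdev_queue_io() (enqueue) and vdev_queue_io_to_issue() (dequeue); aggregated I/Os that were never enqueued under their own zio pointer are simply skipped.

/* how long do I/Os sit in the ZFS vdev queue before being issued? */
fbt::vdev_queue_io:entry
{
        qtime[arg0] = timestamp;                /* keyed by zio_t pointer */
}

fbt::vdev_queue_io_to_issue:return
/arg1 != 0 && qtime[arg1] != 0/
{
        @["usec in vdev queue"] = quantize((timestamp - qtime[arg1]) / 1000);
        qtime[arg1] = 0;
}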
That's not good: we saw I/Os hanging out in the queue for an amount of time similar to the amount of time it takes to service the I/O. So let me start looking at the queue depth, the vdev max pending. Has anyone turned this knob? Okay, right, lots of people have turned this knob; did anyone get different feedback when they tuned it? Right.
On my particular customer's system, with a queue depth of 10, and in kind of a similar workload (there are fluctuations in the workload): this is a queue depth of 20, and you can see it shifts down a little bit; this is up to 30. This is all on production, right, this is all on their production system. With a queue depth of 64 we also get the average latency down here: the average latency is 44 milliseconds, which is already getting to be a long time for a storage array to respond. And then with a queue depth of 128 it's eighty-eight milliseconds.
So now we're kind of leaning on the system more: the latency is going way up, but the throughput is only increasing a little bit. So the choice of the I/O queue depth was obviously very important to a lot of you, as evidenced by all the hands. I don't know where the default of ten came from. Maybe it makes sense for disks; I suspect it does not make sense for this. There are people here who have dealt with physical spindles more than I have. What?
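The comparison across queue depths comes down to device-level latency, which a standard io-provider script gives per device; a sketch:

/* per-device I/O latency distribution, in microseconds */
io:::start
{
        start[arg0] = timestamp;
}

io:::done
/start[arg0]/
{
        @lat[args[1]->dev_statname] =
            quantize((timestamp - start[arg0]) / 1000);
        start[arg0] = 0;
}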
So one of the big lessons that we learned here, as a company that sells a product that sits between a very trusted storage box and a very trusted database, is that if we start queuing stuff inside of our product, inside of ZFS, because it feels like the back end can't keep up, the problem doesn't light up in the way that we'd ideally like it to. For example, if we say to a customer, "your backend storage looks slow," and we show them this picture...
...they say, "What do you mean my backend storage is slow? My backend storage looks awesome." But if we show them this other picture, they say, "Yes, my backend storage does look slow." So there's an important point there: depending on where you decide to queue, you can show the problem in different places, which can help, depending on which perspective you care most about. So for this, Matt and I scratched our heads and went off to think about this new...
...thing, the ZFS I/O scheduler. We thought about it, kind of talked about it, as a new write throttle, but Matt has insisted on referring to it as an I/O scheduler, which I think is a much better way of thinking about it. The basic way it works is that we choose to limit the amount of dirty data on the system, the amount of modified data that we're allowed to accumulate, and as more dirty data builds up, we schedule more concurrent I/Os. So if you have a kind of light stream of data coming in on the system, then we will optimize for latency.
A question came up over here about how we roll out some of these performance enhancements. It's a great question, because we can't, in our labs, in Delphix's labs, exhaustively enumerate all the different types of performance scenarios that all of our customers are going to experience.
So in this case, we can expand the size of that dirty data window to allow more or less to accumulate, and then change the number of concurrent I/Os, the I/O queue depth, for each of those different classes, as well as the aggregate maximum, the global limit I should say, and then change the factors that control how that delay ramps up. Any questions or comments about the I/O scheduler, or other data you might want to see?
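For reference, these knobs live as kernel globals and can be read in one shot; the names below are the ones that came in with the new scheduler, so treat them as assumptions and check your tree's spelling.

/* one-shot dump of the (assumed) I/O scheduler tunables */
BEGIN
{
        printf("zfs_dirty_data_max              %d MB\n",
            (int)(`zfs_dirty_data_max / 1048576));
        printf("zfs_delay_min_dirty_percent     %d\n",
            (int)`zfs_delay_min_dirty_percent);
        printf("zfs_vdev_async_write_min_active %d\n",
            (int)`zfs_vdev_async_write_min_active);
        printf("zfs_vdev_async_write_max_active %d\n",
            (int)`zfs_vdev_async_write_max_active);
        printf("zfs_vdev_max_active             %d\n",
            (int)`zfs_vdev_max_active);
        exit(0);
}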
Yes: for example, RAID-Z2 when you vary thread counts. I was running fio and just ramping up the number of threads, or whatever fio uses; I think they call it the number of jobs. When I got up to 64 or 128 threads, the performance with that patch kind of nosedives. I haven't looked at it in much more depth, so I'm just giving you that.
So there are a couple of things here. It is not our expectation that the new I/O scheduler will be strictly better in all cases. The goal is to have something that you can reason about in some rational way. The old write throttle was kind of like: let me roll the dice and pick how big this txg is going to be, and then I will stop you in your tracks if it's too big; and the knobs you could turn were so oblique and removed from any real understanding.
So now, at least, our goal is to put in knobs that make sense. Now, it might be the case that your workload is slower with it, or that you run into lock contention. If we did nothing, we would never have that data, and a lot of this is about moving forward. And again, we can reason about how big that dirty data window should be: look at the amount of memory on the system and the throughput of the backend storage and figure it out.
It's a quantity that you can even kind of argue about. And then there's the stuff around tuning the I/O queue depth: that's another thing you can measure, another place where you can interrogate the backend storage box, run a definitive experiment, and base it on data. And the delay times, similarly. So these are all primitives that make a lot more sense now. If we've blown it in terms of introducing massive lock contention, we obviously want to fix that.
That's great feedback. So, on the subject of lock contention, I want to pull up some examples. On this system we started looking at locks. Can people read this in the back? Great. So just keep in mind this particular lock: we see a taskq lock where we're blocked for a damn long time.
This is also spinning on the same lock, and this is in nanoseconds, so down here we're spinning for, wow, 67 milliseconds, which is a long time for a CPU to be sitting there spinning. Yes, that's a very long time. So we're spending a tremendous amount of time in spin phases; there was some awful contention. Same lock, different place, just as bad spinning; same lock, different place, just as bad spinning.
Right, back to the PowerPoint. So that's what you saw from lockstat, and we found it to be this lock, on the zio issue taskq. It was kind of deep in the bowels of zio, with lots of task queues; there's this static block of code where it says this taskq should have six threads...
Yeah, if you have other or better analysis tools for looking at lock contention, or other workloads that exhibit heavy lock contention... there's a bunch of data that we've collected here, and this one mutex kind of dominated. But maybe some people are wondering how this information was put together.
together.
E
Don't
forgive
me
I,
don't
know
about
the
other
operating
systems,
but
in
a
little
you
have
adapted
lumps
adapter
blocks
will
say
if
the
thread
through
owns
this
lock
is
on
cpu,
then
I'm
going
to
sit
here
and
spin
under
the
understanding
that
we've
optimized
the
system
for
the
unintended
case.
So
if
someone
is
on
CPU,
has
the
lock
they're
probably
going
to
drop
it
in
a
moment,
go
to
sleep
and
being
open
up,
so
it
shows
both
those
types
of
events
as
spin
events.
It also shows blocking events, where a thread has actually put itself to sleep in order to wait for the mutex to be handed over. Lock contention obviously really impedes performance, or can potentially really impede performance, especially when you're blocked for a very long time or you're spinning for a long time.
So what this is showing is this particular lock, at that address, and whether it was blocking or spinning. This one was spinning when called from the stack trace on the right, and we detected 97,000 spin events coming in here, and then this is showing how long it was spinning each time. Most of the time it was spinning for 16 microseconds, but then a few times it spun for, whatever that is, an entire second.
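That per-lock, per-stack spin data can be pulled straight from DTrace's lockstat provider; a sketch, assuming arg1 carries the spin or sleep time in nanoseconds as it does on recent illumos, and a predicate on arg0 can narrow it to one lock address.

/* adaptive mutex contention: spin and block times by stack */
lockstat:::adaptive-spin
{
        @spin[stack()] = quantize(arg1);        /* nsec spent spinning */
}

lockstat:::adaptive-block
{
        @block[stack()] = quantize(arg1);       /* nsec spent blocked */
}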
So, if you want, I can just keep going down through here; there are lots of lock contention events through here. We looked through some of the good ones: the arc state change locks, where we've seen other lock contention; there's the other lock that got broken up, around aggregation and accounting information.
The main lock break-up that we did as a result of this was the taskq stuff I mentioned, and then, not actually a lock break-up, but making sure that we dropped the ARC lock that was just being held for a long time when we were reaping stuff from the ARC. It was just: grab the lock, evict however many gigabytes I might need right now, and then drop it.
Under the new I/O scheduler, that's when we've accumulated enough data that we've decided to break off a new transaction group. So here we looked at all the functions that spa_sync calls and the percentage of time that we spend in each. I hope you guys can see it. At the bottom, I think I have this: dsl_pool_sync is where we expect to see most of our time; continue down to the bottom.
dsl_pool_sync is where we expect to be, because it means we're actually writing data out to disk. But we were surprised to see vdev_sync_done taking up as much time as it did; I was surprised to see it taking as much time as it did. In this case it was about sixteen percent, but we saw it go up to about twenty percent.
Ideally we're going to spend all of our time in dsl_pool_sync and none in vdev_sync_done, because dsl_pool_sync is where we're actually doing the work of writing data out to disk. So let me chase this down. What I did was take MDB, basically MDB's disassembly, to get the call graph, and run that through a Perl script to write a bunch of DTrace scripts. So this doesn't say trace everybody; I just want to trace everybody that this function calls.
So what is vdev_sync_done doing? Well, mostly it's calling metaslab_sync_done. And what about metaslab_sync_done? Well, it's spending most of its time in space_map_unload. And then we started looking at all these different space map functions and where we were spending our time during a particular spa_sync: we were calling some of these just a ton of times, and some of them were taking a really long time.
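A much-simplified version of that accounting, which times only the callees named here (vdev_sync_done, metaslab_sync_done, space_map_unload) against spa_sync itself, rather than generating probes for every callee:

/* total time and call counts for spa_sync() and a few callees */
fbt::spa_sync:entry,
fbt::vdev_sync_done:entry,
fbt::metaslab_sync_done:entry,
fbt::space_map_unload:entry
{
        self->ts[probefunc] = timestamp;
}

fbt::spa_sync:return,
fbt::vdev_sync_done:return,
fbt::metaslab_sync_done:return,
fbt::space_map_unload:return
/self->ts[probefunc]/
{
        @total[probefunc] = sum(timestamp - self->ts[probefunc]);
        @calls[probefunc] = count();
        self->ts[probefunc] = 0;
}

END
{
        printa("%-20s %@16d ns  %@10d calls\n", @total, @calls);
}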
So this is where George started catching wind of this and getting interested in what the heck was going on there. We also used the CPU performance counters. Does anyone use DTrace's CPC provider? A couple of people, yeah. I worked with Jon Haslam on this ages and ages ago; I've never really used it in anger, but we started looking at the CPC provider.
In particular, looking at data misses in the TLB, we saw some of these space map operations, and metaslab_segsize_compare, coming up a lot. Like, a lot. Also LZJB, and something from the network driver, but those were beside the point. So there are a bunch of things going on here.
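The CPC-provider probe used for that kind of sampling is named after a hardware event, which is CPU-specific; the event below is only an assumption (check cpustat -h on the machine), firing once per 10,000 kernel-mode DTLB misses with the kernel PC in arg0.

/* attribute sampled kernel DTLB misses to functions;
 * the event name is platform specific */
cpc:::DTLB_misses-kernel-10000
{
        @misses[func(arg0)] = count();
}

END
{
        trunc(@misses, 20);
}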
Delphix is a bit of a... it pushes ZFS in a couple of ways. One, we are mostly dealing with small record sizes; an 8K record size is typical. A lot of the work that was done previously focused on large record sizes, and when we have small record sizes it means these metaslabs can get very, very swiss-cheesed, as we like to say. So most of the space maps on each metaslab had upwards of 30,000 segments to keep track of what was free and what was allocated. I think, George, that felt large.
Okay, that's what we were seeing at the time. We had kind of imagined these space maps being a little more nimble in terms of data structures, and then found that the reality of our workloads, which are small blocks, small blocks and lots and lots of rewrites at random places throughout the file, with very, very little sequential access, meant we were swiss-cheesing these things tremendously. We also found that we were trying to build the perfect space map, and this is an observation that might help.
We spent a lot of time taking all the data we had and really making sure the thing that we wrote out to disk represented the absolute best, canonical form of the space map. Wouldn't it be very nice to say it's just allocations, just frees, so it's just alloc, alloc, alloc, alloc, alloc, and so on.
So we determined that close enough would work for a lot of these cases. We were doing a bunch of work that felt very much like digging a big ditch and filling it back in. In particular, we took this 30,000-element space map, walked all of its component segments, and then put them into a different data structure; so it was just sitting there moving things from one pile into another.
It's not just that we're burning a lot of CPU on it; yeah, CPUs are fast and you've got a lot of them. But this is time that we're leaving the disks idle. If you're trying to write as fast as possible, that is a hundred percent performance left on the table. If we decrease that to half a percent, you get twenty percent more that you can write out.
With the space maps a bit quieter, we started looking at... I think it was on this system. We saw that we were spending a lot of CPU time, and the CPU time was disproportionate to what the code was doing. I disassembled the functions: it was like a couple of loads, really. How could we possibly be spending three seconds, or some similar amount of time, in there? So we went to our performance expert at Delphix.
Folks there had actually done a lot of work on this previously, so we talked to them and they walked us through it: what happens if you hit in this cache, or it comes out of that cache, or out of memory on a remote socket, all these different layers of missing. And it's kind of like, only if we missed everywhere could it be this slow.
You
know
you
have
the
mists
and
a
bunch
of
tlv
is
what
we
realized
in
order
to
that
the
type
of
slowness
Percy.
So
that
kind
of
put
us
on
today
but
I
think
it's
a
good
general
technique.
Just
you
know
throw
CPC
at
it
to
see
if
it's
up
just
like
it's
a
different
type
of
program,
all
right,
any
other,
maybe
Delphic
strokes,
any
other
pieces
that
you
remember
from
the
analysis
that
I
could
pull
up
in
the
textiles.
This
is
so
long.
Hunger
I'll
just
got
our
as
we're
walking
through
dinner,
they're
good.
We might need to do our deferred free operations in a later sync pass, or allow frees to actually happen in both sync pass one and sync pass two. You're deferring the actual freeing until later, and that was resulting in the sync taking much longer to converge: we would end up with nine to twelve passes. So we started playing around with that value and saying, okay, defer frees once you get past sync pass one; only free in the very first pass, and after that defer to the next transaction group.
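Convergence can be watched directly by counting passes; this sketch assumes, as in the illumos spa_sync() loop, that dsl_pool_sync() runs exactly once per sync pass.

/* how many sync passes does each txg take to converge? */
fbt::spa_sync:entry
{
        self->insync = 1;
        self->passes = 0;
}

fbt::dsl_pool_sync:entry
/self->insync/
{
        self->passes++;
}

fbt::spa_sync:return
/self->insync/
{
        @["sync passes per txg"] = quantize(self->passes);
        self->insync = 0;
}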
Digging this up... I think actually there's something down here; actually, I think that's wrong, but this is why it's the uncurated version. The things to highlight here are that sync pass one took, in this case, a minority of the time, and that it took twelve passes to converge; and all of this was also throwing off the calculation of how much data we could push out in a given transaction group.
With some of these data-driven discoveries... sitting back with George a few months ago, after we had collected a bunch of this data, we realized we were just scratching the surface, because we're really at the beginning again of this new era. There's a lot in George's talk, a lot of work that he has ongoing and is looking at for the future, but also all the stuff we can do with the I/O scheduler.
Now that we have this notion of idleness on the system more concretely expressed, we can do things like schedule scrubs more intelligently, or do something like the background verification you see on some hard drives, where we go back and check that the data is correct. There's even the possibility, potentially, of learning more about how the physical drives operate, the mapping from virtual to physical, and feeding that to our algorithms, so that the software is more intelligently choosing between throughput and latency rather than just forcing it. But that's all a tangent.
A kind of open-ended question: is there any way that we can engage the broader community with things like this? Because I know on our systems, over the past year, six months or so, we've come to the same conclusion, that the write throttle is broken, and then a week later Matt comes up with this patch, like, holy... everyone else sees it, yeah. But there's no... if I see a problem, I go to Bryan, right next door to my office, like, do you see this?
We talk about it, we figure out what the problem is, but there's no discussion outside of individual companies, and I'm sure we see a lot of the same problems. My guess is you guys worked on this problem for a long time, and then you push a patch and it's like, "here's what we've been seeing for a year, here's a solution," and it's like, oh, we've been seeing it too, and we did all the same work, we've been building the same histograms. Maybe we should have been communicating instead of duplicating all the effort.
I didn't know that; this is a good place to start airing that out. I've also been hanging out on the IRC channel, and yes, that's a good place to do more interactive things, like: I saw this, can you run this, do you also get this information, it looks like you might be hitting this problem. You know, working in a more collaborative, data-driven way.
In particular, on feedback for the I/O scheduler, we'd love to get the data that shows why it's good or where it's bad. I'm hoping also to share a bunch of the scripts that I presented here, and I will definitely share the slides. If there are scripts or data collection methodologies here, mostly DTrace stuff, that would be useful to you, I'd be very happy to share; just contact me if I haven't already. Cool.