From YouTube: DRAID Performance by Carles Mateo
Description
From the OpenZFS Developer Summit 2018
slides: https://docs.google.com/presentation/d/1GZXj0u13_FcgfnFCZNbKemlvC0NPZ7I56UVNQNHJwIc/edit?usp=sharing
dRAID is a project that is still a work in progress, but a really promising technology. We wanted to see how far we could go, so here we will cover how we got to understand the technology and what speed we can get out of it.
I will use C for the number of vdevs (redundancy groups), D for the number of data strips per stripe, P for parity, and S for spares, which are distributed spares, not physical spares.
Okay, the drives we were using for those tests were four-terabyte Seagate SAS drives. Those drives were performing at around 200 MB/s for read and for write. You can check that speed with dd, for example writing from /dev/zero to the device directly with a large amount of data to prevent the operating system cache from interfering, or by doing things like generating a big random file from /dev/urandom, copying it to a RAM drive, and then from there overwriting the device directly.
Okay, other tests confirmed the same speeds, like fio, et cetera. It's important not to use zeros; I used them in that example because I was writing to the device directly. You can overwrite the raw device with zeros, but don't test ZFS with them, because you can have compression faking the results. We were using HBA controllers for this.
So no cache of any kind, just drives attached to the system. That speed is per drive, but obviously, as we create a RAID system, it will be aggregated and we will have more. The servers we were using for this: one was a 4U60, which is a sixty-drive server in 4U.
This one had more or less powerful CPUs, Intel at around $2,500 per CPU, and 64 GB of RAM. The controller was set up with a max bandwidth of more or less 3.8 GB/s. And we used another server, which is a 4U90, that has 90 drives in 4U. The CPUs in here were a chip I would call crappy, but that's not very polite; they were around $400 each, and the RAM of the system was 120 GB.
We had to understand it. What dRAID does is it uses a small fraction of every disk to provide distributed spare space, so when one drive dies, you can rebuild using those small fractions together with the data from the surviving drives. So you avoid the limit of the speed of one drive, in that case 200 MB/s, and you can use the aggregated bandwidth of all the surviving drives. So the theory was, and is: the more drives we use, the faster the rebuild will be.
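That idea can be put into rough numbers. Here is a minimal sketch, my own arithmetic rather than anything from the talk, comparing the floor for a classic resilver, bottlenecked on writing one drive at 200 MB/s, against a dRAID rebuild that spreads the same work over all surviving drives:

```python
# Rough lower bounds, assuming the talk's 200 MB/s per-drive figure.
DRIVE_MBPS = 200

def resilver_floor_hours(used_tb, drive_mbps=DRIVE_MBPS):
    """Classic resilver floor: one new drive must absorb all used_tb
    of data at the speed of a single drive."""
    seconds = used_tb * 1_000_000 / drive_mbps  # TB -> MB, then MB / (MB/s)
    return seconds / 3600

def draid_floor_hours(used_tb, surviving_drives, drive_mbps=DRIVE_MBPS):
    """dRAID floor: the same work is spread over the distributed
    spare space of all surviving drives."""
    return resilver_floor_hours(used_tb, drive_mbps) / surviving_drives

# A full 4 TB drive at 200 MB/s needs at least ~5.6 hours to resilver;
# spreading that work over 89 surviving drives drops the floor to minutes.
```

Real rebuilds are slower than these floors because of reads, parity math and controller limits, but the scaling is the point.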
We had some constraints, like the performance of the HBAs and the number of them. We also asked the question of how important the CPU is in the rebuilding process. Is it critical? Can we use cheap CPUs, or do we need super powerful CPUs? At the beginning the CPU really seemed to be very, very important. Then we discovered something, which was that we had to break the pool into more vdevs.
Okay, it was a challenge for the company to understand how that technology compares to traditional RAIDZ systems like 8+2. The beginning was very disappointing for me, because I was forced to use dRAID configs mimicking those, like eight data strips plus two spares. I will explain, and you will understand, why that doesn't make any sense for dRAID.
We played with many, many parameters while we were rebuilding. I learned about them, but we didn't get an improvement from that, maybe one minute or two, which is not significant. We repeated the tests many, many times, and sometimes we got variance of two or three minutes, so one minute or two is not relevant. For doing this we had to fill the pools, which was something really slow with so many drives.
Okay, so basically: how long does it take to rebuild a drive, what we call a resilver in a traditional RAIDZ system? Within RAIDZ, with four-terabyte SAS spinning drives at 200 MB/s, it was taking more than nine hours on the 4U90. Basically how it works is: it has to read from all the surviving drives all the information, then rebuild from parity what has been lost, and then write it to the new drive.
So, because we have to read the content from all the drives, I will introduce here the concept of ownership of data by the drives; later I will explain it a bit more. You have one head on a spinning disk: you can read or you can write, but you cannot do both at the same time. If you distribute the rebuild, in the ideal case you write to all the drives and you read from all of them, so the heads are shared between both jobs.
Okay, we found ourselves in that situation: as I explained, filling the drives of the 90-drive server to 70 percent was really, really slow, and the servers were always busy.
That way, we managed to learn something very interesting: the rebuilding times were linear, more or less. So given that you know the time for 5% of the rebuild, you can approximate the total very well, within maybe two minutes up or down. You can also know the read bandwidth and the write bandwidth. So we got very nice information from that, and again, more patience.
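Since the progress was roughly linear, the extrapolation is trivial; a sketch (a hypothetical helper, not the actual tooling used):

```python
def extrapolate_rebuild_minutes(percent_done: float, minutes_elapsed: float) -> float:
    """If a rebuild reaches percent_done after minutes_elapsed, a linear
    progress curve predicts the total duration."""
    return minutes_elapsed * 100.0 / percent_done

# e.g. 5% done after 3 minutes predicts a ~60 minute rebuild,
# accurate to a couple of minutes in practice.
```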
Okay, so I destroyed the pools I had been creating, and I created a config that I wanted to win with. I went for the maximum number of vdevs, not caring about capacity efficiency in this case. So I created a config of 29 vdevs of two data strips plus one parity strip per stripe, and three distributed spares. I filled it to 70%; it was 73% in this case. Something interesting came up with the filling itself.
It was much, much faster than with the previous configs. Then I removed one drive and did the dRAID rebuild, and in this case it took only 57 minutes, which was something interesting, given that clients of the company were reporting times of around one week or something like that with RAIDZ.
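For reference, the drive count and capacity efficiency of a C × (D+P) + S layout like that 29 × (2d+1p) + 3s config can be checked quickly (my own arithmetic, spare space excluded for simplicity):

```python
from fractions import Fraction

def draid_layout(vdevs: int, data: int, parity: int, spares: int):
    """Total drives consumed and usable-capacity fraction of a
    C x (D+P) + S dRAID layout."""
    drives = vdevs * (data + parity) + spares
    efficiency = Fraction(data, data + parity)
    return drives, efficiency

# 29 vdevs of 2 data + 1 parity, plus 3 distributed spares:
# 90 drives total (a 4U90), with 2/3 of each stripe being real data.
```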
Okay, you have to take into account that even if the dRAID stripes are not very big, like eight plus two, or ten drives in total, all the drives are sharing the controllers, so you are limited by that bandwidth. If the drives that are healthy are being accessed and read by a lot of users, the bandwidth you have for rebuilding is also reduced, and that will increase the time. You have to take into account the bandwidth of your server as a whole.
I also did several, several tests with several configs, and what I saw is that the smaller the data stripe is, the faster I was able to access the data, to write, and to rebuild; later, much more data confirmed all of this. The funny thing is that when I showed this to the xxxx, they said: okay, let Carles work, he knows what he's doing, so don't disturb him. That was nice.
This one I'm getting from Google Drive. Okay, what we have here is a graph with all the drives, taking the bandwidth of all the drives during a rebuild of a config of data strips plus parities; I think it's five vdevs in my notes, I will check. That's the read... sorry, that's the write bandwidth it's using for rebuilding, and that's the read bandwidth. Sorry, it's not that well displayed, but on top it's 2.5 GB/s we were reading.
Yeah, okay, so the config I was showing here, five times eight plus three, using triple parity, is something that I really like, especially when we have so many drives. For this test we created 400 GB of real data, which is five hundred and fifty gigabytes with the parity blocks, and the rebuild rebuilt around 9.2 GB of data. That data, split across so many drives, is why it's so, so fast.
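Those numbers check out: 400 GB of real data under 8 data + 3 parity stripes occupies 400 × 11/8 = 550 GB on disk, and spreading 550 GB evenly over roughly 60 drives leaves only about 9.2 GB per drive to rebuild. A sketch of the arithmetic (the 60-drive count is my assumption, matching a 4U60-sized box):

```python
def on_disk_gb(data_gb: float, d: int, p: int) -> float:
    """Real data plus its parity overhead for a d+p stripe."""
    return data_gb * (d + p) / d

def per_drive_gb(total_gb: float, drives: int) -> float:
    """Share of the on-disk data each drive holds when evenly spread."""
    return total_gb / drives

# 400 GB under 8+3 parity -> 550 GB on disk; over ~60 drives that is
# about 9.2 GB per drive, which is why the rebuild is so fast.
```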
Okay, so what we learned from this is that, basically, dRAID is a trade-off between the number of vdevs and the speed. In a RAIDZ system you have eight plus three, for example, or five vdevs of eight plus three aggregated; with dRAID we still have five times eight plus three, but in a single pool. So the more data strips per pool we have, the more speed we will have.
So if we use C times 16 plus 1 plus whatever spares, we will have 16 surviving strips that must be read in order to rebuild one strip. If it was an 8-wide stripe instead, only eight strips would be read. And it doesn't matter which combination was lost, like seven data and one parity, or whatever: you can lose data or parity alike. The same goes for 2 plus 1: in order to rebuild just one strip, only the other two blocks of the stripe will be read.
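The read amplification described here is simply D reads per rebuilt strip, whatever mix of data and parity survived; a small sketch of my own:

```python
def strips_read_per_rebuilt_strip(d: int) -> int:
    """Any d surviving strips of a d+p stripe (data or parity alike)
    are enough to reconstruct one missing strip, so a rebuild must
    read d strips for every strip it writes."""
    return d

def rebuild_read_gb(rebuilt_gb: float, d: int) -> float:
    """Total data read to reconstruct rebuilt_gb of missing strips."""
    return rebuilt_gb * strips_read_per_rebuilt_strip(d)

# 16+1 stripes: 16 reads per rebuilt strip; 2+1 stripes: only 2.
```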
Okay, so basically we have 8 drives, and the strips, data and parity, whatever, it doesn't matter, are distributed. When you use dRAID, it creates a mapping in order to try to use the drives equally, so when we rebuild, all the drives will be used, the closest possible to the ideal situation. Basically it is like they are randomly distributed, and the s1 and s2 refer to the distributed spare space.
Okay, so drive three is lost: we have lost that information, or the drive is simply removed. We see that for these stripes we have the three blocks, the two data and the parity, but for this one, a block is missing; this is the one that we have to rebuild. In here we have the yellows, three of them, okay; for the orange, okay, we have three as well, so that's okay, that one will not need rebuilding; two blues; and three green.
So we need to rebuild one blue. Then, okay, sorry, then we do the rebuild, and in the space that was reserved for distributed spare one, we have rebuilt the block that was missing. No matter if what was lost was data or parity, it's put in here. And here we didn't need it, because we had the three oranges and three yellows, and here the same.
So the key here is that we have regained full redundancy very, very fast; that's the idea, so the pool is safe as soon as possible. Obviously we want to replace that drive, so at the end there will be a resilver going on, but for now we have our parity restored, which is cool. I like this example because we only had one parity, but we have two spares: so we have lost one drive, we have rebuilt, we regained that parity, that safe condition, and so we can lose another one.
Okay, so that's a graphic that shows how the number of vdevs impacts the rebuild times. The resolution has lost a lot, but the red is RAID6, the traditional technology, and the blue is dRAID. So as we have more drives and we are adding more vdevs, we are cutting the times; that axis is the number of hours, so from five...
Initially we had a config that was basically a plain one vdev: 84 data strips plus three of parity, plus three distributed spares. We filled it to seventy percent and we did a rebuild, and it took 19 hours and 34 minutes. I haven't put the seconds in here because they don't matter: sometimes it goes up or down two or three minutes, or five, depending on the load at the time, so seconds don't matter.
Well, when you fill it, you fill the actual usable capacity, but when you write, you are writing data strips and parity strips, so you are always writing to more drives than the real data that is available; I will show the sums later. We are almost there. So from that initial point of having just one big vdev, we started to break it into more vdevs.
So we did in this case 4 vdevs of 20 plus 2. That was not possible with the initial release of dRAID, because the initial release only allowed a power of two for the data strips; other values would cause a kernel panic. Later Isaac changed that, which allowed us to do it. For this config, rebuilding one missing drive took four hours and nine minutes, much better, and the capacity efficiency was still good.
B
With
eighty
data
drives
and
eight
parity
drives,
the
usable
capacity
is
according
to
the
data
arrives,
so
the
bandwidth
for
it
was
7.4
gigawatts
per
second
and
again,
brighting
was
not.
Was
not
bright
and
brilliant
and
we
were
when
we
were
feeling
we
were
using
2.2
bytes
per
second
according
to
is
fat.
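The gap between the fill speed and what the drives actually absorb follows from the parity overhead; a quick check of my own, assuming the 80-data/8-parity split above:

```python
def raw_write_gbps(data_gbps: float, data_drives: int, parity_drives: int) -> float:
    """Filling at data_gbps of real data also writes parity, so the
    drives absorb proportionally more raw write bandwidth."""
    return data_gbps * (data_drives + parity_drives) / data_drives

# Filling at 2.2 GB/s of real data on an 80d+8p layout means about
# 2.42 GB/s of raw writes hitting the drives.
```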
So please note that in this case the D, the data strips per stripe, was big: 20. When we reduce that number, like to 8 in this other case, things go much, much faster. It makes sense, because in the first case, to rebuild one block, 20 strips have to be read and calculated, but in the other case it's only 8. So with this config we got 2 hours and 24 minutes.
All of these tests were with the 4U90. In this case we did the same test with the 4U60, and we got something interesting: even though the controller uplink was SAS2 and not SAS3, and we had only one controller and not four like in the 4U90, we got a small reduction in time, which seems to be related to the parity calculations and the fact that the CPU was very, very powerful. It's not hugely important, as we are talking about 4 minutes in this case, but it's significant to understand.
Okay, is it worth it for me to invest like $5,000 in a very, very good CPU to reduce 10 minutes? Maybe it is, maybe it's not. When we are talking about two or three or four hours of rebuild, ten minutes is not very important and the price of the hardware is; it's a balance that every company or every person has to decide. In another config we reduced again: we used six data strips per stripe and just one parity, and that was much faster.
So in this case we tried seven vdevs of this config and nine vdevs, and we didn't get a very big improvement. Something we concluded is that, in order to cut the time in half, we have to double the number of vdevs, so jumping from seven vdevs to nine vdevs doesn't make a big difference. It would make a big difference if we had, with the same config, 14 vdevs; that's a lot of drives. So, in order to cut the rebuilding times more or less in half, double the vdevs.
The best time we got was 25 minutes at the end, and that's something very interesting. We used in this case, it's in here, 29 parity drives: so from 88 we are using 29 for parity, plus one for the spare. The capacity efficiency goes down, but you get a very, very, very nice writing speed, reading speed and rebuilding speed. So for applications like video or things like that, where you need a lot of bandwidth, you can afford to have a config like that.