From YouTube: Multi-Actuator HDDs by Muhammad Ahmad & James Borden
From the 2019 OpenZFS Developer Summit
slides: https://drive.google.com/open?id=1eAbqaiwfGQRV7pCwAjSAoEL9RWSTXWL8
Muhammad Ahmad: We're both going to tag-team this. By the way, for a decade my focus has been better tooling, so I really want to talk to anybody who wants to talk about that. I think yesterday some people made comments about the health of devices; we have a lot of work and research going on about that as well, some data-science type work too, and if the community wants to talk about that, we can do it later on.
James Borden: Hi, I'm James Borden. I work for Seagate in the product planning group. I support large CSPs and work with our product architecture groups to try to make sure we have products that meet the needs of each of our customers, so I span engineering and I span customer marketing and engagement. Today we're talking about multi-actuator drives, and Muhammad will be talking more about the file system side of it.
James: In the industry we've been growing capacity significantly over the last few years. It wasn't that long ago that a one-terabyte drive was pretty standard, and in the last couple of years we've been getting to around 16 or 18 terabytes, topping out on PMR technology. Now we've pretty much reached the limits of that, and you see the industry starting to implement
D
You
know
energy
assistant,
which
in
a
case
of
CJ,
you
know
we
call
it
a
host
of,
is
you
know,
hammer
which
is
a
heat
assistant,
magnetic
recording,
I,
think
generically
in
the
industry.
We're
calling
it
as
energy
assisted,
recording
wd
has
something
they're
working
on
called
mammer,
which
is
microwave-assisted,
but
it
still.
Basically,
this
works
on
the
concept
of
energy,
assisted
where
we
heat
up
the
head.
James: So what we've been looking at is that we need to find a way to improve this IOPS-per-terabyte line and get it back up to what the SLAs require. To do that, you basically have to drive more data through the drive, and there are really only a few ways. The most logical way is to add another actuator into the drive. In this case we have a product for it, the Exos 2X.
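To make that IOPS-per-terabyte decline concrete, here is a rough worked example; the 80 random IOPS figure is an assumed nominal value for one actuator, not a number from the talk:

$$\text{IOPS/TB} = \frac{\text{drive IOPS}}{\text{capacity}}: \qquad \frac{80}{4\,\text{TB}} = 20\ \text{IOPS/TB}, \qquad \frac{80}{16\,\text{TB}} = 5\ \text{IOPS/TB}$$

A second actuator roughly doubles the numerator, so the same 16 TB drive gets back to about 160/16 = 10 IOPS/TB.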
James: Where we had one actuator with 16 heads talking to 16 platter surfaces, we've now split it in two: an eight and an eight. What that has done is allow us to transfer effectively twice the I/O out of the drive, because we have two active actuators, and therefore two active heads, at one time.
D
Yes,
we
call
it
the
Mach
2,
it's
basically
doubling
the
fruit
butter
to
drive
its
its
ass.
Only
it's
right.
Now
it's
set
up
as
dual
lung,
we're
looking
at
other
possibilities
of
other.
You
know
a
lot
guzman,
some
customer
drawn
doing
different
things
or
our
initial
customers
have
been
dual
lon
and
do
a
lens
requiring
sass.
And
you
know,
maybe
you
want
to
step
in
here.
We
start
talking
about
the
architecture
and
you
know
basically
single
port
sass,
dual
one
to
two
active
channels
and.
Muhammad: To the host it's the same interface; it doesn't make a difference to the processor. But all file systems have an inherent bias about how data is laid out, and on the SSD side of our business we work with the file systems to make them more optimal; our competitors have done that too. So, as we make more dual-actuator drives, or multi-actuator drives in the future,
B
You
really
want
to
understand
what
how
the
fly
system
works
from
the
actual
developers
and
that's
why,
this
year,
just
to
kind
of
give
you
food
for
thought,
we're
even
thinking
nvme
HD
needs
either,
not
others
that
you
work
in
that
space.
Something
to
think
about
should
be
namespace,
be
type
of
an
actuator.
Should
it
not
be
if
you
we
have
currently
do
align
people
single
month
does.
B
Go
being
separated
up,
should
it
be
in
early
should
be
bifurcated
like
these,
we
see,
says
I
hate
half
of
the
grime
zone
once
what
would
the
ZFS
community
feel
like
that
will
be
optimal?
What
is
what
is
the
best
thing
to
do
here
so
in
this
particular
case,
the
way
so
the
the
one
which
is
on
the
truss
so
to
speak?
It's
like
James
said
it's
dual
on,
but
it
shares
the
cache.
We
have
two
independent
channels
in
the
back
so
reading
my
channels
are
independent
for
each
of
the
actuators.
B
So
as
you
read
it,
you
can
actually
get
twice
to
two,
but
it's
not
always
twice
the
total,
but
it
depends
on
your
workload
so
just
on
the
interest
of
time
I'm
going
to
go
quickly.
So
back,
but
these
are
different.
You
know,
workloads,
random,
reads:
random,
writes
whatever
the
best
one
you
can
probably
get.
Is
this
potential
reads
in
sequential
writes
and
you.
Muhammad: This slide, like the picture earlier on, gives you an idea of how it's made: it's the same pivot, but it has two arm assemblies coming up. Somebody yesterday was asking about putting one actuator here and another actuator over there, but that real estate at the OD is really important, so you don't want to put an actuator there. In the current design we decided that that real estate is much more important than whatever benefit you would get from an actuator on the other side.
Muhammad: This slide talks about how you can configure it in different ways. One way, in Linux, is to do a device-mapper setup. In ZFS you can actually do some mirroring; if you do that, just make sure that's not your only vdev, because if you lose one LUN, most likely the other one is gone as well, and so is that data. But you can also use the two LUNs totally independently, aggregate them somewhere else, and get the most out of it.
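A minimal sketch of that caution, with hypothetical device names: suppose sda and sdb are LUN 0 and LUN 1 of one dual-actuator drive, and sdc and sdd are the two LUNs of a second drive. Mirroring sda with sdb puts both halves of the mirror on one spindle, so a safer layout mirrors across physical drives:

    # Risky: both sides of this mirror share one spindle and one pivot.
    #   zpool create tank mirror sda sdb
    # Safer: each mirror pairs LUNs from two different physical drives.
    zpool create tank mirror sda sdc mirror sdb sdd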
Muhammad: This slide talks about the same thing with RAID; ZFS has RAID-Z. You can split it up, basically the LUN 0s versus the LUN 1s, the even LUNs versus the odd LUNs, and that way you can divide it up and get the same throughput on each LUN; I can show it in the demo if I have time. You can create your pools with all the even LUNs in one and all the odd LUNs in the other and then get the best of both.
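A sketch of that even/odd split, again with hypothetical device names: assume four dual-actuator drives, each presenting two LUNs, with the even LUNs appearing as sda/sdc/sde/sdg and the odd LUNs as sdb/sdd/sdf/sdh:

    # One RAID-Z pool over all the even LUNs, one over all the odd LUNs,
    # so each pool gets one actuator's worth of throughput per drive.
    zpool create pool_even raidz sda sdc sde sdg
    zpool create pool_odd  raidz sdb sdd sdf sdh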
Muhammad: Two actuators, all in the same enclosure; that's one way to look at it within the whole system. Some people try to look at it this way, which shows the same thing: the single actuator will come up as a single LUN, and the other, if you really want, like the earlier talk about the whole device tree, if you want to go down that path, shows up like this. That's if you actually want to find out: is my device dual LUN or a single one?
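One way to check that on Linux (my sketch, not a command from the talk) is to list the SCSI topology; a dual-LUN drive appears twice under the same host:channel:target, with LUN 0 and LUN 1:

    # lsscsi prints [host:channel:target:lun] for every device.
    lsscsi
    # [1:0:0:0]  disk  SEAGATE  ...  /dev/sdb    <- actuator 0 (LUN 0)
    # [1:0:0:1]  disk  SEAGATE  ...  /dev/sdc    <- actuator 1 (LUN 1)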
Muhammad: So what did I do? I can show it in the demo if we have time, but basically I just ran fio on each one of those devices; I created a small batch script that starts fio on every one of them, and you can see the throughput. The I/O map kind of screws up the view, but these are basically sdc and sdb, which were the single actuators. They're giving me the throughput for a sequential read, up to around 250, and the other ones, which are dual actuator, are giving me the same for each one of the LUNs. So that shows that each actuator is putting out around 250, aggregated 500 megabytes per second, for sequential reads. This slide shows the same type of thing, trying to figure out how to do the pooling.
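The fio job itself isn't shown in the talk; a minimal sketch of a sequential-read job of that kind, with an assumed device name and parameters:

    # Sequential 1 MiB reads straight off one block device for 60 s,
    # reporting the bandwidth fio measured for that device.
    fio --name=seqread --filename=/dev/sdb --rw=read --bs=1M \
        --ioengine=libaio --iodepth=16 --direct=1 \
        --runtime=60 --time_based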
Muhammad: As you can see, I did the odd-even thing: all of the even ones are in one pool and all the odd ones are in the other pool, so I now have four pools. Everybody with me so far? On the single-actuator drives I created two different partitions, and on the dual actuator I already have the two LUNs, so I didn't have to create any partitions; I created the pools from those.
B
Again
sequential
reads
that
that
was
the
you
just
want
to
kind
of
go
ahead
and
do
I
did
the
p1
so
going
back
my
partition
partition
one
partition
two.
These
are
both
on.
Only
the
single
actuator
drives,
just
two
pools,
I
read
it
that
way,
and
then
even
one
on
one
and
one
on
the
other.
So
when
I
did
just
on
the
partition,
one
I'm
getting
52
megabytes
per
second,
and
this
is
what
the
vio.
B
From
I/o
stack
perspective,
I
didn't
I'm,
not
a
ZFS
expert,
so
I
don't
know
which
knobs
to
turn
to
get
the
most
out
of
it.
Maybe
we
can
talk
about
it
in
the
hackathon
and
how
to
do
it.
This
was
my
profile.
What
it
look
like
in
the
dual
line
process:
I
get
a
little
bit
less,
but
I
haven't
quite
figured
out
of
it,
because
ZFS
prefers
partitions
over
raw
devices.
If
it's
something
else
that
I'm
doing
differently,
but
the
more
interesting
part
is
then
I
created
a
small
little
batch
script.
Muhammad: One pool ran at 269, and running both of the pools I got 265 and 239. I'm sure if I ran it over many iterations and averaged, it would probably be more even, but that's what it came out to be. So, if we have time for a quick demo... okay, great.
James: We're obviously interested in what makes the most sense for ZFS: is that the dual LUN, a single LUN with a flat LBA space, those kinds of things, dual port. Right now we have dual LUN, and that was largely driven by our early customer adoption candidates, who want dual LUN because they want to manage it up in the upper part of the stack, where they can drive the I/O. So we're looking to get feedback on what makes sense from your perspective.
Muhammad: About tooling: there's an open-source tool called openSeaChest, and it can show you all the cool little things about any device, not necessarily a Seagate device. This is a customer test unit, so that's why the WWN is all messed up; otherwise you'd see the actual one. The other thing I wanted to make sure people understand: currently it's reporting that read look-ahead, write cache, and some of the other caches are not supported or not available.
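The exact invocation isn't shown; a sketch of that kind of query using openSeaChest (https://github.com/Seagate/openSeaChest), with a hypothetical device handle:

    # Enumerate attached devices, then print one device's information
    # (model, WWN, cache settings, and so on).
    openSeaChest_Basics --scan
    openSeaChest_Basics -d /dev/sg2 --deviceInfo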
Muhammad: So I'm running just a regular scan to see all the drives, and this is a script that I wrote, a simple little thing that goes to all the devices and starts doing live I/O, sequential reads. It's really nothing; it just says: hey, go create those jobs and then start them right away. So if I run that, you should see all of this.
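The script isn't shown in detail; a minimal sketch of a launcher like the one described, with the device range as an assumption:

    #!/bin/bash
    # Kick off a sequential-read fio job on every target device in parallel.
    for dev in /dev/sd{b..i}; do
        fio --name="seqread-${dev##*/}" --filename="$dev" --rw=read \
            --bs=1M --ioengine=libaio --iodepth=16 --direct=1 \
            --runtime=60 --time_based &
    done
    wait    # block until every background fio job has finished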
Muhammad: I'm not exactly sure why the megabytes I see being read from the spindle aren't all the way up; that's what I was emailing you back and forth about. But it works in terms of running both of the pools, the ones on the partitions, separately. Once this dies down, let me quickly just run the whole thing, because it will be quicker that way. This one is a test which goes to... actually, let me show you.
Muhammad: It does the same thing, and you will see each one; I'm running the whole thing, but each one of those LUNs is independently pumping out whatever the single-actuator drive was doing. So with that, the final slide. It talks about how we're trying to refine how the LUN should appear in the SAS topology; maybe in the future, if the community sees it as better, rather than actually showing a topology of two LUNs, we will
B
Just
one
line
will
port
right
in
the
case
of
we're
also
thinking
about
how
to
do
it
in
Stata.
There's
some
progress
there,
too
there's
some
customers
who
are
looking
for
that,
so
dual
actuators
data,
which,
as
a
topology,
does
not
allow
dual
lunch,
so
it
has
to
be
single
and
sahabi.
Lba
space
will
be
in
that
case
and
what
the
file
system
would
prefer
will
have
to
be.
Now,
it's
discovered
through
the
device.
B
One
thing
is
for
sure
that
I
want
to
make
sure
is
that
as
a
Seagate
we're
not
we're
not
holding
back
from
multi
actuators.
So
in
the
future
we
will
be
just
because
we
see
that
as
the
capacities
grow,
we
need
to
have
that
I
over
terabyte
grow,
and
so
we
will
be
producing
multi
actuator
drives.
So
as
a
file
system
where.
Audience: One of the things you talked about: you actually reduced the number of heads each actuator drives, right? Each actuator now has eight. So how do you actually achieve double the bandwidth if the number of heads per actuator is halved as well? I would think the IOPS would go up 2x, but the bandwidth should remain the same.
B
The
same
so
you're
saying
that
the
I
ops
will
go
output,
the
bandwidth
will
stay
the
same
mainly
because
there
were
channels
are
different,
so
each
of
the
actuator
can
seek
one
can
seek
at
one
time
and
the
other
can
be
reading
and
so
you're,
basically
reducing
the
same
time
and
then,
by
the
time
that's
those
it
can
start
reading.
Even
though
the
cache
is
shared
that
each
that
that's
live,
which
talked
about
the
engineering
sliding
up
the
and.
Muhammad: That means that, theoretically, a dual-actuator drive will have worse reliability than a single actuator. But in practice, the kinds of errors that you find in the field, a head going bad, maybe contamination, some of the other stuff... it is so unlikely to have a pivot-level or actuator-level failure that essentially the reliability doesn't change. It only shows up if you do the actual theoretical math.
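To sketch that theoretical math, assume each actuator assembly independently fails with probability $p$ over some interval (an illustrative model, not Seagate's numbers). The drive is degraded if either actuator fails:

$$P_{\text{fail}} = 1 - (1 - p)^2 \approx 2p \quad \text{for small } p,$$

so on paper the dual-actuator drive is roughly twice as exposed to actuator-level failure, which matters little when $p$ itself is tiny.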
Audience: Do you actually get 2x, physically, like two drives? The comparison worth making here is: you're telling me I have two drives in one box. Is the performance the same as if I had two drives in two different boxes?
D
No,
you
would
get
2x
a
few
different
drives.
The
do
actuators,
giving
you
1.7
X
in
some
scenarios
and
sov
some
with
to
share
cash.
There's
a
little
bit
of
traffic
management,
the
master
slave
and
there's
a
little
bit
of
seep
management.
That's
going
on
and
that's
where
you
see
sequential
czar,
very
close
to
2x
randoms
tend
to
be
a
little
bit
softer
than
2x.
D
1.7
1.8
connects
as
do
some
of
the
management
within
the
drawers,
so
yeah
I
mean
it's
you're
getting
I
mean
in
our
early
deployments,
dual
lines
or
you're
treating
it
like
you,
surfer
drive
and
getting
pretty
close
to
it's
performing
some
scenarios
they're
looking
at,
but
your
other
point
to
fail.
Your
domain
management
very
important.
So
your
domain
awareness
and
that's
the
whole
water
give
it
about.
Audience: I'm kind of curious how the record size applies to this. If you had a flat LBA space, how would the record size and all these other knobs be able to take the most advantage of the dual actuator?

Muhammad: A flat LBA space means you have, say, a 14-terabyte drive that shows up as one LUN, but the LBAs for the first seven terabytes are on one actuator and the rest are on the other.
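Given that layout, one host-side option (a sketch under assumed names and sizes, not something presented in the talk) is to partition the flat LBA space at its midpoint so each partition maps to exactly one actuator:

    # Split a flat-LBA dual-actuator drive at 50% of its capacity;
    # each resulting partition then lands entirely on one actuator.
    parted -s /dev/sdb mklabel gpt \
        mkpart act0 0% 50% \
        mkpart act1 50% 100%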
Audience: Or interleaved, like one sector on one and the next sector on the other, at whatever the size may be?

Muhammad: There are trade-offs for that, right. There are issues where, when you cross that boundary, you're actually going to cause the performance to drop, because then your read depends on both actuators rather than one. Now, you could keep I/O within the boundary the way a hardware RAID does: a hardware RAID has a stripe size, and an I/O never goes over the stripe size.
James: We've thought about whether we could do an internal stripe, but it's hard to get the alignment right. When that's done in software, you can set the stripe size optimally from your workload and application perspective. It's hard for us to know, one, what the stripe size should be, and two, it's better for the stack to manage to a given stripe size, one that you guys choose.
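A sketch of that host-managed striping, with assumed device names and a chunk size you would tune to your workload: RAID 0 across the two LUNs keeps the stripe size under host control, and a matching ZFS record size keeps whole records on one actuator:

    # Host-chosen 1 MiB chunk across the two LUNs of one drive; no
    # redundancy is lost, since both LUNs share a spindle anyway.
    mdadm --create /dev/md0 --level=0 --raid-devices=2 --chunk=1024 \
          /dev/sdb /dev/sdc
    zpool create tank /dev/md0
    zfs set recordsize=1M tank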