A
Evers can do a real deep dive here for us, as well as covering some bits and pieces on HGST's product line and how that's going to affect us, and then we'll probably have a brief discussion afterwards, especially about larger-capacity drives and how they're going to fit in with ZFS as we slowly grow the capacities up from 4-6 to 10 terabytes per drive. So there is further growth ahead.
C
Thank you very much. My name is Matt Berger; I hope we have sufficient overlap in the languages we speak. I'm an engineer, not a software guy; I learned doing punch cards on an IBM 1130 when I did my degree, so nothing to do with the stuff you do today. And yes, I'm going to do a deep dive. But how do you define deep? Actually, I want to start at the surface and then see how far we get. Here's the environment we work in as a hard drive manufacturer.
C
What's changing the landscape for us is hyperscale in data centers, which is driving the opportunities for one thing, but the go-to-market model is also shifting, from the OEM model that we were in for years to an audience like you, because it's basically unbranded hardware running independent software. Storage software is evolving and fragmenting; there's new interest in tape and optical, which is not us, but it nonetheless exists. And here's one of the main reasons why I'm talking about SMR: it's the challenge of increasing data density on a single platter.
C
So we have to do things in a different fashion. There's good growth in SSDs in the enterprise; we love that, we sell those as well. There's a gradual transition from 3.5-inch to 2.5-inch in desktop, which concerns us to a degree. The notebook market is stabilizing at a low volume, and again there's growth in cloud and hyperscale, which asks for things like Ceph. The yellow ones on the slide are the ones that are really driving SMR as an initiative.
C
The benchmark is [inaudible]. We could go to two-dimensional magnetic recording, where we analyze data left and right of the original track in order to reduce the error rate in reading, which then allows us to put tracks closer together; again, something that's not quite ready. We could go to bit-patterned media, where we physically separate the magnetized portion from its neighbors on the platter, which again gives us more stable magnetic bits.
C
The challenge is, we can't really write those individually, so we actually have to write them in an SMR fashion as well. So SMR is not going away, and that's why I'm making all these points, to generate this understanding: it's not a one-time thing. It's here to stay, so you'd better learn to deal with it. Another thing that we did a while back is introduce helium into the drives to reduce friction, reduce power, reduce vibration and so on. That's here to stay, and it makes HAMR (heat-assisted magnetic recording) work better, because the problem there is heat.
C
Helium is a very nice medium for cooling, because the molecules are one-seventh the density of air; very mobile, cools things nicely. So it helps HAMR, but it doesn't really give it a breakthrough. And then there's SMR, which is another thing available to us, basically at no cost: just software, and that's free, as you all know. So that's what we decided to go with.
C
So we're left with two things, and the focus is really on helium and SMR going forward for the time being. Both are multi-generation technologies: as indicated, BPM works better with SMR, and HAMR works better with helium. If you don't have either one, you basically fall a step short in the development. The other good thing is that we understand both technologies very well, so we don't run into any technology or reliability risks. And this is the total landscape.
C
Since we introduced the hard drive to the market in 1956, we've had a 50-million-times improvement. We've just slowed down a little lately, hoping for these things to eventually become available. LMR, or longitudinal magnetic recording, is out of steam; we're here with PMR today; SMR is the next thing; and beyond 2017 we think we'll have HAMR working to a satisfactory degree, but again with helium, and most likely with SMR.
C
The product we're shipping today is the He8 with eight terabytes, which has been in the market for a while, at 1.1 terabytes per platter, and introducing SMR gets us to the Ha10, in other words 1.4 terabytes per platter. So it's basically a twenty-five percent capacity bonus you get by introducing SMR, and that's the first drive that will hit the market. We do utilize helium, as I indicated, and we do utilize SMR.
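A quick check of that capacity bonus from the per-platter figures quoted above; the 1.1 and 1.4 terabytes per platter are from the talk, and the computed ratio comes out slightly above the quoted twenty-five percent:

```python
# Per-platter capacities quoted in the talk, in terabytes.
pmr_per_platter = 1.1  # He8, conventional PMR
smr_per_platter = 1.4  # Ha10, with shingled (SMR) writing

# Relative capacity gain from shingling the same platter.
bonus = smr_per_platter / pmr_per_platter - 1
print(f"SMR capacity bonus: {bonus:.0%}")  # about 27%, close to the quoted 25%
```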
C
That's why I brought my notebook; these animations don't usually work on an Apple. Not that I dislike Apple, a great customer. In an air-based device you have nitrogen and oxygen, which come in atom pairs, or molecules. In other words, you have a pretty heavy gas, which creates friction and vibration in the drive.
C
So it gets very hard to actually follow the tracks, whereas in helium it runs much smoother, lighter, and with less power consumption and noise. And it enables a few things. We can make the platters thinner, because they don't need to be as stiff as they would otherwise have to be, which allows us to put more in the same device. So it increases density in the data center, and if you're paying for space in boxes, you appreciate that. There's 23% less drag, even though we did increase the platter count.
C
It also lets you bring the capacity up while the drive still retains its temperature; it runs four degrees cooler, and that results in higher reliability overall, which isn't relevant for one drive. If you have one drive, it either works or it doesn't. If you have a million drives, reliability dictates how often you have to run to the data center to replace broken hardware.
C
We announced helium in 2012, and at that time we had a few years of internal experience, so we decided: let's do it. We announced the platform in 2013 and actually started shipping the first model with 6 terabytes. We've meanwhile introduced an 8-terabyte version and at the same time announced a six-terabyte drive in air, and said this is going to be the last one of those models.
C
Early this year we shipped the millionth helium drive, and the next million is literally around the corner. In between, actually this June, we will bring this ten-terabyte drive to market. Helium did help reliability, but originally we didn't put it in the spec, because to really test 2.5 million rather than 2 million hours MTBF you have to put in quite a lot of test time, and we decided to ship it at the original specification, which at the time was 2 million hours.
C
Desktops
are
here
competitive
home
office
enterprise
drives
or
now
strives
at
1.4
million
I
was
empty.
With
we
started
from
two
and
having
been
in
the
market
for
a
while.
We
now
have
the
data
that
proves
the
actual
reliability
that
we
achieved.
So
far
is
two
enough
million
hours
mean
time
between
failures,
which
means
a
group
of
a
million
drives
reports,
an
error
every
two
and
a
half
hours,
and
it's
not
the
end
of
the
flagpoles.
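The failure-interval claim is simple arithmetic on the MTBF definition: for a fleet of independent drives with a constant failure rate, the expected time between failures anywhere in the fleet is the MTBF divided by the fleet size. A sketch:

```python
mtbf_hours = 2_500_000  # demonstrated mean time between failures, per drive
fleet = 1_000_000       # number of drives deployed

# Expected time between failures somewhere in the fleet,
# assuming independent drives with a constant failure rate.
interval = mtbf_hours / fleet
print(f"one failure every {interval} hours")  # 2.5
```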
C
We know there's more in the technology, but we don't want to test it, because that costs time; we just wait for the market feedback and put that into our calculations. And now SMR, finally. Traditionally, when we increase the capacity of a drive, we make the tracks narrower, but we maintain a gap between them. To magnetize a film that is magnetically very hard, we need a stronger field, and that has the disadvantage that it also strays to both sides.
C
So,
for
that
reason
we
accept
that
SMR
is
writing
fairly
wide,
but
then
we
overwrite
part
of
it.
So
we
get
benefits
both
ways.
We
get
smaller,
then
stable
bits
and
we
still
get
two
small
tracks
by
overwriting.
But,
as
you
can
imagine,
rewriting
somewhere
in
the
middle
here
is
a
problem
because
he
would
write
wide
again
and
kill
the
previous
content.
So
you
have
to
write
sequentially
but
not
the
whole
disk.
You
can
actually
write
in
individual
zones
and
I'm
going
to
talk
about
that
in
more
detail.
C
So
this
is
as
wide
as
we
write.
What
we're
left
with
are
the
small
bands
that
we
can
read
comfortably
with
the
sensors
we
have
available
today,
and
this
is
what
the
head
looks
like
when
you're
on
the
media.
Looking
up,
this
pole
actually
generates
the
right
field
and
we
left
it
open
to
one
side
which
gives
us
a
stronger
field
down
to
the
media
and
allows
us
to
write
data
dancer.
C
Yeah,
the
guy
covered
or
lease
now
lay
out
here
is
it
different
pole?
Does
the
writing
from
the
reading
then
yeah
yeah
for
several
generations
we
had
a
separate,
read
element
and
a
separate
inductive
right
element,
so
we
could
optimize
them
individually,
one
advantage
being
that
we've
always
been
writing
white
and
reading
narrows
that
were
used
to
that.
It's
just.
We
never
over
wrote
in
the
past
yeah,
so
we
had
to
take
a
few
decisions.
C
One
is
how
large
will
we
make
an
SMS,
oh
and
it
could
have
been
64
or
any
number
schrodt
of
infinity
and
with
the
size
you
actually
decrease
the
number
of
zones
you
have
on
the
drive.
On
the
other
hand,
between
zones,
we
need
gaps
because
the
eventually
need
a
break
point
at
which
you
can
start
rewriting
again.
If
the
gap
is
only
two
tracks,
we
get
to
that
kind
of
utilization
of
the
available
storage
area.
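The zone-size trade-off he describes can be made concrete. The talk gives only the two-track guard gap; the tracks-per-zone values below are illustrative assumptions, but they show why bigger zones waste less area on gaps:

```python
def utilization(tracks_per_zone: int, gap_tracks: int = 2) -> float:
    """Fraction of tracks carrying user data when each shingled zone
    is followed by a guard gap of `gap_tracks` unused tracks."""
    return tracks_per_zone / (tracks_per_zone + gap_tracks)

# Illustrative zone sizes in tracks: the gap cost is amortized over the zone.
for tracks in (8, 32, 128):
    print(f"{tracks:4d} tracks/zone -> {utilization(tracks):.1%}")
```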
C
The narrower I make the gap, the more influence there is from one active zone on its neighbors; I'll explain that on the next foil. Good question, but one page too early. We basically decided on this operating point here, so we'll have zones that are 256 megabytes in size, a fairly large number of them, with a gap of only two tracks. And here's why: if this happens to be a very actively written zone, the stray field of the head will eventually influence signal quality in its neighboring zones. The effect is there.
C
Today
it's
called
far
track
interference.
It's
not
new.
The
hard
drive
industries
used
to
deal
with
it.
That's
another
effect
called
advanced
track
interference,
which
is
an
influence
on
the
immediate
neighboring
track
that
goes
away
because
we're
writing
sequentially,
so
we're
going
through
the
zone,
but
the
largest
Rayfield
eventually
has
an
impact
on
the
on
equality.
So
we
can
rewrite
that.
We
monitor
that,
of
course,
and
we
can
rewrite
it,
but
it
takes
about
55
seconds
should
we
have
to
so
in
the
design
we
have
to
consider.
C
Now, on the host side, you have various options. We can try to make the drive behave in a fully transparent fashion and manage everything on the inside. It has been tried, not by us. It looks okay when the drive is fresh out of the box; once it has been used for a while and all the buffers and all the media cache are full, it drops to something like three hours, so not necessarily good. So we decided drive-managed is not something we would go with.
C
There's
another
one
and
that's
why
I'm
talking
to
you,
you
can
actually
make
it
hosts
managed
where
you
need
special
commands.
You
need
to
be
aware
of
the
zone
size.
You
need
to
be
aware
of
your
right
pointers.
You
have
to
write
sequentially
and
so
on.
What
you
get
in
return
is
a
very
high
throughput
and
a
very
well
prediction
of
that
throughput
over
over
time
and
with
block
sizes,
and
so
on.
For
that
you
have
to
change
to
these
two
commands.
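What "being aware of your write pointers" means in practice can be sketched without the real command set. This toy Python model (the names and sizes are my own, not HGST's API, and it does plain bookkeeping rather than device I/O) enforces the host-managed rule that every write must start exactly at the zone's write pointer and stay inside the zone:

```python
class ZoneError(Exception):
    pass

class HostManagedZone:
    """Toy model of a sequential-write-required SMR zone."""
    def __init__(self, start_lba: int, size_blocks: int):
        self.start = start_lba
        self.size = size_blocks
        self.write_pointer = start_lba  # next LBA that may be written

    def write(self, lba: int, nblocks: int) -> None:
        if lba != self.write_pointer:
            raise ZoneError(f"out-of-sequence write at {lba}, "
                            f"write pointer is {self.write_pointer}")
        if lba + nblocks > self.start + self.size:
            raise ZoneError("write would cross the zone boundary")
        self.write_pointer += nblocks  # data accepted, pointer advances

    def reset(self) -> None:
        """Reset the write pointer; previous zone contents become invalid."""
        self.write_pointer = self.start

zone = HostManagedZone(start_lba=0, size_blocks=512)
zone.write(0, 128)    # ok: starts at the write pointer
zone.write(128, 128)  # ok: strictly sequential
try:
    zone.write(0, 8)  # rewriting the start without a reset is rejected
except ZoneError as e:
    print("rejected:", e)
```

A host-aware drive would silently accept that last write and remap it internally; the hard error here is exactly the behavior the speaker argues produces clean, fully debugged host code.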
C
Whether
you
are
on
T
10,
which
is
the
SAS,
would
use
the
zone
based
commands
or
if
you
aunty
13,
which
is
SATA,
you
would
use
these
zone
ATA
commands
you
would
largely
right
sequential,
but
not
at
a
single
point.
We
actually
have
hardware
support
right
at
16
different
areas
of
the
drive
we
can
support
more
by
firmware,
but
then
you
see
a
slight
overhead
increase.
C
So
as
long
as
you
stick
to
the
16
right
points
on
a
drive
that
can
be
done,
there
are
no
additional
changes
required
with
that
and
the
drive
behavior
is
very
consistent
and
predictable.
Now
there's
another
one
where
the
name
host
aware
was
picked,
which
basically
is
an
invitation
to
write
bad
code.
That's
why
I'm
pointing
that
out,
because
if
you
tolerate
commands
as
they
come
in
you'll,
never
have
the
code
fully
debug.
C
There
are
things
that
will
bring
performance
penalties
and
that's
why
this
is
actually
much
preferred
because
it
will
abort
every
command
with
an
error
if
it's
out
of
sequence,
if
you're
trying
to
read
behind
the
right
pointer
if
you're
writing
in
a
zone
without
resetting
the
right
pointer
to
zero.
So
all
these
things
are
very
well
controlled
and
at
the
end
of
the
process
your
code
is
very
clean
and
once
you
wrote
the
code,
you
can
actually
apply
it
here.
C
So
if
you
do
the
work,
why
not
do
it
here
right
and
get
a
good
debug
done?
The
industry
said.
Ok,
we
agree,
drive
managed
nothing,
no
good
for
us
host
manage
yes,
but
do
we
have
to
should
we?
It
was
seen
as
a
temporary
thing,
but
it
really
isn't
because
the
gains
are
permanent.
With
every
new
technology
we
introduced
you're
still
that
extra
bit
ahead
with
SMR.
C
So
it's
not
a
throwaway
software
solution
and
host
aware
was
perceived
as
the
the
ultimate
because
then
you
just
get
away
with
doing
what
you
did
and
no
no
real
mandatory
changes.
Now
over
time.
The
more
people
the
grocer
SMR
came.
Basically,
no
more
people
actually
thought
about.
They
said
okay.
This
is
still
not
the
way
to
go.
This
is
emerging
as
the
best
options,
because
at
the
end
of
the
process
we
will
have
clean
code
and
we
could
switch
to
that.
But
why?
C
Now, the future is that these two, host-aware and host-managed, will definitely come. One might eventually migrate to host-aware, even though I personally would not prefer that as an engineer. There's another option coming that is fully transparent: should we switch to TDMR, you would not notice. It's completely transparent to the host and doesn't come with any new requirements.
C
Now, talking about the layout in more detail: we give away about 1% of the drive capacity for a random-write zone, because you have to have some place to put your metadata, right? You don't want to write metadata in blocks of 256 megabytes. And one percent doesn't sound like much, but it's actually 100 gigabytes, which I think is plenty. Then you have the sequential-write zones with the gaps in between. In the host-managed case, a write zone will always remain a sequential-write zone.
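For the ten-terabyte drive mentioned earlier, that layout arithmetic works out as follows (a sketch; the exact zone count on the real product may differ):

```python
drive_bytes = 10 * 10**12   # 10 TB drive
zone_bytes = 256 * 2**20    # 256 MiB sequential-write zones

random_zone = drive_bytes // 100         # about 1% reserved for metadata
sequential = drive_bytes - random_zone
zones = sequential // zone_bytes

print(f"random-write zone: {random_zone / 10**9:.0f} GB")  # 100 GB
print(f"sequential-write zones: about {zones:,}")
```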
C
If
you
do
a
random
right
into
these
zones
in
a
host
aware
drive,
it
will
switch
from
the
sequential
mode
into
random
mode
which
doesn't
actually
do
anything
here.
It's
just
the
data
will
be
somewhere
else
and
you'll
have
to
manage
an
indirection
table
internally,
which
is
a
bit
of
a
challenge.
You
want
that
to
be
power
safe
and
all
these
things,
and
then
the
conventional
zone
is
this
one
is
part
of
the
addressable
user
space,
so
we
have
full
control
over
that.
C
The media cache, which you would utilize in the case of a host-aware drive, is not part of the user space. That's where the data lands when you're writing out of sequence, just to save the situation and keep the data somewhere. The next zone type is the sequential-write-preferred zone in the host-aware implementation, or the one that is strictly sequential-write-only in the case of the host-managed implementation. Here's what the media cache looks like.
C
We had invented that because of SMR and then found it quite useful in our PMR drives as well. What it does is create zones to drop data into across the whole surface. So wherever the head is, if it's a small write, rather than doing a seek, waiting out the latency and then dropping the data, we just drop it wherever we are, and on a PMR drive we get about two and a half times the random throughput performance, which is good.
C
On SMR you have to have that if you're not host-managed; we don't think you need it with host-managed, because you will write in the proper sequence only. The whole thing is managed by so-called write pointers, and that's the one reason why you need a different command set: you could write with a standard command, and you could read with a standard command, but you need to know where you are, and for that you need to manage these pointers on the host side. And should you ever lose them...
C
You
need
the
command
to
ask
the
drive
here.
Where
am
I,
which
zone
is
full,
so
that's
what
the
new
commands
are
there
for
the
actual
read
and
write
is
no
different,
and
then
there
are
few
recommendations
like
you
should
never
read
beyond
the
right
pointer,
because
nothing
is
there
at
least
no
valid
data,
and
you
should
always
right
at
the
rind
pointer
and
then
increment
that
until
the
zone
is
full
and
then
you
get
to
the
next
zone,
but
you
should
not
right
across
boundaries.
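Those recommendations (write at the write pointer, advance it, never cross a boundary) amount to a small host-side append loop. A sketch in plain Python bookkeeping, not real device I/O; the 512-block zone size is an arbitrary stand-in for the 256 MB zones:

```python
ZONE_BLOCKS = 512  # illustrative zone size in blocks

def append(zones: list[int], data_blocks: int) -> list[tuple[int, int, int]]:
    """Append `data_blocks` blocks, splitting at zone boundaries.

    `zones[i]` is the host-side write pointer (blocks used) of zone i.
    Returns (zone_index, offset_in_zone, nblocks) chunks to issue in order.
    """
    chunks = []
    for i, used in enumerate(zones):
        if data_blocks == 0:
            break
        free = ZONE_BLOCKS - used
        if free == 0:
            continue                 # zone is full, move on to the next one
        n = min(free, data_blocks)
        chunks.append((i, used, n))  # write lands exactly at the pointer
        zones[i] += n                # advance the host-side write pointer
        data_blocks -= n
    if data_blocks:
        raise RuntimeError("device full")
    return chunks

zones = [0, 0, 0]
print(append(zones, 700))  # 700 blocks split across zone 0 and zone 1
```

Because each chunk starts exactly at a zone's write pointer and never spans a boundary, every command in the returned list would be accepted by a host-managed drive.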
D
Say, for instance, I've written to the zone previously and have since reset the write pointer back; the same data I've written is still there. Say I just want to overwrite some of the data in a linear fashion and skip some portions that I want to keep.
C
No, no. You can't keep it.
C
So that's the media cache, and now some software aspects. If we introduced an SMR drive down here and did nothing else, all the applications running on the system could not write to it; they would just flag errors. You could conceivably write an application that actually talks straight to the hard drive; if you wanted to, you could have the application issue the basic commands itself, but it's not really recommended. What we do recommend:
C
B
C
B
C
So that's what we recommend, to enforce data being written in sequence, and this is what we make available: basically a library and an emulator, so that you can take a standard drive and, via a driver, fake it into behaving like an SMR drive, plus some sample applications. And all that is, on the next page, on GitHub. That's where you find all the documentation, where you find code that should compile on your machine, and with that you should be ready to go and give it a try.
C
Now I have to go back one, actually. So this is what we ask for at launch with host-managed SMR: do the sequential writes, which enforces clean code; you can migrate to host-aware later on. You have to start with a SAS drive, because T13 has not released the package yet. You should consider 256-megabyte fixed zones, which is pretty close to Ceph's, which is around 2^28, I was told; and again, I'm not a software guy here. Other zone sizes are possible, but this is what we're shipping.
B
[question inaudible]
C
Yeah, it's entirely up to you; the thing is just an offer, right? You can put everything in the kernel if you want it. It just, I think, gives you a quick start: by seeing the applications, by looking at what's in the library, you see how things work. You have access to the source code, and then you're free to do your own thing, basically. So our library is not the industry standard; it's just to give you a kick-start.
C
[partially inaudible] ...and then start writing. And the number is not limited; the firmware will handle any number, but you eventually get to a very big table with a big overhead, and we only manage 16 in hardware at this point. So there's no penalty other than performance. And actually, it's not just the battery; the whole notebook froze.
F
I just have a very basic question on the overview of this: now you're talking 10 terabytes, which seems just a little bit above the normal drives that are currently available. So the reason to deal with all of this, a new way to deal with things, with its complexity and difficulty, is because this type of technology is going to go a lot higher in capacity per drive than the normal drive, yeah? Or is it? Where is it going to be?
C
When it comes, though: today we don't see HAMR without shingling, because we can get the laser spot to the right size, but the magnetic poles just don't get any narrower. The smallest we've created is 60 nanometers, and we would have to get to something like 10 to really have a big impact there. So we don't see that today; that's why I said initially that SMR is here to stay, so it applies to all the technologies I showed. And yes, it's a one-time [effort], but then it's this fifty percent bonus on every future hard drive, right?
A
Any more questions? I suppose I do have a question, which is: with SMR going forward, do we see it as something we on the [ZFS] side might have to deal with? Because, you know, what's the projected lifespan of PMR, and the roadmap going forward? There are obviously going to be some limitations, and then that's going to... yeah.