From YouTube: DDRdrive
A: So welcome back, everybody. Just to kind of finish off the commercial aspects of it, I'd like to introduce Christopher George, CTO of DDRdrive, who will hopefully do a little speaking now.
B: So we're big supporters of OpenZFS. What I was going to do today is just show our newest revision and a new product introduction, so I'll start with that, and then take any questions from the crowd.
B: So what you're seeing here is actually our original card. I'm a much better engineer than I am a marketer, so our website sadly hasn't been updated since May of 2009. How embarrassing is that to admit, especially on camera? But here's our newest revision. The most obvious change is that we're now low profile, which we did almost immediately after taking this photo.
B: A PCI Express card, so this is what most customers are familiar with, and then truly the saving grace. Overall we're talking about a ZIL log device, and our intent was to design what I hope to be kind of the ultimate ZIL: the speed, performance, endurance, and longevity of DRAM, combined with the non-volatility of flash, the flash only being utilized on a power failure. And then really the missing link from what we're using today is our five-cap SuperCap PowerPack. For some reason I enjoy saying that; there's a certain alliteration to SuperCap PowerPack. This mounts internal to the chassis, which would preclude or replace the external power supply you've had in the past, so you have one attachment to the card and then one to the power supply.

B: What's also unique about our product is that it requires (or, in my case, I was kind of happy to oblige) a custom device driver. We were able to target ZIL optimization all the way from the driver down to the hardware, and I really feel that because of that we were able to do some unique things that wouldn't be possible if one were forced to go through a standard storage layer like SCSI or SAS.
B
So
there
was
a
lot
of
kind
of
exciting
optimizations
with
that
short
circuiting
of
taking
directly
scuzzy
and
immediately
taking
a
scuzzy
command
and
putting
that
to
a
memory
memory
transfer,
I
really
believe.
Storage
overall
is
it's
just
an
extension
of
the
memory
hierarchy,
so
I
basically
designed
a
device
that
kind
of
acted
like
just
like
a
memory
device
and
obviously
that's
a
good
fit
for
ddr,
but
our
driver
model
kind
of
instantiated.
B
That
idea
of
why
hinder
or
slow
the
hardware
trying
to
understand
scuzzy
when,
ultimately,
we
really
want
to
just
move
data,
reads
and
writes
so
that's
kind
of
the
genesis-
and
this
is
what
we're
introducing
today
and
again
open
cfs
and
hopefully,
that's
short
and
sweet
and
open
to
any
questions.
If
there
may
be.
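[Editor's note: on OpenZFS, a dedicated ZIL device (a "slog") is attached to a pool with zpool add. A minimal sketch, with hypothetical pool and device names:]

```sh
# Add a dedicated log device ("slog") to an existing pool
# ("tank" and c4t0d0 are hypothetical names):
zpool add tank log c4t0d0

# The log device then appears under its own "logs" section:
zpool status tank
```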
C: And what's the difference between DDRdrive and Fusion-io?
B: Well, they are also flash and PCI Express. I was lucky enough to know David Flynn, the founder of Fusion-io, who is no longer with them, but it's obviously a great product. When I think Fusion-io, I think, boy, it would be just a great L2ARC device; it would be a perfect fit. I would love to see a Fusion-io and a DDRdrive side by side.
D: The Fusion-io, at least the one I'm familiar with, is just flash.
E:
D: This is primarily DRAM, right? So the point of this is that it's smaller capacity than you might get with a Fusion-io, but the latency is going to be a lot better, because you're writing to DRAM and then it just copies it to flash when you lose power. Okay.
B
I
think
I
love
the
idea
of
pci
express
and
any
flash
on
pci
express
is,
I
think,
a
positive
development.
I
mean,
I
really
think
solid
state
technology
needs
to
move
as
close
to
the
cpu
with
the
lowest
latency
possible
and
that
that
device
perfectly
and
was
a
very
early
example.
So
I
think
it's
a
good
product
do.
B: So, because... I don't know, I was tired of other vendors playing games, and I wanted to be able to stand up like this and say, well, I did that, and I'm telling you the truth. So we hit like 44,000 IOPS, and that's using a 4K random write workload with IOmeter, with our card running an earlier version of basically OpenSolaris, whatever the Nexenta base was at the time, plus their unique patches. And on our website we actually promote the numbers from when we first introduced the product.
B: I'd like to think that if you line them all up, it's going to be me; we're obviously passionate about the product. But yeah, that's the 44,000.
B
So
when
we
first
introduced-
and
this
is
again
many
years
ago-
we
did
like
300
000
iops
off
of
just
a
pci
express
gen,
1
connector,
and
that
was
pretty
unique
at
the
time
and
still
you
know
still
kind
of
is
so
we
would
love
to
be
able
to
show
a
512
benchmark,
that's
specific
to
desil,
I
hate
doing
iometer
kind
of
stuff,
because
it's
not
workload
specific.
I.
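[Editor's note: a roughly comparable 4K random-write test can be run with fio. This is not the setup used in the talk; the device path is hypothetical, and writing to it destroys its contents:]

```sh
# 4K random writes, direct I/O, against a raw device (hypothetical path):
fio --name=4k-randwrite --filename=/dev/rdsk/c4t0d0 \
    --rw=randwrite --bs=4k --direct=1 --ioengine=psync \
    --numjobs=4 --runtime=60 --time_based
```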
B: No, but we have a list, and I get these emails; I get hit up on a daily basis. It's kind of interesting, too: FreeBSD is the most asked for, then Linux, and then VMware. Normally the VMware guys just say VDI; they just want to make it work, and they don't normally even ask. Sometimes they don't even know that they need a driver; they just want us to make VDI faster, specifically on VMware.
B: So currently we support every Solaris derivative: obviously Nexenta, and OmniOS, which... I'd be curious, is anyone here using OmniOS? Oh yeah?
B: There's been just a huge uptick in support, so I really give kudos to those guys for doing a great job, because a lot of production workloads are moving to OmniOS, and we're kind of the canary in the coal mine, so we hear about things immediately. Half the time it's about stuff that has nothing to do with us, but we hear about it, and they're doing a great job. So the community efforts supporting OmniOS: I'm right behind them.
A: I kind of just wanted to ask about... there was some talk on the mailing lists very recently about all-SSD pools, and whether you still need to have a separate log device. If you kind of want to... yeah.
B
Yeah,
I'm
a
real
direct
person,
so
it's
kind
of
contentious,
because
the
idea
is
some
have
taken
attack
that
if
you
have
an
all
ssd
pool
and
then
the
hybrid
storage
model,
the
idea
of
having
a
second
develop
second
or
a
dedicated
zeal
device,
wouldn't
have
the
same
value
as
it
would
have.
If
you
had
hard
drives
being
the
bulk
of
the
store-
and
I
personally
disagree
with
that
for
two
reasons-
and
one
I
think
is
the
least
often
seen
is
traffic
contention.
B
I
mean
one
of
the
beauties,
especially
about
being
pci
express
in
brazil,
is
all
that
traffic
has
its
own
dedicated,
independent
path
and
the
moment
that
path
has
to
be
shared
over
the
same
external
cables
to
the
same
jbod
or
even
just
the
same
pull
traffic.
You
have
pull
contention
and
anytime.
You
have
pool
contention,
you're,
increasing
latency.
B
So
to
me
just
from
a
performance
standpoint,
the
zil
concept
is
just
as
valid
today
as
it
would
be
with
hard
drives
and
just
just
as
valid
today
with
ssds
and
then
to
me
even
the
more
obvious
is
you
know
our
customers?
Our
practice
gets
pounded
on
24
7
with
random
rights,
and
I
designed
the
nan
controller.
So
I'm
pretty
familiar,
I
mean
that's
the
worst
case
scenario
for
nand.
So
it's
you
know
what
did
the
customer
spend
on
those
ssds?
B
You
know
for
a
meaningful
size,
pool
50,
50k
and
up
I
mean
a
meaningful
amount
of
money
on
ssds.
Why
would
you
want
to
just
beat
them
to
death
with
basically
random
right,
zil
traffic?
I
mean,
I
think
I
believe
this
zil
could
easily
just
pay
for
itself
as
far
as
longevity
for
the
remainder
of
the
pool.
So
I
again,
I
think
it
goes
the
the
true
you
know
the
the
elegance
of
the
the
original
design.
B
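[Editor's note: how much traffic a dedicated log device absorbs can be observed directly. A sketch with a hypothetical pool name; zilstat is Richard Elling's DTrace-based script, and the invocation shown is an assumption:]

```sh
# Per-vdev activity; a dedicated log device shows up under "logs".
# One-second samples on the hypothetical pool "tank":
zpool iostat -v tank 1

# zilstat (DTrace script) reports how much write traffic actually
# passes through the ZIL, again in one-second samples (invocation assumed):
./zilstat.ksh 1
```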
D: I would argue that the latency to DRAM is still an order of magnitude faster than latency to flash, so it depends on how much performance you want. If you were using a flash device for your log device before, then yeah: if you go to an all-flash pool, then you might as well just stick with, you know...
B
Well,
that
should
have
just
gave
the
question
right
to
you.
Yeah
well.
Well.
Well
put
I
mean
someone
else
has
just
asked
a
question.
I
thought
this
was
interesting
too.
A
good
way
of
thinking
about
it
is
you
know
each?
I
o,
I
consider
like
a
hop
so
the
moment
it
touches
a
controller,
be
it
an
fpga
or
an
asic.
B: ...there's just a certain amount of latency involved, irrespective of who makes the chip; there's just time to get on and then get back off. A great way to get an intuitive sense of latency is to count the hops: how many controllers did that I/O have to hit before it reached its final destination? One of the beauties of our product, obviously, is that because we're PCI Express, there's one hop. If one were to put up a competing product to ours...
B
Just
is
zeus
ram,
which
is
an
excellent
product.
They
handle
some
scenarios
that
we
don't
handle
today,
for
example,
because
we're
pci
express
we
don't
handle
high
availability,
which
will
be
remedied
in
our
next
generation
product.
But
today,
if
you
want
zeus,
ram,
is
a
just
an
excellent
solution
and
does
great
with
zfs
but
by
nature
of
that
design.
You
know
they
get.
They
basically
get
h
a
for
free
because
they
use
a
sas
connector.
But
one
of
the
you
know
kind
of
the
failing
of
that
is
from
a
latency
standpoint.
B
So
you
can't
talk
directly
to
a
zeus
ram
right
from
the
cpu.
So
you
know
at
best
case
you
have
a
dedicated
hba,
which
is
one
hop
on
that
controller
and
then
the
second
hop
when
you
hit
the
controller
on
the
device
and
then
a
lot
of
customers
unknowingly
will
put
that
in
the
jbod.
And
now
you
have
three
ops
right,
so
you
have
hba,
expander
and
and
then
the
device
controller,
and
that's
you
can
really
kind
of
just
roughly
use
magnitudes.
B: So if you cut out a hop, two to one, you've cut latency in half; obviously three to one is a third. And then it also gets back to that question about contention: do you really want your ZIL traffic contending with your TXG commits? Every heartbeat (whatever you set it to; the default is five seconds) you dump that to disk. Do you really want that traffic contending with the ZIL traffic? I think there's great benefit in removing contention of I/O traffic.
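[Editor's note: the five-second heartbeat refers to the TXG sync interval. A sketch of how to inspect the tunable; the paths and variable name shown are the common ones, so verify on your platform:]

```sh
# Linux OpenZFS: exposed as a module parameter (in seconds):
cat /sys/module/zfs/parameters/zfs_txg_timeout

# illumos/Solaris derivatives: a kernel variable, readable with mdb:
echo "zfs_txg_timeout/D" | mdb -k
```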
B: No, excellent question. The flash is never utilized unless there's a power failure. On every power failure or power-down of our device you'll see a red LED, and it takes about 54 seconds for us to dump DRAM to flash; the user obviously isn't really aware of that. Then on reboot we basically restore NAND to DRAM, and that takes slightly less time, but still about a minute. That's normally overlapped with other startup functionality, so a lot of times you won't see the full impact of those 50-some seconds, but there would be a delay on boot-up. And this is again the beauty of owning the driver: we can guarantee that the card isn't accessed until, one, the supercap is fully charged and, two, the restore is complete. So the driver gives us really kind of ultimate control to cover every possible scenario as far as power failure and power return.
B:
F:
B:
F:
D:
B: Yeah, it would work. There's a lot of intelligence around basically all these scenarios, so the card itself would know that it did a successful backup; it tracks successful backups and successful restores. We also support things like a quick power-down, which kills a lot of other UPS-style systems: if you do one power-down and then immediately another one too quickly, we support all those scenarios, because we track all of this in a very deep way. So it would work.
B:
E:
B: Yeah, we feel we came up with a pretty unique solution, and we're just not ready to talk about it until we release it. I hate giving...