From YouTube: Keynote: The System that Matters - Tim Massey, Chief Executive Officer & Phil Straw, CTO
Description
Keynote: The System that Matters - Tim Massey, Chief Executive Officer & Phil Straw, Chief Technology Officer, SoftIron
About Phil Straw
CTO, SoftIron
Phil Straw is the CTO of SoftIron, the Silicon Valley company behind HyperDrive® – the dedicated Ceph appliance, purpose-built for software-defined storage. Previously he has held senior technical roles with Security, Delphi Electronics, 3Com and Cisco.
About Tim Massey
CEO, SoftIron
Globally responsible for all business functions of SoftIron. Previously General Manager at Leadis, founder and CEO at Mondowave, and Principal at Band of Angels Fund L.P.
Tim Massey: There we go. Excellent, excellent. My name's Tim Massey, I'm the CEO of SoftIron. We're a young company, we're just making the rounds at these kinds of conferences, and I'm delighted to be with you here today in Barcelona. At SoftIron, we make a portfolio of fully supported Ceph appliances, and we are all in on Ceph. We're so confident that Ceph can become the de facto standard of storage that we're willing to bet our entire company on it.
Ceph will win the way other open-source projects have, and that is not by doing any one particular thing extremely well, but by being versatile and having the ability to scale. Ceph is a Swiss Army knife; it's already reached critical mass, and adoption will snowball from here. If you want to see what adoption looks like, look around the room today. There are many organizations here that have done massive Ceph deployments, and it's an honor to be involved.
No other open-source storage project is innovating as quickly as Ceph; the more it's deployed, the faster it grows. After all, open-source software gives organizations the freedom to choose their vendors while at the same time preserving a common user model and protocol. I know you already know this, and we're proud to be a part of it. In this way, the end user can maintain control of their software base and can never be held over a barrel by being locked into a single vendor, not even SoftIron.
We know that building your own Ceph estate does work, and many of you here have already done it. But it can also be hard to bring up, hard to get right, and hard to maintain over time. SoftIron, and to a greater extent the Ceph community as a whole, is on a quest to build the very best Ceph storage we can. And, as you know, Ceph can run on anything: a generic server, a Raspberry Pi, or even a toaster.
So if we started with a black box and asked what the very best system we could build would look like: well, it would have zero latency, it would have infinite capacity, it would use no power, it would be simple to use, and it would be essentially free. That would be the holy grail of storage. Unfortunately, the real world gets in the way, but still we should be able to ask the question: how good can it be? How good could it be?
We have found that there are many organizations that just want to buy great storage, and they want it fully supported. So if we can get to this last frontier, if we can get to a place where Ceph is performant, low power, easy to use, and preserves the customer's freedom to choose, we can accelerate the mass adoption of Ceph, and that will be good for us all.
Phil Straw: It's great to be here. You know, we focus on Ceph every day, and have done over many years, so when we get a few days to do something like Cephalocon, that's really iconic for us. So hi there, Ceph brothers and sisters. We take a systems approach, as Tim said, to both the Ceph experience and Ceph productization. Tim mentioned results.
Things like power, performance, reliability, quality, ease of use, all that good stuff. And so on this grail quest of trying to find the best Ceph system, we often take a path less traveled. Often it's radical, at least at the start, but it's in the search of results, and it's an interesting story, and that's really what I want to talk to you about today. It's easy to talk about results; Tim's obsessed with it, it's something of a corporate obsession for us, but as a technologist.
Yeah, keep it real, Tim, have some fun. So, you know, I'm known for my subtlety, and I happened to mention that in order to make a really, really awesome Ceph storage product, maybe you'd make a rather weak computer, and somehow that happened. But said another way: focusing on compute alone is by far and away not the only way you can get performance, in multiple vectors, for Ceph. I'll give you an example of this.
Often, as silicon is productized and gets faster and faster, you end up with things like I/O paths versus compute paths becoming a compromise. You get things like peripheral chaining, I/O bridging, wait states, various things with buffers, and also lots of excess heat. At the point that you don't need that compute, it's just heat, and that's orthogonal and toxic to a great storage product of any kind.
What works in hardware, generally, is something that happens in parallel. By definition, in a CPU things happen serially: instructions get clocked through the ALU, and it does that really, really well. But there's a place for both. And if you look at Ceph, usually there are a lot of drives, all connected to OSDs in a computer, and then you have multiple computers in a cluster. It's parallel in many ways, in many tiers, and so there should be opportunity here, right? So where can you make things parallel?
Well, computer science 101, just to review: sequence, selection, and repetition. What happens is, when you find something that's exhaustively done in a CPU, in software, that is actually independent, that's a possibility for parallelism, and so for acceleration, at least in theory. When we looked at Ceph at the lowest levels, what we found is that there are opportunities galore, an embarrassment of riches: far from the hardware, close to the hardware, and everywhere else. And the first thing that we picked is erasure coding.
Erasure coding means a lot of things to a lot of people, but it usually boils down to a result that everyone agrees on, which is brilliant economics. Data protection can be achieved a number of ways. One is triple replication; that's fairly easy to understand. But erasure coding is an alternative, and you can achieve similar levels of data protection with a smaller footprint.
By that I mean fewer bytes of data used in Ceph to store the data you care about after data protection has occurred. That means fewer drives, fewer computers, just less, and so that means less money, and there's the economics. Pretty simple. In this talk I'm going to talk about erasure coding, which can have different options and different geometries. I'm going to pick six plus two (6+2) for this talk, just so you know what I'm using, and I'll use it consistently.
My dreadlocks? You know, I figure I was born bald, so I'm actually getting younger. See what I did there? So, acceleration for us, accelerating erasure coding, is really about opening up the aperture for erasure coding, making it relevant in places where it may or may not have been relevant before.
The headline is really about storage space and storage efficiency. If you pick this geometry and compare it to triple replication, with replication you would get a data amplification of three times, but with 6+2 erasure coding you would end up with an amplification of one and a third. You don't have to be a math professor to understand that that's less than 50% of the space and cost, better than a two-times saving, and that boils down, unfettered and undiluted, directly to economics.
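That arithmetic can be sanity-checked in a few lines. This is a generic calculation, not SoftIron code; the 6+2 geometry is the one used throughout the talk:

```python
def amplification(k: int, m: int) -> float:
    """Storage amplification of a k+m erasure-coded layout:
    raw bytes stored per byte of user data."""
    return (k + m) / k

# Triple replication stores three full copies of every byte.
replication = 3.0

# The 6+2 geometry used throughout this talk.
ec_6_2 = amplification(k=6, m=2)

print(f"3x replication: {replication:.2f}x raw per user byte")
print(f"EC 6+2:         {ec_6_2:.2f}x raw per user byte")
print(f"saving:         {replication / ec_6_2:.2f}x less raw capacity")
```

At 1.33x versus 3x, 6+2 needs 2.25 times less raw capacity than triple replication for the same user data, which is the "less than 50% of the space" claim above.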
Acceleration of erasure coding is really about the reconstruction mathematics. Any time you have a read error in particular, a write, or hardware recovery, this is a place where hardware can do real acceleration. So not only can you accelerate writes and recovery, sometimes by an order of magnitude, but you can also, as a second-order effect, alleviate the burden on the processor, and this is really important. One of the things this does is allow the processor to concentrate on front-side delivery.
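As a toy illustration of that reconstruction mathematics, here is simple XOR parity rather than the Reed-Solomon codes a real 6+2 Ceph profile would use (a single XOR parity only tolerates one loss; 6+2 tolerates two), with made-up chunk contents. A lost chunk is recomputed from the surviving chunks plus parity, exactly the kind of bulk, independent arithmetic a hardware engine can take off the CPU:

```python
from functools import reduce

def xor_parity(chunks: list[bytes]) -> bytes:
    """Parity chunk: byte-wise XOR across all chunks."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), chunks)

# Six equal-sized data chunks, echoing the talk's 6+2 geometry.
data = [bytes([i] * 4) for i in range(6)]
parity = xor_parity(data)

# Simulate losing chunk 3, then rebuild it from survivors + parity.
survivors = data[:3] + data[4:]
rebuilt = xor_parity(survivors + [parity])
assert rebuilt == data[3]
```

Because the XOR (and, in the real case, Galois-field multiply-accumulate) runs independently across every byte of every stripe, it parallelizes naturally in silicon.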
B
You
know
making
clients
happy
being
the
best
possible
safe
product
and
safe
experience
that
you
can
have
leaving
the
processor
just
to
do
some
housekeeping
and
some
DMA,
which
is,
is
Hardware
handoff.
So
it
leaves
the
processor
pretty
much
in
diluted,
which,
in
the
real
world,
is
pretty
much
as
close
as
you
can
get
to
a
free
lunch.
So
this
allows
Seth
to
be
allows
erasure,
encoding
to
be
more
applicable
in
more
places
in
different
workloads,
and
that
allows
those
economics
to
trickle
down
into
SEF
and
be
applicable.
We had a team of people work on this for well over a year: hardware guys, software guys, systems and optimization guys, and obviously silicon guys. We made a first instance proof, something that works, and we put it in hardware, and the net-net is that, as you would say here, no bueno. Basically it was a null result. It worked, it was an instance proof, it was technically correct, but it didn't matter, because of the way we implemented it.
We used standard hardware handoff, standard kits for doing this. We tried to pull the easy lever, and the results really weren't there; it was just the same as doing it in software. There was no net benefit of handing off that way. So you can forgive us for pulling the easy lever, but we then looked at what we normally do every day, which is a systems approach, and in doing that we looked at software and hardware together.
Some things have to happen in software and some things have to happen in hardware, and we talked about parallel versus serial, hardware versus software, but in the real world that black and white becomes gray very quickly. One example is that hardware is a finite resource. You can think of software, in many ways, over time, as an infinite resource. But if you create a piece of silicon for parallelism, you have a certain amount of real estate.
There is only so much you can use, so there are only so many things you can do in parallel at one time, and immediately, at runtime, you have a resource management issue. And it's now not just a gray day (it's raining outside): you also have a backwards-and-forwards latency of sending things to the accelerator and getting them back. So now, in both design time and runtime, you have a decision about not only resource availability but what's actually going to get a net acceleration and what's not. And some things do and some things don't, as it turns out.
So we've gone from a hundred-thousand-foot view of parallel versus serial to maybe a 50,000-foot view of "it's really complicated." We pushed through this with a systems approach, the results are in, and we've kind of arrived. But what I can tell you is that arrival, for us, is just a pit stop. We made a laundry list of things that you could parallelize in Ceph to make a better Ceph product.
What we learned is that what we're going to do next is probably more impactful than just erasure coding. That's super interesting and super exciting, and I'll leave you with that cliffhanger, because I think we're at 15 past the hour. I'm at the SoftIron booth, booth 102. So if you want to know all the nerdy details that go beyond an introductory talk, I'll be there all day. My name's Phil, I'm friendly, I'm house-trained, and I'm on booth 102.