From YouTube: Red Hat / Inktank Ceph Day Sessions, Sage Weil (Red Hat)
Description
Ceph Day Boston 2014
http://www.inktank.com/cephdays/boston/
…how it got to where we are today, and then give you a little bit of a glimpse of what we're currently working on and where we see the project moving in the future.

So I'm going to start by showing you a picture of how we usually talk about Ceph. This is a block diagram of how the system is built, and traditionally how we present it.
You have what we call the RADOS storage cluster, which is a scale-out object platform built from lots of commodity storage nodes. That gives you a generic storage platform that you can build lots of different services on top of, and we've built several: the Ceph Object Gateway, which gives you S3- and Swift-compatible RESTful object storage, and a block device that gives you virtual disks. You can use all of these different interfaces to access the same set of hardware, which presents all of those disks in a highly reliable, highly available way. I bring this up now because I think the interesting thing about it is that this is not at all what we started with.
In terms of how we got to where we are today: this is a plot of contributors per month over the last nine and a half years (ten years in about two weeks, I think, which is kind of exciting), which we've logically divided into a couple of different phases.

There's the first phase, where Ceph was coming out of the research group and it was a bunch of academics trying to build this highly scalable architecture; that was the research phase. There was a middle phase where we were incubated inside of DreamHost, a company I had worked with previously, initially building the system and afterwards open-sourcing it. And then there were the years of the Inktank phase, where we launched a full-blown support company behind Ceph, saw things really take off up and to the right, and started doing things like Ceph Days and talking to the banks and so forth. So I'll start with the research background.
It all began at the University of California, Santa Cruz, under a research grant from the Department of Energy labs: Lawrence Livermore, Los Alamos and Sandia. The idea was to build a highly scalable file system for high-performance computing, so it was all about scalability, reliability and performance: writing terabytes to these huge computers. When I became involved in the project, my focus was really on scalable metadata management: how do you keep track of the file system namespace, the files and directories and permissions and all that stuff, in a way that scales to a quarter million cores dumping files into the same directory? The first code was written during a summer internship in 2004 while I was at Lawrence Livermore, which was a very surreal experience: I was in a high-security national lab environment, with a little radiation meter on my badge, in this old 1940s-era building, checking my code in over SSH and CVS to UC Santa Cruz. It was sort of ironic. The nice thing is that I was told I could essentially work on whatever I wanted, which they happened to like, as long as it was open source. So I was like, great, that's exactly what I want to do anyway.
Over the couple of years after that initial genesis, we built the rest of the Ceph architecture. It started with the scalable metadata management layer, but we also built the RADOS component, a distributed object layer that sits underneath it and manages all the raw disks. We built a system called EBOFS, a custom object-based file system that manages all the data on a single disk and stores objects on it, sort of a simplified local file system. We put together the CRUSH algorithm, which is essentially a way to algorithmically distribute data across lots and lots of storage nodes in a way that takes fault tolerance and failure domains and all that into consideration; it's very flexible. And then there's the Paxos-based management framework that orchestrates the entire cluster.
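To give a flavor of what algorithmic placement means in practice, here is a toy sketch in Python. It is not the real CRUSH algorithm, just a highest-random-weight style placement that shows the key property: any client can compute where an object lives from a small cluster description, with no central lookup table.

```python
import hashlib

# Toy illustration of algorithmic placement in the spirit of CRUSH (this is
# NOT the real CRUSH algorithm): every client computes, from a small cluster
# description and with no central lookup table, which nodes hold an object.
NODES = {"osd.0": 1.0, "osd.1": 1.0, "osd.2": 2.0, "osd.3": 1.0}  # name -> weight

def score(obj: str, node: str) -> float:
    """Deterministic pseudo-random score for an (object, node) pair."""
    digest = hashlib.sha1(f"{obj}/{node}".encode()).hexdigest()
    return int(digest[:8], 16) / 0xFFFFFFFF

def place(obj: str, replicas: int = 3) -> list:
    """Pick the highest-scoring nodes, biased by weight; every client
    independently computes the same answer."""
    ranked = sorted(NODES, key=lambda n: score(obj, n) * NODES[n], reverse=True)
    return ranked[:replicas]

print(place("rbd_data.1234.00000001"))  # three distinct node names
```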
So over the next couple of years we assembled all these disparate ideas that had been floating around in the research community, and in our group in particular, into a single working prototype that you could actually run and read and write data from, which was kind of exciting. Over this period there was always an emphasis on building consistent and reliable storage: we were building a POSIX file system that we wanted to have strong coherency and strict semantics, so you can run legacy applications on top of it. And we wanted to push the limits of scalability by pushing as much intelligence as possible out to the individual components in the system, so that you get a massively distributed, very scalable architecture. At the end of the day we ended up with a very different architecture than traditional SAN-based file systems and so forth, but it was very compelling, which was kind of exciting.
The name Ceph came from one of the research professors involved in the project, Carlos Maltzahn. Ceph is short for cephalopod, which is a squid or an octopus or something like that, and that's how we ended up with the cute logo. As I finished my graduate work, I observed what seemed to me to be an industry black hole.
You had a storage industry that was dominated by lots of large storage vendors, the NetApps, the Data Domains and the EMCs of the world. They were selling proprietary solutions that ultimately didn't scale all that well: they would scale out to eight nodes, sixteen nodes, maybe thirty-two, something like that, and we were trying to build something that would scale to hundreds or thousands of machines. There were very few open source alternatives, particularly at that point in time, and they either didn't scale or lacked the sort of basic enterprise features you would need to deploy these systems as a real business; you didn't have things like snapshots or quotas or that sort of thing that real businesses really need.

The other thing I observed is that all of my peers within these research programs would build very interesting systems as part of their dissertations that were really cool, and they would get them to work just enough to make all the plots for their papers and do the graphs and everything, and as soon as they graduated they would get hired by the NetApps and the EMCs of the world and their project would just sort of die on the vine.
So there would be a paper published, but nothing would really come of it, and the system would never actually live on in the form of software. I went through this process of interviewing and giving presentations at a bunch of these organizations, and I finally realized that they weren't actually interested in Ceph; they were interested in hiring me to work on their proprietary system, which ultimately wasn't terribly interesting. So I opted for a different path. I had the luxury of not needing a salary to pay the rent at the time, and so my goal was to change the storage world with open source: to do to the storage industry what Linux did to Solaris and IRIX and Ultrix and all those proprietary Unixes back in the day. You know, what could go wrong?
There were a few things you had to do in order to make that happen. Obviously we had open source software, and we had to choose a license. We opted for LGPL, which is a copyleft license: you have to share changes to Ceph, but it's flexible enough that you can link it into other parts of the stack with proprietary software. So you can build proprietary stuff on top or link it in underneath, but Ceph itself, the storage platform, would remain open source, which seemed like a good balance. At the time we also wanted to avoid a lot of the unfriendly practices I saw in some other open source projects, things like dual licensing, where you try to build up the community, but then as soon as people try to use the client they can't link it into their other applications and you force them to buy a commercial license. That seemed unfriendly from a community perspective early on. And then you have to pick a platform; we put this up on SourceForge.net, if anybody remembers the days before GitHub.
I went back to work for DreamHost, a company I'd helped found prior to doing grad school; it's a web hosting company based in L.A. So I moved back to L.A. and continued hacking on Ceph. I had the luxury of not really worrying about anything like deliverables or customers, and we hired a few developers to start building Ceph into the next-generation storage platform we thought it should be. Because we had no deliverables, no customers to answer to, no QA tests to pass or anything like that, we had a lot of freedom to build the things into the architecture that we thought were necessary to make something truly transformative, something that could really take on the proprietary vendors. One of the first things we tackled was building a native Linux kernel client, so that you could mount the Ceph file system natively from the kernel; up until that point we were using FUSE, which was a stop-gap solution and didn't perform very well at the time.
We added directory snapshots, so anywhere in the file system you can create a snapshot of any subdirectory. That's the logical extension of snapshots in enterprise systems, where you only had volume granularity; we tried to be very flexible and do something new. And there are exciting things like recursive accounting, which lets you look at any directory and see how much data is stored within that part of the hierarchy without having to do a du, which is something no other system had at the time. It's kind of cool, I think, still.
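For a sense of what that looks like in practice: CephFS exposes those recursive statistics as virtual extended attributes, so a single attribute read replaces a full tree walk. A minimal sketch, assuming a CephFS mount at /mnt/cephfs (the path is a placeholder):

```python
import os

# Recursive accounting in CephFS: the metadata servers keep per-directory
# rollups and expose them as virtual extended attributes, so "how much data
# lives under this tree?" is one xattr read instead of a full du walk.
# The mount point and path are placeholders.
path = "/mnt/cephfs/projects"

rbytes = int(os.getxattr(path, "ceph.dir.rbytes").decode())  # total bytes under path
rfiles = int(os.getxattr(path, "ceph.dir.rfiles").decode())  # total files under path
print(f"{path}: {rbytes} bytes across {rfiles} files")
```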
We also built the object class mechanism into the object storage layer: as an administrator, assuming you have the permissions to do so, you can inject your own code into the system and run methods on the objects stored in the cluster. You can push computation all the way down to the storage layer and build something much more flexible.
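A rough sketch of what invoking an object class looks like from the Python librados binding. It assumes python-rados is installed, a readable ceph.conf, and that the stock example class named "hello" with a "say_hello" method is loaded on the OSDs; the pool and object names are placeholders, and the exact return value of execute() can differ between binding versions.

```python
import rados

# Sketch: call a RADOS object class method so the computation runs on the OSD
# that stores the object, instead of pulling the data back to the client.
# Assumes python-rados, a readable /etc/ceph/ceph.conf, and the example
# "hello" class loaded on the OSDs; pool and object names are placeholders.
cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()
ioctx = cluster.open_ioctx("rbd")

ioctx.write_full("greeting", b"")                       # make sure the object exists
result = ioctx.execute("greeting", "hello", "say_hello", b"")
print(result)                                           # return shape varies by binding version

ioctx.close()
cluster.shutdown()
```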
That took Ceph beyond the realm of a purely distributed file system and into an object-based storage platform. The RADOS Gateway was a component we built in 2009 that exported that low-level object API using the S3 protocol, and later we added Swift support as well. So if you wanted to compete with Amazon, or you just wanted to leave Amazon and run your apps in your own data center, you could run them on a pure open source storage platform in your own data center.
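Because the gateway speaks the standard S3 dialect, stock S3 clients work against it unchanged. A minimal sketch using the boto 2.x library; the endpoint, credentials and bucket name below are placeholders:

```python
import boto
import boto.s3.connection

# Minimal sketch: talk to a RADOS Gateway with a stock S3 client (boto 2.x).
# The endpoint, credentials and bucket name are placeholders.
conn = boto.connect_s3(
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
    host="rgw.example.com",
    is_secure=False,
    calling_format=boto.s3.connection.OrdinaryCallingFormat(),
)

bucket = conn.create_bucket("demo-bucket")
key = bucket.new_key("hello.txt")
key.set_contents_from_string("stored in RADOS via the S3 API")
print(key.get_contents_as_string())
```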
You could even run it in a slightly untrusted environment. And RBD has been one of the biggest things in recent years: a virtual disk built on top of this object storage platform, which gives you functionality similar to what you'd get out of a SAN.
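From the librbd Python binding, creating and using one of those virtual disks looks roughly like this. It is a sketch: the pool and image names are placeholders, and it assumes the python-rados and python-rbd packages are installed.

```python
import rados
import rbd

# Sketch: create a 1 GiB virtual disk image in a pool and write to it.
# Pool and image names are placeholders; assumes python-rados and python-rbd.
cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()
ioctx = cluster.open_ioctx("rbd")

rbd.RBD().create(ioctx, "vm-disk-0", 1024 ** 3)   # 1 GiB image
image = rbd.Image(ioctx, "vm-disk-0")
image.write(b"bootloader goes here", 0)           # write at offset 0
image.close()

ioctx.close()
cluster.shutdown()
```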
The biggest thing we built, really, was the kernel client; that was the thing that took the project to the next level, I think, at least in our minds. Up until that point we were using FUSE to access the file system, which at the time was not very fast. There have been a lot of improvements in the FUSE stuff in recent years, so it's not as bad as it used to be, but it wasn't great back then.

That work started around when I met Rick, like seven years ago now. Eventually I had this 20,000-line patch set that implemented a client that actually worked, and I started spamming the file system developer list. The initial postings were rejected by Linus; he was like, who's going to use this, who wants it, there's not sufficient evidence of user demand. But several people had been following the project and really wanted to run it; I think a couple of people at the labs followed up on his emails and said, actually, this is a big deal and we really want to see this get into the kernel. After a couple of cycles he finally relented. I think Andrew Morton was probably bored on a Sunday night, had a glass of wine or something, and he actually started reviewing a bunch of the code and basically gave his stamp of approval. So finally, in 2.6.34, the kernel client was merged, and it was a big deal and we celebrated. But over this process, I think, we also rethought another part of the stack.
Originally we had written our own file system to manage the local disk and store objects, and at some point we realized that, in order to make it robust, we would need to add things like checksumming, improve the memory management for the btree code, and do all this really annoying stuff that pretty much every file system in the universe has to do. We realized that btrfs, which was the new file system at the time, was doing all of those same things, had a very similar design to what we had done, and that we could just use it instead of reinventing the wheel and maintaining a separate thing that might not work as well or be as robust. So at that point we said, okay, we're going to drop this custom thing, leverage existing Linux file systems, and make sure we work well on top of those. We contributed some early functionality to btrfs, like the ability to clone files, which is something Ceph needed for its copy-on-write snapshot stuff, and async snapshots to improve the performance of commits and so forth.
At the end of the day we wanted pervasive checksumming and so forth throughout the stack. We also saw that the community around the project was really starting to kick off: we started hanging out on IRC on the public channels, and we had a mailing list on the kernel.org email server, which was kind of exciting. But we found that the system was just too unstable in those early days for real deployments. We were hacking together a research project that did all these great things, but not doing much around process and testing and so forth. We were really focused on building the right architecture and getting the right technical solutions to the problems, but we knew nothing about how to ship something that you could run in production and support over time.
A couple of things changed that. One was that DreamHost decided, in I guess 2010 or 2011, that it wanted to build an S3-compatible object storage service, and that was going to be based on Ceph. That focused our efforts and made us realize that we needed to make it production ready; there was a new focus on stability. We focused specifically on the core RADOS object layer that underpins everything, on the block device, because it was simple, and on the RADOS Gateway, obviously, because it presents that S3-compatible API. That meant we neglected the file system, which was a very painful decision for us, but it was also the most complicated piece, and we couldn't do everything at once with the limited resources we had. We had to pay back a lot of technical debt, and we started investing a lot of time in building testing infrastructure.
It was not a huge team, but we actually had multiple people doing reviews, sort of playing like we were in the real world, which was kind of fun. The reality, though, was that we also had growing incoming commercial interest. We started talking to the likes of the people who are here today, who were very interested in Ceph and wanted to deploy it, but they had no idea how to talk to DreamHost, which is a consumer web hosting company; going there for commercial deployments just didn't make any sense. There was a clear realization that the project, in order to be successful, needed a commercial company behind it to back it, support it, and pull it into production, both on the engineering effort and to do all the testing and the ongoing support. So in 2012 we orchestrated a spin-out of DreamHost that founded Inktank, an entirely new startup.
That's the last phase of the evolution to get to where we are today. We were going to build an open source company, and we wanted, from the outset, to figure out how to do it right: how do we make sure this is going to be a successful endeavor? Because, as I think most of you know, it's very easy to do it wrong. There are lots of pitfalls along the way; startups fail frequently just in general, and getting the open source business model right, getting that set of ingredients right, was going to be very challenging. So we asked: how do we build a strong open source company, how do we make sure the company is strong, and how do we build a strong open source community around the project, so that we can leverage the overall efforts of the community and make sure that Ceph is successful?
As part of the spin-out, Mark Shuttleworth, the Canonical founder, was a very early and vocal supporter, which was great for getting us started. We had a couple of goals. We needed a stable Ceph release to do those initial early deployments, and we needed to lay the foundation for the community to make sure we got widespread adoption. So there were a lot of things like making sure we could support all the different distro platforms, everything from Ubuntu to SUSE and Red Hat, writing documentation so people could actually install the thing, and building test infrastructure, that sort of thing. And then on the business side we had to build out the sales and support organization and hire more engineers to make it all work.
There was an early decision to engage a professional agency for branding. Bryan Bogensberger, who was the CEO of the company as we started out, began throwing around terms like "brand core" and "design system" that I had never heard before and was highly skeptical of, and we went through this whole process with the company that was hired to do it, which in retrospect turned out to be very interesting, but I was very skeptical as it was happening. One very early decision was to make sure we separated the branding for the company from the project. You see lots of open source projects and companies where the company and the project have the same name, and you get a certain blurring of the lines between what's the .org and what's the .com and so forth.
We wanted to make sure they were very separate, so that Ceph the project would be independently successful and robust as a community, and the company would capitalize on that increased community traction. We wanted to work very hard to establish a healthy relationship with the user and developer community. And we had this aspirational messaging: it was going to be the future of storage, we're going to transform the world. They made PowerPoint templates that broke as soon as you started using them in OpenOffice, and all this fancy stuff; they made some cool t-shirt designs, so that was kind of exciting. The last two years have been sort of a wild ride, up and to the right. We started getting all these production deployments, and since the first versions of Ceph we've been steadily growing; I can't count how many people are running Ceph anymore.
The fact that it's open source means that lots of people are running it without ever having problems, and we don't even know about them; we only hear about them on the mailing lists, which is kind of exciting. I can't even keep track of the customers anymore, and we have a huge partner list. Everybody sees that this is a strategic way to change the face of the storage industry and wants to figure out how their business model can leverage that, everyone from the hardware vendors to the software vendors to the distros and all the rest. So it was all very exciting.
We got lots of good press over the last couple of years, as people have probably noticed. And then this crazy thing happened: there was this totally independent industry development called OpenStack, and this cloud thing happened that we weren't really paying attention to. We were really about building a storage system for supercomputers and, from my perspective coming from a service provider, about building a scalable system in the data center. Suddenly there was this huge movement where everybody said you need to build scale-out architectures for everything and run all your apps on top, and it turned out that the Ceph distributed storage platform was a perfect match, with the same goals and requirements as OpenStack. So we got to ride on the coattails of all that marketing money that was being shoveled into the furnace, and it's been big. OpenStack was big. It definitely drove an increased focus on quality.
We had to do a lot more testing. We had to support Ceph across lots of different platforms, even ones we didn't run ourselves: we came out of a Debian shop, and we had to learn how to do RPM builds and things like that. We had to focus on things like upgrades. When you're actually deploying Ceph in the real world, you have a production deployment that's highly available and needs to be up 24/7, and you need to upgrade to the next version in a way that doesn't require taking down the system. So we had to invest in all the feature bits and the encoding over the wire and so forth, so that you can upgrade a live system without interrupting any of the workloads.
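The core idea behind those feature bits is simple: both ends of a connection advertise what they understand, and new encodings are only used once both sides have them. Here is an illustrative sketch, not Ceph's actual wire protocol, and the feature names are made up:

```python
# Illustrative sketch (not Ceph's actual wire protocol) of how feature bits
# let old and new daemons coexist during a rolling upgrade: each side
# advertises a bitmask, and both encode messages using only the intersection.
FEAT_BASE      = 1 << 0
FEAT_SNAPSHOTS = 1 << 1
FEAT_NEW_CRUSH = 1 << 2   # hypothetical feature introduced in a newer release

def negotiate(local: int, peer: int) -> int:
    """Return the set of features both ends understand."""
    return local & peer

old_daemon = FEAT_BASE | FEAT_SNAPSHOTS
new_daemon = FEAT_BASE | FEAT_SNAPSHOTS | FEAT_NEW_CRUSH

shared = negotiate(new_daemon, old_daemon)
assert not shared & FEAT_NEW_CRUSH   # hold back the new encoding until the peer upgrades
```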
We did a lot of that work during this period, and it has paid off. We also saw the developer community start to take off. We started to have significant external contributors outside the DreamHost and Inktank engineering team, and first-class contributions of features: the new erasure coding functionality that came in the Firefly release a couple of months ago was initiated by a developer at Cloudwatt and shepherded through the process, which is kind of exciting. We have this big test lab that we put in one of the DreamHost data centers, and we started giving external contributors access to it, so they can run their own tests and diagnose failures using the same tools we use, a common tool set. We're on IRC, all that good stuff, and we work with the distros, really trying to make sure Ceph is doing all the right things to build the community independent of the business.
We also run the Ceph Developer Summit, a community process for building the project roadmap for the next three months or so of the release cycle. It's 100% online; we use Google Hangouts, so you don't have to fly anywhere and there's a very low barrier to entry. I think the most challenging thing is that if you're in China you have to use some annoying proxy to get onto Google Hangouts, so those participants tend to have laggy and unreliable connections. The first one was in the spring of 2013 and the fifth is going to be in a couple of weeks, so we're getting kind of good at it now, and the feedback has been great and keeps growing.
As we hire more developers, we force them early on into the open way of doing development, which makes it very easy for other organizations to get involved. And then this crazy thing happened this year. We were in the process of raising our second round of funding, trying to figure out which VCs we were going to talk to, and we had all these slide decks that said we want to be the Red Hat of storage; and then Red Hat came along and said, we want to buy you. It turned out to be a great fit. All along, my personal goal at least was to make Ceph this transformative technology, and a really key piece of that was having it be open source, having an open technology, that was going to make that possible.
The acquisition closed, I guess, about a month ago now, and one of the first results of it is what happened with Calamari. The Inktank strategy was always to be sort of an enterprise distribution of Ceph: we wanted to focus on the people with all the money, because that's where we could build a business, and so we had this Inktank Ceph Enterprise product that we were selling. It consisted of a specific version of Ceph that we tested, backported fixes to, validated, stabilized and so forth, plus the Calamari component that we started building about a year ago: a management layer and a web GUI and all that, which makes it much easier to manage a Ceph cluster, pulling you off the command line and into a web browser.
So that was the enterprise product in a box. The difficult decision we had to make as a startup was that we made that Calamari management layer proprietary, because we needed to have something in the box we were selling that was not just support, or so we thought. But Red Hat's model is a little different, and they've been very successful with it: it's a pure open source company, and everything they sell is open. So one of the very first things we did after getting bought was open source that layer. That's been very exciting, especially for the developers who'd been working on it for a long time, and the community has been very excited about it as well.
One of the things we need to deal with this year, I guess, is governance. How do we make sure that the project community for Ceph is really strengthened? Because, again, the thesis of all this is that we need a very strong open source community around the project, and we need to leverage the efforts of all the different businesses who are building storage products and businesses around Ceph, to make Ceph as great as possible; if we do that, the project will be successful. There are a couple of things you always have to do: formally acknowledge and document how governance works in the project, acknowledge what my role is, which has been informal up until now, and recognize all the project leads who are building the sub-components of Ceph and doing most of the heavy lifting these days while I'm flying around doing things like this. We want to formalize some of the processes around CDS and make the community roadmap more transparent.
That way it's very easy for people who want to get involved, and particularly for organizations who don't necessarily understand how open source works, to plug into that community and contribute. There's also the possibility of creating an external entity that manages a lot of the shared resources. We built a pretty big test lab as Inktank, big for us at least, and the question is how we share those resources so they can be used for things like testing and hardware validation across all the different distros and platforms, and make it very easy for people to plug into that. These are still things we're exploring today.
Now for the roadmap: what are we going to build next? There are a couple of important questions we have to answer. How do we reach all the new use cases and broaden the reach of Ceph? And how do we make sure that the people who have already bet their businesses and products on Ceph are successful, so that we're not going to lose their data, for example? I think the balance is making sure Ceph is really successful in the markets we've already penetrated, so it works well enough for the people investing there to thrive. We can't go too broad and try to solve every problem; we have to keep that focus.
So we need enough breadth to expand the community, but also enough focus to win in the spaces where we already are. There's a bunch of stuff we've been working on. One of the new things in Firefly is tiering, so let me dive into some of the architectural stuff. Client-side caches are great, but they only buy you so much. At some level you have to recognize that there are lots of different types of storage out there: slow disks, fast flash, and non-volatile memories are starting to show up. So the question is how we separate the hot and cold data within the system into different tiers, so that you can leverage both types of technology.
There are a couple of different things we're looking at. The first is cache pools, which is what we did in Firefly: you move all the hot data into a fast tier so that it's fast, and push the cold stuff back down. Then there's the secondary problem: how do you take the really cold stuff and move it out to something really slow, whether that's erasure coded, or really slow disks, or tape, or whatever it is? These sorts of tiering architectures are common in a lot of enterprise systems and are a necessary ingredient a lot of the time, but they are typically not found in open source solutions.
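For reference, wiring up a Firefly-era cache tier in front of a slow pool is just a few cluster commands. Here is a sketch driven from Python for consistency with the other examples; the pool names are placeholders, and the exact flags can vary by release, so check `ceph osd tier --help` on your cluster before relying on them.

```python
import subprocess

# Rough sketch of wiring up a Firefly-era cache tier in front of a slow pool.
# Pool names are placeholders, and exact flags can vary by release.
def ceph(*args):
    subprocess.run(["ceph", *args], check=True)

ceph("osd", "tier", "add", "cold-pool", "hot-cache")          # attach the cache pool
ceph("osd", "tier", "cache-mode", "hot-cache", "writeback")   # absorb writes in the fast tier
ceph("osd", "tier", "set-overlay", "cold-pool", "hot-cache")  # route client I/O via the cache
```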
Another of the big new things in Firefly is erasure coding support. Traditionally Ceph has always done replication for redundancy. It's flexible, it's fast, and it's relatively simple, although it's actually still pretty complicated to get right; but the problem is that for large clusters it's very expensive, especially when you're competing on cost. Erasure coding is much more space-efficient, but it costs more when you're trying to do repairs. For example, if you do 3x replication and you lose a disk, you have to read that one disk's worth of data to recover it: lose one terabyte, read one terabyte to repair. In an erasure-coded system, if you lose one terabyte you have to read the entire stripe, so you end up reading something like five or ten terabytes to do that recovery. It costs more when there's a failure to repair, but you only have to store something like 1.3 to 1.5 times the data, and you end up with much better overall data durability, assuming you can sustain the repair traffic. It also turns out there are these things called locally repairable codes, which are a sort of midpoint: you pay a little bit of extra storage overhead, but you can read less of the data when you recover. If you only lose one disk, you can read only a fifth or a third of the stripe; if you lose two disks, then you have to read the whole thing.
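The trade-off is easy to put in rough numbers. A back-of-the-envelope sketch; the 10+4 profile is just an example, not a recommendation:

```python
# Back-of-the-envelope comparison for repairing one failed 1 TB disk.
# The 10+4 erasure-coding profile is just an example, not a recommendation.
disk_tb = 1.0

# 3x replication: 3x the raw storage, repair reads one surviving copy.
rep_overhead = 3.0
rep_repair_read_tb = disk_tb

# 10+4 erasure coding: 1.4x raw storage, but rebuilding one lost chunk means
# reading k other chunks of the stripe.
k, m = 10, 4
ec_overhead = (k + m) / k             # 1.4x
ec_repair_read_tb = k * disk_tb       # ~10 TB read to rebuild 1 TB

print(f"replication: {rep_overhead:.1f}x storage, {rep_repair_read_tb:.0f} TB read per repair")
print(f"erasure {k}+{m}: {ec_overhead:.1f}x storage, {ec_repair_read_tb:.0f} TB read per repair")
# Locally repairable codes sit in between: slightly more than 1.4x storage,
# but a single-disk repair reads only a small local group rather than all k chunks.
```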
There's also demand for asynchronous replication and multi-data-center capability. It comes from enterprises, which need a backup copy in another data center for business and regulatory reasons, and it also comes from global, web-scale tech companies that need highly available services that can tolerate entire data centers going down without loss of service. There are a couple of different strategies we're pursuing to satisfy this. One is to look at the individual use cases, the different types of storage APIs you build on top of Ceph, and solve them in the specific ways that make sense for each API: one approach for the S3 protocol and the RADOS Gateway, another for the block device. The other is to build some multi-data-center functionality in at a very low level, in the RADOS object storage layer itself. We started with the per-use-case stuff, so initially we added multi-site, multi-cluster capabilities to the RADOS Gateway.
The idea there is that you have this object storage service, and you can create different regions, east coast, west coast, Singapore, Europe, whatever, and then different zones within those regions, and you federate those zones across lots of different Ceph clusters. The goal is to have a global bucket and user namespace. Think S3: you have a user, and you create different buckets, and those names are the same across the entire service, but when you create a bucket, you create it on the east coast or on the west coast or wherever. It's the same sort of data model that we provide, and the trick is to synchronize the objects across the zones in an asynchronous fashion. You can do that, and that's actually something that S3 doesn't do, it turns out. So the RADOS Gateway gives you this global namespace and the ability to replicate your data between sites.
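From the client's point of view the federation is mostly invisible: you talk to whichever zone's endpoint is closest and pick where a bucket should live when you create it. A sketch with boto 2.x; the endpoint, credentials and the "us-east" region name are placeholders:

```python
import boto
import boto.s3.connection

# Sketch of the federated model from the client side: one global user and
# bucket namespace, with each bucket pinned to a region/zone at creation time.
# Endpoint, credentials and the "us-east" region name are placeholders.
conn = boto.connect_s3(
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
    host="rgw.us-east.example.com",
    is_secure=False,
    calling_format=boto.s3.connection.OrdinaryCallingFormat(),
)

# The LocationConstraint chooses which region the bucket and its data live in;
# the gateway then synchronizes metadata and data across zones asynchronously.
bucket = conn.create_bucket("analytics-logs", location="us-east")
print(bucket.get_location())
```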
That was added in the Dumpling and Emperor time frame, and we continue to support it and extend the possibilities there. On the block device side, we have a couple of different ways you can deal with multi-data-center solutions. Today we have a snapshot-based replication capability for disaster recovery: you can do it at a granularity of hours or days, so it's really a backup solution, but that's good enough for some people. Looking forward, we want the ability to do real-time replication for block devices, not quite synchronous but asynchronous, so you can have a replica that's trailing by seconds, or maybe configurably, so you could have it be exactly an hour behind the primary block device for disaster recovery purposes. That also makes it possible to add some interesting features, like the ability to rewind a block device, because you have a timeline of all the different changes that have happened to it.
That's what we're looking at for the next couple of releases on the block device. The alternative strategy is to add this at the lowest levels of Ceph, so that the low-level object storage layer has a generic ability to replicate across data centers and all the other services get it on top of that. It's a really hard problem, because the whole point of the object store is that it's massively scalable: you're spreading your data across thousands of servers. But at the same time, in order for this to work with all the different workloads you're putting on top, you need point-in-time consistency of the replica in the other data center. So you're distributing everything, and yet you also need a coherent timeline.
The challenge is to maintain that stable, coherent timeline across the source cluster you're streaming from, replicate it to another cluster, and then make sure that on the target cluster it's applied in a way where you only apply a write after you know you have everything that logically happened before it, so that complex applications, whether they're block devices or file systems or databases or whatever is running on top, have a coherent view of storage, with read-after-write consistency and all that. This is a challenging research problem, and we're working through the technologies to figure out how to make it happen; we've been collaborating with student programs and universities and so forth to push this forward so we can build a real solution. The exciting thing is that, and I'm not that familiar with the proprietary space, I don't think many systems do this.
Then there's CephFS, the file system. The irony is that Ceph started with the distributed file system, that's where it all began, and it's the one piece of the system that we don't yet support and label production quality, which is very frustrating. We want to change that; we want to finally have something people can actually run in production. Today most of the work is around improving the QA coverage and squashing bugs. NFS support is now largely complete and robust, and we're continuing to improve multi-metadata-server performance and stability and so forth. But what's really still needed for CephFS is an ongoing QA investment; as a small startup we just haven't had the resources to do all the testing, build all the test automation, and make sure it works. So there's a lot of hardening that needs to happen, and some of the features inside CephFS aren't entirely complete or correct. The snapshot stuff needs a lot of work: it's pretty cool to be able to create snapshots in any directory, but there are some issues that need to be cleaned up.
The good news is that there has been an amazing community effort here. Even though Inktank hasn't been able to afford to invest a lot of work in this area, a lot of other organizations have: we're going to hear from CohortFS later today about their work with CephFS, and there are engineers at Intel doing some amazing work on both the kernel client side and the server side to make this work. Even CloudStack integration is there. On the compute virtualization side we're integrated with KVM and QEMU, and also Xen, so you can run hypervisors on top of RBD block devices. And there's the whole big data space, the Hadoop ecosystem and everything coming out of it, like Spark and so forth; you can plug Ceph in as the file system and storage layer underneath Hadoop and run all of that on top of Ceph, which is kind of exciting.
That's one that frustrates me personally, maybe because I'm not a big data person; I actually don't know a whole lot about it. But the thing that's always struck me is that the whole big data ecosystem is built around this HDFS- and GFS-type architecture, which is a really stupid storage model if you really come down to it: big files, striped, and that's about it. It's really basic, and we've got to be able to do better, because at the end of the day the real challenge is to move the computation to the data and manage the fact that the data is structured. RADOS actually addresses that in some positive ways: the RADOS object store has this really neat capability to inject code into the storage system, and we have a rich data model for what you can put in an object. You can push code to the storage node to actually process the data and return the results, so we have this flexible compute-storage hybrid.
That seems like it should be very exciting and open up lots of possibilities, so we're trying to figure out how to engage that community and how to build cool things on top of it. I think that really amounts to evangelizing the RADOS capabilities, making sure people are aware of what it can do, having case studies and proof points, and being clear about the limitations, so that people can plug into that set of APIs and ultimately build on them.
Winning in the enterprise is going to come down to a couple of different things. It means figuring out how to support the legacy and transitional interfaces: in the old world there are all these client-server protocols where the client thinks it's talking to a single server, and obviously with a scale-out architecture that's not true, you're talking to lots of different servers. For us, largely, the way in has been OpenStack, because all these people want to set up their private clouds, and once you get into the organization and they see the economics of the architecture and of those open source solutions, then they can use that same infrastructure and platform for other use cases. So it's a two-pronged strategy.
Once you have that two-pronged approach working, you win. So let me end on an aspirational note. The real question for me is: why do we think these sort of uppity open source solutions are going to be successful? I think it comes down to a couple of things. First, it's very hard to compete as a proprietary vendor against open source software. It's an unbeatable value proposition: you get flexibility on the hardware side and a much lower cost burden on the software side, and it's ultimately just a more efficient development model when you're leveraging developers in so many different organizations collaborating on a single project.
That's hard to match with a purely proprietary product. The other thing is that it's very hard to manufacture community. You might have this fear of, say, NetApp going and open-sourcing ONTAP or something like that, and I think the reality is that just can't happen, because these organizations don't really understand how to build community and make it successful, and taking an existing legacy code base and throwing it over the wall has very rarely proved to be a successful strategy.
Ceph also has a very strong foundational architecture. We spent a lot of time trying to build something that we think really is the right way to do scale-out storage, and being able to build on that gives us a leg up on the alternatives. One of the key things is that, because we're open source on both the server side and the client side, we have the flexibility to innovate at the protocol level. If you're a proprietary vendor trying to sell a box, you have to speak NFS or iSCSI to all your clients, because that's what the clients are running, and you end up bolting a legacy client-server protocol onto a scale-out architecture, which means you have to do all this forwarding and proxying and annoying stuff on the back end. Ceph and other open source solutions can innovate on the client side, so the client understands it's talking to an entire cluster of servers and can do it in a better way. Having a native client in the kernel, and RBD integrated into KVM and QEMU and so forth, gives us a lot of advantages.

And I think, ultimately, we're just part of an ongoing paradigm shift. As projects like OpenStack have demonstrated within the larger business community, people are realizing that these scale-out data center software platforms are better when they're open: you have all this collaboration, you have a lot of innovation happening in the space, and it's ultimately going to be cheaper and better and faster-moving and cooler than building or buying these proprietary options.