From YouTube: Powering CloudStack with Ceph RBD
Patrick McGarry, ApacheCon NA 2013 (Cloud Crowd)
Hi, good morning, still, I guess; timezone-appropriate greeting here. I'm here to talk a little bit about CloudStack and Ceph's RBD. To do that, I'm going to give you a little bit of an architecture overview of what Ceph is, how it works and how it fits together, and then at the end I can show you how it plugs into CloudStack, seeing as how we had such a great CloudStack introduction here just a minute ago.
So without further ado, we'll get started and talk about Ceph. First, a little bit about Ceph and why you should care about yet another distributed storage platform. The big three, obviously, are time, cost and requirements, and Ceph does things a little bit differently in a number of different ways. The first thing that we kind of wanted to take a crack at is time, especially with the number of challenges being thrown at any given DevOps person.
Storage systems sometimes involve a lot of manual data migration, load balancing, a lot of attention; they're like very small children sometimes, and you kind of need to give them a lot of attention. We wanted to make Ceph more of an adult, or at least an adolescent, so it doesn't need quite as much of your time. Along with that also comes painless scaling, scaling up and scaling down. We realize that, you know, storage needs change, your locations change, how you're handling your data changes.
You know, anything is wide open for change, so we wanted to be able to scale the storage system out to meet your needs dynamically, so you wouldn't have to kind of shut it down. You could just kind of plug and play, and the cluster would rebalance and adapt to whatever it was that you needed it to do. Of course, the big number in everybody's eyes is cost, right?
You want your storage to be a very linear, or as close to linear as you can get it, function of size and performance against how much money you're pouring into any given setup, and the nice thing about Ceph is that it's designed to run on commodity hardware, you know, rather than these forklift upgrades, and we can talk a little bit more about the architecture. But cost was a big feature for us, right? We wanted it to be open source.
We wanted it to stay open source, and in doing so, Sage Weil, the founder, set it up so that not only is it this copyleft license, you know, it's the LGPL v2, but also any contributor to the project owns their own contributions. So no matter what happens, nobody, no corporate entity, is going to be able to close Ceph down and charge crazy amounts of money for it. So, you know, you're not going to get that vendor lock-in, and you'll be able to keep doing that linear progression. So, say you want another host.
People want to start small, typically, small being a relative term depending on your organization and your use case, but you want to start small, make sure everything's working, get all the kinks ironed out of your system, and then you want the ability to scale that same system to, you know, infinity, with a little asterisk next to infinity. And so the way we wanted to scale was with heterogeneous hardware, right? Some of the enterprise storage stuff in the past, it's been: you know, I want a petabyte, I go to storage vendor X, they send me one, I forklift it in, and then when I need another petabyte it's another forklift, either to replace the previous one or another forklift of the exact same thing, which I'm going to spend, you know, N dollars times two on for the next one.
So we wanted to make it a lot easier. You know, you plug and play down to the disk level rather than giant racks or data centers, and then on top of that we wanted a lot more reliability and fault tolerance, because we're moving to this kind of heterogeneous hardware setup, you know, this commodity hardware.
So what is Ceph? Ceph is a distributed, unified storage platform, and it does primarily three things: object, block and file storage. The object side, typically we'd say it's an object store; you can talk to it with the native interfaces, but we also have a RESTful interface that you can get at it with, and I'll get more into that a little bit later. We have a block device; it's a thinly provisioned block device on top of your object store.
So what does it look like? It looks like this. At the heart of it all, Ceph is a distributed object store, right? It's RADOS, a Reliable Autonomic Distributed Object Store, comprised of a lot of these self-healing, self-managing, intelligent storage units, and you can see that if you can read the slides. So the base of it is this object store, and on top of that we give you a number of different ways to talk to it, to interact with it, and you'll see it.
You know, you could ask a million different people and you'd probably get a million different answers, but the biggest thing is, when we started looking at what we wanted to put down as the underlying technology, some people started with block devices, and they aggregate block devices and do crazy things, but we wanted objects, because it seemed to us that it was more useful.
It gives you names in a single flat namespace, you can have wildly variable sizes, and it gives you a very simple API with relatively rich semantics that you can work with, from the very base-level building block of what you're working with. We also found it to be, you know, more scalable than individual files, so you don't have to deal with that kind of hard-to-distribute hierarchy. You don't have to worry about how your objects are spanning across multiple different blocks and things like that, and the workload is very trivially parallel.
You can actually define rules for how you want your data placement, your data storage, to go, individually on those pools. So you can have one pool that says: I want three copies of everything in this pool, versus another where I want, you know, ten copies of everything in this pool. And then in those pools we have objects, obviously, huge metric piles of data, which hold the actual data itself.
You know, in these objects you can have blobs of data, which is, you know, bytes to gigabytes in size. You can have attributes assigned to those, bytes-to-kilobytes kind of thing, and then you can also store key/value bundles in there.
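As a rough illustration of that object model, here is a minimal sketch using the python-rados bindings. The ceph.conf path, the pool name 'data' and the object name are placeholder assumptions rather than anything from the talk, and the key/value (omap) bundles are left out to keep it short.

```python
import rados

# Connect to the cluster; the conffile path is an assumed default.
cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()

# Open an I/O context on an existing pool ('data' is a hypothetical name).
ioctx = cluster.open_ioctx('data')

# The data blob of an object: anywhere from bytes to gigabytes.
ioctx.write_full('my-object', b'hello from RADOS')

# A small attribute attached to the same object: bytes to kilobytes.
ioctx.set_xattr('my-object', 'owner', b'demo-user')

# Read both back.
print(ioctx.read('my-object'))
print(ioctx.get_xattr('my-object', 'owner'))

ioctx.close()
cluster.shutdown()
```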
To skip forward a little: usually, when you have a system, and we'll get into the architecture a little bit here, you have a given system, right? It's a human talking to a computer that has any number of disks, whether it's spinning rust, SSD or whatever. So it looks like this.
Well, a little bit more like this, right? You usually have huge numbers of people trying to get at your data that's sitting on your disks, and so that computer very quickly becomes a bottleneck. So Ceph does it a little bit differently: we aggregate a whole bunch of different machines and we just treat them as a big pile.
I had to explain it to someone fairly non-technical the other day, and I said it was like there was a pretty girl at a dance and there's a thousand guys like me who all want to dance with her, so we made her arbitrarily large so that everyone could dance with her at once. They didn't think that was a terribly good way to simplify it, but in a sense that's what we're doing; we're making a thousand copies of her, perhaps, would be a better way to say it.
So in your storage cluster you have a large number, you know, tens to tens of thousands, of these object storage daemons (OSDs). Typically we will run one object storage daemon per disk, and that's the heterogeneous hardware thing, whether that's an SSD or just a typical SATA drive or a RAID configuration of some sort. You drop your OSD down on top of that, and those are the things that are actually doing the serving of stored objects to your clients, and they do a lot of things.
There's some intelligence built into them. So rather than having, and we can get into the lookups and stuff later, but rather than having to worry about going through a single controller, your clients essentially go to the air traffic controllers, which are the monitors, and I'll talk about those in a second. They'll tell you where to get your data, and you'll go directly to that OSD, and then the OSD will intelligently peer with the rest of the cluster to worry about things like data replication, and data recovery when another OSD goes down. They're always talking to each other, it can get a little chatty, but the OSDs tend to have a certain amount of intelligence built into them, so that you don't have to worry about always having traffic come off your cluster to do things and go back; there's a lot of inter-cluster discussion.
These guys, the monitors, are the air traffic controllers. They're the ones maintaining the cluster state and authentication, and they're the ones providing consensus for this kind of distributed decision-making. They aren't actually involved in the data path; they're the ones that tell everyone where to go and how things are, what the current state of the cluster is.
So if you look a little closer, it kind of looks like this. The OSD nodes will typically be more than just a single disk and a single OSD: you'll have a machine that's running, those will have disks underneath them, and you'll have some sort of filesystem on top of them. We think the future should probably be Btrfs, that's where we'd like to be, but there are some performance concerns and such, and it isn't quite there yet. Most people are using something like XFS or, you know,
ext4, which is another good option. But you have some filesystem sitting on top of your disk, and then you drop your OSD on top of it, and, you know, maybe we're seeing anywhere from four to twelve disks in a machine for a single node, and then you have many of those nodes put together to form your cluster.
So what makes Ceph cool? Well, the one thing for me that was interesting, and continues to be interesting, both as a piece of technology and as an academically interesting pursuit, is CRUSH. CRUSH is kind of at the heart of what makes Ceph powerful. It's a pseudo-random placement algorithm: Controlled Replication Under Scalable Hashing. This is what allows Ceph to be really fast when it comes to things like lookup, data placement, data retrieval and things like that.
You can say, for example: I don't want any two copies to live on the same row in my data center. So it's actually aware of your topology, you know, your node, your data center, what it looks like, and the nice thing about it is that it's repeatable, it's deterministic. And so, you know, let's say I want to put some data into a cluster. I will go and figure out, talk to the monitors, who's in my cluster and who isn't, and I'll be able to say, okay.
You know, all of the OSDs know that that one's out, and the cluster knows how to rebalance your data, and it doesn't have to move a whole lot of data around, because when CRUSH changes it's going to know exactly where it needs to move things. So there's not a whole lot of data movement that needs to happen.
So how does this work? I want to push something into my cluster, so I talk to my monitors, I find out the state of my cluster, and I take this object, whatever it is, and split it up into a number of placement groups, tunable, arbitrary placement groups, and these individual placement groups then get pushed into the cluster based on CRUSH. As you can see here, pretty color coding, it goes and it looks at all of my OSDs. In this case I have ten OSDs, and, I think I set a replication level of two, it takes two copies of each one of these and drops them onto separate OSDs.
So really, at a slightly higher level: you're pushing your data through CRUSH, and the algorithm decides where it goes for each individual placement group. Then you take that placement group, you push it to an OSD, that OSD will intelligently peer with wherever the CRUSH algorithm says the other copy of your data should live, pushes it there for you, and then it returns and says: okay, you successfully stored that placement group, move on to the next one.
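That object-to-placement-group-to-OSD flow can be sketched with a deliberately simplified toy. This is not Ceph's actual implementation (CRUSH uses its own hash, a stable modulo and the real cluster map with failure-domain rules); it only shows the shape of the idea, and every name and number below is made up for illustration.

```python
import hashlib

PG_NUM = 64             # placement groups in the pool (tunable)
OSDS = list(range(10))  # ten OSDs, as in the example on the slide
REPLICAS = 2            # replication level of two

def pg_for_object(name):
    # Hash the object name into a placement group. Ceph uses its own
    # hash function and a stable modulo; a plain hash-mod is close
    # enough to show the concept.
    h = int(hashlib.md5(name.encode()).hexdigest(), 16)
    return h % PG_NUM

def osds_for_pg(pg):
    # Stand-in for CRUSH: deterministically pick REPLICAS distinct OSDs
    # from the PG id. Real CRUSH also honours placement rules such as
    # "no two copies on the same row of the data center".
    return [(pg + i * 7) % len(OSDS) for i in range(REPLICAS)]

obj = 'vm-disk-chunk-42'
pg = pg_for_object(obj)
print(obj, '-> pg', pg, '-> osds', osds_for_pg(pg))
```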
So what happens if something breaks? This is what I was talking about. Let's say this OSD here, the one that's shaded out, it's hard to see, let's say the OSD holding this red and yellow placement group decides to catch on fire, alien invasion, somebody trips over a cord, and it goes down. Your cluster is aware that that particular OSD has gone down, and then the two that are carrying the copies of that data say: hey, I've got the copy of the data.
So let's revisit a bit of how we're talking to this cluster, this object store. There are really four different ways that you can talk to the object store, the first being, obviously, librados. This is our native way to talk to the object store. It has a lot of different features that kind of define why it's cool, but there are bindings for C, C++, Python, Java, PHP, whatever; there are a number of different ways you can talk to it, and it allows you to have atomic, single-action transactions.
The next thing that you can use is the gateway, and this is the other way to talk directly to your object store; it's just a little bit different way to do it. It's a RESTful gateway, so you can talk to it over HTTP, and we've actually implemented both the S3 and Swift APIs. So if you have something that already uses S3 as its endpoint, or uses OpenStack Swift,
if you have one of those things, you can spin up a Ceph cluster, create a RADOS gateway, or a number of them with a load balancer in front of those gateways, change the endpoint, and no one will ever know the difference.
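Because the gateway speaks the S3 API, pointing an existing S3 client at it really is mostly a matter of changing the endpoint. Here is a small sketch using the boto library; the hostname, credentials and bucket name are placeholders, not anything from the talk.

```python
import boto
import boto.s3.connection

# Point an ordinary S3 client at the RADOS gateway instead of Amazon.
conn = boto.connect_s3(
    aws_access_key_id='ACCESS_KEY',      # radosgw user's access key (placeholder)
    aws_secret_access_key='SECRET_KEY',  # radosgw user's secret key (placeholder)
    host='rgw.example.com',              # your gateway or its load balancer
    is_secure=False,
    calling_format=boto.s3.connection.OrdinaryCallingFormat(),
)

bucket = conn.create_bucket('demo-bucket')
key = bucket.new_key('hello.txt')
key.set_contents_from_string('stored over the S3 API, backed by RADOS')

for k in bucket.list():
    print(k.name, k.size)
```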
And then there's the filesystem, which I didn't want to go into too deeply today, because CephFS is still a little Wild West-y. It's not where the majority of our work has been focused.
There's a lot of love coming for it in the near future, but it's a POSIX-compliant distributed filesystem that is based on the object store, right? So it allows you to have a distributed filesystem that you can mount from a number of different places, backed by this distributed object store, which gives you all kinds of really cool stuff, but it's just not there yet. So that brings us to RBD, finally, the part that everyone came here to hear about.
It allows you to store disk images in RADOS, really, is what it comes down to, but it kind of allows you to decouple the VM from the host, because you have this distributed storage platform that you're storing things on. So you have images, VM images or disk images or whatever it is, that get striped across your entire cluster, split up into those placement groups we were talking about. This allows you to do some really cool things. With RBD you have the ability to do snapshots, and, because it's a distributed platform, you can do things like copy-on-write cloning and live migration, and there's support in, as you can see, QEMU/KVM; there's a driver in the mainline Linux kernel after 2.6.39, where you can just mount it right out of Linux; and then there's support for CloudStack and OpenStack, and then there's XenServer stuff that's still being ironed out.
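A minimal sketch of creating one of those thin-provisioned images with the python-rbd bindings; the pool name, image name and size are placeholder assumptions.

```python
import rados
import rbd

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
ioctx = cluster.open_ioctx('rbd')   # hypothetical pool holding VM disks

# Create a 10 GiB image. It is thinly provisioned, so no space is
# consumed until data is actually written, and the image is striped
# over many RADOS objects spread across the cluster.
rbd.RBD().create(ioctx, 'vm-disk-1', 10 * 1024 ** 3)

with rbd.Image(ioctx, 'vm-disk-1') as image:
    image.write(b'first blocks of a guest disk', 0)  # write at offset 0
    print(image.size())

ioctx.close()
cluster.shutdown()
```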
So what does it look like? This is what I was talking about: you have some disk that's mounted somewhere, as you can see, and then it's broken up into a logical number of components and split across your Ceph cluster. Really, the use case is running VMs, right? And so that's kind of how it ends up looking when you have a VM running in CloudStack and you have the disks behind it.
They get split up across your Ceph cluster, and this does a number of really cool things for you. If you have an extremely large disk, or a really, really busy disk, it doesn't care, because it's split across a number of different hosts, so it helps to even out some of your hot spots and whatnot. You don't really have to worry about that as much, because Ceph, again, has some intelligence built into it to even out those hot spots. How are we doing on time? Oh.
So the idea, obviously, is that you have all of those distributed objects; librados puts all those objects together into the block device. librbd, excuse me, then puts those together for a virtualization container, and then the container exposes it to the VM. The long and short of it is that this allows you to have something that is essentially Amazon's Elastic Block Store: you get your own. And because it's a shared environment, and this is what I was talking about before, you can do really fun things like migrating a running instance between hosts, right?
And then, yeah, the driver in the mainline Linux kernel allows you to map it as a native, you know, /dev/rbd0 or whatever; you can just mount it as a normal device.

So what's this copy-on-write cloning? I'm surprised at the number of questions I get about copy-on-write cloning. The best example, obviously, is making a golden image. Let's say I have an Ubuntu 12.04 image that I want to make available to my CloudStack instance, and usually it's not spinning up one copy, it's: I want to spin up 100 copies of this instance. So what this allows me to do is spin up a hundred copies of this instance, but I do it instantly and I don't actually copy anything. I just use this golden image and I instantiate 100 copies of it, and it takes up zero additional storage. So now I have, in this case, four copies, and it's taking up no additional storage. What that means is that each of these individual machines will boot up, and whatever end user is in control of that particular machine will start writing data to it. The only thing that's going to take up new storage now is the data that you're writing to it. And then, when they go back and want to read things from that particular instance, if it's something that's changed, they'll read it from their copy; if it hasn't changed, it'll just read straight through to the golden copy, if you will. And so that's copy-on-write cloning.
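That golden-image workflow maps onto RBD's layering calls: snapshot the base image, protect the snapshot, then clone it as many times as you like, with each clone initially consuming no extra space. A sketch with python-rbd; the image and snapshot names are placeholders, and the parent has to be a format-2 RBD image for layering to work.

```python
import rados
import rbd

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
ioctx = cluster.open_ioctx('rbd')

# Snapshot the golden image and protect the snapshot so clones can
# safely layer on top of it.
with rbd.Image(ioctx, 'ubuntu-12.04-golden') as golden:
    golden.create_snap('base')
    golden.protect_snap('base')

# Create 100 copy-on-write clones. Each clone appears instantly and
# stores only the blocks its guest later writes; unchanged reads fall
# through to the golden image.
for i in range(100):
    rbd.RBD().clone(ioctx, 'ubuntu-12.04-golden', 'base',
                    ioctx, 'vm-%03d-root' % i)

ioctx.close()
cluster.shutdown()
```

The read-through behaviour described above is exactly what the clone gives you: unchanged blocks are served from the protected parent snapshot, and only modified blocks occupy new space.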
[Audience question.] Okay, so the question is: does this lead to performance issues, because you have a hundred copies that are all reading from the same base image? And the answer really is no, because that base image is actually split into a huge number of placement groups across a wide number of machines, and so, I mean, as long as your network is relatively reliable and fast.
That means, when you spin up a CloudStack, and if you were here during the last talk you'll know a lot about this, but when you spin up your CloudStack you still need the very small NFS share to serve those system VMs initially, but then, after that, your primary storage can be Ceph RBD. Later versions are coming where you won't need that NFS and you'll be able to use Ceph for everything, but we're not quite there yet.
There's no support right now for VMware or Xen, and there's really no plan to add it. The guy who wrote the integration, and I'll get to him in a minute, isn't from Inktank; he doesn't work directly for us, and he's doing it for a very specific use case. There are no real plans to build in support for VMware, and patches are always welcome. I would love to see someone else get really deeply involved and help Wido with a bit more of the building out of the Ceph and CloudStack integration.

[Audience question.]
So the live migration stuff that I was talking about is supported, but the CloudStack integration doesn't have the ability to do snapshots from Ceph yet. That one is coming; actually, it's already written, Wido's just waiting for the database backend refactor that's coming in for that too. So yeah, that's CloudStack right now, and the setup is actually really easy.
You just choose RBD and you point it at the particular place where your monitor is, or whatever it is that you want to plug into, and then you fill in your authentication info for cephx, and maybe tag it so you can do some stuff with it later. But there's really nothing special about it: as long as you can spin up CloudStack and you can spin up Ceph, it's really easy to plug the two of them together.
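To give that a shape, here is a rough sketch of the pieces CloudStack asks for when you add Ceph RBD as primary storage. All of the values are placeholders, and the URL form at the end is an assumption about how those fields are commonly combined for the createStoragePool API; check the CloudStack and Ceph documentation for your versions rather than relying on this.

```python
# The fields behind CloudStack's "add primary storage" dialog for RBD,
# collected here only to show the shape of the configuration.
rbd_primary_storage = {
    'protocol': 'RBD',
    'monitor':  'mon1.example.com',  # address of a Ceph monitor
    'pool':     'cloudstack',        # RADOS pool that will hold the VM disks
    'user':     'admin',             # cephx user
    'secret':   'CEPHX_SECRET_KEY',  # cephx key for that user
    'tags':     'rbd',               # optional storage tag, as mentioned in the talk
}

# Roughly equivalent storage-pool URL (the format is an assumption):
url = 'rbd://{user}:{secret}@{monitor}/{pool}'.format(**rbd_primary_storage)
print(url)
```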
So what's next? The snapshot and backup support is probably going to be able to come in for 4.2, with that new storage code refactor stuff; that's kind of a 4.2 thing. Most of the underlying stuff is already written, so it's just a matter of making sure that the way they did the new storage stuff doesn't break anything. Cloning, or, if you're European, I was informed that it is actually called "layering" support, so the copy-on-write clone stuff that I was talking about,
that's also going to be coming in the future, probably, well, maybe not for 4.1, but probably before 4.2. And then there's Ceph support for being able to be secondary storage, so the storage of your images themselves, the image catalog, and for backup storage; backup storage is new with 4.2, I guess, I don't know a whole lot about it, but the secondary storage also will be coming with 4.2, and that's actually a great use case for that gateway, for your backup storage stuff.

So, who's to blame? The guy who actually wrote the integration, his name is Wido den Hollander. He's one of our partners in Europe, and we actually like it this way, where we're not writing all the integrations. We would much rather see the community not only write them but own those integrations. We're happy to help with whatever people are interested in doing or building, but we would much prefer that it's not all centrally located.
You know, if our LA office, you know, if the big one hits and our LA team gets swept out into the ocean, we want to make sure that the distributed knowledge is there. But if you have questions on the integration between CloudStack and Ceph, this is the guy to ask. He hangs out on our IRC channel, so if you want to, come to our IRC channel, #ceph on irc.oftc.net, or hit his website, 42on.com, if you're a European dude or chick and you want to talk about CloudStack and Ceph.
He has a company there, 42on; that's what they do, they spin up Ceph. That's Wido, yeah. And, you know, we work with them in Europe and we kind of throw stuff over the wall at each other. But yeah, he's the guy who wrote the integration, he is actively working on it, and I know he would welcome help or patches if you wanted to do anything that he isn't currently doing. So that's it, I guess; I busted through that pretty quick. Are we good for questions? Yeah, fire away.
[Audience question.] At that point it's just a question of your cluster implementation, right? Because that gateway that's providing the ability to do the S3 stuff, you can spin up as many gateway machines as you want and load balance them however you want. The actual number of objects within Ceph itself is, I hate to use the word, but it's essentially infinite, right? It depends; you just have to scale out more machines, more disks, more OSDs.
[Audience question.] That's a much longer question, or answer, than I probably have time for, and the guy to talk to about that, actually, is Mark Nelson; he's our performance guy. He does all things performance, and he's done a number of really nice, lengthy blog entries on the Ceph blog. But if you'd like to know more specifics, hit me up afterwards and I'll put you in touch with Mark, and he can give you the brain dump of all brain dumps.
[Audience question.] As far as I know, there isn't any explicit work being done on QoS in terms of something like that. I know that there are a number of... but, I mean, the short answer is that there are some things that you can do with Ceph where you can tell it how to handle particular data or requests, or kind of where those things are supposed to live and how they're supposed to get there, but there's no real answer for QoS right now. That's the short answer. Any other questions?
[Audience question.] That is another one of those where I'm going to add an asterisk to the answer: it's highly dependent on things like your network. The actual size of your placement groups is tunable; I'm not sure what the minimum placement group size is off the top of my head, but yeah, you can tune that based on your particular network infrastructure to speed that up.
[Audience question.] No, the OSDs are aware of who's in and who's out, because there are multiple monitors and they're talking to each other all the time, and if it hits a timeout and it can't talk to a particular OSD after a certain amount of time, which is also configurable, it'll say this OSD is down, it'll mark it as down, and then it will start migrating data.
So yeah, it's all configurable, and that's actually why we created Inktank, the company. Ceph is incredibly powerful, but, you know, a huge amount of power in any system also means there are tons and tons of tunables and configurables and stuff like that, and we were getting so many questions about "hey, come in and tune my cluster" that that's why we finally made the decision to say we need a company to do that.
All right, I don't want to say yes, because I don't know for sure. I liked Steve's talk about how the guy teaching is not necessarily the guy with all the answers; I'm far from the guy with all the answers. Shoot me an email and I'll get you the right answer. Any other questions? All right, great. Thanks, everybody.