From YouTube: FOSDEM 2014 - Ceph
Still, I haven't said anything yet, so don't clap your hands; let's wait for the end of the presentation. So during the next 15 minutes we will be discussing several topics: the state of the integration of Ceph into OpenStack, and Ceph itself. So, jumping directly in: who of you is familiar with Ceph? Let me just briefly introduce Ceph itself, what it does.
Oh okay, that was unexpected. Okay, well, anyway: Ceph is a project that was started in 2006 by Sage Weil.
It is obviously open-source software, under the LGPL license, and it's written in C++. Ceph is a unified distributed storage system, and it has several capabilities. The first one is that it's self-managing: we have this mechanism running on the cluster called scrub, where we periodically check the consistency of objects and compare hashes of the master version against the other replicas. It's self-healing as well, because as soon as something goes wrong, we just re-replicate the affected objects: we calculate the location of the object and then we move it around. It's self-balancing too: because we tend to have a uniform distribution of the data, as soon as we add a new node or a new disk, everything gets spread around the entire cluster, which is quite efficient.
And the one thing that makes Ceph really unique is not even written there, because of the, well, display issues. But what makes Ceph so unique is the feature that is called CRUSH. Sorry, CRUSH stands for Controlled Replication Under Scalable Hashing, which means that every time a client wants to do an I/O operation, we just compute the location; everything is based on calculation. We don't do any lookups in a hash table, and that makes the whole thing repeatable and deterministic.
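To make that idea concrete, here is a minimal sketch in Python of calculation-only placement. This is not the real CRUSH algorithm, just an illustration of the principle: the object name is hashed to a placement group, and the placement group is hashed against each storage daemon, so any client with the same inputs computes the same location without consulting a lookup table. All names and parameters here are hypothetical.

```python
import hashlib

def stable_hash(key):
    # Stable across processes (Python's built-in hash() is salted per run).
    return int(hashlib.md5(key.encode()).hexdigest(), 16)

def locate(obj_name, pg_count, osds, replicas=3):
    """Map an object name to an ordered list of OSDs by pure calculation."""
    pg = stable_hash(obj_name) % pg_count  # object -> placement group
    # Rank every OSD by a hash of (pg, osd) and keep the top `replicas`.
    ranked = sorted(osds, key=lambda osd: stable_hash("%d-%s" % (pg, osd)),
                    reverse=True)
    return ranked[:replicas]

# Every client computes the same answer; no central table is involved.
print(locate("vm-disk-42", pg_count=128, osds=["osd.%d" % i for i in range(12)]))
```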
So that's one of the good points. It works with what we call the CRUSH map, and within this CRUSH map we have all the information about the whole infrastructure: we have all the disks, and all the buckets where we have all the nodes. This means that it is topology-aware, so you can have your data center, you can describe your design down to the nodes, and say it is set up like this:
I have this number of nodes in this specific rack. And thanks to this, you can allocate portions of your storage to a dedicated class of service. What you can do is specify that you have one rule that points to a specific set of hardware. This could be an entire rack of SSDs, or an entire rack of SATA disks, and as soon as you compute an I/O operation, the data is written into either the SSD rack or the SATA rack.
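Continuing the toy sketch from above (and reusing its locate() function; this is hypothetical pseudo-topology, not real CRUSH map syntax), a rule can be modeled as restricting the candidate OSDs to one bucket of the hierarchy before the deterministic ranking is applied:

```python
# Hypothetical topology: each bucket names the OSDs of one hardware class.
CRUSH_MAP = {
    "rack-ssd":  ["osd.0", "osd.1", "osd.2", "osd.3"],
    "rack-sata": ["osd.4", "osd.5", "osd.6", "osd.7"],
}

# A "rule" simply pins a pool to one bucket of the hierarchy.
POOL_RULES = {"fast-volumes": "rack-ssd", "cold-archive": "rack-sata"}

def locate_in_pool(pool, obj_name, pg_count=128, replicas=2):
    candidates = CRUSH_MAP[POOL_RULES[pool]]  # restrict to the rule's bucket
    return locate(obj_name, pg_count, candidates, replicas)

print(locate_in_pool("fast-volumes", "vm-disk-42"))  # always lands on SSDs
```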
So, let's move ahead to the general design of Ceph. As you can see, Ceph is built upon what we call RADOS, the Reliable Autonomic Distributed Object Store. Everything in Ceph is stored as an object, and we just build layers on top of the RADOS object store. Everything is possible thanks to librados, a library that has several bindings (Python obviously, PHP, well, nearly any language) where you can just plug in your own application. That's what the guys from Singapore do, for example: they just built their own application using librados, and this is how they store objects into Ceph.
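As an illustration, storing an object through the Python binding of librados looks roughly like this. It's a minimal sketch: the pool and object names are made up, and it assumes a reachable cluster with a standard /etc/ceph/ceph.conf.

```python
import rados

# Connect to the cluster using the local ceph.conf and default keyring.
cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
try:
    ioctx = cluster.open_ioctx('app-data')  # hypothetical pool name
    try:
        ioctx.write_full('greeting', b'hello ceph')  # store an object
        print(ioctx.read('greeting'))                # read it back
    finally:
        ioctx.close()
finally:
    cluster.shutdown()
```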
The second component is called the RADOS Gateway. It's just a RESTful API, the exact same thing as Amazon S3 and OpenStack Swift. It has support for users, quotas, multi-region capabilities and, yeah, that's... oh, I could go on with the DR possibilities and everything, but no.
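Since the gateway speaks the S3 dialect, a stock S3 client can talk to it directly. A minimal sketch with the boto library follows; the endpoint, credentials and bucket name are placeholders.

```python
import boto
import boto.s3.connection

# Point a plain S3 client at the RADOS Gateway endpoint.
conn = boto.connect_s3(
    aws_access_key_id='ACCESS_KEY',
    aws_secret_access_key='SECRET_KEY',
    host='radosgw.example.com',
    is_secure=False,
    calling_format=boto.s3.connection.OrdinaryCallingFormat(),
)
bucket = conn.create_bucket('demo-bucket')
key = bucket.new_key('hello.txt')
key.set_contents_from_string('hello from radosgw')
print([k.name for k in bucket.list()])
```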
The next one is the QEMU driver, which is compatible with both KVM and Xen. RBD images have numerous features: they are thin-provisioned, and they know about snapshotting and cloning, so that makes the whole thing really efficient. You can easily put a virtual machine on it and then, if you want, just clone it, do a copy-on-write clone, and that makes the whole process really fast.
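A rough sketch of that snapshot-then-clone workflow through the Python RBD binding; the pool and image names are made up, and the layering feature has to be enabled for copy-on-write clones to work.

```python
import rados
import rbd

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
ioctx = cluster.open_ioctx('rbd')

LAYERING = 1  # RBD feature bit required for copy-on-write clones
rbd_inst = rbd.RBD()
rbd_inst.create(ioctx, 'golden-image', 10 * 1024 ** 3,
                old_format=False, features=LAYERING)

image = rbd.Image(ioctx, 'golden-image')
image.create_snap('base')    # snapshot the master image
image.protect_snap('base')   # clones need a protected snapshot
image.close()

# The clone is near-instant: it shares unmodified blocks with its parent.
rbd_inst.clone(ioctx, 'golden-image', 'base', ioctx, 'vm-disk-1',
               features=LAYERING)

ioctx.close()
cluster.shutdown()
```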
And the last piece is a distributed file system, a POSIX-compliant distributed file system. It supports snapshotting of directories as well, and it also has a feature that is load balancing across directories. When the MDS daemon, which is the one responsible for all the metadata of your cluster, sees that one of your directories is more I/O-intensive than another one, the cluster will allocate a specific daemon, a specific metadata daemon, to work on it. This is what we call subtree partitioning. And that's it for the whole overview of Ceph.
So now, let's jump into the state of the integration. The current release of OpenStack is Havana, so this is what we currently have. I truly hope that most of you are familiar with OpenStack; if you're not: Nova is the compute part, the one that is responsible for running and booting virtual machines and allocating resources. Glance is mainly the image store, the catalog that stores all the images. And Cinder is the component responsible for block devices: you just create block devices and then you can attach them to virtual machines.
After this, you can also apply several QoS functionalities at the hypervisor level, so you can really efficiently restrict the I/O operations that go from your client to the hypervisor. Something that also came with Havana is Cinder backup. It's just a new process that, well, backs up volumes, basically. And the really good thing with Ceph is that we have several ways to do this backup. Either you can use the same pool, meaning the same set of hardware (but that's not really recommended),
or, well, a different pool or even a different cluster, and this is the main thing that you want to do for DR purposes. The good thing is that we use the differential capabilities of RBD: first you just do an initial, complete backup, and then we just do a diff on the blocks of the volume.
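To illustrate the differential pass, here is a rough sketch using the Python RBD binding's diff_iterate, which reports the extents that changed since a given snapshot. The pool, volume and snapshot names are made up, and the real Cinder backup driver is more involved than this.

```python
import rados
import rbd

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
ioctx = cluster.open_ioctx('volumes')

changed = []

def record(offset, length, exists):
    # exists is False for extents that were discarded/zeroed.
    changed.append((offset, length, exists))

image = rbd.Image(ioctx, 'volume-1234')
# Collect only the extents modified since the 'backup-1' snapshot.
image.diff_iterate(0, image.size(), 'backup-1', record)
print("extents to ship:", changed)

image.close()
ioctx.close()
cluster.shutdown()
```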
So that's the current state of the integration. And obviously we also support live migration: since the storage is shared, we just have to migrate the workload, and the disk itself remains on the Ceph side.
So, as you can see, the whole thing is quite efficient, because Ceph unifies all the other OpenStack components, and we really differentiate the storage part from the application part, the software part: we have this storage layer and this on-top layer. So that's a really interesting design. And (I'm already at nine minutes) Havana, however, came with a catch that was not on purpose: the feature that we wanted the most, seamlessly booting virtual machines into Ceph, is quite buggy. Hopefully we...
Okay, I have another slide after this one that summarizes everything, but for Icehouse, what we would like to have is: as soon as you boot a virtual machine, if the image is already stored in Glance, then you just do a copy-on-write clone. Because right now, when you boot a virtual machine, the compute host has to download the image and then re-import it into Ceph, and this is really inefficient.
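The desired behavior is essentially a cross-pool copy-on-write clone: the same clone primitive as shown earlier, with the parent in the Glance image pool and the child in the pool the hypervisor boots from. A hypothetical sketch (pool, image and snapshot names are assumptions, and the parent snapshot must already exist and be protected):

```python
import rados
import rbd

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()

images = cluster.open_ioctx('images')  # pool Glance writes to (assumed name)
vms = cluster.open_ioctx('vms')        # pool the hypervisor boots from

# Instant boot disk: a copy-on-write child of the Glance image snapshot,
# instead of downloading the image and re-importing it into Ceph.
rbd.RBD().clone(images, 'cirros-0.3.1', 'snap', vms, 'instance-0001_disk',
                features=1)  # 1 = layering

vms.close()
images.close()
cluster.shutdown()
```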
Something that we would like to have as well is the volume migration functionality, where you have several volume types (one type is, for example, volume-ssd or volume-sata) and then you can change your volume type on the fly: the storage will move your volumes to the other type, and then you will be charged less. Something that's not going to be in Icehouse, because the project is not even incubated into OpenStack yet, is Manila. Manila is the distributed-file-system-as-a-service project, and we could do this with CephFS.
However, OpenStack also has a bare-metal functionality, where you can boot virtual machines, except they are not virtual machines, they are like physical hosts; so you can dedicate whole hosts to clients, basically. And then, to attach block devices, what you could simply do is use the RBD kernel module and map a block device on this host, and that's it.
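On such a bare-metal host this doesn't involve the hypervisor at all; the in-kernel RBD client maps the image to an ordinary block device. A sketch driving the stock rbd CLI from Python (pool, image and client names are placeholders):

```python
import subprocess

# Map the image through the in-kernel RBD client; it appears as /dev/rbd*.
subprocess.check_call(['rbd', 'map', 'volumes/vol-1234',
                       '--id', 'baremetal', '--keyring',
                       '/etc/ceph/ceph.client.baremetal.keyring'])

# Show the resulting device node, e.g. /dev/rbd0.
subprocess.check_call(['rbd', 'showmapped'])
```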
The last feature that we would like to see (and to be honest with you, when I did this talk at the OpenStack summit, I wasn't really convinced that we could do it for Icehouse, but we already started to work on it, and, well, we've made major progress on it) is the ability to use the Swift API and to use the RADOS object store as a backend. It's not that we want to get rid of Swift or anything; it's just that we want to continue the unification of the storage layer with OpenStack. So I believe that for Icehouse you will be able to use the Swift API and do Swift API calls, and then on the backend side, you won't know about it, but it's going to be stored in Ceph. And this is a really, really cool feature.
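From the client's point of view nothing changes; the same Swift calls just land on Ceph. For instance, with python-swiftclient (the endpoint, account and credentials below are placeholders):

```python
import swiftclient

# A plain Swift client; it cannot tell whether RADOS is behind the API.
conn = swiftclient.client.Connection(
    authurl='http://proxy.example.com:8080/auth/v1.0',
    user='demo:swift',
    key='SECRET_KEY',
)
conn.put_container('backups')
conn.put_object('backups', 'notes.txt', contents=b'stored in ceph')
headers, body = conn.get_object('backups', 'notes.txt')
print(body)
```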
So this is where we are now. It's more like a mid-course progress table: the RADOS backend is already in progress.
The DevStack support as well: if we want to have more developers involved in both OpenStack and Ceph, the first step is to have a DevStack environment, to see how it is configured with both Ceph and OpenStack. So this is one step forward for newly arriving developers. The same goes for the other hypervisors: as you may know, OpenStack supports several hypervisors, Xen, KVM, but also proprietary hypervisors like VMware, and, yeah (two minutes), VMware relies mainly on iSCSI.
There is an implementation of tgt that exports RBD blocks through TGT, so you can just use this implementation, because the iSCSI target under the hood is just an RBD block. So we could implement this for the VMware hypervisors, VMware blocks, and then we could make use of VMware virtual machines running on top of Ceph, and enable cloning backends.
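A sketch of how such an export might be wired up with stgt's rbd backing-store support, driven from Python; the target name, pool and image are placeholders, and it assumes a tgt build that includes the rbd bs-type.

```python
import subprocess

def run(*args):
    subprocess.check_call(['tgtadm', '--lld', 'iscsi'] + list(args))

# Create an iSCSI target and back its LUN directly by an RBD image.
run('--mode', 'target', '--op', 'new', '--tid', '1',
    '--targetname', 'iqn.2014-02.com.example:rbd-demo')
run('--mode', 'logicalunit', '--op', 'new', '--tid', '1', '--lun', '1',
    '--backing-store', 'volumes/vol-1234', '--bstype', 'rbd')
# Allow any initiator to connect (fine for a demo, not for production).
run('--mode', 'target', '--op', 'bind', '--tid', '1',
    '--initiator-address', 'ALL')
```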
B
Just
what
I
said,
cloning
yeah
one
minute
left,
I
know
that's
over
there
I'll
be
sharing
the
slides,
and
I
think
you
already
got
the
main
idea
anyway,
because
I
blend
that
whoa
last
but
not
least,
fireflies
coming
up
during
this
during
the
during
februari.
That's
the
next
stable
release
of
SEF.
That's
going
to
be
the
first
LTS
version
for
long-term
support.
Tiering will be available at last, and also erasure coding: think RAID 5 over a distributed system. As for the filesystem backend: by default, as soon as you store an object in Ceph, it stores it as a file on a filesystem, but we can do this more efficiently using key-value backends like RocksDB or LevelDB, or also NVMKV from Fusion-io.
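To give a flavor of the erasure-coding idea (k data chunks plus parity chunks instead of full replicas), here is a toy XOR-parity example in Python. Real Ceph pools use pluggable erasure-code libraries, not this.

```python
def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

# Split an object into k=2 data chunks and m=1 parity chunk.
data = b'abcdefgh'
k1, k2 = data[:4], data[4:]
parity = xor(k1, k2)

# Any single lost chunk can be rebuilt from the other two, at 1.5x
# storage overhead instead of 3x for triple replication.
assert xor(k2, parity) == k1
assert xor(k1, parity) == k2
print("recovered:", xor(k2, parity) + k2)
```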
So we won't need to use the filesystem anymore to store files. And that's it, and you have no time left to ask me questions, so I'll be outside. Come find me, yeah, I'll be outside and around.