Description
This video shows how I built my OpenStack Icehouse test lab under ESXi 5.1 and the way it works connected to a Ceph cluster for storing Glance images and Cinder block devices.
All the machines are running CentOS 7, which is not fully supported at the moment.
Well, when I tried to put it to work, I first tried some tools for automation like Packstack, which uses Puppet and other things like that, but I wanted to give the manual installation a try, following the official documentation from OpenStack. So I have installed this example architecture, but using a fourth node for Cinder, for block devices.
So I followed this guide for installing the system, using CentOS 7 and the RDO repos.
This is the idea I got from the documentation for building this environment. I have one controller node, one network node, one compute node and a fourth one, Cinder, for block devices, and I also created this interface configuration with three networks: one for management, one for instance tunnels and another one for external connections.
I also have a fourth network for storage, which is used by the Ceph machines. Well, this is the conceptual architecture from the documentation; I have built this lab here.
It is okay; everything is running under a VMware ESXi hypervisor, and here you've got, on the left, the different networks, which are all connected to the same physical interface: storage, external, instance and management.
Well, for the OpenStack environment, these are the machines in the red square, the ones working inside the OpenStack environment. One is the controller node, with just one network interface, in the management network. The network machine, with Neutron, has three network interfaces: one for management, one for instance traffic and another one for the external network.
This last one has no address; it's just for bridging the virtual machines. And that's all about the network node. The compute node has two interfaces, one in the management network and one in the instance network, and lastly the block device node, the Cinder one, has just one interface, in the management network.
Well, this machine here, the Ubuntu cloud one, is going to be an instance running inside the compute node, inside the OpenStack environment. And now for the Ceph environment.
I also have two OSDs, which are connected there, and no, they don't have a public interface. I'm going to show you where I got these machines: they are Kimsufi servers from this provider, cheap servers, just for testing.
I got these machines here for just 10 euros. I don't have any kind of commercial relationship with Kimsufi or OVH; I just use these machines because they are cheap and they actually work well.
This is my environment, and this worked for me. So I took two of these machines and I installed CentOS 7 on them to work as OSDs.
Well, if you don't know how Ceph works, I'll explain it a bit later, but these two machines are physical machines, externally hosted. And this one, purpo, is also there, because that machine has two interface cards, one for the storage network and one with public access.
Why did I do this? Because I haven't managed to get the Ceph monitor working behind NAT with just a private address; I didn't manage to make it work. So I preferred to give it another interface, directly connected, with a public IP address. This way it works, and well, it was not a problem to get just one more IP address.
There are two installations, the quick one and the manual one, and I followed the quick one. There is a first step, the preflight, where you can check if you have all the necessary infrastructure for building a simple Ceph cluster, and here in the preflight there is a simple diagram where you can see the simplest way to create a Ceph node.
The Ceph are also the bad guys from the Crysis game, but that's another story.
Okay, back to work. Using the one admin node, you can install the different roles on the other nodes using SSH sessions without a password, with a public and a private key, and it's quite simple to make it work. So, these two nodes, the OSDs, are the nodes which really store the data.
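As a sketch of that admin-node workflow (the hostnames here are placeholders, not the ones from my lab), the passwordless SSH setup plus the quick-start ceph-deploy calls look roughly like this:

```shell
# On the admin node: create a key and push it to each future cluster node,
# so ceph-deploy can log in without a password (hostnames are examples).
ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa
ssh-copy-id ceph@osd1
ssh-copy-id ceph@osd2

# Bootstrap the cluster from the admin node, quick-start style.
ceph-deploy new mon1                 # write an initial ceph.conf with mon1 as monitor
ceph-deploy install mon1 osd1 osd2   # install the Ceph packages on every node
ceph-deploy mon create-initial       # deploy the monitor and gather the keys
ceph-deploy osd prepare osd1:/opt/osd osd2:/opt/osd
ceph-deploy osd activate osd1:/opt/osd osd2:/opt/osd
```

The /opt/osd paths match the fact that my OSD data lives under /opt; in a real deployment you would normally point prepare/activate at dedicated disks.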
But this is a lab: don't expect total stability and good performance, because it's a lab; I just wanted to see what Ceph does and what OpenStack actually does. So here, choco is the admin node; from this machine I have run the various ceph-deploy commands, and from this machine I have created the other ones. Now, some simple Ceph commands. If we type ceph df, it's like the df command in Linux: I'm getting all the pools I have created in Ceph, the storage used in each pool and the objects stored in each pool.
If we go to one of the OSDs, all the information, all the data is there; I selected the /opt directory. So here in this directory I have the journal and the OSD information, and inside the osd directory, inside current, there is the data. So there are several simple commands for Ceph, for knowing how it is working.
Another one is ceph -w, where you can get the info on the actual transactions being made in the cluster. So the health is OK, and as you can see, you can see the monitor and the number of OSDs which are up, meaning reachable, and in, meaning configured inside the cluster.
And health is OK, nobody has to go to the doctor yet. Another simple command would be ceph osd tree, where you can see all your OSD machines and whether they are up or not. Okay, following the guide.
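To recap, these are the status commands I keep coming back to on the admin node:

```shell
ceph df        # per-pool usage: how much space and how many objects each pool holds
ceph -w        # live view: streams cluster events and I/O as they happen
ceph osd tree  # the OSD map: every OSD host and whether each daemon is up or down
ceph health    # one-line summary: HEALTH_OK, or a warning when something is degraded
```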
Well, this is the Ceph cluster up and running and working, with two OSDs, and this cluster is ready to work. You can see here, as I told you, that the machines in Kimsufi have two terabytes each.
So it's a very big amount of gigabytes for just 20 euros; for a lab environment it's enough, it's okay. Well, this is the Ceph cluster. Now, I have followed this guide.
Okay, "ceph openstack rdo", give me a second to search for it. Okay, here: using Ceph for Cinder with RDO Havana. In this case it's not Havana, it's Icehouse, the latest version of OpenStack, but with this guide, if you already have a Ceph cluster up and running, it's very simple to follow it and put the different services in OpenStack to work, storing and retrieving their data in the Ceph cluster.
There is a fourth pool for virtual machines, for Nova, and in that case Nova doesn't create the ephemeral disk locally; it creates the ephemeral disk for the running instances directly on Ceph. But I don't have that working right now. Okay, following this guide it's quite simple to make it all work: Cinder, Glance, etc.
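For reference, the Glance and Cinder side of that guide boils down to a few lines of configuration; the pool names match the ones created for OpenStack, while the user names and the secret UUID are placeholders that depend on your own cephx setup:

```
# /etc/glance/glance-api.conf -- store images as RBD objects in the "images" pool
default_store = rbd
rbd_store_user = images
rbd_store_pool = images

# /etc/cinder/cinder.conf -- back Cinder volumes with the "volumes" pool
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes
rbd_user = volumes
rbd_secret_uuid = <the libvirt secret UUID for the cephx key>
```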
Now, let's take a look here at the OpenStack dashboard and let's make some things work. First of all, building OpenStack under CentOS 7 was not an easy task, because it's not yet supported at 100 percent, and I had to deal with some bugs and other problems. It was not an easy task, but I learned a lot about OpenStack and how it works.
So it was good in the end. A couple of hints, a couple of tips if you are going to build OpenStack inside another virtualization solution, like in this case under VMware. I'm going to show you the diagram again. Okay.
First of all, you have to put your network card into promiscuous mode for the network to run okay using Neutron, and on the bridge for the different virtual machines in that external network: put your card in promiscuous mode. This is necessary.
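In the vSphere client this is a security setting on the vSwitch; from the ESXi shell the equivalent should be something like the following, where vSwitch0 is an assumption and should be whichever standard vSwitch carries the Neutron external network:

```shell
# Allow promiscuous mode on the standard vSwitch that backs the
# bridged/external network (vSwitch name is an example).
esxcli network vswitch standard policy security set \
    --vswitch-name=vSwitch0 --allow-promiscuous=true
```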
Another thing you should know is that for the compute node, if it's virtualized inside VMware, you have to use nested virtualization to get the most out of your processor and its virtualization technologies. That is called nested virtualization: it means that you run a hypervisor inside another hypervisor.
It's quite simple if you follow this post from William: he explains here how to enable nested virtualization in ESXi and other hypervisors, and it's quite simple. You just need to have vSphere 5.1 at least, and your virtual machine for compute has to be virtual hardware version 9. So thank you very much, William, you are awesome.
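As described in that post, on vSphere 5.1 with a hardware version 9 VM, exposing the virtualization extensions to the compute node comes down to one line in the VM's .vmx file (or the equivalent checkbox in the vSphere Web Client):

```
vhv.enable = "TRUE"
```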
Okay, let's continue with the dashboard. Now Glance, here in Images, is connected to the Ceph cluster, so every single image we create inside the Images section of the dashboard is going to be stored in the Ceph cluster. Where in the Ceph cluster? If you remember, here in the documentation for installing and connecting OpenStack to Ceph, there was a step at the beginning for creating the different pools: volumes, images and backups.
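That pool-creation step is just three commands on the Ceph side; 128 placement groups is the small-cluster example from the guide, not a tuned value:

```shell
ceph osd pool create volumes 128   # Cinder block devices
ceph osd pool create images 128    # Glance images
ceph osd pool create backups 128   # Cinder backups
```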
Here I have an image for that. I want Ubuntu Trusty, and I'm going to use it.
Now I'm creating an image inside Glance manually, from the command line, for Trusty server, the cloud version of Ubuntu Server.
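With the Icehouse-era glance client, that manual upload looks roughly like this; the image name is my own choice and the file is the standard Trusty cloud image:

```shell
# Download the Trusty cloud image and register it in Glance; with Glance
# backed by RBD, the bits end up as objects in the Ceph "images" pool.
wget http://cloud-images.ubuntu.com/trusty/current/trusty-server-cloudimg-amd64-disk1.img
glance image-create --name "ubuntu-trusty-server" \
    --disk-format qcow2 --container-format bare \
    --is-public True --file trusty-server-cloudimg-amd64-disk1.img
```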
As you can see here in the power-on time, it took about three or four minutes to go into active mode. It took a while to boot because of the different locations of the Ceph cluster, the speed of the Ceph cluster and also the speed of the network. So it took a bit long to run the instance, but now we have an instance of Ubuntu whose image is stored in Ceph.
Now, let's go here to Volumes and see how Cinder works; it's very similar to the way Glance works with Ceph. Cinder is now connected to Ceph, and any volume we create is going to be created in the Ceph cluster, as you may already know, inside the volumes pool. Right now there are zero bytes used and no info in this pool. Okay, let's create a new volume.
It's already created, and if we do ceph df, well, there is something there. No, sorry, that is images, 16... there is no letter for the size, so we don't know if those are megabytes or kilobytes, but well, there is something there.
After that, we have created a block device, a hard disk block device, in Ceph, using the OpenStack dashboard, and we have connected that disk to this instance, to this virtual machine. So now we have two disks in two different places. Let's see if it's actually working: let's go to /opt and download something.
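The same create-and-attach sequence can be done from the CLI instead of the dashboard; the volume name, instance name and volume ID here are examples:

```shell
cinder create --display-name test-vol 10          # 10 GB volume, lands in the Ceph "volumes" pool
nova volume-attach my-instance <volume-id> auto   # hot-plug it into the running instance

# Inside the guest it shows up as a new block device, e.g. /dev/vdb:
sudo mkfs.ext4 /dev/vdb
sudo mount /dev/vdb /opt
```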
And well, you can see here that we're working at around four megabytes per second; the most I have seen here is all the speed that a 100-megabit network can give you, around 10 or 12 megabytes per second. This is okay for the lab.
Let's now do some crash tests. Well, Ceph is intended to work with at least three OSDs, but in this lab environment, if you follow the official quick start guide for deploying, there is an option for using Ceph with just two OSDs. There is replication between the two OSDs, and the data is quite safe in this environment.
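That two-OSD option from the quick start is a ceph.conf setting: with the default of three replicas, a two-OSD lab would never reach a clean state, so the guide has you lower the replica count before deploying:

```
# ceph.conf on the admin node, in the [global] section:
osd pool default size = 2   ; keep two copies of each object instead of three
```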
The uncompressing of the file has not been stopped; it keeps working, and if we check the health of the Ceph cluster, you can see that it's not okay: it has some degraded warnings in the file system, but the system is still working.
So I have my disk connected and I have my files, and nothing has been erased; there is still a connection and it's working. And here, with the ceph health command, you can see that one of the two OSDs is down.
If an OSD is not reachable, the cluster loses the connection to it, but the machine doesn't hang; when the service comes back again, it is recognized automatically and you have nothing to do manually. So let's wait a bit and you will see that the process keeps on uncompressing the file.
You can drop me a line and ask whatever you want about this environment, and I'll do my best to help you if you need it. For the next videos, for the next lab tests, I'm going to add a third OSD to Ceph, do a few more crash tests and see if it's production-ready, and I also want to install Calamari, which is the web dashboard for managing Ceph environments. So, this is everything for the moment. Thank you for watching, and bye.