From YouTube: Webinar - Getting Started with Ceph
Description
Looking for an introduction to Ceph? Look no further, my friend.
A
Welcome to Inktank's webinar, Getting Started with Ceph. I'm Danielle Wombles, director of marketing at Inktank, your moderator and webinar organizer. Before we start, let's take a moment to ensure that everyone is familiar with the webinar control panel. At the top of the side panel you will find four buttons. These buttons can be used to ask a question, answer polling questions, view attachments with additional related material, and also to rate the webinar and leave us feedback. Please feel free at any time during the webinar to ask a question.
A
And please leave feedback, as these are important to us. Thank you for joining the first webinar, which is part of a webinar series that we will be running over the next couple of weeks. Today's topic is Getting Started with Ceph. During this webinar we will introduce you to Ceph and Inktank, discuss the technology foundation of Ceph, and walk through how you can get started using Ceph. Now I'd like to turn it over to Meris Bobsowinski, technical marketing engineer at Inktank.
B
Thank you, Danielle. As Danielle mentioned, we're going to talk about getting started with Ceph. Hopefully some of you have heard about Ceph, but if you haven't, we will provide an overview.
B
Here's the agenda for the webinar. First we'll talk a little bit about Ceph and Inktank, then we'll provide an overview of the Ceph technology, and then go through a getting-started walkthrough showing the steps of what you could do to get started and get hands-on experience using Ceph. We will also have attachments after the webinar that provide a more detailed guide for that walkthrough, and then we'll review resources and next steps.
B
Okay. So, if you haven't heard of Ceph and Inktank: Ceph is a distributed, unified object, block and file storage platform. It's been created by storage experts as open source software, and it's been integrated with the Linux kernel for several years now. It's also integrated into various cloud management platforms, for example OpenStack and other cloud management platforms as well. We'll talk a little bit later on about what makes Ceph unique and a really cool technology. Inktank is the company that Danielle and I work for; we're a company that provides professional services and support for Ceph.
B
It was founded in 2011 and seed funded by DreamHost and Mark Shuttleworth. The CTO is Sage Weil, who is the creator of the Ceph technology. Sage started working on Ceph probably back in 2005 or 2006, so Ceph itself is a technology that's been part of the open source community for a while and has already matured to the point where we are comfortable recommending it for production deployments.
B
When Ceph was being conceived and developed, there were a number of different principles that were kept in mind as important for developing this technology. Things like: every component must scale; there can't be any single point of failure; the solution must be software based, so that it can be flexibly adapted to a number of different environments, including appliances, without being an appliance itself; and it has to run on readily available commodity hardware, because it's going to be deployed at scale and in situations where there can't always be an administrator handy.
B
It has to be self-managing whenever possible, and one term that I heard in the past is "rot in place." The idea is, when something fails, you just leave it there, and you have enough redundancy in the overall system that everything keeps working. When you have some time to go in and swap failed disks or replace components, then you can do that maintenance, but the system itself is architected in such a way that it can be autonomous for a good chunk of its independent runtime.
B
So what are some of the key differences between Ceph and some of the other open source storage solutions that are out there? Well, probably the most important one is the CRUSH data placement algorithm. If you go on the Ceph website, you'll see a number of different papers from Sage's PhD work that talk about the CRUSH algorithm in a lot of detail. If you want to learn all about it, you can, but at a very high level, CRUSH is a data placement algorithm that can replace having to manage centralized metadata.
B
So,
instead
of
having
to
keep
track
of
where
all
the
different
pieces
of
data
are,
we
can
compute
them
on
the
fly
using
crush
and,
in
addition
to
being
able
to
compute
them
rather
than
having
to
store
all
that
information
crush
is
also
intelligent
about
the
infrastructure.
So,
when
disks
are
inserted
into
the
overall
safe
system,
they
can
be
tagged.
Relatives
to
where
that
disk
lives,
in
terms
of
which
notice
in
what
rack,
what
row
in
the
data
center?
B
What switches it's connected to, even what power supply fault zone or fire protection fault zone it's in. All of that can be encoded in the infrastructure map, and then CRUSH will intelligently place and replicate objects so that every object is protected across multiple zone failures, or at least so that no single point of failure will be able to lead to any kind of data loss. So that's CRUSH.
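As a concrete illustration (these commands are not from the webinar, but they are the standard Ceph tools for this): the infrastructure map that CRUSH uses can be pulled from a running cluster, decompiled to text, edited to describe hosts, racks, rows and other fault zones, and injected back:

    # Fetch the compiled CRUSH map and decompile it to an editable text file.
    ceph osd getcrushmap -o crushmap.bin
    crushtool -d crushmap.bin -o crushmap.txt
    # After editing the hierarchy in crushmap.txt, recompile and inject it.
    crushtool -c crushmap.txt -o crushmap.new
    ceph osd setcrushmap -i crushmap.new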
B
One of the other things that makes Ceph unique as an open source technology is that it's a unified storage platform. So Ceph can provide object storage with object placement, block storage with virtual block devices, and then also a distributed, scalable file system. There are other open source solutions out there that provide one or another, but Ceph is really the first complete, mature solution that provides all three. We mentioned the block device.
B
One of the things that Ceph provides, which has been leveraged in a number of different solutions, is a thin-provisioned virtual block device that has some very compelling, enterprise-style features, like thin provisioning, the ability to do allocate-on-write snapshots, and volume cloning. So this is part of Ceph. It's also been integrated with OpenStack, CloudStack and a number of other technologies, which lets KVM and QEMU take advantage of the Ceph virtual block device. And finally, the Ceph file system. File systems are complex, especially distributed file systems.
B
This is still kind of evolving as a complete solution, but one of the things that Ceph provides is CephFS, which provides distributed, scalable metadata servers. So you can deploy multiple metadata servers that all work in parallel, and they actually dynamically share and shift the burden of who manages which metadata, to allow very large scale clusters to be deployed. In terms of use cases, here is just a popular sampling, starting with the object use case.
B
People have used Ceph to build archival and backup storage, and use it for primary object data storage. There's an S3- and Swift-compatible gateway component, which I'll mention a little bit more later, that can provide Amazon S3-like storage services on top of Ceph. People have used it to build web service platforms and also for application development: if you're developing applications for, let's say, Amazon, then you can use Ceph as a private cluster for application development.
B
On the block side, people have used it for SAN replacement, using Ceph block devices either natively with their Linux applications or re-exporting them using iSCSI, or as virtual block devices for VM images in cloud management platforms and also just virtualization environments. On the file system side, people have used it for HPC, or really any kind of POSIX-compatible application.
B
This diagram shows the key architectural components that are available when Ceph is deployed and implemented. At the base is the Ceph object storage layer; it's called RADOS, so some of the other names you'll see are "RADOS-something." RADOS stands for Reliable Autonomic Distributed Object Store, and it's basically a massively scalable, self-healing, self-managing foundation of object storage. The key observation is that when you're building distributed, complex storage systems, it's much better to start with intelligent, extensible objects than dumb disks and blocks.
B
And by starting with objects, we can do things like ask those objects to replicate to their peers, like a peer-to-peer network, and the CRUSH algorithm also distributes some of the work to the clients themselves, which can use the map; I'll talk a little bit about that later on. But basically, RADOS provides that object storage foundation. On top of that is the blue block on the left-hand side, which is librados, and librados provides an application library that developers can use to integrate their applications directly with RADOS.
B
They can take advantage of the distributed object storage capabilities of the RADOS layer. There are SDKs for C, C++, Python, Perl; you name it, there's probably an SDK or a set of libraries already available for it. So if what you're looking for is to build a dedicated distributed application, Ceph can serve as that foundation, but most people end up using Ceph through one of the other three paths listed on the slide.
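A small sketch (mine, not the speaker's): the rados CLI that ships with Ceph exercises the same put, get and list operations that the librados API exposes, here against the default 'data' pool:

    # Store a local file as an object, then list and fetch it back.
    echo "hello rados" > hello.txt
    rados -p data put hello-object hello.txt
    rados -p data ls
    rados -p data get hello-object hello-copy.txt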
So, I mentioned the RADOS gateway.
B
This is a RESTful, S3- and Swift-compatible gateway that allows RESTful API calls to be made to the gateway and then, on the back end, quickly translated into native RADOS calls. So if you're working with Amazon S3, for example, and you want your own private cloud in-house that behaves very similarly to how Amazon's cloud behaves, the RADOS gateway is an easy way to implement this. The next of the three components I mentioned is RBD, which is the Ceph block device. This is a reliable and fully distributed block device.
B
It has enterprise features, and it's also been integrated with the Linux kernel, and with KVM and QEMU on the virtualization hypervisor side. And finally, there's the CephFS client, which is also integrated with the Linux kernel. There's also a FUSE version of the file system client, and we're doing some work with the community to make that FUSE client work on, let's say, Macs and other non-Linux environments. And that's the high-level picture.
B
One of the key things about RADOS is that there are two main players in making RADOS work. One is the monitors, and the monitors are the ones that manage the map of the cluster; they know where all the different object stores, or the disks, are. So it basically provides consensus on the state of the cluster, and because of that, we need to have an odd number of monitors.
B
Three is the minimum we'd recommend for production, or that we require, I should say. And if you have a cluster with more than a couple of hundred nodes, it might make sense to go from three to five, or if you have many different fault zones that you want to be able to manage, it might also make sense to go to five monitors or something like that. But in general, the monitors are for managing the cluster map.
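(A small aside, assuming a running cluster: you can ask the monitors about their own state and quorum directly.)

    # Show the monitor map and the current quorum membership.
    ceph mon stat
    ceph quorum_status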
B
The component that scales, and shows up most often, is the RADOS storage nodes, which we call OSDs, or object storage daemons. In our best practices there's a one-to-one mapping between an object storage daemon and the disk that that daemon manages, and we recommend at least three nodes in the cluster for production.
B
The OSDs effectively serve the stored objects to the clients, but they also turn that dumb disk into an intelligent object store, or intelligent object service. So those OSDs can do things like replicate to their peers, and understand when a peer is down and it needs to replicate to a new location based on an updated cluster map. One of the other cool things about OSDs is that they are extensible. So in the same way that in programming you can extend an object class with new methods...
B
You can extend OSDs with new methods that are defined using librados. For example, if your objects are images and you want to generate thumbnails of those images, you can extend the object class, or the objects, with a new method that, when called, will compute the thumbnail for that object and return the thumbnail rather than the full-resolution image. So that's one of the things that makes OSDs powerful as a foundation.
B
On this slide we see a picture of what the cluster looks like. Towards the top we see what a RADOS node would be: there is a disk layer, some kind of file system (we do extensive testing with btrfs, XFS and ext4), and on top of that is the object storage daemon, which then presents the space on that disk as an intelligent object store. Many of those OSD nodes and monitors are gathered together to create the RADOS cluster.
B
The monitors maintain a map of the cluster, and if something changes in that cluster, then the map gets updated and propagated to all the OSDs and also to all the clients that are using the cluster. Those events tend to be relatively rare, effectively only when something fails or something new is added to the cluster, so the map updates tend to happen very quickly relative to how rarely the map changes. Okay, so that's really the overview, and now, as Danielle presented in the introduction, we have a couple of questions that we're hoping you can vote on.
B
Okay, so hopefully that was enough time; if not, you can really vote at any point during this presentation, it doesn't have to be just while that slide is up. Okay, and now we're going to jump into the technical details of this talk. This is effectively an extended walkthrough of what it's like to get started and get Ceph up and running, and to do that I'm basically just going to be talking about some of the steps and maybe some command snippets that are useful.
B
As I mentioned, we will have the actual detailed guide that corresponds to this walkthrough available as an attachment to this webinar, probably within a few days of today, up on the website. Danielle will send out an announcement when all the materials are available as attachments, and you can access them later on. Okay, so as an overview of what we will cover: we're going to be using VirtualBox. It's a pretty powerful hypervisor platform, and it's also free, and it's available on a number of different OSes and platforms.
B
So it seems like a really great common foundation to use, but if you have other hypervisor platforms that you like, they should work just as well; this is pretty agnostic relative to the hypervisor. One of the key things to note is that, to simplify the walkthrough and speed things up, we've relaxed some security best practices, and in the walkthrough that I'm going to present today I've omitted those security setup steps; they're in the guide.
B
What we really recommend is something relatively slim. It's running Ubuntu Linux, so it doesn't really require a lot of resources to perform fairly well. In production, of course, we require a lot more resources, but for this walkthrough and demo and getting started: one or more CPU cores, and 512 MB or more of memory. We're going to be working with Ubuntu 12.04, which is the LTS release, with the latest updates; 12.10 also works, but we wanted to really use the LTS release.
B
So after you create the VM, install the VirtualBox guest add-ons; they're going to help simplify things, including being able to shut down the VM and power off the virtual machine more easily. Each VM is going to have three virtual disks, all of them dynamically allocated, or thin provisioned, so that they don't take up space. If you have SSDs to build this on, it makes things much quicker and more responsive. It doesn't really require a lot of space.
B
I think all four of the VMs that I used together had used up less than about 10 gigs of space by the end of the walkthrough. So the first disk is the OS disk; I allocated 28 gigs, it probably used maybe three or four of that, so it doesn't have to be that big. And then we created two 8 gig disks that can be used for Ceph data.
B
Okay. Since it's much easier when you have a template and just clone that template, consider just doing this with a single template VM and then cloning it four times to create the VMs for the walkthrough.
B
Okay, once you have the VMs up and running, especially if you cloned them, we need to go into each VM and adjust the networking settings so that we have a static IP address on the host-only network. We really want to be working with static IP addresses for the Ceph nodes, just since things are a little bit more stable and predictable.
B
That way, for eth0 you define a static IP address and netmask, and for eth1 you can let it go and use DHCP, but the key is that you want to define the gateway on that interface. In VirtualBox, all of the NAT interfaces will be 10.0.something, and the gateway address for that NAT virtual network ends in .2, so define the gateway as 10.0.something.2. That way the eth1 port will become your gateway, and you can use that to download packages and updates.
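A sketch of that configuration on Ubuntu (the addresses here are examples, not the guide's exact values):

    # Append example interface definitions to /etc/network/interfaces:
    # a static host-only address on eth0, DHCP on the NAT interface eth1.
    sudo tee -a /etc/network/interfaces <<'EOF'
    auto eth0
    iface eth0 inet static
        address 192.168.56.101
        netmask 255.255.255.0
    auto eth1
    iface eth1 inet dhcp
    EOF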
B
If you cloned your VM, you also regenerated the MAC addresses, or at least that makes sense, so that all the MAC addresses are unique, and you might need to go in and adjust the persistent network naming rules so that the new MAC addresses match up to the interfaces that you want. Those rules are in /etc/udev/rules.d/70-persistent-net.rules. If you didn't delete that file before you shut down your template VM, then it will have the old MAC addresses plus the new ones.
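A sketch of that cleanup (matching the file the speaker names):

    # Remove the stale rules so udev regenerates them, with the clone's new
    # MAC addresses, on the next boot.
    sudo rm /etc/udev/rules.d/70-persistent-net.rules
    sudo reboot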
B
First, we configured an Ubuntu user. All the work is done through a user named ubuntu, and we used authorized keys so that ubuntu can SSH to all the different machines without having to provide a password. We also added ubuntu to the sudoers file with full access, and the guide describes how to do that, if that isn't something that you've done a bunch of times. We also configured root on the server nodes so that root can SSH between the nodes.
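A minimal sketch of that key setup, assuming hostnames node1 through node3 (the attached guide has the full steps):

    # As the ubuntu user: generate a key and copy it to every node.
    ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa
    for host in node1 node2 node3; do ssh-copy-id ubuntu@$host; done
    # Passwordless sudo is one line added via visudo, for example:
    #   ubuntu ALL=(ALL) NOPASSWD:ALL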
B
Okay. So now that we know the shortcuts we took, let's take a look at some of the additional steps. First, we want to have name resolution working in our little toy cluster, and the easiest way to do that is to edit /etc/hosts and add your static IP addresses into that hosts file. I like to just make it portable, so I define the localhost network and then, for each individual node, I define the eth0 address.
B
So in my example, the host-only network I was using was 192.168.56.x, so all of the addresses used for this toy cluster are based on that. The host-only interface on your machine might be something different, so you just need to check to see what it is. And then, once you have that /etc/hosts file, copy it, using scp, so that it's on all the different machines.
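For example (hostnames and addresses are illustrative, on the 192.168.56.x host-only network mentioned above):

    # Append the cluster nodes to /etc/hosts, then push the file to every node.
    sudo tee -a /etc/hosts <<'EOF'
    192.168.56.101 node1
    192.168.56.102 node2
    192.168.56.103 node3
    192.168.56.104 client
    EOF
    for host in node1 node2 node3; do
        scp /etc/hosts ubuntu@$host:/tmp/hosts
        ssh ubuntu@$host sudo cp /tmp/hosts /etc/hosts
    done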
B
Okay, once you have name resolution working, what we want to do is install the Ceph Bobtail release. Ceph releases are named alphabetically after different kinds of cephalopods: first we had Argonaut, and the latest release, which really just went out a few weeks ago, is called Bobtail. So to get that, what we're going to do is add the release key for Ceph from GitHub; we're going to fetch the key from GitHub and add it to the apt keychain, and then, once that key is there...
B
Now, this needs to be executed on all of the different nodes in the cluster, including the client. If you're running this from the client, you don't need to SSH to it, but you can if you want to script it. So again: sudo apt-get update, and then sudo apt-get install ceph, and this should download all the packages, configure them and install them.
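Roughly, per node (the repository line is my reconstruction of the standard Bobtail-era instructions, not read off the slide):

    # Add the Ceph release key from GitHub, add the Bobtail repo, install Ceph.
    wget -q -O- https://raw.github.com/ceph/ceph/master/keys/release.asc | sudo apt-key add -
    echo "deb http://ceph.com/debian-bobtail/ $(lsb_release -sc) main" | sudo tee /etc/apt/sources.list.d/ceph.list
    sudo apt-get update
    sudo apt-get install ceph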
B
Once that's done and Ceph is available, we need to create a Ceph configuration file. The Ceph configuration file lives in /etc/ceph; it's called ceph.conf, and this slide shows most of what was in that ceph.conf file; the guide has the complete picture. But let me talk a little bit about the sections. There's a global definition section, and this is the section in which I've turned off all the authentication.
B
So it says auth cluster required = none. In a production environment, instead of none it should say cephx, for all three auth settings, and then you use key management to make sure that everything is secure. There is a general OSD section that defines the OSD options, and here we define a journal size of about one gig, tell the filestore to use omap for xattrs, which ext4 needs, and define the mkfs type, the file system that's going to be used on the OSDs, as ext4.
B
I did that just because it's easier and ubiquitous. There's actually a really great series of blog posts about performance testing of the OSD file systems on ceph.com, which outlines the advantages of the different file systems and when you might use btrfs vs. XFS vs. ext4. But just to keep things simple in this walkthrough, I used ext4, and I define some mount options for those ext4 file systems. And that's the general OSD section.
B
Now, the per-OSD sections are kind of a really nice shorthand and simplification; for each OSD you need to have an entry there. I skipped a few just to save space on the slide. Once that's there, we also define an MDS, the metadata server for the Ceph distributed file system; we're only defining one here, and it's running on node1. So once we have that ceph.conf file, we want to make a copy of it available on every single node in the cluster.
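Assembled, the ceph.conf just described looks roughly like this sketch (hostnames and the monitor address are examples, and most of the six OSD sections are elided, as on the slide):

    [global]
        auth cluster required = none    # production: cephx, for all three
        auth service required = none
        auth client required = none

    [osd]
        osd journal size = 1000               # about a one gig journal
        filestore xattr use omap = true       # needed for ext4
        osd mkfs type = ext4
        osd mount options ext4 = user_xattr,rw,noatime

    [mon.a]
        host = node1
        mon addr = 192.168.56.101:6789

    [osd.0]
        host = node1
    [osd.1]
        host = node1
    # ... osd.2 through osd.5 on node2 and node3 ...

    [mds.a]
        host = node1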
B
Okay, so once we have ceph.conf for all the different machines, and we've copied it to all the nodes, now we need to create the directories that the different object storage daemons are going to be running in. All the Ceph working directories are under /var/lib/ceph, and what we want to do is create an osd directory and, under the osd directory, a ceph-something directory for each OSD. So osd.0 goes into ceph-0, and osd.5 goes into ceph-5, on the correct node, right?
B
So just note the ssh that corresponds to each mkdir command. For example, we have ceph-0 and ceph-1 going on node1, ceph-2 and ceph-3 going on node2, and ceph-4 and ceph-5, for those OSDs, going on node3. We also want to create directories for the mons and the MDS. Okay, so once we've run those mkdir commands (the -p option basically creates any parent directories, if needed), we're ready for the next step.
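Concretely, something like this sketch (hostnames as above, two OSDs per node):

    # Working directories for the OSDs, plus the monitor and MDS on node1.
    ssh node1 sudo mkdir -p /var/lib/ceph/osd/ceph-0 /var/lib/ceph/osd/ceph-1
    ssh node2 sudo mkdir -p /var/lib/ceph/osd/ceph-2 /var/lib/ceph/osd/ceph-3
    ssh node3 sudo mkdir -p /var/lib/ceph/osd/ceph-4 /var/lib/ceph/osd/ceph-5
    ssh node1 sudo mkdir -p /var/lib/ceph/mon/ceph-a /var/lib/ceph/mds/ceph-a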
B
Next we're going to run the mkcephfs command, and the mkcephfs command is really the critical command that does all of the creation and setup for the Ceph cluster. It's going to run on node1. So in this example, I SSH to node1, become root, and then cd into /etc/ceph, and from inside of /etc/ceph run mkcephfs with -a, -c with the path of the conf file, and -k ceph.keyring, and this really just stores the administrative key.
B
That's useful if you're going to be turning on authentication later on. And then this is also pretty critical: you need the --mkfs flag. This is the flag that will actually force mkcephfs to go and format all of those OSDs and make them ready for mounting on those directories when you start Ceph. Okay, so at this point the cluster is created and configured, but not yet started.
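Put together, the initialization step is a sketch like:

    # On node1, as root, in /etc/ceph: create and format the whole cluster.
    cd /etc/ceph
    mkcephfs -a -c /etc/ceph/ceph.conf -k ceph.keyring --mkfs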
B
The next step is to start the cluster, and we do that using the standard Linux service command: service ceph, then -a, for all the daemons, and then start.
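That is:

    # Start every daemon defined in ceph.conf, on all nodes (-a).
    sudo service ceph -a start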
And what you should see is output that looks kind of like this. You're going to get a section for each of the monitors, and I cut out some of the text, but as an example, from mon.a you'll see a message that it's starting, and the node it's starting on, and then some other information about it.
B
Then there's a message about starting the MDS, and then, for each OSD in the system, you're going to see a message about mounting the OSD file system on the appropriate /var/lib/ceph/osd directory and then starting the Ceph object storage daemon on the node, plus a little bit more information about the data and the journal. So you're going to see those messages for each of the different daemons running in the Ceph cluster.
B
You should see six OSDs, two on each node, and there are going to be some messages about PG maps, with information about the placement groups that are part of the cluster and how much space is in them, all of that included. And then the final thing is the MDS map, which is a map of the metadata servers; you should have one, and it's up and active. So once that's there, and health is OK...
B
You should be able to run ceph osd tree, and this just shows an ASCII printout of the different OSDs and how they're attached. So here you can see that osd.0 and osd.1 are attached to node1, and they're mapped to some rack, which hasn't been defined (we didn't define a rack in the walkthrough), and then there's also a root for the data center, for the overall distribution.
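In other words:

    ceph health      # should eventually report HEALTH_OK
    ceph osd tree    # ASCII view of the OSDs and where they sit in the hierarchy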
B
Okay, so at this point the Ceph cluster is set up, configured, and up and running. We should be able to start using the Ceph services to access the block device, object store, file system, what have you, and in the next couple of slides we're just going to walk through some examples of using Ceph. When you install Ceph, you also install this rbd command. The rbd command is how we manage Ceph's virtual block devices, also referred to as the RADOS block device, hence RBD. So there's an ls command that lists your images.
B
You can have multiple different pools where your images are stored, but the default pool is called rbd, so RBD images by default go into the rbd pool, and when you do rbd ls for the first time, you'll see that there are no images. You can create a new RBD image simply by using the rbd create command with the name of the one that you want to create. So I do rbd create mylun and specify a size.
B
The size is in megabytes, and this command will create a four gig LUN, and it should come back immediately, because these LUNs are all thin provisioned by default. So I'll create the LUN and, if I run rbd ls, this time with -l, I'll see that mylun exists and it's about four gigs in size. Right, at this point the rbd commands effectively go out and talk to the cluster, as defined in the client's ceph.conf file, and create this virtual block device.
B
So I do rbd map mylun and specify the pool where that LUN lives, and then, after I've mapped the LUN, I should be able to see which ones are mapped using the rbd showmapped command. And because these are operating on devices in my operating system, I need to be root to run those device commands, and hence I need to put sudo in front of the command. So sudo rbd showmapped will show me that I have mylun mapped, and it's mapped as /dev/rbd0.
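Collected as a sketch (image name and size as in the talk):

    rbd ls                           # empty the first time
    rbd create mylun --size 4096     # size in MB, so a four gig image
    rbd ls -l                        # now shows mylun and its size
    sudo rbd map mylun --pool rbd    # attach it as a kernel block device
    sudo rbd showmapped              # shows, e.g., /dev/rbd0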
B
If I do an ls on /dev/rbd, I should see that there's an rbd subdirectory for the pool, and then the rbd0 device. If I get a listing for the actual LUN, I'll see that it's a symbolic link back to this rbd0 device, and if I take a look at the details for that rbd0 device, I'll see that it's basically a device node. So at this point I have a Ceph virtual block device mapped to my client, and it's available as a Linux block device.
B
The next step is to actually start using it, and to do that I'm going to format it with a file system, and I'm then going to make a directory and mount the device on the directory that I just created. So I'll do mkdir for a mylun mount directory and then sudo mount the path to my LUN onto that mylun directory. Once that's done, I should see that I have this four gig disk mounted as a four gig file system on that mylun mount point.
B
Then, if I want to do some I/O just to test that everything's working, I can dd from /dev/zero onto a test file in that directory, and I should see the file that I created; so I've created testfile, and it's in there. So at this point we've demonstrated that we can use Ceph virtual block devices from the client, and we can read and write information to that block device.
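As commands, roughly (the file system choice and mount point name here are mine, not specified in the talk):

    # Put a file system on the mapped device, mount it, and exercise it.
    sudo mkfs.ext4 /dev/rbd/rbd/mylun
    sudo mkdir -p /mnt/mylun
    sudo mount /dev/rbd/rbd/mylun /mnt/mylun
    sudo dd if=/dev/zero of=/mnt/mylun/testfile bs=1M count=100
    ls -l /mnt/mylun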
B
Okay, the next step is, we want to demonstrate the Ceph distributed file system, and in some ways that's actually easier. The Ceph file system client should be integrated with the Linux kernel, so it should be available, especially once we've installed Ceph, which would have installed the various mount helpers and file system tools. What we can do is just make a directory for the Ceph distributed file system and then run mount.ceph.
B
Okay, so you do mount.ceph to mount the Ceph file system (you can do mount -t ceph as well), and then here's kind of the cool part: what you want to do is specify a comma-separated list of the monitor nodes. These are the nodes on which the monitors are running, and as long as the monitors are in quorum, they don't all have to be up.
B
You just need to get to one; as long as one of those is up, it'll return the map. And we're just going to mount the root of the file system namespace at /mnt/mycephfs. Once that's done, if you do a df, you should see both the LUN and the file system you created mounted, as well as the new Ceph distributed file system mounted, and the mount path should include the IP addresses of all the different monitor nodes that are associated with that file system.
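In command form (monitor addresses illustrative; no secret option is needed here because this walkthrough disabled authentication):

    # Mount the root of the Ceph file system, listing all three monitors.
    sudo mkdir -p /mnt/mycephfs
    sudo mount.ceph node1:6789,node2:6789,node3:6789:/ /mnt/mycephfs
    df -h    # shows the RBD-backed mount and the Ceph file system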
B
So at this point we've exercised that cluster and demonstrated the unified storage capabilities by using both a thinly provisioned virtual block device and a distributed file system from the client. And the next step, the last step really, is just to clean things up and make sure that they're in a safe state, in case you need to start things up again. To do that, we unmount all the file systems, we unmap the block device, and we stop Ceph. Right, and this is kind of important: you need to issue the service ceph -a stop command and wait for all the different pieces to shut down.
B
You'll see stopping messages, with "kill" and the name of the process that's being killed, across the different nodes. You can just trigger it on any one of the nodes that's running a monitor, and it should reach out to all the other nodes. And then, once Ceph is safely stopped, you can just halt the actual virtual machines using sudo halt, and the VirtualBox VM will halt and power off, if you installed the VirtualBox guest add-ons. Then stop the client, and we're done.
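The teardown, as a sketch:

    # Unmount both file systems, detach the block device, stop Ceph, halt.
    sudo umount /mnt/mycephfs /mnt/mylun
    sudo rbd unmap /dev/rbd0
    sudo service ceph -a stop
    sudo halt    # on each VM, once Ceph has stopped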
B
So just to review: we created the VirtualBox virtual machines, we prepared those VMs for creating the Ceph cluster, we installed Ceph on all the VMs from the client, we configured Ceph on all the different server nodes (and the client as well) using the ceph.conf file, and then we experimented with access methods using the virtual block device and the distributed file system, and then cleaned things up safely.
B
So, as I mentioned before, this is based on VirtualBox, but other hypervisors will work too. And one of the keys is, we relaxed security best practices to speed things up; in production it's a really good idea to keep the security high, and one of the things that Inktank can help with is negotiating all of the nuances of getting the security dialed in and all the Ceph authentication working smoothly. Okay, so that was the walkthrough. We have about 10 minutes left, so let's take a look at the resources.
B
There are also blogs from Inktank and other Ceph community members, on the Inktank website and on ceph.com, and those blogs provide some really nice in-depth write-ups of people doing things with Ceph; some of them provide some really nice how-to guides and places to learn more. If you're a developer or want more in-depth information, there's a number of resources available on ceph.com, including a mailing list and an IRC channel, and the final link, on Gmane, is really...
B
...the developer mailing list archives, if you want to search for previous topics or conversations; Google will do the trick as well, and you can browse the archives that way. Okay, and then, what's next? The best way to learn is to try it yourself, so use the information in this webinar as a starting point. There's also a getting started guide in the documentation; this walkthrough is kind of based on that getting started guide, with...
B
...the walkthrough guide that we'll attach to this webinar providing more step-by-step details. Consult the Ceph docs, Google around, check the mailing list archives, and jump on the IRC channel to get started yourself. And then, once you're kind of ready to embrace Ceph more fully and are looking to put it into production, consider Inktank's professional services. We offer consulting services, and there's a number of different services available: a technical overview will have us meet with your team and explain the architecture, functionality, best practices and use cases.
B
We can walk you through the code and talk about our technology roadmap and business goals. We provide an infrastructure assessment service, where the Inktank team will conduct an in-depth, on-site assessment of your current storage to understand your architecture and future needs, and then Inktank engineers will work with you to customize the solution for your business and help you implement a proof of concept, and that's actually a really cool way to get started with Ceph.
B
If you want some help implementing the solution, we can provide implementation support, and also, if you already have a production instance and you just want to fine-tune it, we can do some performance tuning with you. And that's really kind of the professional services side. One of the other really important things that we believe is critical for production is having a supported solution: some place where you can go to access expert help really quickly. So we have pre-production support.
B
One of the other things that I wanted to mention is that this is really just the start of a series of webinars. The next webinar is Introduction to Ceph with OpenStack. We've done a lot of work with the OpenStack community to integrate Ceph with OpenStack, and in particular we've partnered with Dell as well and integrated Ceph with their Crowbar deployment tool, which can simplify deployment of OpenStack clouds.
B
Lastly, there's a really great webinar, Advanced Features of Ceph Distributed Storage. If you're already familiar with Ceph, you've been using it for a while, and you really want to dive deep, this webinar will be presented by Sage Weil, who's the creator of Ceph and Inktank's CTO. So this is a really great webinar for those of you that want in-depth information.
B
Okay. So if you'd like to contact us to follow up, there are going to be links in the webinar emails, but you can also contact us using info@inktank.com or our phone number, and don't forget to follow us on Twitter. You can connect with us on Facebook and check out our channel on YouTube for how-to videos and more information. So with that, I'd like to thank you for your time and attention; we're about 50-some minutes into the webinar.
A
Thank you, Meris. We appreciate everyone taking time out of their busy schedules to join today. Since we got so many questions, what we will do is put them into a Q&A document and send it out to everyone who attended within the next couple of days; there were just too many questions for us to handle.