We already had the grand distributed storage debate in the cloud, distributed storage and HA miniconf yesterday, where we had Ceph and GlusterFS kind of go head-to-head on stage, while the original author of Ceph is doing a Ceph talk in the main track of the conference, I believe on Wednesday. And there is also a Ceph tutorial, a full Ceph tutorial, that yours truly and Tim Serong are going to be doing on Friday afternoon. So if you want to learn more about this, then by all means please come to our tutorial on Friday.
So what is Ceph? It is actually not one thing but four different things, and out of these four I'm going to highlight three here, because only three of them are actually relevant to OpenStack, or as general-purpose storage for OpenStack. Ceph is fundamentally, at its core, a native object storage solution, not unlike Swift in its architecture. It implements the idea of object storage very, very differently than Swift does, but it still operates on similar principles, at least at the level where we're talking about distributing, replicating and balancing data.
We don't really need to worry about things that are common to POSIX, such as a hierarchy of directories and nested directories and files in them, or permissions of any very intricate nature, and we don't need to pay a lot of attention to ownership. All these things can be very simple, and the operations that we apply to our data can also be very, very simple.
It essentially boils down to GET, PUT, DELETE; that's it. And what Ceph does is implement such an object store natively. The universal unit of data storage is not a file, it's not a block, it's an object, and those objects are automatically distributed across the cluster, and the cluster can scale to hundreds of nodes and petabytes or exabytes of data if we want to. And they are also replicated in — sorry, they are stored
redundantly, in a very highly configurable fashion. And everything else that we're talking about in Ceph is layered on top of this. We can use the object store directly: there are a bunch of library bindings for it, so we can talk to the object store directly in C, C++, Python, whatnot. But most of the time — and OpenStack does this — we're going to be using one of the more high-level API layers to interact with the Ceph stack.
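Before getting to those layers, for illustration, talking to the object store directly through the Python binding looks roughly like this; the pool name "data" and the object name "hello" are just placeholder examples:

    import rados

    # connect to the cluster using the local ceph.conf and default credentials
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()

    # open an I/O context on a pool (here a hypothetical pool called "data")
    ioctx = cluster.open_ioctx('data')

    ioctx.write_full('hello', b'hello world')   # PUT: store the object's payload
    ioctx.set_xattr('hello', 'lang', b'en')     # tack a key-value attribute onto it
    print(ioctx.read('hello'))                  # GET: read the payload back
    ioctx.remove_object('hello')                # DELETE

    ioctx.close()
    cluster.shutdown()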
One such higher-level layer is the block storage client layer that sits on top of Ceph.
What we can do with something called RBD, the RADOS Block Device, is expose block device semantics to a client, either in the kernel or in the virtualization layer. We're presenting to that layer just a block device, and that client-layer driver then translates all of the I/O to that block device into I/O on RADOS objects, so it inherits all of the capabilities of the underlying object store: distribution, replication, redundancy, and so on. Those block devices can actually be pretty versatile: they are thin provisioned in Ceph.
So, like I said, there is no such thing as a directory hierarchy or anything of that nature. Every object in Ceph has a name, it has a payload or content, and it has an essentially arbitrary number of key-value pairs — attributes that we can tack on to the object. And objects can be of almost arbitrary size; I say almost because if we go beyond a certain threshold, then we will just stripe across multiple objects.
These objects are assigned logically to what we call placement groups, and every placement group has an ordered list of — depending on whether we read the outdated documentation, which sometimes says object storage devices, or the new documentation, which says object storage daemons — it all basically reduces down to OSDs. And on these OSDs the contents are stored: the contents of all the objects in this placement group are stored in a redundant fashion. And why is it an ordered list? Because Ceph uses essentially a primary-copy means of writes.
So a client knows: OK, what is the first entry in the OSD list? That's the one I'm writing to. That OSD is then responsible for replicating off to the various replicas, and how many replicas we have is completely configurable in a very, very flexible manner.
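For example, with the command-line tools the replica count is a per-pool setting; the pool name and the numbers here are just placeholders:

    ceph osd pool create mypool 128      # create a pool with 128 placement groups
    ceph osd pool set mypool size 3      # keep three copies of every object in it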
The interesting thing is that the entire object placement is completely algorithmic, so there is no central lookup instance like, for example, a metadata server in Lustre would be. There is also no distributed hash table, as we have, for example, in GlusterFS.
It uses an algorithm called CRUSH, Controlled Replication Under Scalable Hashing, and we can use this algorithm to define, in a very, very flexible manner, our replication and balancing topology in the cluster. And the interesting thing is that pretty much everything in Ceph is aware of this algorithm — that includes the OSDs, the storage portion of the cluster itself, and also the clients — so the only thing that we then need to feed the system are the parameters to this algorithm, and those parameters are expressed in something that we call the CRUSH map.
So that's basically the rule set where we define how data is placed and where data is retrieved from. By default, Ceph uses a simple CRUSH map that just makes sure that, when we distribute data, no two copies of the same piece of data are stored on the same host. But we can extend that.
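As an illustration of what such an extension might look like: the CRUSH map can be pulled out of the cluster, edited as text and re-injected. The rule below is a sketch of the default host-level separation, and changing the chooseleaf type is the kind of extension meant here; the rule and bucket names are placeholders.

    ceph osd getcrushmap -o crushmap.bin      # extract the current CRUSH map
    crushtool -d crushmap.bin -o crushmap.txt # decompile it to editable text

    # excerpt of a rule in crushmap.txt: place each replica on a different host
    rule data {
        ruleset 0
        type replicated
        min_size 1
        max_size 10
        step take default
        step chooseleaf firstn 0 type host   # change "host" to e.g. "rack" to spread wider
        step emit
    }

    crushtool -c crushmap.txt -o crushmap.new
    ceph osd setcrushmap -i crushmap.new      # push the edited map back into the cluster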
How this is done and how this actually works internally is something that I regularly completely geek out about. Unfortunately, I don't have time to cover that in detail in this talk. But if you want to geek out with me, then please, by all means, come to Tim's and my tutorial on Friday, where we're going to go into it in a little more detail, and you also get to play with it live on Ceph boxes that you run.
But this is really, really cool stuff, and there is a lot of intelligence built into the Ceph storage architecture itself, such that it essentially operates under the assumption that, at a certain scale, something always fails — and that's okay: the system can recover from this very, very comfortably and very, very nicely, which is really cool. Ceph is obviously built for big data, or more precisely for really gigantic data, but it works just as well for not quite as gigantic data.
So if you're building a cluster of maybe a few hundred gigabytes, or maybe a few terabytes or tens of terabytes, it works just as nicely, and that's really cool. We have lots and lots of client APIs that we can use to interact with the Ceph object store. We have a C library called librados. There is a C++ version of the same called libradospp. We have Python bindings, we have other language bindings and whatnot. We have command-line tools that we can use to interact with the object store directly, and so on.
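For example, the rados command-line tool speaks to the object store directly; the pool and object names below are placeholders:

    rados -p data put greeting greeting.txt    # store a file's contents as object "greeting"
    rados -p data ls                           # list objects in the pool
    rados -p data get greeting /tmp/out.txt    # fetch the object back into a file
    rados -p data rm greeting                  # delete it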
So if you have a need for an application — or rather a piece of storage that you can plug into an application — that will basically take care of all of this distributed storage for you, then you can just use that. But if you're not a developer and you're just looking for a predefined use of the Ceph object store, there are plenty of those, and those rather high-level client APIs are what is typically used in OpenStack when we integrate it with Ceph. And the integration between these Ceph APIs and OpenStack is an ongoing one.
We have this thing called a RADOS Block Device. Those are thin-provisioned block devices that stripe data written to them across multiple Ceph objects — multiple RADOS objects. So when we write data into this thing, we can work with it as if it were any old block device, but what happens under the covers is that the I/O to these blocks we're working with gets translated into creation, modification and so on of Ceph objects. And then we can do cool things with that.
That means that when we create a new block device of, say, some given size for the sake of argument, the amount of data that is being allocated at that time is exactly one RADOS object — an object that essentially holds a little bit of metadata about the RBD. And then, as we write into it — only as we write into it — do we actually see RADOS objects getting allocated and written. So it's thin provisioning.
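A quick sketch of that with the rbd command-line tool; the pool, image name and size are placeholders:

    rbd create --size 10240 rbd/myimage   # a 10 GB image; almost nothing is allocated yet
    rbd info rbd/myimage                  # shows size, object size and object name prefix
    # data objects only show up in the pool as the image is actually written to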
The next thing is snapshots. Because all of these objects are essentially thin provisioned — because we don't need to think of an RBD as one contiguous chunk in the object store — it follows logically that we can also very, very cheaply and easily do redirect-on-write snapshots. These snapshots are read-only, but we can do something called cloning, and cloning means that we are essentially taking a snapshot, defining it as a master copy for other RBD images, and those RBD images are then writable.
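Roughly like this, with placeholder names (cloning requires format 2 RBD images):

    rbd snap create rbd/myimage@base        # read-only, redirect-on-write snapshot
    rbd snap protect rbd/myimage@base       # protect it so it can serve as a parent
    rbd clone rbd/myimage@base rbd/mychild  # writable copy-on-write child image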
One way to use this: you initiate a process called mapping, and as soon as you map an RBD image, it simply becomes a block device — a virtual block device — in your Linux box. It pops up under /dev/rbd, and then you can use it in any way, shape or form that you could use any regular block device: you can make a filesystem on it, you can write directly to it, whatever you like. That went upstream into the kernel in 2.6.37.
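A minimal sketch of that workflow, with placeholder names (the exact device node may vary):

    rbd map rbd/myimage          # typically shows up as /dev/rbd0 (and under /dev/rbd/)
    mkfs.xfs /dev/rbd0           # treat it like any other block device
    mount /dev/rbd0 /mnt
    umount /mnt
    rbd unmap /dev/rbd0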
However, that is not what you typically use in OpenStack, for the simple reason that most people will be running OpenStack using libvirt and QEMU/KVM as their hypervisor. If you're not — if you're using Xen, for example — the kernel driver would be your preferred choice. If you are actually running on QEMU or KVM, there is something else, and that is the QEMU RBD storage driver. It is part of upstream QEMU, and hence KVM, and it is fully supported in libvirt as well.
One disk option is 'file', where you use just a file-based image, and one is the physical one, 'phy', which means you're actually using a block device. But the storage driver architecture in QEMU is actually pluggable, and this is one of the plugins that are supported.
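For illustration, a libvirt guest can reference an RBD image with a disk definition roughly like this; the pool/image name, monitor host, CephX user and secret UUID are all placeholders:

    <disk type='network' device='disk'>
      <driver name='qemu' type='raw'/>
      <source protocol='rbd' name='volumes/myimage'>
        <host name='mon1.example.com' port='6789'/>
      </source>
      <auth username='cinder'>
        <secret type='ceph' uuid='00000000-0000-0000-0000-000000000000'/>
      </auth>
      <target dev='vda' bus='virtio'/>
    </disk>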
So you can run QEMU/KVM directly off of an RBD image, and that is also supported in conjunction with both libvirt and with Nova, as we're going to see in a second. But for right now: Glance.
Glance is fully integrated with RBD. It's really, really simple, if you have a running Ceph cluster, to switch your Glance over to using that Ceph object store for image storage. Essentially, what you define is the CephX user — which is essentially a shared secret used to connect to the Ceph cluster — and the Ceph pool for the images, and boom, off we go. And then it works roughly like this.
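The glance-api.conf side of that boils down to a handful of options, roughly like this for the Folsom-era releases (option names have shifted in later releases; the user and pool names are placeholders):

    default_store = rbd
    rbd_store_ceph_conf = /etc/ceph/ceph.conf
    rbd_store_user = glance        # the CephX user Glance authenticates as
    rbd_store_pool = images        # the Ceph pool holding the images
    rbd_store_chunk_size = 8       # stripe images across 8 MB RADOS objects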
It essentially talks to Glance, getting the image that it wants, and this is actually smart enough in the latest versions of both Ceph and Nova compute that — I'd have a slide for that; no, I don't — you can actually do a direct boot from volume from that copy, which is really, really cool.
RBD is also integrated with Cinder. Cinder is the block storage layer — the block storage project — that we have in OpenStack, and it used to be called nova-volume.
It's an independent project as of Folsom, and it is also fully integrated with RBD, and there we also have the integration with Nova for boot from volume, which is cool.
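The cinder.conf side looks roughly like this in the Folsom era (the driver path and option names have moved around in later releases; names and the UUID are placeholders):

    volume_driver = cinder.volume.driver.RBDDriver
    rbd_pool = volumes                                       # Ceph pool backing Cinder volumes
    rbd_user = cinder                                        # CephX user for the volume/compute hosts
    rbd_secret_uuid = 00000000-0000-0000-0000-000000000000   # libvirt secret holding the CephX key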
And this is what makes this combination particularly appealing to those who are looking at OpenStack essentially to run a private cloud, which means they are looking for, or implementing, a modern way of running a data center. That's essentially what it is: we don't want to deal with iron anymore.
We want to have fairly standardized installations everywhere, and then we just want to keep deploying workloads. So here's how that works. When we need to talk to the Cinder API directly, the integration with RBD goes through the Cinder API — such as when I'm creating a new volume and whatnot. And if we have Nova actually talk to these volumes, it doesn't need to go through the Cinder API for the data path; it just goes through the Cinder API to figure out, okay, what do I need to connect to, and then it is actually Nova itself that does the talking.
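For QEMU on the compute node to be able to attach those volumes, the CephX key is registered with libvirt as a secret, roughly like this (the file name, secret name and key are placeholders):

    cat > ceph-secret.xml <<EOF
    <secret ephemeral='no' private='no'>
      <usage type='ceph'>
        <name>client.cinder secret</name>
      </usage>
    </secret>
    EOF
    virsh secret-define --file ceph-secret.xml
    virsh secret-set-value --secret <uuid printed above> --base64 <CephX key>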
But what we can also do with Ceph is use it as a drop-in replacement for Swift. Here's how this works. There is a RESTful HTTP or HTTPS access gateway into the object store. It's called RADOS Gateway. It is a FastCGI application built on the librados C++ API (libradospp), and it generally runs with essentially any web server that supports the FastCGI interface. The canonical way of doing it is with Apache and FastCGI; theoretically, you can do it with anything that supports FastCGI.
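A gateway instance is configured in ceph.conf roughly like this (section name, host and paths are placeholders); the web server — for example Apache with mod_fastcgi — is then pointed at the same FastCGI socket:

    [client.radosgw.gateway]
        host = gateway-node
        keyring = /etc/ceph/keyring.radosgw.gateway
        rgw socket path = /var/run/ceph/radosgw.sock
        log file = /var/log/ceph/radosgw.log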
Theoretically, that would include IIS. No, I would not necessarily recommend that. The cool thing about RADOS Gateway is that it did not really reinvent the wheel in the sense of "how about we write our own RESTful API" — no, it went and implemented existing RESTful object storage APIs instead. How about that.
All the RADOS Gateway-relevant data is in the Ceph object store itself, so that means we can have as many gateways as we want, and we can load balance them with round-robin DNS or with an IP load balancer — whatever we want — which again is kind of neat. By the way, the same thing is obviously true for the Swift proxy, so it's no better than the Swift proxy here; it's just feature parity.
And something that's relatively new is that it actually does support Keystone authentication. Just like Swift, which originally had its own authentication API and then grew Keystone support, the same is true for RADOS Gateway, except that it trails Swift by several months. But we have that now, so we don't need a separate authentication scheme when we're talking to RADOS Gateway; instead we can just use our Keystone credentials and go with that. So that's how that looks.
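The Keystone hookup is a few more options in the gateway's ceph.conf section, approximately like this (URL, token and roles are placeholders):

    [client.radosgw.gateway]
        rgw keystone url = http://keystone.example.com:35357
        rgw keystone admin token = SECRET_ADMIN_TOKEN
        rgw keystone accepted roles = Member, admin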
So what's next in OpenStack and Ceph integration? There are a few kinks and wrinkles that still need to be ironed out, so we're going to see some usability improvements as the Grizzly release draws near, and presumably after that as well. It is not fully baked yet — it could be better, it could be more elegant — but it's in the works.
This can, of course, use testing. So if you're spinning up a private cloud, all you need to build a Ceph cluster is three nodes; with those you can build a small Ceph cluster, and that's essentially all you need. And the modifications that you actually need to make to your existing infrastructure are really, really minimal, so actually spinning up a proof of concept for OpenStack plus Ceph is much less work than you might think.
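A three-node proof-of-concept cluster needs little more than an era-appropriate ceph.conf along these lines (host names and addresses are placeholders; current releases use different deployment tooling):

    [global]
        auth supported = cephx
    [mon.a]
        host = node1
        mon addr = 192.168.122.11:6789
    [mon.b]
        host = node2
        mon addr = 192.168.122.12:6789
    [mon.c]
        host = node3
        mon addr = 192.168.122.13:6789
    [osd.0]
        host = node1
    [osd.1]
        host = node2
    [osd.2]
        host = node3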
Finally, this is a solution that is very, very interesting for private clouds. As I said, at least that's what I see — or we see — in the consulting work that we do: the people who are interested in running this sort of combination of OpenStack and Ceph are usually the ones looking at OpenStack to build private clouds, OpenStack as the management layer for a modern data center.
Here's my email, a short link to my Google+ page, our company Twitter account and obviously our website. And if you want these slides, all of this is up on GitHub, so feel free to clone that, and if you want to reuse any of this, all of this material is under the Creative Commons Attribution-ShareAlike license. So please feel free to reuse whatever you want, except for the Ceph and OpenStack logos, for which I would suggest you get permission from either the OpenStack Foundation or Inktank.
Yeah, right — so Marc was saying that it's perfectly fine with the OpenStack Foundation, if you're doing an OpenStack talk at a community event, to use the logo. But I am NOT a lawyer, so when in doubt, double check. I think I'm about out of time. Do we have some time for questions — one or two questions? Okay, questions.
Yeah — generally, we suggest to the people that we work with to use XFS on the OSDs. And we generally advise people — and this is something that is generally true for Ceph OSDs — that the OSD uses a journaled write mode. So you put the journal on a fast, high-bandwidth SSD, you put your file store on cheap two-terabyte SATA spinners, and you slap XFS on those, and that tends to work really, really nicely.
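In ceph.conf that split looks roughly like this (device paths and the journal size are placeholders):

    [osd]
        osd mkfs type = xfs
        osd journal size = 10000              ; journal size in MB
    [osd.0]
        host = storage1
        osd data = /var/lib/ceph/osd/ceph-0   ; XFS filesystem on the SATA spinner
        osd journal = /dev/sdb1               ; partition on the fast SSD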
We had a question over here.
Yes — but then it's not... So the question was: is it possible to have that happen? Yes, of course, but then it's not boot from volume. Then it's something that we've always been able to do, including in Essex, where you're booting off of an ephemeral disk and then attaching a volume from Cinder, and you can obviously do that.
So, do double check the schedule for that — that is a 90-minute tutorial. Also check the conference wiki or subscribe to our Twitter feed, because we're going to announce where you can download the virtual machines as soon as someone from the conference team actually tells me where to upload them to. Right — thank you very much.