From YouTube: CephFS as a Service with OpenStack Manila

Description

Ceph is a popular storage backend for OpenStack deployments, in use for block devices via RBD/Cinder and for object storage with RGW. Ceph also includes a filesystem, CephFS, which is suitable for integration with Manila, the OpenStack shared filesystem service. This presentation introduces the new CephFS native driver for Manila, describing the implementation of the driver and how to deploy and use it. Areas of particular interest for Ceph users will include how CephFS snapshots map to Manila…
I won't hold you up from your evening plans. My name is John Spray, I'm a developer at Red Hat, and I'm going to talk to you today about how you can use CephFS to create a filesystem as a service with OpenStack Manila.
So I'll start by giving you a brief introduction to Ceph and Manila, although this is not going to be in depth on either of those topics; it's really about the integration of the two.
So: how we map the concepts that the Manila API exposes onto what CephFS is capable of; the experience of actually implementing that driver and working with the Manila interfaces to do that; then there'll be a tutorial on how you can set it up and use it yourself; and finally I'll go into some detail about the next steps here, since the driver that we have today is really just a beginning down this path. So, CephFS. CephFS is a distributed POSIX filesystem.
You probably already have it if you have Ceph installed. If you are using vendor packages, it's possible that they might leave it out if they're not supporting it yet, but if you're working with upstream packages, you probably already have it. You can mount CephFS filesystems using the kernel client, which is part of the upstream kernel; you can use the FUSE userspace client; and you can also use a library called libcephfs if you need direct access to the filesystem from your applications.
And finally, the Ceph filesystem does a little bit more than most filesystems: it has directory-based snapshots, and it also has recursive filesystem statistics, so that you don't have to spider directories to get the stats about usage. So, in visual form, that's what using CephFS looks like: you have a client at the top, which has a filesystem mounted, and it's talking directly to the RADOS cluster to store the data for you.
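As a rough illustration of the two mount options he mentions (hostnames, paths, and the secret file here are placeholders, not from the talk):

```sh
# Kernel client: mount the filesystem root from a monitor address
sudo mount -t ceph mon1:6789:/ /mnt/cephfs \
    -o name=admin,secretfile=/etc/ceph/admin.secret

# FUSE client: reads monitor addresses and keys from /etc/ceph/ceph.conf
sudo ceph-fuse /mnt/cephfs
```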
If you need more detail about CephFS, please come to Greg Farnum's presentation, which is in this room on Thursday, and there's also a ton of information online in the form of the official docs at ceph.com. But if you just go ahead and Google CephFS, you'll find there are plenty of talks and videos online for you to learn about it.
So, Manila is the OpenStack shared filesystem service.
At a high level, Manila exposes an API that tenants' applications can use to request some storage, to request that certain network entities are authorized to access that storage, and then to manage the lifecycle of the storage, including providing quotas to deal with the multi-tenancy aspects of having many applications share a fixed pool of storage. Manila maps those operations to different backends, depending on what you're using: you might have a software-defined storage solution, you might have a physical hardware appliance. Those modules in Manila are called drivers.
Most of the existing drivers are for talking to proprietary storage systems, but there are a few open source ones already, especially GlusterFS, the new CephFS driver, and the so-called generic driver, which exposes NFS shares based on Cinder volumes.
So the usage of Manila looks something like this: the tenant is in the top left.
He sends off an API request to Manila, saying "I would like some filesystem storage". Manila picks the proper driver to send that request on to some backend; the backend assigns the storage, and the address that the client can use to mount the storage gets passed all the way back up to the tenant. Once that's happened, the tenant can pass that address into a guest virtual machine, which can in turn mount the filesystem.
From the point that the filesystem is mounted, nothing's flowing through Manila anymore: Manila is a control plane, and the data goes directly from your guest VMs to whatever backend you're using.
So why do we want to put these two things together? Well, our favorite graph from the OpenStack survey, which has Ceph being used in the majority of clusters, means that if you're looking for a Manila backend, the chances are you already have Ceph storage that you would like to use with Manila, so that you don't have to deploy another system, another set of disks.
It's also pretty useful that you can have an open source backend for your open source cloud: you don't have to worry about buying some separate piece of hardware from some specialized storage vendor in order to prototype and build out your clouds; you can start with all free, all open source software. That's also really useful if you're a tester or a developer and you want to hack on Manila: you need a backend that you can just install yourself and use. And at the highest level, why do we want to do any of this at all?
Why do we want shared filesystems at all? Well, because applications want them, and there are a lot of applications out there that weren't necessarily built for the cloud, and some applications which are built for the cloud that find that the filesystem is a more appropriate model than object storage or block storage. So all of this is ultimately about enabling your users to run their applications on your clouds.
So the unit of storage that Manila works with is called the share, and it's worth specifying exactly what we mean by that. Manila combines the allocation of storage with the act of sharing it over the network. If you were using a normal Linux server, then you would separately create a filesystem on a disk and then configure your NFS daemon to export it to a particular place; they're really two separate concepts, but in Manila they are combined. And once you've exported something using Manila, it forms an independent namespace.
So you can't move files between Manila shares; they are individual, atomic namespaces. Manila also expects that shares should be limited in size. That's perhaps a little bit of a hangover from the days when you were dealing with hardware storage controllers, where you really would be carving something out of a LUN, and so we have to do a certain amount of work to enforce the size limit, whereas for some other backends it's just intrinsic. So this doesn't exist as a concept built into Ceph, but we can take the primitives that CephFS gives us and use them to compose something which acts the way Manila expects it to. The way we do that is: we start with a directory.
We can use the layouts that CephFS has for controlling where data goes in directories; we can use that to send the data in one of these share directories to a particular RADOS pool or RADOS namespace, and that gives us our isolation between tenants, so that one tenant can't reach into the RADOS pool or the RADOS namespace that another tenant is using and go and touch his data. The reason that's necessary is, if you recall, that Ceph clients natively write directly to the RADOS cluster.
So if you have two clients, you have to make sure that they can't write directly to each other's RADOS-stored data. We use CephFS filesystem quotas to enforce a limit on the size, and we also use Ceph's built-in authentication system to restrict what metadata clients can access. So, rather than relying on clients being well behaved and only mounting the directory that we would like them to use as their share,
we have authentication on the backend that enforces that. Those ways of implementing a share map directly to CephFS features, some of which already existed and some of which were put in place or refined to deal with this. So the ability to limit clients by path is a fairly new thing that I think was in the Infernalis release of Ceph.
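To make that mapping concrete, here is a rough sketch of the underlying primitives he describes; the pool, user, and directory names are illustrative, not from the talk:

```sh
# Pin a share directory's data to a tenant-specific RADOS pool via file layouts
setfattr -n ceph.dir.layout.pool -v tenant_a_data /mnt/cephfs/volumes/share1

# Enforce the share's size limit with a CephFS quota (value in bytes)
setfattr -n ceph.quota.max_bytes -v 10737418240 /mnt/cephfs/volumes/share1

# Create a client identity restricted to that path and pool
ceph auth get-or-create client.alice \
    mon 'allow r' \
    mds 'allow rw path=/volumes/share1' \
    osd 'allow rw pool=tenant_a_data'
```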
Historically, clients could always change their layouts, so if you tried to limit them to a pool, they could always just point themselves at a different pool. We fixed that, and there's a new letter on the end of your MDS capabilities that you would need to have in order to set the pool in a layout. We also took the df command and based its output on quotas.
In most other drivers, those access rules refer to IP addresses: you give it an IP subnet that should have access. Using IPs for authentication is not actually as scary as it sounds, because those other drivers in Manila also use network virtualization to restrict clients, only permitting access to clients that have been added to a particular network in Neutron. But nevertheless, from Manila's point of view, it's IP addresses that you're authorizing, whereas in the Ceph driver we're authorizing user accounts: not a network concept, but an ID that lives within Ceph.
So this work to create this pseudo-separate filesystem within the global Ceph filesystem is all wrapped up in a new class, which is part of the Jewel release of Ceph, called CephFSVolumeClient. The motivation here is to allow us to iterate on this, and to test and release it, independently of Manila, and to hide the Ceph implementation details from Manila.
This is very lightweight at the moment (I think it's less than a thousand lines of code), but it will grow a little bit in the future as we need to support more Manila features beyond just creating and removing shares and authorizing and deauthorizing them. And that's a diagram of where the separation is between CephFSVolumeClient and Manila. It's worth being aware of this, because the top half of that is in one git tree, on one project's release cycle, and the bottom half is in a different git repository and a different project's.
So, historically, drivers were things that let you talk to an NFS filer storage appliance, and they typically implement NFS, or possibly CIFS if you're lucky, so Manila has hard-coded the list of network protocols. With open source filesystems, that's not the case: CephFS has its own protocol, so does GlusterFS, so does Lustre, so does GFS2.
So at the moment, when you want to add one of these protocols, you have to actually edit a series of places in the Manila codebase: you have to edit the Python client, the UI, the API server, along with writing all the corresponding unit tests for all of these different places, for something that, ideally, you would really be able to declare from inside your driver.
Now that I'm done complaining, I'll move on to the tutorial of how you actually use this. So, caveats up front: you need a Manila version equal to or greater than Mitaka; you need Ceph Jewel or higher; and with both of these things we're still in the process of smoothing off rough edges, so there will be point releases and you will want to make sure you've got the most recent ones.
The guests that want to access a CephFS filesystem using the native protocol need access directly to the Ceph cluster, and that's really the biggest caveat right now, and that's what makes me say that this driver is the first step in a series. Most public clouds would not want to do this; in fact, I would say all public clouds would not want to do this. You do not want to give untrusted third-party code access to your Ceph network, to your storage network.
However, if you have certain use cases, you might find this useful. For example, if your virtual machines are somewhat trusted and they're being used as container hosts for untrusted applications within them, and you want something that will give you a filesystem for the volumes for your containers, so you've got another level on top of it that's going to isolate your applications, then you might consider deploying this. So, again, because we're using the native protocol, the guests need to have the CephFS client software installed.
That's not a fundamental issue, but it's kind of annoying if you've got, you know, pre-built images and you need to update them to make sure they've got the client software in. And at the moment the quota limitations on the size of shares are enforced client-side, which means you need to somewhat trust your clients; that's really less of an issue than the fact that they need access to the cluster network. So the general sense is that whatever's mounting these filesystems needs to be somewhat trusted, and not random third-party code.
So, firstly, you need a CephFS filesystem, and setting up CephFS is very straightforward: you would use ceph-deploy or Ansible or whatever your tool of choice is to create an MDS daemon; then you need a pool for your data and a pool for your metadata; and finally you register those pools with Ceph for use as a filesystem, with the fs new command.
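In command form, that setup is roughly the following; the node name, pool names, and PG counts are just examples:

```sh
# Create an MDS daemon on one of your nodes (via ceph-deploy here)
ceph-deploy mds create mds-node1

# One pool for data, one for metadata (64 PGs each as an example)
ceph osd pool create cephfs_data 64
ceph osd pool create cephfs_metadata 64

# Register the pools as a filesystem: metadata pool first, then data pool
ceph fs new cephfs cephfs_metadata cephfs_data
```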
Once you've done that, you can start setting up Manila. The CephFS driver is part of Manila itself, as are all the other drivers.
So there's no separate package to install; you'd install your Manila package built from Mitaka or more recent. You also need librados and libcephfs: those are the libraries that the driver is going to use to talk to the Ceph cluster. And make sure that your Manila server actually has a connection to your Ceph network; that's obviously less of an issue than connecting your guests to it, but it still needs to be the case. The Manila server will also use a Ceph identity of its own.
There is a great big command line in the docs for creating that. It's huge because it has a whitelist of which administrative operations Manila is allowed to do; we've put that in there so that, if there are any sort of unexpected bugs or glitches with Manila, it's not at risk of wiping out other stuff that's going on on your Ceph cluster. Once you're happy that you've gone through this process, run ceph status with your client.manila key to check that everything's okay.
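The command in the docs is roughly of this shape (check the current driver documentation for the exact capability list; this is a sketch):

```sh
# Create a restricted 'manila' identity: read-only mon access plus a
# whitelist of auth commands, full MDS access, and rw on OSDs
ceph auth get-or-create client.manila \
    mon 'allow r, allow command "auth del", allow command "auth caps", allow command "auth get", allow command "auth get-or-create"' \
    mds 'allow *' \
    osd 'allow rw' \
    > /etc/ceph/ceph.client.manila.keyring

# Sanity check: can we reach the cluster as client.manila?
ceph --name client.manila \
    --keyring /etc/ceph/ceph.client.manila.keyring status
```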
Once that's happened, you can actually load your config into Manila itself. So make sure that the keyring you created is visible somewhere that the Manila service will be able to access it; the default location is best, so that you don't have to explicitly configure that. And then you need to add a stanza like this to your Manila config file: we're telling it the share backend name is cephfs1, and here's the path to the Python module that we want to use.
That's the path to the CephFS driver, and there is the config file that belongs to Ceph: that's going to tell the librados and libcephfs instances that we have everything they need to know about how to connect to Ceph.
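The stanza looks roughly like this; option names are as in the Mitaka-era driver docs, and you would also enable the backend in the [DEFAULT] section, so treat this as a sketch rather than a complete config:

```sh
# Append a CephFS backend stanza to the Manila configuration
cat >> /etc/manila/manila.conf <<'EOF'
[cephfs1]
driver_handles_share_servers = False
share_backend_name = CEPHFS1
share_driver = manila.share.drivers.cephfs.cephfs_native.CephFSNativeDriver
cephfs_conf_path = /etc/ceph/ceph.conf
cephfs_auth_id = manila
EOF
```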
You also need to create a share type for CephFS; share types are a Manila concept.
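Creating the share type is a one-liner; "cephfstype" is just an example name, and the trailing "false" says the driver does not handle share servers:

```sh
manila type-create cephfstype false
```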
Once you've got Manila up and running, you can go ahead and create a share. There's probably some restarting of services that needs to happen in between here, but that's going to depend on, you know, what packages you're using and how you're doing all of that. So when we create a share, we refer to the share type that we created earlier, we give the share a name, and the "cephfs" before the "1" is where we're telling it to actually use the CephFS backend.
Actually, I think that should be the name of the backend from the previous slide, but no one picked me up on that when I presented this last week, so I'll get away with it. I'm creating a 1 gigabyte share here, which isn't terribly useful, but yeah. And then, as I said, you can't do anything with the share until you've authorized someone to access it, so here we're going to call manila access-allow for a user called Alice.
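Put together, the two commands look roughly like this; the share and user names are illustrative:

```sh
# Create a 1 GB share using the CEPHFS protocol and our share type
manila create --share-type cephfstype --name cephshare1 cephfs 1

# Authorize a Ceph auth identity called 'alice' to access it
manila access-allow cephshare1 cephx alice
```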
What the driver is going to do on our behalf here is go and talk to Ceph, create an ID and a key for a user called Alice, and give Alice the auth caps that she needs to access the share that we just created, or, specifically, to access the directory that we created to embody the share that was requested by the user. Then, finally, all of that stuff gets passed through into a ceph-fuse command that picks those credentials up.
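On the client side, that might look like the following sketch; the monitor address, keyring path, and the /volumes/... export path are placeholders:

```sh
# Mount just the share's subtree as 'alice'
sudo ceph-fuse /mnt/cephshare1 \
    -m mon1:6789 --id alice --keyring ./alice.keyring \
    -r /volumes/_nogroup/cephshare1
```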
If you want to go a little bit further, one of the interesting things you can do at the moment with Manila is have multiple backends on one server, and that includes having multiple CephFS backends. So, for example, you might choose to have two different backends that were using a different root directory for creating their volumes in.
B
So,
if
you
do
that,
then
you
could
have
a
different
root
directory
that
had
a
different
layout
to
the
pointed
to
a
different
OSD
pool,
and
that
way
you
have
a
back-end
that
went
to
10,
SD
pool
and
back
in
the
went
to
another,
oh
steeple
in
jewel.
We
also
had
it
experimental
the
experimental
ability
to
create
more
than
one
file
system
within
a
set
cluster,
and
these
separate
file
systems
would
use
separate,
MDS
instances.
Now I'm going to move on to what, for some people, is the interesting part, which is: how do we go from this initial CephFS native driver, which comes with a long list of caveats, to something that is suitable for deploying clouds with a shared filesystem service that can be used by your third-party guests, your third-party tenants? So the obvious thing to do is to put some NFS between Ceph and the guests, and the NFS servers would create a bridge between your storage network, where your Ceph cluster lives, and whatever network it is you want to use for the guests to access this.
So you would have some other network that you've created using your virtualized networking; you'd create a new virtual machine which will act as an NFS server, connect it to the network you just created, connect your guests to the network you just created, and then you would have guests that could access a CephFS filesystem without needing access to the storage network. So this isn't necessarily a bad idea, but it's not as simple as it first sounds.
So if you want the level of high availability or the level of performance that you've come to expect from a Ceph cluster, you can't just spin up one virtual machine. You need to spin up multiple virtual machines, you need to handle the case where one of them goes down and you need to create a new one, and this is tractable, but somebody needs to come along and actually do the work to make it happen. And even if you go to the trouble of doing all that work,
as you can see, you still have this extra hop. It's an extra failure domain, it's an extra piece of latency; it's just all around kind of suboptimal. A perhaps slightly further-out, but ultimately preferable, way of doing this is what some people call hypervisor-mediated access to shares. So this is a little bit like what we currently do for RBD and Cinder, where the hypervisor machines have access to the storage network and they handle the challenge of controlling what the guests can see and exposing it up into the guests.
So the guests no longer need to connect over an IP network to anything; they no longer need to know about a remote network or a remote address they need to talk to. They just communicate somehow with their hypervisor to say "I would like my filesystem", whatever that may be, and all of that potentially security-sensitive work of working out which filesystem that is and exposing it happens on the hypervisor.
So the question at the moment is: what should coordinate all of this, and what should that last link between the guest and the hypervisor be? In this diagram it says NFS over vsock, and that's our preferred approach. In Tokyo, Sage went through a number of different options for this in his presentation, and at the moment this is what we're favoring. So the idea is to take the existing NFS client, which exists in the Linux kernel,
and the existing CephFS NFS server implementation that we have in the form of NFS-Ganesha (or indeed the kernel NFS daemon), and expose it to guests using this new piece of functionality, which we're hoping to see land in the upstream Linux kernel soon, called vsock, which gives us a guest-to-host network socket with no IP networking involved.
If we can adopt vsock, then it saves us the effort of maintaining any special code inside the guest for dealing with filesystems. You have this little special piece of code for dealing with vsock, but that's just the networking part, no extra filesystem. So we avoid the need to, for example, maintain that 9p protocol, and at the same time we avoid the need to maintain something special for exposing the filesystem into the guests, because we can just use NFS-Ganesha.
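For a flavor of what that looks like, an NFS-Ganesha export over the Ceph FSAL is configured roughly like this; the export ID and paths are placeholders:

```sh
# Minimal Ganesha export of a share directory via the Ceph FSAL
cat > /etc/ganesha/ganesha.conf <<'EOF'
EXPORT {
    Export_ID = 100;
    Path = "/volumes/_nogroup/cephshare1";
    Pseudo = "/cephshare1";
    Access_Type = RW;
    FSAL {
        Name = CEPH;
    }
}
EOF
```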
It should be the same piece of software that we would use if we were running a remote NFS daemon. So if you're interested in learning more about that, there are some prior presentations that are online at the moment, both Sage's and also Stefan's, who's the engineer working on getting this upstream into the Linux kernel. Once you have this path between your hypervisor and your guest, you need something to coordinate it.
So there's this new concept needed in Nova, called a share attachment, and attachments need to know how they should expose the filesystem into a guest, because NFS over vsock is only one option: you also have libvirt 9p, and potentially other protocols and options in future. You could do lots of things; if you're feeling crazy, you could create an IP network between the host and the guests and run CIFS over it.
Finally, once you've got your sort of high-level ideas and concepts in Nova of how to make this link and expose things into the guest, there is some plumbing that needs to be done, specifically for the vsock case. So vsock has host-local addresses: each guest gets an address called a CID, and those get assigned at instance startup. So something needs to assign the addresses and write them into the domain XML,
if you're using QEMU/KVM; Ganesha needs to know how to authenticate based on those things; and libvirt needs to know how to map those things from the XML into the command line for QEMU. None of those things is independently particularly complicated, but it's to give you an idea of the number of little pieces of plumbing that are going to be involved in making this a reality.
There are some more short-term actions that we need to take with Ceph and Manila. Some of the stuff I've talked about today didn't quite make the cut for Jewel, and so there's stuff that's landing at the moment that's going to get backported. The driver does work today, but for things like df basing its output on quotas, we need to make sure we backport all the right stuff.
I want to make sure I have time for questions, so I'm going to skip past that. Currently the driver has a concept of data isolation, where you can pass an option in to share creation and it'll create a separate CephFS pool for that share. It would also be possible to extend that to metadata isolation. So this is similar to what I was talking about earlier with having multiple backends that used a different filesystem; we would be able to do that within a single backend, and have it set on a share-by-share basis, to say that this
share for this tenant should use a different filesystem, without having to have multiple backends. It would also be possible to orchestrate the creation of virtual machines that would act as MDSes. So once you get to the position where you have lots of shares that all want independent MDSes, obviously you wouldn't want to do that on your general hardware storage backend, because you wouldn't know in advance how many MDSes you'd need. So it could be interesting to try virtualizing this.
[Audience question, inaudible.]

The same way that we do in any other clustered NFS environment: the NFS daemons running on the hypervisors become a sort of implicit cluster. In the way that NFS-Ganesha works on top of CephFS, they can store enough of what they need to do the NFS-layer coordination inside CephFS, so each one individually is talking down into CephFS to store whatever it needs to store, and then the daemons running on different hypervisors will be able to see each other's state; it's exactly the same way.
[Audience question, inaudible.]

So, because in the native protocol clients are somewhat trusted; this is one of the big motivations for later having NFS on top of it. Today, if you recompile your CephFS client to ignore the quota, then you could do so. Similarly, today, if you recompile your CephFS client to take a lock, never give it up, and bring the rest of the system to a halt, you could also do that, because the nature of the native Ceph protocol is that clients are somewhat trusted.
[Audience question, inaudible.]

The issue that comes up with Manila is that the way Manila wants to provide users with access to a snapshot is to allow them to clone from it, whereas in Ceph we expect that someone creates a snapshot and then we expose it in a directory called .snap, and they can just go look at it; there's no need to clone something else to go and look at it. But Manila doesn't currently have the concept of a read-only share, so when they do clone from snapshot,
if we wanted to implement that so that it just pointed at one of our snapshots, we would be giving them a share that looked like it should be writable but really wouldn't be. So the solution to that is to give Manila the ability to have read-only shares, so that we can do a nice, efficient implementation of clone-from-snapshot, with the caveat that it's read-only, and I think, for many users...
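The .snap convention he describes works like this from any client that has the share mounted; the snapshot name is arbitrary:

```sh
# Creating a directory under .snap takes a snapshot of that subtree
mkdir /mnt/cephshare1/.snap/before_upgrade

# Snapshots are browsable, read-only, in place
ls /mnt/cephshare1/.snap/before_upgrade/
```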
[Audience member:] Yep; as far as I've counted, you've mentioned three things that typically take a relatively long time to trickle into a distro, and that very few Ceph developers and very few OpenStack developers actually have any control over: that's vsock in the kernel, whatever you need in libvirt, and NFS-Ganesha packaging. So...
Thank you. Not in this forum, but the Ganesha changes have already landed in Ganesha, so there is at least one of that list that is already somewhat further along. Well, the kernel part is the biggest one, but it's also the part that's most important to get right before landing it, because once it's in there, it's going to be in there, you know, forever. So we'll have to wait and see, but hopefully soon.
[Audience question, inaudible.]

That's a caveat that I should have covered. So one of the things that we want to add to Manila in Newton is the ability to have it return the keys from a share. Currently, if you create a new identity in the process of authorizing someone (that is, if the thing you authorized is a user that didn't already exist), you would also have to ask your friendly local Ceph admin to go and use his command line tool to get the key for you.
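Until then, that admin-side lookup is just the following; the user name is illustrative:

```sh
# Retrieve the secret key for the identity Manila created
ceph auth get-key client.alice
```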