From YouTube: CephFS Backed NFS Share Service for Multi-Tenant Clouds
Description
OpenStack's Shared File Systems service, Manila, provides a modular framework for storage backends to securely export file shares, with network-separated data paths between tenants. CephFS, a POSIX-compliant distributed file system built on top of Ceph, is ready to leverage this multi-tenant framework, making it the cloud-ready, open source, scalable storage backend that Manila lacks. It can serve NFS shares using NFS-Ganesha, a user-space NFS server. Recent updates to NFS-Ganesha and its inte…
A
All right, good morning everybody, and welcome to the Ceph day. Surviving to the final day of OpenStack is always an accomplishment; I think we lose about 30% of the people along the way, but thank you very much for making it to this early morning session. Here are my friends from Red Hat, who are going to talk about CephFS, which is something we haven't heard a lot about in official talks lately. So this is going to be exciting for me too.
B
Okay,
so
texting
that
works
great,
so
welcome
everybody.
Today
we
are
going
to
talk
about
is
Professor
or
College
says.
The
title
of
our
presentation
is
a
fast
bike,
an
official
service,
formal
teaching
and
clouds,
and
my
name
is
Victor
Martinez
la
cruz.
I'm
a
support
engineer
on
the
OpenStack
Mella
project,
hi.
B
OK, so the content for today is split into four sections. First, we are going to start with a brief overview of the key components: we are going to go over the tools we chose, why we chose them, and maybe say a few words about their latest updates. Then we are going to cover the current state of CephFS as a native driver. Then we are going into the current state of the CephFS NFS driver, and we are going to end this talk with a brief discussion of future work in the project.
B
That will put you in context of what our plans are for the next releases. So what are we doing here? We have several tools. First, we are going to talk about Manila, then we are going to talk about what CephFS is, and finally about Ganesha. The idea of this whole presentation, as you're going to see today, is to have NFS shares backed by CephFS storage.
B
That way you have, you know, consistent access in the cloud through Manila. So first, let's talk about OpenStack Manila. OpenStack Manila is the shared file system service for OpenStack. Basically, what it does is offer a set of APIs for tenants to request file system shares. It has support for several drivers: some of them are proprietary, but there are also open source options, such as the CephFS driver, of course, and the so-called generic driver, which is NFS on top of Cinder volumes. That is the reference driver we have, so you will see it mentioned.
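As a rough illustration (the share name and share type here are hypothetical), requesting a share through Manila's CLI looks something like this:

    # Hypothetical names; the share type selects which backend/driver serves the share
    manila create NFS 1 --name demo-share --share-type default
    manila show demo-share                        # poll until status is 'available'
    manila share-export-location-list demo-share  # where to mount the share from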
B
That
alone
is
its
that's
what
it
is
and
when
Manila
ok,
it's
usually
more
related
use
case
of
of
this
project,
so,
first
and
foremost,
file
based
applications
are
not
going
away.
If
you
want
to
run
those
kind
of
workloads
on
the
cloud,
it
is
really
useful
to
have
a
service
like
Manila
to
you
know,
get
your
share.
Some
demand,
apart
from
that,
is
very
useful
from
the
interoperability
standpoint.
Since
you
can
you
can
access
different
storage
systems
with
the
same
API.
Also,
we
have
to
mention
the
rights
of
containers.
B
That's
super
fancy
way
of
putting
it,
but
what
it
is
is
that
everybody
is
using
containers.
Everybody
wants
containers
like
in
this
conference.
You
have
for
continuous,
so
much
so
how
many
times
like
all
the
talks
were
about
containers-
and
we
have
to
remember
that
the
storage
in
containers
is
no
more
than
you
know,
a
file
in
a
file
system,
its
dots,
the
volumes,
but
basically
what
it
is
in
in
containers,
war
and
finally,
the
concept
of
permissions.
That
is
a
very
useful
concept
that
we
handle
in
file
systems.
B
It
applies
to
several
use
case,
for
you
know,
use
current.
You
were
close.
You
are
going
to
run
the
cloud
now.
Let's
talk
about,
CFS
safe,
probably
needs
no
introduction
by
now,
but
basically
it's
a
free
and
open
source
storage
platform
that
implements
an
object,
storage
so
and
provides
interfaces
for
the
object
block
on
file
level
storage.
B
Let's focus only on the section on the right. CephFS is a distributed POSIX file system. You have different clients to interact with it: the most basic one is the kernel client, you also have libcephfs, and you can mount CephFS through FUSE with ceph-fuse.
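For instance (the client name and paths here are made up, and a cephx keyring for that client is assumed to be in place), a user-space mount with ceph-fuse looks roughly like:

    # Mount CephFS via FUSE as the cephx user client.alice
    sudo ceph-fuse -n client.alice --conf /etc/ceph/ceph.conf /mnt/cephfs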
Well, that's pretty much all we want to say about Ceph today. So let's move on to why CephFS, which I think is more important.
B
Here we have a graphic that we got from the user survey. It shows that most of the users of Manila right now are choosing CephFS over other file system solutions. I don't want to lie here: these are the numbers as we see them, but I want to note that this was a really small set of people that was asked. Only a small number of users actually answered the question, and maybe it was not, you know, the people we most wanted to ask those questions.
B
Mostly it was developers, but this shows a tendency that we like: CephFS is being adopted, and that's what we expect to see in the upcoming releases. This is also the same for Cinder; for Cinder we see a really similar diagram. You can access the full numbers and everything at the link we have down there.
B
Also, it provides scalable data and metadata, and of course it's POSIX, so I think it made complete sense for those reasons. Now, let's talk about NFS-Ganesha. NFS-Ganesha is the last piece of our combo here. NFS-Ganesha is a user-space NFS server. It has support for different versions of the NFS protocol, and it has a modular architecture.
B
It
has
a
provides
applicable
file
system,
abstraction
layer
that
allow
for
various
storage
backends
while
including
CFS
when
a
cluster
office,
and
it
I
also
have
another
interesting
features
such
as
polynomials
pores
for
the
with
Eva's.
It
can
manage
huge
metadata
caches
because
it's
user
level,
so
it
has
access
to
memory
in
different
way.
It
has.
It
provides
simple
access
for
other
uses
space
services
such
as
Kerberos
LDAP,
which
is
pretty
useful
and
again
it's
open
source.
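To give a feel for that abstraction layer, here is a minimal, hypothetical NFS-Ganesha export block using the Ceph FSAL (the path, export ID, and cephx user are made up):

    # /etc/ganesha/ganesha.conf (sketch)
    EXPORT {
        Export_Id = 100;                        # unique ID for this export
        Path = "/volumes/_nogroup/share-uuid";  # CephFS path backing the share
        Pseudo = "/share";                      # NFSv4 pseudo-fs path clients see
        Protocols = 4;
        Access_Type = RW;
        Squash = None;
        FSAL {
            Name = CEPH;                        # use the libcephfs-backed FSAL
            User_Id = "manila";                 # cephx user Ganesha connects as
        }
    }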
B
So, for your storage, open source storage for your open source cloud, you have NFS-Ganesha for CephFS, which is also open source. And why NFS-Ganesha? Why would you use NFS-Ganesha if you have the native driver for CephFS? Well, if you want NFS backed by an open source storage technology, then NFS-Ganesha works for you. And if you want to leverage CephFS while keeping your NFS setup, because you have existing workloads using NFS, then you can use NFS-Ganesha.
C
Thanks, Victoria. Today I will talk about the current state of the CephFS drivers in Manila and how we are evolving them into drivers that can work with multi-tenant workloads. First up, I will talk about the CephFS native driver, then move on to the CephFS NFS driver. The CephFS native driver was introduced in the Mitaka release; it's been there for a while. It works with Ceph versions Jewel or later, and it creates shares backed by CephFS that can be accessed via the native CephFS protocol.
C
So
you
need
self
clients
in
the
OpenStack
beams
that
have
direct
access
to
the
storage
by
and
so
what
that
means.
Is
you
get
native
surface
performance?
But
you
know,
because
you
have
direct,
you
need
direct
access
to
the
storage
back
in.
You
need
the
clients
to
be
trusted,
so
that
makes
it
useful
only
for
you
know
certain
use
cases
of
private
clouds,
but
not
for
public
clouds.
C
You
need
to
keep
that
in
mind.
There
were
bug
fixes
since
the
Mitaka
release
or
the
CI
is
pretty
stable,
so
it
can
be
used
by
I
think
it
should
be
used
by
upstream
developers
and
testers
as
their
first
choice
of
back-end
when
they're
developing
a
stuff
with
Manila,
if
they're
familiar
with
self,
so
the
numbers
don't
lie.
We
can
see
that,
even
though
this
is
a
building
block
diver,
it
already
has
a
good
option
raid
okay.
So
later
today,
this
is
taught
by
Sean
on
how
they're,
using
this
building
block
driver
he's
right
there.
C
So
Annie,
okay,
a
surface
driver
in
OpenStack
is
a
control
control,
plane
service.
Like
other
OpenStack
components,
you
have
an
open
stack
tenant.
He
shows
a
HTTP
request
to
create
a
share
and
the
driver
goes
to
the
backend
or
I
mean
and
creates
directory
surface
directories
which
correspond
to
shares.
C
It
sets
a
quota
on
it
corresponding
to
the
sheer
size
and
you
create
those
surface
subdirectory
and
unique
radius
namespaces
and
after
that,
he
the
tenant,
wants
to
allow
authorize
certain
Sephiroth
IDs
access
to
the
share.
So
what
this
does
this?
The
panel,
the
native
driver
and
the
vanilla
service
authorizes
this
sephiroth
ID
to
access
the
share
and
returns
a
secret
key
back.
C
Once the share has been created and is available, the tenant gets back a share export location, which is a concatenation of the Ceph monitor addresses and the directory path. After that, they would ask for certain cephx IDs to be given access to the CephFS subdirectory, the CephFS share, and then they get back a secret key. Knowing all of this, they can now mount the share.
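As a sketch of that control plane flow (the share name and cephx ID here are hypothetical):

    # Create a 1 GiB CephFS share and wait for it to become available
    manila create cephfs 1 --name my-share --share-type cephfs-type
    # Authorize a cephx ID; the secret key shows up in the access list
    manila access-allow my-share cephx alice
    manila access-list my-share
    # The export location combines the Ceph monitor addresses and the directory path
    manila share-export-location-list my-share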
So in the data plane, not surprisingly, it's very similar to how you'd use CephFS directly, only that you have the Ceph clients running in the OpenStack Nova VMs.
C
It
just
goes
directly
through
those
DS
and
for
metadata
updates.
It
goes
through
the
MD
essence
of
the
SEF
back-end
services,
just
reiterating
the
points,
because
the
clients
are
directly
connected
to
the
safe
public
network,
though
I
mean
we
are
kind
of
dependent
on
the
clients
that
they
be
trusted,
and
you
rely
just
on
the
native
Sephiroth
indication
system.
There
is
no
single
point
of
failure
in
the
data
plane
because
you
just
rely
on
the
surf
server,
daemons
and.
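For illustration (the monitor addresses, path, and credentials here are hypothetical), a trusted client VM would then mount the share with the kernel client roughly like this:

    # Export location from Manila: monitor addresses plus the CephFS directory path
    sudo mount -t ceph 192.168.1.11:6789,192.168.1.12:6789:/volumes/_nogroup/share-uuid \
        /mnt/share -o name=alice,secretfile=/etc/ceph/alice.secret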
C
Besides
this,
we
worked
on
getting
this
working
in
a
you
know,
not
just
devstack,
but
also
in
a
triple
low
deployment.
So
work
was
done
to
make
sure
that
triple
o
can
deploy
the
SEF
nts's,
but
it
as
composable
roles.
What
that
means
is
you
can
place
it
in
the
node
you
want,
so
care
must
be
taken.
So
that's
F
MDS
is
don't
affect
other
services.
You
need
to
be
careful
about
that.
You
don't
want
SF
MDS
is
running
along
with
the
overstays.
C
Doesn't
make
sense,
so
you
typically
you
know
it's
better
to
run
it
with
the
SEF,
monitor
services
and
and
also,
if
you
run
it
with
Python
services,
the
you
know
the
OpenStack
service.
You
need
to
be
careful,
we
don't
you
know,
you
don't
want
them
to
like
affect
each
other,
so
that
that
that
work
was
done.
C
The tenant VMs are not, you know, on the Ceph public network, so you'd have to connect them through the Neutron router, which connects to the external provider network, and that is the public network. And then we want the tenant VMs to access the storage public network, because we need the tenant VMs to directly access the Ceph storage network. So what you can do is have another NIC on the tenant VM that is on the storage provider network. It's easy to set up.
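As a sketch (the network, port, and server names here are hypothetical), attaching such a second NIC with the OpenStack CLI could look like:

    # Create a port on the storage provider network and attach it to the tenant VM
    openstack port create --network storage-provider vm1-storage-port
    openstack server add port tenant-vm1 vm1-storage-port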
C
We
have
documented
that
the
patches
in
review,
so,
hopefully
the
last
link
it
gets
merged.
There
are
other
links
here
that
you
can
check
later.
Moving
on
to
the
surface
NFS
driver.
This
is
a
step
to
words,
building
something
that
works
in
multi,
dent
workloads.
It
creates
shares
and
if
a
shares
back
by
self
FS,
it
allows
NFS
clients
in
the
OpenStack
VMs
to
talk
to
the
surface
back
in
in
a
more
secure
way.
It
does
not
allow
direct
access
to
the
storage
network.
The
axis
is
mediated
via
NFS
Ganesha
gateways.
So
that's
good.
C
The patches are still being reviewed upstream, but hopefully we can get it into the Pike release. It works with Ceph Kraken or later, and it needs the latest version of Ganesha.
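As a rough sketch of what such a backend might look like in manila.conf (the option names follow the upstream CephFS driver as of the Pike timeframe; the section name, address, and auth ID are made up):

    [cephfsnfs]
    share_backend_name = CEPHFSNFS
    share_driver = manila.share.drivers.cephfs.driver.CephFSDriver
    driver_handles_share_servers = False
    cephfs_protocol_helper_type = NFS       # export over NFS-Ganesha, not native CephFS
    cephfs_conf_path = /etc/ceph/ceph.conf
    cephfs_auth_id = manila                 # cephx user the driver acts as
    cephfs_ganesha_server_ip = 172.24.4.3   # where the Ganesha gateway runs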
Okay, in the control plane it's very similar to the diagram I showed before. The tenant wants to create shares, and the driver creates CephFS subdirectories, which map to the shares, and returns the export location back. And now, instead of authorizing cephx IDs, you want to authorize certain IPs.
C
So
what
that
does
is
once
you
send
that
request
that
native
driver
issues
caused
the
Ganesha
so
creates
export
in
Riis
on
disk
manipulates
them
as
per
whatever
the
user
requested,
and
it
sends
diba
signals
so
that
Ganesha
is
immediately
aware
of
the
new
access
list.
You
do
not
have
to
restart
Ganesha,
so
that
is
very
useful.
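To give a feel for that mechanism (the config path and export ID are hypothetical), adding an export to a running Ganesha over its DBus interface looks roughly like:

    # Tell a running Ganesha to load one EXPORT block from a config file,
    # without restarting the server
    dbus-send --system --print-reply \
        --dest=org.ganesha.nfsd /org/ganesha/nfsd/ExportMgr \
        org.ganesha.nfsd.exportmgr.AddExport \
        string:/etc/ganesha/export.d/share-uuid.conf \
        string:"EXPORT(Export_Id=100)"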
C
The Ceph server daemons do not introduce a single point of failure, but if you have a single Ganesha gateway, then, you know, that introduces a single point of failure, which is not good. So there is work being done by the NFS-Ganesha community to make it HA, in an active-passive mode first, and then slowly you'd want to do it active-active, but that's the first step. Okay, we haven't done this yet, but we kind of figured out how we would want to do this in TripleO.
C
Since Ganesha is in the data plane, it might be a bottleneck, so the deployer needs to make some, you know, compromises if you want it to run along with the mons and MDSes; that's up to the deployer. What we propose, at least for the first iteration, is to run the Ganesha service along with the share service.
C
That
way,
Ganesha
as
all
the
connection
has
already
is
already
connected
to
the
safe
public
network,
the
storage
network,
so
that's
taken
care
of
what
we
need
to
make
sure
is
Ganesha
is
connected
to
the
is
accessible
to
the
tenth
Williams.
The
way
we
do
that
is
have
Ganesha
know
connect
to
the
external
provider
network
yeah.
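From the tenant VM's side (the gateway address and path here are hypothetical), consuming the share is then a plain NFS mount:

    # The export location now points at the Ganesha gateway, not at Ceph itself
    sudo mount -t nfs -o vers=4.1 172.24.4.3:/volumes/_nogroup/share-uuid /mnt/share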
D
So yeah, I get to talk about where we're going, which is somewhat speculative. My perspective is that I'm responsible, to a great extent, for turning upstream OpenStack ideas into product, so I'm interested in making something that not only we can play with, but that we can stand behind with customers and support. Right now, from a Red Hat perspective, we call these tech preview features at the moment.
D
So
as
we're
thinking
about
how
to
productionize
things
fully
so
I
mean,
though
my
perspective
didn't
going
to
end
me,
isn't
the
only
valid
one
and
so
I'll
be
opinionated,
partly
in
the
idea
that
I'll
flush
out
other
opinions
and
that
get
valuable
feedback
when
I
talk
about
what
we're
doing
and
where
we're
going?
That's
our
thinking
at
the
moment
can
shift
right
and
also
this
is
open
source
project.
There's
room,
much
room
for
parallel
efforts,
a
lot
of
work
to
do
on
this
front.
Other
people
develop
something
cool,
we'll
be
glad
to
use
it.
D
The
work
that
he
and
his
team
have
done
on
self-esteem
and
that
we've
done
with
it
to
integrate
it
into
Manila
is,
is
solid
and
that's
working
out.
The
interesting
part,
that's
very
interesting
stuff,
maybe
more
interesting
stuff,
but
the
the
critical
stuff
is
to
figure
out
how
to
deploy
it
in
a
way
that
works
from
from
product
perspective.
D
So
when
we
think
about
that
picture
which
for
pike
where
we
place
the
NFS
Ganesha
gateway
on
the
controller
node,
which
we're
doing
for
various
reasons
to
have
more
to
do
with
triple
o
than
anything
else,
there's
some
things
to
like
can
some
things
not
to
like.
So
as
Ramana
said,
we've
separated
off
the
user
VMs
from
the
public
network,
which
is
critical
first
step,
and
we
have
mentioned.
We
have
pretty
good
separation
of
tenants
from
one
another
just
by
Neutron
these
days.
D
This
didn't
used
to
be
true,
but
Neutron
security
groups,
work
well
and
EB
tables
and
prevent
stuff
in
stuff
in
the
typical
OVS
and
so
on.
Will
will
stop
the
art,
poisoning,
attacks
and
stuff
to
people
worried
about
before,
and
when
we
talk
about
SEP
that
public
network,
this
isn't
out
in
the
world
or
anything?
This
is
the
you
know.
One
of
the
this
is
your
public
network
with
an
open
stack,
but
things
we
don't
like
here.
D
We
reload
the
exports
from
the
manila
database
rather
than
sharing
state
between
multiple
national
servers
and
so
on,
so
something
we
can
do
now
and
the
other
s
thing
I,
don't
really
like.
Is
we
really
mix,
control,
plane
and
data
plane
functions
that
curl
controller
notice
for
putting
control
plane
functions
and
we
put
a
Ganesha
service
which
is
in
the
data
plane
on
there?
We
want
to
be
able
to
place
and
scale
data
plane,
services
with
data,
plane,
resources
and
data
payload.
So
this
is
an
interim
step.
D
What
we
expect
to
be
able
to
do
for
pike,
as
Ramana
mentioned,
we
had
to
be
kind
of
careful,
Ganesha
can
be
resource
hungry.
We
may
have
some
isolation,
issues,
noisy
neighbor
type
issues,
and
so
on
that
we
need
to
work
with,
but
it's
a
step
and
it's
out
there
is
going
to
be
in
there
in
Pike,
and
people
can
use
it
and
play
with
it.
Give
us
feedback
fix
things
themselves
and
so
on
now,
where
do
we
want
to
go?
We
want
them.
D
There was this talk by John Spray from the Austin summit on Manila and CephFS, in which he gave this target: there's an address family, the AF_VSOCK address family (there's a paper from Stefan at the end; you can read all about it). Basically, instead of putting NFS over TCP, we put it over AF_VSOCK and deliver it through the Ganesha gateways. The way we're thinking now is to deliver shares into the QEMU hypervisor, and then from there to the tenant.
D
D
We
still
have
a
good
tenant
storage
path
separation.
Now
we
don't
even
have
Neutron
involved
in
that
it's
all
done
through
the
hypervisor
there's
no
shared
network
involved.
The
resource
demands
for
Ganesha
have
no
control
plane
impact
we're
over
on
the
compute
nodes.
They
scaled
per
port,
and
this
is
critical
thing
for
me.
They
scale
proportional
to
the
compute
demand.
Okay,
so
we're
putting
one
per
controller.
We
have
n
consumers
per
controller.
D
We
don't
need
that
pcs,
Kouros
Inc
machinery
from
the
drill
plane
that
we're
trying
to
move
off
of
in
OpenStack
in
general,
and
we
don't
have
dependencies
on
Neutron
or
l2.
Switching
now
critical
observation
for
me
is
that
the
consumers
of
amount
are,
in
the
same
h,
a
same
hardware.
Failure
domain
as
the
server
ok
in
this
little
picture,
I
want
to
keep
that
as
we
move
forward.
What's
that
mean
that
means
unplanned
outages,
at
least,
and
actually
in
this
case,
I,
think
even
migrations.
D
There's
a
bunch
of
dependencies
getting
from
here
to
there.
We
can
talk
about
it
more
I
know
it's
a
long
subject,
but
what
it
means
is
that
we're
not
in
accepting.
Perhaps
the
most
optimistic
of
scenario
is
going
to
have
this
ready
for
Queens.
So
what
do
we
want
to
do?
In
the
meantime?
One
of
the
possibilities
is
reconsidered
is
that
we
leverage
the
Manila
service
module,
which
is
a
server
instance,
module
which
is
used
by
the
windows
driver
and
used
by
the
generic
driver.
That
Victoria
alluded
to
to.
D
Basically, what we would do is put Ganesha in what they call share servers, which are administratively managed service virtual machines. They would be the gateway; we would still have them on compute nodes, but they're spun up dynamically by Manila, okay. This is a model that's well understood in the Manila community: if you see the buzzword "DHSS equals true", for "driver handles share servers", and so on, that's what we're talking about (see the sketch below).
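As a sketch of that DHSS mode (the flavor ID and image name here are hypothetical), the generic driver's backend section declares that it spawns share servers itself:

    [generic]
    share_driver = manila.share.drivers.generic.GenericShareDriver
    driver_handles_share_servers = True   # Manila spins up a service VM per share network
    service_instance_flavor_id = 100      # flavor used for the share-server VMs
    service_image_name = manila-service-image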
So people say, well, why don't you do that with CephFS?
D
Okay,
it
gives
you
good
isolation
of
beams
from
the
public
network
and
it
gives
you
a
good
isolation
of
tenants
from
one
another,
but
here's
what
I
don't
like
about
it.
At
least
it's
very
expensive
heavyweight
approach
for
tenant
isolation,
because
you
make
a
service
VM,
one
or
more.
If
you
were
ever
to
do,
H
a
it,
doesn't
supply
H
a
right
now,
so
we'd
have
to
build
that
per
tenant
I'm
in
the
RDO
cloud
the
other
day.
Guess
what
I
have
my
own
tenant
project
just
like
in
a
UNIX
machine?
D
You
might
be
user
Ricky
Baron
in
Group,
D
Baron,
those
project,
T
Baron,
so
they're,
at
least
as
many
projects
as
users
in
that
cloud,
which
was
me
that,
in
order,
if
they
were
all
consuming
in
a
path,
I
would
need
at
least
as
many
service
VMs
as
users
in
that
cloud.
It's
not
the
only
way
to
build
a
cloud.
A
lot
of
them
have
fewer
numbers
of
projects.
It's
one
extreme
Allah
of
the
spectrum,
but
it
doesn't.
D
This
does
not
scale
well
within
the
scaling
doesn't
fit
with
the
actual
demand
on
the
on
the
computer.
It
puts
a
single
point
of
failure
in
the
data
path
unless
we
go
and
do
the
work
to
build
H
a
for
it
because
that's
not
in
there.
The
solution
right
now
involves
playing
with
open,
V,
switch
or
Linux
bridge
in
order
to
stitch
things
together
to
make
that
the
lines
on
the
diagram
previous
diagram
work.
D
You
know
from
the
service
VMs
into
the
tenants,
guess
what
open,
V
switch
and
Linux
bridge
aren't
the
only
switching
technologies
in
town.
We
would
need
to
write
plug-ins
for
everything
that
could
we
could
possibly
support.
You
know
it
so
I'll
skip
some
of
the
other
stuff,
but
I
think
anybody
wants
to
talk
about
this.
The
reasons
I,
don't
like
the
solution
or
us
for
what
we're
doing
is
fine,
for
is
the
reference
driver.
We
can
go
into
it
more,
but
it's
ignorant
of
it
was
the
code
just
written
a
while
ago.
D
We'd run one per a few compute nodes and keep the failure domain stuff that I talked about before, but we would still rely on connecting through the Neutron network, the external provider network. So the bottom half of this picture is like what Ramana had earlier, and this will move us towards that direction while we work in parallel to get the VSOCK stuff ready, post-Queens most likely.
D
So that gives you a feel for the VSOCK solution. I want to thank all the teams. One of the really fun things about working on this project for me is that I work on the OpenStack team, as does Victoria, and we've gotten to know a fair number of the people on the CephFS team here, and now, starting out, on the NFS-Ganesha team. It really rocks to get to do this and work with people on this kind of thing.
E
I
like
to
talk
but
I,
have
a
question,
and
it
seems
like
there's
a
lot
of
extra
complexity
in
deploying
NFS
kinesha
here,
as
opposed
to
fixing
the
kind
of
support
for
multi-tenancy.
In
course,
F
of
s
are
there
thoughts
about
avoiding
this
extra
layer
and
fixing
SEPA
fest
to
support
multi-tenancy
and
security
in
a
better
way.
More
natively
might
be
a
sage
question
yeah.
D
I
would
blask
rick
wheeler
what
he
thinks
about
that
I'm.
Currently,
joking
I
am
joking.
One
thing,
though
it
seems
to
me
just
speaking
offhand.
My
thought
is
to
be
a
fine
thing
to
do
and
a
great
thing
to
do
and
when
it's
done
then,
perhaps
we
as
a
downstream
provider
of
a
distro
I'd,
be
more
inclined
to
say.
This
is
something
we
feel
like.
D
We
could
just
have
anybody
support
as
opposed
to
just
sophisticated
people
at
CERN,
or
something
like
that
that
we
could
support
anybody
on
that,
and
you
know
if
we
had
that
kind
of
protection.
Of
course
I'm
over
on
the
OpenStack
side,
so
I
would
look
to
to
the
setup
FFC
developers
to
build
that
now.
That
said,
another
thought,
though,
is
an
in
offense
is
a
pretty
ubiquitous
and
well
understood
protocol.
So
we
need
to
do
that
independently.
E
And
you
know,
I
do
agree
just
to
kind
of
answer.
My
own
question,
I
mean
NFS
has
an
interesting
role
for
people
who
don't
have
set
the
vest
natively
baked
into
their
image.
That
I
think
is
that
the
longer
term
thing
that
you're
kind
of
alluding
to
even
Windows
clients
have
it.
The
other
thing
I'd
suggest
thinking
about
more
is
peanut
baths.
You
know
when
you
have
peanut
butter
sort,
because
then
you
split
the
control,
plane
and
data
plane
from
you
know.
Metadata
updates
versus
data
flow,
so
that'll
be
maybe
more
interesting
as
well.
Yeah.
C
Well, yeah, that's one of the ideas we had. I mean, that's in the works: multiple CephFS file systems. It's being developed; I think it's still experimental, even upstream, right now, for Luminous. Sage will update you; it's more about getting active-active MDS working first, and then we'll think about multiple CephFS file systems. But we have thought about that, yeah.
D
But yes, it could be done, and in fact, if you look at some of the earlier slides from Sage and so on, it doesn't say definitively, but it suggests that that could be done. I think working with Ganesha right now is more agile: it's in user space, for one thing, and waiting on kernel things to get ready and be perfect is not the quickest way forward on our timeline. Could it happen eventually, and might there be advantages? Perhaps.
C
Yeah
yeah,
I,
think
I
think
say:
judo
you
know
can
give
you
a
better
picture
but
yeah
since
I
worked
in
the
self
esteem.
I
can
tell
you
that
there
was
at
wall
talk
there.
Was
this
talk
by
Patrick
Donnelly,
where
you're
characterizing?
How
house
active
active
MDS
works?
What
we
figure
that
the
dynamic
load
balancer,
which
balances
the
metadata?
H
...of testing. So it's actually pretty important to have NFS support that we can lay on top of CephFS, because many users are pretty conservative and they know NFS very well. So if you have CephFS as a backend and NFS as a frontend, that actually helps with the adoption. They also have products that require you to have an NFS file system for warranty or licensing issues.
H
So
that's
it's
very
important
one
thing
about
the
service
for
the
Ganesha
service
and
where
it
should
be
on
the
controller
or
on
the
hypervisor
or
on
it.
So
this
the
N.
You
said
that
you
don't
like
it
so
much
that
there's
an
additional
service,
the
end
up
will
be
spawned
up
to
do
this
actually
I,
don't
think.
That's
necessarily
a
bad
idea
for
a
couple
of
reasons.
H
So
one
is
that
if
you
put
it
on
the
hypervisor
itself,
the
hypervisor
may
be
designed
in
a
way
that,
if
you
put
the
VMs
there,
there's
very
little
resources
left
for
something
to
run
on
the
hypervisor
in
a
different
way.
And
if
you
have
something
where
you're
not
really
sure
how
much
CPU
would
users
or
how
much
memory
users,
it's
actually
pretty
risky
to
put
something
to
put
something
there
and
as
soon
as
you
have
one
user
there
that
actually
does
NFS.
H
...in a centrally managed cluster, they have to take this out of their own resources, similar to what you would have if you spawned off a service VM. So if you have tenants that are big enough, you don't need many Ganesha services in there, right? It's like one to N, and one tenant operator may have like hundreds of VMs. So I think, yeah, the idea is actually not too bad.
D
That's
a
great
clarification.
My
concern
is
less
with
it
being
a
service
VM
rather
than
a
native
process
than
with
doing
one
per
tenant.
Is
that
scaling
concerns
the
bigger
thing
and
in
fact,
I
did
kind
of
as
a
side
say,
and
we
may
want
to
contain
the
resources
by
a
cgroups
or
a
container
or
a
service
VM
to
do
that
so
I.
Thank
you.
D
From a product perspective, we observe more people asking for NFS with a CephFS backend right now, so if there are people who are very interested in native CephFS, that's valuable information. A lot of this, part of the reason we do this kind of thing, is to get feedback and hear from people. So, you know, talk to us. Okay.