From YouTube: CDS Infernalis (Day 1) -- CephFS && OpenStack
Description
Videos from Ceph Developer Summit: Infernalis (Day 1)
03 March 2015
https://wiki.ceph.com/Planning/CDS/Infernalis_(Mar_2015)
A: All right, on to the next session. This one is a three-fer: we'll talk about some CephFS multi-tenancy features and then some of the OpenStack stuff, specifically around Manila, I believe. This was three blueprints in one; Sage had two and Danny had one. There's a combined pad that we can take some notes on. Sage, do you want to kick off and give us a little background on what this is? We're running about 15 minutes behind, so maybe we can make up some time too.
C: One topic was third-party testing. We basically discussed it at the last Ceph Developer Summit already, so I want to come back to it because, from my feeling, we still need third-party testing support for the OpenStack stuff. As far as I know there is currently no testing against a real Ceph cluster, so yeah, I guess it's mandatory.
B: We don't have an OpenStack instance in the Sepia lab yet, although we probably will sometime soon, but we also don't have a huge amount of extra hardware. I think the ideal situation would be that some of the people who are doing more of the OpenStack work, maybe the Red Hat OpenStack group, or Mirantis, or Canonical, are the people who already have OpenStack testing environments that they integrate.
E: Actually, these days upstream OpenStack is testing against real Ceph clusters. Since the Ceph jobs finally merged in the last cycle, we got the tests stabilized, and there is now a non-voting Ceph job as part of the regular Jenkins runs. Right now I think that's just under Cinder and Nova; I can't remember whether Glance is as well or not, but at least the Cinder and Nova jobs are running using Ceph.
B: Okay, well, I think the general plan for the community lab infrastructure is that we're going to get an OpenStack instance at some point, and we're going to use that to do all the VM management for our tests. So we'll have at least an OpenStack that we can do some dogfooding with, the same way we're dogfooding CephFS right now. But even then, it won't be on master; it's going to be on the most recent stable release or release candidate, or something like that.
B: Yeah, okay. So the other two items you had, Danny, were CephFS for Manila, which I think we should do last because I have a whole thing on that, and how to keep compatible with Swift, that is, the REST API interface. I think there are a couple of things there. One is that the biggest sort of compatibility gap we have right now is with the way that we deal with namespaces and sort of tenant/container or whatever, with the S3 API.
B: There has been a recent flow of tickets from, and I apologize for forgetting the guy's name, someone at Mirantis I believe, fixing a bunch of compatibility stuff. So there are some interested people who are addressing these issues, but probably the biggest thing that's going to take a big bite out of that is getting the more recent version of the tests running.
H: Okay, it's probably worth saying that there are some other big things, or I don't know if they're big, they're probably about equivalent, but there are still some API calls missing, like object versioning and expiration. We've got object versioning, but it's S3 only, because the S3 and Swift versioning work differently. So versioning and expiration are probably the most noticeable gaps that we need to close, after the multi-tenancy work.
B: I think it's probably worth mentioning that there isn't currently a plan to implement Swift object versioning, because it's kind of lame in comparison to what the S3 versioning does; it's not the same, and it's not super sophisticated, and I think the expiration is more interesting. But if there are people who are interested in that specifically, then obviously we should discuss and revisit that.
B: So it's always going to be a little bit different, but I think those are sort of the corner cases that the vast majority of people don't really notice.
B: Yeah, I think that's only used for the benchmark. It might be easier to drop it if we're concerned about it or anything else, especially when there are other tools like COSBench and whatever else that are probably better maintained.
B: Assuming we don't discover issues with it anyway, yeah, it could end up being a pretty nice solution. Okay, which brings us to Manila, which has another blueprint. So I started a thread on this a couple of days ago, and there are sort of several ways to approach this. It looks like people are playing with a bunch of them, and I'm not a Manila expert, so I'm probably going to get some of this wrong, but as I understand it there are sort of four different ways you can approach it.
B: The high-level goal, obviously, is to just make file-as-a-service in OpenStack work, whatever happens on the back end, and to have it be a usable, stable, viable service that's backed by CephFS in some way, with Ceph as a single storage back end.
B: So the simplest way to do this is using the default driver which, as I understand it, basically takes a Cinder volume and maps it into the Manila VM, which then exports NFS. That obviously works, because Cinder works and you can use RBD behind Cinder, but it's kind of a lame solution by most measures; then again, if it works, whatever. The Ganesha driver went in recently, I think for this last release, and Red Hat's storage group has worked a lot on this, mostly in the context of Gluster, since that was their motivation, but it actually doesn't matter: you just configure Ganesha either to re-export Gluster or to re-export CephFS, and in the Ceph case it's using the Ganesha FSAL that essentially just links to the client library.
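To make the re-export setup concrete, here is a minimal sketch (not from the session itself) of an NFS-Ganesha export block for CephFS, rendered by a small Python helper; the export ID, paths, and exact option set are assumptions and vary between Ganesha versions.

```python
# Hypothetical sketch: render an NFS-Ganesha export block that re-exports a
# CephFS path via the Ceph FSAL (libcephfs). Field names follow common
# Ganesha conventions; check them against your Ganesha version's docs.
GANESHA_EXPORT_TEMPLATE = """
EXPORT {{
    Export_Id = {export_id};
    Path = "{cephfs_path}";      # path inside CephFS to re-export (assumed)
    Pseudo = "{pseudo_path}";    # NFSv4 pseudo-fs location (assumed)
    Access_Type = RW;
    Squash = No_Root_Squash;
    FSAL {{
        Name = CEPH;             # use libcephfs instead of a local filesystem
    }}
}}
"""

def render_export(export_id: int, cephfs_path: str, pseudo_path: str) -> str:
    """Return one Ganesha EXPORT block for a Manila share (illustrative only)."""
    return GANESHA_EXPORT_TEMPLATE.format(
        export_id=export_id, cephfs_path=cephfs_path, pseudo_path=pseudo_path)

if __name__ == "__main__":
    # e.g. append to /etc/ganesha/ganesha.conf and reload the Ganesha service
    print(render_export(101, "/shares/share-0001", "/share-0001"))
```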
B: That seems to be sort of the preferred approach that most people are looking at, because it gives you all the network isolation and multi-tenancy stuff that you mostly want. The only thing that's on the client network is the Ganesha VM, so that's sort of the security cut point, and then that one VM can talk to the back-end storage network to re-export whatever it is. That's basically there, although I don't know who's actually been playing with it or testing it with Ceph.
B: It would be great to hear when people try it out. As I understand it, it's already in the OpenStack tree. I think the performance in general is going to be okay; you are still going through this extra gateway, so it's not going to be as fast as a native file mount, but for a cloudy environment that maybe doesn't matter so much.
B: It turns out that the driver doesn't actually do anything within the guest, because there's no agent in Manila; all it's doing is setting up the security keys and the exports. So in the Ceph case it actually doesn't do very much. It would just create a new authentication key, and create a directory that that authentication key would ostensibly be able to export, and then it's the guest's responsibility to go actually mount CephFS based on that. That's my understanding, at least.
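As a rough illustration of that create-share flow (an assumption-laden sketch, not the actual Manila driver code), the sequence could look something like this, with the mount point, client name, pool, and capability strings all hypothetical:

```python
# Hypothetical sketch of a native CephFS "create share" step: make a directory
# for the share and mint a cephx key the tenant can use. Paths, client names,
# and capability strings are illustrative assumptions.
import os
import subprocess

CEPHFS_MOUNT = "/mnt/cephfs"            # admin mount of the file system (assumed)

def create_share(share_id: str, tenant: str) -> str:
    share_path = f"/shares/{share_id}"  # path as seen from the CephFS root
    os.makedirs(CEPHFS_MOUNT + share_path, exist_ok=True)

    # Create (or fetch) a cephx key for the tenant. Restricting it to just
    # this share's path and pool is exactly the capability work discussed
    # later in this session; the caps below are broad placeholders.
    key = subprocess.check_output([
        "ceph", "auth", "get-or-create-key", f"client.{tenant}",
        "mon", "allow r",
        "mds", "allow rw",
        "osd", "allow rw pool=cephfs_data",
    ], text=True).strip()

    # The guest is then responsible for mounting CephFS itself, e.g.:
    #   mount -t ceph mon-host:/shares/<share_id> /mnt -o name=<tenant>,secret=<key>
    return key

if __name__ == "__main__":
    print(create_share("share-0001", "tenant-a"))
```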
C: That's not completely correct. We had a discussion at the Manila mid-cycle meetup, and the one problem is that Manila requires some features from the shared file system, stuff like snapshotting and so on. From my point of view those are optional features, but the project lead doesn't think so at the moment. So what we are working on is splitting up Manila to make these features optional, but whether that's going in, and which way, is open at the moment.
F: I mean, when it goes to the actual share create, it's creating the storage behind it; that's the thing. I'm quite sure that every driver has its own interpretation of what that means, but yeah, in general they don't want to allow a driver to expose, let's say, only a subset of the features out of the box. They want to have all the features in place for all the drivers.
D: The other difficulty with this one, though, is the networking, and I don't remember, I'm not sure I ever understood, where they came down on that. Manila does very little networking on its own, right? It sort of has a few hooks into Neutron, but for a native driver, where you have to talk to an entire storage infrastructure, you need to do a lot of hacking with the routing tables or whatever you're talking to.
B: I asked the person who did the Gluster native driver when I was in Brno a couple of weeks ago, and they basically said that, yes, you just have to put the guest on the same network as the storage. As I recall, their driver doesn't do anything about it; it sort of assumes that this is the topology. So the assumption is that you would only use this in an environment where it's single tenant and you don't have these separate networks.
B: This one is a tricky one. I think the model makes a lot of sense, and it's sort of the analog to what we do with block currently, because it sort of isolates the tenants. The problem there is the maintainership of VirtFS and 9p; somebody told me that IBM is no longer maintaining it. I'm not sure if that's true or not, and we should confirm that before spreading it around, but that's one question. I'm also not sure that 9p is actually a great protocol for this.
B: It was sort of a convenient hack for them; it was the simplest existing network file protocol that they saw and thought would be the thinnest, but it's not necessarily ideal. Actually, when I talked to somebody at FAST about this, I was talking to Christoph about it, he basically said that it seems like there should just be a first-class virtualization thing, like a virtio client, call it virtfs or something, but not using 9p.
B: Like, just invent your own thing that maps the guest's file system as cleanly as possible onto a host file system. And he shook his head, like, that's stupid, that's a waste of time; you should just use NFS, and make sure that the data path for NFS then uses virtio, sort of, to optimize the data path essentially, but reuse the existing protocol.
B: So that's another option, but I don't know how much actual work is involved there. I'm not entirely sure, assuming we want to take this overall architecture where you're using something like virtfs or virtio, what the right technical solution is to accomplish that.
B: The very first thing that I was talking about, yes, but that means that you have to write a kernel file system, and then get it into the mainline kernel, that is sort of the equivalent of the virtio block driver: one that understands that it's on a hypervisor and is just going to put the stuff in a shared memory buffer, and it's going to magically do the right thing.
B: Yeah, but I mean that's basically what they were doing. Just to clarify: QEMU had a 9p server sort of embedded in the hypervisor, and then you would use the existing kernel 9p client inside the VM, and QEMU would just sort of capture that connection. I don't know exactly what the guest end thought it was mounting, but that was the idea; the idea was to reuse the existing client code and not make a whole new one.
B: So I think the same thing could be done with NFS, and today you could do it just by mounting CephFS on the host and then exporting it to your guests; the guests would mount it over NFSv3 or v4, using some local network or whatever, to the host, and then you could probably add some I/O path optimizations.
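A sketch of that host-side arrangement, assuming a kernel CephFS mount and the standard Linux NFS server; the monitor address, mount path, and guest subnet below are placeholders rather than a tested configuration:

```python
# Hypothetical sketch: mount CephFS on the hypervisor and re-export it over
# NFS to the local guests. Addresses, paths, and the guest subnet are
# illustrative assumptions.
import subprocess

MON_HOST = "mon1.example.com:6789"      # a Ceph monitor (placeholder)
HOST_MOUNT = "/srv/cephfs"              # where the host mounts CephFS (assumed)
GUEST_SUBNET = "192.168.122.0/24"       # local guest network (assumed)

def export_cephfs_over_nfs() -> None:
    subprocess.check_call(["mkdir", "-p", HOST_MOUNT])
    # Kernel CephFS mount on the host (a cephx name/secret is needed in practice).
    subprocess.check_call(
        ["mount", "-t", "ceph", f"{MON_HOST}:/", HOST_MOUNT, "-o", "name=admin"])
    # Add an export for the guest subnet and tell the NFS server to pick it up.
    with open("/etc/exports", "a") as exports:
        exports.write(f"{HOST_MOUNT} {GUEST_SUBNET}(rw,no_root_squash)\n")
    subprocess.check_call(["exportfs", "-ra"])
    # Guests would then mount it over the local network, e.g.:
    #   mount -t nfs <host-ip>:/srv/cephfs /mnt

if __name__ == "__main__":
    export_cephfs_over_nfs()
```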
D: In the Gluster case, right, yeah. Because right now they're running it inside their own special VMs, and I'm not sure whether they just don't have orchestration access to run a process on the host, because that's what you'd want to do if you were running it: rather than one per VM, have one running on the host and then have it share out to whoever it should be sharing out to.
B: The goal is just to keep it simple. Well, maybe a useful experiment to do would be to just actually set up Ganesha running in a VM on a different host, doing the re-export thing, and then also mount CephFS on the host itself, and then compare the performance and behavior in the guest. Is it effectively the same?
B: Is it faster? I don't think it would be slower. It probably means that this Manila driver is more complicated, because it's orchestrating mounts on all the hosts, but if the performance is better and you don't have this extra VM running somewhere that's sort of a single point of failure, then it might be a good thing. Effectively the host becomes the single point of failure for the guest VMs, which it already is anyway, so that might actually simplify things.
B: Yeah, I guess so, probably. But then you still have to orchestrate these VMs, to make sure they're running and that if they stop they get restarted and so on. I think there's some generic tooling in OpenStack to kind of do that, but I forget; I'd have to look into this more. I seem to remember hearing something about that, yeah.
D: We recently started to look at it, and we started out doing it with the 9p protocol. We found it has some performance problems with the 9p protocol, and we wanted to optimize it on the host side as well. We think 9p, much like Swift or SMB, with a lot of small file accesses, has some more performance problems.
B: Okay, yeah. It would be great to hear what you find, on the list. If you're experimenting, I would be really interested if you tried NFS too, just to compare; so instead of doing 9p to the host, use NFS.
B: So this would be one reason why it might be worth checking out the NFS option, because you wouldn't need any of that, right? You'd just configure the NFS server that happens to be on the host. Is there already, like, a local subnet or something that you'd be able to use, that's set up by default or something, so that the guests can talk to the host?
F: Not usually. So far for Manila, usually you have your NFS endpoint in a different network and you just set up a router in between; you're not usually going to have them in the same network. I think it's an option for Manila that you can have that, but I think in the default case you have a router in between the two networks.
B: Okay. If somebody is interested and wants to try this out, it would be good to have some sense of whether this is something that is interesting or not, going into the next OpenStack summit.
B: But in the case of the native CephFS driver, where the guests actually mount CephFS directly, suddenly Ceph is the one who's on the hook for security. So I opened another blueprint that just calls out what the current gaps are with multi-tenancy, the ones that are important in the Manila case, and also in any other CephFS case where you want to have a multi-tenant environment. I can just summarize them really quickly.
B: I won't go into a whole lot of detail, but if people are interested in this, this would be a great place to get involved. So the first, simplest thing is just to allow a read-only mount. Currently the capability, what you can do with a file system, is either you can mount it and do anything you want, or you can't mount it at all, so having read-only is sort of the easiest iteration on that.
B: I don't think that would be too difficult to do. The semantics are a little bit difficult to describe and they're not really written down anywhere, but I don't think it would be too challenging to puzzle out. The more complicated one, and probably the one that's most important, is to be able to make a path-based mount restriction, where you say, you know, this capability only lets you mount this subdirectory.
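To make those two capability ideas concrete, here is a hypothetical sketch of the corresponding cephx clients; the read-only variant sticks to existing cap syntax, the path-restricted form only illustrates the kind of restriction being proposed here, and the client names, path, and pool are assumptions:

```python
# Hypothetical sketches of the capability variants discussed above, expressed
# as `ceph auth` commands driven from Python. Names, paths, and pools are
# illustrative assumptions.
import subprocess

def create_readonly_client(name: str) -> None:
    # Read-only mount: the client may read data and metadata but not write.
    subprocess.check_call([
        "ceph", "auth", "get-or-create", f"client.{name}",
        "mon", "allow r",
        "mds", "allow r",
        "osd", "allow r pool=cephfs_data",
    ])

def create_path_restricted_client(name: str, path: str) -> None:
    # Path-based mount restriction (the proposal): only this subtree is usable.
    subprocess.check_call([
        "ceph", "auth", "get-or-create", f"client.{name}",
        "mon", "allow r",
        "mds", f"allow rw path={path}",
        "osd", "allow rw pool=cephfs_data",
    ])

if __name__ == "__main__":
    create_readonly_client("tenant-ro")
    create_path_restricted_client("tenant-a", "/shares/tenant-a")
```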
B: You can specify layouts in CephFS that say that all new files are created in a particular RADOS pool, so the data goes somewhere else, and you can make capabilities for the clients that only let them read and write to those pools. That is enough that you can lock different tenants into different RADOS pools, assuming you also have the path-based restriction. But the problem is that you don't generally want to have tons of RADOS pools; they're sort of meant to map to a number of different placement policies.
B: The idea is to use an additional field in the layout for the namespace, so you could specify: within this pool, and also within this namespace in that pool. I think that would actually be pretty simple; there are just a bunch of different places that you have to touch, where the MDS and the client are doing I/O, to make sure you fill in that namespace in the RADOS context when you're doing the actual I/O.
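A sketch of how a tenant subtree could be steered with the layout xattrs, plus the proposed namespace field; the directory, pool, and namespace names are assumptions, and the pool_namespace attribute is shown only to illustrate the proposal being discussed, not an existing interface at the time of this session:

```python
# Hypothetical sketch: pin a tenant directory's new files to a dedicated RADOS
# pool via the CephFS layout xattrs, and (as proposed above) additionally tag
# them with a RADOS namespace so tenants do not each need their own pool.
# Paths, pool, and namespace names are illustrative assumptions.
import os

TENANT_DIR = "/mnt/cephfs/shares/tenant-a"   # directory on a mounted CephFS

# Existing mechanism: new files under this directory go to the given pool,
# and a client capability can then be limited to that pool.
os.setxattr(TENANT_DIR, "ceph.dir.layout.pool", b"tenant_a_data")

# Proposed refinement: also record a per-tenant namespace within the pool, so
# isolation does not require one RADOS pool per tenant.
os.setxattr(TENANT_DIR, "ceph.dir.layout.pool_namespace", b"tenant-a")
```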
B: But I mean it would be identical to the fact that they're allowed to mount that path. So it's your responsibility to make sure that the permission to access, say, /home/user matches; it's sort of similar to how you create a capability to access that directory. It's a little bit annoying to orchestrate that from an administrator's perspective, but yeah.
B: The alternative path, which is more complicated but doesn't have this restriction, is to use the MDS as the sort of intermediate capability grantor. So the MDS says you're allowed to read and write that directory, and then it gives you a special token that lets you do that. The problem is that it's much more complicated, and we have to figure out how to generate those capabilities efficiently and manage them properly.
B: Traditionally, in the past, we've sort of shied away from this, because all indications seemed to be that it was infeasible, as far as complexity and also just computationally, generating all these keys. But there was a paper we read recently that indicated it might not be so bad; they did this, they used, I think, a symmetric key. But yeah, it's very complicated.
G: It's kind of tied to the previous things, but there is a fourth additional thing that we've kind of gone hot and cold on, which is having multiple file systems that you would actually be able to separate onto different MDSes. I mean, I think it's true that the path-based mount restriction is almost certainly the most useful thing immediately for people, but at the point where people actually start putting a cloud on top of this, I can easily see them wanting to have some separation for different types of users between different MDSes and stuff.
D: That's actually possible even without multiple file systems, because you can pin subtrees to certain servers.
B: All right. Sorry, actually one last thing. I guess the thing that I'm lacking, just thinking about these options, is what proportion of users are interested in stuff that only works in single-tenant or trusted-tenant environments, versus those for whom that's out of the question because it's all multi-tenant, public, cloudy stuff and you need the strong security. I don't have a sense of how many people are deploying in each of those types. Does anybody have any perspective or thoughts there?
C: And we are not sure; I mean, we can't use it before it's enterprise-ready at all. But multi-tenancy is one feature we would need in order to run it. What we would like to avoid is another level of indirection; I mean, NFS in service VMs sounds somehow strange, and from the performance perspective it sounds even stranger.
C: So if I make the decision to use only one storage technology and kick out, for example, a NetApp or something like that, then I want to use this storage technology directly and not again over NFS. So I would prefer if we would be able to use CephFS directly, and then yeah, multi-tenancy is really needed; there's no way to use it in our environments without multi-tenancy.
D
So
I
was
just
going
to
say:
I
mean
the
thing
about
Native
toughest
drivers.
That
means
that
anyone
who's
using
windows
is
going
to
need
something
else
anyway,
which
isn't
a
big
deal
for
most
people,
but
for
any
cloud.
That's
providing
that
and
wants
to
have
shared
file.
Data
access
denied
need
an
anniversary
or
something.
C: Right. I mean, nobody knows what is running in all these appliances that come from the various vendors, but I hope there will be no Windows in this case. And even then, within those VMs we can share it over NFS, so for Windows it doesn't make any difference whether you share it over NFS in this case.