Description
https://wiki.ceph.com/Planning/CDS/CDS_Giant_and_Hammer_(Jun_2014)
25 June 2014
Ceph Developer Summit G/H
Day 2
CephFS: security & multiple instances in a single RADOS cluster
A: Hey, we're at home plate.
B: All right: multiple instances in a single RADOS cluster. So this blueprint is basically a plea for people to come and give us information about what they want to do. The reason for that is that one of the things that has been popular to talk about with some of our developers lately has been adding support for multiple CephFS file systems within a single RADOS cluster.
B: I don't know that we're likely to implement it, but there are a couple of different approaches to how we could do it, and they sort of have different security-versus-convenience guarantees. Security is also a thing which we know is a big deal for many people, so I think that will impact how we go about doing multiple file systems, if we ever do it. I want to talk about both of those.
B: The first one is the one that Sage has championed, which is where we say: okay, we're going to allow any number of file systems within the cluster, and we're going to do that by having multiple MDS maps that specify different pools for the data and metadata of each one. That means we have a different set of MDSes for each file system, and clients, when they connect, have to tell you which file system they're connecting to. That has some advantages in terms of the file system code.
B: It's a lot simpler. It's more work in the monitors, but it's very simple for the file system, because basically nothing changes, except that when you're turning an MDS on, you have to tell it which file system it belongs to, so it can tell the monitor. It gets us some fairly effective multi-tenancy features very, very cheaply.
B: We would just need to specify a much stronger MDS cephx authentication capabilities system, so you can specify which file systems a client is allowed to talk to. But once you do that, then basically you can have physically segregated file systems for every tenant within a shared RADOS cluster.
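To make the first approach concrete, here is a hedged sketch using the multiple-file-system CLI syntax that later shipped in Ceph; the pool, file system, and client names are invented for illustration, and flags may differ between releases:

```shell
# Each file system gets its own metadata and data pools.
ceph osd pool create tenant_a_metadata 64
ceph osd pool create tenant_a_data 64

# Allow more than one file system in the cluster.
ceph fs flag set enable_multiple true --yes-i-really-mean-it

# Create the file system; it gets its own MDS map and its own MDSes.
ceph fs new tenant_a_fs tenant_a_metadata tenant_a_data

# Clients must name the file system they are connecting to.
mount -t ceph mon1:6789:/ /mnt/tenant_a \
  -o name=tenant_a,mds_namespace=tenant_a_fs
```

These are cluster-administration commands, so they only make sense against a running Ceph cluster with monitor quorum and spare MDS daemons for the new file system.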
B: [Audience question about standby MDSes.] No, just a different set of MDSes in general, right? Each file system has its own, and clients connecting to it are getting the map for that file system. Standbys are another thing the monitor has to cope with, but in terms of the generic stuff, that's not a real big deal. Good, yeah, okay. So the alternative approach is one that I'm not sure I'm a real big fan of either.
B: I want this client to have access to file system A, which is just a directory under the root, but we don't care about that. Then you get something much better: you don't need to make the changes to the monitor that would allow multiple file systems within the cluster. You just have the syntactic sugar, and you also get all the benefits of load balancing across your clients.
B: So that would be good to talk about: which of those is more important to people who think they might deploy something that looks like multiple file systems within a cluster. But also, and not less importantly, I want to talk about the security expectations that people have, because...
B: If multi-tenancy by way of multiple MDSes isn't strong enough, then it's less interesting as a thing to implement. If we would need to implement a serious and capable security system within our normal setup anyway, then maybe we just want to do that, and not make all these changes to the monitor, and keep the sort of administrative simplicity of just having a big pool of metadata servers. I don't have any answers.
A: Yeah, so I think honestly both of these are things that we want to do, and there are different reasons for doing each of them. Option two is really just about being able to have an MDS capability that locks you into a subdirectory, because you're right: option two is totally the model that I think we always envisioned in the past, where the whole point is that you don't want to have subvolumes.
A: They all get sort of jumbled up into the same set of MDSes, and maybe you have admin commands that sort of lock subtrees onto different classes, so you sort of have a service overlay structure to your MDS cluster, where it's not just a flat array of MDSes, but you say: I want this subtree to stay on this one, and this subtree to stay on that one.
B: Yeah, so it's not just about locking people into a certain location; it would really be like having a proper security model that's enforced on the server side. And the advantage to going through all that trouble is that then you can do things like have a tenant share its data with other tenants, whereas...
B: Right, yeah.
A: That's an important use case that we want to capture, and the way that I've always sort of imagined that would work would be that the MDS capability for that cephx user, or whatever, would basically just say: allow mds path /home/foo, and then the MDS would just verify that the inodes you're caching and operating on and doing all that stuff with are within that subtree. That was sort of how I assumed that might work, and that captures sort of the subvolume use case.
B: To your point, there's more to it than that, though. Like, for instance, generally what you would expect is that they can only do stuff in their own home directory, but then maybe there's someone who's got globally shared stuff that they want everyone to be able to just access from wherever. So you'd want to build that in as well, and...
A: And the capability would do that, right? It would say: allow read/write /home/foo; allow read /home/shared; or allow star and get the whole thing. It's an access list, or a set of grants, whatever you call that, yeah.
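The grant syntax being sketched here is close to the path-restricted cephx capabilities that CephFS later shipped. A hedged example, with invented client, path, and pool names:

```shell
# Create a client keyring whose MDS caps grant read/write on its own
# home directory and read-only access to a shared tree.
ceph auth get-or-create client.foo \
  mon 'allow r' \
  osd 'allow rw pool=cephfs_data' \
  mds 'allow rw path=/home/foo, allow r path=/home/shared'

# Inspect the resulting capability set.
ceph auth get client.foo
```

A blanket `mds 'allow *'` would be the "allow star, get the whole thing" case mentioned above; again, this requires a running cluster and the exact cap grammar varies by release.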
B: Think of a multi-tenancy system where it's not just different users in the file system, but actually tenants within a cloud data center. In general they're all doing their own private files, but then one of them says: I want to share this location of my stuff with everyone who comes along looking for it. And you can't just give everyone a grant to that.
A: I guess... so I think option two, in my mind, is largely about security multi-tenancy, and I think the problem I have is that it would be difficult to convince a paranoid user that we've covered all our bases, because our client-MDS protocol is inherently very trusting of the client.
A: The clients are allocating inodes and doing all kinds of stuff, and so there are a lot of holes to close, and a lot of work to convince them that they can't exploit a bug in the MDS. Whereas I think option one, for me, is a little bit less about security, and more about the type of hardware you're deploying on, and about having things that are completely independent.
A: So in the same way that you create RADOS pools that can be mapped to different hardware, you could create file systems where you say: I just want a completely different set of MDSes. I want, like, one MDS that I'm going to pound to death, because this is a use case where it doesn't really matter, and I'm going to map it onto a SATA pool in RADOS; whereas this other one, I'm going to back with something else.
A: That's my home directory, and I want it to be totally independent from, you know, my databases or something. It's the same way that you can create multiple RADOS Gateway zones within the cluster, and you have a different set of RADOS Gateways that are sort of sitting in front of them: you can architect where the daemons go and how they're mapped onto the RADOS layer that way.
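The hardware-mapping idea described above can be sketched with CRUSH rules. This is a hedged illustration using the device-class syntax from later Ceph releases; the rule and pool names are invented:

```shell
# One CRUSH rule per hardware class.
ceph osd crush rule create-replicated on-ssd default host ssd
ceph osd crush rule create-replicated on-hdd default host hdd

# Back a pound-it-to-death scratch file system with HDDs, and the
# home-directory file system with SSDs.
ceph osd pool set scratch_data crush_rule on-hdd
ceph osd pool set homedirs_data crush_rule on-ssd
```

Each file system's pools then land on physically separate devices, which is the same kind of independence the RADOS Gateway zone analogy is pointing at.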
B: For both of them, it's mostly just that I want to, yeah, hear from users about what the overlaps are, and sort of where their relative priorities are, because they're two big projects that we are unlikely to work on at the same time, and not necessarily soon either. But if it turns out that, like, multiple MDS maps are a key thing for a whole lot of people, and it would satisfy their security requirements for eighty-five percent of use cases, then I'd become a lot more interested in working on it soon.
A: I think part of it comes down to, in the cases where people want security isolation for that multi-tenancy: is it because they want it for, like, hundreds of users? Or because they have, like, three use cases that are logically different, like three tenants that could be satisfied by three independent clusters? Or is it really that they want...
A: ...each to have their own little... yeah. I think that that's sort of, for me, what determines whether it pushes things towards one or the other, or not.
A: I think the other thing with option two is that the security enforcement we need to address there is also the same thing we need to do in order to eliminate some of the trust in the client that you currently have when you're using, for example, ceph-fuse or libcephfs, where you have a potentially untrusted user-level process.