From YouTube: 2018-08-01 Ceph Developer Monthly
Description
Monthly developer meeting for the coordination of Ceph project development.
http://tracker.ceph.com/projects/ceph/wiki/Planning
A
Go ahead — okay, so the first one I had on the list: I'm trying to delineate out some RBAC-style cap controls for RBD. Starting, I think, with the Luminous release, we added 'profile rbd' and 'profile rbd-read-only', which basically give you read/write access or read-only access to a pool.
A
To the RBD-like things in a pool, anyway. But there has been a request to say: can we do something more granular, and break up roles — where you have, say, a storage admin who has the rights to create, delete, resize, or whatever, images, and you just have a standard user that only has the ability to read and write images. Not to restrict them down to "you have access to image ABC", but just to say you only have access to read and write images.
A
We have the namespace support that's being added for Nautilus, which will kind of provide tenant-level isolation between images within a pool, to try to solve that one use case — making sure that people can only access the images they should have access to, without having to step back and say, well, I've locked them to this specific image, and things like that.
A
This
wouldn't
depend
on
if
this
would
just
be
an
FAQ.
This
happen
right
now.
The
issues
or
the
portions
that
are
not
in
with
namespaces
is
the
ability
to
clone
across
namespaces,
so
you
can
copy
an
import
across
new
sound.
So
things
like
you
know
trying
to
push
cloning
as
the
better
option.
The
confidence
is
like
you
should
have
to
be
able
to
clone
the
cross.
That's
I've
got
that
code
about
halfway
done
and
then
there's
the
the
other
big
section
is
the
our
beading
year
and
support
for
these
cases.
A
...that you can use anywhere, to say: we can create some new profiles — 'profile whatever' — and we can restrict, you know, what class methods you're allowed to touch, or say you're not allowed to execute any class methods at all, for just reading and writing images. I mean, there are a couple of class methods — you know, the getter methods — so you can do, like, class-read operations, but you're not allowed to do any class-write operations to manipulate things. So it should be pretty straightforward.
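(For reference, the kind of split being described could be sketched with today's raw OSD cap grammar — the client names here are made up and this is only an illustration of the idea, not a settled design:)

    # Storage admin: full access, including execution of class methods.
    ceph auth caps client.storage-admin mon 'profile rbd' osd 'allow rwx pool=xyz'
    # Standard user: data read/write plus read-only class methods (the getters), no class-write.
    ceph auth caps client.standard-user mon 'profile rbd' \
        osd 'allow rw pool=xyz, allow class-read pool=xyz'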
A
You should be able to tack right on top of it, because you can already say, like, 'profile rbd pool=XYZ', and it's gonna be also 'namespace=ABC' — so there's really no reason not to allow that. The broader question to it is: do we want to keep adding all these things to, you know, these magical profiles in the monitor — the OSD cap code, you know, that can decipher all of these?
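(The existing profiles, and the namespace-scoped variant being discussed on top of them, would look roughly like this; the namespace= form follows the Nautilus namespace work and is shown only as a sketch:)

    # Luminous-era profiles:
    ceph auth get-or-create client.rw mon 'profile rbd' osd 'profile rbd pool=xyz'
    ceph auth get-or-create client.ro mon 'profile rbd' osd 'profile rbd-read-only pool=xyz'
    # Proposed namespace-scoped form ("tack right on top of it"):
    ceph auth get-or-create client.tenant mon 'profile rbd' osd 'profile rbd pool=xyz namespace=abc'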
A
Yeah, good question. I just feel like RBD is gonna be the one that starts taking advantage of it, right — really the only ones that exist, you know: there's the bootstrap stuff for profiles, and then in Luminous we added 'profile rbd' and 'profile rbd-read-only'. I don't see us adding, like, 20 new RBAC — you know, RBAC-style — profiles. But if you had something where you could just generically say, you know, "allow that profile", or the other set — there's...
B
Well, that's TBD. There'll be some magic that happens in the monitor when you authenticate that says: oh, there's something about this user authenticating that makes me want to check which roles would apply to them, and there'll be some rules around that. And then, assuming that matches — in the Kerberos case it would be based on, like, what group they're in in the Kerberos domain — they're gonna map to a role.
A
Okay, that's really all I had on that one — I know, all these were supposed to be really short topics. Next was RBD provisioned-size quotas. So yeah, RBD provisioned quotas: again, this is a request to say, as a storage admin, I want to basically set, say, "this namespace is only allowed to provision out X amount of space for all images within that namespace", or within that pool, or whatever. And these are simply provisioned-size quotas, so we wouldn't necessarily track the actual usage of the images.
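(A purely hypothetical CLI sketch of what such a provisioned-size quota might look like — no such commands existed at the time of this discussion; the names and flags are illustrative only:)

    rbd quota set --pool mypool --namespace tenant-a --max-provisioned-size 10T
    rbd quota get --pool mypool --namespace tenant-a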
A
So
really
all
we
have
to
do
is
crack
in
the
RVD
directory
or
whatever.
When
you
recreate
an
image,
we
can
track
the
size
of
the
image
we
need
to
put.
We
need
to
put
it
in
one
spot,
because
we
don't
have
to
open
up
thirty-year.
You
know
10,000
images
to
figure
out
the
current
revision
space.
The
abbeys
directory
seems
like
natural
place.
To
put
it,
the
one
thing
we
are
gonna
have
to
do.
A
...is we're going to have to restrict those images, or that pool, to say: you need to have a client version greater than or equal to Nautilus if you use this feature, otherwise the client wouldn't know to actually update the metadata correctly. We can do that using the existing, you know, min client version mechanism, but that's kind of heavy-handed.
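(The heavy-handed option referred to here is presumably the existing cluster-wide compat setting, something like:)

    ceph osd set-require-min-compat-client nautilus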
A
With Mimic, within RBD, we added this feature called op feature codes, where basically you can read and write to the images all you want, but you can't do any manipulation of an image if your client doesn't understand one of these op feature codes. So we can add one: if you're in one of these pools or namespaces where a storage admin has defined a provisioned-size quota, at least the newer clients, when they create the image, would...
A
...you know, properly tag it, saying older clients can't open it and manipulate it — resize it or whatever, or delete it. It does not stop the case of older clients creating images and not actually setting, you know, the proper field; but if you're using namespaces, you have to have a newer client anyway, so especially there — yeah, yeah.
A
Yeah — if you have a quota on namespaces, you by definition have to have a Nautilus client, which would, by definition, understand these provisioned quotas, so there's no worries. And then we can use the op feature code to restrict older clients from doing things like removing or resizing or whatever. Okay.
A
Cool, okay. Then the next one is RBD mirroring remote cluster key management. This came out of talks with John and the whole orchestration discussion about how to provision the rbd-mirror daemon. Right now, when we provision the rbd-mirror daemon, we kind of expect and require that you have the remote cluster's conf — you know, a ceph.conf file — wherever the daemon is located; that's how it works: when you set up that remote cluster, the name of that remote cluster maps to a configuration file.
A
It
goes
and
tries
to
open
on
the
client
front,
which
is
fine
and
dandy
until
we
start
getting
our
containers
involved
in
automation
and
things
like
that,
like
injecting
have
to
inject
the
SATCOM
file
or
a
remote
cluster,
this
kind
of
awkward
and
hard.
So
what
we
can
do
we
can,
we
can
store
the
Mon
addresses
instead
of
the
cluster
name,
and
we
already
store
the
user
ID
to
use
for
the
only
piece
of
data
that
we
wouldn't
have,
then
is
the
key.
A
So
then
I
was
thinking
well,
we
could
just
it's
already
that
config
key
stuff
in
the
monitor.
You
know
it
has
it's
already
being
used
right
now
for
the
clip
these
with
the
encrypt
keys
and
things
like
that,
we
could
just
create
a
key
in
there.
That
is,
you
know,
here's
the
key
to
use
for
that
user
to
talk
to
another
cluster
for
for
our
buddy
mirroring,
and
that's
something
that
you
know
stuff
dashboard
can
inject
that
that
date,
and
so
I
already
see
a
lot.
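(A hypothetical sketch of how such peer credentials might be laid out in the monitor config-key store — the key paths here are illustrative only, not an agreed format:)

    ceph config-key set rbd/mirror/peer/<pool-id>/<peer-uuid>/mon_host '192.168.1.1:6789'
    ceph config-key set rbd/mirror/peer/<pool-id>/<peer-uuid>/key 'AQB...=='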
A
The only thing — I was thinking I've already got a data structure on disk describing the peer, which has, you know, the client ID; I can quickly add just, you know, a JSON field or whatever — yeah, a buffer list — in there to store the mon addresses. I just didn't want to also store the, yeah, auth data in the same place, because I want to be able to lock it down.
A
We also wanted to start tackling pool-level configuration overrides for rbd-mirror. Right now we have some trickery: we already have image-level rbd-mirror overrides, and with Luminous or Mimic, when we added all the global config overrides, you know, you can override things without having to put them in, like, a file on each hypervisor or whatever. The question is, well, what if we wanted, like, pool-level defaults — can we...?
A
So that was the question: if we do something like that, it could be something that could also affect the OSDs, or RGW if it's RGW pools or whatever — there would have to be some smarts about everyone understanding "this is the pool I'm talking to" and overriding those keys. But we're thinking about having a centralized place to manage it versus, again, a one-off solution in RBD.
B
It's like a many-to-many relationship, or there's not a clear mapping, yeah. Right now, the way those options get expressed eventually funnels down to: for any given daemon, you get a set of options, and you have one view of those options for that daemon — whereas with a pool, an OSD has data across multiple pools and whatever, and so it would be different. Even an RBD client that, like...
B
I guess we did do something weird with the manager modules, where each module has its own little config namespace — yeah, that's "mgr/..." — like, you know, "config rbd/pool/option = value" or "rbd/image/something". Yeah. I guess the big difference is that in the case of the manager those are a separate set of options, whereas in the case of the RBD stuff we're talking about, you're talking about options that already exist as part of the main schema, but setting them more granularly.
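(The key shapes being compared look roughly like this; the rbd/ forms are a sketch of the proposal, not an existing interface:)

    # mgr-module style, already whitelisted in the monitor:
    ceph config-key set mgr/dashboard/some_option value
    # Proposed RBD pool- and image-level overrides of existing librbd options:
    ceph config-key set rbd/pool/mypool/rbd_cache false
    ceph config-key set rbd/image/<image-id>/rbd_journal_max_payload_bytes 16384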
A
I also wanted to clean up — I spotted this — the way that you can do image-level overrides, so that you just have: hey, I'm specifically doing a configuration override; I'm not, you know, setting the image-meta or whatever with the magic prefix on it that makes it control the config for the image, I think.
A
I don't want the — I don't want them... you can set it however you like under the hood; it's a question of whether we build something ourselves. Like, I'm totally fine, happy doing the "rbd/XYZ" keys if we can whitelist that in the monitor — right now it only has a whitelist that allows you to do "mgr/...".
B
So the short version of this is: maybe it's time we move ceph_volume_client into a ceph-mgr module, the motivation being that we have an increasing number of things that want to use that functionality. Originally it was pretty much just Manila, and then some Kubernetes provisioners started using the volume client, and then sort of copying and pasting parts of it because they didn't quite like the way it worked, and now we've got some more functionality coming that, I think, we'd want to go in there.
B
So if we pull all that stuff up into ceph-mgr and just expose it as mon commands, then we can hopefully avert this sort of fragmentation and, at the same time, make it cooler and more capable as well. At the moment, the volume client only understands how to create file system — sorry, volumes within a file system — where they are essentially just directories.
B
If we could unify that experience in a "ceph fs" command line, then whether it's a lightweight one or a heavyweight volume just becomes a flag that people can use. That didn't used to be that useful an idea, because people would still have had to light up new MDS daemons and choose the PG counts and that kind of thing; but now we're getting into environments where we can automatically create MDSes with the orchestrator interface and pick the PG numbers, so we can just do it.
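(A rough sketch of the command shapes being proposed — the exact names were still being debated at this point:)

    ceph fs volume create myvol          # heavyweight: its own pools and MDS
    ceph fs subvolume create myvol sub1  # lightweight: a managed directory inside myvol
    ceph fs subvolume ls myvol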
B
So
we
can
put
this
friendly
veneer
on
pilot
system
creation
to
make
the
whole
thing
much
more
dynamic
and
then
just
also
get
a
better
user
experience,
because
it
can
integrate
with
the
progress
module
to
show
you
what's
going
on
and
that
kind
of
thing
rather
than
what
we
currently
have
in
set
volume
client.
Where,
if
something
calls
the
purge
functional
volume,
they
just
have
to
wait
for
that
and
whatever
is
calling
the
code
has
to
be
smart
enough
to
understand
when
it's
doing
a
little.
B
Not
so,
hopefully,
let's
know
that's
the
only
controversial
I
think,
but
the
questions
I
have
at
the
moment
are.
Is
it
good
enough
to
have
commands
for
this
that
are
accessible
either
by
a
liberator
source
as
Eli
or
do
we
need
to
go
as
far
as
having
a
rest
interface
to
it?
I
think
rest
rest.
Sometimes
it
seems
like
a
no-brainer
right
up
until
the
point
that
you
have
to
think
about
authentication,
and
you
realize
that
you
have
to
stop
doing
all
this
certificate
distribution.
B
Yeah, on the REST side it seems like at least we have sort of a clear path forward: we would just pop this into either the current restful module or the dashboard's REST API, but either way it's just gonna pass through the same code, right — you call through into this module. Yeah, I would start with just the CLI and then sort of wire it up however else we need to do it later.
B
So
they
can,
they
can
use
liberate
us
to
send
any
comments
they
want,
including-
and
that
includes
like,
come
on
to
the
gate,
advantage
of
modules
and
paste
through
the
same
route
and
what
I
was
thinking
to
make
it
a
little
easier
on
the
OpenStack
people.
We
could
give
them
an
updated
set
volume
client
that
just
does
that
under
the
hood,
so
when
they
called
into
that,
and
it
would
just
be
sending
a
command
to
the
manager
instead
of
having
that
code
on
line
I'm.
E
Yeah, no — those weren't just testing items; there were real to-do items as far as sharing data pools with multiple file systems. I think you pitched the idea that we might declare multi-fs stable, but only with separate data pools, and that's certainly possible.
B
The auth caps as well, especially for this use case: we want to make sure we have the ability to have clients that can only have the caps for a particular file system. I think we don't have that — maybe we do; as you can tell, I haven't quite done my research on this. But I guess this feels to me like it's orthogonal to the data pool sharing — that just makes volumes more efficient when they're actually deployed or whatever, but the abstraction on top of these should be the same.
B
No, I was kind of assuming it was promoted for manageability reasons, but once you have dynamic adjustment of PG numbers, then it's not actually that big of a problem to have, like, thousands of small data pools. Yeah, I don't know — it just seems kind of unnecessary.
B
If we could just do a little bit of configuration work to have them all share a pool instead... I'm slightly fuzzy on which cases we would tell someone to do it one way versus the other. It feels to me like it's only if you have really, really small file systems that one PG is too many — because you could just have a one-PG data pool and a one-PG metadata pool; you just don't get the parallelism, I guess, for a tiny file system. But yeah, I don't know; it feels to me like the MDSes are the bigger issue. For me, I guess, this comes down to the volume / sub-volume / file system terminology and what they mean. It feels like...
B
We
just
be
really
clear
about
what
that
what
that
is,
so
we
need
to
pick
a
name
that
communicates
that
the
heavy
weight
thing
is
actually
heavy
weight.
It
set
has
its
own
independent
data
to
the
server,
and
it's
more
about
you
pay
more
in
resources,
but
you
get
more,
you
know
vault
them
in
isolation,
I
guess,
out
of
it.
B
Well,
the
more
I
look
at
this:
the
more
I
find
the
term
file
system
kind
of
inconvenient.
For
that,
because
I
mean
what
we
give
people
you
know
in
a
cell
volume
when
they
mount
up
on
the
client
side,
I
mean
that's
the
file
system,
it's
just
normal.
We
call
the
file
system
on
the
other
side.
Yeah
yeah
I,
like
I
I,
think
I
would
lean
towards
converging
on
volume
and
sub
volume,
with
file
system
being
serve
an
optional,
alias
legacy,
alias
for
volume.
I.
B
Yep. In terms of actually doing this stuff, I will probably go ahead and work on the sort of initial version of this, especially the heavyweight volume creation, because I want it for demonstrating the orchestrator stuff — I want it to be, you know, a single online operation. That's my point, and I think the other piece of this is that right now we want to line up the CRDs that Rook is defining with these. So right now I think there's a filesystem CRD...
B
That
is
the
same
as
the
volume,
though
we
should
fly
like
rename
it
I
guess,
but
we
also
want
to
create
a
sub
volume
CRD
and
make
sure
that
the
dynamic
prisoner
for
kubernetes
is
creating
sub
volumes,
because
those
are
the
lightweight
things
that
are
gonna
come
and
go
with.
High
frequency
and
low
latency
right
volume
is
the
container
that
they
live
in.
B
Yeah, me — I instinctively want to do that as well, although if somebody is using something like Manila and they've got, you know, tens of thousands of little volumes, I'm not sure how useful that actually is. It's a little bit like with the RBD images: you're the admin and you're looking at the dashboard of your cluster, and you don't necessarily, like, browse the RBD images to find out "oh, this one was created with Cinder", yeah.
B
Like
that's
what
they
would
do
and
when
they
would
create
a
one,
you
know
more
traditional
environment,
you
know,
I,
both
phases,
I
think
is
I.
Think
it's
really
cool
to
have
it
in
the
user
interface
because
it
lets
people's
will
click
around
and
understand
that
system
I'm.
Just
not
sure
how
is
yeah.
B
You
know
what
the
dashboard
team
does.
It
may
be
I
guess
my
might
I
guess
my
my
p1
is
first
sort
of
document
and
canonicalize.
What
the
terms
are.
Mm-Hmm
and
I
would
I.
Would
sorry
I
lean
towards
I'm
moving
away
from
the
filesystem
term,
because
it's
ambiguous
and
it
doesn't
I-
was
facing
it
and
that
drives
people
nuts
to
volume
and
have
subvolume
have
this
sort
of
defined
meaning,
and
that
was
previously
sort
of
only
stuff
volume,
client
scoped
and
then
have
a
have
a
standard.
B
This
is
probably
clinically
detail,
but
when
it's
actually
call
those
comments
like
stifled,
you
knew
or
stuff
yeah
I,
don't
know
it's
a
like
commandeer.
The
level
of
you
too
yeah
I
would
keep
its
ffs.
So
we
know
we're
talking
about
POSIX
volumes
and
not
block
volumes
or
something
otherwise
to
be
big.
If
volume
is
one
of
those
like
you're
overloaded
term,
so
it
just
means
thing
yeah
thing
with
stuff
in
it,
yep.
B
Okay, and then, yeah, the Rook thing is, like, the next one, because we need to make sure that that is doing the right thing sooner rather than later, I think. So as far as aligning all this with NFS: does enabling a Ganesha export, or whatever, just hang off "ceph fs ... volume" as well?
B
...sort of use this as a way of having individual mounts of whatever subdirectory within their file system that they want, right — because if they want to do that, they totally can already. The point of the sub-volume concept is that it's packaged up, shrink-wrapped, and, you know, our commands and management tools understand how it's gonna work. I mean, the exception...
B
I
would
think,
maybe
is
if
they
want
to
have
an
arbitrary
prefix
to
all
of
their
civilians
within
a
particular
volume,
because
they
hadn't
exist
in
the
file
system,
and
they
wanted
to
use
that
each
more
place
to
carve
that
out
with
them.
I
guess:
I'd
be
okay
with
that
yeah
I,
guess
that
was
I.
Guess
think
that
I
was
kidding
if.
C
If you're going there, then does it make sense to do something where you have a directory hierarchy with some sort of top-level directory, and all the sub-volumes go in there? You'd never walk down through the components, but it might make it cleaner to have that, rather than just dumping a bunch of directories in the root area.
B
If we list the volumes, we go and list the file system, so it wouldn't necessarily be a problem... I guess as long as we're using the file system itself as the first-tier record of what volumes exist and where they are, then okay. Well, if that's the case, that kind of forces us into the mode where there's a single prefix and all the sub-volumes live there, because that's the directory where the volumes are defined.
B
Okay,
so
then
it's
then
there's
another
variant
of
this,
where
you
could
have
like
a
collection
of
sub
volumes
that
are
in
vanilla
and
another
collection
of
the
blinds
that
are
in
splash
boo.
Those
are
just
two
totally
independent
sub
line
collections
and
then,
like
the
user
who's
using
this,
that's
why
I
switched
those
sublime
collections.
They
want
to
use
or
care
about,
or
something
is
that
useful
or
should
we
just
make
a
per
file
system
property?
B
And one way, actually, to do that, perhaps, would be to have the volume have, like, links: we would have one directory in a file system that has all the sub-volumes in it, but you could have dummy ones that were just a pointer to another path. That way, the housekeeping stuff — our ability to enumerate sub-volumes — only has to deal with that one directory, but you could totally have a soft link to a sub-volume that in reality was somewhere else in the file system.
B
Maybe
yeah
yep
maybe
be
coming
at
this
from
the
other
direction.
We
should
make
sure
that
the
function
will
be
built
to
like
add
an
NF
x.
Nfs
export
NFS
export.
It's
not
strictly
tied
to
a
sub
line,
so
you
can
still
export
rhubarb
as
and
he's
all
the
stuff
that
spins
up
knishes
or
whatever.
B
The
way
to
proceed
is
to
start
writing
the
code
assuming
a
fixed
path
for
sublimes
in
a
system,
but
be
thinking
in
the
back
your
mind,
how
big
a
role
that
fixed
path
actually
takes
and
if
it
would
make
sense
to
if
their
opportunities
to
extend
it
or
not.
Based
on
how
complex
the
implementation
actually
is,.
B
Yeah
no
I
mean
I
think
when
you
saw
a
second
ago
about
having
a
prefix
in
the
file
system.
Map
per
file
system
makes
the
most
sense
to
me
and
starting
it
was
a
starting
point,
but
you
can
obviously
see
how
if
we
ever
did
decide,
you
wanted
to
have
like
more
than
one
of
them
per
file
system.
We
could
extend
it
later
and
I
think
the
idea
of
having
links
into
subvolumes
satisfies
the
rest
of
the
person
that
has
existing
directories
that
they
want
to
start
treating
as
sub
volumes.
B
Is the group where the quota and the owner restrictions are set, or what is the purpose of it — is it just the scope of snapshots? Yeah, just that you can snapshot them together. So when you actually mount them, you might mount just individual volumes, not a group — okay — but if you want to be able to carve out, or at least snapshot, several volumes together as a set, then the group is useful. Okay.
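(A hypothetical sketch of the group idea — grouping sub-volumes mainly so they can be snapshotted together; the command names here are illustrative:)

    ceph fs subvolumegroup create myvol group1
    ceph fs subvolume create myvol sub1 --group_name group1
    ceph fs subvolumegroup snapshot create myvol group1 snap1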
B
All right, and then the last topic was CephFS MDS workload profiling.
B
This came up in a discussion about trying to understand a customer workload, and we were talking about how, back when you worked for whoever, they had all these great tools that you could just turn on in a production system and they would capture a magic file that would tell you everything you wanted to know.
B
It's
hard
problem.
This
I
just
wanted
to
throw
out
a
few
ideas.
I,
don't
know
that
Avenue,
like
specific
proposal
or
good
solution,
one
way
to
approach
it
is
through
just
more
and
better
metrics,
so
the
idea
would
be
to
and
the
perf
counters
or
whatever
it
is
to
make
sure
that
we're
capturing
all
the
different
code
paths.
The
thing
is
is
taking
you
know
how
many
misses
it's
having
on
directories?
How
often
no
so
load
first
is
not
load
when
it
does
load.
How
big
are
the
directories?
B
How
big
are
the
directories
when
we
do
look
at
all
that
try
to
sort
of
quantify
all
that
stuff
and
probably
in
the
form
of
perkiness,
and
that
would
be
one
one
piece
of
it.
There
are
also
some
ideas
around
improving
the
same
sort
of
instrumentation
on
the
clients,
so
patrick
suggested,
making
the
caps
and
the
clients
keep
statistics
on
the
caps
as
they're
used
on
the
client
side,
and
whenever
it
passes
that
the
cap
relink
shaves
the
cap
or
updates
that
are
flushes
it
back
to
the
end.
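(Today's MDS perf counters can already be inspected via the admin socket; the proposal above is about extending what gets counted:)

    ceph daemon mds.<id> perf dump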
B
...you know, like training a neural net on it or whatever — you could do something that's sort of this two-phase process. It's not entirely satisfying, but at least it tells you what's going on, so it might be worth doing.
B
We
could
capture
places
at
the
MDS
but
they're
hard
to
replay
at
the
MDS,
because
you
really
need
to
create
a
client
workload
and
that's
above
the
client
I'm,
not
sure
how
useful
ingest
trace
really
is,
except
for
debugging
I'm
or
you
could
try
to
capture
traces
at
the
client.
B
That's
probably
pretty
doable
for
the
for
the
user
space
client
for
stuff
use,
in
fact,
there's
a
bunch
of
tracing
stuff
in
there
already,
that's,
probably
totally
broken,
but
I
used
way
back
when
I'm
wondering
your
initial
initial
step
paper
doing
capturing
a
similar
trace
for
the
kernel
client
is
probably
a
totally
independent
approach
to
how
we
want
to
implement
it,
though,
and
so
I'm
not
sure
that
which,
how
that
whether
that
makes
sense,
then
I
think
about
of
traces
that
you
can
replay
the
trace
kind
of
traces
and
general
are
hard
to
read
flakes.
B
You
also
need
the
data
set.
That
goes
with
them,
and
you
know:
when
do
they
start?
When
do
they
end,
and
how
do
you
set
everything
up
so
that
only
kind
of
helps,
though
it
might
be
that
even
with
traces,
you
still
have
this
sort
of
second
phase,
where
we
have
to
like
do
a
bunch
of
work
in
order
to
actually
make
it
into
a
useful
workload
that
you
can
reproduce.
B
If you identify somebody that's doing just sequential creates of files and never touching those files again, then you can start throwing them out of cache immediately after you create them, and I think there are at least a few cases where you can pretty easily identify the workload. You can identify something that looks like an untar, or you can identify something that looks like an "ls", a shell, pretty much just by looking at the last thousand ops: if they're all stats on different files...
B
...that's an "ls -al"; if they're all creates and it never touches anything again, then it's probably an untar. Even that kind of simple thing, I think, would be pretty useful if you're looking at your top ten clients and wondering, who's this guy hammering my system — oh, he seems to be doing a file-create workload.
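(One existing hook for this kind of heuristic: the MDS op tracker keeps a short window of recent operations that can be dumped via the admin socket, roughly the "last thousand ops" view described above:)

    ceph daemon mds.<id> dump_historic_ops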
B
If we just sort of think about what the simplest vaguely complete model of a metadata workload is, it's something like: a file-size histogram — although file size might not matter too much; a directory-size histogram; directory branching factor — like, average and standard deviation of how wide directories are; how deep the hierarchy is — like, a distribution of subtree depth. But...
B
...so then you could scale up the cache size and know whether it would perform better or worse, because I'm kind of assuming the thing that's gonna slow you down is when not everything is in cache, so you're going off and fetching, paging directories in and out, or they can be incomplete or whatever — so being able to characterize the factors that make that happen or not happen.
B
Yes,
so
you
look
at
like
the
last
thousand
operations
and
for
each
one
you
record
what
title
of
it
was
whether
it
was
in
a
different
directory
than
the
last
operation
and
probably
also
was
it
touching
a
file
that
had
already
been
touched
in
the
last
thousand
blocks
or
something
like
that.
But
you
could
distinguish
between
things
which
are
operating
repeatedly
on
the
same
files
versus
things
which
operates
in
a
different
files.
B
Maybe
this
is
even
like
too
too
ambitious
like
or
codes
all
right,
they're
gonna,
like
scribble,
a
bunch
in
one
directory
or
they're
gonna
like
write
once
and
move
on
or
I
mean.
Maybe
it's
I
don't
know,
I'm
just
thinking
from
the
perspective
of.
If
we
wanted
to,
we
wanted
to
capture
a
bunch
of
metrics
on
a
workload,
and
so
the
analogy
of
the
block
case
and
then
have
something
that
would
like
both
generate
a
data
set
and
generator
workload
that
like
had
the
same
effect
impact
on
the
cash.
What
would
that
would
that
take?
B
Yeah, I mean, we run bench tools as part of the stress tests, and, you know, Ben England created this "smallfile" tool that uses multiple nodes and generates lots of small-file workloads, and it's parameterized in, like, 12 different ways to try to reproduce these workloads. I think the thing is: how do we know that the workload we're generating is related at all to what a customer is doing?
C
I think you have to be able to record their activity, yeah. And I think if you used something like LTTng or whatever, you could capture that to a compact trace, which would be lightweight, and then you turn right around and build some program that parses that into some sort of bench-like program — a file workload.
C
You have to consider the first part, though — I mean, what are you gonna use to record the workload, right?
B
Was
the
paper
that
it
just
for
block
points,
but
it's
pretty
cool,
but
I
mean
the
high-level
idea
is
basically
like.
You
have
a
production
system
and
you
have
somebody
turn
it
on
for
a
little
bit
and
they
gather
some
data,
hopefully
with
minimal
overhead,
and
then
they
send
it
back
to
you,
and
that
gives
you
enough
information
to
like
reproduce
something.
That's
similar.
You
can
just
synthetic.
We
generate
a
similar
workload.
C
Trace points, right — right in the MDS where you dispatch the different operations, yeah — and collect all the info that you need for those. Yeah, a little heavyweight, but if you're turning it on, you expect to take some hit, and it'll be outweighed by whatever you're gonna get out of it. So maybe doing it that way would be pretty simple, I think, in principle.
B
If you could have a heuristic for different subtrees that could tell: is this a subtree that's doing, like, nothing but lots of file creates, or is this something where people are working on modifications within big files — that might cue a replication agent to either wait for the untar to be finished and then do it in one go, or, if people are editing files a lot, it might need to use a snapshot-based method to copy from one cluster to the other. And that's a little bit more out there.
B
...then don't sort of try to do the perf-counter approach, I think. The other nice thing about the trace approach is that it's more open-ended: if we implement something that captures a hopefully somewhat meaningful trace of a workload, you could do some post-processing of that now and get some low-level insights, but you could always come back later with a more sophisticated analysis and learn more from it, whereas the metrics sort of box you into exactly what you're going to learn. Yeah.
B
You'd have to have some hybrid approach where the client is counting operations against its caps — like, if you had the cap metadata that we're talking about, where you know how much the caps are used on the client and when they get flushed, and you're logging that in the trace — maybe the combination of those two things is enough, and it lets you do it just on a per-client basis, yes.
E
A lot of this depends on what the goal of collecting these workloads is: is it to reproduce behavior in the MDS, or to understand the performance of the system as a whole? I think you only really need to collect trace information from the clients if you want to be able to replay something for the purposes of analyzing performance.
D
Let me provide some background. In the OSD we have a coarse lock — the PG lock — and it can only be taken exclusively; it's a mutex, not a read/write lock or something like that. We are also lacking asynchronous read functionality in the OSD, in our object stores. As a result, if we have multiple operations — call them A and B — coming in parallel but targeting the same PG, only one of them can be executed at a time.
D
Basically, we are going with the asynchronous read approach, but in a more selective manner. For that reason I would like to ask about possibilities to implement, in librbd or in any other RADOS client, some kind of hint that can be consumed by an OSD — something like "this operation is sequential because it's read-ahead", or something like that. Do we have such a possibility today?
D
Unfortunately, I'm unfamiliar with the read-ahead implementation in librbd. I know theoretically it might be possible; I'm just wondering about its feasibility — I mean, the trade-off between the benefit we could get and the cost we would need to pay to implement that feature.