From YouTube: 2017-JUN-07 :: Ceph Developer Monthly
Description
Monthly developer meeting for the coordination of Ceph project development.
http://tracker.ceph.com/projects/ceph/wiki/Planning
A: So if, as we're discussing things, anyone has any other work that is ongoing or that they would like to discuss, go ahead and just add it to the bottom of that doc. It looks like, for the first thing, we'll just start top to bottom. Make sure that we have... yes, John is here. librbd persistent cache updates. Did you want to give a bit of a summary of that, and then we can tear into it?

B: Basically, it's based on Jason's sketch branch, and I added some commits on top of that branch. It contains a generic file-based caching framework, which provides write-through and write-back support. Currently, the key components for the caching logic are a journal store, which records the writeback events in the order they were submitted, and also a metadata store, which stores all the cache events.

B: So let's say there's a request from the application layer. It will be enqueued into an op queue inside librbd, then it will be popped out and go to the journal store, which records this write request to a journal file. Then there will be two requests, which go to the data store and also the metadata store separately, and after the request has been...

B: Okay, so after the request has been persisted to the journal store, there will be an ack to the client, and there will also be a corresponding writeback request enqueued to the op queue. Sometime later, maybe a few seconds later, this writeback request will be executed and the data will be flushed to the RADOS tier.
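For readers following along, here is a minimal sketch of the write path just described: journal the write locally, ack the client, and defer the writeback to the RADOS tier. This is not code from the branch; all class and method names are made up for illustration.

    import collections

    class JournalStore:
        """Appends write records to a journal file on the local SSD."""
        def __init__(self, path):
            self.fh = open(path, "ab")

        def append(self, offset, data):
            # The record is durable on the local SSD before the client is acked.
            record = offset.to_bytes(8, "little") + len(data).to_bytes(4, "little") + data
            self.fh.write(record)
            self.fh.flush()

    class WritebackCache:
        def __init__(self, journal, rados_write):
            self.journal = journal
            self.rados_write = rados_write        # callable that writes to the RADOS tier
            self.pending = collections.deque()    # deferred writeback queue

        def write(self, offset, data, ack):
            self.journal.append(offset, data)     # 1. persist to the local journal
            ack()                                 # 2. ack the client immediately
            self.pending.append((offset, data))   # 3. queue the deferred writeback

        def writeback_tick(self):
            # Runs periodically (e.g. every few seconds) and flushes queued
            # writes to the RADOS tier in the background.
            while self.pending:
                offset, data = self.pending.popleft()
                self.rados_write(offset, data)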
B: I've just got some early performance numbers on the current branch, and they don't look so good. Oh, by the way, this is a two-node setup with four OSDs on each server, and the baseline without caching for 4K random writes is about 2600 IOPS at that cluster load; the IOPS dropped a bit with the cache enabled, but it can still get to 2,000 and up in writeback mode while submitting the writeback I/O to the RADOS tier. So, I guess, there are still some bugs inside the branch.

B: Okay, so here are the next steps.

B: Continuing with our plan: until last week or two weeks ago, we had a sync-up with Jason, and Jason pointed out that we have to respect the write barriers coming from the client; currently, in the I/O request path, write barriers are simply ignored. So we talked about this, and there might be two possible ways to support write barriers.

B: The first one is what we call writeback-flush, which means that if there's a write barrier request, first, the in-flight and dirty data is persisted to the local SSD, and there will also be a corresponding flush request to the RADOS tier. That means all the data before the barrier will be persisted to the local SSD and also persisted to the RADOS tier before the barrier completes. In this way we get a clean, safe writeback ordering for write barrier requests. The second one is what we call writeback-persist, which means...

B: ...we also handle the write barrier by persisting first, and it will be a bit simpler: for a barrier request, the data for the in-flight requests will be persisted to the local SSD as well, but instead of flushing those requests to the RADOS tier...

B: ...we just persist them, together with the dirty metadata, to the local SSD. That way we can save some writes to the RADOS tier and also reduce the latency of sync requests, and when the dirty ratio reaches some threshold, or the interval timer expires, we then start the writeback requests and also recover the data and cache mapping, which we read back from the local SSD. So we thought this writeback-persist strategy might have higher performance for sync-heavy workloads.
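A toy sketch of the two barrier-handling strategies described above; the names writeback-flush and writeback-persist follow the discussion, and everything else (the Cache class and its methods) is purely illustrative.

    class Cache:
        """Toy model of the cache state; the real logic lives in librbd."""
        def __init__(self):
            self.dirty = []    # writes not yet flushed to the RADOS tier
            self.local = []    # writes persisted to the local SSD

        def persist_local(self):
            self.local.extend(self.dirty)   # stand-in for an fdatasync on the SSD

        def flush_to_rados(self):
            self.dirty.clear()              # stand-in for writeback I/O to the OSDs

    def barrier_writeback_flush(cache):
        # Strategy 1 ("writeback-flush"): everything before the barrier is made
        # durable locally *and* on the RADOS tier before the barrier completes.
        cache.persist_local()
        cache.flush_to_rados()

    def barrier_writeback_persist(cache, dirty_ratio, max_dirty_ratio=0.5):
        # Strategy 2 ("writeback-persist"): only persist locally and complete
        # the barrier; the RADOS writeback is deferred until a dirty-ratio
        # threshold or an interval timer fires, which keeps sync latency low.
        cache.persist_local()
        if dirty_ratio >= max_dirty_ratio:
            cache.flush_to_rados()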
B: Okay, so next, we're also trying to build a smarter policy for the writeback support. For example, instead of using the time-based flush that is implemented in the current version, we can use a smarter flush or evict policy, like one based on a certain dirty-ratio threshold or a max interval time, etc.

E: Yeah, so, this is Jason. I know we've talked about this several times; I still would like to see this get kind of refocused. Don't worry about the write-through case, because the selling point here would be eliminating tail latencies over the network hop. So if you go...

E: ...and if we're only supporting writeback, this is an optimization for the writeback case. We basically just need to stream the data, the writes, as they come in into, you know, append operations to something like a local file, and then we just have a periodic process that, you know, can coalesce events and submit those to the OSDs as needed in the background. So I think the writeback picture is still too complicated for what it really needs to do, because it really should just, okay...

E: ...find a true ring buffer, or you just, you know, use N files where a given file only grows so big, and then, once it gets too big, you open a new file and start writing into that file, while your writeback process, a thread, is just chewing events off the tail of the journal.
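A rough sketch of the "N capped files" journal described here, assuming a fixed directory and purely illustrative file names; a real implementation would also need durable record framing and replay on startup.

    import os

    class FileRingJournal:
        """Append writes to a capped file and roll to the next file when it
        fills, while a background writeback thread consumes the oldest one."""

        def __init__(self, directory, max_bytes=64 * 1024 * 1024, max_files=8):
            self.directory = directory
            self.max_bytes = max_bytes
            self.max_files = max_files
            self.index = 0
            self.current = open(os.path.join(directory, "journal.0"), "ab")

        def append(self, record):
            if self.current.tell() + len(record) > self.max_bytes:
                self.current.close()
                self.index = (self.index + 1) % self.max_files
                path = os.path.join(self.directory, "journal.%d" % self.index)
                self.current = open(path, "ab")
            self.current.write(record)
            self.current.flush()

        def oldest_file(self):
            # The writeback thread reads events off this end, coalesces them,
            # and submits them to the OSDs in the background.
            return os.path.join(self.directory,
                                "journal.%d" % ((self.index + 1) % self.max_files))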
E: That's the generic use case we talked about that could be applied to RGW, or to the immutable RBD parent images for just caching reads; but for the journal, since this is sort of just about limiting write tail latencies, I think it should just be a straight and simple journal, with writeback from the journal, and not worry about the write-through case if you're enabling this feature, because this is something you're opting into. This is not going to be the standard practice for RBD.

E: You have to track which blocks or extents within that image are dirty, and you need to shift them all off to the OSDs, right, yeah. But I mean, if you're in true write-through mode, where you write into the local cache and also send to the OSDs, and you don't ack the write until the OSDs and the local disk have both safely committed it, what does that buy you?

E: Yeah, I know, yeah. As for the read cache, I think the read cache is really only going to get the best performance gains... I mean, your operating system on top of RBD has its own page cache, so it would really only be the warm-up of the OS that would actually benefit from the read cache, right. Yeah.

E: I mean, yeah, it definitely seems like that's the use case that people are interested in: how do I reduce my write latencies. And if you're just trying to reduce your write latencies, you can do it in a safe and, you know, data-consistent way: you just write it to a journal and stream it out to the OSDs safely, respecting the write barriers and, you know, the flush requests.

E: Yeah, and again, as I mentioned before, I think just for your initial work you can just disable the in-memory cache too, because, again, it doesn't buy you anything: if I'm saying I want to do writeback and I'm writing to the local SSD, the in-memory cache buys you nothing, yeah, yeah. The only thing it buys you on writes is that it can coalesce them, but your SSD cache should also be coalescing the writes to the same object, and on reads...

E: ...you know, it can support read-ahead, things like that, but the operating system itself is also doing read-ahead, so it's not a big deal. You know, I don't think it's worth focusing on right now; just say, if you turn this on for this initial prototyping effort, let's focus on a single tier: the SSD cache is on and you're not using the in-memory cache.

E: At the same time, eventually, yeah, we might circle the wagons back and make everything pretty and hooked up, but I think, just for your initial focus, I'd say assume the in-memory cache is off, or explicitly turn it off, you know, during your testing when you're using the SSD-backed cache. Okay, okay.
A: Great, thanks; that's great to see that level of depth on a CDM call, so, good. The next one we have up is the pool tags / metadata; it's Sage and Jason. If you want to...

F: We just started on this this morning, yeah. So this and the next several items here are part of a broader discussion that we're having on how to improve overall Ceph usability, and things that users are frequently confused about when they're setting up a cluster, or that just don't make sense and have been sort of inherited over time, or whatever.

F: ...anything we can think of that's going to make things easier to use, more intuitive, and so on. This particular one was motivated because Jason was adding something to the new Ceph dashboard, which is sort of a webby version of "ceph -s". It already has some stuff there that tells you about file systems; he was adding stuff about block, but it was unclear how to identify which pools were RBD pools and which were not, without going and looking inside the pools every time...

F: ...there's a map update or whatever; there's no easy way to identify those. So the intuitive suggestion was to add a simple tagging facility, so you can associate metadata with pools, and so you can say, for example, this particular pool is an RBD pool, this particular pool is a CephFS pool, and potentially other things too: in the RADOS Gateway case you're in multiple zones, so you can tag the pool with which zone it belongs to, and so on.

F: We already set some pool properties based on how things are being used. We started adding all these fields to the pg_pool_t structure for cache tiering and it just started to get crazy, and a lot of these are optional. And so there's this thing in here called... I'm sorry, I'm just pulling up the code here to try to find it. It's called pool_opts_t; I pasted a link in the chat here, and...

F: pool_opts_t is here, and these are a bunch of things like setting what compression you want BlueStore to use for the objects in this pool, what your expected chunk size is, blah blah; so we already have this sort of key/value property mechanism.

F: And the initial proposal, based on what we're seeing, was to simply add a map of string to string to the pool that's sort of high-level: this would be more user-visible metadata, as opposed to the in-the-weeds tunable pool properties. But I'm not certain that this makes sense; I just wanted to check, like...
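A small model of the proposal, just to make the distinction concrete; PoolMetadata and its fields are illustrative stand-ins, not the actual pg_pool_t code.

    class PoolMetadata:
        """Toy model: keep the strictly typed, RADOS-internal options separate
        from a free-form, user-facing string map consumed by tools and
        dashboards."""

        def __init__(self, name):
            self.name = name
            self.opts = {}                  # analogue of pool_opts_t: typed, internal knobs
            self.application_metadata = {}  # the proposed user-visible map<string,string>

        def tag(self, key, value):
            self.application_metadata[key] = value

    rbd_pool = PoolMetadata("volumes")
    rbd_pool.opts["compression_mode"] = "aggressive"  # internal: changes RADOS behaviour
    rbd_pool.tag("application", "rbd")                # user-facing: tells tools who owns the pool

    rgw_pool = PoolMetadata("us-east.rgw.buckets.data")
    rgw_pool.tag("application", "rgw")
    rgw_pool.tag("zone", "us-east")   # the kind of extra identifier discussed above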
E: Personally, I didn't realize pool opts keys were in there, but to me the idea of having a free-form string that says "the owner is RBD" just seems kind of kludgy. So if there's already something there that we extended to say, you know, pool owner or pool type: RBD, FS, RGW... and then, in terms of RGW...

E: ...if they wanted to have a special key they can put on it, or whatever RGW owns, or something like that; or the file system folks could have one for the file system, which lets us say "this belongs to file system C", or however you want. That works even if you put multiple file systems in different pools.

F: I think the difference is that this, and all the existing fields in pg_pool_t, are strictly typed and structured, and they control specifically how RADOS internally behaves; and in my mind the original thought was to add a user-facing tagging mechanism, so you can say: oh, this is my pool for X, and this is my pool for Y.

C: I think for things like RBD and RGW they should have strongly typed stuff in this map structure. If something's going to be consumed by external tools, where, you know, things like RBD and RGW are external to RADOS, we should make it freeform enough that anybody using RADOS could use it for...

C: ...whatever they need to use it for. But I think there's still an argument that the protocol field, the "who is using this pool" key, which the tools are going to rely on being set to a certain magic string anyway, maybe that one should be strongly typed; and given that we always know what that key is going to be, I'd give it a separate field.

F: Right, which means it seems like it's either: we have a map of string to string which is arbitrary metadata, plus another string that's just "owner"; or we just say, by convention, that a particular key like owner=foo lives in that other map. I'm not sure I'm against either.

H: I think we want this to be general enough for anyone to use and extend, but it also needs to be formalized enough that stuff like the manager can count on it, if we're going to start doing different displays based on what's there. Maybe we will, maybe not, but I feel like at some point we'll know that we actually want our tools to start paying attention to it.

C: I would claim that whichever librados consumer it is only needs to know its own magic value, right; so the key needs to be well defined, so that I can look at that key and say, you know, someone else is using it. But the actual string that says "rbd" or "rgw" or whatever doesn't need to be centrally defined. We don't want to have a registry inside the OSDMap that lists the possible values, because... so.

F: That sounds to me like we just want two new fields for pg_pool_t: one is "owner" or "type" - I think we can't use "type" because type is already the pg_pool type, like replicated or erasure-coded - and then we want the map of string to string that's just free-form. And then, when you create a pool, you specify that quote-unquote type/owner as a required field, so when you create the pool you say this is an RBD-type pool, to be used with that application.

C: This is the kind of thing we would use from the Manila driver as well: if it was creating a pool, it would put a tag on it saying which Manila share it belonged to.

F: I mean, for a bunch of these we can set them automatically, and that's probably going to cover something like 80% of clusters: if it's pool id zero and it's called "rbd", it's an RBD pool; if there's an FSMap that refers to it, it's an FS pool - we're going to set those. If it's a typical setup, it's probably going to get identified correctly.
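A sketch of what such an upgrade-time heuristic could look like; the pool names, ids, and prefixes checked here are illustrative, not a finalized rule set.

    def guess_application(pool_id, pool_name, fs_pools,
                          rgw_prefixes=(".rgw", "default.rgw")):
        """Tag the pools we can identify automatically so that most existing
        clusters need no manual step after the upgrade."""
        if pool_id == 0 and pool_name == "rbd":
            return "rbd"                        # the historical default pool
        if pool_name in fs_pools:               # referenced by the FSMap
            return "cephfs"
        if pool_name.startswith(rgw_prefixes):  # conventional RGW pool names
            return "rgw"
        return None  # unknown: leave untagged and emit a health warning instead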
E: I mean, I was thinking, for the RBD case you'd basically just restrict the new operations - creating new images or whatever - it wouldn't stop you from opening, you know, existing ones. Oh yeah, that'd be the backwards-compatibility path for that, and we'd just have the upgrade note saying: if we can't determine these heuristically, tag them yourself and then proceed to upgrade all the clients; because otherwise, you know, your Cinder won't be able to create new images.

E: Yep; and, well, if you do it with a --application equals whatever, then if they don't provide it you can emit a deprecation warning saying: hey, this is now required, but we'll allow you to proceed. Yeah, it's breaking CLIs, but you're giving people one release or whatever to get used to it before you require it.

F: The health warning will be there, but I think it'll be more confusing, and it's something that the user is going to hit; whereas the things that are creating pools - that have scripted pool creation - are more likely to be developers who are working on an integration for whatever is managing the Ceph cluster. We...

J: Yeah, the problem that I see with tagging is that you're kind of fixing one usability issue by introducing another usability issue, where users are now facing this thing: you know, the command line is more complicated now, and they need to learn something new that they didn't know before, or things are, you know, suddenly not working again. And so, you know, I'm not sure; maybe we can have some other solution that wouldn't require breaking it, like doing something automatically somewhere - an upgrade path would be fine.

C: So again, this is going to be a recurring situation: we have people who have written their own scripts and automation around our stuff, because our stuff was too hard to use all the time, and as we change the way we define things to make them more usable, it's going to cause some breakage, I think. For human beings using the tools directly, we can make it friendly enough; but for the scripts it's not really a usability issue, it's just a change, right.

J: Someone goes to create a pool, and now it tells them they're missing the application field, and they're like: what the hell is this notification, it wasn't here before; do I need to go to the documentation now and, you know, read another paragraph, and where's that documentation? So you're adding complexity, you know, for someone who's not a power user - a sysadmin who now needs to learn this new thing that might not even be necessary for their specific use.

F: Well, the alternative is that it's not a required field, which means we don't know, and then the tools will not be able to conclude anything... well, like, issue a health warning: so they create a pool and suddenly there's a warning that says, health warning, your cluster has a pool that does not have an application label on it, or something like that. At some point we need a...
C: I think the strongest argument for this is that anybody today who tries to write a web interface or an app that lets you browse your RBD images hits this: there is no good way to do it, right, and the only way to create a good way to do it is to make this change, and the only way to make sure that the people who write interfaces like this can use it...

C: ...comes back to what was being said here; but at that point somebody is then having to go and learn this special command they have to run, and I think it's okay for someone to have to go and learn how the latest version of the command line works when they make a major upgrade from one LTS stream to another. But I don't think it's okay for our latest code to be in a form that isn't robust within itself.

C: Yeah, that... it should work: there should be a single command to set up CephFS that creates the file system and the pools underneath it, and there should be a single command for setting up RBD. The issue with doing both of those things is carrying around the pg_num field everywhere, but yeah, I think we'll get there eventually, right.

C: So in the short run, we have the health text that says "enable RBD on this pool", and let's face it, most people most of the time are consulting, you know, the docs or the documentation; they don't know the command for that off-hand anyway. And then, yeah, we have to accept that in the long run this still isn't quite enough, because the pool create command is still quite a low-level thing and there should be things layered on top of it. What's...

H: But even where it doesn't do it automatically, it can add those labels - well, we can label CephFS pools once we put them in an MDS map, yes, I guess. I think that if we're not going to validate the input ourselves - which we can't, since there are some people who aren't using our systems, they're using raw RADOS - then unless we insist on it, they'll tag it wrong. So.

F: This was actually my original proposal: not to touch create, but just to have a command you run after that, something like "pool set tag foo"; but it means that there's yet another step they have to follow, and the question was whether not taking that step was a valid use case, right. If not doing step two means you've done it wrong, then it shouldn't be a separate step; it should be the same as the first step.

E: If you're running the CLI by hand a lot, you've probably already got it wrong, because everyone seems to get this wrong - but yes, indeed. Well, that's why you don't want, say, Cinder to automatically do it; but if everything's wired up, you get the nice, helpful error: you see a line that says, hey, this pool is not initialized for this application, please run this command. That trains them, and then we can operate on it; it stays simple and fast once you have it, you know, initialized, and...
C: I mean, I think in general I don't like people doing this, but especially when we're moving in the direction - and we've been trying - of getting people to put everything in RADOS namespaces anyway; and at the point that we're letting people have multiple file systems use the same pool, which is fine because they each get separate namespaces, it seems a little strong to also tell people that they can't put RBD images in the same pool. Yeah, that kind of thing.

J: Yeah, also, right, we do support multiple namespaces, such as in RGW, and part of it is that you want to specify that pools are for a specific use. I'm talking now about the key/value part: the pool might have multiple uses, right. It could be for logs, it could be metadata, and...

C: Well, because we could have - this is what people were already suggesting - a metadata dict, a metadata map, per protocol, right. So you have the list of protocols that are enabled, which would usually be just one, and then this flexible metadata map would belong to that protocol.

J: Maybe one of the required fields there, other than application, would also be namespace: which namespace the application is using, you think.

F: I'm a little bit hesitant to codify namespaces here, because we made the decision that namespaces are unbounded, just like objects, in the way they're implemented, and if we do anything here that suggests that every namespace has to be identified here, that is sort of misleading. But an application could - for example, if CephFS puts in its metadata - add entries that report which namespaces it is using, in its own sort of special way. Wow, where...

C: I think the idea should be for the application to declare that it's using the pool, and to put any unique identifiers in there, like the zone or the share or whatever; but it should be an identifier, not a description of what it's doing within the pool. So namespaces, I think, have to be out; I mean, even in the relatively simple CephFS case, you know, users are allowed to set layouts on files that point wherever... so.

C: I wonder if, in addition to the metadata, we should have a command that... if we have a command for this, it should be called "enable application" or "set application" or something like that, and it just creates the field - creates an entry named for the application, yeah.

H: Then those get recorded... "application enable", yeah; and those application-enable commands can fail by default if there's already one set. I'm not sure if you want to make people force it to enable or overwrite, or whether you want to make it so that I have to specify all of the applications that are enabled in every command.
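A sketch of the fail-unless-forced semantics being discussed; the function and its arguments are hypothetical. (For reference, the command that later shipped along these lines is "ceph osd pool application enable <pool> <app>".)

    def enable_application(pool_tags, pool, app, force=False):
        """Fail if a different application has already claimed the pool,
        unless the caller explicitly forces the change."""
        existing = pool_tags.setdefault(pool, set())
        if existing and app not in existing and not force:
            raise ValueError("pool %r is already enabled for %s; use force to add %r"
                             % (pool, ", ".join(sorted(existing)), app))
        existing.add(app)

    tags = {}
    enable_application(tags, "volumes", "rbd")       # first enable succeeds
    # enable_application(tags, "volumes", "cephfs")  # would raise without force=True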
E: Well, again, for the application tag... it's going to be an advanced person doing that, right, because you have someone who doesn't know: is the right string to put in "rbd", or capital "RBD", or "rados block device" or whatever, or just "block"? I mean, under the covers the RBD CLI can issue the enable command or whatever to do it, but if you're using the raw CLI, like we have, you're...

C: With or without adding the fields, I'm not quite clear whether it's right that we should set them during things like "fs new", or whether really the user should be thinking about what protocol a pool is for when they create it, and "fs new" should consume pools that were created for it, rather than the other way around; because if you don't have "fs new" automatically adding it, then the application-enable thing becomes more of a first-class step.

E: But where do you see the end state of this being? Is there always going to be a "pool create", or is the eventual end state that I say I just want a new file system, or I want a new RBD pool, so I use the RBD CLI to create an RBD pool and the RGW CLI to create an RGW pool? If that's the end state, why would we add this temporary intermediate step that we then yank, you know, once we can say: oh, now we have the magic automation.

C: I think, logically, you still have to, because we have multiple file systems conceivably sharing a pool in the design; you logically still have to create the pool. But then, okay, it doesn't matter so much what command line you call when you've got something else layered on top of it. My brain has run out on this point.

C: We probably still also need - maybe on a per-application basis, or maybe globally - a simplified pool create command that lets you say, you know, what protocol it's going to be for, or what drive types you want, for example; and I forget where we landed with the device classes, but, for example, I shouldn't have to run extra commands to say "only use SSDs". I think at the moment you still have to explicitly create a CRUSH rule... maybe; I don't actually remember.

C: But anyway, you would have a higher-level pool create command, and in addition to that you would have high-level application-specific ones, and if someone uses the application-specific one it would, by default, probably tag the pool. So I guess the short answer is yes: these should all be high-level commands.

F: So the only annoying thing about having an RGW create-pool command and an RBD create-pool command is that there are something like four different optional arguments for creating a pool; it's not just teaching them the application, it's also which CRUSH rule you want to use, whether it's erasure-coded or replicated, whatever erasure profile you want to use, and...
F: All right, so we should move on; we thought that one was simple, right? I know, yeah. So this next one is an attempt to simplify things. We're talking about a replacement for ceph-disk that uses LVM instead, which we'll talk about later, but one of the issues that came up during that is one of the ongoing headaches we have: the support - sort of half-baked, originally in the sysvinit scripts and partially carried forward into the systemd...

F: ...scripts - for having multiple clusters on the same host with different names. The way we kluged this in was that, since we already had a ceph.conf and the default cluster name was "ceph", the conf became <cluster>.conf, so you'd have foo.conf and whatever; but in order to use any of those, you have to pass something like "--cluster foo" on the command line everywhere. So the original motivation for this was, I think, a chat with...

F: ...a telecom who was dreaming up this big multi-cloud craziness: they were like, we're going to have hosts that have OSDs for multiple clusters - can you do that? And we were like, sure - until I wrote it; and then I don't know that anyone... I think there are some users who actually use it? Okay, any objections or anything at the end...

F: Somebody is actually using it? Oh, God. Okay, okay, so let me just summarize the status quo. In systemd it's only partially supported, in that, for all of the daemon instances, systemd only has a single field for the instance identifier, and that's the mon name or the OSD name, which means that the cluster name is basically hard-coded: you put it in /etc/sysconfig (or /etc/default), you say cluster equals foo, and that applies for any given host.

F: It's annoying that the original purpose isn't really useful anymore; at this point it's mostly cosmetic. It means that the cluster name clutters up the mount and directory names: the reason why it's /var/lib/ceph/osd/ceph-1 is because the cluster name is in there. So it's just annoying, and...

K: The cluster name is not being passed, or it's being passed incorrectly, or... this is one of the things that was just recently surfaced in the deployment tools, like ceph-ansible; we're still struggling to really nail this, and it is so pervasive that pretty much all of the tests inside ceph-ansible use a custom cluster name, just to be able to catch the instances where we're not passing it along and handling it right. We're still hitting issues today, and this has been going on for a while.
C: Without saying anything about whether they should exist, I kind of really agree with that: it's a gun pointed at your foot. When people try to write user interfaces, they have a "create cluster" page, and they naturally go and put in a field asking what the cluster should be called, and when you see the wireframe you're like, no - because the user will then have to spend the rest of their life passing a --cluster flag every time they run anything. And the current implementation is definitely half-baked, yeah.

F: Yeah, well, the migration path is basically that you rename things, because that's almost all it is: you rename the paths and reboot your box so everything mounts in the right location; your Cinder config is probably already using the default anyway, right. So if you have a cluster named "foo" that screwed up your setup, you rename it back to "ceph".

E: Oh yeah, that's fine with me! Taking it out of systemd support is fine with me; it's taking it out of the CLI - being able to specify --cluster - that would hurt. Again, using my use case: if I'm just watching test clusters run through, now I'm actually going to need two hosts, each with its own config, and in order to check the status of another cluster I'll actually have to go over to a node that's only configured for that cluster.

F: ...to try to move this along, my proposal would be: one, make sure we remove it from ceph-ansible and remove it from ceph-deploy, so nothing new passes it to the deployment tools and no new clusters will use it. Then step two would be: since systemd already doesn't support multiple clusters on the same host...

K: That is certainly going to break a bunch of our things. I am pretty sure we rely on parsing the names; I mean, unless we're going to... if we say no custom cluster names, and that means we're taking the cluster-name portion out of the paths, then all of the bash scripts everywhere that parse that location... we need to be aware of that.

F: There's not much more I can get out of that one. Okay, moving on.
F: The next one is killing the default pools in the cluster. It's weird that when you create a cluster the rbd pool is already there, when you might not use the cluster for RBD at all; it also complicates the deployment tools, because they have to make sure that that pool is created with the right number of PGs, instead of waiting until later and making it an explicit step.

F: I don't hear any screams - all right, all right. So the other thing on my list is the manager dashboard. I sent email to the list last week about this, about adding all the restful stuff. There are now two new modules in the manager that are not enabled by default but are shipping by default; vstart enables both of them. One of them is the dashboard, which is basically everything I mentioned earlier.

F: You basically have to go set the config to tell it what IP and what port to use, and then it'll turn itself on. It does not support SSL at all; it's just HTTP. The other one is the restful module, which requires SSL, and there's still some cleanup I think we need to do around how the cert is generated there. That's also set up - run by vstart for you - and can be enabled in the future, but I was just sort of annoyed by the lack of symmetry there.

F: I can't remember exactly where we left this; my recollection is that, for the restful one, we want to make it so that we don't use the OpenSSL CLI in a package install script to create the cert, but instead the module creates the cert on demand if it is started and doesn't have one already configured.
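A minimal sketch of the "create the cert on demand" idea, using the OpenSSL CLI; the paths and common name are placeholders, and the shipped module may store its certificate elsewhere (for example in the config-key store) rather than in local files.

    import os
    import subprocess

    def ensure_self_signed_cert(cert_path, key_path, common_name="ceph-restful"):
        """Only generate a self-signed certificate if the module was started
        without one already configured."""
        if os.path.exists(cert_path) and os.path.exists(key_path):
            return  # already configured, nothing to do
        subprocess.check_call([
            "openssl", "req", "-new", "-x509", "-nodes",
            "-days", "365", "-newkey", "rsa:2048",
            "-keyout", key_path, "-out", cert_path,
            "-subj", "/CN=%s" % common_name,
        ])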
L: But the reason they're different, of course, is that the dashboard is read-only and the restful module is an API that allows writes to the cluster, so, yeah. Only if we change our mind about the dashboard and have it do some cluster modifications would we absolutely need authentication there. Until then, maybe it's okay. The...

C: The idea is that the restful module has an authentication token, and that token is what the SSL is protecting while it's being transmitted, yep.

C: The moment we start adding stuff like that to the dashboard we'll have to stop calling it a dashboard... but you have to do the security bit properly, and having something that generates a self-signed SSL certificate is better than nothing if you're trying to protect some credentials; but it's a bit of a stretch to call it SSL, because, you know, it isn't verified. The right way to set these things up is to have an administrator, or a tool, or whatever, install a proper, valid certificate.

C: ...all right, well, I think the most important workflow for the restful module is not the creation of a self-signed cert by default; it's the process by which users can configure it to be properly secure - the process for loading in the individual SSL certificates for each daemon. That's what really matters there, because, you know, the restful thing is writable and can, you know, destroy things.

F: It feels like there's going to be a documentation page that says: these are the four steps you take in order to enable the restful API, because you have to set the IP it binds to and the port; there's just going to be one more command on top of that to generate a self-signed cert - paste this command to generate your SSL cert and also set it, right. So it's going to take them an extra ten seconds to paste it into the terminal.
K: I am going to be talking a little bit about ceph-volume, which currently is just a proposal for writing a new tool to try to deal with the pain points of managing and deploying OSDs with ceph-disk. So I'll give an overview of the current status, some of the problem areas we're looking at, and some of the ways we're thinking we can change or improve the deployment process for an OSD.

K: ceph-disk basically wants to do partitions everywhere, regardless of what device it's going to consume for setting up an OSD. The reason it does that is because most - I won't say all - of the pieces that interact with an OSD to set it up, bring it up, start it, stop it and manage it basically deal with reading GPT labels; so behind the scenes we're slapping on a bunch of different GPT labels...

K: ...it uses these to identify whether a certain partition is an unencrypted device, an encrypted FileStore, an unencrypted regular journal - what the state of it is. There's a bunch of different GUIDs that identify each one, and not only does ceph-disk read these things and interact with them, but other projects like ceph-docker do the same thing to try to identify them; we also use them with udev and, therefore, the systemd scripts.

K: Currently we have situations where certain OSDs don't come up, because we rely heavily on udev: you have certain udev events that call into ceph-disk to activate all of these, mount them and make them available. But the problem with udev doing that is that, for this whole workflow - where it goes from udev to ceph-disk to systemd, back to ceph-disk and back to systemd again - it's really hard to debug when something doesn't come up.

K: It's unclear where the pieces went wrong, because we're primarily relying on udev to fire these events in the proper order. We get into situations where we don't understand why a box rebooted and some of the OSDs didn't come up; it's hard to replicate, if not impossible, because, I mean, it's asynchronous in nature.

K: So the other thing - and I think it's primarily why this all started, or why we're trying to rethink it - is that we want to support devices like dm-cache, which are not an actual single device; it's really more like several physical devices acting as one, very much like LVM logical volumes, and it is currently very, very tricky to make that work with ceph-disk because, again, what ceph-disk wants is to make partitions.
K: So the current workaround for people who deploy logical volumes, or things like dm-cache, is to use the OSD-in-a-directory option from ceph-disk. The way that works is: you point it at a directory, you create the directory where the OSD's data is going to live, and then - because you're using it as a directory when you might really be using it as a real volume - what they'll do is mount...

K: ...they mount this volume in the expected location, and then the OSD starts. The caveat with that approach is that they can no longer rely on udev - or actually use it - to mount these things. So what they do is write directly into fstab what the device is and what the destination is - the source and destination for the mount - instead of relying on the whole GPT label machinery that does that automatically with ceph-disk.

K: So, as you can see, a few things are needed for managing and deploying OSDs: we need to know what the source of a mount point is and what the destination is. Sage mentioned at some point that the flags are mostly not needed, but if any specific flags are needed for mounting these file systems, those have to be captured as well.

K: And then the encryption keys, if encryption is in use, and the type of encryption. Those are the bits and pieces we need to know in order to understand how to properly deploy an OSD, bring it up and make it run; and the current approach is that we keep everything within the device itself - that's what these partitions are for.

K: We store things within the OSD device: there's this special partition called the lockbox. So, for example, in the case where you want one big encrypted device, you will have a partition called the lockbox which is unencrypted and holds the keys; that saves you from losing those keys if the rest of the system were to crash or get corrupted into a non-repairable state - you would still be able to get at the device.

K: So, thinking about consuming volumes as-is, we would have to shift the workflow we rely on a little bit, and one of my concerns is that I don't want to say we support only one specific type of technology. For example, I don't want to say, okay, we're going to support specifically the dm-cache usage - although that would be easier - because I want to have a way to support any volume that I can consume and say...

K: ...okay, you gave me a volume, I'm just going to make it work - within certain constraints, of course. So, going back to the partitions: the problem is that we need to think about what the workflow looks like, given all those constraints we have in ceph-disk - where we put keys, where we put configuration, and how everything is tied together to mount these things and make them work - and how to make them work without having partitions. So.

K: The workaround here is that LVM allows you to have any number of tags associated with a logical volume, and that data is stored with the volume; it's basically like having custom metadata on your logical volume. So there's no longer a restriction on what we can store: we can store as much Ceph-related information as we want, which would make dealing with these devices really simple, but...
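A sketch of recording OSD metadata as LVM tags, assuming the standard lvchange --addtag interface; the ceph.* tag names and values shown are illustrative, not a settled schema.

    import subprocess

    def tag_osd_volume(vg, lv, tags):
        """Record OSD metadata as LVM tags on the logical volume, so a later
        scan can discover everything without GPT labels or temporary mounts."""
        device = "%s/%s" % (vg, lv)
        for key, value in tags.items():
            subprocess.check_call(
                ["lvchange", "--addtag", "ceph.%s=%s" % (key, value), device])

    tag_osd_volume("ceph-vg", "osd-block-0", {
        "osd_id": "0",
        "cluster_name": "ceph",
        "osd_fsid": "bd2cfab0-0000-0000-0000-000000000000",  # placeholder UUID
    })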
F: It seems like, assuming we want to go down that road, the obvious thing to do is just to take the cluster name or the cluster UUID out of the volume name and put it in a tag, because mostly you don't care - it's rarely going to matter - but then the rest of the name is still going to be something like ceph-osd, which is nice and descriptive for an admin, plus the UUID for the actual OSD. What do you think of the idea that there should be something like that?

K: Yeah; to me, the thing I take from that is that this would just be a special case for LVM. There are other caching technologies that are not based on LVM, so this would work for the LVM use case, but it wouldn't work for regular disks that are not LVM. Right, of course; I mean, we'd kind of have to, I guess, work around the whole thing case by case.

K: I think I'm more enticed by having a single way of looking at these devices and dealing with them, and it's not only setting them up, but also the discovery. So if a dm-cache device already presents as LVM... it means we need to define a way of having some sort of scanning.

K: Now, when we were discussing this last week with Sage, we said, well, in that case you could describe really accurately what the constraints are for any device to be mounted and working as an OSD, and then we would just be able to mount it and get it going. But it's the setting-up part that is the hard bit...

K: This is why - I mean, this is great, thank you so much for raising that question - one of the things that I did is that I also went and met people; it's not just me sitting in a corner trying to come up with a smart idea, or a not-so-smart idea. I met with Sage, met with John, and talked with the dm-cache people and the LVM people, asking: what do you think, what would make sense here? So, after actually meeting with John...

K: ...what would be the way to tell a system to mount this thing correctly into place, going forward, that is fairly transparent? The way to do that is with fstab. Now, rather than manually messing with the system fstab - sort of using it as a database of known OSDs, adding and removing them, because that's roughly what the plumbing would be doing - I thought, well, maybe there's a way to define different locations for the fstab file, and it so happens that there is.

K: There is a way to specify which fstab should be consumed by mount. So what I did - and this is all proof-of-concept - was define a structure which is very similar to the /var/lib/ceph/osd structure that we follow: something like /etc/ceph/osd, then the cluster name and the ID, and then a bunch of different metadata information in there.

K: So this fstab file has the definition of this one OSD; it says, well, this is the device and the location I'm going to mount it at. The systemd unit file would then be generic and say, well, I'm just going to start OSD id zero, look at this fstab - export it, or pass in the variable - so that mount knows what it has to use, and uses it.
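A sketch of that proof-of-concept flow: a per-OSD fstab-style file is read, and the device it describes is mounted before the daemon starts. The path under /etc/ceph/osd and the field layout are assumptions for illustration.

    import subprocess

    def mount_osd_from_fstab(fstab_path, target):
        """Read a per-OSD fstab-style file and mount the device it describes
        at the expected OSD data directory before the daemon starts."""
        with open(fstab_path) as f:
            for line in f:
                line = line.strip()
                if not line or line.startswith("#"):
                    continue
                source, mountpoint, fstype, options = line.split()[:4]
                if mountpoint == target:
                    subprocess.check_call(
                        ["mount", "-t", fstype, "-o", options, source, mountpoint])
                    return True
        return False

    # Example contents of /etc/ceph/osd/ceph-0/fstab (path and layout assumed):
    #   /dev/vg0/osd0  /var/lib/ceph/osd/ceph-0  xfs  noatime  0 0
    # mount_osd_from_fstab("/etc/ceph/osd/ceph-0/fstab", "/var/lib/ceph/osd/ceph-0")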
K: Now, that would let us cover something we currently don't have: what happens when udev fails, or systemd misbehaves, or we have a bug in our code? That approach allows us to say, well, if you as a user are having issues because an OSD won't come up, or something else is not working, it's perfectly transparent - and fstab is pretty generic.

K: Everyone can understand, or should understand, how it works, and you can consume it as-is; of course, you could modify it as well, although I wouldn't recommend editing it by hand - it would probably be part of deploying an OSD. So that's kind of the exciting part; that's it from me for now. The caveat of that whole approach would be, of course, that we would be doing without the current ability to store everything on the device itself - the keys, the special lockbox - because of the lack of partitions, and I can live with that either way.

F: Now, I think it's important to separate out the different environments this is going to be used in, because in something like a docker container it's: yes, I just write everything out inside the container - you rewrite the config file, and in fstab you write something that wires up the device, and then you start it; you can just hard-code it into the container, because it's not hot-swappable - it doesn't need any of that dynamic-ness associated with it.

F: But in the more general case, we have to make sure that we support - we need - the lockbox concept, so that the information you need to get the decryption key travels with the device, and a failure of a boot disk won't destroy the keys for all of your devices; that's the most reasonable default, right. You want to do that while maintaining the ability to keep the key in some escrow key-manager service or whatever we...

C: But the point is that you've got some unencrypted piece of your device, whether it's a partition or an LV; what's to stop us just using that for everything, right? The same config file that we would put on each... okay: if we want the drives to be self-contained and hot-swappable, it could live in, you know, what we're calling the lockbox, right. It seems like writing a script that parses a config file...

C: ...that says, you know: here are the UUIDs of the devices I want to use, please mount them for me so that I can start the OSD - is a much simpler task than coming up with a scheme that self-describes the IDs of things and how they're related. It feels like - and I know this is easy to say when you're not the one writing the code - we're making this harder than it needs to be.

H: So, just so I can understand what's being debated - sorry, I missed part of this - it seems, from the pad and some of the discussion, that a lot of this is about trying to get all the information locally available in a way that the whole system can make decisions without the orchestration stuff getting involved, and that sort of makes sense: you want the server to restart and the OSDs to turn on, and that makes sense.

F: ...I mean, to the second point: the idea is that the symlink can point to whatever they want. It could point to /dev/sdb3, because you manually set that; it could point to /dev/mapper/something that I manually configured with some caching or multipath layer or whatever; or maybe it points to my RAID controller. It could be whatever you want, but as long as you have it mounted, then you're allowed to start the daemon, and then you can build on top of that.

F: LVM might be another way to do that: in the simplest case, just a standard way to label the OSD data volume, so that a systemd unit file can look at the logical volumes and, for whichever ones look like OSDs, mount them and start the daemon - the LVM variant of hot-plugging. So.
F: ...I think the ceph-osd@ service files, unmodified, will work perfectly with this. This is just describing, in enough detail, a convention that someone could go and follow manually. Right now ceph-docker is doing its own weird thing, and they've kind of reverse-engineered it.

D: Thinking on this a little bit more: the people who have used CBT and complain about this kind of problem have usually advocated for something like what John said - basically, a configuration file where they can just stick something like /dev/sdb or /dev/nvme-whatever, the partition, in there. That's kind of what people have generally asked me for.

H: And the reason I think that's the case is that different environments and OSes and deployment strategies have radically different expectations about what they have going in and what they need going out; and if getting those symlinks set up is the hard part, then we should just shortcut that, and if it's not the hard part but all our energy is focused on it, then what's actually happening is that we're trying...

L: The real convention is that we have an OSD directory that has these fixed items in it, and the reason it's nice to be able to have partition tags, or names on the logical volumes, is that you don't have to mount anything or do filesystem IO to figure these things out: you can do it from basic inspection that doesn't involve disk IO or understanding the file system.

H: It seems to me that the overall cluster architecture is intimately tied into those things, and so trying to be blind about where you're running is a recipe for being miserable all the time. To actually turn on an OSD daemon you just hand it these paths; you can point it at them, and it supports a bazillion different things, right, yeah.

F: We want to reduce the number of variables in a way that doesn't actually reduce your freedom, right; and requiring just that the data directory is already mounted before you start the daemon, and asking that you put it in the default location instead of some random one - does that feel like it reduces your freedom?

F: That feels like the biggest lesson learned, I guess. My goal would be to allow an enlightened administrator to say: I have a disk, I want to use it, I want a simple step that turns this blank disk into an OSD I can start, without having to write a bunch of scripts, in the simplest way possible; and also to have another workflow that's: I don't know anything, I don't care, the system should just do the right thing - do whatever you think is best and make it work.

F: I think they're both important, but - and this is probably just preaching to the choir - we've historically emphasized letting you do anything you possibly want, by having every possible configurable, and neglected the fact that we need to make something simple that people can just make work. So I don't want to lose sight of the fact that there should be a command - one line - that says: go create an OSD, or OSDs, out of these devices, and use your best judgment.

H: And that makes sense; that's the sort of thing you support with normal init scripts, and that we do when you give us a thing that looks like a perfectly normal server. But that doesn't mean that, when running in docker - where you have these weird ways of making sure that when containers move their storage is still available - that's the right model, or that we should be going through the same paths... it is another way.
F: I think the thing is - I completely agree with everything you're saying, but I think you're misunderstanding what's written here, because there's nothing in here that says that... the whole point of this is that there's a single... The only thing it's saying is that you should mount it in a default location, and that doesn't prevent you from doing anything you just said. What it does do...

F: ...is, it means that if you're writing the docker thing and you want to mount it on /srv/<osd-id>, then you can't use our systemd unit file - which, I guess, you're free to do; but we should make our docker file mount it on /var/lib/ceph/osd and use our standard systemd unit file, because there's nothing preventing you from doing that, except just not using a completely random path name. There's...

F: ...no loss of freedom, there's no loss of generality; you're just steering people to the well-trodden path, and they have to do less work. And if we choose those things that you may or may not opt into - maybe I opted in and I can use the stock systemd unit files, maybe I don't and I go my own way; maybe I use the LVM naming convention for volumes, maybe I don't; maybe I currently use ceph-disk, or maybe I manually have a bash script...

F: ...that runs when my server starts up and explicitly runs each ceph-osd daemon. If we choose what those well-defined things that you may or may not opt into are, and choose them so that they maximize the common elements without actually restricting your flexibility, then we'll have more people running more common scenarios, I guess, and less weirdness out in the wild, I think.

K: I think the point of contention - and kind of where we're at right now - is that we're almost on the same page, except for deciding how we understand what the source of these things is. With some initial setup, with that known, we should be able to say: yeah, we're going to set this up and make it an OSD pretty easily. I think the difference is...

H: If I understood that correctly, I think what you're saying is: oh, we can just mount these devices in our standard locations and use shared code, and that all makes sense; but what that means is that we do need to have a certain amount of information at every step along the path towards constructing that environment. Is that...

H: In some deployment scenarios the order in which we acquire information is very different, and so Alfredo's going through a lot of hoops to try to get information in the wrong order and then turn that into the right-looking environment that the tools expect - even though there's a mismatch between what information is accessible when - and I wouldn't want to paper over the fact that those environments really are different and just pretend it's all fine.

H: You could imagine a thing that, I don't know, looks at a random disk and says - because maybe in this environment it's easy to work out - oh, this is an LV, I'm going to turn this disk on, where it is, as an OSD, and then go look at the other attached disks; in another environment you might start out with a config file that's already there.
F
You
can
do
that
right.
You
don't
have
to
use
our
system
to
unit
file.
You
can
write
your
own,
but
like
that's
not
what
we
want
to
dear
customers
to
do
right.
We
want
to
have
them
normally
use
a
sort
of
simple
general
way
to
do
that,
so
that
it's
understandable
we
don't
alog
into
every
single
cluster
and
say
like
what
the
hell
is
T's
mount
points.
What
is
going
on
here?
What.
H
F
B
F
That
obviously
need
some
improvement
so
that
the
names
are
shorter,
but
the
main
difference
between
what
dis
does,
and
what
Sept
is
does
is
that
sip
disk
suffered
from
exactly
the
problem
you're
describing
where
it
didn't
know
what
oh
ste
it
was,
or
what
closer
belonged
to
hit,
don't
announce
it
temporarily
and
like
remounted,
and
it
was
like
a
mess
and
the
nice
thing
about
a
VM
and
the
reason
why
I
kind
of
like
that
logical
volume
levels
is
that
you
can
just
put
them
in
any
of
the
light.
F
D
F
Two realizations there. One is that you don't have to have your whole OSD on LVM in order to make use of the LVM symlink for the data partition. So, for example, say you have some devices over iSCSI or whatever it is, and you could still label the OSD data partition as an LVM logical volume, and it just symlinks to /dev/whatever-the-iSCSI-device-is, and everything would still work.
F
You could still use the LVM systemd unit that automatically maps and mounts it at the OSD data directory and starts up the OSD; it just happens to point to a device that isn't managed by LVM, and that's totally fine. It might be that, looking into the future, you're using BlueStore and you're using the SPDK library, so you're not even using block devices, in which case obviously LVM doesn't make sense. But in that case also, you can either have, you know...
F
The /etc/ceph stuff could be hard-coded and the /var/lib stuff could just be on the root volume; it doesn't even have to be a separate device. So with nothing else being mounted, systemd just sees it there and starts it. You wouldn't really have to do anything; you're just not using any of this automated hot-plugging stuff. Or it could be that, I don't know, I mean there's really no limit. By making it simple, you can still piece these things together in whatever way at all.
H
F
Which is why, if you look again at lines 80 to 95, the simple rule is that as long as the directory exists and the symlinks in that directory point to devices or files that exist, then it'll start. So just symlink to the devices that you need. It's like a poor man's dependency graph, and as long as that is satisfied in whatever way, with whatever set of automation tools you want, it'll just go.
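As a rough sketch of that "poor man's dependency graph" rule, assuming a hypothetical /var/lib/ceph/osd/ceph-0 layout (this is not the actual unit-file logic), the activation check could look something like this:

```python
# Minimal sketch: an OSD is ready to start as soon as its data directory
# exists and every symlink inside it (block, block.wal, block.db, journal,
# ...) resolves to a real device or file.  Paths are illustrative only.
import os

def osd_ready(data_dir: str) -> bool:
    """Return True if the OSD data dir exists and all its symlinks resolve."""
    if not os.path.isdir(data_dir):
        return False
    for name in os.listdir(data_dir):
        path = os.path.join(data_dir, name)
        if os.path.islink(path) and not os.path.exists(path):
            # dangling symlink: the device it points at is not present yet
            return False
    return True

if __name__ == "__main__":
    print(osd_ready("/var/lib/ceph/osd/ceph-0"))
```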
L
L
If someone comes up with a new disk technology that neither supports partitions nor supports names, I don't know how it's going to be used. So I think we may be over-worrying about all these other ways of setting up drives that might exist in the future. They're probably going to be block devices and they're probably going to have names.
K
L
L
No, I don't mean to say that it's going to have exactly the same naming format as LVM. I'm just saying that it has a name that is user-settable somehow, or it has partitions. It's hard to imagine block device technologies that don't satisfy one of those two.
L
F
L
L
Like, if it's enough that we have to open the partition to do it, that's fine. You know, LVM itself writes labels at fixed locations on the disks so that it doesn't have to deal with a file system to identify them. It's conceivable that you could leverage that sort of scheme somehow. It's conceivable that there are flexible ways of labeling drives that we could start using for Ceph that are less constraining than partition tags, less constraining than GPT partition tags.
L
But the upshot is still the same: if we can identify it without involving the file system, it's less complex and it's faster, but if we can't, we're probably still OK, because we only have to have that file system by the time we use the drive anyway, so a temporary mount just isn't that big a deal. But my point was that going to LVM-style block devices makes everything more flexible in an absolute sense, because we're no longer incurring the assumption that there is a partition table underneath. That's all.
F
You do whatever you want, and we are potentially moving to a situation where you can either use that disk in a very prescribed way, or you can use a new tool that uses LVM in a particular way that happens to be more flexible, but not infinitely flexible, or you're on your own. And then the last piece is just that we document what "on your own" means, so that you don't have to write your own systemd unit file; you can at least reuse our systemd...
F
F
H
F
What does formatting mean? Like, every OSD back-end requires multiple distinct block devices: for FileStore it's the big XFS volume and a small journal, for BlueStore it's a small XFS volume and a big block device, and maybe a couple of other block devices. So there's always the requirement to chop it up in some way.
F
So you either have to choose LVM or partitions, right. The common element between them is: once you have your volumes, the actual block devices, ready, and you've layered file systems on some or all of them, then you run ceph-osd mkfs. That's the common piece between ceph-disk and the LVM path. We could try to separate that into a different tool, but honestly I think it's simple.
F
L
Just a slight shift of perspective there: you may not even chop anything up, they're just block devices. They may be block devices that are parts of another block device or parts of a logical volume or whatever, but they're just block devices. You can have four SSDs and use them all individually for pieces of a BlueStore OSD.
L
F
F
L
L
F
K
I mean, fair enough point. I would prefer just using the UUID over the actual name. Of course, we could also put something in the name that carries the metadata for it; for anything custom we can use that, and maybe we could just use the name for plainer disks, I mean, we do something like that as well.
D
F
Just today somebody was asking about some of these: they enabled dmcrypt, they rebooted the server, and the devices didn't come up right, and, you know, how do I work that out? I think the main reason I'd like to keep at least some minimal information in the name is just about what the administrator sees. So, you know, they list the volumes, they see it and know what it is, so it should have Ceph in there somewhere.
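One way of combining a short, recognizable name with richer metadata, sketched here only as an illustration and not as the ceph-volume implementation, is to keep the detail in LVM tags and leave the name simple; the volume group, name format, and tag keys below are hypothetical:

```python
# Sketch only: create a logical volume with a short name the admin will
# recognize in `lvs`, and attach the detailed metadata as LVM tags
# (visible with `lvs -o lv_name,lv_tags`).
import subprocess

def create_osd_lv(vg: str, osd_id: int, fsid: str, size: str = "100%FREE") -> str:
    lv_name = f"osd-block-{osd_id}"              # short, human-readable name
    subprocess.run(["lvcreate", "-l", size, "-n", lv_name, vg], check=True)
    lv_path = f"{vg}/{lv_name}"
    # Richer metadata lives in tags rather than in the name itself.
    for tag in (f"ceph.osd_id={osd_id}",
                f"ceph.cluster_fsid={fsid}",
                "ceph.type=block"):
        subprocess.run(["lvchange", "--addtag", tag, lv_path], check=True)
    return lv_path
```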
F
F
Agreed, but I can't tell if I'm interpreting that statement correctly, because one way of interpreting it is that a tool called ceph-volume lvm shouldn't exist, that the administrator should type LVM commands directly and that's it. That seems impossible, and possibly in conflict with the idea that we want a storage system that is usable by somebody who isn't already an experienced and trained Linux administrator, in order to have any chance of succeeding with any user base.
F
K
F
F
It looks at it, thinks a little bit, says "this is my best guess at the best approach," and does it. Because expecting users to know how much they should carve out for the BlueStore WAL, whether the WAL should be on dm-cache or not, how big the dm-cache volume should be... how could they make a good decision, right? And then every single cluster is going to be completely different, and they're going to get weird performance.
F
K
L
Take that, hey, there are two levels of crazy here: there's the whole craziness of setting up dm-cache itself, and then how to adapt that appropriately to your particular configuration. I have no qualms with writing white papers and describing what sorts of storage this thing needs, because dm-cache is certainly not the only thing people are going to want to use. But, you know, I can make the same argument about SSDs and spinners today, and I think that's exactly the same kind of solution, where in the long term, if you want an automated setup, we have an installer go and investigate all the disks in the server and say: OK, this disk is an SSD, it can serve the journals for this many hard drives, and set that up. I take it we agree.
D
F
Absolutely, we should write ceph-volume lvm without dm-cache support, and it'll be written in such a way that you can go set up dm-cache yourself and just feed it to ceph-volume lvm and it will just work. But I think that, immediately after that, we have to have a way to make this easier, because I feel like this gets at the heart of...
F
...why we're getting so much pressure right now from the business: Ceph is too hard, like you have to be a Ceph expert in order to set it up. And at the end of the day, I mean, you have a box, it has 24 disks and two NVMes; just carve it up and use it, right. I mean, we want to default to BlueStore, we're going to take those two NVMes and split them across those disks.
F
We're going to decide how big we want the partitions for the BlueStore WAL; that's hard-coded, that's a policy decision. And then the rest of the space we just divide by whatever, however you decide, into chunks for those disks, and just make each chunk a cache for the data partition.
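As a back-of-the-envelope sketch of the kind of policy being described, and only that, not an agreed-on layout: reserve a fixed-size WAL slice per OSD on the fast device and split whatever is left evenly as per-OSD chunks (for a DB volume or a dm-cache layer). The 1 GiB WAL default and the sizes are made-up examples:

```python
# Toy policy: fixed WAL per OSD (hard-coded decision), remaining fast-device
# space divided evenly into per-OSD chunks.  All numbers are illustrative.
def carve_fast_device(fast_dev_gib: int, num_osds: int, wal_gib: int = 1):
    wal_total = wal_gib * num_osds
    remaining = fast_dev_gib - wal_total
    if remaining <= 0:
        raise ValueError("fast device too small for the requested WAL layout")
    chunk_gib = remaining // num_osds
    return {"wal_gib_per_osd": wal_gib, "chunk_gib_per_osd": chunk_gib}

# e.g. a 400 GiB NVMe shared by 12 spinning OSDs:
print(carve_fast_device(400, 12))  # {'wal_gib_per_osd': 1, 'chunk_gib_per_osd': 32}
```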
D
L
F
L
D
D
F
You want it to go fast, you want it to be easy, and we can make a thing that does a pretty good job of making it go fast and is easy. That is the best of both worlds. And if that's not good enough for you, then you can go off the beaten path and do it yourself, right. But we can't just say, oh, there are too many decisions to make, the administrators should go make them, because we don't know; they're going to go buy...
D
F
D
K
F
K
One that needed it, and there are like two or three like that. I can't pull out of my head which one, there are like three different ones, but that's just one thing. I mean, I don't want to say no, but I want to say it's two or three things, and otherwise I think we'll be running in circles. That's it. Let me just say, do two things.
K
First of all, on dm-cache, they told me, like, maybe we should really, really think about what we want to accomplish here. Like, having parity in the tooling and setting up this whole thing for someone else is a hard problem. You mentioned that you have different devices; it shouldn't be hard to say, well, you have these things, so put those things here. It shouldn't be that hard, and I'm thinking it is, and these guys don't think it is.
F
K
F
K
H
It may not be the very best you could do, but I think what we need is to come up with a design conservative enough that we know that if you do it, it will be faster, and yet not so hard that we can't deliver it, so people can just reuse it. And if they do, then we'll have it set up in a way that works, right.
H
D
F
It might not, but the point is that there should be a tool that says: make use of this set of hard disks and SSDs in whatever way seems best given what I have, and it will do that. Maybe it'll just decide that it's going to put the WAL there; maybe, if there's more room, it's going to put the database, the DB volume or whatever for BlueStore, there. But if you have tons of SSD, then it'll take what it expects, which is that you're only going to have this much metadata.
D
I worry that we're talking about introducing too many layers of things. We already have so many layers of things. In RocksDB itself you can separate out data into different levels, right, so you can put each level on a different device. We've got multiple layers of caches in different places. Now in addition we're talking about putting dm-cache in, and all these different layers add the possibility of something going wrong and bad paths somewhere, yes.
F
I
K
L
D
D
Basically, the gist of this is that in BlueStore, when we have a fairly large amount of onode data per OSD, in this case with a 512-gigabyte RBD volume, so actually not that much necessarily, but enough to cause this, we can no longer keep all of the metadata in BlueStore in cache, at least at our default cache values.
D
So we end up doing a lot of reads in RocksDB, and I grabbed some data really quickly this morning during a test. This is 4K random writes on a single NVMe-backed OSD with everything on NVMe, so the write-ahead log, the database, and the block partition, doing 4K random writes to this 512-gigabyte RBD volume. We're seeing that the vast majority of the I/O going to disk is actually small reads.
D
There are 18,000 read I/Os going to the database partition and a much smaller number of write I/Os, although the total amount of data written was higher, it's just that the I/O size was bigger for those. So you can see on the write-ahead log there that there's actually a high number of I/Os going to the write-ahead log, and also a high number of I/Os going to the block partition at that particular moment.
D
Overall, looking at the performance impact of this: when all of the metadata for writes in this kind of workload, at this kind of size, fits in cache, and in this case I dedicated gigabytes of RAM to the OSD so that it can do that, you get about 30,000 write IOPS. But when using the default cache, which is what we have now, about a gigabyte if I remember right, we drop down to about 10,000 random write IOPS.
D
So we're doing reads, tons and tons and tons of little tiny reads, mostly sequential, but with random reads in there too. So anyway, moving on from this: OK, what can we do? Well, the nice thing would be if we could somehow reduce the amount of metadata that we have, but short of that...
D
Maybe there are things that we could do, or different solutions that we could look at, that would handle these reads better, and one that was looked at a while back: Xinxin had made a key-value back end with LMDB, and so I thought, well, maybe it would be worth looking at this again, so I dug out the old PR and started working on it.
D
I've moved it over to CMake from automake and also got a couple of things implemented that we need now, like merging and range keys, so it does compile; the branch I've got linked in there does compile now. It is racy: without debugging stuff added in, it crashes pretty quickly, but if you throw in lots and lots of debugging, like douts in the code, then I can actually get the mon to run with it. So there's stuff that still needs to be fixed.
D
There are a lot of caveats with this. It is slow at writes; it looks like it does two fsyncs per transaction, and it doesn't scale with multiple writers, it's a single writer. So those are some pretty big hits against it, potentially. There was a write-ahead log branch floating around out there whose author claimed it gave huge speedups, but it also broke stuff, so I don't know what the size of that was. I'm going to see if I can reach out to them and find out.
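For reference, a minimal sketch of what the write path looks like through the LMDB API, using the Python lmdb bindings purely as an illustration (the Ceph KeyValueDB backend under discussion is C++): LMDB serializes writers, so only one write transaction can be open at a time, and with default flags each commit is synced to disk, which is where the per-transaction fsync cost and the single-writer scaling limit mentioned above come from.

```python
# Illustration only, not the Ceph backend: LMDB allows exactly one write
# transaction at a time, and by default each commit syncs to disk.
import lmdb

env = lmdb.open("/tmp/lmdb-sketch", map_size=1 << 30)   # 1 GiB map, example path

with env.begin(write=True) as txn:        # writers are serialized by LMDB
    txn.put(b"pglog.0000001", b"entry-1")
    txn.put(b"pglog.0000002", b"entry-2")
# leaving the `with` block commits (and, with default flags, syncs) the txn

with env.begin() as txn:                  # many concurrent readers are fine
    print(txn.get(b"pglog.0000001"))
```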
D
So this kind of project, I mean, really, the very specific part is LMDB: look at it, see if it's worthwhile looking at again. I don't think it quite got a fair shake in the previous PR; there's a lot of stuff in there, copying strings around and so on, that probably slowed things down somewhat, but it does look like it has some legitimate issues too. Maybe the bigger topic is here.
D
The bigger story, though, is that we need to figure out how to deal with the metadata workload that we have, because it's big, and the bigger the OSD, the worse it gets. The SanDisk guys were working on ZetaScale, like I said, the ZetaScale adapter for BlueStore, to try to do this better than using RocksDB, but that's kind of out of the picture now. So anyway, that's basically what it is. Anyone have any comments or thoughts?
F
I think the big question with LMDB specifically is just whether the architecture, which is not optimized for writes, how that's going to weigh against something that does reads very well, when we're also doing lots of metadata reads. There's no way to really know that until we just try it, yeah.
F
F
F
You can also point out that if you look at, like, any storage system ever, it's usually dominated by metadata performance, right. So if your metadata is slow, then everything is slow; that's sort of a truism of any storage system, I guess. And it's probably exacerbated in our case just because our workload is so randomized, because it's spread across so many different nodes, and so each individual unit of work is smaller than in a more traditional system.
D
F
So last time I synced up with the folks at Intel, you know, there's a whole plethora of projects going on, but some of them are specifically looking at key-value databases on NVMe and on persistent memory, and that spans from, like, let's make RocksDB go fast on NVMe, all the way to, you know, how do we write a key-value database with the right semantics that all these distributed systems need, that runs directly on...
F
So I think there's a reasonable chance that something is going to come along in the next year or two, but I don't think right now there is really any other good choice. Yeah, maybe LMDB is getting close, but I'm really worried about that single-writer issue, because our workload really is, it's like almost every I/O is a write I/O, it's just that we have to load metadata in order to service it. Right, yeah.
F
F
So they wrote specific code to deal with our PG log write pattern, where you have consecutive key values getting retired on sort of a rotating basis or whatever, and so they wrote a specific back end for that part of the key space, code specific to it, and that was where they got their performance back. And I'm skeptical that LMDB is going to be able to do that, yeah.
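For context, here is a toy model of the PG log write pattern being described, with purely illustrative key names: new entries are appended under increasing sequence numbers while the oldest ones are trimmed, so the hot part of the key space keeps sliding forward.

```python
# Toy model of the PG log pattern: keys like "pglog.<pg>.<seq>" are appended
# in order and the tail is trimmed, so the live window of keys keeps rotating.
from collections import OrderedDict

class ToyPGLog:
    def __init__(self, max_entries=3000):
        self.max_entries = max_entries
        self.entries = OrderedDict()
        self.seq = 0

    def append(self, pg, payload):
        self.seq += 1
        self.entries[f"pglog.{pg}.{self.seq:012d}"] = payload
        while len(self.entries) > self.max_entries:   # trim oldest entries
            self.entries.popitem(last=False)

log = ToyPGLog(max_entries=2)
for i in range(4):
    log.append("1.2a", f"op-{i}")
print(list(log.entries))   # only the two most recent keys remain
```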
D
F
F
You need, like, rollback and whatever, yeah, okay. But maybe that's the best tool, I'm not sure. Anyway, my suggestion would be to make it build, even if it's just on the monitors; maybe it works well for the monitor workload, I don't know. The monitors' I/O can actually be pretty light now, yeah, now that the manager is in place. But if nothing else, it will tell us more about what we need; even knowing that something doesn't work tells us more about what we need, so yeah.
D
D
F
F
It doesn't do any data caching, and there are going to be some workloads where we do need caching on disk, or workloads where the entire OSD is storing key-value data, because it's RGW bucket indexes or something, in which case the caching it does might not be sufficient. We'll have to see.
F
I think, I mean, even if you ignore dm-cache, even if you say that dm-cache is not going to work for most workloads with BlueStore, or it's not right for the RGW bucket case, and if people do want dm-cache they can go set it up by themselves, we still want a tool. We still want to get as close as possible to a workflow where the user can say:
F
"I have this many hard disks and this many SSDs, go set them up in whatever the best way is," and have the tool say: oh well, you have a ratio of 6 OSDs to SSD, so I'm going to carve each SSD into 6 parts, and I'm going to use this much for the WAL and the rest of it for the DB, or whatever it is.
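A sketch of that "figure it out from the ratio" idea, assuming hypothetical device names and a made-up 1 GiB WAL default rather than any recommended sizing: assign each data disk to an SSD round-robin, give each a fixed WAL slice, and split the SSD's remaining space evenly into DB slices.

```python
# Toy planner: round-robin HDDs onto SSDs, fixed WAL slice each, remaining
# SSD space split evenly into DB slices.  Names and sizes are illustrative.
def plan(hdds, ssds, ssd_gib, wal_gib=1):
    per_ssd = {s: [] for s in ssds}
    for i, hdd in enumerate(hdds):
        per_ssd[ssds[i % len(ssds)]].append(hdd)   # e.g. 12 HDDs / 2 SSDs -> 6 each
    layout = {}
    for ssd, assigned in per_ssd.items():
        db_gib = (ssd_gib - wal_gib * len(assigned)) // max(len(assigned), 1)
        layout[ssd] = {"osds": assigned, "wal_gib": wal_gib, "db_gib": db_gib}
    return layout

print(plan([f"sd{c}" for c in "abcdefghijkl"], ["nvme0n1", "nvme1n1"], 375))
```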
F
I think we still want that; I think we should have a Trello card for it. You know, the equivalent of ceph-disk create, but like a "create many," where you just give it all the devices you have: this is everything in the system, this is everything I want to use, and it just goes and creates them; it figures out what should go where and does it.
D
F
D
F
I
F
D
I
F
F
The way it should be is marking them as, like, consumable by Ceph, and then they would get activated within a particular cluster. I think just having a convention that says: this is the way to label a blank disk as blank and allowed to be consumed by Ceph, would be sufficient.
F
Maybe that's just, you know, creating one big LV with a label, "ceph available" or something, I don't know, whatever it is, just so there's a way to mark everything that's usable and then have all the other tools know that they're free to consume them if they don't have a Postgres database on them.
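One way such a convention could look, sketched here only as an idea and not as any agreed-on format: wrap the whole device in a single LV and tag it as blank and available for Ceph, so other tooling can discover it safely. The tag string and volume-group naming are hypothetical; only the LVM commands themselves are real.

```python
# Sketch of a possible "this disk is blank and Ceph may consume it" convention.
import os
import subprocess

def mark_available(device: str) -> None:
    vg = f"ceph-free-{os.path.basename(device)}"     # hypothetical VG naming
    subprocess.run(["pvcreate", device], check=True)
    subprocess.run(["vgcreate", vg, device], check=True)
    subprocess.run(["lvcreate", "-l", "100%FREE", "-n", "available", vg], check=True)
    subprocess.run(["lvchange", "--addtag", "ceph.available=true",
                    f"{vg}/available"], check=True)

# Other tools could then discover such disks with: lvs -o lv_name,vg_name,lv_tags
```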
F
D
A
That's everybody, then. If you've made it through this recording, thanks for sticking with us; we'll see you in a month. All right, bye everyone.