From YouTube: 2019-09-12 :: Ceph Performance Meeting
B
So, on the read side, I've seen us hit a little higher previously than on the write side, but you know it can depend on a lot of factors and kind of what the underlying I/O looks like, right. This — sorry, I'm gonna pepper you with more questions — but, you know, this configuration: would there be an SSD journal, or would it just be everything going straight to the hard drive? Yeah.
B
Sorry, SSD or hard drive? SSD OSD. Sorry, I thought you were saying hard drive, okay. An SSD setup — you can do pretty well, depending on what it is. You know, if it was able to deliver super, super high IOPS, we won't hit it — well, you'll probably stall it on CPU first. If it's, you know, kind of a more typical device, maybe able to do 20 or 30,000 write IOPS — like a SATA 3 type device — you can probably do closer.
B
Then it depends on how well that device can actually do, like, O_DSYNC writes and, you know, other things like that. So I guess the answer is kind of highly dependent on exactly what the device is capable of, and sometimes some non-obvious things, like how well it really handles, you know, sync writes and other things.
A
You know, other things — okay. Well, that's good info! You know, I've been doing a lot — still continuing this stuff I've been doing all year — and I'm at the point now where I feel like I'm getting close to the threshold of what I'm going to get with the Luminous code I'm using. So yeah — except for some of the stuff you're talking about today, right. It's wonderful, yeah.
A
B
D
B
Cool, that's exactly what I was hoping to hear. So I don't know if we want to do my PR, because, you know, you could think of it as a little bit hacky, I guess — but something like that maybe we do want to do. I don't know, we'll see, but cool. Yeah, that's great, Eric.
D
B
For the stuff that I was doing, I was just looking at, like, a wall-clock profile and how much time we're spending in it. I wasn't even bothering to look at, like, the network traffic or anything else like that — just, you know, where we're spending time, right. Okay, yeah, so you could use either my wall-clock profiler or Adam's profiler — his is faster than mine.
B
D
E
Performance of the filtering stuff is going to depend a lot on what all your object names look like, because it's the delimiter and prefix searching — that filtering — that's moved down. If you don't have, like, nested directory names at all, then it won't have any effect. But if you have lots of nesting, then it'll have a big effect.
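The delimiter/prefix filtering being pushed down can be sketched roughly like this — a toy model of S3-style listing semantics, not actual RGW/CLS code — showing why nested names benefit: whole "subdirectories" collapse into a single common prefix instead of every entry crossing the wire.

```python
def filter_listing(keys, prefix="", delimiter=""):
    """Apply S3-style prefix/delimiter filtering to sorted index keys.

    Returns (entries, common_prefixes): keys under `prefix` that contain
    no further `delimiter`, plus the collapsed "directory" prefixes.
    """
    entries, common = [], set()
    for key in keys:
        if not key.startswith(prefix):
            continue
        rest = key[len(prefix):]
        if delimiter:
            pos = rest.find(delimiter)
            if pos != -1:
                # Collapse everything under this "subdirectory".
                common.add(prefix + rest[:pos + len(delimiter)])
                continue
        entries.append(key)
    return entries, sorted(common)

keys = ["a/1.jpg", "a/b/2.jpg", "a/b/c/3.jpg", "top.txt"]
print(filter_listing(keys, prefix="a/", delimiter="/"))
# (['a/1.jpg'], ['a/b/'])
```

With deep nesting, the two keys under `a/b/` come back as the single prefix `a/b/`, which is the effect being discussed.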
B
Yeah, it sounds to me like we probably want to at least be doing the decoding step in the CLS code, so that we can do your filtering plus all the existing filtering we do there, plus, you know, potentially new stuff. I guess my question would be more whether or not it's worth actually decoding and re-encoding the actual, like, entries, right.
B
D
B
D
D
E
B
E
That's — Matt's objection to doing this was more from, like, an API perspective: the RADOS clients don't really know what to do if they're given a bufferlist blob, so there's advantages to having the actual structure there for the client. But we might be able to do some hacky stuff on the sending side, so that it sends a bufferlist but the client knows to decode it as this structure. But yeah, it gets a little messy, that's all. Sure.
B
B
B
All right, well, we have all these things to go through, so I'll get through here quickly. There's this one from Igor about the BlueStore automatic legacy statfs fix — anything there we need to discuss?
F
B
Okay, let's see. There's this reimplementation of test/perf_local — or adapting it, ARMv8 fixes to make that work. Apparently there's this inline immutable small object in the onode — I think we should talk about that here later, but basically this, I think, originally started out as a PR to RGW to put small RGW objects into OMAP, which then kind of morphed into — well, we're thinking about shrinking the min_alloc size in BlueStore. Then also he wrote this PR, which is also somewhat similar to one that Igor wrote a little while back.
B
That does something similar, putting all small BlueStore objects either into the onode or, you know, into OMAP or whatever, directly. So anyway, yeah, let's talk about that after we get done with all these. There is a mutex contention optimization in the throttling code; Greg looked at it and gave it a thumbs up. I have not looked at that yet, and I have not looked at that code.
B
Let's see, those are new PRs. Closed stuff: Neha has a couple of QA suite PRs that merged. One is for testing a combination of min_alloc sizes — I'd started taking the OSD memory target size down to the current minimum, which is 1.5 gigabytes, as she was testing, and there's some really nasty performance during OMAP deletion that she was seeing with that. So I'm just curious, if we start at a 2 gigabyte memory target, whether we still see that, since larger ones were working fine. Might be a caching-related thing, just thrashing stuff because there's not enough.
B
There's this one from — I'm gonna mangle the name — Shy Sheen Kuo, maybe, about the BlueStore BlueFS max free size. Adding that — this was just more, I recall, looking at it, dealing with a bit of the fallout from Sage's PR to change the BlueFS alloc sizes to be smaller. I think by doing that we essentially made it take longer to do allocations in some cases, if I remember right. So anyway, that apparently merged — I think Sage had reviewed it, and maybe, Igor, did you review that as well?
G
B
An older version of this perf_local ARM thing I closed. Dumping JSON from the hot path during reshard — that was mine; that got merged by Casey. And then Casey's one to avoid doing these reshard checks during the write path when you are going to eventually do a reshard — you already know you're going to, but you haven't started yet — that got merged. That's the big performance win. This Beast parse buffer size — that got merged.
B
The other one, about reading the data portion of a pushed object in batch — I don't remember which one that is, but apparently it merged. There's Adam's aging test for BlueStore allocators; that was kind of a long-standing PR that had been there, like, almost a year ago, I think, and Sage went ahead and reopened it — that got merged. And then this new op queue.
B
We had requested performance data on it and never heard back, and that eventually just got closed by the stale bot here. And then updated PRs: some stuff for RADOS bench — I had looked at it and generally gave it a thumbs up; people looked at it and had a couple of nits that they had noticed and I hadn't, so those things are being fixed and it's in testing.
B
H
B
G
B
That's continuing to get updates — I think at one point it had to rebase, and now it's just getting more review and more updates. Adam's sharding work: it's looking really good, except for one small problem, which is that, rarely, it's showing stalls coming up after about six to eight hours' worth of work, which look dramatic!
B
Briefly we were worried it was from my older cache refactor work in BlueStore, but Adam — thankfully, thankfully for me — showed that it was not that, and it's happening even on an older master prior to my change. So he's working feverishly trying to figure out what happened, or what's going on. So for now we're stuck on it a little bit, but hopefully soon he'll figure it out and we can get this thing merged.
B
Let's see, there's a PR from Osha and Ping about narrowing down locking. I was going to try to take a look at that, but I do not feel comfortable at the moment making judgment calls about the locking in that code, so I'm going to need to do some reading first. And then, oh, Adam's other PR — I think that got updates as well.
B
Basically, yeah, I'd say just bite the bullet, get it in, and hopefully we don't break anything too badly, but we'll see. All right — that's all I had for PRs; thankfully, that was more than enough. I was thinking, unless anyone has anything else they would like to discuss first, maybe we could talk about Casey's RGW bucket resharding proposal to avoid write stalls. Hey, anyone have anything else they want to talk about before that?
B
E
Sure. So the current RGW bucket resharding involves kind of locking the bucket while we do the reshard, so clients can't write to it in the meantime, and we would like to find a way to not block all I/O. So I proposed a way to effectively allow clients to write to both bucket indexes during reshard, trying to highlight the races around that that need to be stopped for that to work.
E
Yeah, it seems quite a bit more complicated — I haven't fully digested what the write path looks like in that case, though. Yeah, it effectively splits the resharding into two steps: while you're copying entries over to the target shard, you're still writing into the source shard, but you're writing to a separate — we called it an overlay — and then, once all of the non-overlay entries are copied over, you transition to writing against the target shard.
E
But before you can write, you have to check the source shard's overlay to see if something changed since either side of the reshard, and that's the part that seems kind of expensive — the extra round trip to poll the source shard. And I'm not sure exactly how that works with versioned objects needing, basically, multiple OMAP keys. But yeah, that's the difference. If it can handle versioned objects, then that's wonderful, but yeah, the extra round trip on writes is not quite as good — but it's definitely better than blocking all I/O.
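The two-phase scheme being described can be modeled in a few lines — a toy sketch of the overlay idea, with illustrative names, not the actual RGW design: writes during the copy land in an overlay on the source shard, the copier moves the base entries, the overlay is drained (overlay entries win over stale base copies), and then writes cut over to the target.

```python
class ShardReshard:
    """Toy model of resharding one bucket index shard without blocking writes.

    Phase 1: writers keep hitting the source shard, but into an 'overlay'
    map, while a background task copies the base entries to the target.
    Phase 2: once the base copy finishes, the overlay is drained into the
    target and new writes go to the target directly.
    """

    def __init__(self, source):
        self.source = dict(source)   # base entries at reshard start
        self.overlay = {}            # writes arriving during the copy
        self.target = {}
        self.copy_done = False

    def write(self, key, val):
        if not self.copy_done:
            self.overlay[key] = val  # phase 1: land in the overlay
        else:
            self.target[key] = val   # phase 2: straight to the target

    def copy_base(self):
        # Background copy of the non-overlay entries.
        self.target.update(self.source)
        # Drain the overlay: changes made mid-copy win over base copies.
        self.target.update(self.overlay)
        self.overlay.clear()
        self.copy_done = True

r = ShardReshard({"a": 1, "b": 2})
r.write("b", 99)         # arrives mid-reshard, lands in the overlay
r.copy_base()
r.write("c", 3)          # arrives after cutover, lands in the target
print(r.target)          # {'a': 1, 'b': 99, 'c': 3}
```

The "expensive part" from the discussion is what this toy omits: in the real system the phase-2 writer must make an extra round trip to check the source shard's overlay before applying a write.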
D
E
Yeah, I forgot that part about how the overlay gets transferred over, but I don't think the design implied blocking the target shard while you were doing that. I think as long as writes to the target shard first check that overlay and apply changes there first, then you can do it without blocking.
D
E
B
In the email I kind of sent out, I'd been kind of thinking about whether or not we could do something where we work in smaller chunks — rather than, again, whatever we decided to do at this level — could we have kind of a higher level: could we do sharding based not on a hashing scheme, but instead just, you know, based on the value of the name? So, I don't know.
B
E
D
B
B
D
So you're sorting at a gross level, but then at a fine level you're not sorted — but it simply reduces the problem. So when you're listing, you're looking at a small segment of the entire shard space, and likewise when you're resharding — well, actually, the resharding is now done via a splitting mechanism, as opposed to redoing everything, right?
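The value-of-the-name idea above amounts to range sharding: keep a small sorted list of boundary keys and binary-search it. A minimal sketch (names are illustrative, not Ceph code):

```python
import bisect

def shard_for(name, split_points):
    """Map an object name to a shard by lexical range rather than hash.

    `split_points` is the sorted list of boundary keys; shard 0 holds
    names below the first boundary, shard i names in
    [split_points[i-1], split_points[i]), and so on.  Ordered listing
    then means walking shards left to right, and a reshard only splits
    one range instead of rehashing everything.
    """
    return bisect.bisect_right(split_points, name)

splits = ["g", "p"]                  # 3 shards: [..g), [g..p), [p..)
print(shard_for("apple", splits))    # 0
print(shard_for("melon", splits))    # 1
print(shard_for("zebra", splits))    # 2
```

The metadata-size concern raised below follows directly: the whole scheme is the `split_points` list, so it has to stay small enough to load on every bucket access.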
B
B
Exactly, yeah. That thing gets locked, and those shards get split that are participating in that pool, or that grouping, and that can happen via whatever mechanism we choose to do that splitting — I don't know what it is, maybe it's the same one we use now, or maybe something more advanced that we come up with — but the idea is that then you don't lock the entire bucket to do it.
D
Right. So one issue that we face is that the metadata to describe a bucket's indexing scheme can't be too large. So that would definitely work if we had maybe hundreds of those segments, possibly even thousands of those segments, but whenever you get to the bucket you're going to have to load in what those splits are from metadata, and we don't want that to be too large, because otherwise that metadata starts dominating certain processes.
D
So that's something we want to think through on a scheme like that. But, you know, it sounds like there could be a middle ground: we're not segmenting everything, because within those segments we're using an existing, say, hashing scheme. So that's something to think about, yeah.
B
B
D
...to allow for very large object names. And so for splitting on object names — you know, these things may be relatively large; they may be representing a directory structure with delimiters in there, and we're splitting at arbitrary points, presumably. So the split point could actually be represented by a thousand characters — a thousand bytes, right — if we're splitting between /XYZ/ABC/blah-blah-blah... unless you had some extra logic in there to try and find optimal splitting points so that the names would be shorter.
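That "extra logic to find optimal splitting points" resembles suffix truncation in B-tree separators: instead of storing a full kilobyte-long name as a boundary, keep only the shortest prefix of the upper key that still sorts strictly above the lower key. A speculative sketch, assuming names compare lexicographically:

```python
def short_split_key(a, b):
    """Pick a short boundary key s with a < s <= b.

    Any such s correctly separates the two ranges, so we can store the
    shortest prefix of `b` that still sorts strictly above `a`, rather
    than all of `b`.  Assumes a < b lexicographically.
    """
    for i in range(1, len(b) + 1):
        cand = b[:i]
        if cand > a:
            return cand
    return b

# A one-byte boundary suffices between these two deep paths:
print(short_split_key("a/very/long/path/001", "b/other/long/path/999"))  # b
```

When adjacent names share a long common prefix the boundary stays long, but for the clustered, gap-filled namespaces described later the savings can be large.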
D
D
B
C
E
C
D
Yeah, so we're trying to solve, like, two big problems in RGW with these various proposals. One is that currently ordered listing is a very expensive operation, because the hashing scheme entirely jumbles up the order once it reaches the bucket index. In order to return things in an ordered fashion, we have to consult every single shard — even if there are 65,000 shards, we'll consult every single one and say: give us your first thousand, or give us your first hundred.
D
So the hashing scheme for bucket indexing is ideal — it's very fast, it gets us to the right shard immediately for every operation — with the exception of bucket listing, which is ordered. And so we're trying to solve that problem, and in parallel trying to solve the problem of not locking the whole bucket for the reshard process. So that's what we're trying to do here.
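The cost being described is a k-way merge: with hashed sharding, any shard may hold the next key in order, so an ordered listing has to pull the head of every shard's sorted listing and merge. A minimal illustration (not RGW code):

```python
import heapq

def ordered_listing(shards, count):
    """Merge per-shard sorted listings into a globally ordered listing.

    Every shard must be consulted, because under hashed sharding the
    next key in order could live in any of them -- which is why this
    gets expensive at tens of thousands of shards.
    """
    merged = heapq.merge(*shards)      # each shard listing is sorted
    return [key for _, key in zip(range(count), merged)]

shards = [["b", "q"], ["a", "z"], ["c", "d"]]
print(ordered_listing(shards, 4))      # ['a', 'b', 'c', 'd']
```

With lexical range sharding, by contrast, the first `count` keys come from the leftmost shard(s) only, so the fan-out disappears.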
D
E
So yeah, I just wanted to bring up the MDS, because they have the dirfrag tree thing, which is a very compact way to represent splits — but it splits over the hash space, so I don't think it really applies here.
B
D
Yeah, so the idea is that we would probably use something like your original scheme, where we're taking a gross cut at the namespace — you know, maybe dividing it into segments — because there are very likely gaps in the namespace, so there's, like, a cluster here with a similar prefix, a cluster there with a similar prefix, and so forth.
D
You just have to maintain a set of coefficients for various terms within that sequence, and so that would give us the compact representation and a fairly quick calculation. The expensive part is determining how to segment those things up and how to determine what the coefficients are for the Fourier transform — but that actually can be a slow process; we do what we can. You can spend spare cycles figuring out what the next sharding scheme is.
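One way to read the coefficient idea — this is a speculative sketch of the proposal, not anything in Ceph — is to fit a truncated Fourier series to the empirical distribution of key positions, store only the few coefficients as the bucket metadata, and derive equal-weight shard boundaries from the approximated CDF. Fitting can be slow (done offline with spare cycles); lookup is a handful of sine evaluations.

```python
import math

def fit_fourier_cdf(positions, n_terms):
    """Fit a truncated sine series to CDF(x) - x for positions in [0, 1).

    Only `n_terms` coefficients need storing -- the compact
    representation -- while the slow fitting runs in the background.
    """
    xs = sorted(positions)
    n = len(xs)
    coeffs = []
    for k in range(1, n_terms + 1):
        c = sum(((i + 1) / n - x) * math.sin(2 * math.pi * k * x)
                for i, x in enumerate(xs)) * 2 / n
        coeffs.append(c)
    return coeffs

def cdf_approx(x, coeffs):
    """Approximate fraction of keys sorting below position x."""
    return x + sum(c * math.sin(2 * math.pi * (k + 1) * x)
                   for k, c in enumerate(coeffs))

def shard_of(x, coeffs, n_shards):
    """Fast lookup: equal-weight shard from the approximate CDF."""
    return min(int(cdf_approx(x, coeffs) * n_shards), n_shards - 1)
```

Mapping an object name to a position in [0, 1) (e.g. from its leading bytes) is left out here; as noted below, a Fourier basis is only one candidate among several integrable functions one might fit.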
D
...going to be what might be optimal for it. And then, once we decide, you know, then we can implement it, and that part can be fast, as long as the lookup is also fast. That's kind of the basic idea: it takes some of the very ideas that you proposed, Mark, and then tries to figure out a nice way to maintain the ordering within the segments through a compact representation.
D
Well, one nice thing is that if we ever do a reshard, we could do the resharding in lexical order of the object names, and we would only necessarily have to lock the portion of that namespace that we're currently working on — because before you reshard and after you reshard, the ordering of all the object names is going to be the same. So we can, you know, lock down A through J.
D
You know, and K through Z doesn't have to be locked at all while we're doing the resharding and re-splitting of that piece of the segment. That makes sense. Okay, so — okay, the thing is, with hash-space sharding, you know, there's no real mapping between the old sharding scheme and the new sharding scheme, I think.
D
E
B
E
C
B
B
E
E
D
So I worked on it a lot, like, in 2018; then a bunch of other stuff came up, so it kind of went on the back burner for a bit, and now I'm bringing it back up again. So I've been collecting, like, a corpus — or rather, corpora; apparently the plural of corpus is corpora — of object names that are actually used in the real world. I've asked people who posted things, like Ceph users saying, you know, "Why does bucket listing take this long?" — I said:
D
"Can you send me a list of your object names?" — so I can, you know, start playing with things and writing some code to do the segmenting calculation. And I think, ultimately — I don't know whether Fourier transforms are the correct integrable function to use, or we might have, like, a small menu of them and see which one fits the data the best.
D
So that's kind of where I am right now: trying to figure out — we can spend much time figuring out what the optimal thing is for the current list of object names, as long as, once it's determined, you know, we can shard quickly and figure out which shard to go to, given an object, quickly. So anyway, that's where I am now in that process. Cool.
D
B
All right! Well, then, if you guys have to leave us, let's wrap that up. The only other things I had here are BlueStore 4k min_alloc sizes and the small-object-in-OMAP stuff. So Igor has been doing a lot of work looking at our performance on hard drives for 4k min_alloc sizes, and — Igor, you were saying that you're rerunning some tests to get the write cache out of the picture — but it sounds like it's still faster with 4k min_alloc sizes generally, versus 64k.
F
B
B
B
K
A good question. I guess one aspect to consider is also the impact on especially large stores — like that comment you may add in there. We've seen particular problems with very large databases, and so I'm kind of worried about stuffing more things into the database, potentially, and perhaps hitting those problems sooner. I think that comes with a lot of testing — with min_alloc sizes, too, and large database sizes, sure, including the sharding stuff from Adam, too.
K
K
B
K
B
B
F
B
B
The tricky part, though, is that, you know, depending on what you're doing, an automated test might not pick it up. At least in my opinion, for anything that's really targeted like this, you're usually better off writing a test that's designed to hit the cases where you'd expect to see a performance regression. But I don't know — this is just my take on it. I think the automated testing is good for catching kind of high-level stuff, but maybe not, you know, specific things.
F
Well, what I'd like to have is a sort of library of access patterns that we expect are commonly used, and then benchmark against those. Sure.
B
There was a case where we did something kind of like that. There was a customer that had two different brands of SSDs — one was showing poor performance and the other wasn't — and we were able to record with blktrace the actual I/O pattern going to the disk, and then replay it on one brand of NVMe drive and then the other brand of NVMe drive, and showcase that, even if you took the workloads from each and swapped them, the one still was bad.
B
Maybe we could do something like that to be able to test things — you know, different workloads, I guess, hitting some of the underlying layers. We could do it at that layer; we could do it at the librados layer, maybe — a workload hitting an OSD or something.
F
Well, anyway, I understand it's not a question we can answer immediately — just maybe something to think about in the future, there.
F
B
Cool. I know you're on vacation for the next two weeks, so I'll try to see if I can replicate some of the stuff you were doing, and see if we can get a PR going and merge it. I'm pretty comfortable about how it looks right now with NVMe drives, and your testing is already showing, you know, that even with the write cache disabled, it is still faster. So I'd feel pretty comfortable — I'll try to see if I can get some replication of your results, just to make sure.
F
B
The only other thing quickly here I want to bring up is the small-object-in-OMAP stuff. Igor, you've got a PR that does that, and I guess in-the-onode is what this other one does. Any thoughts on doing this? I have some concerns in general with it, but I think, Josh, you mentioned you had concerns too with the databases — which is my concern, too. Any general thoughts on this?
F
Well, what I'd like to say is that we probably shouldn't concentrate on performance only in this case. This feature looks pretty beneficial for users who don't need much performance but care about the space on the drive. So even if we bring some performance degradation, it might still be useful — and so it might be optional, so one can enable it when space is more important than performance.
K
B
C
K
K
B
F
C
F
B
K
G
F
Yeah, that's a good starting point. But, well, again, as I said, having this feature to keep small objects in RocksDB as an optional feature — it probably has no drawbacks. If you don't need it, you won't enable it; we can leave it disabled by default, but all those users who want it, please enable it, and...