From YouTube: Ceph Performance Meeting 2021-08-19
A
Good morning, good morning, folks. I think we've got the core folks just wrapping up now.
A
Oh yeah, no problem. You know, I was thinking about it a little bit more this morning, and one advantage to the approach that you guys are proposing would be that you can just dynamically increase the size by adding new OSDs to that pool. So there is a nice flexibility with that.
A
But yeah, I'm still worried about the total number of pools, given some of the other feedback I've been hearing: we've been getting requests to dramatically increase the number of pools that can be created.
B
Yeah, I do agree. Some other discussions have been about making each storage class have two different pools, which would double it. But the suggestion I had there was just one extra pool for the heads, which I think could be pretty much global too.
A
You guys, I don't know what real people are doing. Do real customers or real users create multiple RGW pools, or is it usually just the default ones?
B
Generally you have one pool that's faster media and one that's slower media, and those storage classes give clients a way to drive objects to one or the other.
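(For context, a minimal sketch of how an S3 client steers an object to a storage class, and therefore to whichever pool backs it on the RGW side. This assumes the AWS SDK for C++ pointed at a radosgw endpoint; the bucket name, object key, and the use of STANDARD_IA as the "slower media" class are illustrative placeholders, not part of the meeting discussion.)

```cpp
#include <aws/core/Aws.h>
#include <aws/core/utils/memory/stl/AWSStringStream.h>
#include <aws/s3/S3Client.h>
#include <aws/s3/model/PutObjectRequest.h>

int main() {
    Aws::SDKOptions options;
    Aws::InitAPI(options);
    {
        // Assumes endpoint/credentials are configured via environment or config file.
        Aws::S3::S3Client s3;

        Aws::S3::Model::PutObjectRequest req;
        req.SetBucket("my-bucket");            // hypothetical bucket
        req.SetKey("logs/2021-08-19.txt");     // hypothetical key
        // The storage class maps to an RGW placement target, i.e. a different data pool.
        req.SetStorageClass(Aws::S3::Model::StorageClass::STANDARD_IA);

        auto body = Aws::MakeShared<Aws::StringStream>("put-body");
        *body << "example payload";
        req.SetBody(body);

        auto outcome = s3.PutObject(req);      // outcome.IsSuccess() reports the result
        (void)outcome;
    }
    Aws::ShutdownAPI(options);
    return 0;
}
```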
B
But there are definitely newer use cases that treat a zone as something you can pop up several of, and that will definitely run into an issue with pool counts.
C
What we have in addition: it's not a given that we have only a single realm with a single zone group and zone; we may have multiple realms and multiple zone groups, and that might be for performance reasons or for logical structure and permissions. So having only a single zone per cluster is rather unnatural in our customer environments.
B
Right, that's one of the design goals: if it's being shared on a single cluster, potentially you could share pools between the zones using namespaces within the pools, instead of giving each zone its own separate set of pools.
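(To make the namespace idea concrete, here is a minimal librados sketch, not RGW's actual internals, showing two logical zones sharing one data pool while keeping their objects separated by RADOS namespace; the pool and namespace names are made up for illustration.)

```cpp
#include <rados/librados.hpp>
#include <iostream>

int main() {
    librados::Rados cluster;
    // Assumes a reachable cluster with ceph.conf and an admin keyring on this host.
    cluster.init("admin");
    cluster.conf_read_file(nullptr);
    if (cluster.connect() < 0) {
        std::cerr << "failed to connect to cluster\n";
        return 1;
    }

    librados::IoCtx ioctx;
    // One shared data pool (hypothetical name) instead of one pool per zone.
    cluster.ioctx_create("shared.rgw.buckets.data", ioctx);

    // Objects written under different namespaces never collide, even with the same name.
    ioctx.set_namespace("zone-a");
    librados::bufferlist a;
    a.append("payload for zone a");
    ioctx.write_full("obj.0001", a);

    ioctx.set_namespace("zone-b");
    librados::bufferlist b;
    b.append("payload for zone b");
    ioctx.write_full("obj.0001", b);

    ioctx.close();
    cluster.shutdown();
    return 0;
}
```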
C
Well, yes, it could, but the challenge sometimes for us is that you have different requirements, and then you need to separate those. Otherwise we would go with the fastest requirement that you have and roll it out for everything, and that might not be feasible for highly structured environments where you have different needs.
C
That is one of the things we see; the other thing is even within a single pool. If you have shared access from different buckets, putting different indexes in it is not the best for performance, so you can get better performance if you separate the pools per zone or per bucket, so that you have really distinct pools created per zone, or for whatever structure you use.
C
Usually we see that customers create large buckets, single buckets with billions of objects, and those perform rather well, but separating the data across different buckets is one way to get a better performance distribution if those buckets use different pools.
C
Otherwise
you
have
only
the
structure
and
the
buckets
and
you
can
separate
the
index
lower
the
the
objects
per
bucket,
but
this
gives
your
half
demanded
journey.
A
So when you have buckets that are that full, just to make sure I understand: you only see a performance advantage from breaking it across multiple different buckets if those buckets are individually on separate pools?
C
It's not only in that case; you just get better performance if you then, in addition, use different pools.
A
Is it more PGs in that case, or the same number of PGs?
C
You could reduce the number of PGs per pool in that case, but that's not what we do. Usually we go with the same amount and end up with more PGs overall this way.
A
All right, well, I think we've got everybody from core in, so I'll quickly just dive into the pull requests here and get them over with, and then we can figure out the rest; I think there might be a couple of things to talk about. Okay, the only new one I've got on my list is one from Kefu that is changing the RocksDB cache to work with an update for RocksDB.
A
I
need
to
review
that,
but
that's
on
my
to-do
list
hopefully
soon
closed
a
bunch
of
random
stuff.
Here.
A
Oh, the one from booth for the CMake build type: yeah, that merged, that's fine. It was just a fix for a regression that was introduced.
A
The
make
deferred
rights
less
aggressive
for
large
rights
from
igor
that
also
incorporates
the
fix
from
adam
and
kind
of
supersedes.
The
stuff
I
was
doing
is
good.
It's
a
better
fix
so
that
merged
in
last
week.
That's
really
good!
That's
that
fixes
an
issue
that
we
have
in
pacific
already.
That's
really
good.
A
Then there's Adam's one that got incorporated into your fix, the "finisher give up the CPU automatically" one; that just got closed. I don't think there was much additional discussion on that one.
A
Yeah, I don't know. Anyway, moving on to updated PRs this week.
A
The tracing PR: Deepika is reviewing that now.
A
More review, I guess, of this OSD compression bypass; it looks like Casey has been on that, and you need to review it. Oh, and then my RGW cache sharding PR. Thank you, Casey, for taking a look at it. I did want to ask you, though: what do you think about the locking? We talked about it a little bit last week, but I didn't have a good response ready off the cuff. What did you think of it?
B
Well, there are two different commits that are kind of independent there. Are you talking about the shared-lock one or the lock sharding one?
A
My cache sharding PR, the portion of it where I'm changing the mutexes from the shared mutex to just a normal, simple one.
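(As an aside, a minimal sketch of the general pattern being discussed: replacing one std::shared_mutex guarding a whole cache with plain std::mutex instances, one per shard, so that unrelated keys rarely contend. This is an illustration of the idea only, not the code in the PR; the shard count and cache types are made up.)

```cpp
#include <array>
#include <functional>
#include <mutex>
#include <string>
#include <unordered_map>

// Illustrative sharded cache: each shard has its own plain mutex, so two
// threads touching different shards never block each other, and the hot path
// avoids the overhead of a reader/writer lock.
class ShardedCache {
  static constexpr size_t kShards = 16;  // hypothetical shard count

  struct Shard {
    std::mutex lock;
    std::unordered_map<std::string, std::string> map;
  };
  std::array<Shard, kShards> shards_;

  Shard& shard_for(const std::string& key) {
    return shards_[std::hash<std::string>{}(key) % kShards];
  }

 public:
  void put(const std::string& key, const std::string& value) {
    Shard& s = shard_for(key);
    std::lock_guard<std::mutex> g(s.lock);  // plain mutex, short critical section
    s.map[key] = value;
  }

  bool get(const std::string& key, std::string* out) {
    Shard& s = shard_for(key);
    std::lock_guard<std::mutex> g(s.lock);
    auto it = s.map.find(key);
    if (it == s.map.end()) return false;
    *out = it->second;
    return true;
  }
};
```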
B
Yeah, I mean, I buy that, okay, but I would just like to see some numbers if possible.
B
Okay. I think these are things that came from downstream, and I haven't kept up with those, so I'm not sure exactly what the issues are.
A
I mean, this is all in upstream code, right? When you say they come from downstream, what do you mean by that?
A
No, this was when I was trying to test that caching PR: I ended up seeing this regression, did a bisect of master, and ended up identifying what they were. This is the spreadsheet here; basically, this is what I was seeing when I was going through and doing the bisection.
A
That one I couldn't revert easily, because so much is dependent on it now, but the other one, 35355, was fairly easy to revert, and reverting it seemed to bring performance back to the way it was before. That's the request_timeout for Beast one.
A
Might take a little while to find the old messages, but yeah. In any event, I think that would probably eliminate a lot of the problem, even just figuring out how to fix the second one.
A
So yeah, once we can figure that out, or even if you give me the go-ahead just to run with it reverted, maybe we could try testing the cache then. But it's kind of not worthwhile to do it when things are in such rough shape with those PRs in place.
B
Okay, well, we were discussing your cache locking PR, Mark Kogan and I, and he wanted to do some experiments on it, so I'll make sure that he's aware of these other ones. Okay.
A
Yeah, and I can split it up so that we break out the locking change and the sharding separately, because I don't think they're really dependent on each other. I included them together because they seemed kind of complementary, and my hope was that there wouldn't be as much benefit from the shared mutex once the sharding was in place.
A
It's quite possible that there isn't a significant benefit. Honestly, when I was trying to tease out whether there was one, that was when I ended up going down this whole rabbit hole of bisecting master and finding these other PRs that are causing problems.
A
Yeah, so obviously there's a bunch of data missing from here, but that was the road I was going down when I was trying to test this thing, before I ended up getting sidetracked on the other stuff.
A
But
yeah
it's
it's
quite
possible
that
in
the
long
run,
this
actually
doesn't
matter
that
much
like
the
majority
of
the
issue
that
I
was
trying
to
fix
might
have
been
like.
You
know,
not
cash,
even
though
it
looked
like
it
was
maybe
cash
initially,
I
mean
there
seemed
to
be
some
evidence
and
contention
when
I
was
doing
like
well
clock
profiling
on
rgw,
so
I
mean
there
might
still
be
some,
but
it's
you
know
a
little
fuzzy.
A
But the other stuff is definitely the bigger thing to tackle.
B
Yeah, also, a lot of your testing is with shard counts less than 512, but we have some notable customers running with a lot more than that, and with larger thread counts I would expect the contention to be a lot higher there.
A
Cool, well, yeah, I'm excited to hear if Mark Kogan sees anything interesting. He's really good at a lot of this, and he knows RGW very well, so that's exciting.
A
Cool, cool, yeah. I wish I could find the chat window, but I thought someone on your team was going to work on this; it's possible they got sidetracked with other stuff. There's lots to do.
A
All right, cool. I don't think there's a whole lot of other stuff with no movement, other than the RocksDB cache changes that Kefu had mentioned making to fix it. It reminds me that I still need to get the age-binning stuff in, and maybe this is a good time to do it, since we're going to be mucking around in our version of the RocksDB LRU cache anyway.
E
Just double-checking, Adam and Igor: we want to backport 39377 entirely for the next point release of Pacific?
E
Yeah, yeah, we have time. Okay, so this is the entire thing; there's no other one. I see a split of 42832 and 42831, though.
A
All right, cool! Well then, I guess we're all done with PRs.
A
So,
okay,
are
we?
Are
we
casey?
Are
we
done?
Do
you
think
on
the
topic
of
the
the
rgw
proposal
for
for
the
qlc
drives,
or
now
that
we
have
more
folks?
Was
there
anything
that
you
wanted
to
bring
up?
Regarding
that.
B
I think there's been good discussion on the mailing list, and I would want to make sure that the folks from Intel were here if we were going to continue the discussion. They've been coming to the RGW refactoring call, but I will invite them to come here next week, if that's all right.
A
Yeah, absolutely, that'd be great. Do you know if any of those folks are on the Open CAS team?
B
I don't think so. Anthony, who replied on that thread, is the one we've been working with primarily.
A
Sure. So there have been separate conversations going on with the Intel folks that work on Open CAS, because they've obviously open-sourced it now, and we've been trying to work with them to get it into shape where they might be able to submit it to the upstream kernel. I think right now it's not quite there yet, but potentially, if that's done, it would maybe be another option as a way to approach this.
A
I
do
think
one
way
or
another.
This
is
something
that
we're
going
to
have
to
deal
with
in
the
osd
on
some
level,
regardless
of
what
you
guys
end
up
doing
you
know,
if
you
want
to
try
to
piggyback
on
that,
or
you
know,
do
you
know
the
separate
pool
idea,
I
do
think
we're
going
to
have
to
figure
out
how
to
deal
with
these.
These
kinds
of
drives.
A
Adam and Igor, what would you guys think about being able to place objects on the fast device?
F
Existing objects?
A
Or maybe you just say that you want objects that are small to be put on the fast device; some way that we decide to use some portion of the fast device not always and only for RocksDB levels, but potentially for small objects as well.
A
The alternative would be that, from what I've heard talking to the Open CAS guys at Intel (and maybe dm-cache can do this too, I don't know), they maybe have the ability, when they put Open CAS in front of a block device, to pin a region so that it stays on the fast device behind Open CAS.
A
So maybe the first, I don't know, 20 gigabytes or something (it's probably definable): you could put things there and they would stay pinned on the fast device, and then the rest of the device they would use as a cache, floating things in and out of the fast device and potentially demoting them back to the slow device. Potentially we could use something like that to accomplish this as well.
F
I imagine the solution would need to somehow target the allocator, which would get extra info about whether the block should go to the fast device or whether slow is acceptable. Then the allocator would provide the new area, and maybe that way we could keep it contained: the only connection would be that the BlueStore code gets augmented with some retrieval of a fast/slow hint, and depending on that, it drives the allocator. Yeah.
F
Not stating that as a plan; I'm inventing this as I go, trying to adapt to a solution where one device presented through the block device interface has some regions that are fast and some regions that are mapped to some slow device, whether that's done at the actual device level or at some aggregate level. I'm now thinking: if such an architecture is presented, how could we adapt BlueStore to utilize it?
A
And
what
I
want
to
point
out,
too,
is
not
necessarily
always
about
fast
and
slow,
but
in
this
case
it's
about
like
huge
qlc
flash
devices
that
have
64k
allocation
strategies
and
if
we
do
64k
metallic
size
to
match
it,
then
we
have
we're
back
to
having
the
same
space
amplification
problem
that
we
used
to
have
before
igor's
excellent
work.
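(For a rough sense of scale, a small sketch of the space amplification being referred to; the object sizes here are made-up examples, not numbers from the meeting.)

```cpp
#include <cstdint>
#include <iostream>

int main() {
    const uint64_t alloc_unit = 64 * 1024;          // 64 KiB min_alloc_size matching the QLC drive
    const uint64_t object_sizes[] = {4096, 16384};  // hypothetical small RGW object sizes

    for (uint64_t size : object_sizes) {
        // Round the logical size up to a whole allocation unit.
        uint64_t allocated = ((size + alloc_unit - 1) / alloc_unit) * alloc_unit;
        double amplification = static_cast<double>(allocated) / static_cast<double>(size);
        std::cout << size << " B object -> " << allocated
                  << " B allocated, ~" << amplification << "x space amplification\n";
    }
    // e.g. a 4 KiB object occupies a full 64 KiB unit (16x); with a 4 KiB
    // min_alloc_size it would occupy only 4 KiB.
    return 0;
}
```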
A
That can be underneath us, so we don't necessarily need to worry about it: if you're never putting anything smaller than 64K on these QLC things, and it's all aligned properly, then you can still have a 4K min_alloc_size at the BlueStore layer. You'd just never have to worry about it, because you'd never be putting things smaller than that on these QLC devices.
C
Just as a quick comment on the idea of putting this in RocksDB: some weeks or months ago we had a discussion about perhaps eventually getting rid of RocksDB, and if we went this way we would nail ourselves down to RocksDB pretty much forever.
A
No,
no,
I
don't
know
yeah,
let's
not
put
this
in
rock
cp.
Let's
not
put
these
things
in
our
xdp.
I
I
agree
that
was,
I
think
we
made
the
right
decision
not
to
do
that.
What
I'm
talking
about,
though,
is
how
the
the
allocator
is,
as,
as
adam
said,
be
able
to
say.
Well,
I
have
some
space
on
this
device.
I
have
some
space
on
this
device.
I
have
an
object
coming
in.
That's
got
a
hint
saying
that
this
is
small.
It's
going
to
remain
small
like
for
this,
this.
A
These
objects
that
they're
talking
about
for
rgw
and
therefore
I
want
the
allocator
to
just
to
find
and
put
this
in
some
region
on
the
fast
device
or
and
by
fast
device.
In
this
case,
it's
actually
a
device
that
can
handle.
You
know
small
location
sizes
under
the
hood,
but
if
it's
gonna
be
a
big
chunk
of
data,
then
I'm
gonna
put
it
over
on
this
other
thing,.
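(A minimal sketch of the kind of hint-driven placement being described. The enum, interface, device names, and size cutoffs here are all hypothetical and purely illustrative; this is not an existing BlueStore API.)

```cpp
#include <cstdint>
#include <optional>
#include <string>
#include <vector>

// Hypothetical placement hint supplied by the caller, e.g. derived from an
// RGW small-object flag or the expected object size.
enum class AllocHint { SmallObject, BulkData };

// Toy model of one backing device with an allocation granularity.
struct Device {
    std::string name;
    uint64_t alloc_unit;   // bytes
    uint64_t free_bytes;
};

// Illustrative chooser: small objects prefer the device with the small
// allocation unit, bulk data prefers the large-granularity (QLC-like) device,
// with a spillover fallback if the preferred device is out of space.
std::optional<size_t> choose_device(const std::vector<Device>& devs,
                                    AllocHint hint, uint64_t size) {
    const uint64_t small_unit = 4096, large_unit = 64 * 1024;
    const uint64_t want = (hint == AllocHint::SmallObject) ? small_unit : large_unit;

    std::optional<size_t> fallback;
    for (size_t i = 0; i < devs.size(); ++i) {
        if (devs[i].free_bytes < size) continue;
        if (devs[i].alloc_unit == want) return i;  // preferred match
        if (!fallback) fallback = i;               // spillover candidate
    }
    return fallback;
}

int main() {
    std::vector<Device> devs = {
        {"fast-small-alloc", 4096, 20ull << 30},  // e.g. an Optane-class partition
        {"qlc-bulk", 64 * 1024, 2ull << 40},      // large-allocation-unit QLC
    };
    auto idx = choose_device(devs, AllocHint::SmallObject, 8 * 1024);
    (void)idx;  // in a real system this would feed the allocator for that device
    return 0;
}
```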
G
Well, I think this problem should be split into two parts. The first one is to use two different (or multiple) devices under BlueStore, similarly to what we have in BlueFS right now; this is doable, maybe not trivial.
A
So
so
I
don't
know
if
you
saw
it
igor,
but
the
the
alternative
route
that
they're
they're
considering
going
is,
is
far
simpler
from
our
perspective,
where
they
would
just
have
multiple
pools
and
they
would
store
the
data
that
they
know
is
going
to
be
small.
These
small
objects
for
rgw
would
go
in.
The
pool
is
backed
by
these.
You
know
obtain
or
whatever
devices,
and
then
the
bulk
big
data
would
go
into
the
pool,
that's
backed
by
a
qlc.
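(A small librados sketch of that multi-pool idea, choosing the target pool by object size on the client side; the pool names and the size cutoff are made-up placeholders, not what RGW actually does.)

```cpp
#include <rados/librados.hpp>
#include <string>

// Illustrative only: route small objects to a pool backed by small-allocation
// media and large objects to a QLC-backed pool.
static const char* pool_for_size(uint64_t size) {
    const uint64_t cutoff = 64 * 1024;           // hypothetical cutoff
    return size < cutoff ? "rgw.small.objects"   // e.g. Optane-backed pool
                         : "rgw.bulk.data";      // QLC-backed pool
}

int main() {
    librados::Rados cluster;
    cluster.init("admin");               // assumes local ceph.conf and admin keyring
    cluster.conf_read_file(nullptr);
    if (cluster.connect() < 0) return 1;

    librados::bufferlist bl;
    bl.append(std::string(4096, 'x'));   // a 4 KiB example object

    librados::IoCtx ioctx;
    cluster.ioctx_create(pool_for_size(bl.length()), ioctx);
    ioctx.write_full("example-object", bl);

    ioctx.close();
    cluster.shutdown();
    return 0;
}
```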
F
I mean, making it at the data-architecture level, at the deployment level, is preferable, because if we implement this at the BlueStore level, then we have to tackle all the issues Igor mentioned: what if we are changing a size, or for small writes we split a block where the object had previously been written contiguously because some hint said it was a large one, and then we are overwriting parts of it with small writes. There are a lot of decisions to take regarding how to handle such situations.
G
Well, from the design point of view, the approach where BlueStore supports efficient ways to keep both small and large files is preferable to me, but it's definitely not a simple task.
A
I've wanted this for a long time; it's kind of a dream: to have BlueStore handle pools of devices with different characteristics, and then have it make better decisions about where to place data based on the characteristics of the devices it manages.
A
So, Igor, right now, how generic is the code that we use for moving the RocksDB SST data back and forth across the fast device and the slow device? Is that something we can extend, or is it all very...?
C
Once we have such a mechanism in place, what do we want to do when we get more objects than we have space left? And how could we distinguish between those pinned objects, whether they are still viable to live on the fast device, versus just getting drained out like in a caching mode?
G
Well, at least we have some similar methods in BlueFS, so more or less it's clear what we should have and how to handle that. The second part is more scary to me, since it requires some pretty significant metadata changes to maintain multiple objects within a single disk block.
G
We
need
that
because
we
are
talking
about
64k,
optimal
block
size
for
plc
drives,
and
we
are
talking
about
small
files,
small
objects,
which
should
be
much
less
than
the
64k
block
pricing.
C
If
you
would
go
for
just
a
full,
fully
occupied
fast
device
space
and
your
cluster
is
nearly
full
and
then
you
start
to
spill
over
heavily
on
those
qsc
devices.
You
might
have
a
different
problem
than
that
you
need
to
handle
in
the
wild
it.
Just
you
see,
then
quickly
getting
up
capacity
that
that
is
not
seen
before.
Just
you
cannot
react,
perhaps
quickly
enough.
A
It's kind of an inherent problem with these devices, right? They want a big allocation size, and if you have lots of small data, small objects, then either you're going to need to figure out a way to mangle them into the same 64K chunk, or you need to eat a lot of space by putting each of them into its own allocation unit.
A
Your
choice
with
these
devices,
if
you
have
a
64k,
you
know
allocation
size
on
it,
either
you're
going
to
spend
a
lot
of
extra
work,
trying
to
mangle
and
put
lots
of
little
objects
into
one
chunk
to
store
on
the
device
or
you're
going
to
waste
lots
of
space.
Writing
these
out
to
the
device.
It's
just
kind
of
an
inherent
limitation
of
these
devices
right.
A
It
looks
like
that
so
yeah
I
mean
no
matter
what
we
do.
I
think
we're
going
to
have
to
we're
going
to
deal
with
this
one
way
or
another.
I
don't
think
there's
any
easy
solution
for
it,
except
maybe
the
only
easy
solution
is
to
not
put
small
small
data
on
it.
F
Yes, but I don't think that's a very wise solution, because, as Matthias pointed out, we would definitely see some threshold where we start to spill over to the large-block device, and then that space may get eaten very quickly and there will be a lot of problems around that. So possibly we would need to teach BlueStore how to reuse the same allocation unit for different objects, as Igor mentioned before.
F
Yeah, I don't think that's a good idea; I mean, it's offloading the problem to the RGW guys.
C
So today we have those QLC devices. Just as we had to change the allocation size for HDDs in the past, what do we expect in the future? How will the devices evolve; is there specialized hardware or new hardware coming up?
C
Would
it
be
viable
to
look
into
just
merging
smaller
objects
into
bigger
location
sizes,
just
just
in
a
way
looking
forward
into
the
future?
So,
however,
whoever
comes
up
with
new
ideas
from
art
for
vendors.
C
Yes, so blocked writes are something that I really wanted to avoid. I would rather eat the capacity amplification, and the risk of running out of space quickly, than have blocked writes for no reason that the user can see.
G
Well,
well,
I
think
that's
actually
the
depends
on
the
use
cases.
So
if
that's
primary
for
backup
purposes,
then
these
performance
penalties
are
not
that
important,
but
if
with
some
live
system,
when
you
care
more
about
performance
and
not
much
about
the
space
amplification,
so
I
I
believe
there
are
no
general
general
solution
here,
so
both
both
might
be
useful
and
that's
too
much
of
access
parking.
F
Yes, but at least now we have the ability to monitor how full that pool is at the admin level; if we hide it inside BlueStore, that information will no longer be so easily available.
A
But I thought that we did now export how full the fast device in BlueStore is. Maybe I'm wrong, though.
F
Yes,
we
can
read
that,
but
that's
the
fast
device
in
blue
fs,
it's
only
used
for
blue
fs
blue
store
block
device,
does
not
use
it.
A
Yeah,
I
agree
I
just
I
thought
that
got
exposed
out
through,
like
the
the
the
json
interface
to
the
dashboard.
Let
me
run.
F
Yes,
it
is,
but
I
mean
if
you
want
to
reuse,
that
device
for
a
side
channel
for
small
blocked
small
objects,
then
you
can
read
it.
That's
true.
If
that's
your
intention,
but
I'm
not,
I
wasn't
thinking
it
that
way.
I
was
more
thinking
that
block
device
booster
block
device
is
being
expanded
by
something
else,
not
that
blue
fs
and
block
device
have
to
share
that
special
device,
because
that
device
is
also
different
characteristics.
We
we
don't
care
for
small
block
for
blue
fs,
but
we
just
care
for
it
being
fast.
A
We're past the hour anyway, so nice talking to you, guys. Really good discussion.