From YouTube: Ceph Performance Meeting 2022-11-03
A
Lots of new faces here, so feel free to stick around, folks, if you want to. Otherwise, well, I don't know that we have a ton today; we'll see. The usual suspects are not here, I don't see Adam or Igor, but today I thought we would go over a couple of different things.
A
Second, the blog article about the QEMU/KVM performance gap got posted, so that is here if folks are interested. The gist of it is that we had a user that was really interested in seeing how fast librbd could work with QEMU/KVM. They wanted to know, on a fast cluster, just what the fastest numbers we had internally were, and we didn't have anything recent; we had some really old stuff, but nothing really recent. So I went through and kind of made a guide for looking at QEMU/KVM performance and some different tuning parameters that help. The gist of it is that we could do 123,000 16K random reads from a single VM, and about 65,000-ish 16K random write IOPS.

When everything was said and done, if you enable encryption, messenger version 2 encryption, those numbers do go down, on the read side specifically. I think we ended up somewhere around 80,000 or 87,000 or something, so it was like maybe a 30% performance hit.
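For reference, a quick back-of-the-envelope check of that hit, using the 123,000 unencrypted and 87,000 encrypted read IOPS figures quoted above:

```python
# Read IOPS before and after enabling msgr v2 encryption (figures from above).
plain_iops = 123_000
encrypted_iops = 87_000
drop = (plain_iops - encrypted_iops) / plain_iops
print(f"{drop:.0%}")  # roughly the ~30% hit mentioned
```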
A
Interestingly, all of it seems to be in the messenger threads on the client side. In both cases they were running at 100%, just doing work, getting data to and from the client, and when you are doing encryption, that's happening in the messenger threads as well.
A
So maybe something like 15%, maybe it was like 15 to 20%, of the time in each messenger thread was being spent in OpenSSL decrypt functions, and that was using AES-NI, like, fully utilizing the hardware capability with these CPUs. It turns out that when we are looking at message frames and segments specifically, we actually iterate over buffer entries and bufferlists. I don't know how many per message we were doing, so I talked to Ilya.
A
It sounds like we definitely are encrypting a small header individually, and then potentially some overflow from that header, along with the payload and whatever's in the payload. So that's probably slowing us down, specifically regarding decryption. I'm not sure if there's anything we can do about it; Ilya seemed to think that we're kind of limited in terms of the way that we construct and deconstruct messages over the wire and do over-the-wire encryption and decryption. But yeah, we'll see; so there's that, I don't know.
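The per-segment overhead described above can be sketched as a toy model (this is not the actual Ceph msgr v2 code; call counts and padding are illustrative): every cipher invocation pays its own setup cost and pads to the AES block size, so encrypting each bufferlist entry separately costs more than encrypting one coalesced frame.

```python
# Toy model of per-segment vs. coalesced frame encryption.
BLOCK = 16  # AES block size in bytes

def pad_len(n, block=BLOCK):
    """Bytes occupied after padding n up to a multiple of the block size."""
    return ((n + block - 1) // block) * block

def encrypt_frame(segments):
    """Cost when every segment (e.g. each bufferlist entry) is
    encrypted individually: returns (cipher_calls, bytes_processed)."""
    calls = len(segments)
    processed = sum(pad_len(len(s)) for s in segments)
    return calls, processed

def encrypt_frame_batched(segments):
    """Same frame, coalesced into one contiguous buffer first."""
    total = sum(len(s) for s in segments)
    return 1, pad_len(total)

# A frame with a small header plus a fragmented 16,000-byte payload:
frame = [b"\x00" * 32] + [b"\x00" * 100] * 160
print(encrypt_frame(frame))          # many calls, padding paid per segment
print(encrypt_frame_batched(frame))  # one call, padding paid once
```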
A
All right, well, then, moving on. Igor, I see that you're here now. I want to mention, or talk about, some of the work that Adam was doing with snapshots. It doesn't look like he's here, but we've been doing some further testing on it, and it looks like the new version is maybe a little bit slower than it was previously.
A
So, for everyone else here that hasn't been coming to the performance meetings: we're trying to figure out a performance issue that we're seeing with RBD mirror. A couple of different people, Paul and Gabi, have been looking at this quite a bit, and we kind of determined that ultimately it's coming from snapshots on the OSD side. When you take multiple snapshots and are doing snap trimming work, we see that OSD usage tends to shoot up, and it gets worse over time.
A
The more you do, the worse it gets. When we looked at this on the OSD side, what we were seeing is that the more you take snapshots of things and do snap trimming, you know, get rid of old snapshots, while doing kind of random writes to the underlying data, the more you see that objects get more and more fragmented: they have more shared blobs and they have more extents.
A
Adam started looking at whether or not shared blobs were primarily responsible, while I kind of went down the route of looking at fragmentation and extents. This first tab in this spreadsheet is kind of looking at the results of some of Adam's testing and work; he tried to basically get rid of shared blobs in their entirety.
A
He still has one that's used for tracking purposes, but we don't have lots of shared blobs per object; instead, there's just one now. The initial versions of his PR, or his branch, looked really, really good; that's kind of the yellow and, sorry, the yellow and maybe red, I think, lines there. But it wasn't passing all tests, and when we tried it with RBD mirror it all just broke. So he worked on fixing some of those problems, but unfortunately now we're seeing that the performance isn't nearly as good as it looked initially. It's actually a higher base level of CPU utilization than even just master, or main, is now, but the peaks are lower.
A
So it's a mixed bag, but it's not looking nearly as nice as the red and yellow lines you see there, where we went down to almost nothing, no overhead for doing snapshot creation. So that was what Adam did. What I ended up doing: I took kind of a different approach and looked at just trying to basically defragment objects when they become too fragmented as we take snapshots. The upside here is especially on hard drives.
A
It makes the overhead of taking snapshots much lower, because there are fewer extents to iterate over and fewer shared blobs in general, once you've defragmented the object. The downside is that the defragmentation process means that you are basically rewriting the object out in a single new extent every time that you end up with an object that's too fragmented, so there's a fair amount of additional write amplification on the device.
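The tradeoff just described, fewer extents per object in exchange for extra device writes, can be sketched roughly like this (the threshold and sizes are made up for illustration; this is not the actual BlueStore logic):

```python
# Hypothetical model of defrag-on-snapshot write amplification.
def should_defragment(extent_count, threshold=32):
    """Rewrite the object as one extent once it is 'too fragmented'."""
    return extent_count >= threshold

def apply_write(extents, write_bytes, object_bytes, threshold=32):
    """Model one small overwrite: it adds an extent, and when the object
    crosses the threshold the whole object is rewritten as one extent.
    Returns (new_extent_count, device_bytes_written)."""
    extents += 1
    if should_defragment(extents, threshold):
        # Defrag: rewrite the full object out as a single new extent.
        return 1, write_bytes + object_bytes
    return extents, write_bytes

extents, amplified = 1, 0
for _ in range(100):            # 100 random 4 KiB overwrites of a 4 MiB object
    extents, written = apply_write(extents, 4096, 4 << 20)
    amplified += written
print(extents, amplified)       # extents stay bounded; device writes grow
```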
A
You can kind of see that it helps; maybe not as much as Adam's initial work did, but it might actually be a little bit better than the fixed version of Adam's PR. We'll see. I'm hoping that Adam can maybe figure out what was going on, and we might be able to make his work look a little better, but for now that's just kind of where we're at.
A
The downside with my work: if you look at the second tab, with BS clone defrag, the second graph down, there's a disk usage graph, and that kind of shows the oscillation, where space amplification is increasing due to the fact that we're doing these extra writes and creations, and now snapshots may have data that's not associated with that new object. So we're taking a little bit more space; it's no longer just the extra space of the snapshot.
A
Sometimes there is a little bit of duplicated data between the snapshot and the current version that you have, but since the snapshot is immutable anyway, it's more space usage, but they shouldn't get out of sync with each other or anything, unless they're supposed to. Maybe the bigger issue is that, if you scroll down to the device write throughput graph, you can kind of see that we've got this write throughput that's hovering around 100 megabytes per second. That's entirely due to work being done during snap trimming to basically...
A
Defragment objects. It's a fair amount of extra write overhead, but on hard drives you kind of want to defragment objects anyway when they get super, super fragmented. So, you know, we can tweak some of the parameters to lower that, but that will also mean that we'll be closer to the original CPU usage. So, anyway.
A
This is what Adam and I have been working on. I kind of wanted to just point out that the work right now with getting rid of shared blobs maybe isn't quite as good as we originally thought it was.
A
I'm not hearing you very well, Igor. I don't know if it's on my end or on your end.
B
A
"Honestly, I don't have much to say," okay. Yeah, I could kind of make it out on that side, so, you know, good enough.
A
One of the things that we are kind of talking about more, though, is maybe some of the big changes we've been discussing in the past couple of weeks regarding BlueStore's write path. I wanted to kind of bring up again the idea of, you know, maybe some kind of BlueStore 2, or, you know, a BlueStore branch, where we could start really making some of these big changes.
A
Do you think that's still the way you'd prefer to go if we tried doing some of these things, like making kind of big changes with shared blobs or, you know, deferred writes, these kinds of things?
D
A
One of the things that's come up again has been kind of how to deal with hierarchical storage inside the OSD, inside BlueStore, I guess, specifically, and maybe even beyond, you know, hard drives: if you have typical hard drives, and even slower devices behind that. I've been thinking a lot about deferred writes and kind of how to deal with this hierarchy of devices that can now exist in the market.
A
That's kind of one of the directions I'd really like to see us explore more: whether or not we can make the write path simpler, but then, beyond that, how we can better utilize all these different kinds of devices that are available.
A
I think, to some degree, it ties in a little bit with your write-ahead log work as well, right? We've seen benefits already, even just with your prototype, and fairly significant ones. I think that's one.
D
B
D
No, well, we had that conversation with Adam as well, and it looks like we might want a design for a general-purpose logging thing, which might support... what else? The write-ahead log, maybe the deferred writes log, what else? So it looks like there are multiple use cases for this logging pattern.
B
A
Speaking of which, actually, the RGW team right now is working on trying to prototype using objects for bucket indexes, rather than omap, and they're having, I think, some troubles with trying to do small appends to objects and achieving high performance with it. So when you talk about logging, that's one of the first things that comes to mind for me.
D
Yeah, unfortunately, in this respect it looks like, right, the write-ahead log already lags a bit, so we might want to have a new design for that.
D
Okay, it looks like, generally, we might want to have a logging engine inside BlueStore, or BlueStore version 2.
A
Yeah, yeah. Adam or Casey, do you guys know what the current status of the FIFO work is right now? I know, like, a week ago, I think it was, you guys were having a lot of trouble with small writes, and even, like, aggregating small writes into bigger appends. It sounded like it wasn't going very well. Is that still the case, do you know?
E
Hey Mark, can you refresh my memory? That doesn't sound... you know, we're debugging some FIFO stuff for multi-site, but I don't think it's related to performance.
C
Mark noticed that... well, so the way Mark has his cluster set up, he has the omap part of the OSD offloaded onto fast memory, and there is, in the OSD as I understand it, this sort of fast memory, slow memory offload, or fast storage, slow storage offload, that can happen. What we were wondering about is if there was a way to make, in short...
C
Basically, these FIFO objects, which are about four-megabyte blocks with a bunch of, say, 100-byte writes in them, have those writes specifically be offloaded onto the fast memory, so that we don't lose performance relative to omap.
A
Sure. So we have this concept with BlueFS, where extents, BlueFS extents specifically, can know what device they're on, and I think the primary use case for that is just with RocksDB SST files, so that they can exist on either device, potentially. But we don't really have anything like that, as far as I know, on the block device side; it's only in BlueFS itself. Igor, does that sound right to you? That's my recollection of how this works.
D
A
D
A
Igor, how crazy do you think it would be to make it so that we can, not within BlueFS, but just in general, know which extents live on which device?
A
My hope would be that, if we did that, then we could kind of migrate away from the current deferred write path and always have new writes land on the fast device unless they're really big; you know, if you have a big extent, maybe you decide to put it on the slow device, or not.
A
You know, like in this case that the RGW guys are talking about, maybe they just always want a write to end up on the fast device. But I guess my thought was that, especially for small extents, we always have them land on the fast device.
A
Then eventually, as that fills up, we say: okay, this object is really fragmented, with a lot of extents; we're just going to defragment it out to the slow device and do one big extent write down to the slow device. Hopefully that way we can keep the hard drive from becoming heavily fragmented, and we can do, you know, just straight non-deferred writes to the fast device, which hopefully will be fairly performant.
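The placement policy being floated here could be sketched as follows; the thresholds, sizes, and class names are all illustrative, not anything BlueStore actually implements:

```python
# Hypothetical sketch: small writes land on the fast device, and once an
# object accumulates too many fast extents it is defragmented out to the
# slow device as one big extent.
SMALL_WRITE = 64 * 1024      # writes below this always go to fast storage
MAX_FAST_EXTENTS = 16        # flush to slow storage past this many extents

class TieredObject:
    def __init__(self):
        self.fast_extents = []   # small, recent extents on the fast device
        self.slow_bytes = 0      # defragmented data on the slow device

    def write(self, nbytes):
        if nbytes >= SMALL_WRITE:
            # Big writes can go straight to the slow device.
            self.slow_bytes += nbytes
            return "slow"
        self.fast_extents.append(nbytes)
        if len(self.fast_extents) > MAX_FAST_EXTENTS:
            # Defragment: rewrite everything as one slow-device extent.
            self.slow_bytes += sum(self.fast_extents)
            self.fast_extents = []
            return "flushed"
        return "fast"

obj = TieredObject()
placements = [obj.write(4096) for _ in range(20)]
print(placements.count("fast"), placements.count("flushed"))
```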
D
D
A
D
A
D
A
Yeah, I'm probably thinking more of something along the lines of the way some of these different block cache layers work, but hopefully maybe we can do it a little bit better if we do it inside BlueStore itself.
A
If we did that, we could also then think about compression happening once you write out to the slow device, right? Like, you could do fast, uncompressed extent writes to the fast storage, so that compression doesn't get in the way.
A
But then, once you take an object and want to do, like, a full big write to the slow device, you could compress that. And then, if you do overwrites on it, you don't recompress; you just do your little writes to the fast device and have those extra extents, and eventually, when you want to get rid of them, you just rewrite the object and recompress it.
A
Right, I know it's big, it's a lot, and maybe it's not even worth doing, right? Like, maybe with SeaStore we just focus on that, I don't know. There are use cases now that are kind of asking about this kind of thing, though, so more and more I've been thinking about it.
A
So, yeah, maybe think about what you think. You're pretty much the most knowledgeable expert on BlueStore these days, so, you know, if you have concerns, I definitely want to hear them.
D
F
To your question: your assumption is accurate. I think currently they're going to have a development preview, or whatever, in the release of Crimson, where you can turn it on and run RBD against it, but it's going to be missing features; like, it still can't do PG splits and merges. And they're hoping to support RBD on Crimson for real, at least with adventurous users, in the S release. So it's going to be a while, yeah, for anyone who thinks BlueStore doesn't matter for performance.
A
Yeah, Greg, there's been kind of a desire for it to be ready, like, this year. Yeah, I think that has probably not been realistic, frankly.
D
A
You know, we don't have to, like, rip the deferred path out to experiment with other stuff, right? We can just leave it there until we get to the point where maybe we have something that works better. So that's kind of nice, right? I don't think we'd have any trouble just trying out some of these ideas, like always doing a non-deferred write to a fast device and then migrating over to the slow device; we can just leave the current deferred write path.
D
B
D
I think, first of all, we should set the priorities and decide what we really need for this BlueStore version 2, let's name it this way, I mean.
A
But there's definitely something; I forget the details. Adam can talk about it more; I mean, he's working on it right now, very aggressively, to try to fix an issue that came up, but I think it was just a corner case.
A
That's why he's changing some of the logic, to make it a little bit more straightforward regarding when we actually do deferred writes and whatnot. In general, though, I mean, I don't know how you feel about it, Igor, but whenever I look at, like, do_write_small and kind of the conditions that we have there about when to do deferred writes and whatnot, it's really unclear how we make those decisions.
D
D
A
So, okay, let me comment on that, then. One use case that has come out from customers is that they believe, for whatever reason, that BlueStore's deferred write path should work like FileStore's journal. They think that they've got this much write-ahead log with RocksDB, and if they make that really big, they should be able to write really quickly to the NVMe drive with, like, small I/O, and then have it flushed in big chunks out to the hard drive.
D
D
D
B
A
You're just talking about the one where, if you have it set to, like, 64K or whatever, you have to... yeah.
D
I guess, when speaking about this logging engine, or maybe the write-ahead log, this is an attempt to get some burden off RocksDB.
D
Since it looks like we are still unable to generally remove or kill this performance degradation issue with RocksDB; it was significant... but deferred...
A
One thing to think about in relation to it, too, is fragmentation. Part of what I want to do with this, if we were to do it, would be to try to preferentially do big writes out to the disk, rather than always writing small extents to the disk.
A
So that's kind of the other piece of this: we have seen some fairly bad cases where fragmentation is a really big problem, and we can, you know, go back through and just do, like, an online defrag process of what's there, but I'm wondering if we can do a better job of reducing fragmentation from the beginning, if we allow new small extents to be placed on the fast device rather than automatically going to the slow device.
D
Well, maybe that's a bit different issue. When talking about the RocksDB degradation, I mainly imply the issue with tombstones.
A
There is a PR I have related to that; I don't know if you've seen it. I can link it, if you're interested.
D
D
A
So there are kind of two cases there that are big, right: tombstones in the memtable and tombstones in SST files, and as far as I know, there's no really easy way to deal with tombstones in memtables right now.
A
We actually looked at trying to add support for that in RocksDB, and it wouldn't be nearly as easy as the other thing that they have. But I'm hoping, though, no one's tested this yet, that I added support in BlueStore, in this PR, for the compact-on-deletion trigger that RocksDB provides.
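As I understand it, that RocksDB trigger (the compact-on-deletion table-properties collector) watches a sliding window of entries and marks an SST file for compaction once the window contains too many tombstones. A rough model of the idea, with made-up window and trigger values:

```python
# Rough model of a compact-on-deletion trigger; not the RocksDB code.
from collections import deque

class DeletionTriggerModel:
    def __init__(self, window_size=128, deletion_trigger=50):
        self.window = deque(maxlen=window_size)  # recent entries seen
        self.trigger = deletion_trigger
        self.marked = False

    def observe(self, is_deletion):
        """Feed one entry (True for a tombstone) as the table is scanned."""
        self.window.append(is_deletion)
        if sum(self.window) >= self.trigger:
            self.marked = True   # file would be scheduled for compaction

model = DeletionTriggerModel()
# A burst of deletes, like snap trimming produces, trips the trigger:
for _ in range(60):
    model.observe(True)
print(model.marked)
```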
A
Now, the way it works: you go and delete a number of keys, and then you restart, you do a seek-to-first again, and then you start iterating over again, and you iterate over all the tombstones that you just created; you delete a couple more things, and then you start it over again and do it all over. Like, how we work right now is awful.
A
So at least, if you triggered compaction, then all those subsequent tries would now be over a fresh range, right, with no tombstones in it; you add some more until it grows, and eventually you trigger compaction again, but it should be way better than just doing the full iteration over everything every time.
D
Okay, but, okay, so I can see the PR is still tagged as a work in progress. Are you planning to...
A
Yes, it's never been tested, and I added some other RocksDB settings in there, but we can pull those out if we don't want to have it all in this one PR. We might actually have a test case for this PR now; it sounds like there's a user cluster that's facing issues with basically just having timeouts because of slow iteration in snap trimming.
B
D
Yeah, but maybe back to that in a bit different form. So, instead of updating RocksDB settings...
D
Probably like an experimental feature, to be enabled by an additional parameter, rather than, yeah, modifying the defaults; I mean, users might want to have more control over that stuff, and enable or disable it at some point, to play, to try, you know.
A
Yeah, yeah, I agree. So maybe we want to break this up and just do separate PRs: do the setting changes for just the general RocksDB settings that changed in there as a separate thing, and then this other one, the compact-on-deletion, as an independent pull request.
D
And there might be even one more step, which is having two sets of RocksDB options and switching between them with a different... like, well, our default settings and, well, experimental ones.
A
The improvement is pretty significant, and we can avoid the write amplification increase by making sure that level zero and level one seem to be matched properly. There was a...
D
Well, it makes things better for your case, but, again, I'm a bit scared about the bulky modifications for everyone.
D
A
D
A
A
Okay, we could do that, though; we could add a switch for it that just uses these other settings, kind of, you know, a "use experimental performance RocksDB settings" option or something.
A
Okay. Igor, if I made those changes, could I get you to approve the PR?
A
D
A
Sure. I think, in general, the RocksDB trimming feature, basically the compact-on-deletion, you know, finding too many tombstones, I don't think that is ever going to be a bad thing, frankly, if we have it set high. If we had it set low, I could see it being a problem, but if we have it set fairly high, I can't imagine that it's worse than the way things are right now.
C
D
D
B
D
...their own reasons, and then they get some issues, which...
A
All right, well, I don't think I have anything else, other than this stuff. Were there any other things you want to bring up before the end of the hour here?
D
If not: one small PR for the allocators, which might impact performance as well; this link. So, I faced an issue in the field when...
D
In best-fit mode, the allocator might take significant time to iterate over unaligned chunks; they can still work for BlueFS allocation, but they are unaligned, and hence it has to bypass them. This PR provides a different sorting strategy which overcomes that, so please take a look.
A
Yeah, I saw Adam's reply, and I wanted to ask him about it, because I was trying to understand what his criticism was.
A
But generally speaking, I agree with you. I have seen cases where going into the slow mode, whatever we called it, the best fit, was a problem.
C
A
Igor, go ahead.
D
D
So definitely we might want some fix for that, I believe, for 4K...
D
Well, maybe we shouldn't use it by default, but we should be able to fall back to the 4K allocation when there is, well, a lack of space.
A
You... your PR reminded me that I still haven't done anything with this one, short of the fix that we made to make sure that we're starting at the same offset that we left off at last time, rather than restarting from the beginning; but that fix went in as a separate PR. I haven't done anything with switching to a time-based near-fit algorithm instead of using bytes and distance.
A
Do you think it's still worthwhile to maybe switch to basically deciding how much time to spend in the near fit before switching to best fit? It made sense to me at the time, but what do you think?
D
D
A
I guess the question would be whether or not you could end up spending a lot of time searching for space in near-fit mode, even when you start off at, you know, the last search location. It should be much better, right, because you're not just spinning over and over again in the same location; you're always moving.
D
A
D
D
A
The way I see it, the biggest advantage of the time-based approach is that you have a hard limit, right? Regardless of, you know, distance, or count, or anything else, you know it won't take longer than... well, it might take a little longer than that, but it's essentially bounded.
A
Yeah, I guess it's bounded by the limit you put, plus potentially the time of one search.
A
Because you could be in a search, you know, that you have to finish before you hit the limit, but it's close to the limit.
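The time-budgeted near-fit idea being discussed could be sketched like this; it is an illustration, not the Ceph allocator, and the budget, free-list shape, and fallback are made up:

```python
# Illustrative near-fit search with a time budget and best-fit fallback.
import time

def allocate(free_chunks, want, start_index=0, budget_s=0.001):
    """free_chunks: list of (offset, length) tuples. Scan forward from
    the last successful index ("near fit"); once the budget is spent,
    fall back to a best-fit pass. Returns (index, offset) or None."""
    deadline = time.monotonic() + budget_s
    n = len(free_chunks)
    i = start_index
    # Near fit: first chunk big enough, starting near where we left off.
    for _ in range(n):
        off, length = free_chunks[i]
        if length >= want:
            return i, off
        if time.monotonic() >= deadline:
            break                     # budget exhausted: stop scanning
        i = (i + 1) % n
    # Best fit: smallest chunk that still satisfies the request.
    fits = [(length, idx) for idx, (_, length) in enumerate(free_chunks)
            if length >= want]
    if not fits:
        return None
    _, idx = min(fits)
    return idx, free_chunks[idx][0]
```

The hard bound the speakers mention falls out naturally: the near-fit loop runs for at most the budget plus the time of the chunk check in flight when the deadline expires.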
A
Okay, well, I don't think I have anything else. I'm happy to comment on your PR, if you want, but I'm hoping that Adam will maybe just reply, and we can go from there.
A
Oh, yeah, sure, I mean, I can review it too, if you want, and just take a look and see if it looks reasonable. I'll try to do that.
A
All right, anything else, anybody?
A
Well, have a great day, and talk to you later. Have a good week. Bye, everyone.