From YouTube: Ceph Performance Meeting 2023-03-23
Description
Join us weekly for the Ceph Performance meeting: https://ceph.io/en/community/meetups
Ceph website: https://ceph.io
Ceph blog: https://ceph.io/en/news/blog/
Contribute to Ceph: https://ceph.io/en/developers/contribute
What is Ceph: https://ceph.io/en/discover/
B
Laura, we talked a little while ago about making sure you could get hardware for doing the testing that you needed to do. I just realized this morning that I haven't checked in for a while. Are you okay — do you have what you need, or do we still need to get you hardware?
D
I'd still need to get stuff — I was wondering if I was going to bring it up today. So, with what you're doing for your blog post, I wasn't sure if there was something that could intersect there. Yeah.
B
Yeah — so the reason I mentioned it yesterday was actually a completely accidental thing. When I was doing tests on our NVMe cluster, with a bunch of drives, I saw that at first I was only using a pool with a thousand PGs for RBD, and when I bumped that up significantly, we went from about two million random read IOPS to about 4.4 million — a huge increase.
B
So I want to document for people that it was such an improvement, but I'm also disabling both the balancer and the autoscaler, just because they cause so much variation when you're doing the writes — you end up causing a lot of topology changes. So I think what I want to do is go back and try to look at whether or not that was caused by PG lock contention.
B
That might help give us a clue about at what point it matters and at what point it doesn't — when the balancer might help and when it might not. Because if the balancer can make everything nearly equal for reads, then that might solve everything. But if it's more PG lock contention, then the balancer won't help as much, and that's when I was thinking of kind of handing it over to you to showcase what the primary rebalancer work is doing.
B
But what do you think — is this the test case that you want to showcase, or do you want to look at other scenarios besides that?
D
Yeah, I think it's hard to say, but I think both would be really valuable. The test case I was more looking at was a very simple, small-cluster setup, just to see the basic change in performance in an unbalanced scenario, before and after.
D
However, what you're describing is also valuable, so I would be happy to be involved either way, in whatever you're doing.
B
You know, there's a really good chance that unless you're running on crazy high-performance hardware like this cluster is, the PG lock contention might not matter — you might be dominated by the distribution quality, where the primary read balancer helps.
B
But in this case it might be that we're more dominated by PG lock contention, where you just need lots of PGs to make it happy. We'll see — I'm not sure. So I guess my question for you, then, would be: maybe we still need to get you hardware so that you can showcase the small-scale test, and then this is a much bigger, more complicated thing, right?
D
I think so, yeah. But if you get to the point where you're deep into the balancer involvement, let me know, and if you want to set up a meeting, I can show you how that works and everything, and I can also be involved in that as well. Sure.
E
And Mark — again, you talked about the high performance and the PG lock. I think that in order to show the efficiency of the balancer, it's best to work on non-performance clusters. If we could do it on HDDs, for example, it would be best, because then for sure the contention would be on the IOPS, which —
B
Exactly, exactly. So when Laura and I were talking yesterday — the whole reason this came up this week — I'm doing high-performance tests, and I saw in those tests that increasing the PG count for the RBD pool above our recommendations actually resulted in a really big performance gain for random reads. I don't know why yet — I haven't investigated — but I suspect it's a mix of both the quality of the distribution and possibly also PG lock contention.
E
For the reads, my experience tells me — and I don't know what numbers you played with — that every time you double the number of PGs, the quality of the distribution roughly halves its distance from perfect. If you got 80% of the best possible performance because of the read distribution, after you double you get around 90%. So the difference between what you have and 100% halves every time you double — that's not enough.
B
Exactly — I've seen very much the same thing, Josh. So in this scenario, with 60 OSDs and 3x replication, using a thousand PGs in the RBD pool — I don't remember exactly what that works out to; it's pretty low, like 20 PGs per OSD or something, I guess, I don't know — that yielded about two million random read IOPS. I don't remember what it was on the write side, but when we jumped up to 16,000 PGs in that pool — which is kind of really big, probably unnecessarily big, but just to verify that we had a really good distribution — then it was about 4.4 million IOPS.
B
It was over twice as fast, and I suspect that we can probably get most of that even with a smaller PG count, but I haven't tested it yet — I just used a really big number to verify that we could get close to the maximum.
E
But for a thousand — probably some of this goes to the fact that it's a better distribution, but I would say my guess is that more than 50% of the improvement relates to locks, because —
E
Because a thousand already gives you a pretty good distribution. On smaller clusters, with 1000 PGs, I got something like — if I remember correctly — between a 20 and 40 percent decrease from perfect. So you can't double: when you're at 80%, you can't double just from the distribution. So I guess it's a combination with the locks, and it makes a lot of sense that you have more lock contention if you have really high-performance systems. Yeah — so, lock contention: it really makes sense. So again —
E
So if you want to neutralize the effect of locks, it's better to work on slower systems, where the devices dominate the performance.
B
The thing is, I really want to make it so that we can handle more PGs per OSD by default, especially for smaller clusters, and I think that's actually good for the balancer, because I think it means that you guys have more leeway — you have more PGs to play with, where you can kind of figure out how you want to schedule the topology changes in the cluster, since they're so impactful. I see it as complementary.
B
Personally, even though it might mean that the PG balancer plays a little bit more of a background role, I think it's actually better overall, and it gives the PG balancer more runway to play with in terms of how to migrate things around.
E
It matters more only on very small clusters — a small number of OSDs. If you have enough OSDs, we have enough leeway to play with smaller numbers. But having said that, our first target is ODF, and in ODF we have really small clusters many times, so I'm not objecting to what you say — it improves that case. My guess is that with something like 16-plus OSDs and replica three —
E
— we have enough leeway to do it even with a smaller number of PGs. But in ODF you get six or four OSDs, or really small numbers, many times. So yeah, adding more PGs would improve that; and for high performance, with the lock contention, you probably have good reasons for it as well, for different reasons.
B
I would very much like to see us, on small clusters, be able to support thousands of PGs per OSD, and when you scale out to very large clusters that have maybe thousands of OSDs, then we have to scale back to hundreds, or maybe even fewer, PGs per OSD. It's that global limit that I see as being the bigger problem, rather than the per-OSD limit.
B
If we can properly manage memory and PG log lengths, I'm hoping that then the balancer can almost adapt to those scenarios: for small clusters you have very high PG counts and a not-necessarily-bad default distribution, but things that the balancer can tweak very slowly, to not impact the cluster but make it perfect.
B
Whereas on the big clusters, I think the balancer actually plays a much bigger role, because we may not be able to support high per-OSD PG counts.
E
So we won't have one large OSD which becomes the weakest link in the chain and cuts the performance of all the other OSDs. We would put fewer reads on larger OSDs and more reads on smaller OSDs, and by this we could balance the load — uneven in terms of what we move, but the overall load is not uneven.
B
That's exciting — okay, good, good. Yes — so if you have a very high quality distribution, it can't show as much improvement, but it's nice that for low-quality distributions you can play with it a little bit and get a better redistribution without actually moving data. I didn't realize that — that's fantastic!
E
Even in the best case, when we have good quality, if the OSDs are of different sizes — which happens over time, as you add larger devices — then we could play and move more read load to the smaller devices versus the larger devices, and then split the load more evenly across the cluster. The ones with fewer PGs, because they are smaller, will do more reads, and the ones with more PGs would do mainly writes.
E
And then you could improve the performance, because the larger ones get more writes. Assuming you know roughly the read/write ratio, you could play with the numbers and make sure that the smaller OSDs are doing a bit more reads and the larger ones a bit fewer reads, and the load, in terms of IOPS, splits evenly.
E
Say you have one OSD of one terabyte and another of two terabytes: the one with two terabytes gets twice the load, so it becomes full first — it gets to its max bandwidth first while the other one is doing half of its bandwidth. If you change the reads and say the smaller one does more reads, you could get more performance from both of them, because they will get to their max bandwidth, or max IOPS, at the same time. Yeah.
E
That's for the next version, not for the first version, but that's the idea, and we could do it because our operation of changing the primary is very cheap. We don't do anything — we just add an upmap record. We don't change anything else; we're not moving data, so we could play with it really easily.
B
Well, this is very exciting, guys. So now that you've convinced me: if I'm doing these other tests, I should definitely also be running a test with the upmap balancer. Because it doesn't move data, it sounds like it's probably only a win, so we should absolutely showcase that. Of course, it will probably show less improvement than the PG count increases, but if it can help at the low PG counts — that's interesting.
B
Yeah, yeah — all right. I suspect I'm not going to get to that until after Cephalocon — I've got like three other articles I'm trying to write before that, and I'm theoretically going on vacation, so we'll see how it goes. But Laura, it looks like you're presenting at Cephalocon — if I were to get your test done...
D
...I could get those results into the Cephalocon presentation, but if not, I can also put them into a blog post later and link to that in my talk. Okay.
B
If you can give me instructions on how to set up the primary rebalancing, I could probably get some tests done on Mako. It's not hard drives — it'd just be a big NVMe cluster — and I know it's kind of ridiculous doing it in the wrong order compared to what would be the better test on hard drives, but I could probably knock those out, get you the data, and then just hand...
B
...it off to you to deal with, because my bottleneck right now is that I've got like 20 gigs of data from the tests I've been running, and I'm trying to go through it all right now before Cephalocon. That's why I'm bogging down — not to mention the other things I'm working on. But yeah, I think I could probably get you stuff; it'd just probably be a lot of puttering around trying to look through it all and analyze what it means.
D
Yeah — no, that'd be awesome, and totally leave the analysis to me. So, okay — how about that, then?
B
Can you just give me instructions on what to do, and then I can make it happen? Yes.
D
Yes — I don't want to take up the whole meeting here, but it's just the osdmaptool. It's an offline version right now, so it's similar to running the upmap balancer through the osdmaptool and then applying the commands that are generated from that to the cluster. So I'll send you the instructions.
B
Is it based — does it need data, or is it just based on PG distributions?
E
Yeah — it needs a file that you generate; it gets a file of the cluster's OSD map. Yeah.
B
So in CBT, usually the process is that we create the cluster, we create the pools, we pre-fill the pools — or create the RBD images and pre-fill them with data — and then we run the tests. Where in that process would you want the balancer to sit?
E
Then we have a way to evaluate how good the distribution is — we have a score for this — so we would like to run the score on the pools that you have and see which pool is less balanced than the others. Then, once you run this test, apply the balancer, balance it, and run the test again on that one pool, so we can compare the results.
E
No, no, no — totally online, nothing to do offline. Perfect. The thing is, if by chance the distribution is really, really good — because it's large or anything — we would see that when you run the offline osdmaptool: you would see the scores. If the score is good and it's not improving a lot...
E
...we could even skip the second run, because we know that it's already almost perfect. But if we see that there is room — we have a score we can improve — then we run the second test and show that we have the improvement as we expected.
B
Okay, let's talk about it offline — but that's great.
B
Thank you — awesome, thanks so much, Mark. Yes, and we'll still try to get you an incerta node with hard drives to test on. I'm also looking for one now as well, because I've got a separate PR that we can talk about here — which maybe is a good segue into actually talking about PRs.
B
So, all right — good talk on that; this is actually quite exciting. All right, new PRs this week: one is just a doc change talking about PG autoscaling, and Anthony has reviewed that — that looks good.
B
One of the comments I made in there, Casey, is that it'd be really, really nice — not as a requirement, but if there's any possibility in the future — if somehow we could reduce the number of RGW pools for all of this metadata. It'd be really, really nice in certain ways.
B
Yeah, that's exactly what I was thinking too — index data and metadata. It's not a big deal, right — it's not the end of the world the way it is now — it's just extra complexity.
B
So that might be a way to make it a little easier: the data and index pools need a lot of PGs, and this metadata pool doesn't really need very many, and maybe we can actually claw back some PGs that way for the metadata pool.
B
So, okay, there's that PR — and, Anthony, give lots of feedback on that. The only other new PR that I saw was the one that I made here. This is an experimental PR to add — it's not exactly a delay, but basically, in the kv_sync thread, wait until a certain amount of time has passed, or until we have a certain number of items in the kv queue, before...
B
...we actually do a sync. The goal with this is to see, on hard drives, whether or not that improves performance and latency by reducing the number of fdatasyncs that we do. For hard drives that would translate to less head movement; for slower NVMe drives without power-loss protection, it might reduce the number of flushes that we do from cache to the disk, or to the cells.
B
So it's not a guaranteed win — it's experimental. I think both Adam and Igor expressed some concerns with doing it this way, which are valid, but if it does something useful, maybe it opens the door to more discussion about whether or not we want to try to implement some method of batching more metadata syncs into one transaction.
B
Oh, thank you, Casey — I appreciate it. That's nice. Not high priority, right, but yeah, it could be kind of a nice complexity reduction.
B
Okay, closed PRs: it looks like, Laura, you merged Adam's "harmonize log read and write modes" PR — looks like it passed. Okay.
B
Cool, cool. And then the other closed PR I saw this week was for RocksDBStore — Igor's PR to use bounded iterators in rm_range_keys — and Yuri merged that. Igor, was there anything new on that, or did it just get through testing?
B
Let's see — we don't have Josh here. I would say yes, we should backport that to Reef now — but that's just my take.
B
Well, I'll let someone in charge of things decide that, I guess, but I think you should submit a backport request for it, and then someone can decide if it's valid or not — that'd be my vote.
B
Especially if we're backporting to Quincy and Pacific, it would seem crazy not to also backport to Reef.
B
All right — well, exciting, Igor. I think it's good; that's a good fix. All right, next we've got a couple of updated PRs here. Igor, another one from you, for improving fragmentation by overriding RocksDB's writable-file Allocate — I think you just have new updates for that, is that right?
B
Okay, I'm gonna just throw the — oh, it does have needs-qa on it. Perfect. Okay, I'll just note that here. Okay.
B
The ceph-volume PR for improving the performance when using encryption — Joshua, I saw you added the comment regarding splitting that; good deal. I think that's the only update we have right now.
B
So we'll see if the author does that, or what they want to do going forward. Next: performance testing for Crimson with CBT in teuthology. I've apparently approved that; I think there are some more updates — I heard Matan mention he was working on it this morning during stand-up — so I don't know exactly what the state of that is, but it's still being updated, still being worked on.
B
Next, we have enabling tcmalloc when using Seastar in Crimson.
B
The good news here is that Radek has been working on adding the necessary bypasses in ASan, similar to what we do for valgrind, and it looks like that's making everything pass properly. So that's in PR 5598.
B
And once that's merged, we could either do this as part of that PR — fold the tcmalloc change into that — or just get that merged first and then merge the tcmalloc PR. It doesn't really matter to me either way. But the good news is that it looks like there's light at the end of the tunnel here toward getting tcmalloc enabled for Crimson, and thus making the BlueStore performance numbers come in much, much better.
B
There were some benchmarks included in that, and it got a new review, so that's actively being worked on. And that was it for new, closed, and updated PRs that I saw — anything I missed from anyone?
B
All right, so we already had our discussion topic, I think, on primary rebalancing, which was very good. I had a couple of other small ones here, but maybe before I get into those — since they're small — would anyone have anything they'd want to bring up or talk about this week?
B
All right then, I will plow ahead with the couple I have here. So, I mentioned this PR for trying to —
B
Recently I had a user that was trying to run BlueStore on just pure hard-drive setups, and it was pretty slow and completely seek-bound on the hard drives. So I kind of suspected that if we tried to reduce the number of seeks we saw going between the write-ahead log and writing data out to other portions of the disk, we might be able to see a performance gain.
B
I'm not sure this actually works, because we'll still be doing the fdatasync for every object, and maybe we won't be able to really batch things up the way I'm hoping here. But I figured this was maybe a really low-hanging-fruit way to just see if we get any gain by trying to batch transactions together — or rather, batch multiple ops into one transaction. So, does anyone have any thoughts on this before I run off to try it on hard drives somewhere?
B
You had commented earlier that you weren't sure whether or not doing this in the kv_sync thread was the right place. A different option might be — oh, go ahead.
B
The one interesting bit in this PR that I saw was that once we get to the point on NVMe drives where we're actually throttling performance and reducing the IOPS a little bit, we do see tail latency go down, which I think makes sense — that doesn't seem unreasonable to me. We see the average latency go up and the tail latency go down.
B
But we'll see — it could be fun. It's a very specific kind of thing for people that are running on hard drives with no flash, which we've discouraged for a long time, so this is kind of just a low-hanging-fruit kind of thing: if we can do it easily, maybe we make it better, but otherwise maybe it's not a big focus for us.
F
Well, I don't think it's only for people running on hard drives, since the same effect would apply to any deployment where the fdatasync latency is similar for both the main device and the DB device. If you have a situation like that, it could really be profitable to craft a mechanism to squeeze more operations into each fdatasync to the main device and each fdatasync to the DB device.
F
So while I don't really like that crude "just make a sleep" approach — I'm dreaming of some better control, maybe piggybacking on what we already have in the queue, or something like that. I'm using "dreaming" deliberately here, but I see that some control over the process could get us some performance in more cases than only HDDs.
B
I don't know if it makes you feel better — I didn't actually implement the sleep. It just loops until you are either past your time window or have enough operations coalesced in the queue.
B
Yeah — it just looks at the size of the queue and the duration of time that has passed; those are the only parameters. If either condition isn't met — you don't have enough items, or not enough time has passed — it just loops and skips.
B
Yeah — potentially, or at least spin until it's hit a certain time limit and then just go. But it's interesting, because we already have some mechanism to sleep the kv_sync thread, so it's interacting with that in some fashion.
B
Once you have an item in the queue, I think the thread wakes up; it starts spinning, and then it will either time out and do the transaction, or it will accumulate enough IOs to do it. And then, assuming that the queue is empty again after that, I guess it probably goes back to sleep. I don't know exactly what does that, but there's some code somewhere, I guess, that tells the thread to go back to sleep at some point.
B
Please do, please do — I'd like to have a second set of eyes on this too, not just mine, to see if this is a good idea or just awful.
B
That's fantastic news — I'm very happy to hear that. Cool, okay — so, enough about that PR. The only other thing I had to mention this week is that I've been doing a lot of testing on Reef, and I suspect that when we take away the RocksDB tuning improvements that we did in Reef, we have a minor regression for small random IO — at least small random writes.
B
It's not a big one, and it's completely masked by the RocksDB tuning improvements, but without those improvements I think we're seeing about a five to six percent regression in small random write performance. As far as I can tell, it's not being caused by the RocksDB upgrade that we did — it's something else. I don't know what it is yet; I probably need to do a bisect, and it's questionable whether or not the bisection will be able to tease out a five percent regression.
B
That's
pretty
tight,
but
we'll
see
if
I
can
find
it
I'll
try
to
get
a
fix
in
place
before
Reef
no
guarantees
there,
though.
Having
said
that,
though,
Reef
overall
is
still
faster
than
than
Quincy
was
with
the
other
improvements.
So
you
know
overall
we're
still
looking
good,
just
as
a
minor
thing
that
that
we
maybe
want
to
try
to
track
down
if
we
can
and
I
think
guys.
That's
all
I
had
so
one
more
time,
I'll
open
it
up.
B
Cool — thanks, Laura. And then, yeah, we need to get you on one of the incerta nodes too, I think. Let me talk to Sam — or, I don't know who exactly we should talk to about it.
B
We've got like four incerta nodes in the Jenkins queue for doing Crimson continuous-integration performance tests, and I'm pretty sure that only one of those nodes ever gets really used effectively, so the other three might just be kind of sitting there. If that's the case, we should just start using them for development again — there's no reason to waste those resources if people can be using them for this kind of thing. So I'll try to dig into that and find out what's going on. So yes — yeah, thanks. Cool, all right — well, I think...
B
That's it, guys. Thanks for coming — have a great week. And next week we may be on Jitsi; we'll see. Mike said that he wants to try to use us as a guinea pig for doing that before everyone else is forced to move over. It was going to happen today, but we just kind of ran out of time, so next week I think we're going to be on Jitsi. I'll send out an announcement, and we'll see how it goes. Thanks, everyone.