From YouTube: Ceph Performance Meeting 2023-03-02
Description
Join us weekly for the Ceph Performance meeting: https://ceph.io/en/community/meetups
Ceph website: https://ceph.io
Ceph blog: https://ceph.io/en/news/blog/
Contribute to Ceph: https://ceph.io/en/developers/contrib...
What is Ceph: https://ceph.io/en/discover/
A
So it's been a couple of weeks since we last had a meeting; that was because I was in New York for Ceph Days New York, and I ended up on holiday for a little bit of that as well. So, a couple of new things to discuss this week. I guess right now we'll go through pull requests. I only saw one new one in the last couple of weeks, and this was a PR from Casey in RGW to switch back to a polymorphic executor. Mark Kogan?
A
Okay, that's good, okay! So there's only one new PR that I saw this week; that is from Casey, for the Beast back end for RGW, to switch to a polymorphic executor (I guess switching back to a polymorphic executor). And Mark Kogan tested that and did not see any obvious performance improvements from it, but I think maybe they're still planning on switching back to it.
A
So the plan right now, it looks like, is to wait until after Reef to do that, which makes sense since Reef is closed. So at some point we'll see that merge, but it's no high priority at the moment. There were a number of closed PRs, and the first one here was from Igor that Yuri merged; this got moved through pretty quickly. It was a cls_queue change to use larger read chunks in queue list entries, and I suspect...
A
That's
probably
a
small
performance
Improvement,
but
it
was
not
merged
quickly.
So
nice
glad
we
got
that
through
fast.
The
next
PR
is
a
big
one
that
we
can
talk
about.
I
think
I
might
wait
until
after
we
get
through
the
rest
of
the
prns
talk
about
this.
This
is
the
the
face.
The
new
rocksdb
update
from
Facebook,
so
I'll
just
skip
this
for
now
and
Laura
the
next
three
Piers.
That
merged
were
all
from
you.
A
So this was the primary balance score PR, iteration one of the workload rebalancer, and adding the ability to handle high-priority operations in mClock. Do you want to talk a little bit about these PRs that you merged?
B
The two were adding a primary, AKA workload, balance score, and iteration one of the workload read balancer. Those were part one and part two of the new balancer feature in Reef, which is an offline tool that can be used with the osdmaptool to improve read performance, ideally in small clusters. And yeah, this is just the new...
B
...the new Reef feature; we'll be merging some documentation on it shortly as well. But I actually added a topic later on on your etherpad about what we can do to... because right now it's like a theoretical improvement of performance, but I'd like to see actual numbers for this improvement.
B
So I'm not sure if you have any ideas of what we can do to, you know, prove that the read performance has improved on any kind of setups that you know of, or anything like that. And the third PR that I merged, about mClock, that was a feature of Aishwarya's. That is also a Reef feature, so I don't know too much about it, but that was merged as a part of Reef's features.
A
Cool, cool, yeah. Let's talk about the read balancer a little bit more once we've gotten through the PRs here. Okay, so let's see, what have we got next? Updated PRs. There is a PR here to replace an RBD config, the image context config, with an image config proxy. I confess I don't remember this at all; I looked at it probably once, but apparently it still needs review. Oh yeah, it looks like it's...
A
It's
a
nice
improvement
from
the
the
table
that
they've
gotten
this
this
PR
here
so
who
is
looking
at
this
Casey
I?
Think
they're
asking
you
for
a
review
in
Ilia
as
well.
C
A
Okay, okay, cool. All right, what's next? Crypto: add QAT batch.
A
This
looked
like
it
was
a
really
really
big
Improvement
for
uat
I
think
there
was
some
reference
to
this
being
for
rgw
when
I
looked
at
it,
Casey
I
think
this
is
another
one.
Yeah
yeah.
C
D
A
Added. Let's see, what's next then? BlueStore: set RocksDB iterator bounds for the collection list, from Cory.
A
It
looks
like
there
was
a
QA
failure
on
that
and
potentially
more
fixes
that
made
it
in
I.
Don't
know:
do
we
need
a
QA
run
for
this
now
nope
Corey
said
was
Corey
here,
I
can
no.
It
doesn't
look
like
chords
here,
so
he
said,
he's
gonna,
look
into
it
and
and
try
to
address
them
so
probably
not
for
Reef
I
guess.
But
that's
okay,
let's
see
what's
next
common
compressor
disable
busy
polling
in
qat,
so
maybe
we
can
discuss
that
as
well.
A
Once
we
get
to
the
discussion
topics,
so
it
needs
QA
and
finally
for
updated
PRS.
This
is
from
Igor
to
apply
the
rocksdb
delete
range
threshold
on
the
Fly
Adam
approved
it.
That
also
needs
QA.
A
So I'll just leave that for now and see if somebody picks it up for a QA run. All right, that was it for updated PRs. I think I made it about two-thirds of the way through the no-movement PRs; possibly something at the end here got cleaned up by the bot or merged, but I doubt that anything real exciting was going on with the rest of these. So let's move on to the discussion topics, because we've got a lot to talk about here.
A
I
think
I'll
just
go
through
a
couple
of
really
quick
ones
here
that
I,
don't
think,
probably
need
a
lot
of
discussion.
So
we
we
merged
the
roxdb
update
for
Reef.
This
was
kind
of
a
tough
call
because
it
was
failing.
Our
existing
Val
grind
tests
and
radic
heroically
went
through
and
added
a
whole
ton
of
new
valgrind
suppressions
to
get
it
passing.
A
I
think
our
plan
right
now
is
basically
to
to
try
to
test
it
as
much
as
we
can
for
Reef
through
our
the
next
couple
of
months
of
QA,
and
if
it
looks
good,
we
keep
it.
If,
if
we're
seeing
problems,
we're
gonna
rip
it
back
out,
Reddick
does
that?
Does
that
sound?
Like
your
your
view
on
this
as
well,.
D
Yeah, I saw it. However, I just wanted to add to the part about the tons of new Valgrind suppression rules. Well, I'm really not proud of making them.
D
Go
ahead,
they're
more,
there
are
more.
There
are
mostly
except
one
I
think
they
are
almost
entirely
about
the
setup
paths
in
in
monitor.
Perhaps
we
missed
some
the
initialization,
perhaps
that's
our
fault,
but
because
of
the
big
change
in
the
in
in
roxdb.
Now
it
got
exposed
in
addition
to
those
init
related
things,
there
was
actually
one
thing
on
on
the
star
get
path,
however,
still
looking
like,
actually
lazy,
initialization,
basically
some
get
or
create
of
aesthetical
variable
to
hopefully
is
on.
We
shouldn't
see.
E
D
We really, really need to test this carefully, and I think that the lesser evil is if we merge it... if we try to ship it as part of a major release. Doing it as a backport in one of those minor releases, that for sure is not the way to go. If you want to upgrade RocksDB, it must be a big release.
A
Yeah
right,
I
agree
with
you.
I
was
somewhat
dismayed
to
see
how
much
we
had
to
how
many
development
Expressions
we
had
to
add
I'm,
not
worried
really
about
the
configuration
ones,
but
that
that
other
one
that
you
had
mentioned
we'll
see.
So,
having
said
that,
you
know
everyone.
That's
testing
Reef
over
the
next
couple
of
months,
be
on
the
lookout
for
memory,
leaks
and
any
other
weird
Roxy
B
issues.
If
you
see
anything,
please.
D
...report it. Well, this PR is absolutely the first one to blame. We are aware that, in case of too many troubles, we will be reverting it there. Yeah, yeah.
A
B
So, to add: Yuri has been scheduling main baseline runs, and I try to take a look at each one of those, and I reviewed the most recent ones. So we have that baseline to compare against for Reef.
B
In case we see any new memory leaks: I did not see any in the main baseline, you know, anything related to RocksDB, so I feel like it would be pretty obvious.
A
Cool
cool
yeah
and
thank
you
Laura
for
for
doing
that.
I've
noticed
that
you've
been
you've
been
taking
on
a
lot
of
a
lot
of
this
kind
of
QA
stuff.
It's
much
appreciated.
A
All
right,
so,
yes,
I,
think
that's
enough
about
Rocks
TV.
Let's
see
next,
there
was
an
interesting
presentation
at
Seth,
Day
New
York
from
soft
iron
they've
written
a
new
benchmarking
Tool
in
go
that
uses
a
lot
of
the
underlying
libraries
like
clip,
RBD
and
and
other
things.
It's
kind
of
slick
it
automatically
populates
a
Google
spreadsheet,
with
benchmarking
results.
A
The
the
one
concern
I
had
with
it
is
that
they
didn't
really
show
how
well
it
worked
at
scale.
They
were
only
kind
of
doing
small
small
scale
testing
with
it,
so
I'd
like
to
see
more
big
tests,
and
especially
comparisons
with
existing
tools
like
fio
and
other
other
tools
to
see
how
how
it
really
Compares,
but
it
has
potential.
So
if
you're
interested
in
benchmarking
and
testing,
this
might
be
something
to
try
out
and
see
if
it,
how
well
it
works.
A
So definitely, if you're interested in this stuff, check it out and report back what you find. And that's really it for what I had. Actually, I forgot: was there anything on the etherpad that I missed from anyone, or any other discussion topics to add here?
B
A
Oh yeah, that's fine, we can talk about it here. And Laura, Josh reached out to me privately and said that he's on PTO right now, but he would be interested in talking more about the work you guys have been doing next week as well.
B
Yeah, yeah. So if you'd rather me set up a meeting between the three of us or something, that's totally fine too. I'll just, I guess, give you like an overview of what I was hoping to achieve.
B
If that helps; so it'll give you time to think, Eric. Well, let's talk about maybe something...
A
Sure, let's talk about it here. And unless we run out of time, let's just move it to the end here so that we can do some of these other ones, and then, if we have time today, we'll do it today. If not, we can just wait until Josh is around next week.
A
Let's move on then. So we have an update from Josh here. What's going on with your guys's cluster?
F
Okay, so I think two weeks ago we reported that 16.2.11 addressed the write byte amplification, but not the IOPS amplification. Since then, I'm pretty sure that I've tracked down where the IOPS amplification came from, and I think it's the loss of log recycling in RocksDB. They disabled it in 6.8, and of course we gained 6.8-whatever in Pacific, and because of that, every single time anything logs through the RocksDB log, it also requires a BlueFS metadata update, versus before it didn't.
B
F
It doubles the writes, essentially, right? For any RocksDB log write, there's the BlueFS log write as well. So that's where it's coming from. Not easy to solve, obviously; log recycling is turned off for correctness reasons, yep. And I remember I actually was reading your performance blog post for an unrelated reason, and I came across where you talked about the log recycling being turned off, and it triggered my memory as soon as I saw it.
A
F
I did some digging in RocksDB, and this is one of those "how much time do we want to spend on tuning BlueFS for this" moments, because my understanding is that RocksDB actually has internal mechanisms to try to reduce this effect: it'll actually do like an fallocate-type call on the log.
F
But basically, you could tell it that, yeah, anytime you're about to extend the log, extend it by this much using an fallocate pre-call, and then obviously on a file system it's just going to map the physical bytes in underneath the covers. You could do it that way in BlueFS, but obviously you also have to do that in big enough batches to make it worthwhile; otherwise we're just back where we started in BlueFS, right? So that's the RocksDB way.
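To make the arithmetic concrete, here is a rough Python sketch of the write accounting being described. The counts and batching mechanics are illustrative assumptions, not Ceph's actual I/O paths: every WAL append writes the log record itself, and any append that has to grow the file also costs a BlueFS metadata write, so recycling (or fallocate-style preallocation in big enough batches) amortizes that second write away.

# Toy model of WAL append cost with and without preallocation batching.
# Illustrative only: real RocksDB/BlueFS accounting is more complex.

def device_writes(num_appends: int, prealloc_batch: int) -> int:
    """Count device writes for a stream of WAL appends.

    Each append writes the log record itself. Whenever the append does
    not fit in already-allocated space, the file grows and BlueFS must
    also persist a metadata (extent/size) update: the extra write
    described above. prealloc_batch is how many appends each
    fallocate-style extension covers; 1 models no recycling/batching.
    """
    writes = 0
    for i in range(num_appends):
        writes += 1                  # the log record itself
        if i % prealloc_batch == 0:  # file has to grow
            writes += 1              # BlueFS metadata update
    return writes

appends = 10_000
print(device_writes(appends, 1))   # 20000: every append doubles up
print(device_writes(appends, 64))  # 10157: metadata cost amortized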
A
Yeah, it didn't look particularly hopeful getting that re-enabled in RocksDB, as much as it would be nice. It was like only a couple of people over there even understand it; they went through quite a bit of effort, I think, just to even figure out whether or not they thought it was unsafe or safe, or what circumstances it could be safe in.
F
Yeah, funny, because my mind equally went to: surely they can just write some extra zeros somewhere to make this work. But I also know that there are smart people looking at it, so I didn't want to assume that the solution was going to be that easy. I think their concern is, if they're just going to scan the log, if they're reusing a log, how do you know what's new and what's old? Yeah, I'm assuming that's what the problem is, right? Yeah.
F
As
I
don't
know,
it's
just
I
feel
like
there's
solutions
to
that
problem,
but
again
there's
smart
people
looking
at
it.
So
maybe
this
is
something
that
I'm
thinking
of
are
probably
the
searches.
I'm
thinking
aren't
going
to
work
for
whatever
reason.
A
That
that's
kind
of
where
I've
been
to
I
honestly
I
was
like
well
I.
Don't
I,
don't
really
want
to
get
into
like
an
argument
over
this
with
them
and
it's
tough
getting
code
into
rock
CB.
Even
even
you
know
fairly
reasonable
code.
They
they
tend
to
not
like
a
lot
of
outside
changes.
I.
Imagine
this
whole
situation
has
made
them
even
more
a
little
bit
gun
shy,
so
yeah
well
all
right,
so
maybe
we
can.
Maybe
we
can
figure
out
an
alternative
like
you
were
just
talking
about
so.
F
And
I
mean
like
so,
let's
still
talking
about
like
the
nitty-gritty
detail.
The
net
effect
of
this
is
not
that
big
of
a
deal
as
long
as
you
have
a
system
that
can
take
the
extra
eye
offs
right,
yeah
and
like
for
us
pretty
much
all
of
our
Hardware,
except
for,
like
the
very
oldest
Theta
ssds
on
the
Block
side,
is
fine
like
it.
F
It
doesn't
show
up
as
far
as
I
could
tell
that
any
sort
of
latency
measurements
or
anything
like
that
I'm
far
more
worried
about
our
spinners,
like
we
haven't,
started
updating
our
spinner
object,
tough
to
Pacific
yet,
but
at
the
same
time
like
the
right
load
on
in
those
environments,
is
much
much
less
right.
F
So
it's
like
okay,
well
sure
we
maybe
will
like
increase
the
iops
by
30
percent
to
disk,
but
like
a
30
increase
on
10
or
20,
iops
isn't
going
to
matter
right
so
I,
don't
know
what
the
actual
net
effect
is
going
to
be
in
those
areas
yeah
we
might
or
should
be.
Okay.
We
just
haven't
done
the
analysis
yet.
A
Yeah
and
Spinner
is
the
thing
I've
been
I,
keep
meaning
to
get
to
and
haven't
yet
is
just
trying
to
see.
If
we
can
increase
the
number
of
operations
we
do
per
transaction
in
the
kvc
thread,
if
you're
on
pure
pure
Spinners,
we're
definitely
seek
bound.
So
you
know
this
hurts,
but
maybe
in
general
we
that's
some
other
another
mechanism
by
which
we
can
try
to
reduce
some
of
the
pain
right.
F
That's
true
yeah
I
guess.
The
other
thing
too,
is
like
it.
Deferred
rights
also
just
amplifies
this
problem,
because
every
deferred
right
is
a
right
to
the
Ross
DB
log,
which
means
the
blue
FS
update,
yeah.
A
F
G
And actually, I want to add: in this case we perform a double write anyway. So yeah, if we perform a deferred write, we do a write to the write-ahead log, which is on the spinner as well, and then we do the write to the main device.
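As a rough sketch of the point being made here (illustrative assumptions, not BlueStore's actual code path): on a pure-HDD OSD, one small deferred write hits the same spindle several times, once for the WAL record, once for the BlueFS metadata update just discussed, and once for the later flush to the final location, whereas a DB-on-SSD layout keeps all but the flush off the spinner.

# Toy tally of where one small deferred write lands per device layout.
# Illustrative only; BlueStore's real deferred-write path has more detail.
from collections import Counter

def deferred_write_io(db_device: str, main_device: str) -> Counter:
    io = Counter()
    io[db_device] += 1    # record the data into the RocksDB WAL
    io[db_device] += 1    # BlueFS metadata update (no log recycling)
    io[main_device] += 1  # later: flush the data to its final extent
    return io

print(deferred_write_io("hdd", "hdd"))  # pure HDD: all 3 IOs on one spindle
print(deferred_write_io("ssd", "hdd"))  # DB on SSD: only the flush hits the HDD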
B
E
G
With something better? Well, they are nice if you have the DB on a faster drive, but if it's a pure HDD setup, then apparently there's no sense in performing the writes in this way.
A
I
want
to
Igor
I
want
to
replace
it
with
writing
small
extents
directly
to
the
fast
device
and
later
on,
just
defragment
back
to
a
single
extent
on
the
solid
device.
A
Yeah
well
Josh.
Thank
you
for
doing
that.
It's
I
guess
in
retrospect,
not
surprising
to
me
that
that
something
like
this
was
going
to
happen.
We
knew
that
that
when
they
disabled
that
in
roxyb
that
sooner
or
later
it
was
going
to
come
back
and
show
up
somewhere,
but
it
was
easy
to
just
kinda,
ignore
it
and
just
be
like
okay.
Well,
this
is
get
hurt,
but
we
don't
know
when
or
where
now
I
guess
we
found
out.
A
F
You're welcome. I actually just remembered there's one more thought in terms of tricks we could play here, instead of going down the thin-allocation path: every single time we map a page in for a log, we actually zero it and then extend the file out for the whole page, because really the main update here is the file size update, right? Yeah; the two things we need to avoid are the extent map update and the file size update, somehow.
C
D
F
G
Yeah, so I don't think allocation makes much trouble here, since the write-ahead log should be allocated in chunks long enough that it's just a single allocation, you know. If that's not the case, we should fix that, and it should be doable, right?
F
G
That's why, yeah. But if we would like to avoid the file size update on each write, then we need some tricks: either zero all these allocated blocks and then do some smart reading, or maybe, as an alternative, wrap each write with some header to specify the size of this record.
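A minimal sketch of that second trick, under the assumptions in this discussion (a preallocated, zero-filled log region; this is a hypothetical format, not BlueFS code): if each record carries a small length header, appends never have to touch the file size or extent map, and recovery just scans forward until it hits a zero header.

# Sketch: length-prefixed records in a preallocated, zero-filled log,
# so appends never update file size or extent map. Hypothetical format;
# a real log would also checksum records to detect torn writes.
import struct

HDR = struct.Struct("<I")  # 4-byte little-endian record length

def append(log: bytearray, off: int, payload: bytes) -> int:
    """Write one record at offset `off`; return the next append offset."""
    log[off:off + HDR.size] = HDR.pack(len(payload))
    log[off + HDR.size:off + HDR.size + len(payload)] = payload
    return off + HDR.size + len(payload)

def recover(log: bytes):
    """Replay records until a zero header marks the log's logical end."""
    off = 0
    while off + HDR.size <= len(log):
        (length,) = HDR.unpack_from(log, off)
        if length == 0:  # untouched, still-zeroed space
            break
        yield bytes(log[off + HDR.size:off + HDR.size + length])
        off += HDR.size + length

log = bytearray(4096)      # the "preallocated and zeroed" region
off = append(log, 0, b"txn-1")
off = append(log, off, b"txn-2")
print(list(recover(log)))  # [b'txn-1', b'txn-2']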
A
Igor, I was going to ask: how does your prototype BlueStore write-ahead log work?
G
A cycled buffer, and...
C
A
G
Yeah, well, actually, I definitely want to proceed with this work, and there is another driver for that stuff, which is that I'd like to see if we're able to absorb RocksDB stalls due to compaction or whatever maintenance stuff. We definitely can see periodic high latency peaks caused by RocksDB, so maybe we can try to avoid that with a large enough write-ahead log, well, I mean a standalone write-ahead log. So yeah, definitely I'd like to have another iteration on that stuff.
A
Maybe
now
that
reef
is
is
kind
of
you
know
being
wrapped
up.
Maybe
we
should
try
to
try
to
really
focus
on
getting
that
working
for
the
next
release.
E
A
To motivate you... and, okay, I think it's good. I think it's... no, I like it, both from the standpoint that it improves a lot of things, it seemed, and from the standpoint that it makes it so that we're less reliant on RocksDB. It makes it much easier to start thinking about being in control of our own destiny.
G
For this double write, just don't write the write-ahead log.
G
Yeah, definitely, they'd prefer not to change this behavior for now. Well, apparently that's the only case we've got since then.
E
F
A
All right, anything else on specific write-amp issues?
F
No
I
think
at
this
point
it's
kind
of
like
Case,
Closed,
yeah
and
then
TBD
whether
this
is
going
to
cause
issues
for
us
on
our
spinners.
G
So
well
again
for
for
your
spinner
stuff
I
try
to
so.
If
you
can
see
issues
caused
by
that
stuff,
I
would
suggest
to
try
to
get
rid
of
the
third
rights
as
a
first
step,
and
maybe
this
well
would
provide
some
relief.
A
Okay,
as
I
said
too,
that
if
I,
if
I,
am
able
to
spend
some
time
trying
to
accumulate
more,
I
o
in
transactions
for
the
right
head
log
that
that's
an
area
I
wanted
to
explore
a
little
bit
as
well,
and
that
might
might
also
help
on
the
peer
setup,
pure
spinach
setup.
G
F
G
I mean, this delay to accumulate more transactions should be done carefully. So if, yeah, if you are...
A
All
right:
well,
let's
move
on
then
so
next
we
have
the
q80
question
so
who
wants
to
lead
this.
C
Thanks
Mark
I
linked
the
pr
and
the
mailing
list
thread
that
has
some
background
here.
How
long
are
you
on
the
call?
Is
your
mic
working.
C
Mainly
I
was
hoping
to
have
a
discussion
so
that
we
can
get
a
consensus
on
the
design
because
you
guys
have
been
doing
a
lot
of
good
work,
but
I
think
there
are
some
some
design
level
things
that
we
want
to
figure
out.
First,
so
I
might
just
follow
up
on
the
list
of
we
can't
discuss
it
here.
E
With QAT encryption, we know the performance is better than with the CPU, and there are a few other QAT improvements...
B
E
Batch mode can improve the QAT performance. So I followed Casey's advice, adjusting the non-coroutine mode and the parameters.
C
Yes,
other
the
updates
that
have
the
the
fallback
if
there
isn't
a
co-routine
there,
and
that
looks
fine,
but
kind
of
the
second
design
level
question
that
I
had
was
if
it
was
really
necessary
to
do
the
waiting
down
in
the
plug-in.
If
there
isn't
a
instance
available
at
the
moment,.
E
Why
must
there
are
no
free,
Union
things?
It's
a
director
to
fall
backwards,
View.
C
Yeah
right
so
I
guess
what
I'm
asking
is:
how
big
of
a
difference
would
you
see
in
performance
here
with
that
configured
to
Zero
versus
the
the
default
of
two
that
you
have
in
the
pr.
E
C
E
But compared... the default of two may function to mimic the previous performance, for utilization, as the previous encryption code does...
C
I'll follow up with a comment kind of trying to drill into this on the PR, but I'm really sorry, I have to drop for our RGW meeting in a minute.
C
A
Yeah, thank you both for the public discussion on it; it's really good. I'll eagerly await both of your comments in the PR.
A
See
Casey
thanks
for
joining
all
right.
Let's
see
what's
next,
then
oh,
we're
now
back
to
the
rebalancer
topic.
So
Laura,
do
you
want
to
Drive
discussion
on
this.
B
Sure,
and
and
yeah,
if
I
can
kind
of
talk
about
what
I'd
like
to
see
and
then
perhaps
I
can
schedule
you
and
me
and
Josh,
where
we
can
talk
more
detail
later,
because
I
do
want
to
Loop
them
in,
but
essentially
so.
B
The
the
way
the
rebalancer
works
right
now
is
that
it's
an
offline
tool
with
the
OSD
map,
Tool
offline
setting
in
the
OSD
map
tool,
and
when,
when
you
run
the
tool
it
provides
a
before
and
after
rebalance
score,
and
it
shows
how
or
this
is
part
of
what
we
implemented
in
those
PRS-
that
I
merged
it's
a
score
in
the
the
OSD
map.
B
...that indicates how well each of your pools is balanced in regards to reads, and the idea of the read balancer is that it improves that score. So if the score is like 2.3, that means your pool is unbalanced, and maybe the rebalancer can get it down to like 1.2, and that would be a more ideal score.
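To make the score's meaning concrete, here is a toy Python version of the idea. This is an illustrative assumption about the concept only, not the exact formula the osdmaptool computes: compare the busiest OSD's share of primaries against a perfectly even split, so 1.0 is ideal and larger values mean reads concentrate on fewer OSDs.

# Toy read-balance score: busiest OSD's primary count vs the even share.
# Illustrative only; the real osdmaptool score is computed differently.
from collections import Counter

def read_balance_score(pg_primaries: list, num_osds: int) -> float:
    """pg_primaries[i] is the OSD acting as primary for PG i."""
    counts = Counter(pg_primaries)
    ideal = len(pg_primaries) / num_osds  # even split of primaries
    return max(counts.values()) / ideal

# 8 PGs across 4 OSDs: primaries piled onto OSD 0 vs spread evenly.
print(read_balance_score([0, 0, 0, 0, 1, 1, 2, 3], 4))  # 2.0: unbalanced
print(read_balance_score([0, 1, 2, 3, 0, 1, 2, 3], 4))  # 1.0: ideal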
B
And
essentially,
what
we're
looking
to
do
is
is
proof
that
the
score,
even
though
the
score
is
improving,
that
we're
actually
improving
performance
yeah.
So
what
I
like
to
do
is
somehow
get
some
kind
of
configuration
going.
B
Neha
mentioned
that
I
might
be
able
to
do
this
on
gibba
at
some
point
since
they're
they
might
be
redeploying
it
as
like
a
90
OSD
cluster,
but
really
something
that
would
be
most
ideal
as
a
small
cluster
with
different
devices,
but
maybe
like
three
or
four
osds
and
Josh
mentioned
that
HDD
would
be
better
than
SSD
to
to
measure
performance.
A
Yeah
yeah
I
think
I
agree
with
with
those
General
comments.
Hdds
are
tough.
We
could
try
to
get
you
on
one
of
the
inserta
nodes,
but
I
actually
gave
up
the
one
that
I
had
for
mcluck
Qs
testing,
so
I
don't
have
any
to
give
you,
unfortunately,
we'll
have
to
beg
somebody
else
to
give
up
give
up
one
of
theirs
or
or
maybe
we
could
steal
one
back
from
the
the
Crimson
testing,
because
I'm
not
convinced
that
Jenkins
is
actually
using
all
those
those
notes
properly.
A
So
I
think
some
of
the
insertion
machines
may
be
just
sitting
idle
in
the
Jenkins
queue,
so
that
might
be
an
option
too,
but
those
are
the
machines
that
we
have
that
have
hdds
and
and
actually
they
have
nvme
drives
in
them
as
well,
so
they're
pretty
nice
for
they're
old,
but
they're
they're
pretty
nice.
If
you
want
to
kind
of
be
able
to
switch
between
hdds
and
nvmees
and
just
kind
of
see
how
it
behaves
on
either
they
have
six
usable
hard
drives
and
four
usable
Envy
meat
drives
and
Niche
node.
A
So
you
could,
you
could
totally
do
like
a
4
OSD
cluster.
Just
locally
using
one
of
those.
A
Yeah
but
I
I,
honestly
I,
don't
think
it
should
be
too
hard.
I
mean
we
can
see.
If
the
folks
are
doing
clock,
testing
still
need
the
one
that
they've
got
that
I
gave
them.
That
I
think
that
should
be
inserted
zero
one.
So
they
might,
they
might
be
able
to
free
that
up.
So
you
can
try
it
on
there.
Otherwise,
there's
a
couple
others.
The
biggest
problem
we
had
actually
previously
was
that
they
were
not
updated
to
recent
operating
system
version.
A
B
Definitely, yeah. My biggest questions are, you know... what I would want your advice on most is what your process is for testing new patches, you know, since what...
B
Has
to
do
with
like
give
us,
we
upgrade
to
you,
know
a
specific
show
or
something
on
a
build
on.
Oh
no,
what
am
I
saying
a
build
on
CI,
but
I,
don't
know
what
your
your
general
process
is
for.
If
you
want
to
just
test
something
on
performance,
so
any
advice
you'd
have.
There
is
helpful.
A
The effect of the primary rebalancing, right? You'll have to think carefully about, like, how many PGs do you want to use, and do you want to test against a really unbalanced case or just kind of a typical case? I'm guessing if you test against a really unbalanced case you could show a really big improvement, right? But the question is whether or not, against a typical case, you see any improvement; I'd guess this can be much more nuanced in that scenario.
E
A
B
Go ahead? Oh yeah, yeah. There's no one way to go, because there's pros and cons to each: a realistic case would be showing realistic numbers of something somebody might actually see in their cluster, and then a really extreme case would probably be less likely to be seen, but it would show that huge performance improvement. So I think that...
A
Yeah, I can say that before we had any kind of, you know, balancing in place, when we were only dependent... well, not only, but basically dependent on the random distribution of PGs that you got from CRUSH, once you got to like 100 PGs per OSD on a given pool, you didn't really see a lot of obvious, consistent improvement with like random reads if you bumped up to like 200 PGs.
A
So
my
guess
is
that
even
over
just
a
standard
distribution
of
like
100
pgs
in
one
pool
you're
not
going
to
see
a
lot
of
benefit.
But
if
you're
talking
about
like
32
pgs
per
OST
or
or
16
pgs
per
OSD
in
one
pool,
then
that
might
be
the
case
where
you
can
start
showing,
like
you
know,
more
obvious
advantages.
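A quick Monte Carlo sketch of that intuition (random placement only, ignoring CRUSH details; illustrative, not a measurement): with few PGs per OSD, the luck of the draw leaves the busiest OSD well above the average primary count, which is exactly the headroom a read balancer can claw back, while at 100 or 200 PGs per OSD the randomness mostly evens out on its own.

# Rough simulation: primary-count imbalance under random placement,
# for different PG-per-OSD ratios. Ignores CRUSH; illustrative only.
import random
from collections import Counter

def max_over_mean(num_osds: int, pgs_per_osd: int, trials: int = 200) -> float:
    """Average ratio of the busiest OSD's primary count to the mean."""
    total = 0.0
    for _ in range(trials):
        primaries = Counter(random.randrange(num_osds)
                            for _ in range(num_osds * pgs_per_osd))
        total += max(primaries.values()) / pgs_per_osd
    return total / trials

random.seed(0)
for pgs in (16, 32, 100, 200):
    print(f"{pgs:>3} PGs/OSD -> busiest/mean ~ {max_over_mean(16, pgs):.2f}")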
B
Right,
that's
exactly
the
the
case
that
we're
looking
to
Showcase,
rather
than
rather
than
a
very
like,
like
gibba
in
its
current
state,
I'm,
not
sure,
since
it's
so
large
or
since
we
have
so
many
peaches.
That
doesn't
seem
like
the
exact
since
this
is,
would
be
mainly
beneficial
to
like
odf
clusters.
Yeah.
A
And
historically,
with
the
standard
testing
I've
done,
I've
tried
to
eliminate
as
many
sources
of
like
Superfluous
bottlenecks,
as
I
can
so
like.
I,
usually
give
a
really
generous
amount
of
pgs
to
a
pool
to
make
sure
that
that's
not
like,
affecting
or
impacting
things
that
we
want
to
test
like
you
know
how
a
PR
is
affecting
something
that
doesn't
otherwise
touch
pgs.
A
So
like
that
wouldn't
be
appropriate
right,
you
would,
you
probably
wouldn't
see
a
whole
lot
of
improvement
with
your
PR.
In
that
case,
I
think
you
really
want
to
showcase
this.
Like
okay,
you've
got
lots
of
pools.
You
don't
have
that
many
PG's
Pros
deeper
pool.
How
much
can
the
balancer
help
you
in
those
scenarios?
That's
I
think
what
you
probably
want
to
showcase.
A
Cool
so
I
I
think
next.
Steps,
then,
is
that
hard
drives
seem
very
reasonable
as
a
thing
to
test
with
this.
So
oh
wait
and
maybe
nvme
as
well.
So
let's
see
if
we
can
get
you
an
insert
a
node
to
test
on
I,
don't
have
access
to
the
list
anymore,
because
I
was
on
the
red
hat
stuff,
so
we
might
have
to
recreate
who's
using
those
notes.
B
A
It was like my Red Hat Google Drive or whatever, or Google Sheets, that I shared out to people. But it's only eight nodes, so it's not like this is the end of the world; it should be pretty easy to figure out who's on them.
A
So
my
guess
is
that
we'll
either
maybe
we
can
get
you
either
if
Adam's
not
using
his
or
that
one
will
probably
need
to
be
reinstalled
with
sent,
S8
or
or
something
or
suggested
stream,
or
something
else
or
I
think
somebody
that
was
working
at
m-clock
has
instead
of
a
zero
one
now.
Otherwise
we
can
look
and
see
if
Jenkins
is
actually
appropriately
using
the
four
nodes
that
has
I
know
it
uses
one
of
them
at
least,
but
you
might
not
be
using
the
others.
A
So,
let's
see
if
we
can
get
you
on
one
of
those.
B
Okay,
yeah,
that
sounds
great
and
maybe
I'll
start
like
an
email
or
slap
station
between
you
and
me
and
Josh,
and
if
we
want
to
do
a
meeting
we
can,
but
we
can
also
just
communicate
asynchronously
if
that
works,
better.
A
All right, well, thanks for coming, everyone. Good meeting, and see you next week. Bye, thanks all, thanks, man.