From YouTube: Ceph Performance Meeting 2021-06-17
A: All right, so new pull requests this week. Let's see, there's an interesting pull request for the MDS that looks like it may improve some of the locking when you have concurrent clients accessing, I believe, the same files.
A: I think the gist of it is that potentially we can be waiting a very long time to be able to grab, was it a read lock? Yeah, requesting a read lock. So theoretically this may help; we'll see. But it definitely seems like there's a lot of room for improvement in this area, so I'm excited to see if this helps.
A: Next, Kefu submitted a new btree allocator, which potentially has some advantages in terms of memory usage, and maybe in some cases performance (though not in all cases), at the expense of higher CPU usage. Adam has been doing excellent work looking at that. Adam, since Kefu is not here, do you want to talk at all about what you see?
B: Well, I just took Kefu's work and wanted to measure its actual memory benefits, because the btree was not connected to our mempool accounting, so that had to be tricked a bit. The results are very promising: it seems we are getting almost a two times space improvement with regard to AVL, and the CPU consumption is not much larger. So from my perspective it's a very good candidate to become the preferred allocator, but this is just preliminary.
C: We have a threshold of about 64 megabytes to enable switching, and maybe we should take this scenario and check the memory consumption with the allocator.
C: I was unable to... yeah, I'll double check that. Maybe it simply reduces the threshold; I don't remember. Maybe there is a valid scenario to reach that memory with huge memory use. Well, actually, the scenario should be pretty trivial to reproduce: you just need to release memory in an interleaving manner, so every other block, every other 4k block, gets released. We allocate the full set and then release in interleaved mode.
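The interleaved-release pattern just described can be sketched in a few lines; this is an illustrative model, not Ceph code. Freeing every other 4k block leaves the maximum possible number of free extents, since no two freed blocks are adjacent and nothing can coalesce, which is what makes it a worst case for extent-tree (AVL/btree) allocators.

```python
# Illustrative sketch (not Ceph code): why an interleaved release pattern is a
# worst case for extent-based allocators. Each freed block is isolated, so
# every free block becomes its own extent (and, roughly, its own tree node).
BLOCK = 4096

def free_extents(freed_offsets):
    """Coalesce a set of freed 4k block offsets into contiguous free extents."""
    extents = 0
    prev = None
    for off in sorted(freed_offsets):
        if prev is None or off != prev + BLOCK:
            extents += 1  # gap before this block: start a new free extent
        prev = off
    return extents

n = 1024  # allocate n contiguous 4k blocks, then release in two patterns
interleaved = {i * BLOCK for i in range(0, n, 2)}  # every other block
contiguous = {i * BLOCK for i in range(n // 2)}    # same amount, one run

print(free_extents(interleaved))  # 512: one extent per freed block
print(free_extents(contiguous))   # 1: everything coalesces
```

The same amount of freed space produces either one extent or hundreds, which is why the interleaved pattern bounds the allocator's metadata usage from above.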
C: I think we don't have anything like that. I know that in production, from time to time, we are getting this memory-saving mode in the hybrid allocator, so we can definitely reach the 64 megabyte threshold in production, but I'm not sure if I have scenarios like that; I know just huge ones.
B: Okay, so maybe I will just try to crank up my fragmentation scenarios to some extremes. I remember I cut them down from the default number of iterations just to get a reasonable run time, so maybe if I crank it up to like 10 or 20 iterations I will get enough. I will try to play with that, but that will take...
B: ...I guess a few days. Yeah, but it definitely strictly depends on the use pattern whether it uses much memory or not, so one should generate the right kind of data. And by the way, this interleaving scenario looks like a corner case for getting the highest RAM usage, so it might be pretty good for estimating how this new allocator behaves.
B: What's the difference with the AVL one? Because, well, in these results you shared today or yesterday, the RAM usage is not very large, like three and a half megabytes as far as I remember, so it's definitely not a large data set. It's interesting how it grows with a huge number of allocations.
B: Yes, you are correct, and that's why I was saying that I will go back to running a lot more iterations of the fragmentation test, because that one was the repository one, which is now trimmed to three iterations instead of the initial 10 or something that produced a lot of fragments.
B: Well, the interesting thing about the bitmap allocator is that it uses a fixed map, so after some threshold it wouldn't grow. Well, it never grows, actually; it allocates a static data structure in memory, so it would definitely win at some point from a memory usage point of view.
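That trade-off can be made concrete with a back-of-the-envelope model. All numbers below are assumptions for illustration (the per-node size in particular), not Ceph's actual metadata sizes: a bitmap's cost is fixed by device capacity, while a tree's cost scales with the number of free extents.

```python
# Back-of-the-envelope sketch with assumed numbers (not Ceph's exact ones):
# bitmap metadata is constant per device, tree metadata grows with fragmentation.

def bitmap_bytes(capacity, block=4096):
    # one bit per block, regardless of how fragmented the free space is
    return capacity // block // 8

def tree_bytes(num_free_extents, node_bytes=40):
    # roughly one node per free extent; node size is an assumption
    return num_free_extents * node_bytes

cap = 4 * 1024**4  # a hypothetical 4 TiB device
print(bitmap_bytes(cap))       # 134217728 bytes (~128 MiB), always
print(tree_bytes(1_000))       # 40000 bytes: tiny when lightly fragmented
print(tree_bytes(50_000_000))  # 2000000000 bytes: overtakes the bitmap
```

So a tree wins easily on a lightly fragmented device, and the fixed bitmap wins once fragmentation drives the extent count high enough, which is the intuition behind the hybrid approach discussed here.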
B: But well, it's interesting how this new allocator behaves in comparison with the AVL one, in corner scenarios and then later in production, in real scenarios. Maybe, well, potentially it's like 200 times less or something like that, as far as I remember, so maybe it's quite okay to keep this huge setting for any real production, but we'll see.

B: Yes, but on the same note, I would like to point out that the bitmap part of the hybrid allocator tends to produce more fragments per requested allocation than AVL, and now btree. That's why I assume it would be better to hold on to btree and that type of memory organization for longer.
B: I remember a couple of years ago looking into what a bunch of other file systems were doing, and it seemed like some kind of hybrid setup was really common. I don't know if that's still the case, or what the trade-offs are exactly, but that was one of the reasons I was asking that question: whether we have any expectations as to what we would think we would see at scale, or with lots of data, with each one.
B: Yeah, and as I already mentioned, this simple interleaving scenario would be the perfect estimation, I believe. In this interleaving scenario we are getting the highest, the topmost bound of memory usage.
B: I understand; what's the worst case I can get? Well, I can imagine some OSD that has a lot of small objects that we continuously allocate space for, then some step like deletion of objects, where we delete every second one at random.
B: That wasn't my intention. All right, well, it sounds to me like we should test this. Oh, go ahead, you have something?

B: Well, potentially we can build a sort of hybrid allocator on top of this new allocator as well. It would just switch to bitmap mode later, since btree uses less memory.
B: So if we find we might still get higher usage in some scenarios, we can build another hybrid allocator pretty easily, I believe.
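The hybrid idea being discussed could look roughly like this. The class and method names are hypothetical stand-ins, not Ceph's actual allocator interfaces; the 64 MB figure is the threshold mentioned earlier in the conversation:

```python
# Hypothetical sketch of a hybrid allocator policy: track free extents in a
# tree (btree/AVL-style) while its metadata stays small, and fall back to a
# fixed-size bitmap once the tree would exceed a memory threshold.
class HybridAllocator:
    def __init__(self, threshold_bytes=64 * 1024 * 1024, node_bytes=40):
        self.threshold = threshold_bytes  # the ~64 MB switching threshold
        self.node_bytes = node_bytes      # assumed per-extent tree node cost
        self.mode = "tree"
        self.free_extents = 0

    def note_free_extents(self, count):
        """Record the current free-extent count and switch modes if needed."""
        self.free_extents = count
        if self.mode == "tree" and count * self.node_bytes > self.threshold:
            self.mode = "bitmap"  # tree metadata too large: go fixed-size

alloc = HybridAllocator()
alloc.note_free_extents(1_000_000)  # ~40 MB of nodes: stay in tree mode
print(alloc.mode)                   # tree
alloc.note_free_extents(2_000_000)  # ~80 MB: over threshold, switch
print(alloc.mode)                   # bitmap
```

With a btree backend using roughly half the memory of AVL, the same threshold would be crossed at roughly twice the fragmentation, which is the "hold on to btree for longer" point made above.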
B: All right, well then, let's move on. Oh, but we've got another PR from Adam: provide a runtime ability to modify the size of the tcmalloc thread cache. This replaces the earlier one I might have mentioned last week, I think.
B: The idea here is that we set the tcmalloc thread cache bytes, or max total thread cache bytes, inside the OSD, rather than doing it as an environment variable in the many different front ends and installers. Adam, I am very much in favor of your PR; I think it's absolutely the right way to go.
B: Either they use the priority cache, or we do it at the perfglue level, but then we take it out of all of the other front ends or installers or whatever they are and just don't set it there at all, other than maybe allowing them to set the ceph.conf setting, which should be the right way to do it moving forward.
B: Well, Adam, what do you think?

B: Well, the current PR that you mentioned is an implementation where you basically do only one thing: you assign a conf option name that is to be tracked for changes, and that's all. So if you give it the osd tcmalloc cache size option, then it will check it; if you replace it with an rgw thread cache size...
B: ...it will do that. It's just a one-liner, so I don't see much room for improvement there, unless we want to have one variable for each daemon. Then there is an open question whether there should be some automation that detects whether a ceph client is a daemon or not and acts accordingly, but I have no opinion on that. Sure.
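The "one-liner" pattern Adam describes, where a daemon names a single conf option to be tracked and a callback fires whenever it changes, can be sketched as follows. The class, method, and option names here are illustrative, not Ceph's actual config API:

```python
# Hypothetical sketch of runtime conf-option tracking: each daemon registers
# the one option it cares about, and a callback applies the new value (here,
# standing in for resizing tcmalloc's thread cache at runtime).
class ConfTracker:
    def __init__(self):
        self._watchers = {}  # option name -> list of callbacks
        self._values = {}

    def track(self, name, callback):
        """The one-liner: name the option to watch, supply a handler."""
        self._watchers.setdefault(name, []).append(callback)

    def set(self, name, value):
        """Apply a config change and notify watchers if the value changed."""
        old = self._values.get(name)
        self._values[name] = value
        if value != old:
            for cb in self._watchers.get(name, []):
                cb(value)

applied = []
conf = ConfTracker()
conf.track("osd_tcmalloc_thread_cache_bytes", applied.append)

conf.set("osd_tcmalloc_thread_cache_bytes", 64 * 1024 * 1024)
conf.set("osd_tcmalloc_thread_cache_bytes", 128 * 1024 * 1024)
print(applied)  # both new values reached the callback
```

Swapping the tracked name (say, to an rgw-specific option) changes nothing else in the daemon, which is why per-daemon variables would stay cheap to add.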
B: Okay, I was looking through the cache bytes change. Okay, yes, so it's just for the OSD here. Oh, and I'm trying to remember this.
B: Well, so, Adam, yes, I think generally this approach is... I'm still very much in favor of pursuing this, either with, you know, multiple settings for different daemons, or through one that we set in different regions of the ceph.conf. However we want to do it doesn't matter to me, really. I just much prefer this versus trying to do it in the higher level tooling.
B: I just copied the line that is the only glue between the OSD and managing thread cache bytes, and that's yours.
B: Yeah, exactly. I'm hoping that we'll be able to convince the RGW folks and the MDS folks to adopt the priority cache too. I talked to Matt just a little bit about it, but I need to go back and talk to him a little more. I think it would be beneficial if all of our daemons just used the same mechanism for controlling memory targets, plus fewer weird independent settings.
B: Okay, so, yeah, does anyone have any other opinions on any of this? I feel strongly about it, but that's my take. All right, well, Adam, maybe we can talk more about it and get your PR merged, because I'd like to see it happen. I think it's good. Thanks!
B: Okay! Next we have a couple of closed PRs: one that did merge, for setting the tcmalloc thread cache bytes in cephadm.
B: We needed this right now because it will apply to all daemons, including those that don't... well, I guess maybe Adam's would work for anything that's using perfglue, so maybe that would be fine anyway. But in any event, that merged. I don't think it's the right way to go long term, but it's useful and good for now.
B: I'm hoping, though, that we won't have all of these individual tools, including CBT, set this on their own. Next closed: Igor's PR fixing something in the priority cache with the perf counter priorities. It was making the UI for one of the tools looking at this go crazy and run off the end of the screen, so that's good. That was just an accident on my part, that they made it in with that priority.
B: This manager time-to-live thing still has ongoing discussion about the cache there, and then there's a discussion going on with the AVL allocator; Kefu had mentioned btree allocation there.
B: There was some discussion about data structures, but I don't know if there's much else going on there. Either Igor or Adam, was there anything holding that PR up?
B: This is the BlueStore AVL allocator one that introduced the bluefs avl ff max options, just some minor topics to discuss. So generally it looks okay; I'll approve it shortly. Yeah, I basically like the implementation. I think even the parts that were just implemented here should also go into the new btree allocator; I'm even surprised that it's not there, maybe just for making comparison easier.
B: Cool. Oh, Laura, you've mentioned a question in the chat window here: does anyone on the call know what the priority field for a perf counter signifies?
B: This may be my own ignorance, but I think the only thing I know that it actually does is change where things show up, in, like, display windows. Adam, or anyone else, do you know if there's any more significance to those? I think that's exactly right, Mark. I think it's...
B: It might even control how many of those are reported, exactly, but it's for controlling which ones end up in, like, the output of the ceph daemon perf command, and I think it may control which ones are actually reported to the manager too. Yeah, even the dashboard has one level that it displays. They pick one level, so if anything needs to be displayed in the dashboard, you need to assign a particular priority, if I remember correctly.
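The behavior described, where each consumer picks a priority level and shows only counters at or above it, can be sketched like this. The priority constants, counter names, and thresholds are made up for illustration; they are not Ceph's actual values:

```python
# Illustrative sketch: perf counters carry a priority, and each consumer
# (daemon perf dump, manager reporting, dashboard) filters by a chosen level.
PRIO_DEBUG, PRIO_USEFUL, PRIO_INTERESTING, PRIO_CRITICAL = 0, 5, 8, 10

counters = [
    ("op_latency", PRIO_CRITICAL),       # always worth surfacing
    ("cache_hits", PRIO_INTERESTING),    # shown in summary views
    ("internal_retries", PRIO_DEBUG),    # only for local debugging
]

def visible(counters, min_prio):
    """Return the counter names a consumer at this level would display."""
    return [name for name, prio in counters if prio >= min_prio]

print(visible(counters, PRIO_INTERESTING))  # a dashboard-level view
print(visible(counters, PRIO_DEBUG))        # everything, locally
```

The same counter set yields different views per consumer, which matches the observation that the field mostly controls where and whether things show up.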
B: Oh right, so lots of stuff with no movement.
B: My sharded object cache from last week for RGW: I don't think anyone's had time to look at it yet, so that's just kind of sitting there. I need to get back to the cache binning; that should get in soon, so I need to make that a higher priority and just get it in.
B: Igor and Adam, what should we do about the pinning for the onode cache? What do you guys want to do?
In
any
case,
I'm
looking
forward
to
review
and
then
merge
it
yeah
yeah.
That
will
be
good
yeah.
I
agree
this
is
going
to
be
another
great
win
after
the
first
set,
so
but
either
after
after
you
rebased
it.
I
am
happy
to
run
this
through
qa
for
you.
B: Very cool. All right, anything else that I missed for PRs, guys?

B: Mark, I just wanted to bring up your omap bench PR for discussion. What do you think? Yeah, I was just thinking: how can we expedite that and make it more, you know, user friendly? I know it's running as a part of the objectstore test right now, but what are your thoughts?
B: One thing right now is that it doesn't talk to an OSD, right? Like, it's talking to the object store.
B: Maybe I could take the fio engine and glue it together with this code. That would maybe be the simplest way to get both the ability to connect to a specified cluster and to preserve the code.

B: Right, cool, thanks guys.
B: Okay, so if there are no other PRs, then I'll give just a short update on stuff I've been working on. So, for downstream, the Red Hat downstream product, there was some testing going on comparing the versions based on Nautilus versus versions based on Pacific, and there were some potential warning signs that there may have been some regression in Pacific versus Nautilus for RADOS Gateway.
B: So I went back and looked at upstream (so this is all, you know, public stuff), and what I saw was that in Pacific, on a really high performance cluster, we were seeing higher CPU usage and lower performance. There are some hints that maybe it was malloc related, so I went through and did a big sweep looking at the number of RGW threads and what happened with different tcmalloc thread cache settings for RADOS Gateway.
B: That one is puts; this one is the same spreadsheet, I'm just linking different individual sheets or tabs within it. But anyway, we saw really quite different behavior, different performance levels, and different CPU usage between Nautilus and Pacific.
B: That's not sharded at all, just a single one, so I made a PR to shard it, and that's the one I was talking about earlier, this sharded object cache thing. It helped some; not a lot, but some. Not enough to fix things to the extent of making it as good as Nautilus. So then I went back, and over the last weekend I did a semi-automated performance bisection, and these are always really rough; they take a lot of time.
B: I don't really know why yet. The spawn stuff, which is the first one, I mean, that kind of makes sense given the description of what it does; I could imagine lower performance and higher CPU overhead given those changes. For the second one, the Ceph 16 pull request 35355, I have absolutely no idea why that would cause that kind of performance change, but I don't know this code very well either.
B: So I don't know, Adam, if you remember it, and I don't know if you've looked at this at all or have any insight, but just from a casual glance at it I didn't see anything obvious that stuck out to me.

B: All right, what's the context? So this pull request here...
B: I can go back and look at it. This is "add request timeout to beast". For some reason that seems to be dramatically increasing the CPU usage and, at least in these tests, also decreasing performance for beast. I don't know why. That one needs to mute.
B: I'm honestly not sure; this is the first I've seen of it. Sure. And I don't understand how beast works well enough to have good insight into it, but that was at least what the bisect seemed to be showing. So that's what I'm trying to figure out now.
B: So, in any event, all of these numbers for gets and puts with different tcmalloc thread cache bytes, numbers of threads, and all these things, to some extent it's all kind of irrelevant, because we need to figure out why these PRs are having this impact first, and probably get those fixed. Then we run tests looking at whether or not the current number of threads, which is 512, is significant (that's a lot of threads for beast), and whether the thread cache settings actually make sense given that, or given a change if we make a change. But that's the open question, at least in the long term.
B: I can tell you that one of the goals behind our asio changes will be to cut down the number of threads we are using overall. The number of threads we're using now is basically a result of the original civetweb design, which we aren't using anymore, but its assumption was that everything would be a blocking call. Yeah, that's what I assumed too.
B: The good news here, though, is that in these tests we're talking about between, like, 100 and 200 thousand 4k puts per second. That's not bad for this kind of architecture, right? Like, given the round trips that are required to the OSDs, and given the S3 protocol and everything, in my mind that's not bad.
B: Probably faster than a lot of our competitors is my guess; I'm just making that up. But Mark, I would like to applaud your table, which lists PRs with performance. That's, like, amazing!
B: Oh, thank you. I had to have, like, a drink after I was done with all of this, because it was a lot of work. But yeah, if we could automate this, like figure out a way to have it combine Kefu's work and the radix work for saying whether something passed or failed, maybe make it even a little smarter with doing bisects like this, and automate the whole process, it would just be so much nicer. But right now it's only semi-automated.
B: So there's one other thing that showed up as I was looking at this: this bucket init tab that's in there. So that's, like, this thing here in this window.
B: What I saw when I was trying to test the sharded cache implementation is that, all of a sudden, bucket initialization was way, way slower than in either Nautilus or Pacific. I didn't test anything more than just one configuration on master to verify that it was happening there as well, but it was. So something between Pacific and master appears to have changed that's causing the bucket initialization phase here to be much, much slower. That's another thing to bisect, to figure out what's going on there.
B: I just haven't brought myself to start doing that yet, because it's intense to go through those, but maybe I'll try to do it later this week, I guess today and tomorrow. But that's another thing we need to figure out, and then, if I do that, I can more reasonably start testing these different sharded cache configurations and see if it actually helps, or if it's kind of useless. So that's it; that's where we're at on this.
B: We maybe are seeing a very slight (and maybe in some cases not even that slight) random write regression in Pacific versus Nautilus for librbd. But I haven't fully completed that; I haven't looked at master for those tests as well, so more data to collect.
B: All right, so the other thing that was on... sorry, one thing: just looking at the spreadsheet, I noticed the puts column for CPU usage on line 18 looks like there's a significant jump between that and the earlier runs, about a 50% difference or so. Yeah, look at the puts per second as well, though.
B: Okay, so we're actually just doing more; something changed that let us do more work. Got it. We're not more efficient; maybe we might even still be a little less efficient, I don't know. Somewhere in there we can probably figure out the ratios, but I didn't look too closely at it because I figured it was just...
B: It gets weird too, because, I mean, I don't even know how to make it reliable in an automated way. At different steps, if I get inconclusive results, I end up running, like, a couple of them. I guess you could automate that too, and then try to have it keep running stuff until it either gives up or, you know, can converge on some level of assurance that it really is a pass or a fail.
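The rerun-until-confident idea could be sketched like this. The margin, the convergence rule, and the function names are illustrative assumptions, not the actual bisection tooling being described:

```python
# Illustrative sketch: classify a noisy benchmark as pass/fail by rerunning
# until the mean difference from a baseline is decisive, or giving up.
import statistics

def classify(run_bench, baseline_mean, fail_margin=0.10, max_runs=5):
    """Return 'pass', 'fail', or 'inconclusive' for a noisy benchmark step."""
    results = []
    for _ in range(max_runs):
        results.append(run_bench())
        mean = statistics.mean(results)
        rel = (mean - baseline_mean) / baseline_mean
        if len(results) >= 2:        # require at least two runs before deciding
            if rel <= -fail_margin:
                return "fail"        # clearly slower than the baseline
            if rel >= 0:
                return "pass"        # at or above the baseline
    return "inconclusive"            # noise never resolved within max_runs

# deterministic stand-in for a benchmark that regressed roughly 20%
samples = iter([82.0, 79.0, 81.0])
print(classify(lambda: next(samples), baseline_mean=100.0))  # fail
```

A bisect driver could call this at each step and only trust decisive answers, which automates the "run it a couple more times to convince myself" loop described above.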
B: Yeah, I have some measurement of variability across a few runs, and I only call it a pass or a fail based on a really large difference. Yeah, and there was enough noise in these that there were multiple times where I ended up having to do several runs to really convince myself that, like, master failed. So it's kind of ugly sometimes, yeah.

B: That's a lot trickier than correctness testing, that's for sure, yeah!
B: Oh, one thing I did want to mention on this, though, that I forgot: 35355, the Ceph 16 one on line 25, is an easy commit to revert. It just reverts really cleanly from Pacific, and that did show that we basically more or less went back to the same level of performance as before that commit.
B: So that one is really nice and clear: just reverting it makes performance go back up, so that's easy. The spawn PR, that other previous one on line 12, has been layered on top of. I tried to rip it out; I spent maybe half an hour just ripping stuff out of RGW and trying to make it go away, and it was not super easy or clean.
B: All right, well, that's all I've got for that. Sage had listed a couple of things he wanted to talk about from Ceph Month, but he's not here, and we're kind of towards the end of the hour, so maybe we should just wait till next week.