From YouTube: Ceph Performance Meeting 2022-06-23
Description
Join us weekly for the Ceph Performance meeting: https://ceph.io/en/community/meetups
Ceph website: https://ceph.io
Ceph blog: https://ceph.io/en/news/blog/
Contribute to Ceph: https://ceph.io/en/developers/contribute/
What is Ceph: https://ceph.io/en/discover/
A: All right, well, I suppose we should get this started. I don't have a whole lot going on with the requests that I remember seeing; it's been a little slow over the last couple of weeks, but there are two updates that I saw. The first one is Igor's "let's get rid of the statfs update on each transaction" PR. Adam reviewed it and approved it, and Igor's got a couple of additional updates there, so I think it probably just needs another review, but otherwise it sounds like it's moving along and probably looks pretty good overall.

A: The other update was adding CoDel to BlueStore for buffer bloat mitigation. Sam did get a chance in the past week to review that and has a number of changes that he requested, so it looks like that's not quite ready yet, but it's actively in the works and actively being reviewed. So that's good. That was all I really saw for updates this week.
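For context, CoDel (Controlled Delay) is the active queue management algorithm the PR applies for buffer bloat mitigation. A minimal Python sketch of the core idea follows; this illustrates the general algorithm, not the actual code in the BlueStore PR:

```python
import math

class CoDel:
    """Minimal sketch of the CoDel idea (not the Ceph PR's code):
    if queue sojourn time stays above `target` for a full `interval`,
    signal congestion (drop/throttle), and shorten the next check
    via the control law interval / sqrt(count)."""

    def __init__(self, target_ms=5.0, interval_ms=100.0):
        self.target = target_ms
        self.interval = interval_ms
        self.next_signal_time = None  # deadline for sustained high delay
        self.count = 0                # consecutive signals so far

    def on_dequeue(self, now_ms, sojourn_ms):
        """Return True if the queue should be throttled at this dequeue."""
        if sojourn_ms < self.target:
            # Delay dipped below target: reset the state machine.
            self.next_signal_time = None
            self.count = 0
            return False
        if self.next_signal_time is None:
            # Delay just went above target: arm a deadline one interval out.
            self.next_signal_time = now_ms + self.interval
            return False
        if now_ms >= self.next_signal_time:
            # Delay stayed high for the whole interval: signal, and check
            # again sooner as congestion persists (control law).
            self.count += 1
            self.next_signal_time = now_ms + self.interval / math.sqrt(self.count)
            return True
        return False
```

With a sustained sojourn time above the target, the controller starts signaling after one interval and then signals at an accelerating rate; a single low-delay sample resets it.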
A: All right, well then, moving on to discussion topics. We did just kind of talk about this RocksDB tuning.
A: I've got an article, basically a blog post, in the works; it's turning out to be quite long. The good news is that for the last year or two there have been these RocksDB tunings floating around on the mailing list and in blog posts, where the number of in-memory buffers for the write-ahead log is dramatically increased and the size of those buffers is reduced, while simultaneously allowing you to accumulate two of them into a flush to the database. I've been a little wary of that tuning, primarily because some of the options that were changed regarding the number of threads for compaction and the number of threads for flushing were ridiculously high, and I wasn't convinced that this tuning made a whole lot of sense.
A: But in the past couple of weeks I've tested it, and it actually results in fairly decent behavior. In fact, we don't see the write amplification increase with it that we typically see when using smaller memtables and smaller flushes overall. It turns out that this is primarily due to changes that were made in the number of files that are allowed to accumulate in level 0 and the size of L1.
A: Generally, you want those to be matched. So the good news here is that it actually works better than I expected it to, and I'll say as much in the article. But the even better news is that we can actually do better.
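The matching rule can be sketched numerically. The values below are hypothetical, chosen only to show the relationship (flush size × L0 compaction trigger ≈ L1 target size), not the tuning from the upcoming article:

```python
# Hypothetical numbers illustrating the L0/L1 matching rule: the data
# accumulated in L0 before compaction triggers should roughly equal
# max_bytes_for_level_base (the target size of L1), so an L0->L1
# compaction rewrites L1 about once rather than repeatedly.

write_buffer_size = 32 * 2**20            # 32 MiB memtables (illustrative)
min_write_buffer_number_to_merge = 2      # merge two memtables per flush
level0_file_num_compaction_trigger = 8    # L0 files before compaction fires

flush_size = write_buffer_size * min_write_buffer_number_to_merge
l0_accumulated = flush_size * level0_file_num_compaction_trigger

max_bytes_for_level_base = 512 * 2**20    # target L1 size (illustrative)

# Matched: 32 MiB x 2 x 8 == 512 MiB
assert l0_accumulated == max_bytes_for_level_base
```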
A: By also tweaking level 0 and level 1 and having even more, smaller buffers that are accumulated into a larger flush, we can actually get even better RBD performance without a huge increase in write amplification, only a fairly modest one. Over our stock settings it's basically about a 20% performance gain on our faster NVMe systems for maybe a 10% write amplification increase.
A: And actually it's not even that: it's a 10% increase in writes to the database, so in reality the write amplification might actually be lower, but it's roughly the same. The key there is the level 0 and level 1 tuning.
A: So I'll talk about that in the article, but the gist of it is that it looks like we can do significantly better than the current BlueStore defaults, and we can do better than this alternate tuning as well, so long as we're really careful about the tweaks that we make. I'll have that coming out at some point, but I just wanted to let people know it's in the works.
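For readers who want to experiment before the article lands, tunings like these are passed through Ceph's `bluestore_rocksdb_options` setting. The option names below are real RocksDB options, but the values are illustrative only, consistent with the matching rule above rather than taken from the article:

```ini
[osd]
# Illustrative values only; the article's actual tuning may differ.
# 32 small (32 MiB) memtables, flushed two at a time; L0 compaction
# triggers after 8 files, sized so that the data accumulated in L0
# matches the L1 target (32 MiB x 2 x 8 = 512 MiB).
bluestore_rocksdb_options = compression=kNoCompression,max_write_buffer_number=32,min_write_buffer_number_to_merge=2,write_buffer_size=33554432,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=536870912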
A: The other thing I'm working on is looking at our sharding and threading behavior in the classic OSD. There's some evidence that when we allow OSD shards to become empty, our efficiency goes down significantly, due to the way that we call notify_all for all threads to wake up. We've got data for it, but that's farther behind; more of that analysis and work will happen after this other article is done.
A: I may try to change the way that we wake up threads in the OSD as well. So anyway, that's in the works. That's basically all I had. Any questions on either of those things?
A: If not, then I'll open it up to everyone else. Is there anything folks would like to discuss this week?
B: So I'm working on it; I'm rewriting the whole code. I finally understood what happens in the snap mapper. It took me some time, help from Josh, and running with a debugger to realize what the data structures are doing and why. At first I tried to give Paul Cuzner a version just skipping the RocksDB update.
B: It seems that the cost really is that we read it from RocksDB, so I'm now working on a new version where everything is stored in memory. Today I finished writing it, and now I'm trying to debug it, and of course nothing is working, but that's expected. This weekend I'm flying to Rome, so I'm only going to be back Monday.
A: Gabi, in the new version you're working on, will it be per PG? Will we be able to...
B: Yeah, so I expect that we will save a lot of memory, but it depends how many objects really exist in the system; I don't know that number. If the number of clone objects is small, then whatever we do isn't going to make much difference to the memory footprint. If we have many of them, then sure.
A: Cool, cool. All right, well, enjoy Rome, and we'll definitely be excited to hear more about your work on this later on.
B: Hopefully I'm going to be able to get hold of him next week as well, and then I hope to get everything working; it shouldn't be too long, I mean, just to be able to test. It's still not going to be data structures you could use, because they're not persistent, but once this thing is done, I expect that reconstructing from the onode should not be a very long project, probably a couple of weeks.
A: All right, well, I don't think I have anything else, guys, so unless anyone has something they want to bring up, we'll wrap up early today.
A: All right, well, thanks for coming, everybody, and we'll talk again next week. Have a great day.