From YouTube: Ceph Performance Meeting 2022-11-17
Description
Join us weekly for the Ceph Performance meeting: https://ceph.io/en/community/meetups
Ceph website: https://ceph.io
Ceph blog: https://ceph.io/en/news/blog/
Contribute to Ceph: https://ceph.io/en/developers/contribute/
What is Ceph: https://ceph.io/en/discover/
A: All right, I reached out to Adam to see if he's going to be able to make it today or not, but I'm not hearing back, so I think we'll just get going here.
A: So I apologize, but I didn't look over the pull requests in the last couple of weeks. I just wasn't feeling very well this morning, so I ended up not working on it, but I'll try to get those updated for next week. There has been a fair amount of stuff going on, though. One of the things Adam has been working on for the past couple of weeks is trying to change how shared blobs work in BlueStore, and we saw some really good initial numbers.
A: I don't know how many folks remember from when we last got together, but some of his numbers were looking really, really good for snapshots. Basically, when you take snapshots right now, objects fragment into lots of shared blobs, and we see a lot of overhead keeping track of them. He basically tried to make it so we only have one shared blob per object, but then segment things differently.
A: Unfortunately, though, once he fixed some things that were wrong in the initial version, the picture changed. It wasn't that the performance was lower, but the CPU usage was quite a bit higher. The peaks were still lower than main, but the floor was higher than main as well. So on average it was probably about the same, just a little more condensed over time, so that didn't seem to be working out.
A: I still haven't gotten back to looking at it in depth to see if I could figure out why that changed and maybe help him, but right now he's thinking about maybe going down a different path: keeping shared blobs, but making it so that when you create a new extent, you can add it to an existing shared blob, rather than creating a new shared blob and ending up with basically one shared blob for every new extent in the snapshot scenario.
A: So we'll see. That might be a little nicer than the other option in some ways, in that it wouldn't require any kind of on-disk format change if you did it that way. That's a side benefit, but it also means that shared blobs are still handled like they are now. So I'll check in with him later, either tomorrow or maybe next week, and see how he's doing on that.
A: But it's still really exciting stuff. I think it's still maybe the right way to go for fixing the snapshot issues we have right now with RBD mirroring and snapshotting in general. If it's not working out, there's still my PR as the fallback option, which is to defragment objects when you take snapshots if they're heavily fragmented. That's another way to get rid of the fragmentation, because you basically shove everything back into a single contiguous extent. The downside is that there's more write amplification on disk and a little bit more space amplification.
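The defragment-on-snapshot fallback could be sketched roughly like this. This is purely illustrative: the function names, the extent representation, and the fragment threshold are assumptions for the sketch, not Ceph's actual BlueStore code.

```python
# Illustrative sketch (not Ceph code) of the fallback idea: at snapshot
# time, check how fragmented an object is and, past some threshold,
# rewrite it into a single contiguous extent. The threshold and names
# here are hypothetical.

def should_defragment(extents, max_fragments=16):
    """Decide to rewrite the object contiguously if it spans too many
    extents. `extents` is a list of (offset, length) pairs."""
    return len(extents) > max_fragments

def defragment(extents):
    """Collapse a list of (offset, length) extents into one extent
    covering the same byte range, paying the write amplification once."""
    start = min(off for off, _ in extents)
    end = max(off + length for off, length in extents)
    return [(start, end - start)]
```

The trade-off mentioned above shows up directly: `defragment` turns many small shared extents into one contiguous write, at the cost of rewriting data that hadn't changed.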
A: So this may be a little less attractive than doing it the way I was trying to do it. Let's see, what else has been going on from a performance perspective? We've gotten some other... oh, actually, there's one big one. Casey, are you here? Is your mic on?
A: Without getting too specific about downstream, it sounds like you guys have a use case where you expect an omap range delete to remove more than, like, a thousand keys at once, and I was wondering if I could try to understand that. Right now, upstream, we don't really have delete range supported, basically because it was causing all kinds of problems with tombstones. So I was wondering: what is it that you guys are trying to do?
A: It's like more than a thousand, but there was something where, apparently... I thought you guys were maybe expecting something else. So I was trying to understand what it is that you guys need from the OSD, so we can make sure it's doing what you need.
B: Well, all of this work is around RGW's log trimming for multi-site replication. The logs grow, deletion is really expensive, and without rm-range, trimming has trouble keeping up at scale.

A: Okay.
B: I mean, my understanding is that we've been happy with this behavior upstream for quite a while.

A: Okay.
A: We can do them; we just can't do them a lot. Basically, the gist of it is that range tombstones are really expensive to iterate over, and so if you've got a write workload and you accumulate a ton of these things too quickly, they'll basically make everything blow up during iteration.
A: This one I'll put in the chat window.
A: So you might have to look into whether or not that would actually fix it, but if it does, it means we might be able to get rid of our own checks and instead just let RocksDB do its thing.
B: Okay, interesting. So I think what we do on main, versus what we can do downstream, and then the Nautilus baseline, are probably different things. I don't think that we can... all right.
A: Sure, sounds good. I just wanted to make sure there wasn't something custom being done downstream that needed to be supported upstream, because I just heard about it from Neha, and I think the initial thought was that maybe downstream's behavior was differing from upstream's, since we needed to get these in sync with each other. But maybe that's not the case.
B: Just before we had this rm-range stuff for log trimming, we had a lot of bugs around it, so I imagine this is just cherry-picking some RGW bug fixes that expected it.
A: I think probably the trickiest bit would be if you're coming in at, like, 900 deletes or something, right? It's just enough to be below the threshold, but it's too much where, if you enabled DeleteRange in RocksDB, it would start causing problems. There's probably some area in there where it's kind of awful both ways.
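The threshold trade-off being discussed could be sketched like this. The helper name and the cutoff of 1000 keys are assumptions taken from the conversation, not Ceph's actual configuration or code.

```python
# Sketch of the trade-off above: below some cutoff, issuing individual
# point deletes is cheaper; above it, a single RocksDB DeleteRange
# tombstone wins. The threshold value and names are illustrative only.

def plan_deletes(keys, range_threshold=1000):
    """Return ("point", keys) for small batches, or ("range", (lo, hi))
    for large ones, mirroring the ~1000-key cutoff mentioned above."""
    if len(keys) < range_threshold:
        return ("point", list(keys))
    keys = sorted(keys)
    # RocksDB's DeleteRange covers [begin, end), so bump the upper bound
    # past the last key to include it.
    return ("range", (keys[0], keys[-1] + "\x00"))
```

The "awful both ways" zone the speaker describes is a batch just under `range_threshold`: too many keys for cheap point deletes, but not eligible for (or not helped by) a range tombstone.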
B: I mean, for RGW's use of this for log trimming... I guess it depends on how many keys there are, but it essentially wants to delete all of them. Elsewhere, object deletion uses range deletes, I believe, to delete all the omap keys.

A: Okay.
A: And yeah, it looks like if you're coming in at, like, 10 or 20 or some low value, it's actually a lot cheaper just to do normal deletes than it is to eat the range deletion tombstone, for whatever reason. I haven't looked at it enough, but those things are awful to iterate over in RocksDB; they cause all kinds of problems.
A: Yeah, if this PR that I've got actually works with range delete tombstones too, and if it counts not just the single tombstone but the whole range of keys that were deleted toward the compaction trigger, it might help. At some point, I suppose, it can be bad if you have too many compactions triggering, but depending on how well this option works, it could be really good.
A: It just makes it so that if you're re-iterating over the same range, you can stop walking over all these tombstones in the same range over and over again really quickly.
A: I don't know if RGW does this, but in the OSD we've seen cases where we follow this really bad pattern: we seek to first, walk the range a little bit, and delete something; then we go back, seek to first again, walk over everything we just did, including the tombstones, delete a couple more things, and then do that whole cycle all over again. That was what was causing tons of problems, at least on the OSD side.
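A toy cost model makes it clear why that access pattern blows up. This is not Ceph or RocksDB code, just a step counter under the stated assumption that each restart has to walk every tombstone left by earlier deletes.

```python
# Toy model of the pattern described above: repeatedly seek back to the
# first key, walk forward over everything already deleted (the
# tombstones), delete a bit more, and repeat. Counting iterator "steps"
# shows the cost growing quadratically with the number of deletions.

def seek_first_delete_cost(n_keys, batch=1):
    """Steps taken if every delete batch restarts iteration from the
    front and must skip all tombstones left by earlier batches."""
    steps = 0
    deleted = 0
    while deleted < n_keys:
        steps += deleted            # re-walk all tombstones so far
        take = min(batch, n_keys - deleted)
        steps += take               # walk and delete the next batch
        deleted += take
    return steps
```

A single forward pass over 1000 keys costs 1000 steps; restarting from the front after every delete costs 1000 * 1001 / 2 = 500,500 steps, which is the kind of blow-up described on the OSD side.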
A: But that was the big problem with delete range when we were using it.
B: I don't see how that would arise from our use of the replication logs. Those are mostly just either listing from a position to the end, or deleting a range up to a given marker.
B: Before delete range, we were using an omap listing to get a thousand keys and doing a per-key omap delete on each.

A: Okay.
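That older trimming scheme could be sketched like this, against a hypothetical in-memory store. `list_keys` and `delete_key` are stand-ins for the real omap list and delete operations; the names and the interface are assumptions for illustration, not RGW's actual code.

```python
# Sketch of the pre-rm-range trimming loop described above: list up to
# 1000 omap keys from the current position, issue one delete per key,
# and repeat until the end marker is reached. The store interface is
# hypothetical, standing in for the real omap operations.

def trim_log(store, start, end_marker, batch=1000):
    """Delete keys in (start, end_marker) in batches of `batch` and
    return how many keys were removed."""
    pos = start
    removed = 0
    while True:
        keys = store.list_keys(pos, end_marker, max_keys=batch)
        if not keys:
            return removed
        for k in keys:              # one delete op per key: expensive
            store.delete_key(k)
        removed += len(keys)
        pos = keys[-1]              # resume after the last key seen

class DictStore:
    """Minimal in-memory stand-in for the omap key store."""
    def __init__(self, keys):
        self.keys = set(keys)
    def list_keys(self, pos, end, max_keys):
        return sorted(k for k in self.keys if pos < k < end)[:max_keys]
    def delete_key(self, k):
        self.keys.discard(k)
```

The per-key loop is what made trimming struggle to keep up at scale: removing N keys costs N delete operations plus the listing round trips, whereas a range delete is a single operation (at the price of the tombstone problems discussed above).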
A: I'll just find out what they saw. It sounds like Adam was concerned about it, but I'll try to take a look, and then we can circle back on it.
A: All right, I don't have anything else, I think, guys, and I'm pretty worn out. Does anyone want to present anything or say anything this week before we wrap up?

B: Get well!

A: Thanks, I appreciate it.