From YouTube: Ceph Performance Meeting 2023-01-05
Description
Join us weekly for the Ceph Performance meeting: https://ceph.io/en/community/meetups
Ceph website: https://ceph.io
Ceph blog: https://ceph.io/en/news/blog/
Contribute to Ceph: https://ceph.io/en/developers/contrib...
What is Ceph: https://ceph.io/en/discover/
A
All right, we might have a small crowd today, as I reached out to Adam this morning to see if he's going to be able to make it and I didn't hear back from him. So we'll see if he's able to come today. But Corey, I'm glad you're here, because I was curious how your stuff is going. Today we don't have a whole lot of pull requests to go through, because the Etherpad is totally down at the moment.

A
So until that gets fixed, I wasn't able to go through and look at the stuff I had previously done. I do have a couple of new PRs I guess I can go through here, but then I was hoping after that, Corey, we could talk about what you've been working on.
A
Getting an update from you. Okay, so new PRs this week: there is an update to the mClock work, that is to add a high-priority queue for operations, and a couple of folks have actually reviewed that now, so I think that's just in progress.
A
Let's see, we've got — Corey, I listed yours. I know I think we talked about it a little bit before, but your new PR is here, and it looks like Igor made a couple of comments in there, but that was maybe two weeks ago, so I guess hopefully you've seen those. And then we've got two closed PRs. I saw one; it was a fix for a race condition in Onode::put, but the author closed that in favor of Igor's PR.
A
That one is also working on some of the same onode pinning stuff, which I guess maybe also fixes this issue. And then there was a PR merged by Neha. This was to reduce the backfill and recovery default limits for mClock, and other optimizations.
A
I think I probably listed that in one of the previous performance meetings, because it's a couple of weeks old now. But in any event, that's why I looked at it anyway this morning, with my limited ability to go back and update previous ones. So, anything I missed, guys, from anybody?
A
All right, well then, Corey, how's it going?
B
It's going pretty well at this point. We got things stable just before the holidays by basically canceling all the backfills and stuff, using one of the scripts I think DigitalOcean created, and then manually compacting as necessary for a few days to get through some of the existing deletions that were queued for PGs, and eventually we got back to a point where things were stable. Then I've been out for a couple of weeks since then, until this week.
B
So we haven't really followed up too much after that because of the holidays, but I've been getting back into it this week, and we're starting to think about trying an upgrade to try some of these things, like your tuning settings for deletes, or using the deletion compaction filter and stuff like that. We still have a lot of data we need to move around, and right now we really can't let things move without having problems, since whenever PGs move we run into this delete issue. So there'll be more to come on that, I think, as we start testing some of those things we talked about last time. I don't have a ton of updates related specifically to that right now, but there is one thing I did want to talk about.
B
I have been testing the latest RocksDB version, 7.8.3, for a couple of days now. There's one really interesting update in there since 7.7.0, which changes how range deletes are iterated over: basically, it skips the range delete tombstone.

B
It doesn't iterate over it at all, and I've tested this on some just test-program-type stuff, and it works as advertised, extremely well. I think it could be a huge win for Ceph, in particular in these scenarios: if we start using DeleteRange to delete these kinds of ranges, like the PG stuff, it basically avoids the iteration issue altogether.
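The iteration win Corey describes can be sketched with a toy model (illustrative Python, not RocksDB internals): before the change, an iterator positioned at the start of a deleted range effectively steps over every covered key before reaching live data, while a skippable range tombstone lets the seek cost stay constant no matter how many keys the range covers.

```python
import bisect

# Illustrative model (not RocksDB internals): why skipping a range-delete
# tombstone beats stepping over deleted keys one at a time.

def seek_past_deletes_pointwise(keys, deleted):
    """Step over each dead key individually; cost grows with the number of
    deleted keys (the pre-7.8 iteration behavior described above)."""
    steps = 0
    for k in keys:
        steps += 1
        if k not in deleted:
            return k, steps
    return None, steps

def seek_past_range_tombstone(keys, del_end):
    """Jump straight past the range tombstone's end key; a single
    binary-search seek regardless of how many keys the range covers."""
    i = bisect.bisect_right(keys, del_end)
    return (keys[i] if i < len(keys) else None), 1

keys = list(range(100_000))
deleted = set(range(0, 99_999))          # everything but the last key

live_pt, steps_pt = seek_past_deletes_pointwise(keys, deleted)
live_rt, steps_rt = seek_past_range_tombstone(keys, 99_998)

assert live_pt == live_rt == 99_999
print(steps_pt, steps_rt)   # pointwise pays ~100k steps; range skip pays 1
```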
A
We should get that into testing soon, then. I know someone had a PR for updating to a newer RocksDB, but I don't know if it was 7.8 or something slightly older than that. That sounds like something we should do; this seems like a killer thing that we should try to get in.
B
I think he did update it to this latest version, 7.8.3, that was released a week or two ago. Oh, did we merge that? No, I don't think it's been merged, but he did update the PR, because he originally updated it to a previous release, I think from around November. But okay, so, yeah, I don't know, it seems like...
B
It definitely seems like something that'll be worth testing, and then probably going through and looking at where we could be using DeleteRange instead of iterating over ranges and doing individual deletes. I think that seems to be a lot more efficient with this improvement to RocksDB that they've made recently.
A
Josh, what's the freeze window on Reef looking like?

C
Yeah, I think it's a good candidate for testing out and seeing as soon as we can. Yeah, I think so.
B
There are some other interesting changes in there too that I haven't really tested directly, because they weren't as relevant to this particular case. But it looks like there's quite a bit of good stuff in that update that might help in different scenarios. Yeah.
A
So, okay, I think you've convinced me, then, that we should try it. I'll go back and look at that PR, the update, and maybe once we've got the VPN restored, we can try to get that into testing right away.
B
Other than that, I guess the only other thing I'll comment on, related to the previous discussion, is that the fact that our DB is spilling over to hard drives is definitely a huge part of the problem in our case, in terms of the compactions just taking a really long time, both online and offline. I think they're just kind of getting behind and not able to clean up tombstones effectively.
B
For that reason, we're continuing to look into that and trying to figure out what we can do short term, since obviously the RocksDB-upgrade type path isn't going to be a short-term thing for Pacific in production. But yeah, I'll be back on these meetings in the next couple of weeks, hopefully with more updates related to that. Cool.
A
Can you tell right now, when you spill over to the disk, are you seeing good I/O patterns to the hard drive when you're doing reads and writes for the SST files?
B
That's something else we still need to look at, because the drive obviously is also used for the data, so I need to separate them. I think we touched on that briefly at the end of our call last time, about approaches for maybe doing that and seeing what I/O is coming from the data path versus the DB path, but no, I haven't really looked into that yet. It is on my list. So, okay.
A
If I remember right, I think you're going to land, typically, in a different section of the disk for the SST files, just with the way that BlueStore, or BlueFS I guess, tends to do it. I think that's true. You might be able to notice that the SST files tend to land in a certain part of the disk, and that might then let you use something like blktrace to...
A
Just, you know, grep for the right range or something. I might be wrong on that, but I think that's the way it works. If not, you could always — well, it might be more work, but you could always, you know, stick it on a big hard-drive DB partition, or you could look at just the I/Os that are going to the NVMe drive, I guess, right? You wouldn't expect different I/O behavior between that and the hard drive; it should look the same, the SSD should just be faster. You might be able to do a blktrace just on the NVMe drive you have for the flash device and look at what those I/Os look like. Okay, and you don't have to change anything.
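The blktrace approach suggested here might look something like the following sketch: capture with `blktrace`/`blkparse`, then summarize completed I/Os by direction and size to see what the DB device is actually doing. The regex assumes blkparse's default text layout (action code, RWBS flags, then `sector + length` in 512-byte sectors), and the sample lines and process name are made up for illustration.

```python
import re
from collections import Counter

# Hypothetical sketch: summarize completed I/Os from blkparse text output,
# e.g. to characterize what's hitting the NVMe DB device. Matches only
# completion ("C") events: "... C <RWBS> <sector> + <nsectors> ...".
LINE = re.compile(r"\s(C)\s+([RW]\S*)\s+(\d+)\s\+\s(\d+)")

def summarize(blkparse_lines):
    """Return a Counter of (direction, io_size_bytes) for completed I/Os."""
    sizes = Counter()
    for line in blkparse_lines:
        m = LINE.search(line)
        if not m:
            continue
        _action, rwbs, _sector, nsectors = m.groups()
        direction = "read" if rwbs.startswith("R") else "write"
        sizes[(direction, int(nsectors) * 512)] += 1
    return sizes

# Made-up sample in the default blkparse shape, for illustration only.
sample = [
    "259,0 0 1 0.000000000 123 Q R 2048 + 256 [ceph-osd]",   # queued: ignored
    "259,0 0 2 0.000400000 123 C R 2048 + 256 [ceph-osd]",   # 128 KiB read
    "259,0 0 3 0.000900000 123 C W 9000 + 8 [ceph-osd]",     # 4 KiB write
]
print(summarize(sample))
```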
A
As you iterate over the range to do the compaction, it should be reading ahead; I think it's two megabytes for compaction, so I'd hope that's what we see.
A
All right, yeah. And I'll also be very curious to see if the other PR helps, although maybe it's less important with the RocksDB changes that you mentioned, if we can start using DeleteRange.
B
Yeah, I think it probably is less important, because we don't care as much about when the delete range tombstones actually get compacted all the way down and removed, if they don't affect our performance on iteration like they did. In this case, I basically tried a little test program using different RocksDB versions, and when I was using this latest version, it took two milliseconds to seek over that tombstone, whereas with older versions — it's 50 million keys, by the way — it would take like eight seconds. So, a pretty huge difference.
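A rough, hypothetical reconstruction of the kind of test program described (plain Python standing in for RocksDB, and scaled down from 50 million keys so it runs quickly): the point is the shape of the difference — a per-key scan versus a single seek past the range — not the exact 2 ms and 8 s figures.

```python
import bisect
import time

# Keys 0..N-2 are "deleted" by one range tombstone; only key N-1 is live.
N = 2_000_000
keys = list(range(N))

t0 = time.perf_counter()
# Old behavior: the iterator visits every key covered by the delete.
i = 0
while i < N - 1:
    i += 1
live_slow = keys[i]
t_slow = time.perf_counter() - t0

t0 = time.perf_counter()
# New behavior: jump straight past the range tombstone's end key.
live_fast = keys[bisect.bisect_right(keys, N - 2)]
t_fast = time.perf_counter() - t0

assert live_slow == live_fast == N - 1
print(f"scan: {t_slow * 1e3:.1f} ms, skip: {t_fast * 1e6:.1f} us")
```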
A
It's actually not necessarily horrible, though, because that's the piece that's always been missing: the memtable piece. The stuff I did only affects the SST side of it; it doesn't affect the memtable side. So if this fixes the other problem, the two actually might be complementary.
A
Cool. All right, well, I don't have anything else, guys. If Adam was able to come, I was hoping we could find out how his stuff is going. He's working on making it so that you can add extents to an existing shared blob, to reduce the number of shared blobs that we have for things like RBD mirroring. But then also, more recently, he's been working on making it so that applies not just to shared blobs but to blobs in general.
A
When you write an extent, you don't need to create a new blob; you can add that extent to an existing blob. That's the more interesting new stuff that he's been working on, and I got the impression that maybe he's getting it to pass testing, so we might be ready soon for performance analysis. But I think he's still actively working on it, so that's all I know right now. Beyond that, anything else?
A
Anything anyone wants to bring up before we close? All right, well, thanks for coming, guys. Talk to you next week; have a great week. Thanks.