From YouTube: Ceph Crimson/Seastore Meeting 2022-05-11
Description
Join us weekly for the Ceph Crimson/Seastore meeting: https://ceph.io/en/community/meetups
Ceph website: https://ceph.io
Ceph blog: https://ceph.io/en/news/blog/
Contribute to Ceph: https://ceph.io/en/developers/contribute/
What is Ceph: https://ceph.io/en/discover/
A
Let's see. For me, this week: I merged a PR with a bunch of fixes for the op tracking and related things that Radek put in, so that seems to be working pretty well now, and I have the initial op pipeline rebase out for review. I think Radek is probably almost ready on that one. How's it going on your end?
B
Yeah, last week I mostly reviewed the backref tree PR, and there's some analysis there; I think that PR looks fine to me, except for some minor comments. For the second part, I'm working on the segment cleaner metrics and cleanups, and I introduced a new graph to understand what is going on with the garbage cleaning. I also found that the results show that reclaiming with only the LBA tree seems inefficient.
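For context on that observation: cleaning a segment requires knowing which of its extents are still live. With only the forward LBA mapping, that means walking the whole tree; a reverse index keyed by segment (the backref approach) makes it a direct lookup. A minimal sketch of the difference, with hypothetical types and names rather than the actual SeaStore structures:

```cpp
#include <cstdint>
#include <map>
#include <vector>

// Hypothetical illustration: why cleaning a segment with only a
// logical->physical (LBA) map is expensive, and how a reverse
// (backref) index keyed by segment helps.

using segment_id_t = uint32_t;
struct paddr_t { segment_id_t segment; uint32_t offset; };
using laddr_t = uint64_t;

// Forward map: logical address -> physical address (the "LBA tree").
using lba_map_t = std::map<laddr_t, paddr_t>;

// Reverse index: segment -> logical addresses of live extents in it.
using backref_index_t = std::map<segment_id_t, std::vector<laddr_t>>;

// Without a backref index, finding the live extents of one segment
// means scanning every LBA mapping.
std::vector<laddr_t> live_extents_scan(const lba_map_t& lba,
                                       segment_id_t target) {
  std::vector<laddr_t> live;
  for (const auto& [laddr, paddr] : lba) {   // O(total mappings)
    if (paddr.segment == target)
      live.push_back(laddr);
  }
  return live;
}

// With a backref index, it is a single lookup.
std::vector<laddr_t> live_extents_indexed(const backref_index_t& backrefs,
                                          segment_id_t target) {
  auto it = backrefs.find(target);           // O(log segments)
  return it == backrefs.end() ? std::vector<laddr_t>{} : it->second;
}
```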
B
I think that matches the earlier observations. And there are some adjustments to the space calculations in the segment cleaner, and I need to make sure that they make sense to you. That's all.
A
Yep, I just left a few comments about 20 minutes ago. It looks mostly fine; I just have some questions about the metadata changes and the journal seek thing, mostly. We can discuss them at the end of the meeting; they aren't really complicated. As far as the backref PR goes, I am fine with it, so once you're happy with it, we can merge it.
C
Last week I root-caused a racing test issue and found that the client request will retry, maybe two more times, after the failure.
C
For example, the first request is a delete, but the object does not exist. It returns a not-exist error and the request will retry. So when we get the error code, we should write an error log entry. Then, when the request comes in again, we check the log entries to see whether it has already been completed.
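To make the retry handling described above concrete: the idea is that even a request that fails (say, deleting an object that does not exist) records its result keyed by the client request id, so a retried request can be answered from that record instead of being executed again. A simplified sketch with hypothetical names, not the actual PG-log code:

```cpp
#include <cstddef>
#include <cstdint>
#include <functional>
#include <optional>
#include <unordered_map>

// Simplified model of "record the result, including errors, so a
// retried client request can be recognized as already completed".

struct reqid_t {
  uint64_t client;
  uint64_t tid;
  bool operator==(const reqid_t& o) const {
    return client == o.client && tid == o.tid;
  }
};

struct reqid_hash {
  std::size_t operator()(const reqid_t& r) const {
    return std::hash<uint64_t>{}(r.client) ^
           (std::hash<uint64_t>{}(r.tid) << 1);
  }
};

class CompletionLog {
  // reqid -> result code already returned to the client (0 or -errno).
  std::unordered_map<reqid_t, int, reqid_hash> completed;

public:
  // Called after executing a request, whether it succeeded or failed
  // (e.g. -ENOENT for deleting an object that does not exist).
  void record(const reqid_t& id, int result) { completed[id] = result; }

  // Called when a request arrives: if it was already completed, return
  // the recorded result instead of executing the operation again.
  std::optional<int> lookup(const reqid_t& id) const {
    auto it = completed.find(id);
    if (it == completed.end()) return std::nullopt;
    return it->second;
  }
};
```

On a retry, `lookup` returning a value means the original attempt already ran, and the recorded result, success or error, is simply returned to the client again.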
C
So we send the same log, all the same operations, to the replicas.
C
Because we have changed many data structures and they are different, I'm still comparing them, so I'm still working on the PG log PR. Another issue is that when we do the write operation, we set the object state to exists, but when the next operation comes in and checks that it exists, it has disappeared, so I'm still debugging that. I don't know why: we write it, but the next check finds nothing, so I'm still debugging it.
A
The only thing is, I don't remember what message class we use to send it. We want to use the same message, so I would check at least that part, but let me know when you have questions; I can help you work through it.
C
There are so many uncompleted, uncommitted, and other such states in the class.
A
Yeah, once you feel like you've got a handle on it. We can also schedule time to talk, since we're in the same time zone, if that would help; whatever works for you. I'll put it on my to-do list to remind myself how that actually works, because I've also forgotten.
C
For the rest of this week I will spend some time on conference presentation preparation, so maybe not much time on the debugging.
D
The results show that if we set the number of concurrent IOs to under 188, we can see the conflicts drop by about half, but the conflict rate is still significantly higher than that on other types of extent. So I am trying to come up with a pessimistic concurrency control design.
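As a rough illustration of why capping concurrent IOs reduces conflicts: fewer transactions in flight means a smaller window in which two of them can touch the same extent. A generic counting throttle along these lines (hypothetical names, not the Crimson/SeaStore throttle):

```cpp
#include <condition_variable>
#include <cstddef>
#include <mutex>

// Generic counting throttle: at most `limit` operations run at once.
// Lowering the limit reduces the number of transactions that can race
// on the same extents, at the cost of parallelism.
class Throttle {
  std::mutex m;
  std::condition_variable cv;
  std::size_t in_flight = 0;
  std::size_t limit;

public:
  explicit Throttle(std::size_t limit) : limit(limit) {}

  void acquire() {
    std::unique_lock lk(m);
    cv.wait(lk, [this] { return in_flight < limit; });
    ++in_flight;
  }

  void release() {
    { std::lock_guard lk(m); --in_flight; }
    cv.notify_one();
  }
};

// RAII guard so every acquired slot is released.
struct ThrottleGuard {
  Throttle& t;
  explicit ThrottleGuard(Throttle& t) : t(t) { t.acquire(); }
  ~ThrottleGuard() { t.release(); }
};
```

A pessimistic scheme would go further and take a reservation per extent before starting work, rather than detecting the conflict at commit time.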
D
On the other hand, I'm also trying to modify the backref PR as suggested, and I think I can finish debugging the code within the next day or two, after which I will push the PR. That's all for me.
D
Oh, no, that's the total number of concurrent IOs. I think, according to our experience in our online clusters, we already set the throttle of concurrent IOs for a single OSD to about ten thousand. I think that's the number.
B
Yeah, in my environment, I think 62 or so.
D
Oh, actually, I changed the IO depth, not the SeaStore throttle.
A
Yeah, I'd like you to change the SeaStore throttle because, like I said, the number of IOs required to saturate the entire OSD is going to be different from the number required to saturate SeaStore. Does that make sense? Also, we're going to need a throttle eventually; there's going to be some number of IOs that is the maximum, that is, enough IOs that SeaStore gets full throughput, right?
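A small numeric illustration of that distinction, with made-up values: the benchmark's client-side IO depth bounds how many requests are in flight across the whole OSD pipeline, while the store-level throttle bounds how many reach the object store at once, so the effective store concurrency is the smaller of the two.

```cpp
// Illustrative numbers only (not recommended settings): the client-side
// IO depth bounds requests in flight across the whole OSD pipeline,
// while the store-level throttle bounds requests inside the object
// store, so the store sees the smaller of the two.
constexpr unsigned client_io_depth = 512;  // hypothetical benchmark setting
constexpr unsigned store_throttle  = 128;  // hypothetical store-level cap

constexpr unsigned store_concurrency =
    client_io_depth < store_throttle ? client_io_depth : store_throttle;

// Raising client_io_depth past this point cannot increase concurrency
// inside the store; only the store throttle can.
static_assert(store_concurrency == store_throttle);
```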
A
Yeah, the only concern is there may be cases where we actually use the segment id, so that would be adding a step where you have to go to the segment cleaner and get the mapping to get the correct segment id back. I don't know if that's worth doing; it might be more complexity than it's worth.
B
I think, logically, the segment id is not needed in the journal sequence.
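As a brief aside on why the id can be omitted: if segments are fixed-size and an address is an absolute device offset, the segment id is just a division away and never has to be stored per record. A sketch under that assumption only; the real SeaStore address layout may differ:

```cpp
#include <cstdint>

// Assumption for illustration only: fixed-size segments laid out
// contiguously, with addresses expressed as absolute byte offsets.
constexpr uint64_t SEGMENT_SIZE = 64ull << 20;  // e.g. 64 MiB segments

struct segment_addr_t {
  uint32_t segment_id;      // which segment the address falls in
  uint32_t segment_offset;  // offset within that segment
};

// The segment id never needs to be written into per-record metadata;
// it can be recomputed from the absolute offset on read, which avoids
// a little write amplification in the journal.
constexpr segment_addr_t decode(uint64_t absolute_offset) {
  return {
      static_cast<uint32_t>(absolute_offset / SEGMENT_SIZE),
      static_cast<uint32_t>(absolute_offset % SEGMENT_SIZE),
  };
}
```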
A
Yeah, certainly it's recoverable from somewhere else, and when it gets written down in metadata and it's not needed, it just increases our write amplification, even if only by a little bit, so it may well be worth doing. And are you okay with my comments on the extent info? What do you think? I'm less sure about that.
B
Yeah, I'm okay. I think it's just a balance between complexity, accuracy, and overhead.
B
The size of the extent info compared to the segment size is not huge.
A
It also means that this way we're measuring the actual live data, like just the blocks themselves.
A
All right, is there anything else?