From YouTube: Ceph Crimson/SeaStore 2021-09-22
A
All right, it doesn't look like anyone else is trickling in. Let's see: this week has been mostly reviews and a few SeaStore bug fixes. Last week, for me, I was trying to get a measurement environment set up so I can see the numbers you've been seeing. Once that's done, I'm going to start looking into SeaStore write amplification and the garbage collector, and also ZNS; I finally got an environment set up for that.
B
Last week I put up a PoC to address the transaction conflict issues. I also confirmed that the onode tree's fixed parent is not a problem yet, but I still think the onode tree needs a major refactoring and also needs to be simplified.
A
All right.
C
So last week I tried to trace the write latency and found that what occupied the most CPU cycles was allocating extents, and Yingxin had already fixed that missing hint. Then I retested, and the allocate-extent cost is now very low. Currently it occupies few CPU cycles; most of them go to kernel system calls and Seastar, for Crimson.
C
Only the cache-extent prepare-write path: that is the first thing occupying the most CPU cycles in the Crimson code currently. But it is not very big under a light workload; for example, with iodepth 2 and numjobs 1 it is only about 2.5 percent. When I increase the workload to iodepth 16, numjobs 1, it's about 4.5 percent.
C
So
but
even
our
code
not
occupy
most
cpu
cycles,
but
the
latency
is
still
worse
when
the
workload
is
increased
compared
with
the
safe
osd
and
the
blue
stock.
So.
A
My guess at what's going on is that we're not doing journal batching. BlueStore does a lot of work so that once there's enough stuff in the BlueStore journal, every journal commit tends to include multiple transactions, not just one of them, which tends to speed things up. I suspect we will need some kind of journal batching. That's the next step.
A
I guess that's the next thing we'll want to do.
C
In the lighter workload, I mean when the concurrency is small, our write latency is better than the classic ceph-osd's. I set ceph-osd to use just one thread for the messenger; oh yeah, and every op is a single write, and I bound the process to one CPU.
A
Yeah, that's consistent with write batching. Right now we're submitting one disk barrier for every transaction, but with a high degree of concurrency we should only be issuing a barrier every couple of transactions and then completing them all at once. That's what BlueStore does with, what's it called, the kv sync thread?
A
I
believe
it
submits
very
large
transactions
to
roxdb,
which
has
the
effect
of
allowing
rocks
to
be
to
do
a
very
large
compound
commit,
I
suspect,
that's
what
we'll
need
to
do
next,
okay,
if
you're
interested
in
looking
at
it,
the
relevant
code
would
be
in
journal.cc
where
we
submit
to
the
segment
manager
we
would
want
to
submit.
Instead
of
one
record,
people
want
to
spend
multiple
records
at
once
or
something
it
will
need.
A
There will need to be some kind of refactoring in general to support this. I'll get to it eventually, but if anyone's interested, take a look in the meantime; that's what we probably want to do. I'm trying to think of a way to prove that this is the right fix, but we almost certainly want it anyway, so it's probably worth implementing, period.
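[Editor's note: to make the batching idea concrete, here is a minimal sketch under stated assumptions; BatchingJournal, record_t, and write_to_device are hypothetical names, not SeaStore's actual journal.cc types. Records that arrive while a device write is in flight are queued and submitted together, so one write plus barrier covers several transactions.]

```cpp
#include <seastar/core/future.hh>
#include <seastar/core/shared_future.hh>
#include <memory>
#include <utility>
#include <vector>

struct record_t { std::vector<char> payload; };  // stand-in for a journal record

class BatchingJournal {
  std::vector<record_t> pending;  // records waiting for the next device write
  std::unique_ptr<seastar::shared_promise<>> batch_done =
      std::make_unique<seastar::shared_promise<>>();  // resolved when the batch commits
  bool write_in_flight = false;

public:
  // Queue a record; every record queued while a write is in flight shares
  // one device submission (and hence one barrier/flush).
  seastar::future<> submit(record_t rec) {
    pending.push_back(std::move(rec));
    auto fut = batch_done->get_shared_future();
    if (!write_in_flight) {
      write_in_flight = true;
      (void)flush_batch();  // fire-and-forget; completion flows via batch_done
    }
    return fut;
  }

private:
  seastar::future<> flush_batch() {
    // Swap out the current batch so new submissions start accumulating
    // against a fresh promise while this write is in flight.
    auto batch = std::exchange(pending, {});
    auto done = std::exchange(batch_done,
                              std::make_unique<seastar::shared_promise<>>());
    return write_to_device(std::move(batch))
        .then([this, done = std::move(done)] {
          done->set_value();       // wake every transaction in the batch at once
          write_in_flight = false;
          if (!pending.empty()) {  // records arrived during the write
            write_in_flight = true;
            (void)flush_batch();
          }
        });
  }

  seastar::future<> write_to_device(std::vector<record_t>) {
    return seastar::make_ready_future<>();  // placeholder for the actual I/O
  }
};
```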
C
And for the read latency, it's weird: in the light workload the read latency is worse than the classic ceph-osd's, but when I increase the concurrency, the read latency is better. And I can't increase the workload much, because the Seastar reactor starts reporting blocked I/O waits, something like that.
A
So
the
the
reason
for
that
one
is
almost
certainly
that
c-star
doesn't
do
any
or
c-store
doesn't
do
any
meaningful
caching.
Yet
if
you
go
look
at
cache.h
you'll
notice,
it
doesn't
actually
have
an
lru.
All
it
does
is
keep
any
dirty
extents
in
memory,
so
we
tend
to
evict
big
chunks
of
the
lba
tree
for
no
reason
we
probably
shouldn't
be
so
that
would
be
the
place.
You'd
want
to
look.
If
you
wanted
to
address,
read
latency.
A
That is, we already have the cache itself: we have the ability to keep extents in memory indexed by physical address, but we let things fall out of the cache the very second there aren't any active references to them, if they're clean. So what we want to do is keep, I don't know, 256 megabytes to a gigabyte, something like that, of the most recently used extents.
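[Editor's note: as an illustration of that direction, here is a minimal sketch of a byte-budgeted LRU for clean extents; ExtentLRU, CachedExtent, and release_clean are hypothetical names, not cache.h's actual interface.]

```cpp
#include <cstddef>
#include <cstdint>
#include <list>
#include <memory>
#include <unordered_map>

struct CachedExtent {
  uint64_t paddr;      // physical address the cache is indexed by
  size_t   length;     // bytes this extent occupies in memory
  bool     dirty = false;
};

class ExtentLRU {
  const size_t capacity_bytes;  // e.g. 256 MiB .. 1 GiB, per the discussion
  size_t current_bytes = 0;
  // Most recently used at the front; eviction pops from the back.
  std::list<std::shared_ptr<CachedExtent>> lru;
  std::unordered_map<uint64_t,
                     std::list<std::shared_ptr<CachedExtent>>::iterator> index;

public:
  explicit ExtentLRU(size_t cap) : capacity_bytes(cap) {}

  // Called when the last external reference to a *clean* extent is dropped:
  // instead of freeing it immediately, keep it cached and evict cold extents
  // only when the byte budget is exceeded.
  void release_clean(std::shared_ptr<CachedExtent> ext) {
    if (ext->dirty) return;  // dirty extents are retained elsewhere regardless
    lru.push_front(ext);
    index[ext->paddr] = lru.begin();
    current_bytes += ext->length;
    while (current_bytes > capacity_bytes && !lru.empty()) {
      auto victim = lru.back();          // coldest extent
      index.erase(victim->paddr);
      current_bytes -= victim->length;
      lru.pop_back();
    }
  }

  // Cache lookup by physical address; a hit moves the extent to the front.
  std::shared_ptr<CachedExtent> find(uint64_t paddr) {
    auto it = index.find(paddr);
    if (it == index.end()) return nullptr;
    lru.splice(lru.begin(), lru, it->second);  // mark most recently used
    return *it->second;
  }
};
```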
C
One more thing: I hit a failure that only seems to happen when the concurrency is increased. I had never met it before.
A
Oh
yeah,
I've
seen
that
one
once
do
me
a
favor
and
create
a
bug
and
assign
it
to
me.
A
I mean, all of this code is under heavy modification, so when you hit these things, file a bug; that one you can assign to me. It should be pretty straightforward to track down, I think.
A
All
right
greg
what's
up.
D
I
don't
have
anything
really
exciting
to
talk
about,
I
think,
but
you
and
I
need
to
schedule
a
meeting
for
thursday.
I
think
we'd
said
we're.
Gonna
start
talking
about.
A
All
right,
tm,
thanks.
E
Oh
sorry,
I
have
nothing
more
to
update
so
far
because
wait
I'm
taking
the
holiday
last
week
so
in
the
next
week.
Maybe
I
can
yes,
as
a
true
maze,
continuing
the
testing
on
the
c
store.
E
I
will
work
on
my
way
to
profiling,
along
with
the
setup
like
trimming
to
see
what's
going
on
there
and
the
bottom
neck
in
the
store.
So
that's
the
plan
here.
D
Last
week
I
pushed
the
push
the
multi-device
support
pr
right
now,
I'm
trying
to
modify
the
pr
as
sam
suggested.
I
was
also
trying
to
add
a
simple
extend
placement
strategy
into
the
epm
like
like
the
one
in
used
in
waffle.
Actually,
it
was
a
garbage
collection
strategy.
D
That's
all
for
me,
cool,
oh,
by
the
way
about
the
the
bug
the
mentioned.
I
I
met
it
once
and
I
think
the
situation
was
there
were
two
transactions
doing
away
hard
limits
on
on
the
second
segment
cleaner
and
after
the
after
the
future
result,
one
of
them
dbio
and
caused
another
blocked,
ioa
to
be
set,
and
so,
when
the
second
transaction
tries
to
it
is
scheduled
to
to
be
executed.
It
found
in
the
block
tile
weight,
promises
yeah.
D
That
sounds
right
yeah,
so
I
I
I
think
this
might.
This
might
indicate
we
need
a
complete
standard
conditional
barrier.
We
don't.
A
The reason there isn't a condition variable in Seastar is that you literally can't implement a condition variable correctly in a standard threading environment without it being baked into the pthreads library. That isn't really true with Seastar, because you can count on atomicity of anything that doesn't return to the reactor, so most of the time it won't be worth the complexity to use one. Anytime you feel like you would benefit from it, though, go ahead. In this case, though, I'm going to evaluate what actually happened and see if that's the simplest way to fix it.
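[Editor's note: for illustration, a minimal sketch of that promise-based waiting pattern; io_gate and its methods are hypothetical names, not the actual SegmentCleaner code. Because nothing else runs between the flag check and registering the waiter unless we return to the reactor, there is no lost-wakeup window to guard with a lock or condition variable.]

```cpp
#include <seastar/core/future.hh>
#include <seastar/core/shared_future.hh>
#include <optional>

class io_gate {
  std::optional<seastar::shared_promise<>> blocked;  // engaged while I/O is blocked

public:
  // Called by transactions that must wait until I/O is unblocked.
  seastar::future<> maybe_wait() {
    if (!blocked) {
      return seastar::make_ready_future<>();  // fast path: nothing to wait for
    }
    // Atomic with the check above: no other task can run until we return
    // to the reactor, so the waiter is registered before any wakeup can fire.
    return blocked->get_shared_future();
  }

  void block() {
    if (!blocked) {
      blocked.emplace();  // new waiters will now queue behind the gate
    }
  }

  void unblock() {
    if (blocked) {
      blocked->set_value();  // wake everyone queued behind the gate; their
                             // continuations are scheduled via the reactor
      blocked.reset();       // new arrivals see the gate as open again
    }
  }
};
```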