From YouTube: Ceph Crimson/SeaStore 2021-07-28
A
So, let's start. Last week I've been reviewing a PR, and it seems the work is now in progress on his end, so meanwhile I'm doing some cleanup. That's it for me.
C
It turned out that the problem was caused by inappropriate nonces used to configure the messenger at Sepia. We are testing with a containerized environment, but I think something has changed over time: the PID being assigned to the process is not 1, which we have a conditional for in the code, but literally 2, and always 2. So there were a bunch of conflicts. Thanks for hinting at the nonces, thanks a lot.
C
Okay, random nonces are cool, but usually they aren't needed. The problem actually depends on two factors: the port that is assigned and the PID, the process ID, assigned by the kernel; and I think what we have usually works quite nicely. However, there is still a residual, tiny probability that those values can be the same, and this might cause problems. So I wonder whether we should introduce a check, maybe when forming an OSDMap, maybe in the monitor. What do you think?
D
I don't think it's worth it. I think a more interesting question is: should we just always use random nonces?
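The failure mode discussed here can be sketched in a few lines. This is an illustrative model, not Ceph's actual messenger code: `pick_nonce` is a hypothetical helper showing a PID-based nonce with a special case for PID 1 (the classic sign of a containerized process), which never fires when the container runtime hands out PID 2 instead.

```python
import random

def pick_nonce(pid, containerized_pid=1, always_random=False):
    """Hypothetical sketch of messenger nonce selection (not Ceph's API).

    Historically the nonce defaults to the daemon's PID, with a special
    case for PID 1 (assumed to mean "containerized") that uses a random
    nonce instead. If the runtime assigns PID 2, the conditional never
    fires and every daemon on the host gets the same nonce.
    """
    if always_random:
        return random.getrandbits(32)
    if pid == containerized_pid:
        return random.getrandbits(32)  # containerized: PID is not unique
    return pid  # bare metal: PID is unique per host

# Two containerized daemons that both get PID 2: identical nonces, conflict.
a, b = pick_nonce(2), pick_nonce(2)
print(a == b)  # True: this is the bug discussed above

# Always-random nonces sidestep the problem entirely.
c = pick_nonce(2, always_random=True)
```

The "always random" branch is what the question above proposes: it removes the dependence on how the container runtime assigns PIDs.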
C
Yeah, quite likely; it's just not worth the effort of attacking it with the big checks introduced in, you know, the monitor, testing, etc. And I agree that the idea to switch to random nonces looks very straightforward. It's simple! Actually, there are two places in the code, one for the entire classical world, one for Crimson, so it's... yeah.
C
I thought initially it's just a matter of checking, when crafting a new OSDMap as a result of, let's say, an OSD boot, that single new item: looking for a duplicate of the addresses of the OSD we are going to add.
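The duplicate check proposed here is conceptually tiny. A minimal sketch, with hypothetical names (this is not the OSDMap API), treating an address as an (ip, port, nonce) triple:

```python
def conflicts_with_map(existing_addrs, new_addr):
    """Sketch of the proposed check: when crafting a new OSDMap on OSD
    boot, reject the booting OSD if its (ip, port, nonce) triple
    duplicates one already present in the map. Illustrative only."""
    return new_addr in existing_addrs

# Addresses already in the map, as (ip, port, nonce) triples.
osdmap = {("10.0.0.5", 6800, 2), ("10.0.0.6", 6800, 2)}

# Different host, same port and nonce: no conflict.
print(conflicts_with_map(osdmap, ("10.0.0.7", 6800, 2)))  # False
# Exact duplicate triple: would be rejected before forming the map.
print(conflicts_with_map(osdmap, ("10.0.0.5", 6800, 2)))  # True
```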
C
Well, yeah, because in that case I think it is entirely impossible. In that case a bind failure should happen, because, you know, operating systems shouldn't allow two processes to listen on the same IP address and port.
C
Right, I think that's right, yeah. Anyway, okay, here's the PR with the fix! That's one area of work.
C
The second area was, and still is, performance comparison. I'm discussing the testing procedures with Mark, and I also made a PR to generalize AlienStore beyond BlueStore. The main idea is to have a tool to compare memory-based object store implementations, to judge the real overhead that comes solely from AlienStore. It will be mostly used with MemStore: when we check SeaStore performance, we can confront it with the performance of MemStore behind AlienStore.
C
At the moment I'm seeing that alien raises the number of cycles per op during 4k random writes from, let's say, 30-35 thousand cycles per op to maybe 55 thousand, while still observing the bottleneck.
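For a sense of scale, the quoted numbers imply roughly a 55-60% increase in cycles per op (taking the 35k figure as the baseline):

```python
# Rough arithmetic on the figures quoted above: 4k random writes going
# from ~35,000 cycles per op to ~55,000 cycles per op under AlienStore.
baseline = 35_000
alien = 55_000

overhead = (alien - baseline) / baseline
print(f"{overhead:.0%}")  # ~57% more cycles per op
```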
C
I disabled busy waiting and reduced the number of alien threads, just for the sake of experimenting and comparison with SeaStore. That's it for me.
A
Did I hear correctly that the cycles per op of the alienized BlueStore is higher than that of the classical OSD with BlueStore?
C
I think so. Currently I lowered the overhead by maybe 10 percent, but I think I was still seeing the same major bottleneck.
E
That's good. I'm handling two issues in scrub code. One: a PR appeared, but I added some objections to the way the logging of the PG status is handled in scrub code; this is one issue I am handling. The other relates to a race between statuses when scrubbing terminates, and this is something that I hope to push as a PR soon. That's it for me. And by the way, Sam, did you notice Josh's comments on the log PR?
F
This week I refactored the extent placement manager PR and made the fresh extents in a transaction also go through the extent placement manager to have their final address determined. I just re-pushed the PR this morning; right now there seems to be some issue within the SeaStore unit test case, but I think fixing that issue won't cause a major modification to the PR. So I think the PR is good for review.
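The idea described above, that fresh extents defer their final disk address to the placement manager instead of choosing one themselves, can be sketched abstractly. This is an illustrative model, not SeaStore code; all class and field names are made up:

```python
class ExtentPlacementManager:
    """Toy placement manager: the single authority that decides the
    final on-disk address of every extent (hypothetical, for
    illustration only)."""
    def __init__(self):
        self.next_addr = 0

    def allocate(self, extent):
        extent["addr"] = self.next_addr      # final address decided here
        self.next_addr += extent["length"]

class Transaction:
    def __init__(self):
        self.fresh_extents = []

    def alloc_extent(self, length):
        ext = {"length": length, "addr": None}  # address deferred
        self.fresh_extents.append(ext)
        return ext

epm = ExtentPlacementManager()
t = Transaction()
t.alloc_extent(4096)
t.alloc_extent(8192)
for ext in t.fresh_extents:   # at prepare time, every fresh extent is
    epm.allocate(ext)         # routed through the placement manager
print([e["addr"] for e in t.fresh_extents])  # [0, 4096]
```

Centralizing address assignment this way lets placement policy (hot/cold separation, segment choice) live in one component instead of being scattered across extent creators.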
G
This week I did some profiling with the store metrics implemented in SeaStore. If you open it, there are some diagrams.
G
The test scenario is: I wrote about 35 megabytes of data using rados bench until the system crashed, so I have collected 21 rounds of bench results and included some graphs. There are three sections. The first section is cache usage and cache hit ratio; the hit ratio is about 98 or 99 percent.
G
So it looks good. The second section is about transaction invalidation: I want to know how many efforts are discarded because of transaction invalidation, and it seems it's around 100 or 150 percent. The invalidation ratio doesn't seem to drop when writing 35 megabytes of data, so maybe we'll need to write more data to evaluate the pattern of how the invalidated efforts evolve.
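A 100-150% figure makes sense if the metric is defined as discarded work relative to committed work. A hypothetical version of such a metric (the function name and inputs are assumptions, not SeaStore's actual counters):

```python
def invalidation_ratio(invalidated_efforts, committed_efforts):
    """Illustrative metric: how much work (e.g. bytes or extents
    touched) was thrown away because a transaction was invalidated by a
    conflicting commit, relative to the work that actually committed.
    A value above 1.0 means more effort is discarded than retained."""
    return invalidated_efforts / committed_efforts

# Around 1.0-1.5 (100-150%), the store discards roughly as much effort
# as it commits; writing more data would show whether this settles.
print(f"{invalidation_ratio(150, 100):.0%}")  # 150%
```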
G
Yeah, I think that's correct. The cache usage is mostly in the 30s, as you can see; the trend of the data stays very close to 30, so normally mostly only 15 extents are cached.
G
Yeah, you can refer to the third graph in the transaction invalidation section; it shows which kind of extent is causing the conflicts.

Oh wow, that's cool, yeah. So we can actually see that the conflicts in the LBA tree are dropping.
D
I wouldn't expect that to make it any better. I don't.
D
Wait, I think it does mechanically the same thing. My question is... wait, sorry, so the total size was 35 megabytes?
D
That's really good, actually. Okay, so normally, with a conventional flash FTL layer, the write amplification is typically over four.
D
In that case this write amplification is really bad. Okay, yeah. What we're seeing is mostly the metadata overhead from the onodes.
D
So, okay, oh sorry, I'm saying this is true even with BlueStore. If you look at just the raw size of the object info in the OSD, the metadata overhead for the OSD starts at something like 1.5k, something like that. When you add the PG log writes the OSD is going to do in the background, and then all of SeaStore's internal metadata... yeah, three times. That sounds right.
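The back-of-the-envelope reasoning above can be made explicit. Only the ~1.5k object-info figure comes from the discussion; the pg log and internal metadata sizes below are assumptions purely to show how the categories stack into a multiple of the client write:

```python
# Toy write-amplification model for one 4k client write. All sizes
# except object_info are assumed for illustration, not measured.
client_write = 4096          # one 4k random write from the client
object_info = 1536           # ~1.5k of object info, per the discussion
pg_log_entry = 1024          # assumed size of the background pg log write
store_metadata = 2048        # assumed internal metadata (tree updates etc.)

device_bytes = client_write + object_info + pg_log_entry + store_metadata
wa = device_bytes / client_write
print(wa)  # metadata alone already multiplies the bytes hitting the device
```

Any FTL-level amplification inside the SSD (often >4x on conventional flash, as noted earlier) then multiplies on top of this store-level figure.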
A
I'm wondering if it makes sense to automate the process of generating the performance graphs.
D
Yeah, I was going to say, if it makes sense, there's a perf tools directory in CBT where I stick stuff like this.