From YouTube: Ceph Crimson/SeaStore OSD 2020-11-11
A
So, let's start with last week. I was trying to refactor the object context locking with a new locking primitive. I'm putting the URL to my draft PR in the chat window so it can get some preliminary reviews, and I also created some cleanup PRs.
B
I'm still considering the implementation and refining the design, and I also saw some comments from Sam on the omap tree; I'm going to update the patch soon. Well, that's all.
A
Radek?
C
Hello! This week I'm back from my PTO, so I got back to Crimson. I reviewed a bunch of PRs, particularly Xuehan's Crimson interruptible futures library, and posted a review there. That's me.
D
Hi. I've finished answering all the comments on the classic scrubbing.
E
Yep. I'm making a bunch of changes to the internals of the journal, because running it on an actual disk revealed that it has a tendency to pick up segments from the last time I ran it and think they are part of this run's journal. So I added a nonce based on a UUID, which entailed a few changes, the UUID header and stuff. And while I'm at it, I'm implementing checksums in the journal, for real, actual atomicity.
E
They were almost implemented before, but I didn't actually do the checksumming and re-reading. It's a little bit more complicated than I thought, because there are at least two ways we scan segments where we really don't want the overhead of computing checksums. So I'm going to have to add a little bit to the journal segment rollover mechanism to write down the length of the segment, so that if we get a cleanly closed segment we don't have to incur the overhead of recomputing all the checksums.
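A minimal sketch of that scheme, using hypothetical structure names rather than the actual crimson/seastore types: each record carries a checksum over its payload, and the rollover path records the committed length of a cleanly closed segment, so only torn segments pay the per-record checksum cost on replay.

```cpp
#include <cstdint>
#include <vector>
#include <zlib.h>  // crc32(); any checksum would do

// Per-record header: every record is checksummed when written.
struct record_header_t {
  uint32_t len;  // payload length in bytes
  uint32_t crc;  // crc32 over the payload
};

// Written by the segment rollover path when a segment is closed cleanly.
struct segment_footer_t {
  uint64_t committed_len;   // bytes of valid records in the segment
  bool     cleanly_closed;  // set by the rollover mechanism
};

uint32_t payload_crc(const std::vector<uint8_t>& payload) {
  return static_cast<uint32_t>(
    ::crc32(0L, payload.data(), static_cast<uInt>(payload.size())));
}

// Replay policy: a cleanly closed segment can be scanned up to
// committed_len without re-verifying every record's checksum; an
// unclosed (torn) segment pays the CRC cost to find its valid tail.
bool needs_full_checksum_scan(const segment_footer_t& f) {
  return !f.cleanly_closed;
}
```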
E
So that's what I'm working on now. The piece after that will be fixing concurrency so that I can get decent performance numbers. But I think I'm probably going to push a PR for what I've got now, plus the checksumming, before next Tuesday or maybe immediately after, and then work on the next pieces separately.
E
The thing that does checksumming works now, and the numbers are okay. We get like 4,000 synchronous IOPS, but that's completely single-threaded: a full I/O has to complete between each one, with no optimizations whatsoever for batching commits. Once we can keep more than one operation in flight at a time, I expect a good factor-of-10 improvement there, probably more, and then we'll get down to the real hard work of separating longer-lived blocks into separate segments and other optimizations of that nature.
E
So it's a good start. I also did some reviews of Chunmei's omap tree work; I'll take another look once those comments are addressed.
E
And I looked over Yingxin's plans for a transaction manager that integrates support for persistent memory alongside the block-based option. I think that plan is shaping up to be appropriate, so I'm looking forward to the further work there. I'm off Wednesday, Thursday and Friday of this week, but I'll try to keep an eye on email. That's it.
A
I've got a couple of questions regarding your UUID and the concurrency. What do you mean? How is the UUID involved in your changes in... in SeaStore? Sorry.
E
BlueStore, FileStore, every version, every store we've ever had generates a random UUID when it runs mkfs.
E
That's literally all I'm talking about. It uses that number to derive a per-segment nonce that it writes into each record, so that when we reuse a segment... let's say a segment is eight gigabytes long, and the first time we use it we write the full eight gigabytes, but the second time we use it, we may write less, and replay must not mistake the old tail for current records.
E
It's so that the block implementation underlying it doesn't need to incur the cost of zeroing out segments when we release them, which we'd have to do otherwise.
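A rough sketch of what that buys, with hypothetical names (the real seastore derivation differs): the mkfs-time UUID is mixed into a per-segment nonce stamped on every record, and replay rejects records stamped with a stale nonce, so released segments never need to be zeroed.

```cpp
#include <cstdint>
#include <functional>
#include <string>

// Identifies a particular incarnation of a physical segment.
struct segment_id_t {
  uint64_t id;           // which segment on the device
  uint64_t reuse_epoch;  // how many times it has been recycled
};

// The store UUID is minted once, at mkfs time. Mixing it with the
// segment id and reuse epoch yields a nonce unique to this incarnation
// of the segment; std::hash stands in for whatever mixing is really used.
uint64_t segment_nonce(const std::string& store_uuid,
                       const segment_id_t& seg) {
  uint64_t h = std::hash<std::string>{}(store_uuid);
  h ^= std::hash<uint64_t>{}(seg.id) + 0x9e3779b97f4a7c15ULL +
       (h << 6) + (h >> 2);
  h ^= std::hash<uint64_t>{}(seg.reuse_epoch);
  return h;
}

// Replay: a record stamped with a stale nonce belongs to an earlier life
// of the segment, so scanning stops there without any zeroing on release.
bool record_is_current(uint64_t stored_nonce,
                       const std::string& store_uuid,
                       const segment_id_t& seg) {
  return stored_nonce == segment_nonce(store_uuid, seg);
}
```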
E
A particular write will do a sequence of reads and writes, and in between, the reactor may do other things, like, for instance, a different sequence of reads and writes on behalf of another transaction. This is completely harmless unless they try to read and write the same pages. We talked about this in some detail with the original design.
E
That's how databases typically work. All of that code exists, but I have no confidence whatsoever that it works, so once I run fio with more than one outstanding I/O at a time, I expect crashes and bugs, and I expect to spend some time fixing that. That is all okay.
E
It's important for performance, though. Let's say the client has 16 writes outstanding, because it's a fairly deeply pipelined workload, and benchmarks often work that way. Then we should be able to be servicing all 16 at the same time, resources permitting, quite separately.
E
It's no different from how BlueStore can commit many transactions at once: if transactions arrive at the queue while it's committing the previous batch, it will batch up the new set of transactions and send them all down at once, because the client didn't wait for transaction one before sending transaction two; it sent 16 at the same time. So there are a lot of opportunities for batching and combining there, and I've done relatively little of that optimization so far.
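A sketch of that batching pattern, written as a thread-based analogue for brevity (BlueStore's actual commit path, and Crimson's Seastar reactor, are structured differently): while one batch is committing, newly arriving transactions accumulate, and the next commit drains them all in a single write.

```cpp
#include <condition_variable>
#include <mutex>
#include <utility>
#include <vector>

struct transaction_t { /* encoded mutations */ };

class commit_batcher {
  std::mutex m;
  std::condition_variable cv;
  std::vector<transaction_t> pending;
  bool stopping = false;

public:
  void submit(transaction_t t) {
    std::lock_guard<std::mutex> l{m};
    pending.push_back(std::move(t));
    cv.notify_one();
  }

  // Each loop iteration drains everything queued so far, so 16 in-flight
  // client writes can become one device write instead of 16.
  template <typename CommitFn>
  void run(CommitFn&& commit_batch) {
    std::unique_lock<std::mutex> l{m};
    while (!stopping) {
      cv.wait(l, [this] { return !pending.empty() || stopping; });
      if (stopping) break;
      auto batch = std::move(pending);
      pending.clear();
      l.unlock();
      commit_batch(batch);  // one I/O covers the whole batch
      l.lock();
    }
  }

  void stop() {
    std::lock_guard<std::mutex> l{m};
    stopping = true;
    cv.notify_all();
  }
};
```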
A
Would you use the executing stage for ordering the writes to the...?
E
No, I explicitly don't want to do that. The executing stage would serialize them; I don't want to do that. In most cases these writes will be non-conflicting, and so it doesn't matter what order they execute in; they'll be correct either way. The transaction manager is designed so that if they do conflict, the second transaction will fail and will have to be retried, which is fine.
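A sketch of that retry-on-conflict behaviour, with a hypothetical API (the real crimson TransactionManager differs): the commit step detects whether anything the transaction read was invalidated by a concurrent commit, and if so the work is simply redone.

```cpp
#include <functional>

enum class commit_result { success, conflict };

struct transaction_t { /* read set + buffered writes */ };

void run_transaction(
    const std::function<void(transaction_t&)>& body,
    const std::function<commit_result(transaction_t&)>& try_commit) {
  for (;;) {
    transaction_t t;
    body(t);  // perform the reads and buffer the writes
    if (try_commit(t) == commit_result::success) {
      return;  // the common, non-conflicting fast path
    }
    // Conflict: another transaction wrote pages we read between our
    // reads and our commit, so discard the work and retry from scratch.
  }
}
```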
E
That's useful when we explicitly want it. And keep in mind, in the OSD we don't use one executing stage; we use N of them, one for each PG. That's why it's okay: we deal with concurrency by permitting multiple PGs to operate concurrently.
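A sketch of that "N executing stages, one per PG" idea (names are illustrative, not the crimson ones): requests within a single PG stay serialized through that PG's queue, while different PGs make progress independently.

```cpp
#include <cstdint>
#include <deque>
#include <functional>
#include <map>
#include <utility>

using pg_id_t = uint32_t;
using op_t = std::function<void()>;

class per_pg_stages {
  std::map<pg_id_t, std::deque<op_t>> queues;  // one ordered stage per PG

public:
  void enqueue(pg_id_t pg, op_t op) {
    queues[pg].push_back(std::move(op));
  }

  // One scheduling pass: pop one op from each PG's queue. Ops on the
  // same PG stay ordered; ops on different PGs interleave freely.
  void run_one_round() {
    for (auto& [pg, q] : queues) {
      if (!q.empty()) {
        auto op = std::move(q.front());
        q.pop_front();
        op();
      }
    }
  }
};
```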
F
Last week I spent a lot of time on a temporary job at my office, so I didn't make much progress, but as of yesterday I'm back on track. I think this week I will first try to tackle the issue mentioned by Radek in the interruptible future PR, and after that I think I will continue debugging the client request sequencer, which is supposed to preserve the client request order in case of an OSD map change or peering.
F
Oh no, the client request sequencer is the stuff that I added to Crimson, which is supposed to preserve the order of client requests when there is a peering event. It's just meant to resolve the issue that we discussed the other day, which at the time we intended to resolve in the obc lock layer.
G
Well, yeah, all right, nothing much. I worked on the documentation about the system with multiple tiers of devices, and I'm still refining the documentation and addressing requirements, mostly about atomicity guarantees and potential architecture changes. That's all; I will take annual leave next week.
E
Next week, I assume I should read, or rather review, the fltree. Is it ready?
A
By the way, I also uploaded the log for the messenger test failure; hopefully you could take a look at it at your convenience.