From YouTube: Ceph Crimson / SeaStore OSD 2020-08-26
A: Last week I was still working on the teuthology-based thrasher test and made some progress. We hit the first recovery backfill bug; I'm pasting the URL to the tracker ticket in the chat window.
B: I updated the extentmap tree PR according to your comments, so please review, and I'm still working on the omap tree node implementation. So that's it.
A
C
A
B: Same as last week: I just updated the extentmap tree PR according to your comments and am still working on the omap tree node implementation. So those are the two tasks.
A
C: Hi everybody. Same as last week: finishing touches on the PR for scrubbing.
C
D: Oh yep, I sent out a PR for the dirty extent write-out, so I'm moving on to general garbage collection. It will use the same extent rewrite machinery. Scanning segments is the main part of the new code, but the code for changing mappings already exists as part of the PR I submitted. Can you hear me?
D: Does that make sense? In other words, when we rewrite a dirty segment... I'm sorry, not a dirty segment, a dirty extent... we have to write it out to a new location on disk in the new segment, which means we need to atomically update the LBA mapping to point to the new location and generally make sure we didn't break the cache. So that's what that PR is. The next step is space accounting, so we know which segment is the right one to garbage collect.
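The relocation step described above can be sketched roughly as follows. This is an illustrative toy in Python, not SeaStore's actual code; all names (`LBAMap`, `rewrite_extent`, the dict-based cache) are hypothetical.

```python
# Toy model of rewriting one dirty extent into a new segment and
# atomically repointing its LBA mapping. In the real system the three
# steps below land in a single transaction; here they are sequential.

class LBAMap:
    """Maps logical addresses to (segment_id, offset) on disk."""
    def __init__(self):
        self.map = {}  # laddr -> (segment_id, offset)

    def update(self, laddr, paddr):
        self.map[laddr] = paddr

def rewrite_extent(lba_map, cache, segments, laddr, open_segment):
    """Relocate one dirty extent identified by laddr."""
    old_seg, old_off = lba_map.map[laddr]
    data = segments[old_seg][old_off]
    # 1. write the extent out to a new location in the open segment
    new_off = len(segments[open_segment])
    segments[open_segment].append(data)
    # 2. update the LBA mapping to point at the new location
    lba_map.update(laddr, (open_segment, new_off))
    # 3. keep the cache consistent: the cached copy now refers to the
    #    new physical location and is no longer dirty
    cache[laddr] = {"paddr": (open_segment, new_off), "dirty": False}
```

Once every live extent has been rewritten out of a segment this way, the old segment holds no referenced data and can be reclaimed.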
A
D: I assume that the overall number of segments will be relatively small, as we control the size, so we can make that be true. On OSD startup it will simply do a complete scan of the LBA map and rebuild that ephemeral mapping, and it will just maintain that mapping inline during execution. In the future we may want to make this smarter so as to make OSD startup faster, but this should give us more than enough to do some basic performance testing, in that we won't run out of space and it'll work correctly.
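The startup rebuild described above amounts to one pass over the LBA map, summing live bytes per segment. A minimal sketch, with hypothetical names and a flat-list stand-in for the real tree:

```python
# Rebuild the ephemeral per-segment usage accounting by scanning the
# LBA map once at startup; afterwards the same dict is maintained
# inline as extents are written and invalidated.

from collections import defaultdict

def rebuild_segment_usage(lba_map):
    """lba_map: iterable of (laddr, (segment_id, offset, length)).
    Returns {segment_id: live_bytes}."""
    usage = defaultdict(int)
    for _laddr, (seg, _off, length) in lba_map:
        usage[seg] += length
    return dict(usage)
```

A full scan keeps startup simple at the cost of reading the whole map once, which is the trade-off the speaker notes might be revisited later.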
D: Right, so my strategy, in somewhat more detail, is that I want to specifically avoid... BlueStore has this property where you can build up kind of an unbounded amount of asynchronous work that it then just does eventually, in the form of that big RocksDB garbage collection phase, where it has to go through the entire level, and the level below it, and read and write all of those LSM blocks, and a lot of that tends to interrupt I/O. So I'm not going to do that.
D: What it's going to do instead is maintain an estimate of... well, it's going to maintain a number that represents the amount of space that is available to be written right now, an amount of space that is currently free... sorry, an amount of space that is not available to be written but is free.
D: That is, if we garbage collected it, we'd reclaim it. And an amount of space that is actually currently in use. The scheme I just outlined does give us that ability: we'll know, for each segment, down to the byte, how many bytes we have to relocate if we garbage collect it; we just don't know where they are.
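The three numbers described above, given per-segment live-byte counts, could be derived as in this hypothetical sketch (the names and the assumption that only fully empty segments are writable are illustrative):

```python
# Split total capacity into: space available to be written right now,
# space that is free but only recoverable by garbage collection, and
# space holding live data.

def space_counters(segment_size, live_bytes_per_segment):
    """live_bytes_per_segment: {segment_id: live_bytes}.
    Assumes a segment is writable only when completely empty."""
    used = sum(live_bytes_per_segment.values())
    empty_segments = sum(
        1 for b in live_bytes_per_segment.values() if b == 0)
    available = empty_segments * segment_size
    total = segment_size * len(live_bytes_per_segment)
    reclaimable = total - used - available  # free, but needs GC first
    return available, reclaimable, used
```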
A
D: So I'm going to maintain sort of a soft and a hard limit. If you're below the soft limit, we don't do any background work; we just do foreground work, and any time we have slack time we'll do background work to get us down to an even lower threshold. Between the soft and the hard limit, it'll exponentially scale up to doing as much work as the I/O is generating, until you hit the hard limit, at which point you have to pay as much background work as foreground work on every transaction.
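The throttle described above might look like the following toy function, where `pressure` is some utilization metric and the exact ramp shape is my own illustrative choice, not taken from the PR:

```python
# Bytes of background GC work owed per byte of foreground I/O, as a
# function of space pressure relative to the soft and hard limits.

def gc_work_ratio(pressure, soft, hard):
    """Below soft: no inline GC (background work only in slack time).
    Between soft and hard: exponential ramp up to 1.0.
    At or above hard: one byte of GC per byte of foreground I/O."""
    if pressure <= soft:
        return 0.0
    if pressure >= hard:
        return 1.0
    # normalized position in (0, 1); 2**t - 1 reaches 1.0 at t = 1
    t = (pressure - soft) / (hard - soft)
    return 2.0 ** t - 1.0
```

The point of the ramp is that inline GC cost grows smoothly with pressure instead of arriving as one deferred spike.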
D: It'll mix them inline, so that we never get... well, hopefully we never get that gigantic spike that costs many, many seconds, but we also don't get those bursts of crazy speed where we didn't do any background work. With luck, it will be the same average pace but without the big outliers, so it'll be easier to predict and easier to schedule.
D: You'll notice the current PR has the first step of that: it's got that "do immediate work" call. That's the entry point where the segment cleaner will do whatever it feels it has to do on each transaction, and then there will be another call that will be cycled in at a different point in the transaction, when we're not actively processing I/O, for catching up in the background. So those will be the two entry points. Never mind, though, sorry.
D: No, what the current implementation does is bound how much journal gets replayed. So without my current PR, your memory use will increase in proportion to the number of extents you have ever written a second time, because there's no way to flush dirty extents. The other thing we have to maintain is making sure we don't have an unbounded amount of journal to replay, so the first thing I did was implement that.
D: You can give it a soft and a hard limit, and it will make sure you never have to replay more than, let's say, two segments' worth of journal entries. So with every I/O it will write out any dirty extents that are that old, thus ensuring that we can actually write out and release things from the cache. It also has, I believe, all of the logic that will be required for relocating an extent, so I will reuse that exact pathway when doing GC.
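The replay bound described above reduces to a simple age test: flush any dirty extent first dirtied earlier than some floor behind the journal head. A hedged sketch with hypothetical names:

```python
# Select dirty extents that must be written back so that journal
# replay never has to start more than max_replay_segments behind the
# current journal head.

def extents_to_flush(dirty_extents, journal_head,
                     max_replay_segments, segment_size):
    """dirty_extents: {extent_id: journal_offset_when_dirtied}.
    Returns the ids that must be flushed, oldest constraint first."""
    floor = journal_head - max_replay_segments * segment_size
    return sorted(e for e, pos in dirty_extents.items() if pos < floor)
```

Flushing these extents on each I/O both bounds replay time and lets the cache release clean copies.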
A
D
A
E: Okay, I'm still debugging the interruptible errorated future, and it turns out that the interruption machinery is a little bit more complicated than I thought. The interruption-related error needs to be dealt with differently from a common errorator error, so I have to add a little more code for that part, and this is the branch I'm working on now. During the debugging I also met this issue, and it turns out that a lot of the problems...
E: A lot of the problems I met during my debugging are due to this issue, and I have to fix it first before I can fully test the interruption code. If anyone else wants to take a look at it, that would be appreciated. That's all for me.
A: Thanks, Johan. Did you get a chance to check out the issue? It sounds like you've already fixed it.
F: Yeah, I have looked at it today, and it seems to me it is not a fatal issue, because after about five seconds the heartbeat connection will recover itself correctly, per the log. But there actually is an issue in the current code, because a reset event is not correctly generated to the heartbeat.
F: Yeah, the messenger doesn't report the reset event for the newly replaced connection, so the heartbeat holds the closed connection and rejects the new connection later. So during that period the peer OSD will send heartbeats to this OSD, but this OSD will not send to the other side, and after a while, after this peer is removed and then added again, the peer is connected.
A
F: No, no.
F: So yeah, this week I have integrated with the errorated future and with the transaction manager interface, and the unit test has been migrated to the seastar Google Test framework. Currently I'm integrating the logic with the cached extent, and I have looked at the LBA map and the LBA btree and omap implementations.
F: I think the solution will be different, because the node layout and the in-memory node structure and extent are decoupled in the onode tree implementation.
A: Do you mean the block used by the extent is different? For example, the extent block should be different from that used in yours.
E: About the issue that I submitted, just one thing I want to note is that there are chances the OSD will pop up the "no reply" error from the other OSD due to that issue. And I think, because to fully test the interruption machinery I have to shut the OSD down and start it again, this issue is actually...
F: Yep. After the issue happened, if you don't restart the OSD, does that issue continue to happen or not?
D: Yeah, I wanted to ask about one thing. I think a while back someone mentioned Micron's, I want to say, heterogeneous storage engine, maybe. I think it got open sourced sometime in May.
D: I talked to them sometime last year, and they're getting the kernel components upstream somewhat soon; that is, the Red Hat kernel team is helping them get their patches in shape to actually merge. So I met with them a bit and looked at it, and it has a few interesting properties. They seem to have actually designed it to operate as a backend for a Ceph OSD.
D: Its real target, honestly, is to replace WiredTiger or RocksDB as a key-value store, but there are features in it that they say were designed to address use as an ObjectStore implementation for a Ceph OSD with it. Its transaction implementation is powerful enough to be used unmodified, as far as I can tell, which means that implementing an ObjectStore layer on top of it would be quite trivial.
A
D: This isn't something I plan on devoting much time to, but I do plan on slapping together an ObjectStore implementation for classic OSD. It's not possible to do one for Crimson yet, because they don't support asynchronous I/O, but they are working on it. They're going to use liburing... or they're going to use io_uring, rather, to support asynchronous I/O; it just isn't done yet. Anyway, I think that might be an interesting thing.
D: So at that scale, it would basically have to replace something like SeaStore to be worth the engineering cost, but I'm all in favor of not writing an object store; that sounds great. So I'm going to try to find some time in the coming month or two to either wire it into classic OSD or conclude that it's not worth working on, one or the other.
D: It's not going to be like that... I don't think there's any way it's going to be a direct trade-off, and I don't think we're going to do a prototype for this and go "oh, I guess we're not doing SeaStore." I think it's going to take them several more months to get it in kernel in the first place, and it will take quite a bit longer before we find out whether it's performant and stable.
D
A
B
D: At the user space level, because the vast majority of it is user space code in library form, not the kernel component; there's a fairly thin kernel component, and that's what they're getting upstream. But in order to offer us an asynchronous interface we can use, not only would the kernel have to support io_uring, the intermediate code paths would all have to be asynchronous as well, so I haven't evaluated it far enough to know about that. Anyway, I just think it's interesting, at least from when I first looked at it.
D: However, the upshot of all of this, and the reason why it's worth entertaining at all, is that it should, in theory, on spec, already support persistent memory and zoned namespace devices in a heterogeneous configuration where it uses the faster storage for caching, all of which is a lot of work. So.
D: Something I don't... they're not using it in a release, I don't think; I'll have to check. I suspect they're doing several different things depending on how you have it configured. It's pretty configurable.
D
A: Me too. Anything else? Nope.