From YouTube: Ceph Performance Meeting 2022-01-06
A: All right. So actually, one thing I wanted to mention is: I think we've got a couple of our folks from Poland out today. So one of the things I wanted to talk about, I'm not sure we'll get to. But having said that, let's get through PRs here. I didn't make it entirely through PRs this week, but I did see two new PRs over the last two weeks that I want to mention.
A: One is in common, using a thread-local pointer variable to save the shard. This is #44479. I don't think there's been any review on that yet. But, oh no, Ronan just got in there. Ronan, any comments? I see this is real fresh. So yeah, if anyone wants to review that one, it's there.
Also, let's see, there is a doc change for rewriting the hardware docs; it's #44466. Dan approved that, and I made a request for a change in there. The only thing I was nitpicking a little bit is that it mentions needing two cores for heavy RBD workloads, and I think we should maybe be a little bit more nuanced than that.
Let's see, closed: two PRs this week that I saw. First is Igor's PR for further cleaning up the pinning in the onode code for pinned entries; that got merged by Yuri. That's a really good PR. I think Adam was very pleased with it, and from what I looked at, it looked like a good cleanup. So that's good. Then there's the PR from last summer for optimizing the objects' memory allocation using pools. There was some feedback on that. Ronan, I think you left feedback on that; Adam may have as well. It looks like for now the author closed that PR, just based on the feedback that he was getting; it had been marked do-not-merge. I'm actually hoping that we'll see further work from the author, because this is a really interesting area, but yeah, maybe the approach that was being taken wasn't quite right. Ronan, was I right that you looked at that, I think?
A: Yeah, I agree. All right, moving on. Updated this week: the manager time-to-live cache implementation has gotten some new updates. I think that's just chugging along, so hopefully we'll see a final implementation there. Josh's primary balancer work: Laura, I think you've been doing a lot of reviews on that. Anything new to discuss?
D: Nothing major. We're going to, or he's gonna, squash some commits to make it more cohesive, and then we're gonna run it through teuthology. So those are the major updates on that.
A: Okay, cool, sounds good. Let's see, Igor's PR to make the shared blob fsck much less RAM-greedy: Adam's been doing a lot of reviews on that. The comment that I made on that PR is that if we are going to try to use the osd_memory_target as a method for controlling the amount of memory we use for that process, I think that we should consider adding it as a priority cache target to compete with other consumers of memory. We could do that at priority zero, which basically means that it's only competing with things that also demand memory very aggressively, but I think that might be the way to go. So anyway, we'll see what Igor thinks about that, but that's the kind of feeling I've got on this right now. Okay, and then next is the BlueFS fine-grained locking; this is Adam's big PR.
A: I think, if I remember right, Igor had some updated reviews on that. I'm just gonna check quick to make sure that I didn't miscategorize this one. Oh no, Neha responded on that most recently; there was apparently a segmentation fault. So, okay, I have to update that one, but it looks like maybe there are still some issues in that PR, possibly. Okay, and then, speaking of locking issues...
A: I had a couple, two separate locking issues, one of which I think was what was causing the testing failure, and another that was more subtle. The first of which was just a really stupid access to the age binning data structure that was not protected, so that was an easy fix. The more subtle one was that there was basically a read-modify-write that would have potentially allowed for a change in between locks, and that was completely unnecessary: the read wasn't needed, we could just do a straight lock and write. So I fixed that, and since fixing those two issues, that PR now appears to be passing both make check and the bigger test that Yuri ran. So potentially, if we want to, we can try to get that in for Quincy, I think. And that is what I saw for updated PRs this week. There were some I didn't look at here; I ran out of time this morning. But otherwise, that was it. Anything I missed from anyone?
E: There's one PR, Mark, that could use your review. I think it's just an enhancement to the RADOS benchmark from Igor.

A: Oh, okay, sure.

E: I think it's just one of those nice-to-have ones. I pasted it in the chat.
E: Yeah, it's essentially small, but it'd be nice if you could just review it once.
A: Oh, this is enhancements to the omap section, part of it. Okay.

E: That's exactly it, yeah, yeah.
A: Speaking, actually, of that whole topic: Neha, there's my PR for the omap bench in the Google Test suite, which I keep thinking about in the back of my head. To really make that useful as an independent benchmark, we'd either have to make an executable that works kind of like gtest, where it creates a mock OSD back-end and runs tests against it, or we rewrite it as a tool that goes through a pre-existing OSD and tries to do something really similar, but it wouldn't be able to work the same way as my code in the gtest suite does. I've been kind of putting off the decision on what to do with it.
E: I remember this came up when we were freezing last year, and it sat on the back burner. But I mean, being able to run on a pre-existing OSD is something that would be ideal, right?
E: Probably, but I mean, you know, the benchmark in general would be really useful in a scenario where we have issues and we want to just run this on an existing cluster and get some numbers out of it. So having to create an OSD in that scenario, I'm not sure it's going to be that useful, but in other cases it might just be useful, right?
A: All right, any other PRs from anyone they'd like to bring up or discuss?
A: Casey, I'm gonna pick on you a little bit: how is the RGW work going with some of the performance PRs we talked about before? I think one of them you mentioned maybe was getting in for Quincy.
F: Hey Mark, I don't remember which one you're talking about, but the Beast front-end stuff did merge. That was a big one.
A: Oh good, it merged. Okay, great, great, yeah. I think that was what I was thinking of, the issue with... I think it was... oh shoot, now I'm forgetting.
F: Right. And I know that you had a couple of PRs around the caching system, I think, that we haven't revisited yet, unfortunately.
A: So, with the Beast front-end changes, the PR that merged: does that resolve it fully, or was it an improvement? I don't remember.
F: A good question. When Mark Kogan was doing the measurements, he was just comparing the regressed version with the fixed version. I don't think he was doing comparisons with before the timeouts.
A: Okay, it'll be interesting to see what that looks like now compared to previously, both before and after the changes were made.
A: I'll add that to my mental list of things to think about here.
A: All right, anything else from anyone?
A: Oh, okay, sounds good. So the only discussion topic that I really had for today is: I was hoping, if Igor or Adam were here, we could talk about Igor's PR for fsck and reducing the memory usage. There was some discussion in that PR about trying to make the OSD fit within the osd_memory_target during fsck, and if we do that, I wanted to talk about utilizing the existing code in the priority cache framework to control the memory usage, or at least inform the memory usage, of the fsck process.
A: Maybe one thing, since Gabi, you're here, I did want to ask you: how is your work on the fixes for the allocation changes going?
G: I've got three fixes in the works. One of them is completed; it's dealing with the expand. There was a problem with the expand code which I didn't handle correctly; now it's working fine, and there's a test case for this, which was submitted. Another thing I'm working on is replacing the bitmap allocator I was using while walking the allocations, every time I would find an allocated area.
A: Okay, okay, cool. Yeah, Gabi, I think your work on the allocator is probably going to shape up to be the biggest win in Quincy that we see, so I suspect we will be highlighting it. So I wanted to just check up on how it's going with some of those fixes, but yeah, I think you're gonna be the highlight on the performance side.
A: I think that's pretty much it. At some point I did want to... I don't think before the break.
A
We
got
to
really
talk
about
the
odsync
right
pipeline
with
there
was
some
evidence
in
some
tests
that
on
different
platforms,
we're
seeing
the
the
journal
right
be
sorry,
though
the
redhead
log
right,
behavior,
being
the
the
big
limiter
on
on
small
rate
performance
and
there's
some
question
about
whether
or
not
we
can
do
a
better
job
of
increasing
the
the
aggregation
of
rights
in
the
wall
to
better
kind
of
utilize,
the
behavior
of
underlying
devices.
A: Oh, actually, one other thing. Ronan, you mentioned that TCMalloc discussion at CppCon, right?
A: Do you think that would be a good presentation to discuss at one of these meetings?
A: I confess I haven't watched the presentation yet, but maybe it'd be something that, if folks think it's worthwhile... I mean, it does have a direct relevance for what we do. Definitely watch it again, and...
A: All right, well then, thank you for coming, everyone, and have a great week. I'll see you next week.