From YouTube: 2020-04-16 :: Ceph Performance Meeting
A: All right, let's get this started. So this week we've got, as far as I could tell, a couple of new PRs that came in. One is from Igor, looking at switching the min_alloc_size in BlueStore back to 4k for hard disks, after his excellent work on the hybrid allocator and also his deferred-writes PR. That looks good, I think; definitely we should switch it, and let's see what happens.
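Why the allocation unit matters: BlueStore rounds every allocation up to min_alloc_size, so the previous 64 KiB HDD default wastes most of each unit on small objects, while 4k keeps space amplification low at the cost of more metadata and deferred-write traffic. A back-of-the-envelope sketch (the object sizes below are made-up examples; only the old 64 KiB and new 4 KiB values for bluestore_min_alloc_size_hdd come from the discussion):

    # Back-of-the-envelope space amplification for BlueStore-style allocation.
    # Allocations are rounded up to min_alloc_size; smaller units waste less
    # space on small objects. Object sizes below are made-up examples.

    def allocated(size: int, min_alloc: int) -> int:
        """Round size up to a multiple of min_alloc."""
        return -(-size // min_alloc) * min_alloc

    object_sizes = [1_000, 4_000, 16_000, 100_000]  # bytes, hypothetical
    for min_alloc in (64 * 1024, 4 * 1024):          # old HDD default vs 4k
        used = sum(object_sizes)
        alloc = sum(allocated(s, min_alloc) for s in object_sizes)
        print(f"min_alloc={min_alloc // 1024}KiB: stored={used} "
              f"allocated={alloc} amplification={alloc / used:.2f}x")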
A: It's like what we did in FileStore a couple of years ago, where the idea was that if we knew up front how many objects we would have, we could pre-split the directories, or PGs. This is a similar scheme in the MDS: if a user can hint that they're going to have a directory with lots of files, we can pre-fragment the directory and then avoid doing that while writes are happening. That seemed, at least in the testing that I did, to yield a benefit.
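As a toy illustration of the pre-fragmentation idea (not the MDS implementation; the class and parameter names here are hypothetical): with a hint, the target fragments exist up front, and entries hash straight into a fragment instead of triggering splits while writes are in flight.

    # Toy sketch of directory pre-fragmentation: hash each entry into one of
    # 2**frag_bits fragments created up front, instead of splitting fragments
    # later while writes are in flight. Not the MDS implementation.
    import hashlib

    class Directory:
        def __init__(self, frag_bits: int = 0):
            # With a "many files coming" hint, pick frag_bits > 0 up front.
            self.frags = [dict() for _ in range(1 << frag_bits)]

        def _frag_for(self, name: str) -> dict:
            h = int.from_bytes(hashlib.sha1(name.encode()).digest()[:4], "big")
            return self.frags[h % len(self.frags)]

        def add(self, name: str, inode: int) -> None:
            self._frag_for(name)[name] = inode

    d = Directory(frag_bits=4)            # pre-fragment into 16 fragments
    for i in range(10_000):
        d.add(f"file-{i}", i)
    print([len(f) for f in d.frags])      # entries spread across fragments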
A: Kefu added a PR on GitHub to add Jenkins tests for classic OSD performance. So that's with all the stuff that they wrote for Crimson, but now also working with the classic OSD, which is really exciting. Hopefully we'll be able to tweak that and maybe start catching big performance regressions. And then Igor, both of his PRs: the hybrid allocator merged, and also the deferred big-writes PR was updated this week.
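A regression-catching job of that shape could reduce to a simple gate; this sketch is hypothetical (the metric name, baseline numbers, and tolerance are invented, not what the Jenkins job actually does):

    # Hypothetical performance-regression gate: compare a benchmark result
    # against a stored baseline and fail if it regresses beyond a tolerance.
    # Metric name, numbers, and threshold are invented for illustration.

    def check_regression(metric: str, baseline: float, current: float,
                         tolerance: float = 0.10) -> bool:
        """Return True if current is within tolerance of baseline."""
        ok = current >= baseline * (1.0 - tolerance)
        status = "OK" if ok else "REGRESSION"
        print(f"{metric}: baseline={baseline:.0f} current={current:.0f} [{status}]")
        return ok

    # e.g. 4k random-read IOPS from a fixed fio run (made-up values)
    assert check_regression("randread-4k-iops", baseline=95_000, current=91_000)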
A: I think for the NVMe device one here, Kefu was still working on a review for that, but no real movement, just that he's going to do a review. This one from Ma Jianpeng was updated with more numbers for changing the OSD op threads per shard to one for SSDs; we can talk about that more later, in the bigger context of the sharded op work queue.
A: Sharding of the RocksDB database: Adam's excellent PR here, I think, passed its most recent round of testing, and it looked like Josh thought it looked good too. Hopefully we can merge that soon, and then that will release a little bit of a logjam on the other PRs that want to make it into the OSD. That's exciting! Hopefully we can do that quickly. And then, I remember, maybe there were some new updates on your [inaudible] PR.
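The gist of the sharding idea, as a toy sketch: BlueStore keeps several key families (onodes, omap, deferred writes, and so on) in one RocksDB instance, and splitting them apart lets each be compacted and tuned independently. The snippet below only mimics the prefix-based routing; the prefixes and structure are illustrative, not Adam's implementation.

    # Toy sketch of prefix-based key sharding, loosely analogous to splitting
    # a RocksDB instance into per-family shards (column families) so each can
    # be compacted and tuned on its own. Models only the routing; it is not
    # the BlueStore implementation.

    SHARDS = {"O": {}, "M": {}, "L": {}, "default": {}}  # illustrative prefixes

    def shard_for(key: str) -> dict:
        prefix = key.split(".", 1)[0]
        return SHARDS.get(prefix, SHARDS["default"])

    def put(key: str, value: bytes) -> None:
        shard_for(key)[key] = value

    put("O.onode-123", b"...")   # onode metadata goes to its own shard
    put("M.omap-456", b"...")    # omap entries go elsewhere
    print({name: len(s) for name, s in SHARDS.items()})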
A: All right then, moving on. So, Ma Jianpeng's thread-per-shard testing that he did: we talked about this a little bit last week. To recap, he's basically seeing that with one thread per shard rather than two threads per shard, at a high queue depth, the random read performance is significantly higher.
A: Can we figure out what is actually going on here and what we're contending on? I asked Ma Jianpeng if he could gather some wall-clock traces in both cases; we'll see if that can happen. Josh, or anyone that remembers: when we were writing the sharded op work queue, did we ever do any testing looking at this kind of stuff?
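For poking at the contention question concretely, here is a toy model of a sharded op queue, not the OSD's ShardedOpWQ: each shard has its own queue (with an internal lock), and threads_per_shard workers pull from it, so two threads on one shard serialize on that lock much as ops on one PG serialize on the PG lock.

    # Toy model of a sharded op work queue: num_shards shards, each with its
    # own queue and lock, served by threads_per_shard worker threads. With
    # two threads per shard, both serialize on the shard's queue lock while
    # dequeuing, which is the flavor of contention being discussed here.
    import queue, threading, time

    def run(num_shards: int, threads_per_shard: int, ops: int) -> float:
        shards = [queue.Queue() for _ in range(num_shards)]
        for i in range(ops):
            shards[i % num_shards].put(i)

        def worker(q: queue.Queue) -> None:
            while True:
                try:
                    q.get_nowait()
                except queue.Empty:
                    return
                time.sleep(0)          # stand-in for doing the op

        threads = [threading.Thread(target=worker, args=(q,))
                   for q in shards for _ in range(threads_per_shard)]
        start = time.perf_counter()
        for t in threads:
            t.start()
        for t in threads:
            t.join()
        return time.perf_counter() - start

    for n_shards, tps in ((16, 1), (8, 2)):   # same total thread count
        print(f"{n_shards} shards x {tps} threads/shard: "
              f"{run(n_shards, tps, 100_000):.2f}s")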
D: We talked about this a little bit last week. I don't remember if we did specific testing at that point, but I know that at various points in time it has been occasionally useful to have multiple threads per shard, to let one thread handle some kind of background work, like recovery or backfill, at the same time as client operations.
A: I do remember that when I've done some wall-clock profiling, the sharded op work queue has looked suspicious. You know, I'd see it popping up with, I think it maybe was, PG lock contention, specifically in the random read case. It wasn't huge, but maybe that was like a canary in the coal mine, right? Like there's something going on there.
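Wall-clock profiling in this context means sampling every thread's stack on an interval, including threads that are blocked rather than on-CPU; in practice it's done against the OSD process with an external sampler. As a self-contained illustration of the technique only, a minimal Python sketch:

    # Minimal wall-clock (sampling) profiler sketch: periodically capture the
    # stack of every thread, whether running or blocked, and count where time
    # is spent. Illustrates the technique only; profiling an OSD is done with
    # external tools against the C++ process.
    import collections, sys, threading, time, traceback

    samples = collections.Counter()

    def sampler(interval: float = 0.01, duration: float = 1.0) -> None:
        end = time.time() + duration
        while time.time() < end:
            for tid, frame in sys._current_frames().items():
                if tid == threading.get_ident():
                    continue                      # skip the sampler itself
                top = traceback.extract_stack(frame)[-1]
                samples[f"{top.name} ({top.filename}:{top.lineno})"] += 1
            time.sleep(interval)

    def blocked_work() -> None:
        time.sleep(1.0)                           # shows up in the samples

    t = threading.Thread(target=blocked_work)
    t.start()
    sampler()
    t.join()
    for where, n in samples.most_common(3):
        print(n, where)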
E: There will be no added contention points, I think; we'd have to look at the data you've gathered. The reason that we do multiple threads per shard is because, in a read workload, some of those threads can block on the read while the I/O is happening; this all happened inline instead of on waiters.
E: Well, we used up a work queue thread while performing a read, and so we wanted to allow other reads to happen at the same time. It could be that that isn't right in terms of our assessment of the impact, but that's my recollection about why.
A: Maybe we should be asking Ma Jianpeng what he sees if he does 16 shards with one thread per shard, and 16 shards with two threads per shard. I think he was trying to hold the total number of threads constant, either in, you know, an eight-by-two or a sixteen-by-one configuration, but maybe this is just telling us that we need more shards, right? I think 16 is actually the default, so maybe... I mean, maybe.
D: Right, and rather than pure random reads, if you have reads and writes mixed together, that's where the contention I was talking about, in terms of blocking reads, would likely come into play more as well.

A: Sure.
A: All right, Igor, as soon as you get that other change in to make the hybrid allocator the default, then... oh, fantastic, I will approve that then. Sounds good. Okay, and then performance CI: Kefu, you did a bunch of work, or at least some work, I don't know how much it was, but it looks like we can do classic OSD tests now.
A: Thanks for doing that; that's really exciting. Josh, is there anything more that you want to talk about with performance here this week? Let's see... I think...