From YouTube: 2016-SEP-14 :: Ceph Performance Weekly
Description
Weekly collaboration call of all community members working on Ceph performance.
For full notes and video recording archive visit:
http://pad.ceph.com/p/performance_weekly
A: Alright. I reached out to Sage, but he's not responding, so he may be busy with something else, and a whole bunch of folks are actually out in India right now working on Red Hat Ceph Storage 3.0 planning. So we have a bit of a smaller crowd this week, but we can just get started here.
A: So, let's see. The big news in terms of pull requests this week is that Sage submitted a PR for his new encode/decode scheme, based on Allen and Sam's work. If you haven't already taken a look at that, it's worth looking at. It's pretty complex; maybe others find it not quite as complex as I do, but it's something you should at least spend a little bit of time looking through to understand. The good news is that it seems to be at least as good as what we had before, and I think it has the potential to be better. We're going through and figuring out how it should be integrated into BlueStore, and whether or not there are things we can do to use some of the advantages it provides in better ways, specifically how to do size estimation in a way that lets us avoid as much buffer append behavior as possible. So yeah, there's lots of stuff on that; we'll probably be spending a lot of time this week continuing to look into that.
A: Otherwise, there's a couple of things here. Sage's sharded extent map PR got merged last week, so that's in, which is really good; it has improved small random write performance of BlueStore pretty dramatically, so it's definitely worth looking at. Right now in master, at least in the tests I've been doing, we're faster than FileStore for small random writes, at least in five-minute tests; I'm not totally sure about longer-running tests.
C: [partly garbled] Mine is going down there as well. Starting fresh, it's at about 145k and comes down to around 25k; then it turns around, and after recreating [the OSDs?] it comes back up and then starts dropping again. In earlier cases it would steadily drop from 40k down to 15k over the period of [an hour?].
B: The thing that's different about the beginning versus the end of the run is that you're switching from a non-sharded to a sharded fetch, which is two database accesses versus one. But the write paths shouldn't be terribly different, and if you are seeing that, it suggests to me that maybe the shards themselves are still too large.
B: You mentioned it's filling up, but if space usage is increasing, that's actually not terribly surprising. I don't think RocksDB does path compression on the keys, and you're adding a lot more keys than you are data.
B: Growth by integer factors wouldn't surprise me at all, but it should level out, and it shouldn't be awful. It's also possible that we're leaking something into RocksDB that's not getting deleted.
A: Yeah, so one of the things I think I need to do is sit down and actually rip out what's in RocksDB after one of these tests, and just look at what's there and what's taking up space. Presumably, if we've got tons and tons of stuff in there, I would suspect that maybe RocksDB itself is slowing down.
B: You know what would be interesting: just a simple program that reads RocksDB front to end and does a simple histogram across the different key spaces. You know, counts up the keys and the average sizes, just some basic stats across those. It would be a fairly simple program to write, and it would be really useful in this environment.
A: So another thing I want to do here as well, that I haven't tried in a while, is actually using the MemDB store backend to see if we can get some output from that. One of the things that I think... oh, go ahead.
B: I don't even mean RocksDB; I mean just code that takes the running BlueStore KV store, runs an index over the whole thing, and analyzes it. It would be even easier to write that as a command built into the codebase, and that would be, I think, really useful.
B: But fundamentally, you know, Somnath had done some work measuring the performance of the bitmap allocator versus the stupid allocator, and my comment was that, by design, the bitmap allocator should have the same big-O performance as the stupid allocator, so I'd be really surprised if you're seeing significant CPU differences between the two. Yeah, I think we tracked that down to the debug settings, and then I think we've gone quiet on that part of it in the interim.
B: Yeah, I think if you rerun that you're going to see a very different profile, and things like that probably aren't going to matter now. I'd be stunned if there are a lot of collisions on the internal locking in the bitmap allocator; it just can't be that big a percentage of what's going on, unless there's something seriously messed up in the code.
B: Anyways, it would be nice if we could get this thing to run long enough, you know, in the next couple of days, to repeat the kind of graphs that you had. It would be nice to see where we are and take another snapshot of where the CPU time is going, because I suspect the profile will look radically different now, with all these different things that we've done to it.
A: Yeah. Actually, as soon as this meeting is over I'll start on another set of tests, first looking at 4k random writes, and then after that we'll go through and try the full suite. But I imagine we'll have some bugs to work out here before we get anything really good; that's usually the way it goes.
C: [partly garbled] If the performance degradation over time is happening because of the total amount of data that's been put into RocksDB, then I think it will still be there. But if it is on the [onode?] side: in my cases the numbers were actually almost similar initially, but I still need to check the long run to see what the effect is. If that is actually what's degrading the performance, I think this will help a lot with this thing.
B: You should see it stabilize sooner, because it's really bimodal: in the one case you're converting from everything inside of one extent to basically an extent per 4k chunk. But I think you'll be dominated by the extra DB access on the read side, for the sharded extent map, and the extra writes on the write side, and you get to that state pretty quickly.
B: No, it shouldn't take too long, or too many overwrites of an onode, before it shards it, and once it shards it, I don't think additional sharding is going to make a significant difference in the amount of work that you're doing. So the degradation that you see should plateau out sooner in time than it does with the current code, and presumably at a much higher level, too.
A: In a uint32_t group varint encode, if you don't do a 24-bit value, because you're only doing 8, 16, or 32 bits, you could use that extra bit pattern as an indicator in the prefix of whether or not the value exists. So you could actually encode, with a conditional on whether or not it's there, without encoding anything beyond that. Yep.
D: Yeah, so I guess that's good news, but I really want to see the profile of the functions, the cumulative time spent in each function, sorted by that, so I can go look at all of it. When I filter out just the buffer functions, the highest buffer function is [an append path], and that only has 1.17 percent, and among its callers the highest is 0.17 percent, and it's all in request-encoding code; it's all little tiny stuff.
D: There are a million different things that encode, so if you add them all up it's, yeah, what, a percent? Yeah, none of them are particularly big, and I'm not seeing... there's only one STL map that's coming up, and that's the buffer. What is it, a map of buffers? I think... I don't know. Okay, I mean, I'm going to run this for longer; I just ran it for 60 seconds. Okay.
A: So one thing I was doing, to try to exercise the CPUs a little bit more, is I actually broke the NVMe device up into multiple [OSDs?]. You might try that and see if it helps.
D: OK, I'll keep playing with this. Oh, unrelated to performance, but the leak checker, the leak-fix branch, is passing my tests. Anybody want to take a quick look before I merge it, or should I just merge it? All right, I'll just merge it.
A: Let's see, what else. Oh, the only other thing I had in here was that another person on the mailing list was talking about low RGW performance once they get to millions of objects or something, and I kind of tried to write out a response as to why that happens, just a big general review of what we know. If anyone's interested in that, go and take a look, but I think probably the answer really is BlueStore, and to re-evaluate it once we've got this stuff all in a sane state.