From YouTube: 2017-APR-12 :: Ceph Performance Weekly
Description
Weekly collaboration call of all community members working on Ceph performance.
http://ceph.com/performance
For full notes and video recording archive visit:
http://pad.ceph.com/p/performance_weekly
B: All right... that's better. That's fine. All right, let's see. There's a PR with move-semantics work; we haven't had much of that yet. On the BlueStore front we have a couple of them. The first, and probably the most important, is the one from Igor that refactors the blob-reuse stuff for small writes. It makes it a bit more robust, and it also fixes an issue with the previous version that merged, which was creating tiny blobs in a bad way. So that got merged this morning.
B: There was a bit of a regression after the throttling PR, which had something that changed the throttle sizes, so hopefully that fixes the case, but we'll see. Even with that, though, on small writes the latency is higher and IOPS are slower for BlueStore than for FileStore, so we definitely need to figure out what's going on there.
B: The commit path is just a lot longer in this case, I think, than what we can get out of the disk. So there's that. Let's see, there's another one, from Radoslaw, that changes blob refs into blobs and changes a bunch of types around. It's like five different things in that pull request that probably need to get split out, reviewed, and tested carefully, so that's still a work in progress, but it's pretty promising. A couple of other ones merged: the tables optimization that showed up in profiling, that one merged, and the throttle-model one merged.
B: So I think the next step there is that Mark needs to test it and see how it does. It's basically trying to adjust the throttles automatically based on a target latency, so we need to make sure that works and see how it performs. We'll see. There's one that makes a tweak in FileStore to change the order of completions. That one makes me pretty nervous; I have to take a look at that.
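The idea described here, automatically adjusting the throttles from a measured latency against a target, can be sketched as a small feedback loop. The class, the names, and the damped proportional rule below are illustrative assumptions, not the actual PR's logic:

```python
class LatencyTargetThrottle:
    """Adjust an I/O admission limit toward a target latency.

    A toy proportional controller: if observed latency is above the
    target, shrink the number of in-flight operations allowed; if it
    is below, grow it.  (Illustrative only, not BlueStore's code.)
    """

    def __init__(self, target_latency_ms, limit=64, min_limit=1, max_limit=1024):
        self.target = target_latency_ms
        self.limit = limit
        self.min_limit = min_limit
        self.max_limit = max_limit

    def observe(self, latency_ms):
        # Proportional step: scale the limit by target/observed,
        # damped so a single noisy sample cannot swing it too far.
        ratio = self.target / max(latency_ms, 1e-6)
        damped = 1.0 + 0.25 * (ratio - 1.0)
        self.limit = int(min(self.max_limit,
                             max(self.min_limit, self.limit * damped)))
        return self.limit

throttle = LatencyTargetThrottle(target_latency_ms=2.0, limit=64)
throttle.observe(4.0)   # latency above target, so the limit shrinks
throttle.observe(1.0)   # latency below target, so the limit grows back
```

The hard part in practice, and presumably what the testing is for, is picking the damping so the limit converges instead of oscillating under bursty load.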
B: There's a related pull request somewhere in the pile that does something similar on BlueStore, where it just combines the callbacks, because in BlueStore's case all the callbacks are called at the same time, and the is-readable and is-writeable stuff is all the same. So I think that one might make more sense, and I'm more interested in making that BlueStore stuff simpler and cleaner.
B: Since that's what we're going to be supporting going forward anyway, we'll see. BlueStore is actually a little bit different in that everything is actually readable as soon as you submit the transaction; you don't even have to wait for it to commit. So it's possible that we don't even want the on-readable callback at all in the BlueStore case. We might have to change that around; we'll see. Anyway.
B: The QA test failures? Yep, I haven't tried running it through the QA suite. It used to have a bunch of other stuff mixed into that branch that all got merged separately, so now that branch just has the write cache, so I could try it again.
B: I kind of dropped its priority a bit, but I think I might as well bump it up, because it's pretty clear that it's going to be better than what we have. Not by a lot, it's probably a small improvement, but I think it's the right direction, so we might as well merge it now, and that'll unblock some other stuff. So I'll put that back up on top of my BlueStore to-do list. Cool, I'll try it today.
B: Then the write cache unblocks the next two things, because then I can go back and resurrect the BlueStore completion branch, which has a bunch of good stuff in it but needs a rebase on top of the finisher thing there. A couple of other things are in flight. One is that it occurs to me that we should probably make BlueFS take its chunk of space out of the middle of a hard disk.
B
I
also
need
to
make
sure
that
FS
check
memory
usage
is
OK.
I
did
some
preliminary
stuff
that
improved
it,
but
I
think
there's
still
memory
usage
coming
from
somewhere
that
isn't
accounted
for,
so
I
need
to
run
it
through
massive,
that's
going
to
take
probably
days
on
a
full
disk
fold
it
data.
So
you
should
start
that
on
the.
A: Did you get a chance to talk to the guys at Western Digital?

B: Yep.
B: Yeah, well, I mean, we could make it so that we also write data in the middle of the platter at the same time too, and then spread to the outside. But that's probably not really a good idea, because everything else is going to slow down as the disk fills up anyway, so it would just increase the performance disparity between early and old. But I guess we should just try moving it somewhere else and see how much the early performance varies anyway.
B: That'll probably tell us whether it's even worth doing in the first place, so yeah, it'll be interesting either way. It's a pretty easy change and a pretty easy thing to test out. So, okay, I guess that first set of pull requests, the ones that are sort of in flight right now, are changing the way that BlueStore is handling the deferred writes. And the main change there is... well, there's one more pull request here that's actually not on the list.
B: They go in ascending instead of descending order like they were previously, and that basically matters for sequential writes: the sequential deferred writes will get coalesced in memory and then sent down as one I/O, which makes a difference. So that is, I think, ready for another round of QA, but it looks pretty good in the performance tests on our disks; it's making a big difference. There are still a few more things we can do with that code, but what we have now is pretty good, I think.
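The coalescing behavior described here, sorting deferred writes into ascending offset order so that adjacent extents merge into one submission, can be sketched roughly as follows (simplified; not BlueStore's actual code):

```python
def coalesce(writes):
    """Merge adjacent (offset, data) writes into single larger I/Os.

    Writes are sorted into ascending offset order first, which is the
    point of the change discussed above: back-to-back sequential
    deferred writes then collapse into one submission.
    """
    merged = []
    for off, data in sorted(writes, key=lambda w: w[0]):
        if merged and merged[-1][0] + len(merged[-1][1]) == off:
            prev_off, prev_data = merged[-1]
            merged[-1] = (prev_off, prev_data + data)  # extend previous extent
        else:
            merged.append((off, data))
    return merged

# Three 4 KiB sequential writes arrive (possibly out of order) and go
# down as one 12 KiB I/O; the distant write stays a separate I/O.
ios = coalesce([(8192, b"c" * 4096), (0, b"a" * 4096),
                (4096, b"b" * 4096), (1 << 20, b"z" * 4096)])
```

With descending order, each write's neighbor has not been seen yet when it is submitted, which is why the ascending change matters for sequential workloads.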
A: I need to comment on that, specifically for Nick's benefit. On the branch that you were testing, if you actually look at the sequential writes with blktrace, they're basically in descending order. So you'll skip forward, then start writing backwards, then skip forward and stutter backwards again.
E: Yeah, I was going to say it looks like it's better than it was, but it seems to be coalescing it in the disk cache or something, because I'm sort of seeing an improvement, but the disk has got an average wait time of about two milliseconds, and that's roughly what I'm seeing as the latency of the average I/O coming out of Ceph. So yeah, I mean, obviously I'll just test what this change has done. I've also managed to kill everything using that build, by the way.
B: Yeah, download.ceph.com is old and probably... oh, no, sorry, you're right. So download.ceph.com has the actual releases, and the luminous one you're seeing there is the dev release that we're cutting every month or something like that. So that's pretty stale, but the shaman builds are coming straight out of git. Okay.
E: Not massively, no. It looks like whatever it was doing before, where it was waiting on the disk to write, it's not doing that anymore. But whatever delay there is in doing these coalesced writes to this layer, it's still sort of waiting on that, so the average disk I/O is taking two milliseconds. That's what we're seeing on the server side as well.
A: There was at least one bug that we uncovered where the fixes were just using the wrong lock.
A: That was with the BlueStore branch; I probably have more updates since then, but that's what I was seeing at that point. Okay.
A: It was to look at whether or not the numbers matched. Yeah, that's right.
B: All right, what else?
A
Basically,
I
want
to
reject
the
group
here,
because
I
found
out
I
every
every
six
months
or
a
year,
so
I
come
back
to
us,
but
the
only
easy
way
I
found
to
do
Wolcott
profiling
is
to
basically
do
poor
hands
profiling,
where
you're
calling
gdb
over
and
over
again
sense
of
pain.
But
I
was
wondering
if
anyone
here
knows
of
any
other
tools
that
consistently
work
well
for
doing
this.
A: If not, I'm probably going to go and try to resurrect some ancient projects I've seen that actually drive gdb through its interface, so that you can get backtraces and collect them without having to continually re-invoke gdb over and over again. But if anyone has better suggestions, I'd be very happy to hear them.
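The "poor man's profiling" workflow described here, repeatedly attaching gdb, dumping every thread's backtrace, and counting how often identical stacks appear, can be sketched like this. The gdb flags are standard batch-mode options; the wrapper functions and the frame parsing are illustrative and assume gdb's usual backtrace output format:

```python
import collections
import subprocess

def grab_stacks(pid):
    """One sample: attach gdb in batch mode, dump a backtrace for every
    thread of the target process, then detach."""
    result = subprocess.run(
        ["gdb", "-p", str(pid), "-batch", "-ex", "thread apply all bt"],
        capture_output=True, text=True)
    return result.stdout

def frame_name(line):
    """Pull the function name out of a gdb frame line such as
    '#0  0x00007f... in poll () from /lib64/libc.so.6' or
    '#1  worker_loop (arg=0x0) at worker.c:42'."""
    body = line.split(None, 1)[1] if " " in line else ""
    if " in " in body:
        body = body.split(" in ", 1)[1]
    return body.split(" (")[0].strip()

def collapse(gdb_output):
    """Fold one gdb dump into a list of per-thread stacks, each stack a
    tuple of function names, so identical stacks compare equal."""
    stacks, frames = [], []
    for line in gdb_output.splitlines():
        if line.startswith("Thread "):
            if frames:
                stacks.append(tuple(frames))
            frames = []
        elif line.startswith("#"):
            frames.append(frame_name(line))
    if frames:
        stacks.append(tuple(frames))
    return stacks

def profile(pid, samples=100):
    """Sample repeatedly and count identical stacks; the stacks seen
    most often are where wallclock time is going."""
    counts = collections.Counter()
    for _ in range(samples):
        counts.update(collapse(grab_stacks(pid)))
    return counts.most_common()
```

Each `grab_stacks` call briefly stops the target process, which is the main cost of this approach and why it is a pain on a busy daemon.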
A: A slightly better way might actually be to use the Python interface to gdb and never exit gdb, since you're just asking it to get stack traces again and again. There was a project to do that like five years ago, but it doesn't appear to work very well; I may try to just do that myself and see if it works. There's also an OCaml thing that I tried, but I don't know; it sort of works, but it doesn't actually seem to capture very useful information. So yeah, this is...