From YouTube: 2018-JAN-11 :: Ceph Performance Weekly
Description
Weekly collaboration call of all community members working on Ceph performance.
http://ceph.com/performance
For full notes and video recording archive visit:
http://pad.ceph.com/p/performance_weekly
B: It's pretty straightforward. Over Christmas there were a few pull requests that came in. One is a small optimization to the EC back-end: before, when it read from itself, it was sending a message to itself and then replying to itself, and now it just does the read inline, so you skip the hop through the messenger and so on.
B: That's also good. I'm wondering, I'm not sure how much stuff uses a simple LRU for large caches, but anything this will benefit should get backported. David's thing that simplifies the sharded work queue locking from a while back, we just closed that. I think all of those data structures are going to have moved into like a big class.
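For reference, the kind of "simple LRU" being talked about is just an insertion-ordered map evicted from the cold end. A minimal sketch (illustrative only, not Ceph's implementation):

```python
from collections import OrderedDict

class SimpleLRU:
    """Fixed-capacity cache that evicts the least recently used entry."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.items = OrderedDict()

    def get(self, key):
        if key not in self.items:
            return None
        self.items.move_to_end(key)   # mark as most recently used
        return self.items[key]

    def put(self, key, value):
        if key in self.items:
            self.items.move_to_end(key)
        self.items[key] = value
        if len(self.items) > self.capacity:
            self.items.popitem(last=False)  # evict the coldest entry
```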
B: Peter had a pull request that changed the CRC cache in bufferlist to just be like a pair of the two most recent entries, so you avoid the dynamic data structure; it's much simpler. And I had something that reduces the allocations in AsyncConnection in the messenger, so it's one bigger allocation instead of two smaller ones.
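The two-most-recent idea can be sketched like this (illustrative Python, with made-up names; the real change is in Ceph's C++ bufferlist): two fixed slots replace a dynamic map from byte ranges to CRCs, so the fast path allocates nothing.

```python
import zlib

class CrcCache:
    """Caches the CRCs of the two most recently computed byte ranges."""
    def __init__(self):
        self.slots = [None, None]     # each slot: (range, crc) or None

    def get(self, rng):
        for entry in self.slots:
            if entry is not None and entry[0] == rng:
                return entry[1]
        return None

    def put(self, rng, crc):
        # Newest entry goes in slot 0; the previous newest shifts to slot 1,
        # silently dropping whatever was there before.
        self.slots[1] = self.slots[0]
        self.slots[0] = (rng, crc)

def cached_crc32(cache, data, rng):
    crc = cache.get(rng)
    if crc is None:
        crc = zlib.crc32(data)
        cache.put(rng, crc)
    return crc
```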
B
Acceleration
that
adds
support
for
Intel
processor,
accelerated
compression.
It
does
a
couple
different
things
that
does
lz4
and
z,
lib,
I,
think
and
I
haven't
responded
to
the
less
attack.
It's
my
concern.
There's
that
whether
or
not
using
CPI
X
GPU
acceleration
for
compression
shouldn't
affect
beyond
this
representation.
That's
gonna
press
data
and
I
think
the
way
that
it's
slipped
in
right.
Now,
it's
a
new
plug-in
type
and
right
now
the
plug-in
name
maps
to
the
compression
algorithm,
and
so
it's
like
it
looks
different.
B
It
looks
like
qat,
zip
instead
of
Z
wood
and
so
on.
So
I
need
to
figure
out
how
to
fix
that.
If
anybody
is
interested
in
chasing
that
one
down
be
my
guest
I'll
get
to
it,
but
I
definitely
be
good
to
get
the
acceleration
going
for
those
because
I'm
I
don't
make
a
big
difference.
Your
boost
or
I
think
which
reminds
me
and
I
put
this
on
the
list.
There's
a.
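One way the naming concern could be addressed (a hedged sketch, assuming a plugin name like "qat_zlib"; the function names and naming scheme are illustrative, not the actual PR's design): resolve the plugin name into a backend plus an algorithm, and key the on-disk metadata by the algorithm alone, so data compressed via QAT stays readable by the plain software codec.

```python
def parse_plugin_name(name):
    """Split a compressor plugin name into (backend, algorithm)."""
    if name.startswith("qat_"):
        return ("qat", name[len("qat_"):])   # hardware-accelerated backend
    return ("software", name)                # default software backend

def ondisk_algorithm(name):
    # Only the algorithm is recorded alongside the data; which backend
    # produced the bytes must not change how they are interpreted.
    return parse_plugin_name(name)[1]
```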
B: We finally reverted the approximate-sizes thing now that we've realized we're stuck with the ordinary, standard list-sizes call. It just needs to get backported to Luminous. I'm not sure; I don't think there's a ticket for that, so somebody just needs to remember to do it.
B
And
then
eager
still
has
this
pull
request.
That's
ready
to
go
for
luminous
that
fixes
the
amenity
to
spill
over
first
we'll
device,
its
just
cherry-pick
of
a
single
works
to
be
patch.
It
just
needs
to
go
through
a
testing
run.
I,
don't
know
who's,
doing
luminous
testing
stuff
right
now,
but
whenever
that
happens,
that
should
get
merged.
B: The scrub preemption is still in testing; hopefully it's finally running through today. Hopefully that'll get sorted out and backported to Luminous at least. I think we probably don't want to follow up on the shared/exclusive locking for PGs until we see whether it actually gains us most of that benefit, except for omap. I don't know, that makes me nervous, and it'll all go away with Seastar as well.
D: We strongly need every cycle we can get out of omap, and who knows when Seastar will be ready. Is that on the read side or on the write side? Apparently on the read side also, but Josh can speak to that. It came up in everything that intersects with ours, for example...
A: In our code we do some pretty rough things on the key-value store, right? I mean, at least on the write side our workload is pretty rough. On the read side, I don't know; I haven't thought about it recently enough. I guess I'm worried that maybe there are things we should be focusing on in our own code before that.
B: My worry is that I don't think the shared locking on PGs is going to help at all on our writes, or the read-modify-writes, because those are exclusive anyway. It would only help if you have concurrent clients doing read operations on the index, and I don't think that RGW does many of those. I guess the exception would be scrub.
B: My recollection is that shortening the PG logs to be really short, so they stay within the WAL or whatever, doesn't really matter, because if we don't write the PG log entry we're writing the dup op entry instead, and so it doesn't really save us anything. And I don't see that changing; we do write the dup op entries, right? We have to write those.
B: It seems, I mean, everything we tried before didn't really help. It was related to the size of the WAL, just because the size of the WAL affects everything that RocksDB does, but there wasn't a clear cliff the way that I was expecting, where as long as the PG log entries were retired before they made it out of the WAL, then...
A: It's kind of complicated in general, because it looks to me like, by having these entries leak into the database, we end up suffering because of it in many different ways. We have extra write amplification; we have indexes being recomputed for things that aren't long-lived; we're recomputing bloom filters for things that aren't necessarily long-lived. I mean, Hans's and my preference would be just to get this out of RocksDB entirely, at least on NVMes.
A: Is it writing new I/Os? If we are, we have a separate log that this is going into, but we're not actually... So I understand completely how you don't want to have multiple different logs on spinning disks or anything like that, but on something like NVMe, the bigger issue is parallelism; with high parallelism you don't necessarily care about random behavior as much. Why not have a log that can grow as big as it needs to to handle these, but never rewrite the data?
B: But I mean, even if we made it, for the sake of argument, a totally different index storage, that doesn't really help, because then, when you're writing to it, you still need to update the object metadata about which version of the object you have, and update the PG log so that all the replication and so on works. So separating it tends to make it worse, not better: you just have, Sage, more things that have to commit.
A: The kind of description that you just gave is kind of what the optimal scenario would be, right? Where you're writing this data out together with the write-ahead log, you're kind of getting it for free, because now you've coupled them together into one write: this metadata writing out along with the other stuff that you want. But the pain, right, is that now some of that, to some extent, is being copied into level zero, and you've got a bunch...
B
Right
I
think
I
think
the
fundamental
problem
is
that
racks
to
be
is
a
log
structured,
merge
tree
which
amplifies
rights,
and
so
our
wall
gets
amplified
at
level
0
and
sometimes
it
even
hits
level
one
even
for
short-lived
data
and
that's
just
a
fundamental
design.
Property
of
log
structured,
merge,
trees.
That's
like
not
the
best
choice
for
flash
storage,
but.
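The amplification of short-lived keys can be made concrete with a toy model (illustrative numbers only, not RocksDB's actual accounting): every write hits the WAL once, and each memtable flush rewrites whatever is still live to L0, so keys that survive past a flush get copied again even if they are deleted soon after.

```python
def lsm_bytes_written(ops, flush_every):
    """ops: list of ("put", key, size) or ("del", key).
    Returns (wal_bytes, l0_bytes) under a toy WAL + memtable-flush model."""
    wal_bytes = l0_bytes = 0
    memtable = {}
    for i, op in enumerate(ops, 1):
        if op[0] == "put":
            _, key, size = op
            wal_bytes += size          # every write hits the WAL once
            memtable[key] = size
        else:
            wal_bytes += 1             # deletion: tiny tombstone in the WAL
            memtable.pop(op[1], None)
        if i % flush_every == 0:       # flush rewrites the surviving keys
            l0_bytes += sum(memtable.values())
            memtable.clear()
    return wal_bytes, l0_bytes
```

If the short-lived entries are retired before a flush, nothing reaches L0; flush more often than entries are retired and the same data is written twice.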
B: Kind of, but it was just framed and written in not really the most helpful way. And even if ZetaScale worked perfectly for what it was doing, it's still not what we want. It might help for BlueStore, but I think what we actually want is a single implementation that is handling all our other data too. So I don't know that we would...
B: I mean, we kind of tried this, right? We tried... Lisa at Intel did that whole piece of work in RocksDB that makes it so that when we compact the WAL, we look at multiple WALs before we write the level-0 file, so we could limit the amplification of stuff, and that should help, but in practice it didn't seem to help that much, right?
D: Well, I assume that the fact that the retiring of an OSD op requires an update on the key-value store, and that that update is operating at the latency of the storage domain, is itself a problem. You could retire ops at a much higher rate with minimal wait, so I'm going after the minimum latency for an op to retire in an LSM.
B
So
the
minimal
latency
is
to
is
right.
It's
the
data
right
and
the
metadata
rate
based
on
the
current
design
of
blue
store.
That
is
the
minimal
we
can
see
unless
you
do
data
journal,
in
which
case
it's
one
IO
the
idea
with
the
new
design
that
we're
talking
about.
That's
all
ending
me
it'll
be
one
I/o,
because
the
whole
thing
is
log
structured.
So
it's
always
just
gonna,
be
one
I/o.
B: OK. I think, in terms of the actual things we can do in the short to mid term that will improve the situation, there are two roads that I see we can go down realistically. One is taking another look at Lisa's patches that try to deal with short-lived keys and prevent them from hitting level zero.
B: I think probably we need to actually have some better insight into how quickly our short-lived keys are actually being removed, and whether it's working or not. I don't have a good sense of how many of them are getting eliminated before they hit L0, with no changes and with our change, and then of trading that against the CPU cost.
B: And combining those two might work great too, because I think the dup ops are going to hit level zero and maybe level one, because they're there long enough: they have to cover request replays from laggy clients, or else they're pointless, right? That's the whole point; they're somewhat long-lived.
B: ...is no longer RocksDB. I think basically that's not a log-structured merge tree, and that's kind of what we want to build with our ever-larger B-tree thing that Allen described. I think that's very close to what he's talking about, and I think that's what we want to do, but I don't think that's something that we can do in that context, or even...
B: Yeah, it's basically just making RocksDB's life a little bit easier, then. But I think the idea would be that we would have a relatively small number of PG log entries, you know, like in the tens, and then we would trim in large batches with similar logic. So say we retire...
B: ...fifty PG log entries: that's going to be like one 4K write to the ring buffer, and then you delete them all. We just need to tune that number so it's as big as possible, so that we're as efficient as possible, but small enough that we retire them before they make it out of the WAL, before they hit level zero, right.
A: Sage, I linked in the notes what I took from when Josh and I were discussing this earlier. He had had, I think, a similar idea, except using sequentially ordered keys in RocksDB for the ring buffer.
C: Mark, maybe you can refresh my memory on some of the tests that you did previously with this. What kind of improvement did you see when we removed PG log writes entirely?
A: I think I got to the point where further decreasing the number of keys coming in didn't change performance at all. It was kind of like the initial benefit was as far as I could get, and anything beyond that RocksDB was just absorbing, but there was an initial improvement, if I remember right. I don't know that I went through and looked only at dup ops going through with the PG log not happening, or the PG log happening and not dup ops; I think I kind of just tried...
A
So
it
might
be
that
that
at
that
point
now
it
didn't
really
matter
too
much
after
that.
But
maybe
the
the
bigger
point
here,
though,
is
that
you
know
this
is
this
was
like
testing
on
one
one:
nvme
drive
on
the
the
node
that
has
like
tons
and
tons
of
CPU.
So
it
might
be
that
we
need
to
engineer
tests
that
really
stress
either.
C: We need to do more targeted omap tests to see whether this would actually be helpful or not. Well, yeah, I mean...
A: The other point I want to make with this, too, is that it's not just about performance, right? It's about how much wear you're putting on the flash drive. There's so much extra work that we do; even if it doesn't yield a performance advantage, anything that we can do to be more pleasant on the hardware is, I think, good in general.
B: A hack that I did in Kraken or Luminous, I'm not sure which, but it should be that for the majority of updates we update a very small key for PG info, and only in sort of more unusual cases, like peering or weird ops, would we update the full PG info; there's a fast info key. So it might be worth double-checking that that's actually working, that for most I/Os you see the fast info key updating and not the full info key.
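The fast-info-key idea reduces most updates to a tiny write; a hedged sketch of the shape of it (key names, event names, and sizes here are made up for illustration, not Ceph's actual keys):

```python
def pg_info_update(kind, fast_fields, full_info):
    """Return the (key, approx_size) pair this update would write.

    Routine updates touch only a small delta key; rare structural
    events rewrite the full PG info.
    """
    if kind in ("peering", "split", "import"):       # rare, structural
        return ("pginfo_full", len(full_info))
    return ("pginfo_fast", len(fast_fields))         # hot path: tiny key
```

Checking that the hot path really emits the small key, as suggested above, would just mean confirming most I/Os take the second branch.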
B: Well, if you have 100 PGs you have 100 keys, and they're like a few hundred bytes at most; this is fixed info. So even if it's amplified all the way to level 3, which I think it actually would end up getting down to no matter what, it doesn't matter: it's the same key. Okay.
D: That feels more believable to me. I don't feel like an authority on this, but where I've heard it discussed, people looked at it from an InfiniBand perspective: you have to be able to make a case that some CPU won't touch the data.
B: This is the question I asked. It would be easiest if we can assume that QAT will work just as well as any NIC offload would, and be done with it. But my concern is that the NIC might be able to do the encryption inline without affecting latency, while QAT would add latency, because the core has to hand the data off to QAT.
E: Why do you want to ignore the NIC offload? I mean, presumably we could just have a thing that tells the NIC to do it, configure the offload at that layer, and it might not need to be part of our pipeline at all.
B: Right, and I'm saying that if we were like Apache and it was TLS, we could probably just tell OpenSSL or something that it was TLS and it would just magically work. But we are our own weird protocol, with "this specific byte range in the stream that we sent out of the socket is the thing that needs to be encrypted with this particular key" and all that stuff, and I don't know if the offload engines are robust and flexible enough to do that. Yeah.
A: One thing I'd like to bring up, though we don't need to spend very much time on it, is just to let people know that there have been some benchmarks floating around from Phoronix about...
A: ...the recent security flaws found in multiple different chips, primarily Intel ones, I guess. Folks should probably know that it looks like there's some pretty bad pain with both nginx and Apache, and also FIO random write performance, in the SSD tests that they did, especially with older processors. So I don't know if anyone from the community would have any interest in this, but if you guys are planning on upgrading, maybe do some before-and-after tests, given the combination of what some of these tests do and the ones that are doing badly.
A: The one that scares me a little bit is some of these context-switching benchmarks, stress-ng. I don't actually know very much about it; I've never run it before. But some of these are looking like a 30% hit on some of these message-passing tests: System V message passing.
A: Radoslaw is looking at random reads right now, and even with the C-state and P-state pinning on a single node, he's seeing performance swings that he doesn't understand, so he's trying to understand that better and figure out what's going on. I think he's on vacation right now, so it might not be until next week that we hear anything back from him. I also spent some time looking, with his branch, at...
A
What
initially
looked
like
sequential
write,
performance
variation,
but
when
just
by
happenstance
I
was
rebooting
the
nodes
for
for
kind
of
an
unrelated
reason
and
saw
a
lot
of
those
results.
Titan,
which
was
interesting
Wolcott
profiling
showed
that
there
was
a
affirm,
a
significant
amount
of
time
in
rocks
DB
doing
F
sync
for
the
right
ahead
log.
It
was
like
55%
of
the
time
in
the
K
vsync
thread
was
spent,
doing
that
even
even
just
doing
large
sequential
writes
so
that
that
was
surprising
to
me.
But
I
guess,
maybe
not
that
surprising.
A
So
it's
not
really
related
to
Rateau.
Slavs
thing:
I,
don't
think
it's
more
just
you
know,
there's
there's
a
fair
amount
of
time
being
spent
doing
that
and
maybe
maybe
different
impacts
or
other
things
impacting
it
maybe
is,
is
having
you
know,
causing
some
variation
there.
So
that's
that's
kind
of
where
I
got
to
I
tried
to
look
at
the
old
batch
I/o
completion
work
that
Adam
did
and
getting
it
into
master
with
some
of
the
changes
that
have
happened
since
then
it
it
it
looks
like
it's
really
gonna
muck
things
up
and
complicate
them.
A
Even
more
I
mean
maybe
sage,
maybe
you'd
have
a
different
opinion
on
it.
But
with
how
we've
changed
now
we
have
like
deferred
transactions.
We
have
the
split
between
the
K
vsync
thread
and
finisher
it
doesn't.
It
doesn't
easily
go
in
yeah,
it
might
be
the
idler
yet
maybe
could
be
made
too,
but
at
least
you
know
when
I
was
looking
at
I
started
it
and
then
I
was
like
I'm.
I
was
pretty
sure
that
if
I
tried
to
continue
along
that
path,
I
was
going
to
introduce
a
lot
of
bugs.
B
Yeah
I'm
I
missed
the
last
few
days
of
the
blue
star,
stand
up
I'm
kind
of
I'm
on
I
think
it
was
Monday
I
asked
Adam
to
like
take
stock
of
the
various
optimizations.
They
tried
tweaks
for
rocks
DB
and
for
those
stories
use
of
rocks
to
be
to
see
what
was
had
been
actually
emerged
and
what
looked
the
most
promising.
So
we
can
try
to
figure
out
what
which
wants
to
pursue.
I
think
that
the
dispatching
falls
into
that
category.
B
We're
tried
something
a
while
ago,
given
every,
but
that
was
before
they
started
playing
around
at
all
with
blue
storm
or
start
with
box.
Tv
mm-hmm
optimizing
that
and
then,
given
everything
that
they've
learned,
what
is
what
is
what's
most
promising,
what
seemed
is
most
likely
to
make
a
difference
sure
it's
just
ready
to
spell
this
out.
Adam
still
here,
I.
A
Yeah
I
think
I
saw
this
out
tomorrow
as
well,
though
so.
Okay,
maybe
next
week,
yeah
sage
for
batching
batching
up
rocks,
while
rice
specifically
for
rocks
tbh
I
mean
to
me.
It
feels
like
not
having
that
be
in
glue
store,
maybe
having
it
be
at
the
the
heavy
layer
seems
better
to
me,
but
I,
don't
know
you
have
any
strong
opinions.
The.
B: So that's probably something that's really easy to change, but I don't know if it's just the overhead of "here's another transaction, here's another..."; I don't know if that's the part that's expensive, or the fact that each of those 20 transactions has its own separate write batch when it could be one big write batch. So I'm not sure which one of those it is.
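The trade-off being weighed here can be sketched with simple counters standing in for real RocksDB calls (this is an illustration, not Ceph code): twenty transactions submitted as twenty separate write batches versus coalesced into one batch, with the sync thread issuing a single fsync for the group either way.

```python
class KVSyncSim:
    """Counts write batches and fsyncs for a group of transactions."""
    def __init__(self):
        self.write_batches = 0
        self.fsyncs = 0

    def submit_separately(self, txns):
        for _ in txns:
            self.write_batches += 1   # one write batch per transaction
        self.fsyncs += 1              # the sync thread flushes the group once

    def submit_coalesced(self, txns):
        merged = [op for txn in txns for op in txn]
        self.write_batches += 1       # a single write batch for the group
        self.fsyncs += 1
        return merged
```

Coalescing cuts the per-batch overhead by 20x here while the fsync count stays the same, which is exactly the distinction the speaker says is unmeasured: is the cost in the per-transaction overhead or in the separate batches?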
B: Yeah, I mean, that's what it is. We can artificially make the kv_sync thread less aggressive, so it syncs a third of that time, or a quarter of that time, or something, but then we'll just have higher latency. So it's just a choice, right: we can have higher latency and better batching and less CPU, or what we're going to...
B
Yeah
but
it'll
I
it
should
it
should
generally
auto-tune
anyway,
like
the
reason
why
we're
spending
all
that
time,
that's
thinking
is
because
they're,
a
bunch
of
for
Mike,
writes
in
the
queue
and
so
we're
waiting
for
the
for
my
grant
to
complete
before
the
F
sync
returns.
So
it
should
self-regulate
right.
If
you
were
doing
for
K,
writes
then,
instead
of
having
270
of
those
all
of
them,
K
writes
you're
gonna,
have
you
know,
you
know
twelve
thousand
of
them
or
something
right,
mm-hmm.
B: You're probably spending a similar amount of time fsyncing; I guess less, because you're spending more CPU doing other stuff, but I don't know, something like that. Yes, but I'm not really worried about that part; I'm worried about the CPU time that's spent, not the fsync time. Fsync time, I think, means we're waiting on the storage, which is what we want to be doing.
B: I'm not entirely sure why that happened. It still feels like we should be getting what we were getting before. I think probably we should, on a hard disk, do a block trace and see where the non-sequential I/Os are coming from, because I was seeing the same thing too. I was seeing something similar, and I'm trying to remember exactly what it was: it was, if I did an FIO run where I was doing small writes and flushing them...
B: ...I was getting very high IOPS, clearly because there was some cache in the hard disk that was absorbing the writes and calling them safe, even though they hadn't physically been written to the platter, because it couldn't spin that fast. But when I did the same workload through BlueStore, I wasn't getting that high a rate, and I couldn't tell at the time what the difference was. I think we need to pull up a block trace, or seekwatcher or whatever, and figure it out. So that's an avenue to pursue.