From YouTube: 2018-May-03 :: Ceph Performance Weekly
Description
Weekly collaboration call of all community members working on Ceph performance.
http://ceph.com/performance
A: Let's see if a couple more folks show up; it might be a small crowd. I was talking to Leo, and we should try to re-advertise this meeting to the list, since we've kind of been slowly losing people over the last year or so. That's good, yeah. I know, I've been getting lazy about it. I should probably figure out how to automate that, just have it send one out. It seemed like when we did that, we did get more people in.
A: There was a brand new PR from Peter to reduce bufferlist rebuilds during write-ahead log writes in BlueFS, and that could actually be a really good one to look at, because it very well could have an effect on performance with small random writes, given the patterns that we've seen with BlueStore. Anything that can reduce overhead in write situations like that is probably going to have a good effect, so I'm curious to test that one out and see what it does.
A: There were a couple of PRs that closed here. Josh, it looks like you closed a couple of them for the CBT stuff Neha's been working on, so that's good. Yes, COSBench works in CBT again. Then again, I'm not sure anyone was aware that it was broken.
A: Well, I assume it worked well at one point. Yeah, I think it sort of did, but Intel was the only one that was really ever using it, and I don't know how long they kept up with it. But that's good. Maybe that's a question for you then, since Neha's not here... oh no, anyhow, you are here. So for either of you two: how hard is it to get it set up and running these days?
B: With teuthology it's all automated, but if you want to run it standalone, you just need to follow the instructions, which are pretty much all there in a README in the CBT repository, to set up COSBench separately. You just need to ensure that you set it up and start it, and then CBT should figure it out from there.

A: Okay.
B: Oh, and you need to set up fio separately, and, you know, give CBT the right directories to look at. So there are some dependencies, some packages that you need to install. That is a one-time setup that you would need to do to run it separately with just CBT. But if you run it using teuthology, the script does everything for you.

A: Okay.
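For anyone who wants to try the standalone path described above, a rough sketch might look like the following. The repository URL and the `--archive` flag come from the CBT project itself; the COSBench install location and start script are assumptions for illustration, so treat the README in the CBT repository as the authoritative reference.

```shell
# Sketch of a standalone CBT + COSBench setup; paths are illustrative and
# the CBT repository's README is the authoritative set of instructions.

# Grab CBT itself (it is not packaged; you run it from a checkout).
git clone https://github.com/ceph/cbt.git

# CBT drives external benchmark binaries; it does not install them for you.
# COSBench (and fio, if your tests use it) must be set up and started
# separately, e.g. via COSBench's own start scripts:
#   cd /opt/cosbench && ./start-all.sh     # assumed install location

# Then point CBT at a test YAML and an archive directory for results:
cd cbt
./cbt.py --archive=/tmp/cbt-results mytest.yaml
```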
B: Yeah, so I think it's true that, since this was integrated, there are certain restrictions when you run COSBench using CBT. It only lets you do a couple of things, like read and write; it doesn't let you do all the kinds of stuff that you would generally be able to do with just COSBench. But yeah, it's worth trying it out.
A: Sure, sure, okay. For a lot of the stuff that I've done, I've been using getput, just because it was easy-ish for me to get running and do standalone tests with. But if we can show that COSBench is better or more representative, it might be worth trying to make that more of a default. I don't know.
A: Okay, cool. Well, good job, guys; those were kind of suffering from bit rot, and it's good that this time P went through and figured it out. Thanks. All right, so we've got a couple of other things here. It looks like there's caching of decoded OSD maps, which I haven't looked at at all, but that sounds like it could be really good. I guess I don't know how much that was hurting them, but good to see it there, I guess. And then Radek just closed one of his old PRs trying to improve stuff, in favor of moving toward NSS. He's got a couple of these out, I think, and I don't remember exactly what was in each one, but generally.
A: Anyway, that's it for the updated stuff. There's a bunch of new movement here; like I said, I haven't gotten through all of it, but usually this far down the list there's not too much that shows up. Most of it is usually just, you know, stale, so not a whole lot to discuss about it anyway. So that's it for PRs.
A: Something that I've seen while doing this is that recently we decided to enable buffered writes by default in BlueFS. The reasoning for that was that in RocksDB, when you're doing a compaction, if you can't store all the SST files in the block cache, you end up having to read them in, and that means that during compaction, during a really heavy write workload, you might see a bunch of reads happening from the disk. One of our partners saw this behavior and was not happy about it.
A: So by having buffered writes, you can ensure that, if you have a lot of page cache available, those SST files end up being stored in the page cache, and a read that misses RocksDB's block cache will come from the page cache rather than the disk. That essentially eliminates the behavior: if you have enough page cache, you don't really see those reads happening anymore.
A: That's good in a situation where the buffered writes don't hurt you very much and the ability to have those SST files put into the page cache helps. So if your block cache isn't very big, and you've got a lot of page cache, and your CPU is fast enough that the buffered writes don't have a whole lot of overhead, you can see some benefit by doing that, especially if your devices can't absorb the reads very easily.
A: So it's a notable impact. We don't see that same kind of drop on our big, fast test boxes in the lab; there it's kind of a negligible impact, even when you've got plenty of block cache and you wouldn't see the reads anyway. And potentially, in a situation where you're reading SST files from disk versus reading them from the page cache, having them in the page cache might actually help more than it hurts. So the gist of that is that this is all really complex.
A: So that's kind of why I'm hoping to sort it out. Right now, as of current master, we still have buffered writes enabled; we did that like a month or two ago. I'm not sure if that's the right answer or not. It certainly lets us avoid the behavior that our partner saw, where they were seeing these reads during writes, so long as you have enough page cache available. But whether or not that's really what we want to do, I'm not convinced; it kind of moves back toward relying on the kernel, rather than doing things ourselves in a smart fashion.
A: So anyway, I've got lots of data, and I'm collecting more data right now. The autotuner is maybe faster in a couple of situations, but I'm actually not seeing a huge performance difference, and I'm a little disappointed about that. But I am seeing much better behavior in the caches. It seems like it's having a positive effect, in the way that you'd expect, on how the caches are balancing and kind of laying things out.
A: So my hope is that it really is helping, and it's just that right now we've got other things holding us back, at least on this hardware, before we'd see a cached read versus a read from the SSD making a big difference. Maybe the SSDs that I have in here are just pretty fast. It might be that they're so fast they can absorb everything, and the CPU is the limit, which definitely seems to be the case on this box.
A: ...if we moved over to either tracepoints or, like, a binary log format. That was probably recorded, I think, so it might be worth people's while to watch it, if you have any interest in that whole topic. Yeah, I think it was recorded, and there are some brief notes in an etherpad as well. Cool.