From YouTube: 2019-04-04 :: Ceph Performance meeting
Well, in short, two random write scenarios have been verified: one with a large chunk size, 4M, and a second one with 4k. For the large chunks, which I hope are not so important at the beginning of the project, well, we are significantly slower. But when it comes to the small chunks, well, there is a substantial difference. My testing is based on the crimson OSD with MemStore, and it can get around one and a half times more IOPS for 60% of the CPU cycles in comparison to the classical OSD.
The size of the stack, yeah. I imagine, but this is just how I personally imagine it, that when it comes to DWARF profiling, perf record grabs just some, maybe configurable, amount of bytes from your stack, and then it's being processed, not by perf record; it's being processed at the perf report stage. That's how I imagine the profiling works. But if the grabbed space is not enough to hold the content of the entire call chain, then, well, you are getting a mess in the report.
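
(For reference, this matches how perf behaves: `perf record --call-graph dwarf,<bytes>` copies a fixed-size slice of the user stack with each sample, 8192 bytes by default, and the actual unwinding happens later in `perf report`; if the slice is too small for a deep call chain, the tail of the chain comes out truncated or as unknown frames.)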
He started batching our TX messages; when it comes to send, we are doing [unclear], and that's quite, for me, promising stuff, because from profiling I can see that we are bottlenecked at the CPU's front end. We are taking a lot of L1 instruction cache misses and also a terrible number of branch mispredictions. Yeah, in Intel it's, I guess, related to the branch address calculator in Intel's TMAM methodology.
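
(Context: front-end stalls, L1 instruction cache misses, and branch mispredictions are standard buckets in Intel's Top-down Microarchitecture Analysis Method, TMAM. As a rough, non-authoritative way to reproduce such counters, `perf stat -e L1-icache-load-misses,branch-misses -p <pid>` uses two stock perf events.)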
The goal with that is that we want to do the same thing for the other daemons that we have done for the OSD: have automatic memory tuning, so that they try to stay within certain bounds and then kind of balance out the priority of memory for different caches, assuming that they even have different caches. But it's fairly generic, so it could be used if it's useful. The PR for that just appeared this morning, and Sage put it in his testing branch.
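
(For context, the OSD-side precedent is the osd_memory_target option: the OSD's priority cache manager grows and shrinks its caches to keep the process near that memory target. The PR described above presumably generalizes that machinery so other daemons can reuse it.)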
I did scan it, and that looks great. I basically just commented saying that it looks really useful for the log trimming stuff that I've been looking at. Cool. I don't know the specifics of RocksDB and the problems that we've had with range deletes in the past. I don't know if RocksDB has addressed the issues there, or if we're still kind of scared to use it.
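
For readers who haven't used the feature being discussed, here is a minimal, hypothetical sketch of the RocksDB range-delete API; the DB path and key names are invented for illustration, not taken from the meeting.

```cpp
#include <cassert>
#include "rocksdb/db.h"

int main() {
  rocksdb::DB* db = nullptr;
  rocksdb::Options options;
  options.create_if_missing = true;
  rocksdb::Status s = rocksdb::DB::Open(options, "/tmp/range-delete-demo", &db);
  assert(s.ok());

  // One range tombstone covering ["user.0000", "user.9999") replaces many
  // point deletes; these tombstones are what historically interacted badly
  // with iterator and compaction performance.
  s = db->DeleteRange(rocksdb::WriteOptions(), db->DefaultColumnFamily(),
                      "user.0000", "user.9999");
  assert(s.ok());

  delete db;
}
```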
There was an update and discussion around the write-around cache policy from Jason. Well, that looks really exciting, combined with his IO scheduler stuff, and he talked about it yesterday in the monthly developer meeting. So if you're interested in that, watch the recording, because it sounds really good; I'm excited about it. Let's see, some of the work on the auto tuning...
All right, yeah, I guess, probably. So, discussion topics: Radek already went over the crimson stuff. That's really exciting. I'm very encouraged by the numbers that they're seeing out of the gate, given that it was just the first attempt, and it sounds like things are improving really quickly. So that's great.
They had a situation where they were very much affected by iterator readahead, and his code was, I think, reducing it from like two minutes to thirty-nine seconds. And then, trying to use the new RocksDB iterator readahead, it was taking more than ten minutes, but he thinks maybe the test wasn't right or something, so he's still working on it. But lots of work going on in that area, which is good.
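
For concreteness, the knob in question is presumably RocksDB's ReadOptions::readahead_size; the sketch below is a hedged illustration, and the 2 MiB value is an arbitrary assumption, not a number from the meeting.

```cpp
#include <cassert>
#include <memory>
#include "rocksdb/db.h"

// Long range scan with an explicit iterator readahead hint.
void full_scan(rocksdb::DB* db) {
  rocksdb::ReadOptions ro;
  ro.readahead_size = 2 * 1024 * 1024;  // prefetch hint for sequential scans
  std::unique_ptr<rocksdb::Iterator> it(db->NewIterator(ro));
  for (it->SeekToFirst(); it->Valid(); it->Next()) {
    // consume it->key() / it->value() here
  }
  assert(it->status().ok());  // surface any I/O error from the scan
}
```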
Oh, Igor, you're here. I'm sorry, go ahead.
All right, well then, the only other thing that I have got is that we wanted to look at BlueStore compression, because as far as I know, no one has really looked at it closely. So this past week I ran through some tests, very, very basic, not doing any kind of real fine-tuning of it, just looking at what happens with a standard FIO workload that's somewhat compressible, turning aggressive compression on versus having it off. And that's in the Google Doc there.
The gist of it is that, especially with large writes, it's quite a bit slower, especially when you're CPU limited. There's one case that's kind of interesting, for small sequential reads, where it appeared to actually improve them. I don't know why, but that's what happened.
This is probably really much more complex than the seemingly simple configuration change makes it look, right, because there's an initial prefill that happens before I run these tests, and I run them in order from large IOs down to small IOs, doing all the different workloads. So potentially you could end up with initially all of the data compressed, but then some of the rewritten data maybe is not compressed when you're reading it. You know, it's hard to know what's going on.
But you know, overall, in general, the performance was ranging from mildly worse to quite a bit worse, so I'm not sure it's something that most users are going to want to use in its current state. But we'll see; when I get time, I'm going to try to run it through some profiling and see if it's anything stupid that would be easy to improve.
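
(For context: the switch being toggled here is presumably bluestore_compression_mode, whose values are none, passive, aggressive, and force; in aggressive mode BlueStore compresses all writes unless the client hints that the data is incompressible.)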
Even as you go through, like even if you did a prefill stage, which, you know, we do: this one, you did prefill and then do a bunch of tests, right? But even there, you could imagine that if you do prefill and then start doing a bunch of random writes, the performance throughout the course of the test will change, right?
Oh fantastic, thank you. Alright, I will look at those. You know, actually, speaking of RocksDB, I don't know how many people are aware, but Toshiba just released their own fork, I guess, of RocksDB that's really, really interesting. I don't know if this made it out just after, though, or not. Let me link it.
There's a wiki or something out there too; let me get it. But this looks really, really, really good, because even in their own test results they're showing that the compaction overhead, when you get like big compactions due to deep leveling, it's way faster, or way less overhead. At the very least, though, for us this could be almost a drop-in replacement; maybe that would alleviate some of those issues that people are seeing.
Exactly. So, you know, working on the sharding, which, you know, one of the kind of goals of that is almost the same thing, where we're just trying to avoid the deep nested level hierarchies by spreading the data across multiple shards, and maybe only ending up in, like, you know, level two rather than potentially ending up in maybe level four or something, and then have lower write amp because of it.
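
As a hedged sketch of what sharding across RocksDB column families can look like (each column family is its own LSM tree, so each shard keeps a shallower level hierarchy), with shard names invented for illustration:

```cpp
#include <cassert>
#include <vector>
#include "rocksdb/db.h"

int main() {
  // One descriptor per shard; the default column family must be listed too.
  std::vector<rocksdb::ColumnFamilyDescriptor> cfs = {
      {rocksdb::kDefaultColumnFamilyName, rocksdb::ColumnFamilyOptions()},
      {"shard-0", rocksdb::ColumnFamilyOptions()},
      {"shard-1", rocksdb::ColumnFamilyOptions()},
  };
  rocksdb::DBOptions db_opts;
  db_opts.create_if_missing = true;
  db_opts.create_missing_column_families = true;

  std::vector<rocksdb::ColumnFamilyHandle*> handles;
  rocksdb::DB* db = nullptr;
  rocksdb::Status s =
      rocksdb::DB::Open(db_opts, "/tmp/sharding-demo", cfs, &handles, &db);
  assert(s.ok());

  // Writes routed to different shards land in independent LSM trees, so
  // no single tree grows deep enough to trigger the worst compactions.
  s = db->Put(rocksdb::WriteOptions(), handles[1], "key-in-shard-0", "value");
  assert(s.ok());

  for (auto* h : handles) delete h;
  delete db;
}
```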