From YouTube: 2018-Apr-26 :: Ceph Performance Weekly
Description
Weekly collaboration call of all community members working on Ceph performance.
http://ceph.com/performance
A: Isn't it a little scary when you write code and, in this case, it didn't compile properly, so that was comforting, but then, once you fixed your compile errors, everything just kind of works? I always become really suspicious and assume that there are more sinister things at work when things appear to work properly.
D: There's the one thing that she identified with the OSDMap encoding that, when it's almost ready to merge, will need to get backported to Luminous; basically it caches the re-encoded map so you don't have to do it for every message. And I think the other one is Radoslav has two pull requests trying to speed up the crypto stuff.
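The caching idea mentioned above can be sketched roughly as follows. This is a hypothetical illustration, not the actual Ceph patch: the names (`EncodedMapCache`, `encode`) are stand-ins, and the "encoding" here is just a string; the real change would cache the serialized OSDMap keyed by the client's feature bits, so each map epoch is encoded once per feature set rather than once per outgoing message.

```cpp
#include <cstdint>
#include <map>
#include <string>

// Hypothetical sketch: memoize the encoded form of a map, keyed by the
// feature bits it was encoded for, so repeated sends don't re-encode.
struct EncodedMapCache {
    uint32_t epoch = 0;                          // epoch the cache is valid for
    std::map<uint64_t, std::string> by_features; // features -> encoded blob
    int encode_calls = 0;                        // instrumentation for the sketch

    // Stand-in for the expensive serialization step.
    std::string encode(uint32_t e, uint64_t features) {
        ++encode_calls;
        return "osdmap:" + std::to_string(e) + ":" + std::to_string(features);
    }

    const std::string& get(uint32_t e, uint64_t features) {
        if (e != epoch) {            // a new epoch invalidates every cached blob
            by_features.clear();
            epoch = e;
        }
        auto it = by_features.find(features);
        if (it == by_features.end())
            it = by_features.emplace(features, encode(e, features)).first;
        return it->second;
    }
};
```

With this shape, sending the same epoch to many clients that share a feature set pays for encoding exactly once.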
B: I worked to get the separated buffer raw implementation. However, I would also like to squeeze out the cost of going to the kernel to mmap each buffer, with buffer recycling. I guess it could be made CPU- or thread-private, so even if we don't maintain some queue for garbage collection, maybe we could go without any synchronization, yeah.
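The thread-private recycling idea can be sketched like this. This is a minimal sketch under stated assumptions, not the proposed Ceph implementation: `BufferPool` and `kBufSize` are hypothetical names, and real buffers would come from mmap rather than `new`. The point is that a `thread_local` free list lets the hot path avoid both the kernel round-trip and any synchronization, since each thread only ever touches its own list.

```cpp
#include <cstddef>
#include <vector>

// Hypothetical sketch: recycle fixed-size buffers through a thread-local
// free list so the fast path neither enters the kernel (mmap/munmap)
// nor takes a lock.
constexpr std::size_t kBufSize = 4096;

struct BufferPool {
    static thread_local std::vector<char*> free_list;

    static char* get() {
        if (!free_list.empty()) {      // fast path: no syscall, no lock
            char* b = free_list.back();
            free_list.pop_back();
            return b;
        }
        return new char[kBufSize];     // slow path: a real allocation
    }

    static void put(char* b) {
        free_list.push_back(b);        // recycle instead of freeing
    }
};

thread_local std::vector<char*> BufferPool::free_list;
```

The trade-off is that buffers freed on one thread are only reusable by that thread; without a shared queue for rebalancing, a produce-on-one-thread, free-on-another pattern would grow the lists.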
D
It's
I
guess:
if
we're
only
using
this
for
the
blue,
that's
buffer,
then
I,
don't
think
it's
worth
it,
because
it's
only
like
every
couple
megabytes
that
there'd
be
a
single
Cisco
I.
D: Okay, but that'll be a post-Mimic thing to maybe backport. I don't know that there's much else going on here.
C: Ah, yeah, you weren't [here]. He was thinking that we needed to make BlueFS allocate in smaller chunks, down to the min_alloc size, and that, since this was primarily used for SSD workloads, this probably would not be an issue. He was thinking [inaudible].
D: Yeah, yeah, probably. I was just trying to keep it simple, because the files that BlueFS writes are always big... almost always big. So maybe... yeah, that's not actually true. I don't think it matters too much. Yeah.
D: At one point, Varada was looking at building just a small test that would generate some write workload, a write/delete pattern, and just simulate how the allocator would fragment over time. I don't know if he ever actually did that. It kind of depends on what the simulated workload is, but then at least you could compare different allocators and just see how they do, without actually doing any I/O; you see how they're fragmenting, yeah.
A: Yeah, the whole thing just kind of feels a little convoluted or strange. I mean, yeah, it's not that big of a deal. It's just something that kind of gets leaked out to the user, right, where they're setting all these ratios and they've got weird things, and people don't really understand what they're doing, yeah.
D: I mean, really, it's confusing if you don't know exactly how big you want each piece of the cache to be. But the goal is to simplify, so that there's one number that you set, which is how much memory the OSD is going to use. Yes, yes. And if you want that to work, everything else has to be a ratio of that.
A: The cache itself doesn't really need that; that doesn't need to be part of the cache interface, though, right? The cache interface can just be: here's what I want your retained buffers to be, and here's what I want you to trim. You know, onodes, maybe by the onode count. I guess you could do it that way, which I think is exactly the internal trim, yeah.

So, you know, we could just make that the thing: take any other smarts out of it, and then say that's the thing that you give it as a user. I mean, as a developer as well, treating it like a black box, I guess.
A: I guess the way I'm kind of looking at it is, you know, potentially we have lots of different caches. Maybe in the future we have even more than what we've got now: with RocksDB, we've got buffers in BlueStore, we've got onodes in BlueStore. You know, maybe there are other things that we end up caching down the road, I don't know, but it would be kind of nice to be able to have like a common interface for all of these different things, where you say you want to set their capacities.
D: Let me know when the PR is ready to look at, and I'll take a look, sure. I haven't looked at much of this in a while, so sure.