From YouTube: Ceph Performance Meeting 2018-09-27
B: So maybe it's a little ridiculous, but it would be really nice if, from a user perspective, they could say: okay, here's how much of my memory I want to carve out for these different things. We have some kind of minimums that we suggest for everything, and then we'd just make our daemons conform to it, even if we're potentially giving something like the mon more cache than it really reasonably needs. It'd just be nice if everything kind of fit into boxes that we give them. So I'm still kind of leaning that way myself, but in any event it sounds like we were not giving the mon enough cache: initially it was like 128 megabytes, and bumping that up fixes whatever issue was run into. So if you're interested in this, that appears to be what's going on here.
A: Our bufferlist currently offers backward iteration. It seems to be used by very few but pretty prominent users, to do things that can actually be made a bit cheaper; I mean especially our encoding macros. For instance, they need backward iteration to preallocate space for things like the size of the encoded payload, which is computable only after doing the real serialization. Those data are injected afterwards, so they are basically written twice: at first some junk chunks, then the correct value is put there. I decided to replace that mechanism, to kill that backward iteration and replace it with some kind of hole-appending: instead of encoding, which really means copying some dummy chunks, we are just preparing a hole and returning a very thin abstraction over the underlying pointer. In testing, I think I posted some comments with output from the profiler there.
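The replacement idea can be sketched roughly like this. Everything below is illustrative stand-in code, not Ceph's actual bufferlist API: instead of encoding a dummy length and iterating backwards to patch it, we reserve a hole for the length up front and fill it through a thin wrapper once the real serialization is done.

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>
#include <string>
#include <vector>

// Hypothetical byte sink; Ceph's real bufferlist is segmented and more
// elaborate. This only illustrates "reserve a hole, fill it later".
struct ByteStream {
    std::vector<uint8_t> data;

    void append(const void* p, size_t n) {
        const uint8_t* b = static_cast<const uint8_t*>(p);
        data.insert(data.end(), b, b + n);
    }

    // Thin handle over the reserved bytes. It keeps an offset rather
    // than a raw pointer because std::vector may reallocate; a buffer
    // whose segments never move could hand out the pointer directly.
    struct Hole {
        ByteStream* s;
        size_t off;
        void fill(uint32_t v) {
            std::memcpy(s->data.data() + off, &v, sizeof(v));
        }
    };

    // Reserve space for a length field whose value is not known yet.
    Hole reserve_u32() {
        size_t off = data.size();
        data.resize(off + sizeof(uint32_t));
        return Hole{this, off};
    }
};

// Length-prefixed encoding: the length is computable only after the
// real serialization, so fill the hole afterwards -- no dummy write
// followed by backward iteration to find the spot to patch.
inline void encode_with_len(ByteStream& s, const std::string& payload) {
    ByteStream::Hole len = s.reserve_u32();
    size_t start = s.data.size();
    s.append(payload.data(), payload.size());  // the real serialization
    len.fill(static_cast<uint32_t>(s.data.size() - start));
}
```

The payload bytes are written exactly once; only the four length bytes are touched twice, through the hole handle, with no traversal back through the buffer.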
A: It gives maybe half a percent, something like that, not much, but I treat it mostly as a cleanup, as a simplification of interfaces. I have also a branch that kills the append buffer we have in bufferlist. The idea is to always try to append new things just to the very last buffer, of course if it's possible, if we really are the owner of the last buffer. It still needs some refinishing, but I plan to send it just after finishing the stuff with containers.
A: I took a look and profiled it, and it seems the most prominent method of bufferlist is the C-string-taking variant of append; not appending some parts of other buffers or something like that, just the regular C-string variant of append. And we are paying a pretty high price when it comes to using it in its current form, because we are always appending to the append buffer, then creating a buffer pointer from the used space, and then, if it's possible, merging that with the last buffer.
A: Well, it's pretty costly. Try to imagine you want to put 8 bytes, and we are doing that over and over, because the C-string append is not inlinable. If you take a look at the underlying function, well, it's a huge, long sequence of calls just to make the append happen.
A: It copies up to the very last byte available in the current append buffer before replacing it, and it does that in a loop. I have a commit that just turns it into basically two stages: first, try to append as much as possible to the currently existing append buffer; then, only if really necessary, allocate a new one and write whatever is left there. But I would love to see, and I don't know whether it's even possible, a compiler that inlines all those methods and verifies whether we have enough space only once.
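The two-stage commit described above might look like the following sketch, with made-up `Segment`/`SegList` types rather than Ceph's real buffer classes: stage one copies whatever fits into the tail of the last segment, and stage two allocates at most one new segment for the remainder, instead of checking capacity byte by byte in a loop.

```cpp
#include <algorithm>
#include <cassert>
#include <cstring>
#include <memory>
#include <vector>

// Made-up segmented buffer; names are illustrative, not Ceph's.
struct Segment {
    std::unique_ptr<char[]> buf;
    size_t used = 0;
    size_t cap;
    explicit Segment(size_t c) : buf(new char[c]), cap(c) {}
};

struct SegList {
    std::vector<Segment> segs;

    // Two-stage append: (1) copy as much as fits into the tail of the
    // last segment; (2) only if bytes remain, allocate one new segment
    // and write the rest there. No per-byte capacity checks in a loop.
    void append(const char* p, size_t n) {
        if (!segs.empty()) {
            Segment& tail = segs.back();
            size_t take = std::min(n, tail.cap - tail.used);  // stage 1
            std::memcpy(tail.buf.get() + tail.used, p, take);
            tail.used += take;
            p += take;
            n -= take;
        }
        if (n) {                                              // stage 2
            segs.emplace_back(n > 4096 ? n : 4096);
            std::memcpy(segs.back().buf.get(), p, n);
            segs.back().used = n;
        }
    }

    size_t size() const {
        size_t total = 0;
        for (const Segment& s : segs) total += s.used;
        return total;
    }
};
```

Each call does at most two `memcpy`s and at most one allocation, so a stream of small 8-byte appends mostly just bumps the tail segment's `used` offset.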
A: That's the thing we made with the new perf counters: they are implemented in a way that tries to use per-thread storage. We have preallocated space for up to 32 different threads, and if that's not possible, if we see another thread, the 33rd for instance, then we go with shared perf counter data. But the check is made in a way that addresses the typical use case of perf counters, because they are used in groups: it's not so common to see one single increment of one single counter. Usually you are bumping many perf counters in the same place, so the stage of verifying whether the thread we are using really has the necessary memory is made once for the whole group. I would love to repeat that pattern in bufferlist, but I'm still not sure whether it's possible. It's a matter of data dependency.
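A rough, hypothetical sketch of that perf-counter scheme (illustrative names, not Ceph's actual PerfCounters code): the first 32 threads to register get private counter blocks with plain non-atomic increments, later threads fall back to shared atomics, and the per-thread-vs-shared decision is made once per group of updates rather than once per counter bump.

```cpp
#include <array>
#include <atomic>
#include <cassert>
#include <cstdint>

constexpr int kMaxThreads = 32;   // preallocated per-thread blocks
constexpr int kNumCounters = 4;

struct CounterBlock { uint64_t c[kNumCounters] = {}; };

// Globals are zero-initialized (static storage duration).
std::array<CounterBlock, kMaxThreads> thread_blocks;
std::atomic<uint64_t> shared_block[kNumCounters];
std::atomic<int> next_slot{0};

// Intended to be instantiated once per thread (e.g. thread_local):
// registration grabs a private block if one of the 32 is still free,
// otherwise all updates from this thread go to the shared atomics.
struct Counters {
    CounterBlock* mine = nullptr;   // null => shared fallback

    Counters() {
        int slot = next_slot.fetch_add(1, std::memory_order_relaxed);
        if (slot < kMaxThreads) mine = &thread_blocks[slot];
    }
};

// Example group update: the per-thread-vs-shared check happens once
// for the whole group of increments, not once per counter bump.
void bump_request_group(Counters& c, uint64_t bytes) {
    if (CounterBlock* b = c.mine) {
        b->c[0] += 1;       // requests, plain non-atomic increment
        b->c[1] += bytes;   // bytes received
        b->c[2] += 1;       // some third counter in the same group
    } else {
        shared_block[0].fetch_add(1, std::memory_order_relaxed);
        shared_block[1].fetch_add(bytes, std::memory_order_relaxed);
        shared_block[2].fetch_add(1, std::memory_order_relaxed);
    }
}

// Readers sum the per-thread blocks plus the shared fallback.
uint64_t read_counter(int i) {
    uint64_t sum = shared_block[i].load(std::memory_order_relaxed);
    for (const CounterBlock& b : thread_blocks) sum += b.c[i];
    return sum;
}
```

The hot path for the common case is three plain additions to thread-private cache lines; only overflow threads pay for atomics, and readers pay the aggregation cost instead.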
B: So, last week after our last performance meeting I started making this thing, which I didn't finish because I got sucked into RBD stuff. But I thought that we could start documenting some of where we're spending a lot of time, specifically in object creation and destruction, and then I actually noted append here as well. That's kind of a highlight.
B: Talk about encode and the list append in these tests, where I was kind of trying to make the back end as fast as possible and also eliminate a lot of the other stuff that shows up a lot of the time. Certainly encode and list append showed up a fair amount in the messenger worker threads. Yes.
A: It's because, in my eyes, the messenger is actually responsible for crafting the message to be sent to the client; tp_osd_tp just calls send_message. There is a check actually on that path: if ms_can_fast_dispatch would return true for MOSDOpReply, then this serialization would be made by tp_osd_tp. But it's not; we are returning false. This means that the pointer to the message is just put on the out_queue of the AsyncMessenger.
A: And it's taken in the next lap of the event loop we have there, so the messenger is responsible for crafting the message. And what is even more painful: I had hacked ms_can_fast_dispatch to just return true also for the OSD reply, and it seems that even then the messenger got stuck in memory handling.
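The submission path described above might be sketched like this; `Msg`, `Connection`, and the `encode_on_submit` flag are stand-ins for the real AsyncMessenger logic, shown only to make visible where the serialization cost lands.

```cpp
#include <cassert>
#include <deque>
#include <memory>
#include <string>
#include <vector>

// Stand-in message: encode() turns the logical payload into wire bytes.
struct Msg {
    std::string payload;
    std::vector<char> wire;    // filled by encode()
    bool encoded = false;
    void encode() {
        wire.assign(payload.begin(), payload.end());
        encoded = true;
    }
};

struct Connection {
    bool encode_on_submit;     // the ms_can_fast_dispatch-style switch
    std::deque<std::shared_ptr<Msg>> out_q;

    // Called on the submitting (e.g. OSD worker) thread.
    void send_message(std::shared_ptr<Msg> m) {
        if (encode_on_submit) m->encode();   // pay the cost here...
        out_q.push_back(std::move(m));       // ...or just enqueue
    }

    // One sweep of the event loop: serialize anything still raw, then
    // "write" it. With encode_on_submit=false, all encoding cost lands
    // on this messenger worker thread. Returns bytes written.
    size_t event_loop_sweep() {
        size_t bytes = 0;
        while (!out_q.empty()) {
            std::shared_ptr<Msg> m = std::move(out_q.front());
            out_q.pop_front();
            if (!m->encoded) m->encode();
            bytes += m->wire.size();
        }
        return bytes;
    }
};
```

The same total work happens either way; the flag only moves serialization between the submitting thread and the messenger worker, which is exactly the imbalance showing up in the profiles discussed above.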
A: Fast, but I am afraid that memory flowing between threads is always a problem in libc. Well, maybe it's not so visible, because even single-threaded memory management is still costly: you always have an atomic inside, on the main hot path. Always.
B: No, it's okay! It's good, I think. So, this librbd thing, I don't know anything about it. I think eventually Jason will probably look at "reduce the token bucket fill cycle" and "support bursting via configuration"; I don't know anything about that. Uh-huh, looks like he's gone; he might have been the only one that could say anything.
B: Alright, I think everyone here except for Casey has heard this stuff about the XFS default AG count. Casey, for your sake, the gist of it is that we're exporting crappy block-device values with kernel RBD, and it ends up resulting in XFS thinking that our RBD devices are composed of multiple underlying devices, so it sets a really high AG count; it's like 17 on the test server. And that means that writes, like small file creates, get started at many different offsets in different directories, and it all looks random.
B: So the I/O scheduler does a really bad job of actually merging anything, there's tons of unplug events, and performance is really bad. Manually forcing the AG count to be lower, like 4, which is otherwise the default, does much, much better, like 2x create performance. But you actually get a warning from mkfs saying you shouldn't do this, which for users is gonna suck, because they're gonna be like: wait.
B: The game, the barrel of monkeys: that's as close as I can come to describing it. So yeah, anyway, hopefully. I'm rerunning all the tests now based on the new kernel. Oh yeah, also, about the older kernel that we're testing, 4.9: it turns out that when we switched RBD over to the block multi-queue stuff, we did that before there were any I/O schedulers available for it. So on 4.9, RBD images will not use an I/O scheduler at all.
B: There are just none available, not until 4.11. So any users on that kernel are not gonna have any kind of merging behavior or anything like that. So yeah, there's that too; unfortunate, but anyway, it's all hopefully being taken care of now to some extent.
A: The mempool code was made by Allen Samuels from SanDisk. I'm resurrecting the idea behind that code: using a slab-allocator policy just to amortize the allocations, made mostly in STL containers, across a bigger number of nodes, to not have a single tandem of malloc and free for each item on an std::list. But in testing, the proposed implementation is highly coupled to our mempool infrastructure, and that has actually two disadvantages. The first one is the cost of the accounting it does.
A: Based on quick hacks to use them together with the weighted priority queue, because I saw huge malloc traffic there and elsewhere, and in profiling it turns out that, well, it already helps with the memory allocator, I think because of providing extra locality; I mean slabs embedded inside STL containers. It also allowed squeezing out some cycles: I can see now that it takes maybe 60% of the cycles it took before. Nice.
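A toy version of the slab idea, decoupled from mempool's accounting: a hypothetical `Arena` hands out container nodes from 4 KiB slabs, so an `std::list` pays one heap allocation per slab instead of a malloc/free tandem per node. As a simplification, memory is returned only when the whole arena is destroyed.

```cpp
#include <cassert>
#include <cstddef>
#include <list>
#include <memory>
#include <vector>

// Bump-pointer arena: one heap allocation per 4 KiB slab, and all the
// small node allocations are carved out of the current slab.
struct Arena {
    static const size_t kSlab = 4096;
    std::vector<std::unique_ptr<char[]>> slabs;
    size_t off = kSlab;     // forces a slab on first use
    size_t heap_allocs = 0; // observable in the example below

    void* alloc(size_t n, size_t align) {
        off = (off + align - 1) & ~(align - 1);
        if (off + n > kSlab) {          // sketch: assumes n <= kSlab
            slabs.emplace_back(new char[kSlab]);
            off = 0;
            ++heap_allocs;
        }
        void* p = slabs.back().get() + off;
        off += n;
        return p;
    }
};

// Minimal C++11 allocator over the arena; deallocate is a no-op
// because the arena frees everything wholesale on destruction.
template <class T>
struct ArenaAlloc {
    using value_type = T;
    Arena* a;
    explicit ArenaAlloc(Arena* ar) : a(ar) {}
    template <class U> ArenaAlloc(const ArenaAlloc<U>& o) : a(o.a) {}
    T* allocate(size_t n) {
        return static_cast<T*>(a->alloc(n * sizeof(T), alignof(T)));
    }
    void deallocate(T*, size_t) {}   // freed with the arena
    template <class U>
    bool operator==(const ArenaAlloc<U>& o) const { return a == o.a; }
    template <class U>
    bool operator!=(const ArenaAlloc<U>& o) const { return a != o.a; }
};
```

With list nodes of a few dozen bytes, a hundred `push_back` calls land in a single slab, so the malloc traffic drops from one call per node to one per slab; the wholesale-free simplification is what a real slab allocator would replace with per-slab free lists.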
C: Very sorry, I was kind of tuned out for the beginning of that. We were talking about passing memory between threads; is that the context here? Yeah.
B: Radek has been working on basically the mempool and slab allocation stuff and trying to kind of separate them out, and the question is: in the Seastar world, what are we still gonna need to do, especially in relation to memory passing between threads? I guess that's the whole thing I brought up.
C: Yeah, so Seastar basically takes all of the memory available and slices it into pieces for each core. The memory allocated anywhere just comes from that core's own internal allocator, and it has to be freed on the same core; but you can pass it around between cores, as long as it goes back to its initial core to be freed, and there's a pointer wrapper that does that.
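A conceptual single-process analogue of that wrapper (Seastar's real one is `seastar::foreign_ptr`; everything here is simplified stand-in code): the wrapper remembers which core owns the allocation, and destroying it from another core posts the delete back to the owner's queue instead of freeing in place.

```cpp
#include <cassert>
#include <functional>
#include <mutex>
#include <queue>

// One deferred-work queue per "core"; drain() is what the owning
// core's event loop would run.
struct Core {
    std::mutex mx;
    std::queue<std::function<void()>> pending;

    void post(std::function<void()> f) {
        std::lock_guard<std::mutex> g(mx);
        pending.push(std::move(f));
    }
    void drain() {
        std::lock_guard<std::mutex> g(mx);
        while (!pending.empty()) {
            pending.front()();
            pending.pop();
        }
    }
};

thread_local int current_core = 0;   // set by the runtime per thread

// Owner-aware pointer: reading it from any core is fine, but the
// delete must happen on the owning core, so a cross-core destruction
// is posted back to the owner instead of freeing in place.
template <class T>
class foreign_like_ptr {
    T* p_;
    int owner_;
    Core* owner_queue_;
public:
    foreign_like_ptr(T* p, Core* owner_queue)
        : p_(p), owner_(current_core), owner_queue_(owner_queue) {}
    foreign_like_ptr(foreign_like_ptr&& o)
        : p_(o.p_), owner_(o.owner_), owner_queue_(o.owner_queue_) {
        o.p_ = nullptr;
    }
    foreign_like_ptr(const foreign_like_ptr&) = delete;
    T* get() const { return p_; }
    ~foreign_like_ptr() {
        if (!p_) return;
        if (current_core == owner_) {
            delete p_;                        // cheap same-core path
        } else {
            T* p = p_;
            owner_queue_->post([p] { delete p; });
        }
    }
};

// Small helper type for the usage example: counts live instances.
struct Tracked {
    static int live;
    Tracked() { ++live; }
    ~Tracked() { --live; }
};
int Tracked::live = 0;
```

In real Seastar the "post back" is `submit_to()` on the owning shard and there is no mutex; the sketch only shows the ownership rule the discussion is about.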
C: Yep, somebody has already basically created the Seastar version of the thread pool, the alien thread pool that does message passing back and forth. I don't think it's seen much testing, and probably no benchmarking at all, but I think it could be interesting just to play with that and see what kind of overhead we have passing things back and forth. I'm trying to look at the memory management at that level.
A: In the messenger we have a lot of syscalls, not even directly related to sending or receiving data, just for the orchestration of it; I mean something like putting a descriptor into the epoll control block or something like that. There's a lot of syscalls there. If we take a powerful enough machine, put three OSDs there, and push forward the performance overhead imposed by the messenger component, well, we should see a lot more IOPS, because then we are not bottlenecked by it.
B: Ported to master as part of memstore, because it's pretty easily separable, and honestly the best part of it was the vector of objects; the rest of it was kind of, you know, not helping, or just changing things around, but that was the real thing that made a big difference. Yes, do we have anything else?
C: I have to step out in a minute, but before I go I wanted to talk a little bit about work. You saw some stuff about Adam's work on the objecter, and I want to get some more visibility around that project, so maybe next week Adam and I could present a little bit about it and kind of say what the high-level goals are.

B: Absolutely, I think that would be great. It sounds like we should have a PR open for it too, to talk about also.