From YouTube: 2016-OCT-05 :: Ceph Developer Monthly
Description
Monthly developer meeting for the coordination of Ceph project development.
http://wiki.ceph.com/Planning
A: Alright, welcome back to the Ceph developer monthly, everyone. I've added it to a few calendars now, so hopefully it will be a bit more pervasive and easy to remember for everyone. If you have not yet added blueprints, even if it's just a single comment, please do so on the page, so we have a record even after we discuss things. Otherwise, I think this should be relatively short, so Sage, I guess we might as well start with your allocator information.
B: Allen went off and implemented the proposal, and we just pushed a working version of the branch this morning, or last night. I just rebased it and cleaned it up a bit and pushed a pull request, which I've put in the etherpad, and I've run the tests and it works. So that's as far as I've gotten so far.
B: But the short version, basically, is that for the code in each class that you're going to put into a memory pool, you do something like this, in the etherpad: you just call a macro, and that basically declares the new and delete operators so that when you allocate that object, it'll put it in the right memory pool. And then somewhere in a .cc file you actually implement those operators, because they're not done inline. That's what it looks like.
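The macro pair Sage describes might look roughly like the following minimal sketch. All names here (`mempool_t`, `MEMPOOL_CLASS_HELPERS`, `MEMPOOL_DEFINE_OBJECT_FACTORY`, `bluestore_pool`, `Onode`) are illustrative placeholders, not the actual Ceph interface:

```cpp
#include <atomic>
#include <cassert>
#include <cstddef>
#include <new>

// Hypothetical sketch: a pool is just a byte counter, and the macros wire a
// class's operator new/delete to it.
struct mempool_t {
  std::atomic<long long> allocated_bytes{0};
};

mempool_t bluestore_pool;  // pools are declared "somewhere else"

// Goes in the class definition: declares pool-aware new/delete.
#define MEMPOOL_CLASS_HELPERS()                        \
  void *operator new(std::size_t size);                \
  void operator delete(void *p, std::size_t size);

// Goes in a .cc file: implements the operators (deliberately not inline).
#define MEMPOOL_DEFINE_OBJECT_FACTORY(cls, pool)          \
  void *cls::operator new(std::size_t size) {             \
    pool.allocated_bytes += (long long)size;              \
    return ::operator new(size);                          \
  }                                                       \
  void cls::operator delete(void *p, std::size_t size) {  \
    pool.allocated_bytes -= (long long)size;              \
    ::operator delete(p);                                 \
  }

struct Onode {  // example class placed in the pool
  int dummy[16];
  MEMPOOL_CLASS_HELPERS()
};
MEMPOOL_DEFINE_OBJECT_FACTORY(Onode, bluestore_pool)
```

With this in place, a plain `new Onode` is charged to the pool and `delete` credits it back, which is what makes the per-pool byte accounting transparent to the calling code.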
B
Tomorrow's
and
the
implementation,
there
is
pretty
straightforward.
If
you
look
at
the
macro
number
cool,
does
this
and
the
defining
nimble
as
this?
This
could
influence
them.
I
drop
that
assert,
and
here
just
for
performance,
because
the
hot
path-
and
it
calls
the
appropriate
factory
thing
and
then
do
you
basically
declare
the
mem
pools
that
are
you
want
to
do.
You
want
to
exist
somewhere
else,
I
care.
What
the
macro
macro
looks
like
here
at
the
top.
B: Let's see, and the interesting thing is that it creates a namespace. Looking at the header file: it creates a namespace for each memory pool, and it declares versions of all the common container types in that namespace that are automatically set up to allocate in that pool. So I can change my std::map to be my mempool namespace's map, and all the internal allocations there will go into that memory pool. So that makes it pretty easy to use. It's based on Allen's code.
B: He took the code that he did for the slab allocator container stuff and rejiggered it to do this. Normally it just has a minimal amount of accounting: for each pool you can ask how many bytes have been allocated in that pool. That's mostly it, I think, but there's also a debug mode that you can enable with a compiler define, a macro definition, that will also count how many of each type of object are allocated in that pool.
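The per-pool namespace with pool-aware container types could be sketched like this. This is an illustration only, with invented names (`mempool_sketch`, `bluestore_meta`, `pool_allocator`); the real header is more elaborate:

```cpp
#include <atomic>
#include <cassert>
#include <cstddef>
#include <map>
#include <new>
#include <vector>

// Illustrative sketch: each pool gets a namespace whose container aliases
// charge that pool's counter through a minimal C++ allocator.
namespace mempool_sketch {

std::atomic<long long> bluestore_bytes{0};

template <typename T>
struct pool_allocator {
  using value_type = T;
  pool_allocator() = default;
  template <typename U> pool_allocator(const pool_allocator<U> &) {}
  T *allocate(std::size_t n) {
    bluestore_bytes += (long long)(n * sizeof(T));
    return static_cast<T *>(::operator new(n * sizeof(T)));
  }
  void deallocate(T *p, std::size_t n) {
    bluestore_bytes -= (long long)(n * sizeof(T));
    ::operator delete(p);
  }
};
template <typename T, typename U>
bool operator==(const pool_allocator<T> &, const pool_allocator<U> &) { return true; }
template <typename T, typename U>
bool operator!=(const pool_allocator<T> &, const pool_allocator<U> &) { return false; }

// "A namespace per pool": std::map/std::vector aliases whose internal
// allocations land in this pool.
namespace bluestore_meta {
  template <typename K, typename V>
  using map = std::map<K, V, std::less<K>,
                       pool_allocator<std::pair<const K, V>>>;
  template <typename T>
  using vector = std::vector<T, pool_allocator<T>>;
}

}  // namespace mempool_sketch
```

Switching a declaration from `std::map<K,V>` to `bluestore_meta::map<K,V>` is then the only source change needed to move a container's internal allocations into the pool.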
B: That's basically the point: yep, the mempool won't have a max, but BlueStore or whatever will periodically go and look at how much memory is being used in this memory pool, and it'll say, oh, we're close to or above our threshold, I'm going to exert some memory pressure and start trimming some stuff.
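That policy, pools with no hard cap, plus a periodic caller-side trim, can be modeled with a few lines. This is a made-up sketch of the bookkeeping, not BlueStore's actual cache code:

```cpp
#include <atomic>
#include <cassert>

// Illustrative sketch: the pool itself has no max; the cache owner
// periodically compares the pool's byte count against its own target and
// trims while it is at or above it. All names are invented.
struct cache_trimmer {
  std::atomic<long long> pool_bytes{0};  // stands in for the mempool counter
  long long target_bytes;
  int trimmed = 0;

  explicit cache_trimmer(long long target) : target_bytes(target) {}

  // Called periodically, e.g. from a cache-trim thread.
  void maybe_trim() {
    while (pool_bytes.load() >= target_bytes) {
      // Evict one cached item; here we just model the bookkeeping,
      // pretending each eviction frees 1 KiB.
      pool_bytes -= 1024;
      ++trimmed;
    }
  }
};
```

The key design point from the discussion is that the pressure lives outside the pool: the mempool only counts, and each consumer decides its own threshold and what to evict.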
B: This is for BlueStore, but I think it's also going to be really useful even just for live instrumentation and profiling purposes in the OSD. So during recovery, for example, when there's lots of memory and we're asking, where is this memory coming from, we'll actually be able to look and see. It might be that we could dynamically do our PG log trimming based on how much memory we're consuming, or, I don't know, whatever; all these things become possible once we're watching it.
B: Yeah, yeah. Well, I'd say, I mean, in theory, if the allocator is doing a good job, then fragmentation won't penalize us. I think if it turns out that it's not doing a good job, then we can revisit that. It was a problem a long, long time ago in the MDS, and we switched to boost pools and it got a lot better. But that was years and years ago; I don't remember how long ago it was.
B: It might be that it doesn't. I'm guessing today that it's better now than it was, and that spreadsheet that you looked at was estimating like 500 bytes per blob and per extent, and when I added up the actual sizes of the structs it was within five percent, so I think we're in pretty good shape there. But let me come back to that if we need to.
E: Yeah, it seems like a good idea. The only other intersecting information I have relates to similar types of techniques that might be necessary to interact with things like an EBR or QSBR strategy in lock-free strategies, and I think that's heading in the same direction, even though I don't fully understand the complexity of where they converge. But yeah.
B: I think the big question is this: is this wrapper around the allocator going to have some sort of overhead? Because it might; the statistics are shared, so there could be some contention there. To mitigate that, it's just sprayed across a vector, like, you know, tens of slots, based on your thread, hoping that it's not going to ping-pong between cores too much, but I haven't really noticed anything.
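The "spray the statistics across a vector of slots based on your thread" idea is a standard sharded-counter pattern, and a minimal sketch might look like this. The shard count, the alignment choice, and all names here are illustrative, not the actual implementation:

```cpp
#include <atomic>
#include <cassert>
#include <functional>
#include <thread>

// Sketch of a sharded statistic: updates hash the calling thread's id to
// one of a handful of cache-line-separated slots, so concurrent updates
// rarely contend on a single atomic; reads sum over all slots.
struct sharded_counter {
  static constexpr int num_shards = 16;
  struct alignas(64) shard {  // 64 = typical cache line size
    std::atomic<long long> bytes{0};
  };
  shard shards[num_shards];

  static int pick_shard() {
    return std::hash<std::thread::id>{}(std::this_thread::get_id()) % num_shards;
  }
  void add(long long n) { shards[pick_shard()].bytes += n; }
  void sub(long long n) { shards[pick_shard()].bytes -= n; }

  // Slightly racy sum across shards: fine for statistics, not for invariants.
  long long total() const {
    long long t = 0;
    for (const auto &s : shards) t += s.bytes.load();
    return t;
  }
};
```

The trade-off is exactly the one raised in the discussion: writes get cheaper because threads mostly touch their own cache line, while reads become a scan, which is acceptable when the consumer is an occasional trim or stats pass.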
B: Hoping to reduce the contention. On the other hand, I'm not certain that we want that in this case; we might just want the accounting and then push everything off to the lower-level allocator. But I don't know. Unfortunately, when I've been doing perf profiles on my machine, none of the allocation stuff is really showing up on my plots, and I don't know why that's the case, so I don't know that I can really measure the impact of this either way.
B: Yeah, definitely. Actually, that reminds me, I think there was something in... I'll go Google that, because I thought there was actually something; maybe it was jemalloc that had a whole concept for this. I don't remember.
B: I don't know; maybe. Again, I think the generic allocators are going to have much more sophisticated locking and better performance-tuned fast paths and hot paths, or whatever, than even something that we put together that's special-casing it. But it's unclear; maybe the slab stuff that this is implementing will do well. We'll have to measure it and see.
F: Will it really improve performance? I have my doubts, because of the structure. I guess which multi-threaded allocator you use, like tcmalloc or jemalloc, may matter for this.
B: So I would expect it to be the same as the allocator, except that the slabs will sort of extend the lifetime of those allocations somewhat, so that things that are allocated close together will be stuck on the same slab, and if you deallocate one but not the other, it's going to be pinned. But I don't really know; it's probably pretty complicated behavior.
B: So we'll see. It would be nice, actually, if we could just disable the slab piece, make the slab size one, and have it go away, but I'll check with him about that. It might be that you can just test it both ways.
B: I'll remind him that my goal is just to get the accounting; that's the main thing I'm worried about. I want to be able to put like 10 different data types, and all the containers that use them, in a pool, so I can see how much total memory is allocated by all of these things and put memory pressure on the cache based on that.
B
Yes,
that's
what
I
want
to
do.
I
just
think
that
yellowness
I
think
he's
also
trying
to
just
reduce
allocations
with
the
slab
thing
at
the
same
time
and
I'm
not
certain
whether
that's
gonna
be
a
good
idea
or
not.
Maybe.
B: Makes sense. Okay, yeah, I agree.
B: Yep, okay, that's all I had there, I guess. The only other topic is the boost stuff, which is basically ready to merge as soon as we get the tests going again, and the slab containers are also ready to merge. They're also using a variation of this slab allocator, but they allow you to specify that some number of those slots are inline as part of the container, so when you have only a few elements you don't do any additional allocations.
B: Yeah, so I think just be aware that that's coming. I do have a pull request that tried changing the list of pointers in bufferlist to be a slab list with three preallocated slots, and it makes the unit tests for bufferlist run like three percent faster. But when I tried a random-write workload, it wasn't noticeable; I couldn't see any difference. Yeah.
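The "some slots inline as part of the container" idea is the classic small-size optimization, and a stripped-down sketch of it might look like this. The real slab containers are list-based and more general; this toy `inline_vector` (an invented name) just shows how the first N elements can avoid the heap:

```cpp
#include <cassert>
#include <cstddef>
#include <new>
#include <utility>

// Minimal sketch: a vector-like container that stores its first N elements
// in an internal buffer and only touches the heap when it grows past that.
template <typename T, std::size_t N>
class inline_vector {
  alignas(T) unsigned char inline_buf[N * sizeof(T)];
  T *data_ = reinterpret_cast<T *>(inline_buf);
  std::size_t size_ = 0, cap_ = N;
  std::size_t heap_allocs_ = 0;  // instrumentation, like the PR experiment

public:
  ~inline_vector() {
    for (std::size_t i = 0; i < size_; ++i) data_[i].~T();
    if (data_ != reinterpret_cast<T *>(inline_buf)) ::operator delete(data_);
  }
  void push_back(const T &v) {
    if (size_ == cap_) {  // spill from the inline slots to the heap
      std::size_t ncap = cap_ * 2;
      T *nd = static_cast<T *>(::operator new(ncap * sizeof(T)));
      ++heap_allocs_;
      for (std::size_t i = 0; i < size_; ++i) {
        new (nd + i) T(std::move(data_[i]));
        data_[i].~T();
      }
      if (data_ != reinterpret_cast<T *>(inline_buf)) ::operator delete(data_);
      data_ = nd;
      cap_ = ncap;
    }
    new (data_ + size_) T(v);
    ++size_;
  }
  std::size_t size() const { return size_; }
  std::size_t heap_allocs() const { return heap_allocs_; }
  T &operator[](std::size_t i) { return data_[i]; }
};
```

With N matching the common case (three, in the bufferlist experiment), the typical instance never allocates at all, which is where the unit-test speedup would come from.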
B: Yeah, I mean, I think that the problem with all of these is that there isn't going to be any single container where switching that one container over to use the inline types is going to make a significant difference; but if we changed all of them, then it would probably be significant. But that's a lot of work, so it's hard to tell up front whether it's going to be worthwhile to change all of them or not.
G: Yeah, just a hi: this is the first time I'm attending the Ceph monthly call, so I'm not sure what the overall goal of this meeting is. But I thought I could add this, because we would like to see this in the rbd interface, and so I wanted to ask if there is any update or anything we can discuss. I mean, most of the stuff is captured in the mailing list and also on the tracker, which is listed on the wiki page.
D: So half of it, all the preliminary stuff, needs to go into librbd, because it would be responsible, when an image is created or a snapshot is created, for generating all those unique IDs that would be stored with the image permanently. But yeah, after that, the backend support includes all the changes that are necessary.
H: So, basically, we've been working on this for a while now, trying to add support for using the dmClock algorithm to eventually have quality of service, like reservations, among different clients. But to start off with, we're just focusing on getting the algorithm correct, and Eric Ivancich, who's on the call now, has been implementing that and working with some other folks from SK Telecom on fixing bugs and getting that working correctly, and now it seems to be passing the test suites.
H: Shoot, is my mic back, or is it gone again? (Yes, I think you're good now.) My headset cuts out sometimes. So the initial version that we're experimenting with is just trying to distinguish between background IO in the cluster, like recovery, scrub, and snap trimming, versus client IO, since that has been a problem for users in the past. So that branch of Eric's has been undergoing a bunch of performance testing lately, and I think we should have some results to share soon. Eric, do you want to add anything to that?
G: This is what we would like to see: whether we can provide a quality of service to a particular service, so that if something goes bad, that service gets least impacted, and we have like a priority of services. Is that what you are trying to do? I don't know much about it, and I would like to get more information, or, if there's a page or any design document or something, we would like to know more about this, yeah.
H: Perhaps the internal clients get one reservation, and your various other clients each get different reservations, so that they are guaranteed a certain level of service, and then there's some kind of proportional share based on what's available after that point. But it's still a fair ways away from being implemented, so far. But that's the plan; we'll pick up from there.
H: Not precisely, but hopefully the more limited form, with the distinction between background IO and client IO, will get into Luminous. I think the client-side reservations are further off, since that requires a bit more work on the configuration side and on the management of the different limits.
A: I think that was probably the shortest Ceph developer update on record, so, awesome: 30 minutes, and we give you some time back in your day. Alright, well, thank you, everybody, for coming. The next planning session will be on November 2nd; that will be our APAC-friendly time, so 9 p.m. Eastern. I'm going to be pushing a lot harder to get some blueprint discussions on the wiki beforehand; I want to make sure we're making good use of everyone's time for a meeting like this, you know, if you have thoughts one way or the other.
A: If it's not useful, we can discontinue it and figure out a better way; if we are going to do it, I want to make sure we do it correctly. So if you have any questions, hit me up on the list or directly, and I'm happy to answer them. Otherwise we'll see you on November 2nd, or on IRC. Thanks, everybody.