From YouTube: CDS G/H (Day 1) - librados review
Description
https://wiki.ceph.com/Planning/CDS/CDS_Giant_and_Hammer_(Jun_2014)
24 June 2014
Ceph Developer Summit G/H
Day 1
librados review session
We'll give you a minute to get set up there, but this session is going to be the librados review. It looks like we're going to focus primarily on the Objecter threading stuff and the parallel reads, although I'm sure any librados questions or discussion will be welcome. So Sage, go ahead and take it away whenever you're ready.
Sure. Is it sharing my screen? I can't tell. It is? Yeah? All right. Okay, well, before we jump into librados I'm just going to say a few things about the development process and getting involved and all that good stuff. So the first thing is that if you want to get involved in Ceph development, the key is figuring out how to reach developers, and the best way to do that is via the mailing lists or IRC.
We have several different development lists now. ceph-devel at vger.kernel.org is where all the general Ceph development email discussion goes, and that's sort of the first place to look. When we open-sourced Calamari we created a separate list for it, since Calamari tends to be a little bit different as far as what topics are discussed and so forth, so there's a ceph-calamari list as well.
You should get on one or all of those lists depending on your interests, and of course IRC is really the best place to find us; we're all on IRC on irc.oftc.net. As for submitting code, obviously everything is on GitHub. We use GitHub pull requests to capture code review and also to merge most code. That's sort of the easiest way to get our attention, that and pinging individuals on IRC.
The component leads handle the code review for their particular component: for those pull requests, they make sure that the patches and so forth that are coming in are addressed, and either revised, or they say who should be reviewing them, or they get merged. They also need to sort of guide the prioritization of the backlog in the tracker: we've got all sorts of different things that we want to do in the project, and we need to figure out what should be sequenced next to maximize, you know, stability and so forth for the project. And the last thing is that as we do these summits, we want to make sure that we're talking about all the major components of Ceph, whether it's RBD or core RADOS or the file system or whatever, and figure out what topics are going to make the most sense to discuss for the next development cycle and how we should generally be focusing our efforts.
That's the sort of input that I'm hoping these people, once identified, will have more involvement in. So, without further ado, this list probably isn't a surprise to much of anybody: these are the key individuals responsible for these parts of the system. For core RADOS, the low-level distributed object layer, Sam Just is definitely the go-to person; he's the deepest in the code and generally has the most insight as far as what should be done, when, and how. For RBD, the block device, Josh Durgin has written much of that code and is sort of the expert on RBD, and also on a lot of the integrations, as far as how it ties into OpenStack and so forth. For RADOS Gateway, the S3/Swift proxy gateway, Yehuda is the person to talk to, as you'll generally notice.
For the file system, Zheng Yan at Intel is sort of our biggest contributor; he's been sending in the most patches and doing a lot of work on multi-MDS stability, and probably has his head best wrapped around some of the nitty-gritty internals of how the multi-MDS clustering stuff works. So I think between these two we have the right combination of knowing where we should be going technically and specifically how we should be addressing all the little issues. For the new Calamari project,
that was previously worked on mostly by John, but now we're sort of looking for people to pick up those pieces, so it may be that we'll break those apart later; we'll see. Teuthology,
the testing framework for Ceph, is Zack's primary day job, and he's doing a lot of good work there: refactoring it to make it easier to install, to make it a simpler overall system, and to generalize things so that it can more easily be pointed at different cloud backends. So you can easily stand up your own Ceph testing infrastructure. He's also pushing forward this vision that the test framework should be, you know, general enough to run and test other distributed systems besides Ceph; the vision is to make this the tool that we would have wanted three years ago, the one that would have saved us from having to write it all ourselves. As for ceph-deploy, the easy Ceph deployment tool,
Alfredo Deza is the one who has been doing most of that work, and on the Chef cookbooks that are used for deploying Ceph with Chef, Guilhem Lettron stepped up, I think six months ago or whenever, to maintain those. So these are sort of the ones that come to mind; I think there are other areas where people have some level of ownership. My goal is really just to put this on the wiki somewhere and identify these people.
A big piece of getting your code merged is getting it tested. So generally, one of the first things is that once the code hits the mainline Ceph tree in a branch, we have these gitbuilders that go off and build it for all the different distros and packages and run all the unit tests. You can see the status at the gitbuilder URL, and then Pulpito is a newish tool that Zack's been working on
through a bunch of work on the teuthology jobs, where you can actually see those results; that's a public-facing web site as well. As we get more people involved in development, the key is to get everybody plugged into all this infrastructure so that we're all using the same tools, the same CI process, and so forth.
A
So
you
get
involved
so
and
then
the
last
thing
I
wanted
to
talk
about
was
that
the
the
released
cadence
has
sort
of
been
a
big
question
about
how
the
how
how
we're
going
to
do
this
going
forward.
So
in
the
past,
we
did
a
named
release
every
three
months
that
basically
meant
two
months
of
actual
coding
and
then
one
month
of
hardening
and
QA
before
we
did
our
sort
of
named
I'm
cephalopod
releases
and
then
as
ink
tank.
previously, every other one of those was supported as part of the Inktank Ceph Enterprise release. More recently, our Firefly release slipped because we wanted to get erasure coding and cache tiering in there, and those were sort of big chunky features that ended up being pretty challenging to get completely integrated; that timeline slipped by two or three months, so we missed our window, our train, by a lot.
A
So
one
of
the
questions
that
we're
sort
of
trying
to
figure
out
is
what
should
we
be
doing
in
the
future
and
I
think
the
questions
are,
you
know,
I,
think
they're
within
within
the
team.
Here
at
least
there
seems
to
get
general
consensus
that
we
want
to
have.
You
know
a
time
line,
type
train
release,
so
we
do
regular
releases,
regardless
of
what
features
are
in
or
out,
and
if
things
aren't
ready,
then
they
don't
get
merged
and
then
we
stabilize
what
we
have
and
so
forth.
A
So
I
think
there's
people
generally
like
that
and
that
you
know
source
frees
us
up
from
the
demands
of
the
the
product
side
of
red
hat
or
any
other
company
what
they
want
to
ship
in
a
product.
But
as
far
as
our
upstream
engineering
effort,
we
just
sort
of
focus
on
doing
things
in
when
they're
ready
and
not
not
sooner.
So
that's
that's
true,
but
there's
but
there's
a
question
about
timing.
A
We
didn't
actually
schedule
a
specific
wat
to
discuss
this,
but
we
can
we
can
sort
of
slip
it
in
when
we
have
gaps
or
get
at
it.
Add
it
in
and
definitely
you
know
if
we
can
bring
this
up
in
the
mailing
list
too.
But
I
wanted
to
flag
this
as
something
that
that
we
should
all
be
thinking
about.
So
that's
it
for
my
little
intro
a
bit.
Yep, got it. Okay, all right. So there were sort of two things that we discussed at the last summit for librados that have made some progress. One of the big ones is changing the internal threading model for the librados library so we can better handle high-IOPS workloads, and we've made significant progress there, although it's not quite landed in master yet. The other thing we talked about was supporting parallel reads to reduce latency; that's actually something that's relatively simple to implement that we haven't done yet. So I guess I can start with that one.
A
The
basic
idea
is
that
if
you
have,
I
have
to
spare
in
the
back
end
and
they
have
a
latency,
sensitive
application
and
you're
doing
replication,
then
you
could
just
send
the
read
request
all
replicas
and
whichever
one
comes
back
first,
you
know
you
use
Newton
or
the
other
replies
when
that
obviously
is
going
to
cost
you
a
bunch
of
I
on
the
back
end
but
for
latency
sensitive,
sensitive
applications.
That
might
be
just
fine
because
you
want
to
avoid
sort
of
the
long
tail.
You
know
improbable
case
where
you
have
a
spike
on
that.
A
One
primary
that
you
Hampton
reading
from
in
a
pop-up
behind
behind
that
node.
So
this
is
that
continues
to
be
something
that's
going
to
be
pretty
simple
to
implement,
but
we
haven't
done
yet.
So
it's
something
that
you're
interested
in
for
your
particular
application
and
want
to.
You
know,
jump
in
and
do
some
work.
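A minimal sketch of that first-reply-wins pattern, in plain C++ threads rather than any real librados API (read_from_replica and its latencies are invented stand-ins):

    // First-reply-wins read across replicas. read_from_replica() is a
    // stand-in for a per-replica read; no such librados call is implied.
    #include <chrono>
    #include <future>
    #include <iostream>
    #include <mutex>
    #include <string>
    #include <thread>
    #include <vector>

    std::string read_from_replica(int osd, const std::string& oid) {
      // Simulated latency; osd 0 models the slow "long tail" primary.
      std::this_thread::sleep_for(std::chrono::milliseconds(osd == 0 ? 200 : 20));
      return "data-for-" + oid;
    }

    std::string parallel_read(const std::vector<int>& replicas,
                              const std::string& oid) {
      std::promise<std::string> first;   // fulfilled by the fastest replica
      std::once_flag done;
      std::vector<std::thread> workers;
      for (int osd : replicas) {
        workers.emplace_back([&, osd] {
          std::string data = read_from_replica(osd, oid);
          // Only the first reply wins; later replies are discarded.
          std::call_once(done, [&] { first.set_value(std::move(data)); });
        });
      }
      std::string data = first.get_future().get();
      // A real client would not block on the slow replies like this; we
      // join here only to keep the sketch self-contained.
      for (auto& w : workers) w.join();
      return data;
    }

    int main() {
      std::cout << parallel_read({0, 1, 2}, "some-object") << "\n";
    }

The extra reads are the cost: every call multiplies back-end I/O by the replica count, which is exactly the trade-off described above.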
The threading work, then. The Objecter, the layer that's doing all the client-side initiation for the RADOS I/Os, basically had a single global lock, and so if you're doing lots of I/Os, everybody piles up on this one lock, and there's a limit to the number of IOPS you can do. So the main effort was to break that lock down into a bunch of small locks, and to use reader-writer locks and some atomics and a bunch of the other usual multithreading lock trickery
A
To
make
that
happen,
one
of
the
first
steps
was
to
to
make
the
the
way
that
the
object
or
module
was
sort
of
inheriting
the
lock
of
its
of
its
client
that
it
was
using
sort
of
existed
under
somebody
else's
blocking
scheme
to
sort
of
graduate
it.
So
it
had
its
own
set
of
locks
and
was
multi-thread
safe.
Essentially,
and
then
we
created
a
read/write
lock
around
the
list
of
those
DS
that
we're
talking
to
essentially
the
map
of
what
OSD
sessions
we
have
open.
A
You
know
hundreds
of
posties
so
effectively,
we've
broken
this
one
goal,
the
lock
down
into
100
smaller
locks
and
which
gives
you
much
better
parallelism
and
at
some
level
you
always
have
to
see,
relies
on
that.
Individual
is
d
connection,
because
they're
going
over
the
same
TCP
pipe
and
their
old
order
when
they
go
over
that
pipe.
So
there's
some
serialization
that
has
to
happen
there,
but
we
sort
of
eliminated
the
synchrony
between
all
the
messages
that
are
going
to
different
OS
DS.
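To make the shape of that change concrete, here is a hedged sketch of the resulting locking structure; the type names (OSDSession, Client, the session map) are illustrative, not the real Objecter types:

    // One rwlock guards the session map; each OSD session then has its
    // own lock, so ops to different OSDs no longer contend globally.
    #include <atomic>
    #include <cstdint>
    #include <memory>
    #include <mutex>
    #include <shared_mutex>
    #include <string>
    #include <unordered_map>

    struct OSDSession {
      std::mutex lock;                        // serializes this one TCP pipe
      std::atomic<uint64_t> inflight{0};
      void submit(const std::string& op) {
        std::lock_guard<std::mutex> g(lock);  // per-session, not global
        ++inflight;
        // ... encode op and queue it on this session's connection ...
      }
    };

    class Client {
      std::shared_mutex session_map_lock;     // rwlock: lookups are readers
      std::unordered_map<int, std::shared_ptr<OSDSession>> sessions;

    public:
      std::shared_ptr<OSDSession> get_session(int osd) {
        {
          std::shared_lock<std::shared_mutex> r(session_map_lock);
          auto it = sessions.find(osd);
          if (it != sessions.end()) return it->second;
        }
        std::unique_lock<std::shared_mutex> w(session_map_lock);
        auto& s = sessions[osd];              // recheck under the writer lock
        if (!s) s = std::make_shared<OSDSession>();
        return s;
      }

      void submit_op(int osd, const std::string& op) {
        get_session(osd)->submit(op);         // contention is now per-OSD
      }
    };

    int main() {
      Client c;
      c.submit_op(3, "read some-object");     // takes only osd 3's locks
    }

With a hundred sessions, a hundred submitting threads mostly take different locks; the only unavoidable serialization left is within a single session, matching the TCP ordering constraint described above.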
A
So
current
status
of
that
work
is
that
it's
largely
compete
complete
except
that
after
the
last
rebase
it
started
failing
some
of
the
integration
tests,
and
so
we
just
need
to
figure
out
what
happened.
What
broke
showed
up
at
the
last
minute?
Get
that
cleaned
up,
so
we
can
get
it
back
merged
again
and
that's
been
delayed
from
you
know,
bunch
of
other
random
stuff,
that's
happening
last
week.
A
lot
of
the
team
here
at
ink
tank
was
at
at
in
Raleigh
at
Red
Hat
doing
our
orientation.
So
not
much
happened.
The other piece is work that's going to be happening over the summer, and it's around tracing. The idea is trace capture and replay at the librados level, and probably doing something similar at the librbd level and some of the other client sides. I don't know if we have a separate session for this,
so I'll talk about it now, but the basic idea is that we want to have a way, on a live cluster, to go in and capture a trace of what the current I/O looks like: either get an actual explicit trace of every I/O that's happening, or feed that information into something that just characterizes the workload.
Just running random benchmark applications does not have very much bearing on what people see in production, or on reality.
So that's what that blueprint is about, and we have a couple of people here, well, one person here for the summer, Adam Crume, who is going to be doing some of this work, specifically in the context of librbd, and I believe somebody else on the blueprint wants to do the same thing specifically at the object level.
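As a rough illustration of what capture at the client level could look like, here is a hedged sketch of a wrapper that logs one record per I/O before forwarding it; the TracingClient wrapper and its record format are invented for illustration, not part of librados or librbd:

    // Log a timestamped record for every read/write so the stream can
    // later be replayed or fed into a workload characterizer.
    #include <chrono>
    #include <cstdint>
    #include <fstream>
    #include <string>

    class TracingClient {
      std::ofstream trace{"client.trace"};

      void record(char op, const std::string& oid,
                  uint64_t off, uint64_t len) {
        auto now = std::chrono::steady_clock::now().time_since_epoch();
        auto us = std::chrono::duration_cast<std::chrono::microseconds>(now);
        // One line per I/O: time, op, object, offset, length.
        trace << us.count() << ' ' << op << ' ' << oid << ' '
              << off << ' ' << len << '\n';
      }

    public:
      void read(const std::string& oid, uint64_t off, uint64_t len) {
        record('R', oid, off, len);
        // ... forward to the real read path ...
      }
      void write(const std::string& oid, uint64_t off, uint64_t len) {
        record('W', oid, off, len);
        // ... forward to the real write path ...
      }
    };

    int main() {
      TracingClient c;
      c.write("vm-image.0001", 0, 4096);   // example I/Os to capture
      c.read("vm-image.0001", 0, 4096);
    }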
So that's sort of the picture for librados, I guess. Are there any questions in the channel, or any general questions about what we're going to be covering over the next couple of days, before we sort of dive into all these sessions? In the room, I guess, people can just unmute and start talking.
[Question from the room, not picked up by the transcript.]

Yes, we could do that, and you could actually do that just above librados too, where your application could just read from all... right, because... well, no, actually, you couldn't. That's a good idea.
So one of the things that Sam is going to talk about, I can't remember if it's today or tomorrow, I think tomorrow, is improving the scrubbing and repair mechanism, and one component of that is exposing, through some of the internal or low-level APIs, the ability to ask how many replicas of an object there are, and to get the data for each of those individual replicas, so that when scrubbing does detect that there's some inconsistency, you can actually examine what those different objects are and then make some decision, like:
A
Oh,
this
is
the
one
that
I
want
and
repair
all
the
other
up,
because
based
on
this
one,
which
I
decide
to
be
the
correct
one,
so
I
think
some
of
that
infrastructure
is
going
to
be
similar
to
what
you're
asking
for
I
hadn't
sort
of
him
originally
thought
about
just
sort
of
magically
having
this
mode.
What
result
replicas
and
only
returns
it
if
they
all
match,
but
that
would
that
would
be.
A
That
would
be
a
good
thing
too,
and
one
of
the
general
problems
is
because
we're
sort
of
sitting
on
top
of
existing
local
file
systems,
we're
sort
of
trusting
that
they're
going
to
return
us
the
right
data
and
then
scribing
comes
back
periodically
to
detect.
If
that's
not
the
case,
but
if
you're
doing
an
individual
reads,
you
don't
necessarily
know
that
it's
that
it's
perfectly
correct,
but
that
would
that
would
be
an
expensive
way
that
you
could
sort
of
put
the
cluster
in
a
paranoid
mode
or
something
like
that.
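A hedged sketch of that paranoid mode, assuming a hypothetical per-replica read call (read_replica below is invented; librados exposes no such public call today):

    // "Paranoid read": fetch every replica and return data only if all
    // copies agree, surfacing inconsistencies to the caller instead.
    #include <iostream>
    #include <optional>
    #include <string>
    #include <vector>

    // Hypothetical per-replica read; a real implementation would ask one
    // specific OSD for its copy of the object.
    std::string read_replica(int osd, const std::string& oid) {
      (void)osd;
      return "contents-of-" + oid;   // stub data for the sketch
    }

    std::optional<std::string> paranoid_read(const std::vector<int>& replicas,
                                             const std::string& oid) {
      std::string first = read_replica(replicas.front(), oid);
      for (size_t i = 1; i < replicas.size(); ++i) {
        if (read_replica(replicas[i], oid) != first)
          return std::nullopt;   // mismatch: let the caller inspect copies
      }
      return first;
    }

    int main() {
      auto data = paranoid_read({0, 1, 2}, "some-object");
      std::cout << (data ? *data : "<inconsistent replicas>") << "\n";
    }

As noted above, this multiplies read cost by the replica count, so it only makes sense as an opt-in mode for callers that value integrity over latency.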