From YouTube: CDS G/H (Day 1) - RDMA Update
Description
https://wiki.ceph.com/Planning/CDS/CDS_Giant_and_Hammer_(Jun_2014)
24 June 2014
Ceph Developer Summit G/H
Day 1
RDMA update session
B
These notes are going to be short this week, but let's go over the things involved in implementing the full pull-up. The Firefly rebase is the big thing. It's still in progress, but there are a couple of points that have evolved in the last couple of weeks. One is adapting to changes in the xio internal messaging interface.
B
But the significant change is that previously a fairly complete set of operations would almost always be sent in a single message or write. Now, with what's changed, it's more likely that there's chaining of I/Os in flight, and their ordering and other things that had already been handled need to be revisited. So it's a potential tuning problem.
B
So that's an open item. The second high-level point: I had done some profiling and profile-based testing against the pull-up to Firefly, and I had the impression that there was some performance regression there. I was able to spend some time on that, and there indeed was some performance regression, but I have been able to address it.
B
One of the last things I was able to explore was doing some reorganization of buffer and buffer list, which I believe improved optimization at -O2 and, I think, improved performance, so it would be useful if someone had time to look at that ongoing work. We have an agreement that the immediate target is a single messenger that's compile-time or runtime selected, rather than multiple messengers, I believe.
B
We talked about, in our last meeting, the timing of adding channel support to the xio messenger, and to messenger in general, since the current design of messenger implies a separate messenger for any logical class of messages, and that's the way you avoid head-of-line blocking and so forth.
B
But it appears that there's a little bit more policy that will be needed in order to allow all the senders to a given receiver to agree on a specific portal or a specific number of portals. We have also been discussing with the upstream xio team, basically looking at xio's flow-control interface, and there's a fair bit of agreement there.
C
So, just to relate that back to Ceph: I think there are two things that we would use that for in Ceph. One is to separate the flows between different entities in order to prioritize them differently. For example, in the OSD we might want to send recovery traffic at a different priority than replication traffic, potentially, and be able to control those, sort of apply QoS to them. And then on the other hand.
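The recovery-versus-replication prioritization being discussed can be sketched as a small strict-priority queue keyed by message class. This is a hypothetical illustration of the idea, not Ceph's actual dispatch code; `MsgClass`, `PrioQueue`, and `prio_for` are invented names.

```cpp
#include <cassert>
#include <map>
#include <queue>
#include <string>

// Hypothetical message classes an OSD might want to schedule differently.
enum class MsgClass { Heartbeat, Replication, Recovery };

// A toy strict-priority queue: lower numeric priority drains first.
class PrioQueue {
public:
  void enqueue(int prio, std::string payload) {
    q_[prio].push(std::move(payload));
  }
  // Pop the next message, highest priority (lowest number) first.
  std::string dequeue() {
    auto it = q_.begin();  // smallest key = highest priority
    std::string out = it->second.front();
    it->second.pop();
    if (it->second.empty()) q_.erase(it);
    return out;
  }
  bool empty() const { return q_.empty(); }
private:
  std::map<int, std::queue<std::string>> q_;
};

// Map a message class to a priority: replication ahead of recovery.
inline int prio_for(MsgClass c) {
  switch (c) {
    case MsgClass::Heartbeat:   return 0;  // most latency-sensitive
    case MsgClass::Replication: return 1;
    case MsgClass::Recovery:    return 2;  // background work drains last
  }
  return 3;
}
```

With this scheme, queued recovery work only drains once pending replication and heartbeat messages have been sent.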
C
The other piece is that, right now, we have this heartbeating function that the OSDs are doing, where they're pinging each other to make sure they're still alive, and we're using totally independent messenger instances that are opening independent TCP sockets and making sure that they're communicating, which is expensive and also a bit awkward to manage. On the other hand, it's a larger change to consolidate that.
C
That's the motivating use case, but it's not a slam dunk. I think we have to be a little careful with it because of the way that the throttling works, at least with the simple messenger; we need to make sure it's compatible. I guess it'd all be different in the xio case anyway, but still.
C
Yeah, so I can just jump in a little bit and let you know where we're at on the flip side. I took the first few patches of the original xio branch, which were doing a bunch of refactoring changes in the messenger API and its abstractions; that got cleaned up a bit, tested, and built up in the wip-messenger branch.
C
I think the main piece needed to make it ready to go, and we talked about this offline, but might as well reiterate here so everybody knows where we're at, is to drop the separate instantiation of the XioMessengers. So instead of having two parallel messengers, there's just a config flag that will create a different kind of messenger in place of the SimpleMessenger behind the scenes, so that there are fewer changes across the code and it's hopefully just one patch.
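The single-config-flag approach can be sketched as a tiny factory; the names here (`Messenger`, `create_messenger`, the `ms_type` string) are illustrative stand-ins, not the exact Ceph API of the time.

```cpp
#include <cassert>
#include <memory>
#include <string>

// Minimal stand-in for the messenger interface.
struct Messenger {
  virtual ~Messenger() = default;
  virtual std::string type() const = 0;
};

struct SimpleMessenger : Messenger {
  std::string type() const override { return "simple"; }
};

struct XioMessenger : Messenger {
  std::string type() const override { return "xio"; }
};

// One factory that call sites use; the config flag picks the
// implementation, so the rest of the code never names a concrete
// messenger class.
std::unique_ptr<Messenger> create_messenger(const std::string& ms_type) {
  if (ms_type == "xio")
    return std::make_unique<XioMessenger>();
  return std::make_unique<SimpleMessenger>();  // default
}
```

The point of funneling construction through one function is exactly what's described above: switching transports becomes a one-line config change rather than edits across the code base.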
C
Yes, yeah, and it mostly only changes... I think there are two or three patches in your series that are sort of the guts of it, and then I think a bunch of the other ones we can drop once you add, on top of that, the flag that makes the messenger-creation function pick the right type. Is that about right?
C
I think it works in my basic testing; I haven't run it through teuthology yet, but that's just blocked on all the other pending stuff. But if you do that rebase and it falls over, that would be helpful to know, and if it doesn't, that would also be useful to know, and I can prioritize accordingly. Let's see, so, OK: rebase onto wip-messenger, and fix the messenger instantiation via a single config option.
C
It's the peer-to-peer case. What is it, the lossless-peer policy mode or whatever? I think that's the thing I'm worried about. I'm not actually sure; it might just work, but I think that's the part we have to make sure behaves, because in the simple messenger case we bend over backwards to make that work properly, with all the weird sequence numbers and retries, and it's all weird because you have multiple peers and they need to settle on a single stream.
C
It's going to fall over when you have a cluster of, like, ten nodes instead of two nodes. Yeah, yep. The issue is, when you have a bunch of them, they're all talking to each other, and then one of them flaps: it goes down and restarts or whatever, and then they come back up and they try to connect to each other and get confused.
C
But I think that's the general sequence. So, looking forward a little bit: I started a branch that changes the encoding for the address type in a way that will be backward compatible but not forward compatible, I guess, though you usually don't like those terms.
C
The idea there is that we would have a different type field on the address that says "this is an xio endpoint" and not a regular IPv4 or IPv6 endpoint, and at the same time I was changing the encoding to be more efficient. Currently we're just always dumping a sockaddr_storage struct, which is like 80 bytes, even if it happens to be an IPv4 address in there, which is one of the reasons why the OSDMap structures get really big on big clusters.
C
Even IPv6 addresses are only about 16 bytes, so we can cut that way down. The other half of that is adding a new type, entity_addrvec, that is like a priority list of addresses, so you could have a daemon that binds to multiple protocols and multiple endpoints. So, for example, you could advertise both IPv4 and IPv6, and the client would just pick one of them based on its priority, whichever one matched its own address.
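The entity_addrvec idea, a priority-ordered list of advertised addresses from which a client picks the first one whose protocol it can speak, might look roughly like this. The types and names below are simplified stand-ins for illustration, not the real Ceph structures.

```cpp
#include <cassert>
#include <optional>
#include <string>
#include <vector>

// Simplified address: a type tag plus a printable endpoint.
enum class AddrType { IPv4, IPv6, Xio };

struct EntityAddr {
  AddrType type;
  std::string endpoint;
};

// Priority-ordered vector of addresses a daemon advertises.
using EntityAddrVec = std::vector<EntityAddr>;

// Client picks the first advertised address whose type it supports.
std::optional<EntityAddr> pick_addr(const EntityAddrVec& advertised,
                                    const std::vector<AddrType>& supported) {
  for (const auto& a : advertised)  // advertised order = priority
    for (AddrType t : supported)
      if (a.type == t) return a;
  return std::nullopt;  // no common protocol
}
```

A daemon that binds both xio and IPv4 would list xio first; an RDMA-capable client takes the xio endpoint, while a plain TCP client falls through to the IPv4 one.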
C
You could have both xio and IPv4/IPv6. I think this is sort of lower priority; eventually we want to do all that, but for now we just want to get this working, with the requirement that it be purely xio, in order to get it working at all. Just out of curiosity, I don't know if anybody has asked the people in the IRC channel how many people would actually want or need multi-protocol access.
C
The other piece of this is getting all of these daemons to migrate off of the old messenger API, where you were passing an address with every send call, to the connection-based API. There are so many users in the code base that are using the old one and not the new one, and they need a moderate amount of attention to do that migration. When we do that, we can drop a bunch of annoying code in the messenger and simplify people's lives.
B
I think it's not a big deal. I mean, the key was to get all the replies right: when you have dual messengers, you really must reply on the correct connection and messenger pair, and so the biggest piece is a bunch of changes that deal with that, and then making sure it's well known what we support. I think that's all of them. No, I mean, it was when you reply.
B
You needed to not just go off and grab my messenger and send on it; you needed to not mix and match. And so it mostly works correctly if you match the messenger for replies. So I think I'd agree: we obviously want to transition to it as much as possible, but it shouldn't be inappropriate to use an address where that makes sense.
C
In the wip-messenger branch, that's one of the things that I think I dropped; I think I cleaned up most of those cases. Instead of doing a messenger send-message with an address, you do a connection send-message, and so you can just take the incoming message's get_connection() and send the reply back on the same pipe without figuring out which messenger it came from, yeah.
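The reply pattern described, answering on the connection a message arrived on instead of looking up a messenger and peer address, can be sketched like this. It is a toy model of the idea, not the real Messenger/Connection classes.

```cpp
#include <cassert>
#include <memory>
#include <string>
#include <vector>

// Toy connection: remembers what was sent on it.
struct Connection {
  std::vector<std::string> sent;
  void send_message(const std::string& m) { sent.push_back(m); }
};

// Toy incoming message that carries the connection it arrived on.
struct Message {
  std::shared_ptr<Connection> conn;
  std::string payload;
  std::shared_ptr<Connection> get_connection() const { return conn; }
};

// Old style needed the right messenger plus a peer address; here the
// handler just replies on the connection the request came in on.
void handle_request(const Message& req) {
  req.get_connection()->send_message("ack:" + req.payload);
}
```

Because the reply rides the originating connection, the ambiguity about which of several messengers to answer on disappears entirely.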
C
Yep, so I think most of the remaining users that still use the addr interface are in the monitor, which is doing a bunch of goofy stuff when it's talking to clients, and in the MDS actually, and then the OSDs talking to each other; that's the last one, but at least it's all funneled through one helper function. So there's some hope to clean it up, but I think for the purposes of xio or not, we're in pretty good shape.
D
Yeah, OK.
C
So, back to this multi-stack thing: right now there's an option, ms_bind_ipv6, a bool that globally tells the messenger whether to bind to a v6 address or a v4 address for incoming connections. The client will obviously use whatever protocol the server side is using, but we also have, like, public and cluster... sorry, what is it?
C
"cluster network =" and "client network =". I wonder if we should have, like, "cluster protocol =" and "client protocol =" options that sort of mirror those two, and let that be the way to do it, because then you could say, like, "client protocol = xio", for example, or even give an ordered list, and it would try to bind to an xio endpoint for that, whereas the cluster protocol would be, you know, something else.
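The options being floated might look like this in ceph.conf. Note that "cluster protocol" and "client protocol" are hypothetical here, sketched only to illustrate the proposal alongside the existing network options; the subnet values are placeholders.

```ini
[global]
; existing options: which networks the daemons bind on
cluster network = 10.0.1.0/24
public network  = 10.0.0.0/24

; proposed (hypothetical) options mirroring the two above:
; pick the transport per side, possibly as an ordered preference list
cluster protocol = xio
client protocol  = ipv4, xio
```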
C
And for xio, certainly, it's easy, because it's just an IP address, right? You would have to specify IPs that match the interface that xio, or whatever it is, is connected over. Yeah, I don't know what any other type of address looks like, honestly. Like, in the normal InfiniBand or RDMA world, do they also identify things by IP address, or by some other weird address?