From YouTube: CDS Hammer (Day 1) - Accelio RDMA Messenger
Description
http://goo.gl/U4b70r
28 October 2014
Ceph Developer Summit: Hammer
Day 1
Accelio RDMA Messenger
Matt Benjamin, Eyal Salomon
A
All right, I guess we'll go ahead and get rolling here; looks like most people are back from where they were. The next session we've got going on is the discussion around the ongoing Accelio RDMA messenger work being done by, looks like, Matt and Eyal, who are the blueprint owners on this. This time around, I think, you're in by phone. So if you want to go ahead and give us a little bit of an overview; those following along at home should be able to take a look at the pad.
D
I was hoping for a little bit more feedback from the Mellanox folks about what things were being added to Accelio, but basically the points I wanted to hit here are the new things that Accelio is doing, the new things that the messenger is doing, where the different pieces are, and then to answer questions based on that. So, starting with the Accelio side, and Eyal can bring in more feedback or more data there.
D
Work has been going on over the last couple of months, several months, to add multi-transport support, and the TCP transport is becoming more full-featured. They're also working on new explicit flow control interfaces within it, exposed through the XIO API, to enable applications to get more control over the resources that XIO consumes, but also to get features similar...
D
...you know, like Ceph's throttler, tightly integrated with Accelio's workflow. And they've been gradually fitting different RDMA memory models, transport memory models, and buffering models into Accelio, and that intersects with us, so...
D
So I won't go into the differences between them here, but suffice it to say there are various ways to do memory registration, or to avoid it, and there's the strategy that the system takes to decide when to use RDMA send/receive and when to use RDMA read or write, the latter of which requires registered memory, and so on; switching to the new scheme involved a review of that.
D
We just got done with a major refactoring of that aspect of the XioMessenger. What we found is that the model we were using, which delegated a lot of that detail to Accelio, meant that we couldn't return XIO messages, the basic message objects, back to Accelio...
D
...until we were done with all the buffers that had been associated with that message, which mixed imperfectly with the way the rest of the Ceph environment shares buffers by default. The way around that is what we called "strong claim."
D
But we wanted to... well, actually Accelio wanted to use the opposite model by default: they wanted to allow sharing by default. And we also found that by doing that we could release messages immediately, which would make the XioMessenger more efficient.
D
Yeah, of course. The idea is that if a buffer is read in, the buffer may be registered or not, or whatever; but with the model that we used, Accelio...
D
...essentially, the memory buffers we were using were linked to the transport requests they arrived on, and the message they came in with, basically, and in that model we couldn't retire some fairly heavyweight Accelio resources until substantially later, potentially.
D
There was some extra overhead associated with that, too. Not only that, there's asynchrony required that cost relatively little in terms of the fast path, but it put a lot of load on the slow path; we had these asynchronous completions that fired eventually and burned extra CPU.
D
So getting rid of that produces a new model where we still use the same lock-free memory primitives that Accelio provides, except the application provides all the buffers, and therefore they can be returned to those pools.
D
I went ahead and blended in the claim idea also, for places where we explicitly think it's likely that we'll keep a buffer around for a while, and maybe it's a small buffer, so we may as well dup it. For example, if we're building...
D
There are a couple of current cases that I think are candidates for that in the OSD code, like when we're building up an OSDMap or one of the other cluster maps. These buffers may live for a while; they're not giant, they're not gigantic memory-mapped buffers, and duplicating them would be cheap. But anyway, the new model is that we have pools of pre-registered memory. This is a common model in ibverbs programming.
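The pool model Matt describes (registration done once up front, so acquire and release on the fast path are just pointer operations) is a common ibverbs pattern. Here is a minimal sketch of that lifecycle, with the actual `ibv_reg_mr`/`ibv_dereg_mr` calls (which need real RDMA hardware) replaced by a flag; all names are illustrative, not the XioMessenger's actual code:

```cpp
#include <cassert>
#include <cstddef>
#include <stack>
#include <vector>

// Illustrative stand-in for an ibverbs memory-region handle.
struct MemRegion {
  void* addr;
  size_t len;
  bool registered;  // in real code this would be an ibv_mr*
};

// Pool of fixed-size buffers whose (expensive) registration happens
// once at startup; acquire/release are then cheap pointer operations.
class RegisteredPool {
public:
  RegisteredPool(size_t nbufs, size_t bufsize) {
    storage_.reserve(nbufs);
    regions_.reserve(nbufs);  // keep MemRegion addresses stable
    for (size_t i = 0; i < nbufs; ++i) {
      storage_.emplace_back(bufsize);
      // Real code would call ibv_reg_mr(pd, addr, len, ...) here.
      regions_.push_back({storage_.back().data(), bufsize, true});
      free_.push(&regions_.back());
    }
  }
  // Fast path: no registration, just pop a pre-registered buffer.
  // Returns nullptr when the pool is exhausted (caller must back off).
  MemRegion* acquire() {
    if (free_.empty()) return nullptr;
    MemRegion* r = free_.top();
    free_.pop();
    return r;
  }
  // Called when the last reference to the buffer is dropped.
  void release(MemRegion* r) { free_.push(r); }
  size_t available() const { return free_.size(); }
private:
  std::vector<std::vector<unsigned char>> storage_;
  std::vector<MemRegion> regions_;
  std::stack<MemRegion*> free_;
};
```

The point of the pattern is that the registration cost is paid once, and the question of what happens when `acquire()` returns null is exactly the flow-control discussion that follows.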
D
These pools could be very large if needed; potentially they need not be, but the buffers are pre-registered, and actually registering the memory is a somewhat expensive operation, so we've already done that on these. We associate those in a pre-registered read step that's orchestrated by Accelio, and then a couple of microseconds later, or whatever, we decode the message and hand the Accelio transport request back.
D
The actual low-level message structures that we frame on datagrams for Accelio, we just dispose of immediately. Then the Ceph message just goes off the way it normally would, and when each buffer is unreferenced for the last time, we return it to the pool at that point.
D
It's definitely the new theme, yes, to manage that resource, but we speculate that that's manageable, and we've got some outstanding, you know, we've got some debug hooks and some message-memory and pool-object tracking that we're using to figure out how that's going to work.
B
So what we did in the past with the existing messenger is that when we have incoming messages we're using a throttle that basically keeps track of all the memory that we've allocated on behalf of incoming client I/O, and once you hit a limit, you would essentially just stop reading off the network, so that we can bound the amount of memory resources that are consumed by clients.
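The existing throttle behavior Sage describes can be sketched like this; a simplified illustration of the idea (account for bytes allocated on behalf of in-flight client I/O, stop reading when over budget), not Ceph's actual `Throttle` class:

```cpp
#include <cassert>
#include <cstdint>

// Simplified byte-budget throttle: try_take() succeeds until the budget
// is exhausted, at which point the reader stops pulling data off the
// wire until put() returns enough of the budget.
class ByteThrottle {
public:
  explicit ByteThrottle(uint64_t max) : max_(max), cur_(0) {}
  // Account for 'n' bytes of an incoming message.
  // Returns false when the caller should stop reading from the network.
  bool try_take(uint64_t n) {
    if (cur_ + n > max_) return false;
    cur_ += n;
    return true;
  }
  // Called when the message's memory is finally released.
  void put(uint64_t n) { cur_ = (n > cur_) ? 0 : cur_ - n; }
  uint64_t in_use() const { return cur_; }
private:
  uint64_t max_, cur_;
};
```

The question below is whether this byte-accounting model maps onto RDMA, where the bound is a pool of registered buffers rather than a byte counter.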
B
With RDMA throwing I/O at you, is that same type of model going to work? I guess you'll exhaust those... would you simply exhaust that pool, so that incoming I/O would block because there's no available registered memory, or something? How is that going to work?
D
We have some interfaces that we will always try to implement for that. It won't work exactly the same way; it's likely that in some future pass we will.
D
We would like to leverage Accelio for this. Accelio already exchanges credit data, so it has receive-side flow control that's explicitly shared across the link. Ideally we would expose that to applications; if not, we would likely build that into the protocol above Accelio. It's not exactly the same as being bounded by the amount of memory...
D
...that's in use, but it comes to basically the same thing. As we have it, there's a message budget; there's a baseline message budget that's already established, and that does exist.
D
It's a queue depth, basically, which is in the same space as that budget, for each connection; and yes, if we exceed it, Accelio won't deliver any more messages until we've recovered it.
D
Right, so we finished this wave of refactoring that does this. We just got to a stage where it now runs the entire cluster again, and so we're sort of in cluster testing. We've also supported a couple of internal branches, and we're pounding on it that way. Also, there had been a second branch called xio-giant...
D
I
guess
actually
I
was
like
was
called
zuma,
giant
or
xaos,
and
a
lot
of
that
giants
which
were
which
were
which
we
had
pulled
up
up
to
giant
we
we
have
yet
to
do
that.
We
we
want
to
do
that
at
some
point
shortly.
D
...probably with fast dispatch. If we had those two things, and once we've beaten on things enough to know that we have a good, stable basis for this type of testing, we would hope to be ready for some kind of pull request that would be, you know, conditionally compiled, so folks can run it at this level. Then we have new session management work in progress.
D
That goes past that. I hadn't pushed it yet; it was blocked on stabilizing branches. But there's a prototype of it that implements all the remaining messenger interfaces except the throttlers.
D
It does challenge-response, basically following the same model that the messenger protocol had done, except that it exchanges messages to do it, and so it ping-pongs a series of messages until it reaches...
D
...an initialized state, or else it fails; but if it succeeds, then it has exchanged an authorizer, and so it should be possible at this point to use CephX as well as everything else. It also reduces the size of the message that we're sending across for other messages, because we currently kind of fake some of that. So, you know, that's the CephX piece. We have no idea whether that's useful for RDMA.
C
Yeah, okay, so this...
D
There's a more or less code-complete prototype. It needs more work to be sort of attractive, but it's beginning to work, and that would be sort of the main thing we would like to contribute to this Hammer release. Future stuff: as you know, Sage, the big feature we think is the flow...
D
Control,
stuff
and
and
we'd
like
to
have
the
most
visibility
the
most
sort
of
sharing
of
workload
between
we're
of
work
between
you
know
the
most
visibility
I
guess
of
the
of
them
of
a
common
model
inside
of
accelio.
That's
what
the
knocks
would
like,
also
in
their
and
they're
they're,
actually
exploring
that.
I
think
the
bite.
D
The throttler, for example: the memory budget might be something that Accelio never really knows about, because we're explicitly decoupling that. But I think the message budget ought to be tightly coupled with Accelio, and that's kind of the direction things are going right now.
D
Both are, yeah; without that, everything goes downhill real fast.
B
Okay, so I think that the biggest questions for me are really about how we sort of get this stuff integrated, so I'll give a quick update on this end. Mellanox sent us some cards, so we'll have some RDMA hardware in the lab, which means that once this is in, we'll actually be able to run regular tests against it.
B
The
other
thing,
though,
is
that
now
that
accelio
has
tcp
transport
added,
we
can
actually
just
do
that
on
regular
hardware
and
test
most
of
the
code
same
code
as
exercise
the
rdma
stuff,
but
it'll
get
some
coverage,
so
that'll
be
helpful,
but
I
think
that
the
key
thing
is
how
how
to
get
this
in
a
state
where
it's
ready
to
go
upstream.
I
think
that.
B
There
are
a
few
different
places
where
this
sort
of
breaks
down
into
so
one
is
the
the
buffer
management
stuff.
All
the
bits
that
integrate
with
the
buffer
code.
B
Need
to
get
integrated
in
a
way
that
is
sort
of
non-disruptive
have
there
have
there
you've
mentioned
a
couple
times
that
there's
been
a
bunch
of
stuff
that
you've
played
around
with
the
buffers
have
is
the
xio
memory
management
stuff?
Is
that
particularly
intrusive?
No.
D
I mean, you know, the basic one is just an xio mempool buffer type, which is basically linking it to the mempools. There's a change that originally was intended to be intrusive, but it's not now: it adds a new virtual method to buffer::raw that, you know, is only really interesting for Accelio. It's the strong-claim concept.
D
That's
the
idea
that
you,
if
you
have
a
buffer
type,
that
would
rather
be
cloned
in
response
to
some
high
level
goal.
Then,
and
then
you
know
that,
then
it
integrates
basically
a
new
and
basically
memdup's
these
and
and
returns
the
accelero
buffer.
We
thought
this
would
be
useful
for
us.
I
say
cases
where
we
know
we're
we're:
putting
we're
putting
a
small
buffer
somewhere
sort
of
small
buffer
or
some
place
where
they're
not
going
to
be
returned
for
a
long
time.
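The shape of that hook, one new virtual method on the raw buffer type so a buffer can ask to be memdup'ed rather than pinned when it is "claimed," might look like the following sketch. All names here are illustrative (the real change is to Ceph's `buffer::raw`, whose interface differs):

```cpp
#include <cassert>
#include <cstring>
#include <memory>

// Illustrative base buffer type: by default, claiming shares the buffer.
struct Raw {
  std::unique_ptr<char[]> data;
  size_t len;
  Raw(const char* src, size_t n) : data(new char[n]), len(n) {
    std::memcpy(data.get(), src, n);
  }
  virtual ~Raw() = default;
  // New hook: a buffer type that would rather be copied than held
  // (e.g. registered RDMA memory that must go back to its pool) says so.
  virtual bool prefers_copy_on_claim() const { return false; }
};

// An RDMA-pool buffer opts in, so small or long-lived claims get duped
// and the registered buffer can be returned to its pool immediately.
struct XioPoolRaw : Raw {
  using Raw::Raw;
  bool prefers_copy_on_claim() const override { return true; }
};

// "Strong claim": dup the buffer when it asks for it, else share it.
std::shared_ptr<Raw> strong_claim(const std::shared_ptr<Raw>& b) {
  if (b->prefers_copy_on_claim())
    return std::make_shared<Raw>(b->data.get(), b->len);  // memdup
  return b;  // share by default
}
```

As the discussion notes, a caller has to opt into this through the claiming interfaces; buffers that never opt in behave exactly as before.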
D
But again, that doesn't... that's not intrusive. So that's the sum total of what it needs.
D
Well, this is historically the way we came up with it. Initially, you know, I strategized with the guys over here about having that as a way to deal with the previous model, where we wanted to block sharing in the unmarked case; that would have had a performance cost. So in that version of the logic, which you can read...
D
If
you
want
it's
out
there
is
it,
it
does
things
it
decides.
You
know
that,
basically,
basically,
we
made
all
we
explored
the
possibility
of
making,
essentially
all
sharing,
do
this
by
default
and
then
make
it
make
sharing
the
unmarked.
I
mean
the
mark
case,
and
so
the
nmr
case
in
buffer.
But
that's
what
that
that
would,
but
this,
but
this
change
is,
is
not
about
it's
not
doing
any
of
that.
It's
it's
just
saying
basically
cal
the
buffer.
D
If
you
call
buffers
through
through
some
bit
through
some
through
some
obvious
interfaces,
if
we're
grabbing
something-
and
we
may
I
mean-
and
we
don't
want
to-
and
we
don't
want
it
and
we're
explicitly
aware
that
we
don't
want
that,
we
don't
want
to
hold
it
and
hold
volatile
memory,
basically
dude,
so
you
would
have
to
do
it.
Otherwise
it
has
no
there's.
No
one.
There's
no.
B
So
the
other
okay,
so
going
back
to
as
far
as
getting
this
kind
of
stream,
it
feels
like
there
are
a
couple
of
things,
so
one
is
making
sure
that
the
buffer
code,
the
generic
buffer
bits,
aren't
and
that's
pretty
simple.
I
think
there's,
though,
that
the
build
integration
just
we're
still
using
automake,
so
we
need
to
flex
with
all
the
automatic
config
files
so
that
there's
a
enable
xio
thing
that
does
all
the
library
detection
and
defines
the
right
things
properly.
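That build-integration piece might look roughly like the following `configure.ac` fragment; this is a hedged sketch of the usual autoconf `--enable-*` pattern, not the actual patch (the `xio_init` symbol check and the `ENABLE_XIO`/`HAVE_XIO` names are assumptions):

```m4
AC_ARG_ENABLE([xio],
  [AS_HELP_STRING([--enable-xio], [build the Accelio (XIO) messenger])],
  [], [enable_xio=no])
AS_IF([test "x$enable_xio" = xyes], [
  AC_CHECK_LIB([xio], [xio_init], [],
    [AC_MSG_ERROR([--enable-xio requested but libxio not found])])
  AC_DEFINE([HAVE_XIO], [1], [Define if the XIO messenger is built])
])
AM_CONDITIONAL([ENABLE_XIO], [test "x$enable_xio" = xyes])
```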
B
So
this
is
conditionally
built
and
there's
actually
a
library
itself.
B
I
don't
know
how
you
guys
are
building
or
distributing
it,
and
this
is
where
I'm
gonna
set
up
red
flags
for
danny,
but
that
is
is
the
idea
that
xio
is
gonna,
be
packaged
independently
and
independently
and
that
we
would
dynamically
against
it.
Or
are
you
using
this
as
a
as
a
static
linking
it
in
or
what's
what's
the
plan.
D
Both are possible, but initially, basically, it's not for me to say what Mellanox wants to do, I mean.
B
Okay, well, I mean, maybe the strategy here is to do the same thing we did with RocksDB, where there are with-static-xio and with-dynamic-xio type options, so you can do either one.
D
Yeah, I actually like static linking for this, and initially I did everything with that, so both will be accessible. But I think once we can agree on the mechanical location of the code... there's an official Accelio GitHub that has all their published branches and everything, and then I'd just clone that, I would assume. I think I would be fine doing that.
B
Okay, yeah, that'll let us test it on multiple distributions without having to build a bunch of packages, which is not what we want to do. Okay, and then there are just the messenger bits themselves. You probably noticed this when we merged the async messenger, but we split up the messenger subdirectory, so there's a subdir for each of the different messenger types (there's a simple directory and an async directory now), so I assume it would be pretty simple for you to...
B
...create a factory function that just instantiates the messenger, with a config option that selects it. In the past, your early branches had a whole bunch of changes to all the other bits of code to, like, create different messenger instances and stuff. Have you cleared all that stuff out now? Is any of that still necessary?
D
If
you've
had
branches
that
cleared
all
of
it
out,
we
we,
in
other
words
for
our
stabilization.
We
haven't
taken
it
out
and
we're
just
trying
to
stick
with
keeping
things
as
simple
as
unchanged
as
possible.
D
C
Okay,
okay,.
B
Yeah,
I
think
the
other.
The
other
thing
right
now
is
that,
right
now
the
factory
function
the
factory
conditional
based
on
g
conf,
ms
type
or
whatever,
but
that
probably
needs
to
change
so
that
the
caller
is
passing
in
the
messenger
type
it
wants
to
instantiate,
so
that
you
can,
you
know,
have
the
messenger,
for
example,
could
use
one
on
the
back
end,
one
on
the
front
end
or
something
like
that.
If
you
want
to
support
that
sort
of
thing
later,
but.
D
We have a hack that deals with that, but especially if we're gonna have different networks using different ones, we would want to have control over that.
D
It's something that we plan to work on over the next while, but I don't have a really well-defined timeline for when it would land.
B
Yeah, it's going to be a little tricky, because on the client side you want to be able to sort of use whatever transport is in use on the server side, but it doesn't really break down that way. I think, for the time being, it's just going to be that for any particular plane of the cluster, or whatever, there's going to be a single networking type used.
B
But probably the simplest thing at this point would just be to change the entity_addr_t type field: have an XIO type, but still store a sockaddr that identifies the endpoint. The address would actually be typed, so you know that the endpoint was an XIO endpoint, and for now I think your messenger would just sort of assert that that was true; in the long term we'd eventually be able to behave dynamically based on that, probably.
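Typing the address as suggested (keep the sockaddr identifying the peer, but tag it with a transport type that the XioMessenger can assert on) could be sketched as follows; this is illustrative, not the real `entity_addr_t` layout:

```cpp
#include <cassert>
#include <cstdint>

// Illustrative typed endpoint address: the sockaddr-ish payload still
// identifies the peer; 'type' says which transport the address is for.
struct EntityAddr {
  enum class Type : uint32_t { NONE, LEGACY, XIO };
  Type type = Type::NONE;
  uint32_t ip = 0;  // stand-in for the stored sockaddr
  uint16_t port = 0;
  bool is_xio() const { return type == Type::XIO; }
};

// For now the XIO messenger would just check the type matches; later,
// dispatch could choose a transport dynamically based on the tag.
bool xio_accepts(const EntityAddr& a) { return a.is_xio(); }
```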
B
But I guess the biggest thing is just pulling up all the current code to giant and sort of carving it up into pieces that we can merge. So I would probably do it in, you know, this order: initially a patch that just sets up all the build stuff (the submodule and/or dynamic linking to the library), then the buffer management pieces that are conditionally built if that's turned on, and then adding the messenger type piece.
B
Okay, I guess while we're talking about it, I'll take just a quick moment to talk about the other messenger, the async messenger, from Haomai at UnitedStack. We merged that; it's disabled by default and has been tested sort of externally, but we haven't actually run any tests in the lab yet. All the pieces are there to actually do that; we just haven't done it yet. So I think probably what we want to get towards is...
B
It would be nice to have a test suite that just exercises all the different messenger implementations with some similar workloads that stress-test just the networking layers, and then eventually, in our other test suites, we could have the config sort of randomly pick different messenger types, so we get just general coverage of the implementations.
B
That's kind of the problem, but at the same time I don't want to explode the current suites out by, like, 3x by saying run this on every messenger. I want a constrained thing that will sort of focus on just the ones that are doing as much failure injection as possible, with maybe some OSD thrashing and, yeah, lots of clients, I don't know, presumably.
B
Yep, we could do that too, but still, having a collection that puts as much stress as possible specifically on the messenger really would be helpful. Yeah, I think, I mean, ideally, I think, so...
B
The
the
simple
messenger
and
the
async
messenger
are
protocol
compatible
and
the
exterior
one
obviously
is
not
because
it's
not,
but
at
least
for
those
other
two,
it
would
be
ideal
if
we
could
eventually
get
to
the
point
where
the
messenger
implementation
is
just
randomly
chosen
for
every
run
or
something
so
that
we
just
have
a
broad
spectrum
coverage
for
for
both
of
them
and
then
in
the
in
the
xio
case.