From YouTube: CDS Hammer (Day 1) - Buffer Encoding
Description
http://goo.gl/U4b70r
28 October 2014
Ceph Developer Summit: Hammer
Day 1
Buffer Encoding Discussion
Sage Weil
B
Yes, so there were a couple of threads on the mailing list calling out buffer encoding as one of the things that is slow — not really, yeah, encoding and decoding — and there were a couple of different tangents to the conversation. One: Matt had mentioned that they were playing around with some possible optimizations, I think just in the core buffer code, like optimizing append, something like that, but I'm not sure exactly, so it would be great to hear about that. And then, on the other end of the spectrum, Haomai had some more radical proposals for changing the way that messages are structured in general, so that there are more fixed-size, sort of straight memcpys in the encoding stage. So, Matt, do you want to start by talking to us a little bit about what you guys have done and what you were looking at?
B
Okay, well, Sam's here at least, so — yeah, I mean, the basic problem is that whenever you do sort of a profile of the cluster under load, one of the top things that pops up, if I remember correctly, is buffer append and all the encode/decode wrappers around, you know, u64 and all the rest. In general, on most architectures we're encoding on, little-endian, those are just doing memcpys, writing to the buffer, or the buffer list.
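As a minimal standalone sketch of the point being made — not Ceph's actual `encode()` machinery — on a little-endian host, encoding a u64 in a little-endian wire format reduces to a plain memcpy into the output buffer, so any overhead beyond that is wrapper cost:

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>
#include <vector>

// Hypothetical illustration: on a little-endian machine, encoding a
// fixed-width integer into a little-endian wire format is just a copy
// of the in-memory bytes into the output buffer.
inline void encode_u64_le(uint64_t v, std::vector<unsigned char>& out) {
    unsigned char tmp[sizeof(v)];
    std::memcpy(tmp, &v, sizeof(v));   // assumes a little-endian host
    out.insert(out.end(), tmp, tmp + sizeof(tmp));
}

inline uint64_t decode_u64_le(const unsigned char* p) {
    uint64_t v;
    std::memcpy(&v, p, sizeof(v));     // assumes a little-endian host
    return v;
}
```

Anything the real wrappers add on top of those two memcpys (length checks, asserts, function-call overhead) is the cost under discussion.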
C
That's — oops, I was talking while I was muted, yeah. So, there has been some speculation that it may be function call overhead; a lot of those functions are in separate compile units, so there's no inlining happening. We could make an effort to pick a representative group that would allow most of the path to be inlined, inline them, and see what effect that has. That'll tell us whether that's actually the problem.
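The cross-compile-unit point can be illustrated with a hypothetical example (names invented here): a tiny hot function whose definition lives in another .cc file is an opaque call at every use site, so the compiler cannot inline it without LTO; moving the definition into the header as `inline` removes the per-call overhead.

```cpp
#include <cassert>
#include <cstddef>

// before (buffer.cc, separate compile unit):
//   size_t buffer_len(const Buffer& b);   // opaque call, never inlined
//
// after (buffer.h, visible to every caller):
struct Buffer {
    const char* data;
    size_t len;
};
inline size_t buffer_len(const Buffer& b) { return b.len; }  // now inlinable
```

Doing this for a representative group of the hot encode/append helpers, as suggested, would show whether call overhead is really the bottleneck.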
B
Yeah, if I remember, it's not just function call overhead; I think the fast path in the encode could also be improved, because we're always starting with the buffer list, then comparing lengths, then we're looking at the buffer pointer for the append buffer, then rechecking the length there again, then doing a bunch of asserts, and then eventually, finally, copying into the buffer. I suspect just some really simple micro-optimization would help.
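A minimal sketch of what such a micro-optimized fast path could look like — illustrative names only, not Ceph's real `buffer::list` API: one capacity check and one memcpy, with the redundant length re-checks and asserts pushed to a slow-path fallback.

```cpp
#include <cassert>
#include <cstddef>
#include <cstring>

// Hypothetical append buffer with a single-check fast path.
struct AppendBuffer {
    char data[4096];
    size_t used = 0;

    // Fast path: one bounds check, then one copy; no asserts.
    // Returns false so the caller can fall back to a slow path
    // (allocate a new buffer, re-link the list, etc.).
    bool try_append(const char* src, size_t len) {
        if (len > sizeof(data) - used)
            return false;
        std::memcpy(data + used, src, len);
        used += len;
        return true;
    }
};
```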
B
Yeah, okay, so inlining and micro-optimization there. The other options, then, are trying to do less buffer append and encode work in general — moving up a layer, so that, for example, instead of encoding structures member by member, we could try to make the in-memory representation map to what we would be encoding, and do sort of straight-up copies, at least for the hot structures.
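A sketch of that "hot structure" idea, with invented field names and assuming a packed layout on a little-endian host: if the in-memory layout matches the wire layout, the whole struct is encoded with one memcpy instead of one encode call per member.

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>

// Hypothetical hot struct whose in-memory layout is forced to match
// its wire layout (packed, little-endian host assumed).
#pragma pack(push, 1)
struct HotHeader {
    uint64_t tid;
    uint32_t flags;
    uint32_t len;
};
#pragma pack(pop)
static_assert(sizeof(HotHeader) == 16, "layout must match wire format");

// One copy for the whole struct, instead of member-by-member encode.
inline void encode_hot(const HotHeader& h, unsigned char* out) {
    std::memcpy(out, &h, sizeof(h));
}
inline HotHeader decode_hot(const unsigned char* in) {
    HotHeader h;
    std::memcpy(&h, in, sizeof(h));
    return h;
}
```

The trade-off is that the in-memory layout is now frozen by the wire format (alignment, field order, endianness), which is why this would be limited to the hot structures.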
B
Okay, and then the last one was a much more ambitious change that Haomai was proposing: essentially, completely restructure the message encoding, so that instead of each message type — or at least the key message types — sort of ad hoc filling in all the fields in the message structure on the wire, we'd have fixed-size chunks and send those across. Did you read that email, Tim, in detail?
B
Yeah, yeah, that one definitely scares me, but on the other hand, if there is going to be an incompatible change, the only place where it's really important is on the fast-path messages — MOSDOp and MOSDOpReply, and sub op and sub op reply, right? Those are the only ones that are really going to see an impact.
B
Yeah, you know, a different way of approaching this — also a more radical change — would be to restructure the message classes so that the in-memory representation is the on-wire representation, and have all the accessors pull fields out of the encoded buffer, right? Because right now, what happens when you get a message is that we basically copy everything out into properly typed members.
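A minimal sketch of that accessor idea, with invented offsets and field names: the message object keeps the received wire bytes and the accessors decode fields on demand, instead of copying everything into typed members when the message is received.

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>

// Hypothetical read-only view over a received message's wire bytes.
// Field offsets here are illustrative, not any real Ceph format.
class MessageView {
    const unsigned char* buf_;
public:
    explicit MessageView(const unsigned char* buf) : buf_(buf) {}

    uint64_t tid() const {      // decodes straight from the wire bytes
        uint64_t v;
        std::memcpy(&v, buf_ + 0, sizeof(v));  // little-endian host assumed
        return v;
    }
    uint32_t flags() const {
        uint32_t v;
        std::memcpy(&v, buf_ + 8, sizeof(v));
        return v;
    }
};
```

The receive path then does no up-front decode at all; each field is only paid for when an accessor is actually called.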
C
The challenge there is that, currently, messages are mutable, so that's no good — we can't have that. So the first thing is to change every user of the existing messages to work in terms of a constant message that is built in the constructor, or built, in some sense, in an append-only fashion. That's the tricky part, where append-only obviously means something where you have to set the...
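A sketch of the constant-message shape being described — names and fields are invented here: every field is supplied up front, the constructor encodes once in an append-only fashion, and afterwards consumers only get const access, so the encoded bytes can double as the in-memory representation.

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>
#include <vector>

// Hypothetical message that is immutable after construction.
class ConstMessage {
    std::vector<unsigned char> wire_;

    // Append-only encoding, used only during construction.
    void append_u64(uint64_t v) {
        unsigned char tmp[sizeof(v)];
        std::memcpy(tmp, &v, sizeof(v));   // little-endian host assumed
        wire_.insert(wire_.end(), tmp, tmp + sizeof(tmp));
    }
public:
    ConstMessage(uint64_t tid, uint64_t version) {
        append_u64(tid);
        append_u64(version);
    }
    // No setters: users work in terms of a fully built, constant message.
    const std::vector<unsigned char>& wire() const { return wire_; }
};
```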
B
Well, I'm not sure — I'd want to see a profile — but I think that you'd see just the append of the encoding, like u64, show up, just because there are a lot of them all over the place. But maybe not, yeah, okay, okay. Well, so the next step, really, is we just need profiling data to know what to do next.
D
There's a lot to be said for organizing it that way, but not every system is ready for that today. We do have a change in progress that is aimed at reducing the costs we pay managing the buffer lists; we've tried several things in one change.
D
The more interesting insight, probably: when I do profiling, anyway, I see buffer release a lot, so we're paying some costs for the sharing that we may not actually be using — all those sharing kinds of things show up. So we don't know everything about that yet, but we have a change in progress there.
D
That's the idea. We started typing those up; we'll push those, yeah. We haven't fully typed this up and tested it, but we started working on it, and we found that making the buffer pointer use an intrusive list was the first step.
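A hand-rolled sketch of the intrusive-list idea (the real change might use something like boost::intrusive; names here are illustrative): the list link lives inside the buffer-pointer object itself, so linking a pointer into a list needs no separate node allocation.

```cpp
#include <cassert>
#include <cstddef>

// Hypothetical buffer pointer with an embedded (intrusive) list link.
struct BufferPtr {
    const char* data = nullptr;
    size_t len = 0;
    BufferPtr* next = nullptr;   // intrusive link: no separate node object
};

// Singly linked list of BufferPtrs; push_back is O(1) and allocates nothing.
struct BufferList {
    BufferPtr* head = nullptr;
    BufferPtr* tail = nullptr;
    size_t total = 0;

    void push_back(BufferPtr* p) {
        p->next = nullptr;
        if (tail) tail->next = p; else head = p;
        tail = p;
        total += p->len;
    }
};
```

Compared with holding the pointers in a `std::list`, this removes one heap allocation and one pointer chase per list node, which is exactly the kind of per-append management cost being discussed.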
B
Right
that
makes
sense,
okay
and
that
that
sounds
that
sounds
promising.
So.
B
B
B
Yep:
okay,
okay,.