From YouTube: 2019-03-14 :: Ceph Performance meeting
A: All right, so let's see. This first new PR increases messenger ops, right. The author is reporting really big increases, which looks really interesting. For someone who knows the messenger code, it would be worth looking and saying whether this is a good idea or not, but at least the numbers that are being reported are pretty massive. So that's exciting.
A: There's another PR, also related to the messenger and the async connection; it looks really specialized: an optimized check for loopback connections. I don't know if that will actually matter much in typical use cases, but anyway, people are looking at the messenger code, which is good, because that is also where I started zeroing in on some recent performance bottlenecks. There's a PR for the MDS to convert unnecessary usage of std::list into std::vector. Actually, this is a performance PR I added. It might not actually make any difference, but I figured this is the kind of thing that sometimes can. So anyway, there's that one.
A: There are two PRs that closed this week. One is to update dmclock to the newest version, which apparently has some kind of performance enhancements. I didn't see any numbers, but that's good; it also specifically merged into Nautilus, so we'll see how that goes, but it must have been important enough that we felt we needed to get it in. The other PR is another messenger PR, from [name unclear], batching handling in send_message. Apparently that didn't actually improve performance in the most recent round of testing, but at least to my eye, when I looked at it, the performance numbers are all over the place, so we can't really draw any conclusions from them. It's probably better to go back and again look at perf results or a wallclock profile and try to determine whether or not it's improving the behavior, even if the performance doesn't necessarily increase.
A: Maybe some of the same goals: being able to do batching of sequential I/O operations and kind of move more, rather than using a cache, more like an I/O scheduler type system. That's very new, so I actually don't think I got that into the new PRs, which I should, but anyway, yeah. So there's discussion going on regarding that. And then finally, this PR for canceling ops in the messenger when ops are redirected: Greg put in a new review and questions about whether or not the rework is really worth it.
A: It looks like this, again, is more async messenger improvements; Greg approved, and it's in QA. Both of Adam's PRs are, I think, up there, but, you know, are still kind of ongoing.
A: All right, well then, I will very quickly go over a couple of new things and then we'll wrap up. First is that in gdbpmp I added the ability to do inverse wallclock profiling, or at least get inverse call graphs out of it. This was added because we've been looking at OSDs in very low CPU usage scenarios, where we only have maybe even a portion of one core, and that proved to be fairly useful. So it works; it's there if anyone wants it. Kind of useful.
A: I think Greg has probably been the biggest advocate of leaving the messenger logging at the current levels, so that it makes debugging user clusters easier. I don't want to discount that, because for the folks who are actually going through and looking at real bugs on user clusters, if that's useful, I don't think we should necessarily get rid of it. But it is a big problem, especially for OSDs that are very CPU constrained.
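For reference, the messenger logging under discussion is controlled by the `debug_ms` option; on a CPU-constrained OSD it can be turned down. A sketch of the setting (the values shown are illustrative, not a recommendation from the meeting):

```ini
# ceph.conf sketch: reduce messenger debug output on OSDs.
# debug_ms takes "log level/in-memory level"; 0/0 disables both.
[osd]
    debug_ms = 0/0
```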
C: I would try to stay far away from spin locks in userspace. The reason is the cascade of increased CPU demand: if the code is using spin locks, and the process that owns a spin lock at a given moment is preempted, then all the other threads wanting to take that spin lock will spin. That means a huge increase in CPU usage. Yeah, I'm afraid that spin locks are quite dangerous in userspace.
B: Going back, for the seven or eight, nine, ten, however many years now since we started doing this, and it's not that long, seven or eight years, it goes back to year zero for us: turning logging off, as opposed to leaving it essentially all on, was the only way to get within two orders of magnitude of actual latency in any measurement. So the overhead is obviously huge.
B
Don't
that
we've
talked
about
solutions
a
lot
but
precomputing
longhand,
precomputing
everything
in
there,
but
and
so
moving
less
data
into
the
log
is
seems
like
ideal
starting
points
and
then
past
that
finding
some
efficient
way
just
to
slot
it.
So
that
so
we're
that's
those
already
risk
intention
because,
as
you
point
out,
I
don't
agree
with
the
same
about
speed,
lock
some
user
space
as
being
as
being
the
gloss
but
I,
but
but
it
when
there's
my
contention,
it's
terrible,
yeah,
I
think.
C: Yeah, but real quick: the real problem is that we don't know whether debug_ms = 1 is really necessary. We are programmers; what knowledge do we have about the real usefulness of this thing when it comes to seeing it from, say, the support perspective? I just don't know.
D: Sorry, can you guys hear me? (Yes. Yep.) I'm working on the PR, you know, I've talked about this a while back: using LTTng for logging. And I did actually benchmark it: I changed the logging in BlueStore, that is, only in BlueStore, moving from our old logging to LTTng, and I tested it on a real cluster, but pretty old hardware and just spinners, and the gain is not really big.
B: Five or six years ago we did something similar in [unclear]. We had the same experience: we got a big speedup in logging when we consolidated things into a preformatted buffer, then shrunk it, and reused local caching of that buffer.

D: Yeah, exactly, yeah. Well, LTTng, in essence, is very similar to that approach. Yeah.
A: As well, the big one: Mohammed, if you could look at... do you have a source tree handy? Yeah: in the messenger, DispatchQueue, pre-dispatch, that's one to look really closely at. Just basically everything in DispatchQueue, actually; that would be a really useful place to start out, I think.
D: Exactly. And one of the things about LTTng is that it kind of restricts how you can craft your payload, which can be good in some cases; it kind of forces the developer to think about how to craft it, rather than... but yeah, to your point, that's completely true. I'll look into this one specifically.
A: Cool, yeah, that's, I think, exactly the message that Greg cares about too, if I remember right, so that one's a very worthwhile one to look at. All right, cool. Let's see, do I have anything else in here? I guess the only other thing is that, yeah, there's a lot of stuff that hasn't been helped yet; the debug_ms = 1 stuff looks like it's kind of a big thing right now, and then just general memory management and object lifecycles.
A
We
have
all
kinds
of
stuff
that
gets
created
and
deleted
all
over
the
place,
and
it
just
it's
it's
spread
out,
but
the
less
we
can
do
that.
Probably
the
better,
though,
if
you
know
here,
parts
of
the
code
and
if
there's
stuff
that
we
can
do
to
avoid
creating
and
deleting
objects
all
over
the
place.
That's
probably
you
know
a
worthwhile
goal
object.
Life
cycle.
Could
you
sorry
I
mean
like
just
we
there's
a
bunch
of
places
where
in
profiling,
all
this
stuff,
where
I'm,
seeing
that
we're
just
creating
temporary
objects?
B: I had a question that kind of came from downstream, but I wanted, or needed, to help upstream it, and I figured a lot of people would be here to talk about it; sure, other people can filter in to help as well. Folks are asking us about the overall level of our CPU-architecture-related optimization, and I did not even have a good answer to that. And not only that: obviously it's interesting, the subsystems and their engagement with different libraries and different platforms. I don't think we're really capturing this anywhere.
B: I mean, kind of like constructing a sort of readout, or a list, or a report of the different optimization levels of the different components, all of that pulled out from the build and from our compiler and platform options as given, so that you would be able to say: well, if you're using erasure coding, you know, we built it this way, with these SIMD optimizations, plus there's an assumption of SSE4+, you know, then, for...
C: But wouldn't it apply solely to the things that are under our control? I mean, especially things like ISA-L crypto and all those dependencies we are compiling together with Ceph, because I guess we don't want to introduce our own version of, let's say, libcrypto, with some specialized tuning, specialized optimizations, compiler flags. Well, I guess we don't.
B: We'd only know if we notice it, you know, that we're getting some... some of this, in the case of crypto, as you mentioned. There are reports, we have had some from downstream, we've had some from upstream: there are places, environments, where we are getting terribly bad acceleration.
B: Not necessarily, but that's one option. I mean, we noticed it for NSS, but OpenSSL... a bunch of platforms moved to OpenSSL and got better acceleration (mmm-hmm), so once that transition is done, maybe this could become somewhat moot. But there might be some place where, as we discover, it's desperately important, and if we're not watching it, we wouldn't notice.
C: True, we don't have such research at the moment, but constructing the list you propose would also require some involvement from, let's say, the vendor of your operating system, because we are using a lot of external libraries, especially for crypto. I'm not sure about [unclear]; I think we are using our own, integrated in-tree, I guess.
B: I don't really have a number, but maybe someone will find it; there's enough to scream about. And then, in the end, there's the open tracker about SHA-256 computation. Okay, that's one, and that's a pretty big one for RGW and S3, not to mention all the other things I suspect are floating about, and then performance with SSL.
B
So
you
know
with
TLS
I'm
the
luminous
for
time
frame,
that's
pretty
and
then
some
platforms,
it's
pretty
terrible
too,
and
probably
because
you're
not
getting
acceleration,
but
maybe
even
a
yes,
that's
weird
I
guess:
there's
a
general
problems,
but
I
think
but
I
think
shot
if
it
takes
is
a
good
example.
We.
B: I'm not calling anyone out or trying to make a big mess of it, but I'm going on the idea that there are spots. Maybe what I'd find most interesting to do is have a careful look at those spots, but also wherever we land in the conversation: if shifting to some other [library], you know, making a couple of shifts. Yes, there's a lot of things; this example might suggest that there are quite a few places where we want to make sure we get that.
C: RGW has a dedicated wrapper for calculating the HMAC, something like HMAC-SHA256; internally it uses our own crypto HMAC abstraction. Until today it was implemented on top of NSS: even after the transition of our hashing stuff, it was still using NSS, while OpenSSL has some differences when it comes to HMAC support between versions. This commit eradicates that issue, and RGW should be able to utilize OpenSSL.
B: Okay, so there's further work to do, but this piece alone is huge; this is no small thing, because this probably addresses that tracker issue completely. We would still want to do some measurements, but if that's true, this could be a very large speedup, or a very large reduction in CPU, depending.
B: I don't know the answer; you, Marcus, and I can talk about this offline. The answer is no secret, but I mean: do we need Keystone, do I need that, for those APIs? In fact, moreover, you're the expert here, you and Marcus are, I guess. But if we're keeping them for other reasons, like a compliance reason, to duplicate... if we can't use them anyway, I mean, I'd sure like to get rid of them, if there's some other way to do the same work.
A: All right! Well then, let's adjourn this meeting, and we'll possibly get back together next week. With Nautilus landing soon, everyone's really busy, so we'll kind of see if there's anything to talk about, but let's plan on next week, and I'll send out an announcement if we end up punting, just because people are working on stuff. Anyway, have a good week, and see you later, guys.