From YouTube: Ceph Performance Meeting 2018-11-08
A
Either you're on mute or I've got some stuff happening. Let me check.
B
Absolutely. Let me put in the chat window here: I'm actually working on it right now, because I was bad and tried to multitask too much this morning and got behind on reviewing pull requests. So I'm kind of wrapping up the new stuff now, and I have a whole bunch of old ones I still need to go through this week.
B
I should probably send out something to the mailing list about the meeting. I don't know that we're gonna get a whole lot of people today; a bunch of folks are at the ScyllaDB conference and then migrating over to the RocksDB conference in the Bay Area today, so I think it's going to be a little small.
B
All right guys, let's get started here. I confess again that I'm behind on my pull request review — this morning I multitasked too much and got behind on it, and I'm still working on that. In fact, what I'm thinking is maybe I'll continue to do that and we'll flip things around, because Jesse has a good topic he'd like to discuss here — VLAs versus vectors and strings — and maybe I'll work on the reviews in the background. So, hey Jesse, what do you have for us?
A
Sure.
A
So, you know, a couple of times I've noticed that in Ceph, in various places, we've got — I haven't counted exactly, but it looks like about a hundred and ninety VLAs. And looking at them — I mean, I can't read people's minds, but my bet is that most of them are probably fairly unintentional, or are things that could probably be replaced with std::vector or, in a lot of cases, actually with std::string.
A
If we're interested in doing so — now that the Linux kernel has banished VLAs, I thought we'd take the temperature again and just see how people are feeling about them. And maybe it's a good time to talk about, you know, some performance implications of changing them, or, if we really want to: is it worth the effort?
A
But unfortunately, there's no way to detect errors from the stack allocations, so this implies a bunch of things with, you know, a relationship to security — they can cause buffer overruns and all kinds of fun. But also — and this is the one that is always a red flag with me — the stack corruption caused by a failed VLA. You know, your program can keep running for a long time before you ever see anything caused by that, and once your stack has been smashed, it can be really hard to find what actually caused it.
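A minimal sketch of the failure mode being described (not code from the Ceph tree; the function name is made up): a VLA has no error path at all, while the std::vector equivalent makes an allocation failure observable instead of silently corrupting the stack.

```cpp
#include <cassert>
#include <cstdint>
#include <exception>
#include <vector>

// With a VLA there is no way to observe allocation failure:
//
//     void fill(size_t n) {
//         char buf[n];   // if n exceeds the stack, this is silent UB:
//         // ...         // the stack is smashed and the program limps on
//     }
//
// The std::vector version reports the failure as an exception instead:
bool fill(std::size_t n) {
    try {
        std::vector<char> buf(n);  // heap allocation, size-checked
        buf.back() = 'x';          // use the buffer normally
        return true;
    } catch (const std::exception&) {
        return false;              // bad_alloc / length_error caught here
    }
}
```

For example, `fill(4096)` succeeds, while an absurd request like `fill(SIZE_MAX / 2)` fails cleanly at a well-defined point rather than corrupting the stack and crashing much later.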
A
So, you know, my personal preference — and this isn't just me, it's industry-wide, though it may just be me here — I tend to prefer to squish them, unless, you know, someone has left a comment or something saying "hi, I'm an expert and I'm doing something expert-level here." You know, my bet is, because the syntax is so innocuous, a lot of times people are sort of writing them, possibly even without realizing it. So let me just ask the room: how do people feel about it?
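To illustrate how easy an accidental VLA is (hypothetical function, not from the Ceph tree): the declaration looks exactly like a plain array — only the bound being a runtime value makes it a VLA, and GCC/clang accept it silently as an extension unless warned about.

```cpp
#include <cassert>
#include <cstdio>
#include <string>

std::string quote(const std::string& s) {
    char fixed[64];               // compile-time bound: an ordinary array
    // char vla[s.size() + 3];    // same shape, but the bound is runtime:
    //                            // a VLA, accepted silently as a GNU
    //                            // extension unless -Wvla / -Werror=vla
    //                            // is enabled
    std::snprintf(fixed, sizeof(fixed), "\"%s\"", s.c_str());
    return fixed;                 // (truncates past 61 chars; fine for a sketch)
}
```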
B
Let's just say I was not happy with that, even though it probably was a slight performance gain, right? Because you can, like, malloc things upfront and kind of get a nice contiguous range of memory. On the VLA side, I don't actually know — what do we... where do we use it? How do we use it?
A
So — I haven't visited every single call site. I do have a file I can point you at and share with the group, where I just found all the places that we're using them by turning on the GCC warning for them. Most of the time, what we're doing with them is simply allocating a buffer at runtime, and a lot of that occurs in rgw, or sometimes MGR or other places.
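The common shape described here — a runtime-sized scratch buffer that lives only for the call — has a fairly mechanical replacement. A sketch with a hypothetical helper:

```cpp
#include <cassert>
#include <cctype>
#include <cstring>
#include <string>
#include <vector>

std::string shout(const std::string& s) {
    // VLA version:    char buf[s.size() + 1];
    // Replacement: a heap-backed, zero-initialized buffer of the same size.
    std::vector<char> buf(s.size() + 1);
    std::memcpy(buf.data(), s.data(), s.size());
    for (char& c : buf)
        c = static_cast<char>(std::toupper(static_cast<unsigned char>(c)));
    return std::string(buf.data());  // buf ends in '\0', so this is safe
}
```

The code shape stays the same — `buf.data()` goes wherever the VLA's name went — only the storage moves from stack to heap.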
A
We have them all over, really, but most of the usages are probably actually places where someone could have gotten away with using std::string, because there are a lot of functions where we're actually taking a std::string, converting it into a C string backed by a VLA, doing a bunch of operations on it, then creating a new C++ std::string and returning it out.
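A sketch of the round trip being described, and the collapse it permits (names hypothetical):

```cpp
#include <cassert>
#include <cctype>
#include <string>

// The pattern: std::string -> C string in a VLA -> operate -> new string.
//
//     std::string lower(const std::string& in) {
//         char tmp[in.size() + 1];                 // VLA copy in
//         std::strcpy(tmp, in.c_str());
//         for (char* p = tmp; *p; ++p) *p = std::tolower(*p);
//         return std::string(tmp);                 // extra copy out
//     }
//
// The same work done directly on one std::string copy:
std::string lower(const std::string& in) {
    std::string out = in;
    for (char& c : out)
        c = static_cast<char>(std::tolower(static_cast<unsigned char>(c)));
    return out;
}
```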
A
Where, you know, there might be other patterns that could avoid that extra copy. For one thing, that whole extra set of allocations is probably killing any advantage of using the VLA. I mean, when you go measure, you know, like std::vector versus VLA, for example, it's possible VLAs are a little faster, but it's not as far apart as you might think. So, unless you're in a tight loop — and that's the situation I would watch for, where someone's intentionally using these — it's hard to say.
A
Erasure code probably has the most complicated usage of them that I've seen so far, and it's difficult to say, you know, again, what the intent of the program is. You can replace them with std::vector pretty easily in that module, from what I've seen so far, but whether or not that's a critical net performance win or loss is hard to say. I haven't seen, so far, too many cases where you can replace them with just a regular array.
A
I mean, they're mostly looking like legitimate dynamic behavior so far. So, if anyone's interested, I can send a text file that actually shows where we're using them. But again, by far the most common pattern seems to just be someone making a character buffer and copying some data around — yeah, that's the most common case.
B
Simpler and safer first, and then kind of making very conscious decisions about where we tune, versus kind of scattering stuff all over the place and not really having good reasoning for it — which I think sometimes in the past we've kind of just, you know, made assumptions about what things are gonna be problems and what aren't going to be. So I would be in support of just eliminating them and then dealing with the fallout later, you know, if there are specific things we really need to bring back.
A
You know — and I'm not sure — one thing I always get a little confused by is the pedantic behavior between GCC and, say, clang; I think that might be the reason I tend not to enable it in my projects, at least. But yeah, maybe Sage has some thoughts. I've seen -pedantic cause quite a bit of trouble in larger projects before, but I agree entirely with the goal of trying to use standard C++ by default.
A
So, Jesse, you had mentioned rgw and how it's kind of taking existing strings, doing some C formatting in a VLA, and then copying it back. Now that C++17 has a std::string data() function that's non-const, we can actually just do the manipulation on the string in place. So that's probably a really easy replacement anywhere that happens.
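A sketch of that in-place manipulation (hypothetical formatting helper; the non-const `std::string::data()` overload is C++17):

```cpp
#include <cassert>
#include <cstdio>
#include <string>

std::string format_id(unsigned n) {
    std::string s(16, '\0');               // pre-size the buffer
    // C++17: data() is non-const, so snprintf can write straight into
    // the string's own storage -- no VLA, no copy back afterwards.
    int len = std::snprintf(s.data(), s.size(), "id-%04u", n);
    s.resize(len);                         // trim to what was written
    return s;
}
```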
A
Yeah, definitely. In fact, some of the call sites I've looked at — it's hard to guess someone's intent, but I really think they probably meant to take, you know, a std::string by value, so they have a copy of it, do some manipulation like converting case and filtering characters, and then just return that copy.
A
But instead, what happens is you've got a std::string reference parameter, you make a VLA, do your manipulations, and then return a new string from that. So I actually think, in a lot of cases, by revisiting these it'll actually be faster, just because of eliminating things like that and taking advantage of things like RVO — you know, again, with C++17 there are lots of RVO optimization opportunities that just didn't exist before.
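The shape that was likely intended — take the copy as the parameter itself — might look like this (hypothetical function):

```cpp
#include <cassert>
#include <cctype>
#include <string>

// Take the string by value: that's the copy the function needs anyway.
// Edit it in place and return it; the return is a move (and callers
// passing temporaries get copy elision on the argument), so the
// VLA-and-copy-back round trip disappears from the pattern entirely.
std::string sanitize(std::string s) {
    for (char& c : s)
        if (!std::isalnum(static_cast<unsigned char>(c)))
            c = '_';
    return s;
}
```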
A
So far, the most complicated-looking code that I've seen using VLAs heavily is in the erasure codes, and it's hard to assess how reliant that code is on them, or what the performance impact would be. It almost looks, in fact, like you could get away with constant-sized arrays, but it turns out not quite: the sizes need reading from a file, and the dynamic allocations are necessary.
B
Cool. All right — I confess... just real quick, Jesse.
D
Yeah — if for some reason you can't, you know, push your strategy, then certainly apply them. Sage, do you know if we have anything interesting there? I don't think they're uninteresting, but the erasure coding work — someone will want to carefully benchmark that, but probably, no. It could be better.
B
All right — I made no more progress on PRs throughout this discussion than I had earlier, so I'll just go on. There's a couple of new ones here. Radek, you've got this newly introduced shortcut for reads on replicated pools. I confess I didn't look at it at all — is there anything interesting going on there?
C
At the moment, it's just a very, very early WIP pull request. I made some benchmarks; I will post them — I will put them into the PR. However, what I can see is that it nicely exposes the bottlenecks we already had in the OSD. I started — well, it's just much easier to profile things with those patches, because at least each path becomes much simpler.
B
All right. I don't know anything about this replace-dashboard-service thing; I think it's just more manager stuff. They're working on improving it in various ways, I think.
B
Oh — someone posted benchmarks for this long-standing librbd shared, persistent, read-only RBD cache. They look fantastic, right? But it looks to me like the workload is basically entirely sitting in cache, so it's not very interesting.
B
What would be much more interesting, in my opinion, would be to make sure that there are lots of evictions and promotions happening; then we'll really see what the behavior is like. Because if everything is just coming out of cache — well, no, it's fine, but that's not where it's probably gonna fail. So, let's see, what else... Adam, Adam's here — how's the Objecter going?
E
Can you hear me now? Yes? Yeah — it is basically done. I am writing tests, because everything needs tests, but apart from that, I've been pulled off a little bit on the side for some rgw stuff. Once I get the tests done, it should be mergeable — well, I mean, I obviously want to run it through QA first. But after that — once the tests are done and it's merged — it should be usable, and, you know, people can start making plans to do stuff with it and all that.
E
On the review — I have a fix for that; I just haven't pushed it yet. As I was going through and doing the reorganization — because I have a couple of commits where I do something and then change it later on — I was just cleaning those up, and I'm gonna pull that lock fix in when I've finished that rebase. Okay, cool, cool.
B
Yeah — and then, sorry, there's a bunch of stuff I just haven't looked at yet to see if it's been updated or not, but I'll try to be on it more next week. I just got — sorry — I got really busy this morning. So I guess that's it for PRs, sorry. Anything else this week that folks would like to talk about?
B
I do have one other bit of info: it's possible we might be getting some new performance-test hardware in the lab. It's not definitive yet, but there's a chance that we might get some higher-end hardware to test, so that's a little bit exciting. We might be able to get more performance testing in on some more recent, kind of higher-end hardware. So we'll see how that goes — not sure yet, but the good news is it looks fairly hopeful. So yeah, that's basically it. Anything else?
B
As far as I'm aware, it will be donated to the Ceph community lab, which means that, as long as you have applied for access to the lab, it will be kind of designated on a per-user basis. So, Muhammad — I think that, assuming we get this, certainly for some period of time you'd be able to use some of that hardware, if you'd like.