From YouTube: 2019-08-29 :: Ceph Performance Meeting
A: All right, well, we've got enough folks that we should probably get started here. So let's see — this week for new PRs, lots of RGW stuff. The first ones I've got here — a couple of these are mine, so I guess I'll go over them. This one is kind of an experimental one, to see how much benefit we get by just passing the buffer list that represents a directory entry for bucket listing through to RGW, rather than decoding it in CLS.
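A minimal, self-contained sketch of the idea being described — the names here are stand-ins, not the actual cls_rgw code (the real entry type is rgw_bucket_dir_entry): the objclass forwards the raw encoded values, and RGW decodes them client-side only for the entries it actually consumes.

```cpp
// Self-contained illustration of the pass-through idea, not the actual cls_rgw code:
// instead of decoding every bucket-index entry inside the objclass (OSD side) and
// re-encoding the reply, the objclass forwards the raw encoded values and RGW
// decodes them client-side.
#include <cstdint>
#include <cstring>
#include <iostream>
#include <map>
#include <string>
#include <vector>

using RawValue = std::vector<char>;              // stands in for a bufferlist

struct Entry {                                   // stands in for rgw_bucket_dir_entry
  uint64_t size = 0;
  void decode(const RawValue& v) { std::memcpy(&size, v.data(), sizeof(size)); }
};

// "OSD side": one omap scan, no per-entry decode/encode in the hot loop.
std::map<std::string, RawValue> list_raw(const std::map<std::string, RawValue>& omap,
                                         const std::string& start_after, size_t max) {
  std::map<std::string, RawValue> out;
  for (auto it = omap.upper_bound(start_after); it != omap.end() && out.size() < max; ++it)
    out.insert(*it);
  return out;
}

// "RGW side": decode happens here, only for entries actually used.
int main() {
  std::map<std::string, RawValue> omap;
  for (int i = 0; i < 3; ++i) {
    RawValue v(sizeof(uint64_t));
    uint64_t sz = 1024 * (i + 1);
    std::memcpy(v.data(), &sz, sizeof(sz));
    omap["obj" + std::to_string(i)] = v;
  }
  for (const auto& [key, raw] : list_raw(omap, "", 1000)) {
    Entry e;
    e.decode(raw);                               // decode moved out of the OSD
    std::cout << key << " size=" << e.size << '\n';
  }
}
```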
B: The trouble is that it was just, like, overhead. I would like to see some more discussion about what the future is for the legacy config opts, because it sounded like the plan was to remove them eventually, but we need something like that to get good performance in the hot path. I think we might revisit the naming of that, or find something to replace it with as well.
A: Yeah, so I don't really understand why we ended up implementing what we did without, like, making sure that it worked well in this case, for this kind of thing. It always bothered me a little bit that we went through and replaced all the legacy options like this, and then we replaced them with something that, you know, performs poorly in the hot path. And then you have to do kind of crazy stuff — like, you know, implement observers somewhere, you know, generic or more centralized, and then pass through a bunch of... well, anyway.
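A self-contained sketch of the observer approach being alluded to, with hypothetical names (Ceph's real analogue is an md_config_obs_t registered via add_observer()): changes are pushed to the component once, so the hot path reads a cached value instead of doing a config lookup per operation.

```cpp
// Hypothetical sketch of caching a config value via an observer; the option name
// "example_threshold" and these classes are illustrative, not Ceph's actual API.
#include <atomic>
#include <cstdint>
#include <functional>
#include <iostream>
#include <map>
#include <string>
#include <vector>

class Config {
public:
  using Observer = std::function<void(const std::string&, uint64_t)>;
  void set(const std::string& key, uint64_t v) {
    values[key] = v;
    for (auto& obs : observers) obs(key, v);     // notify only on change
  }
  uint64_t get_val(const std::string& key) const { return values.at(key); }
  void add_observer(Observer obs) { observers.push_back(std::move(obs)); }
private:
  std::map<std::string, uint64_t> values;
  std::vector<Observer> observers;
};

struct HotPathComponent {
  explicit HotPathComponent(Config& conf) {
    cached = conf.get_val("example_threshold");          // one-time lookup
    conf.add_observer([this](const std::string& key, uint64_t v) {
      if (key == "example_threshold") cached.store(v, std::memory_order_relaxed);
    });
  }
  // Hot path: a single relaxed atomic load, no config-map lookup or locking.
  uint64_t threshold() const { return cached.load(std::memory_order_relaxed); }
  std::atomic<uint64_t> cached{0};
};

int main() {
  Config conf;
  conf.set("example_threshold", 128);
  HotPathComponent comp(conf);
  std::cout << comp.threshold() << '\n';   // 128
  conf.set("example_threshold", 512);
  std::cout << comp.threshold() << '\n';   // 512, refreshed via the observer
}
```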
C: ...and then a couple of other things, but I guess both, like, a bulk interface — the other one, I think, has more to do with the config observer. But it would be nice to have somebody talk through it, maybe. Oh, we might have time.
C: Yeah — yeah, I thought that would be fair, for reference. That's less encouraging. You did call that out in your PR — this PR, I believe, initially deals with the CLS log levels; Casey explained to me why this is broken. Another thing we should fix is that it doesn't seem to be providing any preprocessor-driven elision. That part I wasn't worried about.
A: Okay — RGW reshard. This is another one of mine. This is: don't dump a bunch of JSON when we're doing these background reshards, and that was, like, a stupid win. It went from like 65 seconds for one of the reshards down to like 25 seconds, so it was nice. And then the last one here is Casey's PR, which is really good.
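The general shape of that kind of change, as a self-contained and purely hypothetical sketch (not the actual PR): skip building the JSON status dump entirely unless it will actually be emitted.

```cpp
// Illustrative only; all names here are hypothetical. The point is simply to
// skip the JSON encode entirely when nobody will look at it, rather than
// formatting the reshard status on every background pass and discarding it.
#include <cstdint>
#include <iostream>
#include <sstream>
#include <string>

struct ReshardStatus {
  uint64_t entries_done = 0;
  uint64_t entries_total = 0;

  // Stands in for the expensive Formatter-based dump.
  std::string to_json() const {
    std::ostringstream oss;
    oss << "{\"done\":" << entries_done << ",\"total\":" << entries_total << "}";
    return oss.str();
  }
};

void report_progress(const ReshardStatus& s, int debug_level) {
  constexpr int kVerbose = 20;
  if (debug_level < kVerbose) {
    return;                        // common path: no JSON built at all
  }
  std::cerr << "reshard progress: " << s.to_json() << '\n';
}

int main() {
  ReshardStatus s{1000, 500000};
  report_progress(s, 1);    // nothing formatted
  report_progress(s, 20);   // dump only when verbose logging is on
}
```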
A: It takes these reshard checks out of the write path, and that was a really big performance win when you get to the point where a reshard is going to happen but hasn't happened yet. There's, like, that delay that is in place — I don't remember whether it defaults to 10 minutes in master or something — and previously, during that whole time, you'd see, like, a 4x drop in PUT throughput just going to one NVMe OSD, and this really eliminates it.
A: Let's see — an old PR from Radek got merged; I think he had updated it and people reviewed it and got it in quick. This is the one slightly optimizing the container-based bounding code, I guess. And then Igor closed his old version of the intelligent DB space PR in favor of his new version, which is a little bit further down in the updated list.
A: Igor's new version of his intelligent DB space usage PR — I think that got reviewed last, and I think progress is being made there; that's good. Pam's got his PR for expanding LTTng trace points and improving stuff in the objectstore back-end, and I believe that was reviewed and added to the testing branch.
A: Sure — one thing is, we do have the ability in CBT now to basically test FIO against kind of all of the common RBD interfaces, plus CephFS, both the kernel and FUSE interfaces. So you can kind of run the exact same test against all of them by just changing the client endpoint; there's an example in the CBT examples directory that you could try, if you want.
A: Auto-tuning of the MDS cache memory limit — Patrick has been continuing to review that and suggest changes, so I think that's hopefully close. And then our other Adam has been continuing to work furiously on his sharded RocksDB column family work, and his biggest workload right now is continuing to rebase it to track all of our other changes; he wants us to get it in. So we're going to have probably a big push next week to make sure that that's in good shape and we can merge it.
A: The advantage of that approach is that then we have half the number of column families — well, not quite half, but roughly half — since we wouldn't need an extra column family for onodes (RocksDB onodes) per shard. There may be an advantage to doing that besides the cache, which was the primary reason for doing it, and since we're going to shard, we probably don't want to double the number of column families for this if we can help it.

A: So that might be a way to kind of, you know, fix multiple things with one stone, I guess; then we can play with it more later. But that's one of the things I've been thinking about a little bit recently. So anyway, that's it for PRs, I think, unless anyone has anything I missed.
A: Alright, let's do this thing. So, as can probably be seen, there's a lot of movement happening right now — lots of stuff we're finding, things that are being fixed — and I suspect when we're done with all of this we're going to see some really, really big numbers. Already there are some kind of big numbers, at least for certain circumstances like bulk loads of objects; Casey's PR by itself is a big improvement, and then there are other things that are improving this too. So I think —
A: I think we're going to see some really, really good results out of this, but there are kind of some outstanding questions. Actually, you can look if you want — there are a couple of Google Docs that just show some of the existing testing. I don't know if I'm going to go through all of them, just because it's a lot, but if you're interested, there's a bunch of data in the etherpad to look at. So, the two big things that I think are open questions — and Matt,
A: you can correct me if there are others too — but the big things in my mind are how to reduce the overhead in the OSD for kind of doing the work of getting, like, bucket list entries to RGW. You know, it sounds like we want to do filtering there, and potentially even more filtering than we do right now, and so the question is how to get from RocksDB all the way up to RGW with the least amount of work in the intermediate layers as possible.
A: We've got 6% in CLS log, like we were talking about earlier, and 5.10 percent in decode — that's the RGW bucket directory entry; the decode is really just the bucket entry decode in that for loop — and another 4.6 in encode, and that's the thing that we more or less mostly get rid of with this PR, you know, that just passes the buffer list through.
A: Sure, sure — or, I guess, in this case I was thinking very much on the OSD side. You know, can we make those faster? If I can make those ops faster, potentially I can make this faster for you guys. The other stuff, though, is all very much inside CLS — the CLS code itself, right?
E: I don't know whether this is the right point to insert this, but you already mentioned that we want to do more filtering in CLS, and I have a PR that I'm working on which is doing that, and it's working — it's not fully optimized yet, but it goes further. So we can skip past subdirectory entries, which we don't want to return in cases where there's a delimiter — and this is an idea that's been floating around for a bit.
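A rough, self-contained sketch of that delimiter-skipping idea — hypothetical helpers, with std::map standing in for the bucket index omap, not the actual PR: once one key under a "subdirectory" prefix has been seen, the scan seeks straight past that prefix instead of visiting every entry beneath it.

```cpp
// Illustrative sketch only. With a delimiter (e.g. '/'), everything under
// "photos/" collapses to one common-prefix result, so after emitting it we can
// seek directly past that prefix rather than visiting each key under it.
#include <iostream>
#include <map>
#include <string>

// First key that sorts strictly after every key starting with `prefix`.
// (Assumes the delimiter is not the maximum char value.)
std::string after_prefix(std::string prefix) {
  prefix.back() = static_cast<char>(prefix.back() + 1);
  return prefix;
}

void list_with_delimiter(const std::map<std::string, int>& index,
                         const std::string& marker, char delim) {
  auto it = index.upper_bound(marker);
  while (it != index.end()) {
    const std::string& key = it->first;
    auto pos = key.find(delim);
    if (pos == std::string::npos) {
      std::cout << "entry: " << key << '\n';
      ++it;
      continue;
    }
    std::string common = key.substr(0, pos + 1);
    std::cout << "common prefix: " << common << '\n';
    it = index.lower_bound(after_prefix(common));  // skip the whole "subdirectory"
  }
}

int main() {
  std::map<std::string, int> idx = {
      {"a.txt", 0}, {"photos/2019/a.jpg", 0}, {"photos/2019/b.jpg", 0},
      {"photos/2020/c.jpg", 0}, {"z.txt", 0}};
  list_with_delimiter(idx, "", '/');
}
```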
A: It only seemed to do one op; it didn't seem like it was actually doing any kind of, you know... okay.
C: Okay, I'll figure out why it doesn't work that way. Oh — I wasn't... well, what's over there is actually reading results, but it's definitely sending stuff; I haven't verified that they were unordered. Cool, cool.
A: You know, as kind of the backup option, right — at least getting the JSON out of that path improves it a lot. Like, you know, in the test case we're dropping from like 65 seconds down to 25 seconds, so at least if we only do that it'll be better. But, you know, it's still scary.
F: A lot of improvement, if someone with gratuitous amounts of free time wants to go through and take all the places where we basically build up a JSON object out of maps and vectors, and then pull out what we're interested in and then throw it away again, and change that to using a sort of pull, event-based model to just get what we want at the point of parsing.
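An illustration of that pull/event-based idea using RapidJSON's SAX interface — chosen just for illustration, not what Ceph uses on these paths, and the field names here are hypothetical: the handler keeps only the fields the caller cares about instead of materializing the whole object.

```cpp
// Event-driven parsing example (RapidJSON SAX); field names are hypothetical.
// Instead of building a full DOM / map<string, ...> and discarding most of it,
// the handler stores only the two fields the caller cares about.
#include <cstdint>
#include <iostream>
#include <string>
#include "rapidjson/reader.h"

struct EntryCountHandler
    : public rapidjson::BaseReaderHandler<rapidjson::UTF8<>, EntryCountHandler> {
  std::string current_key;
  uint64_t num_entries = 0;
  uint64_t num_shards = 0;

  bool Key(const char* str, rapidjson::SizeType len, bool) {
    current_key.assign(str, len);
    return true;
  }
  bool Uint64(uint64_t v) { return store(v); }
  bool Uint(unsigned v)   { return store(v); }
  bool Int(int v)         { return store(static_cast<uint64_t>(v)); }
  bool Int64(int64_t v)   { return store(static_cast<uint64_t>(v)); }

  bool store(uint64_t v) {
    if (current_key == "num_entries") num_entries = v;
    else if (current_key == "num_shards") num_shards = v;
    return true;   // keep parsing; everything else is skipped, not stored
  }
};

int main() {
  const char json[] =
      R"({"bucket":"b1","owner":"alice","num_shards":11,"num_entries":123456})";
  EntryCountHandler handler;
  rapidjson::Reader reader;
  rapidjson::StringStream ss(json);
  reader.Parse(ss, handler);
  std::cout << handler.num_entries << " entries across "
            << handler.num_shards << " shards\n";
}
```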
A: So, yeah, those are the three things, I guess, that seem like the big ones, at least so far, to me — and you're absolutely right, this is stuff that you guys already knew about and were working on. Yeah, I just didn't realize quite how much... I guess the benchmarking brought it into focus for me.
C: I mean, again, if I'm going to change something, or move some priorities around... We figured we would probably get to some of these things later; we're trying as hard as we can to do the reorganizing of the bucket index directory — yeah, the sharding rework — right up front, because... there are some right ways to do that.
C: I'm intrigued by that — would that work, and what would it... what is this? It's just, like, pre-computed things that have been accessed recently? It's just accelerating... RGW should already get something from the extent cache, right — things that are already hot are fast to read.
A: The problem that we have right now is that the RocksDB block cache caches everything — it caches both onodes and omap — so if we want to prioritize caching omap, we also end up prioritizing caching RocksDB onode entries, which we don't want, because we already cache them at the BlueStore level. We really want to have onode and omap competing at the same level, with the RocksDB cache sort of operating at a different level — actually multiple levels, because we want the indexes and bloom filters to be cached at high priority.
A: ...column family in RocksDB — but in retrospect that's easy to do; like, it wasn't hard to write that PR. But in retrospect, as I think about this, I don't think that's the right solution. I think we'd be way better off having, like, you know, some way of caching omap in BlueStore itself. It gets weird with the iterator interface and, like, how OSD ops work, but maybe we can do something there, and then have them compete at the same level in the priority cache manager. And then, you know, this would maybe also help if we didn't have the hashing fix for the buggy shards — but certainly they would work well together. And, you know, the caching may not actually be that big a deal once that gets implemented — or maybe it is; maybe it's nice, because if you, you know, have multiple things trying to read a bucket index at the same time... maybe. Yeah.
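Purely as an illustration of the balancing idea (none of these names match Ceph's actual PriorityCache code): each cache reports how old its hottest items are, and the manager shifts the memory budget toward the cache under more pressure.

```cpp
// Hypothetical sketch of age-based rebalancing between caches. Each cache
// reports the age of the oldest item it would still like to keep; the manager
// gives a larger share of the memory budget to caches whose hot items are
// younger (i.e. the cache currently under more pressure).
#include <algorithm>
#include <cstdint>
#include <iostream>
#include <string>
#include <vector>

struct CacheStats {
  std::string name;
  double hot_item_age_s;   // age of the oldest item we'd still like to keep
  uint64_t assigned = 0;   // bytes granted by the manager
};

void rebalance(std::vector<CacheStats>& caches, uint64_t total_bytes) {
  double total_weight = 0.0;
  std::vector<double> weights;
  for (const auto& c : caches) {
    double w = 1.0 / std::max(c.hot_item_age_s, 0.001);  // younger => heavier
    weights.push_back(w);
    total_weight += w;
  }
  for (size_t i = 0; i < caches.size(); ++i) {
    caches[i].assigned =
        static_cast<uint64_t>(total_bytes * (weights[i] / total_weight));
  }
}

int main() {
  // e.g. a burst of bucket listings makes recently-touched omap entries young.
  std::vector<CacheStats> caches = {{"onode", 30.0}, {"omap", 2.0}};
  rebalance(caches, 4ull << 30);   // 4 GiB budget
  for (const auto& c : caches)
    std::cout << c.name << " gets " << (c.assigned >> 20) << " MiB\n";
}
```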
A: Exactly — and the neat thing, though, is that if we have it there, once I get the age binning in for the LRUs — that's the piece that lets you start, like, dynamically changing the memory allocation per cache based on the relative ages of things in the caches — so if, all of a sudden, this OSD is getting hit with tons of bucket lookup requests...
C: I mean, that's what I want to see — I want to make sure we have some efficacy in general. I'm constantly getting, you know, "can you please give us something on what's not accelerating in the OSD?" — I mean, I'm hand-waving if I talk about those. No, that was about what the cache representation is: whether it's, you know, essentially part of the block cache, or whether it's something else. But, yeah, I mean, if you think it has potential, we should explore it.
C: Okay, cool. The thing is — I mean, it's good; insofar as locality of, you know, of the calls applies here — and I think it always does in this type of environment — then, yeah, I don't know whether there's a win here, but from what you say it sounds plausible. And it doesn't change the semantics of directories, nor does it require us to try to cache all of a bucket.
C: That sounds plausible. Or the alternative would be just making sure that the extents needed are fresh in the cache, and it sounds like that's something you could attack — or that your other work attacks it there. That would certainly help us, even though it wouldn't save us all of the work, probably. Yeah.
B: ...that we missed. Well, we're running short on time before the RGW stand-up, but there's one thing that came up. We like the PR.
A: Then the other thing is: Igor is not here anymore — he left — but he has a PR that he's resurrecting to store smaller-than-4k — small objects, I guess we'll say — in RocksDB itself. I don't know if it's a good idea or not, but that would potentially then just store the data straight away as part of the, you know, transaction that's happening. Yeah.
G: What may be kind of odd about the implementation there is that BlueStore does store the xattrs within the onode right now, I think, so storing it there is implicitly doing the same thing as storing the data directly in BlueStore's onode would. To me — that's why you're saying this implementation was simpler from that perspective — but I think it makes more sense to try to do it at the BlueStore level, since it'll be more general and it'll help with... I have other things that have tiny objects too, exactly.
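For reference, the general shape of the idea as a purely hypothetical, self-contained sketch (not Igor's actual PR or BlueStore's real write path): below a size threshold, the object bytes ride along in the same key/value transaction as the metadata instead of getting a separate block allocation.

```cpp
// Purely illustrative; hypothetical names. The idea: if the object is smaller
// than a threshold, put the bytes into the same key/value transaction as the
// metadata instead of allocating and writing a separate block on the data device.
#include <cstdint>
#include <iostream>
#include <map>
#include <string>
#include <vector>

struct KVTransaction {                       // stands in for a RocksDB batch
  std::map<std::string, std::string> puts;
  void put(const std::string& k, const std::string& v) { puts[k] = v; }
};

constexpr uint64_t kInlineThreshold = 4096;  // "smaller than 4k"

void write_object(KVTransaction& txn, const std::string& oid,
                  const std::vector<char>& data) {
  txn.put("meta/" + oid, "size=" + std::to_string(data.size()));
  if (data.size() < kInlineThreshold) {
    // Data rides along in the same transaction: one commit, no block allocation.
    txn.put("data/" + oid, std::string(data.begin(), data.end()));
  } else {
    // Larger objects would take the normal allocate-and-write path.
    txn.put("extent/" + oid, "lba=<allocated elsewhere>");
  }
}

int main() {
  KVTransaction txn;
  write_object(txn, "obj1", std::vector<char>(512, 'x'));   // inlined
  write_object(txn, "obj2", std::vector<char>(65536, 'y')); // normal path
  std::cout << "keys in txn: " << txn.puts.size() << '\n';
}
```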