From YouTube: Ceph Performance Meeting 2018-10-25
Description
No description was provided for this meeting.
C
Basically, there is a bigger PR for the buffer rework — yes, I treat it as the one I sent today. I treat the meta PR as an index for the smaller PRs, and the idea is to take performance testing in this PR, but conduct reviews in the dedicated, smaller ones — smaller pieces of the huge rework. Well, it's above 1500 lines, so I guess it would be much easier to review in smaller chunks. I plan to divide it.
C
Around 50% of the wall clock — that is, of the process's time — is spent in handling requests and in their children. However, only maybe 28 out of those 50 are spent in BlueStore reads and in message crafting, so it seems we have a huge — really huge — burn driven just by the very complex objects we have on our hot path that I can see we are manipulating.
D
I think both are important, right? We want to learn about how the current, complex IO path works — you know, incidental complexity versus necessary complexity — but also, having a fast read path that's very, very simple makes a lot of sense, and perhaps doesn't need us to understand all the incidental complexity immediately.
C
Yep, but in order to cut something — in order to squeeze anything — we need to be aware of that. I know, maybe we go ahead; maybe for reads we don't need to instantiate, for instance, the ObjectStore transaction. We are doing that right now — maybe it's not necessary for reads. But the stuff is so complex that I'm not entirely sure; I don't see any other way than just to make experiments.
A
Oh, the StackStringStream, yeah. So it turns out that we are calling reserve all the time. That's not necessarily the performance impact, certainly, but there's also something else going on with StackStringStream: Neha and Adam both were independently running into, like, crashes that weren't explained by this reserve issue that was found. So there's more to this than just that particular fix that's listed here, but certainly that fix is still necessary and useful. So I think the current plan is basically — go ahead! Just curious!
F
I didn't see a crash, but I saw hang-ups as well. I think yesterday — no, the day before yesterday — when I saw those hang-ups, I asked Sage to take a look, and he looked at one of the backtraces from the thread that was stuck, and from that backtrace it looked like there was not actually a crash; it was just a backtrace of a blocked read. Okay.
F
The idea is that everybody's kind of seeing hang-ups in different tests that they're doing individually, and at least for my tests I saw that it did not exist around a month or a month and a half ago, so it's definitely something that was added later. Now we are thinking that it could be this PR, and so maybe just reverting that and trying those tests might work. But there's also a possibility it could be something entirely different, so I'm not making any conclusions yet.
A
Yeah, yeah — I've done things in the past where I've, like, just not written them out, and it helps. So, you know, presumably anything that can remove that as a blocker or, you know, as a point of contention would probably be useful. But yeah, I have no idea how much this actually helps — it'd be good to know. I'll try to put a note in there asking for more info, but yeah, it's certainly possible that could be a win.
A
Yeah, okay — to close, actually, one of these is one I missed last week, which was one of yours, Radek. Let's see — this one using references to the blockers in common throttle was merged by Jason. I haven't looked at this at all; I don't actually know what it does. All right. Well then, the other one is yours, right — that looks like it was merged by Sage, but I think he said that he didn't think it would actually do much.
A
I'll clean up, yeah. All right, a couple of updated ones. Again, one of yours, Radek — you've got a ton of PRs, good job — your one for killing backward iteration in bufferlist. It looks like you and Josh and other people are discussing this, so we'll see what it's gonna take, I believe.
A
So — I was starting out by trying to just look at, well — you know, this is a whole discussion, but basically, you know, maybe just adding to rados bench; but rados bench is ugly. I mean, it's okay, but anyway, let's talk about that more later. Other PRs: there's a couple of other new ones. I did not get through all the list of old PRs, guys, just like last week.
A
I'm sorry — I need to devote more time in the morning to this, but there's a bunch of outstanding stuff that we haven't touched. Hopefully, once we get the stale bots to close out stuff that's six months old, most of this will just kind of go away, but for now Radek's got a couple of ones that are just slightly outdated.
A
There's some other random stuff in here. There's my big cache-pinning thing that was kind of the final piece that was segfaulting during testing, and I still haven't gotten back to it, so I apologize for that — it's on my to-do list. There's just a couple of other things that have kind of gone by the wayside. I don't think a whole lot of this is interesting, at least not right now. Radek, were there any ones on this?
A
All right, well, I think that's basically it — we've spent half an hour discussing PRs, so I think we're good. That's right — Brett, why don't you start us off talking about NUMA today? So what were you seeing in the previous recording that made you so interested?
B
I guess it was probably two weeks ago, yeah. I didn't — I don't know that I had that much new. I guess my only thought was — echoing what different people were saying — if we can find something that we can... you know, it has to be automatic: the system has to be able to configure itself, which to me seems pretty challenging. And also, you know, let's not forget that the code improvements that we make and the architectural improvements we make pay off everywhere.
B
So, like, the NUMA thing — yeah, I think we have to explore it, but that's all; that was really my only thought. I mean, I don't know, and I didn't see that — Sage had mentioned he was gonna send an email upstream, and maybe he did and I lost it; I've got to go back and search for it. Anybody know if that happened?
A
We can certainly do something like pinning, you know, the local NUMA node memory, and, you know, I was trying to keep the process for an OSD running on a particular NUMA node. Yeah, I guess the question in my mind is what direction we want to head in terms of the kind of way an OSD looks — you know, how is the sharding working, I guess, is maybe what it comes down to.
A
And, you know, what goes to an OSD — what our expectations are for how an OSD is controlled and behaves. Is it good enough to take an OSD process and pin it to a NUMA node, or do we actually need to think about an OSD being spread across multiple NUMA nodes? I don't have a good answer to that right now; I don't know that anyone has really talked about it in any kind of, you know, visionary kind of way, yeah.
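The per-node pinning being discussed can be sketched roughly as follows — a minimal sketch, not Ceph code, assuming a Linux host where `/sys/devices/system/node/node<N>/cpulist` lists each NUMA node's CPUs; the helper names are illustrative:

```python
import os

def parse_cpulist(text):
    """Parse a kernel cpulist string like "0-3,8-11" into a set of CPU ids."""
    cpus = set()
    for part in text.strip().split(","):
        if "-" in part:
            lo, hi = part.split("-")
            cpus.update(range(int(lo), int(hi) + 1))
        elif part:
            cpus.add(int(part))
    return cpus

def cpus_of_numa_node(node):
    """Read the CPU set belonging to one NUMA node from sysfs (Linux only)."""
    with open("/sys/devices/system/node/node%d/cpulist" % node) as f:
        return parse_cpulist(f.read())

def pin_to_cpus(pid, cpus):
    """Restrict a process (pid 0 = the caller) to the given CPUs.
    Note: this pins CPU scheduling only; memory locality would additionally
    need a NUMA memory policy (e.g. numactl --membind)."""
    os.sched_setaffinity(pid, cpus)
```

So pinning an OSD process to node 0 would be roughly `pin_to_cpus(osd_pid, cpus_of_numa_node(0))`; whether one-OSD-per-node pinning is "good enough" is exactly the open question here.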
B
And, you know, this could easily be a deep, dark rat hole, because — yes, those of us that have some background, you know, if we inspect a system, we can probably come up with some NUMA optimizations that will improve things. But can we make the system do it itself? Is there a way that's reliable — a way that we can guarantee doesn't make things worse? You know, and I don't even have a way to answer that question. I was just, you know —
A
From a really, you know, high-level perspective, right — we're kind of in this awful place where we don't have any control over how the hardware has been built or how it's been laid out, and, you know, you could end up with a horrible topology of stuff having to communicate over many, many different — you know, many different things communicating all over the interconnect between sockets, right. You could have —
H
But — I'm just — technically, isn't that the only case that matters? Okay, we don't have a fixed case — a fixed layout — but the thing is, you know, our upstream users tell us that there's some small set of really interesting layouts, and we just want to support those, right?
B
And my belief is also that, you know, the examples that are out there in the wild that people point to are examples where somebody literally hand-tuned something — at least, that I'm aware of; maybe somebody can find something else. And also the examples that I've worked on, where, you know, we had complete control over hardware, software, everything — and yes, then we were able to — but is it easily twenty percent? You know, yeah.
H
Well, I guess what I'm getting at is — isn't this problem general, so that fighting it isn't actually how you solve it? So, first — for cases where we have no control, we don't care; but for the case where someone wants to build the system, what we want to give them is the tools to, yeah —
H
Is there a counterexample — what does it look like? Is it like Kubernetes? But if it is, you know, why shouldn't the directives that configure those nodes follow some simple template that we provide — one that describes the layout in a way that can be fed to us?
A
From, like, a really high-level perspective, right: if you have a suboptimal NUMA topology, there are decisions that you have to make. Do you decide to put an OSD on a node that's close to the network but might have a lot of other OSDs already placed there, or do you place it on another complex that's far away from the network but has lots of CPU resources?
B
Yeah, it was some of that, you know — but, you know, "it's far away from the network" is fine, and it probably depends on the type of storage device, I'm guessing. Like two weeks ago, you were asking, well: do you want to be close to the network, or do you want to be close to the storage device, if you can't pick both? You know — it probably depends on the type of storage device, right? You know, exactly.
A
And maybe you have enough interconnect throughput that it doesn't matter, right? Yeah, it all depends on the hardware and how it behaves. The old AMD CPUs from, like, five years ago were awful: with HyperTransport, with, you know, four sockets, you had to make multiple hops before you could get to a corner. Two-socket Intel with QPI isn't that bad for a lot of cases — not every case, but for a lot of cases.
G
About NUMA: when I tried to play with actually splitting an OSD across two separate NUMA nodes, the one thing I was unable to split was the code. It seems that the linker decides where the libraries should reside and just keeps them there, so that cache sharing always exists. I was unable to get rid of that problem. Just — no, no, no.
E
Which, I think, is a bit of a problem, because effectively what we're saying is, oh, it's a certain percentage — when you've actually got hard limits. So you've got to have probably at least, say, 40 gig, and the next threshold is apparently about 400 gig; any value in between there, you're not gonna get any benefit.
E
Where I am at the moment: I've got a 400 gig Intel P3700. I've shrunk the OS and the swap partition at the start, and I managed to get the partitions up to 29 gig, and looking at all the sizes and the sizing of the levels, I thought that would just be enough to fit level 3 on — but apparently it isn't. I'm gonna try again and squeeze it up to 30, which is literally the maximum I can go.
E
But if I've got a 30 gig partition and it doesn't work, that's gonna be quite annoying. That's probably something that should either, like I said, be fixed, or people should at least be made aware of it, because if someone went out and bought, say, you know, 300 gig SSD partitions, they're going to be sort of disappointed if they were having to hit a certain minimum to actually get that on there.
E
I mean, it seems to me that it should — yeah, I'd say automatic, or have a bit more granularity between two and a half and 25 gig, because I'm guessing for RBD workloads that's where you're probably gonna be mainly sitting. I mean, one of the nodes has got ten terabyte disks about just over 70% full, and I'm only seeing in total about nine or ten gig of DB usage. That's really in that sweet spot between the two and a half and the 25 where I'm sort of hitting this issue.
A
If you — I just linked the tuning guide. If you look at the section titled leveled-style compaction, there are some notes in there about max_bytes_for_level_base, max_bytes_for_level_multiplier, target_file_size_base, target_file_size_multiplier, num_levels, and all this stuff, and theoretically you can probably tweak these to the point where you can get your kind of highest level size to fit within the partition size that you've got. Yeah, it's probably not going to be real simple to get it exact.
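As a back-of-the-envelope illustration of why usable DB partition sizes cluster around discrete thresholds under leveled compaction — assuming, purely for illustration, a 256 MiB max_bytes_for_level_base and a 10x max_bytes_for_level_multiplier (check your actual RocksDB options; defaults differ between releases):

```python
GiB = 1024 ** 3

def level_targets(base_bytes, multiplier, levels):
    """Target sizes of levels L1..Ln under leveled compaction:
    each level is `multiplier` times larger than the previous one."""
    return [base_bytes * multiplier ** i for i in range(levels)]

targets = level_targets(0.25 * GiB, 10, 3)   # L1, L2, L3
print([t / GiB for t in targets])            # [0.25, 2.5, 25.0]
print(sum(targets) / GiB)                    # 27.75 — space to hold L1+L2+L3
```

On these assumed numbers, a partition only helps once a whole extra level fits: roughly 0.25 GiB (L1), 2.75 GiB (L1+L2), 27.75 GiB (L1+L2+L3), and so on in ~10x steps — consistent with the observation above that a 29–30 gig partition sits right at the edge of fitting level 3, and that values between thresholds buy little.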
A
Probably not — I don't think it will, although I've never tried it on one that has data in it before, so, you know, yeah — your situation may vary, I guess, is the answer. But that would be the thing to try, and if it works — like, if you can really reliably make that work — then maybe we can figure out how to sort of automate it based on the DB size, to try to keep, like, whatever the highest level is that you've —
A
You know — that could fit within the DB, to keep it within, like, a certain size range, so that, you know, levels 0, 1, 2 up to n mostly or entirely fit, and anything beyond that, you know, gets moved off. But that's all kind of experimental, right — I don't know how well it would really work in practice to do something like that, but that would be the place to start, yeah.
A
There was something, though, with RocksDB — there might be something else here that might actually make this easier, Nick. It was something that was added to RocksDB in, like, the last year or two: I thought they were maybe trying to figure out automatic tuning for, like, the level sizes. I don't remember exactly how it worked, but that might be useful for you as well. I'll look that up, and if I find it, I'll link it.
E
Yeah, I did see there was one thing they did change where, I think, originally you basically set it on, like, level zero or something, but then it changed and you set it on level 1, and then it calculated what all the others should be relative to that — which could be what's happening, but I think it's because —
A
Maybe we can wrap that one up for now and then just very quickly talk about some of the OMAP stuff. So, all right — OMAP. We're slow: FileStore, at least on the test that I was doing, was taking about five cores to do one point five to two K write IOPS for small OMAP entries — in this case I think it was 512-byte writes — and BlueStore was actually taking more cores but was a little faster, probably.
A
The thing I noticed when looking at a wall-clock profile was that we were spending a lot of time in every single tp_osd_tp thread doing encode and various other things. You know, with 16 tp_osd_tp threads, each thread was active about 25% of the time, so that was like four cores just being burned doing tp_osd_tp work. The messenger was certainly busy, but I'm not sure — well, maybe. RocksDB was also spending a lot of time doing compaction, but that was maybe only a core.
A
I'll take a look — you know, it's worth making sure and double-checking.
H
I would say, for some purposes, yes. I mean, if it can describe this particular — I'd be very interested in helping with that, or trying to think about ways to do that, and we already do some. I mean, there are — for example, there are four things. One: there already was work done by Casey.
H
Initially — I don't know if it has merged or what — but there was the range delete, for deleting whole bunches of keys, which we thought would be helpful and which we can expedite. Other operations are already batched: if we're, for example, operating on — getting all of, or mutating a bunch of — keys, in almost all cases —
H
If we're operating on the metadata — on the OMAP entries for an object — we're already batching them all. Now, mind you, we may not be batching them in the CLS at this stage, but the work I was describing — the range delete thing — was all the way down into the CLS, so it worked all the way down to the key-value store underneath, so we could expedite it. So I think, Sam —
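A toy model of the point-delete versus range-delete trade-off being discussed — illustrative only, not the RocksDB API, though the real DeleteRange similarly writes a single range tombstone instead of one tombstone per key:

```python
class ToyKV:
    """Minimal KV store that counts tombstones written on delete."""
    def __init__(self):
        self.data = {}
        self.tombstones = 0

    def put(self, key, value):
        self.data[key] = value

    def delete(self, key):
        # Point delete: one tombstone per key.
        self.data.pop(key, None)
        self.tombstones += 1

    def delete_range(self, begin, end):
        # Range delete: a single tombstone covers [begin, end).
        for key in [k for k in self.data if begin <= k < end]:
            del self.data[key]
        self.tombstones += 1

kv = ToyKV()
for i in range(1000):
    kv.put("log.%06d" % i, b"entry")
# Trimming the first 500 log entries as a range costs 1 tombstone,
# where 500 point deletes would have cost 500.
kv.delete_range("log.000000", "log.000500")
```

This is why the trimming workloads mentioned here (index trimming, log trimming) benefit most: they delete long contiguous runs of keys, which collapse into one range tombstone.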
H
Prepare, delete, and insert — yes, as I say, that's a special case for various kinds of trimming operations — index trimming; that was what Casey's work applied to — and that can perhaps be extended to some GC cases and stuff like that. Although, as we've kind of discussed, since that's potentially the bigger case, we could take GC out of OMAP, and that would be a bigger win.
H
But having said this — we already batch; we can batch trimming of the current log. I mean, there won't be that many of those, right — those are all CLS calls, I think. But, you know — I can see —
H
I can see doing that, but what I'm not seeing, though, is a ton of benefit for this, because the operation being done is biggish and it's waiting to complete. This is going to perturb the flow in other ways that are very visible, sure.
H
Exactly — I mean, there may be more; there may be cases I don't know about where we're doing a bunch of operations in a sequence that we could have batched. But cross-op — cross-operation, you know, cross-API-op — batching doesn't feel very logical, but maybe it is, well.
D
Another case to consider might be many, many puts of very tiny OMAP entries in an RGW transaction.
H
We could try to — I mean, we could try to recreate it, allow for that, now that we have a torture workload that works like that. And yeah, we could, you know — the logs, or the objects being created, are pretty small, like — yes, I mean, that's where we could make —
A
So, sorry — Joshua or Matt — what's available to us in terms of kind of putting all of these updates together, kind of as a single transaction? Sorry, I'm really naive in this regard — what does the interface look like for that?
H
I wouldn't — we're hard pressed to spend much time noodling on it, you know. I mean, it didn't — but maybe it should have. I think one thing I'd like to set up, maybe, is some kind of ongoing conversation just about all that work, and then whatever — you know, but I don't know yet. But I mean, I think it could operate at two levels.
H
I mean, Josh is introducing the highest level, and I think that's useful to think about, but there's perhaps the lower-level one, right, where a bunch of operations that have run to completion in the OSD — now we complete them together, allowing them to combine — and the originator doesn't know about that case, but it has happened.
H
Yeah, the latter one. Okay, in this case there are operations flying in — those are the two cases; there are two ways of looking at it. One is that operations are intersecting at the RGW — like put foo, put bar, you know, put Jerry — and they arrive together within, like, a window, mm-hmm, and then we batch the — maybe the whole thing, maybe the parts that are synchronous — to the end.
H
We batch up the index updates for the puts of foo, bar, and Jerry together, and complete them and whatnot in one CLS op at the RGW — or, on the other hand, I don't know — if they happen to coincide. But another way of looking at it would be: if we allow some of our own CLS ops to rendezvous with each other and build up a transaction, then complete it when the window closes.
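That rendezvous idea can be sketched as a small batching window — hypothetical names, just to make the mechanism concrete: ops arriving while the window is open ride together in one batch (one CLS call in the analogy above), and a straggler opens a new window. A real implementation would flush on a timer; this sketch flushes lazily on the next submit or on close, which keeps it synchronous and testable:

```python
import time

class BatchWindow:
    """Collect ops that arrive within `window_s` of the first pending op,
    then hand them to `flush` as one batch."""
    def __init__(self, window_s, flush):
        self.window_s = window_s
        self.flush = flush      # called with the list of batched ops
        self.pending = []
        self.opened = None      # time the current window was opened

    def submit(self, op, now=None):
        now = time.monotonic() if now is None else now
        # Window expired: flush what we have before starting a new one.
        if self.pending and now - self.opened > self.window_s:
            self.flush(self.pending)
            self.pending = []
        if not self.pending:
            self.opened = now
        self.pending.append(op)

    def close(self):
        if self.pending:
            self.flush(self.pending)
            self.pending = []
```

For example, with a 10 ms window, `put foo` and `put bar` arriving 5 ms apart complete as one batch, while an op arriving 20 ms later lands in a fresh batch — which is the trade-off being debated: latency added by the window versus fewer round trips.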
H
No doubt, on both of these approaches, you know — batching is something we've had fun with before, especially for deletes, and I think you're able to get that one in as a quick fix for, you know, some of the speedups of the things that we'd like to see.
H
Obviously — unless log trim is already in, which I think didn't quite make it — or if it is, then report that, but I think it didn't. But if —
A
Well, I've got, like, a list of wants here that I was going through, and I was thinking about trying to make it kind of meet those desires. But after I kind of got done ripping it apart, I've got all these pieces, and I've got something that works, but I'm kind of questioning how I put all this back together and whether it actually makes any sense. Doug's tool that he wrote — that does, basically, OMAP writes and then deletes — looks really nice and clean in comparison.
H
What's good about that is that, you know, you can elaborate it into something that understands what you're after — the RGW-ish use cases, you know. I know I got into this — but I think my intuition agrees with yours here: I think we should be building up something that knows what OMAP workloads look like, and not try to mix it together with things designed to be, you know, writing buffer segments and stuff.
A
Neha made the very valid point that maybe it's not worth spending a lot of time on this and worrying about any of it — just try to get some quick results, and then go back and fix all this. So hopefully I will have at least something soon to do tests with, even if it's just Doug's existing tool. Actually, my —
A
Well, okay, so — frankly, I don't even necessarily care about the bench portion of it so much as seeing if we can make RocksDB start behaving in really bad ways by, you know, having a bunch of tombstones in the database that aren't cleaned up. I think that's really, really —
H
In the fullness of the problem, you know, there is the question: what is optimal for RocksDB — or another KV store — independent of all the other sequencing and ordering and other stuff that goes on? And you could factor that out and just focus on the key-value interface. Yeah, that might be a useful exercise.
D
Objects, right — I'm not sure it's that important, actually. I think I'd rather have rados bench not create new objects for everything, or have whatever benchmark we use not do that, because that's not how anything that uses OMAP actually works. If the benchmark that's creating objects for every OMAP write is measuring something that no one actually does, it's not a very realistic benchmark.
H
On the previous topic — or some topic — certainly bucket resharding is something we could analyze as a source of batching, yeah. That's new, but I don't think — it looks like — but that's something I'll ask Eric to look at.
A
To play devil's advocate a little bit, though, Josh — it's not exactly the same, but you'd expect that in a world with BlueStore you probably aren't dedicating OSDs just to bucket index updates, right? You're probably trying to spread it across as many OSDs as possible, to have their DBs on flash, and they have a mix of, like, you know, object creation and objects that have bucket index updates or OMAP updates on them, right? I'm not sure that that's true.
A
A
But
so
Josh
do
you
think
in
a
blue
store
world
I
guess
my
view
on
the
blue
store
world
was
that
probably
people
were
going
to
go
back
to
kind
of
the
simplest
deployment
procedure
possible
because
in
blue
store
you
know
the
the
OMAP
is
gonna
be
on
flash.
If
you
have
flash
a
flash
DB,
it
doesn't
really.
A
You're kind of — by splitting it now, though, right, you're in some ways constraining yourself, because now you have to decide how much flash to give your non-bucket-index OSDs and your bucket-index OSDs — you have to decide how much each should get — whereas if it's one OSD, they share it, right?
D
Sure, sure — I think that this is something — I think we can't ignore the case where they're separate. I think that's a pretty common case, and I don't think that's gonna change that fast in the future, yeah.
A
But kind of getting back to my original point, right: I think we can't assume, in the future, that object creation and OMAP are gonna be happening distinctly on two separate OSDs, and never have the presence of both at once. Does that make sense? Sure. You know, it might be the case that most people are still gonna separate them; I just don't think we can, you know, base any decisions on the assumption that you won't have object creation and OMAP happening on the same OSD. Sure.
H
Mark, so let's see — I don't see you on here... oh, I see you — the little Red Hat, though. Only as such, though, oh.
H
But yeah — Casey, sure — I didn't email the notes, but kind of what the conversation was, I wanted to actually — I'm late, yeah. Let's — maybe we can return to it on another BlueJeans, so I'll paste — also paste — do you mind?