From YouTube: 2019-08-08 :: Ceph Performance Meeting
A: So I suspect that a bunch of people from core are trying to get this alloc size change over the fence, because we have a couple of people that are really interested in getting it into master and then backported as well. The gist of it is that we're using up extra space due to fragmentation, and people don't like it. There was a more extensive, bigger PR originally that was kind of complicated, so we went back to a much simpler PR that just sets the BlueFS alloc size to 64K, and then a slightly more complicated PR in the end that only uses the 64K alloc size on the shared device, but not on the DB or WAL device in BlueStore. The idea there is that we still keep a large alloc size for RocksDB when we can, and then on the shared device we have a smaller one.
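For reference, the change being described maps onto BlueFS allocation-size options roughly as follows. This is only a sketch of the intent, and it assumes the option names from the PRs under discussion (bluefs_alloc_size already exists; bluefs_shared_alloc_size would be the new shared-device knob):

```ini
# ceph.conf sketch of the proposal (values are illustrative): keep a large
# BlueFS allocation unit on a dedicated DB/WAL device, but drop to 64K when
# BlueFS shares the main (slow) device with object data.
[osd]
bluefs_alloc_size        = 1048576   # 1 MiB on a dedicated DB/WAL device
bluefs_shared_alloc_size = 65536     # 64 KiB on the shared/main device
```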
A: That covers the first two PRs. Then there is a fairly complicated PR for the OSD that implements async read for the replicated backend. I started poking through that a little bit, but I didn't have time to really dig into it deeply. I'm hoping that we can get performance and profiling data on that, because it looks like a big change.
A: He did note a couple of things, though; he said there was a significant performance improvement from it in his tests. But still, every little bit helps. We've also got a couple of finisher changes here: one from Igor that removes some lock acquisitions, and another one that reduces unnecessary notifies.
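As a rough illustration of that second kind of change (a generic sketch with a single worker thread, not the actual Ceph Finisher code or either PR), the usual trick is to only signal the completion thread when the queue goes from empty to non-empty, since an already-busy worker will drain anything queued behind the item it is working on:

```cpp
#include <condition_variable>
#include <deque>
#include <functional>
#include <mutex>

// Generic finisher-style queue: notify only on the empty -> non-empty
// transition to avoid redundant wakeups (assumes one worker thread).
class SimpleFinisher {
  std::mutex m;
  std::condition_variable cond;
  std::deque<std::function<void()>> q;

public:
  void queue(std::function<void()> fn) {
    bool was_empty;
    {
      std::lock_guard<std::mutex> l(m);
      was_empty = q.empty();
      q.push_back(std::move(fn));
    }
    // If the queue was already non-empty, the worker is awake (or will
    // re-check before sleeping), so notifying again is just wasted work.
    if (was_empty)
      cond.notify_one();
  }

  void worker_loop() {
    std::unique_lock<std::mutex> l(m);
    for (;;) {
      cond.wait(l, [this] { return !q.empty(); });
      while (!q.empty()) {
        auto fn = std::move(q.front());
        q.pop_front();
        l.unlock();
        fn();        // run the completion callback outside the lock
        l.lock();
      }
    }
  }
};
```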
B: The PR itself shards different pieces of BlueStore and other metadata into different column families. I think it splits things like the omap and onode data into several shards each, I think there may be allocator column families, and there are a few other column families as well for pieces of metadata like the PG metadata.
A: So there's my original PR for removing the double caching which, if Adam's stuff looked like it was going to get punted until after Octopus, I was going to go back and just, you know, whip into shape for Octopus. But now, based on what Josh was just saying, it sounds like the sharding is getting into Octopus, and Adam has a version of my PR that's rebased on top of his, so I'll go through and try to start reviewing that and see if we can get it in.
B: The other piece is an addition to the bluestore tool that allows you to change the sharding strategy: either change the number of column families, or make it not use column families at all. It just takes a relatively short time; it iterates through the whole database and does the copy to the different column families. It's way faster than backfilling.
B: It doesn't particularly matter in this case. Adam has it so far putting the PG metadata in a single column family; we could look at sharding that out as well and see whether it had any kind of impact.
A: I don't know that we'd need to. Theoretically, for the PG log, right, we shouldn't have very many of those keys in the database itself. I mean, ideally speaking, it would never go in: you'd just have WAL-buffered writes, and then you'd hit a tombstone before you ever flushed into level zero.
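As a minimal sketch of that pattern (plain RocksDB, not BlueStore's actual keys or code; the key name here is made up): a PG-log-style entry is written and then deleted again while it is still in the memtable, so a tombstone covers it before anything would need to be flushed to an L0 file.

```cpp
#include <cassert>
#include <rocksdb/db.h>

int main() {
  rocksdb::DB* db = nullptr;
  rocksdb::Options opts;
  opts.create_if_missing = true;
  rocksdb::Status s = rocksdb::DB::Open(opts, "/tmp/pglog-demo", &db);
  assert(s.ok());

  // A pg-log-like entry: appended on write, trimmed again shortly afterwards.
  s = db->Put(rocksdb::WriteOptions(), "pglog.1.2a.00000042", "entry");
  assert(s.ok());
  s = db->Delete(rocksdb::WriteOptions(), "pglog.1.2a.00000042");
  assert(s.ok());

  // Both the value and its tombstone now sit in the memtable (and the WAL);
  // ideally the trim always wins this race, so a flush never has to write the
  // entry into a level-0 SST file at all.
  delete db;
  return 0;
}
```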
B: The point is that the different column families are separate LSM trees, so they have fairly separate storage and separate memory buffers. So instead of having your writes going to the PG log mixed in with a bunch of writes going to omap or object metadata, they sit in their own memtable and are less likely to be flushed to disk, basically.
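A minimal sketch of that separation with plain RocksDB (the column family names here are made up, not BlueStore's actual sharding scheme): each column family gets its own memtable, and flushing one does not force the other's short-lived keys out to disk.

```cpp
#include <cassert>
#include <rocksdb/db.h>

int main() {
  rocksdb::Options opts;
  opts.create_if_missing = true;

  rocksdb::DB* db = nullptr;
  rocksdb::Status s = rocksdb::DB::Open(opts, "/tmp/cf-demo", &db);
  assert(s.ok());

  // Two column families: one for heavy omap traffic, one for pg log entries.
  rocksdb::ColumnFamilyHandle* cf_omap = nullptr;
  rocksdb::ColumnFamilyHandle* cf_pglog = nullptr;
  s = db->CreateColumnFamily(rocksdb::ColumnFamilyOptions(), "omap", &cf_omap);
  assert(s.ok());
  s = db->CreateColumnFamily(rocksdb::ColumnFamilyOptions(), "pglog", &cf_pglog);
  assert(s.ok());

  // Writes land in separate memtables even though they share one WAL.
  db->Put(rocksdb::WriteOptions(), cf_omap, "obj1.attr", "value");
  db->Put(rocksdb::WriteOptions(), cf_pglog, "1.2a.00000042", "log entry");

  // Flushing the busy omap column family leaves the pg log memtable alone,
  // so its entries can still be trimmed before ever reaching level 0.
  db->Flush(rocksdb::FlushOptions(), cf_omap);

  db->DestroyColumnFamilyHandle(cf_omap);
  db->DestroyColumnFamilyHandle(cf_pglog);
  delete db;
  return 0;
}
```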
A: So I guess what I'm wondering is, say you did split it across multiple column families. Now you've got, you know, a portion of your PG log entries going into one SST file, or into one, you know, level zero (you'll never really get into level one or the lower levels, we don't have enough), and you have other ones going into other ones. I wonder, is that actually a good thing with the way that the traffic works for the PG log?
A: The question in my mind is: what advantage does it get you? If you already don't have enough data to require a hierarchy, because everything is landing in L0 anyway, then what does it buy you to actually shard the PG log across multiple column families? Because it doesn't reduce other things, right; I think there's some extra WAL overhead that you end up taking on, and potentially now you have multiple kinds of compactions that happen less frequently for each one, I think.
B: I'm not sure it needs to be sharded, and I'm not sure there's enough data there, like you say, for it to make sense. But just having it in a separate column family from everything else, just a single one, is good stuff. Oh yeah.
C: So my concern is about compaction, whether there are ever actually instances where compaction is causing the op latency tails to lengthen dramatically, and I wanted to understand if we have any way to sort of see what the impact of this is. Another thing that I want to see is what happens under extreme stress, or as onodes get larger, for example, or perhaps when a long L0-to-L1 compaction is happening.
A: One thing that we could do that would be really nice, even if we do have a bunch of PG log data ending up in level zero, is to extend that PR I have to avoid the double caching of onodes, and also make it so that we don't assign a cache, or assign only a very small cache, specifically for the PG log. That's an excellent point, yeah.
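In RocksDB terms, one way to express that idea is to give a (hypothetical) PG log column family its own, deliberately tiny block cache via per-column-family table options. This is a sketch of the concept only, not the actual PR:

```cpp
#include <rocksdb/cache.h>
#include <rocksdb/options.h>
#include <rocksdb/table.h>

// Options for a hypothetical "pglog" column family: a deliberately tiny block
// cache, since pg log entries are written, trimmed, and rarely re-read, so
// caching their blocks mostly steals memory from onode/omap data.
rocksdb::ColumnFamilyOptions make_pglog_cf_options() {
  rocksdb::BlockBasedTableOptions table_opts;
  table_opts.block_cache = rocksdb::NewLRUCache(1 * 1024 * 1024);  // 1 MiB

  rocksdb::ColumnFamilyOptions cf_opts;
  cf_opts.table_factory.reset(rocksdb::NewBlockBasedTableFactory(table_opts));
  return cf_opts;
}
```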
A: And then there's the cache binning rebase. I think that we probably want to do that after the onode double-caching PR, but it's not really reliant on it, so maybe I will try to quickly get that in now. But there's lots to do before Octopus, whether we actually merge that or not, so maybe that's next on my list.
A: All right, so the only thing I have here for discussion topics is to kind of mention, I guess we already talked a little bit about it, the BlueFS alloc size and what we're doing there. The gist of it right now is to set the DB and WAL alloc size large, and then for shared devices have it be smaller (I think 64K is what we're talking about now), which would be the same as the min_alloc_size on our disks, and big enough that it shouldn't hurt performance too much.
A: I've got data for omap writes, 4K omap writes, with rados bench, so let's look at that one first. This is looking at BlueFS alloc sizes of 16K, 32K, and the default 1 MB, and then a 16K min_alloc_size versus a 4K min_alloc_size. The gist of it is that, at least for pure omap writes, it actually was faster with a 4K min_alloc_size than it was with a 16K min_alloc_size.
A: If you look at that for, like, 4K random IOs, it's all really pretty close for reads. It looks like we're a little slower with a 4K min_alloc_size, but for writes it's actually doing slightly better, and the difference there kind of makes sense, right? Because with 4K you have more metadata in the database, and potentially more work to do for reads and for writes, but it also means that for these 4K IOs you have less write-ahead log data being written out. So that's kind of the trade-off: for writes it is, in this case, a little bit faster, and reads are a little slower. What I found incredibly interesting, though, was what's going on here with the 128K writes, specifically sequential writes; it's significantly faster in some cases. Oh wait, yeah, for iteration two of the tests it was going faster, or, I guess, not for iteration two, but anyway, there's crazy stuff going on there.
A: The initial write is significantly slower than subsequent iterations, and this is with a prefilled volume, although the objects were prefilled with a 4 MB IO size. In any event, there's crazy stuff going on, so it's probably worth spending some time investigating what's going on here.
E: Yeah, there was a PR, maybe half a year ago, which tries to keep very small objects (I tried it against 1K and 2K, as I remember) in the KV store, and it actually showed better performance than keeping them on the main device, not to mention the space savings we're getting this way.
E: A hundred percent correct, because I was using a DB device and an NVMe drive; well, I was using NVMe for both the DB and the main device at that time. And actually now I'd like to check how this stuff behaves on spinners, with both DB and WAL there. Actually, we need to plan the scenarios to test here, and that was the main comment from Sage at the time, but yeah, I'd like to check different block sizes, different compaction sizes, different drives, and things like that.
A: Sure, well, yeah, I mean, depending on how many levels deep you go, I guess, you can end up with stuff ending up in both level 0 and level 1 and level 2 in some situations. I think you're probably right, though, that write amp is maybe the bigger concern, but space amp could still be an issue.
A: Right, I had the formatting of my graphs off here for this chart, but yes, it's interesting how it's slower over multiple iterations for the large writes, which can make sense as it fragments, but then for these 128K writes it's actually faster on the subsequent iterations.
A: All right, well then, I think next week we are going to get Sam to come and present the latency analysis work that he's been doing in BlueStore. I'm not totally positive on that, but he was out this week, so he wasn't able to do it today; hopefully next week we'll get him. So I guess that's it. Have a good week, guys, thanks!
You too.