From YouTube: 2020-01-30 :: Ceph Performance Meeting
A: Because we do see the bufferlist implementation slowing things down sometimes. So, what else? Igor has an excellent PR, or at least it looks excellent to me on the surface; I haven't looked at it really deeply yet, but it simplifies the onode pin and unpin logic in BlueStore. I had made an attempt to do this a couple of months ago, and there was an issue that we were hitting where we could end up iterating over every single onode in the cache when we were trying to do a trim.
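The trim problem described here can be sketched in a few lines. This is a toy model (the `OnodeCache` class and its method names are hypothetical, not BlueStore's actual data structures) showing how keeping pinned onodes off the LRU list lets trim touch only evictable entries, instead of walking every onode in the cache:

```python
from collections import OrderedDict

class OnodeCache:
    """Toy LRU cache where pinned entries live off the LRU list,
    so trim() never has to iterate past them (illustrative sketch,
    not BlueStore's real implementation)."""
    def __init__(self):
        self.lru = OrderedDict()   # unpinned onodes, oldest first
        self.pinned = {}           # pinned onodes, not eligible for trim

    def insert(self, oid, onode):
        self.lru[oid] = onode

    def pin(self, oid):
        self.pinned[oid] = self.lru.pop(oid)

    def unpin(self, oid):
        self.lru[oid] = self.pinned.pop(oid)

    def trim(self, target):
        # Walks only evictable entries; pinned onodes are never visited.
        while len(self.lru) + len(self.pinned) > target and self.lru:
            self.lru.popitem(last=False)

cache = OnodeCache()
for i in range(4):
    cache.insert(i, "onode%d" % i)
cache.pin(3)
cache.trim(target=2)
print(sorted(cache.lru), sorted(cache.pinned))  # -> [2] [3]
```

The pinned onode survives the trim untouched, and the trimmer's cost is proportional to the number of evictions, not the cache size.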
A: That looks much, much simpler, much, much better, so it's very exciting: it should be both an improvement over what we had before and an improvement over what I was doing. So that's really good. And then the last new one here is "allow zonegroup modify to configure index shards", from Casey, for RGW. That also includes just a really simple change: to increase the default number of bucket index shards.
A: Previously we didn't want to do that, because it could hurt bucket index listing performance; now, based on Eric's PR, that's much better, so we can increase the number of bucket index shards by default, which lets us have much better write parallelism than we previously had when you have multiple clients writing into one bucket, or just, you know, lots of stuff writing to one bucket concurrently, I guess.
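A rough sketch of why more index shards improve write parallelism: each object's index entry lands on one shard, so concurrent writers spread across shards instead of all contending on a single index object. The `bucket_index_shard` helper and its hash are made up for illustration, not RGW's actual shard-selection code:

```python
import hashlib

def bucket_index_shard(object_name: str, num_shards: int) -> int:
    """Pick which bucket-index shard an object's entry lands on by
    hashing its name (illustrative only; RGW's real hash differs)."""
    h = int(hashlib.md5(object_name.encode()).hexdigest(), 16)
    return h % num_shards

# More shards spread concurrent writes over more index objects,
# so writers contend on fewer shared RADOS objects.
names = ["obj-%d" % i for i in range(1000)]
for shards in (1, 16):
    used = {bucket_index_shard(n, shards) for n in names}
    print(shards, len(used))
```

With one shard every write serializes on the same index object; with sixteen, the same workload fans out over sixteen of them.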
C: All right, excellent, and thank you, Casey, for doing that. I kind of have only been, you know, updating that PR periodically, and it is better because you're looking at it a lot more often.

A: Yep, it's exciting to have this in Octopus. Me too, me too. RGW performance, I think, is going to look really, really good. Really exciting.
A: Okay, so, what else closed? There's a PR to increase the default pg_num in the PG autoscaler up to 32; that merged. That's really good, I think. Originally it was 8, then we bumped it to 16; now we've bumped it to 32. We're kind of trying to generally make the OSDs a little more receptive to having more PGs, potentially with smaller PG logs. That improves distribution of data when you don't have the balancer doing it, but it also kind of alleviates some other issues, so it just gives us a little bit more flexibility.
A: That's just for the defaults, though; it's still really good. It might have been the situation I saw in... I don't remember now. Next is "Move delimiter-based bucket listing filtering logic to CLS": so that's the filtering stuff that Eric was working on, in the CLS code in the OSD, so that we filter before sending it all to RGW. That merged; that's very good.
A: After that merged... the one right now is "split the read I/O if the I/O size is large". Kefu reviewed that despite being on holiday right now, so that was very, very nice of him; we'll see if we can get it in soon here. Adam's big Objecter trimming PR: it looks like Patrick may have seen some asserts in the MDS that he thinks could be related to that. I did see that you updated that, Adam, it looks like maybe after that. Any news?
A: OK, and then the last updated PR here is from our other Adam, who has been working on his BlueStore RocksDB sharding PR. He updated that based on some feedback, just to make it a little bit simpler, moving some logic into the key/value database rather than doing it in BlueStore. It may be a little less flexible in the future, but it's a lot simpler, so that was kind of a desired trade-off.
A: Otherwise, I guess that's probably the other big one that's kind of exciting here; we're waiting on a lot of other, older stuff that hasn't seen movement in a while. So that's it for PRs. I do not have... well, actually, are there any other ones I missed, from anybody, that are interesting and going on right now?
G: Two PRs contribute to the squeeze. One is about last_p; it's a pretty old one, but reworked to not introduce any new mechanism, just to move to iterators instead. The second one gives savings, from 70 to 72 bytes, on x86-64, but there is also another one that rearranges the things inside bufferlist. It allows us to squeeze out another eight bytes, reducing the size to 32, which is half of a cache line on most x86; it might be worth something. Yesterday I was testing, I was... I was rebuilding.
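A quick back-of-envelope, assuming a typical 64-byte x86 cache line, of what shrinking the bufferlist instance buys in cache footprint. The sizes are the ones quoted above and `lines_touched` is a hypothetical helper, not anything in Ceph:

```python
CACHE_LINE = 64  # bytes, typical on x86

def lines_touched(instance_size: int, count: int) -> int:
    """Cache lines needed to hold `count` densely packed instances
    (ceiling division via negation trick)."""
    return -(-instance_size * count // CACHE_LINE)

# Shrinking the instance from ~72 bytes toward 32 (half a cache line)
# roughly halves how much cache a population of bufferlists occupies.
for size in (72, 64, 32):
    print(size, lines_touched(size, 1000))
```

At 32 bytes, two instances fit in one cache line, so the same amount of cache can hold more than twice as many bufferlists as at 72 bytes.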
G: Yeah, I was struggling with the bufferlist for more than a year, and finally, after finishing the exclusive lock for crimson, I got... I got some time for bufferlist. There is one big win: maybe it's not performance-related, but it addresses a long-standing issue with bufferlist. It's the crosstalk between instances; we have... we had it for years.
G: Basically, if you make a new instance of bufferlist by copy-constructing from, let's say, a const-qualified instance of bufferlist, and you operate on the buffers inside, well, the modifications will be visible in both of them, including that const-qualified one. That's... that's a bit painful, but to fix this behavior we would need... we would need to make bigger changes.
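The crosstalk being described can be modeled in a few lines. This toy `BufferList` (not Ceph's real class; all names here are stand-ins) shares its underlying raw buffers on copy, so a mutation made through the original shows up in the supposedly independent, "const" copy:

```python
class Raw:
    """Underlying shared buffer, a stand-in for ceph::buffer::raw."""
    def __init__(self, data: bytearray):
        self.data = data

class BufferList:
    """Toy bufferlist whose copy shares the same raw buffers,
    mimicking the aliasing the speaker describes."""
    def __init__(self, raws=None):
        self.raws = list(raws) if raws else []

    def append(self, data: bytes):
        self.raws.append(Raw(bytearray(data)))

    def copy(self):
        # Copies only the pointers, like a copy constructor that
        # shares the underlying raw buffers.
        return BufferList(self.raws)

    def c_str(self) -> bytes:
        return b"".join(bytes(r.data) for r in self.raws)

a = BufferList()
a.append(b"hello")
b = a.copy()                     # caller expects b to be a snapshot
a.raws[0].data[0:5] = b"HELLO"   # mutate through the original
print(b.c_str())                 # the copy sees the mutation too
```

The copy was never told about the write, yet its contents changed; that is exactly why mutating through one instance can silently corrupt a const-qualified sibling.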
G: Have we looked at that at all? No, we took a completely different approach: instead of trying to embed as many things as we can in the bufferlist instance itself, we decided to minimize the size of the instance and try to embed... try to embed the buffer pointer in a special place designated in buffer::raw. We call it hypercombining. It's disabled for now, because there were some nasty, unidentified crashes, but, well, to date they are still there.
G: Okay, so the smaller the bufferlist instance is, the more bufferlists you can have in the same amount of cache. That's one thing, but there is also extra overhead doing a move: you need to copy the embedded memory, and that's quite frequent, I think, in crimson, because bufferlist instances tend to be embedded... tend to be embedded inside a seastar future, and seastar futures are constantly moved over and over. So the performance of move... of move matters.
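A toy model of that move cost: when a small buffer is embedded inline in the instance, "moving" the object cannot just steal a heap pointer; the inline bytes have to be copied every time. The `InlineBufferList` class and its `move_into` method are hypothetical, not Ceph's or seastar's actual types:

```python
class InlineBufferList:
    """Toy object with a small buffer embedded in the instance.
    Moving it cannot just transfer a pointer: the inline bytes must
    be copied, which is what makes frequent moves expensive."""
    INLINE = 32  # bytes of storage embedded in the instance

    def __init__(self):
        self.inline = bytearray(self.INLINE)
        self.used = 0
        self.copied_on_move = 0  # bookkeeping for the demo

    def append(self, data: bytes):
        self.inline[self.used:self.used + len(data)] = data
        self.used += len(data)

    def move_into(self, dst: "InlineBufferList"):
        # The unavoidable memcpy of the embedded bytes.
        dst.inline[:self.used] = self.inline[:self.used]
        dst.used = self.used
        dst.copied_on_move += self.used
        self.used = 0  # source is left empty, as after a C++ move

src = InlineBufferList()
src.append(b"payload")
dst = InlineBufferList()
src.move_into(dst)
print(dst.used, dst.copied_on_move)  # -> 7 7
```

If an object like this sits inside a future that gets moved through every continuation, those per-move copies add up, which is the cost being weighed against the cache-footprint win of embedding.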
G: Well, go ahead. No, I'm not specific about... well, I know I don't have any extra requirements on a particular version; all of them work for me nicely. I would just want to stick to the same version as you do: not diversify, not differentiate the environment unnecessarily, even for the sake of testing repeatability, sure.
G: Okay, I see... I see it. There is a nice addition, by the way; completely unrelated... okay, maybe not so completely, but a bit to the side. There is a pull request for CBT bringing support for the cycles-per-op metric, which can be useful also for master, for the classical OSD. It has been made by Ablum, and I bet you might want to take a look after your holiday. Yes.
G: Also, Ablum is working on... on extending the performance-comparison metric, the metric comparisons, over a broader set of benchmarks, because at the moment we only have this feature for the RADOS bench integration. So we really want to... we really want to make it available also for the FIO integration of CBT.
G: I think... I think, well, at the moment Ablum is preparing for his exams at his university, but when he's back he likely might want also to take a look at the interface we designed for... for other benchmarks as well. The idea is to generalize... generalize the performance comparison. At the moment CBT runs it only with RADOS bench, and only for write workloads; only for RADOS bench is CBT able to compare results.
K: Well, if you remember, we discussed potential ways to go: still have 4k... well, 64k... well, to have both good speed and effective space allocation, and the idea was to try another allocator, and maybe a different approach for the bitmap allocator, to collocate contiguous blocks in some cases. But the more I think about the issue, the more it looks like this is a generally unresolvable issue.
K: ...the benchmark numbers we've got already, and make sure again that this additional fragmentation, which is introduced by using 4k, impacts reads negatively while it improves writes. So I'd like to... to confirm this theory and then come back with some report. But again, it looks like an unresolvable... a generally unresolvable issue: one should elect which scenario is more important for him.
A: Is there a... well, last week we were talking about doing a 4k allocation size, but then having, like, a flag where you could request larger contiguous allocations. Is there any benefit to doing that at this point, do you think? Or is it, you know, no different than just doing a larger min_alloc size? Well...
B: I guess that's... we're talking about trying to narrow in on what causes this. So far it's fragmentation; it makes sense, intuitively, I guess. Maybe once we narrow down what exactly causes it, more precisely, maybe then we can think about whether there is some way we can address it: either BlueStore's allocation strategy, the allocator itself, or maybe using the hints from the client better, or something like that.
K: ...from two or three other people suffering something similar, but in rather different scenarios, in full production clusters, not in stand-alone setups. But, well, so, as I said, it's pretty unsorted so far. So if anyone has similar concerns, or you'll be able to run your regression test or a benchmark, if any, one day and share results, it would be great.
A: Igor, do you know, for the reports from Luminous, are they running the stupid allocator versus the bitmap allocator?
A: Another possibility too, Igor, is that it could be the memory... the memory target and memory auto-tuning. If they're running a test where previously they were allocating almost all of the cache for onode cache, and then, with the memory auto-tuning, it's trying to balance between RocksDB and onodes; but because it's double-caching onodes in both, you may have less memory than is available... less memory than you previously dedicated to onodes, and sometimes that... that might very well be slower. So maybe.
A: Yeah, if you have the ability to do the testing, see if increasing the memory target helps; and if it does, then it's probably that other issue. I've got a PR that basically fixes it, but we are waiting until Adam's RocksDB sharding work finishes before we merge it. Mm-hmm. But that very well could be it.
A: All of the incerta nodes are running CentOS 7 with, I think, Python 3.6, some version of 3.6. We're seeing subprocess calls basically immediately returning when you're trying to, I think it's like, close a thread, and then it's... it's, like, messing up. There's a PR that I've got that describes it better. All right, I'm gonna find it.
G: How do you start the cluster? Do you use CBT for it, or just by... ah, CBT, okay, that makes... okay, now I see. I tried... I tried applying the patch, and after that vstart actually loops; it loops... it tries to... it tries to start some things over and over and over. Oh.
A: I just wanted to let you know, sorry, this is unrelated to this conversation, but David just mentioned that the incerta nodes are ready to be moved across racks. So this is...
A: All right. Well, I don't have anything else, guys, though I will be out on PTO for the next two weeks. I think Josh is going to take over next week, and then he's also out, so we may or may not have a meeting the following week. Oh, there's a... there's a message here from Mark; I don't know if I'm saying your last name right. Let's see: "I'd like to help improve SSD threading." Yes, thank you very much. It was awful, okay, so: SSD threading.
A: Josh, one thing I was thinking might be interesting here too, with his results, would be a wallclock trace to see how long, like, syncs are taking. Yeah, that's a good... yeah, I...