From YouTube: 2019-12-19 :: Ceph Performance Meeting
A
All right, so let's get started here. We've got a couple of new PRs, primarily from, I hope I pronounced it correctly, Xie Xingguo. These are working in the manager: parallel calculation for the balancer, another balancer improvement to eliminate the usage of the ms infrastructure for upmap mode, and I think some other general efficiency gains there.
A
Let's see, there were two pull requests that both got closed by the stale bot. One was from me: that's the avoid double caching in BlueStore onodes one. We still absolutely want to do this, but we decided a while back to wait until Adam's cache sharding, or sorry, database sharding PR merged, and that is still in the process of being tested.
A
So this will probably need to be rewritten on top of that anyway. The old one is useful to have there for reference, but not super important to keep alive, I think, so I'll just let it stay closed, I guess. And there was another one here, increase messenger IOPS, that was written by Roman over at SUSE. I believe that was also closed by the stale bot. I think Greg reviewed it at one point, but there wasn't a whole lot of movement on it.
A
So I guess, if he is interested in bringing that back, maybe he can either reopen this PR or submit a new one. We got a couple of them updated. I've got a PR for hardware recommendations; this just changes and updates some things that we had in our documentation. There's been a little bit of discussion about that, but no approvals yet, so we can't merge it until somebody goes and decides to approve it.
A
I did do some testing on that, but the problem I hit was that performance was actually worse in that test than in previous runs I had done, and I didn't have time to figure out why. In those subsequent runs I didn't see much benefit at all from increasing it beyond seven, so in those tests seven was good enough, but that was quite a bit different than the behavior I had seen previously, where we were still seeing scaling beyond it.
B
Let's see, what else. We did just talk about that PR in the scrub this morning, and we are still interested in it. Whether the initial number is seven or something slightly higher, I mean, I think there would be no objection to taking it, and if you want to do more performance work to argue for increasing it, that would be fine too.
A
Maybe we should set it to like seven or eleven or whatever now and just get it changed, and then, you know, we can further discuss higher defaults if it makes sense, or if it doesn't, then not.
B
I don't want to block it just because of that. I did have a comment about changing the default in the zone instead of the config, though.
A
Yeah, I'll try to get on top of that here soonish.
A
Okay, let's see, what else do we have?
A
There was an older PR about adapting the purple test to an ARM architecture, and I think Kefu happened to just review that this week, so that got updated; it had previously been kind of languishing in the no-movement category. I think that's about it. There was one other thing, Casey, that you had mentioned still being interested in: that was the PR for potentially avoiding the decode/re-encode step in between the OSD and RGW, and passing the buffer list straight in, or straight across.
B
Yeah, and I still think there's a way we can do that without changing the client side of the interface.
B
Adding something on the cls side, probably in the form of a template, where the cls can encode it as a version with buffer lists there, so it doesn't have to decode and re-encode, and then on the client side it uses the existing one that has the actual structure, so it'll decode normally.
A
Yeah, if we could do that, especially without changing the client-side code, that would be really nice.
A
Also, that testing was done before Eric's PR, so it's quite possible that it will be different now. I'm not sure if it would be better or not, or if it would have less of an effect at this point, but it may have a different effect than it did when I was testing it then.
A
All right, cool. I don't think there's much else going on here. It's been kind of a quiet week with people starting to take off. Any other PRs that I missed, guys?
A
All right then, the only other thing this week that I have is that we're continuing to do testing looking at the PG log and what we can do to make it better. I ran through the same set of tests that I had from last week on the new Intel nodes that we have, rather than the older ones, and I did get results from those, but unfortunately our switch is having a ton of problems and needed to be RMA'd, so those nodes haven't been accessible for most of the week.
A
The gist of the results, though, is that we see improvement just like we did on Incerta, but then we hit kind of an upper wall at around 72,000 write IOPS, and at that point further improvements just make CPU usage go down rather than performance increase. So that was kind of interesting to see: there's something else at that point that's slowing us down, but we do continue to see benefit from it, just in a different form.
A
I think the next step will be to look at prototyping some kind of alternate mechanism in the object store for storing PG logs, maybe circular buffers per PG or something, stored either in objects or, you know, directly in BlueStore extents or something. I don't know, we'll see, but that's kind of where we're at on it right now. I've been a little bit busy with some other bug fixes and things, so I haven't really been able to get back to it on Incerta.
A
So that's basically it, that's all I've got. Anyone have anything else they want to talk about?