From YouTube: Ceph Performance Meeting 2022-01-20
A: All right, so not a whole lot of new stuff this week, which is not super surprising since everyone's trying to get their stuff done for Quincy. A couple of things closed; not as many as last week, but still a couple. It looks like the fix for the bucket-loading stats got merged in. This was the issue that cropped up last week with RGW.
A: Luckily, we got that in before the Quincy merge, or rather the Quincy freeze, so that's really nice. That fixed the issue as far as we can tell, so it's a really good performance fix. Let's see: Igor's PR to make shared-blob fsck much less RAM-greedy was merged just now. Neat.
A: That's also really, really good. I don't think we have Adam from core here, but I believe he said it was changed to fit within a memory boundary based on the osd_memory_target. So that's really good. Previously it just improved things but didn't actually bound memory use; now I believe it's bounded, so a very, very good fix there.
A: Oh, one last one: this was just a super old ceph-volume PR that was closed by the stale bot. I don't even remember exactly what was going on there, but apparently no one else does either, so that got closed. We do have a number of updated PRs.
A: Ronen, are you here? Yes, you are. This is the local pointer variable one? You approved it; do you think it's time to merge this thing?
A: I think there was a question for Kefu about whether this should be merged, but since Kefu is, you know, maybe not quite as involved as he used to be, maybe we should just merge it if it passed, I don't know. Is this... has this been through QA? Oh, I asked whether it had been run through QA. Yeah, it doesn't look like it; you're just looking, yeah. Maybe we should add this to QA.
A: I'll just add the qa label here. Labels, yeah, good idea. Okay, yeah, assuming you're still okay with it.
A: Right, next: Adam's PR to make the pinning logic simpler in the BlueStore onode cache. He updated it and Igor re-reviewed.
A: So that's back in Adam's court. Radek introduced huge-page-based read buffers that broke on FreeBSD. I think he's implemented some fixes, but he doesn't have a test machine, so we're waiting on the FreeBSD folks to get back to us to see whether that fixed it or not.
A: Oh, you know what, that one should probably actually be in no movement, because I don't think anything's actually happened on it in the last week. And then, oh, the omap benchmark test that we added to the store test: no movement on it really, just that Neha marked it to keep it from going stale, since we do still want it in some fashion.
A: All right then, the only thing I have for discussion topics today is that we're trying to do Quincy testing; performance testing, not actually running the final tests yet, but this week I've been working on just making sure that everything is still working properly, primarily focused at the moment on Pacific tests that we'll be able to use for comparison later. There were just a couple of issues there.
A: One is that the iSCSI tests are not working anymore, due to the RBD image not appearing in the targetcli hierarchy. After talking to Josh and Neha about that, I think our plan right now is just to skip any iSCSI testing, because it's kind of not super important; at least that was the general thought on it.
A: If anyone cares or has a different opinion about that, you know, let me know, but right now the plan is just not to do any of the tcmu-runner iSCSI-type tests for this, given that Mike Christie left a while ago and I'm not sure how much work has been done since then. I don't think anyone's really expecting a huge, dramatic change in the performance there anyway. Let's see, next: so far I've been doing tests with six OSDs per node on our NVMe drives, and they're looking right in line with where I remember Pacific being previously.
A: So it's all looking really good. The one thing here is that gathering cycles-per-op metrics using perf with these kinds of big tests, where we have, you know, lots of NVMe drives being used on a per-node basis, is showing up to about a 15% performance impact for things like small random reads. So we do see an impact from gathering those cycles-per-op metrics.
A: I was primarily going to aim this at Neha or Josh, but the question is, for this Quincy testing, whether or not we should continue to gather these metrics for the big tests, or just restrict them to the smaller tests. If anyone has a strong opinion, you know, feel free to air it. Otherwise, I think my thought is that maybe we'll just not gather this for the big tests.
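For context, the cycles-per-op number discussed here boils down to dividing a perf cycle count for the OSD process by the number of I/Os fio completed over the same window. A minimal sketch of that bookkeeping (the `perf stat` invocation and the helper names are illustrative, not the actual test harness):

```python
import re
import subprocess


def parse_perf_cycles(perf_stderr: str) -> int:
    """Extract the cycle count from `perf stat -e cycles` output,
    which prints numbers with comma separators to stderr."""
    m = re.search(r"([\d,]+)\s+cycles", perf_stderr)
    if m is None:
        raise ValueError("no cycle count found in perf output")
    return int(m.group(1).replace(",", ""))


def cycles_per_op(total_cycles: int, total_ops: int) -> float:
    """The metric in question: CPU cycles divided by I/O ops completed
    over the same measurement window."""
    return total_cycles / total_ops


def sample_osd_cycles(pid: int, seconds: int) -> int:
    """Pin `perf stat` to one ceph-osd PID for a fixed window.
    Needs root (or perf_event_paranoid relaxed); illustrative only."""
    proc = subprocess.run(
        ["perf", "stat", "-e", "cycles", "-p", str(pid),
         "--", "sleep", str(seconds)],
        capture_output=True, text=True, check=True,
    )
    return parse_perf_cycles(proc.stderr)
```

The measured overhead comes from perf's sampling itself, which is why disabling it for the big multi-NVMe runs is on the table.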
A: Not hearing strong opinions on it, so I'll bug Josh and Neha and see if they care, but otherwise I'll probably just disable perf for those tests, since it's showing a non-negligible impact.
A: The other topic that has come up is running recovery tests. For this, I think my thought right now is basically to take some of the standard fio tests that we run and add a recovery workload. Basically, in the past the way this worked is: we first pre-filled, then let fio run for a while doing some kind of workload, marked some number of OSDs out, and then watched what happened until the cluster entered into a steady state.
A: Again, you know, it became healthy, and then we brought the OSDs back in and watched the behavior again until we entered into a healthy state. That's probably what I was thinking, something along those lines, for recovery testing for Quincy here, but again, I'm open to feedback if anyone thinks something different would be better.
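The flow described here (run the workload, mark OSDs out, wait for the cluster to settle back to healthy, bring them back in, wait again) could be sketched roughly like this. The `sh` runner is injectable so the control flow can be exercised without a live cluster; this is a sketch, not the actual test tooling, and it assumes `ceph health --format json` reports a top-level `status` field:

```python
import json
import subprocess
import time


def run_recovery_test(osd_ids, sh=None, poll_s=1):
    """Recovery workload skeleton: mark OSDs out, wait until the cluster
    is healthy again, bring them back in, wait again.
    `sh` runs a command list and returns its stdout; injectable for tests."""
    if sh is None:
        sh = lambda cmd: subprocess.run(
            cmd, capture_output=True, text=True, check=True).stdout

    def wait_healthy():
        while True:
            status = json.loads(sh(["ceph", "health", "--format", "json"]))
            if status.get("status") == "HEALTH_OK":
                return
            time.sleep(poll_s)

    # fio is assumed to already be running its steady-state workload here.
    for osd in osd_ids:
        sh(["ceph", "osd", "out", str(osd)])
    wait_healthy()  # watch until backfill/recovery settles
    for osd in osd_ids:
        sh(["ceph", "osd", "in", str(osd)])
    wait_healthy()  # and again after re-adding the OSDs
```

The interesting data is what fio reports for client throughput and latency during the two recovery windows, not just the time to HEALTH_OK.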
A: All right, well then, I will continue on with that as planned. The only other thing here is S3 tests. I've got some pre-canned ones from the past that I'll probably just recycle: basically multiple RGW instances.
A: You know, high concurrency, lots of objects, lots of clients, just to see basically how far we can push RGW, and then look at how much CPU it's using and how hard we're pushing the OSDs. That's kind of typically what the tests have been in the past.
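A high-concurrency small-object PUT storm of that kind might be sketched like this. Here `put_object` is a placeholder for whatever S3 client wrapper the real pre-canned tests use (for example boto3 pointed at an RGW endpoint); it is injected so the driver itself can be tested standalone:

```python
import time
from concurrent.futures import ThreadPoolExecutor


def s3_put_storm(put_object, n_objects, concurrency):
    """Drive `put_object(key, data)` with many concurrent small PUTs
    and return the achieved ops/sec. `put_object` is a placeholder for
    an S3 client call against an RGW endpoint."""
    payload = b"x" * 4096  # small-object workload
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        futures = [pool.submit(put_object, f"obj-{i:08d}", payload)
                   for i in range(n_objects)]
        for f in futures:
            f.result()  # surface any client-side errors
    return n_objects / (time.perf_counter() - start)
```

While this runs, the interesting numbers are RGW's CPU usage and the OSD-side load, which is exactly what the old tests recorded.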
A: So yeah, that's basically all I've got for right now. You know, there are some Quincy-related things, performance and general things, that are still in the works. Radek is working hard on trying to fix some issues that were caused by a bufferlist PR that he got in last minute. I know the RGW guys are still, you know, working really hard; they had PRs coming in on Friday that I think they're still working on, getting fixes in for. Anyway, I'll open it up.
C: I'll just say, if people on the call don't know, there's also some testing going on on our Gibba cluster, which has... I don't remember the exact number of OSDs, but it's a lot of OSDs.
C: It's hashtag ceph-gibba, but...
C: Let me check really quickly. I think it's like... I don't... I can check, one second.
C: Just something to know, that we're getting things from all angles.
A: So one of the questions with the Mako cluster that I'm using for all this testing is that we have the ability to double the number of OSDs; we have enough memory to do it. So we could do, like, two OSDs per NVMe drive, which would give us around 120 OSDs, but I'm not sure if it's worth redoing all the tests both ways, or if for this cluster we should just stick with, you know, one OSD per NVMe drive, which is the much more common way people deploy.
C: That was the main bit I had to add, but also, if you're doing any performance testing and you want us, the core team, to get involved in any way (I don't know exactly what way), just let us know, because I always like to get involved in different kinds of testing. So if there are any conversations that need to be had, let us know.