From YouTube: 2020-04-09 :: Ceph Performance Meeting
A
There's a PR from Ma Jianpeng to change the osd_op_num_threads_per_shard SSD value to one. I did, theoretically, do tests looking at this kind of thing, and I don't recall seeing the same kind of slowdown that he saw in this case, where, with higher queue depths, he's seeing random reads slower with two threads than with one thread. But I do recall seeing some kind of wonkiness in here, so it's probably worth looking at again.
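A minimal sketch of the kind of A/B comparison being described, not taken from the meeting: it sweeps osd_op_num_threads_per_shard_ssd (a real OSD option) between one and two threads and runs random reads at a high queue depth. The pool name and the cephadm-style restart are assumptions about the test cluster.

```python
#!/usr/bin/env python3
"""Sketch: compare 1 vs 2 OSD worker threads per shard at a high
queue depth, looking for the random-read slowdown discussed above."""
import subprocess

def sh(*cmd):
    print("+", " ".join(cmd))
    return subprocess.run(cmd, check=True, capture_output=True, text=True).stdout

for threads in (1, 2):
    # Real OSD option; OSDs must be restarted for it to take effect.
    sh("ceph", "config", "set", "osd", "osd_op_num_threads_per_shard_ssd", str(threads))
    sh("ceph", "orch", "restart", "osd")  # assumption: cephadm-managed cluster
    # Populate objects first (--no-cleanup keeps them for the read pass),
    # then run 60s of random reads at queue depth 128.
    sh("rados", "bench", "-p", "bench", "60", "write", "-t", "128", "--no-cleanup")
    out = sh("rados", "bench", "-p", "bench", "60", "rand", "-t", "128")
    print(f"--- threads_per_shard={threads} ---\n{out}")
```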
A
There are a number of PRs that closed. Some were newly opened and closed without comment; they were not merged. There's an MDM e14 one; it just says 'remove vector'. I didn't actually read the code, but that was closed quickly. There's another one for the zlib windowBits configuration parameter for compression; that one was closed right away. And then there was another, older one that was closed by the stale bot — from Xie Xingguo, if I said that right — for fixing broken calculations of some kind, apparently in Boost. There also were a couple that merged.
A
My PR for BlueFS buffered I/O merged. That was a really small change, just to disable it by default again, since it was causing all kinds of issues with the kernel going into swap after the page cache filled up, for unknown reasons. But apparently it dramatically helps when there's a lot of long-running RGW traffic. So there's that, and then, oh, another one.
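For reference, a quick way to inspect or re-enable the option in question, bluefs_buffered_io, using standard ceph config commands; re-enabling it is only a sketch of the trade-off described above, and whether a restart is needed depends on the release.

```python
import subprocess

def ceph(*args):
    return subprocess.run(["ceph", *args], check=True,
                          capture_output=True, text=True).stdout.strip()

# The option the merged PR turns back off by default.
print("bluefs_buffered_io =", ceph("config", "get", "osd", "bluefs_buffered_io"))

# Clusters with lots of long-running RGW traffic may want it back on
# (assumption: weigh this against the page-cache/swap issue noted above).
ceph("config", "set", "osd", "bluefs_buffered_io", "true")
```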
A
There were a couple here that were closed by the stale bot but were reopened. One by Kefu: speed up pool removal by introducing collection list prefetch. I don't know if maybe Kefu's planning on looking at that; I'm not sure. It's been there for quite a long time. And then there's also 'optimize mutex contention' by Ma Jianpeng, yeah. I'm not sure who is the right person to look at that now — maybe Igor. I could try, but I'm not sure I'm the right person to try to guarantee that it's safe.
A
I seem to recall in the past, though, that we do have a couple of specific reasons why we want multiple threads per shard, maybe related to backfill or recovery, but I don't remember for sure. Does anyone remember exactly? Josh, maybe you do. Are there situations where having that extra thread for each shard is really important?
A
I remember, like last summer, I was kind of digging in, thinking that we should maybe change how some of this works, but I never really got too much into it. It was at the same time I was looking at the OpTracker and how it's doing things, and thinking that we might want to — we do bad things in both cases. I remember trying it, but I don't remember the details very well.
A
So that's not bad — and this is for the 10-node challenge. Also, we're only running on 10 nodes with co-located clients, where that wasn't actually required by the challenge; the challenge is only 10 client nodes, with any number of storage nodes behind them. So we're actually doing not bad. I think we can do a lot better, though; it'd be really nice to see us up in, like, the top 5 next fall. So anyway, I'm doing some work on trying to look into whether there are ways that we can improve some of those scores.
D
Until Josh's mic works — well, I just wanted to follow up on our earlier discussion about the performance CI stuff, and how we can break up, you know, the tasks that would be involved in getting some of those pieces at least worked on by somebody, right? So I guess the idea was that we had at least two main things that we talked about.
D
One was about doing Jenkins integration for all the PRs, not just Crimson PRs, when it comes to performance testing. And the other piece that we talked about was doing some long-running tests using the existing teuthology/CBT framework. When it comes to further details as to what is required as next steps for each of them: I guess, when it comes to the teuthology bit, we probably need to figure out a way to tag machines in the Sepia lab which we can just say we are going to use for long-running tests.
D
I think we need David Galloway. So, I think I discussed this with Josh briefly; the idea was that we already have some existing smithi machines. We can probably separate out some smithi machines, call them something else, and immediately start using them for that kind of test. I think the second piece would be to make sure that the teuthology integration with CBT that we have is able to run on multiple nodes, and we need to figure out how many nodes would be a good start.
D
What should be the scale of those tests? And I think the third piece is: if we get the first two pieces integrated, we already have a mechanism for storing these results. Currently the teuthology archives store the performance results for, like, six months, or even a year now. So we need a way to visualize these results, and I guess the part that Sridhar was interested in working on can be crucial here.
D
We don't need the database part of it, but we need a way for us to pull those results from there and start looking at some time-series data in a human-readable, easy manner. So I have a feeling that the first two pieces are not going to be very difficult; they just need some focus on existing things that we already have. The last piece is something that we can probably discuss a little further, as to what is required to get that.
A
I'd
I'd
suggest
for
that.
That's
last
piece
since
I
agree
with
you.
The
first
two
pieces
are
just
kind
of
from
getting
the
the
you
know,
Hardware
organized
and
set
up
for
this,
but
for
that
last
piece,
I
think
we
should
try
to
figure
out
some
kind
of
local
index
for
the
directories
right,
because
the
directories
themselves
almost
serve
as
an
authority
source
of
the
benchmark
data.
You've
got
all
the
original
results.
There.
A
You've got the metadata for each test that was done. But if we can index that, then you should be able to do queries like you had said, with time-series data — you know, looking over time: how has this directory of results, that's just been added to over time — how have all of those results changed over a given query period? That's kind of what I would suggest doing. That makes sense, yeah.
D
Yeah, I think the only extra piece that I can see being useful — maybe I'm misremembering: do you know if the existing CBT results also capture the commit, the Ceph sha1, the commit that is being used to run the tests? Because that's something that will be critical. If we want it, teuthology already does that, so that's some information that we can just extract from the teuthology results, or the YAML that gets collected, yeah.
A
Yeah, it doesn't seem to me like we are capturing it now. I think I just didn't get around to adding it, because I've been copying the OSD logs into that directory whenever I run CBT. So my guess is I just never got around to it. But yeah, it should be, I mean, ridiculously easy to add, right? You just run, you know, ceph-osd --version into that directory.
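The "ridiculously easy" fix could look like the sketch below: dump the binary's version string into the run's output directory so every result set records the build that produced it. The results directory path is a hypothetical placeholder; ceph-osd --version is a real command.

```python
import pathlib, subprocess

# Hypothetical per-run output directory.
results_dir = pathlib.Path("/tmp/cbt-run-0001")
results_dir.mkdir(parents=True, exist_ok=True)

# Record which build produced these results.
version = subprocess.run(["ceph-osd", "--version"],
                         check=True, capture_output=True, text=True).stdout
(results_dir / "ceph-osd-version.txt").write_text(version)
```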
A
I think if we do that index scheme, we should think carefully about the schema for that database. But I'm guessing we're going to get it wrong, so I'm hoping that we can make that index really easy to regenerate, and we can change it whenever we need to. And then the goal would be that, you know, this thing kind of is a living index — it changes over time.
A
That can come — my thought is, that can come after, right? So we start out by just having this kind of index that we build, that's ephemeral, that we can query and do things against, and hopefully then that will teach us, you know, what we've missed and what we haven't done well in this query thing that we're building. And then, once we have that kind of figured out and can make graphs from Grafana, then we could have another piece.
A
Cool, yeah. I'm assuming that we will probably be growing the schema over time — that what we came up with initially is probably not going to be what we end up with in a year or two. So being able to recreate that and, you know, change that is probably going to be important, but I don't think it should be a problem. You should be able to do that and have it be okay.
A
So
so
I
think
my
thought
is
that
we're
probably
not
going
to
really
have
one
centralized
database
for
this
kind
of
stuff.
We
might
like
you
know
and
from
to
follow
gee.
We
certainly
will.
But
there
was
a
point
back
a
couple
years
ago,
where
Patrick
wanted
to
be
able
to
take
all
this
stuff
and
dump
it
into
like
some
kind
of
stuff.
Brag
thing
and
I've
I
know
that
the
guys
in
the
perf
lab
had
this
big
elasticsearch
thing
that
they're
dumping
a
bunch
of
like
stuff
results
into.
A
So
my
assumption
here
is
that
over
time,
we're
going
to
have
different
people
wanting
to
put
data
in
different
places
and
if
we
can
make
some
kind
of
fairly
flexible
underlying
thing
that
that
makes
it
easy
to
do.
Queries
against
that
will.
Let
us
kind
of
you
know
shove
it
into
whatever
we
want
down.
The
road
is
that
okay.
A
So I figured that kind of the authoritative source of the benchmarking data is what's in the directory, right? It's the direct output of the benchmarks that we run. But that's all very disparate, right? Each benchmark has its own format for dumping data out, and it takes forever to iterate through that directory structure and read all of these things, so we don't really want to just go directly to it. If we can have an intermediate source — SQLite, or some other embedded database — we can.
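A sketch of that intermediate-source idea: per-benchmark parsers (stubbed out here; real ones would read each benchmark's native output format) flatten the headline numbers into one queryable SQLite table. It reuses the hypothetical 'runs' table from the earlier index sketch, and the parser names and metric keys are made up for illustration.

```python
import sqlite3

def parse_radosbench(path):           # hypothetical stub
    yield ("bandwidth_mb_s", 123.4)   # would really parse the rados bench log

def parse_fio(path):                  # hypothetical stub
    yield ("read_iops", 56789.0)      # would really parse fio's JSON output

PARSERS = {"radosbench": parse_radosbench, "fio": parse_fio}

db = sqlite3.connect("results-index.db")
db.execute("CREATE TABLE IF NOT EXISTS metrics (path TEXT, metric TEXT, value REAL)")
rows = list(db.execute("SELECT path, benchmark FROM runs"))
for path, bench in rows:
    # Unknown benchmark formats simply contribute no metrics.
    for metric, value in PARSERS.get(bench, lambda p: iter(()))(path):
        db.execute("INSERT INTO metrics VALUES (?, ?, ?)", (path, metric, value))
db.commit()
```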
A
Yeah
and-
and
you
know
it's
hopefully
will
be
easy
to
regenerate
right.
You
know
we
change
the
scheme,
oh,
we
can
just
remake
it.
We
or
something
else,
as
we
add
more
data
into
CBT,
that
we
can
know
that
gets
more
complicated,
because
now
we
need
versioning
in
two
places.
Maybe,
but,
but
you
know-
maybe
maybe
that
gives
us
the
ability
to
hide
some
of
that
from
from
the
the
the
whatever
is
consuming
it
down
an
expense.
E
I think — so, we have a script, it's in the source tree, and its name is run-cbt. And we actually have an option named — classical? classical tester? What you need is just to copy another job, adjust the script's settings in ceph-build, and pass that option, and you'll be ready. It's quite straightforward, and I can even help with it if you like.
E
E
E
A
D
So,
look
if
you
add
a
simple
question
for
you,
so
when
you
said
these
results
are
not
persistent,
but
we
are
running
these
tests
against
master
and
a
particular
PR
and
then
dumping
a
results.
So
is
it?
Do
you
think
it
will
be
easy
for
us
to
integrate
a
mechanism
that
we
can
spit
out
the
results
or
like
some
sort
of
a
summary
of
results
into
the
PR
itself,
so
that
historic?
If
somebody
goes
back
and
looks
at
the
results,
they
know
why
a
test
passed
or
failed
I.
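Posting a summary back to a PR is straightforward with GitHub's issue-comments endpoint (a real API; PR comments go through the issues route). The repo default, PR number, token handling, and summary text below are placeholders.

```python
import json, os, urllib.request

def post_pr_comment(pr_number, body, repo="ceph/ceph"):
    # POST /repos/{owner}/{repo}/issues/{number}/comments
    # The token is read from the environment, never hard-coded.
    req = urllib.request.Request(
        f"https://api.github.com/repos/{repo}/issues/{pr_number}/comments",
        data=json.dumps({"body": body}).encode(),
        headers={"Authorization": f"token {os.environ['GITHUB_TOKEN']}",
                 "Accept": "application/vnd.github.v3+json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Placeholder PR number and summary text.
post_pr_comment(12345, "perf run: 4K rand read -3.2% vs master baseline")
```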
A
I think there's a couple of things, right? One is that if you're doing a comparison of a PR against master, we probably don't want to rerun the master tests every time, right? We want to be able to go back and say: we already ran on this copy of master, so don't rerun it; we've already got the results.
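A sketch of that "don't rerun master" idea: key the baseline by the merge-base commit of the PR branch and only run it when no cached result exists. The cache directory and the run_baseline stand-in are hypothetical; git merge-base is a real command.

```python
import pathlib, subprocess

BASELINES = pathlib.Path("/archive/baselines")   # hypothetical cache location

def run_baseline(sha, outdir):
    # Hypothetical stand-in: would check out `sha`, build, run CBT,
    # and write the results into `outdir`.
    outdir.mkdir(parents=True, exist_ok=True)

def baseline_for(pr_branch):
    # The master commit this PR should be compared against.
    sha = subprocess.run(["git", "merge-base", "origin/master", pr_branch],
                         check=True, capture_output=True, text=True).stdout.strip()
    cached = BASELINES / sha
    if not cached.exists():
        run_baseline(sha, cached)    # only rerun when no cached result exists
    return cached
```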
A
Considering what machines we have, how much test time do we need versus how much will we be consuming? Or, like, if we're running these only against PRs tagged with 'performance' — there aren't tons and tons of those. That's true. So maybe we wouldn't even be at capacity with the current four machines, even if we're rebuilding every time.
A
But there's — I think there's value here, right, where you have kind of a historical record of single-OSD tests from Jenkins that are compared against these PRs that are coming in, but then you also have this record from teuthology, where you have bigger test runs across more OSDs that you can't easily do with the Jenkins stuff. Okay, the way I'm seeing it is: Jenkins —
A
You've
got
like
big
Louis
detest
that
you're
running
really
quickly,
really
often
comparing
against
PRS,
and
then
you've
got
ten
of
these
more
elaborate,
bigger
tests
from
tooth
ology
that
are
banning
multiple
nodes,
and
you
know,
maybe
are
you
know
more
long-running
and
more
kind
of
extensive
than
the
Dickens
test?
Is
that
yeah
yeah.
D
I think that's the ideal case; that's what we want to get. But I'm just trying to say: if we don't want to, like, optimize for storing or persisting results for the Jenkins bit, we can still use our teuthology framework to do the same, and have, like, single-node radosbench results, and still persist them and look at them over time.
G
I see how it works — perhaps the idea is that something like Jenkins listens to a new PR: like, if you ask 'jenkins test this' or 'jenkins please test this', it triggers the operations. Similarly, maybe we can use the event-handling function for just posting the results from, I think, teuthology or CBT, right?
D
I was just thinking, maybe we can use — like, we don't have to spend the whole hour discussing this every week, but we can spend, like, 10 minutes just discussing what progress, or no progress if there isn't any, on a weekly basis in this meeting, regarding this whole project. If anybody is getting stuck, or anybody doesn't have the bandwidth, maybe somebody else can jump in and help. That's the idea.