From YouTube: Ceph Performance Meeting 2021-11-04
A: Hey folks, sorry I'm a little late. As always, the call is growing bigger and bigger, and it's taking longer to get through everybody. I suspect it'll be a little bit before we get those people, so I think maybe we'll just start in, and we'll get them when we get them. All right. So, a couple of new PRs this week. Let's see: there is a redo of Adam's BlueFS fine-grained locking PR. In case Casey or Josh don't know, the previous one was breaking, and so we reverted it after trying over a weekend to get it to work. But this is now a re-attempt at introducing it. I think it was actually kind of a decent win, if I remember right, but it's pretty complicated; the locking in BlueFS is not super straightforward. So anyway, there's that. Casey, I saw that you've got a new PR for optimizing this request timeout issue that we had.
B: Yeah, I was kind of just throwing a few things at the wall to see what helps, and it looks like they help a lot, so I'm hoping to narrow it down a little bit more. There's a custom allocator in there, and I would rather not depend on that if it doesn't help that much. But otherwise, I think most of the performance regression is gone.
A: Nice. I threw a comment in there for Mark; I don't know if he looked at CPU usage when he tested, but that was one of the big things I saw, from what I remember when I was looking at all this stuff a while back: the CPU usage of radosgw had increased dramatically. So it'd be interesting to see whether, beyond performance, there's also a corresponding efficiency gain.
B: Yeah, good question. I had asked for one more run, but maybe redoing the existing ones with some CPU metrics would be a good idea.
A: Sure. If not, I can at some point possibly try to rerun some of the tests I was doing earlier; CBT will grab all that anyway, so it's no more than just running a test. So yeah, if Mark doesn't have any of it, then let me know; I might be able to squeeze a quick run in.
C: Hello, just a quick note on extracting the perf metrics: the IOPS side of it we actually have.
A: Radek, if I remember right, that's only for the OSD right now, not for RGW, right?

C: That's correct! So we'd need to add that, but yeah.
A: Yeah, yeah. It's been a little bit, but I think I had made some changes, or at least I was thinking about it (I don't know if I did it or not), so that the CBT monitoring class moves towards being able to profile arbitrary commands or arbitrary daemons, instead of just profiling the OSD.
A: You can get it with collectl, at least right now. It's not as easy or as straightforward as the perf report, but you can at least get CPU usage metrics out of the collectl-recorded data, so it's still possible to get it; it's just not quite as convenient.
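(For context: collectl writes its samples to per-host data files, and CBT keeps those alongside the benchmark results, so pulling a CPU figure back out is a few lines of scripting. A minimal sketch, assuming collectl's plottable (-P) output with a "#Date Time ..." header row; the exact column names, like "[CPU]User%", vary by collectl version and subsystem flags, so check the header of your own capture.)

```python
import gzip
import sys

def average_column(path: str, column: str) -> float:
    """Average one column of a collectl plot file (optionally gzipped)."""
    opener = gzip.open if path.endswith(".gz") else open
    header, values = None, []
    with opener(path, "rt") as f:
        for line in f:
            if line.startswith("#Date"):            # header row names the columns
                header = line.lstrip("#").split()
                continue
            if line.startswith("#") or header is None:
                continue                             # skip comments / preamble
            values.append(float(line.split()[header.index(column)]))
    return sum(values) / len(values) if values else 0.0

if __name__ == "__main__":
    # e.g.: python collectl_cpu.py node1-20211104.cpu.gz '[CPU]User%'
    print(average_column(sys.argv[1], sys.argv[2]))
```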
A: Okay, sounds good, Casey. Yeah, I'll just see if Mark has anything, but if not, I can probably squeeze in a quick run and provide that. Okay, moving on.
B: Just a quick comment: if you do some benchmarking, there's one logging commit in there that I would prefer you leave out for benchmarking. Yes, if you read the PR description, it mentions that too.
A: Okay, I'll take a look and try to remember to do that. Oh man, too many things to keep track of. Let's see... "when actually testing, don't include the logging commit." Okay, I've added it to my random list of things to try to remind myself. Okay, moving on: async messenger. Okay, we got through both of the... closed PRs; sorry, new PRs. Now, the closed PR that I've got for this week: "async messenger: support disabling data CRC for protocol version two." That one had escaped my tracking until, Radek, I think you added the performance tag last week; and now it has merged, it looks like.
C: I don't think so; I think it's purely about bringing back the feature we had, and still have, in protocol v1. We had just missed it for v2. It's reasonable in some arrangements, in some configurations that are focused solely on performance, so it's good to have.
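(For reference: the knob being discussed is the messenger's data-CRC option; in recent Ceph releases the option names are ms_crc_data and ms_crc_header. A minimal sketch of flipping data CRCs off for a performance experiment; the "global" scope is illustrative, and this is for controlled perf runs only.)

```python
import subprocess

def set_cluster_opt(name: str, value: str) -> None:
    """Push an option into the cluster config database via the ceph CLI."""
    subprocess.run(["ceph", "config", "set", "global", name, value], check=True)

# Skip payload CRCs on the wire; only sensible for controlled perf testing.
set_cluster_opt("ms_crc_data", "false")
# Header CRCs (the sibling option ms_crc_header) are cheap; leave them on.
```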
A: Yep, sounds good. All right, a couple of updated PRs. Let's see: "set bluestore min_alloc_size to optimal I/O size." That has had a little bit more discussion; I don't remember what the latest is on that, but you can look and see if you're interested in it.
A: Igor's PR to make the shared blob fsck process use less memory, which we talked about a little bit last week, looks really good. There's not been a whole lot of discussion on the PR; in fact, I think it now needs a rebase. But just based on what we talked about last week, it looks excellent: a significant reduction in memory usage during fsck. So that looks excellent. Next: "dbstore: add configuration to set the SQLite performance tuning optimizations."
A: I think there were some updates to that, and there was some active, ongoing discussion; sounds like it's just actively being worked on. I think people would very much like to see where that goes, and what performance differences we see versus, you know, a standard cluster. Next: "optimize object memory allocations using pools." So this is an old PR; well, it's old now. I think it was sometime in the summer that it was first made.
A: There has been some discussion in the core standup about this. Ronen has kind of looked at it most recently and had a bunch of questions about it, very legitimate questions, regarding how we allocate memory, where we allocate memory from, and what strategy makes the most sense. So, given this is kind of old, I don't know if we'll see a lot of further discussion on this, but the overall topic definitely deserves a lot of discussion, especially, I think, in the context of Crimson and what we do there; but even for classic it's still a good discussion to have. So anyway, that's been updated with a really good set of questions from Ronen.
A: Otherwise, I'm not sure that anyone on the CephFS team has really gotten a chance to look at that. I know right now they have at least looked a little bit at the journal optimization PR that is kind of, sort of, tangentially related to this. So anyway, that is still being actively updated, which is really good; hopefully people have time to really dig into it and see if it's worth trying to merge. All right. Otherwise, lots of stuff in the no-movement category. I made it about halfway through before I ran out of time this morning, but I suspect that there wasn't a whole lot else on this list that made it into the updated category or the closed category, so I'll leave it to next week.
A: All right. Well then, Neha, you had brought up that you wanted to talk about performance CI work this week; so if you want, I'll turn it over to you.
D: Yeah. I guess we've talked about performance CI in previous meetings, but I wanted to revamp some of the discussion. We also have Radek here; we have Chris, who's joined us recently and has some performance testing background; so I thought it'd be a good discussion to have in general. Yesterday I was talking to Sam as well, so Radek, you can correct me if I'm wrong.
D: So currently, all the Crimson PRs are doing a performance check, right? But it's a very coarse check, is what I heard from Sam. Like, you know... yes? Yeah.
C: Let me provide a link; let's start with that, maybe. Okay.
C: Well, I'm afraid video will be a problem. Sorry, guys, for not turning on the video; I tried during the core standup and it was a problem. I mean, I'm in a place with a very bad connection.
C: I already have the tabs opened in my Firefox, so this shouldn't take us so much time. Okay, maybe let's start with what is actually being tested. That's a link to an example of a result set, the most interesting one. Basically, the main thing that we care about in Crimson is the measure of the so-called computational efficiency, which is basically...
C: It's basically the number of cycles burned per I/O; here it's the second row, CPU cycles per op. And we are testing, well, two things, actually: one is the freshest master branch; the second one is the PR we're comparing. And moreover, I think it could be extended so that it adds a classic flavor, the classic OSD flavor, that would take care of the classical OSD as well. The question is, of course, about stability and availability of nodes for the per-patch check; and also it's not so commonly used, so I would expect a bunch of problems. Basically, it needs some polishing.
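(As a concrete illustration of the metric Radek describes: cycles-per-op is just total CPU cycles divided by completed ops, compared between the master baseline and the PR run. A toy sketch; the numbers and the 5% tolerance are made up here, not what the Jenkins job actually uses.)

```python
def cycles_per_op(cpu_cycles: int, ops: int) -> float:
    """Computational efficiency: CPU cycles burned per completed I/O."""
    return cpu_cycles / ops

def regressed(baseline: float, candidate: float, tolerance: float = 0.05) -> bool:
    """Flag the PR if it burns noticeably more cycles per op than master."""
    return candidate > baseline * (1.0 + tolerance)

# Toy numbers: master burns 2.0e9 cycles for 1e6 ops (2000 cycles/op);
# the PR burns 2.3e9 for the same op count (2300 cycles/op) -> regression.
base = cycles_per_op(2_000_000_000, 1_000_000)
cand = cycles_per_op(2_300_000_000, 1_000_000)
print(base, cand, regressed(base, cand))  # 2000.0 2300.0 True
```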
D: Okay. So, if we summarize the current status: the Crimson PRs are running through the performance check; there are hooks for this to glue it up to the classic world, but because of, you know, questions like availability of nodes, etc., we haven't enabled that.
C: I think so, yep. I went through a bunch of performance-labeled PRs and I was unable to find the results from Jenkins, which makes me think that we need to dismantle everything and debug what was...
A: ...broken. And to add to that: yeah, I'm not sure it's availability of nodes. We have four of the incerta nodes in that Jenkins cluster, from what I understand, and it looks like several of them have been completely idle. So I don't know why, but...
D: Okay, so yeah; but yes, okay, I mean, there are still unknowns. But what Radek has observed is that classic PRs with performance labels are still not running performance tests. So I think that's the piece we need to investigate and figure out why; that feels like a bug-fix kind of thing we need to do. But in terms of, like, what can we do next?
C: Yeah. If I recall correctly, we are testing solely with rados bench which, on the other hand... well, rados bench is supposed to be quite similar to RBD on its main hot paths, but I agree we should bring in more.
D: Okay, I'm just trying to find where this lives.
C: It's located in src/script in the main Ceph repo.
A: Yeah. It should also be very easy, once we have it working with classic, to add hsbench as an RGW test as well; that's already supported there. Exactly.
D: Exactly; that was the next thing on my mind. So I guess I'm curious, Chris: what are your thoughts about this whole thing? Or, like, does your performance testing experience align with any of this automated stuff that we were talking about?
D: Maybe you are muted? Two ways: BlueJeans and... okay, I think it dropped. But anyway, I was curious to hear his experience, also given that he's coming from outside the Ceph world, and what other mechanisms are used in other software projects.
C: Yeah, go ahead. So maybe in the meantime: there are a bunch of things to address. For instance, we are testing with CephX disabled. I think that even if the classical OSD flavor is being picked up, we are still disabling CephX, and this doesn't make a huge amount of sense; basically, it would invalidate the validity of all messenger-like PRs. Yep.
C: There are a couple of things. For instance, there is also a script that makes the comparison; it actually takes CBT's output directories for both the baseline and the target, the PR, and compares the extracted metrics. Right now, I think, only for rados bench. Making it aware about, sorry, fio would require also adding some extractors for fio, I believe, and for other benchmarks as well; but these aren't huge things, I think.
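(As a sense of scale for that work: fio can already emit machine-readable results with --output-format=json, so an fio extractor mostly means walking that JSON for IOPS and latency. A rough sketch; the field paths follow fio's JSON layout, but verify them against the fio version in use.)

```python
import json

def extract_fio_metrics(path: str) -> dict:
    """Pull per-job IOPS and mean completion latency out of an
    'fio --output-format=json' results file."""
    with open(path) as f:
        data = json.load(f)
    metrics = {}
    for job in data["jobs"]:
        for direction in ("read", "write"):
            side = job[direction]
            if side["iops"] > 0:  # skip the direction the job didn't run
                metrics[f'{job["jobname"]}/{direction}'] = {
                    "iops": side["iops"],
                    "clat_ms": side["clat_ns"]["mean"] / 1e6,
                }
    return metrics

# A comparison script would then diff, e.g.:
# extract_fio_metrics("baseline/fio.json") vs extract_fio_metrics("pr/fio.json")
```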
A: Radek, one thing I wanted to mention too is that CBT can now run Crimson directly; it can set up a cluster with Crimson across multiple nodes. So if we did want to start doing multi-node tests, we could modify run-cbt to allow that as well.
C: Makes sense. At the moment, run-cbt is basically a thin wrapper over vstart.sh, so it's single-cluster testing; and because of that, we need to be aware that a vstart-started cluster has some unusual settings, like, if I recall correctly, turning on debug logging. It's also not such a good idea to take a vanilla vstart cluster and then do performance checks. For classical... in Crimson we don't have logging, so it's not a big deal. You know, there's a bunch of things we need to polish.
A: Yeah, yeah. If we want to try that now, the only thing you need for Crimson and CBT is to specify messenger version two for, you know, the IPs for all the hosts, or all the daemons.
D: Yeah. I just decided to add this because I know you've been discussing a lot of these ideas, but we haven't put it down somewhere; I'm also trying to write down references to what we currently have. So, what I'm seeing is configuration-wise there are configurations that live in src/test/crimson/cbt, and then there are the radosbench 4K write and 4K read configs. Radek, is that all we have?
C: Here is the fragment that is responsible; the snippet is responsible for bringing up the cluster, yes. And dash capital X means no CephX.
C: Yes: no secure mode, no CRC. I'm not sure about the rest of the settings, but, well, basically the entire Crimson perf track is just an automation of the very rough testing we were doing at the initial stages of Crimson development. Basically, I was doing it on my machine; this is just moving that procedure over and automating it.
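(For readers who haven't used it: vstart.sh is the developer-cluster launcher in ceph.git, and the harness drives it with flags like the ones Radek mentions. A hedged sketch of that kind of invocation; -X really does disable CephX, but the other flags and daemon counts here are illustrative, so check vstart.sh --help in your tree.)

```python
import os
import subprocess

# Bring up a minimal dev cluster the way a perf harness might: one OSD,
# no MDS, and CephX disabled (-X) so auth cost doesn't pollute the numbers.
env = dict(os.environ, MON="1", OSD="1", MDS="0")  # daemon counts via env vars
subprocess.run(
    ["../src/vstart.sh",
     "-n",                     # create a brand-new cluster
     "-X",                     # disable CephX authentication
     "--without-dashboard"],   # leaner startup for benchmarking
    cwd="build", env=env, check=True)
```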
C: Okay, but that's a big problem, I believe, because somebody would need to judge whether that's the case or not. I mean, let's say you have a PR that messes with the async messenger, and we are testing without CephX being enabled: if somebody breaks something in, say, the auth component, we wouldn't notice.
A: I seem to remember someone telling me once that you could have different tests run based on the labels.
D: I think one simple way to address this would be to have something like a "perf basic" and an advanced one: when you think there is potential for a PR to cause a regression because of a CephX change or an auth change or whatever, you run the entire suite, which has CephX enabled; for minor PRs, where we think it's just, you know, an OSD code change and you don't need CephX, you can just run the basic version, which has CephX disabled.
C: Okay, but it still puts some extra obligation on developers: does this qualify for CephX testing or not? It's human-dependent, which makes me a little bit nervous; I would love to have everything automated.
A: If we really wanted to spend time on it, we could do some kind of pattern matching on the source code files that were modified, right, and then make suggestions.
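(A toy version of that suggestion, matching the paths a PR touches against a table of globs and suggesting a test tier, could be as small as the sketch below. The globs, label names, and tiering are all hypothetical; a real hook would live in the CI configuration.)

```python
import fnmatch
import subprocess

# Hypothetical path-glob -> suggested-label table; real rules would need tuning.
RULES = {
    "src/msg/*": "perf-full",       # messenger changes: run with CephX enabled
    "src/auth/*": "perf-full",      # auth changes obviously need CephX coverage
    "src/osd/*": "perf-basic",      # plain OSD changes: the cheap suite is fine
    "src/crimson/*": "perf-crimson",
}

def suggest_labels(base: str = "origin/master") -> set:
    """Suggest perf labels from the files a branch modifies."""
    diff = subprocess.run(["git", "diff", "--name-only", base],
                          capture_output=True, text=True, check=True).stdout
    return {label
            for path in diff.splitlines()
            for pattern, label in RULES.items()
            if fnmatch.fnmatch(path, pattern)}

print(suggest_labels())
```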
A: Radek, this is something that I dealt with years ago, that question, and I don't know how bad it is now. But at least, many years ago, the CephX overhead was high enough that it was making it much more difficult to tease out small changes in performance in other subsystems.
C: I see; if you are testing a micro-optimization that is worth, let's say, one or two percent, you want to minimize the overall number of cycles to make the delta visible. Maybe, I don't know... for 4M, for large chunks, for sure, no doubt about it. But somebody quite recently was profiling an OSD looking for the overhead of CRC, and he tried both 4K and 4M, initially starting with 4M; there was a huge amount of cycles consumed by the CRC calculation, but at 4K it was just invisible.
D: Yeah, we can profile... I think that goes deeper than we want to go at this point, right? We first need some science before talking about, you know... I agree; at some level I absolutely agree with you, Radek: until and unless there's a real problem with CephX, there's no reason for us to disable it. And what Mark is talking about was six, seven years ago; maybe we have done something better in CephX since.
C: Well, but it is enabled in default deployments, moreover, no?
A: No, no; we've had it enabled for deployments by default forever. But for, like, regression or performance testing, I'd be very cautious about enabling it by default until we can prove that it's not going to make a dramatic impact on looking at these things.
D: Yeah, yes, I understand your point, Mark. I guess when we have those, it's a matter of adding an extra configuration, right? So we can test with both, and we can figure out how bad it is. But coming back to the basic question: I guess Radek's concern about non-default configuration, and things that we have disabled, is genuine. I think we should know about these, so I'm trying to write those down.
D: CephX is one; but in terms of other things, the main thing that strikes me is that we are still doing single-node tests, right? We are not doing multiple-node tests.
D: Yes; again, yeah, exactly, that's a very narrow case. Again, like, which PR actually needs to go through a multi-node test?
H: Yeah, and that brings up a good point. I think it depends on whether you're looking at multi-node client tests or not. But some things I've done, which are very much in the past, and they've been manual, not part of any type of pipeline: Yahoo has YCSB, which is more legacy now, for S3 testing.
H: But then also we can do things like, through various client nodes, have some sort of... I don't know of any current frameworks, but test scaling up and down various sizes of clients, if we identify certain types of tests that are needed. I've had success with that in the past, through various S3-type object store rudimentary things that we've created at previous positions, but it just depends on the use case at this point.
A: Yeah; with YCSB, did you guys do anything that changed the defaults? I think they used a Zipfian distribution, right, for their I/O in the default test?
H: Right, yeah; we just did basic... not basic, default options for a lot of things.
D: Yes. So, Chris, just to give you some background: we already have something called the Ceph Benchmarking Tool (CBT) that Mark has written, and it has a bunch of workloads that you can run standalone against standalone clusters. What we are trying to do here is automate some of this in our Jenkins pipeline, so that every PR that comes in and needs a performance check... so we have labels in GitHub: any PR with a performance label would run this bunch of tests, and it would give us a status, whether it's a pass or a fail.
D: Currently, what we have is the some basic tests we were just talking about; they're also listed in the pad there. It's a rados bench test, which is essentially, you know, a Ceph construct: rados bench is a benchmark that just creates a bunch of RADOS objects and does basic I/O.
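(For background: rados bench ships with Ceph and is about the quickest way to put object I/O on a pool. A typical write-then-read pair looks like the sketch below; the pool name and durations are arbitrary, and --no-cleanup keeps the written objects around so the read pass has something to read.)

```python
import subprocess

POOL = "bench"  # arbitrary test pool; create it first with 'ceph osd pool create bench'

def rados_bench(*args: str) -> None:
    subprocess.run(["rados", "bench", "-p", POOL, *args], check=True)

rados_bench("30", "write", "--no-cleanup")  # 30s of 4 MiB object writes
rados_bench("30", "seq")                    # 30s sequential reads of those objects
subprocess.run(["rados", "-p", POOL, "cleanup"], check=True)  # drop bench objects
```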
A: You know, vstart is focused on starting up a Ceph cluster quickly. CBT is kind of like: multi-node, run benchmarks quickly, potentially set up a Ceph cluster, but it can also use existing clusters, that kind of thing. And then teuthology is, like, you know, the kitchen sink, the multi-tool, everything.
A: You know, it's hard to know which tool should necessarily do which things. Should we have teuthology setting up clusters? Do we have CBT stand up clusters? We have vstart. Should we be using Jenkins for CI? Should we be using teuthology for CI? You know, there are all these overlapping capabilities.
D: So there's a lot of manual work that goes into finding out that there was a performance regression that was caused, and you have been part of a lot of those, right? Like going and doing so many bisects, and figuring out which one was the bad commit, and how far back do we have to go. So anything that we can integrate in our Jenkins pipeline to be able to save that effort, I'm okay with that; and given that this Jenkins-based pipeline is already there...
D: I feel that it's a matter of just extending it. I mean, I know that there will still be regressions that are corner cases, etc., but there are straight-out things that are just bad, and we still merged them in the past. Those kinds of things we want to capture. And the second part that I think about is a performance dashboard; that is something we don't have.
D: People have to manually run tests to be able to create those kinds of graphs and, you know, look at how we did in the last month versus how we are now. And for that, I feel the teuthology tests that we are already running can be used; it's a matter of just pulling out the results that are there and creating a dashboard of some sort. So there are two separate ideas in my mind. Yeah.
A: For the second one: is anyone still actively developing Pulpito, or is that kind of... yes?
D: I mean, yeah, I think Junior has done some work on Pulpito. I think Zack is back contributing and, like, reviewing PRs, etc. This teuthology meeting has been added to the calendar; it's, like, two weekly meetings that happen. Pulpito is part of that, and I think some of the related tooling as well, a little bit. So I guess there are folks who know more about Pulpito and that stuff than before.
D: Yes, so, yeah, exactly; that piece I feel we can still talk about in the teuthology-slash-CI discussion. But today, what I wanted to talk about was the first part: you know, we already have this framework; how can we make it more effective and extend it further for classic as well?
A: Yeah, I think you're right. I think the first thing is just, you know: we've got this all more or less there, it looks like; maybe just have it run different CBT tests. I mean, I don't think it really involves much beyond just using a different YAML configuration file and then figuring out how to parse the results.
D: Yep; and also extending run-cbt for fio, right? That is one piece I feel will be needed, but I don't expect it to be too complex if you know how to just run fio against a standalone Ceph cluster; it's the extra dependencies and hooks that you need.
A: I suppose the other thing is that you'd have to make sure that fio was actually on the nodes; I'm sure whoever sets them up can do that, but yeah, that should be fine. One thing: do we need to trigger a new compilation of fio for every single commit? Like, if we expect that we might be changing something regarding librbd, maybe we need to recompile fio.
C: This confused me a little bit: do we link librbd with fio statically?
F: Let me check one of my notes and I'll find out... okay. Well, it might be fine; it's probably dynamic. It's probably best if it's dynamic. It's got to be, right, by default, I think? Well, I would expect so.
A: Chris, do you have much experience with the IO500?
A: Yeah, we've talked to... go ahead.
H: Oh, so that could be another avenue that we look down as well; we could look at expanding that. But then also: what part of the test would make sense for object stores with IO500?
A: Yeah, there are limitations in terms of the object interface. Casey is gone now, I think, but we've actually had talks with the IO500 folks, or the IOR folks, I guess; they overlap. But about doing some of this: honestly, we just didn't have time. I wrote an aiori backend for libcephfs back in... I think around ISC20, maybe. Well, in any event, yeah.
A: Definitely there's lots of stuff that we could do here, and we can also take some of the stuff that we've done for the IO500 and get it into CBT, and make it so we can run IOR and mdtest as benchmarks against existing Ceph clusters. All the pieces are there; we've just never put them all together. But then also extending it out and doing object-based tests using the S3 backend could be another really interesting avenue. So there's tons of stuff.
A: Oh, awesome. Okay, cool, cool. Yeah, I was kind of involved for a little while, and then I got sucked back into many other things, and then I've been absent for, like, a year or so. Yeah.
A: A long, long time ago I was on the OpenSFS benchmarking working group for, like, a year or two, and then that went by the wayside too. But definitely, having more people with background in that area to work on this stuff would be super, super nice.
D: Okay, I think this is a good start. I'll try to add some stuff to the pad; other folks as well, if you have some ideas. Basically, what we want to do is understand where the current problems are. I think the one thing we want to debug first is: why are the classic PRs with performance labels not running performance tests? That, I think, is the first thing to figure out; and in parallel, we can also start investigating how to extend these workloads and add fio support.
A: Is there any coverage that you feel like we're really lacking, in terms of things that people are kind of clamoring for, or things that we're not testing well right now? I mean, I know we can do better in many, many areas, right, but is there anything we're really struggling to get coverage on?
D: Yeah, yeah. I mean, I think the main thing I feel is the fact that these periodic performance numbers are not there for somebody to look at. If we had to evaluate what kind of improvements or regressions there were in the last six months in, let us say, object workloads, we don't know that answer, right? So even having some basic numbers that we can look at; and if we find something fishy, we go and do a broad set of experiments, etc.
D: That, in my mind, is the biggest gap we currently have: somebody spending those cycles, or somebody creating some charts or presentations, etc. But I guess, in general, what I want is for the basic tests to be done automatically. Yeah.
A: Like... well, the one that Casey is fixing right now, the PR we just talked about, actually, before a lot of people arrived: the request timeout optimization stuff. We bisected, or I bisected, I guess, and found it, you know, and now it's being fixed. But had we known right away, we might not have merged some of those PRs in the first place.
D: Yep, and this is the kind of stuff I'm talking about. And remember the other one, which you and Mark Kogan were trying to chase down? It turned out to be a really innocuous-looking change which ended up causing a huge regression in master.
A: Yeah, yeah, that was another one. It wasn't a big change, right? It was a stupid... it was a small change, but it was one that had a big impact, right? So.
D: Yeah, but I guess, you know, the other thing that comes to mind is that, if we take that change as an example, we wouldn't have put a performance label on it, right? But it would have caused the performance regression. So I guess the next-level question is, you know...
D: Yeah; the touched-files part we already have: based on file names and directories being touched by a change, we already have automatic labels, so that part is there. About regular-expression matching I am not sure, but, you know, these GitHub hooks and things are pretty neat, so there must be some way to do that as well.
A: Yeah, yeah. You'd imagine that if we're changing anything about the way the builds work, with debugging or optimization flags or anything, there's probably a whole lot of pattern matching you can do there, so that it just applies the performance label to the PR and it just runs through the stuff. Yep.
D: Okay. I think once we have our baby steps figured out, we can talk about further improvements and additions; but I guess what we have currently in the Etherpad looks like a good start.
A: All right, well then; Neha, I believe next week we have something on the agenda, and I confess I have totally blanked on it.
D: Josh Salomon is already here; we'll also get another Ceph user, who has written a wrapper on the balancer. So I'm curious why, and what it does, etc. So people can mark their calendars.
A: Sounds good. So yeah, next week, then, we may talk about Crimson a little bit; I've got a lot of data.
A: All right, then. So maybe Crimson next week, and the week after that, definitely, we'll do the balancer. So, all right: thanks for coming, everybody; have a great rest of your week, and we'll see you guys next week.