From YouTube: CDS Pacific: Performance
A: Do you want to? Okay, I'll just get us started then. All right — so, as opposed to some of the other CDS sessions, there aren't really a lot of huge projects here. The big one is probably the performance CI work, but we can talk about that a little bit more later. What I kind of wanted to focus on starting out is —
A: So, sorry — on the BlueStore side, we've got a couple of specific areas that folks have been working on. One of the big ones — and this has become a big project, though it mostly took place during the Octopus time frame — is Adam's column family sharding work. I think that has taken a little bit longer than we were hoping initially, so I don't believe it has merged yet. I think there were some errors even fairly recently, but it's actively being worked on.
A: I imagine we may actually backport that to Octopus, but for Pacific, I imagine there's going to be a lot of testing of it, and maybe some fine-tuning. The big advantage is that it's going to give us much better write amplification, and potentially better space amplification, in RocksDB.
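The write-amplification intuition behind sharding can be illustrated with a toy model (entirely my own sketch, not RocksDB's actual compaction logic): when hot and cold keys share one column family, every compaction keeps rewriting the cold data alongside the hot updates, whereas splitting them into separate shards leaves the cold shard untouched.

```python
# Toy model of why sharding the keyspace into column families helps
# write amplification. Each "shard" that receives hot writes is fully
# compacted every round, rewriting its cold data too.

def bytes_rewritten(shards, rounds):
    """shards: list of (hot_bytes, cold_bytes) per column family.
    Returns total bytes rewritten by compaction over `rounds` rounds."""
    total = 0
    for hot, cold in shards:
        if hot:  # only shards receiving writes get compacted
            total += rounds * (hot + cold)
    return total

mixed   = bytes_rewritten([(10, 1000)], rounds=5)          # one big CF
sharded = bytes_rewritten([(10, 0), (0, 1000)], rounds=5)  # hot/cold split
# mixed rewrites the 1000 cold bytes five times; sharded never touches them
```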
So that's pretty exciting. When that PR merges, I've got a couple of PRs that are waiting for it to get in.
The first one addresses the double caching problem. That will let us avoid some of the complication we've had in the past, and still have currently, where onodes in BlueStore end up being read from disk rather than from cache: we end up populating RocksDB's block cache with the same data, which pollutes both caches, and we end up with less effective cache overall. That fix can happen after Adam's column family sharding work. After that, we can go back and try to get the cache age-based binning in.
A: That's been hanging on for a while too, but the gist of it is that it lets us look at different caches, and the relative age of the things in those caches, and then make decisions about how much memory each cache should get based on the age of the things in each one. So, rather than having one giant LRU for everything, this lets us have separate caches.
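A rough sketch of the age-based idea (my own toy model, not the actual BlueStore code): pool the total memory budget across all the caches by keeping the globally youngest items, and the per-cache allocation falls out of that single age ordering.

```python
# Toy age-based cache sizing: every cache ends up evicting at roughly
# the same age cutoff, instead of each cache getting a fixed share.

def balance_by_age(caches, budget):
    """caches: dict name -> list of item ages (e.g. seconds since use).
    budget: total number of items we can afford to keep.
    Returns dict name -> items that cache should keep."""
    # Tag every item with its owning cache and sort youngest-first.
    tagged = sorted(
        (age, name) for name, ages in caches.items() for age in ages
    )
    keep = {name: 0 for name in caches}
    for age, name in tagged[:budget]:
        keep[name] += 1
    return keep

# A cache full of recently used onodes wins most of the budget over a
# cache whose entries have gone cold:
alloc = balance_by_age(
    {"onode": [1, 2, 3, 4], "rocksdb_block": [50, 60, 70, 80]},
    budget=5,
)
```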
One of those is the onode data structure diet — basically shrinking the size of the data structures for onodes. I think that was something like a 10 or 20 percent improvement, which is a big win, and this is stuff that we really need in general, because we have a lot of people wanting to use less memory, or to get better performance within a specific memory footprint. So that's going to be really important. And then there's also Igor's work on a hybrid allocator and on deferring big writes.
When combined, those let us shrink the min_alloc_size on hard drives to 4K, and that's a huge space amplification improvement versus what we currently have. In some cases it was actually faster, too, which is really nice. It is slower in the sequential-read-after-fragmentation case, and we'll probably need to watch that a little bit, but overall it's a win.
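For reference, a hedged sketch of how that setting would be applied with the existing BlueStore option (note that min_alloc_size is persisted when an OSD is created, so it only affects newly deployed OSDs):

```shell
# Shrink BlueStore's minimum allocation size on HDDs to 4K.
# Only OSDs created (mkfs'd) after this change pick up the new value.
ceph config set osd bluestore_min_alloc_size_hdd 4096
```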
B: I think you presented my stuff pretty well — nothing to add. OK.
A: So all that stuff together, I think, is going to be a huge improvement for BlueStore. I think that really shores up most of the things that we can easily shore up, without going into things like rewriting the PG log code, or changing other, more dramatic aspects of how BlueStore works — and most of that's probably not a great use of our time.
A: Given all the work going into Crimson and SeaStore, though, I think this is probably a good set of performance improvements in BlueStore to target for Pacific. The only other thing on here that we could do on the BlueStore side would be testing the io_uring code — that has actually already merged — and considering making it the default for kernels that support it.
A: There's some cleanup in there that we could do. Right now the io_uring code is reusing the AIO structures in BlueStore; it's probably more or less fine the way it is, just maybe a little bit messy. But if anyone is interested in that work — testing it, or looking at it — that would be something worthwhile to do, potentially for Pacific.
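For anyone who wants to experiment, a sketch of opting in (assuming the bdev_ioring option; the kernel must support io_uring, and OSDs need a restart to reopen their block devices):

```shell
# Switch the OSD block device layer from libaio to io_uring.
ceph config set osd bdev_ioring true
# Then restart the OSDs so the bdev layer reopens with io_uring.
```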
A: Cool. I can try to help out on the testing side if folks think that would be helpful. But if he's interested in continuing to work on it — maybe making the io_uring code a little bit more of a first-class citizen — yeah, that could be a good use of time, I think, assuming it's working well.
A: Okay — so, PG log, PG scaling, PG balancer changes from Octopus. This is stuff that Sage was working on, and we did already improve it somewhat, or work on some things to make it a little better. I think that will need to be ongoing — just making sure that we're not introducing bad behavior in any situations where we previously weren't doing bad things.
C: Yes, we have a few things already planned for the PG balancer improvements. I want to simply turn it on by default, in upmap mode — and upmap mode requires Luminous clients, so that also means setting the min-compat-client setting to require Luminous for the cluster. For new clusters that's basically what we're targeting for compatibility in Octopus anyway, so it's not really an issue.
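The steps described here map roughly onto the standard balancer commands:

```shell
# upmap requires all clients to speak at least Luminous.
ceph osd set-require-min-compat-client luminous
ceph balancer mode upmap
ceph balancer on
ceph balancer status   # check what it is doing
```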
C: In terms of other improvements, there are also some additional things that the balancer doesn't consider today that it could, to provide better performance or better space utilization. One is that the upmap mode doesn't balance based on actual space used — it works based on the number of PGs — so if you have unevenly used PGs, you won't get things well balanced. It also doesn't consider the space used by omap.
A: I can see that being really tricky, because if on one OSD you've got some giant object — you know, someone put something there — and on another OSD you've got tons of tiny little objects, do you really want to balance in a way that shoves all the tiny objects onto one and the big object onto the other? Yeah.
C
Yeah
I
think
you
can't
come
up
with
a
global
solution.
It's
gonna
work
working
at
every
case.
It's
kind
of
there's
too
many
dimensions
is
optimized
for
work
for
yeah.
C
Yeah
another
consideration
it
would
be
for
performance,
so
you
might
consider
like
further
in
the
future,
trying
to
actually
analyze
like
what
kind
of
performance
are
getting
across.
Different
I
was
DS
and
trying
to
move
some
load
from
one
space
to
another
phase
which
might
be
even
orthogonal
to
basically
is
entirely
yeah.
A: Cool — that's neat; I'm glad that's working. I'm trying to think if there's anything else interesting you could do with it. I mean, like what you were saying, Josh, in terms of distributing load: if a particular OSD is super backed up, maybe it could just be "don't read from that one — read from somewhere else," right?
C: We didn't find it was necessary to increase it. It's just a matter of getting the autoscaler... if we can change the autoscaler to make it not decrease things, I think that'll be sufficient.
C: We discussed it in the past, right — the bigger idea with the autoscaler. Currently it tries to increase things slowly over time, but if we could instead start out high, and then refrain from lowering unless something needs that extra room to grow, up to a certain point...
A: This is something that we were looking at for Octopus, but it's just a little hairy, so it didn't make it — but we did do a whole bunch of other stuff, and Eric has been working on a bunch of stuff surrounding this. This particular optimization is looking at avoiding re-encoding directory entries in the bucket listing code for RGW, on the OSD side. Right now we basically decode each directory entry, and then we use the data we get from that to make filtering decisions about what to send back to RGW.
A: We might pre-filter right there in the OSD. That's really nice, because it lets us avoid sending a bunch of stuff to RGW that it doesn't want. But the downside is that it means we're doing all this work inside the OSD: decoding the entry, doing the filtering, and then potentially re-encoding it and sending it off. The goal here would be to avoid that.
A: There's a PR here that already — at least, it used to work — partially did some of this, but it didn't go nearly as far as what Casey describes. I think it's still doing the decode; the PR as written is doing the decode still, so it can make the filtering decisions, but then, instead of re-encoding, it just sends the original encoding on — if I remember right, that's what it does. But anyway, that's the overall gist of this. Casey, what do you think — is this something worth thinking about for Pacific?
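The shape of the optimization can be sketched like this (the names and the JSON stand-in encoding are mine, not the actual cls code): decode only to make the filtering decision, then pass the original encoded buffer through instead of re-encoding the decoded struct.

```python
import json

def encode(entry):           # stand-in for Ceph's real encoding
    return json.dumps(entry).encode()

def decode(buf):
    return json.loads(buf.decode())

def list_filtered(encoded_entries, prefix):
    """Return the ORIGINAL encoded buffers whose name matches prefix."""
    out = []
    for buf in encoded_entries:
        entry = decode(buf)               # decode: needed to filter
        if entry["name"].startswith(prefix):
            out.append(buf)               # no re-encode: buffer passes through
    return out

store = [encode({"name": n, "size": i})
         for i, n in enumerate(["a/1", "a/2", "b/1"])]
matches = list_filtered(store, "a/")     # buffers for "a/1" and "a/2"
```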
A: All right — the other thing for RGW I had here, and I imagine this is probably not happening for Pacific, but I remember you had your proposal for avoiding write pauses during bucket index splitting, and it sounds like it got more complicated, if I remember right. Am I remembering properly? Was it a scary job, or is it still reasonable?
H: Yeah, it's supposedly ready for review, so I'm going to try and look at it soon and get it in, and then mark it for backport once it's gone through testing for a little while. Sure.
A: Okay — continue testing of that and see how it's going. I have been looking at the MDS and its dynamic subtree partitioning scheme in CephFS — what our behavior is like when, say, you've got a giant directory full of files and the MDS is trying to export stuff to other MDSes, and you've got this big dance going on between the MDSes trying to export and import stuff.
A: If you've only got, like, two, well then maybe you still want to do that — but if you've got 16 or 32 or whatever, you may want to have that one just not do anything. So that might be something to look at, certainly in the round-robin case with directories, as I see it. There are a couple of other things I'm going to try playing with there. It's all very experimental — no definite plans for anything yet — but there might be some things we can do that would help a little.
A: Kernel client — the RBD kernel client. Ilya and I have been working on trying to diagnose an issue that cropped up under kind of strange conditions: when we have 25-gigabit links — multiple 25 or 50 gigabit links — we see this bottleneck where, with enough IO depth, we can't get past about 3 gigabytes per second for sequential reads on a single client, whereas with librbd (and seemingly with libcephfs) we can do significantly better, more like 8. So I would like to figure out what's going on.
A: Hopefully we can get a fix in for Pacific. It's probably not noticeable for a lot of people that have gigabit or slower links per client, but as faster networking cards become available and people start using them, I think we're going to start seeing this crop up as a complaint. So if we can take care of it for the next release, I think it would be a really good idea.
A: Unfortunately, we still don't really understand why it's happening, and neither Ilya nor I have been able to get a setup to do a test that way yet — but that's still probably the next step, I think. All right — so I think that covers basically all of the incremental-project-type stuff, the smaller projects that I would personally like to target for Pacific.
A: Well then, the other big thing here that Josh wanted to make sure we had on this list is performance CI. This has been a long-standing request: that we be able to take PRs and run performance suites against them, to make sure we're not introducing regressions. For me personally it would be really nice too, because then, hopefully, I don't have to do as many performance bisects anymore. So — let's see — Josh, did you want to talk at all about this?
C: Yeah, I thought we could maybe think about what the minimum first steps would be, and then also talk a little bit about what we could do in the longer term. Some of the existing pieces are pretty much already in place in terms of CBT support for running in Jenkins — there was a lot of work to get that going for Crimson — and you can essentially just make a comment that tells Jenkins to go run a performance job, and it reports the results back.
C
It's
also
been
at
adding
some
support
for
more
metrics
there.
So
we
can
see
this
cycles
prop,
rather
than
just
appear
with
your
foot
off
your
IEPs
in
action,
tell
how
efficient
we're
being
and
if
we're
you
know,
just
only
increasing
performance
by
using
more
CPU.
That's,
maybe
not
the
best
use
of
resources.
I: Yep — you would need to install the perf tools. What would be even better is to extend the coverage of the perf probes from just the rados bench regression to all the plugins, especially to RBD. That, I think, would be interesting.
C: Exactly — I think that would be a great first step: just running the same benchmarks, and then being able to tell whether a given PR is getting the same kind of throughput we saw previously, or the same kind of CPU efficiency we saw before.
C: That's where it gets more complicated. I think for that we probably want to keep the Jenkins jobs relatively simple, and maybe rely on — like what we talked about last week — teuthology runs, via the CBT integration in teuthology. That way we can have longer-running tests there, and maybe expanded coverage to more configurations or more workloads.
A: So yeah, my very long-standing goal has been to figure out how we could take a results directory — one that's got all of the runs in it from CBT — and be able to import those into an index. I like to call it an index rather than a database, because I see it as being more of a way of indexing the authoritative information, which is actually still the directory structure.
A: But potentially you could have CBT just writing data into this thing every day, or every so often, with metadata associated with each run, and then use SQLite to be able to do queries against it in a much faster way — and from there, take the results of those queries and send them off to Grafana, or some other database somewhere that's got some kind of GUI built on top of it for visualization, or some other tool to visualize with as well.
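A minimal sketch of that flow (the meta.json filename and schema here are made up for illustration — CBT's real metadata layout differs): walk the results tree, load each run's metadata, and rebuild a disposable SQLite index that can be dropped and recreated whenever the schema changes, while the directory tree stays the source of truth.

```python
import json
import os
import sqlite3

def rebuild_index(results_root, db_path=":memory:"):
    """Recreate the index from scratch over a CBT-style results tree."""
    db = sqlite3.connect(db_path)
    db.execute("DROP TABLE IF EXISTS runs")
    db.execute("CREATE TABLE runs (dir TEXT, benchmark TEXT, iops REAL)")
    for run_dir, _dirs, files in os.walk(results_root):
        if "meta.json" not in files:      # hypothetical per-run metadata file
            continue
        with open(os.path.join(run_dir, "meta.json")) as f:
            meta = json.load(f)
        db.execute("INSERT INTO runs VALUES (?, ?, ?)",
                   (run_dir, meta["benchmark"], meta["iops"]))
    db.commit()
    return db
```

Because the index is derived data, changing the schema is just editing the CREATE TABLE and re-running this function.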
A: All right — Josh, in the other CDS session, for teuthology — or actually, I started out talking about teuthology for continuous integration with functional testing — the topic of labels came up, and I don't know if we want to do the exact same thing or not here. I had just kind of assumed we would grab anything that's got the performance label, and then mix and match other labels and say: okay, if it's performance and rgw, then we run this test suite.
C: I think that makes a lot of sense. I'm not sure about the other one... I guess I'm not clear exactly on the current Jenkins integration, and whether it would need something additional to be able to look at labels there and parameterize things — but I'm guessing it wouldn't be too difficult.
F: I guess it doesn't matter, right? I mean, today the way the perf test works is with a trigger phrase. We could have something like a trigger phrase for this — say, "trigger rgw performance tests". So we could have a trigger phrase and it could do the same thing. I think the hooks are already there; I'm not sure we really need the label to be actually consumed, versus a phrase.
F: That's pretty much what we do for the Crimson performance test — we have a trigger phrase, just like we have a trigger phrase for make check. I'm pretty sure Jenkins is able to consume trigger phrases, if that's the easier way forward; I don't see a problem there. But that's not to say we shouldn't explore the other option. The trigger phrase will probably be easier, but it could cause some confusion in general — like, sometimes, if you add a label by mistake and then remove it, how does it behave then?
A: Yeah, I think I would like to look into the labels if we can. It just seems like it would be a lot of extra work on somebody's part to go in and regularly tell it, in a PR comment, to start running performance tests. I mean, it's doable — it's not hard — it's just that someone has to remember to do that regularly. Yeah.
A: All right — regarding benchmarks: right now we can very easily run rados bench, fio, and hsbench. That kind of covers RBD, RGW, and core object — and sort of omap; not great, but it at least gives us some coverage of it. For CephFS we can do fio, but for the MDS we probably want something like mdtest — or, if anyone knows of other benchmarks that would be better, we can include those as well.
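For concreteness, hedged examples of the kinds of invocations meant here (the pool, image, and mount-point names are illustrative):

```shell
# librbd coverage via fio's rbd engine
fio --name=seqread --ioengine=rbd --pool=rbd --rbdname=bench \
    --rw=read --bs=4M --iodepth=32 --runtime=60 --time_based

# CephFS coverage via fio against a mounted filesystem
fio --name=randwrite --directory=/mnt/cephfs --rw=randwrite \
    --bs=4k --size=1G --iodepth=16
```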
A: There isn't — CBT just dumps the results from whatever benchmark you ran into the results directory. There is a common format for the metadata associated with the test: you can go into that directory and look through the YAML representation of the dict that was the combination of parameters that created that test. But the actual results are in whatever format the benchmark dumped them in. Good.
A: Maybe related to that topic, Josh — when you and I were talking earlier, the reason why I would like to look at indexing the resulting data, rather than just dumping it into a database, is because I suspect that, as we go through this process of figuring out how to format the results from all these different benchmarks, we're probably going to change the schema of that index really regularly. So we want to be able to recreate it on demand really easily, and change it any time.
A: Essentially on demand: you could just delete the index, run the command, and it rebuilds it with whatever new schema changes you made. And then you can take this results index, perform queries against it, and ship those results off to your other thing — the one that's more static, that doesn't change, or rarely changes.
C: And it's more about making the scheduler able to integrate more with the locking system. So instead of having a whole bunch of workers competing for locks, you have one dispatcher that takes a job out of the queue — the next one with the highest priority — locks the nodes for it, and then runs it. That way you can't get into a situation where one job that needs a ton of nodes sits around waiting forever while it keeps losing out to jobs that only need two nodes, for example.
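A toy sketch of that single-dispatcher idea (my own illustration, not teuthology's scheduler): jobs come off a priority queue strictly in order, and the dispatcher waits for enough free nodes rather than letting smaller jobs overtake the head of the queue.

```python
import heapq

def dispatch(jobs, total_nodes):
    """jobs: (priority, name, nodes_needed) tuples; lower number wins.
    Returns the order in which jobs are started."""
    queue = list(jobs)
    heapq.heapify(queue)
    free = total_nodes
    started = []
    while queue:
        prio, name, need = heapq.heappop(queue)
        if need > free:
            # The head of the queue waits for nodes to free up; here we
            # model all running jobs finishing and returning their nodes.
            free = total_nodes
        free -= need
        started.append(name)
    return started

# The 8-node job starts first even though two 2-node jobs are also queued,
# instead of being starved by a stream of small jobs.
order = dispatch([(1, "small-a", 2), (0, "big", 8), (1, "small-b", 2)], 8)
```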
A: So one of the questions would be: if we've already got Jenkins set up on these nodes, is there any reason not to just use that infrastructure to be able to grab, say, four of the nodes that are there right now and do it that way — rather than blowing Jenkins off of them, having teuthology take them for a while to do stuff, and then trying to give them back to Jenkins?
C: I guess all we'd have to do is make sure that Jenkins isn't trying to use them at the same time. So we'd need it to be aware of when teuthology is using them, if we do it that way.
C: For example, you could have a long-running test that does different kinds of benchmarks — one test with BlueStore, one test with Crimson, that kind of thing.
F: Result parsing just works the same — like what CBT does; it's literally using CBT, teuthology is just orchestrating CBT. But yeah, I think to address the variability problem, it's very easy in teuthology to give it a label of machines that it could just use for this purpose. If we just say that we have, like, two or three machines that will be dedicated just for long-running performance testing using teuthology, then we can set them aside and label them in teuthology.
A: If you were careful in the Sepia lab about which machines you grabbed — had them all on the same switch, really careful about making sure they were all local, close by each other, and that that didn't change dramatically, so it was always the same — that might give you relatively decent consistency.
F: Worth a try, yeah. We had this problem earlier as well, when I was exploring this much more, and I did see a lot of variability across machines — even comparing results between them wasn't really sensible. So I think we need to narrow down the machines first, and then also make sure we get the same ones again.
A: In terms of this, though — that's kind of what we'll do with Jenkins, right? We'll do a PR comparison for PRs that are coming in. This, though, is more the long-running kind — like, if we want to build up a bigger cluster and see how it's doing once a week or something, right? That's kind of the idea, yeah.
F: Ideally, if we have our database piece put together, I would really want to have a record of how master has been doing, like weekly — or we could store these results based on SHAs and have historic information, like time-series data, of how things are improving or not improving.
A: I think — if I remember right — from Jenkins we'll get really fine-grained data: every day we'll potentially have multiple sample points coming in, on a small scale, with a single OSD or maybe a couple of OSDs, if we set the tests up so you can get a couple of OSDs on a node. This seems like it would be more where we want to grab a bunch of nodes and create a bigger cluster in teuthology, so that then we can see, you know...
F: I think, as Josh mentioned, there are two separate pieces. The Jenkins integration that we have already is working fine for Crimson, and I don't see why it wouldn't work fine for the classic OSD, so we should just keep using that and probably expand it to support more kinds of workloads — because I know that currently we only have rados bench, but we could include fio and even RGW workloads for PRs. It's an entry criterion for PRs, I would say. What teuthology could help us achieve is the longer-running tests, and those over time.
F: I would like to see how we are doing, right? I mean, currently, with point releases and stuff, there's no way to figure out how much things improved or how much we didn't improve. We should technically have a graph showing us, in, say, a month's time, what changed based on each commit. And teuthology, if we have dedicated machines there, could be one place to store that information as well, because we can technically archive information that teuthology collects.
F: You don't really lose the performance data that you're collecting — you could tune it to say, okay, I want to preserve six months' worth of performance data, and then we could use that to see time-series information about how we're doing over time. So I guess those are two different aspects — two things that teuthology can be useful for. For the PR testing, I think Jenkins is fine, and I think it's a better approach.
C: Go ahead, Josh — I'd just say one major benefit of the Jenkins aspect is that it doesn't require packages. So there isn't an extra hour of waiting to generate packages in order to run the performance tests, which you do need to do for teuthology. That's one reason why it makes more sense, in the short run, for the PR-driven side. Sure.
F: Sure, I see. The only missing pieces today, I think, are about identifying or labeling some machines that we want to run those tests on. Also, I think the integration we currently have only runs on single nodes, so we'll probably have to expand that — I'm sure we can do it; we just haven't, or we've never needed to. We'll have to make sure it runs multi-node, and there's also the aspect of workloads.
F: COSBench is something that has not been maintained, so I don't see that fully working unless somebody spends time on it, and to be honest, I don't have the bandwidth to maintain it right now. But for RBD, fio and rados bench — I don't think that should be a hard job, and I can take that, if you want me to.
A: As those pieces land, I think it will work much better — and then, if there are bugs, you can blame me. There's that, too.