From YouTube: 2016-MAY-26 -- Ceph Tech Talks: Ceph Benchmarking Tool
Description
A walkthrough of the mechanics of the Ceph Benchmarking Tool (CBT).
http://ceph.com/ceph-tech-talks
A: All right, welcome back, everybody, to the Ceph Tech Talks. It's been a couple of months since we saw you on BlueJeans. We saw a fair amount of folks last month at the April live face-to-face at OpenStack, but before that, March was canceled, obviously. So welcome back; we're happy to be here, and today's topic is the Ceph Benchmarking Tool. Many of you have probably heard about this, if not participated in it already.
At the least, this is also going to hook into ceph-brag, which many of you have heard me talk about at various places, like at Ceph Day Portland yesterday. I'm excited to have more CBT usage, and to start publishing people's results on metrics.ceph.com here in the near future. So today, Kyle Bader is going to give us a rundown, a bit about CBT and how it works, and we'll go from there. So, Kyle, if you could take it away.
B: Absolutely. Thanks, Patrick. So, CBT: Patrick gave a brief introduction, but what is it, what is this thing? It's a benchmarking framework. It's written in Python, and it doesn't actually generate load itself. All it is is a way of defining test plans (they're just simple YAML configuration), and then it uses basically parallel SSH to log into the different clients in the system under test and execute different benchmarking tools.
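As a minimal sketch, a test plan has two top-level sections; the hostnames below are placeholders, and exact option names can vary between CBT versions:

    cluster:
      head: "head01"                    # node that coordinates the run
      clients: ["client01", "client02"] # nodes that generate load
      osds: ["osd01", "osd02"]
      mons: ["mon01"]
      # ... cluster-level options ...
    benchmarks:
      radosbench:
        # ... benchmark-specific options ...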
Right, and I'll go into the different tools that it supports. It was originally an engineering benchmark tool used for upstream work; Mark Nelson wrote it when he was still at Inktank, and we continue to use it in upstream development to analyze the performance trade-offs of different sorts of development efforts. And a year and a half ago, maybe, we started using it also for downstream performance and sizing work.
So one of the responsibilities that I'm tasked with at Red Hat is working with OEMs and ODMs to develop Ceph SKUs for different sorts of use cases, for different sorts of workloads. We would configure different sets of hardware and then use CBT to inform how we should reconfigure and fine-tune them to address particular workloads, whether it be object or block storage. It's used by many people in Red Hat, I mean, in the Ceph community.
So what are the personalities? What are the different nodes that are going to participate in a benchmark with Ceph? The main node, the starting node that coordinates and runs all the benchmarks, is referred to as the head node. This is where you're going to have the checkout of the CBT code. It needs to have key-based SSH authentication configured so that it can SSH to every host.
The clients will need to have the Ceph admin keyring, because they run some administration-level commands; in the case of our RBD benchmarks, for example, they're going to create the RBDs and maybe map them, so they need to have the admin keyring. You could pare the permissions down, but in a lot of lab environments least privilege is not a huge deal; still, you could craft keys with specific permissions.
You'll want those things installed beforehand. If you're using the KVM RBD FIO benchmark, the clients should be the actual VMs where you want to run the workload, not the hypervisor. They can also be containerized, so the clients could be a list of IPs that point to, you know, LXC containers, Docker containers, etc.
B
If
you
want
to
do
the
RVD
fio2
test,
the
you
know,
multiple
care,
VD
instances
on
a
single
host
it's
closed
and
then
obviously
you
have
the
the
motors
and
the
OSDs
just
kind
of
normal
parts
of
a
set
cluster.
If
you
are
using
CBT,
if
you're
using
CBT
to
configure
the
cluster,
this
is
you
know
where
it's
going
to
actually
set
up
the
set
up
the
monitor
forum
and
and
and
go
ahead
and
and
provision
the
OSDs.
But
that's
that's
optional.
You
can
not
have
CBT
do
the
configuration
of
the
cluster.
You can run it against an existing cluster if you'd like, because maybe you want to configure it in a specific way, or you want to test the cluster as-is. Maybe, in the case of the downstream product, someone is installing Ceph through the traditional install methods and wants to know the performance of the way they're actually deploying systems for production. That's where doing benchmarks against an existing cluster is fairly useful.
So what benchmarks does it support? Well, there's a number. At the lowest level there's rados bench, and then there's a number of FIO benchmarks, depending on what level of RBD you want to test. Probably the simplest is using FIO with the RBD engine: it's just FIO running on a host, which can be a bare-metal Linux host or a VM, as long as it has access to the public network; FIO is linked into librbd.
You can run FIO against an ext4 file system on a kRBD device; and then, if your clients are actual virtual machines, it can run FIO against a volume that's been attached to them through the normal QEMU/KVM mechanisms. And I suppose, if you were using a different hypervisor, as long as it was pointing at an RBD device, it would work as well. And finally, there's COSBench.
B
If
you
want
to
do
s3
or
swift
testing
cause
bench
is
a
tool
from
Intel
for
testing,
object,
storage
performance
and
it
can
be
used
completely
independently
of
CBT.
But
one
of
the
things
you
get
by
running
cause
fetch
with
CBT.
Is
you
can
you
can
define
the
configuration
that
you
want
cause
bench
to
run
in
a
CPT
Amal
configuration,
so
you
can
kind
of
pass
that
between
other
people
that
are
familiar
with
CBT
and
and
they'll
know
exactly
what
you're
talking
about
without
that
necessarily
having
to
get
too
familiar
with
cause
bench.
So, some of the other things that CBT does that make it kind of great: like I said before, it can create the cluster. This is something that Mark Nelson does a lot in his upstream work, where he compiles multiple different versions of Ceph, uses the different sets of binaries to actually create the cluster, and executes different tests against them, to be able to establish when there are performance regressions,
B
If
there's
performance
improvements
for
a
change
in
the
upstream
code
it,
and
because
of
that,
it's
it's
evolved,
a
couple
different
functionalities
that
it
can
do
it
can
test
past
year
configurations.
You
know
when
we
added
the
the
cache
to
your
code
into
the
upstream
code
base
and
I
think
Firefly.
B
Also
important
from
from
all
the
machines
about
the
clients,
are
the
OSD
is
the
monitors
and
in
that
way,
when
you're
going
back
through
kind
of
the
the
results
of
your
testing,
you
can,
if
you
see
performance,
that
that
is
not
what
you
expect.
You
can
see
whether
there's
a
bottleneck
at
a
disk
or
or
there
are
at
the
CPU,
and
you
can
use
that
to
inform
how
you
could
potentially
reconfigure
your
hardware
to
eke
out
more
performance.
So, the basic setup, like I hinted at earlier: you want an SSH key on the head node, and the public key for the head node's SSH key on all the hosts, including the head node itself, because it actually runs some commands on itself. So make sure the public key is in the authorized_keys file for all the nodes in the cluster and all the clients. You'll have to pre-install the Ceph packages on the hosts; CBT doesn't
B
Doesn't
you
know,
install
packages
for
you,
so
that's
something
that
is
done
beforehand.
There's
a
kind
of
a
crude
set
up
script
it.
It
shows
some
of
the
stuff
that
you
need
to
install
in
the
repository.
If
you
want
to
peek
into
that
and
see
kind
of
a
list
of
some
packages
that
could
help
out,
you
want
the
PD
SH
packages
on
all
hosts.
Not
all
distributions
support
these
in
in
most
of
the
bun
two
or
Debian
based
distributions.
I,
think
you
it's
just
it's
just
available
and
their
regular
repositories.
If you're running something like CentOS 7 or RHEL 7, you might need to pull in some packages from Fedora; in the setup script in the CBT repository there are some links to which packages you need and where you can get them. And then you'll obviously need collectl installed on the hosts if you want to be collecting monitoring data, which is super useful, so it's highly recommended.
General test methodology, or actually pretest methodology, really: test your network. A bad network will obviously impair the performance of a distributed storage system, and it's remarkably common, especially in quickly assembled lab environments where you're setting up a bunch of machines really quickly to do some tests. So do an all-to-all iperf, where every host that has OSDs runs iperf to all the other OSD hosts, on both the public and the cluster network.
B
If
you're
using
a
cluster
network,
make
sure
to
check
your
your
routes,
your
interfaces
for
bonding
this
is
kind
of
just
general
stuff.
You
could
advise
if
you're
using
bonding,
you
really
want
to
make
sure
you're
using
five
temple
hashing
and
at
LACP,
and
the
nodes
themselves
have
should
be
using
like
layer.
B
So
it's
not
just
going
to
rely
on
the
back
address
and
the
IP
address.
It's
also
going
to
include
the
port,
and
since
the
Ceph
messenger
is
creating
connections
on
different
ports
to
connect
to
to
each
of
the
different
hosts.
You
much
better
link
utilization
if
you're
using
a
bomb.
So
that's
just
kind
of
some
general
advice,
not
necessarily
specific
to
CBT.
B
If
you're,
using,
if
you're
doing
kind
of
micro
benchmarks,
where
you're
you're
you're
just
kind
of
comparing
a
few
things,
obviously
you'll
want
to
do
multiple
iterations
that
you
can
look
at
the
variance
between
the
different
runs
and
kind
of
establish
a
standard,
deviation
and
understand
you
know.
Maybe
you
might
get
an
outlier
result
so
see
it's
good
to
run
multiple
iterations
oftentimes.
B
If
you're
really
wanting
to
establish
the
the
maximum
performance
of
a
cluster
that
you
have
in
a
lab,
you
want
to
do
a
client
suite,
so
that
use
you
slowly
incrementally
build
load
until
you
can't
hit
a
point
of
contention
and
and
can
see
where
the
max
throughput
is
the
cluster,
because
if
you
just
run,
you
know,
if
you
have
four
clients,
and
all
you
do
is
just
run
four
clients
at
a
single
single.
You
know
level
of
parallelism.
B
You
don't
know
where
on
the
arc
of
performance
you
are,
and
you
don't
know
if
that's
that,
the
maximum
performance
that
you're
going
to
be
able
to
get
so
by
doing
a
client
sweep.
You
can
see
that
nice
incremental
load
building
on
the
cluster
and
then
you'll
know
that
you
have.
You
have
established
enough
client
load
once
you
see
kind
of
a
point
of
contention
and
you
start
to
get
diminishing
returns
by
adding
clients.
B
If
you're
doing
when
you're
doing
these
client
sweeps,
you
should
always
start
with
with
one
client
instead
of
starting
at
you
know
two
or
four,
depending
on
how
many
clients
you
have
they're,
starting
at
one
that
you
can
measure
the
efficiency
of
of
adding
additional
clients
right.
So
if
you
have
one
host
and
it's
able
to
do,
you
know
two
and
fifty
I
ops
and
you
you
know
yeah
the
second
host
it
does
was
200
and
that
one
does
240.
B
Then
you
know
the
efficiency
has
gone
down
a
little
bit,
and
so
it
enables
you
to
see.
You
know
what
you
want.
I
mean
an
ideal
circumstance.
You
have.
You
know
linear
performance
games.
You
had
clients.
Obviously
that's
not
going
to
continue
on
forever.
As
you
add
clients,
but
but
you
should
always
start
with
a
single
at
x1
for
a
client,
sweet
and
kind
of.
B
If
you
want
to
be
able
to
establish
curve,
if
you're
going
to
use
modeling
tools
on
the
the
data,
you
should
probably
have
four
to
four
to
six
different
increments
of
clients.
Right
so
you'll
do
do
one
you'll
do
benchmark
with
one
client
you'll
do
a
benchmark
with
two
clients
to
do
a
benchmark:
three
clients,
4,
5,
6,
etc.
That
way
you
can
you'll
have
enough
points
to
really
be
able
to
do
something.
So how do you use it? As I mentioned before, you define a test plan for CBT, and that's just a YAML file. In the top part of your YAML file you'll have the cluster-level configuration (on the slide, this would just continue long-form if I had enough space, so I split it in two), and the profiles will just sit underneath the existing
cluster section right there. It's in your cluster configuration that you define your head node; you have an array of the client nodes, an array of the OSD nodes and the monitor nodes, the OSDs per node, and the FS and mount options. If you're not using cluster creation, the values for those last three don't necessarily matter, but even if you're using an existing cluster you should still have them set, or CBT will bomb out on you.
You also set the number of iterations that you want to run for the test plan; and, if you're having CBT build the cluster, whether or not you want to rebuild the cluster between tests. Sometimes that's useful if you want a fresh, clean slate in between runs. And then the temp directory is just where, on the head node and client nodes, CBT is going to dump the temporary data: the monitoring data and the output from the various benchmark tools.
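As a sketch, that cluster section might look like the following; the hostnames are placeholders, and the key names follow CBT's bundled example plans, so they may differ slightly between versions:

    cluster:
      head: "head01"                   # coordinates the run, has the CBT checkout
      clients: ["client01"]            # load generators
      osds: ["osd01", "osd02"]
      mons: ["mon01"]
      osds_per_node: 4
      fs: xfs                          # fs/mkfs/mount options matter only if CBT builds the cluster
      mkfs_opts: "-f -i size=2048"
      mount_opts: "-o inode64,noatime"
      use_existing: true               # run against an already-deployed cluster
      rebuild_every_test: false
      iterations: 3
      tmp_dir: "/tmp/cbt"              # where monitoring data and tool output get dumped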
And then you have the pool profiles, where you specify a pool profile. The higher-level key, "replicated", is the pool profile name, and under it you can specify the number of placement groups you want; obviously you want to size that according to your cluster, using the general guidance. You can also specify a replication level: it will default to three, but it could be two or some other value if you wanted to test at a different level of replication.
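For instance, a sketch of such a profile (the pg counts here are illustrative and should be sized to your own cluster):

    pool_profiles:
      replicated:          # profile name that benchmark sections reference
        pg_size: 4096      # placement groups
        pgp_size: 4096
        replication: 2     # defaults to 3 if left out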
So how do you do client sweeps? I talked a lot about how you want to run with one client and then continue to add more clients. Currently, this is done by creating separate test plans, each with a different list of elements in the clients array. So if you wanted to run four different client levels for your client sweep, you would create four different test plans.
The first one would have one client, the next two, and subsequently three and four, and then you can loop through the different test plans you want to run, as sketched below. That way you can get data points for the different levels of clients. It would be great if we baked this into CBT, but right now this is the way to do it without modifying the code.
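A sketch of such a sweep: plans that are identical except for the clients array, run one after another. The cbt.py invocation follows the CBT README; the file names and paths are placeholders:

    # sweep-1client.yaml
    cluster:
      clients: ["client01"]
      # ... everything else identical across plans ...

    # sweep-2clients.yaml
    cluster:
      clients: ["client01", "client02"]
      # ... everything else identical across plans ...

    # then, from the CBT checkout, run each plan in turn:
    #   ./cbt.py --archive=/tmp/results/1client  sweep-1client.yaml
    #   ./cbt.py --archive=/tmp/results/2clients sweep-2clients.yaml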
So, on to the benchmarks. The first and most low-level benchmark is obviously rados bench.
Anyone that's used Ceph a bit is probably familiar with using it, just to establish that the cluster you just created is working properly. From the head node, CBT is going to use pdsh to log into each client, and it's going to spawn a rados bench process.
You can specify a number of options. You can give an array of different op sizes to iterate through, and choose whether you want to do a write-only test or reads and writes. You can specify things like the run time, the number of concurrent operations, and whether you want multiple processes per client. Obviously you can increase the number of concurrent ops, which increases the number of threads per rados bench process, but that's limited in terms of its ability to scale up. So if you have really powerful clients, maybe it makes sense to not have as many concurrent ops, but to have multiple concurrent processes, each with their own pool of threads.
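A sketch of a radosbench section along those lines (values are illustrative; key names follow CBT's example plans and may vary by version):

    benchmarks:
      radosbench:
        time: 300                   # seconds per run
        write_only: false           # do read tests too
        op_size: [4194304, 65536]   # op sizes to iterate through
        concurrent_ops: [32, 64]    # threads per rados bench process
        concurrent_procs: 2         # rados bench processes per client
        pool_profile: replicated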
There's the use_existing option; when it's true, that's what you would use if you didn't want CBT to create the cluster. And then you can specify the pool profile. If you're having CBT create the cluster, you can have it also set the readaheads; that's not something that's currently supported for a use-existing cluster.
Probably the second most commonly used benchmark is the librbd FIO benchmark. That uses FIO with the RBD IO engine: CBT is going to pdsh into the clients and spawn FIO processes that use the RBD IO engine. FIO is linked into librbd, so it's basically just doing the IO in user space; you don't need to set up any VMs or containers ahead of time. All you need is FIO
B
That's
linked
to
live
our
BD
as
long
as
you
have
neither
the
admin
can
reading
it'll
be
able
to
create
the
volumes
and
stress
them
with
that
file.
Kind
of
has
the
the
whole
whole
gamut,
and
this
is
an
exhaustive
list
of
the
things
that
it
can
pass
to.
Fi,
oh
so,
if
you're
accustomed
to
running,
if
I
owe
it
all
a
lot
of
these
will
look
familiar
volume,
sizes,
size
and
megabytes
that
you
want
the
volume
to
be
created.
You
can
specify
the
different
modes,
whether
grand
Raider
and
right.
B
If
you
want
to
do
a
read/write
mix,
you
know
you
can
specify
the
the
percentage
of
reads
that
you
want
list
your
op
sizes
and
whether
you
want
multiple
processes
per
volume.
Multiple
volumes
for
clients
say
you,
you
say
you
have
one
client
and-
and
you
know
maybe
one
thing
you
want
to
do
is
is
run
multiple
volumes
per
client.
So
if
you
only
have
four
nodes,
you
can
still
test
varying
numbers
of
clients
by
by
maybe
you
know,
ring
one
node
with
one
volume
per
client
and
then
running.
B
You
know
one
node
with
two
two
volumes
per
client
and
then
one
node
with
three
villains
per
client
and
then
add
a
second
actual
client,
and
then
do
you
know
to
two
client
machines
with
two
volumes
per
client,
etc,
and
that
way
you're
good,
incrementally
increasing
the
number
of
volumes
which
is
relatively
equivalent
to
incrementing
the
number
of
clients.
As
long
as
you're
not
hitting
a
resource
bottleneck
on
your
client
machines,
you
can
give
it
a
ramp
IO
depths
again.
B
Get
you
can
provide
a
path?
You
know
if
you,
if
you
built
a
photo,
you
sell
yourself,
you
can
pass
a
command
path
for
where
the
FIO
binary
should
be
found.
And
one
thing
that's
useful
is
the
the
use
existing
volume.
So
you'll
have
to
run
one
time
well
without
the
use
existing
volumes,
because
the
first
time
you
run
without
use
existing
volumes,
it's
going
to
create
the
volumes
and
F
cbt
preconditions
all
the
volumes
right.
rather than leaving them thin provisioned, it'll actually write out all the objects into the OSDs, so that it doesn't have to create them while it's running the benchmark. With that, you get disk space utilization of the volume size, times the number of clients, times the number of volumes per client; and if you have a lot of volumes at a large size, you might not want to be removing them and recreating them for each iteration.
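Putting that together, a librbdfio section might look like this sketch (key names follow CBT's example plans and may vary by version; values are illustrative):

    benchmarks:
      librbdfio:
        time: 300
        ramp: 30                       # warm-up before measurement
        vol_size: 16384                # volume size in MB
        volumes_per_client: [2]
        procs_per_volume: [1]
        mode: ['randwrite', 'randread']
        op_size: [4096]
        iodepth: [32]
        cmd_path: '/usr/local/bin/fio' # if you built fio yourself
        use_existing_volumes: true     # after the first, preconditioning run
        pool_profile: replicated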
There's also FIO to test against kRBD. CBT is going to pdsh into all the client nodes, create the kRBD volumes, map them to an RBD block device, create an ext4 file system on it and mount it, and then it'll spawn an FIO process per client and use the AIO engine to work against files created on that file system on kRBD. And this can be a bare-metal host,
B
If
you
should
want
to
run
a
single
care
B
or
if
you
want
to
run
multiple
contra
BD
instances
per
physical
client,
you'll,
probably
want
to
create,
create
containers
on
the
physical
host
and
then
list
the
containers
IP
addresses
as
as
clients
in
the
clients
array
and
then
each
each
container
can.
Then
you
know,
map
and
create
and
benchmark
it's
its
own
file
system,
and
so
this
kind
of
establishes
the
the
performance
potential
of
Caribee
and
all
the
all.
The
options
are
very
familiar
to
the
other
benchmark.
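The corresponding section is sketched below (CBT's examples call this driver rbdfio; values are illustrative):

    benchmarks:
      rbdfio:
        time: 300
        vol_size: 16384
        mode: ['write']
        op_size: [4194304]
        iodepth: [16]
        concurrent_procs: [1]
        pool_profile: replicated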
Finally, the last FIO-type test is testing the block performance under KVM/QEMU. You'll create the KVM instances beforehand, outside of CBT, and attach the RBD volumes to the instances yourself. That's not something CBT does: it doesn't make calls to libvirt or anything to actually create the KVM instances.
So the KVM instances are just guests that you pre-create and attach RBD devices to; you list the IPs of the KVM instances as your clients, and then, similar to the kRBD benchmark, it's going to create the file system, mount the file system, spawn FIO, and execute the workload. That way you can establish RBD performance and see how the KVM/QEMU IO subsystem interacts with it.
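A sketch of that section (CBT's examples call this driver kvmrbdfio; the guests' IPs go in the cluster clients array, and the values here are illustrative):

    benchmarks:
      kvmrbdfio:
        time: 300
        vol_size: 16384        # size of the pre-attached volume to use
        mode: ['randwrite']
        op_size: [4096]
        iodepth: [16]
        concurrent_procs: [1]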
One caveat: if you have a lot of KVM instances on a hypervisor, and you have FIO running in a lot of these clients on a single host, QEMU is going to emulate the timing calls to the hardware registers, the calls FIO uses to get timing information for accounting completed IOs. And if you have a lot of machines all running FIO and all requesting timing information,
B
Actually
the
the
emulation,
the
emulation
calls
sometimes
can
can
block
the
key
move
event
loop,
and
so
you
can,
you
can
lose
the
KVM
instances
can
lose
ticks
and
so
it
when
it
finally
become
when
the
event
loop
on
it
becomes
unblocked.
Then
you'll
see
a
large
number
of
completions
for
all
the
I/os
that
completed
during
the
event
loop
stall
right.
So
so
that's
kind
of
a
not
so
great
thing
about
KBM.
If
you're
you're
running
a
bunch
of
instances,
they
can
kind
of
on
a
single
hypervisor,
they
can
kind
of
walk
up.
So if, all of a sudden, while watching a guest, one of the instances, in dstat or something, you see that it's missing ticks and then a bunch of IO completions right after, that's why. And then, finally, there's COSBench. If you want to exercise the Ceph RGW, you configure the RGW outside of CBT, and you'll install
B
The
CBT
drivers
on
both
the
head
node
and
the
the
clients
ahead
of
time
and
basically,
what
the
the
cosmetic
driver
does
is.
It
translates
the
the
CBT
llamo
to
the
cosmic
XML
and
and
then
runs
the
the
cosmic
drivers
using
the
command-line.
Are
you
using
the
command-line
tools
and
in
that
way,
if
maybe,
if
you're
not
so
familiar
with
cause
bench
or
if
you
want
to
kind
of
run,
cos
within
CBT?
you can test using a bunch of different YAML files. You'll run from the CBT directory: you give CBT the archive directory to store the results in, and then the path to the test YAML, e.g. ./cbt.py --archive=<archive dir> <test plan>.yaml. The path to the test YAML should have a different client count in it for each plan, because that way you're using a different test plan for each step. (And that's just a typo on the slide.)
For analyzing the results, there's some crude working Python. It'd be great if we could all kind of work together to add some really good tools for analyzing the data to the CBT repository; that's something that we've always wanted to do. And then, once you have CSV files, you can plot them with whatever your plotting tool of choice is, whether you're using gnuplot, Excel, or R. And that's what I have for today. We have some time, so I'd be happy to field any questions.
And also, there's a mailing list for CBT. So if anyone has questions that come up later, if they start playing with CBT and they're unsure how a particular benchmark works, or just need a little bit of help getting it running, that's what's great about the community: we have a few people using CBT and following the mailing list. So if you have questions, that's a great forum for them.
A
Yep
all
right,
well
thanks
Kyle,
and
thank
you
everybody
for
coming
to
the
to
the
Ceph
Tech
Talk
for
May.
If
you'd
like
to
join
us
again
in
June,
we'll
be
back
here,
the
same
time
same
channel
for
on
the
23rd
is
typically.
If
there
are
no
scheduling
conflicts,
it's
the
fourth
Thursday
of
every
month
from
1:00
to
2:00
p.m.
Eastern
Standard
Time.
So
we
will
see
you
all
again
next
month.