From YouTube: Teuthology Training: Introduction
Description
* Ceph Developer Guide: https://docs.ceph.com/en/latest/dev/developer_guide/testing_integration_tests/tests-integration-testing-teuthology-intro/
* Ceph Teuthology Documentation: https://docs.ceph.com/projects/teuthology/en/latest/
* Ceph Teuthology project wiki page: https://tracker.ceph.com/projects/ceph/wiki/Teuthology
A: After a month, I just wanted to welcome everybody to the teuthology training sessions. This is the first one; it'll be an intro, and Greg will go through a number of things here, and then I'm happy to discuss and make this as interactive as we can. So please, if you have questions at any time, feel free to jump in.
B: This presentation was made when I was going around trying to encourage more people to get into Ceph testing, and I've updated it some, but forgive me if there are any oddities. I think we have an hour scheduled for this. It's not going to take an hour, so, like Josh said, if you have any questions, just speak up, or, if you're not comfortable doing that, put a question in the chat; I think we also have a raised-hand function.
B: But yeah, so we'll get going. This is about testing Ceph, which we mostly do with our teuthology framework. Teuthology, actually, is an academic field: it's the study of cephalopods, like the squid that we named our project after, and teuthology the framework is in the neighborhood of ten years old. We needed to formalize Ceph testing because we didn't really have a good test system back then; on more than one occasion I would run a file system suite against CephFS and discover an if statement that had been flipped back and forth two or three times over the last six months, depending on which test we'd run most recently. So we really needed something stronger that we could run on a regular basis and track results with, and write test cases to cover specific scenarios when we discovered issues, to make sure we didn't regress. At that time there weren't really any good test frameworks for distributed systems.
B: So we had hired a guy who had a lot more experience than me at the time, and he decided this was a problem that needed solving, so he took a go at it. The first try involved autotest, which I think is used mostly for Linux kernel stuff; we had a Linux kernel module, and that's sort of how we discovered it and got into it.
B: But it was unsuccessful, because it was really designed for doing stuff on a single machine, and we need to be able to manipulate multiple machines at once when we're testing Ceph, because we have clients and servers, and sometimes they restart at different times. You can sort of see mocking this up with KVM.
B: But if we could just have different servers, it would be a lot easier, and even with KVM we'd need a way to talk to the different VMs as if they were independent. So we wrote a system. It is based on an "orchestra" communications module in Python (all of teuthology is written in Python, except for a few ancillary things that we do in shell or whatever), and orchestra, if you write teuthology tests, is something you'll probably never interact with directly, except in the way on the right where we do cluster.run.
B: Orchestra is a thing that wraps SSH and lets us nicely execute commands on remote machines, connect to them, and do things to them. Teuthology started as just a test runner; it's more today, but that was its first job.
B: So targets are literally just a list of machines. We still use the ubuntu user because, way back when, this was all Ubuntu, and we had machines called sepia located in our little Ceph section of DreamHost (DreamHost incubated Ceph back then), so that's why these are dreamhost.com hostnames. Then there are roles.
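A minimal sketch of what such a job configuration might look like; the hostnames, keys, and role layout here are hypothetical, not taken from the slide:

```yaml
# Hypothetical targets + roles fragment. "targets" lists each locked
# machine (with its SSH host key, shortened here); "roles" assigns
# cluster functions to those machines, in order.
targets:
  ubuntu@sepia01.dreamhost.com: ssh-rsa AAAA...
  ubuntu@sepia02.dreamhost.com: ssh-rsa AAAA...
roles:
- [mon.a, mds.a, osd.0, osd.1]
- [client.0]
```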
B: And then we have a list of tasks that we want to execute, the dashes, which actually make up the test; each of these top-level lines is its own test, or sorry, its own task.
B: The ceph task does that, and the kclient task takes as a parameter this list of clients (in this case just the one) and mounts the kernel client on that machine against the previously created Ceph cluster. Then we have a task called workunit, which is for executing shell scripts, which we call work units, that live in the git repository; and we're telling that workunit that we want to run it on all of the clients.
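Putting those pieces together, a task list of the kind being described might look like this; the workunit script name is just an example:

```yaml
# Hypothetical tasks fragment: entries run in order. The ceph task
# brings up a cluster, kclient mounts the kernel client for client.0,
# and workunit runs a shell script from the qa/workunits tree on
# every client role.
tasks:
- ceph: null
- kclient: [client.0]
- workunit:
    clients:
      all: [suites/fsstress.sh]
```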
B: In Python technical terms, these tasks can be context managers, like the ceph one. What that means is that a task is sort of broken up into an execution phase, and then it yields and lets other things happen; once you return control to that task, it does its teardown. So the ceph task turns on a Ceph cluster and then it yields, and you get to do other things, like mount clients.
B: The kernel client task then yields in turn, and then we can, in this case, run autotest (which I don't think exists anymore) and interactive, which is a test fragment that simply pauses execution so you can go log into machines; then you Ctrl-D it to say "all right, I'm done now", and it returns.
B: But then, once control passes back to the kernel client, its teardown functionality is to unmount, and the ceph task's teardown functionality is to shut down the Ceph cluster. This is because a Ceph cluster is a big running thing and we want to clean up politely. In addition to it just being polite, we want to make sure that we shut down successfully and that we don't have bugs where Ceph programs cause crashes on shutdown, which makes people unhappy, and things like that.
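That setup/yield/teardown nesting can be sketched in plain Python; this is an illustration of the pattern, not the actual teuthology code:

```python
# Illustration of teuthology's context-manager tasks: each task runs
# its setup, yields to the tasks after it, and runs its teardown only
# when control unwinds back through it.
from contextlib import contextmanager

events = []

@contextmanager
def task(name):
    events.append(f"{name}: setup")         # e.g. start the Ceph cluster
    try:
        yield
    finally:
        events.append(f"{name}: teardown")  # e.g. shut it down politely

def run(names):
    # Enter tasks in order; teardowns then fire in reverse order.
    if names:
        with task(names[0]):
            run(names[1:])

run(["ceph", "kclient", "workunit"])
print(events)
```

Running it shows the onion-like ordering: setups outermost-first, teardowns innermost-first.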
B: Now, of course, it's bad if you collide with other developers who are running tests, so in teuthology we have a lock server that lets you grab machines and lock them for your own use; teuthology then won't give them out to other people or try to schedule things on them.
B: We have a couple of different machine types, which we'll talk about in a bit, and the teuthology-lock command just gives you back a list of targets, which you can conveniently put into a file and use as your targets.yaml. In addition to running tasks or jobs directly from the command line on your own machine out to the lab, you can schedule things; when you schedule things, that puts jobs into a beanstalkd queue at the moment (although we might be changing that in the future), and those jobs are just grabbed and executed.
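A hypothetical session with the lock commands might look like this; the flag spellings are from memory of the teuthology CLI and may differ between versions:

```shell
# Lock three machines of a given type and save the targets they print.
teuthology-lock --lock-many 3 --machine-type smithi > targets.yaml

# Run a job directly against the locked machines.
teuthology job.yaml targets.yaml

# Release them when done.
teuthology-lock --unlock smithi001 smithi002 smithi003
```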
B: This is a listing from a couple of years ago, and it's pruned, so it'll be somewhat different now, but it's sort of the basic idea. At the top level of any given... sorry: rados is a whole suite, and verify is a sub-suite inside of it, and you can have arbitrarily nested sub-suites.
B: That's if you want to make smaller groups to run. But within any given folder, any files at the top level get stuck together, and from each subfolder we grab one piece to combine into jobs, and we do all the combinatorial combinations of those fragments to get the whole thing.
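That combination step can be illustrated in a few lines of Python; the file names here are made up, but the cartesian-product behavior is the point:

```python
# Sketch of how a suite directory expands into jobs: top-level files
# go into every job, and each subdirectory contributes exactly one
# fragment per job, so the job list is a cartesian product.
from itertools import product

top = ["ceph.yaml", "rados.yaml"]            # always included
dirs = {
    "thrash": ["default.yaml", "aggressive.yaml"],
    "objectstore": ["bluestore-bitmap.yaml", "bluestore-stupid.yaml"],
    "tasks": ["recovery.yaml", "rados_api.yaml"],
}

jobs = [top + list(combo) for combo in product(*dirs.values())]
print(len(jobs))  # 2 * 2 * 2 = 8 jobs
```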
B: The reason for that is it makes it really easy to, for instance, say: hey, we have a new BlueStore allocator, or we have this new option that we've created in RADOS, and we want to run all of our existing tests with it on and with it off, rather than having to go through and modify all of the tests.
B: So when you run teuthology-suite against a folder structure that looks like this, it will always include the ceph.yaml and rados.yaml files (and this clusters one, which is one of the special cases we'll get back to). Then it'll say: okay, we have this thrash folder with two yamls inside of it, but we're going to run with the default one first; we have this objectstore folder, and we'll grab the bluestore-bitmap.yaml; and for the tasks we run recovery.
B: That's one job, but then there's a whole bunch of other files, so we'll walk through and say: all right, we'll run the same configuration but with the rados API test fragment against it, and then with the rados class (cls) test fragment against it; and then, hey, we've done all those combinations with the original setup, so now we'll move on to a new objectstore configuration and schedule recovery against it, and so on ad infinitum.
B: There is one special thing: this plus sign is actually a file in the file system, and it's saying that, rather than grabbing one yaml at a time, I want you to use all of them and glue them together. We do that for things like suites where a sub-suite has bits of information we want to add together and run on every single thing, without changing them.
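So a sub-suite directory might be laid out like this; a hypothetical sketch of the conventions just described, not an actual listing from the qa tree:

```
rados/verify/
├── %                      # empty marker: combine the directories below
├── ceph.yaml              # top-level files are included in every job
├── clusters/
│   └── fixed-2.yaml       # one fragment chosen from each directory
├── objectstore/
│   ├── bluestore-bitmap.yaml
│   └── filestore-xfs.yaml
└── tasks/
    ├── +                  # glue marker: use all files here together
    ├── rados_api.yaml
    └── rados_cls_all.yaml
```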
B: Okay, if you have any questions on that, now's the time, because we're going to move on. So, this is definitely no longer the complete set of suites, but we have a whole lot of different suites that cover a lot of different pieces of functionality across the Ceph code base.
B: We also have this really interesting thing within the suites called thrashers. If you've ever read about the Netflix Chaos Monkey: thrashers, like chaos monkeys, turn nodes on and off, or, in the case of this one, they grow and shrink the number of PGs in the pools involved, just randomly, by running in the background.
B: So, for instance, in the MDS, for the file system, we have a sequence of configuration options that lets you assert out at critical points, and some of the teuthology tests do things like set these config options so that, when you migrate data between MDSes, we can set a config option to a number and just step through, bailing at every single step in the migration process.
B: That's not really how you interact with it much today. Today, there are people outside of the Ceph upstream running teuthology in their own labs, but we run it in the sepia lab, which is a community lab devoted to Ceph. It's hosted at Red Hat and has machines donated by several different companies, and we just got a new addition of gibba nodes (which is some kind of cephalopod, I don't know what), most of which are SSD-based.
B: We still have some hard-drive-based ones from many years ago, and it's just devoted to running Ceph tests all the time. SSH access is granted to engaged developers, which is, I think, most of the people I see on this call; if you contribute a couple of PRs, you can get granted SSH access. And what you actually execute is not teuthology-schedule or teuthology, but teuthology-suite, to run one of those suites like the example we looked at.
B: Let's see. Right, so this is one that I happened to run last month or something; I just went through my shell history to find an example command. When you run teuthology-suite, you are specifying a particular suite to run: this "--suite rados" says run the rados suite, and I'm running it on a particular set of machines, the smithi ones in this case, and I am drawing a particular package; this is a branch called wip-stretch-updates, which is in the ceph-ci.git repository.
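A reconstruction of the kind of teuthology-suite invocation being described; the exact flag spellings are from memory of the CLI and may differ between versions:

```shell
# Expand the rados suite into jobs and schedule them on smithi nodes,
# using the packages and qa/ tree built from a branch in ceph-ci.git.
teuthology-suite \
    --suite rados \
    --machine-type smithi \
    --ceph wip-stretch-updates \
    --suite-branch wip-stretch-updates \
    --subset 9/90
```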
B: We'll talk a little more at the end about where these artifacts come from, but we have a thing that builds packages, and I want to use the version of the suite which is located at the same place. And I don't want to run all the jobs in the suite (we'll talk about subsets); I'm shrinking the size of it for this particular command.
B: So, you know, two or three years ago there were 148 SSH keys; there are more now. To get a build, you push a branch to ceph-ci.git, then you execute a suite command like the longer example I gave you, and then the results become available at our Pulpito website.
B: These suites are run by individual developers working on a particular feature, but then we get pull requests from ourselves and from outside contributors, and the tech leads and reviewers will look at PRs and then build integration branches, gluing a bunch of them together, and run those through the suites to check for issues.
B: We also have unit tests that you can run locally by running make check, and those also execute every time anyone pushes a PR.
B: Let's see. This ceph-object-corpus thing that I've highlighted is a set of examples of all the things that Ceph stores on disk, from every version that we've made (or at least most of the versions we've released), and this is how we make sure that our code can always read the disk state that might exist when we do upgrades and such. And there are a few bits that I sort of hinted at but haven't discussed yet: teuthology suites have gotten huge.
B: Three years ago the rados suite was already up to over 124,000 jobs, and it's much, much larger now, because when we add new options we double the size of the suite, saying "I want to run everything against this", or at least we double the size of a big group of it. That's too many jobs to actually run on a regular basis.
B: So there's this subsets functionality. When you run a subset, you specify a numerator, which number of the subsets this is, and a denominator, how many subsets you're saying there will be. This is a bit of a lie: we don't actually run all 500 subsets against any given build, but we're saying that if we assumed we had 500 subsets, and across those 500 we wanted to run every single combination, then just give me the one subset that touches all of it.
B: That is, a subset that uses all the yaml fragments in at least one test, but does not run literally all of them in every possible combination against each other. Then, as time goes on, we can iterate the numerator to step through and run different combinations, so the nightlies will each run one.
B: I don't know what the cycle is anymore (it might be two weeks), but the nightlies iterate through to eventually get all the combinations, and when you're building and testing a branch, you want to change the numerator every time you schedule a new run against new code, just to try to get all the possible coverage. But this still gives us a good subsample of the coverage.
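The numerator/denominator idea can be sketched like this; the real scheduler divides the matrix more cleverly, so that every yaml fragment still shows up in at least one scheduled job, but the slicing principle is the same:

```python
# Sketch of --subset: with denominator D, the job matrix is split into
# D slices, and subset N/D schedules only slice N. Iterating N across
# runs eventually covers the whole matrix.
def subset(jobs, numerator, denominator):
    return [job for i, job in enumerate(jobs) if i % denominator == numerator]

jobs = [f"job-{i}" for i in range(20)]
print(subset(jobs, 1, 5))  # every 5th job, starting from index 1
```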
B: In addition to the subset, you can filter to specific pieces. Like if you're making a BlueStore change... well, no, that's not a good example. If you're making a change to the monitors, you might not care about certain things, so you can say "hey, I only want to test things that include this option", or you can filter out specific sorts of jobs. The name of a test is actually just a concatenation of all the yaml fragment file names.
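Hypothetical examples of those filtering flags; the flag names are from memory of the CLI, and the patterns are made up:

```shell
# Keep only jobs whose description (the concatenated fragment names)
# mentions the monitors.
teuthology-suite --suite rados --ceph my-branch --filter mon

# Or drop job families you do not care about for this change.
teuthology-suite --suite rados --ceph my-branch --filter-out dashboard,cephadm
```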
B: Mostly, yes. I've had it fail a few times and needed to go poke at things.
B: So if you actually want to run a test, you need to build the packages, which is not too hard: you push the branch to the ceph-ci.git repository, you wait for the packages to be built, and then you schedule the right suites against it. So again, here's a command I ran from my history; it's basically the same as what I showed you before, but I added a few more things.
B: First of all, I made the subset denominator very, very large, because this suite produces in the neighborhood of 450 tests now, and I guess I was up to numerator nine. And I said, okay, I actually don't care about jobs that are testing the dashboard or cephadm, because I'm making the kind of change that is exceedingly unlikely to damage those. The -p is setting a priority; this is a weakness of our existing test-scheduling infrastructure.
B: The priority just orders things in beanstalk, strictly. So I'm saying: hey, this is an important thing, so put me ahead of every single test that has a priority over 80.
B: I think (fingers crossed) that's going to be improved in the future, so that it's less of a race to the lowest numbers. And this --force-priority flag is there because we have some checks built in that ask "are you sure?", and I was sure. And then you wait for tests to complete.
B: So I just wanted to go poke at a couple of websites to show you what I was referencing earlier, and then that's the end of my formal stuff.
B: Cool, all right. So this is a pull request that one of our developers put up, and it has a few GitHub labels saying: hey, this applies to core, basically meaning RADOS, and it needs QA.
B: It's a bug fix, and then later Sage came along and put it in one of his testing branches, saying: this will be included in my testing branch. And we have these checks that we run on every single pull request, so we said: the docs are fine, the commits are signed off correctly.
B: We didn't change certain modules, we ran some API tests, and, in particular, make check passed, and we can look at the details, which takes us to our upstream Jenkins server. It's, you know, a Jenkins server, which you may or may not be familiar with. If there's a problem, the way you find out what's wrong is to go look at the console output; this might look familiar to you, it's a compile and build happening.
B: If we get farther down, we'll start seeing: hey, look, tests are running with... wow, I'm totally drawing a blank on what we run our tests with, but I guess it's just CMake and make: a list of a bunch of the tests we have in the make check system, and that they all passed. If one of them didn't pass, it would say so here.
B: Then, once that's ready (in this case it got pulled into an integration branch, but if you wanted to build your own branch, you would push it to the ceph-ci.git that we talked about), there's this thing called Shaman, which holds all our builds, and so we go look at the builds and ask:
B: when is my testing done? So Sage just pushed a couple more branches for testing, but he also pushed some last night that I happen to have seen, which are available now, and you'll notice that there's a bunch of different builds for each branch. That's because we have a few different options we build against, which are named here. So, notcmalloc: we normally use the tcmalloc memory allocator because it provides us some useful functionality.
B: We also build on architectures in addition to x86 to validate those, and apparently it failed on one of them. Oh, and the Crimson project, which you may have heard about, also has its own builds to actually test that stuff. But in this case we're saying: hey, the normal builds we care about, for x86, for running in the test suite, succeeded.
B: And so we can run tests, because we have all the packages that we need. Then Sage ran a command very much like the teuthology-suite ones I've shown you, to schedule tests, and when we want to see those results, we can go to pulpito.ceph.com, where we have this long list of things. And oh, that branch pushed last night failed on a whole bunch of the different suites: this one upgrades from the Octopus branch to current master, this one is testing the dashboard.
B: This one is testing cephadm. Where's the... I guess it was scheduled before that, yeah, whatever, anyway. So we can go in, and this lists all the tests that were run, and we can go look at a particular one, and it tells us when they're in progress and how many have passed and failed, and it's color-coded: green means all passed so far. But this particular upgrade test did not do very well.
B: But that's sort of the basics of running teuthology jobs against Ceph and dealing with the results. So that's all I had. Any questions?
B: So, Josh, correct me if I'm wrong, but for some reason (I don't remember the details) someone wanted to make it so that, instead of adding a combination, we just randomly pick one of them on every single run. So the percent sign means: pick one of these, and only one of these.
A: The percent is the same as having no percent there: it means it'll choose one of the yaml fragments in that folder. The dollar sign, if that's included in the folder name (we use that mainly just for the distros), means randomly choose one of those yaml files and treat that folder as if it only contained that one file.
A: So it's kind of a way to avoid a combinatorial explosion but still get good coverage for that kind of option, in this case which version of the distro you're running. And thanks for the link in the chat there, Sunil: there's a README in the qa directory that explains the suite structure and what all these different kinds of files do.
G: Do we have to instrument the Ceph source code in order to do that?
A: Yeah, it's not a tremendous deal to do an extra build; it's more whether the results would be helpful. At the time when we had it enabled, we didn't particularly find the results that useful: there were kind of clear areas that we were covering better than others, at that time at least, and so it wasn't a very good guide for us to determine what we needed to test next, since we already knew there were areas where we could improve coverage.
A: There are probably better tools for viewing that information now, too; at the time it was a very basic HTML website that these kinds of tools produced.
B: Yeah, I mean, we run all kinds of different ones, and, actually, I guess it depends on which ones. So we have a bunch of things where we run a load generator.
B: Usually not COSBench, because we have our own specific things, but COSBench, I think, is included in the RGW tests. We'll work in some kind of benchmark or load generator as one of those context managers, and then we do specific things to the cluster, like add nodes, or add OSDs, or take them away, or run thrashers against it at the same time. That's what that mechanism is for.
F: Someone did some work to make it so you can run CBT workloads through teuthology. I don't know exactly how you can mix and match that with other stuff, but at the very least you can run COSBench that way, if you really want.
F: Oh yeah, sure, sorry. So CBT: I guess the best way to describe it is that it's sort of like a cross between vstart and teuthology, not as good at what either of them is designed for, but kind of a light version of a mix of both. It can run benchmarks and then mix and match different benchmark options and iterate through tests that way. I don't think we use most of that with the wrapper in teuthology.
F: Yeah, yep. So there's some overlap between what teuthology does and what CBT does, but CBT is very much simpler than teuthology. So in this case, for teuthology, you basically use teuthology to specify all the different parameters that you want, and it passes those individually along to CBT, which then passes them along to whatever benchmark CBT supports, like COSBench, hsbench, fio, rados bench, that kind of thing.
I: So, Greg, since we have a little bit of time left, I would suggest, unless we're planning to do this in a different teuthology tutorial, maybe looking at a particular job's yaml and just briefly going over that.
B: Let's see. All right, so here's a smoke suite run from the past, and now that's done.
B: We can look at the summary and say: hey, stuff passed. But in particular let's look at one specific job.
B: Let's see. So we had the original configuration, and the archive path is the folder we're in; it says: here's where I want you to put the files when the job's done. We're running against the master branch. The description is the names of all the specific yaml files within the folders.
B: logrotate is literally logrotate, and we're setting file-size thresholds for when that happens. The logrotate is going to be a task that runs... or no, sorry, I think it runs inside of the ceph task, but there's a configuration for it. We're running on smithi, and the suite had a name.
B: This particular job is running against RHEL 8.3, and this overrides thing is a way of gluing together the yaml fragments that had different configuration options: if we have a sort of base configuration and then a specific test we want to run with a different one, we can put overrides in it. So we're saying, hey.
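An overrides stanza of the kind being described might look like this; the option names and ignore pattern are illustrative, not taken from the job on screen:

```yaml
# Hypothetical overrides fragment: teuthology merges this into the
# matching task's configuration instead of replacing it wholesale.
overrides:
  ceph:
    conf:
      mon:
        debug mon: 20
    log-ignorelist:
      - \(OSD_DOWN\)   # expected while a thrasher is killing OSDs
```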
B: Actually, I have no idea what the admin-socket one is for, but the ceph config here is literally just the Ceph configuration.
B: These are the log options. When a test runs, one of the things we do to validate that it succeeded is scan through and look for error messages in the central log that the monitors generate, but we ignore certain kinds of errors in certain kinds of tests. Like when you're running thrashing, you don't care about errors that say OSDs are down, because, yes, that's the whole point: we killed a bunch of OSDs. Some of the tests maybe still use ceph-deploy or ceph-ansible.
B: So those have configurations. Let's see, hang on, I've got to go look at this. Oh yeah, I guess this one was in fact a ceph-deploy test.
B: Anyway, we're running thrashosds and work units, apparently.
B: No, there we go, that's the actual test run. But this is all configuration for if we happen to run those. Sorry. And the owner in this case is a "scheduled_" one.
B: It could be scheduled_greg@wherever; the "scheduled" indicates that you ran teuthology-suite, and then it's just the username and the machine that it ran from. This nightly was run at priority 71. For some reason it's on three machines, two of which run servers; I wonder which runs the client.
B: It's running the smoke suite against the master branch. We cloned the suite (that's not that interesting), but here, okay: when it was started, we managed to lock three machines for it, and then finally here we have the task list. So we ran the install task, which installs a bunch of packages, and then the ceph task.
B: We ran the ceph-fuse task, which, since it doesn't have a specific config, would have mounted it on every client role, and then we ran blogbench.sh. It's a work unit, but it's invoking a load generator and benchmarker called blogbench, and then that was it in terms of the tasks.
B: So that's the configuration that it would have used; there's a version with even more internal stuff in it in the actual config.yaml file. And then, if you go look at the teuthology.log, we'll see that first of all it reproduces the configuration, and then it starts telling you things like:
B: we're checking the packages, we're checking our locks, and here's that orchestra Python module we talked about, doing a bunch of installation stuff.
B: I don't know what I actually want to look for here, but then we run more of the tasks, and eventually you get to the workunit task, and so it pulls down the work units and makes a directory and so on.
A: Yeah, I guess, and you can see in the log every time a task starts and stops.
A: And then during teardown it'll say "unwinding" for each task, as we run the cleanup phase of it.
B: All right, yeah.
B: Yeah, "Traceback" is usually the one I search for, but it depends. Oh, and then at the end, this is the summary, which is also printed in the summary.yaml file; if you're trying to debug, check what specific thing it says failed and then go look for that.
I: And just to add on about the yamls, the configuration that we were going over: more than half of it is actually created by teuthology for that particular job, and the remainder is what was actually specified by the collection of yaml fragments you can see in the description here that Greg is highlighting.
B: Is this legible? All right, so the smoke suite has just one real folder; it's the basic suite. We have this cluster configuration, so we're running on three nodes, with the openstack configuration that I mentioned before, and we are always running against BlueStore with the bitmap allocator.
B: We pick one of the distros we use at random, and there we go, there's the next one: here we install it, and then we pick one of the tests, of which we have several.
B: Each of those just consists of a list of tasks: run ceph (this is actually outdated now, because this would have been a configuration for FileStore, I think), run ceph-fuse, and run the blogbench work unit. That was all that was required to specify that particular suite, and then all the rest of it, when we generated this job, was predefined from the existing defaults.
I: I just want to add to this that we have sort of common conventions for how we lay out the suites; as you can see, there's a tasks directory, but you don't actually have to put all of the tasks in that directory.
B: Yeah, we've got, for instance, a bunch of extra packages that a lot of the file system stuff needs; they'll be asked to get installed, yep.
I: Right, another quick note, sorry, just speaking up. One of the reasons why it's called begin.yaml, something I don't think we mentioned, is that the yamls are actually loaded in alphabetical order. So the order of the tasks actually matters, and one of the ways you can control the order in which the tests are put into the yaml array is by that alphabetical sorting of the yamls; so "begin" would, of course, be first, because it starts with the letter b.
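The alphabetical loading order is just string sorting, which is also why numeric prefixes like 0-install sort ahead of everything else:

```python
# Fragments in a directory are applied in lexicographic filename order,
# so prefixes control when a fragment's tasks run within the job.
fragments = ["workunit.yaml", "begin.yaml", "0-install.yaml"]
print(sorted(fragments))  # ['0-install.yaml', 'begin.yaml', 'workunit.yaml']
```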
B: Yeah, a bunch of them (I don't remember which offhand) just numerically prefix the folders, so yeah.
I: Like the smoke suite that we were just looking at: the install test began with a zero, and that was to control having it be at the beginning of the test array, rather than after the actual work unit.
B: Oh yeah, this "0-" in the "0-install" one. A bunch of the bigger ones, though, will have tasks and work units and stuff, and every folder name will be prefixed with "0-" whatever, "1-" whatever.
D: Yes, sorry, good question: when you look at a test run, how can you tell if it ran with cephadm?
D: Awesome, thanks. And if it's okay, I have a quick question: when you want to have an integration branch, if you want to test multiple PRs, how would you do that?
I: There are two tools: there's build-integration-branch, in the Ceph scripts, and there's ptl-tool, I think, for building integration branches. I'm the only one using ptl-tool; build-integration-branch is the one more commonly used.
A: Any more questions? All right, well, thanks for joining, folks, and we'll see you next week, when we'll explain how to analyze the results of a suite run.