From YouTube: Teuthology Training: Developing Tests
Description
* Notes: https://pad.ceph.com/p/testing-ceph-2021/timeslider#1647
* Ceph Developer Guide: https://docs.ceph.com/en/latest/dev/developer_guide/testing_integration_tests/tests-integration-testing-teuthology-intro/
* Ceph Teuthology Documentation: https://docs.ceph.com/projects/teuthology/en/latest/
* Ceph Teuthology project wiki page: https://tracker.ceph.com/projects/ceph/wiki/Teuthology
A: All right, let's see, we've got enough people to start. So welcome to the testing-ceph-2021 training session. Today we're going to talk about how to develop tests, and what different kinds of tests we have in Ceph. Like the other sessions, feel free to ask questions at any point; it's even better if these are interactive rather than just a presentation.
A: So if you have any questions about anything we've already covered, or anything we're talking about today, feel free to chime in.
A: I thought we'd start with a little discussion about what different kinds of tests exist in Ceph, and why and when you'd want to use one or another. We've got a few different ways that you can write integration tests for Ceph, as opposed to simpler tests or unit tests. I won't be going over unit tests in detail, but generally, if you can write unit tests, that's quite helpful, since they are of course much faster to execute and much faster to get feedback from. What I'll be going over in more detail is the different ways that you can write integration tests.
A: One of the most common ones is writing some kind of script that runs against the Ceph cluster, and a common form is what we call a workunit. These live in the Ceph repository under the qa/workunits directory. The idea is that if the script exits zero, that's a success, and if it exits non-zero, that's a failure. You can run these scripts against your local vstart environment, or you can run them within teuthology.
A: That means it's easy to run the right version of a script in an upgrade scenario, or just to make sure you're using a consistent version of the script, and not getting something from master when you meant to run a nautilus test, for example.
A: Sometimes these scripts aren't strictly necessary to run the tests, but sometimes we use them to add some extra logic, for example as a wrapper. We have a number of tests in Ceph that use the gtest framework and are effectively integration tests, using libraries like librados or librbd, and these end up in a bunch of binaries whose names start with ceph_test_ followed by the test name.
A: As you can see, this is a typical kind of workunit. It's just a bash script which uses set -e, so that the script exits with an error if any of the commands fail, and set -x, so that you can see what is actually being executed by the script. That way, if something does fail, you can tell what led up to that point and what may have caused it.
A: When workunits are run, they're always run with a fixed set of environment variables representing the Ceph version, the test directory that teuthology has set up, and a few other options that are commonly used.
A: For example, the test_crash script here uses the TESTDIR environment variable that we saw earlier, and relies on teuthology to set it. So this is an example of one of the workunits where it doesn't really make sense to run it against vstart, since it relies on the teuthology setup having daemons running in a certain spot.
A: For the test_crash script that we just looked at, this is the place where it lives in the rados suite, in the singleton subsuite. The singleton section collects the kinds of tests that don't need to be run against a wide variety of configurations, so we have everything the test needs in one yaml file, with no extra combinations of other files at all. That makes it a nice, easy example to look at, since it's all self-contained in one place.
A: You can find more information about any of these kinds of tasks by looking at their documentation in the Python docstring where they're implemented. So if we go back to the workunit task here...
A: By default, all these tests are run with some sort of timeout, in hours; it looks like it defaults to three hours. That's just so that if something gets stuck, it doesn't hang there forever, and we exit and clean up eventually. So workunits are typically quite useful, because they can be run in both the vstart and teuthology environments, and that gives you a lot of flexibility in terms of how you run a test against your Ceph cluster.
A: The things they're not as good at are things that require direct interaction with the daemons of the cluster, like changing their running states or otherwise manipulating the cluster: if you're trying to mark OSDs out or in, remove them from the cluster, or add new daemons. Those are pieces that are handled within teuthology itself. So for those kinds of situations, you usually want something that hooks into an existing teuthology task, or perhaps to write a new task to handle those kinds of test cases.
A: We pass in the sha1 as an environment variable, so that we can use whatever sha1 we're running against in these commands, and we have a little script here that polls the orchestrator's upgrade status. When it's finished upgrading, we go through and look at the processes, just so that we have a record in the teuthology log of what the different daemons are running and what the versions are.
A: All the standard output and standard error from these commands goes into the teuthology log, and then we go ahead and check that exactly one version is present and that it is the expected sha1.
A: If you're not using cephadm, there's an exec task which executes arbitrary commands on a given host, and it follows essentially the same format as this.
B: One question about that: is there any place where we can find templates? For example, in this case it's just executing several commands using the cephadm shell, but this is a specific test. So if we want to see different examples, is there a folder with templates or examples of the tests?
A: There's not a specific template folder, but what I tend to do is grep through the suites directory for the particular thing I'm interested in. If you look for all the tests that run commands in a cephadm shell, or that use the exec task, you'll find a number of places where we're doing that.
B: Okay, and for the list of the different tasks that we can use, where is that documented?
A: Yeah, so you can look through the files in those directories to see what is available.
A: Yeah, and as you look through the test suites you'll see that there are different formats and different styles used in different places, so there's no one right way to do things.
A: Another form of testing is the cephfs_test_runner format. Despite its name, which reflects that it was originally created for running CephFS tests, it runs many manager and dashboard tests these days as well.
A: This is quite nice, since it gives you something that's much more similar to a plain unit-test-type environment, but for integration tests.
A: You'll see this in the suites, where it's represented by the cephfs_test_runner task. You configure it to run a given module within the tasks directory, and that module will have a bunch of these tests in it. Let's look at the format of those tests.
A: This one, in fact, only has a single test, but these modules all inherit from some kind of test base class. In this case it looks like we have a CephFS test base class, and then this adds some extra tests on top.
A: This module is quite nice because it does allow you to interact with the cluster more, through these Python tests, but it doesn't support running other tasks beyond what's in your Python test. So it wouldn't be so simple to add, for example, OSD thrashing, which is a separate task in teuthology, to this interface here.
A: Another kind of test is the upgrade tests. These tend to be formed mostly out of different ways of configuring Ceph and running workloads against it, via relatively complicated structures in the suites themselves.
A: Typically these will be doing things like upgrading the cluster in parallel while running some testing workload, and you'll often see suite names like <release>-x, which means the suite upgrades from that release to whatever version you specify. So, for example, if you run the octopus-x suite against master, it would upgrade from octopus to master; you could run it against pacific, and it would upgrade from octopus to pacific.
A: Just looking in a little more depth at one of these: we tend to first just install the cluster, specifying in the first part what roles the cluster has. This one means there are five different nodes in this test. One of them has only a client; one of them has only a monitor; the rest have monitors, managers, and OSDs, just a mix of different daemons. In the past we had to separate these out a bit when we were doing upgrades in a packaged environment, so that we could easily upgrade one host but not another. For example, we needed the client to be on a separate host so that we could run the older version of the client against the upgraded cluster. With cephadm, that is no longer an issue: all the daemons are running in containers, so the version on the host doesn't matter.
A: Then there's a basic install of the cluster, just setting things up before we upgrade: installing the packages, in this case, and running the cluster with some extra configuration.
A: A lot of these upgrade suites also have the print task included, which just adds a line to the log at that point, so you can more easily see which section of the test you're in when you're examining the log.
A: So here we're running the install.upgrade task, which upgrades the packages on each of the nodes running mon.a, mon.b, and mon.c, but it doesn't do anything with the daemons yet. Not until we go ahead and use the ceph.restart task do the daemons restart with the new versions of the binaries. We tell it to restart a whole bunch of these daemons, but not all of them: you'll notice that we're missing the third manager, we're missing the third monitor, and we're missing a bunch of the OSDs.
A: Then we run a whole bunch of workloads while thrashing the cluster. We have this parallel task, which in this case contains only one entry, but this is a common form that we use in upgrade suites, where we will often run more than one thing in parallel.
A: That's why I think this is structured this way. You'll see this reference is an arbitrary label that you would put somewhere else in your yaml files. In this case it happens to be in the same yaml file, but we could be adding additional tasks to the stress-tasks list in later files as well.
A: We start out with the thrashosds task, which runs in the background and makes a whole bunch of changes to the cluster, like marking OSDs up and down.
A: In this case we're disabling the ceph-objectstore-tool tests, so we're not running those, but otherwise we would be doing things to the OSDs like exporting PGs from one and importing them into another. It might also be increasing and decreasing the number of PGs in the cluster, or changing the fullness settings so that the cluster thinks it's full when it's 50% or 10% full, that kind of thing, to try to just stress the cluster in general.
A: If you look at the next section, here we are adding two more tasks to this stress-tasks label. That means these are all going to be running against the cluster while it's in this mixed, partially upgraded state, while the OSD thrashing is going on.
A: Looking at the directories, if you list things in lexicographical order you'll see five, six, and seven. After all those different kinds of tests are done while the cluster is in mixed mode, we go ahead and finish upgrading the rest of the daemons.
A: If we go to the file referencing the release-specific yaml under qa/releases, we'll see it is setting up the cluster, now that it has been fully upgraded, to require the corresponding OSD release and client versions.
A: This is a bit of a leftover from earlier versions, when we didn't have messenger v2 enabled by default: after the upgrade is complete, we can finally enable messenger v2 and turn off the warning about it.
A: And then finally, after the cluster is all upgraded and we've completed all the changes we want to make to it, we run a few more correctness tests at the end to verify that it's still functioning.
A: So that's one example of an upgrade suite. There are a few other formats. This one is also called a stress-split upgrade suite, which is one of the common ones, where the cluster is deliberately left in a partially upgraded state for a long time while we run a bunch of workloads. The parallel form of the upgrade suites does this less deterministically: it upgrades different daemons and runs workloads at exactly the same time, without intentionally leaving the cluster stuck in a partially upgraded state.
A: It's focused more on testing things like upgrading daemons in different orders, to make sure that doesn't have an impact on correctness.
A: So that's a brief look at the upgrade suites. Any questions on that, or anything else?
C: Josh, in the previous directory I saw that there were some yaml files numbered zero and one, and one which was not numbered. So the numbered ones are executed sequentially, and the non-numbered ones are picked up randomly, is that so?
A: They're all picked up in lexicographic order. In this case, I think those ones are doing things like adding some configuration parameters that don't need to be included at any particular time.
A: That's why they're not in the numbered list. These are all siblings, so their relative order doesn't matter, but they're setting a general configuration that applies to the entire test.
A: Yeah, the percent sign line is the default behavior, which means that we're going to choose a combination of things from everything in this directory.
A: This is a simplified directory structure of a suite. With the percent sign here, it means it's going to form the full matrix, using one yaml file from each subdirectory.
A: It depends on what you're testing exactly, like what your Python script is doing.
A: It could be any kind of script that runs against the cluster. It could be Python, it could be Go, it could even be C++. I think we do actually run make in those directories too, so if you have something that needs some compilation, that can be done as well.
A: I see there's a question from Matt in the chat about what makes cephfs_test_runner specific to CephFS at this point, and I don't think there's anything that's very specific to CephFS at this point. Patrick, correct me if I'm wrong, but I think it's fairly general, since it's running manager and dashboard tests.
F: Yeah, I think the name is just there because when John Spray was working on this, it was geared towards CephFS, but I believe it's been generalized. It should probably just be renamed at this point.
A: Okay, well, I want to talk about a couple other kinds of tests briefly. We also have the standalone test framework.
A: This was originally used in make check years ago, but we took it out of there, since the tests that use it tend to run for quite a long time. It's a kind of bash framework with a bunch of helper methods for setting up clusters in different ways and running some more invasive testing on them: things where you might want to interact with a particular OSD or a particular PG in a certain environment.
A: This is similar to the workunits in the sense that it can be easily run from the vstart environment or within teuthology, but it sets up these clusters in a very unique way that's not used by anything else. So at this point I wouldn't recommend writing your tests in this format.
A: You might want, for example, to really stress PG splitting behavior. For that we have some configurations where you do a whole bunch of PG splitting operations all the time, and not so many other operations like taking OSDs in and out. Or you might want to run against an erasure-coded pool and really stress recovery when the pool drops below its minimum size.
A: So you can change the configuration in terms of, say, how many OSDs you want to guarantee are still in the cluster, that sort of thing.
A: As I was mentioning earlier, for any of these tasks you can find the specific pieces either in the Ceph repository, in the qa/tasks directory, or, for the more common pieces that are more general, in the teuthology tasks directory in the teuthology repository.
A: For the thrashosds task, you can choose things like how many OSDs to make sure are in the cluster, and how much delay there is between doing different things to the cluster.
A: An interesting one is the powercycle option, for example, which ends up using IPMI on the bare-metal hosts to directly, or rather virtually, pull the plug on the system and start it back up again.
A: We have a specific suite that does this. We run it on a less frequent basis, since in the past, when we've run it too often, it has tended to actually kill the hardware.
G: I had a quick question, actually: these thrashing tests, can they be done at the client level as well, for RGW or RBD, for example?
A: Not that I'm aware of. We don't have a particular thrashing scenario that I know of that has been crafted for RGW daemons, the way they assemble things and so on. They don't currently have concepts like that, but as time goes on, we'll probably want to look more at that idea.
A: The messenger failure injection and delay injection are pretty generic, though; I think those could certainly be enabled without much work, just by adding those configurations.
A: As for the style of a new task: you'll see that the original style of tasks was that they were defined using a task method, or other methods that were context managers, within a module. There's also a newer style where you can inherit from a Task class and override some methods representing the setup, teardown, and activity stages. In the old style, the setup is everything before you get to the yield statement; once you reach the yield, the tasks after this one get run, and then, once they're done and teuthology starts cleaning up, it unwinds the context managers for each task.
A: There's an apply_overrides method, which is a common kind of structure that we use in a lot of tasks, where you can specify overrides at the top level of your configuration. So from pretty much any yaml file you can add some overrides that apply to a lot of tasks' configuration. For example, we use this a lot to apply different Ceph configuration settings to the ceph task, or to change the version that's being installed for the install task, but in general it allows you to very easily reuse the same kind of functionality.
A: Tasks usually access the machines via an interface in teuthology called the orchestra library, which is kind of a wrapper around a Python SSH library, so you can run all kinds of commands over SSH.
A: It presents each machine as a remote, so here we're looping through each of the machines, and for each one we've got some console methods implemented to connect to the console and get the log from it. Whenever you run something on a remote, it returns a process object that you can save and later kill, or wait for it to exit; if you don't wait for it, it runs asynchronously.
A: Note that these all have the same two parameters: the context and the config. The context is kind of a global context object that's passed through each task as it executes, and tasks can store their own extra state there. When the ceph or cephadm task runs, it adds its information about the running daemons to this context object, including daemons from different clusters, since you can set up more than one cluster.
A: Later tasks can then refer to that and understand what the configuration for a given daemon or ceph.conf is, or they can go and inspect a particular cluster and operate on it, or on the other cluster, if they want to. You can use this context object to store any state that you want to have accessible to later tasks as well.
A: I think the rgw task is a good example of that: it stashes away things like the ports and addresses of the RGWs that it's setting up, so the later tasks that run tests against RGW know where to find it. The configuration that each task gets is its own section of the yaml: everything below the task entry goes into that config variable. So in this case, this would be a dictionary with the key client.0, whose value is a list of two commands.
A: Then we loop through the roles in our configuration and the list of commands to execute on them. This is a pretty common pattern that you'll see a lot, where we look at the ctx.cluster object, which represents the nodes in the cluster and the daemons that are running on them, and is generated from the roles in your test configuration.
A: That way, if we want to execute these commands just on a particular role, say mon.a or osd.0, we can filter the cluster down to just mon.a, for example.
A: So that's a brief look at the structure of tasks within teuthology. There are a lot of different ones here; I think there's a brief guide in the teuthology documentation.
A: I guess one other thing worth noting is that we saw the ceph.restart task used earlier in the upgrade suites. When you see a dot in the task name like that, it means it's a subtask: another method within the task's module.
A: Now, we're coming up on the end of the time pretty soon, so I want to leave some time for questions. Any more questions right now? Okay, it looks like we probably don't have a lot of time to cover the rest of this, so we'll continue this topic a little bit next week, looking at how to actually run teuthology while you're developing a test, and perhaps going into a little more depth about what kind of code you need to write, and what options you have, when you are writing a new test in teuthology, and a task in particular.