Welcome to pytest-executable documentation!¶
pytest-executable is a pytest plugin for simplifying the black-box testing of an executable, be it written in Python or not. It helps avoid writing the boilerplate test code to:
define the settings of a test case in a yaml file,
spawn a subprocess for running an executable,
reorder the tests properly either for a single test case or across several test cases,
handle the outputs and references directory trees,
provide convenient fixtures to customize the checking of the outcome of an executable.
It integrates naturally with standard test scripts written for pytest.
This plugin was originally intended for testing executables that create scientific data, but it may hopefully be helpful for other kinds of executables. This project is still young but is already used in a professional environment.
Contributing¶
A contributing guide will soon be available (just a matter of free time).
Please file an issue on the GitHub issue tracker for any bug report, feature request or question.
Authors¶
Antoine Dechaume - Project creator and maintainer
Copyright and License¶
Copyright 2020, CS GROUP
pytest-executable is free and open source software, distributed under the Apache License 2.0. See the LICENSE.txt file for more information, or the quick summary of this license on the tl;drLegal website.
Installation¶
Install using pip:
pip install pytest-executable
Install using conda:
conda install pytest-executable -c conda-forge
Command line interface¶
The pytest command line shall be executed from the directory that contains the inputs root directory.
Plugin options¶
--exe-runner PATH¶
use the shell script at PATH to run the executable.
This shell script may contain placeholders, such as {{output_path}} or others defined in the Runner section of a test-settings.yaml. A final runner shell script, with the placeholders replaced, is written in the output directory of a test case ({{output_path}} is set to this path). This final script is then executed before any other test functions of a test case. See Runner fixture for further information. If this option is not defined, the runner shell script is not executed, but all the other test functions are.
A typical runner shell script for running the executable with MPI could be:
#! /usr/bin/env bash
env=/path/to/env/settings
exe=/path/to/executable
source $env
mpirun -np {{nproc}} \
    $exe \
    --options \
    1> executable.stdout \
    2> executable.stderr
--exe-output-root PATH¶
use PATH as the root for the output directory tree, default: tests-output
--exe-overwrite-output¶
overwrite existing files in the output directories
--exe-clean-output¶
clean the output directories before executing the tests
--exe-regression-root PATH¶
use PATH as the root directory with the references for the regression testing, if omitted then the tests using the regression_file_path fixture will be skipped
--exe-default-settings PATH¶
use PATH as the yaml file with the default test settings instead of the built-in ones
--exe-test-module PATH¶
use PATH as the default test module instead of the built-in one
--exe-report-generator PATH¶
use PATH as the script to generate the test report
See generate_report.py in the report-conf directory for an example of such a script.
Note
The report generator script may require installing additional dependencies, such as sphinx, which are not installed by the pytest-executable plugin.
Standard pytest options¶
You can get all the standard command line options of pytest by executing pytest -h. In particular, to run only some of the test cases in the inputs tree, or to execute only some of the test functions, you may use one of the following ways:
Use multiple path patterns¶
Instead of providing the path to the root of the inputs tree, you may provide the path to one or more of its sub-directories, for instance:
pytest --exe-runner <path/to/runner> <path/to/tests/inputs/sub-directory1> <path/to/tests/inputs/sub/sub/sub-directory2>
You may also use shell patterns (with * and ? characters) in the paths like:
pytest --exe-runner <path/to/runner> <path/to/tests/inputs/*/sub-directory?>
Use marks¶
A test case can be assigned one or more marks in the test-settings.yaml file, see Marks section. Use the -m option to execute only the test cases that match a given mark expression. A mark expression is a logical expression that combines marks and yields a truth value. For example, to run only the tests that have the mark1 mark but not the mark2 mark, use -m "mark1 and not mark2". The logical operator or can be used as well.
Use sub-string expression¶
Like the marks, any part (sub-string) of the name of a test case or of a test function can be used to filter what will be executed. For instance, to only execute the tests that have the string transition anywhere in their name, use -k "transition". Or, to execute only the functions that have runner in their names, use -k "runner". Logical expressions can be used to combine several sub-strings as well.
Process last failed tests only¶
To only execute the tests that previously failed, use --last-failed.
Show the markers¶
Use --markers to show the available markers without executing the tests.
Show the tests to be executed¶
Use --collect-only to show the test cases and the test events (functions) selected without executing them. You may combine this option with other options, like the ones above, to filter the test cases.
Overview¶
Directory trees¶
The pytest-executable plugin deals with multiple directory trees:
the inputs
the outputs
the regression references
The inputs tree contains the files required to run an executable and to check its outcomes for different settings. It is composed of test cases as directories at the leaves of the tree. To create a test case, see Add a test case.
All the directory trees have the same hierarchy; this convention allows pytest-executable to work out what to test and what to check. The outputs tree is automatically created by pytest-executable; inside it, a test case directory typically contains:
symbolic links to the input files of the executable for the corresponding test case in the inputs tree
a runner shell script to execute the executable
the files produced by the execution of the executable
possibly, the files produced by the additional test modules
A regression references tree is generally created from an existing outputs tree. In a regression references tree, a test case directory shall contain all the result files required for performing the comparisons for the regression testing. There can be more than one regression references tree for storing different sets of references, for instance for comparing the results against more than one version of the executable.
Execution order¶
The pytest-executable plugin will reorder the execution such that the pytest tests are executed in the following order:
in a test case, the tests defined in the default test module (see --exe-test-module),
any other tests defined in a test case directory, with pytest natural order,
any other tests defined in the parent directories of a test case.
The purpose of this order is to make sure that the runner shell script and the other default tests are executed before the tests in other modules are performed on the outcome of the executable. It also allows creating test modules in the parent directory of several test cases to gather their outcomes.
How to use¶
Run the executable only¶
pytest <path/to/tests/inputs> --exe-runner <path/to/runner> -k runner
This command will execute the executable for all the test cases that are found in the input tree under path/to/tests/inputs. A test case is identified by a directory that contains a test-settings.yaml file. For each of the test cases found, pytest-executable will create an output directory with the same directory hierarchy and run the cases in that output directory. By default, the root directory of the output tree is tests-output; this can be changed with the option --exe-output-root. Finally, the -k runner option instructs pytest to only execute the runner shell script and nothing more, see Standard pytest options for more information on doing only some of the processing.
For instance, if the tests input tree contains:
path/to/tests/inputs
├── case-1
│ ├── input
│ └── test-settings.yaml
└── case-2
├── input
└── test-settings.yaml
Then the tests output tree is:
tests-output
├── case-1
│ ├── input -> path/to/tests/inputs/case-1/input
│ ├── output
│ ├── executable.stderr
│ ├── executable.stdout
│ ├── runner.sh
│ ├── runner.sh.stderr
│ └── runner.sh.stdout
└── case-2
├── input -> path/to/tests/inputs/case-2/input
├── output
├── executable.stderr
├── executable.stdout
├── runner.sh
├── runner.sh.stderr
└── runner.sh.stdout
For a given test case, for instance tests-output/case-1, the output directory contains:
- output
the output file produced by the execution of the executable; in practice there can be any number of output files and directories produced.
- input
a symbolic link to the file in the test input directory; in practice there can be any number of input files.
- executable.stderr
contains the error messages from the executable execution
- executable.stdout
contains the log messages from the executable execution
- runner.sh
a copy of the runner shell script defined with --exe-runner, possibly modified by pytest-executable to replace the placeholders. Executing this script directly from a console shall produce the same results as when it is executed by pytest-executable. This script is intended to be as independent of the execution context as possible, so that it can be executed independently of pytest-executable in a reproducible way, i.e. it is self-contained and does not depend on the shell context.
- runner.sh.stderr
contains the error messages from the runner shell script execution
- runner.sh.stdout
contains the log messages from the runner shell script execution
If you need to manually run the executable for a test case, for debugging purposes for instance, just go to its output directory, for instance cd tests-output/case-1, and execute the runner shell script.
Check regressions without running the executable¶
pytest <path/to/tests/inputs> --exe-regression-root <path/to/tests/references> --exe-overwrite-output
We assume that the executable results have already been produced for the test cases considered. This is not enough though, because the output directory already exists and pytest-executable will by default prevent the user from silently modifying any existing test output directories. In that case, the option --exe-overwrite-output shall be used. The above command line will compare the results in the default output tree with the references; if the existing executable results are in a different directory then you need to provide its path with --exe-output-root.
The option --exe-regression-root points to the root directory of the regression references tree. This tree shall have the same hierarchy as the output tree but it only contains the result files that are used for doing the regression checks.
Run the executable and do default regression checks¶
pytest <path/to/tests/inputs> --exe-runner <path/to/runner> --exe-regression-root <path/to/tests/references>
Note
Currently this can only be used when the executable execution is done on the same machine as the one that executes the regression checks, i.e. this will not work when the executable is executed on another machine.
Finally, checks are done on the executable log files to verify that the file executable.stdout exists and is not empty, and that the file executable.stderr exists and is empty.
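As an illustration only, such log file checks could be written as pytest functions using the Output path fixture described in Fixtures; this sketch is an assumption about their form, not the plugin's built-in implementation:
def test_stdout_not_empty(output_path):
    # executable.stdout shall exist and contain at least some log output.
    stdout = output_path / "executable.stdout"
    assert stdout.is_file()
    assert stdout.stat().st_size > 0


def test_stderr_empty(output_path):
    # executable.stderr shall exist and be empty, i.e. no error was reported.
    stderr = output_path / "executable.stderr"
    assert stderr.is_file()
    assert stderr.stat().st_size == 0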
Add a test case¶
A test case is composed of an input directory with:
the input files required by the runner shell script,
a test-settings.yaml file with the pytest-executable settings,
any optional pytest python modules for performing additional tests.
Warning
The input directory of a test case shall not contain any of the files created by the execution of the executable or of the additional python modules, otherwise they may badly interfere with the executions done by pytest-executable. In other words: do not run anything in the input directory of a test case, this directory shall only contain input data.
The test-settings.yaml file is used by pytest-executable for several things. When this file is found, pytest-executable will:
create the output directory of the test case and, if needed, its parents,
execute the tests defined in the default test module,
execute the tests defined in the additional test modules,
execute the tests defined in the parent directories.
The parents of an output directory are created such that the path from the directory where pytest is executed to the input directory of the test case is the same but for the first parent. This way, the directory hierarchy below the first parent is the same in both the inputs and the outputs trees.
If test-settings.yaml is empty, then the default settings are used. If --exe-default-settings is not set, the default settings are the builtin ones:
# Copyright 2020, CS Systemes d'Information, http://www.c-s.fr
#
# This file is part of pytest-executable
#     https://www.github.com/CS-SI/pytest-executable
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# Settings for the runner script
runner: {}

# List of keywords used to mark a test case
marks: []

# File path patterns relative to a test case to identify the regression
# reference files used for the regression assertions.
references: []

# Tolerances used for the assertions when comparing the fields, all default
# tolerances are 0.
tolerances: {}
The following gives a description of the contents of test-settings.yaml.
Note
If other settings not described below exist in test-settings.yaml, they will be ignored by pytest-executable. This means that you can use test-settings.yaml to store settings for other purposes than pytest-executable.
Runner section¶
The purpose of this section is to be able to precisely define how to run the executable for each test case. The runner section contains key-value pairs of settings to be used for replacing placeholders in the runner shell script passed to --exe-runner. For a key to be replaced, the runner shell script shall contain the key between double curly braces.
For instance, if the test-settings.yaml of a test case contains:
runner:
nproc: 10
and the runner shell script passed to --exe-runner contains:
mpirun -np {{nproc}} executable
then this line in the actual runner shell script used to run the test case will be:
mpirun -np 10 executable
The runner section may also contain the timeout key to set the maximum duration of the runner shell script execution. If this duration is reached before the execution is finished, the execution fails, and so will likely the other tests that rely on the outcome of the executable. If timeout is not set then there is no duration limit. The duration can be expressed with one or more numbers, each followed by its unit and separated by a space, for instance:
runner:
timeout: 1h 2m 3s
The available units are:
y, year, years
m, month, months
w, week, weeks
d, day, days
h, hour, hours
min, minute, minutes
s, second, seconds
ms, millis, millisecond, milliseconds
Reference section¶
The reference files are used to check for regressions on the files created by the executable. Those checks can be done by comparing the files with a tolerance, see Tolerances section. The references section shall contain a list of paths to the files to be compared. A path shall be defined relatively to the test case output directory; it may use any shell pattern like **, *, ?, for instance:
references:
- output/file
- '**/*.txt'
Note that pytest-executable does not know how to check for regressions on files; you have to implement the pytest tests yourself. To get the path to the reference files in a test function, use the Regression path fixture.
Tolerances section¶
A tolerance defines how close two pieces of data shall be to be considered equal. It can be used when checking for regressions by comparing files, see Reference section. To set the tolerances for the data named data-name1 and data-name2:
tolerances:
data-name1:
abs: 1.
data-name2:
rel: 0.
abs: 0.
For a given name, if one of the tolerance values is not defined, like rel for data-name1, then its value is set to 0.
Note that pytest-executable does not know how to use a tolerance; you have to implement it yourself in your pytest tests. To get the tolerances in a test function, use the Tolerances fixture.
Marks section¶
A mark is a pytest feature that allows selecting some of the tests to be executed, see Use marks. This is how to add marks to a test case, for instance the slow and big marks:
marks:
- slow
- big
Such a declared mark is applied to all the test functions in the directory of a test case, whether they come from the default test module or from an additional pytest module.
You can also use marks that already exist. In particular, the skip and xfail marks provided by pytest can be used. The skip mark tells pytest to record but not execute the built-in test events of a test case. The xfail mark tells pytest to expect that at least one of the built-in test events will fail.
Marks declaration¶
The marks defined in all test cases shall be declared to pytest in order to be used. This is done in the file pytest.ini that shall be created in the parent folder of the test inputs directory tree, where the pytest command is executed. This file shall have the format:
[pytest]
markers =
slow: one line explanation of what slow means
big: one line explanation of what big means
Add a post-processing¶
This section shows how to add a post-processing that will be run by pytest-executable.
Pytest functions¶
In a test case input directory, create a python module with a name starting with test_. Then, in that module, create pytest functions with names starting with test_. Those functions will be executed, and pytest will catch the assert statements to determine whether the processing done by a function passed or failed. The outcome of a function can also be skipped if for some reason no assertion could be evaluated. If an exception is raised in a function, the function execution is considered as failed.
The functions are executed in a defined order: first by test directory name, then by module name and finally by function name. The sorting is done in alphabetical order. There are 2 exceptions to this behavior:
the test-settings.yaml file is always processed before all other modules in a given directory,
a module in a parent directory is always run after the modules in the children directories, which allows gathering the results from the children directories.
The pytest functions shall take advantage of the fixtures to automatically retrieve data from the execution context, such as the information stored in the test-settings.yaml or the path to the current output directory.
See Fixtures for more information on fixtures.
See Default test module for pytest function examples.
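For illustration, here is a minimal sketch of such an additional test module using the output_path fixture (see Fixtures); the checked marker string "converged" is a hypothetical message that the executable is assumed to write to its standard output:
"""test_convergence.py: an additional test module placed in a test case directory."""


def test_convergence_reported(output_path):
    # "converged" is a hypothetical marker string assumed to be written by the
    # executable; executable.stdout is the log file created by the runner script.
    log_file = output_path / "executable.stdout"
    assert "converged" in log_file.read_text()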
Best practices¶
Script naming¶
If a post-processing script has the same name in different test case directories, then each of those directories shall have a __init__.py file so pytest can use them.
External python module¶
If you import an external python module in a pytest function, you shall use the following code snippet to prevent pytest from failing if the module is not available.
import pytest

pytest.importorskip(
    'external_module',
    reason='skip test because external_module cannot be imported')
from external_module import a_function, a_class
If the external module is installed in an environment not compatible with the anaconda environment of pytest-executable, then execute the module through a subprocess call. For instance:
import subprocess

# Run the module in a separate process; check=True makes the call fail,
# and thus the test, if the command returns a non-zero exit status.
command = 'python external_module.py'
subprocess.run(command.split(), check=True)
Fixtures¶
The purpose of the test fixtures is to ease the writing of test functions by providing information and data automatically. You may find more documentation on pytest fixtures in the official pytest documentation. We describe here the fixtures defined by pytest-executable. Some of them are used in the default test module, see Default test module.
Runner fixture¶
The runner fixture is used to execute the runner shell script passed with --exe-runner. This fixture is an object which can execute the script with the run() method. This method returns the exit status of the script execution. The value of the exit status shall be 0 when the execution is successful.
When --exe-runner is not set, a function that uses this fixture will be skipped.
Output path fixture¶
The output_path fixture provides the absolute path to the output directory of a test case as a Path object.
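For instance, a minimal sketch of a check using this fixture; the output directory name matches the example trees shown above and is an assumption about what the executable produces:
def test_output_dir_created(output_path):
    # output_path is the absolute path to the test case output directory;
    # "output" is the directory assumed to be created by the executable.
    assert (output_path / "output").is_dir()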
Regression path fixture¶
The regression_file_path fixture provides the paths to the reference data of a test case. A test function that uses this fixture is called once per reference item (file or directory) declared in the Reference section of a test-settings.yaml (thanks to the parametrize feature). The regression_file_path object has the attributes:
relative: a Path object that contains the path to a reference item relative to the output directory of the test case.
absolute: a Path object that contains the absolute path to a reference item.
If --exe-regression-root is not set, then a test function that uses the fixture is skipped.
You may use this fixture with the Output path fixture to get the path to an output file that shall be compared to a reference file.
For instance, if a test-settings.yaml under inputs/case contains:
references:
- output/file
- '**/*.txt'
and if --exe-regression-root is set to a directory references that contains:
references
└── case
├── 0.txt
└── output
├── a.txt
└── file
then a test function that uses the fixture will be called once per item of the following list:
[
"references/case/output/file",
"references/case/0.txt",
"references/case/output/a.txt",
]
and for each of these items, regression_file_path is set as described above with the relative and absolute paths.
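As an illustration only, a minimal sketch of a regression test that compares each reference file byte for byte with the corresponding produced file; the comparison strategy is an assumption, not something imposed by pytest-executable:
def test_regression(regression_file_path, output_path):
    # The produced file has the same path relative to the output directory
    # as the reference item has relative to the regression references tree.
    produced = output_path / regression_file_path.relative
    reference = regression_file_path.absolute
    assert produced.read_bytes() == reference.read_bytes()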
Tolerances fixture¶
The tolerances fixture provides the contents of the Tolerances section of a test-settings.yaml as a dictionary that maps names to Tolerances objects.
For instance, if a test-settings.yaml contains:
tolerances:
data-name1:
abs: 1.
data-name2:
rel: 0.
abs: 0.
then the fixture object is such that:
tolerances["data-name1"].abs = 1.
tolerances["data-name1"].rel = 0.
tolerances["data-name2"].abs = 0.
tolerances["data-name2"].rel = 0.
Default test module¶
This is the default python module executed when pytest-executable finds a test-settings.yaml; this module can be used as an example for writing new test modules.
# Copyright 2020, CS Systemes d'Information, http://www.c-s.fr
#
# This file is part of pytest-executable
# https://www.github.com/CS-SI/pytest-executable
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Builtin test module.
This module is automatically executed when a test-settings.yaml file is found.
"""
def test_runner(runner):
    """Check the runner execution.

    An OK process execution shall return the code 0.

    Args:
        runner: Runner object to be run.
    """
    assert runner.run() == 0
API documentation¶
Below is the API of some of the classes used by pytest-executable.
Changelog¶
All notable changes to this project will be documented here.
The format is based on Keep a Changelog, and this project adheres to Semantic Versioning.
0.5.0 - 2020-06-05¶
Changed¶
The name of the runner shell script in the output directories is the one passed to the CLI instead of the hardcoded one.
All the names of the CLI options have been prefixed with --exe- to prevent name clashes with the options of other plugins.
It is easier to define the settings to execute the runner shell script for a test case thanks to a dedicated section in test-settings.yaml.
Rename test_case.yaml to test-settings.yaml.
Added¶
Testing on MacOS.
--exe-test-module CLI option for setting the default test module.
Add timeout setting for the runner execution.
Removed¶
The log files testing in the builtin test module.
Fixed¶
Tests execution order when a test module is in a sub-directory of the yaml settings.
Marks of a test case not propagated to all test modules.
0.4.0 - 2020-05-03¶
Removed¶
equal_nan option is too specific and can easily be added with a custom fixture.
0.3.0 - 2020-03-19¶
Added¶
How to use skip and xfail marks in the docs.
How to use a proxy with anaconda in the docs.
Better error message when --runner does not get a script.
Changed¶
Placeholders in the runner script are compliant with bash (use {{}} instead of {}).
Report generation is done for all the tests at once and only requires a report generator script.
Fixed¶
#8393: check that --clean-output and --overwrite-output are not both used.
Output directory creation no longer fails when the input directory tree has one level.
Removed¶
Useless --nproc command line argument, because this can be done with a custom default test_case.yaml passed to the command line argument --default-settings.