Welcome to the pytest-executable documentation!

This is the user guide for pytest-executable, a pytest plugin for checking and validating an executable.

Overview

The pytest-executable plugin allows you to both automatically check executable results and post-process them. In this guide, a check is a testing event that can be automatically verified and provides an OK or KO outcome, such as checking that two numbers are equal. In contrast, a post-process is a testing event that solely produces additional data, like numerical or graphical data, which has to be analyzed manually in order to be qualified as OK or KO. The pytest-executable plugin can also generate test reports, and users may add custom check and post-processing events.

The pytest-executable plugin works with several test case directory trees for:

  • the inputs

  • the outputs

  • the regression references

There can be more than one regression reference tree for storing different sets of references, for instance for comparing the results against more than one version of executable. All the directory trees have the same hierarchy; this convention allows pytest-executable to work out what to test and what to check. Except for the inputs tree, you do not have to manually create the directory hierarchies, as they are automatically created by pytest-executable when it is executed.

In the inputs tree, a test case is a directory that contains:

  • the executable input files

  • a test_case.yaml file with basic settings

  • optional pytest and Python scripts for adding checks and post-processes

In the outputs tree, a test case directory typically contains:

  • symbolic links to the executable input files from the inputs tree

  • a shell script to execute executable

  • the files produced by the execution of executable

  • possibly, the files produced by the additional post-processing

In a regression references tree, a test case directory shall contain all the result files required for performing the checks.

Installation

Requirements

Anaconda or Miniconda version 2019.07 or above is required; it can be downloaded from anaconda.com. Once anaconda is installed (see here if you need to define a proxy), create the anaconda environment with

make environment

Now activate the anaconda environment with

conda activate test-tools

The remainder of this guide assumes you are in that environment. When you are done with pytest-executable and wish to leave the environment, execute

conda deactivate

Installation

Install for development

Install for development if you intend to modify pytest-executable and have your modifications usable in the environment, i.e. without having to do a reinstallation after a modification. To do so, run

make develop

You may also use this command to update an existing anaconda environment, for instance after updating your local git clone or if you add package dependencies.

Install for usage only

If you only need to use pytest-executable without having to modify it, run

make install

You may also use this command to update an existing anaconda environment, for instance after updating your local git clone or if you add package dependencies.

Documentation

To generate the documentation, run

make doc

Then open doc/build/html/index.html from a web browser.

Testing the tool

The tests can be run with

make test

You shall run them whenever you modify or update pytest-executable. All the tests shall pass; otherwise, do not use pytest-executable and contact the support team.

Command line interface

The pytest command line shall be executed from the directory that contains the inputs root directory.

Plugin options

--runner PATH

use the shell script at PATH to run executable; if omitted, executable is not run.

This shell script may contain placeholders, such as {{nproc}} and {{output_path}}. The placeholders will be replaced with the parameters determined from the context (either a pytest option or a setting defined in a test case via the test_case.yaml), and a final script is saved for each test case to be run in its output directory, under the name run_executable.sh. The latter is used to run executable.

A typical script for running executable with MPI could be:

#! /usr/bin/env bash

env=/path/to/env/settings
exe=/path/to/executable

source $env

mpirun -np {{nproc}} \
	$exe \
	--options \
	1> executable.stdout \
	2> executable.stderr

--output-root PATH

use PATH as the root for the output directory tree, default: tests-output

--overwrite-output

overwrite existing files in the output directories

--clean-output

clean the output directories before executing the tests

--regression-root PATH

use PATH as the root directory with the references for the regression testing, if omitted then the tests using the regression_path fixture will be skipped

--default-settings PATH

use PATH as the yaml file with the global default test settings instead of the built-in ones

--report-generator PATH

use PATH as the script to generate the test report

See the report-conf directory for an example of such a script.

Note

The report generator script may require additional dependencies to be installed, such as sphinx, which are not required by the plugin.

Standard pytest options

You can get all the standard command line options of pytest by executing pytest -h. In particular, to run only some of the test cases in the inputs tree, or to execute only some of the test functions, you may use one of the following approaches:

Use multiple path patterns

Instead of providing the path to the root of the inputs tree, you may provide the path to one or more of its sub-directories, for instance:

pytest --runner <path/to/runner> <path/to/tests/inputs/sub-directory1> <path/to/tests/inputs/sub/sub/sub-directory2>

You may also use shell patterns (with * and ? characters) in the paths like:

pytest --runner <path/to/runner> <path/to/tests/inputs/*/sub-directory?>

Use marks

A test case can be assigned one or more marks in the test_case.yaml file; with -m, only the test cases that match a given mark expression will be run. A mark expression is a logical expression that combines marks and yields a truth value. For example, to run only the tests that have the mark1 mark but not the mark2 mark, use -m "mark1 and not mark2". The logical operator or can be used as well.
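
For instance, with the hypothetical marks mark1 and mark2, such a command line could look like:

pytest --runner <path/to/runner> <path/to/tests/inputs> -m "mark1 and not mark2"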

Use substring expression

Like the marks, any part (substring) of the name of a test case or of a test function can be used to filter what will be executed. For instance, to only execute the tests that have the string transition anywhere in their name, use -k "transition". Or, to execute only the functions that have runner in their names, use -k "runner". Logical expressions can be used to combine several substrings as well.

Process last failed tests only

To only execute the tests that previously failed, use --last-failed.

Show the markers

Use --markers to show the available markers without executing the tests.

Show the tests to be executed

Use --collect-only to show the test cases and the test events (functions) selected without executing them. You may combine this option with other options, like the one above to filter the test cases.

Usage

The pytest-executable tool can be used in a wide variety of ways; the following sections explain how.

Run executable only

pytest --runner <path/to/runner> <path/to/tests/inputs> -k runner

This command will execute executable for all the test cases that are found in the inputs tree under path/to/tests/inputs. A test case is identified by a directory that contains a test_case.yaml file. For each of the test cases found, pytest-executable will create an output directory with the same directory hierarchy and run the case in that output directory. By default, the root directory of the output tree is tests-output; this can be changed with the option --output-root. Finally, the -k runner option instructs pytest to only execute the executable runner and nothing more, see Standard pytest options for more information on doing only some of the processing.

For instance, if the test inputs tree contains:

path/to/tests/inputs
├── dir-1
│   ├── case.input
│   └── test_case.yaml
└── dir-2
    ├── case.input
    └── test_case.yaml

Then the output tree is:

tests-output
├── dir-1
│   ├── case.input -> path/to/tests/inputs/dir-1/case.input
│   ├── case.output
│   ├── executable.stderr
│   ├── executable.stdout
│   ├── run_executable.sh
│   ├── run_executable.stderr
│   └── run_executable.stdout
└── dir-2
    ├── case.input -> path/to/tests/inputs/dir-2/case.input
    ├── case.output
    ├── executable.stderr
    ├── executable.stdout
    ├── run_executable.sh
    ├── run_executable.stderr
    └── run_executable.stdout

For a given test case, for instance tests-output/dir-1, the output directory contains:

case.output

the output file produced by the execution of executable, in practice there can be any number of output files and directories produced.

case.input

a symbolic link to the file in the test case input directory, in practice there can be any number of input files.

executable.stderr

contains the error messages from the executable execution

executable.stdout

contains the log messages from the executable execution

run_executable.sh

Executing this script directly from a console shall produce the same results as when it is executed by pytest-executable. This script is intended to be as independent of the execution context as possible, such that it can be executed independently of pytest-executable in a reproducible way, i.e. it is self contained and does not depend on the shell context.

run_executable.stderr

contains the error messages from the run_executable.sh execution

run_executable.stdout

contains the log messages from the run_executable.sh execution

If you need to manually run executable for a test case, for debugging purposes for instance, just go to its output directory, e.g. cd tests-output/dir-1, and execute run_executable.sh.

Do default regression checking without running executable

pytest --regression-root <path/to/tests/references> <path/to/tests/inputs> --overwrite-output

We assume that executable results have already been produced for the test cases considered. This is not enough though, because the output directory already exists and pytest-executable will by default prevent the user from silently modifying any existing test output directories. In that case, the option --overwrite-output shall be used. The above command line will compare the results in the default output tree with the references; if the existing executable results are in a different directory, then you need to add the path to it with --output-root.

The option --regression-root points to the root directory with the regression references tree. This tree shall have the same hierarchy as the output tree but it only contains the result files that are used for doing the regression checks.
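
For instance, with the test cases of the earlier example, and assuming the case.output file is listed in the references setting of each test_case.yaml, the references tree could look like:

path/to/tests/references
├── dir-1
│   └── case.output
└── dir-2
    └── case.output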

Run executable and do default regression checks

pytest --runner <path/to/runner> --regression-root <path/to/tests/references> <path/to/tests/inputs>

Note

Currently this can only be used when the executable execution is done on the same machine as the one that executes the regression checks, i.e. this will not work when executable is submitted through a job scheduler.

Finally, checks are done on the executable log files to verify that the file executable.stdout exists and is not empty, and that the file executable.stderr exists and is empty.

Add a test case

A test case is composed of a directory with:

  • the executable input files

  • a test_case.yaml file with basic settings

  • optional pytest and Python modules for adding checks and post-processes

The executable input files shall use the naming convention case.labs and case.pbd. Among the optional modules, there shall be at least one that is discoverable by pytest, i.e. a Python module whose name starts with test_ and which contains at least one function whose name also starts with test_.

Note

A test case directory shall not contain any of the files created by the execution of executable or of the processing defined in the python modules, otherwise they may badly interfere with the execution of the testing tool. In other words, do not run anything in the input directory.

The test_case.yaml file is used by pytest-executable for several things. When this file is found, pytest-executable will create the test case output directory, then identify the settings for running the case and finally perform the checks and post-processes. If test_case.yaml is empty, then the default settings are used, which is equivalent to using a test_case.yaml with the following contents:

# Copyright 2020, CS Systemes d'Information, http://www.c-s.fr
#
# This file is part of pytest-executable
#     https://www.github.com/CS-SI/pytest-executable
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# Number of MPI processes.
nproc: 1

# List of keywords used to mark a test case
marks: []

# File path patterns relative to a test case to identify the regression
# reference files used for the regression assertions.
references: []

# Tolerances used for the assertions when comparing the fields, all default
# tolerances are 0.
tolerances: {}

This file is in yaml format, a widely used human friendly file format that allows you to define nested sections, lists of items, key-value pairs and more. To change a default setting, just define it in the test_case.yaml as explained in the following sections.

Number of parallel processes

To change the number of parallel processes:

nproc: 10

Regression reference files

Reference files are used to do regression checks on the files produced by executable. The regression is done by comparing the files with a given tolerance (explained in the next section). The references setting shall contain a list of paths to the files to be compared. A path shall be defined relative to the test case directory; it may use any shell pattern like **, *, ?, for instance:

references:
   - path/to/file/relative/to/test/case

Tolerances

To change the tolerance for comparing the Velocity variable and to allow comparing a new NewVariable variable:

tolerances:
    Velocity:
        abs: 1.
    NewVariable:
        rel: 0.
        abs: 0.

If one of the tolerance values is not defined, like the rel one for Velocity, then its value will be set to 0.

Marks

A mark is a pytest feature that allows you to select some of the tests to be executed. A mark is a kind of tag or label assigned to a test. This is how to add marks to a test case, for instance the slow and isotropy marks:

marks:
   - slow
   - isotropy

You can also use marks that already exist. In particular, the skip and xfail marks provided by pytest can be used. The skip mark tells pytest to record but not execute the built-in test events of a test case. The xfail mark tells pytest to expect that at least one of the built-in test events will fail.

Marks declaration

The marks defined in all test cases shall be declared to pytest in order to be used. This is done in the file pytest.ini that shall be created in the parent folder of the test inputs directory tree, where the pytest command is executed. This file shall have the format:

[pytest]
markers =
    slow: one line explanation of slow
    isotropy: one line explanation of isotropy

Add a post-processing

This section shows how to add a post-processing that will be run by pytest-executable.

Pytest functions

In a test case input directory, create a Python module with a name starting with test_. Then, in that module, create pytest functions with names starting with test_. Those functions will be executed, and pytest will catch their assert statements to determine whether the processing done by a function passed or failed. The outcome of a function can also be skipped if for some reason no assertion could be evaluated. If an exception is raised in a function, the function execution is considered failed.

The functions are executed in a defined order: first by the test directory name, then by the module name and finally by the function name. The sorting is done in alphabetical order. There are two exceptions to this behavior:

  • the test_case.yaml file is always processed before all the other modules in a given directory

  • a module in a parent directory is always run after the modules in the child directories; this allows gathering the results from the child directories

The pytest functions shall take advantage of the fixtures to automatically retrieve data from the execution context, such as the information stored in the test_case.yaml or the path to the current output directory.

See Fixtures for more information on fixtures.

See Builtin test module for pytest function examples.
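
For illustration, here is a minimal sketch of a custom test module; the module name, the case.output file name and the checked property are hypothetical and shall be adapted to the actual test case. It only relies on the output_path fixture described in Fixtures.

"""Example of a custom test module, to be placed in a test case input directory.

Its file name shall start with test_ so that pytest can discover it.
"""


def test_output_file_exists(output_path):
    """Check that executable produced the expected output file.

    Args:
        output_path: Path to the current test output directory (fixture).
    """
    # case.output is a hypothetical file name, adapt it to the actual outputs
    assert (output_path / "case.output").is_file(), "case.output shall exist"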

Best practices

Script naming

If a post-processing script has the same name in different test case directories, then each of those directories shall have an __init__.py file so that pytest can use them.

External python module

If you import an external python module in a pytest function, you shall use the following code snippet to prevent pytest from failing if the module is not available.

import pytest

pytest.importorskip('external_module',
                    reason='skip test because external_module cannot be imported')
from external_module import a_function, a_class

If the external module is installed in an environment not compatible with the anaconda environment of pytest-executable, then execute the module through a subprocess call. For instance:

import subprocess

command = 'python external_module.py'
# check=True makes the calling test fail if the command exits with a non-zero status
subprocess.run(command.split(), check=True)

Fixtures

The purpose of test fixtures is to ease the writing of test functions by providing information and data automatically. You may find more documentation on pytest fixtures in the pytest official documentation. We describe here the fixtures defined by pytest-executable. They are used in the default test module; take a look at it for usage examples, see Builtin test module.

Runner fixture

This fixture is used to run executable; it will do the following:

  • get the runner script passed to the pytest command line option --runner,

  • process it to replace the placeholders {{nproc}} and {{output_path}} with their actual values,

  • write it to the run_executable.sh shell script in the test case output directory.

The runner object provided by the fixture can be executed with its run() method, which returns the exit status of the script execution. An exit status of 0 means a successful execution.

Output path fixture

This fixture is used to get the absolute path to the output directory of a test case. It provides the output_path variable that holds a Path object.

Tolerances fixture

This fixture is used to get the values of the tolerances defined in the test_case.yaml. It provides the tolerances dictionary that binds the name of a quantity to an object that has two attributes:

  • rel: the relative tolerance,

  • abs: the absolute tolerance.

Regression path fixture

This fixture is used to get the absolute path to the directory that contains the regression reference of a test case when the command line option --regression-root is used. It provides the regression_path variable that holds a Path object.

You may use this fixture with the output_path fixture to get the path to the file that shall be compared to a reference file.
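
For illustration, here is a minimal sketch of such a comparison; the case.output file name, the way its content is read and the Velocity quantity are hypothetical, only the output_path, regression_path and tolerances fixtures are provided by pytest-executable.

def test_velocity_regression(output_path, regression_path, tolerances):
    """Compare a result file against its regression reference.

    Args:
        output_path: Path to the current test output directory (fixture).
        regression_path: Path to the regression references directory (fixture).
        tolerances: Tolerances defined in test_case.yaml (fixture).
    """
    # hypothetical file format: a single number per file, adapt to the real one
    result = float((output_path / "case.output").read_text())
    reference = float((regression_path / "case.output").read_text())
    # Velocity is a hypothetical quantity that shall be defined under tolerances
    tolerance = tolerances["Velocity"]
    assert abs(result - reference) <= tolerance.abs + tolerance.rel * abs(reference)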

Builtin test module

This is the Python module executed when the testing tool finds a test_case.yaml; this module can be used as an example for writing new test modules.

# Copyright 2020, CS Systemes d'Information, http://www.c-s.fr
#
# This file is part of pytest-executable
#     https://www.github.com/CS-SI/pytest-executable
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

"""Builtin test module.

This is the module automatically executed when a test_case.yaml file is found.
"""


def test_runner(runner):
    """Check the runner execution.

    An OK process execution shall return the code 0.

    Args:
        runner: Fixture to run the runner script.
    """
    assert runner.run() == 0


def test_logs(output_path):
    """Check the executable log files.

    The error log shall be empty and the output log shall not be empty.

    Args:
        output_path: Path to the current test output directory.
    """
    assert (
        output_path / "executable.stdout"
    ).stat().st_size != 0, "stdout file shall be non-empty"
    assert (
        output_path / "executable.stderr"
    ).stat().st_size == 0, "stderr file shall be empty"

Changelog

All notable changes to this project will be documented here.

The format is based on Keep a Changelog, and this project adheres to Semantic Versioning.

0.4.0 - 2020-05-03

Removed
  • equal_nan option is too specific and can easily be added with a custom fixture.

0.3.1 - 2020-03-30

Added
  • Report generation can handle a sphinx _static directory.

0.3.0 - 2020-03-19

Added
  • How to use skip and xfail marks in the docs.

  • How to use a proxy with anaconda in the docs.

  • Better error message when --runner does not get a script.

Changed
  • Placeholders in the runner script are compliant with bash (use {{}} instead of {}).

  • Report generation is done for all the tests at once and only requires a report generator script.

Fixed
  • #8393: check that --clean-output and --overwrite-output are not both used.

  • Output directory creation no longer fails when the input directory tree has one level.

Removed
  • Useless --nproc command line argument, because this can be done with a custom default test_case.yaml passed to the command line argument --default-settings.

0.2.1 - 2020-01-14

Fixed
  • #7043: skip regression tests when reference files are missing, no longer raise error.
