Executable Testing / Test Data

Introduction

An executable test can be run with several wrappers, e.g. with valgrind, memcheck or a simple time measurement. Each wrapper run can then be validated by so-called testers, e.g. with file diffs or greps. This is configured in CMake:

AddTest(
    NAME GroundWaterFlowProcess
    PATH Elliptic/quad_20x10_GroundWaterFlow
    EXECUTABLE ogs
    EXECUTABLE_ARGS quad_20x10_GroundWaterFlow.prj
    RUNTIME 35                                                        # optional
    WRAPPER time                                                      # optional
    TESTER diff                                                       # optional
    DIFF_DATA quad_20x10_constMat0.mesh.vtu quad_20x10_left_right.gml # optional
)

Tests are then run with ninja ctest or, for more verbose output, with ctest -VV (you may also use other ctest options). If a tester detects errors, they are displayed. RUNTIME specifies the typical runtime in seconds on an Intel Xeon E5-2680 v2 @ 2.80 GHz with 500 GiB RAM (envinf1). Tests with a RUNTIME greater than 60 are considered LARGE tests.

The functionality is very flexible, and more wrappers and testers can be added later on, e.g. for running statistics on output files and comparing them with statistics from reference files.

Run tests with CMake presets

Similar to the configure and build presets, there are test presets, which you run from your source directory, e.g.:

ctest --preset release                          # equivalent to running `ninja ctest` above
ctest --preset release -j 6 --label-regex Utils # run 6 tests in parallel which have a Utils label

To sum up: starting from a clean source directory you can fully configure, build and test OGS with these three commands:

cmake --preset release
cmake --build --preset release
ctest --preset release

Test Data

Test data is stored in Tests/Data. Generated test output files should be written to [build-dir]/Tests/Data.

For the OGS CLI, outputting to [build-dir]/Tests/Data is already handled (via the -o parameter). For other executables you have to implement this yourself, e.g. with a parameter specifying the output directory, as sketched below.
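A minimal sketch of that pattern, assuming a hypothetical Python-based tool (the parameter name and the written file are purely illustrative, not an OGS convention):

import argparse
import os

# Hypothetical example tool: every generated file goes into the directory
# given via -o/--output-dir instead of the current working directory.
parser = argparse.ArgumentParser(description="Example tool with an output-directory parameter")
parser.add_argument("-o", "--output-dir", default=".", help="directory for generated files")
args = parser.parse_args()

os.makedirs(args.output_dir, exist_ok=True)
with open(os.path.join(args.output_dir, "result.txt"), "w") as file:
    file.write("example output\n")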

In code, BaseLib::BuildInfo::data_path (from BuildInfo.h) references the data source directory and BaseLib::BuildInfo::data_binary_path references the data output directory.

To add new data files, simply commit the new files as usual.

Notebook testing

Complete Jupyter-Notebook-based workflows can be tested too. Create the notebook in Tests/Data. Configure the output directory and try to use it for all outputs:

import os

# On CI, out_dir is set to the notebook's directory inside the build directory,
# similar to regular benchmark tests. When testing locally, output goes to a
# _out subdirectory next to the notebook source.
out_dir = os.environ.get('OGS_TESTRUNNER_OUT_DIR', '_out')
if not os.path.exists(out_dir):
    os.makedirs(out_dir)

# ...
# Run ogs; get input data from current directory; write to `out_dir`
! ogs my_project.prj -o {out_dir} > {out_dir}/log.txt

# Verify results; on failure assert with:
assert False
# or
raise SystemExit()
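As an illustration of the verification step, here is a sketch that compares a computed result with a reference within a tolerance. The file names and the use of plain-text files are assumptions for this example, not an OGS requirement:

import os

import numpy as np

# Same output directory as configured above
out_dir = os.environ.get('OGS_TESTRUNNER_OUT_DIR', '_out')

# Hypothetical files: a result written by the simulation run and a reference
# stored next to the notebook in Tests/Data.
computed = np.loadtxt(os.path.join(out_dir, 'pressure.txt'))
reference = np.loadtxt('pressure_reference.txt')

# Raises an AssertionError (and thereby fails the notebook test) if the
# deviation exceeds the tolerance.
np.testing.assert_allclose(computed, reference, atol=1e-12)

Any additional package used this way (here numpy) has to be declared as described below.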

Add new Python dependencies to Tests/Data/requirements.txt.

Register with CTest

Add the notebook benchmark to CTest, e.g.:

if(NOT OGS_USE_PETSC)
    NotebookTest(NOTEBOOKFILE Mechanics/Linear/SimpleMechanics.ipynb RUNTIME 10)
endif()

Registered notebooks are automatically added to the benchmark documentation page.

For local testing please note that you need to configure OGS with OGS_USE_PIP=ON, which automatically creates a virtual environment in the build directory that is used by the notebook tests.

Then, e.g., run all notebook tests (-R nb) in parallel with:

source .venv/bin/activate # May need to be activated
ctest -R nb -j 4 --output-on-failure
