An executable test can be run with several wrappers, e.g. with valgrind, memcheck or a simple time measurement. Each wrapper run can then be validated by so-called testers, e.g. with file diffs or greps. This can be configured in CMake:
AddTest(
    NAME GroundWaterFlowProcess
    PATH Elliptic/quad_20x10_GroundWaterFlow
    EXECUTABLE ogs
    EXECUTABLE_ARGS quad_20x10_GroundWaterFlow.prj
    RUNTIME 35   # optional
    WRAPPER time # optional
    TESTER diff  # optional
    DIFF_DATA quad_20x10_constMat0.mesh.vtu quad_20x10_left_right.gml # optional
)
Tests are then run with ninja ctest, or with ctest -VV for more verbose output (you may also use other ctest options). If a tester detects errors, they are displayed. RUNTIME specifies the typical runtime in seconds on an Intel Xeon E5-2680 v2 @ 2.80 GHz with 500 GiB RAM (envinf1). Tests with a RUNTIME > 60 are considered LARGE tests.
The functionality is very flexible, and more wrappers and testers can be added later on, e.g. for running statistics on output files and comparing them with statistics from reference files.
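To give an idea of what such a statistics tester could do, here is a rough sketch only; it is not an existing AddTest tester, and the CSV file layout and the choice of statistics are assumptions for illustration:

# Hypothetical statistics checker (not an existing AddTest tester): compare
# simple statistics of an output CSV file with those of a reference file.
import sys

import numpy as np

output = np.loadtxt(sys.argv[1], delimiter=",")
reference = np.loadtxt(sys.argv[2], delimiter=",")

# Compare mean and maximum within a tolerance instead of diffing raw files.
if not (np.isclose(output.mean(), reference.mean(), rtol=1e-6)
        and np.isclose(output.max(), reference.max(), rtol=1e-6)):
    sys.exit("statistics of output and reference differ")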
Similar to the configure and build presets, there are test presets, e.g. in your source directory:
ctest --preset release # equivalent to running `ninja ctest` above
ctest --preset release -j 6 --label-regex Utils # run 6 tests in parallel which have a Utils label
To sum up: from a clean source directory you can fully configure, build and test OGS with these 3 commands:
cmake --preset release
cmake --build --preset release
ctest --preset release
Test data is stored in Tests/Data. Generated test output files should be found in [build-dir]/Tests/Data.
In the OGS CLI, outputting to [build-dir]/Tests/Data is already handled (via the -o parameter). For other executables you have to implement this, e.g. with a parameter specifying the output directory.
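As a rough illustration of that pattern (not an existing OGS tool), a small Python utility could take the output directory as a parameter like this; the script and its output file are made up for this example:

# Hypothetical post-processing script illustrating an output-directory parameter.
import argparse
import os

parser = argparse.ArgumentParser()
parser.add_argument("-o", "--output-dir", default=".",
                    help="directory to write generated files to")
args = parser.parse_args()

os.makedirs(args.output_dir, exist_ok=True)
with open(os.path.join(args.output_dir, "result.txt"), "w") as f:
    f.write("post-processed output\n")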
In code, BaseLib::BuildInfo::data_path (from BuildInfo.h) references the data source directory and BaseLib::BuildInfo::data_binary_path references the data output directory.
For adding new data files simply commit the new files as usual.
Full Jupyter Notebook-based workflows can be tested too. Create the notebook in Tests/Data and configure input and output directories:
import os

# The second parameter to get() is important if you want to run
# the notebook standalone.
data_dir = os.environ.get('OGS_DATA_DIR', '../../../Data')
out_dir = os.environ.get('OGS_TESTRUNNER_OUT_DIR', '_out')

if not os.path.exists(out_dir):
    os.makedirs(out_dir)
os.chdir(out_dir)

# ...
# Run ogs; get input data from `data_dir`; write to `out_dir`.
# Verify results; on failure assert with:
assert False
# or
raise SystemExit()
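A concrete verification step might, for example, read a result file written by the ogs run and compare it against expected values. The following is a minimal sketch only; the file name pressure.csv and the expected value are assumptions for illustration:

# Minimal verification sketch; `pressure.csv` and the expected value are
# hypothetical and only illustrate the assert-on-failure pattern.
import os

import numpy as np

out_dir = os.environ.get("OGS_TESTRUNNER_OUT_DIR", "_out")
computed = np.loadtxt(os.path.join(out_dir, "pressure.csv"), delimiter=",")

# The notebook test fails if any value deviates from the expected pressure.
assert np.allclose(computed, 1.0e5, rtol=1e-8), "pressure deviates from reference"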
Add Python dependencies to web/data/versions.json (under python/notebook_requirements).
Add to CTest with:
NotebookTest(NOTEBOOKFILE Notebooks/SimpleMechanics.ipynb RUNTIME 10)
Then e.g. run with:
ctest -R nb -j 4 --output-on-failure
Make sure to have a Python virtual environment enabled and the requirements for your notebook installed, e.g.:
virtualenv .venv
source .venv/bin/activate
pip install $(jq -r '.python.notebook_requirements | join(" ")' path/to/ogs/web/data/versions.json)
This is handled automatically when OGS_USE_PIP=ON.
Also make sure to have ogs or other required tools in the PATH:
export PATH=./path/to/build/release/bin:$PATH
Run all notebooks in Tests/Data (ignoring notebooks with .ci-skip. in their filename) with the notebook testrunner.py:
cd Tests/Data
find . -type f -iname '*.ipynb' \
| grep -vP '\.ipynb_checkpoints|\.ci-skip.ipynb$' \
| xargs python Notebooks/testrunner.py --out _out
Notebooks are automatically added to the benchmark documentation page.