Python library that handles grader test suite management, file validation, and test feedback formatting. Originally developed to enable HTML feedback for programming exercise grading output for courses served on the A+ platform, although the A+ platform itself is not required to run graderutils. Main features:
- Running `unittest.TestCase` tests and producing generic JSON results that may be converted into HTML (see the test sketch below).
- Validation tasks before running tests.
- Restricting allowed Python syntax using black- and whitelists of AST node names.
- Formatting tracebacks and exception messages to include only essential information (the full, unformatted traceback is also available).
- Testing input and output of a program against a model program's input and output using `IOTester`.
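
As a minimal sketch of the first point, a grader test module is just an ordinary `unittest.TestCase`. The module and function names below (`primes`, `is_prime`) are made up for illustration, and graderutils-specific configuration (for example how points are assigned) is not shown here.

```python
"""Hypothetical grader test module; graderutils runs plain unittest tests."""
import unittest

import primes  # the submission module under test; the name is made up for this sketch


class TestPrimes(unittest.TestCase):

    def test_small_primes(self):
        """Known primes should be accepted."""
        for n in (2, 3, 5, 7, 11):
            self.assertTrue(primes.is_prime(n), "{} should be prime".format(n))

    def test_small_non_primes(self):
        """Edge cases and composites should be rejected."""
        for n in (0, 1, 4, 6, 9):
            self.assertFalse(primes.is_prime(n), "{} should not be prime".format(n))


if __name__ == "__main__":
    unittest.main()
```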
Results from `examples/01_simple`, rendered with the default theme.
Install graderutils:

```sh
git clone --depth 1 https://github.com/apluslms/python-grader-utils.git
cd python-grader-utils
pip install .
```
Examples (in the `examples/` directory):
- `01_simple`, minimal exercise (a sketch of running it locally follows this list)
- `02_property_based_testing`, grader tests with random input data generation
- `03_template_extension`, if you want to extend or replace the current feedback template
- `04_embedded_plot`, embedding JavaScript into the feedback template
- `05_string_similarity_highlight`, extending the feedback template by adding character similarity highlighting when comparing two strings
- Check out the aplus-manual git repository and the corresponding A+ site for more examples and explanations of unit tests (including `IOTester` examples).
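
Running an example locally might look roughly like the following. The `graderutils.main` entry point and the `test_config.yaml` file name are assumptions based on the bundled examples rather than a documented command line, so check the example directories for the exact names.

```sh
# Sketch only: graderutils.main and test_config.yaml are assumptions;
# see the example directories for the actual entry point and config file.
cd examples/01_simple
python3 -m graderutils.main test_config.yaml > results.json

# Convert the JSON results into HTML feedback.
cat results.json | python3 -m graderutils_format.html > results.html
```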
Any JSON strings that validate successfully against the "Grading feedback" JSON schema can be converted to human-readable form using `graderutils_format`.
For example:

```sh
cat results.json | python3 -m graderutils_format.html > results.html
```
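
If you want to check a results file against the schema yourself, the third-party `jsonschema` package can be used as in the sketch below; the schema file path is a placeholder, since the schema's location inside graderutils is not specified here.

```python
import json

import jsonschema  # third-party package: pip install jsonschema

# Placeholder paths: point these at your results file and at the
# "Grading feedback" schema shipped with graderutils.
with open("results.json") as f:
    results = json.load(f)
with open("grading_feedback.schema.json") as f:
    schema = json.load(f)

# Raises jsonschema.exceptions.ValidationError if the results do not conform.
jsonschema.validate(instance=results, schema=schema)
print("results.json validates against the grading feedback schema")
```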
Outline of the grading feedback JSON contents: