Add automatic integration test generation script (#1316)
Co-authored-by: Liam Toozer <[email protected]>
petechd and liamtoozer authored Mar 8, 2024
1 parent b1e8d85 commit aae666a
Showing 11 changed files with 322 additions and 33 deletions.
5 changes: 5 additions & 0 deletions Makefile
@@ -116,3 +116,8 @@ dev-compose-down-linux:

profile:
pipenv run python profile_application.py

generate-integration-test:
pipenv run playwright install chromium
pipenv run python -m scripts.generate_integration_test
pipenv run black ./scripts/test_*
1 change: 1 addition & 0 deletions Pipfile
@@ -38,6 +38,7 @@ types-python-dateutil = "*"
pytest-mock = "*"
types-cachetools = "*"
types-pytz = "*"
playwright = "*"

[packages]
colorama = "*"
124 changes: 105 additions & 19 deletions Pipfile.lock

Some generated files are not rendered by default.

5 changes: 5 additions & 0 deletions README.md
@@ -194,6 +194,11 @@ Or set the `GOOGLE_CLOUD_PROJECT` environment variable to your gcp project id.

---


## Integration Tests
There is a dev-convenience script that auto-generates the lines of code for a user journey. See the [README](scripts/README.md) for more information and
instructions on running the script.

## Frontend Tests

The frontend tests use NodeJS to run. To handle different versions of NodeJS it is recommended to install `Node Version Manager` (`nvm`). It is similar to pyenv but for Node versions.
22 changes: 11 additions & 11 deletions app/translations/messages.pot
@@ -249,55 +249,55 @@ msgid_plural "{number_of_days} days"
msgstr[0] ""
msgstr[1] ""

#: app/routes/errors.py:159
#: app/routes/errors.py:160
msgid "You have reached the maximum number of individual access codes"
msgstr ""

#: app/routes/errors.py:162
#: app/routes/errors.py:163
msgid ""
"If you need more individual access codes, please <a "
"href='{contact_us_url}'>contact us</a>."
msgstr ""

#: app/routes/errors.py:180
#: app/routes/errors.py:181
msgid "You have reached the maximum number of times for submitting feedback"
msgstr ""

#: app/routes/errors.py:183
#: app/routes/errors.py:184
msgid ""
"If you need to give more feedback, please <a "
"href='{contact_us_url}'>contact us</a>."
msgstr ""

#: app/routes/errors.py:233
#: app/routes/errors.py:232
msgid "Sorry, there was a problem sending the access code"
msgstr ""

#: app/routes/errors.py:240
#: app/routes/errors.py:239
msgid "You can try to <a href='{retry_url}'>request a new access code again</a>."
msgstr ""

#: app/routes/errors.py:243 app/routes/errors.py:268 app/routes/errors.py:290
#: app/routes/errors.py:242 app/routes/errors.py:267 app/routes/errors.py:289
msgid ""
"If this problem keeps happening, please <a "
"href='{contact_us_url}'>contact us</a> for help."
msgstr ""

#: app/routes/errors.py:264
#: app/routes/errors.py:263
msgid "Sorry, there was a problem sending the confirmation email"
msgstr ""

#: app/routes/errors.py:265
#: app/routes/errors.py:264
msgid "You can try to <a href='{retry_url}'>send the email again</a>."
msgstr ""

#: app/routes/errors.py:286 templates/errors/403.html:3
#: app/routes/errors.py:285 templates/errors/403.html:3
#: templates/errors/403.html:6 templates/errors/submission-failed.html:5
#: templates/errors/submission-failed.html:8
msgid "Sorry, there is a problem"
msgstr ""

#: app/routes/errors.py:287
#: app/routes/errors.py:286
msgid "You can try to <a href='{retry_url}'>submit your feedback again</a>."
msgstr ""

5 changes: 5 additions & 0 deletions mypy.ini
@@ -106,3 +106,8 @@ no_implicit_optional = True
disallow_untyped_defs = True
warn_return_any = True
no_implicit_optional = True

[mypy-scripts.generate_integration_test]
disallow_untyped_defs = True
warn_return_any = True
no_implicit_optional = True
40 changes: 40 additions & 0 deletions scripts/README.md
@@ -0,0 +1,40 @@
# Scripts

## Script to auto-generate code for integration test

### Details

To speed up the process of writing integration tests for Runner, there is a dev-convenience script that records the GET and POST requests of a user journey
and outputs them formatted in the style of an integration test.
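
As a rough illustration of the recording technique, Playwright's sync API can listen for page requests while you drive the browser manually. This is a minimal sketch under assumptions, not the repository's actual script; the handler name, filtering rule, and localhost URL are all illustrative:

```python
# A rough sketch only: the actual script is more sophisticated
# (e.g. it also filters out session-token and initial page-load GETs).
from playwright.sync_api import Request, sync_playwright

recorded: list[tuple[str, str]] = []


def on_request(request: Request) -> None:
    # Record only top-level GET/POST navigations; sub-resources such as
    # CSS, JS and images are skipped so each entry maps onto a journey step.
    if request.resource_type == "document" and request.method in ("GET", "POST"):
        recorded.append((request.method, request.url))


with sync_playwright() as playwright:
    browser = playwright.chromium.launch(headless=False)
    page = browser.new_page()
    page.on("request", on_request)
    page.goto("http://localhost:8000/")  # assumed local Launcher UI address
    input("Navigate through the survey, then press Enter to finish... ")
    browser.close()

# Each recorded (method, url) pair becomes a line of the generated test.
for method, url in recorded:
    print(f"{method} {url}")
```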

### Overview

* All POSTs are recorded. To ensure only the necessary GET requests are recorded, additional logic excludes the following GET requests:
* Session tokens
* Initial URL requests for each page load
* Additional logic ensures that backwards navigation in a journey after following links (e.g. the 'previous' link) is recorded correctly.
This is achieved by storing the previous request method at module level so that it can be used to decide whether to record or disregard a GET request (see the sketch after this list).
* You will need to manually add your assertions to the generated test file
* When the script is launched, it will create a new file for the chosen schema. If you launch the script again for the same schema, it will overwrite the
previous output file
* The script is intended to be run with schemas that have a `test_` prefix, which suits most scenarios for test generation. If you wish to use a schema without
this prefix, you will need to manually amend the generated file, class, and function names so that pytest can collect the test file correctly
* It does **not** handle dynamic answers because these are generated at runtime; you will need to update the generated test to handle `list_item_id` values
separately, as they will not be known beforehand
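
A minimal sketch of that module-level state, assuming a simple rule; the real script's decision logic also covers the excluded GETs listed above, and its names will differ:

```python
# Hypothetical sketch; the actual script's rules are more involved.
_previous_method: str | None = None


def should_record(method: str) -> bool:
    """Always record a POST; record a GET only when the previous request was
    also a GET (the user followed a link such as 'previous'), since a GET
    straight after a POST is just the redirect for that submission."""
    global _previous_method
    record = method == "POST" or _previous_method == "GET"
    _previous_method = method
    return record
```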

### Usage

Run the following make command from the project root folder:

```shell
make generate-integration-test
```

This will pause the script and open a browser pointing to the Launcher UI (make sure the application and supporting services are running). Then follow the
steps below:

1. Choose a schema and launch it - the schema name is used to name the integration test output file
1. Navigate through the survey
1. When you have finished the journey (you can stop at any point), return to the command line and press Enter
1. The output will be shown in the logs, and a formatted file will be created for you in the scripts folder. For example: `scripts/test_checkbox.py`
1. Add your assertions to the test file and move it into the appropriate `test/integration/` location; a sketch of what a generated file might look like follows below
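
For reference, a generated file might look something like this sketch. The base-class import and helper method names are assumptions modelled on typical Runner integration tests; real output depends on the schema and journey:

```python
# Hypothetical shape of a generated scripts/test_checkbox.py; real output
# will differ, and the helper names here are assumptions.
from tests.integration.integration_test_case import IntegrationTestCase


class TestCheckbox(IntegrationTestCase):
    def test_checkbox(self):
        self.launchSurvey("test_checkbox")  # one line per recorded request
        self.post({"mandatory-checkbox-answer": "Ham"})
        self.post()
        # Assertions are added manually afterwards, for example:
        self.assertInBody("Thank you")
```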