This is a Next.js application that serves the GitHub COVID modelling web interface.
The needs of the unified modelling project are changing rapidly, so we do not have a set-in-stone development roadmap. This application is built with the goal that it should not be too difficult to alter, or even rewrite, parts of it as requirements change.
To learn more about this project's goals, please see PROJECT-INTENT.md.
This app has two development modes: "local" and "advanced" mode. The "local" mode is a much easier development setup, but does not actually queue simulation runs with the development control plane. Instead, it uses a stubbed result. The "advanced" mode is for maintainers only—it requires access to some shared credentials and accounts.
- Docker (v19.03).
- This repository assumes that your user has been configured to manage Docker.
- NOTE: This does not work out of the box with colima. Additional configuration may be required.
- Docker Compose (v1.22).
- Node.js (v14.15) / npm (v6.14), either installed directly or using nvm.
Versions listed have been confirmed to work, but older/newer versions may (or may not) also work.
- Start Docker.
- Clone this repository:
  > git clone https://github.com/covid-policy-modelling/web-ui
  > cd web-ui
- Install dependencies:
> npm install
- Create an OAuth app for local development:
- Go to https://github.com/settings/applications/new to create a new OAuth app
- In the "Authorization callback URL" section, fill in `http://localhost:3000/api/callback`
- Fill in anything you want for Application name and Homepage URL (this is for personal use only)
- Click Register application
- Click Generate a new client secret
- Make a note of the Client ID and Client Secret, you will need them for the next step (and you will not be able to retrieve the client secret later).
- Run the environment setup script:
> script/setup
This script will ask you a series of questions; you'll want to answer yes, you do want to run in local mode. When asked, enter the Client ID and Client Secret from the previous step. Although the script does not say so, for the other questions you can press Enter to accept the default value, which is usually appropriate. The resulting configuration will be written to a `.env` file in the same directory.
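A minimal sketch of what the generated `.env` might contain. The key names below are illustrative assumptions, not the definitive list; the environment variables are documented authoritatively in `env.yml`:

```shell
# Illustrative sketch only - real keys and values are written by script/setup
GITHUB_CLIENT_ID=Iv1.0123456789abcdef   # Client ID from your OAuth app
GITHUB_CLIENT_SECRET=<your-client-secret>
DB_DATABASE=github_covid_modelling_dev  # default development database name
```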
- Set up the database:
> script/db-create
Optionally, run all the database migrations (these will be automatically run every time you start the server).
> script/db-migrate up
Optionally, if you want to start a `mysql` client on the database, you can run:
> script/db-console
- Start the server:
> script/server
- Fetch case data: This script requires some environment variables (see `script/fetch-recorded-data --help`). Most of the values can be found in the `.env` file; the database name is `github_covid_modelling_dev`. If you've already got your `.env` set up, you can run the script with foreman to avoid setting them manually:
> npx foreman run script/fetch-recorded-data
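If you'd rather not use foreman, you can export the variables from `.env` yourself before running the script. A minimal sketch, demonstrated with a scratch file, and assuming the file contains only simple unquoted `KEY=value` lines:

```shell
# Create a scratch .env to demonstrate; in the project you'd source ./.env
mkdir -p /tmp/env-demo && cd /tmp/env-demo
cat > .env <<'EOF'
DB_DATABASE=github_covid_modelling_dev
EOF

set -a      # auto-export every variable assigned while sourcing
. ./.env
set +a

echo "$DB_DATABASE"   # the variable is now in the environment
```

After sourcing your real `.env` this way, `script/fetch-recorded-data` would see the same variables foreman would have provided.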
- Authorize your local user to log in:
> script/authorize-local-user $my_github_username
- Go to your browser to view the interface at http://localhost:3000.
Advanced mode requires a number of secrets documented in `env.yml`, whose development values can be accessed by following instructions in the private maintainers-only documentation.
- Start an HTTP ngrok proxy pointing to port 3000 and note its URL (such as "https://e028f3f1.ngrok.io"):
> script/tunnel
Note: If you don't want to install ngrok locally, you can instead run `script/tunnel-docker`, then visit http://localhost:46000 to see the proxy URL.
- Get the client ID and secret of the OAuth development app (not created yet). You'll be prompted for them in the next step.
- Run the environment setup script:
> script/setup
This script will ask you a series of questions; this time answer no, you don't want to run in local mode.
It will then ask for a number of environment variables, each of which can be obtained by following the instructions in the maintainer docs.
It will also ask you for a `RUNNER_CALLBACK_URL`, which should be the URL of your ngrok proxy.
- Start the server:
> script/server
- Fetch case data: This script requires some environment variables (see `script/fetch-recorded-data --help`), but if you've already got your `.env` set up, you can run the script with foreman to avoid setting them manually:
  > npx foreman run script/fetch-recorded-data
- Authorize your local user to log in:
> script/authorize-local-user $my_github_username
For testing purposes, you may wish to run the environment in production mode, and connect to a database on Azure. To do so, you can mostly follow the instructions for Advanced mode, with the following caveats:
- Use the connection details of your remote database
- For Azure (and possibly others), the DB username has to be of the form: username@host
- You cannot run the `script/db-*` scripts, but you don't need to anyway:
  - The database is created as part of the infrastructure setup
- The database is migrated when you start the server using the below command
- To run the server, you need to first run `export COMPOSE_FILE=docker-compose.release.yml` before running `script/server`
- You cannot run `script/fetch-recorded-data` directly; you instead need to run it in the container:
  > docker compose exec --env NODE_ENV=production web npx foreman run script/fetch-recorded-data
- You cannot currently use the `script/authorize-local-user` script; you instead need to connect to the database and add entries manually
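For the `username@host` form required by Azure, the value is just the plain username joined to the server name with `@`. A quick sketch with hypothetical values:

```shell
# Hypothetical values - substitute your real Azure DB user and server name
DB_USER=webadmin
DB_SERVER=covid-dev-db
DB_USERNAME="${DB_USER}@${DB_SERVER}"
echo "$DB_USERNAME"   # webadmin@covid-dev-db
```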
Configuration can also be carried out by directly editing the `.env` file.
The environment variables are documented in `env.yml`.
The file `models.yml` contains metadata specifying which model connectors are used when carrying out simulations.
This file is in source control, and should be kept up-to-date with information on each model, with values that are appropriate as a default for development purposes.
- All model connectors should be included in here (even if not all deployments use them)
- The information (description, links etc.) should be as useful to a general audience as possible
- `enabled` should be set to `true` if the connector image is publicly available, and `false` otherwise
- `imageURL` should be set to install the `latest` (or similar) version of each image
Do not edit this file in order to tailor your specific deployment.
Instead, you can provide a file called `.override/models.yml` in the same format, and the two files will be merged together.
This file should not be checked into source control.
For example, an override file like the following can disable one connector, enable another, and pin a specific image version:
```yaml
basel:
  enabled: false
covasim:
  enabled: true
mrc-ide-covid-sim:
  imageURL: ghcr.io/covid-policy-modelling/covid-sim-connector/covid-sim-connector:1.10.0
```
You can override any of the properties from `models.yml`, not just the ones mentioned above.
You do not need to provide values for any properties that you are not interested in changing.
Please note however that the merging of files is quite simple: any properties specified in `.override/models.yml` completely replace those in `models.yml`.
As an example, consider adding a new region (`UK`) to the list of supported regions for a model, where `models.yml` contains the following:
```yaml
my-model:
  name: My Model
  ...
  supportedRegions:
    US:
      - US-AL
      - US-AK
```
Your `.override/models.yml` needs to contain:
```yaml
my-model:
  supportedRegions:
    US:
      - US-AL
      - US-AK
    UK: []
```
The following would not work, and would effectively remove the US regions:
```yaml
my-model:
  supportedRegions:
    UK: []
```
Tests can be executed by running `npm test`.
Tests will be executed automatically by GitHub Actions when commits are made.
Note that the test suite is quite minimal at present.
There are some additional scripts for testing the `fetch-recorded-data` script.
These aren't fully automated as they have external dependencies.
See `verification/README.md` for instructions.
The API can be tested by running `script/test-api`.
This uses the example Python API client to execute a set of API commands and display the output.
Before running the tests, you must obtain an API token and place it in a file named `.env.test` with the contents:

```
API_TOKEN=eyJ...
```
In development, database migrations are run automatically when the web container starts. In staging/production, migrations are run manually via a GitHub Action.
To create a database migration:
# Create a migration
> script/db-migrate create name-of-migration --sql-file
We pass the `--sql-file` flag here because we write migrations in plain SQL in this project.
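With `--sql-file`, `db-migrate` generates a JavaScript shim plus a pair of SQL files for you to fill in. The layout below is a sketch based on db-migrate's usual conventions; the timestamps are illustrative:

```
migrations/20230101120000-name-of-migration.js            # shim that loads the SQL files
migrations/sqls/20230101120000-name-of-migration-up.sql   # applied by `script/db-migrate up`
migrations/sqls/20230101120000-name-of-migration-down.sql # applied on rollback
```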
The `case_data` and `intervention_data` tables are populated by the `fetch-recorded-data` script.
This must be run manually for local development.
This is run nightly via a GitHub Action on staging and production.
The API documentation must be updated manually if routes/types are changed.
This can be done with `script/generate-docs`.
GitHub Actions will build, test, and publish whenever changes are committed to this repository.
To build and publish a numbered version of a package, run `npm version [major | minor | patch]`, then run `git push --tags`.
- Pages are in `pages/{route}.tsx`.
- Components are in `components/{Component}.tsx`.
- API functions are in `pages/api/{route}.tsx`.
- Next.js pages
- Next.js API routes
- Vercel serverless functions
- Vercel environment variables and secrets
- React documentation
We welcome contributions to this project from the community. See CONTRIBUTING.md.
This project is licensed under the MIT license. See LICENSE.