Jonathan Schneider edited this page Mar 26, 2019 · 12 revisions

Prerequisites

The complete setup is done using Docker, which means Docker must be running for a local execution. As the user interface files are stored in a separate repository, you have to clone the repository together with its submodules:

git clone --recurse-submodules [email protected]:danthe96/mpci.git
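If you already have a plain clone without the `--recurse-submodules` flag, the submodules can still be fetched afterwards with a standard Git command:

```shell
# Fetch the UI submodule(s) into an existing clone
git submodule update --init --recursive
```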

To make it easier to get from a git clone to an up-and-running project, we prepared some scripts for you. Here is a quick mapping of what our scripts are named and what they are responsible for:

  • bash scripts/bootstrap.sh – installs/updates all dependencies
  • bash scripts/setup.sh – sets up a project to be used for the first time
  • bash scripts/update.sh – updates a project to run at its current version
  • bash scripts/server.sh – starts app
  • bash scripts/demo.sh – starts app with example dataset and experiment pre-configured
  • bash scripts/test.sh – runs tests
  • bash scripts/cibuild.sh – invoked by continuous integration servers to run tests
  • bash scripts/console.sh – opens a console

Some of the scripts accept parameters that are passed through the underlying docker commands. For example, you can start a server in detached mode with bash scripts/server.sh --detach or run a specific test with bash scripts/test.sh test/unit/master/resources/test_job.py.

We provide three different ways to run the Causal Inference Pipeline:

  1. backend – starts just the backend with a Postgres database and a database UI - uses docker-compose.yml

  2. staging – deploys the backend with an additional nginx server that serves static files and provides the backend functionality by connecting to uWSGI. The UI files are transpiled during the build. - uses docker-compose-staging.yml

  3. production – same setup as staging but without the database UI. Make sure to override the DB credentials - uses docker-compose-prod.yml

Change the environment variable MPCI_ENVIRONMENT in conf/backend.env accordingly to choose the desired setup. The default is backend.
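For example, to select the staging setup, the variable in conf/backend.env would look like this (per the list above, the other valid values are backend and production):

```shell
# conf/backend.env
MPCI_ENVIRONMENT=staging
```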

A database user interface is available at http://localhost:8081 in the backend or staging setup.

Try it out

Just run bash scripts/demo.sh and open http://localhost:5000 in your browser. To also deploy the user interface, make sure MPCI_ENVIRONMENT in your conf/backend.env is set to staging.

Migrating the database

A clean database is needed to launch. This is especially important because the tests create all tables without using the migration system. To get a clean database, run:

bash scripts/setup.sh

This command will clear all volumes, including the database.

Alternatively, you can run the following SQL command using the DB UI or a Postgres admin interface of your choice:

DROP SCHEMA public CASCADE; CREATE SCHEMA public;
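If the stack is already running, the same statement can also be issued through the Postgres container. This is only a sketch: the service name db and the user postgres are assumptions — check docker-compose.yml for the actual values.

```shell
# Hypothetical: service name "db" and user "postgres" are assumptions;
# look up the real values in docker-compose.yml
docker-compose exec db psql -U postgres -c "DROP SCHEMA public CASCADE; CREATE SCHEMA public;"
```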

When the models have been changed, make sure your database is up to date by using:

bash scripts/update.sh

Afterwards, you can auto-generate a migration by using:

docker-compose run --rm backend flask db migrate -m "migration message"

If Alembic does not detect your changes correctly, you can manually create an empty migration by using:

docker-compose run --rm backend flask db revision -m "migration message"

You can then either manipulate the autogenerated commands or insert new commands in the new migration file.

When you are done, re-run the upgrade (bash scripts/update.sh) to apply your changes to your local instance.
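Putting the steps above together, a typical model-change workflow might look like this (the migration message is only an illustration):

```shell
# 1. Generate a migration from the changed models
docker-compose run --rm backend flask db migrate -m "describe your change"
# 2. Review/adjust the generated file under migrations/versions/
# 3. Apply it to the local database
docker-compose run --rm backend flask db upgrade
```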

Alembic is used for the migration system. Alembic does not auto-detect the following changes correctly:

  • Table and column renames (detected as a delete plus an add under the new name)
  • Column type changes (not detected at all; remember to add a conversion function when adding them manually)

This list might not be complete; be sure to check the Alembic documentation for further information.