This is an example of a Containerized Flask Application that can be the core ingredient in many "recipes", i.e. deploy targets.
- Create a Github Repo (if not created)
- Open AWS Cloud Shell
- Create ssh-keys in AWS Cloud Shell
- Upload ssh-keys to Github
- Create scaffolding for project (if not created): Makefile, requirements.txt, etc.
Feel free to test my ML project:

docker pull ghcr.io/noahgift/python-mlops-cookbook:latest
Key files in the project:

- Makefile
- requirements.txt
- cli.py
- utilscli.py
- app.py
- mlib.py (model handling library; see the sketch after this list)
- htwtmlb.csv (useful for input scaling)
- model.joblib
- Dockerfile
- Baseball_Predictions_Export_Model.ipynb
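To make the pieces concrete, here is a minimal sketch of what a model-handling module like mlib.py could look like. It assumes scikit-learn, pandas, and joblib, that model.joblib maps a scaled weight to a height in inches, and that the CSV column is named "Weight"; all of those details are illustrative, not the repo's exact code.

# Hypothetical sketch of a model-handling library like mlib.py.
# Assumptions: model.joblib predicts height in inches from a scaled
# weight, and htwtmlb.csv has a "Weight" column (check the real CSV).
import joblib
import pandas as pd
from sklearn.preprocessing import StandardScaler

def load_model(path="model.joblib"):
    """Load the persisted scikit-learn model from disk."""
    return joblib.load(path)

def scale_input(value, csv_path="htwtmlb.csv", column="Weight"):
    """Scale a raw weight using statistics from the training CSV."""
    df = pd.read_csv(csv_path)
    scaler = StandardScaler().fit(df[[column]])
    return scaler.transform([[value]])

def predict(weight):
    """Return a height prediction for a weight in pounds."""
    model = load_model()
    height_inches = float(model.predict(scale_input(weight))[0])
    feet, inches = int(height_inches // 12), round(height_inches % 12)
    return {
        "height_inches": round(height_inches, 2),
        "height_human_readable": f"{feet} foot, {inches} inches",
    }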
There are two CLI tools. First, the main cli.py is the endpoint that serves out predictions. To predict the height of an MLB player, use the following:

./cli.py --weight 180

Another example of the predict output:

$ ./cli.py --weight 340
Output: 6 foot, 10 inches
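A CLI like this is commonly built with the click library. The following is a rough sketch under that assumption; the option default and output formatting are illustrative, not the repo's exact code.

#!/usr/bin/env python
# Hypothetical sketch of cli.py, assuming the click library and the
# mlib module sketched above; names and defaults are illustrative.
import click
import mlib

@click.command()
@click.option("--weight", type=float, default=225, help="Weight in pounds")
def predictcli(weight):
    """Predict the height of an MLB player from weight."""
    result = mlib.predict(weight)
    click.echo(click.style(str(result), fg="green"))

if __name__ == "__main__":
    predictcli()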
The second CLI tool is utilscli.py, which performs model retraining and could serve as the entry point for doing more things. For example, this version doesn't change the default model_name, but you could add that as an option by forking this repo.
Here is an example of retraining the model:

./utilscli.py retrain --tsize 0.4
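Under the hood, a retrain subcommand like this typically splits the data, refits the model, and persists it, with --tsize mapping to the test split size. Here is a minimal sketch assuming scikit-learn; the Ridge model, column names, and random seed are assumptions, not the repo's exact code.

# Hypothetical retrain helper; --tsize would map to test_size here.
# Assumptions: htwtmlb.csv has "Weight" and "Height" columns and a
# Ridge regression is the model of choice.
import pandas as pd
from joblib import dump
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

def retrain(tsize=0.1, model_name="model.joblib"):
    """Refit the model on a fresh train/test split and persist it."""
    df = pd.read_csv("htwtmlb.csv")
    # Scale weight the same way mlib.scale_input does above.
    X = StandardScaler().fit_transform(df[["Weight"]])
    y = df["Height"]
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=tsize, random_state=0
    )
    model = Ridge().fit(X_train, y_train)
    accuracy = model.score(X_test, y_test)  # R^2 on the held-out split
    dump(model, model_name)
    return accuracy, model_name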
Additionally, you can query the API via the CLI, allowing you to change both the host and the value passed to the API. This is accomplished through the requests library.
./utilscli.py predict --weight 400
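A predict subcommand along those lines might wrap a small helper like this one; the /predict route, payload key, and default host are assumptions consistent with the local run shown below.

# Hypothetical sketch of the API query behind utilscli.py predict.
# Assumptions: the app exposes POST /predict on port 8080 and expects
# a JSON payload keyed by "Weight".
import requests

def api_predict(weight, host="http://localhost:8080/predict"):
    """POST a weight to the prediction API and return the JSON result."""
    payload = {"Weight": weight}
    response = requests.post(host, json=payload)
    response.raise_for_status()
    return response.json()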
The Flask ML Microservice can be run many ways. You can run the Flask Microservice locally with the command python app.py.
(.venv) ec2-user:~/environment/Python-MLOps-Cookbook (main) $ python app.py
* Serving Flask app "app" (lazy loading)
* Environment: production
WARNING: This is a development server. Do not use it in a production deployment.
Use a production WSGI server instead.
* Debug mode: on
INFO:werkzeug: * Running on http://127.0.0.1:8080/ (Press CTRL+C to quit)
INFO:werkzeug: * Restarting with stat
WARNING:werkzeug: * Debugger is active!
INFO:werkzeug: * Debugger PIN: 251-481-511
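An app.py that produces this output might look roughly like the following sketch; the /predict route and payload key are assumptions consistent with the helper above, not the repo's exact code.

# Hypothetical sketch of a Flask app like app.py, assuming the mlib
# module sketched earlier; the route and payload key are illustrative.
from flask import Flask, jsonify, request
import mlib

app = Flask(__name__)

@app.route("/")
def home():
    return "Predict the height of an MLB player from weight"

@app.route("/predict", methods=["POST"])
def predict():
    """Accept JSON like {"Weight": 225} and return a height prediction."""
    json_payload = request.json
    prediction = mlib.predict(json_payload["Weight"])
    return jsonify({"prediction": prediction})

if __name__ == "__main__":
    # Matches the local run above: 127.0.0.1:8080 with debug mode on.
    app.run(port=8080, debug=True)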
To serve a prediction against the application, run the predict.sh script.
(.venv) ec2-user:~/environment/Python-MLOps-Cookbook (main) $ ./predict.sh
Port: 8080
{
  "prediction": {
    "height_human_readable": "6 foot, 2 inches",
    "height_inches": 73.61
  }
}
Here is an example of how to build the container and run it locally. These are the contents of a helper shell script:
#!/usr/bin/env bash
# Build image
# change the tag for a new container registry, e.g. gcr.io/bob
docker build --tag=noahgift/mlops-cookbook .
# List docker images
docker image ls
# Run flask app
docker run -p 127.0.0.1:8080:8080 noahgift/mlops-cookbook
To set up the container build process, do the following. This is also covered in greater detail by Alfredo Deza in the book Practical MLOps.
build-container:
  runs-on: ubuntu-latest
  steps:
    - uses: actions/checkout@v2
    - name: Login to GitHub registry
      uses: docker/login-action@v1
      with:
        registry: ghcr.io
        username: ${{ github.repository_owner }}
        password: ${{ secrets.BUILDCONTAINERS }}
    - name: build flask app
      uses: docker/build-push-action@v2
      with:
        context: ./
        #tags: alfredodeza/flask-roberta:latest
        tags: ghcr.io/noahgift/python-mlops-cookbook:latest
        push: true
Because this project uses DevOps/MLOps best practices, including linting, testing, and automated deployment, it can serve as the base for many deployment targets.
[In progress....]
Watch a YouTube Walkthrough on AWS App Runner for this repo here: https://www.youtube.com/watch?v=zzNnxDTWtXA