Deployment

In this folder we show how to deploy a Concrete ML model that does sentiment analysis, either through Docker or Amazon Web Services. This is based on the sentiment analysis use case example, in which an XGBoost model is trained on top of a Transformer model.

Get started

To run this example on AWS you will also need to have the AWS CLI properly set up on your system. To do so, please refer to the AWS documentation. You can also run this example locally using Docker, or simply by running the scripts on your machine.
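As a quick sanity check before starting, you can confirm that the prerequisites are available (a minimal sketch only; installing the AWS CLI and Docker is platform-specific and covered by their respective documentation):

```bash
# Check that the AWS CLI is installed and configured (only needed for the AWS path).
aws --version
aws configure list

# Check that Docker is installed and the daemon is reachable (needed for the Docker path).
docker --version
docker info
```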

  1. To train your model, use train.py, or train_with_docker.sh to use Docker (recommended). This operation might take some time. It trains the model and serializes the FHE circuit, producing a new folder called ./dev.
  2. Once that's done, deploy the server with the deploy_to_docker.py script provided in Concrete ML under src/concrete/ml/deployment/:
  • python use_case_examples/deployment/server/deploy_to_docker.py
  3. Then launch the build_docker_client_image.sh script to build a client Docker image.
  4. Finally, run the client using the client.sh script. This runs the container in interactive mode. To interact with the server, launch the client.py script using URL="<my_url>" python client.py, where <my_url> is the content of the url.txt file. The full command sequence is sketched after this list.
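Putting the steps together, the end-to-end flow looks roughly like the following (a sketch only: the exact location of the training and client scripts inside the repository may differ, and <my_url> must be replaced by the content of url.txt):

```bash
# 1. Train the model and serialize the FHE circuit (creates ./dev).
bash train_with_docker.sh

# 2. Deploy the server in a Docker container using the Concrete ML deployment script.
python use_case_examples/deployment/server/deploy_to_docker.py

# 3. Build the client Docker image.
bash build_docker_client_image.sh

# 4. Run the client container in interactive mode, then query the server.
bash client.sh
URL="<my_url>" python client.py   # <my_url> is the content of url.txt
```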

And there you have it: you have deployed a Concrete ML model and run an inference using Fully Homomorphic Encryption.