From 0ca35549f49088ebf51e40d6a7166f79d724d908 Mon Sep 17 00:00:00 2001
From: Evgeni Sautin
Date: Wed, 1 Nov 2023 15:42:53 +0300
Subject: [PATCH] Extend the project description

---
 README.md | 95 ++++++++++++++++++++++++++++++++++++++++++++++++-------
 1 file changed, 83 insertions(+), 12 deletions(-)

diff --git a/README.md b/README.md
index 320e0e3..5dcf9f5 100644
--- a/README.md
+++ b/README.md
@@ -2,7 +2,19 @@
 
 ![Continuous Integration and Delivery](https://github.com/spyker77/fastapi-tdd-docker/workflows/Continuous%20Integration%20and%20Delivery/badge.svg?branch=main)
 
-This is the implementation of [the course](https://testdriven.io/courses/tdd-fastapi/) with the following changes so far:
+## Table of Contents
+
+- [Introduction](#introduction)
+- [Prerequisites](#prerequisites)
+- [Installation](#installation)
+- [Usage](#usage)
+- [Testing](#testing)
+- [Deployment](#deployment)
+- [License](#license)
+
+## Introduction
+
+This project is an implementation of [the course](https://testdriven.io/courses/tdd-fastapi/) on Test-Driven Development, building an API that summarizes articles, with additional features and optimizations. Key enhancements include:
 
 - Dependencies updated to the latest version at the moment
 - CORSMiddleware used to manually control allowed origins
@@ -17,34 +29,93 @@ This is the implementation of [the course](https://testdriven.io/courses/tdd-fas
 - Tortoise-ORM has been replaced by SQLAlchemy
 - Transformers model is used instead of NLP from Newspaper3k
 
-## Quick Start
+## Prerequisites
+
+- Docker and Docker Compose installed on your machine.
+- Basic knowledge of Python, FastAPI, and Docker.
 
-Spin up the containers:
+## Installation
+
+1. Clone the repository:
+
+```bash
+git clone https://github.com/spyker77/fastapi-tdd-docker.git
+cd fastapi-tdd-docker
+```
+
+2. Build and start the containers:
 
 ```bash
 docker compose up -d --build
 ```
 
-Generate the database schema on first launch:
+3. Generate the database schema:
 
 ```bash
 docker compose exec web alembic upgrade head
 ```
 
-Open in your browser:
+## Usage
+
+1. Access the API documentation at:
+2. Create a test user at:
+
+   Example payload:
+
+```json
+{
+    "full_name": "Cute Koala",
+    "username": "cute",
+    "email": "cute@example.com",
+    "password": "supersecret"
+}
+```
-Tests can be run with this command:
+3. Click the **Authorize** button at the top of the page (the simplest way), enter the username and password you just created, and click **Authorize**.
+
+4. You're now authorized. Use the endpoint to send the URL of the article you want summarized, for example:
+
+```json
+{
+    "url": "https://dev.to/spyker77/how-to-connect-godaddy-domain-with-heroku-and-cloudflare-mdh"
+}
+```
+
+5. This triggers a download of the ML model, which may take a few minutes on the first run (in the current implementation). After that, call the endpoint, and the response should look something like:
+
+```json
+[
+    {
+        "id": 1,
+        "url": "https://dev.to/spyker77/how-to-connect-godaddy-domain-with-heroku-and-cloudflare-mdh",
+        "summary": "If you struggle to connect newly registered domain from GoDaddy with your app at Heroku, and in addition would like to use advantages of Cloudflare – this article is for you. Hope it will help you and without many words, let's jump in!Sections: Heroku settings, Cloudflare settings and GoDaddy settings.",
+        "user_id": 1
+    }
+]
+```
+
+## Testing
+
+Run the tests using the following command:
 
 ```bash
 docker compose exec web pytest -n auto --cov
 ```
 
-## Note
+## Deployment
+
+For production deployment, remember to change the ENVIRONMENT variables. The default CI/CD pipeline is set up to build images with GitHub Actions, store them in GitHub Packages, and deploy the application to Heroku, but the deployment step is currently disabled/commented out.
+
+### Note
+
+The current implementation of text summarization using the transformer model is not ideal for production for the following reasons:
+
+- The requirement to install a heavy transformers library along with its dependencies.
+- The necessity to download several gigabytes of the model.
+- The need for powerful hardware to run the model.
-Current implementation of text summarization using the transformer model is not ideal for production:
+Typically, in a production environment, the model would be provided via an API using services like AWS SageMaker or Paperspace Gradient.
-- The need to install a heavy transformers library with its dependencies;
-- The need to download several gigs of the model;
-- The need for powerful hardware to run it.
+## License
-Typically, the model is provided via API using services like AWS SageMaker or Paperspace Gradient.
+This project is licensed under the MIT License. See the [LICENSE](LICENSE) file for details.
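The authorize-then-summarize flow described in the Usage section of the patched README can also be scripted rather than driven through the Swagger UI. The sketch below only builds the HTTP requests with Python's standard library and sends nothing; the base URL and the `/token` and `/summaries/` endpoint paths are assumptions (the README elides the actual links), so confirm the real routes on the interactive docs page before using it.

```python
import json
import urllib.parse
import urllib.request

# Assumed local address of the docker compose setup; the README does not
# spell out the URL, so adjust if your port mapping differs.
BASE_URL = "http://localhost:8000"


def build_token_request(username: str, password: str) -> urllib.request.Request:
    """Build the OAuth2 password-flow login request.

    The /token path is a hypothetical name for the login endpoint behind
    the Swagger "Authorize" button.
    """
    body = urllib.parse.urlencode({"username": username, "password": password}).encode()
    return urllib.request.Request(
        f"{BASE_URL}/token",
        data=body,
        headers={"Content-Type": "application/x-www-form-urlencoded"},
        method="POST",
    )


def build_summary_request(token: str, article_url: str) -> urllib.request.Request:
    """Build the summary-creation request with the Bearer token attached.

    The /summaries/ path is likewise an assumption.
    """
    payload = json.dumps({"url": article_url}).encode()
    return urllib.request.Request(
        f"{BASE_URL}/summaries/",
        data=payload,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {token}",
        },
        method="POST",
    )


# With the containers running, each request would be sent with
# urllib.request.urlopen(...); here we only build and inspect it.
req = build_summary_request(
    "dummy-token",
    "https://dev.to/spyker77/how-to-connect-godaddy-domain-with-heroku-and-cloudflare-mdh",
)
print(req.get_method(), req.full_url)  # → POST http://localhost:8000/summaries/
```

Because the endpoint paths are guesses, treat this as a template: swap in the routes actually listed on the API docs page.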