feat(python-examples): Python S3 examples for both containers and functions #72

Merged · 6 commits · Feb 1, 2024

5 changes: 5 additions & 0 deletions .gitignore
@@ -16,3 +16,8 @@ node_modules/

# Python
venv/
__pycache__/

# Python API framework
package/
.scw
1 change: 1 addition & 0 deletions README.md
@@ -66,6 +66,7 @@ Table of Contents:
| **[NGINX CORS Private](containers/nginx-cors-private-python/README.md)** <br/> An NGINX proxy to allow CORS requests to a private container. | Python Flask | [Terraform] |
| **[NGINX hello world](containers/nginx-hello-world/README.md)** <br/> A minimal example running the base NGINX image in a serverless container. | N/A | [Serverless Framework] |
| **[Python hello world](containers/python-hello-world/README.md)** <br/> A minimal example running a Flask HTTP server in a serverless container. | N/A | [Serverless Framework] |
| **[Python S3 upload](containers/python-s3-upload/README.md)** <br/> A Python + Flask HTTP server that receives file uploads and writes them to S3. | N/A | [Terraform] |
| **[Terraform NGINX hello world](containers/terraform-nginx-hello-world/README.md)** <br/> A minimal example running the base NGINX image in a serverless container deployed with Terraform. | N/A | [Terraform] |
| **[Triggers with Terraform](containers/terraform-triggers/README.md)** <br/> Configuring two SQS triggers, used to trigger two containers, one public, one private. | N/A | [Terraform] |

48 changes: 48 additions & 0 deletions containers/python-s3-upload/README.md
@@ -0,0 +1,48 @@
# Container used to upload files to S3

This container does the following:

* Reads a file from an HTTP request form
* Stores the file in S3

## Requirements

- You have an account and are logged into the [Scaleway console](https://console.scaleway.com)
- You have created an API key in the [console](https://console.scaleway.com/iam/api-keys), with at least the `ObjectStorageFullAccess`, `ContainerRegistryFullAccess`, and `FunctionsFullAccess` permissions, plus access to the relevant project for Object Storage
- You have [Terraform](https://registry.terraform.io/providers/scaleway/scaleway/latest/docs) installed on your machine
- You have logged in to the Scaleway Container Registry (`scw registry login`)

## Deploy on Scaleway

First you need to set the following environment variables:

```bash
export TF_VAR_access_key=<your API access key>
export TF_VAR_secret_key=<your API secret key>
export TF_VAR_project_id=<your project id>
```

Deployment can be done by running:

```bash
terraform init

terraform plan

terraform apply
```

You can then query your container by running:

```bash
# Upload a random Terraform file to the bucket
curl -F file=@main.tf $(terraform output -raw endpoint)
```
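
If you prefer to script the upload, a rough Python equivalent of the curl command is sketched below. The endpoint placeholder stands for the value of `terraform output -raw endpoint`, and `requests` is an extra dependency that is not part of this example:

```python
# Sketch: upload a file with Python instead of curl.
# Replace the endpoint with the value of `terraform output -raw endpoint`;
# `requests` is not part of this example's requirements.
import requests

endpoint = "https://<your container endpoint>"

with open("main.tf", "rb") as f:
    # The container reads the upload from the "file" form-data key
    response = requests.post(endpoint, files={"file": f})

print(response.status_code, response.text)
```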

You can get the bucket name with:

```bash
terraform output -raw bucket_name
```

You should then see the `main.tf` file uploaded to your bucket.
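
To double-check the upload outside the console, a short boto3 script can list the bucket contents. This is a sketch that assumes the `TF_VAR_*` variables from the deploy step are still set and takes the bucket name as a command-line argument:

```python
# Sketch: verify the upload by listing the bucket with boto3.
# Assumes the TF_VAR_* variables exported during deployment are set;
# pass the bucket name (from `terraform output -raw bucket_name`) as argv[1].
import os
import sys

import boto3

s3 = boto3.client(
    "s3",
    region_name="fr-par",
    endpoint_url="https://s3.fr-par.scw.cloud",
    aws_access_key_id=os.environ["TF_VAR_access_key"],
    aws_secret_access_key=os.environ["TF_VAR_secret_key"],
)

for obj in s3.list_objects_v2(Bucket=sys.argv[1]).get("Contents", []):
    print(obj["Key"], obj["Size"])
```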
10 changes: 10 additions & 0 deletions containers/python-s3-upload/container/Dockerfile
@@ -0,0 +1,10 @@
FROM python:3.10
WORKDIR /app

RUN pip3 install --upgrade pip
COPY requirements.txt .
# Install dependencies into /app alongside the application code
RUN pip3 install -r requirements.txt --target .

COPY app.py .

CMD [ "python3", "./app.py" ]
52 changes: 52 additions & 0 deletions containers/python-s3-upload/container/app.py
@@ -0,0 +1,52 @@
import logging
import os

import boto3
from flask import Flask, request

REGION = "fr-par"
S3_URL = "https://s3.fr-par.scw.cloud"

SCW_ACCESS_KEY = os.environ["ACCESS_KEY"]
SCW_SECRET_KEY = os.environ["SECRET_KEY"]
BUCKET_NAME = os.environ["BUCKET_NAME"]

logging.basicConfig(level=logging.INFO)

app = Flask(__name__)


@app.route("/", methods=["GET"])
def hello():
return {
"statusCode": 200,
"body": "Hello from the container!",
}


@app.route("/", methods=["POST"])
def upload():
s3 = boto3.client(
"s3",
region_name=REGION,
use_ssl=True,
endpoint_url=S3_URL,
aws_access_key_id=SCW_ACCESS_KEY,
aws_secret_access_key=SCW_SECRET_KEY,
)

uploaded_file = request.files["file"]
file_body = uploaded_file.read()

logging.info(f"Uploading to {BUCKET_NAME}/{uploaded_file.filename}")

s3.put_object(Key=uploaded_file.filename, Bucket=BUCKET_NAME, Body=file_body)

return {
"statusCode": 200,
"body": f"Successfully uploaded {uploaded_file.filename} to bucket!",
}


if __name__ == "__main__":
app.run(debug=True, host="0.0.0.0", port=8080)
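
For a quick check of the route wiring without deploying, Flask's built-in test client can POST a file to the app. This is a sketch only: it assumes the `ACCESS_KEY`, `SECRET_KEY`, and `BUCKET_NAME` environment variables are set (app.py reads them at import time), and the request will write to the real bucket:

```python
# Sketch: exercise the upload route with Flask's test client.
# Assumes ACCESS_KEY, SECRET_KEY and BUCKET_NAME are set, since app.py
# reads them at import time, and the POST writes to the real bucket.
import io

from app import app

client = app.test_client()
response = client.post(
    "/",
    data={"file": (io.BytesIO(b"hello from a test"), "test.txt")},
    content_type="multipart/form-data",
)
print(response.status_code, response.get_json())
```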
3 changes: 3 additions & 0 deletions containers/python-s3-upload/container/requirements.txt
@@ -0,0 +1,3 @@
boto3==1.34.30
chardet==4.0.0
Flask==2.2.2
24 changes: 24 additions & 0 deletions containers/python-s3-upload/terraform/container.tf
@@ -0,0 +1,24 @@
resource "scaleway_container_namespace" "main" {
name = "python-s3-example"
}

resource "scaleway_container" "main" {
name = "python-s3-example"
description = "S3 file uploader"
namespace_id = scaleway_container_namespace.main.id
registry_image = docker_image.main.name
port = 8080
  cpu_limit    = 1000 # in mvCPU (1000 = 1 vCPU)
  memory_limit = 1024 # in MB
min_scale = 0
max_scale = 1
privacy = "public"
deploy = true
environment_variables = {
"BUCKET_NAME" = scaleway_object_bucket.main.name
}
secret_environment_variables = {
"ACCESS_KEY" = var.access_key
"SECRET_KEY" = var.secret_key
}
}
16 changes: 16 additions & 0 deletions containers/python-s3-upload/terraform/image.tf
@@ -0,0 +1,16 @@
resource "scaleway_registry_namespace" "main" {
name = "s3-example-${random_string.suffix.result}"
region = "fr-par"
project_id = var.project_id
}

resource "docker_image" "main" {
name = "${scaleway_registry_namespace.main.endpoint}/s3-example:${var.image_version}"
build {
context = "${path.cwd}/../container"
}

  # Push the locally built image to the Scaleway registry
  provisioner "local-exec" {
command = "docker push ${docker_image.main.name}"
}
}
7 changes: 7 additions & 0 deletions containers/python-s3-upload/terraform/outputs.tf
@@ -0,0 +1,7 @@
output "bucket_name" {
value = scaleway_object_bucket.main.name
}

output "endpoint" {
value = scaleway_container.main.domain_name
}
17 changes: 17 additions & 0 deletions containers/python-s3-upload/terraform/providers.tf
@@ -0,0 +1,17 @@
provider "scaleway" {
zone = "fr-par-1"
region = "fr-par"
access_key = var.access_key
secret_key = var.secret_key
project_id = var.project_id
}

provider "docker" {
host = "unix:///var/run/docker.sock"

registry_auth {
address = scaleway_registry_namespace.main.endpoint
username = "nologin"
password = var.secret_key
}
}
3 changes: 3 additions & 0 deletions containers/python-s3-upload/terraform/s3.tf
@@ -0,0 +1,3 @@
resource "scaleway_object_bucket" "main" {
name = "python-s3-example-${random_string.suffix.result}"
}
22 changes: 22 additions & 0 deletions containers/python-s3-upload/terraform/variables.tf
@@ -0,0 +1,22 @@
variable "access_key" {
type = string
}

variable "secret_key" {
type = string
}

variable "project_id" {
type = string
}

variable "image_version" {
type = string
default = "0.0.2"
}

resource "random_string" "suffix" {
length = 8
upper = false
special = false
}
12 changes: 12 additions & 0 deletions containers/python-s3-upload/terraform/versions.tf
@@ -0,0 +1,12 @@
terraform {
required_providers {
scaleway = {
source = "scaleway/scaleway"
}
docker = {
source = "kreuzwerker/docker"
version = "3.0.2"
}
}
required_version = ">= 0.13"
}
59 changes: 40 additions & 19 deletions functions/python-upload-file-s3-multipart/README.md
@@ -3,42 +3,63 @@
 This function does the following steps:
 
 * Read a file from an HTTP request form
-* Send the file to long-term storage with Glacier for S3
+* Store the file in S3
 
 ## Requirements
 
-This example uses the [Python API Framework](https://github.com/scaleway/serverless-api-project) to deploy the function.
+This example uses the [Python API Framework](https://github.com/scaleway/serverless-api-framework-python) to build and deploy the function.
 
-If needed, create a bucket and provide the following variables in your environment:
+First you need to:
 
-```env
-export SCW_ACCESS_KEY =
-export SCW_SECRET_KEY =
-export BUCKET_NAME =
-```
+- Create an API key in the [console](https://console.scaleway.com/iam/api-keys), with at least the `ObjectStorageFullAccess` and `FunctionsFullAccess` permissions, and access to the relevant project for Object Storage access
+- Get the access key and secret key for this API key
+- Get your project ID
+- Create an S3 bucket
 
-## Running
+You then need to set the following environment variables:
 
-### Running locally
+```bash
+export SCW_ACCESS_KEY=<your access key>
+export SCW_SECRET_KEY=<your secret key>
+export SCW_DEFAULT_PROJECT_ID=<your project id>
+export BUCKET_NAME=<bucket name>
+```
 
-This examples uses [Serverless Functions Python Framework](https://github.com/scaleway/serverless-functions-python) and can be executed locally:
+## Deploy on Scaleway
+
+Deployment can be done with `scw_serverless`:
 
 ```bash
-pip install -r requirements-dev.txt
-python app.py
+pip install --user -r requirements.txt
+
+scw-serverless deploy app.py
 ```
 
-The upload endpoint allows you to upload files to Glacier via the `file` form-data key:
+This will then print out your function's URL. You can use this to test your function with:
 
 ```bash
-echo -e 'Hello world!\n My contents will be stored in a bunker!' > myfile.dat
-curl -F file=@myfile.dat localhost:8080
+# Upload the requirements file
+curl -F file=@requirements.txt <your function URL>
 ```
 
-### Deploying with the API Framework
+You should then see the `requirements.txt` file uploaded to your bucket.
 
-Deployment can be done with `scw_serverless`:
+_Warning_: when deploying the function, do not create a virtual environment directory in the project root, as it will be included in the deployment zip and make it too large.
+
+## Running it locally
+
+You can test your function locally thanks to the [Serverless Functions Python Framework](https://github.com/scaleway/serverless-functions-python). To do this, you can run:
+
+```bash
+pip install --user -r requirements-dev.txt
+
+python app.py
+```
+
+This starts the function locally, allowing you to upload files to S3 via the `file` form-data key:
 
 ```bash
-scw_serverless deploy app.py
+# Upload the requirements file
+curl -F file=@requirements.txt localhost:8080
 ```

40 changes: 17 additions & 23 deletions functions/python-upload-file-s3-multipart/app.py
@@ -1,23 +1,19 @@
-from typing import TYPE_CHECKING
 import logging
 import os
 
 from scw_serverless import Serverless
-if TYPE_CHECKING:
-    from scaleway_functions_python.framework.v1.hints import Context, Event, Response
 
 import boto3
 from streaming_form_data import StreamingFormDataParser
 from streaming_form_data.targets import ValueTarget
 
+REGION = "fr-par"
+S3_URL = "https://s3.fr-par.scw.cloud"
+
 SCW_ACCESS_KEY = os.environ["SCW_ACCESS_KEY"]
 SCW_SECRET_KEY = os.environ["SCW_SECRET_KEY"]
 BUCKET_NAME = os.environ["BUCKET_NAME"]
 
-# Files will be uploaded to cold storage
-# See: https://www.scaleway.com/en/glacier-cold-storage/
-STORAGE_CLASS = "GLACIER"
-
 app = Serverless(
     "s3-utilities",
     secret={
@@ -30,23 +26,21 @@
     },
 )
 
-s3 = boto3.resource(
-    "s3",
-    region_name="fr-par",
-    use_ssl=True,
-    endpoint_url="https://s3.fr-par.scw.cloud",
-    aws_access_key_id=SCW_ACCESS_KEY,
-    aws_secret_access_key=SCW_SECRET_KEY,
-)
-
-bucket = s3.Bucket(BUCKET_NAME)
-
 logging.basicConfig(level=logging.INFO)
 
 
-@app.func()
-def upload(event: "Event", _context: "Context") -> "Response":
-    """Upload form data to S3 Glacier."""
+@app.func(memory_limit=512)
+def upload(event, _context):
+    """Upload form data to S3"""
+
+    s3 = boto3.client(
+        "s3",
+        region_name=REGION,
+        use_ssl=True,
+        endpoint_url=S3_URL,
+        aws_access_key_id=SCW_ACCESS_KEY,
+        aws_secret_access_key=SCW_SECRET_KEY,
+    )
 
     headers = event["headers"]
     parser = StreamingFormDataParser(headers=headers)
@@ -63,8 +57,8 @@ def upload(event: "Event", _context: "Context") -> "Response":
 
     name = target.multipart_filename
 
-    logging.info("Uploading file %s to Glacier on %s", name, bucket.name)
-    bucket.put_object(Key=name, Body=target.value, StorageClass=STORAGE_CLASS)
+    logging.info(f"Uploading to {BUCKET_NAME}/{name}")
+    s3.put_object(Bucket=BUCKET_NAME, Key=name, Body=target.value)
 
     return {"statusCode": 200, "body": f"Successfully uploaded {name} to bucket!"}
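
The handler's parsing pattern — register a `ValueTarget` for the `file` field, then feed the raw request body to the parser — can be tried standalone. Below is a minimal sketch with a hand-built multipart body; the boundary and contents are made up for illustration, whereas the real handler feeds in the event's headers and body:

```python
# Sketch of the multipart parsing pattern used by the function:
# register a target for the "file" field, then feed the raw body in chunks.
from streaming_form_data import StreamingFormDataParser
from streaming_form_data.targets import ValueTarget

boundary = "boundary123"  # illustrative boundary, not from the real event
body = (
    f"--{boundary}\r\n"
    'Content-Disposition: form-data; name="file"; filename="hello.txt"\r\n'
    "\r\n"
    "Hello world!\r\n"
    f"--{boundary}--\r\n"
).encode()

parser = StreamingFormDataParser(
    headers={"Content-Type": f"multipart/form-data; boundary={boundary}"}
)
target = ValueTarget()
parser.register("file", target)
parser.data_received(body)

print(target.multipart_filename)  # hello.txt
print(target.value)               # b'Hello world!'
```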
