Document the current GHA build and deploy workflows #373

Draft · wants to merge 12 commits into base: `main`
58 changes: 58 additions & 0 deletions .github/README.md
@@ -0,0 +1,58 @@
# Current GitHub Workflows for building and deploying ReportVision in Dev or Demo

## Prerequisites

Secrets for authenticating GitHub Actions to Azure are stored in the repository's GitHub Settings. At the time of reading this, you may need to create new federated credentials and Resource Groups in your Azure account, and also update the existing `AZURE_CLIENT_ID`, `AZURE_TENANT_ID`, and `AZURE_SUBSCRIPTION_ID` secrets in each GitHub Environment.
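When the federated credentials rotate, the three secrets can be updated per GitHub Environment from the `gh` CLI. This is only a sketch: it prints the `gh secret set --env` commands to run rather than executing them, and the secret values are placeholders.

```shell
# Print the gh CLI commands that would update the Azure OIDC secrets in
# each GitHub Environment (assumes gh is authenticated with repo admin access).
for env in dev demo; do
  for secret in AZURE_CLIENT_ID AZURE_TENANT_ID AZURE_SUBSCRIPTION_ID; do
    echo "gh secret set ${secret} --env ${env} --body <new-value>"
  done
done
```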
**Review comment (Collaborator):**

The requirements should have some bullet point items of what individuals need to build or run things locally or to gain access to the repository.

For example:

- Admin privileges to GitHub Settings
- Azure admin privileges to your account
- Terraform installed

Recommend updating the wording of the current Prerequisites to something more definitive:

You will need to update the secrets and the following variable values: `AZURE_CLIENT_ID`, `AZURE_TENANT_ID`, `AZURE_SUBSCRIPTION_ID`.

Not sure if the Resource Group name has to be changed. Azure documentation states the Resource Group name must be unique within a subscription and between 1 and 90 characters long. Perhaps note that if they hit an error about the resource group when they run Terraform, they can change it to a unique name between 1 and 90 characters.

**Review comment (@derekadombek, author, Nov 20, 2024):**

💯 Agree with:

> The requirements should have some bullet point items of what individuals need to build or run things locally or to gain access to the repository.
>
> For example:
>
> - Admin privileges to GitHub Settings
> - Azure admin privileges to your account
> - Terraform installed
>
> Recommend updating the wording of the current Prerequisites to something more definitive:
>
> You will need to update the secrets and the following variable values: `AZURE_CLIENT_ID`, `AZURE_TENANT_ID`, `AZURE_SUBSCRIPTION_ID`.

For this part:

> Not sure if the Resource Group name has to be changed. Azure documentation states the Resource Group name must be unique within a subscription and between 1 and 90 characters long. Perhaps note that if they hit an error about the resource group when they run Terraform, they can change it to a unique name between 1 and 90 characters.

Thank you!

The reason they are required to use that naming convention is to match how the names are variablized in GitHub Actions. The caveat is that whoever takes this over in the future can change the convention, but they will need to change it in both places (Azure and GitHub Actions). I could explain that, but I think it's a little out of scope for this document, because anyone who wants to change it will most likely already have a good understanding of how GitHub Actions deployments work with Azure.


**NOTE**: Resource Groups were intentionally never created by Terraform, to better replicate CDC's Azure setup and its requirements for potential future migrations from Skylight's Azure. CDC would manually create Resource Groups for us.

Azure Resource Group Naming:

- `reportvision-rg-dev`
- `reportvision-rg-demo`
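A minimal sketch of that naming convention, with the manual creation step shown as a printed Azure CLI command (the location value is a placeholder; the group is created by hand, not by Terraform):

```shell
# Derive the resource-group name the GitHub Actions variables expect,
# then print the CLI command an admin would run to create it manually.
env_name="dev"
rg_name="reportvision-rg-${env_name}"
echo "az group create --name ${rg_name} --location eastus"  # location is a placeholder
```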
**Review comment (Collaborator):**

I am a bit confused, I thought you mentioned to Bora and Pri that we were only going to work with one environment (dev) since you stated the demo environment was unstable. Are we still doing two environments now? If so, this is something we will have to discuss with the other engineers since they are still under the impression that we will have these two environments.

**Review comment (Collaborator):**

Thanks for meeting with me yesterday (11/18) and explaining that the demo environment is considered unstable, meaning it is simply another non-production environment, and that we were only using it to demo to prospective partners. I believe your explanation resolves this comment.


## Complete e2e build and deploy for ReportVision

To build and deploy all of ReportVision's services at once, use `deploy-dev.yml`; it targets the dev and demo environments within Azure only.
**Review comment (Collaborator):**

Perhaps say To build versus If you would like to build.


When you deploy a `demo` environment specifically, note that the `.github/actions/deploy-frontend/` GitHub Action will be skipped, because Terraform gives the Azure Storage Account a very unique name. We decided to do this for demo environments to ensure users could never return to a specific URL; we are currently not using any custom domain names. Once `deploy-dev.yml` completes for a demo, just make sure to run `build-deploy-frontend.yml` with the newly created Storage Account from Azure as an input.

Required Inputs:

- `branch`: Any
- `deploy-env`: Drop-down of available environments to choose from.
- `ocr-version`: This will be the Docker tag. At the moment it can be uniquely named and does not require any particular format. We also check whether the tag already exists; if it does, the Docker build step is skipped.
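Besides the Actions UI, the workflow can be dispatched from the `gh` CLI. This sketch only composes and prints the command; it assumes the `branch` input maps to `--ref` (the exact input names depend on the workflow definition), and the tag value is a placeholder.

```shell
# Compose a gh CLI invocation of the e2e deploy workflow and print it for review.
branch="main"
deploy_env="dev"
ocr_version="dev-2024-11-20"   # any unique tag works
cmd="gh workflow run deploy-dev.yml --ref ${branch} -f deploy-env=${deploy_env} -f ocr-version=${ocr_version}"
echo "$cmd"   # run the printed command once the inputs look right
```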


## Build and deploy ReportVision's frontend only

We made a separate workflow, `build-deploy-frontend.yml`, that builds and deploys only the frontend files. The reason is that the OCR-API Docker build currently takes nearly an hour at times; waiting for that, plus the Terraform setup job, just to refresh the frontend can be a giant waste of time. Just make sure the Azure environment is already up from the `deploy-dev.yml` workflow, or at the very least that a Storage Account has been created.

Required Inputs:

- `branch`: Any
- `deploy-env`: Drop-down of available environments to choose from.

Optional Inputs:

- `storage-account-name`: After deploying a demo environment from `deploy-dev.yml`, you will need to use the unique Azure Storage Account name that was created here.
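The demo refresh described above can be sketched as a single dispatch command. As before this only prints the command; the Storage Account name below is a placeholder for the unique name Terraform generated.

```shell
# Compose the command to refresh only the frontend of a demo environment,
# passing the Storage Account that deploy-dev.yml created.
storage_account="reportvisionfe12345"   # placeholder; copy the real name from Azure
cmd="gh workflow run build-deploy-frontend.yml --ref main -f deploy-env=demo -f storage-account-name=${storage_account}"
echo "$cmd"
```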

## Build and deploy ReportVision's OCR-API only

Just like with the frontend, we needed a way to refresh the OCR-API without re-applying Terraform and redeploying the frontend. With `build-deploy-ocr.yml`, we can either build and publish a new OCR-API Docker image or use an already registered one. The OCR-API Docker images are located here: https://github.com/CDCgov/ReportVision/pkgs/container/reportvision-ocr-api. Once the Docker image is ready, the workflow deploys it to the selected environment's Azure App Service web app.

**Note**: Using an already registered Docker image will be MUCH faster than waiting for a new one to be built.

Required Inputs:

- `branch`: Any
- `deploy-env`: Drop-down of available environments to choose from.
- `ocr-version`: This will be the Docker tag. At the moment it can be uniquely named and does not require any particular format. We also check whether the tag already exists; if it does, the Docker build step is skipped.
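To take advantage of the faster path, you can check whether a tag is already published before dispatching. A sketch that prints the commands to run: the registry path is derived from the package link above (ghcr.io image names are lowercase) and the tag is a placeholder.

```shell
# Print a check for an existing OCR-API image tag, then the dispatch command
# that reuses it (skipping the slow docker build).
image="ghcr.io/cdcgov/reportvision-ocr-api"
tag="existing-tag"   # placeholder
echo "docker manifest inspect ${image}:${tag}"   # succeeds only if the tag is already published
echo "gh workflow run build-deploy-ocr.yml --ref main -f deploy-env=dev -f ocr-version=${tag}"
```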

# GitHub Workflows for building and deploying ReportVision in Staging or Production

Unfortunately, we never had the opportunity to pilot our amazing product with actual users, which kept us from deploying to any Staging or Production environments. We also weren't entirely sure whether we'd even be able to deploy to a centrally hosted Azure account like our current one.

If we were able to deploy to a centrally hosted system, our thought would have been to create a `deploy-stage.yml` workflow structured and functioning very similarly to `deploy-dev.yml`, except triggered off the `main` branch or GitHub `tags`. If all staging jobs and tests passed, a `deploy-prod.yml` workflow would be triggered.

If we were required to deploy to STLT-hosted environments, our plan was to ensure that all services are containerized and deployed as a container-orchestrated system with tooling like Kubernetes. That would have meant a complete paradigm shift. There was also the possibility of needing to do both.
4 changes: 2 additions & 2 deletions .github/workflows/build-deploy-frontend.yml
@@ -11,7 +11,7 @@ on:
- dev
- demo
storage-account-name:
-        description: 'After the demo env gets created, copy its blob storage name here'
+        description: 'After the demo env gets created, copy its Azure Storage Account name here'
required: false

permissions:
@@ -32,7 +32,7 @@ jobs:
frontend-build-path: ./frontend/dist/
node-version: 20

-  deploy-with-blob-name-optional:
+  deploy-frontend:
name: Deploy
runs-on: ubuntu-latest
environment: ${{ inputs.deploy-env }}
14 changes: 14 additions & 0 deletions ops/terraform/README.md
@@ -0,0 +1,14 @@
# ReportVision's Terraform Setup

Currently, our infrastructure is built specifically for Azure, with a traditional cloud architecture hosting our frontend code from blob storage and our OCR-API running in an App Service. Both the frontend and the OCR-API are behind a Virtual Network and load balanced by an App Gateway.


## Prerequisites

When using Terraform, you will need to create a `terraform.tfvars` file in the `ops/terraform` directory with the following variables:

```hcl
resource_group_name = "reportvision-rg-<environment-name>"
name = "reportvision"
sku_name = "S2"
```
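With the tfvars file in place, the usual Terraform cycle looks like the commands below. This sketch only prints them (it assumes `az login` has already been run and that the plan-file name is up to you):

```shell
# Print the typical init/plan/apply sequence for the ops/terraform directory.
tf_dir="ops/terraform"
for step in "init" "plan -out=tf.plan" "apply tf.plan"; do
  echo "terraform -chdir=${tf_dir} ${step}"
done
```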