Merge pull request #2777 from GSA-TTS/main
jadudm authored Nov 9, 2023
2 parents 1f38db4 + 59f35f5 commit b2319ff
Showing 33 changed files with 1,336 additions and 135 deletions.
6 changes: 5 additions & 1 deletion .github/workflows/historic-data-migrator.yml
@@ -11,6 +11,10 @@ on:
- dev
- staging
- preview
email:
required: false
type: string
description: Make sure to log in to the env at least once. Email is required when testing in dev; on staging it defaults to CYPRESS_LOGIN_TEST_EMAIL_AUDITEE.
dbkeys:
required: false
type: string
@@ -39,4 +43,4 @@ jobs:
cf_password: ${{ secrets.CF_PASSWORD }}
cf_org: gsa-tts-oros-fac
cf_space: ${{ env.space }}
command: cf run-task gsa-fac -k 2G -m 2G --name historic_data_migrator --command "python manage.py historic-data-migrator --dbkeys ${{ inputs.dbkeys }} --years ${{ inputs.years }}"
command: cf run-task gsa-fac -k 2G -m 2G --name historic_data_migrator --command "python manage.py historic_data_migrator --dbkeys ${{ inputs.dbkeys }} --years ${{ inputs.years }} --email ${{ inputs.email }}"
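
Not part of the change itself: because these values arrive as workflow inputs, one way to exercise the new `email` input is to dispatch the workflow through the GitHub Actions REST API. In the sketch below, the repository path, ref, token handling, and the `environment` input name are assumptions for illustration; `email`, `dbkeys`, and `years` come from the workflow file above.

```python
# Hedged sketch: trigger historic-data-migrator.yml with the new email input
# via the GitHub Actions workflow-dispatch API. Repo path, ref, token env var,
# and the "environment" input name are assumptions.
import os
import requests

OWNER_REPO = "GSA-TTS/FAC"  # assumed repository path
WORKFLOW_FILE = "historic-data-migrator.yml"
url = (
    f"https://api.github.com/repos/{OWNER_REPO}"
    f"/actions/workflows/{WORKFLOW_FILE}/dispatches"
)

resp = requests.post(
    url,
    headers={
        "Accept": "application/vnd.github+json",
        "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",  # assumed token source
    },
    json={
        "ref": "main",  # assumed branch
        "inputs": {
            "environment": "dev",  # assumed name for the dev/staging/preview input
            "email": "tester@example.com",  # placeholder; must belong to a known login
            "dbkeys": "100010",
            "years": "22",
        },
    },
)
resp.raise_for_status()  # the dispatch endpoint returns 204 on success
```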
16 changes: 2 additions & 14 deletions backend/.profile
@@ -14,15 +14,11 @@ S3_PRIVATE_ENDPOINT_FOR_NO_PROXY="$(echo $VCAP_SERVICES | jq --raw-output --arg
S3_PRIVATE_FIPS_ENDPOINT_FOR_NO_PROXY="$(echo $VCAP_SERVICES | jq --raw-output --arg service_name "fac-private-s3" ".[][] | select(.name == \$service_name) | .credentials.fips_endpoint")"
export no_proxy="${S3_ENDPOINT_FOR_NO_PROXY},${S3_FIPS_ENDPOINT_FOR_NO_PROXY},${S3_PRIVATE_ENDPOINT_FOR_NO_PROXY},${S3_PRIVATE_FIPS_ENDPOINT_FOR_NO_PROXY},apps.internal"


# Grab the New Relic license key from the newrelic-creds user-provided service instance
export NEW_RELIC_LICENSE_KEY="$(echo "$VCAP_SERVICES" | jq --raw-output --arg service_name "newrelic-creds" ".[][] | select(.name == \$service_name) | .credentials.NEW_RELIC_LICENSE_KEY")"

# Set the application name for New Relic telemetry.
# this did not work on preview, so, since gsa-fac-app works, keeping that there
#export NEW_RELIC_APP_NAME="$(echo "$VCAP_APPLICATION" | jq -r .application_name)-$(echo "$VCAP_APPLICATION" | jq -r .space_name)"
export NEW_RELIC_APP_NAME="$(echo "$VCAP_APPLICATION" | jq -r .application_name)-app"

export NEW_RELIC_APP_NAME="$(echo "$VCAP_APPLICATION" | jq -r .application_name)-$(echo "$VCAP_APPLICATION" | jq -r .space_name)"

# Set the environment name for New Relic telemetry.
export NEW_RELIC_ENVIRONMENT="$(echo "$VCAP_APPLICATION" | jq -r .space_name)"
@@ -36,15 +32,7 @@ export NEW_RELIC_LOG_LEVEL=info
# https://docs.newrelic.com/docs/security/security-privacy/compliance/fedramp-compliant-endpoints/
export NEW_RELIC_HOST="gov-collector.newrelic.com"
# https://docs.newrelic.com/docs/apm/agents/python-agent/configuration/python-agent-configuration/#proxy
#https_proxy_protocol="$(echo "$VCAP_SERVICES" | jq --raw-output --arg service_name "https-proxy-creds" ".[][] | select(.name == \$service_name) | .credentials.protocol")"
#https_proxy_domain="$(echo "$VCAP_SERVICES" | jq --raw-output --arg service_name "https-proxy-creds" ".[][] | select(.name == \$service_name) | .credentials.domain")"
#https_proxy_port="$(echo "$VCAP_SERVICES" | jq --raw-output --arg service_name "https-proxy-creds" ".[][] | select(.name == \$service_name) | .credentials.port")"

#export NEW_RELIC_PROXY_SCHEME="$https_proxy_protocol"
#export NEW_RELIC_PROXY_HOST="$https_proxy_domain"
#export NEW_RELIC_PROXY_PORT="$https_proxy_port"
#export NEW_RELIC_PROXY_USER="$(echo "$VCAP_SERVICES" | jq --raw-output --arg service_name "https-proxy-creds" ".[][] | select(.name == \$service_name) | .credentials.username")"
#export NEW_RELIC_PROXY_PASS="$(echo "$VCAP_SERVICES" | jq --raw-output --arg service_name "https-proxy-creds" ".[][] | select(.name == \$service_name) | .credentials.password")"
export NEW_RELIC_PROXY_HOST="$https_proxy"

# We only want to run migrate and collectstatic for the first app instance, not
# for additional app instances, so we gate all of this behind CF_INSTANCE_INDEX
11 changes: 7 additions & 4 deletions backend/Apple_M1_Dockerfile
@@ -21,10 +21,13 @@ RUN \

RUN \
apt-get update -yq && \
apt install curl -y && \
apt-get install -y gcc && \
curl -fsSL https://deb.nodesource.com/setup_16.x | bash - && \
apt-get install -y nodejs && \
apt install build-essential curl -y && \
apt-get install -y gcc ca-certificates gnupg && \
mkdir -p /etc/apt/keyrings && \
curl -fsSL https://deb.nodesource.com/gpgkey/nodesource-repo.gpg.key | gpg --dearmor -o /etc/apt/keyrings/nodesource.gpg && \
NODE_MAJOR=18 && \
echo "deb [signed-by=/etc/apt/keyrings/nodesource.gpg] https://deb.nodesource.com/node_$NODE_MAJOR.x nodistro main" | tee /etc/apt/sources.list.d/nodesource.list && \
apt-get install nodejs -y && \
apt-get install -y npm && \
npm i -g npm@^8

60 changes: 57 additions & 3 deletions backend/census_historical_migration/README.md
@@ -1,10 +1,64 @@
# Census Historical Migration
# Census to FAC data migration

## Overview

This is implemented as a Django app to leverage existing management commands and settings. It includes Python and shell scripts to:

* Load raw census data as CSV files into an S3 bucket
* Create Postgres tables from these CSV files
* Perform any data clean up required to create a table from a CSV file
* Run the historic data migrator
* Run the historic workbook generator

## Infrastructure changes

* Create a new S3 bucket in Cloud.gov spaces as well as in the local environment
* Create a new Postgres instance both in Cloud.gov and locally (a hedged settings sketch follows this list)
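
The settings wiring for these resources is not part of this README, but a minimal local sketch might look like the following; the database alias, environment-variable names, and router path are assumptions, while `AWS_CENSUS_TO_GSAFAC_BUCKET_NAME` is the setting `csv_to_postgres.py` reads.

```python
# Hedged sketch of local Django settings implied by the new infrastructure.
# Alias, env-var names, and the router path are assumptions for illustration.
import os

# S3 bucket that fac_s3.py and csv_to_postgres.py use.
AWS_CENSUS_TO_GSAFAC_BUCKET_NAME = "fac-census-to-gsafac-s3"

DATABASES = {
    # Existing FAC application database (placeholder values).
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": os.environ.get("POSTGRES_DB", "fac"),
        "USER": os.environ.get("POSTGRES_USER", "postgres"),
        "PASSWORD": os.environ.get("POSTGRES_PASSWORD", ""),
        "HOST": os.environ.get("POSTGRES_HOST", "db"),
        "PORT": os.environ.get("POSTGRES_PORT", "5432"),
    },
    # New Postgres instance holding the raw Census tables (alias assumed).
    "census-to-gsafac-db": {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": os.environ.get("CENSUS_TO_GSAFAC_DB", "census-to-gsafac"),
        "USER": os.environ.get("CENSUS_TO_GSAFAC_USER", "postgres"),
        "PASSWORD": os.environ.get("CENSUS_TO_GSAFAC_PASSWORD", ""),
        "HOST": os.environ.get("CENSUS_TO_GSAFAC_HOST", "census-to-gsafac-db"),
        "PORT": os.environ.get("CENSUS_TO_GSAFAC_PORT", "5432"),
    },
}

# Route census_historical_migration models to the new instance
# (router class name assumed; a sketch appears under "Utilities").
DATABASE_ROUTERS = ["census_historical_migration.routers.CensusToGsafacRouter"]
```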

## Utilities

* fac_s3.py - Uploads folders or files to an S3 bucket.

```bash
python manage.py fac_s3 fac-census-to-gsafac-s3 --upload --src census_historical_migration/data
```

* csv_to_postgres.py - Inserts data into Postgres tables using the contents of the CSV files in the S3 bucket. The first row of each file is assumed to have the column names (we convert them to lowercase). The name of the table is determined by examining the name of the file. The sample source files do not have delimiters for empty fields at the end of a line, so we assume these are nulls.

```bash
python manage.py csv_to_postgres --folder data --chunksize 10000
python manage.py csv_to_postgres --clean True
```

* models.py - These correspond to the incoming CSV files.
* routers.py - Tells Django to use a different Postgres instance for this app (a minimal router sketch follows this list).
* data - A folder that contains sample data that we can use for development.
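
routers.py itself is not shown in this diff, so the following is only a minimal sketch of what a Django database router for this app typically looks like; the class name and the `census-to-gsafac-db` alias are assumptions, not the actual implementation.

```python
# Minimal sketch of a Django database router for census_historical_migration.
# The class name and the "census-to-gsafac-db" alias are assumptions.
class CensusToGsafacRouter:
    app_label = "census_historical_migration"
    db_name = "census-to-gsafac-db"

    def db_for_read(self, model, **hints):
        # Read this app's models from the second Postgres instance.
        if model._meta.app_label == self.app_label:
            return self.db_name
        return None

    def db_for_write(self, model, **hints):
        if model._meta.app_label == self.app_label:
            return self.db_name
        return None

    def allow_migrate(self, db, app_label, model_name=None, **hints):
        # Keep this app's tables out of the default database, and vice versa.
        if app_label == self.app_label:
            return db == self.db_name
        if db == self.db_name:
            return False
        return None
```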

## Prerequisites

* A Django app that reads the tables created here as unmanaged models and populates SF-SAC tables by creating workbooks, etc. to simulate a real submission

## How to load test Census data into Postgres

1. Download test Census data from https://drive.google.com/drive/folders/1TY-7yWsMd8DsVEXvwrEe_oWW1iR2sGoy into the census_historical_migration/data folder.
NOTE: Never check the census_historical_migration/data folder into GitHub.

2. In the FAC/backend folder, run the following to load CSV files from the census_historical_migration/data folder into the fac-census-to-gsafac-s3 bucket.
```bash
docker compose run web python manage.py fac_s3 fac-census-to-gsafac-s3 --upload --src census_historical_migration/data
```

3. In the FAC/backend folder, run the following to read the CSV files from the fac-census-to-gsafac-s3 bucket and load them into Postgres (a spot-check sketch follows these steps).
```bash
docker compose run web python manage.py csv_to_postgres --folder data --chunksize 10000
```
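
To spot-check the load, one option is a quick row count per model from the Django shell (`docker compose run web python manage.py shell`); this is just a convenience sketch that mirrors the count report `csv_to_postgres` prints.

```python
# Spot-check from the Django shell: count rows for each loaded model.
from django.apps import apps

for mdl in apps.get_app_config("census_historical_migration").get_models():
    print(mdl._meta.model_name, mdl.objects.count())
```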

### How to run the historic data migrator
```bash
docker compose run web python manage.py historic_data_migrator --email [email protected] \
--year 22 \
--dbkey 100010
--years 22 \
--dbkeys 100010
```
- The email address currently must belong to a User in the system. As this has only been run locally so far, it is often a test account in my local sandbox env.
- `year` and `dbkey` are optional. The script will use default values for these if they aren't provided.
@@ -0,0 +1,139 @@
import logging
import boto3
import pandas as pd


from io import BytesIO
from botocore.exceptions import ClientError
from django.core.management.base import BaseCommand
from django.conf import settings
from django.apps import apps

logger = logging.getLogger(__name__)
logger.setLevel(logging.WARNING)
census_to_gsafac_models = list(
    apps.get_app_config("census_historical_migration").get_models()
)
census_to_gsafac_model_names = [m._meta.model_name for m in census_to_gsafac_models]
s3_client = boto3.client(
    "s3",
    aws_access_key_id=settings.AWS_PRIVATE_ACCESS_KEY_ID,
    aws_secret_access_key=settings.AWS_PRIVATE_SECRET_ACCESS_KEY,
    endpoint_url=settings.AWS_S3_ENDPOINT_URL,
)
census_to_gsafac_bucket_name = settings.AWS_CENSUS_TO_GSAFAC_BUCKET_NAME
DELIMITER = ","


class Command(BaseCommand):
    help = """
        Populate Postgres database from csv files
        Usage:
        manage.py csv_to_postgres --folder <folder_name> --clean <True|False>
    """

    def add_arguments(self, parser):
        parser.add_argument("--folder", help="S3 folder name (required)", type=str)
        parser.add_argument(
            "--clean", help="Clean the data (default: False)", type=bool, default=False
        )
        parser.add_argument(
            "--sample",
            help="Sample the data (default: False)",
            type=bool,
            default=False,
        )
        parser.add_argument("--load")
        parser.add_argument(
            "--chunksize",
            help="Chunk size for processing data (default: 10_000)",
            type=int,
            default=10_000,
        )

    def handle(self, *args, **options):
        folder = options.get("folder")
        if not folder:
            print("Please specify a folder name")
            return
        if options.get("clean"):
            self.delete_data()
            return
        if options.get("sample"):
            self.sample_data()
            return
        chunk_size = options.get("chunksize")
        self.process_csv_files(folder, chunk_size)

    def process_csv_files(self, folder, chunk_size):
        items = self.list_s3_objects(census_to_gsafac_bucket_name, folder)
        for item in items:
            if item["Key"].endswith("/"):
                continue
            model_name = self.get_model_name(item["Key"])
            if model_name:
                model_index = census_to_gsafac_model_names.index(model_name)
                model_obj = census_to_gsafac_models[model_index]
                file = self.get_s3_object(
                    census_to_gsafac_bucket_name, item["Key"], model_obj
                )
                if file:
                    self.load_data(file, model_obj, chunk_size)

        self.display_row_counts(census_to_gsafac_models)

    def display_row_counts(self, models):
        for mdl in models:
            row_count = mdl.objects.all().count()
            print(f"{row_count} in ", mdl)

    def delete_data(self):
        for mdl in census_to_gsafac_models:
            print("Deleting ", mdl)
            mdl.objects.all().delete()

    def sample_data(self):
        for mdl in census_to_gsafac_models:
            print("Sampling ", mdl)
            rows = mdl.objects.all()[:1]
            for row in rows:
                for col in mdl._meta.fields:
                    print(f"{col.name}: {getattr(row, col.name)}")

    def list_s3_objects(self, bucket_name, folder):
        return s3_client.list_objects(Bucket=bucket_name, Prefix=folder)["Contents"]

    def get_s3_object(self, bucket_name, key, model_obj):
        file = BytesIO()
        try:
            s3_client.download_fileobj(Bucket=bucket_name, Key=key, Fileobj=file)
        except ClientError:
            logger.error("Could not download {}".format(model_obj))
            return None
        print(f"Obtained {model_obj} from S3")
        return file

    def get_model_name(self, name):
        print("Processing ", name)
        file_name = name.split("/")[-1].split(".")[0]
        for model_name in census_to_gsafac_model_names:
            if file_name.lower().startswith(model_name):
                print("model_name = ", model_name)
                return model_name
        print("Could not find a matching model for ", name)
        return None

    def load_data(self, file, model_obj, chunk_size):
        print("Starting load data to postgres")
        file.seek(0)
        rows_loaded = 0
        for df in pd.read_csv(file, iterator=True, chunksize=chunk_size):
            # Each row is a dictionary. The columns are the
            # correct names for our model. So, this should be a
            # clean way to load the model from a row.
            for _, row in df.iterrows():
                obj = model_obj(**row)
                obj.save()
            rows_loaded += df.shape[0]
            print(f"Loaded {rows_loaded} rows in ", model_obj)
        return None
@@ -12,7 +12,6 @@
from census_historical_migration.workbooklib.workbook_creation import (
    sections,
    workbook_loader,
    setup_sac,
)

import datetime
@@ -197,9 +196,8 @@ def handle(self, *args, **options): # noqa: C901
            dbkey=options["dbkey"], date=datetime.datetime.now()
        )

        sac = setup_sac(None, entity_id, options["dbkey"])
        loader = workbook_loader(
            None, sac, options["dbkey"], options["year"], entity_id
            None, None, options["dbkey"], options["year"], entity_id
        )
        json_test_tables = []
        for section, fun in sections.items():