- docker (tested with 18.03.1)
- pip for Python 3.6 (tested with 9.0.3)
- aws cli (tested with 1.14.9)
-
Provision a new Amazon Web Services account.
-
Create an IAM user and access key.
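If you prefer to script this step from an existing admin session, the IAM user and access key can also be created with the AWS CLI. The user name below is a placeholder; attach appropriate policies before using the key.

```shell
# Create an IAM user (name is a placeholder) and an access key for it.
# Save the SecretAccessKey from the output -- it is only shown once.
aws iam create-user --user-name se-tim-admin
aws iam create-access-key --user-name se-tim-admin
```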
-
Install the AWS Command Line Interface and configure your default region and access keys.
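A minimal non-interactive configuration looks like this; the region is an example, and the key values come from the IAM access key created above.

```shell
# Equivalent to answering the "aws configure" prompts; values are examples.
aws configure set region us-east-1
aws configure set aws_access_key_id <access_key_id>
aws configure set aws_secret_access_key <secret_access_key>
```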
-
[optional] Register a new application in Earthdata Login. Note your new app's <urs_client_id> and <urs_app_password>. Use any placeholder URL for the Redirect URL field; we'll update it later.
Alternatively, you can reuse the client ID and password of an existing application.
-
Generate your <urs_auth_code> from your <urs_client_id> and <urs_app_password>.
echo -n "<urs_client_id>:<urs_app_password>" | base64
See the "UrsAuthCode" parameter of the Apache URS Authentication Module for more details.
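For example, with the illustrative placeholder values client and password (substitute your real <urs_client_id> and <urs_app_password>), the encoding step produces:

```shell
# Base64-encode "<urs_client_id>:<urs_app_password>" with no trailing newline
# (-n); the values here are illustrative only.
echo -n "client:password" | base64
# → Y2xpZW50OnBhc3N3b3Jk
```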
-
Create a new docker repository for the distribution web app.
aws ecr create-repository --repository-name distribution
Note the value for your <docker_repository_uri>.
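If you need to look the URI up again later, it can be queried from ECR (assuming the repository name used above):

```shell
# Print the repository URI for the "distribution" repository.
aws ecr describe-repositories \
  --repository-names distribution \
  --query 'repositories[0].repositoryUri' \
  --output text
```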
-
Create a new S3 bucket to hold cloudformation artifacts.
aws s3api create-bucket --bucket <artifact_bucket_name>
-
Clone this repository and cd to the root directory.
git clone https://github.com/asjohnston-asf/se-tim-2018.git
cd se-tim-2018
-
Build the distribution web app and upload the docker image to your docker repository.
docker build -t distribution distribution/
$(aws ecr get-login --no-include-email)
docker tag distribution <docker_repository_uri>
docker push <docker_repository_uri>
-
Install the Python requirements for the logging lambda function. Be sure to use the pip that targets Python 3.6!
pip install -r logging/requirements.txt -t logging/src/
-
Package the cloudformation template.
aws cloudformation package \
  --template-file cloudformation.yaml \
  --s3-bucket <artifact_bucket_name> \
  --output-template-file cloudformation-packaged.yaml
-
Deploy the packaged cloudformation template. This step can take 15-25 minutes.
aws cloudformation deploy \
  --stack-name <stack_name> \
  --template-file cloudformation-packaged.yaml \
  --capabilities CAPABILITY_NAMED_IAM \
  --parameter-overrides \
    VpcId=<vpc_id> \
    SubnetIds=<subnet_id_1>,<subnet_id_2> \
    ContainerImage=<docker_repository_uri> \
    UrsServer=https://urs.earthdata.nasa.gov \
    UrsClientId=<urs_client_id> \
    UrsAuthCode=<urs_auth_code> \
    LoadBalancerCidrIp=0.0.0.0/0 \
    ElasticSearchCidrIp=<local_ip>
Note the stack output values for your:
- <product_url>
- <browse_url>
- <urs_redirect_url>
- <kibana_url>
- <public_bucket_name>
- <private_bucket_name>
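One way to retrieve these output values again after deployment (assuming the same <stack_name>) is:

```shell
# List all outputs of the deployed stack as key/value pairs.
aws cloudformation describe-stacks \
  --stack-name <stack_name> \
  --query 'Stacks[0].Outputs' \
  --output table
```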
-
Add your app's <urs_redirect_url> in URS.
-
Upload your browse images to your public bucket.
aws s3 cp <browse_folder> s3://<public_bucket_name> --recursive
-
Upload your product files to your private bucket.
aws s3 cp <product_folder> s3://<private_bucket_name> --recursive
-
[optional] Create and publish a new collection in your target CMR environment via the Metadata Management Tool. Note your new collection's <collection_dataset_id>.
Alternatively, you can update the granules of your existing collection in-place.
-
Obtain an ECHO token for your target CMR environment.
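At the time of writing, one way to request an ECHO token was the Legacy Services tokens endpoint; the endpoint, payload, and availability vary by environment, so treat this as a sketch and substitute your own URS credentials and target host.

```shell
# Request an ECHO token (sketch only; endpoint and payload are assumptions
# for the UAT environment -- adjust for your target CMR environment).
curl -X POST \
  -H "Content-Type: application/xml" \
  -d "<token><username><urs_username></username><password><urs_password></password><client_id><urs_client_id></client_id><user_ip_address>127.0.0.1</user_ip_address></token>" \
  https://cmr.uat.earthdata.nasa.gov/legacy-services/rest/tokens
```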
-
Run the CMR update script to populate your new collection in CMR. This script makes assumptions about how your CMR metadata is structured. Review the script and adapt it as necessary before running it!
python cmr/main.py \
  --source_host=cmr.earthdata.nasa.gov \
  --source_collection_concept_id=<source_collection_concept_id> \
  --target_host=cmr.uat.earthdata.nasa.gov \
  --provider=<provider> \
  --echo_token=<echo_token> \
  --new_granule_ur_suffix=-se-tim \
  --new_dataset_id=<collection_dataset_id> \
  --new_product_url=<product_url> \
  --new_browse_url=<browse_url> \
  --num_threads=8