- Overview
- Setup AWS credentials
- Install tools
- Quick start
- Customization
- Building Infrastructure
- One-time Setup
- Troubleshooting
## Overview

This is a practical reference implementation of Stakater Blueprints. The entire infrastructure is managed by Terraform.
## Setup AWS credentials

Go to the AWS Console.

- Sign up for an AWS account if you don't already have one. The default EC2 instances created by this tool are covered by the [AWS Free Tier](https://aws.amazon.com/free/).
- Create a group `stakater` with the `AdministratorAccess` policy, or use the stakater-policy given in the repo.
- Create a user `stakater` and download the user credentials.
- Add user `stakater` to group `stakater`.
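As a sketch, the same group and user setup can be done with the AWS CLI instead of the console. The names mirror the steps above; the managed `AdministratorAccess` policy ARN is AWS's standard one, and the `create-access-key` output takes the place of the credentials file you would download from the console:

```
$ aws iam create-group --group-name stakater
$ aws iam attach-group-policy --group-name stakater --policy-arn arn:aws:iam::aws:policy/AdministratorAccess
$ aws iam create-user --user-name stakater
$ aws iam add-user-to-group --group-name stakater --user-name stakater
$ aws iam create-access-key --user-name stakater
```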
## Install tools

If you use Vagrant, you can skip this section and go to the Quick start section.

Instructions for installing tools on macOS:
- Install Terraform

  ```
  $ brew update
  $ brew install terraform
  ```

  or

  ```
  $ mkdir -p ~/bin/terraform
  $ cd ~/bin/terraform
  $ curl -L -O https://dl.bintray.com/mitchellh/terraform/terraform_0.6.0_darwin_amd64.zip
  $ unzip terraform_0.6.0_darwin_amd64.zip
  ```
- Install jq

  ```
  $ brew install jq
  ```
- Install AWS CLI

  ```
  $ brew install awscli
  ```

  or

  ```
  $ sudo easy_install pip
  $ sudo pip install --upgrade awscli
  ```
For other platforms, follow the links above and the installation instructions on each tool's site.
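Once installed, a quick sanity check (plain POSIX shell, nothing assumed beyond the tool names above) confirms the tools are on your PATH:

```shell
# Report which of the required tools are installed
for tool in terraform jq aws; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found"
  else
    echo "$tool: MISSING"
  fi
done
```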
## Quick start

```
$ git clone https://github.com/stakater/infrastructure-reference
$ cd infrastructure-reference
```
If you use Vagrant, instead of installing tools on your host machine, there is a Vagrantfile for an Ubuntu box with all the necessary tools installed:
```
$ vagrant up
$ vagrant ssh
$ cd infrastructure-reference
```
```
$ aws configure --profile stakater-reference
```
Use the downloaded AWS user credentials when prompted.

The above command will create a `stakater-reference` profile section in the `~/.aws/config` and `~/.aws/credentials` files. The build process below will automatically configure Terraform AWS provider credentials using this profile.
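After running `aws configure`, the two files should contain sections like the following (the key and region values are illustrative placeholders, not values from the repo):

```
# ~/.aws/credentials
[stakater-reference]
aws_access_key_id = AKIA...
aws_secret_access_key = <your-secret-access-key>

# ~/.aws/config
[profile stakater-reference]
region = us-east-1
output = json
```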
## Customization

You can customize stakater settings by changing the variables in the `Makefile`. Following is the list of variables in the `Makefile` and their description:
| Variable | Description |
|---|---|
| AWS_PROFILE | Name of the AWS profile stakater is going to use (see Setup AWS credentials) |
| STACK_NAME | Name of the stack you are about to build with stakater (this name will be used in all resources created) |
| TF_STATE_BUCKET_NAME | Name of the (already existing) S3 bucket in which the Terraform state files will be stored |
| TF_STATE_GLOBAL_ADMIRAL_KEY | Key of the global admiral state file in the bucket (i.e. full path of the state file) |
| TF_STATE_DEV_KEY | Key of the development environment state file in the bucket (i.e. full path of the state file) |
| TF_STATE_QA_KEY | Key of the QA environment state file in the bucket (i.e. full path of the state file) |
| TF_STATE_PROD_KEY | Key of the production environment state file in the bucket (i.e. full path of the state file) |
| PROD_CLOUDINIT_BUCKET_NAME | Name of the cloud-init S3 bucket for the production environment |
| PROD_CONFIG_BUCKET_NAME | Name of the config S3 bucket for the production environment |
| DEV_DATABASE_USERNAME | Database username for the development database (used for both the MySQL instance-pool and Aurora DB) |
| DEV_DATABASE_PASSWORD | Database password for the provided username and the root password, for the development database (used for both the MySQL instance-pool and Aurora DB) |
| DEV_DATABASE_NAME | Database name for the development database (used for both the MySQL instance-pool and Aurora DB) |
| QA_DATABASE_USERNAME | Database username for the QA database (used for both the MySQL instance-pool and Aurora DB) |
| QA_DATABASE_PASSWORD | Database password for the provided username and the root password, for the QA database (used for both the MySQL instance-pool and Aurora DB) |
| QA_DATABASE_NAME | Database name for the QA database (used for both the MySQL instance-pool and Aurora DB) |
| PROD_DATABASE_USERNAME | Database username for the production database (Aurora DB) |
| PROD_DATABASE_PASSWORD | Database password for the provided username and the root password, for the production database (Aurora DB) |
| PROD_DATABASE_NAME | Database name for the production database (Aurora DB) |
| COREOS_UPDATE_CHANNEL | Update channel for fetching the CoreOS AMI ID (stable, beta, alpha); we recommend keeping it at stable (the default) |
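As a minimal sketch, a customized `Makefile` might set these variables like so (all values below are illustrative placeholders, and the `?=`-style assignment operator may differ from the one used in the repo):

```
AWS_PROFILE ?= stakater-reference
STACK_NAME ?= my-stack
TF_STATE_BUCKET_NAME ?= my-stack-terraform-states
TF_STATE_GLOBAL_ADMIRAL_KEY ?= global-admiral/terraform.tfstate
TF_STATE_DEV_KEY ?= dev/terraform.tfstate
TF_STATE_QA_KEY ?= qa/terraform.tfstate
TF_STATE_PROD_KEY ?= prod/terraform.tfstate
COREOS_UPDATE_CHANNEL ?= stable
```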
- Set up the GoCD configuration file (`cruise-config.xml`)
- Set up the `gocd.parameters.txt` file.

(For more information on how to configure GoCD, follow the link.)
If you want to use SSL certificates on your load balancers, import those certificates into AWS Certificate Manager, and pass the ARN of the certificate from GoCD. (More in GoCD configuration)
You will need to create an S3 bucket (in the same region assigned to the AWS profile you're using) for storing Terraform remote states. The name of this bucket should be provided against the `TF_STATE_BUCKET_NAME` variable in the `Makefile`.
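The bucket can be created with the AWS CLI, for example (the bucket name and region here are placeholders; use your own values):

```
$ aws s3 mb s3://my-stack-terraform-states --region us-east-1 --profile stakater-reference
```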
Advanced options such as:

- Adding/Removing ELBs for a module
- Adding/Updating/Removing security group rules for a module
- Changing the size and name of attached EBS volumes
- Adding/Updating/Removing Route53 entries for a module
- Adding/Updating/Removing scale policies for autoscaling groups

can be configured in the Terraform files for modules in the environments' folders inside the `infrastructure-modules` folder.
## Building Infrastructure

To build your infrastructure, consisting of Global Admiral and the Dev, QA, and Prod environments, run the following command:

```
make all
```

This will in turn call `make global_admiral dev qa prod` in the given order.
You can also make each environment or resource separately by calling make in the following format:

For environments:

- `make global_admiral`
- `make dev`
- `make qa`
- `make prod`
For resources:

Usage: `make (<resource> | destroy_<resource> | plan_<resource> | refresh_<resource>)`

For example: `make plan_network` to show what resources are planned for the network.

NOTE: The bucket name specified for `TF_STATE_BUCKET_NAME` in the Makefile should exist and should be accessible.

### To Destroy

Usage: `make destroy_<resource>`

For example: `make destroy_network`
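Putting the per-resource targets together, a typical plan-then-apply cycle for the network resource looks like this (target names as given above):

```
$ make plan_network   # show what would change, without applying
$ make network        # apply the changes
```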
## One-time Setup

Once your infrastructure has been set up, you'll need to perform the following steps as part of a one-time setup of the infrastructure:

- Prepare your application for Stakater (link)
- Assign agents to GoCD (link)
## Troubleshooting

- Error: `aws_launch_configuration.lc_ebs: Error creating launch configuration: ValidationError: Invalid IamInstanceProfile`

  This is an intermittent issue and can be avoided by performing the specific step again.

  * https://github.com/hashicorp/terraform/issues/1885
  * https://github.com/hashicorp/terraform/issues/9474
- Error: `aws_launch_configuration.lc: Error creating launch configuration: ValidationError: You are not authorized to perform this operation.`

  This is an intermittent issue and can be avoided by performing the specific step again.

  * https://github.com/hashicorp/terraform/issues/5862
  * https://github.com/hashicorp/terraform/issues/7198
- Error: `Resource 'data.terraform_remote_state.global-admiral' does not have attribute 'variable_name' for variable 'data.terraform_remote_state.global-admiral.variable_name'`

  For example: `data.terraform_remote_state.global-admiral.private_app_route_table_ids`

  In case make fails due to an unknown output referenced in the global-admiral state, call `make refresh_global_admiral` and then `make`.

  * https://github.com/hashicorp/terraform/issues/2598

  Once you refresh the global admiral state, global admiral will remove the VPC peering connections created by other VPCs (in their tf states), as global admiral is not aware of them. Please be sure to re-make the network module of the other environments in order to re-create the VPC peering connection if it is removed as a result of refreshing. (Work in progress to solve this issue)

  e.g. `make network_dev`, `make network_qa` or `make network_prod`
- Error: `timeout while waiting for state to become 'successful'` OR `Network time out waiting for I/O...`

  This issue occurs due to a slow network response from AWS or a slow internet connection on the requesting side. Retry using a better internet connection.
- Error: `Cannot create keypair: Permission denied`

  This issue occurs while creating a keypair using the aws-keypair.sh script; it is intermittent and can be avoided by retrying.
- Error: `Could not upload keypair: Access Denied`

  This issue occurs while uploading the keypair after it has been created using the AWS CLI (aws-keypair.sh); it is intermittent and can be avoided by retrying the make.