
OpenShift on AWS EC2 using CASL

TODO: Using an existing VPC or maintaining the created VPC is not supported yet.

Local Setup (one time only)

Note
These steps are one example of how to set things up and may differ in your environment.

Before getting started with this guide, you’ll need the following:

  • Access to an AWS account with the proper policies to create resources (see details below)

  • Docker installed

    • RHEL/CentOS: yum install -y docker

    • Fedora: dnf install -y docker

    • NOTE: If you plan to run docker as yourself (non-root), your user must be added to the docker group (see the verification example after these setup steps).

  • Ansible 2.5 or later installed

  • Although not strictly necessary, it is very handy to have the AWS CLI installed

  • Clone the casl-ansible repository:

cd ~/src/
git clone https://github.com/redhat-cop/casl-ansible.git

  • Run ansible-galaxy to pull in the necessary requirements for the CASL provisioning of OpenShift on AWS:

Note
The target directory (galaxy) is important, as the playbooks know to source roles and playbooks from that location.
cd ~/src/casl-ansible
ansible-galaxy install -r casl-requirements.yml -p galaxy
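
If you want to sanity-check the local setup before moving on, something like the following should work (a minimal sketch; package names, paths, and the need for sudo may differ in your environment):

# Allow your (non-root) user to run docker, then log out and back in for it to take effect
sudo usermod -aG docker $(whoami)

# Confirm the tooling is in place
docker --version
ansible --version   # should report 2.5 or later
aws --version       # optional, but handy

# Confirm the galaxy requirements were pulled into the expected directory
ls ~/src/casl-ansible/galaxy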

AWS Setup

The following needs to be set up in your AWS account before provisioning.

  • An IAM User must be created with API access, and the ability to provision VPCs, Subnets, Instances, EBS Volumes, etc. (see note below on policy specifics)

    • Once created, export AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY in your environment (see the example after this list).

  • An EC2 key pair must exist in your AWS account that you can use to log into hosts once they are created.

  • A Hosted Zone must exist in your AWS Account.

  • Permission to provision the appropriate VPC, EC2, Route53, and CloudFormation resources. See Policy Assignment for details if you need help.
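
As a quick check that the account is ready, you can export the credentials and query the relevant resources with the AWS CLI (a minimal sketch; the values below are placeholders):

# Credentials for the IAM user created above
export AWS_ACCESS_KEY_ID=<REPLACE WITH YOUR ACCESS KEY ID>
export AWS_SECRET_ACCESS_KEY=<REPLACE WITH YOUR SECRET ACCESS KEY>

# Confirm the credentials work
aws sts get-caller-identity

# Confirm the key pair and Hosted Zone exist
aws ec2 describe-key-pairs --region <REPLACE WITH AWS REGION>
aws route53 list-hosted-zones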

Cool! Now you’re ready to provision OpenShift clusters on AWS.

Provision an OpenShift Cluster

As an example, we’ll provision the sample.aws.example.com cluster defined in the ~/src/casl-ansible/inventory directory.

Note
Unless you already have a working inventory, it is recommended that you make a copy of the above-mentioned sample inventory and keep it somewhere outside of the casl-ansible directory. This allows you to update, remove, or change your casl-ansible source directory without losing your inventory. Also note that it may take some effort to get the inventory just right, so it is very beneficial to keep it around for future use without having to redo everything.
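
For example, to keep a private copy of the sample inventory outside of the cloned repository (the destination path here is only an illustration):

mkdir -p ~/openshift-inventories
cp -a ~/src/casl-ansible/inventory/sample.aws.example.com.d ~/openshift-inventories/

If you work from a copy like this, remember to bind-mount its location into the container and point INVENTORY_DIR at it in the docker run commands shown below.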

The following is just an example of how the sample.aws.example.com inventory can be used:

  1. Select an AWS Region you would like to provision your cluster in.

    • Modify the regions entry (line 13) in the inventory’s ec2.ini file with your region.

    • Modify the cloud_infrastructure.region variable in group_vars/all.yml with your region (see the illustrative snippet after this list).

  2. Select an environment id. This is a shortname for your cluster. In the example cluster domain sample.aws.example.com, your environment id would be sample.

    • Modify the instance_filters entry (line 14) in the inventory’s ec2.ini file to match your chosen environment id.

    • Set the env_id variable in group_vars/all.yml to match.

  3. Edit group_vars/all.yml to match your AWS environment. See comments in the file for more detailed information on how to fill these in.

  4. Edit group_vars/OSEv3.yml for OpenShift related configuration. See comments in the file for more detailed information on how to fill these in.
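
Putting steps 1-3 together for a hypothetical us-east-1 deployment with the environment id sample, the relevant entries would look roughly like this (illustrative values; the exact line numbers and filter syntax in your ec2.ini may differ):

# inventory ec2.ini (excerpt)
regions = us-east-1
instance_filters = tag:env_id=sample

# group_vars/all.yml (excerpt)
env_id: sample
cloud_infrastructure:
  region: us-east-1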

Note
To see the available AZs of the region you are deploying in, you can run this AWS CLI command: aws ec2 describe-availability-zones --region <REPLACE WITH AWS REGION>

Run the end-to-end provisioning playbook via our AWS installer container image.

docker run -u `id -u` \
      -v $HOME/.ssh/id_rsa:/opt/app-root/src/.ssh/id_rsa:Z \
      -v $HOME/src/:/tmp/src:Z \
      -e AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID \
      -e AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY \
      -e INVENTORY_DIR=/tmp/src/casl-ansible/inventory/sample.aws.example.com.d/inventory \
      -e PLAYBOOK_FILE=/tmp/src/casl-ansible/playbooks/openshift/end-to-end.yml \
      -e OPTS="-e aws_key_name=my-key-name" -t \
      quay.io/redhat-cop/casl-ansible
Note
The aws_key_name variable at the end should specify the name of your AWS SSH key pair, as noted under AWS Setup above.
Note
The above bind-mounts will map files and source directories to the correct locations within the control host container. Update the local paths per your environment for a successful run.
Note
Depending on the SELinux configuration on your OS, you may or may not need the :Z at the end of the volume mounts.

Done! Wait until the provisioning completes and you should have an operational OpenShift cluster. If something fails along the way, either update your inventory and re-run the end-to-end.yml playbook above, or it may be better to delete the cluster (https://github.com/redhat-cop/casl-ansible#deleting-a-cluster) and start over.

Updating a Cluster

Once provisioned, a cluster may be adjusted/reconfigured as needed by updating the inventory and re-running the end-to-end.yml playbook.

Scaling Up and Down

A cluster’s Infra and App nodes may be scaled up and down by editing the following parameters in the all.yml file and then re-running the end-to-end.yml playbook as shown above.

appnodes:
  count: <REPLACE WITH NUMBER OF INSTANCES TO CREATE>
infranodes:
  count: <REPLACE WITH NUMBER OF INSTANCES TO CREATE>
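
For example, to run three application nodes and two infrastructure nodes (illustrative values; the surrounding structure of your all.yml may differ):

appnodes:
  count: 3
infranodes:
  count: 2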

Deleting a Cluster

A cluster can be decommissioned/deleted by re-using the same inventory with the delete-cluster.yml playbook found alongside the end-to-end.yml playbook.

docker run -it -u `id -u` \
      -v $HOME/.ssh/id_rsa:/opt/app-root/src/.ssh/id_rsa:Z \
      -v $HOME/src/:/tmp/src:Z \
      -e AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID \
      -e AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY \
      -e INVENTORY_DIR=/tmp/src/casl-ansible/inventory/sample.aws.example.com.d/inventory \
      -e PLAYBOOK_FILE=/tmp/src/casl-ansible/playbooks/openshift/delete-cluster.yml \
      -e OPTS="-e aws_key_name=my-key-name" -t \
      quay.io/redhat-cop/casl-ansible