- Docker must be installed on your workstation.
Run the following command to create the viya4-iac-aws Docker image using the provided Dockerfile:
docker build -t viya4-iac-aws .
The Docker image, viya4-iac-aws, contains Terraform and kubectl executables. The entrypoint for the Docker image is terraform; it is run with subcommands in the subsequent steps.
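If you want to confirm that the image was built and is available locally, you can list it with a standard Docker command:
docker image ls viya4-iac-aws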
Follow one of the authentication methods described in Authenticating Terraform to Access AWS to configure authentication and enable container invocation. Store these values outside of this repository in a secure file, for example $HOME/.aws_docker_creds.env.
Protect that file so that only you have read access to it.
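For example, on Linux you can restrict the file permissions so that only your user can read it:
chmod 600 $HOME/.aws_docker_creds.env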
NOTE: Do not use quotation marks around the values in the file, and be sure to avoid any trailing blank spaces.
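As an illustration only, a credentials file for static AWS credentials might contain entries like the following; the TF_VAR_* variable names shown here are an assumption, so use the exact names required by the authentication method that you chose:
TF_VAR_aws_access_key_id=<your-access-key-id>
TF_VAR_aws_secret_access_key=<your-secret-access-key>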
Now each time you invoke the container, specify the file with the --env-file option in order to pass AWS credentials to the container.
Add volume mounts to the docker run command for all files and directories that must be accessible from inside the container:
- --volume=$HOME/.ssh:/.ssh for the ssh_public_key variable in the terraform.tfvars file
- --volume=$(pwd):/workspace for a local directory where the terraform.tfvars file resides and where the terraform.tfstate file will be written. To grant Docker permission to write to the local directory, use the --user option.
NOTE: Local references to $HOME (or "~") need to map to the root directory / in the container.
Prepare your terraform.tfvars file, as described in Customize Input Values.
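As a minimal illustrative sketch only (the full set of supported variables and their required values is documented in CONFIG-VARS.md; the values below are placeholders, and the location variable name is an assumption), a terraform.tfvars file might look like this:
prefix = "viya-demo"
location = "us-east-1"
ssh_public_key = "/.ssh/id_rsa.pub"
cluster_endpoint_public_access_cidrs = ["<your-ip-address>/32"]
Note that ssh_public_key here refers to the path inside the container: the --volume=$HOME/.ssh:/.ssh mount makes your host key available at /.ssh.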
To preview the cloud resources before creating them, run the Docker image (viya4-iac-aws) with the plan command:
docker run --rm --group-add root \
--user "$(id -u):$(id -g)" \
--env-file $HOME/.aws_docker_creds.env \
--volume $HOME/.ssh:/.ssh \
--volume $(pwd):/workspace \
viya4-iac-aws \
plan -var-file /workspace/terraform.tfvars \
-state /workspace/terraform.tfstate
To create the cloud resources, run the viya4-iac-aws Docker image with the apply command and the -auto-approve option:
docker run --rm --group-add root \
--user "$(id -u):$(id -g)" \
--env-file $HOME/.aws_docker_creds.env \
--volume $HOME/.ssh:/.ssh \
--volume $(pwd):/workspace \
viya4-iac-aws \
apply -auto-approve \
-var-file /workspace/terraform.tfvars \
-state /workspace/terraform.tfstate
This command can take a few minutes to complete. Once complete, Terraform output values are written to the console. The kubeconfig file for the cluster is written to [prefix]-eks-kubeconfig.conf in the current directory, $(pwd).
Once the cloud resources have been created with the terraform apply command, you can display Terraform output values by running the viya4-iac-aws Docker image with the output command:
docker run --rm --group-add root \
--user "$(id -u):$(id -g)" \
--volume $(pwd):/workspace \
viya4-iac-aws \
output -state /workspace/terraform.tfstate
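You can also append an output name to display a single value. For example, to show only the kube_config output (any output name printed by the previous command can be used in its place):
docker run --rm --group-add root \
--user "$(id -u):$(id -g)" \
--volume $(pwd):/workspace \
viya4-iac-aws \
output -state /workspace/terraform.tfstate kube_config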
After provisioning the infrastructure, you can make additional modifications. Update the corresponding variables in terraform.tfvars with the desired values. Then run the Docker image with the apply command and the -auto-approve option again:
docker run --rm --group-add root \
--user "$(id -u):$(id -g)" \
--env-file $HOME/.aws_docker_creds.env \
--volume $HOME/.ssh:/.ssh \
--volume $(pwd):/workspace \
viya4-iac-aws \
apply -auto-approve \
-var-file /workspace/terraform.tfvars \
-state /workspace/terraform.tfstate
To destroy all the cloud resources that you created with the previous commands, run the viya4-iac-aws Docker image with the destroy command and the -auto-approve option:
docker run --rm --group-add root \
--user "$(id -u):$(id -g)" \
--env-file $HOME/.aws_docker_creds.env \
--volume $HOME/.ssh:/.ssh \
--volume $(pwd):/workspace \
viya4-iac-aws \
destroy -auto-approve \
-var-file /workspace/terraform.tfvars \
-state /workspace/terraform.tfstate
NOTE: The destroy action is irreversible.
Creating the cloud resources writes the kube_config output value to a file, ./[prefix]-eks-kubeconfig.conf. When the Kubernetes cluster is ready, use --entrypoint kubectl to interact with the cluster.
NOTE: The cluster_endpoint_public_access_cidrs value in CONFIG-VARS.md must be set to your local IP address or CIDR range.
You can run the kubectl get nodes command with the viya4-iac-aws Docker image in order to get a list of cluster nodes. Switch the entrypoint to kubectl (--entrypoint kubectl), provide a kubeconfig file (--env=KUBECONFIG=/workspace/<your prefix>-eks-kubeconfig.conf), and pass kubectl subcommands (such as get nodes). For example, to run kubectl get nodes, run whichever of the following commands matches your kubeconfig file type:
Using a static kubeconfig file
docker run --rm \
--env=KUBECONFIG=/workspace/<your prefix>-eks-kubeconfig.conf \
--volume=$(pwd):/workspace \
--entrypoint kubectl \
viya4-iac-aws get nodes
Using a provider-based kubeconfig file (requires AWS CLI credentials in order to authenticate to the cluster)
docker run --rm \
--env=KUBECONFIG=/workspace/<your prefix>-eks-kubeconfig.conf \
--volume=$(pwd):/workspace \
--env=AWS_PROFILE=default \
--env=AWS_SHARED_CREDENTIALS_FILE=/workspace/credentials \
--volume $HOME/.aws/credentials:/workspace/credentials \
--entrypoint kubectl \
viya4-iac-aws get nodes
See Kubernetes Configuration File Generation for information about creating static and provider-based kubeconfig files. You can find more information about using AWS CLI credentials in Configuring the AWS CLI.
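For reference, a shared credentials file in the standard AWS CLI format looks like the following; the values are placeholders, and the profile name must match the AWS_PROFILE value that you pass to the container:
[default]
aws_access_key_id = <your-access-key-id>
aws_secret_access_key = <your-secret-access-key>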