Provision a Kubernetes cluster with Terraform on AWS.
Inspired by kubestack
This project is no longer maintained and the versions it pins are old. Feel free to use it as inspiration.
Terraform will be used to declare and provision a Kubernetes cluster.
Create a `terraform.tfvars` file in the top-level directory of the repo with content like:

ssh_key_name = "name_of_my_key_pair_in_AWS"
access_key = "my_AWS_access_key"
secret_key = "my_AWS_secret_key"
This file is ignored by git. You can also set these variables through environment variables. You also need ssh-agent running, with the private key for your AWS key pair added.
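As a sketch of the environment-variable alternative: Terraform reads any variable `foo` from an environment variable named `TF_VAR_foo`. The variable names below assume they match this repo's variables.tf, and the key path is only an example.

```shell
# Terraform maps TF_VAR_foo to the Terraform variable "foo".
# Names below assume they match this repo's variables.tf.
export TF_VAR_ssh_key_name="name_of_my_key_pair_in_AWS"
export TF_VAR_access_key="my_AWS_access_key"
export TF_VAR_secret_key="my_AWS_secret_key"

# Terraform's provisioners will also need ssh-agent running with the
# matching private key loaded, e.g.:
#   eval "$(ssh-agent -s)"
#   ssh-add ~/.ssh/id_rsa   # path is an example
echo "key pair: ${TF_VAR_ssh_key_name}"
```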
This repo includes a very simple Makefile that will handle generating an etcd discovery token.
To create the cluster, run `make apply`. To destroy it, run `make destroy`.
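Roughly, the Makefile wraps the following flow (a sketch; check the Makefile for the exact recipe — the token file name here is an assumption). etcd's public discovery service hands out one unique URL per cluster, so the token is fetched once and cached:

```shell
# Sketch of what the make targets wrap (assumed; the real recipe may
# differ). A fresh etcd discovery URL is fetched once and reused.
TOKEN_FILE=".etcd_discovery_token"   # file name is an assumption
if [ ! -f "$TOKEN_FILE" ]; then
  # The discovery service would be asked for a URL sized to the cluster:
  #   curl -s "https://discovery.etcd.io/new?size=1" > "$TOKEN_FILE"
  echo "would fetch a new discovery token into $TOKEN_FILE"
fi
echo "make apply   -> terraform apply"
echo "make destroy -> terraform destroy"
```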
You can override any of the variables listed in variables.tf, such as the AMI to use, the number of nodes, and so on.
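For example (the variable names below are hypothetical — check variables.tf for the real ones), overrides can be passed either as `-var` flags or as `TF_VAR_*` environment variables:

```shell
# Hypothetical variable names -- see variables.tf for the real list.
# Command-line override:
#   terraform apply -var "worker_count=3" -var "ami=ami-0123456789abcdef0"
# Environment override, also picked up when running `make apply`:
export TF_VAR_worker_count=3
echo "worker_count=${TF_VAR_worker_count}"
```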
When you create a cluster, it will output something like:

Outputs:

kubernetes-api-server =
# Use these commands to configure kubectl
kubectl config set-cluster testing --insecure-skip-tls-verify=true --server=IP
kubectl config set-credentials admin --token='4c98e411'
kubectl config set-context testing --cluster=testing --user=admin
kubectl config use-context testing

Run these commands to configure `kubectl`. You can see them again at any time by running `terraform output kubernetes-api-server`.
Test this by running `kubectl get nodes`. You should now be able to use `kubectl` to create services. See the Kubernetes examples to get started.
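As a minimal first thing to run (the manifest below is illustrative, not part of this repo):

```shell
# Write a minimal pod manifest and create it with the kubectl context
# configured above. The manifest is illustrative, not from this repo.
cat > nginx-pod.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx
EOF
# Then:
#   kubectl create -f nginx-pod.yaml
#   kubectl get pods
echo "wrote nginx-pod.yaml"
```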
Most of the differences from kubestack are based on my personal opinions and/or just trying a different approach.
- We use the base CoreOS image and install Kubernetes at boot time. This was originally to skip the "build image, upload, test, repeat" cycle during development, but it also exposes the install process more clearly.
- We use etcd in proxy mode on the master and workers rather than pointing directly to etcd servers.
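The proxy-mode setup corresponds to a CoreOS cloud-config fragment roughly like the one below (a sketch; the repo's actual templates may differ, and `<token>` is a placeholder). In proxy mode a node forwards local client requests to the real etcd cluster rather than storing data itself:

```shell
# Sketch of the cloud-config fragment behind etcd proxy mode (assumed;
# the repo's actual template may differ, <token> is a placeholder).
cat > etcd-proxy.yaml <<'EOF'
#cloud-config
coreos:
  etcd2:
    proxy: on
    discovery: https://discovery.etcd.io/<token>
    listen-client-urls: http://127.0.0.1:2379
EOF
cat etcd-proxy.yaml
```

Pods and system services on each node then talk to etcd at `http://127.0.0.1:2379` regardless of where the real etcd servers live.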