Venkata Mutyala edited this page Sep 1, 2023 · 27 revisions

Account Setup

  1. Create a new AWS account underneath your existing AWS Organization
  2. Ask AWS Support or your AWS account representative to "activate" your account. Activation can take anywhere from a few hours to 10+ days, and the timeline is based entirely on your AWS account history. If you have a dedicated AWS representative, it helps to ask them to "activate" the account directly, as they may be able to get it done faster.
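Step 1 can be done from the Organization's management account with the AWS CLI. A sketch is below; the email address and account name are placeholders, and the commands require credentials for the management account:

```shell
# Run from the AWS Organization's management account.
# Each AWS account needs a unique email address; both values below are placeholders.
aws organizations create-account \
  --email aws-sub-account@example.com \
  --account-name "glueops-captain"

# Account creation is asynchronous; poll until the request reports SUCCEEDED.
aws organizations list-create-account-status \
  --states IN_PROGRESS SUCCEEDED
```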


Service Account / IAM User / IAM Role setup

  1. Within your new sub-account, create an IAM user with full "Administrator Access" and generate an access key for it.
  2. Create an IAM role that is assumable by the IAM user you just created, and grant the role full "Administrator Access". Here is a video on how to create the role.
  3. Keep track of the IAM user's access key ID and secret access key, as well as the IAM role ARN (Amazon Resource Name); you will need these three values when setting up your cluster.
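The role's trust policy from step 2 names the IAM user from step 1 as a trusted principal. A minimal sketch, assuming the placeholder account ID, user name, and role name shown (the `aws iam` calls are left commented because they require admin credentials for the sub-account):

```shell
# Placeholder values -- substitute your own account ID, IAM user, and role name.
USER_ARN="arn:aws:iam::111111111111:user/glueops-admin"
ROLE_NAME="glueops-captain-role"

# Trust policy that lets the IAM user assume the role.
cat > trust-policy.json <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "${USER_ARN}" },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF

# With admin credentials for the sub-account configured, create the role and
# attach the AWS-managed AdministratorAccess policy:
#   aws iam create-role --role-name "${ROLE_NAME}" \
#     --assume-role-policy-document file://trust-policy.json
#   aws iam attach-role-policy --role-name "${ROLE_NAME}" \
#     --policy-arn arn:aws:iam::aws:policy/AdministratorAccess
```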


Environment variables:

  • Export the following environment variables, using the IAM user access key and secret from earlier; set the region to whatever your preferred AWS region is. We recommend saving these values in a .env file for future use.
  • These environment variables must be set any time you run Terraform or interact with the Kubernetes API (including kubectl commands).
export AWS_ACCESS_KEY_ID=XXXXXXXXXXXXXXXXXXXXXX
export AWS_SECRET_ACCESS_KEY=XXXXXXXXXXXXXXXXXXXXXX
export AWS_DEFAULT_REGION=us-west-2
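One way to keep these in the recommended .env file and load them into your shell before each session (the values are placeholders, as above):

```shell
# Write the variables once (placeholder values shown):
cat > .env <<'EOF'
export AWS_ACCESS_KEY_ID=XXXXXXXXXXXXXXXXXXXXXX
export AWS_SECRET_ACCESS_KEY=XXXXXXXXXXXXXXXXXXXXXX
export AWS_DEFAULT_REGION=us-west-2
EOF

# Load them into the current shell before running terraform or kubectl:
source .env
echo "$AWS_DEFAULT_REGION"   # prints: us-west-2
```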


Deployment

  • You need to deploy the base cluster first (see the example usage of the module and keep the node pools commented out).
  • Once the EKS cluster itself is deployed, you need to set up the Calico CNI:
    • Authenticate to the cluster as shown in the Create a kubeconfig section of this page.
    • Delete the existing AWS VPC CNI DaemonSet:
kubectl delete daemonset -n kube-system aws-node
    • Create a calico.yaml (confirm the CIDR to use, but in most cases the default below will be fine):
installation:
  enabled: true
  kubernetesProvider: EKS
  typhaMetricsPort: 9093
  cni:
    type: Calico
  calicoNetwork:
    bgp: Disabled
    ipPools:
    - cidr: 172.16.0.0/16
      encapsulation: VXLAN

apiServer:
  enabled: true

# Resource requests and limits for the tigera/operator pod.
resources: {}

# Tolerations for the tigera/operator pod.
tolerations:
- effect: NoExecute
  operator: Exists
- effect: NoSchedule
  operator: Exists
- key: "glueops.dev/role"
  operator: "Equal"
  value: "glueops-platform"
  effect: "NoSchedule" 

# NodeSelector for the tigera/operator pod.
nodeSelector:
  kubernetes.io/os: linux

# Custom annotations for the tigera/operator pod.
podAnnotations: {}

# Custom labels for the tigera/operator pod.
podLabels: {}

# Image and registry configuration for the tigera/operator pod.
tigeraOperator:
  image: tigera/operator
  version: v1.30.4
  registry: quay.io
calicoctl:
  image: docker.io/calico/ctl
  tag: v3.26.1
    • Install the CNI:
helm repo add projectcalico https://docs.tigera.io/calico/charts
helm repo update
helm install calico projectcalico/tigera-operator --version v3.26.1 --namespace tigera-operator -f calico.yaml --create-namespace
  • Lastly, deploy a node pool via Terraform (uncomment the node pools you commented out earlier) and you will be all set.
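Before deploying the node pool, you can check that the helm install above actually rolled Calico out. These commands assume you are still authenticated to the cluster; the exact component names may vary by Calico version:

```shell
# The tigera operator reports component health via the tigerastatus CRD;
# components should eventually show AVAILABLE=True.
kubectl get tigerastatus

# The operator creates the calico-system namespace and runs Calico pods there.
kubectl get pods -n calico-system
```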

Post Deployment

Accessing your Kubernetes Cluster

Create a kubeconfig

  • Your environment variables (from the .env file) must be set before you can run the command below.
aws eks update-kubeconfig --region us-west-2 --name captain-cluster --role-arn arn:aws:iam::XXXXXXXXXXXXXXXXXXXXXX:role/glueops-captain-role
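A quick way to confirm the kubeconfig works (this assumes the environment variables above are exported; an error here usually means they are missing or the role ARN is wrong):

```shell
# Should list the EKS worker nodes for the cluster.
kubectl get nodes

# Should show the core system pods running.
kubectl get pods -n kube-system
```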

