feat: add max pods to node pool definition (#16)
* feat: add max pods to node pool definition
* feat: update csi driver to v1.18.0-eksbuild.1
* feat: update AMI to amazon-eks-node-1.26-v20230501
* feat: adding calico to tests
* feat: adjust size of nodes
* fix: use up the entire subnet provided versus reserving half for future private subnet usage

---------

Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
venkatamutyala and github-actions[bot] authored May 9, 2023
1 parent da03aa2 commit 463c654
Showing 9 changed files with 70 additions and 40 deletions.
27 changes: 14 additions & 13 deletions README.md
@@ -16,21 +16,22 @@ For more details see: https://github.com/GlueOps/terraform-module-cloud-aws-kube
```hcl
module "captain" {
iam_role_to_assume = "arn:aws:iam::1234567890:role/glueops-captain"
- source = "git::https://github.com/GlueOps/terraform-module-cloud-aws-kubernetes-cluster.git"
+ source = "git::https://github.com/GlueOps/terraform-module-cloud-aws-kubernetes-cluster.git?ref=feat/multiple-node-pools"
eks_version = "1.26"
- csi_driver_version = "v1.17.0-eksbuild.1"
- vpc_cidr_block = "10.65.0.0/16"
+ csi_driver_version = "v1.18.0-eksbuild.1"
+ vpc_cidr_block = "10.65.0.0/26"
region = "us-west-2"
availability_zones = ["us-west-2a", "us-west-2b"]
node_pools = [
- {
- "ami_image_id" : "amazon-eks-node-1.26-v20230411",
- "instance_type" : "t3a.large",
- "name" : "clusterwide-node-pool-1",
- "node_count" : 3,
- "spot" : false,
- "disk_size_gb" : 20
- }
+ # {
+ # "ami_image_id" : "amazon-eks-node-1.26-v20230501",
+ # "instance_type" : "t3a.large",
+ # "name" : "clusterwide-node-pool-1",
+ # "node_count" : 3,
+ # "spot" : false,
+ # "disk_size_gb" : 20,
+ # "max_pods" : 110
+ # }
]
}
```
@@ -52,7 +53,7 @@ module "captain" {
| Name | Source | Version |
|------|--------|---------|
| <a name="module_kubernetes"></a> [kubernetes](#module\_kubernetes) | cloudposse/eks-cluster/aws | 2.6.0 |
- | <a name="module_node_pool"></a> [node\_pool](#module\_node\_pool) | cloudposse/eks-node-group/aws | 2.9.0 |
+ | <a name="module_node_pool"></a> [node\_pool](#module\_node\_pool) | cloudposse/eks-node-group/aws | 2.9.1 |
| <a name="module_subnets"></a> [subnets](#module\_subnets) | cloudposse/dynamic-subnets/aws | 2.0.4 |
| <a name="module_vpc"></a> [vpc](#module\_vpc) | cloudposse/vpc/aws | 2.0.0 |
| <a name="module_vpc_peering_accepter_with_routes"></a> [vpc\_peering\_accepter\_with\_routes](#module\_vpc\_peering\_accepter\_with\_routes) | ./modules/vpc_peering_accepter_with_routes | n/a |
@@ -79,7 +80,7 @@ module "captain" {
| <a name="input_csi_driver_version"></a> [csi\_driver\_version](#input\_csi\_driver\_version) | You should grab the appropriate version number from: https://github.com/kubernetes-sigs/aws-ebs-csi-driver/blob/master/CHANGELOG.md | `string` | `"v1.17.0-eksbuild.1"` | no |
| <a name="input_eks_version"></a> [eks\_version](#input\_eks\_version) | The version of EKS to deploy | `string` | `"1.26"` | no |
| <a name="input_iam_role_to_assume"></a> [iam\_role\_to\_assume](#input\_iam\_role\_to\_assume) | The full ARN of the IAM role to assume | `string` | n/a | yes |
- | <a name="input_node_pools"></a> [node\_pools](#input\_node\_pools) | node pool configurations:<br> - name (string): Name of the node pool. MUST BE UNIQUE! Recommended to use YYYYMMDD in the name<br> - node\_count (number): number of nodes to create in the node pool.<br> - instance\_type (string): Instance type to use for the nodes. ref: https://instances.vantage.sh/<br> - ami\_image\_id (string): AMI to use for EKS worker nodes. ref: https://github.com/awslabs/amazon-eks-ami/releases<br> - spot (bool): Enable spot instances for the nodes. DO NOT ENABLE IN PROD!<br> - disk\_size\_gb (number): Disk size in GB for the nodes. | <pre>list(object({<br> name = string<br> node_count = number<br> instance_type = string<br> ami_image_id = string<br> spot = bool<br> disk_size_gb = number<br> }))</pre> | <pre>[<br> {<br> "ami_image_id": "amazon-eks-node-1.24-v20230406",<br> "disk_size_gb": 20,<br> "instance_type": "t3a.large",<br> "name": "default-pool",<br> "node_count": 1,<br> "spot": false<br> }<br>]</pre> | no |
+ | <a name="input_node_pools"></a> [node\_pools](#input\_node\_pools) | node pool configurations:<br> - name (string): Name of the node pool. MUST BE UNIQUE! Recommended to use YYYYMMDD in the name<br> - node\_count (number): number of nodes to create in the node pool.<br> - instance\_type (string): Instance type to use for the nodes. ref: https://instances.vantage.sh/<br> - ami\_image\_id (string): AMI to use for EKS worker nodes. ref: https://github.com/awslabs/amazon-eks-ami/releases<br> - spot (bool): Enable spot instances for the nodes. DO NOT ENABLE IN PROD!<br> - disk\_size\_gb (number): Disk size in GB for the nodes.<br> - max\_pods (number): max pods that can be scheduled per node. | <pre>list(object({<br> name = string<br> node_count = number<br> instance_type = string<br> ami_image_id = string<br> spot = bool<br> disk_size_gb = number<br> max_pods = number<br> }))</pre> | <pre>[<br> {<br> "ami_image_id": "amazon-eks-node-1.24-v20230406",<br> "disk_size_gb": 20,<br> "instance_type": "t3a.large",<br> "max_pods": 110,<br> "name": "default-pool",<br> "node_count": 1,<br> "spot": false<br> }<br>]</pre> | no |
| <a name="input_peering_configs"></a> [peering\_configs](#input\_peering\_configs) | A list of maps containing VPC peering configuration details | <pre>list(object({<br> vpc_peering_connection_id = string<br> destination_cidr_block = string<br> }))</pre> | `[]` | no |
| <a name="input_region"></a> [region](#input\_region) | The AWS region to deploy into | `string` | n/a | yes |
| <a name="input_vpc_cidr_block"></a> [vpc\_cidr\_block](#input\_vpc\_cidr\_block) | The CIDR block for the VPC | `string` | `"10.65.0.0/16"` | no |
21 changes: 11 additions & 10 deletions docs/.header.md
@@ -17,19 +17,20 @@ module "captain" {
iam_role_to_assume = "arn:aws:iam::1234567890:role/glueops-captain"
source = "git::https://github.com/GlueOps/terraform-module-cloud-aws-kubernetes-cluster.git?ref=feat/multiple-node-pools"
eks_version = "1.26"
- csi_driver_version = "v1.17.0-eksbuild.1"
- vpc_cidr_block = "10.65.0.0/16"
+ csi_driver_version = "v1.18.0-eksbuild.1"
+ vpc_cidr_block = "10.65.0.0/26"
region = "us-west-2"
availability_zones = ["us-west-2a", "us-west-2b"]
node_pools = [
- {
- "ami_image_id" : "amazon-eks-node-1.26-v20230411",
- "instance_type" : "t3a.large",
- "name" : "clusterwide-node-pool-1",
- "node_count" : 3,
- "spot" : false,
- "disk_size_gb" : 20
- }
+ # {
+ # "ami_image_id" : "amazon-eks-node-1.26-v20230501",
+ # "instance_type" : "t3a.large",
+ # "name" : "clusterwide-node-pool-1",
+ # "node_count" : 3,
+ # "spot" : false,
+ # "disk_size_gb" : 20,
+ # "max_pods" : 110
+ # }
]
}
```
6 changes: 5 additions & 1 deletion main.tf
@@ -29,7 +29,7 @@ module "node_pool" {
for_each = { for np in var.node_pools : np.name => np }
source = "cloudposse/eks-node-group/aws"
# Cloud Posse recommends pinning every module to a specific version
- version = "2.9.0"
+ version = "2.9.1"

instance_types = [each.value.instance_type]
subnet_ids = module.subnets.public_subnet_ids
@@ -52,6 +52,9 @@ module "node_pool" {
"volume_type" : "gp2"
}
]
+ kubelet_additional_options = [
+ "--max-pods=${each.value.max_pods}"
+ ]
associated_security_group_ids = [aws_security_group.captain.id]
}
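The `kubelet_additional_options` line feeds each pool's `max_pods` straight to the kubelet. This matters because the test cluster swaps the AWS VPC CNI for Calico: under the VPC CNI, EKS caps pods by ENI capacity, while Calico's overlay does not consume a VPC IP per pod. A sketch of the standard ENI-based formula (the instance figures below are assumptions about t3a.large from AWS's published limits, not values taken from this diff):

```python
def eni_max_pods(enis: int, ipv4_per_eni: int) -> int:
    """AWS VPC CNI pod ceiling: one secondary IP per pod, so subtract the
    primary IP on each ENI, then add 2 for host-networked aws-node/kube-proxy."""
    return enis * (ipv4_per_eni - 1) + 2

# t3a.large is documented as 3 ENIs x 12 IPv4 addresses per ENI
print(eni_max_pods(3, 12))  # -> 35; with Calico a higher --max-pods such as 110 is viable
```

With the VPC CNI a t3a.large would top out at 35 pods, which is why the examples only bother setting `max_pods` once Calico is in play.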

@@ -100,6 +103,7 @@ resource "aws_eks_addon" "ebs_csi" {
resolve_conflicts = "OVERWRITE"
service_account_role_arn = aws_iam_role.eks_addon_ebs_csi_role.arn
depends_on = [aws_iam_role_policy_attachment.ebs_csi, module.node_pool]
+ count = length(var.node_pools) > 0 ? 1 : 0
}


1 change: 1 addition & 0 deletions network.tf
@@ -20,6 +20,7 @@ module "subnets" {
private_subnets_enabled = false
public_subnets_enabled = true
availability_zones = var.availability_zones
+ max_subnet_count = length(var.availability_zones)
}

resource "aws_security_group" "captain" {
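The `max_subnet_count` addition makes cloudposse/dynamic-subnets divide the CIDR across exactly the listed availability zones instead of reserving address space for private subnets that are never created (`private_subnets_enabled = false`). The arithmetic can be sketched like this (illustrative only, not the module's real implementation):

```python
import ipaddress

def split_for_azs(cidr: str, az_count: int):
    """Carve a block into the smallest equal subnets that cover az_count AZs."""
    net = ipaddress.ip_network(cidr)
    extra_bits = max(1, (az_count - 1).bit_length())  # prefix bits to add
    return list(net.subnets(prefixlen_diff=extra_bits))[:az_count]

# The /26 from the examples, split across us-west-2a and us-west-2b:
for subnet in split_for_azs("10.65.0.0/26", 2):
    print(subnet, subnet.num_addresses)  # two /27s, 32 addresses each
```

This is also why the example `vpc_cidr_block` can shrink from `/16` to `/26`: the whole block now goes to the public subnets.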
6 changes: 3 additions & 3 deletions peering.tf
@@ -2,7 +2,7 @@


module "vpc_peering_accepter_with_routes" {
- source = "./modules/vpc_peering_accepter_with_routes"
- route_table_ids = concat(module.subnets.private_route_table_ids, module.subnets.public_route_table_ids)
- peering_configs = var.peering_configs
+ source = "./modules/vpc_peering_accepter_with_routes"
+ route_table_ids = concat(module.subnets.private_route_table_ids, module.subnets.public_route_table_ids)
+ peering_configs = var.peering_configs
}
23 changes: 12 additions & 11 deletions tests/main.tf
@@ -2,18 +2,19 @@ module "captain" {
iam_role_to_assume = "arn:aws:iam::761182885829:role/glueops-captain"
source = "../"
eks_version = "1.26"
- csi_driver_version = "v1.17.0-eksbuild.1"
- vpc_cidr_block = "10.65.0.0/16"
+ csi_driver_version = "v1.18.0-eksbuild.1"
+ vpc_cidr_block = "10.65.0.0/26"
region = "us-west-2"
availability_zones = ["us-west-2a", "us-west-2b"]
node_pools = [
- {
- "ami_image_id" : "amazon-eks-node-1.26-v20230406",
- "instance_type" : "t3a.medium",
- "name" : "clusterwide-node-pool-1",
- "node_count" : 1,
- "spot" : false,
- "disk_size_gb" : 20
- }
+ # {
+ # "ami_image_id" : "amazon-eks-node-1.26-v20230411",
+ # "instance_type" : "t3a.small",
+ # "name" : "clusterwide-node-pool-1",
+ # "node_count" : 2,
+ # "spot" : false,
+ # "disk_size_gb" : 20,
+ # "max_pods" : 1000
+ # }
]
- }
+ }
14 changes: 12 additions & 2 deletions tests/run.sh
@@ -1,5 +1,7 @@
# #!/usr/bin/env bash

+ set -e

./destroy-aws.sh

echo "Terraform Init"
@@ -8,12 +10,20 @@ echo "Terraform Plan"
terraform plan
echo "Terraform Apply"
- terraform apply -auto-approve
+ terraform apply -auto-approve
+ echo "Authenticate with Kubernetes"
+ aws eks update-kubeconfig --region us-west-2 --name captain-cluster --role-arn arn:aws:iam::761182885829:role/glueops-captain
+ echo "Delete AWS CNI"
+ kubectl delete daemonset -n kube-system aws-node
+ echo "Install Calico CNI"
+ helm repo add projectcalico https://docs.tigera.io/calico/charts
+ helm repo update
+ helm install calico projectcalico/tigera-operator --version v3.25.1 --namespace tigera-operator -f values.yaml --create-namespace
+ echo "Deploy node pool"
+ sed -i 's/#//g' main.tf
+ terraform apply -auto-approve
echo "Get nodes and pods from kubernetes"
kubectl get nodes
- kubectl get pods --all-namespaces
+ kubectl get pods -A -o=wide
echo "Start Test Suite"
./k8s-test.sh
echo "Test Suite Complete"
9 changes: 9 additions & 0 deletions tests/values.yaml
@@ -0,0 +1,9 @@
+ installation:
+   kubernetesProvider: EKS
+   cni:
+     type: Calico
+   calicoNetwork:
+     bgp: Disabled
+     ipPools:
+       - cidr: 172.16.0.0/16
+         encapsulation: VXLAN
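values.yaml gives Calico its own pod pool (172.16.0.0/16, VXLAN-encapsulated), which must stay disjoint from the VPC CIDR and any peered destination CIDR. A quick overlap check of the sort that could sit beside the tests (hypothetical helper, not part of this commit):

```python
import ipaddress

def assert_no_overlap(pod_cidr: str, *infra_cidrs: str) -> None:
    """Fail loudly if the pod pool collides with any infrastructure CIDR."""
    pool = ipaddress.ip_network(pod_cidr)
    for cidr in infra_cidrs:
        if pool.overlaps(ipaddress.ip_network(cidr)):
            raise ValueError(f"pod pool {pool} overlaps {cidr}")

# CIDRs from this commit's examples:
assert_no_overlap("172.16.0.0/16", "10.65.0.0/26")
print("pod pool is disjoint from the VPC")
```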
3 changes: 3 additions & 0 deletions variables.tf
@@ -36,6 +36,7 @@ variable "node_pools" {
ami_image_id = string
spot = bool
disk_size_gb = number
+ max_pods = number
}))
default = [{
name = "default-pool"
@@ -44,6 +45,7 @@
ami_image_id = "amazon-eks-node-1.24-v20230406"
spot = false
disk_size_gb = 20
+ max_pods = 110
}]
description = <<-DESC
node pool configurations:
@@ -53,6 +55,7 @@
- ami_image_id (string): AMI to use for EKS worker nodes. ref: https://github.com/awslabs/amazon-eks-ami/releases
- spot (bool): Enable spot instances for the nodes. DO NOT ENABLE IN PROD!
- disk_size_gb (number): Disk size in GB for the nodes.
+ - max_pods (number): max pods that can be scheduled per node.
DESC
}

