This Terraform module creates an Azure Kubernetes Service cluster and its associated Azure Application Gateway used as an ingress controller.
Inside the cluster default node pool, Velero and cert-manager are installed.
Inside each node pool, Kured is installed as a DaemonSet.
This module also configures logging to a Log Analytics Workspace, deploys Azure Active Directory Pod Identity and creates some Storage Classes backed by different types of Azure managed disks (Standard HDD retain and delete, Premium SSD retain and delete).
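The created Storage Classes can then be referenced from workloads like any other class. For instance, a minimal sketch of a PersistentVolumeClaim declared with the Terraform Kubernetes provider; the managed-premium-retain class name is an assumption, check the names actually created by the module:

resource "kubernetes_persistent_volume_claim" "data" {
  metadata {
    name = "data"
  }
  spec {
    access_modes       = ["ReadWriteOnce"]
    storage_class_name = "managed-premium-retain" # hypothetical class name, verify against the module
    resources {
      requests = {
        storage = "10Gi"
      }
    }
  }
}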
Module version | Terraform version | AzureRM version |
---|---|---|
>= 5.x.x | 0.15.x & 1.0.x | >= 2.10 |
>= 4.x.x | 0.13.x | >= 2.10 |
>= 3.x.x | 0.12.x | >= 2.10 |
>= 2.x.x | 0.12.x | < 2.0 |
< 2.x.x | 0.11.x | < 2.0 |
This module is optimized to work with the Claranet terraform-wrapper tool, which sets some Terraform variables in the environment needed by this module.
More details about variables set by the terraform-wrapper are available in the documentation.
locals {
allowed_cidrs = ["x.x.x.x", "y.y.y.y"]
}
data "azurerm_kubernetes_cluster" "aks" {
depends_on = [module.aks] # refresh cluster state before reading
name = module.aks.aks_name
resource_group_name = module.rg.resource_group_name
}
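# Possible provider configuration fed by the data source above (a sketch, adjust to your
# authentication setup). The same attributes can be wired into the Helm provider's
# "kubernetes" block.
provider "kubernetes" {
  host                   = data.azurerm_kubernetes_cluster.aks.kube_config[0].host
  client_certificate     = base64decode(data.azurerm_kubernetes_cluster.aks.kube_config[0].client_certificate)
  client_key             = base64decode(data.azurerm_kubernetes_cluster.aks.kube_config[0].client_key)
  cluster_ca_certificate = base64decode(data.azurerm_kubernetes_cluster.aks.kube_config[0].cluster_ca_certificate)
}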
module "azure_region" {
source = "claranet/regions/azurerm"
version = "x.x.x"
azure_region = var.azure_region
}
module "rg" {
source = "claranet/rg/azurerm"
version = "x.x.x"
location = module.azure_region.location
client_name = var.client_name
environment = var.environment
stack = var.stack
}
module "azure_virtual_network" {
source = "claranet/vnet/azurerm"
version = "x.x.x"
environment = var.environment
location = module.azure_region.location
location_short = module.azure_region.location_short
client_name = var.client_name
stack = var.stack
resource_group_name = module.rg.resource_group_name
vnet_cidr = ["10.0.0.0/19"]
}
module "node_network_subnet" {
source = "claranet/subnet/azurerm"
version = "x.x.x"
environment = var.environment
location_short = module.azure_region.location_short
client_name = var.client_name
stack = var.stack
resource_group_name = module.rg.resource_group_name
virtual_network_name = module.azure_virtual_network.virtual_network_name
subnet_cidr_list = ["10.0.0.0/20"]
service_endpoints = ["Microsoft.Storage"]
}
module "appgtw_network_subnet" {
source = "claranet/subnet/azurerm"
version = "x.x.x"
environment = var.environment
location_short = module.azure_region.location_short
client_name = var.client_name
stack = var.stack
resource_group_name = module.rg.resource_group_name
virtual_network_name = module.azure_virtual_network.virtual_network_name
subnet_cidr_list = ["10.0.20.0/24"]
}
module "global_run" {
source = "claranet/run-common/azurerm"
version = "x.x.x"
client_name = var.client_name
location = module.azure_region.location
location_short = module.azure_region.location_short
environment = var.environment
stack = var.stack
monitoring_function_splunk_token = var.monitoring_function_splunk_token
resource_group_name = module.rg.resource_group_name
tenant_id = var.azure_tenant_id
}
module "aks" {
source = "claranet/aks/azurerm"
version = "x.x.x"
client_name = var.client_name
environment = var.environment
stack = var.stack
resource_group_name = module.rg.resource_group_name
location = module.azure_region.location
location_short = module.azure_region.location_short
service_cidr = "10.0.16.0/22"
kubernetes_version = "1.19.7"
vnet_id = module.azure_virtual_network.virtual_network_id
nodes_subnet_id = module.node_network_subnet.subnet_id
nodes_pools = [
{
name = "pool1"
count = 1
vm_size = "Standard_D1_v2"
os_type = "Linux"
os_disk_type = "Ephemeral"
os_disk_size_gb = 30
vnet_subnet_id = module.node_network_subnet.subnet_id
},
{
name = "bigpool1"
count = 3
vm_size = "Standard_F8s_v2"
os_type = "Linux"
os_disk_size_gb = 30
vnet_subnet_id = module.node_network_subnet.subnet_id
enable_auto_scaling = true
min_count = 3
max_count = 9
}
]
linux_profile = {
username = "user"
ssh_key = "ssh_priv_key"
}
addons = {
dashboard = false
oms_agent = true
oms_agent_workspace_id = module.global_run.log_analytics_workspace_id
policy = false
}
diagnostic_settings_logs_destination_ids = [module.global_run.log_analytics_workspace_id]
appgw_subnet_id = module.appgtw_network_subnet.subnet_id
appgw_ingress_controller_values = { "verbosityLevel" = "5", "appgw.shared" = "true" }
cert_manager_settings = { "cainjector.nodeSelector.agentpool" = "default", "nodeSelector.agentpool" = "default", "webhook.nodeSelector.agentpool" = "default" }
velero_storage_settings = { allowed_cidrs = local.allowed_cidrs }
container_registries_id = [module.acr.acr_id]
}
module "acr" {
source = "claranet/acr/azurerm"
version = "x.x.x"
location = module.azure_region.location
location_short = module.azure_region.location_short
resource_group_name = module.rg.resource_group_name
sku = "Standard"
client_name = var.client_name
environment = var.environment
stack = var.stack
logs_destinations_ids = [module.global_run.log_analytics_workspace_id]
}
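The outputs listed further below can be consumed like any other module outputs. For example, a minimal sketch exposing the cluster's raw kubeconfig from this stack (the aks_kube_config_raw output name is taken from the outputs table):

output "kube_config_raw" {
  description = "Raw kubeconfig of the AKS cluster, usable with kubectl --kubeconfig"
  value       = module.aks.aks_kube_config_raw
  sensitive   = true
}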
Name | Version |
---|---|
azurerm | >= 2.51 |
Name | Source | Version |
---|---|---|
appgw | ./tools/agic | n/a |
certmanager | ./tools/cert-manager | n/a |
diagnostic_settings | claranet/diagnostic-settings/azurerm | 4.0.3 |
infra | ./modules/infra | n/a |
kured | ./tools/kured | n/a |
velero | ./tools/velero | n/a |
Name | Description | Type | Default | Required |
---|---|---|---|---|
aadpodidentity_chart_repository | AAD Pod Identity Helm chart repository URL | string | "https://raw.githubusercontent.com/Azure/aad-pod-identity/master/charts" | no |
aadpodidentity_chart_version | AAD Pod Identity Helm chart version to use | string | "2.0.0" | no |
aadpodidentity_namespace | Kubernetes namespace in which to deploy AAD Pod Identity | string | "system-aadpodid" | no |
aadpodidentity_values | Settings for AAD Pod Identity Helm chart | map(string) | {} | no |
addons | Kubernetes addons to enable/disable | object({…}) | {…} | no |
agic_chart_repository | Helm chart repository URL | string | "https://appgwingress.blob.core.windows.net/ingress-azure-helm-package/" | no |
agic_chart_version | Version of the Helm chart | string | "1.2.0" | no |
agic_enabled | Enable Application Gateway Ingress Controller | bool | true | no |
agic_helm_version | [DEPRECATED] Version of Helm chart to deploy | string | null | no |
aks_sku_tier | AKS SKU tier. Possible values are Free or Paid | string | "Free" | no |
aks_user_assigned_identity_custom_name | Custom name for the AKS user assigned identity resource | string | null | no |
aks_user_assigned_identity_resource_group_name | Resource Group where to deploy the AKS user assigned identity resource. Used when private cluster is enabled and when the Azure private DNS zone is not managed by AKS | string | null | no |
api_server_authorized_ip_ranges | IP ranges allowed to interact with the Kubernetes API. Default: no restrictions | list(string) | [] | no |
appgw_identity_enabled | Configure a managed service identity for the Application Gateway used with AGIC (useful to configure SSL certificates into the App Gateway from Key Vault) | bool | false | no |
appgw_ingress_controller_values | Application Gateway Ingress Controller settings | map(string) | {} | no |
appgw_private_ip | Private IP for the Application Gateway. Used when the private_ingress variable is set to true. | string | null | no |
appgw_settings | Application Gateway configuration settings. Default dummy configuration | map(any) | {} | no |
appgw_ssl_certificates_configs | Application Gateway SSL certificates configuration | list(map(string)) | [] | no |
appgw_subnet_id | Application Gateway subnet ID | string | "" | no |
appgw_user_assigned_identity_custom_name | Custom name for the Application Gateway user assigned identity resource | string | null | no |
appgw_user_assigned_identity_resource_group_name | Resource Group where to deploy the Application Gateway user assigned identity resource | string | null | no |
cert_manager_chart_repository | Helm chart repository URL | string | "https://charts.jetstack.io" | no |
cert_manager_chart_version | Cert Manager Helm chart version to use | string | "v0.13.0" | no |
cert_manager_namespace | Kubernetes namespace in which to deploy Cert Manager | string | "system-cert-manager" | no |
cert_manager_settings | Settings for cert-manager Helm chart | map(string) | {} | no |
client_name | Client name/account used in naming | string | n/a | yes |
container_registries_id | List of Azure Container Registry IDs where AKS needs pull access | list(string) | [] | no |
custom_aks_name | Custom AKS name | string | "" | no |
custom_appgw_name | Custom name for the AKS ingress Application Gateway | string | "" | no |
default_node_pool | Default node pool configuration | map(any) | {} | no |
diagnostic_settings_custom_name | Custom name for Azure Diagnostics for AKS | string | "default" | no |
diagnostic_settings_log_categories | List of log categories | list(string) | null | no |
diagnostic_settings_logs_destination_ids | List of destination resource IDs for logs diagnostic destination. Can be Storage Account, Log Analytics Workspace and Event Hub. No more than one of each can be set. | list(string) | [] | no |
diagnostic_settings_metric_categories | List of metric categories | list(string) | null | no |
diagnostic_settings_retention_days | The number of days to keep diagnostic logs | number | 30 | no |
docker_bridge_cidr | IP address for the Docker bridge, with network CIDR | string | "172.16.0.1/16" | no |
enable_cert_manager | Enable cert-manager on the AKS cluster | bool | true | no |
enable_kured | Enable the Kured daemon on the AKS cluster | bool | true | no |
enable_pod_security_policy | Enable pod security policy or not. https://docs.microsoft.com/fr-fr/azure/AKS/use-pod-security-policies | bool | false | no |
enable_velero | Enable Velero on the AKS cluster | bool | true | no |
environment | Project environment | string | n/a | yes |
extra_tags | Extra tags to add | map(string) | {} | no |
kubernetes_version | Version of Kubernetes to deploy | string | "1.17.9" | no |
kured_chart_repository | Helm chart repository URL | string | "https://weaveworks.github.io/kured" | no |
kured_chart_version | Version of the Helm chart | string | "2.2.0" | no |
kured_settings | Settings for the Kured Helm chart | map(string) | {} | no |
linux_profile | Username and SSH key for accessing AKS Linux nodes with SSH | object({…}) | null | no |
location | Azure region to use | string | n/a | yes |
location_short | Short name of the Azure region to use | string | n/a | yes |
name_prefix | Prefix used in naming | string | "" | no |
node_resource_group | Name of the resource group in which to put AKS nodes. If null, defaults to MC_ | string | null | no |
nodes_pools | A list of node pools to create; each item supports the same properties as local.default_agent_profile | list(any) | n/a | yes |
nodes_subnet_id | ID of the subnet used for nodes | string | n/a | yes |
outbound_type | The outbound (egress) routing method which should be used for this Kubernetes cluster. Possible values are loadBalancer and userDefinedRouting. | string | "loadBalancer" | no |
private_cluster_enabled | Configure AKS as a private cluster: https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/kubernetes_cluster#private_cluster_enabled | bool | false | no |
private_dns_zone_id | ID of the private DNS Zone when <private_dns_zone_type> is custom | string | null | no |
private_dns_zone_type | Set the AKS private DNS zone if needed and if private cluster is enabled (privatelink..azmk8s.io). "Custom": you will have to deploy a private DNS Zone on your own and pass its ID with the <private_dns_zone_id> variable; if this setting is used, the AKS user assigned identity will be "userassigned" instead of "systemassigned" and it must have the "Private DNS Zone Contributor" role on the private DNS Zone. "System": AKS will manage the private zone and create it in the same resource group as the Node Resource Group. "None": you will need to bring your own DNS server and set up resolving, otherwise the cluster will have issues after provisioning. https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/kubernetes_cluster#private_dns_zone_id | string | "System" | no |
private_ingress | Private ingress boolean variable. When true, the default HTTP listener will listen on the private IP instead of the public IP. | bool | false | no |
resource_group_name | Name of the AKS resource group | string | n/a | yes |
service_cidr | CIDR used by Kubernetes services (kubectl get svc) | string | n/a | yes |
stack | Project stack name | string | n/a | yes |
velero_chart_repository | URL of the Helm chart repository | string | "https://vmware-tanzu.github.io/helm-charts" | no |
velero_chart_version | Velero Helm chart version to use | string | "2.12.13" | no |
velero_namespace | Kubernetes namespace in which to deploy Velero | string | "system-velero" | no |
velero_storage_settings | Settings for the Storage Account and blob container used by Velero | map(any) | {} | no |
velero_values | Settings for the Velero Helm chart | map(string) | {} | no |
vnet_id | VNet ID on which the AKS MSI should be Network Contributor in a private cluster | string | null | no |
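To enable a private cluster with a DNS zone managed outside the module, the inputs above can be combined as in the following sketch; the azurerm_private_dns_zone.aks resource is a hypothetical zone declared elsewhere in the stack, and the remaining arguments stay as in the usage example:

  # additional arguments for the module "aks" call from the usage example
  private_cluster_enabled = true
  private_dns_zone_type   = "Custom"
  private_dns_zone_id     = azurerm_private_dns_zone.aks.id # hypothetical private DNS zone managed outside this module
  vnet_id                 = module.azure_virtual_network.virtual_network_id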
Name | Description |
---|---|
aad_pod_identity_azure_identity | Identity object for AAD Pod Identity |
aad_pod_identity_namespace | Namespace used for AAD Pod Identity |
agic_namespace | Namespace used for AGIC |
aks_id | AKS resource id |
aks_kube_config | Kube configuration of AKS Cluster |
aks_kube_config_raw | Raw kube config to be used by kubectl command |
aks_name | Name of the AKS cluster |
aks_nodes_pools_ids | Ids of AKS nodes pools |
aks_nodes_pools_names | Names of AKS nodes pools |
aks_nodes_rg | Name of the resource group in which AKS nodes are deployed |
aks_user_managed_identity | The User Managed Identity used by AKS Agents |
application_gateway_id | Id of the application gateway used by AKS |
application_gateway_identity_principal_id | Id of the managed service identity of the application gateway used by AKS |
application_gateway_name | Name of the application gateway used by AKS |
cert_manager_namespace | Namespace used for Cert Manager |
kured_namespace | Namespace used for Kured |
public_ip_id | Id of the public ip used by AKS application gateway |
public_ip_name | Name of the public ip used by AKS application gateway |
velero_identity | Azure Identity used for Velero pods |
velero_namespace | Namespace used for Velero |
velero_storage_account | Storage Account on which Velero data is stored. |
velero_storage_account_container | Container in Storage Account on which Velero data is stored. |
- Azure Kubernetes Service documentation: docs.microsoft.com/en-us/azure/aks/
- Azure Kubernetes Service MSI usage: docs.microsoft.com/en-us/azure/aks/use-managed-identity
- Azure Kubernetes Service User-Defined Route usage: docs.microsoft.com/en-us/azure/aks/egress-outboundtype
- Terraform Kubernetes provider documentation: www.terraform.io/docs/providers/kubernetes/index.html
- Terraform Helm provider documentation: www.terraform.io/docs/providers/helm/index.html
- Kured documentation: github.com/weaveworks/kured
- Velero documentation: velero.io/docs/v1.2.0/
- Velero Azure specific documentation: github.com/vmware-tanzu/velero-plugin-for-microsoft-azure
- cert-manager documentation: cert-manager.io/docs/