The following blueprint shows how to create a multi-cluster mesh for two private clusters on GKE. Anthos Service Mesh with automatic control plane management is set up for the clusters using the Fleet API. This is only possible when the clusters belong to a single project and share the same VPC. In this particular case, both clusters have been deployed to different subnets of a shared VPC.
The diagram below depicts the architecture of the blueprint.
Terraform is used to provision the required infrastructure, create the IAM binding and register the clusters to the fleet.
Ansible is used to execute commands on the management VM, which has access to the clusters' endpoints. More specifically, Ansible performs the following steps:
- Install the required dependencies on the VM.
- Enable automatic control plane management in both clusters.
- Verify that the control plane has been provisioned for both clusters.
- Configure ASM control plane endpoint discovery between the two clusters.
- Create a sample namespace in both clusters.
- Configure automatic sidecar injection in the created namespace.
- Deploy a hello-world service in both clusters.
- Deploy a hello-world deployment (v1) in cluster a.
- Deploy a hello-world deployment (v2) in cluster b.
- Deploy a sleep service in both clusters.
- Send requests from a sleep pod to the hello-world service in both clusters, verifying that responses come alternately from both versions.
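The endpoint discovery step above is handled by the playbook; as a rough sketch of what it amounts to, ASM's `asmcli create-mesh` registers each cluster's credentials with the other. The fleet project id and kubeconfig paths below are placeholders, not taken from this blueprint:

```shell
# Configure cross-cluster endpoint discovery between the two clusters
# by passing the fleet project and one kubeconfig per cluster to asmcli.
./asmcli create-mesh my-fleet-project-id \
  kubeconfig-cluster-a.yaml \
  kubeconfig-cluster-b.yaml
```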
Clone this repository or open it in Cloud Shell, then go through the following steps to create resources:

```bash
terraform init
terraform apply \
  -var billing_account_id=my-billing-account-id \
  -var parent=folders/my-folder-id \
  -var host_project_id=my-host-project-id \
  -var fleet_project_id=my-fleet-project-id \
  -var mgmt_project_id=my-mgmt-project-id
```
Once Terraform completes, do the following:

- Change to the ansible folder:

  ```bash
  cd ansible
  ```

- Run the Ansible playbook:

  ```bash
  ansible-playbook -v playbook.yaml
  ```
The last two commands executed by Ansible send requests from a sleep pod to the hello-world service in both clusters. If the output of those two commands shows responses alternating between both versions, everything works as expected.
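If you want to repeat that check by hand from the management VM, it amounts to something along these lines. The context names, `sample` namespace, workload names, and port follow the upstream Istio samples and are assumptions; the playbook may use different identifiers:

```shell
# Query the hello-world service from the sleep pod in each cluster a few
# times; with endpoint discovery in place, replies should alternate
# between the v1 and v2 deployments.
for ctx in cluster-a cluster-b; do
  for i in 1 2 3 4; do
    kubectl --context="$ctx" -n sample exec deploy/sleep -c sleep -- \
      curl -sS helloworld.sample:5000/hello
  done
done
```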
Once done testing, you can clean up resources by running `terraform destroy`.
| name | description | modules | resources |
|---|---|---|---|
| ansible.tf | Ansible generated files. | | local_file |
| gke.tf | GKE cluster and hub resources. | gke-cluster-standard · gke-hub · gke-nodepool | |
| main.tf | Project resources. | project | |
| variables.tf | Module variables. | | |
| vm.tf | Management server. | compute-vm | |
| vpc.tf | Networking resources. | net-cloudnat · net-vpc · net-vpc-firewall | |
| name | description | type | required | default |
|---|---|---|---|---|
| billing_account_id | Billing account id. | string | ✓ | |
| fleet_project_id | Fleet project ID. | string | ✓ | |
| host_project_id | Host project ID. | string | ✓ | |
| mgmt_project_id | Management project ID. | string | ✓ | |
| parent | Parent. | string | ✓ | |
| clusters_config | Clusters configuration. | map(object({…})) | | {…} |
| deletion_protection | Prevent Terraform from destroying data storage resources (storage buckets, GKE clusters, CloudSQL instances) in this blueprint. When this field is set in Terraform state, a terraform destroy or terraform apply that would delete data storage resources will fail. | bool | | false |
| istio_version | ASM version. | string | | "1.14.1-asm.3" |
| mgmt_server_config | Management server configuration. | object({…}) | | {…} |
| mgmt_subnet_cidr_block | Management subnet CIDR block. | string | | "10.0.0.0/28" |
| region | Region. | string | | "europe-west1" |
```hcl
module "test" {
  source             = "./fabric/blueprints/gke/multi-cluster-mesh-gke-fleet-api"
  billing_account_id = "123-456-789"
  parent             = "folders/123456789"
  host_project_id    = "my-host-project"
  fleet_project_id   = "my-fleet-project"
  mgmt_project_id    = "my-mgmt-project"
  region             = "europe-west1"
  clusters_config = {
    cluster-a = {
      subnet_cidr_block   = "10.0.1.0/24"
      master_cidr_block   = "10.16.0.0/28"
      services_cidr_block = "192.168.1.0/24"
      pods_cidr_block     = "172.16.0.0/20"
    }
    cluster-b = {
      subnet_cidr_block   = "10.0.2.0/24"
      master_cidr_block   = "10.16.0.16/28"
      services_cidr_block = "192.168.2.0/24"
      pods_cidr_block     = "172.16.16.0/20"
    }
  }
  mgmt_subnet_cidr_block = "10.0.0.0/24"
  istio_version          = "1.14.1-asm.3"
}
# tftest modules=13 resources=59
```