This module allows managing a GCE Internal Load Balancer and integrates the forwarding rule, regional backend, and optional health check resources. It's designed to be a simple match for the `compute-vm` module, which can be used to manage instance templates and instance groups.
This example shows how to reference existing Managed Instance Groups (MIGs).
module "instance_template" {
source = "./fabric/modules/compute-vm"
project_id = var.project_id
zone = "europe-west1-b"
name = "vm-test"
create_template = true
service_account = {
auto_create = true
}
network_interfaces = [
{
network = var.vpc.self_link
subnetwork = var.subnet.self_link
}
]
tags = [
"http-server"
]
}
module "mig" {
source = "./fabric/modules/compute-mig"
project_id = var.project_id
location = "europe-west1"
name = "mig-test"
target_size = 1
instance_template = module.instance_template.template.self_link
}
module "ilb" {
source = "./fabric/modules/net-lb-int"
project_id = var.project_id
region = "europe-west1"
name = "ilb-test"
service_label = "ilb-test"
vpc_config = {
network = var.vpc.self_link
subnetwork = var.subnet.self_link
}
backends = [{
group = module.mig.group_manager.instance_group
}]
health_check_config = {
http = {
port = 80
}
}
}
# tftest modules=3 resources=6
```
This example shows how to create an ILB by combining externally managed instances (in a custom module or even outside of the current root module) in an unmanaged group. When using internally managed groups, remember to run `terraform apply` each time group instances change.
module "ilb" {
source = "./fabric/modules/net-lb-int"
project_id = var.project_id
region = "europe-west1"
name = "ilb-test"
service_label = "ilb-test"
vpc_config = {
network = var.vpc.self_link
subnetwork = var.subnet.self_link
}
group_configs = {
my-group = {
zone = "europe-west1-b"
instances = [
"instance-1-self-link",
"instance-2-self-link"
]
}
}
backends = [{
group = module.ilb.groups.my-group.self_link
}]
health_check_config = {
http = {
port = 80
}
}
}
# tftest modules=1 resources=4
```
This example shows how to send multiple protocols through the same internal network passthrough load balancer.
module "ilb" {
source = "./fabric/modules/net-lb-int"
project_id = var.project_id
region = "europe-west1"
name = "ilb-test"
service_label = "ilb-test"
vpc_config = {
network = var.vpc.self_link
subnetwork = var.subnet.self_link
}
forwarding_rules_config = {
"" = {
protocol = "L3_DEFAULT"
}
}
group_configs = {
my-group = {
zone = "europe-west1-b"
instances = [
"instance-1-self-link",
"instance-2-self-link"
]
}
}
backends = [{
group = module.ilb.groups.my-group.self_link
}]
}
# tftest modules=1 resources=4
```
You can add more forwarding rules to your load balancer and override some forwarding rule defaults, including the global access policy, the IP protocol, the IP version and ports.

The example adds two forwarding rules:

- the first one, called `ilb-test-vip-one`, exposes an IPv4 address, listens on all ports, and allows connections from any region.
- the second one, called `ilb-test-vip-two`, exposes an IPv4 address, listens on port 80, and allows connections from the same region only.
module "ilb" {
source = "./fabric/modules/net-lb-int"
project_id = var.project_id
region = "europe-west1"
name = "ilb-test"
service_label = "ilb-test"
vpc_config = {
network = var.vpc.self_link
subnetwork = var.subnet.self_link
}
forwarding_rules_config = {
vip-one = {}
vip-two = {
global_access = false
ports = [80]
}
}
group_configs = {
my-group = {
zone = "europe-west1-b"
instances = [
"instance-1-self-link",
"instance-2-self-link"
]
}
}
backends = [{
group = module.ilb.groups.my-group.self_link
}]
}
# tftest modules=1 resources=5
```
Your load balancer can use IPv4 forwarding rules, IPv6 forwarding rules, or both. In this example we set the load balancer to work as dual stack, meaning it exposes both an IPv4 and an IPv6 address.
module "ilb" {
source = "./fabric/modules/net-lb-int"
project_id = var.project_id
region = "europe-west1"
name = "ilb-test"
service_label = "ilb-test"
vpc_config = {
network = var.vpc.self_link
subnetwork = var.subnet.self_link
}
forwarding_rules_config = {
ipv4 = {
version = "IPV4"
}
ipv6 = {
version = "IPV6"
}
}
group_configs = {
my-group = {
zone = "europe-west1-b"
instances = [
"instance-1-self-link",
"instance-2-self-link"
]
}
}
backends = [{
group = module.ilb.groups.my-group.self_link
}]
}
# tftest modules=1 resources=5
```
The optional `service_attachments` variable allows publishing Private Service Connect services by configuring up to one service attachment for each of the forwarding rules.
module "ilb" {
source = "./fabric/modules/net-lb-int"
project_id = var.project_id
region = "europe-west1"
name = "ilb-test"
service_label = "ilb-test"
vpc_config = {
network = var.vpc.self_link
subnetwork = var.subnet.self_link
}
forwarding_rules_config = {
vip-one = {}
vip-two = {
global_access = false
ports = [80]
}
}
group_configs = {
my-group = {
zone = "europe-west1-b"
instances = [
"instance-1-self-link",
"instance-2-self-link"
]
}
}
backends = [{
group = module.ilb.groups.my-group.self_link
}]
service_attachments = {
vip-one = {
nat_subnets = [var.subnet_psc_1.self_link]
automatic_connection = true
}
vip-two = {
nat_subnets = [var.subnet_psc_2.self_link]
automatic_connection = true
}
}
}
# tftest modules=1 resources=7
```
This example spins up a simple HTTP server and combines four modules:

- `nginx` from the `cloud-config-container` collection, to manage instance configuration
- `compute-vm` to manage the instance template and unmanaged instance groups (one per zone)
- this module, to create an Internal Load Balancer in front of the instance groups

Note that the example uses the GCE default service account. You might want to create an ad-hoc service account by combining the `iam-service-account` module, or by having the GCE VM module create one for you. In both cases, remember to set at least logging write permissions for the service account, or the container on the instances won't be able to start; a sketch of such a grant follows the example.
module "cos-nginx" {
source = "./fabric/modules/cloud-config-container/nginx"
}
module "instance-group" {
source = "./fabric/modules/compute-vm"
for_each = toset(["b", "c"])
project_id = var.project_id
zone = "${var.region}-${each.key}"
name = "ilb-test-${each.key}"
network_interfaces = [{
network = var.vpc.self_link
subnetwork = var.subnet.self_link
nat = false
addresses = null
}]
boot_disk = {
initialize_params = {
image = "projects/cos-cloud/global/images/family/cos-stable"
type = "pd-ssd"
size = 10
}
}
tags = ["http-server", "ssh"]
metadata = {
user-data = module.cos-nginx.cloud_config
}
group = { named_ports = {} }
}
module "ilb" {
source = "./fabric/modules/net-lb-int"
project_id = var.project_id
region = var.region
name = "ilb-test"
service_label = "ilb-test"
vpc_config = {
network = var.vpc.self_link
subnetwork = var.subnet.self_link
}
forwarding_rules_config = {
"" = {
ports = [80]
}
}
backends = [
for z, mod in module.instance-group : {
group = mod.group.self_link
}
]
health_check_config = {
http = {
port = 80
}
}
}
# tftest modules=3 resources=7 e2e
```
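If you do create an ad-hoc service account, a minimal sketch of the logging grant mentioned above could look like this; the `instance_sa_email` variable is hypothetical and stands in for however you expose the account's email:

```hcl
# Hypothetical project-level grant so the containers can write their logs.
# Replace var.instance_sa_email with the email of your ad-hoc service account.
resource "google_project_iam_member" "instance_log_writer" {
  project = var.project_id
  role    = "roles/logging.logWriter"
  member  = "serviceAccount:${var.instance_sa_email}"
}
```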
There are some corner cases where Terraform raises a cycle error on apply, for example when using the entire ILB module as a value in `for_each` loops used to create static routes in the VPC module. These are easily fixed by using forwarding rule ids instead of modules as values in the `for_each` loop.
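As a minimal sketch of the workaround, assuming the default (`""`) forwarding rule from the examples above: key the routes' `for_each` on static data and read only the forwarding rule id from the module. The route names, destination ranges, and the exact shape of the `forwarding_rules` output map are assumptions here.

```hcl
# Hypothetical static routes pointing at the ILB. The for_each keys are static
# strings, so evaluating them does not depend on the whole module; only the
# forwarding rule id is read from the module output.
resource "google_compute_route" "ilb" {
  for_each = {
    "nva-primary"   = "10.100.0.0/16"
    "nva-secondary" = "10.200.0.0/16"
  }
  project      = var.project_id
  network      = var.vpc.self_link
  name         = "ilb-test-${each.key}"
  dest_range   = each.value
  next_hop_ilb = module.ilb.forwarding_rules[""].id
}
```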
| name | description | type | required | default |
|---|---|---|---|---|
| name | Name used for all resources. | `string` | ✓ |  |
| project_id | Project id where resources will be created. | `string` | ✓ |  |
| region | GCP region. | `string` | ✓ |  |
| vpc_config | VPC-level configuration. | `object({…})` | ✓ |  |
| backend_service_config | Backend service level configuration. | `object({…})` |  | `{}` |
| backends | Load balancer backends. | `list(object({…}))` |  | `[]` |
| description | Optional description used for resources. | `string` |  | `"Terraform managed."` |
| forwarding_rules_config | The optional forwarding rules configuration. | `map(object({…}))` |  | `{…}` |
| group_configs | Optional unmanaged groups to create. Can be referenced in backends via outputs. | `map(object({…}))` |  | `{}` |
| health_check | Name of existing health check to use, disables auto-created health check. | `string` |  | `null` |
| health_check_config | Optional auto-created health check configuration, use the output self-link to set it in the auto healing policy. Refer to examples for usage. | `object({…})` |  | `{…}` |
| labels | Labels set on resources. | `map(string)` |  | `{}` |
| service_attachments | PSC service attachments, keyed by forwarding rule. | `map(object({…}))` |  | `null` |
| service_label | Optional prefix of the fully qualified forwarding rule name. | `string` |  | `null` |
| name | description | sensitive |
|---|---|---|
| backend_service | Backend resource. |  |
| backend_service_id | Backend id. |  |
| backend_service_self_link | Backend self link. |  |
| forwarding_rule_addresses | Forwarding rule address. |  |
| forwarding_rule_self_links | Forwarding rule self links. |  |
| forwarding_rules | Forwarding rule resources. |  |
| group_self_links | Optional unmanaged instance group self links. |  |
| groups | Optional unmanaged instance group resources. |  |
| health_check | Auto-created health-check resource. |  |
| health_check_id | Auto-created health-check id. |  |
| health_check_self_link | Auto-created health-check self link. |  |
| id | Fully qualified forwarding rule ids. |  |
| service_attachment_ids | Service attachment ids. |  |