---

copyright:
  years: 2014, 2021
lastupdated: "2021-04-21"

keywords: kubernetes, iks, networking

subcollection: containers

---

{:DomainName: data-hd-keyref="APPDomain"} {:DomainName: data-hd-keyref="DomainName"} {:android: data-hd-operatingsystem="android"} {:api: .ph data-hd-interface='api'} {:apikey: data-credential-placeholder='apikey'} {:app_key: data-hd-keyref="app_key"} {:app_name: data-hd-keyref="app_name"} {:app_secret: data-hd-keyref="app_secret"} {:app_url: data-hd-keyref="app_url"} {:authenticated-content: .authenticated-content} {:beta: .beta} {:c#: data-hd-programlang="c#"} {:cli: .ph data-hd-interface='cli'} {:codeblock: .codeblock} {:curl: .ph data-hd-programlang='curl'} {:deprecated: .deprecated} {:dotnet-standard: .ph data-hd-programlang='dotnet-standard'} {:download: .download} {:external: target="_blank" .external} {:faq: data-hd-content-type='faq'} {:fuzzybunny: .ph data-hd-programlang='fuzzybunny'} {:generic: data-hd-operatingsystem="generic"} {:generic: data-hd-programlang="generic"} {:gif: data-image-type='gif'} {:go: .ph data-hd-programlang='go'} {:help: data-hd-content-type='help'} {:hide-dashboard: .hide-dashboard} {:hide-in-docs: .hide-in-docs} {:important: .important} {:ios: data-hd-operatingsystem="ios"} {:java: .ph data-hd-programlang='java'} {:java: data-hd-programlang="java"} {:javascript: .ph data-hd-programlang='javascript'} {:javascript: data-hd-programlang="javascript"} {:new_window: target="_blank"} {:note .note} {:note: .note} {:objectc data-hd-programlang="objectc"} {:org_name: data-hd-keyref="org_name"} {:php: data-hd-programlang="php"} {:pre: .pre} {:preview: .preview} {:python: .ph data-hd-programlang='python'} {:python: data-hd-programlang="python"} {:route: data-hd-keyref="route"} {:row-headers: .row-headers} {:ruby: .ph data-hd-programlang='ruby'} {:ruby: data-hd-programlang="ruby"} {:runtime: architecture="runtime"} {:runtimeIcon: .runtimeIcon} {:runtimeIconList: .runtimeIconList} {:runtimeLink: .runtimeLink} {:runtimeTitle: .runtimeTitle} {:screen: .screen} {:script: data-hd-video='script'} {:service: architecture="service"} {:service_instance_name: data-hd-keyref="service_instance_name"} {:service_name: data-hd-keyref="service_name"} {:shortdesc: .shortdesc} {:space_name: data-hd-keyref="space_name"} {:step: data-tutorial-type='step'} {:subsection: outputclass="subsection"} {:support: data-reuse='support'} {:swift: .ph data-hd-programlang='swift'} {:swift: data-hd-programlang="swift"} {:table: .aria-labeledby="caption"} {:term: .term} {:tip: .tip} {:tooling-url: data-tooling-url-placeholder='tooling-url'} {:troubleshoot: data-hd-content-type='troubleshoot'} {:tsCauses: .tsCauses} {:tsResolve: .tsResolve} {:tsSymptoms: .tsSymptoms} {:tutorial: data-hd-content-type='tutorial'} {:ui: .ph data-hd-interface='ui'} {:unity: .ph data-hd-programlang='unity'} {:url: data-credential-placeholder='url'} {:user_ID: data-hd-keyref="user_ID"} {:vbnet: .ph data-hd-programlang='vb.net'} {:video: .video}

# Choosing an app exposure service
{: #cs_network_planning}

With {{site.data.keyword.containerlong}}, you can manage in-cluster and external networking by making apps publicly or privately accessible.
{: shortdesc}

To quickly get started with app networking, follow this decision tree and click an option to see its setup docs:

This image walks you through choosing the best networking option for your application.

## Understanding load balancing for apps through Kubernetes service discovery
{: #in-cluster}

Kubernetes service discovery provides apps with a network connection by using network services and a local Kubernetes proxy.
{: shortdesc}

### Services

All pods that are deployed to a worker node are assigned a private IP address in the 172.30.0.0/16 range and are routed between worker nodes only. To avoid conflicts, don't use this IP range on any nodes that communicate with your worker nodes. Worker nodes and pods can securely communicate on the private network by using private IP addresses. However, when a pod crashes or a worker node needs to be re-created, a new private IP address is assigned.

Instead of trying to track changing private IP addresses for apps that must be highly available, you can use built-in Kubernetes service discovery features to expose apps as services. A Kubernetes service groups a set of pods and provides a network connection to these pods. The service uses labels to select the target pods that it routes traffic to.

A service provides connectivity between your app pods and other services in the cluster without exposing the actual private IP address of each pod. Services are assigned an in-cluster IP address, the clusterIP, that is accessible inside the cluster only. This IP address is tied to the service for its entire lifespan and does not change while the service exists.

* **Newer clusters**: In clusters that were created after February 2018 in the dal13 zone or after October 2017 in any other zone, services are assigned an IP address from one of the 65,000 IP addresses in the 172.21.0.0/16 range.
* **Older clusters**: In clusters that were created before February 2018 in the dal13 zone or before October 2017 in any other zone, services are assigned an IP address from one of the 254 IP addresses in the 10.10.10.0/24 range. If you reach the limit of 254 services and need more services, you must create a new cluster.

To avoid conflicts, don't use this IP range on any nodes that communicate with your worker nodes. A DNS lookup entry is also created for the service and stored in the kube-dns component of the cluster. The DNS entry contains the name of the service, the namespace where the service was created, and the link to the assigned in-cluster IP address.
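
For example, a minimal service manifest might look like the following sketch. The service name, namespace, app label, and ports are hypothetical placeholders.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app-svc          # hypothetical name; kube-dns registers my-app-svc.default.svc.cluster.local
  namespace: default
spec:
  selector:
    app: my-app             # traffic is routed to all pods that carry the label app=my-app
  ports:
    - port: 8080            # port that the in-cluster IP address (clusterIP) listens on
      targetPort: 8080      # container port that requests are forwarded to
```
{: codeblock}

When the service is created, the cluster assigns its clusterIP from the service subnet, such as the 172.21.0.0/16 range in newer clusters.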

If you plan to connect your cluster to on-premises networks through {{site.data.keyword.cloud_notm}} or a VPN service, you might have subnet conflicts with the default 172.30.0.0/16 range for pods and 172.21.0.0/16 range for services. You can avoid subnet conflicts when you create a cluster by specifying a custom subnet CIDR for pods in the `--pod-subnet` flag and a custom subnet CIDR for services in the `--service-subnet` flag.
{: tip}
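
As a sketch only, a cluster-creation command with custom subnets might look like the following. The cluster name, zone, flavor, and CIDR values are placeholders; choose CIDRs that do not overlap with subnets in your on-premises networks.

```sh
ibmcloud ks cluster create classic --name my-cluster --zone dal10 \
  --flavor b3c.4x16 --workers 3 \
  --pod-subnet 192.168.0.0/17 --service-subnet 192.168.192.0/24
```
{: pre}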

### kube-proxy

To provide basic load balancing of all TCP and UDP network traffic for services, a local Kubernetes network proxy, `kube-proxy`, runs as a daemon on each worker node in the `kube-system` namespace. `kube-proxy` uses iptables rules, a Linux kernel feature, to direct requests equally across the pods behind a service, independent of the pods' in-cluster IP addresses and the worker nodes that they are deployed to.

For example, apps inside the cluster can access a pod behind a cluster service by using the service's in-cluster IP or by sending a request to the name of the service. When you use the name of the service, kube-proxy looks up the name in the cluster DNS provider and routes the request to the in-cluster IP address of the service.

If you use a service that provides both an internal cluster IP address and an external IP address, clients outside of the cluster can send requests to the service's external public or private IP address. kube-proxy forwards the requests to the service's in-cluster IP address and load balances between the app pods behind the service.
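
As a quick sketch, you can test this name-based routing from a temporary pod. The service name `my-app-svc` is the hypothetical example from earlier in this topic.

```sh
# Start a throwaway pod and request the service by its DNS name;
# kube-proxy load balances the request across the pods behind the service.
kubectl run -it --rm debug --image=busybox --restart=Never -- \
  wget -qO- http://my-app-svc.default.svc.cluster.local:8080
```
{: pre}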

The following image demonstrates how Kubernetes forwards public network traffic through kube-proxy and NodePort, LoadBalancer, or Ingress services in {{site.data.keyword.containerlong_notm}}.

Figure: {{site.data.keyword.containerlong_notm}} external traffic network architecture, showing how Kubernetes forwards public network traffic through NodePort, LoadBalancer, and Ingress services


## Understanding Kubernetes service types
{: #external}
{: help}
{: support}

Kubernetes supports four basic types of network services: ClusterIP, NodePort, LoadBalancer, and Ingress. ClusterIP services make your apps accessible internally to allow communication between pods in your cluster only. NodePort, LoadBalancer, and Ingress services make your apps externally accessible from the public internet or a private network.
{: shortdesc}

### ClusterIP

A ClusterIP service exposes apps on the cluster's private network only. The service provides an in-cluster IP address that is accessible by other pods and services inside the cluster only, and no external IP address is created for the app. To access a pod behind a ClusterIP service, other apps in the cluster can either use the in-cluster IP address of the service or send a request by using the name of the service. When a request reaches the service, the service forwards requests equally across the pods, independent of the pods' in-cluster IP addresses and the worker nodes that they are deployed to. If you do not specify a type in a service's YAML configuration file, the ClusterIP type is created by default.
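
The following sketch makes the default type explicit; the names and ports are placeholders.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-internal-svc     # hypothetical; reachable from inside the cluster only
spec:
  type: ClusterIP           # also created by default when no type is specified
  selector:
    app: my-app
  ports:
    - port: 80              # port on the in-cluster IP address
      targetPort: 8080      # container port of the app pods
```
{: codeblock}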

### NodePort

When you expose apps with a NodePort service, a NodePort in the range of 30000 - 32767 and an internal cluster IP address are assigned to the service. To access the service from outside the cluster, you use the public or private IP address of any worker node and the NodePort in the format `<IP_address>:<nodeport>`. However, the public and private IP addresses of the worker node are not permanent: when a worker node is removed or re-created, a new public and a new private IP address are assigned to the worker node. NodePorts are ideal for testing public or private access, or for providing access for only a short amount of time. Note that because worker nodes in VPC clusters do not have a public IP address, you can access an app through a NodePort only if you are connected to your private VPC network, such as through a VPN connection.
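
A minimal NodePort sketch, with hypothetical names and ports:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-nodeport-svc     # hypothetical name
spec:
  type: NodePort
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080
      nodePort: 30080       # optional; if set, must be in the 30000 - 32767 range
```
{: codeblock}

You can then reach the app at `<worker_IP>:30080` until the worker node is removed or re-created.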

### LoadBalancer

The LoadBalancer service type is implemented differently depending on your cluster's infrastructure provider, as the following list describes; a minimal manifest is sketched after the list.

* **Classic clusters**: Network load balancer (NLB). Every standard cluster is provisioned with four portable public and four portable private IP addresses that you can use to create a layer 4 TCP/UDP network load balancer (NLB) for your app. You can customize your NLB by exposing any port that your app requires. The portable public and private IP addresses that are assigned to the NLB are permanent and do not change when a worker node is re-created in the cluster. You can create a subdomain for your app that registers public NLB IP addresses with a DNS entry. You can also enable health check monitors on the NLB IPs for each subdomain.
* **VPC clusters**: Load Balancer for VPC. When you create a Kubernetes LoadBalancer service for an app in your cluster, a layer 7 VPC load balancer is automatically created in your VPC outside of your cluster. The VPC load balancer is multizonal and routes requests for your app through the private NodePorts that are automatically opened on your worker nodes. By default, the load balancer is also created with a hostname that you can use to access your app.
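
In both providers, the service definition itself is plain Kubernetes; the infrastructure-specific load balancer is provisioned for you. A minimal sketch with placeholder names:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-lb-svc           # hypothetical name
spec:
  type: LoadBalancer        # triggers an NLB IP (classic) or a VPC load balancer hostname (VPC)
  selector:
    app: my-app
  ports:
    - port: 443
      targetPort: 8443
```
{: codeblock}

After provisioning completes, `kubectl get svc my-lb-svc` shows the assigned IP address or hostname in the `EXTERNAL-IP` column.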

### Ingress

Expose multiple apps in a cluster by setting up routing with the Ingress application load balancer (ALB). The ALB uses a secured and unique public or private entry point, an Ingress subdomain, to route incoming requests to your apps. You can use one subdomain to expose multiple apps in your cluster as services. Ingress consists of three components; a sample Ingress resource is sketched after the list:

* The Ingress resource defines the rules for how to route and load balance incoming requests for an app.
* The ALB listens for incoming HTTP, HTTPS, or TCP service requests. It forwards requests across the apps' pods based on the rules that you defined in the Ingress resource.
* The multizone load balancer (MZLB) for classic clusters or the VPC load balancer for VPC clusters handles all incoming requests to your apps and load balances the requests among the ALBs in the various zones. It also enables health checks for the public Ingress IP addresses.
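
For example, an Ingress resource that routes two apps through one subdomain might look like the following sketch. It uses the `networking.k8s.io/v1` API (Kubernetes 1.19 and later); the subdomain, TLS secret, and service names are placeholders.

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
spec:
  tls:
    - hosts:
        - <ingress_subdomain>            # the Ingress subdomain for your cluster
      secretName: <ingress_tls_secret>   # the TLS secret for the subdomain
  rules:
    - host: <ingress_subdomain>
      http:
        paths:
          - path: /app1                  # requests to <ingress_subdomain>/app1 ...
            pathType: Prefix
            backend:
              service:
                name: app1-svc           # ... are forwarded to the app1-svc service
                port:
                  number: 80
          - path: /app2
            pathType: Prefix
            backend:
              service:
                name: app2-svc
                port:
                  number: 80
```
{: codeblock}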

The following table compares the features of each network service type.
| Characteristics | ClusterIP | NodePort | LoadBalancer (Classic - NLB) | LoadBalancer (VPC load balancer) | Ingress |
|-----------------|-----------|----------|------------------------------|----------------------------------|---------|
| Free clusters | Yes | Yes | | | |
| Standard clusters | Yes | Yes | Yes | Yes | Yes |
| Externally accessible | | Yes | Yes | Yes | Yes |
| External hostname | | | Yes | Yes | Yes |
| Stable external IP | | | Yes | | Yes |
| HTTP(S) load balancing | | | Yes* | Yes* | Yes |
| TLS termination | | | | | Yes |
| Custom routing rules | | | | | Yes |
| Multiple apps per service | | | | | Yes |
{: caption="Characteristics of Kubernetes network service types" caption-side="top"}

`*` An SSL certificate for HTTPS load balancing is provided by `ibmcloud ks nlb-dns` commands. In classic clusters, these commands are supported for public NLBs only.
{: note}
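
As a sketch, registering a subdomain for an NLB IP address in a classic cluster might look like the following; the cluster name and IP address are placeholders.

```sh
# Create a DNS subdomain that registers the public NLB IP address
ibmcloud ks nlb-dns create classic --cluster my-cluster --ip 169.XX.XXX.XXX

# List the registered subdomains and their IP addresses
ibmcloud ks nlb-dns ls --cluster my-cluster
```
{: pre}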


## Planning public external load balancing
{: #public_access}

Publicly expose an app in your cluster to the internet.
{: shortdesc}

In classic clusters, you can connect worker nodes to a public VLAN. The public VLAN determines the public IP address that is assigned to each worker node, which provides each worker node with a public network interface. Public networking services connect to this public network interface by providing your app with a public IP address and, optionally, a public URL.

In VPC clusters, your worker nodes are connected to private VPC subnets only. However, when you create public networking services, a VPC load balancer is automatically created that can route public requests to your app by providing your app with a public URL.

When an app is publicly exposed, anyone that has the public service IP address or the URL that you set up for your app can send a request to your app. For this reason, expose as few apps as possible. Expose an app to the public only when your app is ready to accept traffic from external web clients or users.

The public network interface for worker nodes is protected by predefined Calico network policy settings that are configured on every worker node during cluster creation. By default, all outbound network traffic is allowed for all worker nodes. Inbound network traffic is blocked except for a few ports. These ports are opened so that IBM can monitor network traffic and automatically install security updates for the Kubernetes master, and so that connections can be established to NodePort, LoadBalancer, and Ingress services. For more information about these policies, including how to modify them, see Network policies.

### Choosing a deployment pattern for classic clusters
{: #pattern_public}

To make an app publicly available to the internet in a classic cluster, choose a load balancing deployment pattern that uses public NodePort, LoadBalancer, or Ingress services. The following table describes each possible deployment pattern, why you might use it, and how to set it up. For basic information about the networking services that these deployment patterns use, see Understanding Kubernetes service types.

| Name | Load-balancing method | Use case | Implementation |
|------|-----------------------|----------|----------------|
| NodePort | Port on a worker node that exposes the app on the worker's public IP address | Test public access to one app or provide access for only a short amount of time. | Create a public NodePort service. **Gateway-enabled clusters that run Kubernetes version 1.17 only**: If you use a public node port to expose your app, public traffic on the node port is blocked by default. Instead, use a load balancer service, or create a preDNAT Calico policy with an order number that is lower than 1800 and with the selector `ibm.role == 'worker_public'` so that public traffic to the node port is explicitly allowed. |
| NLB v1.0 (+ subdomain) | Basic load balancing that exposes the app with an IP address or a subdomain | Quickly expose one app to the public with an IP address or a subdomain that supports SSL termination. | 1. Create a public network load balancer (NLB) 1.0 in a single- or multizone cluster. 2. Optionally register a subdomain and health checks. |
| NLB v2.0 (+ subdomain) | DSR load balancing that exposes the app with an IP address or a subdomain | Expose an app that might receive high levels of traffic to the public with an IP address or a subdomain that supports SSL termination. | 1. Complete the prerequisites. 2. Create a public NLB 2.0 in a single- or multizone cluster (see the sketch after this table). 3. Optionally register a subdomain and health checks. |
| Istio + NLB subdomain | Basic load balancing that exposes the app with a subdomain and uses Istio routing rules | Implement Istio routing rules, such as rules for different versions of one app microservice, and expose an Istio-managed app with a public subdomain. | 1. Install the managed Istio add-on. 2. Include your app in the Istio service mesh. 3. Register the default Istio load balancer with a subdomain. |
| Ingress ALB | HTTPS load balancing that exposes the app with a subdomain and uses custom routing rules | Implement custom routing rules and SSL termination for multiple apps. | 1. Create an Ingress service for the public ALB. 2. Customize ALB routing rules with annotations. |
| Bring your own Ingress controller + NLB subdomain | HTTPS load balancing with a custom Ingress controller that exposes the app with the IBM-provided ALB subdomain and uses custom routing rules | Implement custom routing rules or other specific requirements for custom tuning for multiple apps. | Deploy your Ingress controller and leverage an IBM-provided subdomain. |
{: caption="Characteristics of public network deployment patterns in {{site.data.keyword.containerlong_notm}} classic clusters" caption-side="top"}
{: summary="This table reads left to right about the name, characteristics, use cases, and deployment steps of public network deployment patterns in classic clusters."}

Still want more details about the load balancing deployment patterns that are available in {{site.data.keyword.containerlong_notm}}? Check out this blog post{: external}.
{: tip}

### Choosing a deployment pattern for VPC clusters
{: #pattern_public_vpc}

To make an app publicly available to the internet in a VPC cluster, choose a load balancing deployment pattern that uses public LoadBalancer or Ingress services. The following table describes each possible deployment pattern, why you might use it, and how to set it up. For basic information about the networking services that these deployment patterns use, see Understanding Kubernetes service types.
{: shortdesc}

When you create a VPC cluster that runs Kubernetes version 1.18 or earlier, the VPC is created with a default security group that does not allow incoming traffic to your worker nodes. You must modify the security group for the VPC to allow incoming TCP traffic to ports 30000 - 32767. For more information, see the "Before you begin" section of the VPC load balancer or Ingress setup topics.
{: note}
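
For example, a rule that opens the NodePort range in the VPC default security group might be added with a command like this sketch; the security group ID is a placeholder.

```sh
ibmcloud is security-group-rule-add <vpc_default_security_group> inbound tcp \
  --port-min 30000 --port-max 32767
```
{: pre}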

| Name | Load-balancing method | Use case | Implementation |
|------|-----------------------|----------|----------------|
| VPC load balancer | Basic load balancing that exposes the app with a hostname | Quickly expose one app to the public with a VPC load balancer-assigned hostname. | Create a public LoadBalancer service in your cluster. A VPC load balancer is automatically created in your VPC and assigns a hostname to your LoadBalancer service for your app. |
| Istio | Basic load balancing that exposes the app with a hostname and uses Istio routing rules | Implement Istio routing rules, such as rules for different versions of one app microservice, and expose an Istio-managed app with a public hostname. | 1. Install the managed Istio add-on. 2. Include your app in the Istio service mesh. 3. Register the default Istio load balancer with a hostname. |
| Ingress ALB | HTTPS load balancing that exposes the app with a subdomain and uses custom routing rules | Implement custom routing rules and SSL termination for multiple apps. | 1. Create an Ingress service for the public ALB. 2. Customize ALB routing rules with annotations. |
{: caption="Characteristics of public network deployment patterns in {{site.data.keyword.containerlong_notm}} VPC clusters" caption-side="top"}
{: summary="This table reads left to right about the name, characteristics, use cases, and deployment steps of public network deployment patterns in VPC clusters."}

## Planning private external load balancing
{: #private_access}

Privately expose an app in your cluster to the private network only.
{: shortdesc}

When you deploy an app in a Kubernetes cluster in {{site.data.keyword.containerlong_notm}}, you might want to make the app accessible only to users and services that are on the same private network as your cluster. Private load balancing is ideal for making your app available to requests from outside the cluster without exposing the app to the general public. You can also use private load balancing to test access, request routing, and other configurations for your app before you later expose your app to the public with public network services.

As an example, say that you create a private load balancer for your app. This private load balancer can be accessed by:

* Any pod in that same cluster.
* Any pod in any cluster in the same {{site.data.keyword.cloud_notm}} account.
* If you're not in the {{site.data.keyword.cloud_notm}} account but still behind the company firewall, any system through a VPN connection to the subnet that the load balancer IP is on.
* If you're in a different {{site.data.keyword.cloud_notm}} account, any system through a VPN connection to the subnet that the load balancer IP is on.
* In classic clusters, if you have VRF or VLAN spanning enabled, any system that is connected to any of the private VLANs in the same {{site.data.keyword.cloud_notm}} account.
* In VPC clusters:
    * If traffic is permitted between VPC subnets, any system in the same VPC.
    * If traffic is permitted between VPCs, any system that has access to the VPC that the cluster is in.

### Choosing a deployment pattern for classic clusters
{: #pattern_private_classic}

To make an app available over a private network only in classic clusters, choose a load balancing deployment pattern based on your cluster's VLAN setup:

#### Setting up private load balancing in a public and private VLAN setup
{: #private_both_vlans}

When your worker nodes are connected to both a public and a private VLAN, you can make your app accessible from a private network only by creating private NodePort, LoadBalancer, or Ingress services. Then, you can create Calico policies to block public traffic to the services.
{: shortdesc}

The public network interface for worker nodes is protected by predefined Calico network policy settings that are configured on every worker node during cluster creation. By default, all outbound network traffic is allowed for all worker nodes. Inbound network traffic is blocked except for a few ports. These ports are opened so that IBM can monitor network traffic and automatically install security updates for the Kubernetes master, and so that connections can be established to NodePort, LoadBalancer, and Ingress services.

Because the default Calico network policies allow inbound public traffic to these services, you can create Calico policies to instead block all public traffic to the services. For example, a NodePort service opens a port on a worker node over both the private and public IP address of the worker node. An NLB service with a portable private IP address opens a public NodePort on every worker node. You must create a Calico preDNAT network policy to block public NodePorts.
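
As a sketch of such a policy, the following Calico preDNAT resource denies inbound public TCP traffic to the NodePort range on all public worker interfaces. The policy name, order value, and port range are illustrative; you would apply a policy like this with `calicoctl apply -f <file>`.

```yaml
apiVersion: projectcalico.org/v3
kind: GlobalNetworkPolicy
metadata:
  name: deny-public-nodeports    # hypothetical policy name
spec:
  applyOnForward: true
  preDNAT: true                  # evaluate before kube-proxy's DNAT rules
  order: 1100                    # lower than 1800 so it takes precedence over the default policies
  selector: ibm.role == 'worker_public'
  types:
    - Ingress
  ingress:
    - action: Deny
      protocol: TCP
      destination:
        ports:
          - 30000:32767          # the public NodePort range
```
{: codeblock}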

Check out the following load balancing deployment patterns for private networking:

| Name | Load-balancing method | Use case | Implementation |
|------|-----------------------|----------|----------------|
| NodePort | Port on a worker node that exposes the app on the worker's private IP address | Test private access to one app or provide access for only a short amount of time. | 1. Create a NodePort service. 2. Because a NodePort service opens a port on a worker node over both the private and the public IP address of the worker node, create a Calico preDNAT network policy, such as the sketch earlier in this section, to block traffic to the public NodePorts. |
| NLB v1.0 | Basic load balancing that exposes the app with a private IP address | Quickly expose one app to a private network with a private IP address. | 1. Create a private NLB service. 2. Because an NLB with a portable private IP address still has a public node port open on every worker node, create a Calico preDNAT network policy to block traffic to the public NodePorts. |
| NLB v2.0 | DSR load balancing that exposes the app with a private IP address | Expose an app that might receive high levels of traffic to a private network with an IP address. | 1. Complete the prerequisites. 2. Create a private NLB 2.0 in a single- or multizone cluster. 3. Because an NLB with a portable private IP address still has a public node port open on every worker node, create a Calico preDNAT network policy to block traffic to the public NodePorts. |
| Ingress ALB | HTTPS load balancing that exposes the app with a subdomain and uses custom routing rules | Implement custom routing rules and SSL termination for multiple apps. | 1. Disable the public ALB. 2. Enable the private ALB and create an Ingress resource. 3. Customize ALB routing rules with annotations. 4. Because an NLB with a portable private IP address still has a public node port open on every worker node, create a Calico preDNAT network policy to block traffic to the public NodePorts. |
{: caption="Characteristics of network deployment patterns for a public and a private VLAN setup" caption-side="top"}
{: summary="This table reads left to right about the name, characteristics, use cases, and deployment steps of private network deployment patterns in classic clusters."}

#### Setting up private load balancing for a private VLAN only setup
{: #plan_private_vlan}

When your worker nodes are connected to a private VLAN only, you can make your app externally accessible from a private network only by creating private NodePort, LoadBalancer, or Ingress services.
{: shortdesc}

If your cluster is connected to a private VLAN only and the master and worker nodes communicate through a private-only service endpoint, you cannot automatically expose your apps to a private network. You must set up a gateway appliance, such as a Virtual Router Appliance (VRA, based on Vyatta) or a FortiGate Security Appliance (FSA), to act as your firewall and block or allow traffic. Because your worker nodes aren't connected to a public VLAN, no public traffic is routed to NodePort, LoadBalancer, or Ingress services. However, to permit inbound private traffic to these services, you must open the required ports and IP addresses in your gateway appliance firewall.

Check out the following load balancing deployment patterns for private networking:

| Name | Load-balancing method | Use case | Implementation |
|------|-----------------------|----------|----------------|
| NodePort | Port on a worker node that exposes the app on the worker's private IP address | Test private access to one app or provide access for only a short amount of time. | 1. Create a NodePort service. 2. In your private firewall, allow inbound traffic to the port that you configured when you deployed the service, on the private IP addresses of all of the worker nodes. To find the port, run `kubectl get svc`. The port is in the 30000 - 32767 range. |
| NLB v1.0 | Basic load balancing that exposes the app with a private IP address | Quickly expose one app to a private network with a private IP address. | 1. Create a private NLB service. 2. In your private firewall, allow inbound traffic to the port that you configured when you deployed the service, on the NLB's private IP address. |
| NLB v2.0 | DSR load balancing that exposes the app with a private IP address | Expose an app that might receive high levels of traffic to a private network with an IP address. | 1. Create a private NLB service. 2. In your private firewall, allow inbound traffic to the port that you configured when you deployed the service, on the NLB's private IP address. |
| Ingress ALB | HTTPS load balancing that exposes the app with a subdomain and uses custom routing rules | Implement custom routing rules and SSL termination for multiple apps. | 1. Configure a DNS service that is available on the private network{: external}. 2. Enable the private ALB and create an Ingress resource. 3. In your private firewall, open port 80 for HTTP or port 443 for HTTPS to the IP address of the private ALB. 4. Customize ALB routing rules with annotations. |
{: caption="Characteristics of network deployment patterns for a private VLAN only setup" caption-side="top"}
{: summary="This table reads left to right about the name, characteristics, use cases, and deployment steps of private network deployment patterns in classic clusters."}

### Choosing a deployment pattern for VPC clusters
{: #pattern_private_vpc}

Make your app accessible from only a private network by creating private NodePort, LoadBalancer, or Ingress services.
{: shortdesc}

Check out the following load balancing deployment patterns for private app networking in VPC clusters:

| Name | Load-balancing method | Use case | Implementation |
|------|-----------------------|----------|----------------|
| NodePort | Port on a worker node that exposes the app on the worker's private IP address | Test private access to one app or provide access for only a short amount of time. Note that you can access an app through a NodePort only if you are connected to your private VPC network, such as through a VPN connection or by using the Kubernetes web terminal. | Create a private NodePort service. |
| VPC application load balancer | Basic load balancing that exposes the app with a private hostname | Quickly expose one app to a private network with a VPC application load balancer-assigned private hostname. | Create a private LoadBalancer service in your cluster (see the sketch after this table). A multizonal VPC application load balancer is automatically created in your VPC and assigns a hostname to your LoadBalancer service for your app. |
| Ingress ALB | HTTPS load balancing that exposes the app with a hostname and uses custom routing rules | Implement custom routing rules and SSL termination for multiple apps. | 1. Enable the private ALB, create a subdomain to register the ALB with a DNS entry, and create an Ingress resource. 2. Customize ALB routing rules with annotations. |
{: caption="Characteristics of private network deployment patterns for a VPC cluster" caption-side="top"}
{: summary="This table reads left to right about the name, characteristics, use cases, and deployment steps of private network deployment patterns in VPC clusters."}