tiproxy: add docs for label-based balance (#19288)
Oreoxmt authored Nov 1, 2024
1 parent 8295738 commit 5f8cb87
Showing 4 changed files with 86 additions and 14 deletions.
Binary file added media/tiproxy/tiproxy-balance-label.png
7 changes: 7 additions & 0 deletions tiproxy/tiproxy-configuration.md
@@ -112,6 +112,13 @@ Configurations for HTTP gateway.

Configurations for the load balancing policy of TiProxy.

#### `label-name`

+ Default value: `""`
+ Support hot-reload: yes
+ Specifies the label name used for [label-based load balancing](/tiproxy/tiproxy-load-balance.md#label-based-load-balancing). TiProxy matches the label values of TiDB servers based on this label name and prioritizes routing requests to TiDB servers with the same label value as itself.
+ The default value of `label-name` is an empty string, indicating that label-based load balancing is not used. To enable this load balancing policy, you need to set this configuration item to a non-empty string and configure both [`labels`](#labels) in TiProxy and [`labels`](/tidb-configuration-file.md#labels) in TiDB. For more information, see [Label-based load balancing](/tiproxy/tiproxy-load-balance.md#label-based-load-balancing).

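For illustration, a minimal TiUP topology fragment that sets this item might look as follows (a sketch assuming a TiUP deployment; the host name and the `app` label name are placeholders rather than values from this reference):

```yaml
server_configs:
  tiproxy:
    # Match TiDB servers by the value of their "app" label.
    balance.label-name: "app"
tiproxy_servers:
  - host: tiproxy-host-1
    config:
      # This TiProxy instance prefers TiDB servers whose "app" label is also "Order".
      labels: {app: "Order"}
```
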
#### `policy`

+ Default value: `resource`
3 changes: 2 additions & 1 deletion tiproxy/tiproxy-grafana.md
@@ -16,7 +16,7 @@ TiProxy has four panel groups. The metrics on these panels indicate the current
- **TiProxy-Server**: instance information.
- **TiProxy-Query-Summary**: SQL query metrics like CPS.
- **TiProxy-Backend**: information on TiDB nodes that TiProxy might connect to.
- **TiProxy-Balance**: load balancing metrics.

## Server

@@ -59,6 +59,7 @@ TiProxy has four panel groups. The metrics on these panels indicate the current
- Session Migration Duration: average, P95, P99 session migration duration.
- Session Migration Reasons: the number of session migrations that happened every minute and the reason for them. The reasons include the following:
- `status`: TiProxy performed [status-based load balancing](/tiproxy/tiproxy-load-balance.md#status-based-load-balancing).
- `label`: TiProxy performed [label-based load balancing](/tiproxy/tiproxy-load-balance.md#label-based-load-balancing).
- `health`: TiProxy performed [health-based load balancing](/tiproxy/tiproxy-load-balance.md#health-based-load-balancing).
- `memory`: TiProxy performed [memory-based load balancing](/tiproxy/tiproxy-load-balance.md#memory-based-load-balancing).
- `cpu`: TiProxy performed [CPU-based load balancing](/tiproxy/tiproxy-load-balance.md#cpu-based-load-balancing).
90 changes: 77 additions & 13 deletions tiproxy/tiproxy-load-balance.md
@@ -5,23 +5,86 @@ summary: Introduce the load balancing policies in TiProxy and their applicable s

# TiProxy Load Balancing Policies

TiProxy v1.0.0 only supports status-based and connection count-based load balancing policies for TiDB servers. Starting from v1.1.0, TiProxy introduces five additional load balancing policies: label-based, health-based, memory-based, CPU-based, and location-based.

By default, TiProxy applies these policies with the following priorities:

1. Status-based load balancing: when a TiDB server is shutting down, TiProxy migrates connections from that TiDB server to an online TiDB server.
2. Label-based load balancing: TiProxy prioritizes routing requests to TiDB servers that share the same label as the TiProxy instance, enabling resource isolation at the computing layer.
3. Health-based load balancing: when the health of a TiDB server is abnormal, TiProxy migrates connections from that TiDB server to a healthy TiDB server.
4. Memory-based load balancing: when a TiDB server is at risk of running out of memory (OOM), TiProxy migrates connections from that TiDB server to a TiDB server with lower memory usage.
5. CPU-based load balancing: when the CPU usage of a TiDB server is much higher than that of other TiDB servers, TiProxy migrates connections from that TiDB server to a TiDB server with lower CPU usage.
6. Location-based load balancing: TiProxy prioritizes routing requests to the TiDB server geographically closest to TiProxy.
7. Connection count-based load balancing: when the connection count of a TiDB server is much higher than that of other TiDB servers, TiProxy migrates connections from that TiDB server to a TiDB server with fewer connections.

To adjust the priorities of load balancing policies, see [Configure load balancing policies](#configure-load-balancing-policies).

## Status-based load balancing

TiProxy periodically checks whether a TiDB server is offline or shutting down using the SQL port and status port.

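Connection migration during a planned shutdown depends on the TiDB server not exiting immediately. As a sketch (assuming a TiUP deployment), the relevant fragment mirrors the `graceful-wait-before-shutdown: 15` setting used in the label-based example later on this page:

```yaml
server_configs:
  tidb:
    # Wait 15 seconds before the TiDB server exits so that TiProxy
    # has time to migrate connections away from it.
    graceful-wait-before-shutdown: 15
```
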
## Label-based load balancing

Label-based load balancing prioritizes routing connections to TiDB servers that share the same label as TiProxy, enabling resource isolation at the computing layer. This policy is disabled by default and should only be enabled when your workload requires computing resource isolation.

To enable label-based load balancing, you need to:

- Specify the matching label name through [`balance.label-name`](/tiproxy/tiproxy-configuration.md#label-name).
- Configure the [`labels`](/tiproxy/tiproxy-configuration.md#labels) configuration item in TiProxy.
- Configure the [`labels`](/tidb-configuration-file.md#labels) configuration item in TiDB servers.

After configuration, TiProxy uses the label name specified in `balance.label-name` to route connections to TiDB servers with matching label values.

Consider an application that handles both transaction and BI workloads. To prevent these workloads from interfering with each other, configure your cluster as follows:

1. Set [`balance.label-name`](/tiproxy/tiproxy-configuration.md#label-name) to `"app"` in TiProxy, indicating that TiDB servers will be matched by the label name `"app"`, and connections will be routed to TiDB servers with matching label values.
2. Configure two TiProxy instances, adding `"app"="Order"` and `"app"="BI"` to their respective [`labels`](/tiproxy/tiproxy-configuration.md#labels) configuration items.
3. Divide TiDB instances into two groups, adding `"app"="Order"` and `"app"="BI"` to their respective [`labels`](/tidb-configuration-file.md#labels) configuration items.
4. Optional: For storage layer isolation, configure [Placement Rules](/configure-placement-rules.md) or [Resource Control](/tidb-resource-control.md).
5. Direct transaction and BI clients to connect to their respective TiProxy instance addresses.

<img src="https://download.pingcap.com/images/docs/tiproxy/tiproxy-balance-label.png" alt="Label-based Load Balancing" width="600" />

Example configuration for this topology:

```yaml
component_versions:
  tiproxy: "v1.1.0"
server_configs:
  tiproxy:
    balance.label-name: "app"
  tidb:
    graceful-wait-before-shutdown: 15
tiproxy_servers:
  - host: tiproxy-host-1
    config:
      labels: {app: "Order"}
  - host: tiproxy-host-2
    config:
      labels: {app: "BI"}
tidb_servers:
  - host: tidb-host-1
    config:
      labels: {app: "Order"}
  - host: tidb-host-2
    config:
      labels: {app: "Order"}
  - host: tidb-host-3
    config:
      labels: {app: "BI"}
  - host: tidb-host-4
    config:
      labels: {app: "BI"}
tikv_servers:
  - host: tikv-host-1
  - host: tikv-host-2
  - host: tikv-host-3
pd_servers:
  - host: pd-host-1
  - host: pd-host-2
  - host: pd-host-3
```

## Health-based load balancing

TiProxy determines the health of a TiDB server by querying its error count. When the health of a TiDB server is abnormal while others are normal, TiProxy migrates connections from that server to a healthy TiDB server, achieving automatic failover.
@@ -99,11 +162,12 @@ tidb_servers:
        zone: west
tikv_servers:
  - host: tikv-host-1
    port: 20160
  - host: tikv-host-2
    port: 20160
  - host: tikv-host-3
    port: 20160
pd_servers:
  - host: pd-host-1
  - host: pd-host-2
  - host: pd-host-3
```

In the preceding configuration, the TiProxy instance on `tiproxy-host-1` prioritizes routing requests to the TiDB server on `tidb-host-1` because `tiproxy-host-1` has the same `zone` configuration as `tidb-host-1`. Similarly, the TiProxy instance on `tiproxy-host-2` prioritizes routing requests to the TiDB server on `tidb-host-2`.
@@ -121,9 +185,9 @@ Typically, TiProxy identifies the load on TiDB servers based on CPU usage. This

TiProxy lets you configure the combination and priority of load balancing policies through the [`policy`](/tiproxy/tiproxy-configuration.md#policy) configuration item.

- `resource`: the resource priority policy performs load balancing based on the following priority order: status, label, health, memory, CPU, location, and connection count.
- `location`: the location priority policy performs load balancing based on the following priority order: status, label, location, health, memory, CPU, and connection count.
- `connection`: the minimum connection count policy performs load balancing based on the following priority order: status, label, and connection count.

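For illustration, a minimal TiUP fragment that switches the policy might look as follows (a sketch assuming a TiUP deployment and that `policy` is set under the `balance` section, as with `balance.label-name` earlier on this page):

```yaml
server_configs:
  tiproxy:
    # Prefer routing by location before resource-based balancing.
    balance.policy: "location"
```
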
## More resources

