Commit

en: add tiproxy docs (#2480) (#2489)

* add translation

* Apply suggestions from code review

Co-authored-by: xixirangrang <[email protected]>

---------

Co-authored-by: Ran <[email protected]>
Co-authored-by: Ran <[email protected]>
Co-authored-by: xixirangrang <[email protected]>
4 people authored Jan 17, 2024
1 parent 66b1ee4 commit 98b68cc
Showing 13 changed files with 199 additions and 31 deletions.
1 change: 1 addition & 0 deletions en/TOC.md
@@ -22,6 +22,7 @@
- [Alibaba Cloud ACK](deploy-on-alibaba-cloud.md)
- [Deploy TiDB on ARM64 Machines](deploy-cluster-on-arm64.md)
- [Deploy TiFlash to Explore TiDB HTAP](deploy-tiflash.md)
- [Deploy TiProxy Load Balancer](deploy-tiproxy.md)
- Deploy TiDB Across Multiple Kubernetes Clusters
- [Build Multiple Interconnected AWS EKS Clusters](build-multi-aws-eks.md)
- [Build Multiple Interconnected GKE Clusters](build-multi-gcp-gke.md)
33 changes: 31 additions & 2 deletions en/configure-a-tidb-cluster.md
Expand Up @@ -223,7 +223,7 @@ To mount multiple PVs for TiCDC:

### HostNetwork

For PD, TiKV, TiDB, TiFlash, TiCDC, and Pump, you can configure the Pods to use the host namespace [`HostNetwork`](https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pod-s-dns-policy).
For PD, TiKV, TiDB, TiFlash, TiProxy, TiCDC, and Pump, you can configure the Pods to use the host namespace [`HostNetwork`](https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pod-s-dns-policy).

To enable `HostNetwork` for all supported components, configure `spec.hostNetwork: true`.
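
For example, a minimal sketch of the cluster-wide setting (the field name is taken from the sentence above; adjust it to your deployment):

```yaml
spec:
  # Run all supported component Pods in the host network namespace.
  hostNetwork: true
```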

@@ -255,6 +255,17 @@ The deployed cluster topology by default has three PD Pods, three TiKV Pods, and
>
> If the number of Kubernetes cluster nodes is less than three, one PD Pod goes to the Pending state, and neither TiKV Pods nor TiDB Pods are created. In this case, to start the TiDB cluster, you can reduce the number of PD Pods in the default deployment to `1`.
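
For example, a minimal sketch of such a reduced deployment (test environments only; the value is illustrative):

```yaml
spec:
  pd:
    # Run a single PD Pod when the Kubernetes cluster has fewer than three nodes.
    replicas: 1
```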

#### Enable TiProxy

The deployment method is the same as that of PD. In addition, you need to modify `spec.tiproxy` to specify the number of TiProxy replicas.

```yaml
  tiproxy:
    baseImage: pingcap/tiproxy
    replicas: 3
    config:
```

#### Enable TiFlash

If you want to enable TiFlash in the cluster, configure `spec.pd.config.replication.enable-placement-rules: true` and configure `spec.tiflash` in the `${cluster_name}/tidb-cluster.yaml` file as follows:
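
A minimal sketch of the shape this configuration takes (the replica count and storage size are illustrative assumptions, not recommended values):

```yaml
spec:
  pd:
    config: |
      [replication]
      # TiFlash requires placement rules to be enabled.
      enable-placement-rules = true
  tiflash:
    baseImage: pingcap/tiflash
    replicas: 1
    storageClaims:
      # Each storage claim backs one TiFlash data directory.
      - resources:
          requests:
            storage: 100Gi
```
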
@@ -313,7 +324,7 @@ If you want to enable TiCDC in the cluster, you can add TiCDC spec to the `TiDBC

### Configure TiDB components

This section introduces how to configure the parameters of TiDB/TiKV/PD/TiFlash/TiCDC.
This section introduces how to configure the parameters of TiDB/TiKV/PD/TiProxy/TiFlash/TiCDC.

#### Configure TiDB parameters

@@ -377,6 +388,22 @@ For all the configurable parameters of PD, refer to [PD Configuration File](http
> - If you deploy your TiDB cluster using CR, make sure that `Config: {}` is set, no matter whether you want to modify `config` or not. Otherwise, PD components might fail to start. This step is meant to be compatible with `Helm` deployment.
> - After the cluster is started for the first time, some PD configuration items are persisted in etcd. The persisted configuration in etcd takes precedence over that in PD. Therefore, after the first start, you cannot modify some PD configuration items using parameters. You need to modify the configuration dynamically using SQL statements, pd-ctl, or the PD server API. Currently, among all the configuration items listed in [Modify PD configuration online](https://docs.pingcap.com/tidb/stable/dynamic-config#modify-pd-configuration-online), all except `log.level` cannot be modified using parameters after the first start.

#### Configure TiProxy parameters

TiProxy parameters can be configured by `spec.tiproxy.config` in TidbCluster Custom Resource.

For example:

```yaml
spec:
  tiproxy:
    config: |
      [log]
      level = "info"
```

For all the configurable parameters of TiProxy, refer to [TiProxy Configuration File](https://docs.pingcap.com/tidb/v7.6/tiproxy-configuration).

#### Configure TiFlash parameters

TiFlash parameters can be configured by `spec.tiflash.config` in TidbCluster Custom Resource.
@@ -642,6 +669,8 @@ spec:

See [Kubernetes Service Documentation](https://kubernetes.io/docs/concepts/services-networking/service/) to learn more about the features of Service and the LoadBalancer support offered by cloud platforms.

If TiProxy is specified, the `tiproxy-api` and `tiproxy-sql` services are also created automatically.

### IPv6 Support

Starting from v6.5.1, TiDB supports using IPv6 addresses for all network connections. If you deploy TiDB using TiDB Operator v1.4.3 or later versions, you can enable the TiDB cluster to listen on IPv6 addresses by configuring `spec.preferIPv6` to `true`.
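
For example, a minimal sketch (assuming the cluster already runs TiDB v6.5.1 or later and TiDB Operator v1.4.3 or later):

```yaml
spec:
  # Prefer IPv6 addresses for all network connections of the cluster.
  preferIPv6: true
```
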
7 changes: 3 additions & 4 deletions en/deploy-failures.md
@@ -38,13 +38,12 @@ kubectl describe restores -n ${namespace} ${restore_name}
A Pod staying in the Pending state is usually caused by insufficient resources, for example:

- The `StorageClass` of the PVC used by PD, TiKV, TiFlash, Pump, Monitor, Backup, and Restore Pods does not exist or the PV is insufficient.
- No nodes in the Kubernetes cluster can satisfy the CPU or memory resources requested by the Pod
- The number of TiKV or PD replicas and the number of nodes in the cluster do not satisfy the high availability scheduling policy of tidb-scheduler
- No nodes in the Kubernetes cluster can satisfy the CPU or memory resources requested by the Pod.
- The number of TiKV or PD replicas and the number of nodes in the cluster do not satisfy the high availability scheduling policy of tidb-scheduler.
- The certificates used by TiDB or TiProxy components are not configured.

You can check the specific reason for Pending by using the `kubectl describe pod` command:

{{< copyable "shell-regular" >}}

```shell
kubectl describe po -n ${namespace} ${pod_name}
```
4 changes: 4 additions & 0 deletions en/deploy-on-general-kubernetes.md
@@ -51,6 +51,7 @@ This document describes how to deploy a TiDB cluster on general Kubernetes.
pingcap/tidb-binlog:v7.5.0
pingcap/ticdc:v7.5.0
pingcap/tiflash:v7.5.0
pingcap/tiproxy:latest
pingcap/tidb-monitor-reloader:v1.0.1
pingcap/tidb-monitor-initializer:v7.5.0
grafana/grafana:7.5.11
@@ -69,6 +70,7 @@ This document describes how to deploy a TiDB cluster on general Kubernetes.
docker pull pingcap/tidb-binlog:v7.5.0
docker pull pingcap/ticdc:v7.5.0
docker pull pingcap/tiflash:v7.5.0
docker pull pingcap/tiproxy:latest
docker pull pingcap/tidb-monitor-reloader:v1.0.1
docker pull pingcap/tidb-monitor-initializer:v7.5.0
docker pull grafana/grafana:7.5.11
@@ -80,6 +82,7 @@ This document describes how to deploy a TiDB cluster on general Kubernetes.
docker save -o tidb-v7.5.0.tar pingcap/tidb:v7.5.0
docker save -o tidb-binlog-v7.5.0.tar pingcap/tidb-binlog:v7.5.0
docker save -o ticdc-v7.5.0.tar pingcap/ticdc:v7.5.0
docker save -o tiproxy-latest.tar pingcap/tiproxy:latest
docker save -o tiflash-v7.5.0.tar pingcap/tiflash:v7.5.0
docker save -o tidb-monitor-reloader-v1.0.1.tar pingcap/tidb-monitor-reloader:v1.0.1
docker save -o tidb-monitor-initializer-v7.5.0.tar pingcap/tidb-monitor-initializer:v7.5.0
@@ -98,6 +101,7 @@ This document describes how to deploy a TiDB cluster on general Kubernetes.
docker load -i tidb-v7.5.0.tar
docker load -i tidb-binlog-v7.5.0.tar
docker load -i ticdc-v7.5.0.tar
docker load -i tiproxy-latest.tar
docker load -i tiflash-v7.5.0.tar
docker load -i tidb-monitor-reloader-v1.0.1.tar
docker load -i tidb-monitor-initializer-v7.5.0.tar
13 changes: 7 additions & 6 deletions en/deploy-tidb-cluster-across-multiple-kubernetes.md
@@ -513,19 +513,20 @@ For a TiDB cluster deployed across Kubernetes clusters, to perform a rolling upg

2. Taking step 1 as an example, perform the following upgrade operations in sequence:

1. If TiFlash is deployed in clusters, upgrade the TiFlash versions for all the Kubernetes clusters that have TiFlash deployed.
2. Upgrade TiKV versions for all Kubernetes clusters.
3. If Pump is deployed in clusters, upgrade the Pump versions for all the Kubernetes clusters that have Pump deployed.
4. Upgrade TiDB versions for all Kubernetes clusters.
5. If TiCDC is deployed in clusters, upgrade the TiCDC versions for all the Kubernetes clusters that have TiCDC deployed.
1. If TiProxy is deployed in clusters, upgrade the TiProxy versions for all the Kubernetes clusters that have TiProxy deployed.
2. If TiFlash is deployed in clusters, upgrade the TiFlash versions for all the Kubernetes clusters that have TiFlash deployed.
3. Upgrade TiKV versions for all Kubernetes clusters.
4. If Pump is deployed in clusters, upgrade the Pump versions for all the Kubernetes clusters that have Pump deployed.
5. Upgrade TiDB versions for all Kubernetes clusters.
6. If TiCDC is deployed in clusters, upgrade the TiCDC versions for all the Kubernetes clusters that have TiCDC deployed.

## Exit and reclaim a TidbCluster that has already joined a cross-Kubernetes cluster

When you need to make a cluster exit from the joined TiDB cluster deployed across Kubernetes and reclaim its resources, you can do so by scaling in the cluster. In this scenario, the following scale-in requirements must be met:

- After scaling in the cluster, the number of TiKV replicas in the cluster should be greater than the number of `max-replicas` set in PD. By default, the number of TiKV replicas needs to be greater than three.

Take the second TidbCluster created in [the last section](#step-2-deploy-the-new-tidbcluster-to-join-the-tidb-cluster) as an example. First, set the number of replicas of PD, TiKV, and TiDB to `0`. If you have enabled other components such as TiFlash, TiCDC, and Pump, set the number of these replicas to `0`:
Take the second TidbCluster created in [the last section](#step-2-deploy-the-new-tidbcluster-to-join-the-tidb-cluster) as an example. First, set the number of replicas of PD, TiKV, and TiDB to `0`. If you have enabled other components such as TiFlash, TiCDC, TiProxy, and Pump, set the number of these replicas to `0`:

{{< copyable "shell-regular" >}}

94 changes: 94 additions & 0 deletions en/deploy-tiproxy.md
@@ -0,0 +1,94 @@
---
title: Deploy TiProxy Load Balancer for an Existing TiDB Cluster
summary: Learn how to deploy TiProxy for an existing TiDB cluster on Kubernetes.
---

# Deploy TiProxy Load Balancer for an Existing TiDB Cluster

This topic describes how to deploy or remove the TiDB load balancer [TiProxy](https://docs.pingcap.com/tidb/v7.6/tiproxy-overview) for an existing TiDB cluster on Kubernetes. TiProxy is placed between the client and TiDB server to provide load balancing, connection persistence, and service discovery for TiDB.

> **Note:**
>
> If you have not deployed a TiDB cluster, you can add TiProxy configurations when [configuring a TiDB cluster](configure-a-tidb-cluster.md) and then [deploy a TiDB cluster](deploy-on-general-kubernetes.md). In that case, you do not need to refer to this topic.

## Deploy TiProxy

If you need to deploy TiProxy for an existing TiDB cluster, follow these steps:

> **Note:**
>
> If your server does not have access to the internet, refer to [Deploy a TiDB Cluster](deploy-on-general-kubernetes.md#deploy-the-tidb-cluster) to download the `pingcap/tiproxy` Docker image to a machine with access to the internet and then upload the Docker image to your server. Then, use `docker load` to install the Docker image on your server.

1. Edit the TidbCluster Custom Resource (CR):

    ```shell
    kubectl edit tc ${cluster_name} -n ${namespace}
    ```

2. Add the TiProxy configuration as shown in the following example:

    ```yaml
    spec:
      tiproxy:
        baseImage: pingcap/tiproxy
        replicas: 3
    ```

3. Configure the related parameters in `spec.tiproxy.config` of the TidbCluster CR. For example:

    ```yaml
    spec:
      tiproxy:
        config: |
          [log]
          level = "info"
    ```

For more information about TiProxy configuration, see [TiProxy Configuration](https://docs.pingcap.com/tidb/v7.6/tiproxy/tiproxy-configuration).

After TiProxy is started, you can find the corresponding `tiproxy-sql` load balancer service by running the following command.

``` shell
kubectl get svc -n ${namespace}
```

## Remove TiProxy

If your TiDB cluster no longer needs TiProxy, follow these steps to remove it.

1. Modify `spec.tiproxy.replicas` to `0` to remove the TiProxy Pod by running the following command:

    ```shell
    kubectl patch tidbcluster ${cluster_name} -n ${namespace} --type merge -p '{"spec":{"tiproxy":{"replicas": 0}}}'
    ```

2. Check the status of the TiProxy Pod:

    ```shell
    kubectl get pod -n ${namespace} -l app.kubernetes.io/component=tiproxy,app.kubernetes.io/instance=${cluster_name}
    ```

    If the output is empty, the TiProxy Pod has been successfully removed.

3. Delete the TiProxy StatefulSet.

    1. Modify the TidbCluster CR and delete the `spec.tiproxy` field by running the following command:

        ```shell
        kubectl patch tidbcluster ${cluster_name} -n ${namespace} --type json -p '[{"op":"remove", "path":"/spec/tiproxy"}]'
        ```

    2. Delete the TiProxy StatefulSet by running the following command:

        ```shell
        kubectl delete statefulsets -n ${namespace} -l app.kubernetes.io/component=tiproxy,app.kubernetes.io/instance=${cluster_name}
        ```

    3. Check whether the TiProxy StatefulSet has been successfully deleted by running the following command:

        ```shell
        kubectl get sts -n ${namespace} -l app.kubernetes.io/component=tiproxy,app.kubernetes.io/instance=${cluster_name}
        ```

        If the output is empty, the TiProxy StatefulSet has been successfully deleted.
55 changes: 45 additions & 10 deletions en/enable-tls-between-components.md
@@ -12,7 +12,7 @@ To enable TLS between TiDB components, perform the following steps:

1. Generate certificates for each component of the TiDB cluster to be created:

- A set of server-side certificates for the PD/TiKV/TiDB/Pump/Drainer/TiFlash/TiKV Importer/TiDB Lightning component, saved as the Kubernetes Secret objects: `${cluster_name}-${component_name}-cluster-secret`.
- A set of server-side certificates for the PD/TiKV/TiDB/Pump/Drainer/TiFlash/TiProxy/TiKV Importer/TiDB Lightning component, saved as the Kubernetes Secret objects: `${cluster_name}-${component_name}-cluster-secret`.
- A set of shared client-side certificates for the various clients of each component, saved as the Kubernetes Secret objects: `${cluster_name}-cluster-client-secret`.

> **Note:**
@@ -402,10 +402,47 @@ This section describes how to issue certificates using two methods: `cfssl` and
{{< copyable "shell-regular" >}}
``` shell
```shell
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=internal ticdc-server.json | cfssljson -bare ticdc-server
```
- TiProxy

    1. Generate the default `tiproxy-server.json` file:

        ```shell
        cfssl print-defaults csr > tiproxy-server.json
        ```

    2. Edit this file to change the `CN` and `hosts` attributes:

        ```json
        ...
        "CN": "TiDB",
        "hosts": [
          "127.0.0.1",
          "::1",
          "${cluster_name}-tiproxy",
          "${cluster_name}-tiproxy.${namespace}",
          "${cluster_name}-tiproxy.${namespace}.svc",
          "${cluster_name}-tiproxy-peer",
          "${cluster_name}-tiproxy-peer.${namespace}",
          "${cluster_name}-tiproxy-peer.${namespace}.svc",
          "*.${cluster_name}-tiproxy-peer",
          "*.${cluster_name}-tiproxy-peer.${namespace}",
          "*.${cluster_name}-tiproxy-peer.${namespace}.svc"
        ],
        ...
        ```

        `${cluster_name}` is the name of the cluster. `${namespace}` is the namespace in which the TiDB cluster is deployed. You can also add your customized `hosts`.

    3. Generate the TiProxy server-side certificate:

        ```shell
        cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=internal tiproxy-server.json | cfssljson -bare tiproxy-server
        ```
- TiFlash
1. Generate the default `tiflash-server.json` file:
@@ -588,32 +625,30 @@ This section describes how to issue certificates using two methods: `cfssl` and
- The Drainer cluster certificate Secret:
{{< copyable "shell-regular" >}}
```shell
kubectl create secret generic ${cluster_name}-drainer-cluster-secret --namespace=${namespace} --from-file=tls.crt=drainer-server.pem --from-file=tls.key=drainer-server-key.pem --from-file=ca.crt=ca.pem
```
- The TiCDC cluster certificate Secret:
{{< copyable "shell-regular" >}}
```shell
kubectl create secret generic ${cluster_name}-ticdc-cluster-secret --namespace=${namespace} --from-file=tls.crt=ticdc-server.pem --from-file=tls.key=ticdc-server-key.pem --from-file=ca.crt=ca.pem
```
- The TiFlash cluster certificate Secret:
- The TiProxy cluster certificate Secret:
{{< copyable "shell-regular" >}}
``` shell
kubectl create secret generic ${cluster_name}-tiproxy-cluster-secret --namespace=${namespace} --from-file=tls.crt=tiproxy-server.pem --from-file=tls.key=tiproxy-server-key.pem --from-file=ca.crt=ca.pem
```
- The TiFlash cluster certificate Secret:
``` shell
kubectl create secret generic ${cluster_name}-tiflash-cluster-secret --namespace=${namespace} --from-file=tls.crt=tiflash-server.pem --from-file=tls.key=tiflash-server-key.pem --from-file=ca.crt=ca.pem
```
- The TiKV Importer cluster certificate Secret:
{{< copyable "shell-regular" >}}
``` shell
kubectl create secret generic ${cluster_name}-importer-cluster-secret --namespace=${namespace} --from-file=tls.crt=importer-server.pem --from-file=tls.key=importer-server-key.pem --from-file=ca.crt=ca.pem
```
2 changes: 1 addition & 1 deletion en/faq.md
@@ -135,7 +135,7 @@ After you execute the `kubectl get tc` command, if the output shows that the **R

* Upgrading
* Scaling
* Any Pod of PD, TiDB, TiKV, or TiFlash is not Ready
* Any Pod of PD, TiDB, TiKV, TiFlash, or TiProxy is not Ready

To check whether a TiDB cluster is unavailable, you can try connecting to TiDB. If the connection fails, it means that the corresponding TiDBCluster is unavailable.

6 changes: 5 additions & 1 deletion en/modify-tidb-configuration.md
@@ -13,7 +13,7 @@ This document describes how to modify the configuration of TiDB clusters deploye

For TiDB and TiKV, if you [modify their configuration online](https://docs.pingcap.com/tidb/stable/dynamic-config/) using SQL statements, after you upgrade or restart the cluster, the configurations will be overwritten by those in the `TidbCluster` CR. This leads to the online configuration update being invalid. Therefore, to persist the configuration, you must directly modify their configurations in the `TidbCluster` CR.

For TiFlash, TiCDC, and Pump, you can only modify their configurations in the `TidbCluster` CR.
For TiFlash, TiProxy, TiCDC, and Pump, you can only modify their configurations in the `TidbCluster` CR.
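
For example, a TiProxy logging change is declared directly in the `TidbCluster` CR (a minimal sketch reusing the `spec.tiproxy.config` field; the level value is illustrative):

```yaml
spec:
  tiproxy:
    config: |
      [log]
      level = "info"
```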

To modify the configuration in the `TidbCluster` CR, take the following steps:

@@ -42,3 +42,7 @@ After PD is started for the first time, some PD configuration items are persiste
Among all the PD configuration items listed in [Modify PD configuration online](https://docs.pingcap.com/tidb/stable/dynamic-config/#modify-pd-configuration-online), after the first start, only `log.level` can be modified by using the `TidbCluster` CR. Other configurations cannot be modified by using CR.

For TiDB clusters deployed on Kubernetes, if you need to modify the PD configuration, you can modify the configuration online using [SQL statements](https://docs.pingcap.com/tidb/stable/dynamic-config/#modify-pd-configuration-online), [pd-ctl](https://docs.pingcap.com/tidb/stable/pd-control#config-show--set-option-value--placement-rules), or PD server API.

## Modify TiProxy configuration

Modifying the configuration of the TiProxy component never restarts the Pod. If you want to restart the Pod, you need to manually kill the Pod or change the Pod image to trigger a restart.