diff --git a/en/TOC.md b/en/TOC.md
index 9064917f1..eb2366751 100644
--- a/en/TOC.md
+++ b/en/TOC.md
@@ -22,6 +22,7 @@
     - [Alibaba Cloud ACK](deploy-on-alibaba-cloud.md)
   - [Deploy TiDB on ARM64 Machines](deploy-cluster-on-arm64.md)
   - [Deploy TiFlash to Explore TiDB HTAP](deploy-tiflash.md)
+  - [Deploy TiProxy Load Balancer](deploy-tiproxy.md)
   - Deploy TiDB Across Multiple Kubernetes Clusters
     - [Build Multiple Interconnected AWS EKS Clusters](build-multi-aws-eks.md)
     - [Build Multiple Interconnected GKE Clusters](build-multi-gcp-gke.md)
diff --git a/en/configure-a-tidb-cluster.md b/en/configure-a-tidb-cluster.md
index 545f0358a..b003d652b 100644
--- a/en/configure-a-tidb-cluster.md
+++ b/en/configure-a-tidb-cluster.md
@@ -223,7 +223,7 @@ To mount multiple PVs for TiCDC:

 ### HostNetwork

-For PD, TiKV, TiDB, TiFlash, TiCDC, and Pump, you can configure the Pods to use the host namespace [`HostNetwork`](https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pod-s-dns-policy).
+For PD, TiKV, TiDB, TiFlash, TiProxy, TiCDC, and Pump, you can configure the Pods to use the host namespace [`HostNetwork`](https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pod-s-dns-policy).

 To enable `HostNetwork` for all supported components, configure `spec.hostNetwork: true`.

@@ -255,6 +255,17 @@ The deployed cluster topology by default has three PD Pods, three TiKV Pods, and
 >
 > If the number of Kubernetes cluster nodes is less than three, one PD Pod goes to the Pending state, and neither TiKV Pods nor TiDB Pods are created. When the number of nodes in the Kubernetes cluster is less than three, to start the TiDB cluster, you can reduce the number of PD Pods in the default deployment to `1`.

+#### Enable TiProxy
+
+The deployment method is the same as that of PD. In addition, you need to configure `spec.tiproxy` to specify the number of TiProxy replicas:
+
+```yaml
+  tiproxy:
+    baseImage: pingcap/tiproxy
+    replicas: 3
+    config:
+```
+
 #### Enable TiFlash

 If you want to enable TiFlash in the cluster, configure `spec.pd.config.replication.enable-placement-rules: true` and configure `spec.tiflash` in the `${cluster_name}/tidb-cluster.yaml` file as follows:
@@ -313,7 +324,7 @@ If you want to enable TiCDC in the cluster, you can add TiCDC spec to the `TiDBC

 ### Configure TiDB components

-This section introduces how to configure the parameters of TiDB/TiKV/PD/TiFlash/TiCDC.
+This section introduces how to configure the parameters of TiDB/TiKV/PD/TiProxy/TiFlash/TiCDC.

 #### Configure TiDB parameters

@@ -377,6 +388,22 @@ For all the configurable parameters of PD, refer to [PD Configuration File](http
 > - If you deploy your TiDB cluster using CR, make sure that `Config: {}` is set, no matter you want to modify `config` or not. Otherwise, PD components might not be started successfully. This step is meant to be compatible with `Helm` deployment.
 > - After the cluster is started for the first time, some PD configuration items are persisted in etcd. The persisted configuration in etcd takes precedence over that in PD. Therefore, after the first start, you cannot modify some PD configuration using parameters. You need to dynamically modify the configuration using SQL statements, pd-ctl, or PD server API. Currently, among all the configuration items listed in [Modify PD configuration online](https://docs.pingcap.com/tidb/stable/dynamic-config#modify-pd-configuration-online), except `log.level`, all the other configuration items cannot be modified using parameters after the first start.

+#### Configure TiProxy parameters
+
+TiProxy parameters can be configured by `spec.tiproxy.config` in TidbCluster Custom Resource.
+
+For example:
+
+```yaml
+spec:
+  tiproxy:
+    config: |
+      [log]
+      level = "info"
+```
+
+For all the configurable parameters of TiProxy, refer to [TiProxy Configuration File](https://docs.pingcap.com/tidb/v7.6/tiproxy-configuration).
+
 #### Configure TiFlash parameters

 TiFlash parameters can be configured by `spec.tiflash.config` in TidbCluster Custom Resource.
@@ -642,6 +669,8 @@ spec:

 See [Kubernetes Service Documentation](https://kubernetes.io/docs/concepts/services-networking/service/) to know more about the features of Service and what LoadBalancer in the cloud platform supports.

+If TiProxy is enabled, the `tiproxy-api` and `tiproxy-sql` services are also created automatically.
+
 ### IPv6 Support

 Starting v6.5.1, TiDB supports using IPv6 addresses for all network connections. If you deploy TiDB using TiDB Operator v1.4.3 or later versions, you can enable the TiDB cluster to listen on IPv6 addresses by configuring `spec.preferIPv6` to `true`.
diff --git a/en/deploy-failures.md b/en/deploy-failures.md
index b399d08b8..e5e032c04 100644
--- a/en/deploy-failures.md
+++ b/en/deploy-failures.md
@@ -38,13 +38,12 @@ kubectl describe restores -n ${namespace} ${restore_name}
 The Pending state of a Pod is usually caused by conditions of insufficient resources, for example:

 - The `StorageClass` of the PVC used by PD, TiKV, TiFlash, Pump, Monitor, Backup, and Restore Pods does not exist or the PV is insufficient.
-- No nodes in the Kubernetes cluster can satisfy the CPU or memory resources requested by the Pod
-- The number of TiKV or PD replicas and the number of nodes in the cluster do not satisfy the high availability scheduling policy of tidb-scheduler
+- No nodes in the Kubernetes cluster can satisfy the CPU or memory resources requested by the Pod.
+- The number of TiKV or PD replicas and the number of nodes in the cluster do not satisfy the high availability scheduling policy of tidb-scheduler.
+- The certificates used by the TiDB or TiProxy components are not configured.

 You can check the specific reason for Pending by using the `kubectl describe pod` command:

-{{< copyable "shell-regular" >}}
-
 ```shell
 kubectl describe po -n ${namespace} ${pod_name}
 ```
diff --git a/en/deploy-on-general-kubernetes.md b/en/deploy-on-general-kubernetes.md
index 8b4ebc6ef..f10972b86 100644
--- a/en/deploy-on-general-kubernetes.md
+++ b/en/deploy-on-general-kubernetes.md
@@ -51,6 +51,7 @@ This document describes how to deploy a TiDB cluster on general Kubernetes.
     pingcap/tidb-binlog:v7.5.0
     pingcap/ticdc:v7.5.0
     pingcap/tiflash:v7.5.0
+    pingcap/tiproxy:latest
     pingcap/tidb-monitor-reloader:v1.0.1
     pingcap/tidb-monitor-initializer:v7.5.0
     grafana/grafana:7.5.11
@@ -69,6 +70,7 @@ This document describes how to deploy a TiDB cluster on general Kubernetes.
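+    # Note: pingcap/tiproxy:latest is pulled below. To pin a specific TiProxy release
+    # instead, replace the tag with the desired version, for example: pingcap/tiproxy:${version}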
     docker pull pingcap/tidb-binlog:v7.5.0
     docker pull pingcap/ticdc:v7.5.0
     docker pull pingcap/tiflash:v7.5.0
+    docker pull pingcap/tiproxy:latest
     docker pull pingcap/tidb-monitor-reloader:v1.0.1
     docker pull pingcap/tidb-monitor-initializer:v7.5.0
     docker pull grafana/grafana:7.5.11
@@ -80,6 +82,7 @@ This document describes how to deploy a TiDB cluster on general Kubernetes.
     docker save -o tidb-v7.5.0.tar pingcap/tidb:v7.5.0
     docker save -o tidb-binlog-v7.5.0.tar pingcap/tidb-binlog:v7.5.0
     docker save -o ticdc-v7.5.0.tar pingcap/ticdc:v7.5.0
+    docker save -o tiproxy-latest.tar pingcap/tiproxy:latest
     docker save -o tiflash-v7.5.0.tar pingcap/tiflash:v7.5.0
     docker save -o tidb-monitor-reloader-v1.0.1.tar pingcap/tidb-monitor-reloader:v1.0.1
     docker save -o tidb-monitor-initializer-v7.5.0.tar pingcap/tidb-monitor-initializer:v7.5.0
@@ -98,6 +101,7 @@ This document describes how to deploy a TiDB cluster on general Kubernetes.
     docker load -i tidb-v7.5.0.tar
     docker load -i tidb-binlog-v7.5.0.tar
     docker load -i ticdc-v7.5.0.tar
+    docker load -i tiproxy-latest.tar
     docker load -i tiflash-v7.5.0.tar
     docker load -i tidb-monitor-reloader-v1.0.1.tar
     docker load -i tidb-monitor-initializer-v7.5.0.tar
diff --git a/en/deploy-tidb-cluster-across-multiple-kubernetes.md b/en/deploy-tidb-cluster-across-multiple-kubernetes.md
index 2d19cc483..06a56a732 100644
--- a/en/deploy-tidb-cluster-across-multiple-kubernetes.md
+++ b/en/deploy-tidb-cluster-across-multiple-kubernetes.md
@@ -513,11 +513,12 @@ For a TiDB cluster deployed across Kubernetes clusters, to perform a rolling upg

 2. Take step 1 as an example, perform the following upgrade operations in sequence:

-    1. If TiFlash is deployed in clusters, upgrade the TiFlash versions for all the Kubernetes clusters that have TiFlash deployed.
-    2. Upgrade TiKV versions for all Kubernetes clusters.
-    3. If Pump is deployed in clusters, upgrade the Pump versions for all the Kubernetes clusters that have Pump deployed.
-    4. Upgrade TiDB versions for all Kubernetes clusters.
-    5. If TiCDC is deployed in clusters, upgrade the TiCDC versions for all the Kubernetes clusters that have TiCDC deployed.
+    1. If TiProxy is deployed in clusters, upgrade the TiProxy versions for all the Kubernetes clusters that have TiProxy deployed.
+    2. If TiFlash is deployed in clusters, upgrade the TiFlash versions for all the Kubernetes clusters that have TiFlash deployed.
+    3. Upgrade TiKV versions for all Kubernetes clusters.
+    4. If Pump is deployed in clusters, upgrade the Pump versions for all the Kubernetes clusters that have Pump deployed.
+    5. Upgrade TiDB versions for all Kubernetes clusters.
+    6. If TiCDC is deployed in clusters, upgrade the TiCDC versions for all the Kubernetes clusters that have TiCDC deployed.

 ## Exit and reclaim TidbCluster that already join a cross-Kubernetes cluster

@@ -525,7 +526,7 @@ When you need to make a cluster exit from the joined TiDB cluster deployed acros

 - After scaling in the cluster, the number of TiKV replicas in the cluster should be greater than the number of `max-replicas` set in PD. By default, the number of TiKV replicas needs to be greater than three.

-Take the second TidbCluster created in [the last section](#step-2-deploy-the-new-tidbcluster-to-join-the-tidb-cluster) as an example. First, set the number of replicas of PD, TiKV, and TiDB to `0`. If you have enabled other components such as TiFlash, TiCDC, and Pump, set the number of these replicas to `0`:
+Take the second TidbCluster created in [the last section](#step-2-deploy-the-new-tidbcluster-to-join-the-tidb-cluster) as an example. First, set the number of replicas of PD, TiKV, and TiDB to `0`. If you have enabled other components such as TiFlash, TiCDC, TiProxy, and Pump, set the number of these replicas to `0`:

 {{< copyable "shell-regular" >}}

diff --git a/en/deploy-tiproxy.md b/en/deploy-tiproxy.md
new file mode 100644
index 000000000..71b600840
--- /dev/null
+++ b/en/deploy-tiproxy.md
@@ -0,0 +1,93 @@
+---
+title: Deploy TiProxy Load Balancer for an Existing TiDB Cluster
+summary: Learn how to deploy TiProxy for an existing TiDB cluster on Kubernetes.
+---
+
+# Deploy TiProxy Load Balancer for an Existing TiDB Cluster
+
+This topic describes how to deploy or remove the TiDB load balancer [TiProxy](https://docs.pingcap.com/tidb/v7.6/tiproxy-overview) for an existing TiDB cluster on Kubernetes. TiProxy is placed between the client and the TiDB server to provide load balancing, connection persistence, and service discovery for TiDB.
+
+> **Note:**
+>
+> If you have not deployed a TiDB cluster, you can add the TiProxy configuration when [configuring a TiDB cluster](configure-a-tidb-cluster.md) and then [deploy a TiDB cluster](deploy-on-general-kubernetes.md). In that case, you do not need to refer to this topic.
+
+## Deploy TiProxy
+
+If you need to deploy TiProxy for an existing TiDB cluster, follow these steps:
+
+> **Note:**
+>
+> If your server does not have access to the internet, refer to [Deploy a TiDB Cluster](deploy-on-general-kubernetes.md#deploy-the-tidb-cluster) to download the `pingcap/tiproxy` Docker image on a machine with internet access, upload the image to your server, and then use `docker load` to install the image on your server.
+
+1. Edit the TidbCluster Custom Resource (CR):
+
+    ```shell
+    kubectl edit tc ${cluster_name} -n ${namespace}
+    ```
+
+2. Add the TiProxy configuration as shown in the following example:
+
+    ```yaml
+    spec:
+      tiproxy:
+        baseImage: pingcap/tiproxy
+        replicas: 3
+    ```
+
+3. Configure the related parameters in `spec.tiproxy.config` of the TidbCluster CR. For example:
+
+    ```yaml
+    spec:
+      tiproxy:
+        config: |
+          [log]
+          level = "info"
+    ```
+
+    For more information about TiProxy configuration, see [TiProxy Configuration](https://docs.pingcap.com/tidb/v7.6/tiproxy-configuration).
+
+After TiProxy is started, you can find the corresponding `tiproxy-sql` load balancer service by running the following command:
+
+```shell
+kubectl get svc -n ${namespace}
+```
+
+## Remove TiProxy
+
+If your TiDB cluster no longer needs TiProxy, follow these steps to remove it.
+
+1. Set `spec.tiproxy.replicas` to `0` to remove the TiProxy Pods by running the following command:
+
+    ```shell
+    kubectl patch tidbcluster ${cluster_name} -n ${namespace} --type merge -p '{"spec":{"tiproxy":{"replicas": 0}}}'
+    ```
+
+2. Check the status of the TiProxy Pods:
+
+    ```shell
+    kubectl get pod -n ${namespace} -l app.kubernetes.io/component=tiproxy,app.kubernetes.io/instance=${cluster_name}
+    ```
+
+    If the output is empty, the TiProxy Pods have been successfully removed.
+
+3. Delete the TiProxy StatefulSet.
+
+    1. Modify the TidbCluster CR and delete the `spec.tiproxy` field by running the following command:
+
+        ```shell
+        kubectl patch tidbcluster ${cluster_name} -n ${namespace} --type json -p '[{"op":"remove", "path":"/spec/tiproxy"}]'
+        ```
+
+    2. Delete the TiProxy StatefulSet by running the following command:
+
+        ```shell
+        kubectl delete statefulsets -n ${namespace} -l app.kubernetes.io/component=tiproxy,app.kubernetes.io/instance=${cluster_name}
+        ```
+
+    3. Check whether the TiProxy StatefulSet has been successfully deleted by running the following command:
+
+        ```shell
+        kubectl get sts -n ${namespace} -l app.kubernetes.io/component=tiproxy,app.kubernetes.io/instance=${cluster_name}
+        ```
+
+        If the output is empty, the TiProxy StatefulSet has been successfully deleted.
diff --git a/en/enable-tls-between-components.md b/en/enable-tls-between-components.md
index 87099ce33..b19ca7430 100644
--- a/en/enable-tls-between-components.md
+++ b/en/enable-tls-between-components.md
@@ -12,7 +12,7 @@ To enable TLS between TiDB components, perform the following steps:

 1. Generate certificates for each component of the TiDB cluster to be created:

-    - A set of server-side certificates for the PD/TiKV/TiDB/Pump/Drainer/TiFlash/TiKV Importer/TiDB Lightning component, saved as the Kubernetes Secret objects: `${cluster_name}-${component_name}-cluster-secret`.
+    - A set of server-side certificates for the PD/TiKV/TiDB/Pump/Drainer/TiFlash/TiProxy/TiKV Importer/TiDB Lightning component, saved as the Kubernetes Secret objects: `${cluster_name}-${component_name}-cluster-secret`.
     - A set of shared client-side certificates for the various clients of each component, saved as the Kubernetes Secret objects: `${cluster_name}-cluster-client-secret`.

 > **Note:**
@@ -402,10 +402,47 @@ This section describes how to issue certificates using two methods: `cfssl` and

         {{< copyable "shell-regular" >}}

-        ``` shell
+        ```shell
         cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=internal ticdc-server.json | cfssljson -bare ticdc-server
         ```

+- TiProxy
+
+    1. Generate the default `tiproxy-server.json` file:
+
+        ```shell
+        cfssl print-defaults csr > tiproxy-server.json
+        ```
+
+    2. Edit this file to change the `CN` and `hosts` attributes:
+
+        ```json
+        ...
+            "CN": "TiDB",
+            "hosts": [
+              "127.0.0.1",
+              "::1",
+              "${cluster_name}-tiproxy",
+              "${cluster_name}-tiproxy.${namespace}",
+              "${cluster_name}-tiproxy.${namespace}.svc",
+              "${cluster_name}-tiproxy-peer",
+              "${cluster_name}-tiproxy-peer.${namespace}",
+              "${cluster_name}-tiproxy-peer.${namespace}.svc",
+              "*.${cluster_name}-tiproxy-peer",
+              "*.${cluster_name}-tiproxy-peer.${namespace}",
+              "*.${cluster_name}-tiproxy-peer.${namespace}.svc"
+            ],
+        ...
+        ```
+
+        `${cluster_name}` is the name of the cluster. `${namespace}` is the namespace in which the TiDB cluster is deployed. You can also add your customized `hosts`.
+
+    3. Generate the TiProxy server-side certificate:
+
+        ```shell
+        cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=internal tiproxy-server.json | cfssljson -bare tiproxy-server
+        ```
+
 - TiFlash

     1. Generate the default `tiflash-server.json` file:
@@ -588,23 +625,23 @@ This section describes how to issue certificates using two methods: `cfssl` and
     - The Drainer cluster certificate Secret:

-        {{< copyable "shell-regular" >}}
-
         ```shell
         kubectl create secret generic ${cluster_name}-drainer-cluster-secret --namespace=${namespace} --from-file=tls.crt=drainer-server.pem --from-file=tls.key=drainer-server-key.pem --from-file=ca.crt=ca.pem
         ```

     - The TiCDC cluster certificate Secret:

-        {{< copyable "shell-regular" >}}
-
         ```shell
         kubectl create secret generic ${cluster_name}-ticdc-cluster-secret --namespace=${namespace} --from-file=tls.crt=ticdc-server.pem --from-file=tls.key=ticdc-server-key.pem --from-file=ca.crt=ca.pem
         ```

-    - The TiFlash cluster certificate Secret:
+    - The TiProxy cluster certificate Secret:

-        {{< copyable "shell-regular" >}}
+        ``` shell
+        kubectl create secret generic ${cluster_name}-tiproxy-cluster-secret --namespace=${namespace} --from-file=tls.crt=tiproxy-server.pem --from-file=tls.key=tiproxy-server-key.pem --from-file=ca.crt=ca.pem
+        ```
+
+    - The TiFlash cluster certificate Secret:

         ``` shell
         kubectl create secret generic ${cluster_name}-tiflash-cluster-secret --namespace=${namespace} --from-file=tls.crt=tiflash-server.pem --from-file=tls.key=tiflash-server-key.pem --from-file=ca.crt=ca.pem
@@ -612,8 +649,6 @@ This section describes how to issue certificates using two methods: `cfssl` and

     - The TiKV Importer cluster certificate Secret:

-        {{< copyable "shell-regular" >}}
-
         ``` shell
         kubectl create secret generic ${cluster_name}-importer-cluster-secret --namespace=${namespace} --from-file=tls.crt=importer-server.pem --from-file=tls.key=importer-server-key.pem --from-file=ca.crt=ca.pem
         ```
diff --git a/en/faq.md b/en/faq.md
index 1fdabbef9..c5b10ec0a 100644
--- a/en/faq.md
+++ b/en/faq.md
@@ -135,7 +135,7 @@ After you execute the `kubectl get tc` command, if the output shows that the **R

 * Upgrading
 * Scaling
-* Any Pod of PD, TiDB, TiKV, or TiFlash is not Ready
+* Any Pod of PD, TiDB, TiKV, TiFlash, or TiProxy is not Ready

 To check whether a TiDB cluster is unavailable, you can try connecting to TiDB. If the connection fails, it means that the corresponding TiDBCluster is unavailable.

diff --git a/en/modify-tidb-configuration.md b/en/modify-tidb-configuration.md
index 8d2ab72f1..f0d9fe936 100644
--- a/en/modify-tidb-configuration.md
+++ b/en/modify-tidb-configuration.md
@@ -13,7 +13,7 @@ This document describes how to modify the configuration of TiDB clusters deploye

 For TiDB and TiKV, if you [modify their configuration online](https://docs.pingcap.com/tidb/stable/dynamic-config/) using SQL statements, after you upgrade or restart the cluster, the configurations will be overwritten by those in the `TidbCluster` CR. This leads to the online configuration update being invalid. Therefore, to persist the configuration, you must directly modify their configurations in the `TidbCluster` CR.

-For TiFlash, TiCDC, and Pump, you can only modify their configurations in the `TidbCluster` CR.
+For TiFlash, TiProxy, TiCDC, and Pump, you can only modify their configurations in the `TidbCluster` CR.
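+
+For example, before changing the TiProxy configuration, you can print the configuration that is currently recorded in the CR. The following is a minimal sketch; replace `${cluster_name}` and `${namespace}` with your own values:
+
+```shell
+kubectl get tidbcluster ${cluster_name} -n ${namespace} -o jsonpath='{.spec.tiproxy.config}'
+```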

 To modify the configuration in the `TidbCluster` CR, take the following steps:

@@ -42,3 +42,7 @@ After PD is started for the first time, some PD configuration items are persiste
 Among all the PD configuration items listed in [Modify PD configuration online](https://docs.pingcap.com/tidb/stable/dynamic-config/#modify-pd-configuration-online), after the first start, only `log.level` can be modified by using the `TidbCluster` CR. Other configurations cannot be modified by using CR.

 For TiDB clusters deployed on Kubernetes, if you need to modify the PD configuration, you can modify the configuration online using [SQL statements](https://docs.pingcap.com/tidb/stable/dynamic-config/#modify-pd-configuration-online), [pd-ctl](https://docs.pingcap.com/tidb/stable/pd-control#config-show--set-option-value--placement-rules), or PD server API.
+
+## Modify TiProxy configuration
+
+Modifying the configuration of the TiProxy component never restarts the Pod. If you want to restart the Pod, you need to manually kill the Pod or change the Pod image to trigger a restart.
diff --git a/en/renew-tls-certificate.md b/en/renew-tls-certificate.md
index c5fb1f9cb..6fee2cddf 100644
--- a/en/renew-tls-certificate.md
+++ b/en/renew-tls-certificate.md
@@ -67,7 +67,7 @@ If the original TLS certificates are issued by [the `cfssl` system](enable-tls-b

     > **Note:**
     >
-    > The above command only renews the server-side CA certificate and the client-side CA certificate between PD, TiKV, and TiDB components. If you need to renew the server-side CA certificates for other components, such as TiCDC and TiFlash, you can execute the similar command.
+    > The above command only renews the server-side CA certificate and the client-side CA certificate between PD, TiKV, and TiDB components. If you need to renew the server-side CA certificates for other components, such as TiCDC, TiFlash, and TiProxy, you can execute a similar command.

 5. [Perform the rolling restart](restart-a-tidb-cluster.md) to components that need to load the combined CA certificate.

@@ -110,7 +110,7 @@ If the original TLS certificates are issued by [the `cfssl` system](enable-tls-b

     > **Note:**
     >
-    > The above command only renews the server-side and the client-side certificate between PD, TiKV, and TiDB components. If you need to renew the server-side certificates for other components, such as TiCDC and TiFlash, you can execute the similar command.
+    > The above command only renews the server-side and the client-side certificate between PD, TiKV, and TiDB components. If you need to renew the server-side certificates for other components, such as TiCDC, TiFlash, and TiProxy, you can execute a similar command.

 3. [Perform the rolling restart](restart-a-tidb-cluster.md) to components that need to load the new certificates.

diff --git a/en/scale-a-tidb-cluster.md b/en/scale-a-tidb-cluster.md
index 3f2f507d7..6ed6febe8 100644
--- a/en/scale-a-tidb-cluster.md
+++ b/en/scale-a-tidb-cluster.md
@@ -16,9 +16,9 @@ Horizontally scaling TiDB means that you scale TiDB out or in by adding or remov

 - To scale in a TiDB cluster, **decrease** the value of `replicas` of a certain component. The scaling in operations remove Pods based on the Pod ID in descending order, until the number of Pods equals the value of `replicas`.

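+For example, instead of editing the `TidbCluster` object interactively, you can change the `replicas` value of a component with a single patch command. The following is a minimal sketch that scales TiProxy to five replicas; replace the placeholders and the target number with values that fit your cluster:
+
+```shell
+kubectl patch tidbcluster ${cluster_name} -n ${namespace} --type merge -p '{"spec":{"tiproxy":{"replicas": 5}}}'
+```
+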
-### Horizontally scale PD, TiKV, and TiDB
+### Horizontally scale PD, TiKV, TiDB, and TiProxy

-To scale PD, TiKV, or TiDB horizontally, use kubectl to modify `spec.pd.replicas`, `spec.tikv.replicas`, and `spec.tidb.replicas` in the `TidbCluster` object of the cluster to a desired value.
+To scale PD, TiKV, TiDB, or TiProxy horizontally, use kubectl to modify `spec.pd.replicas`, `spec.tikv.replicas`, `spec.tidb.replicas`, and `spec.tiproxy.replicas` in the `TidbCluster` object of the cluster to desired values.

 1. Modify the `replicas` value of a component as needed. For example, configure the `replicas` value of PD to 3:

@@ -165,9 +165,9 @@ Vertically scaling TiDB means that you scale TiDB up or down by increasing or de

 ### Vertically scale components

-This section describes how to vertically scale up or scale down components including PD, TiKV, TiDB, TiFlash, and TiCDC.
+This section describes how to vertically scale up or scale down components including PD, TiKV, TiDB, TiProxy, TiFlash, and TiCDC.

-- To scale up or scale down PD, TiKV, TiDB, use kubectl to modify `spec.pd.resources`, `spec.tikv.resources`, and `spec.tidb.resources` in the `TidbCluster` object that corresponds to the cluster to a desired value.
+- To scale up or scale down PD, TiKV, TiDB, and TiProxy, use kubectl to modify `spec.pd.resources`, `spec.tikv.resources`, `spec.tidb.resources`, and `spec.tiproxy.resources` in the `TidbCluster` object that corresponds to the cluster to desired values.

 - To scale up or scale down TiFlash, modify the value of `spec.tiflash.resources`.

diff --git a/en/suspend-tidb-cluster.md b/en/suspend-tidb-cluster.md
index 28d1d52a6..4790807ab 100644
--- a/en/suspend-tidb-cluster.md
+++ b/en/suspend-tidb-cluster.md
@@ -59,6 +59,7 @@ If you need to suspend the TiDB cluster, take the following steps:
     * TiCDC
     * TiKV
     * Pump
+    * TiProxy
     * PD

 ## Restore TiDB cluster
diff --git a/en/upgrade-a-tidb-cluster.md b/en/upgrade-a-tidb-cluster.md
index ea92955d3..1a2e70750 100644
--- a/en/upgrade-a-tidb-cluster.md
+++ b/en/upgrade-a-tidb-cluster.md
@@ -14,7 +14,7 @@ This document describes how to upgrade a TiDB cluster on Kubernetes using rollin

 Kubernetes provides the [rolling update](https://kubernetes.io/docs/tutorials/kubernetes-basics/update/update-intro/) feature to update your application with zero downtime.

-When you perform a rolling update, TiDB Operator serially deletes an old Pod and creates the corresponding new Pod in the order of PD, TiFlash, TiKV, and TiDB. After the new Pod runs normally, TiDB Operator proceeds with the next Pod.
+When you perform a rolling update, TiDB Operator serially deletes an old Pod and creates the corresponding new Pod in the order of PD, TiProxy, TiFlash, TiKV, and TiDB. After the new Pod runs normally, TiDB Operator proceeds with the next Pod.

 During the rolling update, TiDB Operator automatically completes Leader transfer for PD and TiKV. Under the highly available deployment topology (minimum requirements: PD \* 3, TiKV \* 3, TiDB \* 2), performing a rolling update to PD and TiKV servers does not impact the running application. If your client supports retrying stale connections, performing a rolling update to TiDB servers does not impact application, either.

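+For example, a rolling upgrade of all components (including TiProxy) is triggered by updating `spec.version` in the `TidbCluster` CR. The following is a minimal sketch that upgrades the cluster to v7.5.1; replace the placeholders and the target version with values that fit your cluster:
+
+```shell
+kubectl patch tidbcluster ${cluster_name} -n ${namespace} --type merge -p '{"spec":{"version":"v7.5.1"}}'
+```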