diff --git a/en/TOC.md b/en/TOC.md index a54e5d118..e25d10a47 100644 --- a/en/TOC.md +++ b/en/TOC.md @@ -113,7 +113,7 @@ - [Advanced StatefulSet Controller](advanced-statefulset.md) - [Admission Controller](enable-admission-webhook.md) - [Sysbench Performance Test](benchmark-sysbench.md) - - [API References](https://github.com/pingcap/tidb-operator/blob/v1.5.1/docs/api-references/docs.md) + - [API References](https://github.com/pingcap/tidb-operator/blob/v1.5.2/docs/api-references/docs.md) - [Cheat Sheet](cheat-sheet.md) - [Required RBAC Rules](tidb-operator-rbac.md) - Tools diff --git a/en/_index.md b/en/_index.md index 479d24f6f..3e2f255f1 100644 --- a/en/_index.md +++ b/en/_index.md @@ -70,7 +70,7 @@ hide_commit: true -[API Docs](https://github.com/pingcap/tidb-operator/blob/v1.5.1/docs/api-references/docs.md) +[API Docs](https://github.com/pingcap/tidb-operator/blob/v1.5.2/docs/api-references/docs.md) [Tools](https://docs.pingcap.com/tidb-in-kubernetes/dev/tidb-toolkit) diff --git a/en/access-dashboard.md b/en/access-dashboard.md index e75f22299..6a1ca2113 100644 --- a/en/access-dashboard.md +++ b/en/access-dashboard.md @@ -244,7 +244,7 @@ To enable this feature, you need to deploy TidbNGMonitoring CR using TiDB Operat EOF ``` - For more configuration items of the TidbNGMonitoring CR, see [example in tidb-operator](https://github.com/pingcap/tidb-operator/blob/v1.5.1/examples/advanced/tidb-ng-monitoring.yaml). + For more configuration items of the TidbNGMonitoring CR, see [example in tidb-operator](https://github.com/pingcap/tidb-operator/blob/v1.5.2/examples/advanced/tidb-ng-monitoring.yaml). 3. Enable Continuous Profiling. diff --git a/en/advanced-statefulset.md b/en/advanced-statefulset.md index 0def45da2..eb6e74762 100644 --- a/en/advanced-statefulset.md +++ b/en/advanced-statefulset.md @@ -21,7 +21,7 @@ The [advanced StatefulSet controller](https://github.com/pingcap/advanced-statef {{< copyable "shell-regular" >}} ```shell - kubectl apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/v1.5.1/manifests/advanced-statefulset-crd.v1beta1.yaml + kubectl apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/v1.5.2/manifests/advanced-statefulset-crd.v1beta1.yaml ``` * For Kubernetes versions >= 1.16: @@ -29,7 +29,7 @@ The [advanced StatefulSet controller](https://github.com/pingcap/advanced-statef {{< copyable "shell-regular" >}} ``` - kubectl apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/v1.5.1/manifests/advanced-statefulset-crd.v1.yaml + kubectl apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/v1.5.2/manifests/advanced-statefulset-crd.v1.yaml ``` 2. Enable the `AdvancedStatefulSet` feature in `values.yaml` of the TiDB Operator chart: diff --git a/en/aggregate-multiple-cluster-monitor-data.md b/en/aggregate-multiple-cluster-monitor-data.md index ff213f559..8b0bbeea2 100644 --- a/en/aggregate-multiple-cluster-monitor-data.md +++ b/en/aggregate-multiple-cluster-monitor-data.md @@ -24,7 +24,7 @@ Thanos provides [Thanos Query](https://thanos.io/tip/components/query.md/) compo {{< copyable "shell-regular" >}} ```shell - kubectl -n ${namespace} apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/v1.5.1/examples/monitor-with-thanos/tidb-monitor.yaml + kubectl -n ${namespace} apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/v1.5.2/examples/monitor-with-thanos/tidb-monitor.yaml ``` 2. Deploy the Thanos Query component. 
@@ -34,7 +34,7 @@ Thanos provides [Thanos Query](https://thanos.io/tip/components/query.md/) compo {{< copyable "shell-regular" >}} ``` - curl -sl -O https://raw.githubusercontent.com/pingcap/tidb-operator/v1.5.1/examples/monitor-with-thanos/thanos-query.yaml + curl -sl -O https://raw.githubusercontent.com/pingcap/tidb-operator/v1.5.2/examples/monitor-with-thanos/thanos-query.yaml ``` 2. Manually modify the `--store` parameter in the `thanos-query.yaml` file by updating `basic-prometheus:10901` to `basic-prometheus.${namespace}:10901`. @@ -182,4 +182,4 @@ spec: After RemoteWrite is enabled, Prometheus pushes the monitoring data to [Thanos Receiver](https://thanos.io/tip/components/receive.md/). For more information, refer to [the design of Thanos Receiver](https://thanos.io/v0.8/proposals/201812_thanos-remote-receive/). -For details on the deployment, refer to [this example of integrating TidbMonitor with Thanos Receiver](https://github.com/pingcap/tidb-operator/tree/v1.5.1/examples/monitor-prom-remotewrite). +For details on the deployment, refer to [this example of integrating TidbMonitor with Thanos Receiver](https://github.com/pingcap/tidb-operator/tree/v1.5.2/examples/monitor-prom-remotewrite). diff --git a/en/backup-restore-cr.md b/en/backup-restore-cr.md index e0ba5cd83..6d815f008 100644 --- a/en/backup-restore-cr.md +++ b/en/backup-restore-cr.md @@ -24,7 +24,7 @@ This section introduces the fields in the `Backup` CR. - If an image is specified without the version, such as `.spec.toolImage: private/registry/br`, the `private/registry/br:${tikv_version}` image is used for backup. - When using Dumpling for backup, you can specify the Dumpling version in this field. - If the Dumpling version is specified in this field, such as `spec.toolImage: pingcap/dumpling:v7.5.0`, the image of the specified version is used for backup. - - If the field is not specified, the Dumpling version specified in `TOOLKIT_VERSION` of the [Backup Manager Dockerfile](https://github.com/pingcap/tidb-operator/blob/v1.5.1/images/tidb-backup-manager/Dockerfile) is used for backup by default. + - If the field is not specified, the Dumpling version specified in `TOOLKIT_VERSION` of the [Backup Manager Dockerfile](https://github.com/pingcap/tidb-operator/blob/v1.5.2/images/tidb-backup-manager/Dockerfile) is used for backup by default. * `.spec.backupType`: the backup type. This field is valid only when you use BR for backup. Currently, the following three types are supported, and this field can be combined with the `.spec.tableFilter` field to configure table filter rules: * `full`: back up all databases in a TiDB cluster. @@ -261,7 +261,7 @@ This section introduces the fields in the `Restore` CR. * `.spec.toolImage`:the tools image used by `Restore`. TiDB Operator supports this configuration starting from v1.1.9. - When using BR for restoring, you can specify the BR version in this field. For example,`spec.toolImage: pingcap/br:v7.5.0`. If not specified, `pingcap/br:${tikv_version}` is used for restoring by default. - - When using Lightning for restoring, you can specify the Lightning version in this field. For example, `spec.toolImage: pingcap/lightning:v7.5.0`. If not specified, the Lightning version specified in `TOOLKIT_VERSION` of the [Backup Manager Dockerfile](https://github.com/pingcap/tidb-operator/blob/v1.5.1/images/tidb-backup-manager/Dockerfile) is used for restoring by default. + - When using Lightning for restoring, you can specify the Lightning version in this field. 
For example, `spec.toolImage: pingcap/lightning:v7.5.0`. If not specified, the Lightning version specified in `TOOLKIT_VERSION` of the [Backup Manager Dockerfile](https://github.com/pingcap/tidb-operator/blob/v1.5.2/images/tidb-backup-manager/Dockerfile) is used for restoring by default. * `.spec.backupType`: the restore type. This field is valid only when you use BR to restore data. Currently, the following three types are supported, and this field can be combined with the `.spec.tableFilter` field to configure table filter rules: * `full`: restore all databases in a TiDB cluster. diff --git a/en/backup-to-aws-s3-by-snapshot.md b/en/backup-to-aws-s3-by-snapshot.md index 8a76f5e07..201073bb2 100644 --- a/en/backup-to-aws-s3-by-snapshot.md +++ b/en/backup-to-aws-s3-by-snapshot.md @@ -42,7 +42,7 @@ The following sections exemplify how to back up data of the TiDB cluster `demo1` ### Step 1. Set up the environment for EBS volume snapshot backup -1. Download the file [backup-rbac.yaml](https://github.com/pingcap/tidb-operator/blob/v1.5.1/manifests/backup/backup-rbac.yaml) to the backup server. +1. Download the file [backup-rbac.yaml](https://github.com/pingcap/tidb-operator/blob/v1.5.2/manifests/backup/backup-rbac.yaml) to the backup server. 2. Create the RBAC-related resources required for the backup in the `test1` namespace by running the following command: diff --git a/en/backup-to-aws-s3-using-br.md b/en/backup-to-aws-s3-using-br.md index 9a69cf642..b933bde4d 100644 --- a/en/backup-to-aws-s3-using-br.md +++ b/en/backup-to-aws-s3-using-br.md @@ -51,7 +51,7 @@ This document provides an example about how to back up the data of the `demo1` T kubectl create namespace backup-test ``` -2. Download [backup-rbac.yaml](https://github.com/pingcap/tidb-operator/blob/v1.5.1/manifests/backup/backup-rbac.yaml), and execute the following command to create the role-based access control (RBAC) resources in the `backup-test` namespace: +2. Download [backup-rbac.yaml](https://github.com/pingcap/tidb-operator/blob/v1.5.2/manifests/backup/backup-rbac.yaml), and execute the following command to create the role-based access control (RBAC) resources in the `backup-test` namespace: ```shell kubectl apply -f backup-rbac.yaml -n backup-test diff --git a/en/backup-to-azblob-using-br.md b/en/backup-to-azblob-using-br.md index 46038d102..31244ad3f 100644 --- a/en/backup-to-azblob-using-br.md +++ b/en/backup-to-azblob-using-br.md @@ -48,7 +48,7 @@ This document provides an example about how to back up the data of the `demo1` T kubectl create namespace backup-test ``` -2. Download [backup-rbac.yaml](https://github.com/pingcap/tidb-operator/blob/v1.5.1/manifests/backup/backup-rbac.yaml), and execute the following command to create the role-based access control (RBAC) resources in the `backup-test` namespace: +2. Download [backup-rbac.yaml](https://github.com/pingcap/tidb-operator/blob/v1.5.2/manifests/backup/backup-rbac.yaml), and execute the following command to create the role-based access control (RBAC) resources in the `backup-test` namespace: ```shell kubectl apply -f backup-rbac.yaml -n backup-test diff --git a/en/backup-to-gcs-using-br.md b/en/backup-to-gcs-using-br.md index bd70eaf12..e917a4172 100644 --- a/en/backup-to-gcs-using-br.md +++ b/en/backup-to-gcs-using-br.md @@ -48,7 +48,7 @@ This document provides an example about how to back up the data of the `demo1` T kubectl create namespace backup-test ``` -2. 
Download [backup-rbac.yaml](https://github.com/pingcap/tidb-operator/blob/v1.5.1/manifests/backup/backup-rbac.yaml), and execute the following command to create the role-based access control (RBAC) resources in the `test1` namespace: +2. Download [backup-rbac.yaml](https://github.com/pingcap/tidb-operator/blob/v1.5.2/manifests/backup/backup-rbac.yaml), and execute the following command to create the role-based access control (RBAC) resources in the `backup-test` namespace: ```shell kubectl apply -f backup-rbac.yaml -n backup-test diff --git a/en/backup-to-gcs.md b/en/backup-to-gcs.md index 9bf53cc02..2910b1661 100644 --- a/en/backup-to-gcs.md +++ b/en/backup-to-gcs.md @@ -38,7 +38,7 @@ To better explain how to perform the backup operation, this document shows an ex ### Step 1: Prepare for ad-hoc full backup -1. Download [backup-rbac.yaml](https://github.com/pingcap/tidb-operator/blob/v1.5.1/manifests/backup/backup-rbac.yaml) and execute the following command to create the role-based access control (RBAC) resources in the `test1` namespace: +1. Download [backup-rbac.yaml](https://github.com/pingcap/tidb-operator/blob/v1.5.2/manifests/backup/backup-rbac.yaml) and execute the following command to create the role-based access control (RBAC) resources in the `test1` namespace: {{< copyable "shell-regular" >}} diff --git a/en/backup-to-pv-using-br.md b/en/backup-to-pv-using-br.md index e6ecdb66f..455fe5663 100644 --- a/en/backup-to-pv-using-br.md +++ b/en/backup-to-pv-using-br.md @@ -33,7 +33,7 @@ This document provides an example about how to back up the data of the `demo1` T ### Step 1: Prepare for an ad-hoc backup -1. Download [backup-rbac.yaml](https://github.com/pingcap/tidb-operator/blob/v1.5.1/manifests/backup/backup-rbac.yaml) to the server that runs the backup task. +1. Download [backup-rbac.yaml](https://github.com/pingcap/tidb-operator/blob/v1.5.2/manifests/backup/backup-rbac.yaml) to the server that runs the backup task. 2. Execute the following command to create the role-based access control (RBAC) resources in the `test1` namespace: diff --git a/en/backup-to-s3.md b/en/backup-to-s3.md index ced0df3ff..13f22fb8d 100644 --- a/en/backup-to-s3.md +++ b/en/backup-to-s3.md @@ -49,12 +49,12 @@ GRANT ### Step 1: Prepare for ad-hoc full backup -1. Execute the following command to create the role-based access control (RBAC) resources in the `tidb-cluster` namespace based on [backup-rbac.yaml](https://raw.githubusercontent.com/pingcap/tidb-operator/v1.5.1/manifests/backup/backup-rbac.yaml): +1. Execute the following command to create the role-based access control (RBAC) resources in the `tidb-cluster` namespace based on [backup-rbac.yaml](https://raw.githubusercontent.com/pingcap/tidb-operator/v1.5.2/manifests/backup/backup-rbac.yaml): {{< copyable "shell-regular" >}} ```shell - kubectl apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/v1.5.1/manifests/backup/backup-rbac.yaml -n tidb-cluster + kubectl apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/v1.5.2/manifests/backup/backup-rbac.yaml -n tidb-cluster ``` 2. Grant permissions to the remote storage. 
diff --git a/en/cheat-sheet.md b/en/cheat-sheet.md index 0aecf7def..1fb1fcc66 100644 --- a/en/cheat-sheet.md +++ b/en/cheat-sheet.md @@ -493,7 +493,7 @@ For example: {{< copyable "shell-regular" >}} ```shell -helm inspect values pingcap/tidb-operator --version=v1.5.1 > values-tidb-operator.yaml +helm inspect values pingcap/tidb-operator --version=v1.5.2 > values-tidb-operator.yaml ``` ### Deploy using Helm chart @@ -509,7 +509,7 @@ For example: {{< copyable "shell-regular" >}} ```shell -helm install tidb-operator pingcap/tidb-operator --namespace=tidb-admin --version=v1.5.1 -f values-tidb-operator.yaml +helm install tidb-operator pingcap/tidb-operator --namespace=tidb-admin --version=v1.5.2 -f values-tidb-operator.yaml ``` ### View the deployed Helm release @@ -533,7 +533,7 @@ For example: {{< copyable "shell-regular" >}} ```shell -helm upgrade tidb-operator pingcap/tidb-operator --version=v1.5.1 -f values-tidb-operator.yaml +helm upgrade tidb-operator pingcap/tidb-operator --version=v1.5.2 -f values-tidb-operator.yaml ``` ### Delete Helm release diff --git a/en/configure-a-tidb-cluster.md b/en/configure-a-tidb-cluster.md index 05905c1c4..a4246ca58 100644 --- a/en/configure-a-tidb-cluster.md +++ b/en/configure-a-tidb-cluster.md @@ -24,7 +24,7 @@ If you are using a NUMA-based CPU, you need to enable `Static`'s CPU management ## Configure TiDB deployment -To configure a TiDB deployment, you need to configure the `TiDBCluster` CR. Refer to the [TidbCluster example](https://github.com/pingcap/tidb-operator/blob/v1.5.1/examples/advanced/tidb-cluster.yaml) for an example. For the complete configurations of `TiDBCluster` CR, refer to [API documentation](https://github.com/pingcap/tidb-operator/blob/v1.5.1/docs/api-references/docs.md). +To configure a TiDB deployment, you need to configure the `TiDBCluster` CR. Refer to the [TidbCluster example](https://github.com/pingcap/tidb-operator/blob/v1.5.2/examples/advanced/tidb-cluster.yaml) for an example. For the complete configurations of `TiDBCluster` CR, refer to [API documentation](https://github.com/pingcap/tidb-operator/blob/v1.5.2/docs/api-references/docs.md). > **Note:** > diff --git a/en/configure-storage-class.md b/en/configure-storage-class.md index 3b3d6671f..aca935625 100644 --- a/en/configure-storage-class.md +++ b/en/configure-storage-class.md @@ -95,7 +95,7 @@ The `/mnt/ssd`, `/mnt/sharedssd`, `/mnt/monitoring`, and `/mnt/backup` directori 1. Download the deployment file for the local-volume-provisioner. ```shell - wget https://raw.githubusercontent.com/pingcap/tidb-operator/v1.5.1/examples/local-pv/local-volume-provisioner.yaml + wget https://raw.githubusercontent.com/pingcap/tidb-operator/v1.5.2/examples/local-pv/local-volume-provisioner.yaml ``` 2. If you are using the same discovery directory as described in [Step 1: Pre-allocate local storage](#step-1-pre-allocate-local-storage), you can skip this step. If you are using a different path for the discovery directory than in the previous step, you need to modify the ConfigMap and DaemonSet spec. @@ -163,7 +163,7 @@ The `/mnt/ssd`, `/mnt/sharedssd`, `/mnt/monitoring`, and `/mnt/backup` directori 3. Deploy the `local-volume-provisioner`. ```shell - kubectl apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/v1.5.1/manifests/local-dind/local-volume-provisioner.yaml + kubectl apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/v1.5.2/manifests/local-dind/local-volume-provisioner.yaml ``` 4. Check the status of the Pod and PV. 
diff --git a/en/deploy-heterogeneous-tidb-cluster.md b/en/deploy-heterogeneous-tidb-cluster.md index daa85f596..c3e458c71 100644 --- a/en/deploy-heterogeneous-tidb-cluster.md +++ b/en/deploy-heterogeneous-tidb-cluster.md @@ -165,7 +165,7 @@ After creating certificates, take the following steps to deploy a TLS-enabled he In the configuration file, `spec.tlsCluster.enabled`controls whether to enable TLS between the components and `spec.tidb.tlsClient.enabled`controls whether to enable TLS for the MySQL client. - - For more configurations of a TLS-enabled heterogeneous cluster, see the ['heterogeneous-tls'](https://github.com/pingcap/tidb-operator/tree/v1.5.1/examples/heterogeneous-tls) example. + - For more configurations of a TLS-enabled heterogeneous cluster, see the ['heterogeneous-tls'](https://github.com/pingcap/tidb-operator/tree/v1.5.2/examples/heterogeneous-tls) example. - For more configurations and field meanings of a TiDB cluster, see the [TiDB cluster configuration document](configure-a-tidb-cluster.md). 2. In the configuration file of your heterogeneous cluster, modify the configurations of each node according to your need. diff --git a/en/deploy-on-alibaba-cloud.md b/en/deploy-on-alibaba-cloud.md index 7dc8d8836..bf86a8430 100644 --- a/en/deploy-on-alibaba-cloud.md +++ b/en/deploy-on-alibaba-cloud.md @@ -89,7 +89,7 @@ All the instances except ACK mandatory workers are deployed across availability tikv_count = 3 tidb_count = 2 pd_count = 3 - operator_version = "v1.5.1" + operator_version = "v1.5.2" ``` * To deploy TiFlash in the cluster, set `create_tiflash_node_pool = true` in `terraform.tfvars`. You can also configure the node count and instance type of the TiFlash node pool by modifying `tiflash_count` and `tiflash_instance_type`. By default, the value of `tiflash_count` is `2`, and the value of `tiflash_instance_type` is `ecs.i2.2xlarge`. @@ -173,7 +173,7 @@ All the instances except ACK mandatory workers are deployed across availability cp manifests/dashboard.yaml.example tidb-dashboard.yaml ``` - To complete the CR file configuration, refer to [TiDB Operator API documentation](https://github.com/pingcap/tidb-operator/blob/v1.5.1/docs/api-references/docs.md) and [Configure a TiDB Cluster](configure-a-tidb-cluster.md). + To complete the CR file configuration, refer to [TiDB Operator API documentation](https://github.com/pingcap/tidb-operator/blob/v1.5.2/docs/api-references/docs.md) and [Configure a TiDB Cluster](configure-a-tidb-cluster.md). * To deploy TiFlash, configure `spec.tiflash` in `db.yaml` as follows: @@ -347,7 +347,7 @@ In the default configuration, the Terraform script creates a new VPC. To use the ### Configure the TiDB cluster -See [TiDB Operator API Documentation](https://github.com/pingcap/tidb-operator/blob/v1.5.1/docs/api-references/docs.md) and [Configure a TiDB Cluster](configure-a-tidb-cluster.md). +See [TiDB Operator API Documentation](https://github.com/pingcap/tidb-operator/blob/v1.5.2/docs/api-references/docs.md) and [Configure a TiDB Cluster](configure-a-tidb-cluster.md). ## Manage multiple TiDB clusters diff --git a/en/deploy-on-aws-eks.md b/en/deploy-on-aws-eks.md index 679cb0a33..8830c3782 100644 --- a/en/deploy-on-aws-eks.md +++ b/en/deploy-on-aws-eks.md @@ -306,7 +306,7 @@ The following `c5d.4xlarge` example shows how to configure StorageClass for the 2. [Mount the local storage](https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner/blob/master/docs/operations.md#use-a-whole-disk-as-a-filesystem-pv) to the `/mnt/ssd` directory. 
- 3. According to the mounting configuration, modify the [local-volume-provisioner.yaml](https://raw.githubusercontent.com/pingcap/tidb-operator/v1.5.1/manifests/eks/local-volume-provisioner.yaml) file. + 3. According to the mounting configuration, modify the [local-volume-provisioner.yaml](https://raw.githubusercontent.com/pingcap/tidb-operator/v1.5.2/manifests/eks/local-volume-provisioner.yaml) file. 4. Deploy and create a `local-storage` storage class using the modified `local-volume-provisioner.yaml` file. @@ -351,9 +351,9 @@ First, download the sample `TidbCluster` and `TidbMonitor` configuration files: {{< copyable "shell-regular" >}} ```shell -curl -O https://raw.githubusercontent.com/pingcap/tidb-operator/v1.5.1/examples/aws/tidb-cluster.yaml && \ -curl -O https://raw.githubusercontent.com/pingcap/tidb-operator/v1.5.1/examples/aws/tidb-monitor.yaml && \ -curl -O https://raw.githubusercontent.com/pingcap/tidb-operator/v1.5.1/examples/aws/tidb-dashboard.yaml +curl -O https://raw.githubusercontent.com/pingcap/tidb-operator/v1.5.2/examples/aws/tidb-cluster.yaml && \ +curl -O https://raw.githubusercontent.com/pingcap/tidb-operator/v1.5.2/examples/aws/tidb-monitor.yaml && \ +curl -O https://raw.githubusercontent.com/pingcap/tidb-operator/v1.5.2/examples/aws/tidb-dashboard.yaml ``` Refer to [configure the TiDB cluster](configure-a-tidb-cluster.md) to further customize and configure the CR before applying. @@ -668,4 +668,4 @@ Depending on the EKS cluster status, use different commands: Finally, execute `kubectl -n tidb-cluster apply -f tidb-cluster.yaml` to update the TiDB cluster configuration. -For detailed CR configuration, refer to [API references](https://github.com/pingcap/tidb-operator/blob/v1.5.1/docs/api-references/docs.md) and [Configure a TiDB Cluster](configure-a-tidb-cluster.md). +For detailed CR configuration, refer to [API references](https://github.com/pingcap/tidb-operator/blob/v1.5.2/docs/api-references/docs.md) and [Configure a TiDB Cluster](configure-a-tidb-cluster.md). diff --git a/en/deploy-on-azure-aks.md b/en/deploy-on-azure-aks.md index 8d2d8071f..7f2c5a976 100644 --- a/en/deploy-on-azure-aks.md +++ b/en/deploy-on-azure-aks.md @@ -237,9 +237,9 @@ First, download the sample `TidbCluster` and `TidbMonitor` configuration files: {{< copyable "shell-regular" >}} ```shell -curl -O https://raw.githubusercontent.com/pingcap/tidb-operator/v1.5.1/examples/aks/tidb-cluster.yaml && \ -curl -O https://raw.githubusercontent.com/pingcap/tidb-operator/v1.5.1/examples/aks/tidb-monitor.yaml && \ -curl -O https://raw.githubusercontent.com/pingcap/tidb-operator/v1.5.1/examples/aks/tidb-dashboard.yaml +curl -O https://raw.githubusercontent.com/pingcap/tidb-operator/v1.5.2/examples/aks/tidb-cluster.yaml && \ +curl -O https://raw.githubusercontent.com/pingcap/tidb-operator/v1.5.2/examples/aks/tidb-monitor.yaml && \ +curl -O https://raw.githubusercontent.com/pingcap/tidb-operator/v1.5.2/examples/aks/tidb-dashboard.yaml ``` Refer to [configure the TiDB cluster](configure-a-tidb-cluster.md) to further customize and configure the CR before applying. @@ -526,7 +526,7 @@ Add a node pool for TiFlash/TiCDC respectively. You can set `--node-count` as re Finally, run the `kubectl -n tidb-cluster apply -f tidb-cluster.yaml` command to update the TiDB cluster configuration. -For detailed CR configuration, refer to [API references](https://github.com/pingcap/tidb-operator/blob/v1.5.1/docs/api-references/docs.md) and [Configure a TiDB Cluster](configure-a-tidb-cluster.md). 
+For detailed CR configuration, refer to [API references](https://github.com/pingcap/tidb-operator/blob/v1.5.2/docs/api-references/docs.md) and [Configure a TiDB Cluster](configure-a-tidb-cluster.md). ## Use other Disk volume types @@ -606,7 +606,7 @@ For instance types that provide local disks, refer to [Lsv2-series](https://docs {{< copyable "shell-regular" >}} ```shell - kubectl apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/v1.5.1/manifests/eks/local-volume-provisioner.yaml + kubectl apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/v1.5.2/manifests/eks/local-volume-provisioner.yaml ``` 3. Use local storage. diff --git a/en/deploy-on-gcp-gke.md b/en/deploy-on-gcp-gke.md index cbef498cd..a8c7b3233 100644 --- a/en/deploy-on-gcp-gke.md +++ b/en/deploy-on-gcp-gke.md @@ -135,7 +135,7 @@ If you need to simulate bare-metal performance, some Google Cloud instance types {{< copyable "shell-regular" >}} ```shell - kubectl apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/v1.5.1/manifests/gke/local-ssd-provision/local-ssd-provision.yaml + kubectl apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/v1.5.2/manifests/gke/local-ssd-provision/local-ssd-provision.yaml ``` 3. Use the local storage. @@ -173,9 +173,9 @@ First, download the sample `TidbCluster` and `TidbMonitor` configuration files: {{< copyable "shell-regular" >}} ```shell -curl -O https://raw.githubusercontent.com/pingcap/tidb-operator/v1.5.1/examples/gcp/tidb-cluster.yaml && \ -curl -O https://raw.githubusercontent.com/pingcap/tidb-operator/v1.5.1/examples/gcp/tidb-monitor.yaml && \ -curl -O https://raw.githubusercontent.com/pingcap/tidb-operator/v1.5.1/examples/gcp/tidb-dashboard.yaml +curl -O https://raw.githubusercontent.com/pingcap/tidb-operator/v1.5.2/examples/gcp/tidb-cluster.yaml && \ +curl -O https://raw.githubusercontent.com/pingcap/tidb-operator/v1.5.2/examples/gcp/tidb-monitor.yaml && \ +curl -O https://raw.githubusercontent.com/pingcap/tidb-operator/v1.5.2/examples/gcp/tidb-dashboard.yaml ``` Refer to [configure the TiDB cluster](configure-a-tidb-cluster.md) to further customize and configure the CR before applying. @@ -467,4 +467,4 @@ The two components are *not required* in the deployment. This section shows a qu Finally, execute `kubectl -n tidb-cluster apply -f tidb-cluster.yaml` to update the TiDB cluster configuration. -For detailed CR configuration, refer to [API references](https://github.com/pingcap/tidb-operator/blob/v1.5.1/docs/api-references/docs.md) and [Configure a TiDB Cluster](configure-a-tidb-cluster.md). +For detailed CR configuration, refer to [API references](https://github.com/pingcap/tidb-operator/blob/v1.5.2/docs/api-references/docs.md) and [Configure a TiDB Cluster](configure-a-tidb-cluster.md). diff --git a/en/deploy-tidb-cluster-across-multiple-kubernetes.md b/en/deploy-tidb-cluster-across-multiple-kubernetes.md index 5f467c199..6c5b6a01d 100644 --- a/en/deploy-tidb-cluster-across-multiple-kubernetes.md +++ b/en/deploy-tidb-cluster-across-multiple-kubernetes.md @@ -642,7 +642,7 @@ If each Kubernetes have different Cluster Domain, you need to update the `spec.c After completing the above steps, this TidbCluster can be used as the initial TidbCluster for TiDB cluster deployment across Kubernetes clusters. You can refer the [section](#step-2-deploy-the-new-tidbcluster-to-join-the-tidb-cluster) to deploy other TidbCluster. 
-For more examples and development information, refer to [`multi-cluster`](https://github.com/pingcap/tidb-operator/tree/v1.5.1/examples/multi-cluster). +For more examples and development information, refer to [`multi-cluster`](https://github.com/pingcap/tidb-operator/tree/v1.5.2/examples/multi-cluster). ## Deploy TiDB monitoring components diff --git a/en/deploy-tidb-dm.md b/en/deploy-tidb-dm.md index 5454c0679..5a82109f3 100644 --- a/en/deploy-tidb-dm.md +++ b/en/deploy-tidb-dm.md @@ -17,7 +17,7 @@ summary: Learn how to deploy TiDB DM cluster on Kubernetes. ## Configure DM deployment -To configure the DM deployment, you need to configure the `DMCluster` Custom Resource (CR). For the complete configurations of the `DMCluster` CR, refer to the [DMCluster example](https://github.com/pingcap/tidb-operator/blob/v1.5.1/examples/dm/dm-cluster.yaml) and [API documentation](https://github.com/pingcap/tidb-operator/blob/v1.5.1/docs/api-references/docs.md#dmcluster). Note that you need to choose the example and API of the current TiDB Operator version. +To configure the DM deployment, you need to configure the `DMCluster` Custom Resource (CR). For the complete configurations of the `DMCluster` CR, refer to the [DMCluster example](https://github.com/pingcap/tidb-operator/blob/v1.5.2/examples/dm/dm-cluster.yaml) and [API documentation](https://github.com/pingcap/tidb-operator/blob/v1.5.2/docs/api-references/docs.md#dmcluster). Note that you need to choose the example and API of the current TiDB Operator version. ### Cluster name diff --git a/en/deploy-tidb-from-kubernetes-gke.md b/en/deploy-tidb-from-kubernetes-gke.md index e000720da..5c0237e34 100644 --- a/en/deploy-tidb-from-kubernetes-gke.md +++ b/en/deploy-tidb-from-kubernetes-gke.md @@ -97,7 +97,7 @@ If you see `Ready` for all nodes, congratulations. You've set up your first Kube TiDB Operator uses [Custom Resource Definition (CRD)](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/#customresourcedefinitions) to extend Kubernetes. Therefore, to use TiDB Operator, you must first create the `TidbCluster` CRD. ```shell -kubectl create -f https://raw.githubusercontent.com/pingcap/tidb-operator/v1.5.1/manifests/crd.yaml && \ +kubectl create -f https://raw.githubusercontent.com/pingcap/tidb-operator/v1.5.2/manifests/crd.yaml && \ kubectl get crd tidbclusters.pingcap.com ``` @@ -109,7 +109,7 @@ After the `TidbCluster` CRD is created, install TiDB Operator in your Kubernetes ```shell kubectl create namespace tidb-admin -helm install --namespace tidb-admin tidb-operator pingcap/tidb-operator --version v1.5.1 +helm install --namespace tidb-admin tidb-operator pingcap/tidb-operator --version v1.5.2 kubectl get po -n tidb-admin -l app.kubernetes.io/name=tidb-operator ``` @@ -126,13 +126,13 @@ To deploy the TiDB cluster, perform the following steps: 2. Deploy the TiDB cluster: ``` shell - kubectl apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/v1.5.1/examples/basic/tidb-cluster.yaml -n demo + kubectl apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/v1.5.2/examples/basic/tidb-cluster.yaml -n demo ``` 3. Deploy the TiDB cluster monitor: ``` shell - kubectl apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/v1.5.1/examples/basic/tidb-monitor.yaml -n demo + kubectl apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/v1.5.2/examples/basic/tidb-monitor.yaml -n demo ``` 4. 
View the Pod status: diff --git a/en/deploy-tidb-monitor-across-multiple-kubernetes.md b/en/deploy-tidb-monitor-across-multiple-kubernetes.md index 4f40b1c24..83ccd65ce 100644 --- a/en/deploy-tidb-monitor-across-multiple-kubernetes.md +++ b/en/deploy-tidb-monitor-across-multiple-kubernetes.md @@ -24,7 +24,7 @@ The multiple Kubernetes clusters must meet the following condition: - The Prometheus (`TidbMonitor`) component in each Kubernetes cluster has access to the Thanos Receiver component. -For the deployment instructions of Thanos Receiver, refer to [kube-thanos](https://github.com/thanos-io/kube-thanos) and [the example](https://github.com/pingcap/tidb-operator/tree/v1.5.1/examples/monitor-prom-remotewrite). +For the deployment instructions of Thanos Receiver, refer to [kube-thanos](https://github.com/thanos-io/kube-thanos) and [the example](https://github.com/pingcap/tidb-operator/tree/v1.5.2/examples/monitor-prom-remotewrite). ### Architecture @@ -111,7 +111,7 @@ You need to configure the network and DNS of the Kubernetes clusters so that the - The Thanos Query component has access to the Pod IP of the Prometheus (`TidbMonitor`) component in each Kubernetes cluster. - The Thanos Query component has access to the Pod FQDN of the Prometheus (`TidbMonitor`) component in each Kubernetes cluster. -For the deployment instructions of Thanos Query, refer to [kube-thanos](https://github.com/thanos-io/kube-thanos) and [the example](https://github.com/pingcap/tidb-operator/tree/v1.5.1/examples/monitor-with-thanos). +For the deployment instructions of Thanos Query, refer to [kube-thanos](https://github.com/thanos-io/kube-thanos) and [the example](https://github.com/pingcap/tidb-operator/tree/v1.5.2/examples/monitor-with-thanos). #### Architecture diff --git a/en/deploy-tidb-operator.md b/en/deploy-tidb-operator.md index 3dc081062..f4d467f2b 100644 --- a/en/deploy-tidb-operator.md +++ b/en/deploy-tidb-operator.md @@ -45,7 +45,7 @@ TiDB Operator uses [Custom Resource Definition (CRD)](https://kubernetes.io/docs {{< copyable "shell-regular" >}} ```shell -kubectl create -f https://raw.githubusercontent.com/pingcap/tidb-operator/v1.5.1/manifests/crd.yaml +kubectl create -f https://raw.githubusercontent.com/pingcap/tidb-operator/v1.5.2/manifests/crd.yaml ``` If the server cannot access the Internet, you need to download the `crd.yaml` file on a machine with Internet access before installing: @@ -53,7 +53,7 @@ If the server cannot access the Internet, you need to download the `crd.yaml` fi {{< copyable "shell-regular" >}} ```shell -wget https://raw.githubusercontent.com/pingcap/tidb-operator/v1.5.1/manifests/crd.yaml +wget https://raw.githubusercontent.com/pingcap/tidb-operator/v1.5.2/manifests/crd.yaml kubectl create -f ./crd.yaml ``` @@ -101,7 +101,7 @@ When you use TiDB Operator, `tidb-scheduler` is not mandatory. Refer to [tidb-sc > **Note:** > - > `${chart_version}` represents the chart version of TiDB Operator. For example, `v1.5.1`. You can view the currently supported versions by running the `helm search repo -l tidb-operator` command. + > `${chart_version}` represents the chart version of TiDB Operator. For example, `v1.5.2`. You can view the currently supported versions by running the `helm search repo -l tidb-operator` command. 2. 
Configure TiDB Operator @@ -149,15 +149,15 @@ If your server cannot access the Internet, install TiDB Operator offline by the {{< copyable "shell-regular" >}} ```shell - wget http://charts.pingcap.org/tidb-operator-v1.5.1.tgz + wget http://charts.pingcap.org/tidb-operator-v1.5.2.tgz ``` - Copy the `tidb-operator-v1.5.1.tgz` file to the target server and extract it to the current directory: + Copy the `tidb-operator-v1.5.2.tgz` file to the target server and extract it to the current directory: {{< copyable "shell-regular" >}} ```shell - tar zxvf tidb-operator.v1.5.1.tgz + tar zxvf tidb-operator-v1.5.2.tgz ``` 2. Download the Docker images used by TiDB Operator {{< copyable "" >}} ```shell - pingcap/tidb-operator:v1.5.1 - pingcap/tidb-backup-manager:v1.5.1 + pingcap/tidb-operator:v1.5.2 + pingcap/tidb-backup-manager:v1.5.2 bitnami/kubectl:latest pingcap/advanced-statefulset:v0.3.3 k8s.gcr.io/kube-scheduler:v1.16.9 {{< copyable "shell-regular" >}} ```shell - docker pull pingcap/tidb-operator:v1.5.1 - docker pull pingcap/tidb-backup-manager:v1.5.1 + docker pull pingcap/tidb-operator:v1.5.2 + docker pull pingcap/tidb-backup-manager:v1.5.2 docker pull bitnami/kubectl:latest docker pull pingcap/advanced-statefulset:v0.3.3 - docker save -o tidb-operator-v1.5.1.tar pingcap/tidb-operator:v1.5.1 - docker save -o tidb-backup-manager-v1.5.1.tar pingcap/tidb-backup-manager:v1.5.1 + docker save -o tidb-operator-v1.5.2.tar pingcap/tidb-operator:v1.5.2 + docker save -o tidb-backup-manager-v1.5.2.tar pingcap/tidb-backup-manager:v1.5.2 docker save -o bitnami-kubectl.tar bitnami/kubectl:latest docker save -o advanced-statefulset-v0.3.3.tar pingcap/advanced-statefulset:v0.3.3 ``` @@ -199,8 +199,8 @@ If your server cannot access the Internet, install TiDB Operator offline by the {{< copyable "shell-regular" >}} ```shell - docker load -i tidb-operator-v1.5.1.tar - docker load -i tidb-backup-manager-v1.5.1.tar + docker load -i tidb-operator-v1.5.2.tar + docker load -i tidb-backup-manager-v1.5.2.tar docker load -i bitnami-kubectl.tar docker load -i advanced-statefulset-v0.3.3.tar ``` diff --git a/en/enable-monitor-dynamic-configuration.md b/en/enable-monitor-dynamic-configuration.md index f1415430e..fca64fc54 100644 --- a/en/enable-monitor-dynamic-configuration.md +++ b/en/enable-monitor-dynamic-configuration.md @@ -38,7 +38,7 @@ spec: After you modify the `prometheusReloader` configuration, TidbMonitor restarts automatically. After the restart, the dynamic configuration feature is enabled. All configuration changes related to Prometheus are dynamically updated. -For more examples, refer to [monitor-dynamic-configmap](https://github.com/pingcap/tidb-operator/tree/v1.5.1/examples/monitor-dynamic-configmap). +For more examples, refer to [monitor-dynamic-configmap](https://github.com/pingcap/tidb-operator/tree/v1.5.2/examples/monitor-dynamic-configmap). ## Disable the dynamic configuration feature diff --git a/en/enable-monitor-shards.md b/en/enable-monitor-shards.md index c4231781b..e64c8a8ad 100644 --- a/en/enable-monitor-shards.md +++ b/en/enable-monitor-shards.md @@ -49,4 +49,4 @@ spec: > - The number of Pods corresponding to TidbMonitor is the product of `replicas` and `shards`. For example, when `replicas` is `1` and `shards` is `2`, TiDB Operator creates 2 TidbMonitor Pods. 
> - After `shards` is changed, `Targets` are reallocated. However, the monitoring data already stored on the Pods is not reallocated. -For details on the configuration, refer to [shards example](https://github.com/pingcap/tidb-operator/tree/v1.5.1/examples/monitor-shards). +For details on the configuration, refer to [shards example](https://github.com/pingcap/tidb-operator/tree/v1.5.2/examples/monitor-shards). diff --git a/en/get-started.md b/en/get-started.md index 030776032..ef61c33d6 100644 --- a/en/get-started.md +++ b/en/get-started.md @@ -175,7 +175,7 @@ First, you need to install the Custom Resource Definitions (CRDs) that are requi To install the CRDs, run the following command: ```shell -kubectl create -f https://raw.githubusercontent.com/pingcap/tidb-operator/v1.5.1/manifests/crd.yaml +kubectl create -f https://raw.githubusercontent.com/pingcap/tidb-operator/v1.5.2/manifests/crd.yaml ```
@@ -234,7 +234,7 @@ To install TiDB Operator, you can use [Helm 3](https://helm.sh/docs/intro/instal 3. Install TiDB Operator: ```shell - helm install --namespace tidb-admin tidb-operator pingcap/tidb-operator --version v1.5.1 + helm install --namespace tidb-admin tidb-operator pingcap/tidb-operator --version v1.5.2 ```
@@ -282,7 +282,7 @@ This section describes how to deploy a TiDB cluster and its monitoring services. ```shell kubectl create namespace tidb-cluster && \ - kubectl -n tidb-cluster apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/v1.5.1/examples/basic/tidb-cluster.yaml + kubectl -n tidb-cluster apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/v1.5.2/examples/basic/tidb-cluster.yaml ```
@@ -300,7 +300,7 @@ If you need to deploy a TiDB cluster on an ARM64 machine, refer to [Deploying a ### Deploy TiDB Dashboard independently ```shell -kubectl -n tidb-cluster apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/v1.5.1/examples/basic/tidb-dashboard.yaml +kubectl -n tidb-cluster apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/v1.5.2/examples/basic/tidb-dashboard.yaml ```
@@ -315,7 +315,7 @@ tidbdashboard.pingcap.com/basic created ### Deploy TiDB monitoring services ```shell -kubectl -n tidb-cluster apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/v1.5.1/examples/basic/tidb-monitor.yaml +kubectl -n tidb-cluster apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/v1.5.2/examples/basic/tidb-monitor.yaml ```
diff --git a/en/initialize-a-cluster.md b/en/initialize-a-cluster.md index d9f5a01be..a5734e215 100644 --- a/en/initialize-a-cluster.md +++ b/en/initialize-a-cluster.md @@ -15,7 +15,7 @@ This document describes how to initialize a TiDB cluster on Kubernetes (K8s), sp ## Configure TidbInitializer -Refer to [TidbInitializer configuration example](https://github.com/pingcap/tidb-operator/blob/v1.5.1/manifests/initializer/tidb-initializer.yaml), [API documentation](https://github.com/pingcap/tidb-operator/blob/v1.5.1/docs/api-references/docs.md), and the following steps to complete TidbInitializer Custom Resource (CR), and save it to the `${cluster_name}/tidb-initializer.yaml` file. When referring to the TidbInitializer configuration example and API documentation, you need to switch the branch to the TiDB Operator version currently in use. +Refer to [TidbInitializer configuration example](https://github.com/pingcap/tidb-operator/blob/v1.5.2/manifests/initializer/tidb-initializer.yaml), [API documentation](https://github.com/pingcap/tidb-operator/blob/v1.5.2/docs/api-references/docs.md), and the following steps to complete TidbInitializer Custom Resource (CR), and save it to the `${cluster_name}/tidb-initializer.yaml` file. When referring to the TidbInitializer configuration example and API documentation, you need to switch the branch to the TiDB Operator version currently in use. ### Set the cluster namespace and name diff --git a/en/monitor-a-tidb-cluster.md b/en/monitor-a-tidb-cluster.md index 54c42bcbb..1ef14c89d 100644 --- a/en/monitor-a-tidb-cluster.md +++ b/en/monitor-a-tidb-cluster.md @@ -84,13 +84,13 @@ You can customize the Prometheus configuration by using a customized configurati 2. Set `spec.prometheus.config.configMapRef.name` and `spec.prometheus.config.configMapRef.namespace` to the name and namespace of the customized ConfigMap respectively. 3. Check if TidbMonitor has enabled [dynamic configuration](enable-monitor-dynamic-configuration.md). If not, you need to restart TidbMonitor's pod to reload the configuration. -For the complete configuration, refer to the [tidb-operator example](https://github.com/pingcap/tidb-operator/blob/v1.5.1/examples/monitor-with-externalConfigMap/prometheus/README.md). +For the complete configuration, refer to the [tidb-operator example](https://github.com/pingcap/tidb-operator/blob/v1.5.2/examples/monitor-with-externalConfigMap/prometheus/README.md). #### Add extra options to the command To add extra options to the command that starts Prometheus, configure `spec.prometheus.config.commandOptions`. -For the complete configuration, refer to the [tidb-operator example](https://github.com/pingcap/tidb-operator/blob/v1.5.1/examples/monitor-with-externalConfigMap/prometheus/README.md). +For the complete configuration, refer to the [tidb-operator example](https://github.com/pingcap/tidb-operator/blob/v1.5.2/examples/monitor-with-externalConfigMap/prometheus/README.md). > **Note:** > @@ -367,7 +367,7 @@ spec: imagePullPolicy: IfNotPresent ``` -For a complete configuration example, refer to [Example](https://github.com/pingcap/tidb-operator/tree/v1.5.1/examples/monitor-multiple-cluster-non-tls) in the TiDB Operator repository. +For a complete configuration example, refer to [Example](https://github.com/pingcap/tidb-operator/tree/v1.5.2/examples/monitor-multiple-cluster-non-tls) in the TiDB Operator repository. 
### Monitor multiple clusters using Grafana diff --git a/en/restore-from-aws-s3-by-snapshot.md b/en/restore-from-aws-s3-by-snapshot.md index 21da5fbb9..82a76df82 100644 --- a/en/restore-from-aws-s3-by-snapshot.md +++ b/en/restore-from-aws-s3-by-snapshot.md @@ -36,7 +36,7 @@ The restore method described in this document is implemented based on CustomReso Before using TiDB Operator to restore backup metadata and EBS snapshots from S3 storage to TiDB, prepare the restore environment by following the steps below: -1. Download the file [backup-rbac.yaml](https://github.com/pingcap/tidb-operator/blob/v1.5.1/manifests/backup/backup-rbac.yaml). +1. Download the file [backup-rbac.yaml](https://github.com/pingcap/tidb-operator/blob/v1.5.2/manifests/backup/backup-rbac.yaml). 2. Create the RBAC-related resources required for the restore in the `test2` namespace by running the following command: diff --git a/en/restore-from-aws-s3-using-br.md b/en/restore-from-aws-s3-using-br.md index fd9c261a1..27ce6172a 100644 --- a/en/restore-from-aws-s3-using-br.md +++ b/en/restore-from-aws-s3-using-br.md @@ -39,7 +39,7 @@ Before restoring backup data on a S3-compatible storage to TiDB using BR, take t kubectl create namespace restore-test ``` -2. Download [backup-rbac.yaml](https://github.com/pingcap/tidb-operator/blob/v1.5.1/manifests/backup/backup-rbac.yaml), and execute the following command to create the role-based access control (RBAC) resources in the `restore-test` namespace: +2. Download [backup-rbac.yaml](https://github.com/pingcap/tidb-operator/blob/v1.5.2/manifests/backup/backup-rbac.yaml), and execute the following command to create the role-based access control (RBAC) resources in the `restore-test` namespace: {{< copyable "shell-regular" >}} @@ -248,7 +248,7 @@ Before restoring backup data on S3-compatible storages to TiDB using BR, take th kubectl create namespace restore-test ``` -2. Download [backup-rbac.yaml](https://github.com/pingcap/tidb-operator/blob/v1.5.1/manifests/backup/backup-rbac.yaml), and execute the following command to create the role-based access control (RBAC) resources in the `restore-test` namespace: +2. Download [backup-rbac.yaml](https://github.com/pingcap/tidb-operator/blob/v1.5.2/manifests/backup/backup-rbac.yaml), and execute the following command to create the role-based access control (RBAC) resources in the `restore-test` namespace: ```shell kubectl apply -f backup-rbac.yaml -n restore-test diff --git a/en/restore-from-azblob-using-br.md b/en/restore-from-azblob-using-br.md index 26d80bfae..62ceafc54 100644 --- a/en/restore-from-azblob-using-br.md +++ b/en/restore-from-azblob-using-br.md @@ -38,7 +38,7 @@ Before restoring backup data on Azure Blob Storage to TiDB using BR, take the fo kubectl create namespace restore-test ``` -2. Download [backup-rbac.yaml](https://github.com/pingcap/tidb-operator/blob/v1.5.1/manifests/backup/backup-rbac.yaml), and execute the following command to create the role-based access control (RBAC) resources in the `restore-test` namespace: +2. Download [backup-rbac.yaml](https://github.com/pingcap/tidb-operator/blob/v1.5.2/manifests/backup/backup-rbac.yaml), and execute the following command to create the role-based access control (RBAC) resources in the `restore-test` namespace: ```shell kubectl apply -f backup-rbac.yaml -n restore-test @@ -155,7 +155,7 @@ Before restoring backup data on Azure Blob Storage to TiDB using BR, take the fo kubectl create namespace restore-test ``` -2. 
Download [backup-rbac.yaml](https://github.com/pingcap/tidb-operator/blob/v1.5.1/manifests/backup/backup-rbac.yaml), and execute the following command to create the role-based access control (RBAC) resources in the `restore-test` namespace: +2. Download [backup-rbac.yaml](https://github.com/pingcap/tidb-operator/blob/v1.5.2/manifests/backup/backup-rbac.yaml), and execute the following command to create the role-based access control (RBAC) resources in the `restore-test` namespace: ```shell kubectl apply -f backup-rbac.yaml -n restore-test diff --git a/en/restore-from-gcs-using-br.md b/en/restore-from-gcs-using-br.md index 0cb96f17e..9f411bfee 100644 --- a/en/restore-from-gcs-using-br.md +++ b/en/restore-from-gcs-using-br.md @@ -40,7 +40,7 @@ Before restoring backup data on GCS to TiDB using BR, take the following steps t kubectl create namespace restore-test ``` -2. Download [backup-rbac.yaml](https://github.com/pingcap/tidb-operator/blob/v1.5.1/manifests/backup/backup-rbac.yaml), and execute the following command to create the role-based access control (RBAC) resources in the `restore-test` namespace: +2. Download [backup-rbac.yaml](https://github.com/pingcap/tidb-operator/blob/v1.5.2/manifests/backup/backup-rbac.yaml), and execute the following command to create the role-based access control (RBAC) resources in the `restore-test` namespace: ```shell kubectl apply -f backup-rbac.yaml -n restore-test @@ -160,7 +160,7 @@ Before restoring backup data on GCS to TiDB using BR, take the following steps t kubectl create namespace restore-test ``` -2. Download [backup-rbac.yaml](https://github.com/pingcap/tidb-operator/blob/v1.5.1/manifests/backup/backup-rbac.yaml), and execute the following command to create the role-based access control (RBAC) resources in the `restore-test` namespace: +2. Download [backup-rbac.yaml](https://github.com/pingcap/tidb-operator/blob/v1.5.2/manifests/backup/backup-rbac.yaml), and execute the following command to create the role-based access control (RBAC) resources in the `restore-test` namespace: ```shell kubectl apply -f backup-rbac.yaml -n restore-test diff --git a/en/restore-from-gcs.md b/en/restore-from-gcs.md index 6f25aed05..5d6416872 100644 --- a/en/restore-from-gcs.md +++ b/en/restore-from-gcs.md @@ -28,7 +28,7 @@ Before you perform the data restore, you need to prepare the restore environment ### Prepare the restore environment -1. Download [`backup-rbac.yaml`](https://github.com/pingcap/tidb-operator/blob/v1.5.1/manifests/backup/backup-rbac.yaml) and execute the following command to create the role-based access control (RBAC) resources in the `test2` namespace: +1. Download [`backup-rbac.yaml`](https://github.com/pingcap/tidb-operator/blob/v1.5.2/manifests/backup/backup-rbac.yaml) and execute the following command to create the role-based access control (RBAC) resources in the `test2` namespace: {{< copyable "shell-regular" >}} diff --git a/en/restore-from-pv-using-br.md b/en/restore-from-pv-using-br.md index ecf089868..75d773365 100644 --- a/en/restore-from-pv-using-br.md +++ b/en/restore-from-pv-using-br.md @@ -22,7 +22,7 @@ After backing up TiDB cluster data to PVs using BR, if you need to recover the b Before restoring backup data on PVs to TiDB using BR, take the following steps to prepare the restore environment: -1. Download [`backup-rbac.yaml`](https://github.com/pingcap/tidb-operator/blob/v1.5.1/manifests/backup/backup-rbac.yaml). +1. 
Download [`backup-rbac.yaml`](https://github.com/pingcap/tidb-operator/blob/v1.5.2/manifests/backup/backup-rbac.yaml). 2. Execute the following command to create the role-based access control (RBAC) resources in the `test2` namespace: diff --git a/en/restore-from-s3.md b/en/restore-from-s3.md index d2a99ee59..d9e27ac09 100644 --- a/en/restore-from-s3.md +++ b/en/restore-from-s3.md @@ -28,7 +28,7 @@ Before you perform the data restore, you need to prepare the restore environment ### Prepare the restore environment -1. Download [`backup-rbac.yaml`](https://github.com/pingcap/tidb-operator/blob/v1.5.1/manifests/backup/backup-rbac.yaml) and execute the following command to create the role-based access control (RBAC) resources in the `test2` namespace: +1. Download [`backup-rbac.yaml`](https://github.com/pingcap/tidb-operator/blob/v1.5.2/manifests/backup/backup-rbac.yaml) and execute the following command to create the role-based access control (RBAC) resources in the `test2` namespace: {{< copyable "shell-regular" >}} diff --git a/en/tidb-toolkit.md b/en/tidb-toolkit.md index c37d649db..03b29c005 100644 --- a/en/tidb-toolkit.md +++ b/en/tidb-toolkit.md @@ -200,11 +200,11 @@ helm search repo pingcap ``` NAME CHART VERSION APP VERSION DESCRIPTION -pingcap/tidb-backup v1.5.1 A Helm chart for TiDB Backup or Restore -pingcap/tidb-cluster v1.5.1 A Helm chart for TiDB Cluster -pingcap/tidb-drainer v1.5.1 A Helm chart for TiDB Binlog drainer. -pingcap/tidb-lightning v1.5.1 A Helm chart for TiDB Lightning -pingcap/tidb-operator v1.5.1 v1.5.1 tidb-operator Helm chart for Kubernetes +pingcap/tidb-backup v1.5.2 A Helm chart for TiDB Backup or Restore +pingcap/tidb-cluster v1.5.2 A Helm chart for TiDB Cluster +pingcap/tidb-drainer v1.5.2 A Helm chart for TiDB Binlog drainer. +pingcap/tidb-lightning v1.5.2 A Helm chart for TiDB Lightning +pingcap/tidb-operator v1.5.2 v1.5.2 tidb-operator Helm chart for Kubernetes ``` When a new version of chart has been released, you can use `helm repo update` to update the repository cached locally: @@ -266,9 +266,9 @@ Use the following command to download the chart file required for cluster instal {{< copyable "shell-regular" >}} ```shell -wget http://charts.pingcap.org/tidb-operator-v1.5.1.tgz -wget http://charts.pingcap.org/tidb-drainer-v1.5.1.tgz -wget http://charts.pingcap.org/tidb-lightning-v1.5.1.tgz +wget http://charts.pingcap.org/tidb-operator-v1.5.2.tgz +wget http://charts.pingcap.org/tidb-drainer-v1.5.2.tgz +wget http://charts.pingcap.org/tidb-lightning-v1.5.2.tgz ``` Copy these chart files to the server and decompress them. You can use these charts to install the corresponding components by running the `helm install` command. Take `tidb-operator` as an example: {{< copyable "shell-regular" >}} ```shell -tar zxvf tidb-operator.v1.5.1.tgz +tar zxvf tidb-operator-v1.5.2.tgz helm install ${release_name} ./tidb-operator --namespace=${namespace} ``` diff --git a/en/upgrade-tidb-operator.md b/en/upgrade-tidb-operator.md index 7f021489d..c0f163c7c 100644 --- a/en/upgrade-tidb-operator.md +++ b/en/upgrade-tidb-operator.md @@ -60,27 +60,27 @@ If your server has access to the internet, you can perform online upgrade by tak kubectl get crd tidbclusters.pingcap.com ``` - This document takes TiDB v1.5.1 as an example. You can replace `${operator_version}` with the specific version you want to upgrade to. + This document takes TiDB v1.5.2 as an example. 
You can replace `${operator_version}` with the specific version you want to upgrade to. 3. Get the `values.yaml` file of the `tidb-operator` chart: {{< copyable "shell-regular" >}} ```bash - mkdir -p ${HOME}/tidb-operator/v1.5.1 && \ - helm inspect values pingcap/tidb-operator --version=v1.5.1 > ${HOME}/tidb-operator/v1.5.1/values-tidb-operator.yaml + mkdir -p ${HOME}/tidb-operator/v1.5.2 && \ + helm inspect values pingcap/tidb-operator --version=v1.5.2 > ${HOME}/tidb-operator/v1.5.2/values-tidb-operator.yaml ``` -4. In the `${HOME}/tidb-operator/v1.5.1/values-tidb-operator.yaml` file, modify the `operatorImage` version to the new TiDB Operator version. +4. In the `${HOME}/tidb-operator/v1.5.2/values-tidb-operator.yaml` file, modify the `operatorImage` version to the new TiDB Operator version. -5. If you have added customized configuration in the old `values.yaml` file, merge your customized configuration to the `${HOME}/tidb-operator/v1.5.1/values-tidb-operator.yaml` file. +5. If you have added customized configuration in the old `values.yaml` file, merge your customized configuration to the `${HOME}/tidb-operator/v1.5.2/values-tidb-operator.yaml` file. 6. Perform upgrade: {{< copyable "shell-regular" >}} ```bash - helm upgrade tidb-operator pingcap/tidb-operator --version=v1.5.1 -f ${HOME}/tidb-operator/v1.5.1/values-tidb-operator.yaml -n tidb-admin + helm upgrade tidb-operator pingcap/tidb-operator --version=v1.5.2 -f ${HOME}/tidb-operator/v1.5.2/values-tidb-operator.yaml -n tidb-admin ``` 7. After all the Pods start normally, check the image of TiDB Operator: @@ -91,13 +91,13 @@ If your server has access to the internet, you can perform online upgrade by tak kubectl get po -n tidb-admin -l app.kubernetes.io/instance=tidb-operator -o yaml | grep 'image:.*operator:' ``` - If you see a similar output as follows, TiDB Operator is successfully upgraded. `v1.5.1` represents the TiDB Operator version you have upgraded to. + If you see a similar output as follows, TiDB Operator is successfully upgraded. `v1.5.2` represents the TiDB Operator version you have upgraded to. ``` - image: pingcap/tidb-operator:v1.5.1 - image: docker.io/pingcap/tidb-operator:v1.5.1 - image: pingcap/tidb-operator:v1.5.1 - image: docker.io/pingcap/tidb-operator:v1.5.1 + image: pingcap/tidb-operator:v1.5.2 + image: docker.io/pingcap/tidb-operator:v1.5.2 + image: pingcap/tidb-operator:v1.5.2 + image: docker.io/pingcap/tidb-operator:v1.5.2 ``` ## Offline upgrade @@ -124,14 +124,14 @@ If your server cannot access the Internet, you can offline upgrade by taking the wget -O crd.yaml https://raw.githubusercontent.com/pingcap/tidb-operator/${operator_version}/manifests/crd_v1beta1.yaml ``` - This document takes TiDB v1.5.1 as an example. You can replace `${operator_version}` with the specific version you want to upgrade to. + This document takes TiDB v1.5.2 as an example. You can replace `${operator_version}` with the specific version you want to upgrade to. 2. Download the `tidb-operator` chart package file. {{< copyable "shell-regular" >}} ```bash - wget http://charts.pingcap.org/tidb-operator-v1.5.1.tgz + wget http://charts.pingcap.org/tidb-operator-v1.5.2.tgz ``` 3. 
Download the Docker images required for the new TiDB Operator version: @@ -139,11 +139,11 @@ If your server cannot access the Internet, you can offline upgrade by taking the {{< copyable "shell-regular" >}} ```bash - docker pull pingcap/tidb-operator:v1.5.1 - docker pull pingcap/tidb-backup-manager:v1.5.1 + docker pull pingcap/tidb-operator:v1.5.2 + docker pull pingcap/tidb-backup-manager:v1.5.2 - docker save -o tidb-operator-v1.5.1.tar pingcap/tidb-operator:v1.5.1 - docker save -o tidb-backup-manager-v1.5.1.tar pingcap/tidb-backup-manager:v1.5.1 + docker save -o tidb-operator-v1.5.2.tar pingcap/tidb-operator:v1.5.2 + docker save -o tidb-backup-manager-v1.5.2.tar pingcap/tidb-backup-manager:v1.5.2 ``` 2. Upload the downloaded files and images to the server where TiDB Operator is deployed, and install the new TiDB Operator version: @@ -171,9 +171,9 @@ If your server cannot access the Internet, you can offline upgrade by taking the {{< copyable "shell-regular" >}} ```bash - tar zxvf tidb-operator-v1.5.1.tgz && \ - mkdir -p ${HOME}/tidb-operator/v1.5.1 && \ - cp tidb-operator/values.yaml ${HOME}/tidb-operator/v1.5.1/values-tidb-operator.yaml + tar zxvf tidb-operator-v1.5.2.tgz && \ + mkdir -p ${HOME}/tidb-operator/v1.5.2 && \ + cp tidb-operator/values.yaml ${HOME}/tidb-operator/v1.5.2/values-tidb-operator.yaml ``` 4. Install the Docker images on the server: @@ -181,20 +181,20 @@ If your server cannot access the Internet, you can offline upgrade by taking the {{< copyable "shell-regular" >}} ```bash - docker load -i tidb-operator-v1.5.1.tar && \ - docker load -i tidb-backup-manager-v1.5.1.tar + docker load -i tidb-operator-v1.5.2.tar && \ + docker load -i tidb-backup-manager-v1.5.2.tar ``` -3. In the `${HOME}/tidb-operator/v1.5.1/values-tidb-operator.yaml` file, modify the `operatorImage` version to the new TiDB Operator version. +3. In the `${HOME}/tidb-operator/v1.5.2/values-tidb-operator.yaml` file, modify the `operatorImage` version to the new TiDB Operator version. -4. If you have added customized configuration in the old `values.yaml` file, merge your customized configuration to the `${HOME}/tidb-operator/v1.5.1/values-tidb-operator.yaml` file. +4. If you have added customized configuration in the old `values.yaml` file, merge your customized configuration to the `${HOME}/tidb-operator/v1.5.2/values-tidb-operator.yaml` file. 5. Perform upgrade: {{< copyable "shell-regular" >}} ```bash - helm upgrade tidb-operator ./tidb-operator --version=v1.5.1 -f ${HOME}/tidb-operator/v1.5.1/values-tidb-operator.yaml + helm upgrade tidb-operator ./tidb-operator --version=v1.5.2 -f ${HOME}/tidb-operator/v1.5.2/values-tidb-operator.yaml ``` 6. After all the Pods start normally, check the image version of TiDB Operator: @@ -205,13 +205,13 @@ If your server cannot access the Internet, you can offline upgrade by taking the kubectl get po -n tidb-admin -l app.kubernetes.io/instance=tidb-operator -o yaml | grep 'image:.*operator:' ``` - If you see a similar output as follows, TiDB Operator is successfully upgraded. `v1.5.1` represents the TiDB Operator version you have upgraded to. + If you see a similar output as follows, TiDB Operator is successfully upgraded. `v1.5.2` represents the TiDB Operator version you have upgraded to. 
``` - image: pingcap/tidb-operator:v1.5.1 - image: docker.io/pingcap/tidb-operator:v1.5.1 - image: pingcap/tidb-operator:v1.5.1 - image: docker.io/pingcap/tidb-operator:v1.5.1 + image: pingcap/tidb-operator:v1.5.2 + image: docker.io/pingcap/tidb-operator:v1.5.2 + image: pingcap/tidb-operator:v1.5.2 + image: docker.io/pingcap/tidb-operator:v1.5.2 ``` > **Note:** diff --git a/zh/TOC.md b/zh/TOC.md index 321139887..028e68fa1 100644 --- a/zh/TOC.md +++ b/zh/TOC.md @@ -113,7 +113,7 @@ - [增强型 StatefulSet 控制器](advanced-statefulset.md) - [准入控制器](enable-admission-webhook.md) - [Sysbench 性能测试](benchmark-sysbench.md) - - [API 参考文档](https://github.com/pingcap/tidb-operator/blob/v1.5.1/docs/api-references/docs.md) + - [API 参考文档](https://github.com/pingcap/tidb-operator/blob/v1.5.2/docs/api-references/docs.md) - [Cheat Sheet](cheat-sheet.md) - [TiDB Operator RBAC 规则](tidb-operator-rbac.md) - 工具 diff --git a/zh/_index.md b/zh/_index.md index e0977985b..332fffa4d 100644 --- a/zh/_index.md +++ b/zh/_index.md @@ -70,7 +70,7 @@ hide_commit: true -[API 参考文档](https://github.com/pingcap/tidb-operator/blob/v1.5.1/docs/api-references/docs.md) +[API 参考文档](https://github.com/pingcap/tidb-operator/blob/v1.5.2/docs/api-references/docs.md) [工具](https://docs.pingcap.com/zh/tidb-in-kubernetes/dev/tidb-toolkit) diff --git a/zh/access-dashboard.md b/zh/access-dashboard.md index 641230a83..81db9e826 100644 --- a/zh/access-dashboard.md +++ b/zh/access-dashboard.md @@ -241,7 +241,7 @@ spec: EOF ``` - 关于 TidbNGMonitoring CR 的更多配置项,可参考 [tidb-operator 中的示例](https://github.com/pingcap/tidb-operator/blob/v1.5.1/examples/advanced/tidb-ng-monitoring.yaml)。 + 关于 TidbNGMonitoring CR 的更多配置项,可参考 [tidb-operator 中的示例](https://github.com/pingcap/tidb-operator/blob/v1.5.2/examples/advanced/tidb-ng-monitoring.yaml)。 3. 启用持续性能分析。 diff --git a/zh/advanced-statefulset.md b/zh/advanced-statefulset.md index 121a1f764..b2249a3f7 100644 --- a/zh/advanced-statefulset.md +++ b/zh/advanced-statefulset.md @@ -21,7 +21,7 @@ Kubernetes 内置 [StatefulSet](https://kubernetes.io/docs/concepts/workloads/co {{< copyable "shell-regular" >}} ```shell - kubectl apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/v1.5.1/manifests/advanced-statefulset-crd.v1beta1.yaml + kubectl apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/v1.5.2/manifests/advanced-statefulset-crd.v1beta1.yaml ``` * Kubernetes 1.16 及之后版本: @@ -29,7 +29,7 @@ Kubernetes 内置 [StatefulSet](https://kubernetes.io/docs/concepts/workloads/co {{< copyable "shell-regular" >}} ```shell - kubectl apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/v1.5.1/manifests/advanced-statefulset-crd.v1.yaml + kubectl apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/v1.5.2/manifests/advanced-statefulset-crd.v1.yaml ``` 2. 在 TiDB Operator chart 的 `values.yaml` 中启用 `AdvancedStatefulSet` 特性: diff --git a/zh/aggregate-multiple-cluster-monitor-data.md b/zh/aggregate-multiple-cluster-monitor-data.md index 767155ee1..dd2418d3b 100644 --- a/zh/aggregate-multiple-cluster-monitor-data.md +++ b/zh/aggregate-multiple-cluster-monitor-data.md @@ -24,7 +24,7 @@ Thanos 提供了跨 Prometheus 的统一查询方案 [Thanos Query](https://than {{< copyable "shell-regular" >}} ``` - kubectl -n ${namespace} apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/v1.5.1/examples/monitor-with-thanos/tidb-monitor.yaml + kubectl -n ${namespace} apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/v1.5.2/examples/monitor-with-thanos/tidb-monitor.yaml ``` 2. 
部署 Thanos Query 组件。 @@ -34,7 +34,7 @@ Thanos 提供了跨 Prometheus 的统一查询方案 [Thanos Query](https://than {{< copyable "shell-regular" >}} ``` - curl -sl -O https://raw.githubusercontent.com/pingcap/tidb-operator/v1.5.1/examples/monitor-with-thanos/thanos-query.yaml + curl -sl -O https://raw.githubusercontent.com/pingcap/tidb-operator/v1.5.2/examples/monitor-with-thanos/thanos-query.yaml ``` 2. 手动修改 `thanos-query.yaml` 文件中的 `--store` 参数,将 `basic-prometheus:10901` 改为 `basic-prometheus.${namespace}:10901`。 @@ -182,4 +182,4 @@ spec: Prometheus 将会把数据推送到 [Thanos Receiver](https://thanos.io/tip/components/receive.md/) 服务,详情可以参考 [Receiver 架构设计](https://thanos.io/v0.8/proposals/201812_thanos-remote-receive/)。 -部署方案可以参考 [Example](https://github.com/pingcap/tidb-operator/tree/v1.5.1/examples/monitor-prom-remotewrite)。 +部署方案可以参考 [Example](https://github.com/pingcap/tidb-operator/tree/v1.5.2/examples/monitor-prom-remotewrite)。 diff --git a/zh/backup-restore-cr.md b/zh/backup-restore-cr.md index 84236db58..646665d67 100644 --- a/zh/backup-restore-cr.md +++ b/zh/backup-restore-cr.md @@ -22,7 +22,7 @@ summary: 介绍用于备份与恢复的 Custom Resource (CR) 资源的各字段 - 如果指定了镜像但未指定版本,例如 `.spec.toolImage: private/registry/br`,那么使用镜像 `private/registry/br:${tikv_version}` 进行备份。 - 使用 Dumpling 备份时,可以用该字段指定 Dumpling 的版本: - 如果指定了 Dumpling 的版本,例如 `spec.toolImage: pingcap/dumpling:v5.3.0`,那么使用指定的版本镜像进行备份。 - - 如果未指定,默认使用 [Backup Manager Dockerfile](https://github.com/pingcap/tidb-operator/blob/v1.5.1/images/tidb-backup-manager/Dockerfile) 文件中 `TOOLKIT_VERSION` 指定的 Dumpling 版本进行备份。 + - 如果未指定,默认使用 [Backup Manager Dockerfile](https://github.com/pingcap/tidb-operator/blob/v1.5.2/images/tidb-backup-manager/Dockerfile) 文件中 `TOOLKIT_VERSION` 指定的 Dumpling 版本进行备份。 * `.spec.backupType`:指定 Backup 类型,该字段仅在使用 BR 备份时有效,目前支持以下三种类型,可以结合 `.spec.tableFilter` 配置表库过滤规则: * `full`:对 TiDB 集群所有的 database 数据执行备份。 @@ -247,7 +247,7 @@ summary: 介绍用于备份与恢复的 Custom Resource (CR) 资源的各字段 * `.spec.metadata.namespace`:`Restore` CR 所在的 namespace。 * `.spec.toolImage`:用于指定 `Restore` 使用的工具镜像。TiDB Operator 从 v1.1.9 版本起支持这项配置。 - 使用 BR 恢复时,可以用该字段指定 BR 的版本。例如,`spec.toolImage: pingcap/br:v5.3.0`。如果不指定,默认使用 `pingcap/br:${tikv_version}` 进行恢复。 - - 使用 Lightning 恢复时,可以用该字段指定 Lightning 的版本,例如`spec.toolImage: pingcap/lightning:v5.3.0`。如果不指定,默认使用 [Backup Manager Dockerfile](https://github.com/pingcap/tidb-operator/blob/v1.5.1/images/tidb-backup-manager/Dockerfile) 文件中 `TOOLKIT_VERSION` 指定的 Lightning 版本进行恢复。 + - 使用 Lightning 恢复时,可以用该字段指定 Lightning 的版本,例如`spec.toolImage: pingcap/lightning:v5.3.0`。如果不指定,默认使用 [Backup Manager Dockerfile](https://github.com/pingcap/tidb-operator/blob/v1.5.2/images/tidb-backup-manager/Dockerfile) 文件中 `TOOLKIT_VERSION` 指定的 Lightning 版本进行恢复。 * `.spec.backupType`:指定 Restore 类型,该字段仅在使用 BR 恢复时有效,目前支持以下三种类型,可以结合 `.spec.tableFilter` 配置表库过滤规则: * `full`:对 TiDB 集群所有的 database 数据执行备份。 diff --git a/zh/backup-to-aws-s3-by-snapshot.md b/zh/backup-to-aws-s3-by-snapshot.md index ae871f96f..72e88d897 100644 --- a/zh/backup-to-aws-s3-by-snapshot.md +++ b/zh/backup-to-aws-s3-by-snapshot.md @@ -42,7 +42,7 @@ summary: 介绍如何基于 EBS 卷快照使用 TiDB Operator 备份 TiDB 集群 ### 第 1 步:准备 EBS 卷快照备份环境 -1. 下载文件 [backup-rbac.yaml](https://github.com/pingcap/tidb-operator/blob/v1.5.1/manifests/backup/backup-rbac.yaml) 到执行备份的服务器。 +1. 下载文件 [backup-rbac.yaml](https://github.com/pingcap/tidb-operator/blob/v1.5.2/manifests/backup/backup-rbac.yaml) 到执行备份的服务器。 2. 
执行以下命令,在 `test1` 这个命名空间中,创建备份需要的 RBAC 相关资源: diff --git a/zh/backup-to-aws-s3-using-br.md b/zh/backup-to-aws-s3-using-br.md index 5163ac573..63d54ca37 100644 --- a/zh/backup-to-aws-s3-using-br.md +++ b/zh/backup-to-aws-s3-using-br.md @@ -51,7 +51,7 @@ Ad-hoc 备份支持快照备份,也支持[启动](#启动日志备份)和[停 kubectl create namespace backup-test ``` -2. 下载文件 [backup-rbac.yaml](https://github.com/pingcap/tidb-operator/blob/v1.5.1/manifests/backup/backup-rbac.yaml),并执行以下命令在 `backup-test` 这个 namespace 中创建备份需要的 RBAC 相关资源: +2. 下载文件 [backup-rbac.yaml](https://github.com/pingcap/tidb-operator/blob/v1.5.2/manifests/backup/backup-rbac.yaml),并执行以下命令在 `backup-test` 这个 namespace 中创建备份需要的 RBAC 相关资源: {{< copyable "shell-regular" >}} diff --git a/zh/backup-to-azblob-using-br.md b/zh/backup-to-azblob-using-br.md index 06b659426..a1e649655 100644 --- a/zh/backup-to-azblob-using-br.md +++ b/zh/backup-to-azblob-using-br.md @@ -48,7 +48,7 @@ Ad-hoc 备份支持快照备份,也支持[启动](#启动日志备份)和[停 kubectl create namespace backup-test ``` -2. 下载文件 [backup-rbac.yaml](https://github.com/pingcap/tidb-operator/blob/v1.5.1/manifests/backup/backup-rbac.yaml),并执行以下命令在 `backup-test` 这个 namespace 中创建备份需要的 RBAC 相关资源: +2. 下载文件 [backup-rbac.yaml](https://github.com/pingcap/tidb-operator/blob/v1.5.2/manifests/backup/backup-rbac.yaml),并执行以下命令在 `backup-test` 这个 namespace 中创建备份需要的 RBAC 相关资源: ```shell kubectl apply -f backup-rbac.yaml -n backup-test diff --git a/zh/backup-to-gcs-using-br.md b/zh/backup-to-gcs-using-br.md index 04b2eeb98..43a620306 100644 --- a/zh/backup-to-gcs-using-br.md +++ b/zh/backup-to-gcs-using-br.md @@ -49,7 +49,7 @@ Ad-hoc 备份支持快照备份,也支持[启动](#启动日志备份)和[停 kubectl create namespace backup-test ``` -2. 下载文件 [backup-rbac.yaml](https://github.com/pingcap/tidb-operator/blob/v1.5.1/manifests/backup/backup-rbac.yaml),并执行以下命令在 `backup-test` 这个 namespace 中创建备份需要的 RBAC 相关资源: +2. 下载文件 [backup-rbac.yaml](https://github.com/pingcap/tidb-operator/blob/v1.5.2/manifests/backup/backup-rbac.yaml),并执行以下命令在 `backup-test` 这个 namespace 中创建备份需要的 RBAC 相关资源: {{< copyable "shell-regular" >}} diff --git a/zh/backup-to-gcs.md b/zh/backup-to-gcs.md index 3e64080d3..fc23ea737 100644 --- a/zh/backup-to-gcs.md +++ b/zh/backup-to-gcs.md @@ -38,7 +38,7 @@ Ad-hoc 全量备份通过创建一个自定义的 `Backup` custom resource (CR) ### 第 1 步:Ad-hoc 全量备份环境准备 -1. 下载文件 [backup-rbac.yaml](https://github.com/pingcap/tidb-operator/blob/v1.5.1/manifests/backup/backup-rbac.yaml),并执行以下命令在 `test1` 这个 namespace 中创建备份需要的 RBAC 相关资源: +1. 下载文件 [backup-rbac.yaml](https://github.com/pingcap/tidb-operator/blob/v1.5.2/manifests/backup/backup-rbac.yaml),并执行以下命令在 `test1` 这个 namespace 中创建备份需要的 RBAC 相关资源: {{< copyable "shell-regular" >}} diff --git a/zh/backup-to-pv-using-br.md b/zh/backup-to-pv-using-br.md index 3f5cf1d11..f815aea66 100644 --- a/zh/backup-to-pv-using-br.md +++ b/zh/backup-to-pv-using-br.md @@ -31,7 +31,7 @@ Ad-hoc 备份支持快照备份与增量备份。Ad-hoc 备份通过创建一个 ### 第 1 步:准备 Ad-hoc 备份环境 -1. 下载文件 [backup-rbac.yaml](https://github.com/pingcap/tidb-operator/blob/v1.5.1/manifests/backup/backup-rbac.yaml) 到执行备份的服务器。 +1. 下载文件 [backup-rbac.yaml](https://github.com/pingcap/tidb-operator/blob/v1.5.2/manifests/backup/backup-rbac.yaml) 到执行备份的服务器。 2. 执行以下命令,在 `test1` 这个命名空间中,创建备份需要的 RBAC 相关资源: diff --git a/zh/backup-to-s3.md b/zh/backup-to-s3.md index da220d7df..e8de807e2 100644 --- a/zh/backup-to-s3.md +++ b/zh/backup-to-s3.md @@ -50,12 +50,12 @@ GRANT ### 第 1 步:Ad-hoc 全量备份环境准备 -1. 
执行以下命令,根据 [backup-rbac.yaml](https://raw.githubusercontent.com/pingcap/tidb-operator/v1.5.1/manifests/backup/backup-rbac.yaml) 在 `tidb-cluster` 命名空间创建基于角色的访问控制 (RBAC) 资源。 +1. 执行以下命令,根据 [backup-rbac.yaml](https://raw.githubusercontent.com/pingcap/tidb-operator/v1.5.2/manifests/backup/backup-rbac.yaml) 在 `tidb-cluster` 命名空间创建基于角色的访问控制 (RBAC) 资源。 {{< copyable "shell-regular" >}} ```shell - kubectl apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/v1.5.1/manifests/backup/backup-rbac.yaml -n tidb-cluster + kubectl apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/v1.5.2/manifests/backup/backup-rbac.yaml -n tidb-cluster ``` 2. 远程存储访问授权。 diff --git a/zh/cheat-sheet.md b/zh/cheat-sheet.md index a62b1a3d6..c995f90c1 100644 --- a/zh/cheat-sheet.md +++ b/zh/cheat-sheet.md @@ -493,7 +493,7 @@ helm inspect values ${chart_name} --version=${chart_version} > values.yaml {{< copyable "shell-regular" >}} ```shell -helm inspect values pingcap/tidb-operator --version=v1.5.1 > values-tidb-operator.yaml +helm inspect values pingcap/tidb-operator --version=v1.5.2 > values-tidb-operator.yaml ``` ### 使用 Helm Chart 部署 @@ -509,7 +509,7 @@ helm install ${name} ${chart_name} --namespace=${namespace} --version=${chart_ve {{< copyable "shell-regular" >}} ```shell -helm install tidb-operator pingcap/tidb-operator --namespace=tidb-admin --version=v1.5.1 -f values-tidb-operator.yaml +helm install tidb-operator pingcap/tidb-operator --namespace=tidb-admin --version=v1.5.2 -f values-tidb-operator.yaml ``` ### 查看已经部署的 Helm Release @@ -533,7 +533,7 @@ helm upgrade ${name} ${chart_name} --version=${chart_version} -f ${values_file} {{< copyable "shell-regular" >}} ```shell -helm upgrade tidb-operator pingcap/tidb-operator --version=v1.5.1 -f values-tidb-operator.yaml +helm upgrade tidb-operator pingcap/tidb-operator --version=v1.5.2 -f values-tidb-operator.yaml ``` ### 删除 Helm Release diff --git a/zh/configure-a-tidb-cluster.md b/zh/configure-a-tidb-cluster.md index bc45d977f..53c0574e0 100644 --- a/zh/configure-a-tidb-cluster.md +++ b/zh/configure-a-tidb-cluster.md @@ -25,7 +25,7 @@ aliases: ['/docs-cn/tidb-in-kubernetes/dev/configure-a-tidb-cluster/','/zh/tidb- ## 部署配置 -通过配置 `TidbCluster` CR 来配置 TiDB 集群。参考 TidbCluster [示例](https://github.com/pingcap/tidb-operator/blob/v1.5.1/examples/advanced/tidb-cluster.yaml)和 [API 文档](https://github.com/pingcap/tidb-operator/blob/v1.5.1/docs/api-references/docs.md)(示例和 API 文档请切换到当前使用的 TiDB Operator 版本)完成 TidbCluster CR(Custom Resource)。 +通过配置 `TidbCluster` CR 来配置 TiDB 集群。参考 TidbCluster [示例](https://github.com/pingcap/tidb-operator/blob/v1.5.2/examples/advanced/tidb-cluster.yaml)和 [API 文档](https://github.com/pingcap/tidb-operator/blob/v1.5.2/docs/api-references/docs.md)(示例和 API 文档请切换到当前使用的 TiDB Operator 版本)完成 TidbCluster CR(Custom Resource)。 > **注意:** > diff --git a/zh/configure-storage-class.md b/zh/configure-storage-class.md index a020a9624..260de50be 100644 --- a/zh/configure-storage-class.md +++ b/zh/configure-storage-class.md @@ -103,7 +103,7 @@ Kubernetes 当前支持静态分配的本地存储。可使用 [local-static-pro {{< copyable "shell-regular" >}} ```shell - wget https://raw.githubusercontent.com/pingcap/tidb-operator/v1.5.1/examples/local-pv/local-volume-provisioner.yaml + wget https://raw.githubusercontent.com/pingcap/tidb-operator/v1.5.2/examples/local-pv/local-volume-provisioner.yaml ``` 2. 
如果你使用的发现路径与[第 1 步:准备本地存储](#第-1-步准备本地存储)中的示例一致,可跳过这一步。如果你使用与上一步中不同路径的发现目录,需要修改 ConfigMap 和 DaemonSet 定义。 diff --git a/zh/deploy-heterogeneous-tidb-cluster.md b/zh/deploy-heterogeneous-tidb-cluster.md index b12365506..da5b88a4b 100644 --- a/zh/deploy-heterogeneous-tidb-cluster.md +++ b/zh/deploy-heterogeneous-tidb-cluster.md @@ -165,7 +165,7 @@ summary: 本文档介绍如何为已有的 TiDB 集群部署一个异构集群 其中,`spec.tlsCluster.enabled` 表示组件间是否开启 TLS,`spec.tidb.tlsClient.enabled` 表示 MySQL 客户端是否开启 TLS。 - - 详细的异构 TLS 集群配置示例,请参阅 [`heterogeneous-tls`](https://github.com/pingcap/tidb-operator/tree/v1.5.1/examples/heterogeneous-tls)。 + - 详细的异构 TLS 集群配置示例,请参阅 [`heterogeneous-tls`](https://github.com/pingcap/tidb-operator/tree/v1.5.2/examples/heterogeneous-tls)。 - TiDB 集群更多的配置项和字段含义,请参考 [TiDB 集群配置文档](configure-a-tidb-cluster.md)。 diff --git a/zh/deploy-on-alibaba-cloud.md b/zh/deploy-on-alibaba-cloud.md index bbfd113fd..721586d68 100644 --- a/zh/deploy-on-alibaba-cloud.md +++ b/zh/deploy-on-alibaba-cloud.md @@ -89,7 +89,7 @@ aliases: ['/docs-cn/tidb-in-kubernetes/dev/deploy-on-alibaba-cloud/'] tikv_count = 3 tidb_count = 2 pd_count = 3 - operator_version = "v1.5.1" + operator_version = "v1.5.2" ``` 如果需要在集群中部署 TiFlash,需要在 `terraform.tfvars` 中设置 `create_tiflash_node_pool = true`,也可以设置 `tiflash_count` 和 `tiflash_instance_type` 来配置 TiFlash 节点池的节点数量和实例类型,`tiflash_count` 默认为 `2`,`tiflash_instance_type` 默认为 `ecs.i2.2xlarge`。 @@ -168,7 +168,7 @@ aliases: ['/docs-cn/tidb-in-kubernetes/dev/deploy-on-alibaba-cloud/'] cp manifests/dashboard.yaml.example tidb-dashboard.yaml ``` - 参考 [API 文档](https://github.com/pingcap/tidb-operator/blob/v1.5.1/docs/api-references/docs.md)和[集群配置文档](configure-a-tidb-cluster.md)完成 CR 文件配置。 + 参考 [API 文档](https://github.com/pingcap/tidb-operator/blob/v1.5.2/docs/api-references/docs.md)和[集群配置文档](configure-a-tidb-cluster.md)完成 CR 文件配置。 如果要部署 TiFlash,可以在 db.yaml 中配置 `spec.tiflash`,例如: @@ -374,7 +374,7 @@ terraform state rm module.ack.alicloud_cs_managed_kubernetes.k8s ### 配置 TiDB 集群 -参考 [API 文档](https://github.com/pingcap/tidb-operator/blob/v1.5.1/docs/api-references/docs.md)和[集群配置文档](configure-a-tidb-cluster.md)修改 TiDB 集群配置。 +参考 [API 文档](https://github.com/pingcap/tidb-operator/blob/v1.5.2/docs/api-references/docs.md)和[集群配置文档](configure-a-tidb-cluster.md)修改 TiDB 集群配置。 ## 管理多个 TiDB 集群 diff --git a/zh/deploy-on-aws-eks.md b/zh/deploy-on-aws-eks.md index 1359fd110..cc1e3113c 100644 --- a/zh/deploy-on-aws-eks.md +++ b/zh/deploy-on-aws-eks.md @@ -296,7 +296,7 @@ mountOptions: 2. 通过[普通挂载方式](https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner/blob/master/docs/operations.md#use-a-whole-disk-as-a-filesystem-pv)将本地存储挂载到 `/mnt/ssd` 目录。 - 3. 根据本地存储的挂载情况,修改 [local-volume-provisioner.yaml](https://raw.githubusercontent.com/pingcap/tidb-operator/v1.5.1/manifests/eks/local-volume-provisioner.yaml) 文件。 + 3. 根据本地存储的挂载情况,修改 [local-volume-provisioner.yaml](https://raw.githubusercontent.com/pingcap/tidb-operator/v1.5.2/manifests/eks/local-volume-provisioner.yaml) 文件。 4. 
使用修改后的 `local-volume-provisioner.yaml`,部署并创建一个 `local-storage` 的 Storage Class: @@ -341,9 +341,9 @@ kubectl create namespace tidb-cluster {{< copyable "shell-regular" >}} ```shell -curl -O https://raw.githubusercontent.com/pingcap/tidb-operator/v1.5.1/examples/aws/tidb-cluster.yaml && \ -curl -O https://raw.githubusercontent.com/pingcap/tidb-operator/v1.5.1/examples/aws/tidb-monitor.yaml && \ -curl -O https://raw.githubusercontent.com/pingcap/tidb-operator/v1.5.1/examples/aws/tidb-dashboard.yaml +curl -O https://raw.githubusercontent.com/pingcap/tidb-operator/v1.5.2/examples/aws/tidb-cluster.yaml && \ +curl -O https://raw.githubusercontent.com/pingcap/tidb-operator/v1.5.2/examples/aws/tidb-monitor.yaml && \ +curl -O https://raw.githubusercontent.com/pingcap/tidb-operator/v1.5.2/examples/aws/tidb-dashboard.yaml ``` 如需了解更详细的配置信息或者进行自定义配置,请参考[配置 TiDB 集群](configure-a-tidb-cluster.md) @@ -650,4 +650,4 @@ spec: 最后使用 `kubectl -n tidb-cluster apply -f tidb-cluster.yaml` 更新 TiDB 集群配置。 -更多可参考 [API 文档](https://github.com/pingcap/tidb-operator/blob/v1.5.1/docs/api-references/docs.md)和[集群配置文档](configure-a-tidb-cluster.md)完成 CR 文件配置。 +更多可参考 [API 文档](https://github.com/pingcap/tidb-operator/blob/v1.5.2/docs/api-references/docs.md)和[集群配置文档](configure-a-tidb-cluster.md)完成 CR 文件配置。 diff --git a/zh/deploy-on-azure-aks.md b/zh/deploy-on-azure-aks.md index 1c7af6cd1..3d0180fc0 100644 --- a/zh/deploy-on-azure-aks.md +++ b/zh/deploy-on-azure-aks.md @@ -233,9 +233,9 @@ kubectl create namespace tidb-cluster {{< copyable "shell-regular" >}} ```shell -curl -O https://raw.githubusercontent.com/pingcap/tidb-operator/v1.5.1/examples/aks/tidb-cluster.yaml && \ -curl -O https://raw.githubusercontent.com/pingcap/tidb-operator/v1.5.1/examples/aks/tidb-monitor.yaml && \ -curl -O https://raw.githubusercontent.com/pingcap/tidb-operator/v1.5.1/examples/aks/tidb-dashboard.yaml +curl -O https://raw.githubusercontent.com/pingcap/tidb-operator/v1.5.2/examples/aks/tidb-cluster.yaml && \ +curl -O https://raw.githubusercontent.com/pingcap/tidb-operator/v1.5.2/examples/aks/tidb-monitor.yaml && \ +curl -O https://raw.githubusercontent.com/pingcap/tidb-operator/v1.5.2/examples/aks/tidb-dashboard.yaml ``` 如需了解更详细的配置信息或者进行自定义配置,请参考[配置 TiDB 集群](configure-a-tidb-cluster.md) @@ -514,7 +514,7 @@ spec: 最后使用 `kubectl -n tidb-cluster apply -f tidb-cluster.yaml` 更新 TiDB 集群配置。 -更多可参考 [API 文档](https://github.com/pingcap/tidb-operator/blob/v1.5.1/docs/api-references/docs.md)和[集群配置文档](configure-a-tidb-cluster.md)完成 CR 文件配置。 +更多可参考 [API 文档](https://github.com/pingcap/tidb-operator/blob/v1.5.2/docs/api-references/docs.md)和[集群配置文档](configure-a-tidb-cluster.md)完成 CR 文件配置。 ## 使用其他 Azure 磁盘类型 @@ -593,7 +593,7 @@ Azure Disk 支持多种磁盘类型。若需要低延迟、高吞吐,可以选 {{< copyable "shell-regular" >}} ```shell - kubectl apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/v1.5.1/manifests/eks/local-volume-provisioner.yaml + kubectl apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/v1.5.2/manifests/eks/local-volume-provisioner.yaml ``` 3. 
使用本地存储。 diff --git a/zh/deploy-on-gcp-gke.md b/zh/deploy-on-gcp-gke.md index 92bf0a359..2d6ab1b52 100644 --- a/zh/deploy-on-gcp-gke.md +++ b/zh/deploy-on-gcp-gke.md @@ -130,7 +130,7 @@ mountOptions: {{< copyable "shell-regular" >}} ```shell - kubectl apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/v1.5.1/manifests/gke/local-ssd-provision/local-ssd-provision.yaml + kubectl apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/v1.5.2/manifests/gke/local-ssd-provision/local-ssd-provision.yaml ``` 3. 使用本地存储。 @@ -166,9 +166,9 @@ kubectl create namespace tidb-cluster {{< copyable "shell-regular" >}} ```shell -curl -O https://raw.githubusercontent.com/pingcap/tidb-operator/v1.5.1/examples/gcp/tidb-cluster.yaml && \ -curl -O https://raw.githubusercontent.com/pingcap/tidb-operator/v1.5.1/examples/gcp/tidb-monitor.yaml && \ -curl -O https://raw.githubusercontent.com/pingcap/tidb-operator/v1.5.1/examples/gcp/tidb-dashboard.yaml +curl -O https://raw.githubusercontent.com/pingcap/tidb-operator/v1.5.2/examples/gcp/tidb-cluster.yaml && \ +curl -O https://raw.githubusercontent.com/pingcap/tidb-operator/v1.5.2/examples/gcp/tidb-monitor.yaml && \ +curl -O https://raw.githubusercontent.com/pingcap/tidb-operator/v1.5.2/examples/gcp/tidb-dashboard.yaml ``` 如需了解更详细的配置信息或者进行自定义配置,请参考[配置 TiDB 集群](configure-a-tidb-cluster.md) @@ -446,4 +446,4 @@ spec: 最后使用 `kubectl -n tidb-cluster apply -f tidb-cluster.yaml` 更新 TiDB 集群配置。 -更多可参考 [API 文档](https://github.com/pingcap/tidb-operator/blob/v1.5.1/docs/api-references/docs.md)和[集群配置文档](configure-a-tidb-cluster.md)完成 CR 文件配置。 +更多可参考 [API 文档](https://github.com/pingcap/tidb-operator/blob/v1.5.2/docs/api-references/docs.md)和[集群配置文档](configure-a-tidb-cluster.md)完成 CR 文件配置。 diff --git a/zh/deploy-tidb-cluster-across-multiple-kubernetes.md b/zh/deploy-tidb-cluster-across-multiple-kubernetes.md index 83a6a136f..1fbc9b96d 100644 --- a/zh/deploy-tidb-cluster-across-multiple-kubernetes.md +++ b/zh/deploy-tidb-cluster-across-multiple-kubernetes.md @@ -639,7 +639,7 @@ kubectl patch tidbcluster cluster1 --type merge -p '{"spec":{"acrossK8s": true}} 完成上述步骤后,该 TidbCluster 可以作为跨 Kubernetes 集群部署 TiDB 集群的初始 TidbCluster。可以参考[部署新的 TidbCluster 加入 TiDB 集群](#第-2-步部署新的-tidbcluster-加入-tidb-集群)一节部署其他的 TidbCluster。 -更多示例信息以及开发信息,请参阅 [`multi-cluster`](https://github.com/pingcap/tidb-operator/tree/v1.5.1/examples/multi-cluster)。 +更多示例信息以及开发信息,请参阅 [`multi-cluster`](https://github.com/pingcap/tidb-operator/tree/v1.5.2/examples/multi-cluster)。 ## 跨多个 Kubernetes 集群部署的 TiDB 集群监控 diff --git a/zh/deploy-tidb-dm.md b/zh/deploy-tidb-dm.md index b85eb319b..4c73b4e85 100644 --- a/zh/deploy-tidb-dm.md +++ b/zh/deploy-tidb-dm.md @@ -17,7 +17,7 @@ summary: 了解如何在 Kubernetes 上部署 TiDB DM 集群。 ## 部署配置 -通过配置 DMCluster CR 来配置 DM 集群。参考 DMCluster [示例](https://github.com/pingcap/tidb-operator/blob/v1.5.1/examples/dm/dm-cluster.yaml)和 [API 文档](https://github.com/pingcap/tidb-operator/blob/v1.5.1/docs/api-references/docs.md#dmcluster)(示例和 API 文档请切换到当前使用的 TiDB Operator 版本)完成 DMCluster CR (Custom Resource)。 +通过配置 DMCluster CR 来配置 DM 集群。参考 DMCluster [示例](https://github.com/pingcap/tidb-operator/blob/v1.5.2/examples/dm/dm-cluster.yaml)和 [API 文档](https://github.com/pingcap/tidb-operator/blob/v1.5.2/docs/api-references/docs.md#dmcluster)(示例和 API 文档请切换到当前使用的 TiDB Operator 版本)完成 DMCluster CR (Custom Resource)。 ### 集群名称 diff --git a/zh/deploy-tidb-from-kubernetes-gke.md b/zh/deploy-tidb-from-kubernetes-gke.md index b5955a699..c2fb8e7ea 100644 --- a/zh/deploy-tidb-from-kubernetes-gke.md +++ 
b/zh/deploy-tidb-from-kubernetes-gke.md @@ -94,7 +94,7 @@ kubectl get nodes TiDB Operator 使用 [Custom Resource Definition (CRD)](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/#customresourcedefinitions) 扩展 Kubernetes,所以要使用 TiDB Operator,必须先创建 `TidbCluster` 等各种自定义资源类型: ```shell -kubectl create -f https://raw.githubusercontent.com/pingcap/tidb-operator/v1.5.1/manifests/crd.yaml && \ +kubectl create -f https://raw.githubusercontent.com/pingcap/tidb-operator/v1.5.2/manifests/crd.yaml && \ kubectl get crd tidbclusters.pingcap.com ``` @@ -106,7 +106,7 @@ kubectl get crd tidbclusters.pingcap.com ```shell kubectl create namespace tidb-admin -helm install --namespace tidb-admin tidb-operator pingcap/tidb-operator --version v1.5.1 +helm install --namespace tidb-admin tidb-operator pingcap/tidb-operator --version v1.5.2 kubectl get po -n tidb-admin -l app.kubernetes.io/name=tidb-operator ``` @@ -123,13 +123,13 @@ kubectl get po -n tidb-admin -l app.kubernetes.io/name=tidb-operator 2. 部署 TiDB 集群: ``` shell - kubectl apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/v1.5.1/examples/basic/tidb-cluster.yaml -n demo + kubectl apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/v1.5.2/examples/basic/tidb-cluster.yaml -n demo ``` 3. 部署 TiDB 集群监控: ``` shell - kubectl apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/v1.5.1/examples/basic/tidb-monitor.yaml -n demo + kubectl apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/v1.5.2/examples/basic/tidb-monitor.yaml -n demo ``` 4. 通过下面命令查看 Pod 状态: diff --git a/zh/deploy-tidb-monitor-across-multiple-kubernetes.md b/zh/deploy-tidb-monitor-across-multiple-kubernetes.md index 097476e93..3c1ab2f98 100644 --- a/zh/deploy-tidb-monitor-across-multiple-kubernetes.md +++ b/zh/deploy-tidb-monitor-across-multiple-kubernetes.md @@ -24,7 +24,7 @@ Push 方式指利用 Prometheus remote-write 的特性,使位于不同 Kuberne - 各 Kubernetes 集群上的 Prometheus(即 TidbMonitor)组件有能力访问 Thanos Receiver 组件。 -关于 Thanos Receiver 部署,可参考 [kube-thanos](https://github.com/thanos-io/kube-thanos) 以及 [Example](https://github.com/pingcap/tidb-operator/tree/v1.5.1/examples/monitor-prom-remotewrite)。 +关于 Thanos Receiver 部署,可参考 [kube-thanos](https://github.com/thanos-io/kube-thanos) 以及 [Example](https://github.com/pingcap/tidb-operator/tree/v1.5.2/examples/monitor-prom-remotewrite)。 ### 部署架构图 @@ -108,7 +108,7 @@ Pull 方式是指从不同 Kubernetes 集群的 Prometheus 实例中拉取监控 - Thanos Query 组件有能力访问各 Kubernetes 集群上的 Prometheus (即 TidbMonitor) 组件的 Pod IP。 - Thanos Query 组件有能力访问各 Kubernetes 集群上的 Prometheus (即 TidbMonitor) 组件的 Pod FQDN。 -关于 Thanos Query 部署, 参考 [kube-thanos](https://github.com/thanos-io/kube-thanos) 以及 [Example](https://github.com/pingcap/tidb-operator/tree/v1.5.1/examples/monitor-with-thanos)。 +关于 Thanos Query 部署, 参考 [kube-thanos](https://github.com/thanos-io/kube-thanos) 以及 [Example](https://github.com/pingcap/tidb-operator/tree/v1.5.2/examples/monitor-with-thanos)。 #### 部署架构图 diff --git a/zh/deploy-tidb-operator.md b/zh/deploy-tidb-operator.md index fa08652ee..face8f182 100644 --- a/zh/deploy-tidb-operator.md +++ b/zh/deploy-tidb-operator.md @@ -45,7 +45,7 @@ TiDB Operator 使用 [Custom Resource Definition (CRD)](https://kubernetes.io/do {{< copyable "shell-regular" >}} ```shell -kubectl create -f https://raw.githubusercontent.com/pingcap/tidb-operator/v1.5.1/manifests/crd.yaml +kubectl create -f https://raw.githubusercontent.com/pingcap/tidb-operator/v1.5.2/manifests/crd.yaml ``` 如果服务器没有外网,需要先用有外网的机器下载 `crd.yaml` 文件,然后再进行安装: @@ -53,7 
+53,7 @@ kubectl create -f https://raw.githubusercontent.com/pingcap/tidb-operator/v1.5.1 {{< copyable "shell-regular" >}} ```shell -wget https://raw.githubusercontent.com/pingcap/tidb-operator/v1.5.1/manifests/crd.yaml +wget https://raw.githubusercontent.com/pingcap/tidb-operator/v1.5.2/manifests/crd.yaml kubectl create -f ./crd.yaml ``` @@ -101,7 +101,7 @@ tidbmonitors.pingcap.com 2020-06-11T07:59:41Z > **注意:** > - > `${chart_version}` 在后续文档中代表 chart 版本,例如 `v1.5.1`,可以通过 `helm search repo -l tidb-operator` 查看当前支持的版本。 + > `${chart_version}` 在后续文档中代表 chart 版本,例如 `v1.5.2`,可以通过 `helm search repo -l tidb-operator` 查看当前支持的版本。 2. 配置 TiDB Operator @@ -151,15 +151,15 @@ tidbmonitors.pingcap.com 2020-06-11T07:59:41Z {{< copyable "shell-regular" >}} ```shell - wget http://charts.pingcap.org/tidb-operator-v1.5.1.tgz + wget http://charts.pingcap.org/tidb-operator-v1.5.2.tgz ``` - 将 `tidb-operator-v1.5.1.tgz` 文件拷贝到服务器上并解压到当前目录: + 将 `tidb-operator-v1.5.2.tgz` 文件拷贝到服务器上并解压到当前目录: {{< copyable "shell-regular" >}} ```shell - tar zxvf tidb-operator.v1.5.1.tgz + tar zxvf tidb-operator.v1.5.2.tgz ``` 2. 下载 TiDB Operator 运行所需的 Docker 镜像 @@ -169,8 +169,8 @@ tidbmonitors.pingcap.com 2020-06-11T07:59:41Z TiDB Operator 用到的 Docker 镜像有: ```shell - pingcap/tidb-operator:v1.5.1 - pingcap/tidb-backup-manager:v1.5.1 + pingcap/tidb-operator:v1.5.2 + pingcap/tidb-backup-manager:v1.5.2 bitnami/kubectl:latest pingcap/advanced-statefulset:v0.3.3 k8s.gcr.io/kube-scheduler:v1.16.9 @@ -183,13 +183,13 @@ tidbmonitors.pingcap.com 2020-06-11T07:59:41Z {{< copyable "shell-regular" >}} ```shell - docker pull pingcap/tidb-operator:v1.5.1 - docker pull pingcap/tidb-backup-manager:v1.5.1 + docker pull pingcap/tidb-operator:v1.5.2 + docker pull pingcap/tidb-backup-manager:v1.5.2 docker pull bitnami/kubectl:latest docker pull pingcap/advanced-statefulset:v0.3.3 - docker save -o tidb-operator-v1.5.1.tar pingcap/tidb-operator:v1.5.1 - docker save -o tidb-backup-manager-v1.5.1.tar pingcap/tidb-backup-manager:v1.5.1 + docker save -o tidb-operator-v1.5.2.tar pingcap/tidb-operator:v1.5.2 + docker save -o tidb-backup-manager-v1.5.2.tar pingcap/tidb-backup-manager:v1.5.2 docker save -o bitnami-kubectl.tar bitnami/kubectl:latest docker save -o advanced-statefulset-v0.3.3.tar pingcap/advanced-statefulset:v0.3.3 ``` @@ -199,8 +199,8 @@ tidbmonitors.pingcap.com 2020-06-11T07:59:41Z {{< copyable "shell-regular" >}} ```shell - docker load -i tidb-operator-v1.5.1.tar - docker load -i tidb-backup-manager-v1.5.1.tar + docker load -i tidb-operator-v1.5.2.tar + docker load -i tidb-backup-manager-v1.5.2.tar docker load -i bitnami-kubectl.tar docker load -i advanced-statefulset-v0.3.3.tar ``` diff --git a/zh/enable-monitor-dynamic-configuration.md b/zh/enable-monitor-dynamic-configuration.md index 4807adcaa..25995d389 100644 --- a/zh/enable-monitor-dynamic-configuration.md +++ b/zh/enable-monitor-dynamic-configuration.md @@ -39,7 +39,7 @@ spec: `prometheusReloader` 配置变更后,TidbMonitor 会自动重启。重启后,所有针对 Prometheus 的配置变更都会动态更新。 -可以参考 [monitor-dynamic-configmap 配置示例](https://github.com/pingcap/tidb-operator/tree/v1.5.1/examples/monitor-dynamic-configmap)。 +可以参考 [monitor-dynamic-configmap 配置示例](https://github.com/pingcap/tidb-operator/tree/v1.5.2/examples/monitor-dynamic-configmap)。 ## 关闭动态配置功能 diff --git a/zh/enable-monitor-shards.md b/zh/enable-monitor-shards.md index 9a15036c0..e0fcf915c 100644 --- a/zh/enable-monitor-shards.md +++ b/zh/enable-monitor-shards.md @@ -49,4 +49,4 @@ spec: > - TidbMonitor 对应的 Pod 实例数量取决于 `replicas` 和 `shards` 的乘积。例如,当 `replicas` 为 1 
个副本,`shards` 为 2 个分片时,TiDB Operator 将产生 2 个 TidbMonitor Pod 实例。 > - `shards` 变更后,`Targets` 会重新分配,但是原本在节点上的监控数据不会重新分配。 -可以参考 [分片示例](https://github.com/pingcap/tidb-operator/tree/v1.5.1/examples/monitor-shards)。 +可以参考 [分片示例](https://github.com/pingcap/tidb-operator/tree/v1.5.2/examples/monitor-shards)。 diff --git a/zh/get-started.md b/zh/get-started.md index fe94fa3ae..114c09a03 100644 --- a/zh/get-started.md +++ b/zh/get-started.md @@ -196,7 +196,7 @@ TiDB Operator 包含许多实现 TiDB 集群不同组件的自定义资源类型 {{< copyable "shell-regular" >}} ```shell -kubectl create -f https://raw.githubusercontent.com/pingcap/tidb-operator/v1.5.1/manifests/crd.yaml +kubectl create -f https://raw.githubusercontent.com/pingcap/tidb-operator/v1.5.2/manifests/crd.yaml ```
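As a quick sanity check after applying the CRD manifest, you can confirm that the Operator's resource types are registered before deploying anything. This is a minimal sketch; it only assumes that `kubectl` already points at the target cluster and uses CRD names such as `tidbclusters.pingcap.com` shown above:

```shell
# List every CustomResourceDefinition registered under the pingcap.com group;
# tidbclusters, tidbmonitors, and the other Operator CRDs should all appear
# once crd.yaml has been applied.
kubectl get crd | grep pingcap.com

# Inspect a single CRD to confirm it exists and when it was created.
kubectl get crd tidbclusters.pingcap.com \
  -o jsonpath='{.metadata.name} created at {.metadata.creationTimestamp}{"\n"}'
```

If the list comes back empty, re-apply the `crd.yaml` step before deploying any cluster.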
@@ -261,7 +261,7 @@ customresourcedefinition.apiextensions.k8s.io/tidbclusterautoscalers.pingcap.com {{< copyable "shell-regular" >}} ```shell - helm install --namespace tidb-admin tidb-operator pingcap/tidb-operator --version v1.5.1 + helm install --namespace tidb-admin tidb-operator pingcap/tidb-operator --version v1.5.2 ``` 如果访问 Docker Hub 网速较慢,可以使用阿里云上的镜像: @@ -269,9 +269,9 @@ customresourcedefinition.apiextensions.k8s.io/tidbclusterautoscalers.pingcap.com {{< copyable "shell-regular" >}} ``` - helm install --namespace tidb-admin tidb-operator pingcap/tidb-operator --version v1.5.1 \ - --set operatorImage=registry.cn-beijing.aliyuncs.com/tidb/tidb-operator:v1.5.1 \ - --set tidbBackupManagerImage=registry.cn-beijing.aliyuncs.com/tidb/tidb-backup-manager:v1.5.1 \ + helm install --namespace tidb-admin tidb-operator pingcap/tidb-operator --version v1.5.2 \ + --set operatorImage=registry.cn-beijing.aliyuncs.com/tidb/tidb-operator:v1.5.2 \ + --set tidbBackupManagerImage=registry.cn-beijing.aliyuncs.com/tidb/tidb-backup-manager:v1.5.2 \ --set scheduler.kubeSchedulerImageName=registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler ``` @@ -324,7 +324,7 @@ tidb-scheduler-644d59b46f-4f6sb 2/2 Running 0 2m22s ``` shell kubectl create namespace tidb-cluster && \ - kubectl -n tidb-cluster apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/v1.5.1/examples/basic/tidb-cluster.yaml + kubectl -n tidb-cluster apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/v1.5.2/examples/basic/tidb-cluster.yaml ``` 如果访问 Docker Hub 网速较慢,可以使用 UCloud 上的镜像: @@ -333,7 +333,7 @@ kubectl create namespace tidb-cluster && \ ``` kubectl create namespace tidb-cluster && \ - kubectl -n tidb-cluster apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/v1.5.1/examples/basic-cn/tidb-cluster.yaml + kubectl -n tidb-cluster apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/v1.5.2/examples/basic-cn/tidb-cluster.yaml ```
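Before moving on, it can help to wait for both the Operator and the example cluster to become Ready. The sketch below assumes the Helm release name `tidb-operator`, the `tidb-admin` and `tidb-cluster` namespaces used above, the example cluster name `basic`, and the Operator's usual `app.kubernetes.io/*` labels:

```shell
# Confirm which chart version Helm actually deployed.
helm list -n tidb-admin

# Block until the TiDB Operator Pods report Ready (up to 5 minutes).
kubectl -n tidb-admin wait --for=condition=Ready pod \
  -l app.kubernetes.io/name=tidb-operator --timeout=300s

# Watch the example cluster start up; PD, TiKV, and TiDB Pods appear in turn.
kubectl -n tidb-cluster get pods -l app.kubernetes.io/instance=basic --watch
```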
@@ -353,7 +353,7 @@ tidbcluster.pingcap.com/basic created {{< copyable "shell-regular" >}} ``` shell -kubectl -n tidb-cluster apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/v1.5.1/examples/basic/tidb-dashboard.yaml +kubectl -n tidb-cluster apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/v1.5.2/examples/basic/tidb-dashboard.yaml ``` 如果访问 Docker Hub 网速较慢,可以使用 UCloud 上的镜像: @@ -361,7 +361,7 @@ kubectl -n tidb-cluster apply -f https://raw.githubusercontent.com/pingcap/tidb- {{< copyable "shell-regular" >}} ``` -kubectl -n tidb-cluster apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/v1.5.1/examples/basic-cn/tidb-dashboard.yaml +kubectl -n tidb-cluster apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/v1.5.2/examples/basic-cn/tidb-dashboard.yaml ```
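Once the TidbDashboard object (`basic` in the output above) is created, it can be queried like any other namespaced resource and reached through a port-forward. In the sketch below, `${dashboard_svc}` is a placeholder to be replaced with whatever `kubectl get svc` reports, and the default dashboard port 12333 is an assumption:

```shell
# The TidbDashboard CRD registers a normal namespaced resource type,
# so kubectl can query it directly.
kubectl -n tidb-cluster get tidbdashboard basic

# Find the Service that exposes the dashboard; the exact name depends on the CR name.
kubectl -n tidb-cluster get svc | grep dashboard

# Forward a local port to it (replace ${dashboard_svc} with the name found above),
# then open http://localhost:12333 in a browser.
kubectl -n tidb-cluster port-forward svc/${dashboard_svc} 12333:12333
```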
@@ -378,7 +378,7 @@ tidbdashboard.pingcap.com/basic created {{< copyable "shell-regular" >}} ``` shell -kubectl -n tidb-cluster apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/v1.5.1/examples/basic/tidb-monitor.yaml +kubectl -n tidb-cluster apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/v1.5.2/examples/basic/tidb-monitor.yaml ``` 如果访问 Docker Hub 网速较慢,可以使用 UCloud 上的镜像: @@ -386,7 +386,7 @@ kubectl -n tidb-cluster apply -f https://raw.githubusercontent.com/pingcap/tidb- {{< copyable "shell-regular" >}} ``` -kubectl -n tidb-cluster apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/v1.5.1/examples/basic-cn/tidb-monitor.yaml +kubectl -n tidb-cluster apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/v1.5.2/examples/basic-cn/tidb-monitor.yaml ```
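After the TidbMonitor named `basic` is created, its Prometheus and Grafana run as ordinary Services in the same namespace. In the sketch below, the Service names `basic-prometheus` and `basic-grafana` and the default ports 9090 and 3000 are assumptions that should be verified with `kubectl get svc` first:

```shell
# List the monitoring Services created by the TidbMonitor named "basic".
kubectl -n tidb-cluster get svc | grep basic

# Forward Prometheus and Grafana to the local machine for a quick look.
kubectl -n tidb-cluster port-forward svc/basic-prometheus 9090:9090 &
kubectl -n tidb-cluster port-forward svc/basic-grafana 3000:3000 &

# Prometheus is now at http://localhost:9090 and Grafana at http://localhost:3000.
```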
diff --git a/zh/initialize-a-cluster.md b/zh/initialize-a-cluster.md index f3ed5f0bf..b8d42eb45 100644 --- a/zh/initialize-a-cluster.md +++ b/zh/initialize-a-cluster.md @@ -15,7 +15,7 @@ aliases: ['/docs-cn/tidb-in-kubernetes/dev/initialize-a-cluster/'] ## 配置 TidbInitializer -请参考 TidbInitializer [示例](https://github.com/pingcap/tidb-operator/blob/v1.5.1/manifests/initializer/tidb-initializer.yaml)和 [API 文档](https://github.com/pingcap/tidb-operator/blob/v1.5.1/docs/api-references/docs.md)(示例和 API 文档请切换到当前使用的 TiDB Operator 版本)以及下面的步骤,完成 TidbInitializer CR,保存到文件 `${cluster_name}/tidb-initializer.yaml`。 +请参考 TidbInitializer [示例](https://github.com/pingcap/tidb-operator/blob/v1.5.2/manifests/initializer/tidb-initializer.yaml)和 [API 文档](https://github.com/pingcap/tidb-operator/blob/v1.5.2/docs/api-references/docs.md)(示例和 API 文档请切换到当前使用的 TiDB Operator 版本)以及下面的步骤,完成 TidbInitializer CR,保存到文件 `${cluster_name}/tidb-initializer.yaml`。 ### 设置集群的命名空间和名称 diff --git a/zh/monitor-a-tidb-cluster.md b/zh/monitor-a-tidb-cluster.md index 0e148a764..a35e30e0a 100644 --- a/zh/monitor-a-tidb-cluster.md +++ b/zh/monitor-a-tidb-cluster.md @@ -14,7 +14,7 @@ TiDB 通过 Prometheus 和 Grafana 监控 TiDB 集群。在通过 TiDB Operator 在 [TiDB 集群监控](https://docs.pingcap.com/zh/tidb/stable/deploy-monitoring-services)中有一些监控系统配置的细节可供参考。 -在 v1.1 及更高版本的 TiDB Operator 中,可以通过简单的 CR 文件(即 TidbMonitor,可参考 [tidb-operator 中的示例](https://github.com/pingcap/tidb-operator/blob/v1.5.1/manifests/monitor/tidb-monitor.yaml))来快速建立对 Kubernetes 集群上的 TiDB 集群的监控。 +在 v1.1 及更高版本的 TiDB Operator 中,可以通过简单的 CR 文件(即 TidbMonitor,可参考 [tidb-operator 中的示例](https://github.com/pingcap/tidb-operator/blob/v1.5.2/manifests/monitor/tidb-monitor.yaml))来快速建立对 Kubernetes 集群上的 TiDB 集群的监控。 > **注意:** > @@ -82,13 +82,13 @@ basic-monitor Bound pvc-6db79253-cc9e-4730-bbba-ba987c29db6f 5G R 2. 设置 `spec.prometheus.config.configMapRef.name` 与 `spec.prometheus.config.configMapRef.namespace` 为自定义 ConfigMap 的名称与所属的 namespace。 3. 确认 TidbMonitor 是否已开启[动态配置功能](enable-monitor-dynamic-configuration.md),如果未开启该功能,需要重启 TidbMonitor 的 pod 重新加载配置。 -如需了解完整的配置示例,可参考 [tidb-operator 中的示例](https://github.com/pingcap/tidb-operator/blob/v1.5.1/examples/monitor-with-externalConfigMap/prometheus/README.md)。 +如需了解完整的配置示例,可参考 [tidb-operator 中的示例](https://github.com/pingcap/tidb-operator/blob/v1.5.2/examples/monitor-with-externalConfigMap/prometheus/README.md)。 #### 增加额外的命令行参数 设置 `spec.prometheus.config.commandOptions` 为用于启动 Prometheus 的额外的命令行参数。 -如需了解完整的配置示例,可参考 [tidb-operator 中的示例](https://github.com/pingcap/tidb-operator/blob/v1.5.1/examples/monitor-with-externalConfigMap/prometheus/README.md)。 +如需了解完整的配置示例,可参考 [tidb-operator 中的示例](https://github.com/pingcap/tidb-operator/blob/v1.5.2/examples/monitor-with-externalConfigMap/prometheus/README.md)。 > **注意:** > @@ -361,7 +361,7 @@ spec: imagePullPolicy: IfNotPresent ``` -如需了解完整的配置示例,可参考 TiDB Operator 仓库中的[示例](https://github.com/pingcap/tidb-operator/tree/v1.5.1/examples/monitor-multiple-cluster-non-tls)。 +如需了解完整的配置示例,可参考 TiDB Operator 仓库中的[示例](https://github.com/pingcap/tidb-operator/tree/v1.5.2/examples/monitor-multiple-cluster-non-tls)。 ### 使用 Grafana 查看多集群监控 diff --git a/zh/restore-from-aws-s3-by-snapshot.md b/zh/restore-from-aws-s3-by-snapshot.md index a9251b578..61fcc756a 100644 --- a/zh/restore-from-aws-s3-by-snapshot.md +++ b/zh/restore-from-aws-s3-by-snapshot.md @@ -36,7 +36,7 @@ summary: 介绍如何将存储在 S3 上的备份元数据以及 EBS 卷快照 使用 TiDB Operator 将 S3 兼容存储上的备份元数据以及 EBS 快照恢复到 TiDB 之前,请按照以下步骤准备恢复环境。 -1. 
下载文件 [backup-rbac.yaml](https://github.com/pingcap/tidb-operator/blob/v1.5.1/manifests/backup/backup-rbac.yaml)。 +1. 下载文件 [backup-rbac.yaml](https://github.com/pingcap/tidb-operator/blob/v1.5.2/manifests/backup/backup-rbac.yaml)。 2. 执行以下命令在 `test2` 这个命名空间中创建恢复需要的 RBAC 相关资源: diff --git a/zh/restore-from-aws-s3-using-br.md b/zh/restore-from-aws-s3-using-br.md index cc9d04512..178a9663f 100644 --- a/zh/restore-from-aws-s3-using-br.md +++ b/zh/restore-from-aws-s3-using-br.md @@ -39,7 +39,7 @@ PITR 全称为 Point-in-time recovery,该功能可以让你在新集群上恢 kubectl create namespace restore-test ``` -2. 下载文件 [backup-rbac.yaml](https://github.com/pingcap/tidb-operator/blob/v1.5.1/manifests/backup/backup-rbac.yaml),并执行以下命令在 `restore-test` 这个 namespace 中创建恢复需要的 RBAC 相关资源: +2. 下载文件 [backup-rbac.yaml](https://github.com/pingcap/tidb-operator/blob/v1.5.2/manifests/backup/backup-rbac.yaml),并执行以下命令在 `restore-test` 这个 namespace 中创建恢复需要的 RBAC 相关资源: {{< copyable "shell-regular" >}} @@ -247,7 +247,7 @@ demo2-restore-s3 Complete ... kubectl create namespace restore-test ``` -2. 下载文件 [backup-rbac.yaml](https://github.com/pingcap/tidb-operator/blob/v1.5.1/manifests/backup/backup-rbac.yaml),并执行以下命令在 `restore-test` 这个 namespace 中创建备份需要的 RBAC 相关资源: +2. 下载文件 [backup-rbac.yaml](https://github.com/pingcap/tidb-operator/blob/v1.5.2/manifests/backup/backup-rbac.yaml),并执行以下命令在 `restore-test` 这个 namespace 中创建备份需要的 RBAC 相关资源: ```shell kubectl apply -f backup-rbac.yaml -n restore-test diff --git a/zh/restore-from-azblob-using-br.md b/zh/restore-from-azblob-using-br.md index 7c758ff78..920734c06 100644 --- a/zh/restore-from-azblob-using-br.md +++ b/zh/restore-from-azblob-using-br.md @@ -38,7 +38,7 @@ PITR 全称为 Point-in-time recovery,该功能可以让你在新集群上恢 kubectl create namespace restore-test ``` -2. 下载文件 [backup-rbac.yaml](https://github.com/pingcap/tidb-operator/blob/v1.5.1/manifests/backup/backup-rbac.yaml),并执行以下命令在 `restore-test` 这个 namespace 中创建备份需要的 RBAC 相关资源: +2. 下载文件 [backup-rbac.yaml](https://github.com/pingcap/tidb-operator/blob/v1.5.2/manifests/backup/backup-rbac.yaml),并执行以下命令在 `restore-test` 这个 namespace 中创建备份需要的 RBAC 相关资源: ```shell kubectl apply -f backup-rbac.yaml -n restore-test @@ -155,7 +155,7 @@ demo2-restore-azblob Complete ... kubectl create namespace restore-test ``` -2. 下载文件 [backup-rbac.yaml](https://github.com/pingcap/tidb-operator/blob/v1.5.1/manifests/backup/backup-rbac.yaml),并执行以下命令在 `restore-test` 这个 namespace 中创建备份需要的 RBAC 相关资源: +2. 下载文件 [backup-rbac.yaml](https://github.com/pingcap/tidb-operator/blob/v1.5.2/manifests/backup/backup-rbac.yaml),并执行以下命令在 `restore-test` 这个 namespace 中创建备份需要的 RBAC 相关资源: ```shell kubectl apply -f backup-rbac.yaml -n restore-test diff --git a/zh/restore-from-gcs-using-br.md b/zh/restore-from-gcs-using-br.md index f8d93fb31..5d5e71eb0 100644 --- a/zh/restore-from-gcs-using-br.md +++ b/zh/restore-from-gcs-using-br.md @@ -39,7 +39,7 @@ PITR 全称为 Point-in-time recovery,该功能可以让你在新集群上恢 kubectl create namespace restore-test ``` -2. 下载文件 [`backup-rbac.yaml`](https://github.com/pingcap/tidb-operator/blob/v1.5.1/manifests/backup/backup-rbac.yaml),并执行以下命令在 `restore-test` 这个 namespace 中创建恢复所需的 RBAC 相关资源: +2. 下载文件 [`backup-rbac.yaml`](https://github.com/pingcap/tidb-operator/blob/v1.5.2/manifests/backup/backup-rbac.yaml),并执行以下命令在 `restore-test` 这个 namespace 中创建恢复所需的 RBAC 相关资源: {{< copyable "shell-regular" >}} @@ -161,7 +161,7 @@ PITR 全称为 Point-in-time recovery,该功能可以让你在新集群上恢 kubectl create namespace restore-test ``` -2. 
下载文件 [backup-rbac.yaml](https://github.com/pingcap/tidb-operator/blob/v1.5.1/manifests/backup/backup-rbac.yaml),并执行以下命令在 `restore-test` 这个 namespace 中创建备份需要的 RBAC 相关资源: +2. 下载文件 [backup-rbac.yaml](https://github.com/pingcap/tidb-operator/blob/v1.5.2/manifests/backup/backup-rbac.yaml),并执行以下命令在 `restore-test` 这个 namespace 中创建备份需要的 RBAC 相关资源: ```shell kubectl apply -f backup-rbac.yaml -n restore-test diff --git a/zh/restore-from-gcs.md b/zh/restore-from-gcs.md index 33f15694c..af425421c 100644 --- a/zh/restore-from-gcs.md +++ b/zh/restore-from-gcs.md @@ -28,7 +28,7 @@ TiDB Lightning 是一款将全量数据高速导入到 TiDB 集群的工具, ### 环境准备 -1. 下载文件 [`backup-rbac.yaml`](https://github.com/pingcap/tidb-operator/blob/v1.5.1/manifests/backup/backup-rbac.yaml),并执行以下命令在 `test2` 这个 namespace 中创建恢复所需的 RBAC 相关资源: +1. 下载文件 [`backup-rbac.yaml`](https://github.com/pingcap/tidb-operator/blob/v1.5.2/manifests/backup/backup-rbac.yaml),并执行以下命令在 `test2` 这个 namespace 中创建恢复所需的 RBAC 相关资源: {{< copyable "shell-regular" >}} diff --git a/zh/restore-from-pv-using-br.md b/zh/restore-from-pv-using-br.md index 110eb5e6a..9ced1cc53 100644 --- a/zh/restore-from-pv-using-br.md +++ b/zh/restore-from-pv-using-br.md @@ -22,7 +22,7 @@ summary: 介绍如何将存储在持久卷上的备份数据恢复到 TiDB 集 使用 BR 将 PV 上的备份数据恢复到 TiDB 前,请按照以下步骤准备恢复环境。 -1. 下载文件 [`backup-rbac.yaml`](https://github.com/pingcap/tidb-operator/blob/v1.5.1/manifests/backup/backup-rbac.yaml) 到执行恢复的服务器。 +1. 下载文件 [`backup-rbac.yaml`](https://github.com/pingcap/tidb-operator/blob/v1.5.2/manifests/backup/backup-rbac.yaml) 到执行恢复的服务器。 2. 执行以下命令在 `test2` 这个命名空间中创建恢复所需的 RBAC 相关资源: diff --git a/zh/restore-from-s3.md b/zh/restore-from-s3.md index b1f9c4003..eed106c8e 100644 --- a/zh/restore-from-s3.md +++ b/zh/restore-from-s3.md @@ -28,7 +28,7 @@ TiDB Lightning 是一款将全量数据高速导入到 TiDB 集群的工具, ### 准备恢复环境 -1. 下载文件 [`backup-rbac.yaml`](https://github.com/pingcap/tidb-operator/blob/v1.5.1/manifests/backup/backup-rbac.yaml),并执行以下命令在 `test2` 这个 namespace 中创建恢复所需的 RBAC 相关资源: +1. 下载文件 [`backup-rbac.yaml`](https://github.com/pingcap/tidb-operator/blob/v1.5.2/manifests/backup/backup-rbac.yaml),并执行以下命令在 `test2` 这个 namespace 中创建恢复所需的 RBAC 相关资源: {{< copyable "shell-regular" >}} diff --git a/zh/tidb-toolkit.md b/zh/tidb-toolkit.md index 75caa1fff..c1e59830d 100644 --- a/zh/tidb-toolkit.md +++ b/zh/tidb-toolkit.md @@ -200,11 +200,11 @@ helm search repo pingcap ``` NAME CHART VERSION APP VERSION DESCRIPTION -pingcap/tidb-backup v1.5.1 A Helm chart for TiDB Backup or Restore -pingcap/tidb-cluster v1.5.1 A Helm chart for TiDB Cluster -pingcap/tidb-drainer v1.5.1 A Helm chart for TiDB Binlog drainer. -pingcap/tidb-lightning v1.5.1 A Helm chart for TiDB Lightning -pingcap/tidb-operator v1.5.1 v1.5.1 tidb-operator Helm chart for Kubernetes +pingcap/tidb-backup v1.5.2 A Helm chart for TiDB Backup or Restore +pingcap/tidb-cluster v1.5.2 A Helm chart for TiDB Cluster +pingcap/tidb-drainer v1.5.2 A Helm chart for TiDB Binlog drainer. 
+pingcap/tidb-lightning v1.5.2 A Helm chart for TiDB Lightning +pingcap/tidb-operator v1.5.2 v1.5.2 tidb-operator Helm chart for Kubernetes ``` 当新版本的 chart 发布后,你可以使用 `helm repo update` 命令更新本地对于仓库的缓存: @@ -264,9 +264,9 @@ helm uninstall ${release_name} -n ${namespace} {{< copyable "shell-regular" >}} ```shell -wget http://charts.pingcap.org/tidb-operator-v1.5.1.tgz -wget http://charts.pingcap.org/tidb-drainer-v1.5.1.tgz -wget http://charts.pingcap.org/tidb-lightning-v1.5.1.tgz +wget http://charts.pingcap.org/tidb-operator-v1.5.2.tgz +wget http://charts.pingcap.org/tidb-drainer-v1.5.2.tgz +wget http://charts.pingcap.org/tidb-lightning-v1.5.2.tgz ``` 将这些 chart 文件拷贝到服务器上并解压,可以通过 `helm install` 命令使用这些 chart 来安装相应组件,以 `tidb-operator` 为例: @@ -274,7 +274,7 @@ wget http://charts.pingcap.org/tidb-lightning-v1.5.1.tgz {{< copyable "shell-regular" >}} ```shell -tar zxvf tidb-operator.v1.5.1.tgz +tar zxvf tidb-operator.v1.5.2.tgz helm install ${release_name} ./tidb-operator --namespace=${namespace} ``` diff --git a/zh/upgrade-tidb-operator.md b/zh/upgrade-tidb-operator.md index 9a57da2c1..07a0400a0 100644 --- a/zh/upgrade-tidb-operator.md +++ b/zh/upgrade-tidb-operator.md @@ -70,27 +70,27 @@ aliases: ['/docs-cn/tidb-in-kubernetes/dev/upgrade-tidb-operator/'] kubectl get crd tidbclusters.pingcap.com ``` - 本文以 TiDB Operator v1.5.1 为例,你需要替换 `${operator_version}` 为你要升级到的 TiDB Operator 版本。 + 本文以 TiDB Operator v1.5.2 为例,你需要替换 `${operator_version}` 为你要升级到的 TiDB Operator 版本。 3. 获取你要升级的 `tidb-operator` chart 中的 `values.yaml` 文件: {{< copyable "shell-regular" >}} ```shell - mkdir -p ${HOME}/tidb-operator/v1.5.1 && \ - helm inspect values pingcap/tidb-operator --version=v1.5.1 > ${HOME}/tidb-operator/v1.5.1/values-tidb-operator.yaml + mkdir -p ${HOME}/tidb-operator/v1.5.2 && \ + helm inspect values pingcap/tidb-operator --version=v1.5.2 > ${HOME}/tidb-operator/v1.5.2/values-tidb-operator.yaml ``` -4. 修改 `${HOME}/tidb-operator/v1.5.1/values-tidb-operator.yaml` 中 `operatorImage` 镜像版本为要升级到的版本。 +4. 修改 `${HOME}/tidb-operator/v1.5.2/values-tidb-operator.yaml` 中 `operatorImage` 镜像版本为要升级到的版本。 -5. 如果你在旧版本 `values.yaml` 中设置了自定义配置,将自定义配置合并到 `${HOME}/tidb-operator/v1.5.1/values-tidb-operator.yaml` 中。 +5. 如果你在旧版本 `values.yaml` 中设置了自定义配置,将自定义配置合并到 `${HOME}/tidb-operator/v1.5.2/values-tidb-operator.yaml` 中。 6. 执行升级: {{< copyable "shell-regular" >}} ```shell - helm upgrade tidb-operator pingcap/tidb-operator --version=v1.5.1 -f ${HOME}/tidb-operator/v1.5.1/values-tidb-operator.yaml -n tidb-admin + helm upgrade tidb-operator pingcap/tidb-operator --version=v1.5.2 -f ${HOME}/tidb-operator/v1.5.2/values-tidb-operator.yaml -n tidb-admin ``` 7. 
Pod 全部正常启动之后,运行以下命令确认 TiDB Operator 镜像版本: @@ -101,13 +101,13 @@ aliases: ['/docs-cn/tidb-in-kubernetes/dev/upgrade-tidb-operator/'] kubectl get po -n tidb-admin -l app.kubernetes.io/instance=tidb-operator -o yaml | grep 'image:.*operator:' ``` - 如果输出类似下方的结果,则表示升级成功。其中,`v1.5.1` 表示已升级到的版本号。 + 如果输出类似下方的结果,则表示升级成功。其中,`v1.5.2` 表示已升级到的版本号。 ``` - image: pingcap/tidb-operator:v1.5.1 - image: docker.io/pingcap/tidb-operator:v1.5.1 - image: pingcap/tidb-operator:v1.5.1 - image: docker.io/pingcap/tidb-operator:v1.5.1 + image: pingcap/tidb-operator:v1.5.2 + image: docker.io/pingcap/tidb-operator:v1.5.2 + image: pingcap/tidb-operator:v1.5.2 + image: docker.io/pingcap/tidb-operator:v1.5.2 ``` > **注意:** @@ -138,14 +138,14 @@ aliases: ['/docs-cn/tidb-in-kubernetes/dev/upgrade-tidb-operator/'] wget -O crd.yaml https://raw.githubusercontent.com/pingcap/tidb-operator/${operator_version}/manifests/crd_v1beta1.yaml ``` - 本文以 TiDB Operator v1.5.1 为例,你需要替换 `${operator_version}` 为你要升级到的 TiDB Operator 版本。 + 本文以 TiDB Operator v1.5.2 为例,你需要替换 `${operator_version}` 为你要升级到的 TiDB Operator 版本。 2. 下载 `tidb-operator` chart 包文件: {{< copyable "shell-regular" >}} ```shell - wget http://charts.pingcap.org/tidb-operator-v1.5.1.tgz + wget http://charts.pingcap.org/tidb-operator-v1.5.2.tgz ``` 3. 下载 TiDB Operator 升级所需的 Docker 镜像: @@ -153,11 +153,11 @@ aliases: ['/docs-cn/tidb-in-kubernetes/dev/upgrade-tidb-operator/'] {{< copyable "shell-regular" >}} ```shell - docker pull pingcap/tidb-operator:v1.5.1 - docker pull pingcap/tidb-backup-manager:v1.5.1 + docker pull pingcap/tidb-operator:v1.5.2 + docker pull pingcap/tidb-backup-manager:v1.5.2 - docker save -o tidb-operator-v1.5.1.tar pingcap/tidb-operator:v1.5.1 - docker save -o tidb-backup-manager-v1.5.1.tar pingcap/tidb-backup-manager:v1.5.1 + docker save -o tidb-operator-v1.5.2.tar pingcap/tidb-operator:v1.5.2 + docker save -o tidb-backup-manager-v1.5.2.tar pingcap/tidb-backup-manager:v1.5.2 ``` 2. 将下载的文件和镜像上传到需要升级的服务器上,在服务器上按照以下步骤进行安装: @@ -185,9 +185,9 @@ aliases: ['/docs-cn/tidb-in-kubernetes/dev/upgrade-tidb-operator/'] {{< copyable "shell-regular" >}} ```shell - tar zxvf tidb-operator-v1.5.1.tgz && \ - mkdir -p ${HOME}/tidb-operator/v1.5.1 && \ - cp tidb-operator/values.yaml ${HOME}/tidb-operator/v1.5.1/values-tidb-operator.yaml + tar zxvf tidb-operator-v1.5.2.tgz && \ + mkdir -p ${HOME}/tidb-operator/v1.5.2 && \ + cp tidb-operator/values.yaml ${HOME}/tidb-operator/v1.5.2/values-tidb-operator.yaml ``` 4. 安装 Docker 镜像到服务器上: @@ -195,20 +195,20 @@ aliases: ['/docs-cn/tidb-in-kubernetes/dev/upgrade-tidb-operator/'] {{< copyable "shell-regular" >}} ```shell - docker load -i tidb-operator-v1.5.1.tar && \ - docker load -i tidb-backup-manager-v1.5.1.tar + docker load -i tidb-operator-v1.5.2.tar && \ + docker load -i tidb-backup-manager-v1.5.2.tar ``` -3. 修改 `${HOME}/tidb-operator/v1.5.1/values-tidb-operator.yaml` 中 `operatorImage` 镜像版本为要升级到的版本。 +3. 修改 `${HOME}/tidb-operator/v1.5.2/values-tidb-operator.yaml` 中 `operatorImage` 镜像版本为要升级到的版本。 -4. 如果你在旧版本 `values.yaml` 中设置了自定义配置,将自定义配置合并到 `${HOME}/tidb-operator/v1.5.1/values-tidb-operator.yaml` 中。 +4. 如果你在旧版本 `values.yaml` 中设置了自定义配置,将自定义配置合并到 `${HOME}/tidb-operator/v1.5.2/values-tidb-operator.yaml` 中。 5. 执行升级: {{< copyable "shell-regular" >}} ```shell - helm upgrade tidb-operator ./tidb-operator --version=v1.5.1 -f ${HOME}/tidb-operator/v1.5.1/values-tidb-operator.yaml + helm upgrade tidb-operator ./tidb-operator --version=v1.5.2 -f ${HOME}/tidb-operator/v1.5.2/values-tidb-operator.yaml ``` 6. 
Pod 全部正常启动之后,运行以下命令确认 TiDB Operator 镜像版本: @@ -219,13 +219,13 @@ aliases: ['/docs-cn/tidb-in-kubernetes/dev/upgrade-tidb-operator/'] kubectl get po -n tidb-admin -l app.kubernetes.io/instance=tidb-operator -o yaml | grep 'image:.*operator:' ``` - 如果输出类似下方的结果,则表示升级成功。其中,`v1.5.1` 表示已升级到的版本号。 + 如果输出类似下方的结果,则表示升级成功。其中,`v1.5.2` 表示已升级到的版本号。 ``` - image: pingcap/tidb-operator:v1.5.1 - image: docker.io/pingcap/tidb-operator:v1.5.1 - image: pingcap/tidb-operator:v1.5.1 - image: docker.io/pingcap/tidb-operator:v1.5.1 + image: pingcap/tidb-operator:v1.5.2 + image: docker.io/pingcap/tidb-operator:v1.5.2 + image: pingcap/tidb-operator:v1.5.2 + image: docker.io/pingcap/tidb-operator:v1.5.2 ``` > **注意:**