diff --git a/en/access-dashboard.md b/en/access-dashboard.md index bd39cd93f..02f634e24 100644 --- a/en/access-dashboard.md +++ b/en/access-dashboard.md @@ -238,7 +238,7 @@ To enable this feature, you need to deploy TidbNGMonitoring CR using TiDB Operat ngMonitoring: requests: storage: 10Gi - version: v7.1.0 + version: v7.1.1 # storageClassName: default baseImage: pingcap/ng-monitoring EOF diff --git a/en/advanced-statefulset.md b/en/advanced-statefulset.md index 2a44cfc55..d89d2b7e4 100644 --- a/en/advanced-statefulset.md +++ b/en/advanced-statefulset.md @@ -95,7 +95,7 @@ kind: TidbCluster metadata: name: asts spec: - version: v7.1.0 + version: v7.1.1 timezone: UTC pvReclaimPolicy: Delete pd: @@ -147,7 +147,7 @@ metadata: tikv.tidb.pingcap.com/delete-slots: '[1]' name: asts spec: - version: v7.1.0 + version: v7.1.1 timezone: UTC pvReclaimPolicy: Delete pd: @@ -201,7 +201,7 @@ metadata: tikv.tidb.pingcap.com/delete-slots: '[]' name: asts spec: - version: v7.1.0 + version: v7.1.1 timezone: UTC pvReclaimPolicy: Delete pd: diff --git a/en/aggregate-multiple-cluster-monitor-data.md b/en/aggregate-multiple-cluster-monitor-data.md index d59da4fc5..61b53ae32 100644 --- a/en/aggregate-multiple-cluster-monitor-data.md +++ b/en/aggregate-multiple-cluster-monitor-data.md @@ -170,7 +170,7 @@ spec: version: 7.5.11 initializer: baseImage: registry.cn-beijing.aliyuncs.com/tidb/tidb-monitor-initializer - version: v7.1.0 + version: v7.1.1 reloader: baseImage: registry.cn-beijing.aliyuncs.com/tidb/tidb-monitor-reloader version: v1.0.1 diff --git a/en/backup-restore-cr.md b/en/backup-restore-cr.md index 1f696bb12..132a576d3 100644 --- a/en/backup-restore-cr.md +++ b/en/backup-restore-cr.md @@ -20,11 +20,16 @@ This section introduces the fields in the `Backup` CR. - When using BR for backup, you can specify the BR version in this field. - If the field is not specified or the value is empty, the `pingcap/br:${tikv_version}` image is used for backup by default. 
- - If the BR version is specified in this field, such as `.spec.toolImage: pingcap/br:v7.1.0`, the image of the specified version is used for backup. + - If the BR version is specified in this field, such as `.spec.toolImage: pingcap/br:v7.1.1`, the image of the specified version is used for backup. - If an image is specified without the version, such as `.spec.toolImage: private/registry/br`, the `private/registry/br:${tikv_version}` image is used for backup. - When using Dumpling for backup, you can specify the Dumpling version in this field. - - If the Dumpling version is specified in this field, such as `spec.toolImage: pingcap/dumpling:v7.1.0`, the image of the specified version is used for backup. - - If the field is not specified, the Dumpling version specified in `TOOLKIT_VERSION` of the [Backup Manager Dockerfile](https://github.com/pingcap/tidb-operator/blob/v1.5.0/images/tidb-backup-manager/Dockerfile) is used for backup by default. + - If the Dumpling version is specified in this field, such as `spec.toolImage: pingcap/dumpling:v7.1.1`, the image of the specified version is used for backup. + - If the field is not specified, the Dumpling version specified in `TOOLKIT_VERSION` of the [Backup Manager Dockerfile](https://github.com/pingcap/tidb-operator/blob/v1.5.0/images/tidb-backup-manager/Dockerfile) is used for backup by default. * `.spec.backupType`: the backup type. This field is valid only when you use BR for backup. Currently, the following three types are supported, and this field can be combined with the `.spec.tableFilter` field to configure table filter rules: * `full`: back up all databases in a TiDB cluster. @@ -260,8 +260,8 @@ This section introduces the fields in the `Restore` CR. * `.spec.metadata.namespace`: the namespace where the `Restore` CR is located. -* `.spec.toolImage`:the tools image used by `Restore`. TiDB Operator supports this configuration starting from v1.1.9. +* `.spec.toolImage`: the tools image used by `Restore`. TiDB Operator supports this configuration starting from v1.1.9. - - When using BR for restoring, you can specify the BR version in this field. For example,`spec.toolImage: pingcap/br:v7.1.0`. If not specified, `pingcap/br:${tikv_version}` is used for restoring by default. - - When using Lightning for restoring, you can specify the Lightning version in this field. For example, `spec.toolImage: pingcap/lightning:v7.1.0`. If not specified, the Lightning version specified in `TOOLKIT_VERSION` of the [Backup Manager Dockerfile](https://github.com/pingcap/tidb-operator/blob/v1.5.0/images/tidb-backup-manager/Dockerfile) is used for restoring by default. + - When using BR for restoring, you can specify the BR version in this field. For example, `spec.toolImage: pingcap/br:v7.1.1`. If not specified, `pingcap/br:${tikv_version}` is used for restoring by default. + - When using Lightning for restoring, you can specify the Lightning version in this field. For example, `spec.toolImage: pingcap/lightning:v7.1.1`. If not specified, the Lightning version specified in `TOOLKIT_VERSION` of the [Backup Manager Dockerfile](https://github.com/pingcap/tidb-operator/blob/v1.5.0/images/tidb-backup-manager/Dockerfile) is used for restoring by default. * `.spec.backupType`: the restore type. This field is valid only when you use BR to restore data. Currently, the following three types are supported, and this field can be combined with the `.spec.tableFilter` field to configure table filter rules: * `full`: restore all databases in a TiDB cluster. diff --git a/en/backup-restore-faq.md index e8303e02d..92719f23a 100644 --- a/en/backup-restore-faq.md +++ b/en/backup-restore-faq.md @@ -186,7 +186,7 @@ Solution: 2.
Edit the configuration file of the TiDB cluster and increase the value of TiKV's `keepalive`: - ```shell + ```yaml config: | [server] grpc-keepalive-time = "500s" @@ -208,7 +208,7 @@ Solution: backupType: full restoreMode: volume-snapshot serviceAccount: tidb-backup-manager - toolImage: pingcap/br:v7.1.0 + toolImage: pingcap/br:v7.1.1 br: cluster: basic clusterNamespace: tidb-cluster diff --git a/en/configure-a-tidb-cluster.md index 67f773bee..719a74c8e 100644 --- a/en/configure-a-tidb-cluster.md +++ b/en/configure-a-tidb-cluster.md @@ -41,11 +41,11 @@ Usually, components in a cluster are in the same version. It is recommended to c Here are the formats of the parameters: -- `spec.version`: the format is `imageTag`, such as `v7.1.0` +- `spec.version`: the format is `imageTag`, such as `v7.1.1` - `spec.<component>.baseImage`: the format is `imageName`, such as `pingcap/tidb` -- `spec.<component>.version`: the format is `imageTag`, such as `v7.1.0` +- `spec.<component>.version`: the format is `imageTag`, such as `v7.1.1` ### Recommended configuration diff --git a/en/deploy-cluster-on-arm64.md index 7386ced8c..160c04a5b 100644 --- a/en/deploy-cluster-on-arm64.md +++ b/en/deploy-cluster-on-arm64.md @@ -38,7 +38,7 @@ Before starting the process, make sure that Kubernetes clusters are deployed on name: ${cluster_name} namespace: ${cluster_namespace} spec: - version: "v7.1.0" + version: "v7.1.1" # ...
helper: image: busybox:1.33.0 diff --git a/en/deploy-heterogeneous-tidb-cluster.md b/en/deploy-heterogeneous-tidb-cluster.md index b2c7766a5..458f0adff 100644 --- a/en/deploy-heterogeneous-tidb-cluster.md +++ b/en/deploy-heterogeneous-tidb-cluster.md @@ -48,7 +48,7 @@ To deploy a heterogeneous cluster, do the following: name: ${heterogeneous_cluster_name} spec: configUpdateStrategy: RollingUpdate - version: v7.1.0 + version: v7.1.1 timezone: UTC pvReclaimPolicy: Delete discovery: {} @@ -129,7 +129,7 @@ After creating certificates, take the following steps to deploy a TLS-enabled he tlsCluster: enabled: true configUpdateStrategy: RollingUpdate - version: v7.1.0 + version: v7.1.1 timezone: UTC pvReclaimPolicy: Delete discovery: {} @@ -218,7 +218,7 @@ If you need to deploy a monitoring component for a heterogeneous cluster, take t version: 7.5.11 initializer: baseImage: pingcap/tidb-monitor-initializer - version: v7.1.0 + version: v7.1.1 reloader: baseImage: pingcap/tidb-monitor-reloader version: v1.0.1 diff --git a/en/deploy-on-aws-eks.md b/en/deploy-on-aws-eks.md index 447109ce4..0ea2aa4f7 100644 --- a/en/deploy-on-aws-eks.md +++ b/en/deploy-on-aws-eks.md @@ -461,7 +461,7 @@ After the bastion host is created, you can connect to the bastion host via SSH a $ mysql --comments -h abfc623004ccb4cc3b363f3f37475af1-9774d22c27310bc1.elb.us-west-2.amazonaws.com -P 4000 -u root Welcome to the MariaDB monitor. Commands end with ; or \g. Your MySQL connection id is 1189 - Server version: 5.7.25-TiDB-v7.1.0 TiDB Server (Apache License 2.0) Community Edition, MySQL 5.7 compatible + Server version: 5.7.25-TiDB-v7.1.1 TiDB Server (Apache License 2.0) Community Edition, MySQL 5.7 compatible Copyright (c) 2000, 2022, Oracle and/or its affiliates. 
diff --git a/en/deploy-on-azure-aks.md index f4a63b186..733cacdd7 100644 --- a/en/deploy-on-azure-aks.md +++ b/en/deploy-on-azure-aks.md @@ -342,7 +342,7 @@ After access to the internal host via SSH, you can access the TiDB cluster throu $ mysql --comments -h 20.240.0.7 -P 4000 -u root Welcome to the MariaDB monitor. Commands end with ; or \g. Your MySQL connection id is 1189 - Server version: 5.7.25-TiDB-v7.1.0 TiDB Server (Apache License 2.0) Community Edition, MySQL 5.7 compatible + Server version: 5.7.25-TiDB-v7.1.1 TiDB Server (Apache License 2.0) Community Edition, MySQL 5.7 compatible Copyright (c) 2000, 2022, Oracle and/or its affiliates. diff --git a/en/deploy-on-gcp-gke.md index 8bd538f0b..5fc93765e 100644 --- a/en/deploy-on-gcp-gke.md +++ b/en/deploy-on-gcp-gke.md @@ -279,7 +279,7 @@ After the bastion host is created, you can connect to the bastion host via SSH a $ mysql --comments -h 10.128.15.243 -P 4000 -u root Welcome to the MariaDB monitor. Commands end with ; or \g. Your MySQL connection id is 7823 - Server version: 5.7.25-TiDB-v7.1.0 TiDB Server (Apache License 2.0) Community Edition, MySQL 5.7 compatible + Server version: 5.7.25-TiDB-v7.1.1 TiDB Server (Apache License 2.0) Community Edition, MySQL 5.7 compatible Copyright (c) 2000, 2022, Oracle and/or its affiliates. diff --git a/en/deploy-on-general-kubernetes.md index d48e59569..61df440b7 100644 --- a/en/deploy-on-general-kubernetes.md +++ b/en/deploy-on-general-kubernetes.md @@ -42,18 +42,18 @@ This document describes how to deploy a TiDB cluster on general Kubernetes. If the server does not have an external network, you need to download the Docker image used by the TiDB cluster on a machine with Internet access and upload it to the server, and then use `docker load` to install the Docker image on the server. - To deploy a TiDB cluster, you need the following Docker images (assuming the version of the TiDB cluster is v7.1.0): + To deploy a TiDB cluster, you need the following Docker images (assuming the version of the TiDB cluster is v7.1.1): ```shell - pingcap/pd:v7.1.0 - pingcap/tikv:v7.1.0 - pingcap/tidb:v7.1.0 - pingcap/tidb-binlog:v7.1.0 - pingcap/ticdc:v7.1.0 - pingcap/tiflash:v7.1.0 + pingcap/pd:v7.1.1 + pingcap/tikv:v7.1.1 + pingcap/tidb:v7.1.1 + pingcap/tidb-binlog:v7.1.1 + pingcap/ticdc:v7.1.1 + pingcap/tiflash:v7.1.1 pingcap/tidb-monitor-reloader:v1.0.1 - pingcap/tidb-monitor-initializer:v7.1.0 + pingcap/tidb-monitor-initializer:v7.1.1 grafana/grafana:7.5.11 prom/prometheus:v2.18.1 busybox:1.26.2 ``` @@ -63,27 +63,27 @@ This document describes how to deploy a TiDB cluster on general Kubernetes. {{< copyable "shell-regular" >}} ```shell - docker pull pingcap/pd:v7.1.0 - docker pull pingcap/tikv:v7.1.0 - docker pull pingcap/tidb:v7.1.0 - docker pull pingcap/tidb-binlog:v7.1.0 - docker pull pingcap/ticdc:v7.1.0 - docker pull pingcap/tiflash:v7.1.0 + docker pull pingcap/pd:v7.1.1 + docker pull pingcap/tikv:v7.1.1 + docker pull pingcap/tidb:v7.1.1 + docker pull pingcap/tidb-binlog:v7.1.1 + docker pull pingcap/ticdc:v7.1.1 + docker pull pingcap/tiflash:v7.1.1 docker pull pingcap/tidb-monitor-reloader:v1.0.1 - docker pull pingcap/tidb-monitor-initializer:v7.1.0 + docker pull pingcap/tidb-monitor-initializer:v7.1.1 docker pull grafana/grafana:7.5.11 docker pull prom/prometheus:v2.18.1 docker pull busybox:1.26.2 - docker save -o pd-v7.1.0.tar pingcap/pd:v7.1.0 - docker save -o tikv-v7.1.0.tar pingcap/tikv:v7.1.0 - docker save -o tidb-v7.1.0.tar pingcap/tidb:v7.1.0 - docker save -o tidb-binlog-v7.1.0.tar pingcap/tidb-binlog:v7.1.0 - docker save -o ticdc-v7.1.0.tar pingcap/ticdc:v7.1.0 - docker save -o tiflash-v7.1.0.tar pingcap/tiflash:v7.1.0 + docker save -o pd-v7.1.1.tar pingcap/pd:v7.1.1 + docker save -o tikv-v7.1.1.tar pingcap/tikv:v7.1.1 + docker save -o tidb-v7.1.1.tar pingcap/tidb:v7.1.1 + docker save -o tidb-binlog-v7.1.1.tar pingcap/tidb-binlog:v7.1.1 + docker save -o ticdc-v7.1.1.tar pingcap/ticdc:v7.1.1 + docker save -o tiflash-v7.1.1.tar pingcap/tiflash:v7.1.1 docker save -o tidb-monitor-reloader-v1.0.1.tar pingcap/tidb-monitor-reloader:v1.0.1 - docker save -o tidb-monitor-initializer-v7.1.0.tar pingcap/tidb-monitor-initializer:v7.1.0 + docker save -o tidb-monitor-initializer-v7.1.1.tar pingcap/tidb-monitor-initializer:v7.1.1 docker save -o grafana-7.5.11.tar grafana/grafana:7.5.11 docker save -o prometheus-v2.18.1.tar prom/prometheus:v2.18.1 docker save -o busybox-1.26.2.tar busybox:1.26.2 ``` @@ -93,14 +93,14 @@ This document describes how to deploy a TiDB cluster on general Kubernetes. {{< copyable "shell-regular" >}} ```shell - docker load -i pd-v7.1.0.tar - docker load -i tikv-v7.1.0.tar - docker load -i tidb-v7.1.0.tar - docker load -i tidb-binlog-v7.1.0.tar - docker load -i ticdc-v7.1.0.tar - docker load -i tiflash-v7.1.0.tar + docker load -i pd-v7.1.1.tar + docker load -i tikv-v7.1.1.tar + docker load -i tidb-v7.1.1.tar + docker load -i tidb-binlog-v7.1.1.tar + docker load -i ticdc-v7.1.1.tar + docker load -i tiflash-v7.1.1.tar docker load -i tidb-monitor-reloader-v1.0.1.tar - docker load -i tidb-monitor-initializer-v7.1.0.tar + docker load -i tidb-monitor-initializer-v7.1.1.tar - docker load -i grafana-6.0.1.tar + docker load -i grafana-7.5.11.tar docker load -i prometheus-v2.18.1.tar docker load -i busybox-1.26.2.tar diff --git a/en/deploy-tidb-binlog.md index 15941e93b..650666d32 100644 --- a/en/deploy-tidb-binlog.md +++ b/en/deploy-tidb-binlog.md @@ -28,7 +28,7 @@ TiDB Binlog is disabled in the TiDB cluster by default. To create a TiDB cluster ... pump: baseImage: pingcap/tidb-binlog - version: v7.1.0 + version: v7.1.1 replicas: 1 storageClassName: local-storage requests: @@ -47,7 +47,7 @@ TiDB Binlog is disabled in the TiDB cluster by default. To create a TiDB cluster ...
pump: baseImage: pingcap/tidb-binlog - version: v7.1.0 + version: v7.1.1 replicas: 1 storageClassName: local-storage requests: @@ -188,7 +188,7 @@ To deploy multiple drainers using the `tidb-drainer` Helm chart for a TiDB clust ```yaml clusterName: example-tidb - clusterVersion: v7.1.0 + clusterVersion: v7.1.1 - baseImage:pingcap/tidb-binlog + baseImage: pingcap/tidb-binlog storageClassName: local-storage storage: 10Gi diff --git a/en/deploy-tidb-cluster-across-multiple-kubernetes.md index d46f505e3..ef1e0d029 100644 --- a/en/deploy-tidb-cluster-across-multiple-kubernetes.md +++ b/en/deploy-tidb-cluster-across-multiple-kubernetes.md @@ -52,7 +52,7 @@ kind: TidbCluster metadata: name: "${tc_name_1}" spec: - version: v7.1.0 + version: v7.1.1 timezone: UTC pvReclaimPolicy: Delete enableDynamicConfiguration: true @@ -106,7 +106,7 @@ kind: TidbCluster metadata: name: "${tc_name_2}" spec: - version: v7.1.0 + version: v7.1.1 timezone: UTC pvReclaimPolicy: Delete enableDynamicConfiguration: true @@ -383,7 +383,7 @@ kind: TidbCluster metadata: name: "${tc_name_1}" spec: - version: v7.1.0 + version: v7.1.1 timezone: UTC tlsCluster: enabled: true @@ -441,7 +441,7 @@ kind: TidbCluster metadata: name: "${tc_name_2}" spec: - version: v7.1.0 + version: v7.1.1 timezone: UTC tlsCluster: enabled: true diff --git a/en/deploy-tidb-dm.md index bd8ff0f32..eb43f2fe3 100644 --- a/en/deploy-tidb-dm.md +++ b/en/deploy-tidb-dm.md @@ -29,9 +29,9 @@ Usually, components in a cluster are in the same version. It is recommended to c The formats of the related parameters are as follows: -- `spec.version`: the format is `imageTag`, such as `v7.1.0`. +- `spec.version`: the format is `imageTag`, such as `v7.1.1`. - `spec.<component>.baseImage`: the format is `imageName`, such as `pingcap/dm`. -- `spec.<component>.version`: the format is `imageTag`, such as `v7.1.0`. +- `spec.<component>.version`: the format is `imageTag`, such as `v7.1.1`.
TiDB Operator only supports deploying DM 2.0 and later versions. @@ -50,7 +50,7 @@ metadata: name: ${dm_cluster_name} namespace: ${namespace} spec: - version: v7.1.0 + version: v7.1.1 configUpdateStrategy: RollingUpdate pvReclaimPolicy: Retain discovery: {} @@ -141,10 +141,10 @@ kubectl apply -f ${dm_cluster_name}.yaml -n ${namespace} If the server does not have an external network, you need to download the Docker image used by the DM cluster and upload the image to the server, and then execute `docker load` to install the Docker image on the server: -1. Deploy a DM cluster requires the following Docker image (assuming the version of the DM cluster is v7.1.0): +1. Deploying a DM cluster requires the following Docker image (assuming the version of the DM cluster is v7.1.1): ```shell - pingcap/dm:v7.1.0 + pingcap/dm:v7.1.1 ``` 2. To download the image, execute the following command: @@ -152,8 +152,8 @@ If the server does not have an external network, you need to download the Docker {{< copyable "shell-regular" >}} ```shell - docker pull pingcap/dm:v7.1.0 - docker save -o dm-v7.1.0.tar pingcap/dm:v7.1.0 + docker pull pingcap/dm:v7.1.1 + docker save -o dm-v7.1.1.tar pingcap/dm:v7.1.1 ``` 3.
Upload the Docker image to the server, and execute `docker load` to install the image on the server: @@ -161,7 +161,7 @@ If the server does not have an external network, you need to download the Docker {{< copyable "shell-regular" >}} ```shell - docker load -i dm-v7.1.0.tar + docker load -i dm-v7.1.1.tar ``` After deploying the DM cluster, execute the following command to view the Pod status: diff --git a/en/deploy-tidb-monitor-across-multiple-kubernetes.md b/en/deploy-tidb-monitor-across-multiple-kubernetes.md index 3932feb31..7a3ff4c04 100644 --- a/en/deploy-tidb-monitor-across-multiple-kubernetes.md +++ b/en/deploy-tidb-monitor-across-multiple-kubernetes.md @@ -302,7 +302,7 @@ After collecting data using Prometheus, you can visualize multi-cluster monitori ```shell # set tidb version here - version=v7.1.0 + version=v7.1.1 docker run --rm -i -v ${PWD}/dashboards:/dashboards/ pingcap/tidb-monitor-initializer:${version} && \ cd dashboards ``` diff --git a/en/enable-tls-between-components.md b/en/enable-tls-between-components.md index 09de7a8ce..4cb2a3762 100644 --- a/en/enable-tls-between-components.md +++ b/en/enable-tls-between-components.md @@ -1337,7 +1337,7 @@ In this step, you need to perform the following operations: spec: tlsCluster: enabled: true - version: v7.1.0 + version: v7.1.1 timezone: UTC pvReclaimPolicy: Retain pd: @@ -1396,7 +1396,7 @@ In this step, you need to perform the following operations: version: 7.5.11 initializer: baseImage: pingcap/tidb-monitor-initializer - version: v7.1.0 + version: v7.1.1 reloader: baseImage: pingcap/tidb-monitor-reloader version: v1.0.1 diff --git a/en/enable-tls-for-dm.md b/en/enable-tls-for-dm.md index 6600aaee2..aad65de5f 100644 --- a/en/enable-tls-for-dm.md +++ b/en/enable-tls-for-dm.md @@ -518,7 +518,7 @@ metadata: spec: tlsCluster: enabled: true - version: v7.1.0 + version: v7.1.1 pvReclaimPolicy: Retain discovery: {} master: @@ -588,7 +588,7 @@ metadata: name: ${cluster_name} namespace: ${namespace} spec: - 
version: v7.1.0 + version: v7.1.1 pvReclaimPolicy: Retain discovery: {} tlsClientSecretNames: diff --git a/en/enable-tls-for-mysql-client.md b/en/enable-tls-for-mysql-client.md index fb02d2b35..6e1c43df7 100644 --- a/en/enable-tls-for-mysql-client.md +++ b/en/enable-tls-for-mysql-client.md @@ -554,7 +554,7 @@ In this step, you create a TiDB cluster and perform the following operations: name: ${cluster_name} namespace: ${namespace} spec: - version: v7.1.0 + version: v7.1.1 timezone: UTC pvReclaimPolicy: Retain pd: diff --git a/en/get-started.md b/en/get-started.md index 6e5399dd7..d081adf94 100644 --- a/en/get-started.md +++ b/en/get-started.md @@ -462,12 +462,12 @@ APPROXIMATE_KEYS: 0 ```sql mysql> select tidb_version()\G *************************** 1. row *************************** - tidb_version(): Release Version: v7.1.0 + tidb_version(): Release Version: v7.1.1 Edition: Community - Git Commit Hash: 635a4362235e8a3c0043542e629532e3c7bb2756 - Git Branch: heads/refs/tags/v7.1.0 - UTC Build Time: 2023-05-30 10:58:57 - GoVersion: go1.20.3 + Git Commit Hash: cf441574864be63938524e7dfcf7cc659edc3dd8 + Git Branch: heads/refs/tags/v7.1.1 + UTC Build Time: 2023-07-19 10:16:40 + GoVersion: go1.20.6 Race Enabled: false TiKV Min Version: 6.2.0-alpha Check Table Before Drop: false @@ -652,12 +652,12 @@ Note that `nightly` is not a fixed version and the version might vary depending ``` *************************** 1. 
row *************************** -tidb_version(): Release Version: v7.1.0 +tidb_version(): Release Version: v7.1.1 Edition: Community -Git Commit Hash: 635a4362235e8a3c0043542e629532e3c7bb2756 -Git Branch: heads/refs/tags/v7.1.0 -UTC Build Time: 2023-05-30 10:58:57 -GoVersion: go1.20.3 +Git Commit Hash: cf441574864be63938524e7dfcf7cc659edc3dd8 +Git Branch: heads/refs/tags/v7.1.1 +UTC Build Time: 2023-07-19 10:16:40 +GoVersion: go1.20.6 Race Enabled: false TiKV Min Version: 6.2.0-alpha Check Table Before Drop: false diff --git a/en/monitor-a-tidb-cluster.md b/en/monitor-a-tidb-cluster.md index ff237fb3e..75ec74cb4 100644 --- a/en/monitor-a-tidb-cluster.md +++ b/en/monitor-a-tidb-cluster.md @@ -51,7 +51,7 @@ spec: type: NodePort initializer: baseImage: pingcap/tidb-monitor-initializer - version: v7.1.0 + version: v7.1.1 reloader: baseImage: pingcap/tidb-monitor-reloader version: v1.0.1 @@ -173,7 +173,7 @@ spec: type: NodePort initializer: baseImage: pingcap/tidb-monitor-initializer - version: v7.1.0 + version: v7.1.1 reloader: baseImage: pingcap/tidb-monitor-reloader version: v1.0.1 @@ -232,7 +232,7 @@ spec: foo: "bar" initializer: baseImage: pingcap/tidb-monitor-initializer - version: v7.1.0 + version: v7.1.1 reloader: baseImage: pingcap/tidb-monitor-reloader version: v1.0.1 @@ -274,7 +274,7 @@ spec: type: ClusterIP initializer: baseImage: pingcap/tidb-monitor-initializer - version: v7.1.0 + version: v7.1.1 reloader: baseImage: pingcap/tidb-monitor-reloader version: v1.0.1 @@ -357,7 +357,7 @@ spec: type: NodePort initializer: baseImage: pingcap/tidb-monitor-initializer - version: v7.1.0 + version: v7.1.1 reloader: baseImage: pingcap/tidb-monitor-reloader version: v1.0.1 diff --git a/en/pd-recover.md b/en/pd-recover.md index b61b0b039..a877f6d5b 100644 --- a/en/pd-recover.md +++ b/en/pd-recover.md @@ -18,7 +18,7 @@ PD Recover is a disaster recovery tool of [PD](https://docs.pingcap.com/tidb/sta wget 
https://download.pingcap.org/tidb-community-toolkit-${version}-linux-amd64.tar.gz ``` - In the command above, `${version}` is the version of the TiDB cluster, such as `v7.1.0`. + In the command above, `${version}` is the version of the TiDB cluster, such as `v7.1.1`. 2. Unpack the TiDB package: diff --git a/en/restart-a-tidb-cluster.md index 6becd79a6..81d2763de 100644 --- a/en/restart-a-tidb-cluster.md +++ b/en/restart-a-tidb-cluster.md @@ -32,7 +32,7 @@ kind: TidbCluster metadata: name: basic spec: - version: v7.1.0 + version: v7.1.1 timezone: UTC pvReclaimPolicy: Delete pd: diff --git a/en/restore-from-aws-s3-by-snapshot.md index 5054a9369..54aa76008 100644 --- a/en/restore-from-aws-s3-by-snapshot.md +++ b/en/restore-from-aws-s3-by-snapshot.md @@ -21,7 +21,7 @@ The restore method described in this document is implemented based on CustomReso backupType: full restoreMode: volume-snapshot serviceAccount: tidb-backup-manager - toolImage: pingcap/br:v7.1.0 + toolImage: pingcap/br:v7.1.1 br: cluster: basic clusterNamespace: tidb-cluster diff --git a/en/upgrade-a-tidb-cluster.md index 05836615a..5389f0eb0 100644 --- a/en/upgrade-a-tidb-cluster.md +++ b/en/upgrade-a-tidb-cluster.md @@ -54,7 +54,7 @@ During the rolling update, TiDB Operator automatically completes Leader transfer - The `version` field has following formats: + The `version` field has the following formats: - - `spec.version`: the format is `imageTag`, such as `v7.1.0` + - `spec.version`: the format is `imageTag`, such as `v7.1.1` - `spec.<component>.version`: the format is `imageTag`, such as `v3.1.0` 2.
Check the upgrade progress: diff --git a/zh/access-dashboard.md b/zh/access-dashboard.md index 6e4ebbb56..c0a7f724f 100644 --- a/zh/access-dashboard.md +++ b/zh/access-dashboard.md @@ -235,7 +235,7 @@ spec: ngMonitoring: requests: storage: 10Gi - version: v7.1.0 + version: v7.1.1 # storageClassName: default baseImage: pingcap/ng-monitoring EOF diff --git a/zh/advanced-statefulset.md b/zh/advanced-statefulset.md index f380d0e6e..ae9b130c0 100644 --- a/zh/advanced-statefulset.md +++ b/zh/advanced-statefulset.md @@ -93,7 +93,7 @@ kind: TidbCluster metadata: name: asts spec: - version: v7.1.0 + version: v7.1.1 timezone: UTC pvReclaimPolicy: Delete pd: @@ -145,7 +145,7 @@ metadata: tikv.tidb.pingcap.com/delete-slots: '[1]' name: asts spec: - version: v7.1.0 + version: v7.1.1 timezone: UTC pvReclaimPolicy: Delete pd: @@ -199,7 +199,7 @@ metadata: tikv.tidb.pingcap.com/delete-slots: '[]' name: asts spec: - version: v7.1.0 + version: v7.1.1 timezone: UTC pvReclaimPolicy: Delete pd: diff --git a/zh/aggregate-multiple-cluster-monitor-data.md b/zh/aggregate-multiple-cluster-monitor-data.md index 01f7fa355..31598d80a 100644 --- a/zh/aggregate-multiple-cluster-monitor-data.md +++ b/zh/aggregate-multiple-cluster-monitor-data.md @@ -170,7 +170,7 @@ spec: version: 7.5.11 initializer: baseImage: registry.cn-beijing.aliyuncs.com/tidb/tidb-monitor-initializer - version: v7.1.0 + version: v7.1.1 reloader: baseImage: registry.cn-beijing.aliyuncs.com/tidb/tidb-monitor-reloader version: v1.0.1 diff --git a/zh/backup-restore-faq.md b/zh/backup-restore-faq.md index 4ef73cc54..8f9979979 100644 --- a/zh/backup-restore-faq.md +++ b/zh/backup-restore-faq.md @@ -186,7 +186,7 @@ error="rpc error: code = Unavailable desc = keepalive watchdog timeout" 2. 
编辑 TiDB 集群配置，调大 TiKV `keepalive` 参数: - ```toml + ```yaml config: | [server] grpc-keepalive-time = "500s" @@ -208,7 +208,7 @@ error="rpc error: code = Unavailable desc = keepalive watchdog timeout" backupType: full restoreMode: volume-snapshot serviceAccount: tidb-backup-manager - toolImage: pingcap/br:v7.1.0 + toolImage: pingcap/br:v7.1.1 br: cluster: basic clusterNamespace: tidb-cluster diff --git a/zh/configure-a-tidb-cluster.md index a5dee0daa..33e2758e2 100644 --- a/zh/configure-a-tidb-cluster.md +++ b/zh/configure-a-tidb-cluster.md @@ -41,9 +41,9 @@ aliases: ['/docs-cn/tidb-in-kubernetes/dev/configure-a-tidb-cluster/','/zh/tidb- 相关参数的格式如下: -- `spec.version`，格式为 `imageTag`，例如 `v7.1.0` +- `spec.version`，格式为 `imageTag`，例如 `v7.1.1` - `spec.<component>.baseImage`，格式为 `imageName`，例如 `pingcap/tidb` -- `spec.<component>.version`，格式为 `imageTag`，例如 `v7.1.0` +- `spec.<component>.version`，格式为 `imageTag`，例如 `v7.1.1` ### 推荐配置 diff --git a/zh/deploy-heterogeneous-tidb-cluster.md index 18aca309a..4ee74be9c 100644 --- a/zh/deploy-heterogeneous-tidb-cluster.md +++ b/zh/deploy-heterogeneous-tidb-cluster.md @@ -50,7 +50,7 @@ summary: 本文档介绍如何为已有的 TiDB 集群部署一个异构集群 name: ${heterogeneous_cluster_name} spec: configUpdateStrategy: RollingUpdate - version: v7.1.0 + version: v7.1.1 timezone: UTC pvReclaimPolicy: Delete discovery: {} @@ -129,7 +129,7 @@ summary: 本文档介绍如何为已有的 TiDB 集群部署一个异构集群 tlsCluster: enabled: true configUpdateStrategy: RollingUpdate - version: v7.1.0 + version: v7.1.1 timezone: UTC pvReclaimPolicy: Delete discovery: {} @@ -219,7 +219,7 @@ summary: 本文档介绍如何为已有的 TiDB 集群部署一个异构集群 version: 7.5.11 initializer: baseImage: pingcap/tidb-monitor-initializer - version: v7.1.0 + version: v7.1.1 reloader: baseImage: pingcap/tidb-monitor-reloader version: v1.0.1 diff --git a/zh/deploy-on-gcp-gke.md index 05dcfe91f..ddbadd18c 100644 --- a/zh/deploy-on-gcp-gke.md +++ b/zh/deploy-on-gcp-gke.md @@ -270,7 +270,7 @@ gcloud
compute instances create bastion \ $ mysql --comments -h 10.128.15.243 -P 4000 -u root Welcome to the MariaDB monitor. Commands end with ; or \g. Your MySQL connection id is 7823 - Server version: 5.7.25-TiDB-v7.1.0 TiDB Server (Apache License 2.0) Community Edition, MySQL 5.7 compatible + Server version: 5.7.25-TiDB-v7.1.1 TiDB Server (Apache License 2.0) Community Edition, MySQL 5.7 compatible Copyright (c) 2000, 2022, Oracle and/or its affiliates. diff --git a/zh/deploy-on-general-kubernetes.md index 36700096c..8790bcf67 100644 --- a/zh/deploy-on-general-kubernetes.md +++ b/zh/deploy-on-general-kubernetes.md @@ -44,18 +44,18 @@ aliases: ['/docs-cn/tidb-in-kubernetes/dev/deploy-on-general-kubernetes/','/zh/t 如果服务器没有外网，需要在有外网的机器上将 TiDB 集群用到的 Docker 镜像下载下来并上传到服务器上，然后使用 `docker load` 将 Docker 镜像安装到服务器上。 - 部署一套 TiDB 集群会用到下面这些 Docker 镜像（假设 TiDB 集群的版本是 v7.1.0）: + 部署一套 TiDB 集群会用到下面这些 Docker 镜像（假设 TiDB 集群的版本是 v7.1.1）: ```shell - pingcap/pd:v7.1.0 - pingcap/tikv:v7.1.0 - pingcap/tidb:v7.1.0 - pingcap/tidb-binlog:v7.1.0 - pingcap/ticdc:v7.1.0 - pingcap/tiflash:v7.1.0 + pingcap/pd:v7.1.1 + pingcap/tikv:v7.1.1 + pingcap/tidb:v7.1.1 + pingcap/tidb-binlog:v7.1.1 + pingcap/ticdc:v7.1.1 + pingcap/tiflash:v7.1.1 pingcap/tidb-monitor-reloader:v1.0.1 - pingcap/tidb-monitor-initializer:v7.1.0 + pingcap/tidb-monitor-initializer:v7.1.1 grafana/grafana:7.5.11 prom/prometheus:v2.18.1 busybox:1.26.2 ``` @@ -65,27 +65,27 @@ aliases: ['/docs-cn/tidb-in-kubernetes/dev/deploy-on-general-kubernetes/','/zh/t {{< copyable "shell-regular" >}} ```shell - docker pull pingcap/pd:v7.1.0 - docker pull pingcap/tikv:v7.1.0 - docker pull pingcap/tidb:v7.1.0 - docker pull pingcap/tidb-binlog:v7.1.0 - docker pull pingcap/ticdc:v7.1.0 - docker pull pingcap/tiflash:v7.1.0 + docker pull pingcap/pd:v7.1.1 + docker pull pingcap/tikv:v7.1.1 + docker pull pingcap/tidb:v7.1.1 + docker pull pingcap/tidb-binlog:v7.1.1 + docker pull pingcap/ticdc:v7.1.1 + docker pull pingcap/tiflash:v7.1.1 docker pull pingcap/tidb-monitor-reloader:v1.0.1 - docker pull pingcap/tidb-monitor-initializer:v7.1.0 + docker pull pingcap/tidb-monitor-initializer:v7.1.1 docker pull grafana/grafana:7.5.11 docker pull prom/prometheus:v2.18.1 docker pull busybox:1.26.2 - docker save -o pd-v7.1.0.tar pingcap/pd:v7.1.0 - docker save -o tikv-v7.1.0.tar pingcap/tikv:v7.1.0 - docker save -o tidb-v7.1.0.tar pingcap/tidb:v7.1.0 - docker save -o tidb-binlog-v7.1.0.tar pingcap/tidb-binlog:v7.1.0 - docker save -o ticdc-v7.1.0.tar pingcap/ticdc:v7.1.0 - docker save -o tiflash-v7.1.0.tar pingcap/tiflash:v7.1.0 + docker save -o pd-v7.1.1.tar pingcap/pd:v7.1.1 + docker save -o tikv-v7.1.1.tar pingcap/tikv:v7.1.1 + docker save -o tidb-v7.1.1.tar pingcap/tidb:v7.1.1 + docker save -o tidb-binlog-v7.1.1.tar pingcap/tidb-binlog:v7.1.1 + docker save -o ticdc-v7.1.1.tar pingcap/ticdc:v7.1.1 + docker save -o tiflash-v7.1.1.tar pingcap/tiflash:v7.1.1 docker save -o tidb-monitor-reloader-v1.0.1.tar pingcap/tidb-monitor-reloader:v1.0.1 - docker save -o tidb-monitor-initializer-v7.1.0.tar pingcap/tidb-monitor-initializer:v7.1.0 - docker save -o grafana-6.0.1.tar grafana/grafana:7.5.11 + docker save -o tidb-monitor-initializer-v7.1.1.tar pingcap/tidb-monitor-initializer:v7.1.1 + docker save -o grafana-7.5.11.tar grafana/grafana:7.5.11 docker save -o prometheus-v2.18.1.tar prom/prometheus:v2.18.1 docker save -o busybox-1.26.2.tar busybox:1.26.2 ``` @@ -95,14 +95,14 @@ aliases: ['/docs-cn/tidb-in-kubernetes/dev/deploy-on-general-kubernetes/','/zh/t {{< copyable "shell-regular" >}} ```shell - docker load -i pd-v7.1.0.tar - docker load -i tikv-v7.1.0.tar - docker load -i tidb-v7.1.0.tar - docker load -i tidb-binlog-v7.1.0.tar - docker load -i ticdc-v7.1.0.tar - docker load -i tiflash-v7.1.0.tar + docker load -i pd-v7.1.1.tar + docker load -i tikv-v7.1.1.tar + docker load -i tidb-v7.1.1.tar + docker load -i tidb-binlog-v7.1.1.tar + docker load -i ticdc-v7.1.1.tar + docker load -i tiflash-v7.1.1.tar docker load -i tidb-monitor-reloader-v1.0.1.tar - docker load -i tidb-monitor-initializer-v7.1.0.tar + docker load -i tidb-monitor-initializer-v7.1.1.tar - docker load -i grafana-6.0.1.tar + docker load -i grafana-7.5.11.tar docker load -i prometheus-v2.18.1.tar docker load -i busybox-1.26.2.tar diff --git a/zh/deploy-tidb-binlog.md index 68272e3c1..785d90b00 100644 --- a/zh/deploy-tidb-binlog.md +++ b/zh/deploy-tidb-binlog.md @@ -26,7 +26,7 @@ spec ... pump: baseImage: pingcap/tidb-binlog - version: v7.1.0 + version: v7.1.1 replicas: 1 storageClassName: local-storage requests: @@ -45,7 +45,7 @@ spec ... pump: baseImage: pingcap/tidb-binlog - version: v7.1.0 + version: v7.1.1 replicas: 1 storageClassName: local-storage requests: @@ -182,7 +182,7 @@ spec ```yaml clusterName: example-tidb - clusterVersion: v7.1.0 + clusterVersion: v7.1.1 baseImage: pingcap/tidb-binlog storageClassName: local-storage storage: 10Gi diff --git a/zh/deploy-tidb-cluster-across-multiple-kubernetes.md index 5f98d5960..5c16b2de3 100644 --- a/zh/deploy-tidb-cluster-across-multiple-kubernetes.md +++ b/zh/deploy-tidb-cluster-across-multiple-kubernetes.md @@ -52,7 +52,7 @@ kind: TidbCluster metadata: name: "${tc_name_1}" spec: - version: v7.1.0 + version: v7.1.1 timezone: UTC pvReclaimPolicy: Delete enableDynamicConfiguration: true @@ -106,7 +106,7 @@ kind: TidbCluster metadata: name: "${tc_name_2}" spec: - version: v7.1.0 + version: v7.1.1 timezone: UTC pvReclaimPolicy: Delete enableDynamicConfiguration: true @@ -379,7 +379,7 @@ kind: TidbCluster metadata: name: "${tc_name_1}" spec: - version: v7.1.0 + version: v7.1.1
   timezone: UTC
   tlsCluster:
     enabled: true
@@ -437,7 +437,7 @@ kind: TidbCluster
 metadata:
   name: "${tc_name_2}"
 spec:
-  version: v7.1.0
+  version: v7.1.1
   timezone: UTC
   tlsCluster:
     enabled: true
diff --git a/zh/deploy-tidb-dm.md b/zh/deploy-tidb-dm.md
index 18d3e5959..329b36da2 100644
--- a/zh/deploy-tidb-dm.md
+++ b/zh/deploy-tidb-dm.md
@@ -29,9 +29,9 @@ summary: 了解如何在 Kubernetes 上部署 TiDB DM 集群。
 
 相关参数的格式如下:
 
-- `spec.version`,格式为 `imageTag`,例如 `v7.1.0`
+- `spec.version`,格式为 `imageTag`,例如 `v7.1.1`
 - `spec.<master/worker>.baseImage`,格式为 `imageName`,例如 `pingcap/dm`
-- `spec.<master/worker>.version`,格式为 `imageTag`,例如 `v7.1.0`
+- `spec.<master/worker>.version`,格式为 `imageTag`,例如 `v7.1.1`
 
 TiDB Operator 仅支持部署 DM 2.0 及更新版本。
@@ -50,7 +50,7 @@ metadata:
   name: ${dm_cluster_name}
   namespace: ${namespace}
 spec:
-  version: v7.1.0
+  version: v7.1.1
   configUpdateStrategy: RollingUpdate
   pvReclaimPolicy: Retain
   discovery: {}
@@ -140,10 +140,10 @@ kubectl apply -f ${dm_cluster_name}.yaml -n ${namespace}
 
 如果服务器没有外网,需要按下述步骤在有外网的机器上将 DM 集群用到的 Docker 镜像下载下来并上传到服务器上,然后使用 `docker load` 将 Docker 镜像安装到服务器上:
 
-1. 部署一套 DM 集群会用到下面这些 Docker 镜像(假设 DM 集群的版本是 v7.1.0):
+1. 部署一套 DM 集群会用到下面这些 Docker 镜像(假设 DM 集群的版本是 v7.1.1):
 
     ```shell
-    pingcap/dm:v7.1.0
+    pingcap/dm:v7.1.1
     ```
 
 2. 通过下面的命令将所有这些镜像下载下来:
 
    {{< copyable "shell-regular" >}}
 
    ```shell
-    docker pull pingcap/dm:v7.1.0
+    docker pull pingcap/dm:v7.1.1
 
-    docker save -o dm-v7.1.0.tar pingcap/dm:v7.1.0
+    docker save -o dm-v7.1.1.tar pingcap/dm:v7.1.1
    ```
 
 3. 将这些 Docker 镜像上传到服务器上,并执行 `docker load` 将这些 Docker 镜像安装到服务器上:
@@ -161,7 +161,7 @@ kubectl apply -f ${dm_cluster_name}.yaml -n ${namespace}
 
    {{< copyable "shell-regular" >}}
 
    ```shell
-    docker load -i dm-v7.1.0.tar
+    docker load -i dm-v7.1.1.tar
    ```
 
 部署 DM 集群完成后,通过下面命令查看 Pod 状态:
diff --git a/zh/deploy-tidb-monitor-across-multiple-kubernetes.md b/zh/deploy-tidb-monitor-across-multiple-kubernetes.md
index 5308ac1c5..0e1cd16bd 100644
--- a/zh/deploy-tidb-monitor-across-multiple-kubernetes.md
+++ b/zh/deploy-tidb-monitor-across-multiple-kubernetes.md
@@ -75,7 +75,7 @@ Push 方式指利用 Prometheus remote-write 的特性,使位于不同 Kuberne
     #region: us-east-1
   initializer:
     baseImage: pingcap/tidb-monitor-initializer
-    version: v7.1.0
+    version: v7.1.1
   persistent: true
   storage: 100Gi
   storageClassName: ${storageclass_name}
@@ -159,7 +159,7 @@ Pull 方式是指从不同 Kubernetes 集群的 Prometheus 实例中拉取监控
     #region: us-east-1
   initializer:
     baseImage: pingcap/tidb-monitor-initializer
-    version: v7.1.0
+    version: v7.1.1
   persistent: true
   storage: 20Gi
   storageClassName: ${storageclass_name}
@@ -245,7 +245,7 @@ Pull 方式是指从不同 Kubernetes 集群的 Prometheus 实例中拉取监控
     #region: us-east-1
   initializer:
     baseImage: pingcap/tidb-monitor-initializer
-    version: v7.1.0
+    version: v7.1.1
   persistent: true
   storage: 20Gi
   storageClassName: ${storageclass_name}
@@ -293,7 +293,7 @@ scrape_configs:
 
    ```shell
    # set tidb version here
-   version=v7.1.0
+   version=v7.1.1
    docker run --rm -i -v ${PWD}/dashboards:/dashboards/ pingcap/tidb-monitor-initializer:${version} && \
    cd dashboards
    ```
diff --git a/zh/enable-monitor-shards.md b/zh/enable-monitor-shards.md
index 7f8370b73..792bd87f3 100644
--- a/zh/enable-monitor-shards.md
+++ b/zh/enable-monitor-shards.md
@@ -34,7 +34,7 @@ spec:
     version: v2.27.1
   initializer:
     baseImage: pingcap/tidb-monitor-initializer
-    version: v7.1.0
+    version: v7.1.1
   reloader:
     baseImage: pingcap/tidb-monitor-reloader
     version: v1.0.1
diff --git a/zh/enable-tls-between-components.md b/zh/enable-tls-between-components.md
index 4fe27797d..62044a924 100644
--- a/zh/enable-tls-between-components.md
+++ b/zh/enable-tls-between-components.md
@@ -1314,7 +1314,7 @@ aliases: ['/docs-cn/tidb-in-kubernetes/dev/enable-tls-between-components/']
 spec:
   tlsCluster:
     enabled: true
-  version: v7.1.0
+  version: v7.1.1
   timezone: UTC
   pvReclaimPolicy: Retain
   pd:
@@ -1373,7 +1373,7 @@ aliases: ['/docs-cn/tidb-in-kubernetes/dev/enable-tls-between-components/']
     version: 7.5.11
   initializer:
     baseImage: pingcap/tidb-monitor-initializer
-    version: v7.1.0
+    version: v7.1.1
   reloader:
     baseImage: pingcap/tidb-monitor-reloader
     version: v1.0.1
diff --git a/zh/enable-tls-for-dm.md b/zh/enable-tls-for-dm.md
index 036440fe8..148272ac5 100644
--- a/zh/enable-tls-for-dm.md
+++ b/zh/enable-tls-for-dm.md
@@ -491,7 +491,7 @@ metadata:
 spec:
   tlsCluster:
     enabled: true
-  version: v7.1.0
+  version: v7.1.1
   pvReclaimPolicy: Retain
   discovery: {}
   master:
@@ -559,7 +559,7 @@ metadata:
   name: ${cluster_name}
   namespace: ${namespace}
 spec:
-  version: v7.1.0
+  version: v7.1.1
   pvReclaimPolicy: Retain
   discovery: {}
   tlsClientSecretNames:
diff --git a/zh/enable-tls-for-mysql-client.md b/zh/enable-tls-for-mysql-client.md
index 92eba73c1..0b0af5d3b 100644
--- a/zh/enable-tls-for-mysql-client.md
+++ b/zh/enable-tls-for-mysql-client.md
@@ -550,7 +550,7 @@ aliases: ['/docs-cn/tidb-in-kubernetes/dev/enable-tls-for-mysql-client/']
   name: ${cluster_name}
   namespace: ${namespace}
 spec:
-  version: v7.1.0
+  version: v7.1.1
   timezone: UTC
   pvReclaimPolicy: Retain
   pd:
diff --git a/zh/get-started.md b/zh/get-started.md
index bab588584..0d65493a7 100644
--- a/zh/get-started.md
+++ b/zh/get-started.md
@@ -490,7 +490,7 @@ mysql --comments -h 127.0.0.1 -P 14000 -u root
 
 ```
 Welcome to the MariaDB monitor. Commands end with ; or \g.
 Your MySQL connection id is 178505
-Server version: 5.7.25-TiDB-v7.1.0 TiDB Server (Apache License 2.0) Community Edition, MySQL 5.7 compatible
+Server version: 5.7.25-TiDB-v7.1.1 TiDB Server (Apache License 2.0) Community Edition, MySQL 5.7 compatible
 
 Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.
@@ -539,12 +539,12 @@ mysql> select * from information_schema.tikv_region_status where db_name=databas
 
 ```sql
 mysql> select tidb_version()\G
 *************************** 1. row ***************************
-tidb_version(): Release Version: v7.1.0
+tidb_version(): Release Version: v7.1.1
        Edition: Community
-Git Commit Hash: 635a4362235e8a3c0043542e629532e3c7bb2756
-    Git Branch: heads/refs/tags/v7.1.0
-UTC Build Time: 2023-05-30 10:58:57
-     GoVersion: go1.20.3
+Git Commit Hash: cf441574864be63938524e7dfcf7cc659edc3dd8
+    Git Branch: heads/refs/tags/v7.1.1
+UTC Build Time: 2023-07-19 10:16:40
+     GoVersion: go1.20.6
  Race Enabled: false
 TiKV Min Version: 6.2.0-alpha
 Check Table Before Drop: false
@@ -735,12 +735,12 @@ mysql --comments -h 127.0.0.1 -P 24000 -u root -e 'select tidb_version()\G'
 
 ```
 *************************** 1. row ***************************
-tidb_version(): Release Version: v7.1.0
+tidb_version(): Release Version: v7.1.1
 Edition: Community
-Git Commit Hash: 635a4362235e8a3c0043542e629532e3c7bb2756
-Git Branch: heads/refs/tags/v7.1.0
-UTC Build Time: 2023-05-30 10:58:57
-GoVersion: go1.20.3
+Git Commit Hash: cf441574864be63938524e7dfcf7cc659edc3dd8
+Git Branch: heads/refs/tags/v7.1.1
+UTC Build Time: 2023-07-19 10:16:40
+GoVersion: go1.20.6
 Race Enabled: false
 TiKV Min Version: 6.2.0-alpha
 Check Table Before Drop: false
diff --git a/zh/monitor-a-tidb-cluster.md b/zh/monitor-a-tidb-cluster.md
index 5a791548f..8445332a0 100644
--- a/zh/monitor-a-tidb-cluster.md
+++ b/zh/monitor-a-tidb-cluster.md
@@ -49,7 +49,7 @@ spec:
     type: NodePort
   initializer:
     baseImage: pingcap/tidb-monitor-initializer
-    version: v7.1.0
+    version: v7.1.1
   reloader:
     baseImage: pingcap/tidb-monitor-reloader
     version: v1.0.1
@@ -171,7 +171,7 @@ spec:
     type: NodePort
   initializer:
     baseImage: pingcap/tidb-monitor-initializer
-    version: v7.1.0
+    version: v7.1.1
   reloader:
     baseImage: pingcap/tidb-monitor-reloader
     version: v1.0.1
@@ -228,7 +228,7 @@ spec:
       foo: "bar"
   initializer:
     baseImage: pingcap/tidb-monitor-initializer
-    version: v7.1.0
+    version: v7.1.1
   reloader:
     baseImage: pingcap/tidb-monitor-reloader
     version: v1.0.1
@@ -270,7 +270,7 @@ spec:
     type: ClusterIP
   initializer:
     baseImage: pingcap/tidb-monitor-initializer
-    version: v7.1.0
+    version: v7.1.1
   reloader:
     baseImage: pingcap/tidb-monitor-reloader
     version: v1.0.1
@@ -351,7 +351,7 @@ spec:
     type: NodePort
   initializer:
     baseImage: pingcap/tidb-monitor-initializer
-    version: v7.1.0
+    version: v7.1.1
   reloader:
     baseImage: pingcap/tidb-monitor-reloader
     version: v1.0.1
diff --git a/zh/pd-recover.md b/zh/pd-recover.md
index f32e99d15..d5876e17b 100644
--- a/zh/pd-recover.md
+++ b/zh/pd-recover.md
@@ -18,7 +18,7 @@ PD Recover 是对 PD 进行灾难性恢复的工具,用于恢复无法正常
    wget https://download.pingcap.org/tidb-community-toolkit-${version}-linux-amd64.tar.gz
    ```
 
-   `${version}` 是 TiDB 集群版本,例如,`v7.1.0`。
+   `${version}` 是 TiDB 集群版本,例如,`v7.1.1`。
 
 2. 解压安装包:
diff --git a/zh/restart-a-tidb-cluster.md b/zh/restart-a-tidb-cluster.md
index cd2ef098f..55313310f 100644
--- a/zh/restart-a-tidb-cluster.md
+++ b/zh/restart-a-tidb-cluster.md
@@ -22,7 +22,7 @@ kind: TidbCluster
 metadata:
   name: basic
 spec:
-  version: v7.1.0
+  version: v7.1.1
   timezone: UTC
   pvReclaimPolicy: Delete
   pd:
diff --git a/zh/restore-from-aws-s3-by-snapshot.md b/zh/restore-from-aws-s3-by-snapshot.md
index ad3f0e526..104002232 100644
--- a/zh/restore-from-aws-s3-by-snapshot.md
+++ b/zh/restore-from-aws-s3-by-snapshot.md
@@ -21,7 +21,7 @@ summary: 介绍如何将存储在 S3 上的备份元数据以及 EBS 卷快照
   backupType: full
   restoreMode: volume-snapshot
   serviceAccount: tidb-backup-manager
-  toolImage: pingcap/br:v7.1.0
+  toolImage: pingcap/br:v7.1.1
   br:
     cluster: basic
     clusterNamespace: tidb-cluster
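The offline-install hunks above repeat the same `docker pull` / `docker save` pair for every core image, which is why a bump like this touches so many lines. A small generator makes the pattern explicit; this is a reviewer's sketch, not part of the patched docs, and the image list is taken from the zh/deploy-on-general-kubernetes.md hunks (the monitor/Prometheus images are omitted because their tags do not follow the cluster version):

```shell
#!/bin/sh
# Print the pull/save commands needed to mirror the core TiDB images for one
# version. Dry run: nothing is pulled; pipe the output to `sh` on a machine
# with network access, then copy the .tar files to the air-gapped server.
gen_mirror_cmds() {
    version="$1"
    for img in pingcap/pd pingcap/tikv pingcap/tidb \
               pingcap/tidb-binlog pingcap/ticdc pingcap/tiflash; do
        name=${img#pingcap/}   # pingcap/pd -> pd, matching the tar names in the doc
        echo "docker pull ${img}:${version}"
        echo "docker save -o ${name}-${version}.tar ${img}:${version}"
    done
}

gen_mirror_cmds v7.1.1
```

On the offline server the counterpart is simply `for f in *.tar; do docker load -i "$f"; done`, matching the `docker load` hunk.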
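Cherry-picks like cf63ac66 can leave `<<<<<<< HEAD` / `=======` / `>>>>>>>` conflict markers behind in the docs, and they are easy to miss in a large mechanical diff. A pre-merge grep catches them; this is an illustrative check, not part of the repository's tooling:

```shell
#!/bin/sh
# Pre-merge check: report any file that still contains Git conflict markers,
# as left behind by an unfinished merge or cherry-pick.
has_conflict_markers() {
    # Anchored to line start so '=======' used as a plain separator inside
    # prose or code blocks is less likely to false-positive.
    grep -nE '^(<{7} |={7}$|>{7} )' "$@"
}

# Example invocation (paths illustrative):
# has_conflict_markers en/*.md zh/*.md && { echo "unresolved conflicts" >&2; exit 1; }
```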
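A version bump of this kind only lands cleanly if every occurrence of the old tag is updated, so a follow-up scan for the old version string is a cheap sanity check on the applied patch. Again a sketch (directory paths are hypothetical):

```shell
#!/bin/sh
# After applying the bump, list any leftover references to the old version
# tag in Markdown files under the given directories.
find_stale_version() {
    old="$1"
    shift
    # -F: the tag is a literal string, so '.' must not act as a wildcard.
    grep -rn --include='*.md' -F "$old" "$@"
}

# Example (paths illustrative): find_stale_version v7.1.0 en/ zh/
```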