en,zh: Bump tidb components to v7.5.0 (#2467) #2474

Merged

Changes from 1 commit
2 changes: 1 addition & 1 deletion en/access-dashboard.md
@@ -238,7 +238,7 @@ To enable this feature, you need to deploy TidbNGMonitoring CR using TiDB Operator
   ngMonitoring:
     requests:
       storage: 10Gi
-    version: v7.1.1
+    version: v7.5.0
     # storageClassName: default
     baseImage: pingcap/ng-monitoring
 EOF
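For context, the hunk above comes from a `kubectl apply` heredoc. A minimal sketch of the full CR it configures might look like the following; the CR name, namespace, and monitored-cluster reference are assumptions, not part of this diff:

```shell
# Sketch only: deploy a TidbNGMonitoring CR pinned to the version bumped by this PR.
kubectl -n tidb-cluster apply -f - <<EOF
apiVersion: pingcap.com/v1alpha1
kind: TidbNGMonitoring
metadata:
  name: basic                # assumed name
spec:
  clusters:
  - name: basic              # assumed existing TidbCluster
    namespace: tidb-cluster  # assumed namespace
  ngMonitoring:
    requests:
      storage: 10Gi
    version: v7.5.0
    # storageClassName: default
    baseImage: pingcap/ng-monitoring
EOF
```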
6 changes: 3 additions & 3 deletions en/advanced-statefulset.md
@@ -95,7 +95,7 @@ kind: TidbCluster
 metadata:
   name: asts
 spec:
-  version: v7.1.1
+  version: v7.5.0
   timezone: UTC
   pvReclaimPolicy: Delete
   pd:
@@ -147,7 +147,7 @@ metadata:
     tikv.tidb.pingcap.com/delete-slots: '[1]'
   name: asts
 spec:
-  version: v7.1.1
+  version: v7.5.0
   timezone: UTC
   pvReclaimPolicy: Delete
   pd:
@@ -201,7 +201,7 @@ metadata:
     tikv.tidb.pingcap.com/delete-slots: '[]'
   name: asts
 spec:
-  version: v7.1.1
+  version: v7.5.0
   timezone: UTC
   pvReclaimPolicy: Delete
   pd:
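As a side note on the hunks above: the `tikv.tidb.pingcap.com/delete-slots` annotation can also be set imperatively rather than by editing the manifest. A minimal sketch, assuming the advanced StatefulSet controller is enabled and that the namespace is `tidb-cluster`:

```shell
# Sketch: mark TiKV slot 1 of the "asts" cluster for deletion when scaling in.
# Remember to also decrease spec.tikv.replicas by one; both names are assumptions.
kubectl -n tidb-cluster annotate --overwrite tidbcluster asts \
  tikv.tidb.pingcap.com/delete-slots='[1]'
```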
2 changes: 1 addition & 1 deletion en/aggregate-multiple-cluster-monitor-data.md
@@ -170,7 +170,7 @@ spec:
     version: 7.5.11
   initializer:
     baseImage: registry.cn-beijing.aliyuncs.com/tidb/tidb-monitor-initializer
-    version: v7.1.1
+    version: v7.5.0
   reloader:
     baseImage: registry.cn-beijing.aliyuncs.com/tidb/tidb-monitor-reloader
     version: v1.0.1
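To see where this `initializer.version` field lives, here is a minimal sketch of a complete TidbMonitor CR; the monitor name and the monitored cluster name are assumptions, while the image versions mirror this diff:

```shell
kubectl -n tidb-cluster apply -f - <<EOF
apiVersion: pingcap.com/v1alpha1
kind: TidbMonitor
metadata:
  name: monitor        # assumed name
spec:
  clusters:
  - name: basic        # assumed TidbCluster name
  prometheus:
    baseImage: prom/prometheus
    version: v2.18.1
  grafana:
    baseImage: grafana/grafana
    version: 7.5.11
  initializer:
    baseImage: registry.cn-beijing.aliyuncs.com/tidb/tidb-monitor-initializer
    version: v7.5.0    # the version bumped by this PR
  reloader:
    baseImage: registry.cn-beijing.aliyuncs.com/tidb/tidb-monitor-reloader
    version: v1.0.1
  imagePullPolicy: IfNotPresent
EOF
```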
12 changes: 11 additions & 1 deletion en/backup-restore-cr.md
@@ -20,11 +20,16 @@ This section introduces the fields in the `Backup` CR.
 
     - When using BR for backup, you can specify the BR version in this field.
         - If the field is not specified or the value is empty, the `pingcap/br:${tikv_version}` image is used for backup by default.
-        - If the BR version is specified in this field, such as `.spec.toolImage: pingcap/br:v7.1.1`, the image of the specified version is used for backup.
+        - If the BR version is specified in this field, such as `.spec.toolImage: pingcap/br:v7.5.0`, the image of the specified version is used for backup.
        - If an image is specified without the version, such as `.spec.toolImage: private/registry/br`, the `private/registry/br:${tikv_version}` image is used for backup.
     - When using Dumpling for backup, you can specify the Dumpling version in this field.
-        - If the Dumpling version is specified in this field, such as `spec.toolImage: pingcap/dumpling:v7.1.1`, the image of the specified version is used for backup.
-        - If the field is not specified, the Dumpling version specified in `TOOLKIT_VERSION` of the [Backup Manager Dockerfile](https://github.com/pingcap/tidb-operator/blob/v1.5.1/images/tidb-backup-manager/Dockerfile) is used for backup by default.
+        - If the Dumpling version is specified in this field, such as `spec.toolImage: pingcap/dumpling:v7.5.0`, the image of the specified version is used for backup.
+        - If the field is not specified, the Dumpling version specified in `TOOLKIT_VERSION` of the [Backup Manager Dockerfile](https://github.com/pingcap/tidb-operator/blob/master/images/tidb-backup-manager/Dockerfile) is used for backup by default.

 * `.spec.backupType`: the backup type. This field is valid only when you use BR for backup. Currently, the following three types are supported, and this field can be combined with the `.spec.tableFilter` field to configure table filter rules:
     * `full`: back up all databases in a TiDB cluster.
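Putting `.spec.toolImage` in context, a minimal `Backup` CR using BR pinned to v7.5.0 might look like this sketch; all names and the S3 storage details are assumptions:

```shell
kubectl -n tidb-cluster apply -f - <<EOF
apiVersion: pingcap.com/v1alpha1
kind: Backup
metadata:
  name: demo-full-backup         # assumed name
spec:
  toolImage: pingcap/br:v7.5.0   # pin BR instead of defaulting to the TiKV version tag
  backupType: full
  br:
    cluster: basic               # assumed TidbCluster name
    clusterNamespace: tidb-cluster
  s3:
    provider: aws
    region: us-west-2            # assumed region
    bucket: my-backup-bucket     # assumed bucket
    secretName: s3-secret        # assumed Secret holding access credentials
EOF
```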
@@ -260,8 +265,13 @@ This section introduces the fields in the `Restore` CR.
 * `.spec.metadata.namespace`: the namespace where the `Restore` CR is located.
 * `.spec.toolImage`: the tool image used by `Restore`. TiDB Operator supports this configuration starting from v1.1.9.
 
-    - When using BR for restoring, you can specify the BR version in this field. For example, `spec.toolImage: pingcap/br:v7.1.1`. If not specified, `pingcap/br:${tikv_version}` is used for restoring by default.
-    - When using Lightning for restoring, you can specify the Lightning version in this field. For example, `spec.toolImage: pingcap/lightning:v7.1.1`. If not specified, the Lightning version specified in `TOOLKIT_VERSION` of the [Backup Manager Dockerfile](https://github.com/pingcap/tidb-operator/blob/v1.5.1/images/tidb-backup-manager/Dockerfile) is used for restoring by default.
+    - When using BR for restoring, you can specify the BR version in this field. For example, `spec.toolImage: pingcap/br:v7.5.0`. If not specified, `pingcap/br:${tikv_version}` is used for restoring by default.
+    - When using Lightning for restoring, you can specify the Lightning version in this field. For example, `spec.toolImage: pingcap/lightning:v7.5.0`. If not specified, the Lightning version specified in `TOOLKIT_VERSION` of the [Backup Manager Dockerfile](https://github.com/pingcap/tidb-operator/blob/master/images/tidb-backup-manager/Dockerfile) is used for restoring by default.

 * `.spec.backupType`: the restore type. This field is valid only when you use BR to restore data. Currently, the following three types are supported, and this field can be combined with the `.spec.tableFilter` field to configure table filter rules:
     * `full`: restore all databases in a TiDB cluster.
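Similarly, a minimal `Restore` CR that pins the BR image might look like the following sketch; names and storage details are again assumptions:

```shell
kubectl -n tidb-cluster apply -f - <<EOF
apiVersion: pingcap.com/v1alpha1
kind: Restore
metadata:
  name: demo-restore        # assumed name
spec:
  toolImage: pingcap/br:v7.5.0
  backupType: full
  br:
    cluster: basic          # assumed TidbCluster name
    clusterNamespace: tidb-cluster
  s3:
    provider: aws
    region: us-west-2       # assumed region
    bucket: my-backup-bucket
    prefix: demo-full-backup   # assumed path of the backup to restore
    secretName: s3-secret      # assumed Secret holding access credentials
EOF
```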
2 changes: 1 addition & 1 deletion en/backup-restore-faq.md
@@ -208,7 +208,7 @@ Solution:
   backupType: full
   restoreMode: volume-snapshot
   serviceAccount: tidb-backup-manager
-  toolImage: pingcap/br:v7.1.1
+  toolImage: pingcap/br:v7.5.0
   br:
     cluster: basic
     clusterNamespace: tidb-cluster
4 changes: 2 additions & 2 deletions en/configure-a-tidb-cluster.md
@@ -41,11 +41,11 @@ Usually, components in a cluster are in the same version. It is recommended to c
 
 Here are the formats of the parameters:
 
-- `spec.version`: the format is `imageTag`, such as `v7.1.1`
+- `spec.version`: the format is `imageTag`, such as `v7.5.0`
 
 - `spec.<pd/tidb/tikv/pump/tiflash/ticdc>.baseImage`: the format is `imageName`, such as `pingcap/tidb`
 
-- `spec.<pd/tidb/tikv/pump/tiflash/ticdc>.version`: the format is `imageTag`, such as `v7.1.1`
+- `spec.<pd/tidb/tikv/pump/tiflash/ticdc>.version`: the format is `imageTag`, such as `v7.5.0`
 
 ### Recommended configuration
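A short sketch of how these two levels of version settings interact in a TidbCluster manifest; the cluster name, replica counts, and storage sizes are assumptions:

```shell
kubectl -n tidb-cluster apply -f - <<EOF
apiVersion: pingcap.com/v1alpha1
kind: TidbCluster
metadata:
  name: basic               # assumed name
spec:
  version: v7.5.0           # cluster-wide default imageTag
  pd:
    baseImage: pingcap/pd   # imageName; the tag comes from spec.version
    replicas: 3
    requests:
      storage: 10Gi
  tikv:
    baseImage: pingcap/tikv
    replicas: 3
    requests:
      storage: 100Gi
  tidb:
    baseImage: pingcap/tidb
    replicas: 2
    version: v7.5.0         # optional per-component override of spec.version
EOF
```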
2 changes: 1 addition & 1 deletion en/deploy-cluster-on-arm64.md
@@ -38,7 +38,7 @@ Before starting the process, make sure that Kubernetes clusters are deployed on
   name: ${cluster_name}
   namespace: ${cluster_namespace}
 spec:
-  version: "v7.1.1"
+  version: "v7.5.0"
   # ...
   helper:
     image: busybox:1.33.0
6 changes: 3 additions & 3 deletions en/deploy-heterogeneous-tidb-cluster.md
@@ -48,7 +48,7 @@ To deploy a heterogeneous cluster, do the following:
   name: ${heterogeneous_cluster_name}
 spec:
   configUpdateStrategy: RollingUpdate
-  version: v7.1.1
+  version: v7.5.0
   timezone: UTC
   pvReclaimPolicy: Delete
   discovery: {}
@@ -129,7 +129,7 @@ After creating certificates, take the following steps to deploy a TLS-enabled heterogeneous cluster:
   tlsCluster:
     enabled: true
   configUpdateStrategy: RollingUpdate
-  version: v7.1.1
+  version: v7.5.0
   timezone: UTC
   pvReclaimPolicy: Delete
   discovery: {}
@@ -218,7 +218,7 @@ If you need to deploy a monitoring component for a heterogeneous cluster, take the following steps:
     version: 7.5.11
   initializer:
     baseImage: pingcap/tidb-monitor-initializer
-    version: v7.1.1
+    version: v7.5.0
   reloader:
    baseImage: pingcap/tidb-monitor-reloader
     version: v1.0.1
2 changes: 1 addition & 1 deletion en/deploy-on-aws-eks.md
@@ -461,7 +461,7 @@ After the bastion host is created, you can connect to the bastion host via SSH and access the TiDB cluster:
 $ mysql --comments -h abfc623004ccb4cc3b363f3f37475af1-9774d22c27310bc1.elb.us-west-2.amazonaws.com -P 4000 -u root
 Welcome to the MariaDB monitor. Commands end with ; or \g.
 Your MySQL connection id is 1189
-Server version: 5.7.25-TiDB-v7.1.1 TiDB Server (Apache License 2.0) Community Edition, MySQL 5.7 compatible
+Server version: 8.0.11-TiDB-v7.5.0 TiDB Server (Apache License 2.0) Community Edition, MySQL 8.0 compatible
 
 Copyright (c) 2000, 2022, Oracle and/or its affiliates.
 
2 changes: 1 addition & 1 deletion en/deploy-on-azure-aks.md
@@ -342,7 +342,7 @@ After access to the internal host via SSH, you can access the TiDB cluster through the MySQL client:
 $ mysql --comments -h 20.240.0.7 -P 4000 -u root
 Welcome to the MariaDB monitor. Commands end with ; or \g.
 Your MySQL connection id is 1189
-Server version: 5.7.25-TiDB-v7.1.1 TiDB Server (Apache License 2.0) Community Edition, MySQL 5.7 compatible
+Server version: 8.0.11-TiDB-v7.5.0 TiDB Server (Apache License 2.0) Community Edition, MySQL 8.0 compatible
 
 Copyright (c) 2000, 2022, Oracle and/or its affiliates.
 
2 changes: 1 addition & 1 deletion en/deploy-on-gcp-gke.md
@@ -279,7 +279,7 @@ After the bastion host is created, you can connect to the bastion host via SSH and access the TiDB cluster:
 $ mysql --comments -h 10.128.15.243 -P 4000 -u root
 Welcome to the MariaDB monitor. Commands end with ; or \g.
 Your MySQL connection id is 7823
-Server version: 5.7.25-TiDB-v7.1.1 TiDB Server (Apache License 2.0) Community Edition, MySQL 5.7 compatible
+Server version: 8.0.11-TiDB-v7.5.0 TiDB Server (Apache License 2.0) Community Edition, MySQL 8.0 compatible
 
 Copyright (c) 2000, 2022, Oracle and/or its affiliates.
 
61 changes: 33 additions & 28 deletions en/deploy-on-general-kubernetes.md
@@ -42,17 +42,17 @@ This document describes how to deploy a TiDB cluster on general Kubernetes.
 
 If the server does not have an external network, you need to download the Docker image used by the TiDB cluster on a machine with Internet access and upload it to the server, and then use `docker load` to install the Docker image on the server.
 
-To deploy a TiDB cluster, you need the following Docker images (assuming the version of the TiDB cluster is v7.1.1):
+To deploy a TiDB cluster, you need the following Docker images (assuming the version of the TiDB cluster is v7.5.0):
 
 ```shell
-pingcap/pd:v7.1.1
-pingcap/tikv:v7.1.1
-pingcap/tidb:v7.1.1
-pingcap/tidb-binlog:v7.1.1
-pingcap/ticdc:v7.1.1
-pingcap/tiflash:v7.1.1
+pingcap/pd:v7.5.0
+pingcap/tikv:v7.5.0
+pingcap/tidb:v7.5.0
+pingcap/tidb-binlog:v7.5.0
+pingcap/ticdc:v7.5.0
+pingcap/tiflash:v7.5.0
 pingcap/tidb-monitor-reloader:v1.0.1
-pingcap/tidb-monitor-initializer:v7.1.1
+pingcap/tidb-monitor-initializer:v7.5.0
 grafana/grafana:7.5.11
 prom/prometheus:v2.18.1
 busybox:1.26.2
@@ -63,27 +63,32 @@ This document describes how to deploy a TiDB cluster on general Kubernetes.
 {{< copyable "shell-regular" >}}
 
 ```shell
-docker pull pingcap/pd:v7.1.1
-docker pull pingcap/tikv:v7.1.1
-docker pull pingcap/tidb:v7.1.1
-docker pull pingcap/tidb-binlog:v7.1.1
-docker pull pingcap/ticdc:v7.1.1
-docker pull pingcap/tiflash:v7.1.1
+docker pull pingcap/pd:v7.5.0
+docker pull pingcap/tikv:v7.5.0
+docker pull pingcap/tidb:v7.5.0
+docker pull pingcap/tidb-binlog:v7.5.0
+docker pull pingcap/ticdc:v7.5.0
+docker pull pingcap/tiflash:v7.5.0
 docker pull pingcap/tidb-monitor-reloader:v1.0.1
-docker pull pingcap/tidb-monitor-initializer:v7.1.1
+docker pull pingcap/tidb-monitor-initializer:v7.5.0
 docker pull grafana/grafana:7.5.11
 docker pull prom/prometheus:v2.18.1
 docker pull busybox:1.26.2
 
-docker save -o pd-v7.1.1.tar pingcap/pd:v7.1.1
-docker save -o tikv-v7.1.1.tar pingcap/tikv:v7.1.1
-docker save -o tidb-v7.1.1.tar pingcap/tidb:v7.1.1
-docker save -o tidb-binlog-v7.1.1.tar pingcap/tidb-binlog:v7.1.1
-docker save -o ticdc-v7.1.1.tar pingcap/ticdc:v7.1.1
-docker save -o tiflash-v7.1.1.tar pingcap/tiflash:v7.1.1
+docker save -o pd-v7.5.0.tar pingcap/pd:v7.5.0
+docker save -o tikv-v7.5.0.tar pingcap/tikv:v7.5.0
+docker save -o tidb-v7.5.0.tar pingcap/tidb:v7.5.0
+docker save -o tidb-binlog-v7.5.0.tar pingcap/tidb-binlog:v7.5.0
+docker save -o ticdc-v7.5.0.tar pingcap/ticdc:v7.5.0
+docker save -o tiflash-v7.5.0.tar pingcap/tiflash:v7.5.0
 docker save -o tidb-monitor-reloader-v1.0.1.tar pingcap/tidb-monitor-reloader:v1.0.1
-docker save -o tidb-monitor-initializer-v7.1.1.tar pingcap/tidb-monitor-initializer:v7.1.1
+docker save -o tidb-monitor-initializer-v7.5.0.tar pingcap/tidb-monitor-initializer:v7.5.0
 docker save -o grafana-7.5.11.tar grafana/grafana:7.5.11
 docker save -o prometheus-v2.18.1.tar prom/prometheus:v2.18.1
 docker save -o busybox-1.26.2.tar busybox:1.26.2
 ```
@@ -93,14 +98,14 @@ This document describes how to deploy a TiDB cluster on general Kubernetes.
 
 {{< copyable "shell-regular" >}}
 
 ```shell
-docker load -i pd-v7.1.1.tar
-docker load -i tikv-v7.1.1.tar
-docker load -i tidb-v7.1.1.tar
-docker load -i tidb-binlog-v7.1.1.tar
-docker load -i ticdc-v7.1.1.tar
-docker load -i tiflash-v7.1.1.tar
+docker load -i pd-v7.5.0.tar
+docker load -i tikv-v7.5.0.tar
+docker load -i tidb-v7.5.0.tar
+docker load -i tidb-binlog-v7.5.0.tar
+docker load -i ticdc-v7.5.0.tar
+docker load -i tiflash-v7.5.0.tar
 docker load -i tidb-monitor-reloader-v1.0.1.tar
-docker load -i tidb-monitor-initializer-v7.1.1.tar
+docker load -i tidb-monitor-initializer-v7.5.0.tar
 docker load -i grafana-7.5.11.tar
 docker load -i prometheus-v2.18.1.tar
 docker load -i busybox-1.26.2.tar
 ```
6 changes: 3 additions & 3 deletions en/deploy-tidb-binlog.md
@@ -28,7 +28,7 @@ TiDB Binlog is disabled in the TiDB cluster by default. To create a TiDB cluster
   ...
   pump:
     baseImage: pingcap/tidb-binlog
-    version: v7.1.1
+    version: v7.5.0
     replicas: 1
     storageClassName: local-storage
     requests:
@@ -47,7 +47,7 @@ TiDB Binlog is disabled in the TiDB cluster by default. To create a TiDB cluster
   ...
   pump:
     baseImage: pingcap/tidb-binlog
-    version: v7.1.1
+    version: v7.5.0
     replicas: 1
     storageClassName: local-storage
     requests:
@@ -188,7 +188,7 @@ To deploy multiple drainers using the `tidb-drainer` Helm chart for a TiDB cluster, take the following steps:
 
 ```yaml
 clusterName: example-tidb
-clusterVersion: v7.1.1
+clusterVersion: v7.5.0
 baseImage: pingcap/tidb-binlog
 storageClassName: local-storage
 storage: 10Gi
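After preparing a `values.yaml` like the hunk above, the chart is typically installed with something along these lines; the release name, namespace, and chart version here are assumptions:

```shell
# Sketch: install the tidb-drainer chart with the values file shown above.
helm install tidb-drainer pingcap/tidb-drainer \
  --namespace=tidb-cluster \
  --version=v1.5.1 \
  -f values.yaml
```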
8 changes: 4 additions & 4 deletions en/deploy-tidb-cluster-across-multiple-kubernetes.md
@@ -52,7 +52,7 @@ kind: TidbCluster
 metadata:
   name: "${tc_name_1}"
 spec:
-  version: v7.1.1
+  version: v7.5.0
   timezone: UTC
   pvReclaimPolicy: Delete
   enableDynamicConfiguration: true
@@ -106,7 +106,7 @@ kind: TidbCluster
 metadata:
   name: "${tc_name_2}"
 spec:
-  version: v7.1.1
+  version: v7.5.0
   timezone: UTC
   pvReclaimPolicy: Delete
   enableDynamicConfiguration: true
@@ -383,7 +383,7 @@ kind: TidbCluster
 metadata:
   name: "${tc_name_1}"
 spec:
-  version: v7.1.1
+  version: v7.5.0
   timezone: UTC
   tlsCluster:
     enabled: true
@@ -441,7 +441,7 @@ kind: TidbCluster
 metadata:
   name: "${tc_name_2}"
 spec:
-  version: v7.1.1
+  version: v7.5.0
   timezone: UTC
   tlsCluster:
     enabled: true
16 changes: 8 additions & 8 deletions en/deploy-tidb-dm.md
@@ -29,9 +29,9 @@ Usually, components in a cluster are in the same version. It is recommended to c
 
 The formats of the related parameters are as follows:
 
-- `spec.version`: the format is `imageTag`, such as `v7.1.1`.
+- `spec.version`: the format is `imageTag`, such as `v7.5.0`.
 - `spec.<master/worker>.baseImage`: the format is `imageName`, such as `pingcap/dm`.
-- `spec.<master/worker>.version`: the format is `imageTag`, such as `v7.1.1`.
+- `spec.<master/worker>.version`: the format is `imageTag`, such as `v7.5.0`.
 
 TiDB Operator only supports deploying DM 2.0 and later versions.
 
@@ -50,7 +50,7 @@ metadata:
   name: ${dm_cluster_name}
   namespace: ${namespace}
 spec:
-  version: v7.1.1
+  version: v7.5.0
   configUpdateStrategy: RollingUpdate
   pvReclaimPolicy: Retain
   discovery: {}
@@ -141,27 +141,27 @@ kubectl apply -f ${dm_cluster_name}.yaml -n ${namespace}
 
 If the server does not have an external network, you need to download the Docker image used by the DM cluster and upload the image to the server, and then execute `docker load` to install the Docker image on the server:
 
-1. Deploy a DM cluster requires the following Docker image (assuming the version of the DM cluster is v7.1.1):
+1. Deploying a DM cluster requires the following Docker image (assuming the version of the DM cluster is v7.5.0):
 
     ```shell
-    pingcap/dm:v7.1.1
+    pingcap/dm:v7.5.0
     ```
 
 2. To download the image, execute the following command:
 
     {{< copyable "shell-regular" >}}
 
     ```shell
-    docker pull pingcap/dm:v7.1.1
-    docker save -o dm-v7.1.1.tar pingcap/dm:v7.1.1
+    docker pull pingcap/dm:v7.5.0
+    docker save -o dm-v7.5.0.tar pingcap/dm:v7.5.0
     ```
 
 3. Upload the Docker image to the server, and execute `docker load` to install the image on the server:
 
    {{< copyable "shell-regular" >}}
 
     ```shell
-    docker load -i dm-v7.1.1.tar
+    docker load -i dm-v7.5.0.tar
     ```
 
 After deploying the DM cluster, execute the following command to view the Pod status:
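The command referenced at the end of this hunk is cut off by the diff view. A plausible sketch, with the label selector assumed from TiDB Operator's usual conventions:

```shell
# Sketch: list the Pods of the DM cluster; the label selector is an assumption.
kubectl get pods -n ${namespace} -l app.kubernetes.io/instance=${dm_cluster_name}
```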
2 changes: 1 addition & 1 deletion en/deploy-tidb-monitor-across-multiple-kubernetes.md
@@ -302,7 +302,7 @@ After collecting data using Prometheus, you can visualize multi-cluster monitoring data on Grafana:
 
 ```shell
 # set tidb version here
-version=v7.1.1
+version=v7.5.0
 docker run --rm -i -v ${PWD}/dashboards:/dashboards/ pingcap/tidb-monitor-initializer:${version} && \
 cd dashboards
 ```