en,zh: bump operator to v1.5.2 for release-1.5 #2487

Merged · 1 commit · Jan 19, 2024
2 changes: 1 addition & 1 deletion en/TOC.md
@@ -113,7 +113,7 @@
- [Advanced StatefulSet Controller](advanced-statefulset.md)
- [Admission Controller](enable-admission-webhook.md)
- [Sysbench Performance Test](benchmark-sysbench.md)
-- [API References](https://github.com/pingcap/tidb-operator/blob/v1.5.1/docs/api-references/docs.md)
+- [API References](https://github.com/pingcap/tidb-operator/blob/v1.5.2/docs/api-references/docs.md)
- [Cheat Sheet](cheat-sheet.md)
- [Required RBAC Rules](tidb-operator-rbac.md)
- Tools
2 changes: 1 addition & 1 deletion en/_index.md
@@ -70,7 +70,7 @@ hide_commit: true

<LearningPath label="Reference" icon="cloud-dev">

-[API Docs](https://github.com/pingcap/tidb-operator/blob/v1.5.1/docs/api-references/docs.md)
+[API Docs](https://github.com/pingcap/tidb-operator/blob/v1.5.2/docs/api-references/docs.md)

[Tools](https://docs.pingcap.com/tidb-in-kubernetes/dev/tidb-toolkit)

2 changes: 1 addition & 1 deletion en/access-dashboard.md
@@ -244,7 +244,7 @@ To enable this feature, you need to deploy TidbNGMonitoring CR using TiDB Operat
EOF
```

-For more configuration items of the TidbNGMonitoring CR, see [example in tidb-operator](https://github.com/pingcap/tidb-operator/blob/v1.5.1/examples/advanced/tidb-ng-monitoring.yaml).
+For more configuration items of the TidbNGMonitoring CR, see [example in tidb-operator](https://github.com/pingcap/tidb-operator/blob/v1.5.2/examples/advanced/tidb-ng-monitoring.yaml).

3. Enable Continuous Profiling.

4 changes: 2 additions & 2 deletions en/advanced-statefulset.md
@@ -21,15 +21,15 @@ The [advanced StatefulSet controller](https://github.com/pingcap/advanced-statef
{{< copyable "shell-regular" >}}

```shell
-kubectl apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/v1.5.1/manifests/advanced-statefulset-crd.v1beta1.yaml
+kubectl apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/v1.5.2/manifests/advanced-statefulset-crd.v1beta1.yaml
```

* For Kubernetes versions >= 1.16:

{{< copyable "shell-regular" >}}

```
-kubectl apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/v1.5.1/manifests/advanced-statefulset-crd.v1.yaml
+kubectl apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/v1.5.2/manifests/advanced-statefulset-crd.v1.yaml
```

2. Enable the `AdvancedStatefulSet` feature in `values.yaml` of the TiDB Operator chart:
6 changes: 3 additions & 3 deletions en/aggregate-multiple-cluster-monitor-data.md
@@ -24,7 +24,7 @@ Thanos provides [Thanos Query](https://thanos.io/tip/components/query.md/) compo
{{< copyable "shell-regular" >}}

```shell
-kubectl -n ${namespace} apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/v1.5.1/examples/monitor-with-thanos/tidb-monitor.yaml
+kubectl -n ${namespace} apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/v1.5.2/examples/monitor-with-thanos/tidb-monitor.yaml
```

2. Deploy the Thanos Query component.
@@ -34,7 +34,7 @@ Thanos provides [Thanos Query](https://thanos.io/tip/components/query.md/) compo
{{< copyable "shell-regular" >}}

```
-curl -sl -O https://raw.githubusercontent.com/pingcap/tidb-operator/v1.5.1/examples/monitor-with-thanos/thanos-query.yaml
+curl -sl -O https://raw.githubusercontent.com/pingcap/tidb-operator/v1.5.2/examples/monitor-with-thanos/thanos-query.yaml
```

2. Manually modify the `--store` parameter in the `thanos-query.yaml` file by updating `basic-prometheus:10901` to `basic-prometheus.${namespace}:10901`.
@@ -182,4 +182,4 @@ spec:

After RemoteWrite is enabled, Prometheus pushes the monitoring data to [Thanos Receiver](https://thanos.io/tip/components/receive.md/). For more information, refer to [the design of Thanos Receiver](https://thanos.io/v0.8/proposals/201812_thanos-remote-receive/).

-For details on the deployment, refer to [this example of integrating TidbMonitor with Thanos Receiver](https://github.com/pingcap/tidb-operator/tree/v1.5.1/examples/monitor-prom-remotewrite).
+For details on the deployment, refer to [this example of integrating TidbMonitor with Thanos Receiver](https://github.com/pingcap/tidb-operator/tree/v1.5.2/examples/monitor-prom-remotewrite).
4 changes: 2 additions & 2 deletions en/backup-restore-cr.md
@@ -24,7 +24,7 @@ This section introduces the fields in the `Backup` CR.
- If an image is specified without the version, such as `.spec.toolImage: private/registry/br`, the `private/registry/br:${tikv_version}` image is used for backup.
- When using Dumpling for backup, you can specify the Dumpling version in this field.
- If the Dumpling version is specified in this field, such as `spec.toolImage: pingcap/dumpling:v7.5.0`, the image of the specified version is used for backup.
-    - If the field is not specified, the Dumpling version specified in `TOOLKIT_VERSION` of the [Backup Manager Dockerfile](https://github.com/pingcap/tidb-operator/blob/v1.5.1/images/tidb-backup-manager/Dockerfile) is used for backup by default.
+    - If the field is not specified, the Dumpling version specified in `TOOLKIT_VERSION` of the [Backup Manager Dockerfile](https://github.com/pingcap/tidb-operator/blob/v1.5.2/images/tidb-backup-manager/Dockerfile) is used for backup by default.

* `.spec.backupType`: the backup type. This field is valid only when you use BR for backup. Currently, the following three types are supported, and this field can be combined with the `.spec.tableFilter` field to configure table filter rules:
* `full`: back up all databases in a TiDB cluster.
@@ -261,7 +261,7 @@ This section introduces the fields in the `Restore` CR.
* `.spec.toolImage`: the tool image used by `Restore`. TiDB Operator supports this configuration starting from v1.1.9.

- When using BR for restoring, you can specify the BR version in this field. For example, `spec.toolImage: pingcap/br:v7.5.0`. If not specified, `pingcap/br:${tikv_version}` is used for restoring by default.
--   When using Lightning for restoring, you can specify the Lightning version in this field. For example, `spec.toolImage: pingcap/lightning:v7.5.0`. If not specified, the Lightning version specified in `TOOLKIT_VERSION` of the [Backup Manager Dockerfile](https://github.com/pingcap/tidb-operator/blob/v1.5.1/images/tidb-backup-manager/Dockerfile) is used for restoring by default.
+-   When using Lightning for restoring, you can specify the Lightning version in this field. For example, `spec.toolImage: pingcap/lightning:v7.5.0`. If not specified, the Lightning version specified in `TOOLKIT_VERSION` of the [Backup Manager Dockerfile](https://github.com/pingcap/tidb-operator/blob/v1.5.2/images/tidb-backup-manager/Dockerfile) is used for restoring by default.

* `.spec.backupType`: the restore type. This field is valid only when you use BR to restore data. Currently, the following three types are supported, and this field can be combined with the `.spec.tableFilter` field to configure table filter rules:
* `full`: restore all databases in a TiDB cluster.
2 changes: 1 addition & 1 deletion en/backup-to-aws-s3-by-snapshot.md
@@ -42,7 +42,7 @@ The following sections exemplify how to back up data of the TiDB cluster `demo1`

### Step 1. Set up the environment for EBS volume snapshot backup

-1. Download the file [backup-rbac.yaml](https://github.com/pingcap/tidb-operator/blob/v1.5.1/manifests/backup/backup-rbac.yaml) to the backup server.
+1. Download the file [backup-rbac.yaml](https://github.com/pingcap/tidb-operator/blob/v1.5.2/manifests/backup/backup-rbac.yaml) to the backup server.

2. Create the RBAC-related resources required for the backup in the `test1` namespace by running the following command:

2 changes: 1 addition & 1 deletion en/backup-to-aws-s3-using-br.md
@@ -51,7 +51,7 @@ This document provides an example about how to back up the data of the `demo1` T
kubectl create namespace backup-test
```

-2. Download [backup-rbac.yaml](https://github.com/pingcap/tidb-operator/blob/v1.5.1/manifests/backup/backup-rbac.yaml), and execute the following command to create the role-based access control (RBAC) resources in the `backup-test` namespace:
+2. Download [backup-rbac.yaml](https://github.com/pingcap/tidb-operator/blob/v1.5.2/manifests/backup/backup-rbac.yaml), and execute the following command to create the role-based access control (RBAC) resources in the `backup-test` namespace:

```shell
kubectl apply -f backup-rbac.yaml -n backup-test
2 changes: 1 addition & 1 deletion en/backup-to-azblob-using-br.md
@@ -48,7 +48,7 @@ This document provides an example about how to back up the data of the `demo1` T
kubectl create namespace backup-test
```

-2. Download [backup-rbac.yaml](https://github.com/pingcap/tidb-operator/blob/v1.5.1/manifests/backup/backup-rbac.yaml), and execute the following command to create the role-based access control (RBAC) resources in the `backup-test` namespace:
+2. Download [backup-rbac.yaml](https://github.com/pingcap/tidb-operator/blob/v1.5.2/manifests/backup/backup-rbac.yaml), and execute the following command to create the role-based access control (RBAC) resources in the `backup-test` namespace:

```shell
kubectl apply -f backup-rbac.yaml -n backup-test
2 changes: 1 addition & 1 deletion en/backup-to-gcs-using-br.md
@@ -48,7 +48,7 @@ This document provides an example about how to back up the data of the `demo1` T
kubectl create namespace backup-test
```

-2. Download [backup-rbac.yaml](https://github.com/pingcap/tidb-operator/blob/v1.5.1/manifests/backup/backup-rbac.yaml), and execute the following command to create the role-based access control (RBAC) resources in the `test1` namespace:
+2. Download [backup-rbac.yaml](https://github.com/pingcap/tidb-operator/blob/v1.5.2/manifests/backup/backup-rbac.yaml), and execute the following command to create the role-based access control (RBAC) resources in the `test1` namespace:

```shell
kubectl apply -f backup-rbac.yaml -n backup-test
2 changes: 1 addition & 1 deletion en/backup-to-gcs.md
@@ -38,7 +38,7 @@ To better explain how to perform the backup operation, this document shows an ex

### Step 1: Prepare for ad-hoc full backup

-1. Download [backup-rbac.yaml](https://github.com/pingcap/tidb-operator/blob/v1.5.1/manifests/backup/backup-rbac.yaml) and execute the following command to create the role-based access control (RBAC) resources in the `test1` namespace:
+1. Download [backup-rbac.yaml](https://github.com/pingcap/tidb-operator/blob/v1.5.2/manifests/backup/backup-rbac.yaml) and execute the following command to create the role-based access control (RBAC) resources in the `test1` namespace:

{{< copyable "shell-regular" >}}

2 changes: 1 addition & 1 deletion en/backup-to-pv-using-br.md
@@ -33,7 +33,7 @@ This document provides an example about how to back up the data of the `demo1` T

### Step 1: Prepare for an ad-hoc backup

-1. Download [backup-rbac.yaml](https://github.com/pingcap/tidb-operator/blob/v1.5.1/manifests/backup/backup-rbac.yaml) to the server that runs the backup task.
+1. Download [backup-rbac.yaml](https://github.com/pingcap/tidb-operator/blob/v1.5.2/manifests/backup/backup-rbac.yaml) to the server that runs the backup task.

2. Execute the following command to create the role-based access control (RBAC) resources in the `test1` namespace:

4 changes: 2 additions & 2 deletions en/backup-to-s3.md
@@ -49,12 +49,12 @@ GRANT

### Step 1: Prepare for ad-hoc full backup

-1. Execute the following command to create the role-based access control (RBAC) resources in the `tidb-cluster` namespace based on [backup-rbac.yaml](https://raw.githubusercontent.com/pingcap/tidb-operator/v1.5.1/manifests/backup/backup-rbac.yaml):
+1. Execute the following command to create the role-based access control (RBAC) resources in the `tidb-cluster` namespace based on [backup-rbac.yaml](https://raw.githubusercontent.com/pingcap/tidb-operator/v1.5.2/manifests/backup/backup-rbac.yaml):

{{< copyable "shell-regular" >}}

```shell
-kubectl apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/v1.5.1/manifests/backup/backup-rbac.yaml -n tidb-cluster
+kubectl apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/v1.5.2/manifests/backup/backup-rbac.yaml -n tidb-cluster
```

2. Grant permissions to the remote storage.
6 changes: 3 additions & 3 deletions en/cheat-sheet.md
@@ -493,7 +493,7 @@ For example:
{{< copyable "shell-regular" >}}

```shell
-helm inspect values pingcap/tidb-operator --version=v1.5.1 > values-tidb-operator.yaml
+helm inspect values pingcap/tidb-operator --version=v1.5.2 > values-tidb-operator.yaml
```

### Deploy using Helm chart
@@ -509,7 +509,7 @@ For example:
{{< copyable "shell-regular" >}}

```shell
-helm install tidb-operator pingcap/tidb-operator --namespace=tidb-admin --version=v1.5.1 -f values-tidb-operator.yaml
+helm install tidb-operator pingcap/tidb-operator --namespace=tidb-admin --version=v1.5.2 -f values-tidb-operator.yaml
```

### View the deployed Helm release
@@ -533,7 +533,7 @@ For example:
{{< copyable "shell-regular" >}}

```shell
-helm upgrade tidb-operator pingcap/tidb-operator --version=v1.5.1 -f values-tidb-operator.yaml
+helm upgrade tidb-operator pingcap/tidb-operator --version=v1.5.2 -f values-tidb-operator.yaml
```

### Delete Helm release
2 changes: 1 addition & 1 deletion en/configure-a-tidb-cluster.md
@@ -24,7 +24,7 @@ If you are using a NUMA-based CPU, you need to enable `Static`'s CPU management

## Configure TiDB deployment

-To configure a TiDB deployment, you need to configure the `TiDBCluster` CR. Refer to the [TidbCluster example](https://github.com/pingcap/tidb-operator/blob/v1.5.1/examples/advanced/tidb-cluster.yaml) for an example. For the complete configurations of `TiDBCluster` CR, refer to [API documentation](https://github.com/pingcap/tidb-operator/blob/v1.5.1/docs/api-references/docs.md).
+To configure a TiDB deployment, you need to configure the `TiDBCluster` CR. Refer to the [TidbCluster example](https://github.com/pingcap/tidb-operator/blob/v1.5.2/examples/advanced/tidb-cluster.yaml) for an example. For the complete configurations of `TiDBCluster` CR, refer to [API documentation](https://github.com/pingcap/tidb-operator/blob/v1.5.2/docs/api-references/docs.md).

> **Note:**
>
4 changes: 2 additions & 2 deletions en/configure-storage-class.md
@@ -95,7 +95,7 @@ The `/mnt/ssd`, `/mnt/sharedssd`, `/mnt/monitoring`, and `/mnt/backup` directori
1. Download the deployment file for the local-volume-provisioner.

```shell
-wget https://raw.githubusercontent.com/pingcap/tidb-operator/v1.5.1/examples/local-pv/local-volume-provisioner.yaml
+wget https://raw.githubusercontent.com/pingcap/tidb-operator/v1.5.2/examples/local-pv/local-volume-provisioner.yaml
```

2. If you are using the same discovery directory as described in [Step 1: Pre-allocate local storage](#step-1-pre-allocate-local-storage), you can skip this step. If you are using a different path for the discovery directory than in the previous step, you need to modify the ConfigMap and DaemonSet spec.
Expand Down Expand Up @@ -163,7 +163,7 @@ The `/mnt/ssd`, `/mnt/sharedssd`, `/mnt/monitoring`, and `/mnt/backup` directori
3. Deploy the `local-volume-provisioner`.

```shell
-kubectl apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/v1.5.1/manifests/local-dind/local-volume-provisioner.yaml
+kubectl apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/v1.5.2/manifests/local-dind/local-volume-provisioner.yaml
```

4. Check the status of the Pod and PV.
2 changes: 1 addition & 1 deletion en/deploy-heterogeneous-tidb-cluster.md
@@ -165,7 +165,7 @@ After creating certificates, take the following steps to deploy a TLS-enabled he

In the configuration file, `spec.tlsCluster.enabled` controls whether to enable TLS between the components and `spec.tidb.tlsClient.enabled` controls whether to enable TLS for the MySQL client.

--   For more configurations of a TLS-enabled heterogeneous cluster, see the ['heterogeneous-tls'](https://github.com/pingcap/tidb-operator/tree/v1.5.1/examples/heterogeneous-tls) example.
+-   For more configurations of a TLS-enabled heterogeneous cluster, see the ['heterogeneous-tls'](https://github.com/pingcap/tidb-operator/tree/v1.5.2/examples/heterogeneous-tls) example.
- For more configurations and field meanings of a TiDB cluster, see the [TiDB cluster configuration document](configure-a-tidb-cluster.md).

2. In the configuration file of your heterogeneous cluster, modify the configurations of each node according to your need.
6 changes: 3 additions & 3 deletions en/deploy-on-alibaba-cloud.md
@@ -89,7 +89,7 @@ All the instances except ACK mandatory workers are deployed across availability
tikv_count = 3
tidb_count = 2
pd_count = 3
-operator_version = "v1.5.1"
+operator_version = "v1.5.2"
```

* To deploy TiFlash in the cluster, set `create_tiflash_node_pool = true` in `terraform.tfvars`. You can also configure the node count and instance type of the TiFlash node pool by modifying `tiflash_count` and `tiflash_instance_type`. By default, the value of `tiflash_count` is `2`, and the value of `tiflash_instance_type` is `ecs.i2.2xlarge`.
Expand Down Expand Up @@ -173,7 +173,7 @@ All the instances except ACK mandatory workers are deployed across availability
cp manifests/dashboard.yaml.example tidb-dashboard.yaml
```

-To complete the CR file configuration, refer to [TiDB Operator API documentation](https://github.com/pingcap/tidb-operator/blob/v1.5.1/docs/api-references/docs.md) and [Configure a TiDB Cluster](configure-a-tidb-cluster.md).
+To complete the CR file configuration, refer to [TiDB Operator API documentation](https://github.com/pingcap/tidb-operator/blob/v1.5.2/docs/api-references/docs.md) and [Configure a TiDB Cluster](configure-a-tidb-cluster.md).

* To deploy TiFlash, configure `spec.tiflash` in `db.yaml` as follows:

@@ -347,7 +347,7 @@ In the default configuration, the Terraform script creates a new VPC. To use the

### Configure the TiDB cluster

-See [TiDB Operator API Documentation](https://github.com/pingcap/tidb-operator/blob/v1.5.1/docs/api-references/docs.md) and [Configure a TiDB Cluster](configure-a-tidb-cluster.md).
+See [TiDB Operator API Documentation](https://github.com/pingcap/tidb-operator/blob/v1.5.2/docs/api-references/docs.md) and [Configure a TiDB Cluster](configure-a-tidb-cluster.md).

## Manage multiple TiDB clusters

10 changes: 5 additions & 5 deletions en/deploy-on-aws-eks.md
@@ -306,7 +306,7 @@ The following `c5d.4xlarge` example shows how to configure StorageClass for the

2. [Mount the local storage](https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner/blob/master/docs/operations.md#use-a-whole-disk-as-a-filesystem-pv) to the `/mnt/ssd` directory.

-3. According to the mounting configuration, modify the [local-volume-provisioner.yaml](https://raw.githubusercontent.com/pingcap/tidb-operator/v1.5.1/manifests/eks/local-volume-provisioner.yaml) file.
+3. According to the mounting configuration, modify the [local-volume-provisioner.yaml](https://raw.githubusercontent.com/pingcap/tidb-operator/v1.5.2/manifests/eks/local-volume-provisioner.yaml) file.

4. Deploy and create a `local-storage` storage class using the modified `local-volume-provisioner.yaml` file.

@@ -351,9 +351,9 @@ First, download the sample `TidbCluster` and `TidbMonitor` configuration files:
{{< copyable "shell-regular" >}}

```shell
-curl -O https://raw.githubusercontent.com/pingcap/tidb-operator/v1.5.1/examples/aws/tidb-cluster.yaml && \
-curl -O https://raw.githubusercontent.com/pingcap/tidb-operator/v1.5.1/examples/aws/tidb-monitor.yaml && \
-curl -O https://raw.githubusercontent.com/pingcap/tidb-operator/v1.5.1/examples/aws/tidb-dashboard.yaml
+curl -O https://raw.githubusercontent.com/pingcap/tidb-operator/v1.5.2/examples/aws/tidb-cluster.yaml && \
+curl -O https://raw.githubusercontent.com/pingcap/tidb-operator/v1.5.2/examples/aws/tidb-monitor.yaml && \
+curl -O https://raw.githubusercontent.com/pingcap/tidb-operator/v1.5.2/examples/aws/tidb-dashboard.yaml
```

Refer to [configure the TiDB cluster](configure-a-tidb-cluster.md) to further customize and configure the CR before applying.
@@ -668,4 +668,4 @@ Depending on the EKS cluster status, use different commands:

Finally, execute `kubectl -n tidb-cluster apply -f tidb-cluster.yaml` to update the TiDB cluster configuration.

-For detailed CR configuration, refer to [API references](https://github.com/pingcap/tidb-operator/blob/v1.5.1/docs/api-references/docs.md) and [Configure a TiDB Cluster](configure-a-tidb-cluster.md).
+For detailed CR configuration, refer to [API references](https://github.com/pingcap/tidb-operator/blob/v1.5.2/docs/api-references/docs.md) and [Configure a TiDB Cluster](configure-a-tidb-cluster.md).
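Every hunk in this PR is the same mechanical substitution: `v1.5.1` becomes `v1.5.2` in doc links and commands. A bump like this can be scripted rather than edited by hand. The sketch below is illustrative, not the process the maintainers actually used: it builds a stand-in docs tree with one such link, rewrites every file that mentions the old version, and fails loudly if any stale reference survives (assumes only POSIX `sh`, `grep`, and `sed`).

```shell
#!/bin/sh
# Sketch: mechanically bump version references across a docs tree.
# The tree below is a hypothetical stand-in; the real PR touches en/ and zh/.
set -eu

docs=$(mktemp -d)
mkdir -p "$docs/en"
cat > "$docs/en/TOC.md" <<'EOF'
- [API References](https://github.com/pingcap/tidb-operator/blob/v1.5.1/docs/api-references/docs.md)
EOF

old='v1.5.1'
new='v1.5.2'

# Rewrite every file that mentions the old version.
grep -rl "$old" "$docs" | while IFS= read -r f; do
  sed "s/$old/$new/g" "$f" > "$f.tmp" && mv "$f.tmp" "$f"
done

# Verify no stale reference remains before committing the result.
if grep -rq "$old" "$docs"; then
  echo "stale references remain" >&2
  exit 1
fi
echo "bump complete"
```

Run against the stand-in tree, this prints `bump complete`. A final `grep -rn 'v1.5.1'` over the real checkout is the same check reviewers would apply to a PR like this one.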