deprecated-tikv-importer-in-operator #2461

Merged
8 changes: 1 addition & 7 deletions en/restore-data-using-tidb-lightning.md
@@ -8,16 +8,10 @@ aliases: ['/docs/tidb-in-kubernetes/dev/restore-data-using-tidb-lightning/']

This document describes how to import data into a TiDB cluster on Kubernetes using [TiDB Lightning](https://docs.pingcap.com/tidb/stable/tidb-lightning-overview).

TiDB Lightning contains two components: tidb-lightning and tikv-importer. In Kubernetes, tikv-importer is in a Helm chart separate from the TiDB cluster and is deployed as a `StatefulSet` with `replicas=1`, while tidb-lightning is in its own separate Helm chart and deployed as a `Job`.
In Kubernetes, tidb-lightning is in a separate Helm chart and deployed as a `Job`.

TiDB Lightning supports three backends: `Importer-backend`, `Local-backend`, and `TiDB-backend`. For the differences between these backends and how to choose one, see [TiDB Lightning Backends](https://docs.pingcap.com/tidb/stable/tidb-lightning-backends).
Oreoxmt marked this conversation as resolved.

- For `Importer-backend`, both tikv-importer and tidb-lightning need to be deployed.
Reviewer comment (Member): update the above "TiDB Lightning supports three backends: Importer-backend," too?


> **Note:**
>
> `Importer-backend` is deprecated in TiDB 5.3 version or later versions. If you must use `Importer-backend`, refer to [the documentation of v1.2](https://docs.pingcap.com/tidb-in-kubernetes/v1.2/restore-data-using-tidb-lightning#deploy-tikv-importer).

- For `Local-backend`, only tidb-lightning needs to be deployed.

- For `TiDB-backend`, only tidb-lightning needs to be deployed, and it is recommended to import data using the CustomResourceDefinition (CRD) in TiDB Operator v1.1 and later versions. For details, refer to [Restore Data from GCS Using TiDB Lightning](restore-from-gcs.md) or [Restore Data from S3-Compatible Storage Using TiDB Lightning](restore-from-s3.md).
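As a rough illustration of what "only tidb-lightning needs to be deployed" means in practice, a `Local-backend` or `TiDB-backend` import comes down to installing the `pingcap/tidb-lightning` chart (listed in the tidb-toolkit changes below). This is a minimal sketch, assuming Helm 3; the release name, namespace, chart version, and values file are placeholders rather than values taken from this PR, and the real configuration keys should be taken from the chart's own `values.yaml`:

```shell
# A minimal sketch, not the documented procedure: install the tidb-lightning
# chart, which deploys tidb-lightning as a Job. The release name "my-lightning",
# namespace "tidb-cluster", chart version "v1.5.1", and values.yaml are all
# placeholders.
helm install my-lightning pingcap/tidb-lightning \
  --namespace=tidb-cluster \
  --version=v1.5.1 \
  -f values.yaml
```

Because the chart runs tidb-lightning as a `Job`, the import runs once and the Job can be cleaned up after it completes.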
2 changes: 0 additions & 2 deletions en/tidb-toolkit.md
@@ -181,7 +181,6 @@ Kubernetes applications are packed as charts in Helm. PingCAP provides the follo
* `tidb-backup`: used to back up or restore TiDB clusters;
* `tidb-lightning`: used to import data into a TiDB cluster;
* `tidb-drainer`: used to deploy TiDB Drainer;
* `tikv-importer`: used to deploy TiKV Importer.

These charts are hosted in the Helm chart repository `https://charts.pingcap.org/` maintained by PingCAP. You can add this repository to your local server or computer using the following command:
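The command itself sits in the part of the diff that is collapsed here; it is presumably the usual `helm repo add` invocation for the repository named above. A sketch, assuming Helm 3 and using `pingcap` as the local repository name:

```shell
# Register the PingCAP chart repository under the local name "pingcap".
helm repo add pingcap https://charts.pingcap.org/
```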

@@ -206,7 +205,6 @@ pingcap/tidb-cluster v1.5.1 A Helm chart for TiDB Cl
pingcap/tidb-drainer v1.5.1 A Helm chart for TiDB Binlog drainer.
pingcap/tidb-lightning v1.5.1 A Helm chart for TiDB Lightning
pingcap/tidb-operator v1.5.1 v1.5.1 tidb-operator Helm chart for Kubernetes
pingcap/tikv-importer v1.5.1 A Helm chart for TiKV Importer
```
When a new version of a chart has been released, you can use `helm repo update` to update the locally cached repository:
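For example, assuming Helm 3, refreshing the cache and re-listing the charts (which would reproduce output like the block above) looks roughly like this; the `-l` flag lists all published chart versions rather than only the latest one:

```shell
# Refresh the locally cached index of every configured chart repository.
helm repo update

# List charts from the pingcap repository, including all published versions.
helm search repo pingcap -l
```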
8 changes: 1 addition & 7 deletions zh/restore-data-using-tidb-lightning.md
@@ -8,16 +8,10 @@ aliases: ['/docs-cn/tidb-in-kubernetes/dev/restore-data-using-tidb-lightning/']

This document describes how to use [TiDB Lightning](https://docs.pingcap.com/zh/tidb/stable/tidb-lightning-overview) to import data into a TiDB cluster.

TiDB Lightning contains two components: tidb-lightning and tikv-importer. In Kubernetes, tikv-importer is in a separate Helm chart and is deployed as a `StatefulSet` with `replicas=1`; tidb-lightning is in its own separate Helm chart and is deployed as a `Job`.
TiDB Lightning is in a separate Helm chart and is deployed as a `Job`.

Currently, TiDB Lightning supports three backends: `Importer-backend`, `Local-backend`, and `TiDB-backend`. For the differences between these backends and how to choose one, see the [TiDB Lightning documentation](https://docs.pingcap.com/zh/tidb/stable/tidb-lightning-backends).
Oreoxmt marked this conversation as resolved.

- For the `Importer-backend` backend, both tikv-importer and tidb-lightning need to be deployed.

> **Note:**
>
> The `Importer-backend` backend is deprecated in TiDB 5.3 and later versions. If you must use the `Importer-backend` backend, refer to the [documentation for v1.2 and earlier](https://docs.pingcap.com/zh/tidb-in-kubernetes/v1.2/restore-data-using-tidb-lightning#部署-tikv-importer) to deploy tikv-importer.

- For the `Local-backend` backend, only tidb-lightning needs to be deployed.

- For the `TiDB-backend` backend, only tidb-lightning needs to be deployed. It is recommended to use the CustomResourceDefinition (CRD) implementation available in TiDB Operator v1.1 and later versions. For details, see [Restore Data from GCS Using TiDB Lightning](restore-from-gcs.md) or [Restore Data from S3-Compatible Storage Using TiDB Lightning](restore-from-s3.md).
2 changes: 0 additions & 2 deletions zh/tidb-toolkit.md
@@ -181,7 +181,6 @@ Kubernetes applications are packaged as charts in Helm. For Kubernetes, PingCAP
* `tidb-backup`: used to back up or restore TiDB clusters;
* `tidb-lightning`: used to import data into a TiDB cluster;
* `tidb-drainer`: used to deploy TiDB Drainer;
* `tikv-importer`: used to deploy TiKV Importer;

These charts are hosted in the Helm chart repository `https://charts.pingcap.org/` maintained by PingCAP. You can add this repository with the following command:

@@ -206,7 +205,6 @@ pingcap/tidb-cluster v1.5.1 A Helm chart for TiDB Cl
pingcap/tidb-drainer v1.5.1 A Helm chart for TiDB Binlog drainer.
pingcap/tidb-lightning v1.5.1 A Helm chart for TiDB Lightning
pingcap/tidb-operator v1.5.1 v1.5.1 tidb-operator Helm chart for Kubernetes
pingcap/tikv-importer v1.5.1 A Helm chart for TiKV Importer
```
After a new version of a chart is released, you can use `helm repo update` to update the locally cached copy of the repository: