
zh: support PD micro service #2478

Merged on Mar 25, 2024 (28 commits):
8c5cf9e update (HuSharp, Feb 22, 2024)
7b82501 address comment (HuSharp, Feb 27, 2024)
d983076 address comment (HuSharp, Feb 27, 2024)
d9babd6 add watch status (HuSharp, Feb 28, 2024)
06579eb Merge remote-tracking branch 'upstream/master' into support_pd_ms (HuSharp, Mar 6, 2024)
14848fc address comment (HuSharp, Mar 6, 2024)
07381da rename pdms to component name (HuSharp, Mar 6, 2024)
eef5485 Apply suggestions from code review (HuSharp, Mar 7, 2024)
daa875e address comment (HuSharp, Mar 7, 2024)
e9594ca address comment (HuSharp, Mar 7, 2024)
0544c54 Update zh/configure-a-tidb-cluster.md (HuSharp, Mar 8, 2024)
2e0b1c1 merge tidb doc (HuSharp, Mar 13, 2024)
e7bedab fix doc (HuSharp, Mar 14, 2024)
e40e88a fix doc (HuSharp, Mar 14, 2024)
9e971c2 refine descriptions (qiancai, Mar 15, 2024)
20e150f refine descriptions (qiancai, Mar 18, 2024)
456870b address comment (HuSharp, Mar 20, 2024)
8f7353b wording updates (qiancai, Mar 21, 2024)
c330026 Apply suggestions from code review (HuSharp, Mar 21, 2024)
d1c0c27 move pd-microservice.md to doc (HuSharp, Mar 21, 2024)
b37489f correct the link to the PD microservices doc (qiancai, Mar 21, 2024)
c24103d unify the terms (qiancai, Mar 21, 2024)
c313ef6 wording updates (qiancai, Mar 21, 2024)
4da0176 indicate experimental (qiancai, Mar 22, 2024)
692220d Merge branch 'support_pd_ms' of https://github.com/HuSharp/docs-tidb-… (qiancai, Mar 22, 2024)
866ba23 Update configure-a-tidb-cluster.md (qiancai, Mar 22, 2024)
b9dfd57 Update zh/configure-a-tidb-cluster.md (qiancai, Mar 22, 2024)
f697ea0 Apply suggestions from code review (qiancai, Mar 25, 2024)
53 changes: 53 additions & 0 deletions zh/configure-a-tidb-cluster.md
@@ -251,6 +251,32 @@ spec:
>
> If the Kubernetes cluster has fewer than 3 nodes, one PD Pod will stay in the Pending state, and neither the TiKV Pods nor the TiDB Pods will be created. When the Kubernetes cluster has fewer than 3 nodes, to get the TiDB cluster started, you can reduce the number of PD Pods deployed by default to 1.

#### Deploy PD microservices

> **Note:**
>
> PD supports the microservice architecture starting from v8.0.0.

To enable PD microservices for your cluster, configure `spec.pd.config` and `spec.pdms` in the `${cluster_name}/tidb-cluster.yaml` file:

```yaml
spec:
  pd:
    config:
      mode: "ms"
  pdms:
    - name: "tso"
      replicas: 2
      config: {}
    - name: "scheduling"
      replicas: 1
      config: {}
```

`spec.pd.config.mode` sets the PD mode. Two values are currently supported: `"ms"` enables the microservice mode, and `""` (an empty string) disables it.

`spec.pdms.config` configures the PD microservices, and its configuration parameters are the same as those of `spec.pd.config`. For all configurable PD microservice parameters, see the [PD configuration documentation](https://docs.pingcap.com/zh/tidb/stable/pd-configuration-file).
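As a sketch of what this enables, a per-microservice override could look like the following. This is a minimal illustration only; the `log.level` value is an assumed example of a PD parameter, not a setting recommended by this PR:

```yaml
spec:
  pdms:
    - name: "tso"
      replicas: 2
      # Any parameter accepted under spec.pd.config can also be set here,
      # scoped to this microservice only (assumed example below).
      config:
        log:
          level: "info"
```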
Review discussion:

- **HuSharp:** The PD microservice configuration parameters are currently the same as those of PD itself.
- **A reviewer:** Should this section mention which microservices the current version supports, preferably with a link out to a doc? Judging from the example, only tso and scheduling are covered. Is that the complete list?
#### Deploy TiProxy

The deployment method is the same as that for PD. In addition, you need to modify `spec.tiproxy` to manually specify the number of TiProxy components.
@@ -381,6 +407,33 @@ spec:
> - For compatibility with `helm` deployments, if you deploy the TiDB cluster through a CR file, you must keep the `Config: {}` setting even if you do not set any Config values; otherwise, the PD component may fail to start.
> - Some PD configuration items are persisted to etcd after the first successful startup, and the values in etcd take precedence from then on. Therefore, after PD's first startup, these items can no longer be modified through configuration parameters; instead, modify them dynamically with SQL, pd-ctl, or the PD server API. Currently, among the configuration items listed in [Modify PD configuration dynamically](https://docs.pingcap.com/zh/tidb/stable/dynamic-config#在线修改-pd-配置), all except `log.level` no longer support modification through configuration parameters after PD starts for the first time.
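As a hedged sketch of the dynamic route via pd-ctl (the port-forward target and the `config set` invocation below are assumptions based on a typical deployment, not commands from this PR):

```shell
# Expose the PD service locally (assumed service name ${cluster_name}-pd).
kubectl port-forward -n ${namespace} svc/${cluster_name}-pd 2379:2379 &

# Change log.level online; log.level is the one item that also remains
# modifiable through configuration parameters.
pd-ctl -u http://127.0.0.1:2379 config set log.level warn
```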

##### Configure PD microservice parameters

> **Note:**
>
> PD supports the microservice architecture starting from v8.0.0.

You can configure PD parameters through `spec.pd.config` and `spec.pdms` of the TidbCluster CR. Two microservices are currently supported: `tso` and `scheduling`. A configuration example is as follows:

```yaml
spec:
  pd:
    config:
      mode: "ms"
  pdms:
    - name: "tso"
      replicas: 2
    - name: "scheduling"
      replicas: 1
```

`spec.pdms` configures the PD microservices, and its configuration parameters are the same as those of `spec.pd.config`. For all configurable PD microservice parameters, see the [PD configuration documentation](https://docs.pingcap.com/zh/tidb/stable/pd-configuration-file).

> **Note:**
>
> - For compatibility with `helm` deployments, if you deploy the TiDB cluster through a CR file, you must keep the `Config: {}` setting even if you do not set any Config values; otherwise, the PD microservice components may fail to start.
Review discussion:

- **qiancai:** Is the "Config" mentioned here the configuration shown in the screenshot?
- **HuSharp:** No. Each service under the `pdms` field carries its own `config`, similar to:

  ```yaml
  pdms:
    - name: "tso"
      config: {}
  ```
> - Some PD microservice configuration items are persisted to etcd after the first successful startup, and the values in etcd take precedence from then on. Therefore, after a PD microservice starts for the first time, these configuration items currently can no longer be modified through configuration parameters.
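To double-check what was actually applied, a generic one-liner such as the following prints the PD mode and each microservice's replica count. The jsonpath expression is an assumption for illustration, not part of this PR:

```shell
# Print spec.pd.config.mode, then one "name=replicas" line per microservice.
kubectl get tc ${cluster_name} -n ${namespace} \
  -o jsonpath='{.spec.pd.config.mode}{"\n"}{range .spec.pdms[*]}{.name}={.replicas}{"\n"}{end}'
```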

#### Configure TiProxy parameters

You can configure TiProxy parameters through `spec.tiproxy.config` of the TidbCluster CR.
26 changes: 26 additions & 0 deletions zh/deploy-tidb-cluster-across-multiple-kubernetes.md
@@ -516,6 +516,28 @@ EOF
5. Upgrade the TiDB version of all Kubernetes clusters.
6. If TiCDC is deployed in the cluster, upgrade the TiCDC version for all Kubernetes clusters in which TiCDC is deployed.

### Upgrade PD microservices

Review discussion:

- **qiancai:** Does "upgrade the PD microservice version" need to be completed before "upgrade the TiProxy version"?
- **HuSharp:** Yes, the PD microservices are updated first.

To deploy PD microservices, set the `spec.pd.mode` field to `ms` in the initial TidbCluster definition and add the specific PD microservices. `tso` and `scheduling` are currently supported.

```yaml
apiVersion: pingcap.com/v1alpha1
kind: TidbCluster
# ...
spec:
  pd:
    mode: ms
  pdms:
    - name: "tso"
      baseImage: pingcap/pd
      version: ${version}
      replicas: 2
    - name: "scheduling"
      baseImage: pingcap/pd
      version: ${version}
      replicas: 1
```
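Assuming the definition above is saved as `tidb-cluster.yaml` (the file name is illustrative), it can be applied and observed with standard kubectl commands:

```shell
kubectl apply -n ${namespace} -f tidb-cluster.yaml

# Watch the PD microservice Pods come up alongside the other components.
watch kubectl -n ${namespace} get pod -o wide
```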

## Retire and reclaim a joined TidbCluster

When you need to make a cluster exit the TiDB cluster deployed across Kubernetes that it joined and reclaim its resources, you can do so through the scale-in process. In this scenario, the following scale-in constraints must be met:

@@ -524,6 +546,10 @@

Taking the second TidbCluster created in this document as an example, first set the replica counts of PD, TiKV, and TiDB to 0. If other components such as TiFlash, TiCDC, TiProxy, or Pump are enabled, also set their replica counts to `0`:

> **Note:**
>
> PD supports the microservice architecture starting from v8.0.0. If PD microservices are configured, also set the replica count of each PD microservice component (the fields under `pdms`) to 0.
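The scale-in settings described above can be sketched as follows. This is a minimal illustration of the relevant fields only, assuming a cluster with the `tso` and `scheduling` microservices; the surrounding CR is omitted:

```yaml
spec:
  pd:
    replicas: 0
  # Each PD microservice component must also be scaled to 0.
  pdms:
    - name: "tso"
      replicas: 0
    - name: "scheduling"
      replicas: 0
  tikv:
    replicas: 0
  tidb:
    replicas: 0
```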

{{< copyable "shell-regular" >}}

```bash
```
55 changes: 55 additions & 0 deletions zh/enable-tls-between-components.md
@@ -157,6 +157,33 @@ aliases: ['/docs-cn/tidb-in-kubernetes/dev/enable-tls-between-components/']
...
```

> **Note:**
>
> PD supports the microservice architecture starting from v8.0.0. When deploying PD microservices, you do not need to generate a certificate for each microservice component. You only need to update the `hosts` attribute in the `pd-server.json` file to add the microservice-related hosts. Take the `scheduling` service as an example:

```json
...
"CN": "TiDB",
"hosts": [
  "127.0.0.1",
  "::1",
  "${cluster_name}-pd",
  ...
  "*.${cluster_name}-pd-peer.${namespace}.svc",
  // the following hosts are added for the scheduling microservice
  "basic-pdms-scheduling",
  "basic-pdms-scheduling.pingcap",
  "basic-pdms-scheduling.pingcap.svc",
  "basic-pdms-scheduling-peer",
  "basic-pdms-scheduling-peer.pingcap",
  "basic-pdms-scheduling-peer.pingcap.svc",
  "*.basic-pdms-scheduling-peer",
  "*.basic-pdms-scheduling-peer.pingcap",
  "*.basic-pdms-scheduling-peer.pingcap.svc",
],
...
```

`${cluster_name}` is the name of the cluster, and `${namespace}` is the namespace in which the TiDB cluster is deployed. You can also add custom `hosts` entries.

Finally, generate the PD server-side certificate:
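A hedged sketch of that final step, assuming the CA files (`ca.pem`, `ca-key.pem`, `ca-config.json`) and an `internal` profile consistent with the cfssl workflow this page uses; verify the names against your own earlier steps:

```shell
# Sign pd-server.json (with the updated hosts list) into a server certificate.
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem \
  -config=ca-config.json -profile=internal \
  pd-server.json | cfssljson -bare pd-server
```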
@@ -1428,6 +1455,34 @@ aliases: ['/docs-cn/tidb-in-kubernetes/dev/enable-tls-between-components/']

Then run `kubectl apply -f tidb-cluster.yaml` to create the TiDB cluster.

> **Note:**
>
> PD supports the microservice architecture starting from v8.0.0. To deploy PD microservices, you need to configure `cert-allowed-cn` for each microservice. Take the `scheduling` service as an example:
>
> - Set `mode` in the PD config to `ms`.
> - Configure `security` for the `scheduling` service.

```yaml
pd:
  baseImage: pingcap/pd
  maxFailoverCount: 0
  replicas: 1
  requests:
    storage: "10Gi"
  config:
    security:
      cert-allowed-cn:
        - TiDB
    mode: "ms"
pdms:
  - name: "scheduling"
    replicas: 1
    config:
      security:
        cert-allowed-cn:
          - TiDB
```

2. Create the Drainer component and enable TLS and CN verification.

    - Method 1: set `drainerName` when creating Drainer:
11 changes: 11 additions & 0 deletions zh/get-started.md
@@ -348,6 +348,17 @@ tidbcluster.pingcap.com/basic created

To deploy the TiDB cluster on ARM64 machines, see [Deploy a TiDB Cluster on ARM64 Machines](deploy-cluster-on-arm64.md).

> **Note:**
>
> PD supports the microservice architecture starting from v8.0.0. To deploy PD microservices, deploy the cluster as follows:
>
> {{< copyable "shell-regular" >}}
>
> ```shell
> kubectl create namespace tidb-cluster && \
> kubectl -n tidb-cluster apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/master/examples/basic/pd-micro-service-cluster.yaml
> ```
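After applying the example, a generic check (not a command from this PR) is to watch the Pods come up in the `tidb-cluster` namespace:

```shell
watch kubectl get po -n tidb-cluster -o wide
```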

### Deploy TiDB Dashboard independently

{{< copyable "shell-regular" >}}
10 changes: 10 additions & 0 deletions zh/modify-tidb-configuration.md
@@ -43,6 +43,16 @@

For TiDB clusters deployed in Kubernetes, to modify PD configuration parameters, you need to modify them dynamically with [SQL](https://docs.pingcap.com/zh/tidb/stable/dynamic-config/#在线修改-pd-配置), [pd-ctl](https://docs.pingcap.com/tidb/stable/pd-control#config-show--set-option-value--placement-rules), or the PD server API.

### Modify PD microservice configuration

> **Note:**
>
> PD supports the microservice architecture starting from v8.0.0.

After each PD microservice component starts successfully for the first time, some PD configuration items are persisted to etcd, and the values in etcd take precedence from then on. Therefore, after the first startup of the PD microservice components, these configuration items can no longer be modified through the TidbCluster CR.

Among the [configuration items that can be modified online](https://docs.pingcap.com/zh/tidb/stable/dynamic-config#在线修改-pd-配置) in the PD microservice components, all except `log.level` no longer support modification through the TidbCluster CR after the components start for the first time.
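As an assumed illustration of the remaining online route (the `SET CONFIG` statement follows TiDB's dynamic-config feature, and the connection parameters are typical defaults; none of this is text from this PR):

```shell
# Connect through any TiDB endpoint and change the PD log level online.
mysql -h ${tidb_host} -P 4000 -u root -e 'SET CONFIG pd `log.level` = "warn"'
```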

## Modify TiProxy component configuration

Modifying the configuration of the TiProxy component never restarts the Pod. If you want to restart the Pod, you need to trigger the restart manually, by killing the Pod or by changing a Pod setting such as the image.
36 changes: 36 additions & 0 deletions zh/scale-a-tidb-cluster.md
@@ -131,6 +131,42 @@
kubectl patch -n ${namespace} tc ${cluster_name} --type merge --patch '{"spec":{"ticdc":{"replicas":3}}}'
```

### Horizontally scale PD microservice components

> **Note:**
>
> PD supports the microservice architecture starting from v8.0.0.

To horizontally scale a PD microservice component, use kubectl to modify `spec.pdms.replicas` in the `TidbCluster` object of the cluster to the desired value. The `tso` and `scheduling` components are currently supported. Take `tso` as an example:

1. Modify the `replicas` value of the TiDB cluster component as needed. For example, the following command sets the `replicas` value of `tso` to `3` (because `pdms` is a list, a JSON merge patch must supply it as a list):

    {{< copyable "shell-regular" >}}

    ```shell
    kubectl patch -n ${namespace} tc ${cluster_name} --type merge --patch '{"spec":{"pdms":[{"name":"tso", "replicas":3}]}}'
    ```

2. Check whether the TiDB cluster in Kubernetes has been updated to your desired configuration:

    {{< copyable "shell-regular" >}}

    ```shell
    kubectl get tidbcluster ${cluster_name} -n ${namespace} -oyaml
    ```

    In the `TidbCluster` output of the command above, the `replicas` value of the `tso` entry under `spec.pdms` is expected to match the value you configured.

3. Watch whether the number of `TidbCluster` Pods increases or decreases:

    {{< copyable "shell-regular" >}}

    ```shell
    watch kubectl -n ${namespace} get pod -o wide
    ```

    PD microservice components usually take about 10 to 30 seconds to scale out or in.

### View the horizontal scaling status of the cluster

{{< copyable "shell-regular" >}}
5 changes: 5 additions & 0 deletions zh/suspend-tidb-cluster.md
@@ -62,6 +62,11 @@ summary: Learn how to suspend the TiDB cluster on Kubernetes through configuration.
* TiProxy
* PD

> **Note:**
>
> PD supports the microservice architecture starting from v8.0.0.
> If PD microservice components are deployed in your TiDB cluster, their Pods are deleted after the PD Pods are deleted.
## Resume the TiDB cluster

After the TiDB cluster or its components are suspended, if you need to resume the TiDB cluster, perform the following steps:
6 changes: 6 additions & 0 deletions zh/upgrade-a-tidb-cluster.md
@@ -14,6 +14,12 @@

When rolling updates are used, TiDB Operator deletes old-version Pods and creates new-version Pods serially, in the order of PD, TiProxy, TiFlash, TiKV, and then TiDB. After a new-version Pod runs normally, the next Pod is processed.

> **Note:**
>
> PD supports the microservice architecture starting from v8.0.0.
> TODO: for deployment details, see [Deploy PD microservices](deploy-pd-microservice.md).
> When rolling upgrades are used, TiDB Operator deletes old-version Pods and creates new-version Pods serially, in the order of the PD microservice components, PD, TiKV, and then TiDB. After a new-version Pod runs normally, the next Pod is processed.

During a rolling update, TiDB Operator automatically handles PD and TiKV leader transfer. Therefore, in a multi-node deployment topology (minimum environment: PD \* 3, TiKV \* 3, TiDB \* 2), rolling updates of TiKV and PD do not affect the normal operation of workloads. For clients that can retry connections, rolling updates of TiDB do not affect workloads either.

> **Warning:**