From e1b3d36d5268a8cf1943881963e2d8655e8e2db0 Mon Sep 17 00:00:00 2001
From: Docsite Preview Bot <>
Date: Thu, 21 Mar 2024 10:32:42 +0000
Subject: [PATCH] Preview PR https://github.com/pingcap/docs/pull/16803 and
 this preview is triggered from commit
 https://github.com/pingcap/docs/pull/16803/commits/8b5ba2e9309e743887cf9b3ce93092fa5ea8b9e1
---
 markdown-pages/en/tidb/master/TOC.md       |   5 +
 .../en/tidb/master/pd-microservices.md     |  62 ++++++
 .../en/tidb/master/tiup/tiup-playground.md | 188 ++++++++++++++++++
 3 files changed, 255 insertions(+)
 create mode 100644 markdown-pages/en/tidb/master/pd-microservices.md
 create mode 100644 markdown-pages/en/tidb/master/tiup/tiup-playground.md

diff --git a/markdown-pages/en/tidb/master/TOC.md b/markdown-pages/en/tidb/master/TOC.md
index 416cf6c9..7d5069d4 100644
--- a/markdown-pages/en/tidb/master/TOC.md
+++ b/markdown-pages/en/tidb/master/TOC.md
@@ -328,6 +328,7 @@
   - [Use Load Base Split](/configure-load-base-split.md)
   - [Use Store Limit](/configure-store-limit.md)
   - [DDL Execution Principles and Best Practices](/ddl-introduction.md)
+  - [Use PD Microservices](/pd-microservices.md)
 - TiDB Tools
   - [Overview](/ecosystem-tool-user-guide.md)
   - [Use Cases](/ecosystem-tool-user-case.md)
@@ -446,6 +447,7 @@
   - [Binlog Event Filter](/dm/dm-binlog-event-filter.md)
   - [Filter DMLs Using SQL Expressions](/dm/feature-expression-filter.md)
   - [Online DDL Tool Support](/dm/dm-online-ddl-tool-support.md)
+  - [Customize a Secret Key for Encryption and Decryption](/dm/dm-customized-secret-key.md)
 - Manage a Data Migration Task
   - [Precheck a Task](/dm/dm-precheck.md)
   - [Create a Task](/dm/dm-create-task.md)
@@ -577,6 +579,7 @@
   - [TiCDC Canal-JSON Protocol](/ticdc/ticdc-canal-json.md)
   - [TiCDC Open Protocol](/ticdc/ticdc-open-protocol.md)
   - [TiCDC CSV Protocol](/ticdc/ticdc-csv.md)
+  - [TiCDC Debezium Protocol](/ticdc/ticdc-debezium.md)
   - [TiCDC Open API v2](/ticdc/ticdc-open-api-v2.md)
   - [TiCDC Open API v1](/ticdc/ticdc-open-api.md)
 - TiCDC Data Consumption
@@ -960,6 +963,7 @@
   - [`TIDB_HOT_REGIONS`](/information-schema/information-schema-tidb-hot-regions.md)
   - [`TIDB_HOT_REGIONS_HISTORY`](/information-schema/information-schema-tidb-hot-regions-history.md)
   - [`TIDB_INDEXES`](/information-schema/information-schema-tidb-indexes.md)
+  - [`TIDB_INDEX_USAGE`](/information-schema/information-schema-tidb-index-usage.md)
   - [`TIDB_SERVERS_INFO`](/information-schema/information-schema-tidb-servers-info.md)
   - [`TIDB_TRX`](/information-schema/information-schema-tidb-trx.md)
   - [`TIFLASH_REPLICA`](/information-schema/information-schema-tiflash-replica.md)
@@ -976,6 +980,7 @@
 - PERFORMANCE_SCHEMA
   - [Overview](/performance-schema/performance-schema.md)
   - [`SESSION_CONNECT_ATTRS`](/performance-schema/performance-schema-session-connect-attrs.md)
+- [`SYS`](/sys-schema.md)
 - [Metadata Lock](/metadata-lock.md)
 - [TiDB DDL V2](/ddl-v2.md)
 - UI

diff --git a/markdown-pages/en/tidb/master/pd-microservices.md b/markdown-pages/en/tidb/master/pd-microservices.md
new file mode 100644
index 00000000..9227dba8
--- /dev/null
+++ b/markdown-pages/en/tidb/master/pd-microservices.md
@@ -0,0 +1,62 @@
---
title: PD Microservices
summary: Learn how to enable the microservice mode of PD to improve service quality.
---

# PD Microservices

Starting from v8.0.0, PD supports the microservice mode, which disaggregates the timestamp allocation and cluster scheduling functions of PD into the following two independently deployed microservices. In this way, these two functions are decoupled from the routing function of PD, which allows PD to focus on the routing service for metadata.

- `tso` microservice: provides monotonically increasing timestamp allocation for the entire cluster.
- `scheduling` microservice: provides scheduling functions for the entire cluster, including but not limited to load balancing, hot spot handling, replica repair, and replica placement.

Each microservice is deployed as an independent process. If you configure more than one replica for a microservice, the microservice automatically implements a primary-secondary fault-tolerant mode to ensure high availability and reliability of the service.

## Usage scenarios

PD microservices are typically used to address performance bottlenecks in PD and improve PD service quality. With this feature, you can avoid the following issues:

- Long-tail latency or jitter in TSO allocations due to excessive pressure in PD clusters
- Service unavailability of the entire cluster due to failures in the scheduling module
- Bottleneck issues solely caused by PD

In addition, when the scheduling module is changed, you can update the `scheduling` microservice independently without restarting PD, thus avoiding any impact on the overall service of the cluster.

> **Note:**
>
> If the performance bottleneck of a cluster is not caused by PD, there is no need to enable microservices, because using microservices increases the number of components and raises operational costs.

## Restrictions

- Currently, the `tso` microservice does not support dynamic start and stop. After enabling or disabling the `tso` microservice, you need to restart the PD cluster for the changes to take effect.
- Only the TiDB component supports a direct connection to the `tso` microservice through service discovery, while other components need to forward requests to the `tso` microservice through PD to obtain timestamps.
- Microservices are not compatible with the [Data Replication Auto Synchronous (DR Auto-Sync)](https://docs.pingcap.com/tidb/stable/two-data-centers-in-one-city-deployment) feature.
- Microservices are not compatible with the TiDB system variable [`tidb_enable_tso_follower_proxy`](https://docs.pingcap.com/tidb/stable/system-variables#tidb_enable_tso_follower_proxy-new-in-v530).
- Because a cluster might contain hibernated Regions, during a primary-secondary switchover of the `scheduling` microservice, the scheduling function of the cluster might be unavailable for up to five minutes to avoid redundant scheduling.

## Usage

Currently, PD microservices can only be deployed using TiDB Operator. For detailed instructions, refer to the following documents:

- [Deploy PD microservices](configure-a-tidb-cluster.md#deploy-pd-microservices)
- [Configure PD microservices](configure-a-tidb-cluster.md#configure-pd-microservices)
- [Modify PD microservices](modify-tidb-configuration.md#modify-pd-microservices-configuration)
- [Scale PD microservice components](scale-a-tidb-cluster.md#scale-pd-microservice-components)

When deploying and using PD microservices, pay attention to the following:

- After you enable microservices and restart PD for a cluster, PD stops allocating TSO for the cluster. Therefore, you need to deploy the `tso` microservice in the cluster when you enable microservices.
- If the `scheduling` microservice is deployed in a cluster, the scheduling function of the cluster is provided by the `scheduling` microservice. If the `scheduling` microservice is not deployed, the scheduling function of the cluster is still provided by PD.
- The `scheduling` microservice supports dynamic switching, which is enabled by default (`enable-scheduling-fallback` defaults to `true`). If the process of the `scheduling` microservice is terminated, PD continues to provide scheduling services for the cluster by default.

    If the binary versions of the `scheduling` microservice and PD are different, to prevent changes in the scheduling logic, you can disable the dynamic switching function of the `scheduling` microservice by executing `pd-ctl config set enable-scheduling-fallback false`. After this function is disabled, PD will not take over the scheduling service when the process of the `scheduling` microservice is terminated. This means that the scheduling service of the cluster will be unavailable until the `scheduling` microservice is restarted.

## Tool compatibility

Microservices do not affect the normal use of data import, export, and other replication tools.

## FAQs

- How can I determine whether PD has become a performance bottleneck?

    When your cluster is in a normal state, you can check monitoring metrics in the Grafana PD panel. If the `TiDB - PD server TSO handle time` metric shows a notable increase in latency, or the `Heartbeat - TiKV side heartbeat statistics` metric shows a significant number of pending items, PD has become a performance bottleneck.
\ No newline at end of file
diff --git a/markdown-pages/en/tidb/master/tiup/tiup-playground.md b/markdown-pages/en/tidb/master/tiup/tiup-playground.md
new file mode 100644
index 00000000..6888ee94
--- /dev/null
+++ b/markdown-pages/en/tidb/master/tiup/tiup-playground.md
@@ -0,0 +1,188 @@
---
title: Quickly Deploy a Local TiDB Cluster
summary: Learn how to quickly deploy a local TiDB cluster using the playground component of TiUP.
aliases: ['/docs/dev/tiup/tiup-playground/','/docs/dev/reference/tools/tiup/playground/']
---

# Quickly Deploy a Local TiDB Cluster

The TiDB cluster is a distributed system that consists of multiple components. A typical TiDB cluster consists of at least three PD nodes, three TiKV nodes, and two TiDB nodes. If you only want to try out TiDB quickly, manually deploying so many components can be time-consuming and complicated. This document introduces the playground component of TiUP and how to use it to quickly build a local TiDB test environment.
## TiUP playground overview

The basic usage of the playground component is shown as follows:

```bash
tiup playground ${version} [flags]
```

If you directly execute the `tiup playground` command, TiUP uses the locally installed TiDB, TiKV, and PD components, or installs the stable version of these components, to start a TiDB cluster that consists of one TiKV instance, one TiDB instance, one PD instance, and one TiFlash instance.

This command actually performs the following operations:

- Because this command does not specify the version of the playground component, TiUP first checks the latest version of the installed playground component. Assume that the latest version is v1.12.3. Then this command works the same as `tiup playground:v1.12.3`.
- If you have not used TiUP playground to install the TiDB, TiKV, and PD components, the playground component installs the latest stable version of these components and then starts these instances.
- Because this command does not specify the version of the TiDB, PD, and TiKV components, TiUP playground uses the latest version of each component by default. Assume that the latest version is v7.6.0. Then this command works the same as `tiup playground:v1.12.3 v7.6.0`.
- Because this command does not specify the number of instances of each component, TiUP playground, by default, starts the smallest cluster, which consists of one TiDB instance, one TiKV instance, one PD instance, and one TiFlash instance.
- After starting each TiDB component, TiUP playground reminds you that the cluster is successfully started and provides some useful information, such as how to connect to the TiDB cluster through the MySQL client and how to access the [TiDB Dashboard](/dashboard/dashboard-intro.md).
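The connection defaults mentioned above can be captured in a small shell sketch. This is only an illustration: `127.0.0.1:4000` is playground's default TiDB endpoint, and the `mysql` invocation assumes a MySQL client is installed locally.

```shell
# A minimal sketch, assuming playground's default TiDB endpoint
# (127.0.0.1:4000) and a locally installed MySQL client.
host=127.0.0.1
port=4000   # TiDB's default port in playground; override with --db.port
connect_cmd="mysql --host ${host} --port ${port} -u root"
echo "${connect_cmd}"
```

Running the printed command against a live playground cluster opens a SQL session as the `root` user (no password by default).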
The command-line flags of the playground component are described as follows:

```bash
Flags:
      --db int                   Specify the number of TiDB instances (default: 1)
      --db.host host             Specify the listening address of TiDB
      --db.port int              Specify the port of TiDB
      --db.binpath string        Specify the TiDB instance binary path (optional, for debugging)
      --db.config string         Specify the TiDB instance configuration file (optional, for debugging)
      --db.timeout int           Specify TiDB maximum wait time in seconds for starting. 0 means no limit
      --drainer int              Specify the number of Drainer instances of the cluster
      --drainer.binpath string   Specify the location of the Drainer binary files (optional, for debugging)
      --drainer.config string    Specify the Drainer configuration file
  -h, --help                     help for tiup
      --host string              Specify the listening address of each component (default: `127.0.0.1`). Set it to `0.0.0.0` if provided for access of other machines
      --kv int                   Specify the number of TiKV instances (default: 1)
      --kv.binpath string        Specify the TiKV instance binary path (optional, for debugging)
      --kv.config string         Specify the TiKV instance configuration file (optional, for debugging)
      --mode string              Specify the playground mode: 'tidb' (default) and 'tikv-slim'
      --pd int                   Specify the number of PD instances (default: 1)
      --pd.host host             Specify the listening address of PD
      --pd.binpath string        Specify the PD instance binary path (optional, for debugging)
      --pd.config string         Specify the PD instance configuration file (optional, for debugging)
      --pump int                 Specify the number of Pump instances. If the value is not `0`, TiDB Binlog is enabled.
      --pump.binpath string      Specify the location of the Pump binary files (optional, for debugging)
      --pump.config string       Specify the Pump configuration file (optional, for debugging)
  -T, --tag string               Specify a tag for playground
      --ticdc int                Specify the number of TiCDC instances (default: 0)
      --ticdc.binpath string     Specify the TiCDC instance binary path (optional, for debugging)
      --ticdc.config string      Specify the TiCDC instance configuration file (optional, for debugging)
      --tiflash int              Specify the number of TiFlash instances (default: 1)
      --tiflash.binpath string   Specify the TiFlash instance binary path (optional, for debugging)
      --tiflash.config string    Specify the TiFlash instance configuration file (optional, for debugging)
      --tiflash.timeout int      Specify TiFlash maximum wait time in seconds for starting. 0 means no limit
      --tiproxy int              TiProxy instance number
      --tiproxy.binpath string   TiProxy instance binary path
      --tiproxy.config string    TiProxy instance configuration file
      --tiproxy.host host        Playground TiProxy host. If not provided, TiProxy will still use host flag as its host
      --tiproxy.port int         Playground TiProxy port. If not provided, TiProxy will use 6000 as its port
      --tiproxy.timeout int      TiProxy max wait time in seconds for starting. 0 means no limit (default 60)
  -v, --version                  Specify the version of playground
      --without-monitor          Disable the monitoring function of Prometheus and Grafana. If you do not add this flag, the monitoring function is enabled by default.
```

## Examples

### Check available TiDB versions

```shell
tiup list tidb
```

### Start a TiDB cluster of a specific version

```shell
tiup playground ${version}
```

Replace `${version}` with the target version number.

### Start a TiDB cluster of the nightly version

```shell
tiup playground nightly
```

In the command above, `nightly` indicates the latest development version of TiDB.
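When scripting around `tiup list tidb`, you often want the highest available version. The sketch below is illustrative only: the version strings are hard-coded stand-ins for real `tiup list tidb` output, and GNU `sort -V` (version sort) does the ordering.

```shell
# Pick the highest version from a list of version strings, as you might
# after running `tiup list tidb`. The sample versions here are made up.
versions='v6.5.8
v7.6.0
v7.5.1'
latest=$(printf '%s\n' "${versions}" | sort -V | tail -n 1)
echo "${latest}"
# You could then start it with: tiup playground "${latest}"
```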
### Override PD's default configuration

First, you need to copy the [PD configuration template](https://github.com/pingcap/pd/blob/master/conf/config.toml). Assume that you place the copied file in `~/config/pd.toml` and modify it according to your needs. Then you can execute the following command to override PD's default configuration:

```shell
tiup playground --pd.config ~/config/pd.toml
```

### Replace the default binary files

By default, when playground is started, each component is started using the binary files from the official mirror. If you want to put a temporarily compiled local binary file into the cluster for testing, you can use the `--{comp}.binpath` flag for replacement. For example, execute the following command to replace the binary file of TiDB:

```shell
tiup playground --db.binpath /xx/tidb-server
```

### Start multiple component instances

By default, only one instance is started for each of the TiDB, TiKV, and PD components. To start multiple instances for each component, add the following flags:

```shell
tiup playground --db 3 --pd 3 --kv 3
```

### Specify a tag when starting the TiDB cluster

After you stop a TiDB cluster started using TiUP playground, all cluster data is cleaned up as well. To start a TiDB cluster using TiUP playground and ensure that the cluster data is not cleaned up automatically, you can specify a tag when starting the cluster. After specifying the tag, you can find the cluster data in the `~/.tiup/data` directory. Run the following command to specify a tag:

```shell
tiup playground --tag ${tag_name}
```

For a cluster started in this way, the data files are retained after the cluster is stopped. You can use this tag to start the cluster next time so that you can use the data kept since the cluster was stopped.
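The tag-to-directory mapping described above can be sketched as follows. To stay side-effect free, the example fakes the `~/.tiup` layout in a temporary directory; the tag name `demo` is invented for illustration.

```shell
# Simulate where tagged playground data lives (~/.tiup/data/<tag>),
# using a temp dir instead of the real TiUP home.
tiup_home=$(mktemp -d)
tag="demo"
data_dir="${tiup_home}/data/${tag}"
mkdir -p "${data_dir}"
ls "${tiup_home}/data"   # lists the tag directory: demo
```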
## Quickly connect to the TiDB cluster started by playground

TiUP provides the `client` component, which is used to automatically find and connect to a local TiDB cluster started by playground. The usage is as follows:

```shell
tiup client
```

This command lists on the console the TiDB clusters started by playground on the current machine. Select the TiDB cluster to be connected. After you press Enter, a built-in MySQL client is opened to connect to TiDB.

## View information of the started cluster

```shell
tiup playground display
```

The command above returns the following results:

```
Pid   Role    Uptime
---   ----    ------
84518 pd      35m22.929404512s
84519 tikv    35m22.927757153s
84520 pump    35m22.92618275s
86189 tidb    exited
86526 tidb    34m28.293148663s
86190 drainer 35m19.91349249s
```

## Scale out a cluster

The command-line parameters for scaling out a cluster are similar to those for starting a cluster. You can scale out two TiDB instances by executing the following command:

```shell
tiup playground scale-out --db 2
```

## Scale in a cluster

You can specify a `pid` in the `tiup playground scale-in` command to scale in the corresponding instance. To view the `pid`, execute `tiup playground display`.

```shell
tiup playground scale-in --pid 86526
```

## Deploy PD microservices

Starting from v8.0.0, PD supports the [microservice mode](/pd-microservices.md). You can deploy the `tso` microservice and the `scheduling` microservice for your cluster using TiUP playground as follows:

```shell
tiup playground v8.0.0 --pd.mode ms --pd.api 3 --pd.tso 2 --pd.scheduling 3
```

- `--pd.mode`: setting it to `ms` enables the microservice mode for PD.
- `--pd.api num`: specifies the number of PD API instances. It must be at least `1`.
- `--pd.tso num`: specifies the number of instances to be deployed for the `tso` microservice.
- `--pd.scheduling num`: specifies the number of instances to be deployed for the `scheduling` microservice.
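After deploying microservices, one way to sanity-check the topology is to count roles in `tiup playground display` output. The sketch below parses a fabricated sample; the assumption that `display` lists one row per microservice instance, with the role in the second column, should be verified against your TiUP version.

```shell
# Count `tso` instances in display-style output. The sample text is
# fabricated; pipe real `tiup playground display` output instead.
display='Pid    Role        Uptime
84521  pd          2m1s
84522  tso         2m1s
84523  tso         2m0s
84524  scheduling  2m0s'
tso_count=$(printf '%s\n' "${display}" | awk '$2 == "tso" { n++ } END { print n + 0 }')
echo "tso instances: ${tso_count}"
```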