diff --git a/TOC-tidb-cloud.md b/TOC-tidb-cloud.md
index 1056b81518d14..a09a6fcbd2168 100644
--- a/TOC-tidb-cloud.md
+++ b/TOC-tidb-cloud.md
@@ -10,6 +10,7 @@
 - [Roadmap](/tidb-cloud/tidb-cloud-roadmap.md)
 - Get Started
   - [Try Out TiDB Cloud](/tidb-cloud/tidb-cloud-quickstart.md)
+  - [Try Out TiDB + AI](/tidb-cloud/vector-search-get-started-using-python.md)
   - [Try Out HTAP](/tidb-cloud/tidb-cloud-htap-quickstart.md)
   - [Try Out TiDB Cloud CLI](/tidb-cloud/get-started-with-cli.md)
   - [Perform a PoC](/tidb-cloud/tidb-cloud-poc.md)
@@ -61,6 +62,7 @@
     - [Node.js Example](/tidb-cloud/serverless-driver-node-example.md)
     - [Prisma Example](/tidb-cloud/serverless-driver-prisma-example.md)
     - [Kysely Example](/tidb-cloud/serverless-driver-kysely-example.md)
+    - [Drizzle Example](/tidb-cloud/serverless-driver-drizzle-example.md)
   - Third-Party Support
     - [Third-Party Tools Supported by TiDB](/develop/dev-guide-third-party-support.md)
     - [Known Incompatibility Issues with Third-Party Tools](/develop/dev-guide-third-party-tools-compatibility.md)
@@ -126,6 +128,7 @@
     - [GitHub Integration](/tidb-cloud/branch-github-integration.md)
   - [Manage Spending Limit](/tidb-cloud/manage-serverless-spend-limit.md)
   - [Back Up and Restore TiDB Serverless Data](/tidb-cloud/backup-and-restore-serverless.md)
+  - [Export Data from TiDB Serverless](/tidb-cloud/serverless-export.md)
 - Manage TiDB Dedicated Clusters
   - [Create a TiDB Dedicated Cluster](/tidb-cloud/create-tidb-cluster.md)
   - Connect to Your TiDB Dedicated Cluster
@@ -236,14 +239,42 @@
   - [Precheck Errors, Migration Errors, and Alerts for Data Migration](/tidb-cloud/tidb-cloud-dm-precheck-and-troubleshooting.md)
   - [Connect AWS DMS to TiDB Cloud clusters](/tidb-cloud/tidb-cloud-connect-aws-dms.md)
 - Explore Data
-  - [Chat2Query (Beta)](/tidb-cloud/explore-data-with-chat2query.md)
+  - [Chat2Query (Beta) in SQL Editor](/tidb-cloud/explore-data-with-chat2query.md)
+- Vector Search (Beta)
+  - [Overview](/tidb-cloud/vector-search-overview.md)
+  - Get Started
+    - [Get Started with SQL](/tidb-cloud/vector-search-get-started-using-sql.md)
+    - [Get Started with Python](/tidb-cloud/vector-search-get-started-using-python.md)
+  - Integrations
+    - [Overview](/tidb-cloud/vector-search-integration-overview.md)
+    - AI Frameworks
+      - [LlamaIndex](/tidb-cloud/vector-search-integrate-with-llamaindex.md)
+      - [Langchain](/tidb-cloud/vector-search-integrate-with-langchain.md)
+    - Embedding Models/Services
+      - [Jina AI](/tidb-cloud/vector-search-integrate-with-jinaai-embedding.md)
+    - ORM Libraries
+      - [SQLAlchemy](/tidb-cloud/vector-search-integrate-with-sqlalchemy.md)
+      - [peewee](/tidb-cloud/vector-search-integrate-with-peewee.md)
+      - [Django ORM](/tidb-cloud/vector-search-integrate-with-django-orm.md)
+  - Reference
+    - [Vector Data Types](/tidb-cloud/vector-search-data-types.md)
+    - [Vector Functions and Operators](/tidb-cloud/vector-search-functions-and-operators.md)
+    - [Vector Index](/tidb-cloud/vector-search-index.md)
+  - [Improve Performance](/tidb-cloud/vector-search-improve-performance.md)
+  - [Limitations](/tidb-cloud/vector-search-limitations.md)
+  - [Changelogs](/tidb-cloud/vector-search-changelogs.md)
 - Data Service (Beta)
   - [Overview](/tidb-cloud/data-service-overview.md)
   - [Get Started](/tidb-cloud/data-service-get-started.md)
-  - [Try Out Chat2Query API](/tidb-cloud/use-chat2query-api.md)
+  - Chat2Query API
+    - [Get Started](/tidb-cloud/use-chat2query-api.md)
+    - [Start Multi-round Chat2Query](/tidb-cloud/use-chat2query-sessions.md)
+    - [Use Knowledge Bases](/tidb-cloud/use-chat2query-knowledge.md)
   - [Manage Data App](/tidb-cloud/data-service-manage-data-app.md)
   - [Manage Endpoint](/tidb-cloud/data-service-manage-endpoint.md)
   - [API Key](/tidb-cloud/data-service-api-key.md)
+  - [Custom Domain](/tidb-cloud/data-service-custom-domain.md)
+  - [Integrations](/tidb-cloud/data-service-integrations.md)
   - [Run in Postman](/tidb-cloud/data-service-postman-integration.md)
   - [Deploy Automatically with GitHub](/tidb-cloud/data-service-manage-github-connection.md)
   - [Use OpenAPI Specification with Next.js](/tidb-cloud/data-service-oas-with-nextjs.md)
@@ -266,13 +297,15 @@
   - [Basic SSO Authentication](/tidb-cloud/tidb-cloud-sso-authentication.md)
   - [Organization SSO Authentication](/tidb-cloud/tidb-cloud-org-sso-authentication.md)
   - [Identity Access Management](/tidb-cloud/manage-user-access.md)
+  - [OAuth 2.0](/tidb-cloud/oauth2.md)
   - Network Access Control
     - TiDB Serverless
       - [Connect via Private Endpoint](/tidb-cloud/set-up-private-endpoint-connections-serverless.md)
       - [TLS Connections to TiDB Serverless](/tidb-cloud/secure-connections-to-serverless-clusters.md)
     - TiDB Dedicated
       - [Configure an IP Access List](/tidb-cloud/configure-ip-access-list.md)
-      - [Connect via Private Endpoint](/tidb-cloud/set-up-private-endpoint-connections.md)
+      - [Connect via Private Endpoint with AWS](/tidb-cloud/set-up-private-endpoint-connections.md)
+      - [Connect via Private Endpoint (Private Service Connect) with Google Cloud](/tidb-cloud/set-up-private-endpoint-connections-on-google-cloud.md)
       - [Connect via VPC Peering](/tidb-cloud/set-up-vpc-peering-connections.md)
       - [TLS Connections to TiDB Dedicated](/tidb-cloud/tidb-cloud-tls-connect-to-dedicated.md)
   - Data Access Control
@@ -285,20 +318,26 @@
 - Billing
   - [Invoices](/tidb-cloud/tidb-cloud-billing.md#invoices)
   - [Billing Details](/tidb-cloud/tidb-cloud-billing.md#billing-details)
+  - [Cost Explorer](/tidb-cloud/tidb-cloud-billing.md#cost-explorer)
+  - [Billing Profile](/tidb-cloud/tidb-cloud-billing.md#billing-profile)
   - [Credits](/tidb-cloud/tidb-cloud-billing.md#credits)
   - [Payment Method Setting](/tidb-cloud/tidb-cloud-billing.md#payment-method)
   - [Billing from AWS or GCP Marketplace](/tidb-cloud/tidb-cloud-billing.md#billing-from-aws-marketplace-or-google-cloud-marketplace)
   - [Billing for Changefeed](/tidb-cloud/tidb-cloud-billing-ticdc-rcu.md)
   - [Billing for Data Migration](/tidb-cloud/tidb-cloud-billing-dm.md)
   - [Billing for Recovery Groups](/tidb-cloud/tidb-cloud-billing-recovery-group.md)
-- API
+  - [Manage Budgets](/tidb-cloud/tidb-cloud-budget.md)
+- Managed Service Provider Program
+  - [Managed Service Provider](/tidb-cloud/managed-service-provider.md)
+  - [MSP Customer](/tidb-cloud/managed-service-provider-customer.md)
 - API
   - [API Overview](/tidb-cloud/api-overview.md)
   - API Reference
     - v1beta1
       - [Billing](https://docs.pingcap.com/tidbcloud/api/v1beta1/billing)
-      - [IAM](https://docs.pingcap.com/tidbcloud/api/v1beta1/apikey)
-      - [MSP](https://docs.pingcap.com/tidbcloud/api/msp/v1beta1)
+      - [Data Service](https://docs.pingcap.com/tidbcloud/api/v1beta1/dataservice)
+      - [IAM](https://docs.pingcap.com/tidbcloud/api/v1beta1/iam)
+      - [MSP](https://docs.pingcap.com/tidbcloud/api/v1beta1/msp)
     - [v1beta](https://docs.pingcap.com/tidbcloud/api/v1beta)
 - Integrations
   - [Airbyte](/tidb-cloud/integrate-tidbcloud-with-airbyte.md)
@@ -336,12 +375,18 @@
   - [Introduction](/tidb-distributed-execution-framework.md)
   - [TiDB Global Sort](/tidb-global-sort.md)
 - Benchmarks
+  - TiDB v8.1
+    - [TPC-C Performance Test Report](/tidb-cloud/v8.1-performance-benchmarking-with-tpcc.md)
+    - [Sysbench Performance Test Report](/tidb-cloud/v8.1-performance-benchmarking-with-sysbench.md)
   - TiDB v7.5
-    - [TPC-C Performance Test Report](/tidb-cloud/v7.5.0-performance-benchmarking-with-tpcc.md)
-    - [Sysbench Performance Test Report](/tidb-cloud/v7.5.0-performance-benchmarking-with-sysbench.md)
+    - [TPC-C Performance Test Report](/tidb-cloud/v7.5-performance-benchmarking-with-tpcc.md)
+    - [Sysbench Performance Test Report](/tidb-cloud/v7.5-performance-benchmarking-with-sysbench.md)
   - TiDB v7.1
-    - [TPC-C Performance Test Report](/tidb-cloud/v7.1.0-performance-benchmarking-with-tpcc.md)
-    - [Sysbench Performance Test Report](/tidb-cloud/v7.1.0-performance-benchmarking-with-sysbench.md)
+    - [TPC-C Performance Test Report](/tidb-cloud/v7.1-performance-benchmarking-with-tpcc.md)
+    - [Sysbench Performance Test Report](/tidb-cloud/v7.1-performance-benchmarking-with-sysbench.md)
+  - TiDB v6.5
+    - [TPC-C Performance Test Report](/tidb-cloud/v6.5-performance-benchmarking-with-tpcc.md)
+    - [Sysbench Performance Test Report](/tidb-cloud/v6.5-performance-benchmarking-with-sysbench.md)
 - SQL
   - [Explore SQL with TiDB](/basic-sql-operations.md)
   - SQL Language Structure and Syntax
@@ -644,18 +689,37 @@
     - [Spill to Disk](/tiflash/tiflash-spill-disk.md)
 - CLI
   - [Overview](/tidb-cloud/cli-reference.md)
-  - cluster
+  - auth
+    - [login](/tidb-cloud/ticloud-auth-login.md)
+    - [logout](/tidb-cloud/ticloud-auth-logout.md)
+  - serverless
     - [create](/tidb-cloud/ticloud-cluster-create.md)
     - [delete](/tidb-cloud/ticloud-cluster-delete.md)
     - [describe](/tidb-cloud/ticloud-cluster-describe.md)
     - [list](/tidb-cloud/ticloud-cluster-list.md)
-    - [connect-info](/tidb-cloud/ticloud-cluster-connect-info.md)
-  - branch
-    - [create](/tidb-cloud/ticloud-branch-create.md)
-    - [delete](/tidb-cloud/ticloud-branch-delete.md)
-    - [describe](/tidb-cloud/ticloud-branch-describe.md)
-    - [list](/tidb-cloud/ticloud-branch-list.md)
-    - [connect-info](/tidb-cloud/ticloud-branch-connect-info.md)
+    - [update](/tidb-cloud/ticloud-serverless-update.md)
+    - [spending-limit](/tidb-cloud/ticloud-serverless-spending-limit.md)
+    - [region](/tidb-cloud/ticloud-serverless-region.md)
+    - [shell](/tidb-cloud/ticloud-serverless-shell.md)
+    - branch
+      - [create](/tidb-cloud/ticloud-branch-create.md)
+      - [delete](/tidb-cloud/ticloud-branch-delete.md)
+      - [describe](/tidb-cloud/ticloud-branch-describe.md)
+      - [list](/tidb-cloud/ticloud-branch-list.md)
+      - [shell](/tidb-cloud/ticloud-branch-shell.md)
+    - import
+      - [cancel](/tidb-cloud/ticloud-import-cancel.md)
+      - [describe](/tidb-cloud/ticloud-import-describe.md)
+      - [list](/tidb-cloud/ticloud-import-list.md)
+      - [start](/tidb-cloud/ticloud-import-start.md)
+    - export
+      - [create](/tidb-cloud/ticloud-serverless-export-create.md)
+      - [describe](/tidb-cloud/ticloud-serverless-export-describe.md)
+      - [list](/tidb-cloud/ticloud-serverless-export-list.md)
+      - [cancel](/tidb-cloud/ticloud-serverless-export-cancel.md)
+      - [download](/tidb-cloud/ticloud-serverless-export-download.md)
+  - [ai](/tidb-cloud/ticloud-ai.md)
+  - [completion](/tidb-cloud/ticloud-completion.md)
   - config
     - [create](/tidb-cloud/ticloud-config-create.md)
     - [delete](/tidb-cloud/ticloud-config-delete.md)
@@ -664,18 +728,10 @@
     - [list](/tidb-cloud/ticloud-config-list.md)
     - [set](/tidb-cloud/ticloud-config-set.md)
     - [use](/tidb-cloud/ticloud-config-use.md)
-  - [connect](/tidb-cloud/ticloud-connect.md)
-  - import
-    - [cancel](/tidb-cloud/ticloud-import-cancel.md)
-    - [describe](/tidb-cloud/ticloud-import-describe.md)
-    - [list](/tidb-cloud/ticloud-import-list.md)
-    - start
-      - [local](/tidb-cloud/ticloud-import-start-local.md)
-      - [s3](/tidb-cloud/ticloud-import-start-s3.md)
-      - [mysql](/tidb-cloud/ticloud-import-start-mysql.md)
   - project
     - [list](/tidb-cloud/ticloud-project-list.md)
   - [update](/tidb-cloud/ticloud-update.md)
+  - [help](/tidb-cloud/ticloud-help.md)
 - [Table Filter](/table-filter.md)
 - [Resource Control](/tidb-resource-control.md)
 - [URI Formats of External Storage Services](/external-storage-uri.md)
@@ -687,11 +743,17 @@
 - [TiDB Cloud FAQs](/tidb-cloud/tidb-cloud-faq.md)
 - [TiDB Serverless FAQs](/tidb-cloud/serverless-faqs.md)
 - Release Notes
-  - [2023](/tidb-cloud/tidb-cloud-release-notes.md)
+  - [2024](/tidb-cloud/tidb-cloud-release-notes.md)
+  - [2023](/tidb-cloud/release-notes-2023.md)
   - [2022](/tidb-cloud/release-notes-2022.md)
   - [2021](/tidb-cloud/release-notes-2021.md)
   - [2020](/tidb-cloud/release-notes-2020.md)
 - Maintenance Notification
+  - [[2024-09-15] TiDB Cloud Console Maintenance Notification](/tidb-cloud/notification-2024-09-15-console-maintenance.md)
+  - [[2024-04-18] TiDB Cloud Data Migration (DM) Feature Maintenance Notification](/tidb-cloud/notification-2024-04-18-dm-feature-maintenance.md)
+  - [[2024-04-16] TiDB Cloud Monitoring Features Maintenance Notification](/tidb-cloud/notification-2024-04-16-monitoring-features-maintenance.md)
+  - [[2024-04-11] TiDB Cloud Data Migration (DM) Feature Maintenance Notification](/tidb-cloud/notification-2024-04-11-dm-feature-maintenance.md)
+  - [[2024-04-09] TiDB Cloud Monitoring Features Maintenance Notification](/tidb-cloud/notification-2024-04-09-monitoring-features-maintenance.md)
   - [[2023-11-14] TiDB Dedicated Scale Feature Maintenance Notification](/tidb-cloud/notification-2023-11-14-scale-feature-maintenance.md)
   - [[2023-09-26] TiDB Cloud Console Maintenance Notification](/tidb-cloud/notification-2023-09-26-console-maintenance.md)
   - [[2023-08-31] TiDB Cloud Console Maintenance Notification](/tidb-cloud/notification-2023-08-31-console-maintenance.md)
diff --git a/media/dashboard/dashboard-statement-detail0.png b/media/dashboard/dashboard-statement-detail0.png
new file mode 100644
index 0000000000000..b066c5e3c4a88
Binary files /dev/null and b/media/dashboard/dashboard-statement-detail0.png differ
diff --git a/media/dashboard/dashboard-statement-detail1.png b/media/dashboard/dashboard-statement-detail1.png
new file mode 100644
index 0000000000000..b17cb3797711f
Binary files /dev/null and b/media/dashboard/dashboard-statement-detail1.png differ
diff --git a/media/dashboard/dashboard-statement-detail2.png b/media/dashboard/dashboard-statement-detail2.png
new file mode 100644
index 0000000000000..8cfedf64b1c0c
Binary files /dev/null and b/media/dashboard/dashboard-statement-detail2.png differ
diff --git a/media/tidb-cloud/Project-CIDR2.png b/media/tidb-cloud/Project-CIDR2.png
index b412a4889f229..b0739027bda67 100644
Binary files a/media/tidb-cloud/Project-CIDR2.png and b/media/tidb-cloud/Project-CIDR2.png differ
diff --git a/media/tidb-cloud/Project-CIDR4.png b/media/tidb-cloud/Project-CIDR4.png
index 0879853b29d08..e73e2b3bb87a5 100644
Binary files a/media/tidb-cloud/Project-CIDR4.png and b/media/tidb-cloud/Project-CIDR4.png differ
diff --git a/media/tidb-cloud/VPC-Peering3.png b/media/tidb-cloud/VPC-Peering3.png
deleted file mode 100644
index b3a8aacbb72c3..0000000000000
Binary files a/media/tidb-cloud/VPC-Peering3.png and /dev/null differ
diff --git a/media/tidb-cloud/data-service/GPTs1.png b/media/tidb-cloud/data-service/GPTs1.png
index ba56ed8126c1d..b9b5d51ebf44d 100644
Binary files a/media/tidb-cloud/data-service/GPTs1.png and b/media/tidb-cloud/data-service/GPTs1.png differ
diff --git a/media/tidb-cloud/v.7.1.0-oltp_read_write.png b/media/tidb-cloud/v.7.1.0-oltp_read_write.png
deleted file mode 100644
index 0b56a59afcac0..0000000000000
Binary files a/media/tidb-cloud/v.7.1.0-oltp_read_write.png and /dev/null differ
diff --git a/media/tidb-cloud/v6.5.6-oltp_insert.png b/media/tidb-cloud/v6.5.6-oltp_insert.png
new file mode 100644
index 0000000000000..0ee03639ce2bf
Binary files /dev/null and b/media/tidb-cloud/v6.5.6-oltp_insert.png differ
diff --git a/media/tidb-cloud/v6.5.6-oltp_read_write.png b/media/tidb-cloud/v6.5.6-oltp_read_write.png
new file mode 100644
index 0000000000000..82e5ab9075603
Binary files /dev/null and b/media/tidb-cloud/v6.5.6-oltp_read_write.png differ
diff --git a/media/tidb-cloud/v6.5.6-oltp_select_point.png b/media/tidb-cloud/v6.5.6-oltp_select_point.png
new file mode 100644
index 0000000000000..19496cb0e42b0
Binary files /dev/null and b/media/tidb-cloud/v6.5.6-oltp_select_point.png differ
diff --git a/media/tidb-cloud/v6.5.6-oltp_update_index.png b/media/tidb-cloud/v6.5.6-oltp_update_index.png
new file mode 100644
index 0000000000000..9cd6ab54f7cc9
Binary files /dev/null and b/media/tidb-cloud/v6.5.6-oltp_update_index.png differ
diff --git a/media/tidb-cloud/v6.5.6-oltp_update_non_index.png b/media/tidb-cloud/v6.5.6-oltp_update_non_index.png
new file mode 100644
index 0000000000000..235529282bc2a
Binary files /dev/null and b/media/tidb-cloud/v6.5.6-oltp_update_non_index.png differ
diff --git a/media/tidb-cloud/v6.5.6-tpmC.png b/media/tidb-cloud/v6.5.6-tpmC.png
new file mode 100644
index 0000000000000..d454c4492ed21
Binary files /dev/null and b/media/tidb-cloud/v6.5.6-tpmC.png differ
diff --git a/media/tidb-cloud/v7.1.3-oltp_insert.png b/media/tidb-cloud/v7.1.3-oltp_insert.png
new file mode 100644
index 0000000000000..b98251fbae658
Binary files /dev/null and b/media/tidb-cloud/v7.1.3-oltp_insert.png differ
diff --git a/media/tidb-cloud/v7.1.3-oltp_read_write.png b/media/tidb-cloud/v7.1.3-oltp_read_write.png
new file mode 100644
index 0000000000000..1d21b6573d51b
Binary files /dev/null and b/media/tidb-cloud/v7.1.3-oltp_read_write.png differ
diff --git a/media/tidb-cloud/v7.1.3-oltp_select_point.png b/media/tidb-cloud/v7.1.3-oltp_select_point.png
new file mode 100644
index 0000000000000..5d619973bdcd0
Binary files /dev/null and b/media/tidb-cloud/v7.1.3-oltp_select_point.png differ
diff --git a/media/tidb-cloud/v7.1.3-oltp_update_index.png b/media/tidb-cloud/v7.1.3-oltp_update_index.png
new file mode 100644
index 0000000000000..2ed956c736dde
Binary files /dev/null and b/media/tidb-cloud/v7.1.3-oltp_update_index.png differ
diff --git a/media/tidb-cloud/v7.1.3-oltp_update_non_index.png b/media/tidb-cloud/v7.1.3-oltp_update_non_index.png
new file mode 100644
index 0000000000000..74aa87acdecdd
Binary files /dev/null and b/media/tidb-cloud/v7.1.3-oltp_update_non_index.png differ
diff --git a/media/tidb-cloud/v7.1.3-tpmC.png b/media/tidb-cloud/v7.1.3-tpmC.png
new file mode 100644
index 0000000000000..d8bf49e923c69
Binary files /dev/null and b/media/tidb-cloud/v7.1.3-tpmC.png differ
diff --git a/media/tidb-cloud/v7.5.0_oltp_insert.png b/media/tidb-cloud/v7.5.0-oltp_insert.png
similarity index 100%
rename from media/tidb-cloud/v7.5.0_oltp_insert.png
rename to media/tidb-cloud/v7.5.0-oltp_insert.png
diff --git a/media/tidb-cloud/v7.5.0-oltp_point_select.png b/media/tidb-cloud/v7.5.0-oltp_point_select.png
new file mode 100644
index 0000000000000..12c34021f206b
Binary files /dev/null and b/media/tidb-cloud/v7.5.0-oltp_point_select.png differ
diff --git a/media/tidb-cloud/v7.5.0-oltp_read_write.png b/media/tidb-cloud/v7.5.0-oltp_read_write.png
new file mode 100644
index 0000000000000..b423451bd21ff
Binary files /dev/null and b/media/tidb-cloud/v7.5.0-oltp_read_write.png differ
diff --git a/media/tidb-cloud/v7.5.0_oltp_update_index.png b/media/tidb-cloud/v7.5.0-oltp_update_index.png
similarity index 100%
rename from media/tidb-cloud/v7.5.0_oltp_update_index.png
rename to media/tidb-cloud/v7.5.0-oltp_update_index.png
diff --git a/media/tidb-cloud/v7.5.0_oltp_update_non_index.png b/media/tidb-cloud/v7.5.0-oltp_update_non_index.png
similarity index 100%
rename from media/tidb-cloud/v7.5.0_oltp_update_non_index.png
rename to media/tidb-cloud/v7.5.0-oltp_update_non_index.png
diff --git a/media/tidb-cloud/v7.5.0_oltp_point_select.png b/media/tidb-cloud/v7.5.0_oltp_point_select.png
deleted file mode 100644
index f3f59e7c85321..0000000000000
Binary files a/media/tidb-cloud/v7.5.0_oltp_point_select.png and /dev/null differ
diff --git a/media/tidb-cloud/v7.5.0_oltp_read_write.png b/media/tidb-cloud/v7.5.0_oltp_read_write.png
deleted file mode 100644
index d796eadc62276..0000000000000
Binary files a/media/tidb-cloud/v7.5.0_oltp_read_write.png and /dev/null differ
diff --git a/media/tidb-cloud/v7.5.0_tpcc.png b/media/tidb-cloud/v7.5.0_tpcc.png
index 9078c9178e003..26089ecb4408c 100644
Binary files a/media/tidb-cloud/v7.5.0_tpcc.png and b/media/tidb-cloud/v7.5.0_tpcc.png differ
diff --git a/media/tidb-cloud/v8.1.0_oltp_insert.png b/media/tidb-cloud/v8.1.0_oltp_insert.png
new file mode 100644
index 0000000000000..fe0b422966171
Binary files /dev/null and b/media/tidb-cloud/v8.1.0_oltp_insert.png differ
diff --git a/media/tidb-cloud/v8.1.0_oltp_point_select.png b/media/tidb-cloud/v8.1.0_oltp_point_select.png
new file mode 100644
index 0000000000000..a1eb7e3029d76
Binary files /dev/null and b/media/tidb-cloud/v8.1.0_oltp_point_select.png differ
diff --git a/media/tidb-cloud/v8.1.0_oltp_read_write.png b/media/tidb-cloud/v8.1.0_oltp_read_write.png
new file mode 100644
index 0000000000000..75e9278d1f290
Binary files /dev/null and b/media/tidb-cloud/v8.1.0_oltp_read_write.png differ
diff --git a/media/tidb-cloud/v8.1.0_oltp_update_index.png b/media/tidb-cloud/v8.1.0_oltp_update_index.png
new file mode 100644
index 0000000000000..b5d58fe531600
Binary files /dev/null and b/media/tidb-cloud/v8.1.0_oltp_update_index.png differ
diff --git a/media/tidb-cloud/v8.1.0_oltp_update_non_index.png b/media/tidb-cloud/v8.1.0_oltp_update_non_index.png
new file mode 100644
index 0000000000000..b8e760933506d
Binary files /dev/null and b/media/tidb-cloud/v8.1.0_oltp_update_non_index.png differ
diff --git a/media/tidb-cloud/v8.1.0_tpcc.png b/media/tidb-cloud/v8.1.0_tpcc.png
new file mode 100644
index 0000000000000..10eb8ef603758
Binary files /dev/null and b/media/tidb-cloud/v8.1.0_tpcc.png differ
diff --git a/media/vector-search/embedding-search.png b/media/vector-search/embedding-search.png
new file mode 100644
index 0000000000000..0035ada1b7479
Binary files /dev/null and b/media/vector-search/embedding-search.png differ
diff --git a/tidb-cloud/_index.md b/tidb-cloud/_index.md
index 9d0c8c0ac4083..9f8ec56ab846b 100644
--- a/tidb-cloud/_index.md
+++ b/tidb-cloud/_index.md
@@ -3,6 +3,7 @@ title: TiDB Cloud Documentation
 aliases: ['/tidbcloud/privacy-policy', '/tidbcloud/terms-of-service', '/tidbcloud/service-level-agreement']
 hide_sidebar: true
 hide_commit: true
+summary: TiDB Cloud is a fully-managed Database-as-a-Service (DBaaS) that brings everything great about TiDB to your cloud. It offers guides, samples, and references for learning, trying, developing, maintaining, migrating, monitoring, tuning, securing, billing, integrating, and referencing.
 ---


@@ -21,6 +22,8 @@ hide_commit: true
 [Try Out TiDB Cloud](https://docs.pingcap.com/tidbcloud/tidb-cloud-quickstart)

+[Try Out TiDB + AI](https://docs.pingcap.com/tidbcloud/vector-search-get-started-using-python)
+
 [Try Out HTAP](https://docs.pingcap.com/tidbcloud/tidb-cloud-htap-quickstart)

 [Proof of Concept](https://docs.pingcap.com/tidbcloud/tidb-cloud-poc)

diff --git a/tidb-cloud/api-overview.md b/tidb-cloud/api-overview.md
index 02bbf64bd5150..820b056cb00ad 100644
--- a/tidb-cloud/api-overview.md
+++ b/tidb-cloud/api-overview.md
@@ -9,7 +9,7 @@ summary: Learn about what is TiDB Cloud API, its features, and how to use API to
 >
 > TiDB Cloud API is in beta.

-The TiDB Cloud API is a [REST interface](https://en.wikipedia.org/wiki/Representational_state_transfer) that provides you with programmatic access to manage administrative objects within TiDB Cloud. Through this API, you can automatically and efficiently manage resources such as Projects, Clusters, Backups, Restores, Imports, and Billings.
+The TiDB Cloud API is a [REST interface](https://en.wikipedia.org/wiki/Representational_state_transfer) that provides you with programmatic access to manage administrative objects within TiDB Cloud. Through this API, you can automatically and efficiently manage resources such as Projects, Clusters, Backups, Restores, Imports, Billings, and resources in the [Data Service](/tidb-cloud/data-service-overview.md).

 The API has the following features:

@@ -23,6 +23,10 @@ To start using TiDB Cloud API, refer to the following resources in TiDB Cloud AP
 - [Authentication](https://docs.pingcap.com/tidbcloud/api/v1beta#section/Authentication)
 - [Rate Limiting](https://docs.pingcap.com/tidbcloud/api/v1beta#section/Rate-Limiting)
 - API Full References
-  - [v1beta1](https://docs.pingcap.com/tidbcloud/api/v1beta1)
+  - v1beta1
+    - [Billing](https://docs.pingcap.com/tidbcloud/api/v1beta1/billing)
+    - [Data Service](https://docs.pingcap.com/tidbcloud/api/v1beta1/dataservice)
+    - [IAM](https://docs.pingcap.com/tidbcloud/api/v1beta1/iam)
+    - [MSP](https://docs.pingcap.com/tidbcloud/api/v1beta1/msp)
   - [v1beta](https://docs.pingcap.com/tidbcloud/api/v1beta#tag/Project)
 - [Changelog](https://docs.pingcap.com/tidbcloud/api/v1beta#section/API-Changelog)
diff --git a/tidb-cloud/backup-and-restore.md b/tidb-cloud/backup-and-restore.md
index 40b087d63ab68..3984d1f0c9ca9 100644
--- a/tidb-cloud/backup-and-restore.md
+++ b/tidb-cloud/backup-and-restore.md
@@ -94,12 +94,11 @@ To configure the backup schedule, perform the following steps:
 > - After you delete a cluster, the automatic backup files will be retained for a specified period, as set in backup retention. You need to delete the backup files accordingly.
 > - After you delete a cluster, the existing manual backup files will be retained until you manually delete them, or your account is closed.

-### Turn on dual region backup (beta)
+### Turn on dual region backup

 > **Note:**
 >
-> - The dual region backup feature is currently in beta.
-> - TiDB Dedicated clusters hosted on Google Cloud work seamlessly with Google Cloud Storage. Similar to Google Cloud Storage, **TiDB Dedicated supports dual-region pairing only within the same multi-region code as Google dual-region storage**. For example, in Asia, currently you must pair Tokyo and Osaka together for dual-region storage. For more information, refer to [Dual-regions](https://cloud.google.com/storage/docs/locations#location-dr).
+> TiDB Dedicated clusters hosted on Google Cloud work seamlessly with Google Cloud Storage. Similar to Google Cloud Storage, **TiDB Dedicated supports dual-region pairing only within the same multi-region code as Google dual-region storage**. For example, in Asia, currently you must pair Tokyo and Osaka together for dual-region storage. For more information, refer to [Dual-regions](https://cloud.google.com/storage/docs/locations#location-dr).

 TiDB Dedicated supports dual region backup by replicating backups from your cluster region to another different region. After you enable this feature, all backups are automatically replicated to the specified region. This provides cross-region data protection and disaster recovery capabilities. It is estimated that approximately 99% of the data can be replicated to the secondary region within an hour.

@@ -137,7 +136,7 @@ To turn off auto backup, perform the following steps:
 5. Click **Confirm** again to save changes.

-### Turn off dual region backup (beta)
+### Turn off dual region backup

 > **Tip**
 >
diff --git a/tidb-cloud/branch-overview.md b/tidb-cloud/branch-overview.md
index aa75153ea68ff..5e9e428b0c45b 100644
--- a/tidb-cloud/branch-overview.md
+++ b/tidb-cloud/branch-overview.md
@@ -39,7 +39,7 @@ Currently, TiDB Serverless branches are in beta and free of charge.

 - For each organization in TiDB Cloud, you can create a maximum of five TiDB Serverless branches by default across all the clusters. The branches of a cluster will be created in the same region as the cluster, and you cannot create branches for a throttled cluster or a cluster larger than 100 GiB.

-- For each branch, 5 GiB storage is allowed. Once the storage is reached, the read and write operations on this branch will be throttled until you reduce the storage.
+- For each branch of a free cluster, 10 GiB storage is allowed. For each branch of a scalable cluster, 100 GiB storage is allowed. Once the storage is reached, the read and write operations on this branch will be throttled until you reduce the storage.

 If you need more quotas, [contact TiDB Cloud Support](/tidb-cloud/tidb-cloud-support.md).

diff --git a/tidb-cloud/built-in-monitoring.md b/tidb-cloud/built-in-monitoring.md
index fc18d7d7f1903..5eb842f12d44f 100644
--- a/tidb-cloud/built-in-monitoring.md
+++ b/tidb-cloud/built-in-monitoring.md
@@ -67,18 +67,18 @@ The following sections illustrate the metrics on the Metrics page for TiDB Dedic
 | Metric name | Labels | Description |
 | :------------| :------| :-------------------------------------------- |
 | TiDB Uptime | node | The runtime of each TiDB node since last restart. |
-| TiDB CPU Usage | node | The CPU usage statistics of each TiDB node. |
-| TiDB Memory Usage | node | The memory usage statistics of each TiDB node. |
+| TiDB CPU Usage | node, limit | The CPU usage statistics or upper limit of each TiDB node. |
+| TiDB Memory Usage | node, limit | The memory usage statistics or upper limit of each TiDB node. |
 | TiKV Uptime | node | The runtime of each TiKV node since last restart. |
-| TiKV CPU Usage | node | The CPU usage statistics of each TiKV node. |
-| TiKV Memory Usage | node | The memory usage statistics of each TiKV node. |
+| TiKV CPU Usage | node, limit | The CPU usage statistics or upper limit of each TiKV node. |
+| TiKV Memory Usage | node, limit | The memory usage statistics or upper limit of each TiKV node. |
 | TiKV IO Bps | node-write, node-read | The total input/output bytes per second of read and write in each TiKV node. |
-| TiKV Storage Usage | node | The storage usage statistics of each TiKV node. |
+| TiKV Storage Usage | node, limit | The storage usage statistics or upper limit of each TiKV node. |
 | TiFlash Uptime | node | The runtime of each TiFlash node since last restart. |
-| TiFlash CPU Usage | node | The CPU usage statistics of each TiFlash node. |
-| TiFlash Memory | node | The memory usage statistics of each TiFlash node. |
+| TiFlash CPU Usage | node, limit | The CPU usage statistics or upper limit of each TiFlash node. |
+| TiFlash Memory Usage | node, limit | The memory usage statistics or upper limit of each TiFlash node. |
 | TiFlash IO MBps | node-write, node-read | The total bytes of read and write in each TiFlash node. |
-| TiFlash Storage Usage | node | The storage usage statistics of each TiFlash node. |
+| TiFlash Storage Usage | node, limit | The storage usage statistics or upper limit of each TiFlash node. |

 ## Metrics for TiDB Serverless clusters

diff --git a/tidb-cloud/changefeed-sink-to-apache-kafka.md b/tidb-cloud/changefeed-sink-to-apache-kafka.md
index f7580d4d904ec..2e95d1d790d92 100644
--- a/tidb-cloud/changefeed-sink-to-apache-kafka.md
+++ b/tidb-cloud/changefeed-sink-to-apache-kafka.md
@@ -1,6 +1,6 @@
 ---
 title: Sink to Apache Kafka
-Summary: Learn how to create a changefeed to stream data from TiDB Cloud to Apache Kafka.
+summary: This document explains how to create a changefeed to stream data from TiDB Cloud to Apache Kafka. It includes restrictions, prerequisites, and steps to configure the changefeed for Apache Kafka. The process involves setting up network connections, adding permissions for Kafka ACL authorization, and configuring the changefeed specification.
 ---

 # Sink to Apache Kafka
@@ -137,8 +137,8 @@ For example, if your Kafka cluster is in Confluent Cloud, you can see [Resources

 8. In the **Topic Configuration** area, configure the following numbers. The changefeed will automatically create the Kafka topics according to the numbers.

-    - **Replication Factor**: controls how many Kafka servers each Kafka message is replicated to.
-    - **Partition Number**: controls how many partitions exist in a topic.
+    - **Replication Factor**: controls how many Kafka servers each Kafka message is replicated to. The valid value ranges from [`min.insync.replicas`](https://kafka.apache.org/33/documentation.html#brokerconfigs_min.insync.replicas) to the number of Kafka brokers.
+    - **Partition Number**: controls how many partitions exist in a topic. The valid value range is `[1, 10 * the number of Kafka brokers]`.

 9. Click **Next**.

diff --git a/tidb-cloud/changefeed-sink-to-cloud-storage.md b/tidb-cloud/changefeed-sink-to-cloud-storage.md
index 391b53e14a602..62ddc711032a7 100644
--- a/tidb-cloud/changefeed-sink-to-cloud-storage.md
+++ b/tidb-cloud/changefeed-sink-to-cloud-storage.md
@@ -1,6 +1,6 @@
 ---
 title: Sink to Cloud Storage
-Summary: Learn how to create a changefeed to stream data from a TiDB Dedicated cluster to cloud storage, such as Amazon S3 and GCS.
+summary: This document explains how to create a changefeed to stream data from TiDB Cloud to Amazon S3 or GCS. It includes restrictions, configuration steps for the destination, replication, and specification, as well as starting the replication process.
 ---

 # Sink to Cloud Storage
@@ -45,8 +45,8 @@ For **GCS**, before filling **GCS Endpoint**, you need to first grant the GCS bu

    ![Create a role](/media/tidb-cloud/changefeed/sink-to-cloud-storage-gcs-create-role.png)

-    3. Enter a name, description, ID, and role launch stage for the role. The role name cannot be changed after the role is created.
-    4. Click **Add permissions**. Add the following read-only permissions to the role, and then click **Add**.
+    3. Enter a name, description, ID, and role launch stage for the role. The role name cannot be changed after the role is created.
+    4. Click **Add permissions**. Add the following permissions to the role, and then click **Add**.

         - storage.buckets.get
         - storage.objects.create
diff --git a/tidb-cloud/changefeed-sink-to-mysql.md b/tidb-cloud/changefeed-sink-to-mysql.md
index 97d3f7cc15787..2e7164df91927 100644
--- a/tidb-cloud/changefeed-sink-to-mysql.md
+++ b/tidb-cloud/changefeed-sink-to-mysql.md
@@ -1,6 +1,6 @@
 ---
 title: Sink to MySQL
-Summary: Learn how to create a changefeed to stream data from TiDB Cloud to MySQL.
+summary: This document explains how to stream data from TiDB Cloud to MySQL using the Sink to MySQL changefeed. It includes restrictions, prerequisites, and steps to create a MySQL sink for data replication. The process involves setting up network connections, loading existing data to MySQL, and creating target tables in MySQL. After completing the prerequisites, users can create a MySQL sink to replicate data to MySQL.
 ---

 # Sink to MySQL
@@ -35,7 +35,7 @@ If your MySQL service is in an AWS VPC that has no public internet access, take 
 1. [Set up a VPC peering connection](/tidb-cloud/set-up-vpc-peering-connections.md) between the VPC of the MySQL service and your TiDB cluster.
 2. Modify the inbound rules of the security group that the MySQL service is associated with.

-    You must add [the CIDR of the region where your TiDB Cloud cluster is located](/tidb-cloud/set-up-vpc-peering-connections.md#prerequisite-set-a-project-cidr) to the inbound rules. Doing so allows the traffic to flow from your TiDB Cluster to the MySQL instance.
+    You must add [the CIDR of the region where your TiDB Cloud cluster is located](/tidb-cloud/set-up-vpc-peering-connections.md#prerequisite-set-a-cidr-for-a-region) to the inbound rules. Doing so allows the traffic to flow from your TiDB Cluster to the MySQL instance.

 3. If the MySQL URL contains a hostname, you need to allow TiDB Cloud to be able to resolve the DNS hostname of the MySQL service.

@@ -48,7 +48,7 @@ If your MySQL service is in a Google Cloud VPC that has no public internet acces
 2. [Set up a VPC peering connection](/tidb-cloud/set-up-vpc-peering-connections.md) between the VPC of the MySQL service and your TiDB cluster.
 3. Modify the ingress firewall rules of the VPC where MySQL is located.

-    You must add [the CIDR of the region where your TiDB Cloud cluster is located](/tidb-cloud/set-up-vpc-peering-connections.md#prerequisite-set-a-project-cidr) to the ingress firewall rules. Doing so allows the traffic to flow from your TiDB Cluster to the MySQL endpoint.
+    You must add [the CIDR of the region where your TiDB Cloud cluster is located](/tidb-cloud/set-up-vpc-peering-connections.md#prerequisite-set-a-cidr-for-a-region) to the ingress firewall rules. Doing so allows the traffic to flow from your TiDB Cluster to the MySQL endpoint.

 ### Load existing data (optional)

diff --git a/tidb-cloud/changefeed-sink-to-tidb-cloud.md b/tidb-cloud/changefeed-sink-to-tidb-cloud.md
index 769b185cde21d..a5304d6d99e61 100644
--- a/tidb-cloud/changefeed-sink-to-tidb-cloud.md
+++ b/tidb-cloud/changefeed-sink-to-tidb-cloud.md
@@ -1,6 +1,6 @@
 ---
 title: Sink to TiDB Cloud
-Summary: Learn how to create a changefeed to stream data from a TiDB Dedicated cluster to a TiDB Serverless cluster.
+summary: This document explains how to stream data from a TiDB Dedicated cluster to a TiDB Serverless cluster. There are restrictions on the number of changefeeds and regions available for the feature. Prerequisites include extending tidb_gc_life_time, backing up data, and obtaining the start position of TiDB Cloud sink. To create a TiDB Cloud sink, navigate to the cluster overview page, establish the connection, customize table and event filters, fill in the start replication position, specify the changefeed specification, review the configuration, and create the sink. Finally, restore tidb_gc_life_time to its original value.
--- # Sink to TiDB Cloud diff --git a/tidb-cloud/cli-reference.md b/tidb-cloud/cli-reference.md index 44d96bdde1cda..02e6c2958340b 100644 --- a/tidb-cloud/cli-reference.md +++ b/tidb-cloud/cli-reference.md @@ -3,7 +3,11 @@ title: TiDB Cloud CLI Reference summary: Provides an overview of TiDB Cloud CLI. --- -# TiDB Cloud CLI Reference +# TiDB Cloud CLI Reference Beta + +> **Note:** +> +> TiDB Cloud CLI is in beta. TiDB Cloud CLI is a command line interface, which allows you to operate TiDB Cloud from your terminal with a few lines of commands. In the TiDB Cloud CLI, you can easily manage your TiDB Cloud clusters, import data to your clusters, and perform more operations. @@ -17,17 +21,19 @@ The following table lists the commands available for the TiDB Cloud CLI. To use the `ticloud` CLI in your terminal, run `ticloud [command] [subcommand]`. If you are using [TiUP](https://docs.pingcap.com/tidb/stable/tiup-overview), use `tiup cloud [command] [subcommand]` instead. -| Command | Subcommand | Description | -|------------|------------------------------------------------------------|----------------------------------------------------------------------------------------------------------| -| cluster | create, delete, describe, list, connect-info | Manage clusters | -| branch | create, delete, describe, list, connect-info | Manage branches | -| completion | bash, fish, powershell, zsh | Generate completion script for specified shell | -| config | create, delete, describe, edit, list, set, use | Configure user profiles | -| connect | - | Connect to a TiDB cluster | -| help | cluster, completion, config, help, import, project, update | View help for any command | -| import | cancel, describe, list, start | Manage [import](/tidb-cloud/tidb-cloud-migration-overview.md#import-data-from-files-to-tidb-cloud) tasks | -| project | list | Manage projects | -| update | - | Update the CLI to the latest version | +| Command | Subcommand | Description | 
+|-------------------|--------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------|
+| auth | login, logout | Login and logout |
+| serverless (alias: s) | create, delete, describe, list, update, spending-limit, region, shell | Manage TiDB Serverless clusters |
+| serverless branch | create, delete, describe, list, shell | Manage TiDB Serverless branches |
+| serverless import | cancel, describe, list, start | Manage TiDB Serverless import tasks |
+| serverless export | create, describe, list, cancel, download | Manage TiDB Serverless export tasks |
+| ai | - | Chat with TiDB Bot |
+| completion | bash, fish, powershell, zsh | Generate completion script for specified shell |
+| config | create, delete, describe, edit, list, set, use | Configure user profiles |
+| project | list | Manage projects |
+| update | - | Update the CLI to the latest version |
+| help | auth, serverless, ai, completion, config, project, update | View help for any command |

## Command modes

@@ -43,12 +49,20 @@ The TiDB Cloud CLI provides two modes for some commands for easy use:

## User profile

-For the TiDB Cloud CLI, a user profile is a collection of properties associated with a user, including the profile name, public key, and private key. To use TiDB Cloud CLI, you must create a user profile first.
+For the TiDB Cloud CLI, a user profile is a collection of properties associated with a user, including the profile name, public key, private key, and OAuth token. To use TiDB Cloud CLI, you must have a user profile.

-### Create a user profile
+### Create a user profile with TiDB Cloud API key

Use [`ticloud config create`](/tidb-cloud/ticloud-config-create.md) to create a user profile.

+### Create a user profile with OAuth token
+
+Use [`ticloud auth login`](/tidb-cloud/ticloud-auth-login.md) to assign an OAuth token to the current profile.
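The choice between these two credential types can be sketched as a small precedence check. This is an illustration only — the field names below are hypothetical placeholders, not the CLI's actual profile storage format — but it mirrors the documented rule that the TiDB Cloud API key takes precedence over the OAuth token when both are present in the current profile:

```python
def resolve_credentials(profile: dict) -> str:
    """Return which credential the CLI would use for a profile.

    The field names ("public_key", "private_key", "oauth_token") are
    hypothetical placeholders used for illustration.
    """
    # Documented precedence: an API key wins over an OAuth token.
    if profile.get("public_key") and profile.get("private_key"):
        return "api-key"
    if profile.get("oauth_token"):
        return "oauth-token"
    raise ValueError("profile has no usable credentials")
```

For example, a profile holding both an API key pair and an OAuth token resolves to the API key.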
If no profiles exist, a profile named `default` will be created automatically. + +> **Note:** +> +> In the preceding two methods, the TiDB Cloud API key takes precedence over the OAuth token. If both are available in the current profile, the API key will be used. + ### List all user profiles Use [`ticloud config list`](/tidb-cloud/ticloud-config-list.md) to list all user profiles. @@ -103,10 +117,11 @@ Use [`ticloud config delete`](/tidb-cloud/ticloud-config-delete.md) to delete a The following table lists the global flags for the TiDB Cloud CLI. -| Flag | Description | Required | Note | -|----------------------|-----------------------------------------------|----------|--------------------------------------------------------------------------------------------------------------------------| -| --no-color | Disables color in output. | No | Only works in non-interactive mode. In interactive mode, disabling color might not work with some UI components. | -| -P, --profile string | Specifies the active user profile used in this command. | No | Works in both non-interactive and interactive modes. | +| Flag | Description | Required | Note | +|----------------------|---------------------------------------------------------|----------|------------------------------------------------------------------------------------------------------------------| +| --no-color | Disables color in output. | No | Only works in non-interactive mode. In interactive mode, disabling color might not work with some UI components. | +| -P, --profile string | Specifies the active user profile used in this command. | No | Works in both non-interactive and interactive modes. | +| -D, --debug | Enable debug mode | No | Works in both non-interactive and interactive modes. 
| ## Feedback diff --git a/tidb-cloud/connect-to-tidb-cluster.md b/tidb-cloud/connect-to-tidb-cluster.md index 2c36e21103fdf..0ec762a2c3350 100644 --- a/tidb-cloud/connect-to-tidb-cluster.md +++ b/tidb-cloud/connect-to-tidb-cluster.md @@ -29,15 +29,15 @@ After your TiDB Dedicated cluster is created on TiDB Cloud, you can connect to i If you want lower latency and more security, set up VPC peering and connect via a private endpoint using a VM instance on the corresponding cloud provider in your cloud account. -- [Connect via Chat2Query (beta)](/tidb-cloud/explore-data-with-chat2query.md) +- [Connect via built-in SQL Editor](/tidb-cloud/explore-data-with-chat2query.md) > **Note:** > - > To use Chat2Query on [TiDB Dedicated](/tidb-cloud/select-cluster-tier.md#tidb-dedicated) clusters, contact [TiDB Cloud support](/tidb-cloud/tidb-cloud-support.md). + > To use SQL Editor on [TiDB Dedicated](/tidb-cloud/select-cluster-tier.md#tidb-dedicated) clusters, contact [TiDB Cloud support](/tidb-cloud/tidb-cloud-support.md). - TiDB Cloud is powered by artificial intelligence (AI). If your cluster is hosted on AWS and the TiDB version of the cluster is v6.5.0 or later, you can use Chat2Query (beta), an AI-powered SQL editor in the [TiDB Cloud console](https://tidbcloud.com/), to maximize your data value. + If your cluster is hosted on AWS and the TiDB version of the cluster is v6.5.0 or later, you can use the AI-assisted SQL Editor in the [TiDB Cloud console](https://tidbcloud.com/) to maximize your data value. - In Chat2Query, you can either simply type `--` followed by your instructions to let AI generate SQL queries automatically or write SQL queries manually, and then run SQL queries against databases without a terminal. You can find the query results in tables intuitively and check the query logs easily. 
+    In SQL Editor, you can either write SQL queries manually or simply press ⌘ + I on macOS (or Control + I on Windows or Linux) to instruct [Chat2Query (beta)](/tidb-cloud/tidb-cloud-glossary.md#chat2query) to generate SQL queries automatically. This enables you to run SQL queries against databases without a local SQL client. You can intuitively view the query results in tables or charts and easily check the query logs.

- [Connect via SQL Shell](/tidb-cloud/connect-via-sql-shell.md): to try TiDB SQL and test out TiDB's compatibility with MySQL quickly, or administer user privileges.

diff --git a/tidb-cloud/connect-via-standard-connection-serverless.md b/tidb-cloud/connect-via-standard-connection-serverless.md
index f1d0fe58281a0..d8cf180f10989 100644
--- a/tidb-cloud/connect-via-standard-connection-serverless.md
+++ b/tidb-cloud/connect-via-standard-connection-serverless.md
@@ -5,7 +5,9 @@ summary: Learn how to connect to your TiDB Serverless cluster via public endpoin

# Connect to TiDB Serverless via Public Endpoint

-This document describes how to connect to your TiDB Serverless cluster via public endpoint. With the public endpoint, you can connect to your TiDB Serverless cluster via a SQL client from your laptop.
+This document describes how to connect to your TiDB Serverless cluster via a public endpoint, using a SQL client from your computer, as well as how to disable a public endpoint.
+
+## Connect via a public endpoint

> **Tip:**
>
@@ -34,6 +36,28 @@ To connect to a TiDB Serverless cluster via public endpoint, take the following

>
> When you connect to a TiDB Serverless cluster, you must include the prefix for your cluster in the user name and wrap the name with quotation marks. For more information, see [User name prefix](/tidb-cloud/select-cluster-tier.md#user-name-prefix).

+## Disable a public endpoint
+
+If you do not need to use a public endpoint of a TiDB Serverless cluster, you can disable it to prevent connections from the internet:
+
+1. 
Navigate to the [**Clusters**](https://tidbcloud.com/console/clusters) page, and then click the name of your target cluster to go to its overview page. + +2. Click **Networking** in the left navigation pane and click **Disable** in the right pane. A confirmation dialog is displayed. + +3. Click **Disable** in the confirmation dialog. + +After disabling the public endpoint, the `Public` entry in the **Endpoint Type** drop-down list of the connect dialog is disabled. If users are still trying to access the cluster from the public endpoint, they will get an error. + +> **Note:** +> +> Disabling the public endpoint does not affect existing connections. It only prevents new connections from the internet. + +You can re-enable the public endpoint after disabling it: + +1. Navigate to the [**Clusters**](https://tidbcloud.com/console/clusters) page, and then click the name of your target cluster to go to its overview page. + +2. Click **Networking** in the left navigation pane and click **Enable** in the right pane. + ## What's next After you have successfully connected to your TiDB cluster, you can [explore SQL statements with TiDB](/basic-sql-operations.md). diff --git a/tidb-cloud/create-tidb-cluster-serverless.md b/tidb-cloud/create-tidb-cluster-serverless.md index 474ae6408df88..732b76622cbbf 100644 --- a/tidb-cloud/create-tidb-cluster-serverless.md +++ b/tidb-cloud/create-tidb-cluster-serverless.md @@ -31,13 +31,15 @@ If you are in the `Organization Owner` or the `Project Owner` role, you can crea 4. The cloud provider of TiDB Serverless is AWS. You can select an AWS region where you want to host your cluster. -5. (Optional) Change the spending limit if you plan to use more storage and compute resources than the [free quota](/tidb-cloud/select-cluster-tier.md#usage-quota). If you have not added a payment method, you need to add a credit card after editing the limit. +5. Update the default cluster name if necessary. + +6. Select a cluster plan. 
TiDB Serverless provides two [cluster plans](/tidb-cloud/select-cluster-tier.md#cluster-plans): **Free Cluster** and **Scalable Cluster**. You can start with a free cluster and later upgrade to a scalable cluster as your needs grow. To create a scalable cluster, you need to specify a **Monthly Spending Limit** and add a credit card.

   > **Note:**
   >
-   > For each organization in TiDB Cloud, you can create a maximum of five TiDB Serverless clusters by default. To create more TiDB Serverless clusters, you need to add a credit card and set a [spending limit](/tidb-cloud/tidb-cloud-glossary.md#spending-limit) for the usage.
+   > For each organization in TiDB Cloud, you can create a maximum of five [free clusters](/tidb-cloud/select-cluster-tier.md#free-cluster-plan) by default. To create more TiDB Serverless clusters, you need to add a credit card and create [scalable clusters](/tidb-cloud/select-cluster-tier.md#scalable-cluster-plan).

-6. Update the default cluster name if necessary, and then click **Create**.
+7. Click **Create**.

The cluster creation process starts and your TiDB Cloud cluster will be created in approximately 30 seconds.

@@ -47,4 +49,4 @@ After your cluster is created, follow the instructions in [Connect to TiDB Serve

> **Note:**
>
-> If you do not set a password, you cannot connect to the cluster.
\ No newline at end of file
+> If you do not set a password, you cannot connect to the cluster.

diff --git a/tidb-cloud/create-tidb-cluster.md b/tidb-cloud/create-tidb-cluster.md
index 520ba710d4509..1ff57fedd7822 100644
--- a/tidb-cloud/create-tidb-cluster.md
+++ b/tidb-cloud/create-tidb-cluster.md
@@ -27,9 +27,7 @@ If you are an organization owner, you can rename the default project or create a

1. Log in to the [TiDB Cloud console](https://tidbcloud.com/), and then click in the lower-left corner.

-2. Click **Organization Settings**.
-
-   The **Projects** tab is displayed by default.
+2. 
Click **Organization Settings**, and click the **Projects** tab in the left navigation pane. The **Projects** tab is displayed. 3. Do one of the following: @@ -61,11 +59,12 @@ If you are in the `Organization Owner` or the `Project Owner` role, you can crea 2. Configure the [cluster size](/tidb-cloud/size-your-cluster.md) for TiDB, TiKV, and TiFlash (optional) respectively. 3. Update the default cluster name and port number if necessary. - 4. If this is the first cluster of your current project and CIDR has not been configured for this project, you need to set the project CIDR. If you do not see the **Project CIDR** field, it means that CIDR has already been configured for this project. + 4. If CIDR has not been configured for this region, you need to set the CIDR. If you do not see the **Project CIDR** field, it means that CIDR has already been configured for this region. > **Note:** > - > When setting the project CIDR, avoid any conflicts with the CIDR of the VPC where your application is located. You cannot modify your project CIDR once it is set. + > - TiDB Cloud will create a VPC with this CIDR when the first cluster in this region is created. All the subsequent clusters of the same project in this region will use this VPC. + > - When setting the CIDR, avoid any conflicts with the CIDR of the VPC where your application is located. You cannot modify your CIDR once the VPC is created. 4. Confirm the cluster and billing information on the right side. diff --git a/tidb-cloud/data-service-api-key.md b/tidb-cloud/data-service-api-key.md index c01b55c0a471b..368ffd66075a8 100644 --- a/tidb-cloud/data-service-api-key.md +++ b/tidb-cloud/data-service-api-key.md @@ -53,9 +53,33 @@ Request quotas are subject to rate limits as follows: - TiDB Cloud Data Service allows up to 100 requests per day for each Chat2Query Data App. +## API key expiration + +By default, API keys never expire. 
However, for security considerations, you can specify an expiration time for your API key when you [create](#create-an-api-key) or [edit](#edit-an-api-key) the key.
+
+- An API key is valid only before its expiration time. Once expired, all requests using that key will fail with a `401` error, and the response is similar to the following:
+
+    ```bash
+    HTTP/2 401
+    date: Mon, 05 Sep 2023 02:50:52 GMT
+    content-type: application/json
+    content-length: 420
+    x-debug-trace-id: 202309040250529dcdf2055e7b2ae5e9
+    x-kong-response-latency: 1
+    server: kong/2.8.1
+
+    {"data":{"result":{"start_ms":0,"end_ms":0,"latency":"","row_affect":0,"limit":0,"code":49900002,"message":"API Key is no longer valid","row_count":0},"columns":[],"rows":[]},"type":""}
+    ```
+
+- You can also expire API keys manually. For detailed steps, see [Expire an API key](#expire-an-api-key) and [Expire all API keys](#expire-all-api-keys). Once you manually expire an API key, the expiration takes effect immediately.
+
+- You can check the status and expiration time of your API keys in the **Authentication** area of your target Data App.
+
+- Once expired, an API key cannot be activated or edited again.
+
## Manage API keys

-The following sections describe how to create, edit, and delete an API key for a Data App.
+The following sections describe how to create, edit, delete, and expire API keys for a Data App.

### Create an API key

@@ -78,6 +102,10 @@ To create an API key for a Data App, perform the following steps:

    If your requests per minute exceed the rate limit, the API returns a `429` error. To get a quota of more than 1000 requests per minute (rpm) per API key, you can [submit a request](https://support.pingcap.com/hc/en-us/requests/new?ticket_form_id=7800003722519) to our support team.

+    4. (Optional) Set a desired expiration time for your API key.
+
+        By default, an API key never expires. 
If you prefer to specify an expiration time for the API key, click **Expires in**, select a time unit (`Minutes`, `Days`, or `Months`), and then fill in a desired number for the time unit.
+
5. Click **Next**. The public key and private key are displayed.

    Make sure that you have copied and saved the private key in a secure location. After leaving this page, you will not be able to get the full private key again.

@@ -86,12 +114,16 @@ To create an API key for a Data App, perform the following steps:

### Edit an API key

+> **Note:**
+>
+> You cannot edit an expired key.
+
To edit the description or rate limit of an API key, perform the following steps:

1. Navigate to the [**Data Service**](https://tidbcloud.com/console/data-service) page of your project.
2. In the left pane, click the name of your target Data App to view its details.
-3. In the **API Key** area, locate the **Action** column, and then click **...** > **Edit** in the API key row that you want to change.
-4. Update the description, role, or rate limit of the API key.
+3. In the **Authentication** area, locate the **Action** column, and then click **...** > **Edit** in the API key row that you want to change.
+4. Update the description, role, rate limit, or expiration time of the API key.
5. Click **Update**.

### Delete an API key

@@ -106,3 +138,25 @@ To delete an API key for a Data App, perform the following steps:

2. In the left pane, click the name of your target Data App to view its details.
3. In the **API Key** area, locate the **Action** column, and then click **...** > **Delete** in the API key row that you want to delete.
4. In the displayed dialog box, confirm the deletion.
+
+### Expire an API key
+
+> **Note:**
+>
+> You cannot expire an expired key.
+
+To expire an API key for a Data App, perform the following steps:
+
+1. Navigate to the [**Data Service**](https://tidbcloud.com/console/data-service) page of your project.
+2. 
In the left pane, click the name of your target Data App to view its details. +3. In the **Authentication** area, locate the **Action** column, and then click **...** > **Expire Now** in the API key row that you want to expire. +4. In the displayed dialog box, confirm the expiration. + +### Expire all API keys + +To expire all API keys for a Data App, perform the following steps: + +1. Navigate to the [**Data Service**](https://tidbcloud.com/console/data-service) page of your project. +2. In the left pane, click the name of your target Data App to view its details. +3. In the **Authentication** area, click **Expire All**. +4. In the displayed dialog box, confirm the expiration. \ No newline at end of file diff --git a/tidb-cloud/data-service-app-config-files.md b/tidb-cloud/data-service-app-config-files.md index 42e62c9451a98..ff45d0a5a5997 100644 --- a/tidb-cloud/data-service-app-config-files.md +++ b/tidb-cloud/data-service-app-config-files.md @@ -142,7 +142,8 @@ The following is an example configuration of `config.json`. In this example, the "type": "", "required": <0 | 1>, "default": "", - "description": "" + "description": "", + "is_path_parameter": } ], "settings": { @@ -177,6 +178,7 @@ The description of each field is as follows: | `params.enum` | String | (Optional) Specifies the value options of the parameter. This field is only valid when `params.type` is set to `string`, `number`, or `integer`. To specify multiple values, you can separate them with a comma (`,`). | | `params.default` | String | The default value of the parameter. Make sure that the value matches the type of parameter you specified. Otherwise, the endpoint returns an error. The default value of an `ARRAY` type parameter is a string and you can use a comma (`,`) to separate multiple values. | | `params.description` | String | The description of the parameter. | +| `params.is_path_parameter` | Boolean | Specifies whether the parameter is a path parameter. 
If it is set to `true`, ensure that the `endpoint` field contains the corresponding parameter placeholders; otherwise, it will cause deployment failures. Conversely, if the `endpoint` field contains the corresponding parameter placeholders but this field is set to `false`, it will also cause deployment failures. | | `settings.timeout` | Integer | The timeout for the endpoint in milliseconds, which is `30000` by default. You can set it to an integer from `1` to `60000`. | | `settings.row_limit` | Integer | The maximum number of rows that the endpoint can operate or return, which is `1000` by default. When `batch_operation` is set to `0`, you can set it to an integer from `1` to `2000`. When `batch_operation` is set to `1`, you can set it to an integer from `1` to `100`. | | `settings.enable_pagination` | Integer | Controls whether to enable the pagination for the results returned by the request. Supported values are `0` (disabled) and `1` (enabled). The default value is `0`. | @@ -185,7 +187,7 @@ The description of each field is as follows: | `tag` | String | The tag for the endpoint. The default value is `"Default"`. | | `batch_operation` | Integer | Controls whether to enable the endpoint to operate in batch mode. Supported values are `0` (disabled) and `1` (enabled). When it is set to `1`, you can operate on multiple rows in a single request. To enable this option, make sure that the request method is `POST` or `PUT`. | | `sql_file` | String | The SQL file directory for the endpoint. For example, `"sql/GET-v1.sql"`. | -| `type` | String | The type of the endpoint, which can only be `"sql_endpoint"`. | +| `type` | String | The type of the endpoint. The value is `"system-data"` for predefined system endpoints and `"sql_endpoint"` for other endpoints. | | `return_type` | String | The response format of the endpoint, which can only be `"json"`. 
| ### SQL file configuration diff --git a/tidb-cloud/data-service-custom-domain.md b/tidb-cloud/data-service-custom-domain.md new file mode 100644 index 0000000000000..dc60c8030f938 --- /dev/null +++ b/tidb-cloud/data-service-custom-domain.md @@ -0,0 +1,72 @@ +--- +title: Custom Domain in Data Service +summary: Learn how to use a custom domain to access your Data App in TiDB Cloud Data Service. +--- + +# Custom Domain in Data Service + +TiDB Cloud Data Service provides a default domain `.data.tidbcloud.com` to access each Data App's endpoints. For enhanced personalization and flexibility, you can configure a custom domain for your Data App instead of using the default domain. + +This document describes how to manage custom domains in your Data App. + +## Before you begin + +Before configuring a custom domain for your Data App, note the following: + +- Custom domain requests exclusively support HTTPS for security. Once you successfully configure a custom domain, a "Let's Encrypt" certificate is automatically applied. +- Your custom domain must be unique within the TiDB Cloud Data Service. +- You can configure only one custom domain for each default domain, which is determined by the region of your cluster. + +## Manage custom domains + +The following sections describe how to create, edit, and remove a custom domain for a Data App. + +### Create a custom domain + +To create a custom domain for a Data App, perform the following steps: + +1. Navigate to the [**Data Service**](https://tidbcloud.com/console/data-service) page of your project. +2. In the left pane, click the name of your target Data App to view its details. +3. In the **Manage Custom Domain** area, click **Add Custom Domain**. +4. In the **Add Custom Domain** dialog box, do the following: + 1. Select the default domain you want to replace. + 2. Enter your desired custom domain name. + 3. Optional: configure a custom path as the prefix for your endpoints. 
If **Custom Path** is left empty, the default path is used. +5. Preview your **Base URL** to ensure it meets your expectations. If it looks correct, click **Save**. +6. Follow the instructions in the **DNS Settings** dialog to add a `CNAME` record for the default domain in your DNS provider. + +The custom domain is in a **Pending** status initially while the system validates your DNS settings. Once the DNS validation is successful, the status of your custom domain will update to **Success**. + +> **Note:** +> +> Depending on your DNS provider, it might take up to 24 hours for the DNS record to be validated. If a custom domain remains unvalidated for over 24 hours, it will be in an **Expired** status. In this case, you can only remove the custom domain and try again. + +After your custom domain status is set to **Success**, you can use it to access your endpoint. The code example provided by TiDB Cloud Data Service is automatically updated to your custom domain and path. For more information, see [Call an endpoint](/tidb-cloud/data-service-manage-endpoint.md#call-an-endpoint). + +### Edit a custom domain + +> **Note:** +> +> After you complete the following changes, the previous custom domain and custom path will become invalid immediately. If you modify the custom domain, you need to wait for the new DNS record to be validated. + +To edit a custom domain for a Data App, perform the following steps: + +1. Navigate to the [**Data Service**](https://tidbcloud.com/console/data-service) page of your project. +2. In the left pane, click the name of your target Data App to view its details. +3. In the **Manage Custom Domain** area, locate the **Action** column, and then click **Edit** in the custom domain row that you want to edit. +4. In the displayed dialog box, update the custom domain or custom path. +5. Preview your **Base URL** to ensure it meets your expectations. If it looks correct, click **Save**. +6. 
If you have changed the custom domain, follow the instructions in the **DNS Settings** dialog to add a `CNAME` record for the default domain in your DNS provider. + +### Remove a custom domain + +> **Note:** +> +> Before you delete a custom domain, make sure that the custom domain is not used anymore. + +To remove a custom domain for a Data App, perform the following steps: + +1. Navigate to the [**Data Service**](https://tidbcloud.com/console/data-service) page of your project. +2. In the left pane, click the name of your target Data App to view its details. +3. In the **Manage Custom Domain** area, locate the **Action** column, and then click **Delete** in the custom domain row that you want to delete. +4. In the displayed dialog box, confirm the deletion. diff --git a/tidb-cloud/data-service-get-started.md b/tidb-cloud/data-service-get-started.md index bc1c22137503e..db8366baa1f7d 100644 --- a/tidb-cloud/data-service-get-started.md +++ b/tidb-cloud/data-service-get-started.md @@ -64,7 +64,7 @@ To get started with Data Service, you can also create your own Data App, and the 4. (Optional) To automatically deploy endpoints of the Data App to your preferred GitHub repository and branch, enable **Connect to GitHub**, and then do the following: 1. Click **Install on GitHub**, and then follow the on-screen instructions to install **TiDB Cloud Data Service** as an application on your target repository. - 2. Click **Authorize** to authorize access to the application on GitHub. + 2. Go back to the TiDB Cloud console, and then click **Authorize** to authorize access to the application on GitHub. 3. Specify the target repository, branch, and directory where you want to save the configuration files of your Data App. > **Note:** @@ -82,9 +82,7 @@ To get started with Data Service, you can also create your own Data App, and the An endpoint is a web API that you can customize to execute SQL statements. 
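Tying the custom-domain section above to endpoints: a Data App's base URL (the default or custom domain, plus the optional custom path prefix) is combined with an endpoint path to form the URL a client calls. The joining rule below is a hedged sketch for illustration only, not the service's actual routing logic; the HTTPS-only scheme follows from the custom-domain section:

```python
def endpoint_url(domain: str, endpoint_path: str, custom_path: str = "") -> str:
    """Compose an HTTPS endpoint URL from a domain, an optional custom
    path prefix, and an endpoint path (illustrative assumption only)."""
    parts = [p.strip("/") for p in (custom_path, endpoint_path) if p]
    # Custom-domain requests exclusively support HTTPS.
    return "https://" + domain.strip("/") + "/" + "/".join(parts)
```

For example, with a custom domain `api.example.com` and custom path `data`, an endpoint `/v1/orders` would be served at `https://api.example.com/data/v1/orders` (hypothetical names).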
-After creating a Data App, a default `untitled endpoint` is created for you automatically. You can use the default endpoint to access your TiDB Cloud cluster.
-
-If you want to create a new endpoint, locate the newly created Data App and click **+** **Create Endpoint** to the right of the App name.
+To create a new endpoint, locate the newly created Data App and click **+** **Create Endpoint** to the right of the App name.

#### Configure properties

@@ -116,6 +114,10 @@ You can customize SQL statements for the endpoint in the SQL editor, which is th

    In the SQL editor, you can write statements such as table join queries, complex queries, and aggregate functions. You can also simply type `--` followed by your instructions to let AI generate SQL statements automatically.

+    > **Note:**
+    >
+    > To try the AI capability of TiDB Cloud, you need to allow PingCAP and OpenAI to use your code snippets for research and service improvement. For more information, see [Enable or disable AI to generate SQL queries](/tidb-cloud/explore-data-with-chat2query.md#enable-or-disable-ai-to-generate-sql-queries).
+
    To define a parameter, you can insert it as a variable placeholder like `${ID}` in the SQL statement. For example, `SELECT * FROM table_name WHERE id = ${ID}`. Then, you can click the **Params** tab on the right pane to change the parameter definition and test values.

    > **Note:**
@@ -131,9 +133,31 @@ You can customize SQL statements for the endpoint in the SQL editor, which is th

    If you have inserted parameters in the SQL statements, make sure that you have set test values or default values for the parameters in the **Params** tab on the right pane. Otherwise, an error is returned.

-    To run a SQL statement, select the line of the SQL with your cursor and click **Run** > **Run at cursor**.
+
+
+ + For macOS: + + - If you have only one statement in the editor, to run it, press **⌘ + Enter** or click **Run**. + + - If you have multiple statements in the editor, to run one or several of them sequentially, place your cursor on your target statement or select the lines of the target statements with your cursor, and then press **⌘ + Enter** or click **Run**. + + - To run all statements in the editor sequentially, press **⇧ + ⌘ + Enter**, or select the lines of all statements with your cursor and click **Run**. - To run all SQL statements in the SQL editor, click **Run**. In this case, only the last SQL results are returned. +
+ +
+ + For Windows or Linux: + + - If you have only one statement in the editor, to run it, press **Ctrl + Enter** or click **Run**. + + - If you have multiple statements in the editor, to run one or several of them sequentially, place your cursor on your target statement or select the lines of the target statements with your cursor, and then press **Ctrl + Enter** or click **Run**. + + - To run all statements in the editor sequentially, press **Shift + Ctrl + Enter**, or select the lines of all statements with your cursor and click **Run**. + +
+
After running the statements, you can see the query results immediately in the **Result** tab at the bottom of the page. @@ -153,7 +177,7 @@ To deploy the endpoint, perform the following steps: 2. Click **Deploy** to confirm the deployment. You will get the **Endpoint has been deployed** prompt if the endpoint is successfully deployed. - On the right pane of the endpoint details page, you can click the **Deployments** tab to view the deployed history. + To view the deployment history, you can click the name of your Data App in the left pane, and then click the **Deployments** tab in the right pane. ### Step 5. Call the endpoint diff --git a/tidb-cloud/data-service-integrations.md b/tidb-cloud/data-service-integrations.md new file mode 100644 index 0000000000000..32b6ddc0f72a7 --- /dev/null +++ b/tidb-cloud/data-service-integrations.md @@ -0,0 +1,40 @@ +--- +title: Integrate a Data App with Third-Party Tools +summary: Learn how to integrate a TiDB Cloud Data App with third-party tools, such as GPTs and Dify, in the TiDB Cloud console. +--- + +# Integrate a Data App with Third-Party Tools + +Integrating third-party tools with your Data App enhances your applications with advanced natural language processing and artificial intelligence (AI) capabilities provided by third-party tools. This integration enables your applications to perform more complex tasks and deliver intelligent solutions. + +This document describes how to integrate a Data App with third-party tools, such as GPTs and Dify, in the TiDB Cloud console. + +## Integrate your Data App with GPTs + +You can integrate your Data App with [GPTs](https://openai.com/blog/introducing-gpts) to enhance your applications with intelligent capabilities. + +To integrate your Data App with GPTs, perform the following steps: + +1. Navigate to the [**Data Service**](https://tidbcloud.com/console/data-service) page of your project. +2. 
In the left pane, locate your target Data App, click its name, and then click the **Integrations** tab.
+3. In the **Integrate with GPTs** area, click **Get Configuration**.
+
+    ![Get Configuration](/media/tidb-cloud/data-service/GPTs1.png)
+
+4. In the displayed dialog box, you can see the following fields:
+
+    a. **API Specification URL**: copy the URL of the OpenAPI Specification of your Data App. For more information, see [Use the OpenAPI Specification](/tidb-cloud/data-service-manage-data-app.md#use-the-openapi-specification).
+
+    b. **API Key**: enter the API key of your Data App. If you do not have an API key yet, click **Create API Key** to create one. For more information, see [Create an API key](/tidb-cloud/data-service-api-key.md#create-an-api-key).
+
+    c. **API Key Encoded**: copy the Base64-encoded string equivalent of the API key you have provided.
+
+    ![GPTs Dialog Box](/media/tidb-cloud/data-service/GPTs2.png)
+
+5. Use the copied API Specification URL and the encoded API key in your GPT configuration.
+
+## Integrate your Data App with Dify
+
+You can integrate your Data App with [Dify](https://docs.dify.ai/guides/tools) to enhance your applications with intelligent capabilities, such as vector distance calculations, advanced similarity searches, and vector analysis.
+
+To integrate your Data App with Dify, follow the same steps as for [GPTs integration](#integrate-your-data-app-with-gpts). The only difference is that on the **Integrations** tab, you need to click **Get Configuration** in the **Integrate with Dify** area.

diff --git a/tidb-cloud/data-service-manage-data-app.md b/tidb-cloud/data-service-manage-data-app.md
index ef8525188009a..962dc11193490 100644
--- a/tidb-cloud/data-service-manage-data-app.md
+++ b/tidb-cloud/data-service-manage-data-app.md
@@ -61,30 +61,6 @@ You can edit the name, version, and description of a Data App.
To edit Data App For more information, see [Deploy automatically with GitHub](/tidb-cloud/data-service-manage-github-connection.md). -### Integrate your Data App with GPTs - -You can integrate your Data App with [GPTs](https://openai.com/blog/introducing-gpts) to enhance your applications with intelligent capabilities. - -To integrate your Data App with GPTs, perform the following steps: - -1. Navigate to the [**Data Service**](https://tidbcloud.com/console/data-service) page of your project. -2. In the left pane, locate your target Data App and click the name of your target Data App to view its details. -3. In the **Integration with GPTs** area, click **Get Configuration**. - - ![Get Configuration](/media/tidb-cloud/data-service/GPTs1.png) - -4. In the displayed dialog box, you can see the following fields: - - a. **API Specification URL**: copy the URL of the OpenAPI Specification of your Data App. For more information, see [Use the OpenAPI Specification](#use-the-openapi-specification). - - b. **API Key**: enter the API key of your Data App. If you do not have an API key yet, click **Create API Key** to create one. For more information, see [Create an API key](/tidb-cloud/data-service-api-key.md#create-an-api-key). - - c. **API Key Encoded**: copy the base64 encoded string equivalent to the API key you have provided. - - ![GPTs Dialog Box](/media/tidb-cloud/data-service/GPTs2.png) - -5. Use the copied API Specification URL and the encoded API key in your GPT configuration. - ### Manage linked data sources You can add or remove linked clusters for a Data App. @@ -113,6 +89,10 @@ For more information, see [Manage an API key](/tidb-cloud/data-service-api-key.m For more information, see [Manage an endpoint](/tidb-cloud/data-service-manage-endpoint.md). +### Manage a custom domain + +For more information, see [Manage a custom domain](/tidb-cloud/data-service-custom-domain.md). 
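The custom domain setup referenced above comes down to a single DNS change: a `CNAME` record that points your own domain at the Data App's default domain. A hypothetical zone-file entry might look like the following. Both names here are placeholders; the actual target is shown in the **DNS Settings** dialog:

```
; Hypothetical example: "api.example.com" stands for your custom domain.
; Replace the CNAME target with the default domain shown in the DNS Settings dialog.
api.example.com.    3600    IN    CNAME    <default-domain-of-your-data-app>
```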
+ ### Manage deployments To manage deployments, perform the following steps: diff --git a/tidb-cloud/data-service-manage-endpoint.md b/tidb-cloud/data-service-manage-endpoint.md index e0d885498da9e..b0e154c8e38ea 100644 --- a/tidb-cloud/data-service-manage-endpoint.md +++ b/tidb-cloud/data-service-manage-endpoint.md @@ -20,11 +20,11 @@ This document describes how to manage your endpoints in a Data App in the TiDB C ## Create an endpoint -In Data Service, you can either generate an endpoint automatically or create an endpoint manually. +In Data Service, you can automatically generate endpoints, manually create endpoints, or add predefined system endpoints. > **Tip:** > -> You can also create an endpoint from a SQL file in Chat2Query (beta). For more information, see [Generate an endpoint from a SQL file](/tidb-cloud/explore-data-with-chat2query.md#generate-an-endpoint-from-a-sql-file). +> You can also create an endpoint from a SQL file in SQL Editor. For more information, see [Generate an endpoint from a SQL file](/tidb-cloud/explore-data-with-chat2query.md#generate-an-endpoint-from-a-sql-file). ### Generate an endpoint automatically @@ -42,8 +42,15 @@ In TiDB Cloud Data Service, you can generate one or multiple endpoints automatic 2. Select at least one HTTP operation (such as `GET (Retrieve)`, `POST (Create)`, and `PUT (Update)`) for the endpoint to be generated. - For each operation you selected, TiDB Cloud Data Service will generate a corresponding endpoint. If you have selected a batch operation (such as `POST (Batch Create)`), the generated endpoint lets you operate on multiple rows in a single request. + For each operation you select, TiDB Cloud Data Service will generate a corresponding endpoint. If you select a batch operation (such as `POST (Batch Create)`), the generated endpoint lets you operate on multiple rows in a single request. 
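To make the batch behavior concrete, the following Python sketch builds the kind of JSON body that a single request to a generated batch endpoint (such as `POST /sample_table/bulk`) could carry. The `items` key and the column names are assumptions for illustration only; check the code example that Data Service generates for your endpoint for the exact shape.

```python
import json

# Hypothetical rows to insert in one batch request.
# The column names ("id", "name") are illustrative, not from a real table.
rows = [
    {"id": 1, "name": "first row"},
    {"id": 2, "name": "second row"},
]

# Assumed request body shape: a JSON object whose "items" array carries
# one object per row, so multiple rows travel in a single HTTP request.
body = json.dumps({"items": rows})
print(body)
```

You would then send this body with the HTTP client of your choice, authenticated with your Data App API key.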
+ If the table you selected contains [vector data types](/tidb-cloud/vector-search-data-types.md), you can enable the **Vector Search Operations** option and select a vector distance function to generate a vector search endpoint that automatically calculates vector distances based on your selected distance function. The supported [vector distance functions](/tidb-cloud/vector-search-functions-and-operators.md) include the following: + + - `VEC_L2_DISTANCE` (default): calculates the L2 distance (Euclidean distance) between two vectors. + - `VEC_COSINE_DISTANCE`: calculates the cosine distance between two vectors. + - `VEC_NEGATIVE_INNER_PRODUCT`: calculates the distance by using the negative of the inner product between two vectors. + - `VEC_L1_DISTANCE`: calculates the L1 distance (Manhattan distance) between two vectors. + 3. (Optional) Configure a timeout and tag for the operations. All the generated endpoints will automatically inherit the configured properties, which can be modified later as needed. 4. (Optional) The **Auto-Deploy Endpoint** option (disabled by default) controls whether to enable the direct deployment of the generated endpoints. When it is enabled, the draft review process is skipped, and the generated endpoints are deployed immediately without further manual review or approval. @@ -56,6 +63,7 @@ In TiDB Cloud Data Service, you can generate one or multiple endpoints automatic - Endpoint name: the generated endpoint name is in the `/` format, and the request method (such as `GET`, `POST`, and `PUT`) is displayed before the endpoint name. For example, if the selected table name is `sample_table` and the selected operation is `POST (Create)`, the generated endpoint is displayed as `POST /sample_table`. - If a batch operation is selected, TiDB Cloud Data Service appends `/bulk` to the name of the generated endpoint. 
For example, if the selected table name is `/sample_table` and the selected operation is `POST (Batch Create)`, the generated endpoint is displayed as `POST /sample_table/bulk`. + - If `POST (Vector Similarity Search)` is selected, TiDB Cloud Data Service appends `/vector_search` to the name of the generated endpoint. For example, if the selected table name is `/sample_table` and the selected operation is `POST (Vector Similarity Search)`, the generated endpoint is displayed as `POST /sample_table/vector_search`. - If there has been already an endpoint with the same request method and endpoint name, TiDB Cloud Data Service appends `_dump_` to the name of the generated endpoint. For example, `/sample_table_dump_EUKRfl`. - SQL statements: TiDB Cloud Data Service automatically writes SQL statements for the generated endpoints according to the table column specifications and the selected endpoint operations. You can click the endpoint name to view its SQL statements in the middle section of the page. @@ -73,6 +81,44 @@ To create an endpoint manually, perform the following steps: 3. Update the default name if necessary. The newly created endpoint is added to the top of the endpoint list. 4. Configure the new endpoint according to the instructions in [Develop an endpoint](#develop-an-endpoint). +### Add a predefined system endpoint + +Data Service provides an endpoint library with predefined system endpoints that you can directly add to your Data App, reducing the effort in your endpoint development. Currently, the library only includes the `/system/query` endpoint, which enables you to execute any SQL statement by simply passing the statement in the predefined `sql` parameter. + +To add a predefined system endpoint to your Data App, perform the following steps: + +1. Navigate to the [**Data Service**](https://tidbcloud.com/console/data-service) page of your project. + +2. 
In the left pane, locate your target Data App, click **+** to the right of the App name, and then click **Manage Endpoint Library**.
+
+    A dialog for endpoint library management is displayed. Currently, only **Execute Query** (that is, the `/system/query` endpoint) is provided in the dialog.
+
+3. To add the `/system/query` endpoint to your Data App, toggle the **Execute Query** switch to **Added**.
+
+    > **Tip:**
+    >
+    > To remove an added predefined endpoint from your Data App, toggle the **Execute Query** switch to **Removed**.
+
+4. Click **Save**.
+
+    > **Note:**
+    >
+    > - After you click **Save**, the added or removed endpoint is deployed to production immediately, which makes the added endpoint accessible and the removed endpoint inaccessible.
+    > - If a non-predefined endpoint with the same path and method already exists in the current App, the creation of the system endpoint will fail.
+
+    The added system-provided endpoint is displayed at the top of the endpoint list.
+
+5. Check the endpoint name, SQL statements, properties, and parameters of the new endpoint.
+
+    > **Note:**
+    >
+    > The `/system/query` endpoint is powerful and versatile but potentially destructive. Use it with discretion and ensure the queries are secure and well-considered to prevent unintended consequences.
+
+    - Endpoint name: both the endpoint name and the path are `/system/query`, and the request method is `POST`.
+    - SQL statements: the `/system/query` endpoint does not come with any SQL statement. You can find the SQL editor in the middle section of the page and write your desired SQL statements in the SQL editor. Note that the SQL statements written in the SQL editor for the `/system/query` endpoint will be saved in the SQL editor so that you can further develop and test them next time, but they will not be saved in the endpoint configuration.
+    - Endpoint properties: in the right pane of the page, you can find the endpoint properties on the **Properties** tab.
Unlike other custom endpoints, system endpoints let you customize only the `timeout` and `max rows` properties.
+    - Endpoint parameters: in the right pane of the page, you can find the endpoint parameters on the **Params** tab. The parameters of the `/system/query` endpoint are configured automatically and cannot be modified.
+
## Develop an endpoint

For each endpoint, you can write SQL statements to execute on a TiDB cluster, define parameters for the SQL statements, or manage the name and version.

@@ -89,11 +135,38 @@ On the right pane of the endpoint details page, you can click the **Properties**

- **Path**: the path that users use to access the endpoint.

-    - The combination of the request method and the path must be unique within a Data App.
-    - Only letters, numbers, underscores (`_`), and slashes (`/`) are allowed in a path. A path must start with a slash (`/`) and end with a letter, number, or underscore (`_`). For example, `/my_endpoint/get_id`.
    - The length of the path must be less than 64 characters.
+    - The combination of the request method and the path must be unique within a Data App.
+    - Only letters, numbers, underscores (`_`), slashes (`/`), and parameters enclosed in curly braces (such as `{var}`) are allowed in a path. Each path must start with a slash (`/`) and end with a letter, number, or underscore (`_`). For example, `/my_endpoint/get_id`.
+    - For parameters enclosed in `{ }`, only letters, numbers, and underscores (`_`) are allowed. Each parameter enclosed in `{ }` must start with a letter or underscore (`_`).
+
+    > **Note:**
+    >
+    > - In a path, each parameter must occupy an entire level and cannot have a prefix or suffix.
+ > + > Valid path: ```/var/{var}``` and ```/{var}``` + > + > Invalid path: ```/var{var}``` and ```/{var}var``` + > + > - Paths with the same method and prefix might conflict, as in the following example: + > + > ```GET /var/{var1}``` + > + > ```GET /var/{var2}``` + > + > These two paths will conflict with each other because `GET /var/123` matches both. + > + > - Paths with parameters have lower priority than paths without parameters. For example: + > + > ```GET /var/{var1}``` + > + > ```GET /var/123``` + > + > These two paths will not conflict because `GET /var/123` takes precedence. + > + > - Path parameters can be used directly in SQL. For more information, see [Configure parameters](#configure-parameters). -- **Endpoint URL**: (read-only) the URL is automatically generated based on the region where the corresponding cluster is located, the service URL of the Data App, and the path of the endpoint. For example, if the path of the endpoint is `/my_endpoint/get_id`, the endpoint URL is `https://.data.tidbcloud.com/api/v1beta/app//endpoint/my_endpoint/get_id`. +- **Endpoint URL**: (read-only) the default URL is automatically generated based on the region where the corresponding cluster is located, the service URL of the Data App, and the path of the endpoint. For example, if the path of the endpoint is `/my_endpoint/get_id`, the endpoint URL is `https://.data.tidbcloud.com/api/v1beta/app//endpoint/my_endpoint/get_id`. To configure a custom domain for the Data App, see [Custom Domain in Data Service](/tidb-cloud/data-service-custom-domain.md). - **Request Method**: the HTTP method of the endpoint. The following methods are supported: @@ -136,9 +209,12 @@ On the SQL editor of the endpoint details page, you can write and run the SQL st On the upper part of the SQL editor, select a cluster on which you want to execute SQL statements from the drop-down list. Then, you can view all databases of this cluster in the **Schema** tab on the right pane. -2. 
Write SQL statements. +2. Depending on your endpoint type, do one of the following to select a database: - Before querying or modifying data, you need to first specify the database in the SQL statements. For example, `USE database_name;`. + - Predefined system endpoints: on the upper part of the SQL editor, select the target database from the drop-down list. + - Other endpoints: write a SQL statement to specify the target database in the SQL editor. For example, `USE database_name;`. + +3. Write SQL statements. In the SQL editor, you can write statements such as table join queries, complex queries, and aggregate functions. You can also simply type `--` followed by your instructions to let AI generate SQL statements automatically. @@ -151,7 +227,7 @@ On the SQL editor of the endpoint details page, you can write and run the SQL st > - The parameter name is case-sensitive. > - The parameter cannot be used as a table name or column name. -3. Run SQL statements. +4. Run SQL statements. If you have inserted parameters in the SQL statements, make sure that you have set test values or default values for the parameters in the **Params** tab on the right pane. Otherwise, an error is returned. @@ -183,6 +259,10 @@ On the SQL editor of the endpoint details page, you can write and run the SQL st After running the statements, you can see the query results immediately in the **Result** tab at the bottom of the page. + > **Note:** + > + > The returned result has a size limit of 8 MiB. + ### Configure parameters On the right pane of the endpoint details page, you can click the **Params** tab to view and manage the parameters used in the endpoint. @@ -190,8 +270,11 @@ On the right pane of the endpoint details page, you can click the **Params** tab In the **Definition** section, you can view and manage the following properties for a parameter: - The parameter name: the name can only include letters, digits, and underscores (`_`) and must start with a letter or an underscore (`_`). 
**DO NOT** use `page` and `page_size` as parameter names, which are reserved for pagination of request results. -- **Required**: specifies whether the parameter is required in the request. The default configuration is set to not required. -- **Type**: specifies the data type of the parameter. Supported values are `STRING`, `NUMBER`, `INTEGER`, `BOOLEAN`, and `ARRAY`. When using a `STRING` type parameter, you do not need to add quotation marks (`'` or `"`). For example, `foo` is valid for the `STRING` type and is processed as `"foo"`, whereas `"foo"` is processed as `"\"foo\""`. +- **Required**: specifies whether the parameter is required in the request. For path parameters, the configuration is required and cannot be modified. For other parameters, the default configuration is not required. +- **Type**: specifies the data type of the parameter. For path parameters, only `STRING` and `INTEGER` are supported. For other parameters, `STRING`, `NUMBER`, `INTEGER`, `BOOLEAN`, and `ARRAY` are supported. + + When using a `STRING` type parameter, you do not need to add quotation marks (`'` or `"`). For example, `foo` is valid for the `STRING` type and is processed as `"foo"`, whereas `"foo"` is processed as `"\"foo\""`. + - **Enum Value**: (optional) specifies the valid values for the parameter and is available only when the parameter type is `STRING`, `INTEGER`, or `NUMBER`. - If you leave this field empty, the parameter can be any value of the specified type. @@ -203,15 +286,12 @@ In the **Definition** section, you can view and manage the following properties - For `ARRAY` type, you need to separate multiple values with a comma (`,`). - Make sure that the value can be converted to the type of parameter. Otherwise, the endpoint returns an error. - If you do not set a test value for a parameter, the default value is used when testing the endpoint. +- **Location**: indicates the location of the parameter. This property cannot be modified. 
+ - For path parameters, this property is `Path`. + - For other parameters, if the request method is `GET` or `DELETE`, this property is `Query`. If the request method is `POST` or `PUT`, this property is `Body`. In the **Test Values** section, you can view and set test parameters. These values are used as the parameter values when you test the endpoint. Make sure that the value can be converted to the type of parameter. Otherwise, the endpoint returns an error. -### Manage versions - -On the right pane of the endpoint details page, you can click the **Deployments** tab to view and manage the deployed versions of the endpoint. - -In the **Deployments** tab, you can deploy a draft version and undeploy the online version. - ### Rename To rename an endpoint, perform the following steps: @@ -220,6 +300,10 @@ To rename an endpoint, perform the following steps: 2. In the left pane, click the name of your target Data App to view its endpoints. 3. Locate the endpoint you want to rename, click **...** > **Rename**., and enter a new name for the endpoint. +> **Note:** +> +> Predefined system endpoints do not support renaming. + ## Test an endpoint To test an endpoint, perform the following steps: diff --git a/tidb-cloud/data-service-oas-with-nextjs.md b/tidb-cloud/data-service-oas-with-nextjs.md index d2a9c5a741a3c..941b0b93d3d82 100644 --- a/tidb-cloud/data-service-oas-with-nextjs.md +++ b/tidb-cloud/data-service-oas-with-nextjs.md @@ -22,7 +22,7 @@ This document uses a TiDB Serverless cluster as an example. To begin with, create a table `test.repository` in your TiDB cluster and insert some sample data into it. The following example inserts some open source projects developed by PingCAP as data for demonstration purposes. -To execute the SQL statements, you can use [Chat2Query](/tidb-cloud/explore-data-with-chat2query.md) in the [TiDB Cloud console](https://tidbcloud.com). 
+To execute the SQL statements, you can use [SQL Editor](/tidb-cloud/explore-data-with-chat2query.md) in the [TiDB Cloud console](https://tidbcloud.com). ```sql -- Select the database diff --git a/tidb-cloud/delete-tidb-cluster.md b/tidb-cloud/delete-tidb-cluster.md index d8d042d9bbec0..a3fd05c5d6a44 100644 --- a/tidb-cloud/delete-tidb-cluster.md +++ b/tidb-cloud/delete-tidb-cluster.md @@ -17,7 +17,10 @@ You can delete a cluster at any time by performing the following steps: > Alternatively, you can also click the name of the target cluster to go to its overview page, and then click **...** in the upper-right corner. 3. Click **Delete** in the drop-down menu. -4. In the cluster deleting window, enter your `//`. +4. In the cluster deleting window, confirm the deletion: + + - If you have at least one manual or automatic backup, you can see the number of backups and the charging policy for backups. Click **Continue** and enter `//`. + - If you do not have any backups, just enter `//`. If you want to restore the cluster sometime in the future, make sure that you have a backup of the cluster. Otherwise, you cannot restore it anymore. For more information about how to back up TiDB Dedicated clusters, see [Back Up and Restore TiDB Dedicated Data](/tidb-cloud/backup-and-restore.md). @@ -25,11 +28,15 @@ You can delete a cluster at any time by performing the following steps: > > [TiDB Serverless clusters](/tidb-cloud/select-cluster-tier.md#tidb-serverless) only support [in-place restoring from backups](/tidb-cloud/backup-and-restore-serverless.md#restore) and do not support restoring data after the deletion. If you want to delete a TiDB Serverless cluster and restore its data in the future, you can use [Dumpling](https://docs.pingcap.com/tidb/stable/dumpling-overview) to export your data as a backup. -5. Click **I understand the consequences. Delete this cluster**. +5. Click **I understand, delete it**. 
+ + Once a backed up TiDB Dedicated cluster is deleted, the existing backup files of the cluster are moved to the recycle bin. - Once a backed up TiDB Dedicated cluster is deleted, the existing backup files of the cluster are moved to the recycle bin. + - Automatic backups will expire and be automatically deleted once the retention period ends. The default retention period is 7 days if you don't modify it. + - Manual backups will be kept in the Recycle Bin until manually deleted. -- For backup files from an automatic backup, the recycle bin can retain them for 7 days. -- For backup files from a manual backup, there is no expiration date. + > **Note:** + > + > Please be aware that backups will continue to incur charges until deleted. - If you want to restore a TiDB Dedicated cluster from recycle bin, see [Restore a deleted cluster](/tidb-cloud/backup-and-restore.md#restore-a-deleted-cluster). + If you want to restore a TiDB Dedicated cluster from recycle bin, see [Restore a deleted cluster](/tidb-cloud/backup-and-restore.md#restore-a-deleted-cluster). diff --git a/tidb-cloud/dev-guide-wordpress.md b/tidb-cloud/dev-guide-wordpress.md index 7b626d5e769e4..8f961e13e5ba5 100644 --- a/tidb-cloud/dev-guide-wordpress.md +++ b/tidb-cloud/dev-guide-wordpress.md @@ -34,7 +34,7 @@ cd wordpress-tidb-docker ### Step 2: Install dependencies -1. The sample repository requires [Docker](https://www.docker.com/) and [Docker Compose](https://docs.docker.com/compose/) to start WordPress. If you have them installed, you can skip this step. It is highly recommended to run your WordPress in a Linux environment (such as Ubuntu). Run the following command to install them: +1. The sample repository requires [Docker](https://www.docker.com/) and [Docker Compose](https://docs.docker.com/compose/) to start WordPress. If you have them installed, you can skip this step. It is highly recommended to run your WordPress in a Linux environment (such as Ubuntu). 
Run the following command to install Docker and Docker Compose:

```shell
sudo sh install.sh
```

@@ -59,6 +59,7 @@ Configure the WordPress database connection to TiDB Serverless.

    - **Endpoint Type** is set to `Public`.
    - **Connect With** is set to `WordPress`.
    - **Operating System** is set to `Debian/Ubuntu/Arch`.
+    - **Database** is set to the database you want to use, for example, `test`.

4. Click **Generate Password** to create a random password.

@@ -96,6 +97,12 @@ Configure the WordPress database connection to TiDB Serverless.

2. Set up your WordPress site by visiting [localhost](http://localhost/) if you start the container on your local machine or `http://` if the WordPress is running on a remote machine.

+### Step 5: Confirm the database connection
+
+1. Close the connection dialog for your cluster on the TiDB Cloud console, and open the **SQL Editor** page.
+2. Under the **Schemas** tab on the left, click the database you connected to WordPress.
+3. Confirm that you now see the WordPress tables (such as `wp_posts` and `wp_comments`) in the list of tables for that database.
+
## Need help?

Ask questions on [TiDB Community](https://ask.pingcap.com/), or [create a support ticket](https://support.pingcap.com/).

diff --git a/tidb-cloud/explore-data-with-chat2query.md b/tidb-cloud/explore-data-with-chat2query.md
index 27d2eb2d3d64a..95ade1842cc56 100644
--- a/tidb-cloud/explore-data-with-chat2query.md
+++ b/tidb-cloud/explore-data-with-chat2query.md
@@ -1,35 +1,29 @@
---
-title: Explore Your Data with AI-Powered Chat2Query (beta)
-summary: Learn how to use Chat2Query, an AI-powered SQL editor in the TiDB Cloud console, to maximize your data value.
+title: Explore Your Data with AI-Assisted SQL Editor
+summary: Learn how to use AI-assisted SQL Editor in the TiDB Cloud console to maximize your data value.
---

-# Explore Your Data with AI-Powered Chat2Query (beta)

-TiDB Cloud is powered by AI.
You can use Chat2Query (beta), an AI-powered SQL editor in the [TiDB Cloud console](https://tidbcloud.com/), to maximize your data value.
+You can use the built-in AI-assisted SQL Editor in the [TiDB Cloud console](https://tidbcloud.com/) to maximize your data value.

-In Chat2Query, you can either simply type `--` followed by your instructions to let AI generate SQL queries automatically or write SQL queries manually, and then run SQL queries against databases without a terminal. You can find the query results in tables intuitively and check the query logs easily.
-
-> **Note:**
->
-> Chat2Query is supported for TiDB clusters that are v6.5.0 or later and are hosted on AWS.
->
-> - For [TiDB Serverless](/tidb-cloud/select-cluster-tier.md#tidb-serverless) clusters, Chat2Query is available by default.
-> - For [TiDB Dedicated](/tidb-cloud/select-cluster-tier.md#tidb-dedicated) clusters, Chat2Query is only available upon request. To use Chat2Query on TiDB Dedicated clusters, contact [TiDB Cloud support](/tidb-cloud/tidb-cloud-support.md).
+In SQL Editor, you can either write SQL queries manually or simply press **⌘ + I** on macOS (or **Control + I** on Windows or Linux) to instruct [Chat2Query (beta)](/tidb-cloud/tidb-cloud-glossary.md#chat2query) to generate SQL queries automatically. This enables you to run SQL queries against databases without a local SQL client. You can intuitively view the query results in tables or charts and easily check the query logs.

## Use cases

-The recommended use cases of Chat2Query are as follows:
+The recommended use cases of SQL Editor are as follows:

-- Use the AI capacity of Chat2Query to help you generate complex SQL queries instantly.
-- Test out the MySQL compatibility of TiDB quickly.
-- Explore TiDB SQL features easily.
+- Utilize the AI capabilities of Chat2Query to generate, debug, or rewrite complex SQL queries instantly.
+- Quickly test the MySQL compatibility of TiDB.
+- Easily explore the SQL features in TiDB using your own datasets. -## Limitation +## Limitations -- SQL queries generated by the AI are not 100% accurate and might still need your further tweak. -- The [Chat2Query API](/tidb-cloud/use-chat2query-api.md) is available for [TiDB Serverless](/tidb-cloud/select-cluster-tier.md#tidb-serverless) clusters. To use the Chat2Query API on [TiDB Dedicated](/tidb-cloud/select-cluster-tier.md#tidb-dedicated) clusters, contact [TiDB Cloud support](/tidb-cloud/tidb-cloud-support.md). +- SQL queries generated by the AI might not be 100% accurate, and you might need to refine them. +- SQL Editor is only supported for TiDB clusters that are v6.5.0 or later and hosted on AWS. +- SQL Editor is available by default for [TiDB Serverless](/tidb-cloud/select-cluster-tier.md#tidb-serverless) clusters. To use SQL Editor and Chat2Query on [TiDB Dedicated](/tidb-cloud/select-cluster-tier.md#tidb-dedicated) clusters, contact [TiDB Cloud support](/tidb-cloud/tidb-cloud-support.md). -## Access Chat2Query +## Access SQL Editor 1. Go to the [**Clusters**](https://tidbcloud.com/console/clusters) page of your project. @@ -37,19 +31,19 @@ The recommended use cases of Chat2Query are as follows: > > If you have multiple projects, you can click in the lower-left corner and switch to another project. -2. Click your cluster name, and then click **Chat2Query** in the left navigation pane. +2. Click your cluster name, and then click **SQL Editor** in the left navigation pane. > **Note:** > - > In the following cases, the **Chat2Query** entry is displayed in grey and not clickable. + > In the following cases, the **SQL Editor** entry is displayed in gray and not clickable. > - > - Your TiDB Dedicated cluster is earlier than v6.5.0. To use Chat2Query, you need to contract [TiDB Cloud support](/tidb-cloud/tidb-cloud-support.md) to upgrade your clusters. 
- > - Your TiDB Dedicated cluster is just created and the running environment for Chat2Query is still in preparation. In this case, wait for a few minutes and Chat2Query will be available. + > - Your TiDB Dedicated cluster is earlier than v6.5.0. To use SQL Editor, you need to contact [TiDB Cloud support](/tidb-cloud/tidb-cloud-support.md) to upgrade your clusters. + > - Your TiDB Dedicated cluster is just created and the running environment for SQL Editor is still in preparation. In this case, wait for a few minutes and SQL Editor will be available. > - Your TiDB Dedicated cluster is [paused](/tidb-cloud/pause-or-resume-tidb-cluster.md). ## Enable or disable AI to generate SQL queries -PingCAP takes the privacy and security of users' data as a top priority. The AI capacity of Chat2Query only needs to access database schemas to generate SQL queries, not your data itself. For more information, see [Chat2Query Privacy FAQ](https://www.pingcap.com/privacy-policy/privacy-chat2query). +PingCAP takes the privacy and security of users' data as a top priority. The AI capability of Chat2Query in SQL Editor only needs to access database schemas to generate SQL queries, not your data itself. For more information, see [Chat2Query Privacy FAQ](https://www.pingcap.com/privacy-policy/privacy-chat2query). When you access Chat2Query for the first time, you will be prompted with a dialog about whether to allow PingCAP and OpenAI to use your code snippets to research and improve the services. @@ -63,16 +57,36 @@ After the first-time access, you can still change the AI setting as follows: ## Write and run SQL queries -In Chat2Query, you can write and run SQL queries using your own dataset. +In SQL Editor, you can write and run SQL queries using your own dataset. 1. Write SQL queries. - - If AI is enabled, simply type `--` followed by your instructions to let AI generate SQL queries automatically or write SQL queries manually. + +
+ + For macOS: + + - If AI is enabled, simply press **⌘ + I** followed by your instructions and press **Enter** to let AI generate SQL queries automatically. - For a SQL query generated by AI, you can accept it by pressing Tab and then further edit it if needed, or reject it by pressing Esc. + For a SQL query generated by Chat2Query, click **Accept** to accept the query and continue editing. If the query does not meet your requirements, click **Discard** to reject it. Alternatively, click **Regenerate** to request a new query from Chat2Query. - If AI is disabled, write SQL queries manually. +
+ +
+ + For Windows or Linux: + + - If AI is enabled, simply press **Ctrl + I** followed by your instructions and press **Enter** to let AI generate SQL queries automatically. + + For a SQL query generated by Chat2Query, click **Accept** to accept the query and continue editing. If the query does not meet your requirements, click **Discard** to reject it. Alternatively, click **Regenerate** to request a new query from Chat2Query. + + - If AI is disabled, write SQL queries manually. + +
+
+ 2. Run SQL queries. @@ -103,9 +117,36 @@ In Chat2Query, you can write and run SQL queries using your own dataset. After running the queries, you can see the query logs and results immediately at the bottom of the page. +> **Note:** +> +> The returned result has a size limit of 8 MiB. + +## Rewrite SQL queries using Chat2Query + +In SQL Editor, you can use Chat2Query to rewrite existing SQL queries to optimize performance, fix errors, or meet other specific requirements. + +1. Select the lines of SQL queries you want to rewrite with your cursor. + +2. Invoke Chat2Query for rewriting by using the keyboard shortcut for your operating system: + + - ⌘ + I on macOS + - Control + I on Windows or Linux + + Press **Enter** after providing your instructions to let AI handle the rewrite. + +3. After invoking Chat2Query, you can see the suggested rewrite and the following options: + + - **Accept**: click this to accept the suggested rewrite and continue editing. + - **Discard**: click this if the suggested rewrite does not meet your expectations. + - **Regenerate**: click this to request another rewrite from Chat2Query based on your feedback or additional instructions. + +> **Note:** +> +> Chat2Query uses AI algorithms to suggest optimizations and corrections. It is recommended that you review these suggestions carefully before finalizing the queries. + ## Manage SQL files -In Chat2Query, you can save your SQL queries in different SQL files and manage SQL files as follows: +In SQL Editor, you can save your SQL queries in different SQL files and manage SQL files as follows: - To add a SQL file, click **+** on the **SQL Files** tab. - To rename a SQL file, move your cursor on the filename, click **...** next to the filename, and then select **Rename**. 
@@ -127,7 +168,7 @@ For more information, see [Get started with Chat2Query API](/tidb-cloud/use-chat ## Generate an endpoint from a SQL file -For TiDB clusters, TiDB Cloud provides a [Data Service (beta)](/tidb-cloud/data-service-overview.md) feature that enables you to access TiDB Cloud data via an HTTPS request using a custom API endpoint. In Chat2Query, you can generate an endpoint of Data Service (beta) from a SQL file by taking the following steps: +For TiDB clusters, TiDB Cloud provides a [Data Service (beta)](/tidb-cloud/data-service-overview.md) feature that enables you to access TiDB Cloud data via an HTTPS request using a custom API endpoint. In SQL Editor, you can generate an endpoint of Data Service (beta) from a SQL file by taking the following steps: 1. Move your cursor on the filename, click **...** next to the filename, and then select **Generate endpoint**. 2. In the **Generate endpoint** dialog box, select the Data App you want to generate the endpoint for and enter the endpoint name. @@ -135,15 +176,15 @@ For TiDB clusters, TiDB Cloud provides a [Data Service (beta)](/tidb-cloud/data- For more information, see [Manage an endpoint](/tidb-cloud/data-service-manage-endpoint.md). -## Manage Chat2Query settings +## Manage SQL Editor settings -In Chat2Query, you can change the following settings: +In SQL Editor, you can change the following settings: - The maximum number of rows in query results - Whether to show system database schemas on the **Schemas** tab To change the settings, take the following steps: -1. In the upper-right corner of Chat2Query, click **...** and select **Settings**. +1. In the upper-right corner of **SQL Editor**, click **...** and select **Settings**. 2. Change the settings according to your need. 3. Click **Save**. 
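The Data Service endpoints generated above are invoked over HTTPS. The following is a minimal sketch of calling such an endpoint from a terminal; the region, App ID, endpoint path, and key pair are all placeholders for values from your own Data App, and Data Service authenticates requests with an API key pair via HTTP digest authentication:

```shell
# Hypothetical sketch: call a Data Service endpoint generated from a SQL file.
# Every angle-bracket value below is a placeholder for your own Data App settings.
PUBLIC_KEY='<your-public-key>'
PRIVATE_KEY='<your-private-key>'
ENDPOINT_URL='https://<region>.data.tidbcloud.com/api/v1beta/app/<app-id>/endpoint/<endpoint-path>'

# Data Service uses HTTP digest authentication with the API key pair,
# so the request is sent with curl's --digest flag.
# Print the command instead of sending it, because the values are placeholders.
echo curl --digest --user "${PUBLIC_KEY}:${PRIVATE_KEY}" "${ENDPOINT_URL}"
```

Replace the placeholders with the values shown for your Data App, then run the command without the leading `echo`.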
diff --git a/tidb-cloud/get-started-with-cli.md b/tidb-cloud/get-started-with-cli.md index c5d86ed720727..c1a324099b208 100644 --- a/tidb-cloud/get-started-with-cli.md +++ b/tidb-cloud/get-started-with-cli.md @@ -8,12 +8,16 @@ summary: Learn how to manage TiDB Cloud resources through the TiDB Cloud CLI. TiDB Cloud provides a command-line interface (CLI) [`ticloud`](https://github.com/tidbcloud/tidbcloud-cli) for you to interact with TiDB Cloud from your terminal with a few lines of commands. For example, you can easily perform the following operations using `ticloud`: - Create, delete, and list your clusters. -- Import data from Amazon S3 or local files to your clusters. +- Import data to your clusters. +- Export data from your clusters. + +> **Note:** +> +> TiDB Cloud CLI is in beta. ## Before you begin - Have a TiDB Cloud account. If you do not have one, [sign up for a free trial](https://tidbcloud.com/free-trial). -- [Create a TiDB Cloud API Key](https://docs.pingcap.com/tidbcloud/api/v1beta#section/Authentication/API-Key-Management). ## Installation @@ -81,6 +85,44 @@ Install the MySQL command-line client if you do not have it. You can refer to th +## Quick start + +[TiDB Serverless](/tidb-cloud/select-cluster-tier.md#tidb-serverless) is the best way to get started with TiDB Cloud. In this section, you will learn how to create a TiDB Serverless cluster with TiDB Cloud CLI. + +### Create a user profile or log into TiDB Cloud + +Before creating a cluster with TiDB Cloud CLI, you need to either create a user profile or log into TiDB Cloud. + +- Create a user profile with your [TiDB Cloud API key](https://docs.pingcap.com/tidbcloud/api/v1beta#section/Authentication/API-Key-Management): + + ```shell + ticloud config create + ``` + + > **Warning:** + > + > The profile name **MUST NOT** contain `.`. + +- Log into TiDB Cloud with authentication: + + ```shell + ticloud auth login + ``` + + After successful login, an OAuth token will be assigned to the current profile. 
If no profiles exist, the token will be assigned to a profile named `default`. + +> **Note:** +> +> In the preceding two methods, the TiDB Cloud API key takes precedence over the OAuth token. If both are available, the API key will be used. + +### Create a TiDB Serverless cluster + +To create a TiDB Serverless cluster, enter the following command, and then follow the CLI prompts to provide the required information: + +```shell +ticloud serverless create +``` + ## Use the TiDB Cloud CLI View all commands available: @@ -114,7 +156,7 @@ tiup cloud --help Run commands with `tiup cloud `. For example: ```shell -tiup cloud cluster create +tiup cloud serverless create ``` Update to the latest version by TiUP: @@ -123,40 +165,6 @@ Update to the latest version by TiUP: tiup update cloud ``` -## Quick start - -[TiDB Serverless](/tidb-cloud/select-cluster-tier.md#tidb-serverless) is the best way to get started with TiDB Cloud. In this section, you will learn how to create a TiDB Serverless cluster with TiDB Cloud CLI. - -### Create a user profile - -Before creating a cluster, you need to create a user profile with your TiDB Cloud API Key: - -```shell -ticloud config create -``` - -> **Warning:** -> -> The profile name **MUST NOT** contain `.`. - -### Create a TiDB Serverless cluster - -To create a TiDB Serverless cluster, enter the following command, and then follow the CLI prompts to provide the required information and set the password: - -```shell -ticloud cluster create -``` - -### Connect to the cluster - -After the cluster is created, you can connect to the cluster: - -```shell -ticloud connect -``` - -When you are prompted about whether to use the default user, choose `Y` and enter the password that you set when creating the cluster. - ## What's next Check out [CLI reference](/tidb-cloud/cli-reference.md) to explore more features of TiDB Cloud CLI. 
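The CLI quick start above warns that a profile name must not contain `.`. A small pre-flight check can enforce this before you run `ticloud config create`; the helper function below is a hypothetical sketch, not part of the `ticloud` CLI:

```shell
#!/usr/bin/env bash
# Hypothetical pre-flight check, not part of the ticloud CLI:
# reject profile names that contain "." before running `ticloud config create`.
validate_profile_name() {
  case "$1" in
    "")  echo "error: profile name is empty" >&2; return 1 ;;
    *.*) echo "error: profile name must not contain '.'" >&2; return 1 ;;
    *)   echo "ok: $1" ;;
  esac
}

validate_profile_name "staging"            # accepted
validate_profile_name "my.profile" || true # rejected with an error message
```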
diff --git a/tidb-cloud/import-csv-files.md b/tidb-cloud/import-csv-files.md index b1833a993e70e..7f42775cab6cb 100644 --- a/tidb-cloud/import-csv-files.md +++ b/tidb-cloud/import-csv-files.md @@ -142,7 +142,7 @@ To import the CSV files to TiDB Cloud, take the following steps: 4. You can choose to import into pre-created tables, or import schema and data from the source. - - **Import into pre-created tables** allows you to create tables in TiDB in advance and select the tables that you want to import data into. In this case, you can choose up to 1000 tables to import. To create tables, click **Chat2Query** in the left navigation pane. For more information about how to use Chat2Query, see [Explore Your Data with AI-Powered Chat2Query](/tidb-cloud/explore-data-with-chat2query.md). + - **Import into pre-created tables** allows you to create tables in TiDB in advance and select the tables that you want to import data into. In this case, you can choose up to 1000 tables to import. To create tables, click **SQL Editor** in the left navigation pane. For more information about how to use SQL Editor, see [Explore Your Data with AI-Assisted SQL Editor](/tidb-cloud/explore-data-with-chat2query.md). - **Import schema and data from S3** (This field is visible only for AWS S3) allows you to import SQL scripts that create a table along with its corresponding data stored in S3 directly into TiDB. - **Import schema and data from GCS** (This field is visible only for GCS) allows you to import SQL scripts that create a table along with its corresponding data stored in GCS directly into TiDB. diff --git a/tidb-cloud/import-parquet-files.md b/tidb-cloud/import-parquet-files.md index bf96136f3dda6..6e7aa08171e9e 100644 --- a/tidb-cloud/import-parquet-files.md +++ b/tidb-cloud/import-parquet-files.md @@ -137,7 +137,7 @@ To import the Parquet files to TiDB Cloud, take the following steps: 4. You can choose to import into pre-created tables, or import schema and data from the source. 
- - **Import into pre-created tables** allows you to create tables in TiDB in advance and select the tables that you want to import data into. In this case, you can choose up to 1000 tables to import. You can click **Chat2Query** in the left navigation pane to create tables. For more information about how to use Chat2Query, see [Explore Your Data with AI-Powered Chat2Query](/tidb-cloud/explore-data-with-chat2query.md). + - **Import into pre-created tables** allows you to create tables in TiDB in advance and select the tables that you want to import data into. In this case, you can choose up to 1000 tables to import. You can click **SQL Editor** in the left navigation pane to create tables. For more information about how to use SQL Editor, see [Explore Your Data with AI-Assisted SQL Editor](/tidb-cloud/explore-data-with-chat2query.md). - **Import schema and data from S3** (This field is visible only for AWS S3) allows you to import SQL scripts for creating a table and import corresponding table data stored in S3 into TiDB. - **Import schema and data from GCS** (This field is visible only for GCS) allows you to import SQL scripts that create a table along with its corresponding data stored in GCS directly into TiDB. diff --git a/tidb-cloud/import-sample-data.md b/tidb-cloud/import-sample-data.md index aabf06d9bf15c..1f52a8f3b624f 100644 --- a/tidb-cloud/import-sample-data.md +++ b/tidb-cloud/import-sample-data.md @@ -7,6 +7,9 @@ summary: Learn how to import sample data into TiDB Cloud via UI. This document describes how to import the sample data into TiDB Cloud via the UI. The sample data used is the system data from Capital Bikeshare, released under the Capital Bikeshare Data License Agreement. Before importing the sample data, you need to have one TiDB cluster. + +
+ 1. Open the **Import** page for your target cluster. 1. Log in to the [TiDB Cloud console](https://tidbcloud.com/) and navigate to the [**Clusters**](https://tidbcloud.com/console/clusters) page of your project. @@ -17,12 +20,7 @@ This document describes how to import the sample data into TiDB Cloud via the UI 2. Click the name of your target cluster to go to its overview page, and then click **Import** in the left navigation pane. -2. Configure the source data information. - - -
- - On the **Import** page: +2. Configure the source data information. On the **Import** page: - For a TiDB Dedicated cluster, click **Import Data** in the upper-right corner. - For a TiDB Serverless cluster, click the **import data from S3** link above the upload area. @@ -37,11 +35,27 @@ This document describes how to import the sample data into TiDB Cloud via the UI If the region of the bucket is different from your cluster, confirm the compliance of cross region. Click **Next**. -
+3. You can choose to import into pre-created tables, or import schema and data from the source. When you import sample data, choose **Import schema and data from S3**. + + - **Import into pre-created tables** allows you to create tables in TiDB in advance and select the tables that you want to import data into. In this case, you can choose up to 1000 tables to import. You can click **SQL Editor** in the left navigation pane to create tables. For more information about how to use SQL Editor, see [Explore Your Data with AI-Assisted SQL Editor](/tidb-cloud/explore-data-with-chat2query.md). + - **Import schema and data from S3** allows you to import SQL scripts for creating a table and import corresponding table data stored in S3 into TiDB. + +4. Click **Start Import**. + +
+
+ +1. Open the **Import** page for your target cluster. + + 1. Log in to the [TiDB Cloud console](https://tidbcloud.com/) and navigate to the [**Clusters**](https://tidbcloud.com/console/clusters) page of your project. -
+ > **Tip:** + > + > If you have multiple projects, you can click in the lower-left corner and switch to another project. - If your TiDB cluster is hosted by Google Cloud, click **Import Data** in the upper-right corner, and then fill in the following parameters: + 2. Click the name of your target cluster to go to its overview page, and then click **Import** in the left navigation pane. + +2. Click **Import Data** in the upper-right corner, and then fill in the following parameters: - **Data Format**: select **SQL File**. TiDB Cloud supports importing compressed files in the following formats: `.gzip`, `.gz`, `.zstd`, `.zst` and `.snappy`. If you want to import compressed SQL files, name the files in the `${db_name}.${table_name}.${suffix}.sql.${compress}` format, in which `${suffix}` is optional and can be any integer such as '000001'. For example, if you want to import the `trips.000001.sql.gz` file to the `bikeshare.trips` table, you can rename the file as `bikeshare.trips.000001.sql.gz`. Note that you only need to compress the data files, not the database or table schema files. The Snappy compressed file must be in the [official Snappy format](https://github.com/google/snappy). Other variants of Snappy compression are not supported. - **Bucket gsutil URI**: enter the sample data URI `gs://tidbcloud-samples-us-west1/`. @@ -49,17 +63,16 @@ This document describes how to import the sample data into TiDB Cloud via the UI If the region of the bucket is different from your cluster, confirm the compliance of cross region. Click **Next**. -
- +3. You can choose to import into pre-created tables, or import schema and data from the source. When you import sample data, choose **Import schema and data from GCS**. -3. You can choose to import into pre-created tables, or import schema and data from the source. - - - **Import into pre-created tables** allows you to create tables in TiDB in advance and select the tables that you want to import data into. In this case, you can choose up to 1000 tables to import. You can click **Chat2Query** in the left navigation pane to create tables. For more information about how to use Chat2Query, see [Explore Your Data with AI-Powered Chat2Query](/tidb-cloud/explore-data-with-chat2query.md). - - **Import schema and data from S3** (This field is visible only for AWS S3) allows you to import SQL scripts for creating a table and import corresponding table data stored in S3 into TiDB. - - **Import schema and data from GCS** (This field is visible only for GCS) allows you to import SQL scripts that create a table along with its corresponding data stored in GCS directly into TiDB. + - **Import into pre-created tables** allows you to create tables in TiDB in advance and select the tables that you want to import data into. In this case, you can choose up to 1000 tables to import. You can click **SQL Editor** in the left navigation pane to create tables. For more information about how to use SQL Editor, see [Explore Your Data with AI-Assisted SQL Editor](/tidb-cloud/explore-data-with-chat2query.md). + - **Import schema and data from GCS** allows you to import SQL scripts that create a table along with its corresponding data stored in GCS directly into TiDB. 4. Click **Start Import**. +
+
+ When the data import progress shows **Completed**, you have successfully imported the sample data and the database schema to your database in TiDB Cloud. Once the cluster finishes the data importing process, you will get the sample data in your database. diff --git a/tidb-cloud/integrate-tidbcloud-with-dbt.md b/tidb-cloud/integrate-tidbcloud-with-dbt.md index a2490a8659820..5debd3c0a8c30 100644 --- a/tidb-cloud/integrate-tidbcloud-with-dbt.md +++ b/tidb-cloud/integrate-tidbcloud-with-dbt.md @@ -59,7 +59,7 @@ In this directory: - `dbt_project.yml` is the dbt project configuration file, which holds the project name and database configuration file information. -- The `models` directory contains the project's SQL models and table schemas. Note that the data analyst writes this section. For more information about models, see [SQL models](https://docs.getdbt.com/docs/build/sql-models). +- The `models` directory contains the project’s SQL models and table schemas. Note that the data analyst writes this section. For more information about models, see [SQL models](https://docs.getdbt.com/docs/build/sql-models). - The `seeds` directory stores the CSV files that are dumped by the database export tools. For example, you can [export the TiDB Cloud data](https://docs.pingcap.com/tidbcloud/export-data-from-tidb-cloud) into CSV files through Dumpling. In the `jaffle_shop` project, these CSV files are used as raw data to be processed. @@ -145,7 +145,7 @@ To configure the project, take the following steps: > > This step is optional. If the data for processing is already in the target database, you can skip this step. -Now that you have successfully created and configured the project, it's time to load the CSV data and materialize the CSV as a table in the target database. +Now that you have successfully created and configured the project, it’s time to load the CSV data and materialize the CSV as a table in the target database. 1. 
Load the CSV data and materialize the CSV as a table in the target database. diff --git a/tidb-cloud/integrate-tidbcloud-with-netlify.md b/tidb-cloud/integrate-tidbcloud-with-netlify.md index 85c8cacc98b8d..0f52e3ff72c74 100644 --- a/tidb-cloud/integrate-tidbcloud-with-netlify.md +++ b/tidb-cloud/integrate-tidbcloud-with-netlify.md @@ -141,7 +141,7 @@ For a TiDB Dedicated cluster, you can get the connection string only from the Ti ```shell Adding local .netlify folder to .gitignore file... ? What would you like to do? + Create & configure a new site - ? Team: your_username's team + ? Team: your_username’s team ? Site name (leave blank for a random name; you can change it later): Site Created diff --git a/tidb-cloud/integrate-tidbcloud-with-zapier.md b/tidb-cloud/integrate-tidbcloud-with-zapier.md index 3594a3de4c8e4..57b107dc1548d 100644 --- a/tidb-cloud/integrate-tidbcloud-with-zapier.md +++ b/tidb-cloud/integrate-tidbcloud-with-zapier.md @@ -197,7 +197,7 @@ Zapier triggers can work with a polling API call to check for new data periodica TiDB Cloud triggers provide a polling API call that returns a lot of results. However, most of the results have been seen by Zapier before, that is, most of the results are duplication. -Since we don't want to trigger an action multiple times when an item in your API exists in multiple distinct polls, TiDB Cloud triggers deduplicate the data with the `id` field. +Since we don’t want to trigger an action multiple times when an item in your API exists in multiple distinct polls, TiDB Cloud triggers deduplicate the data with the `id` field. `New Cluster` and `New Table` triggers simply use the `cluster_id` or `table_id` as the `id` field to do the deduplication. You do not need to do anything for the two triggers. @@ -229,7 +229,7 @@ Make sure that your custom query executes in less than 30 seconds. Otherwise, yo 1. Choose `Find Table` action -2. 
In the`set up action` step, tick the `Create TiDB Cloud Table if it doesn't exist yet?` box to enable `find and create`. +2. In the `set up action` step, tick the `Create TiDB Cloud Table if it doesn’t exist yet?` box to enable `find and create`. ![Find and create](/media/tidb-cloud/zapier/zapier-tidbcloud-find-and-create.png) diff --git a/tidb-cloud/limitations-and-quotas.md b/tidb-cloud/limitations-and-quotas.md index 77956a759c0e3..1d96b583a2945 100644 --- a/tidb-cloud/limitations-and-quotas.md +++ b/tidb-cloud/limitations-and-quotas.md @@ -15,7 +15,7 @@ TiDB Cloud limits how many of each kind of component you can create in a [TiDB D | Component | Limit | |:-|:-| -| Number of data replicas | 3 | +| Number of copies for each [data region](/tidb-cloud/tidb-cloud-glossary.md#region) | 3 | | Number of Availability Zones for a cross-zone deployment | 3 | > **Note:** @@ -29,3 +29,7 @@ TiDB Cloud limits how many of each kind of component you can create in a [TiDB D | Maximum number of total TiDB nodes for all clusters in your organization | 10 | | Maximum number of total TiKV nodes for all clusters in your organization | 15 | | Maximum number of total TiFlash nodes for all clusters in your organization | 5 | + +> **Note:** +> +> If any of these limits or quotas present a problem for your organization, please contact [TiDB Cloud Support](/tidb-cloud/tidb-cloud-support.md). 
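The sample-data import steps earlier require compressed data files to follow the `${db_name}.${table_name}.${suffix}.sql.${compress}` naming convention. The following is a minimal local sketch of preparing such a file; the `bikeshare.trips` names come from the sample-data example, and the file content is a stand-in for a real dump:

```shell
#!/usr/bin/env bash
# Sketch: rename and compress a dumped data file so that the import maps it
# to the `bikeshare.trips` table, following the
# ${db_name}.${table_name}.${suffix}.sql.${compress} convention.
set -eu

db="bikeshare"
table="trips"
suffix="000001"  # optional integer sequence number

# Stand-in for a real data file produced by a dump tool.
printf 'INSERT INTO trips VALUES (1);\n' > "trips.${suffix}.sql"

# Only data files are compressed; schema files stay uncompressed.
mv "trips.${suffix}.sql" "${db}.${table}.${suffix}.sql"
gzip "${db}.${table}.${suffix}.sql"

ls "${db}.${table}.${suffix}.sql.gz"
```

Running this produces `bikeshare.trips.000001.sql.gz`, which the importer maps to the `trips` table in the `bikeshare` database.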
diff --git a/tidb-cloud/limited-sql-features.md b/tidb-cloud/limited-sql-features.md index 7f8238f1e28ac..ad9f4b5c4b3a1 100644 --- a/tidb-cloud/limited-sql-features.md +++ b/tidb-cloud/limited-sql-features.md @@ -60,6 +60,7 @@ TiDB Cloud works with almost all workloads that TiDB supports, but there are som | `SHOW PLUGINS` | Supported | Not supported [^8] | | `SHOW PUMP STATUS` | Not supported [^7] | Not supported [^7] | | `SHUTDOWN` | Not supported [^4] | Not supported [^4] | +| `PLAN REPLAYER` | Supported | Supported in a different way[^11] | ## Functions and operators @@ -118,114 +119,110 @@ TiDB Cloud works with almost all workloads that TiDB supports, but there are som | `mysql` | `gc_delete_range_done` | Not supported [^4] | Not supported [^4] | | `mysql` | `opt_rule_blacklist` | Not supported [^4] | Not supported [^4] | | `mysql` | `tidb` | Not supported [^4] | Not supported [^4] | -| `mysql` | `tidb_ttl_job_history` | Supported | Not supported [^9] | -| `mysql` | `tidb_ttl_table_status` | Supported | Not supported [^9] | -| `mysql` | `tidb_ttl_task` | Supported | Not supported [^9] | ## System variables | Variable | TiDB Dedicated | TiDB Serverless | |:-|:-|:-| | `datadir` | No limitation | Not supported [^1] | -| `interactive_timeout` | No limitation | Read-only [^11] | -| `max_allowed_packet` | No limitation | Read-only [^11] | +| `interactive_timeout` | No limitation | Read-only [^10] | +| `max_allowed_packet` | No limitation | Read-only [^10] | | `plugin_dir` | No limitation | Not supported [^8] | | `plugin_load` | No limitation | Not supported [^8] | -| `require_secure_transport` | Not supported [^12] | Read-only [^11] | -| `skip_name_resolve` | No limitation | Read-only [^11] | -| `sql_log_bin` | No limitation | Read-only [^11] | -| `tidb_cdc_write_source` | No limitation | Read-only [^11] | +| `require_secure_transport` | Not supported [^12] | Read-only [^10] | +| `skip_name_resolve` | No limitation | Read-only [^10] | +| `sql_log_bin` | No 
limitation | Read-only [^10] | +| `tidb_cdc_write_source` | No limitation | Read-only [^10] | | `tidb_check_mb4_value_in_utf8` | Not supported [^4] | Not supported [^4] | | `tidb_config` | Not supported [^4] | Not supported [^4] | -| `tidb_ddl_disk_quota` | No limitation | Read-only [^11] | -| `tidb_ddl_enable_fast_reorg` | No limitation | Read-only [^11] | -| `tidb_ddl_error_count_limit` | No limitation | Read-only [^11] | -| `tidb_ddl_flashback_concurrency` | No limitation | Read-only [^11] | -| `tidb_ddl_reorg_batch_size` | No limitation | Read-only [^11] | -| `tidb_ddl_reorg_priority` | No limitation | Read-only [^11] | -| `tidb_ddl_reorg_worker_cnt` | No limitation | Read-only [^11] | -| `tidb_enable_1pc` | No limitation | Read-only [^11] | -| `tidb_enable_async_commit` | No limitation | Read-only [^11] | -| `tidb_enable_auto_analyze` | No limitation | Read-only [^11] | +| `tidb_ddl_disk_quota` | No limitation | Read-only [^10] | +| `tidb_ddl_enable_fast_reorg` | No limitation | Read-only [^10] | +| `tidb_ddl_error_count_limit` | No limitation | Read-only [^10] | +| `tidb_ddl_flashback_concurrency` | No limitation | Read-only [^10] | +| `tidb_ddl_reorg_batch_size` | No limitation | Read-only [^10] | +| `tidb_ddl_reorg_priority` | No limitation | Read-only [^10] | +| `tidb_ddl_reorg_worker_cnt` | No limitation | Read-only [^10] | +| `tidb_enable_1pc` | No limitation | Read-only [^10] | +| `tidb_enable_async_commit` | No limitation | Read-only [^10] | +| `tidb_enable_auto_analyze` | No limitation | Read-only [^10] | | `tidb_enable_collect_execution_info` | Not supported [^4] | Not supported [^4] | -| `tidb_enable_ddl` | No limitation | Read-only [^11] | -| `tidb_enable_gc_aware_memory_track` | No limitation | Read-only [^11] | -| `tidb_enable_gogc_tuner` | No limitation | Read-only [^11] | -| `tidb_enable_local_txn` | No limitation | Read-only [^11] | -| `tidb_enable_resource_control` | No limitation | Read-only [^11] | +| `tidb_enable_ddl` | No limitation | 
Read-only [^10] | +| `tidb_enable_gc_aware_memory_track` | No limitation | Read-only [^10] | +| `tidb_enable_gogc_tuner` | No limitation | Read-only [^10] | +| `tidb_enable_local_txn` | No limitation | Read-only [^10] | +| `tidb_enable_resource_control` | No limitation | Read-only [^10] | | `tidb_enable_slow_log` | Not supported [^4] | Not supported [^4] | -| `tidb_enable_stmt_summary` | No limitation | Read-only [^11] | +| `tidb_enable_stmt_summary` | No limitation | Read-only [^10] | | `tidb_enable_telemetry` | Not supported [^4] | Not supported [^4] | -| `tidb_enable_top_sql` | No limitation | Read-only [^11] | -| `tidb_enable_tso_follower_proxy` | No limitation | Read-only [^11] | +| `tidb_enable_top_sql` | No limitation | Read-only [^10] | +| `tidb_enable_tso_follower_proxy` | No limitation | Read-only [^10] | | `tidb_expensive_query_time_threshold` | Not supported [^4] | Not supported [^4] | | `tidb_force_priority` | Not supported [^4] | Not supported [^4] | -| `tidb_gc_concurrency` | No limitation | Read-only [^11] | -| `tidb_gc_enable` | No limitation | Read-only [^11] | -| `tidb_gc_life_time` | No limitation | Read-only [^11] | -| `tidb_gc_max_wait_time` | No limitation | Read-only [^11] | -| `tidb_gc_run_interval` | No limitation | Read-only [^11] | -| `tidb_gc_scan_lock_mode` | No limitation | Read-only [^11] | +| `tidb_gc_concurrency` | No limitation | Read-only [^10] | +| `tidb_gc_enable` | No limitation | Read-only [^10] | +| `tidb_gc_life_time` | No limitation | Read-only [^10] | +| `tidb_gc_max_wait_time` | No limitation | Read-only [^10] | +| `tidb_gc_run_interval` | No limitation | Read-only [^10] | +| `tidb_gc_scan_lock_mode` | No limitation | Read-only [^10] | | `tidb_general_log` | Not supported [^4] | Not supported [^4] | -| `tidb_generate_binary_plan` | No limitation | Read-only [^11] | -| `tidb_gogc_tuner_threshold` | No limitation | Read-only [^11] | -| `tidb_guarantee_linearizability` | No limitation | Read-only [^11] | -| 
`tidb_isolation_read_engines` | No limitation | Read-only [^11] | -| `tidb_log_file_max_days` | No limitation | Read-only [^11] | +| `tidb_generate_binary_plan` | No limitation | Read-only [^10] | +| `tidb_gogc_tuner_threshold` | No limitation | Read-only [^10] | +| `tidb_guarantee_linearizability` | No limitation | Read-only [^10] | +| `tidb_isolation_read_engines` | No limitation | Read-only [^10] | +| `tidb_log_file_max_days` | No limitation | Read-only [^10] | | `tidb_memory_usage_alarm_ratio` | Not supported [^4] | Not supported [^4] | | `tidb_metric_query_range_duration` | Not supported [^4] | Not supported [^4] | | `tidb_metric_query_step` | Not supported [^4] | Not supported [^4] | | `tidb_opt_write_row_id` | Not supported [^4] | Not supported [^4] | -| `tidb_placement_mode` | No limitation | Read-only [^11] | +| `tidb_placement_mode` | No limitation | Read-only [^10] | | `tidb_pprof_sql_cpu` | Not supported [^4] | Not supported [^4] | | `tidb_record_plan_in_slow_log` | Not supported [^4] | Not supported [^4] | | `tidb_redact_log` | Not supported [^4] | Not supported [^4] | | `tidb_restricted_read_only` | Not supported [^4] | Not supported [^4] | | `tidb_row_format_version` | Not supported [^4] | Not supported [^4] | -| `tidb_scatter_region` | No limitation | Read-only [^11] | -| `tidb_server_memory_limit` | No limitation | Read-only [^11] | -| `tidb_server_memory_limit_gc_trigger` | No limitation | Read-only [^11] | -| `tidb_server_memory_limit_sess_min_size` | No limitation | Read-only [^11] | -| `tidb_simplified_metrics` | No limitation | Read-only [^11] | +| `tidb_scatter_region` | No limitation | Read-only [^10] | +| `tidb_server_memory_limit` | No limitation | Read-only [^10] | +| `tidb_server_memory_limit_gc_trigger` | No limitation | Read-only [^10] | +| `tidb_server_memory_limit_sess_min_size` | No limitation | Read-only [^10] | +| `tidb_simplified_metrics` | No limitation | Read-only [^10] | | `tidb_slow_query_file` | Not supported [^4] | Not 
supported [^4] | | `tidb_slow_log_threshold` | Not supported [^4] | Not supported [^4] | | `tidb_slow_txn_log_threshold` | Not supported [^4] | Not supported [^4] | -| `tidb_stats_load_sync_wait` | No limitation | Read-only [^11] | -| `tidb_stmt_summary_history_size` | No limitation | Read-only [^11] | -| `tidb_stmt_summary_internal_query` | No limitation | Read-only [^11] | -| `tidb_stmt_summary_max_sql_length` | No limitation | Read-only [^11] | -| `tidb_stmt_summary_max_stmt_count` | No limitation | Read-only [^11] | -| `tidb_stmt_summary_refresh_interval` | No limitation | Read-only [^11] | -| `tidb_sysproc_scan_concurrency` | No limitation | Read-only [^11] | +| `tidb_stats_load_sync_wait` | No limitation | Read-only [^10] | +| `tidb_stmt_summary_history_size` | No limitation | Read-only [^10] | +| `tidb_stmt_summary_internal_query` | No limitation | Read-only [^10] | +| `tidb_stmt_summary_max_sql_length` | No limitation | Read-only [^10] | +| `tidb_stmt_summary_max_stmt_count` | No limitation | Read-only [^10] | +| `tidb_stmt_summary_refresh_interval` | No limitation | Read-only [^10] | +| `tidb_sysproc_scan_concurrency` | No limitation | Read-only [^10] | | `tidb_top_sql_max_meta_count` | Not supported [^4] | Not supported [^4] | | `tidb_top_sql_max_time_series_count` | Not supported [^4] | Not supported [^4] | -| `tidb_tso_client_batch_max_wait_time` | No limitation | Read-only [^11] | -| `tidb_ttl_delete_batch_size` | No limitation | Read-only [^9] | -| `tidb_ttl_delete_rate_limit` | No limitation | Read-only [^9] | -| `tidb_ttl_delete_worker_count` | No limitation | Read-only [^9] | -| `tidb_ttl_job_enable` | No limitation | Read-only [^9] | -| `tidb_ttl_job_schedule_window_end_time` | No limitation | Read-only [^9] | -| `tidb_ttl_job_schedule_window_start_time` | No limitation | Read-only [^9] | -| `tidb_ttl_running_tasks` | No limitation | Read-only [^9] | -| `tidb_ttl_scan_batch_size` | No limitation | Read-only [^9] | -| `tidb_ttl_scan_worker_count` | 
No limitation | Read-only [^9] | -| `tidb_txn_mode` | No limitation | Read-only [^11] | -| `tidb_wait_split_region_finish` | No limitation | Read-only [^11] | -| `tidb_wait_split_region_timeout` | No limitation | Read-only [^11] | -| `txn_scope` | No limitation | Read-only [^11] | -| `validate_password.enable` | No limitation | Always enabled [^10] | -| `validate_password.length` | No limitation | At least `8` [^10] | -| `validate_password.mixed_case_count` | No limitation | At least `1` [^10] | -| `validate_password.number_count` | No limitation | At least `1` [^10] | -| `validate_password.policy` | No limitation | Can only be `MEDIUM` or `STRONG` [^10] | -| `validate_password.special_char_count` | No limitation | At least `1` [^10] | -| `wait_timeout` | No limitation | Read-only [^11] | +| `tidb_tso_client_batch_max_wait_time` | No limitation | Read-only [^10] | +| `tidb_ttl_delete_batch_size` | No limitation | Read-only [^10] | +| `tidb_ttl_delete_rate_limit` | No limitation | Read-only [^10] | +| `tidb_ttl_delete_worker_count` | No limitation | Read-only [^10] | +| `tidb_ttl_job_schedule_window_end_time` | No limitation | Read-only [^10] | +| `tidb_ttl_job_schedule_window_start_time` | No limitation | Read-only [^10] | +| `tidb_ttl_running_tasks` | No limitation | Read-only [^10] | +| `tidb_ttl_scan_batch_size` | No limitation | Read-only [^10] | +| `tidb_ttl_scan_worker_count` | No limitation | Read-only [^10] | +| `tidb_txn_mode` | No limitation | Read-only [^10] | +| `tidb_wait_split_region_finish` | No limitation | Read-only [^10] | +| `tidb_wait_split_region_timeout` | No limitation | Read-only [^10] | +| `txn_scope` | No limitation | Read-only [^10] | +| `validate_password.enable` | No limitation | Always enabled [^9] | +| `validate_password.length` | No limitation | At least `8` [^9] | +| `validate_password.mixed_case_count` | No limitation | At least `1` [^9] | +| `validate_password.number_count` | No limitation | At least `1` [^9] | +| 
`validate_password.policy` | No limitation | Can only be `MEDIUM` or `STRONG` [^9] | +| `validate_password.special_char_count` | No limitation | At least `1` [^9] | +| `wait_timeout` | No limitation | Read-only [^10] | [^1]: Configuring data placement is not supported on TiDB Serverless. [^2]: Configuring resource groups is not supported on TiDB Serverless. -[^3]: To perform [Back up and Restore](/tidb-cloud/backup-and-restore-serverless.md) operations on TiDB Serverless, you can use the TiDB Cloud console instead. +[^3]: To perform [Back up and Restore](/tidb-cloud/backup-and-restore-serverless.md) operations on TiDB Serverless, you can use the TiDB Cloud console instead. [^4]: The feature is unavailable in [Security Enhanced Mode (SEM)](/system-variables.md#tidb_enable_enhanced_security). @@ -237,10 +234,10 @@ TiDB Cloud works with almost all workloads that TiDB supports, but there are som [^8]: Plugin is not supported on TiDB Serverless. -[^9]: [Time to live (TTL)](/time-to-live.md) is currently unavailable on TiDB Serverless. +[^9]: TiDB Serverless enforces a strong password policy. -[^10]: TiDB Serverless enforces strong password policy. +[^10]: The variable is read-only on TiDB Serverless. -[^11]: The variable is read-only on TiDB Serverless. +[^11]: TiDB Serverless does not support downloading the file exported by `PLAN REPLAYER` through `${tidb-server-status-port}` as in the [example](https://docs.pingcap.com/tidb/stable/sql-plan-replayer#examples-of-exporting-cluster-information). Instead, TiDB Serverless generates a [presigned URL](https://docs.aws.amazon.com/AmazonS3/latest/userguide/ShareObjectPreSignedURL.html) for you to download the file. Note that this URL remains valid for 10 hours after generation. [^12]: Not supported. Enabling `require_secure_transport` for TiDB Dedicated clusters will result in SQL client connection failures. 
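For instance, the read-only behavior described in footnote [^10] can be observed from any SQL client connected to a TiDB Serverless cluster. A minimal sketch, using one variable from the table above (the exact error message may vary by version):

```sql
-- On TiDB Serverless, reading a read-only system variable works as usual:
SELECT @@global.tidb_gc_life_time;

-- Attempting to modify it is expected to fail with an error,
-- because the variable is read-only on TiDB Serverless:
SET GLOBAL tidb_gc_life_time = '30m';
```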
diff --git a/tidb-cloud/manage-serverless-spend-limit.md b/tidb-cloud/manage-serverless-spend-limit.md index 1a3e130056d03..8f63f61bf3901 100644 --- a/tidb-cloud/manage-serverless-spend-limit.md +++ b/tidb-cloud/manage-serverless-spend-limit.md @@ -1,26 +1,27 @@ --- -title: Manage Spending Limit for TiDB Serverless clusters -summary: Learn how to manage spending limit for your TiDB Serverless clusters. +title: Manage Spending Limit for TiDB Serverless Scalable Clusters +summary: Learn how to manage spending limit for your TiDB Serverless scalable clusters. --- -# Manage Spending Limit for TiDB Serverless Clusters +# Manage Spending Limit for TiDB Serverless Scalable Clusters > **Note:** > -> The spending limit is only applicable to TiDB Serverless clusters. +> The spending limit is only applicable to TiDB Serverless [scalable clusters](/tidb-cloud/select-cluster-tier.md#scalable-cluster-plan). -Spending limit refers to the maximum amount of money that you are willing to spend on a particular workload in a month. It is a cost-control mechanism that allows you to set a budget for your TiDB Serverless clusters. +Spending limit refers to the maximum amount of money that you are willing to spend on a particular workload in a month. It is a cost-control mechanism that allows you to set a budget for your TiDB Serverless scalable clusters. -For each organization in TiDB Cloud, you can create a maximum of five TiDB Serverless clusters by default. To create more TiDB Serverless clusters, you need to add a credit card and set a spending limit for the usage. But if you delete some of your previous clusters before creating more, the new cluster can still be created without a credit card. +For each organization in TiDB Cloud, you can create a maximum of five [free clusters](/tidb-cloud/select-cluster-tier.md#free-cluster-plan) by default. To create more TiDB Serverless clusters, you need to add a credit card and create scalable clusters for the usage. 
But if you delete some of your previous clusters before creating more, the new cluster can still be created without a credit card. ## Usage quota -For the first five TiDB Serverless clusters in your organization, TiDB Cloud provides a free usage quota for each of them as follows: +For the first five TiDB Serverless clusters in your organization, whether they are free or scalable, TiDB Cloud provides a free usage quota for each of them as follows: - Row-based storage: 5 GiB +- Columnar storage: 5 GiB - [Request Units (RUs)](/tidb-cloud/tidb-cloud-glossary.md#request-unit): 50 million RUs per month -Once the free quota of a cluster is reached, the read and write operations on this cluster will be throttled until you [increase the quota](#update-spending-limit) or the usage is reset upon the start of a new month. For example, when the storage of a cluster exceeds 5 GiB, the maximum size limit of a single transaction is reduced from 10 MiB to 1 MiB. +Once a cluster reaches its usage quota, it immediately denies any new connection attempts until you [increase the quota](#update-spending-limit) or the usage is reset upon the start of a new month. Existing connections established before reaching the quota will remain active but will experience throttling. For example, when the row-based storage of a free cluster exceeds 5 GiB, the cluster automatically restricts any new connection attempts. To learn more about the RU consumption of different resources (including read, write, SQL CPU, and network egress), the pricing details, and how throttling works, see [TiDB Serverless Pricing Details](https://www.pingcap.com/tidb-cloud-serverless-pricing-details). 
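To keep an eye on the row-based storage part of the free quota from SQL, one rough approach is to sum table sizes from `INFORMATION_SCHEMA`. This is a sketch only; the authoritative usage figure is the one shown in the TiDB Cloud console:

```sql
-- Approximate row-based storage usage across user databases, in GiB,
-- for comparison against the 5 GiB free quota:
SELECT ROUND(SUM(DATA_LENGTH + INDEX_LENGTH) / POW(1024, 3), 2) AS approx_gib
FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_SCHEMA NOT IN ('mysql', 'INFORMATION_SCHEMA', 'PERFORMANCE_SCHEMA', 'METRICS_SCHEMA');
```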
@@ -28,7 +29,9 @@ If you want to create a TiDB Serverless cluster with an additional quota, you ca ## Update spending limit -For an existing TiDB Serverless cluster, you can increase the usage quota by updating the spending limit as follows: +For a TiDB Serverless free cluster, you can increase the usage quota by upgrading it to a scalable cluster. For an existing scalable cluster, you can adjust the monthly spending limit directly. + +To update the spending limit for a TiDB Serverless cluster, perform the following steps: 1. On the [**Clusters**](https://tidbcloud.com/console/clusters) page of your project, click the name of your target cluster to go to its overview page. @@ -36,9 +39,9 @@ For an existing TiDB Serverless cluster, you can increase the usage quota by upd > > If you have multiple projects, you can click in the lower-left corner and switch to another project. -2. In the **Usage This Month** area, click **Get more usage quota**. +2. In the **Usage This Month** area, click **Upgrade to Scalable Cluster**. - If you have previously updated the spending limit for a cluster and want to increase it further, click **Edit**. + To adjust the spending limit for an existing scalable cluster, click **Edit**. 3. Edit the monthly spending limit as needed. If you have not added a payment method, you will need to add a credit card after editing the limit. -4. Click **Update Spending Limit**. +4. Click **Update Cluster Plan**. diff --git a/tidb-cloud/manage-user-access.md b/tidb-cloud/manage-user-access.md index 47b651404122d..295412ed245e6 100644 --- a/tidb-cloud/manage-user-access.md +++ b/tidb-cloud/manage-user-access.md @@ -78,7 +78,7 @@ At the organization level, TiDB Cloud defines four roles, in which `Organization | Invite users to or remove users from an organization, and edit organization roles of users. | ✅ | ❌ | ❌ | ❌ | | All the permissions of `Project Owner` for all projects in the organization. 
| ✅ | ❌ | ❌ | ❌ | | Create projects with Customer-Managed Encryption Key (CMEK) enabled | ✅ | ❌ | ❌ | ❌ | -| View bills and edit payment information for the organization. | ✅ | ✅ | ❌ | ❌ | +| View bills, use [cost explorer](/tidb-cloud/tidb-cloud-billing.md#cost-explorer), and edit payment information for the organization. | ✅ | ✅ | ❌ | ❌ | | Manage TiDB Cloud [console audit logging](/tidb-cloud/tidb-cloud-console-auditing.md) for the organization. | ✅ | ❌ | ✅ | ❌ | | View users in the organization and projects in which the member belong to. | ✅ | ✅ | ✅ | ✅ | @@ -108,8 +108,8 @@ At the project level, TiDB Cloud defines three roles, in which `Project Owner` c | Manage cluster data such as data import, data backup and restore, and data migration. | ✅ | ✅ | ❌ | | Manage [Data Service](/tidb-cloud/data-service-overview.md) for data read-only operations such as using or creating endpoints to read data. | ✅ | ✅ | ✅ | | Manage [Data Service](/tidb-cloud/data-service-overview.md) for data read and write operations. | ✅ | ✅ | ❌ | -| View cluster data using [Chat2Query](/tidb-cloud/explore-data-with-chat2query.md). | ✅ | ✅ | ✅ | -| Modify and delete cluster data using [Chat2Query](/tidb-cloud/explore-data-with-chat2query.md). | ✅ | ✅ | ❌ | +| View cluster data using [SQL Editor](/tidb-cloud/explore-data-with-chat2query.md). | ✅ | ✅ | ✅ | +| Modify and delete cluster data using [SQL Editor](/tidb-cloud/explore-data-with-chat2query.md). | ✅ | ✅ | ❌ | | View clusters in the project, view cluster backup records, and manage [changefeeds](/tidb-cloud/changefeed-overview.md). | ✅ | ✅ | ✅ | ## Manage organization access @@ -140,11 +140,9 @@ To change the local timezone setting, take the following steps: 2. Click **Organization Settings**. The organization settings page is displayed. -3. Click the **Time Zone** tab. +3. In the **Time Zone** section, select your time zone from the drop-down list. -4. Click the drop-down list and select your time zone. - -5. Click **Save**. +4. 
Click **Update**. ### Invite an organization member @@ -160,7 +158,7 @@ To invite a member to an organization, take the following steps: 2. Click **Organization Settings**. The organization settings page is displayed. -3. Click the **User Management** tab, and then select **By Organization**. +3. Click the **Users** tab in the left navigation pane, and then select **By Organization**. 4. Click **Invite**. @@ -191,7 +189,7 @@ To modify the organization role of a member, take the following steps: 2. Click **Organization Settings**. The organization settings page is displayed. -3. Click the **User Management** tab, and then select **By Organization**. +3. Click the **Users** tab in the left navigation pane, and then select **By Organization**. 4. Click the role of the target member, and then modify the role. @@ -209,7 +207,7 @@ To remove a member from an organization, take the following steps: 2. Click **Organization Settings**. The organization settings page is displayed. -3. Click the **User Management** tab, and then select **By Organization**. +3. Click the **Users** tab in the left navigation pane, and then select **By Organization**. 4. Click **Delete** in the user row that you want to delete. @@ -221,7 +219,7 @@ To check which project you belong to, take the following steps: 1. Click in the lower-left corner of the TiDB Cloud console. -2. Click **Organization Settings**. The **Projects** tab is displayed by default. +2. Click **Organization Settings**, and then click the **Projects** tab in the left navigation pane. The **Projects** tab is displayed. > **Tip:** > @@ -239,7 +237,7 @@ To create a new project, take the following steps: 1. Click in the lower-left corner of the TiDB Cloud console. -2. Click **Organization Settings**. The **Projects** tab is displayed by default. +2. Click **Organization Settings**, and then click the **Projects** tab in the left navigation pane. The **Projects** tab is displayed. 3. Click **Create New Project**. 
@@ -255,7 +253,7 @@ To rename a project, take the following steps: 1. Click in the lower-left corner of the TiDB Cloud console. -2. Click **Organization Settings**. The **Projects** tab is displayed by default. +2. Click **Organization Settings**, and then click the **Projects** tab in the left navigation pane. The **Projects** tab is displayed. 3. In the row of your project to be renamed, click **Rename**. diff --git a/tidb-cloud/managed-service-provider-customer.md b/tidb-cloud/managed-service-provider-customer.md new file mode 100644 index 0000000000000..57e00c7bf4229 --- /dev/null +++ b/tidb-cloud/managed-service-provider-customer.md @@ -0,0 +1,43 @@ +--- +title: Managed Service Provider Customer +summary: Learn how to become a Managed Service Provider (MSP) customer. +--- + +# Managed Service Provider Customer + +A Managed Service Provider (MSP) customer is a customer who uses the TiDB Cloud services provided by a Managed Service Provider. + +Compared with direct TiDB Cloud customers, there are several differences in sign-up and invoice payment: + +- The MSP customer needs to sign up for a TiDB Cloud account from the dedicated sign-up page provided by the MSP. +- The MSP customer pays invoices through the MSP channel, instead of paying directly to PingCAP. + +Other daily operations on the TiDB Cloud console are the same for both direct TiDB Cloud customers and MSP customers. + +This document describes how to become an MSP customer and how to check historical and future bills as an MSP customer. + +## Create a new MSP customer account + +To create a new MSP customer account, visit the MSP dedicated sign-up page. Each MSP has a unique dedicated sign-up page. Contact your MSP to get the URL of the sign-up page. + +## Migrate from a direct TiDB Cloud account to an MSP customer account + +> **Tip:** +> +> Direct customers are the end customers who purchase TiDB Cloud and pay invoices directly from PingCAP. 
+ +If you are currently a direct customer with a TiDB Cloud account, you can ask your MSP to migrate your account to an MSP customer account. + +The migration will take effect on the 1st of a future month. Discuss with your MSP to determine the specific effective date. + +On the effective day of migration, you will receive an email notification. + +## Check your future bills + +Once you successfully become an MSP customer, you will pay invoices through your MSP. Ask your MSP where you can check your bills and invoices. + +PingCAP does not send any bills or invoices to MSP customers. + +## Check your historical bills + +If you have migrated from a direct TiDB Cloud account to an MSP customer account, you can view your historical bills prior to the migration by visiting **Billing** > **Bills** > **History** in the TiDB Cloud console. diff --git a/tidb-cloud/managed-service-provider.md b/tidb-cloud/managed-service-provider.md new file mode 100644 index 0000000000000..e08f520d92fe8 --- /dev/null +++ b/tidb-cloud/managed-service-provider.md @@ -0,0 +1,44 @@ +--- +title: Managed Service Provider +summary: Learn how to become a Managed Service Provider (MSP). +--- + +# Managed Service Provider + +The Managed Service Partner Program aims to establish and foster a strong partnership between PingCAP and our partners to benefit our customers. + +A managed service provider (MSP) is a partner who resells TiDB Cloud and provides value-added services, including but not limited to TiDB Cloud organization management, billing services, and technical support. + +Benefits of becoming a managed service provider include: + +- Discounts and incentive programs +- Enablement training +- Increased visibility through certification +- Joint marketing opportunities + +## Become an MSP of PingCAP + +If you are interested in the MSP program and would like to join as a partner, [contact sales](https://www.pingcap.com/partners/become-a-partner/) to enroll. 
Please provide the following information: + +- Company name +- Company contact email +- Company official website URL +- Company logo (One SVG file for light mode and one SVG file for dark mode. A horizontal logo with 256 x 48 pixels is preferred) + +The information above will be used to generate an exclusive sign-up URL and page with your company logo for your customers. + +We will carefully evaluate your request and get back to you soon. + +## Manage daily tasks for MSP customers + +Once you are approved as a PingCAP MSP, you will receive an API key for the [MSP Management API](https://docs.pingcap.com/tidbcloud/api/v1beta1/msp). + +You can use the MSP management API to manage daily tasks: + +- Query the MSP monthly bill for a specific month +- Query credits applied to an MSP +- Query discounts applied to an MSP +- Query the monthly bill for a specific MSP customer +- Create a new signup URL for an MSP customer +- List all MSP customers +- Retrieve MSP customer information by the customer organization ID diff --git a/tidb-cloud/migrate-from-mysql-using-data-migration.md b/tidb-cloud/migrate-from-mysql-using-data-migration.md index 44a88b8998c19..aeab9ffdc9bcf 100644 --- a/tidb-cloud/migrate-from-mysql-using-data-migration.md +++ b/tidb-cloud/migrate-from-mysql-using-data-migration.md @@ -93,12 +93,11 @@ The username you use for the downstream TiDB Cloud cluster must have the followi | `ALTER` | Tables | | `DROP` | Databases, Tables | | `INDEX` | Tables | -| `TRUNCATE` | Tables | For example, you can execute the following `GRANT` statement to grant corresponding privileges: ```sql -GRANT CREATE,SELECT,INSERT,UPDATE,DELETE,ALTER,TRUNCATE,DROP,INDEX ON *.* TO 'your_user'@'your_IP_address_of_host' +GRANT CREATE,SELECT,INSERT,UPDATE,DELETE,ALTER,DROP,INDEX ON *.* TO 'your_user'@'your_IP_address_of_host' ``` To quickly test a migration job, you can use the `root` account of the TiDB Cloud cluster. 
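Before creating the migration job, it can be worth confirming that the downstream user actually holds the privileges listed above. A quick check, using the same placeholder account as the `GRANT` example:

```sql
-- List the privileges currently granted to the migration user:
SHOW GRANTS FOR 'your_user'@'your_IP_address_of_host';
```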
@@ -120,7 +119,7 @@ If your MySQL service is in an AWS VPC, take the following steps: 2. Modify the inbound rules of the security group that the MySQL service is associated with. - You must add [the CIDR of the region where your TiDB Cloud cluster is located](/tidb-cloud/set-up-vpc-peering-connections.md#prerequisite-set-a-project-cidr) to the inbound rules. Doing so allows the traffic to flow from your TiDB cluster to the MySQL instance. + You must add [the CIDR of the region where your TiDB Cloud cluster is located](/tidb-cloud/set-up-vpc-peering-connections.md#prerequisite-set-a-cidr-for-a-region) to the inbound rules. Doing so allows the traffic to flow from your TiDB cluster to the MySQL instance. 3. If the MySQL URL contains a DNS hostname, you need to allow TiDB Cloud to be able to resolve the hostname of the MySQL service. @@ -140,7 +139,7 @@ If your MySQL service is in a Google Cloud VPC, take the following steps: 3. Modify the ingress firewall rules of the VPC where MySQL is located. - You must add [the CIDR of the region where your TiDB Cloud cluster is located](/tidb-cloud/set-up-vpc-peering-connections.md#prerequisite-set-a-project-cidr) to the ingress firewall rules. This allows the traffic to flow from your TiDB cluster to the MySQL endpoint. + You must add [the CIDR of the region where your TiDB Cloud cluster is located](/tidb-cloud/set-up-vpc-peering-connections.md#prerequisite-set-a-cidr-for-a-region) to the ingress firewall rules. This allows the traffic to flow from your TiDB cluster to the MySQL endpoint. @@ -201,7 +200,7 @@ In the **Choose the objects to be migrated** step, you can choose existing data To migrate data to TiDB Cloud once and for all, choose both **Existing data migration** and **Incremental data migration**, which ensures data consistency between the source and target databases. -You can use **physical mode** or **logical mode** to migrate **existing data**. 
+You can use **physical mode** or **logical mode** to migrate **existing data** and **incremental data**. - The default mode is **logical mode**. This mode exports data from upstream databases as SQL statements, and then executes them on TiDB. In this mode, the target tables before migration can be either empty or non-empty. But the performance is slower than physical mode. @@ -227,7 +226,7 @@ Physical mode exports the upstream data as fast as possible, so [different speci To migrate only existing data of the source database to TiDB Cloud, choose **Existing data migration**. -You can choose to use physical mode or logical mode to migrate existing data. For more information, see [Migrate existing data and incremental data](#migrate-existing-data-and-incremental-data). +You can only use logical mode to migrate existing data. For more information, see [Migrate existing data and incremental data](#migrate-existing-data-and-incremental-data). ### Migrate only incremental data diff --git a/tidb-cloud/monitor-built-in-alerting.md b/tidb-cloud/monitor-built-in-alerting.md index 5944315f63d3c..6038ac4f231b1 100644 --- a/tidb-cloud/monitor-built-in-alerting.md +++ b/tidb-cloud/monitor-built-in-alerting.md @@ -109,6 +109,8 @@ The following table provides the TiDB Cloud built-in alert conditions and the co ### Changefeed alerts -| Condition | Recommended Action | -|:--- |:--- | -| Changefeed processor checkpoint delay more than 600 seconds | Check if the downstream system and network configuration are functioning normally, and rule out the possibility of an indexed table. 
| +| Condition | Recommended Action | +|:--------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| +| The changefeed latency exceeds 600 seconds. | Check the changefeed status on the **Changefeed** page and **Changefeed Detail** page of the TiDB Cloud console, where you can find some error messages to help diagnose this issue.
Possible reasons that can trigger this alert include:
  • The overall traffic in the upstream has increased, causing the existing changefeed specification to be insufficient to handle it. If the traffic increase is temporary, the changefeed latency will automatically recover after the traffic returns to normal. If the traffic increase is continuous, you need to scale up the changefeed.
  • The downstream or network is abnormal. In this case, resolve this abnormality first.
  • Tables lack indexes if the downstream is RDS, which might cause low write performance and high latency. In this case, you need to add the necessary indexes to the upstream or downstream.
If the problem cannot be fixed from your side, you can contact [TiDB Cloud Support](/tidb-cloud/tidb-cloud-support.md) for further assistance. | +| The changefeed status is `FAILED`. | Check the changefeed status on the **Changefeed** page and **Changefeed Detail** page of the TiDB Cloud console, where you can find some error messages to help diagnose this issue.
If the problem cannot be fixed from your side, you can contact [TiDB Cloud Support](/tidb-cloud/tidb-cloud-support.md) for further assistance. | +| The changefeed status is `WARNING`. | Check the changefeed status on the **Changefeed** page and **Changefeed Detail** page of the TiDB Cloud console, where you can find some error messages to help diagnose this issue.
If the problem cannot be fixed from your side, you can contact [TiDB Cloud Support](/tidb-cloud/tidb-cloud-support.md) for further assistance. | diff --git a/tidb-cloud/monitor-prometheus-and-grafana-integration.md b/tidb-cloud/monitor-prometheus-and-grafana-integration.md index 5839b6200a8d1..151b016186768 100644 --- a/tidb-cloud/monitor-prometheus-and-grafana-integration.md +++ b/tidb-cloud/monitor-prometheus-and-grafana-integration.md @@ -25,20 +25,20 @@ This document describes how to configure your Prometheus service to read key met ### Step 1. Get a scrape_config file for Prometheus -Before configuring your Prometheus service to read metrics of TiDB Cloud, you need to generate a scrape_config YAML file in TiDB Cloud first. The scrape_config file contains a unique bearer token that allows the Prometheus service to monitor any database clusters in the current project. +Before configuring your Prometheus service to read metrics of TiDB Cloud, you need to generate a `scrape_config` YAML file in TiDB Cloud first. The `scrape_config` file contains a unique bearer token that allows the Prometheus service to monitor any database clusters in the current project. -To get the scrape_config file for Prometheus, do the following: +To get the `scrape_config` file for Prometheus, do the following: 1. Log in to the [TiDB Cloud console](https://tidbcloud.com). 2. Click in the lower-left corner, switch to the target project if you have multiple projects, and then click **Project Settings**. 3. On the **Project Settings** page of your project, click **Integrations** in the left navigation pane, and then click **Integration to Prometheus (BETA)**. 4. Click **Add File** to generate and show the scrape_config file for the current project. -5. Make a copy of the scrape_config file content for later use. +5. Make a copy of the `scrape_config` file content for later use. > **Note:** > - > For security reasons, TiDB Cloud only shows a newly generated scrape_config file once. 
Ensure that you copy the content before closing the file window. If you forget to do so, you need to delete the scrape_config file in TiDB Cloud and generate a new one. To delete a scrape_config file, select the file, click **...**, and then click **Delete**. + > For security reasons, TiDB Cloud only shows a newly generated `scrape_config` file once. Ensure that you copy the content before closing the file window. If you forget to do so, you need to delete the `scrape_config` file in TiDB Cloud and generate a new one. To delete a `scrape_config` file, select the file, click **...**, and then click **Delete**. ### Step 2. Integrate with Prometheus @@ -46,9 +46,9 @@ To get the scrape_config file for Prometheus, do the following: For example, `/etc/prometheus/prometheus.yml`. -2. In the Prometheus configuration file, locate the `scrape_configs` section, and then copy the scrape_config file content obtained from TiDB Cloud to the section. +2. In the Prometheus configuration file, locate the `scrape_configs` section, and then copy the `scrape_config` file content obtained from TiDB Cloud to the section. -3. In your Prometheus service, check **Status** > **Targets** to confirm that the new scrape_config file has been read. If not, you might need to restart the Prometheus service. +3. In your Prometheus service, check **Status** > **Targets** to confirm that the new `scrape_config` file has been read. If not, you might need to restart the Prometheus service. ### Step 3. Use Grafana GUI dashboards to visualize the metrics @@ -62,12 +62,12 @@ For more information about how to use Grafana, see [Grafana documentation](https ## Best practice of rotating scrape_config -To improve data security, it is a general best practice to periodically rotate scrape_config file bearer tokens. +To improve data security, it is a general best practice to periodically rotate `scrape_config` file bearer tokens. -1. 
Follow [Step 1](#step-1-get-a-scrape_config-file-for-prometheus) to create a new scrape_config file for Prometheus. +1. Follow [Step 1](#step-1-get-a-scrape_config-file-for-prometheus) to create a new `scrape_config` file for Prometheus. 2. Add the content of the new file to your Prometheus configuration file. -3. Once you have confirmed that your Prometheus service is still able to read from TiDB Cloud, remove the content of the old scrape_config file from your Prometheus configuration file. -4. On the **Integration** page of your project, delete the corresponding old scrape_config file to block anyone else from using it to read from the TiDB Cloud Prometheus endpoint. +3. Once you have confirmed that your Prometheus service is still able to read from TiDB Cloud, remove the content of the old `scrape_config` file from your Prometheus configuration file. +4. On the **Integration** page of your project, delete the corresponding old `scrape_config` file to block anyone else from using it to read from the TiDB Cloud Prometheus endpoint. ## Metrics available to Prometheus diff --git a/tidb-cloud/notification-2024-04-09-monitoring-features-maintenance.md b/tidb-cloud/notification-2024-04-09-monitoring-features-maintenance.md new file mode 100644 index 0000000000000..320126e3fb82f --- /dev/null +++ b/tidb-cloud/notification-2024-04-09-monitoring-features-maintenance.md @@ -0,0 +1,53 @@ +--- +title: 2024-04-09 TiDB Cloud Monitoring Features Maintenance Notification +summary: Learn about the details of the TiDB Cloud monitoring features maintenance on April 9, 2024, such as the maintenance window, reason, and impact. +--- + +# [2024-04-09] TiDB Cloud Monitoring Features Maintenance Notification + +This notification describes the details that you need to know about the TiDB Cloud [monitoring features](/tidb-cloud/monitor-tidb-cluster.md) maintenance on April 9, 2024. 
+ +## Maintenance window + +- Start time: 2024-04-09 08:00 (UTC+0) +- End time: 2024-04-09 12:00 (UTC+0) +- Duration: 4 hours + +## Impact + +### Affected regions + +During the maintenance window, the monitoring features in the following regions will be affected: + +- TiDB Dedicated clusters: + - Cloud Provider: AWS, Region: Oregon (us-west-2) + - Cloud Provider: AWS, Region: Seoul (ap-northeast-2) + - Cloud Provider: AWS, Region: Frankfurt (eu-central-1) + - Cloud Provider: Google Cloud, Region: Oregon (us-west1) + - Cloud Provider: Google Cloud, Region: Tokyo (asia-northeast1) + - Cloud Provider: Google Cloud, Region: Singapore (asia-southeast1) + - Cloud Provider: Google Cloud, Region: Iowa (us-central1) + - Cloud Provider: Google Cloud, Region: Taiwan (asia-east1) + +- TiDB Serverless clusters: + - Cloud Provider: AWS, Region: Frankfurt (eu-central-1) + - Cloud Provider: AWS, Region: Oregon (us-west-2) + +### Affected monitoring features + +> **Note:** +> +> The maintenance only affects monitoring features in the TiDB cluster. All the other functionalities remain unaffected. You can continue to manage the TiDB cluster and perform read/write operations or other operations as usual. + +- The **Metrics** page will be temporarily unavailable for several short periods (each less than 20 minutes). +- The **Slow Query** page will be temporarily unavailable for several short periods (each less than 5 minutes). +- The metrics integration with Prometheus, DataDog, and NewRelic might have breakpoints. + +## Completion and resumption + +Once the maintenance is successfully completed, the affected functionalities will be reinstated, offering you an even better experience. + +## Get support + +If you have any questions or need assistance, contact our [support team](/tidb-cloud/tidb-cloud-support.md). We are here to address your concerns and provide any necessary guidance. 
diff --git a/tidb-cloud/notification-2024-04-11-dm-feature-maintenance.md b/tidb-cloud/notification-2024-04-11-dm-feature-maintenance.md new file mode 100644 index 0000000000000..26241a658b695 --- /dev/null +++ b/tidb-cloud/notification-2024-04-11-dm-feature-maintenance.md @@ -0,0 +1,49 @@ +--- +title: 2024-04-11 TiDB Cloud Data Migration (DM) Feature Maintenance Notification +summary: Learn about the details of TiDB Cloud Data Migration (DM) feature maintenance on April 11, 2024, such as the maintenance window and impact. +--- + +# [2024-04-11] TiDB Cloud Data Migration (DM) Feature Maintenance Notification + +This notification describes the details that you need to know about the maintenance for [Data Migration (DM) feature](/tidb-cloud/migrate-from-mysql-using-data-migration.md) of TiDB Cloud Dedicated on April 11, 2024. + +## Maintenance window + +- Start time: 2024-04-11 08:00 (UTC+0) +- End time: 2024-04-11 09:00 (UTC+0) +- Duration: 1 hour + +## Impact + +During the maintenance window, the DM feature for TiDB Dedicated clusters in the following regions will be affected: + +- Cloud provider: AWS, region: Oregon (us-west-2) +- Cloud provider: AWS, region: N. Virginia (us-east-1) +- Cloud provider: AWS, region: Singapore (ap-southeast-1) +- Cloud provider: AWS, region: Seoul (ap-northeast-2) +- Cloud provider: AWS, region: Frankfurt (eu-central-1) +- Cloud provider: AWS, region: São Paulo (sa-east-1) +- Cloud provider: Google Cloud, region: Oregon (us-west1) +- Cloud provider: Google Cloud, region: Tokyo (asia-northeast1) +- Cloud provider: Google Cloud, region: Singapore (asia-southeast1) + +The maintenance only affects the DM feature in the TiDB cluster. All the other functionalities remain unaffected. You can continue to manage the TiDB cluster and perform read/write operations or other operations as usual. 
+ +For clusters deployed on AWS: + +- During the upgrade, the DM tasks can keep running without disruption. The DM console can be used normally. + +For clusters deployed on Google Cloud: + +- The DM console will be unavailable for up to 30 minutes. During this period, you cannot create or manage DM tasks. +- If a DM task is in the incremental migration stage, it will be interrupted for up to 30 minutes. During this period, do not purge the binary log of the MySQL database. The DM task will automatically resume after the upgrade is completed. +- If a DM task is in the stage of exporting and importing full data, it will fail during the upgrade, and cannot be resumed after the upgrade. It is recommended not to create any DM task on the day when the upgrade is performed, to ensure that no DM tasks are in the stage of exporting and importing full data when the upgrade starts. + +## Completion and resumption + +Once the maintenance is successfully completed, the affected functionalities will be reinstated, offering you a better experience. + +## Get support + +If you have any questions or need assistance, contact our [support team](/tidb-cloud/tidb-cloud-support.md). We are here to address your concerns and provide any necessary guidance. diff --git a/tidb-cloud/notification-2024-04-16-monitoring-features-maintenance.md b/tidb-cloud/notification-2024-04-16-monitoring-features-maintenance.md new file mode 100644 index 0000000000000..7938c4a387d32 --- /dev/null +++ b/tidb-cloud/notification-2024-04-16-monitoring-features-maintenance.md @@ -0,0 +1,46 @@ +--- +title: 2024-04-16 TiDB Cloud Monitoring Features Maintenance Notification +summary: Learn about the details of the TiDB Cloud monitoring features maintenance on April 16, 2024, such as the maintenance window, reason, and impact. 
+--- + +# [2024-04-16] TiDB Cloud Monitoring Features Maintenance Notification + +This notification describes the details that you need to know about the TiDB Cloud [monitoring features](/tidb-cloud/monitor-tidb-cluster.md) maintenance on April 16, 2024. + +## Maintenance window + +- Start time: 2024-04-16 08:00 (UTC+0) +- End time: 2024-04-16 12:00 (UTC+0) +- Duration: 4 hours + +## Impact + +### Affected regions + +During the maintenance window, the monitoring features in the following regions will be affected: + +- TiDB Dedicated clusters: + - Cloud Provider: AWS, Region: Tokyo (ap-northeast-1) + - Cloud Provider: AWS, Region: N. Virginia (us-east-1) + +- TiDB Serverless clusters: + - Cloud Provider: AWS, Region: Tokyo (ap-northeast-1) + - Cloud Provider: AWS, Region: N. Virginia (us-east-1) + +### Affected monitoring features + +> **Note:** +> +> The maintenance only affects monitoring features in the TiDB cluster. All the other functionalities remain unaffected. You can continue to manage the TiDB cluster and perform read/write operations or other operations as usual. + +- The **Metrics** page will be temporarily unavailable for several short periods (each less than 20 minutes). +- The **Slow Query** page will be temporarily unavailable for several short periods (each less than 5 minutes). +- The metrics integration with Prometheus, DataDog, and NewRelic might have breakpoints. + +## Completion and resumption + +Once the maintenance is successfully completed, the affected functionalities will be reinstated, offering you an even better experience. + +## Get support + +If you have any questions or need assistance, contact our [support team](/tidb-cloud/tidb-cloud-support.md). We are here to address your concerns and provide any necessary guidance. 
diff --git a/tidb-cloud/notification-2024-04-18-dm-feature-maintenance.md b/tidb-cloud/notification-2024-04-18-dm-feature-maintenance.md new file mode 100644 index 0000000000000..ff0b49821d76a --- /dev/null +++ b/tidb-cloud/notification-2024-04-18-dm-feature-maintenance.md @@ -0,0 +1,30 @@ +--- +title: 2024-04-18 TiDB Cloud Data Migration (DM) Feature Maintenance Notification +summary: Learn about the details of TiDB Cloud Data Migration (DM) feature maintenance on April 18, 2024, such as the maintenance window and impact. +--- + +# [2024-04-18] TiDB Cloud Data Migration (DM) Feature Maintenance Notification + +This notification describes the details that you need to know about the maintenance for [Data Migration (DM) feature](/tidb-cloud/migrate-from-mysql-using-data-migration.md) of TiDB Cloud Dedicated on April 18, 2024. + +## Maintenance window + +- Start time: 2024-04-18 08:00 (UTC+0) +- End time: 2024-04-18 09:00 (UTC+0) +- Duration: 1 hour + +## Impact + +During the maintenance window, the DM feature for TiDB Dedicated clusters in the following region will be upgraded: + +- Cloud provider: AWS, region: Tokyo (ap-northeast-1) + +During the upgrade, you can use the functionalities of TiDB clusters and the DM console normally. The DM tasks can keep running without disruption. + +## Completion and resumption + +Once the maintenance is successfully completed, the affected functionalities will be reinstated, offering you a better experience. + +## Get support + +If you have any questions or need assistance, contact our [support team](/tidb-cloud/tidb-cloud-support.md). We are here to address your concerns and provide any necessary guidance. 
diff --git a/tidb-cloud/notification-2024-09-15-console-maintenance.md b/tidb-cloud/notification-2024-09-15-console-maintenance.md new file mode 100644 index 0000000000000..964a94f142500 --- /dev/null +++ b/tidb-cloud/notification-2024-09-15-console-maintenance.md @@ -0,0 +1,92 @@ +--- +title: 2024-09-15 TiDB Cloud Console Maintenance Notification +summary: Learn about the details of the TiDB Cloud Console maintenance on September 15, 2024, such as the maintenance window, reason, and impact. +--- + +# [2024-09-15] TiDB Cloud Console Maintenance Notification + +This notification describes the details that you need to know about the [TiDB Cloud console](https://tidbcloud.com/) maintenance on September 15, 2024. + +## Maintenance window + +- Date: 2024-09-15 +- Start time: 8:00 (UTC+0) +- End time: 8:10 (UTC+0) +- Duration: Approximately 10 minutes + +> **Note:** +> +> - Currently, users cannot modify the maintenance timing for the TiDB Cloud console, so you need to plan accordingly in advance. +> - During the next 3 months, some users might experience an additional 20-minute maintenance window. Affected users will receive an email notification in advance. + +## Reason for maintenance + +We are upgrading the meta database services of the TiDB Cloud console to enhance performance and efficiency. This improvement aims to provide a better experience for all users as part of our ongoing commitment to delivering high-quality services. + +## Impact + +During the maintenance window, you might experience intermittent disruptions in functionalities related to creating and updating within the TiDB Cloud console UI and API. However, your TiDB cluster will maintain its regular operations for data read and write, ensuring no adverse effects on your online business. 
+ +### Affected features of TiDB Cloud console UI + +- Cluster level + - Cluster management + - Create clusters + - Delete clusters + - Scale clusters + - Pause or Resume clusters + - Change cluster password + - Change cluster traffic filter + - Import + - Create an import job + - Data Migration + - Create a migration job + - Changefeed + - Create a changefeed job + - Backup + - Create a manual backup job + - Auto backup job + - Restore + - Create a restore Job + - Database audit log + - Test connectivity + - Add or delete access record + - Enable or disable Database audit logging + - Restart database audit logging +- Project level + - Network access + - Create a private endpoint + - Delete a private endpoint + - Add VPC Peering + - Delete VPC Peering + - Maintenance + - Change maintenance window + - Defer task + - Recycle Bin + - Delete clusters + - Delete backups + - Restore clusters + +### Affected features of TiDB Cloud API + +- Cluster management + - [CreateCluster](https://docs.pingcap.com/tidbcloud/api/v1beta#tag/Cluster/operation/CreateCluster) + - [DeleteCluster](https://docs.pingcap.com/tidbcloud/api/v1beta#tag/Cluster/operation/DeleteCluster) + - [UpdateCluster](https://docs.pingcap.com/tidbcloud/api/v1beta#tag/Cluster/operation/UpdateCluster) + - [CreateAwsCmek](https://docs.pingcap.com/tidbcloud/api/v1beta#tag/Cluster/operation/CreateAwsCmek) +- Backup + - [CreateBackup](https://docs.pingcap.com/tidbcloud/api/v1beta#tag/Backup/operation/CreateBackup) + - [DeleteBackup](https://docs.pingcap.com/tidbcloud/api/v1beta#tag/Backup/operation/DeleteBackup) +- Restore + - [CreateRestoreTask](https://docs.pingcap.com/tidbcloud/api/v1beta#tag/Restore/operation/CreateRestoreTask) +- Import + - [CreateImportTask](https://docs.pingcap.com/tidbcloud/api/v1beta#tag/Import/operation/CreateImportTask) + - [UpdateImportTask](https://docs.pingcap.com/tidbcloud/api/v1beta#tag/Import/operation/UpdateImportTask) + +## Completion and resumption + +Once the maintenance is 
successfully completed, the affected functionalities will be reinstated, offering you an even better experience. + +## Get support + +If you have any questions or need assistance, contact our [support team](/tidb-cloud/tidb-cloud-support.md). We are here to address your concerns and provide any necessary guidance. \ No newline at end of file diff --git a/tidb-cloud/oauth2.md b/tidb-cloud/oauth2.md new file mode 100644 index 0000000000000..9881a8c591815 --- /dev/null +++ b/tidb-cloud/oauth2.md @@ -0,0 +1,47 @@ +--- +title: OAuth 2.0 +summary: Learn about how to use OAuth 2.0 in TiDB Cloud. +--- + +# OAuth 2.0 + +This document describes how to access TiDB Cloud using OAuth 2.0. + +OAuth, which stands for Open Authorization, is an open standard authorization protocol that allows secure access to resources on behalf of a user. It provides a way for third-party applications to access user resources without exposing their credentials. + +[OAuth 2.0](https://oauth.net/2/), the latest version of OAuth, has become the industry-standard protocol for authorization. Key benefits of OAuth 2.0 include: + +- Security: By using token-based authentication, OAuth 2.0 minimizes the risk of password theft and unauthorized access. +- Convenience: You can grant and revoke access to your data without managing multiple credentials. +- Access control: You can specify the exact level of access granted to third-party applications, ensuring only necessary permissions are given. + +## OAuth grant types + +The OAuth framework specifies several grant types for different use cases. TiDB Cloud supports the two most common OAuth grant types: Device Code and Authorization Code. + +### Device Code grant type + +It is usually used by browserless or input-constrained devices in the device flow to exchange a previously obtained device code for an access token. 
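As a sketch of what happens in this flow (defined in RFC 8628): the client obtains a device code, shows the user a verification URL and user code, and then polls the token endpoint until the user approves. The shell sketch below simulates only the polling loop — the endpoint URLs and parameters in the comments are generic RFC 8628 placeholders, not documented TiDB Cloud endpoints, and the token endpoint is replaced by a local stub so the logic runs standalone:

```bash
#!/bin/sh
# Illustrative sketch of the OAuth 2.0 device-flow polling loop (RFC 8628).
# A real client would first request a device code, for example:
#   curl -s -X POST "$AUTH_SERVER/oauth/device/code" -d "client_id=$CLIENT_ID"
# then display the returned user_code and verification_uri, and poll:
#   curl -s -X POST "$AUTH_SERVER/oauth/token" \
#     -d "grant_type=urn:ietf:params:oauth:grant-type:device_code" \
#     -d "device_code=$DEVICE_CODE" -d "client_id=$CLIENT_ID"
poll_token_endpoint() {
    # Local stub: pretend the user approves the device on the third poll.
    if [ "$1" -lt 3 ]; then
        echo '{"error":"authorization_pending"}'
    else
        echo '{"access_token":"example-token","token_type":"Bearer"}'
    fi
}

attempts=0
while :; do
    attempts=$((attempts + 1))
    resp=$(poll_token_endpoint "$attempts")
    case "$resp" in
        *access_token*) break ;;          # authorized
        *authorization_pending*) ;;       # keep polling (a real client sleeps for the returned interval)
        *) echo "unexpected response: $resp" >&2; exit 1 ;;
    esac
done
echo "authorized after $attempts polls"
```

In practice, the `ticloud auth login` command performs this flow for you against the TiDB Cloud authorization server.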
+ +### Authorization Code grant type + +It is the most common OAuth 2.0 grant type, which enables both web apps and native apps to get an access token after a user authorizes an app. + +## Use OAuth to access TiDB Cloud + +You can access TiDB Cloud CLI using the OAuth 2.0 Device Code grant type: + +- [ticloud auth login](/tidb-cloud/ticloud-auth-login.md): Authenticate with TiDB Cloud +- [ticloud auth logout](/tidb-cloud/ticloud-auth-logout.md): Log out of TiDB Cloud + +If your app needs to access TiDB Cloud using OAuth, submit a request to [become a Cloud & Technology Partner](https://www.pingcap.com/partners/become-a-partner/) (select **Cloud & Technology Partner** in **Partner Program**). We will reach out to you. + +## View and revoke authorized OAuth apps + +You can view the records for authorized OAuth applications in the TiDB Cloud console as follows: + +1. In the [TiDB Cloud console](https://tidbcloud.com/), click in the lower-left corner. +2. Click **Account Settings**. +3. In the left navigation pane, click **Authorized OAuth Apps**. You can view authorized OAuth applications. + +You can click **Revoke** to revoke your authorization at any time. diff --git a/tidb-cloud/release-notes-2022.md b/tidb-cloud/release-notes-2022.md index 4969641e90d0f..49d96be10f982 100644 --- a/tidb-cloud/release-notes-2022.md +++ b/tidb-cloud/release-notes-2022.md @@ -468,7 +468,7 @@ This page lists the release notes of [TiDB Cloud](https://www.pingcap.com/tidb-c ## July 12, 2022 * Add the **Validate** button to the [**Data Import Task**](/tidb-cloud/import-sample-data.md) page for Amazon S3, which helps you detect data access issues before the data import starts. -* Add **Billing Profile** under the [**Payment Method**](/tidb-cloud/tidb-cloud-billing.md#payment-method) tab. By providing your tax registration number in **Billing Profile**, certain taxes might be exempted from your invoice. 
For more information, see [Edit billing profile information](/tidb-cloud/tidb-cloud-billing.md#edit-billing-profile-information). +* Add **Billing Profile** under the [**Payment Method**](/tidb-cloud/tidb-cloud-billing.md#payment-method) tab. By providing your tax registration number in **Billing Profile**, certain taxes might be exempted from your invoice. For more information, see [Edit billing profile information](/tidb-cloud/tidb-cloud-billing.md#billing-profile). ## July 05, 2022 diff --git a/tidb-cloud/release-notes-2023.md b/tidb-cloud/release-notes-2023.md new file mode 100644 index 0000000000000..d7ac55bbc5509 --- /dev/null +++ b/tidb-cloud/release-notes-2023.md @@ -0,0 +1,1003 @@ +--- +title: TiDB Cloud Release Notes in 2023 +summary: Learn about the release notes of TiDB Cloud in 2023. +--- + +# TiDB Cloud Release Notes in 2023 + +This page lists the release notes of [TiDB Cloud](https://www.pingcap.com/tidb-cloud/) in 2023. + +## December 5, 2023 + +**General changes** + +- [TiDB Dedicated](/tidb-cloud/select-cluster-tier.md#tidb-dedicated) enables you to resume a failed changefeed, which saves you the effort of creating a new one. + + For more information, see [Changefeed states](/tidb-cloud/changefeed-overview.md#changefeed-states). + +**Console changes** + +- Enhance the connection experience for [TiDB Serverless](/tidb-cloud/select-cluster-tier.md#tidb-serverless). + + Refine the **Connect** dialog interface to offer TiDB Serverless users a smoother and more efficient connection experience. In addition, TiDB Serverless introduces more client types and allows you to select the desired branch for connection. + + For more information, see [Connect to TiDB Serverless](/tidb-cloud/connect-via-standard-connection-serverless.md). + +## November 28, 2023 + +**General changes** + +- [TiDB Dedicated](/tidb-cloud/select-cluster-tier.md#tidb-dedicated) supports restoring SQL bindings from backups. 
+ + TiDB Dedicated now restores user accounts and SQL bindings by default when restoring from a backup. This enhancement is available for clusters of v6.2.0 or later versions, streamlining the data restoration process. The restoration of SQL bindings ensures the smooth reintegration of query-related configurations and optimizations, providing you with a more comprehensive and efficient recovery experience. + + For more information, see [Back up and restore TiDB Dedicated data](/tidb-cloud/backup-and-restore.md). + +**Console changes** + +- [TiDB Serverless](/tidb-cloud/select-cluster-tier.md#tidb-serverless) supports monitoring SQL statement RU costs. + + TiDB Serverless now provides detailed insights into each SQL statement's [Request Units (RUs)](/tidb-cloud/tidb-cloud-glossary.md#request-unit). You can view both the **Total RU** and **Mean RU** costs per SQL statement. This feature helps you identify and analyze RU costs, offering opportunities for potential cost savings in your operations. + + To check your SQL statement RU details, navigate to the **Diagnosis** page of [your TiDB Serverless cluster](https://tidbcloud.com/console/clusters) and then click the **SQL Statement** tab. + +## November 21, 2023 + +**General changes** + +- [Data Migration](/tidb-cloud/migrate-from-mysql-using-data-migration.md) supports high-speed physical mode for TiDB clusters deployed on Google Cloud. + + Now you can use physical mode for TiDB clusters deployed on AWS and Google Cloud. The migration speed of physical mode can reach up to 110 MiB/s, which is 2.4 times faster than logical mode. The improved performance is suitable for quickly migrating large datasets to TiDB Cloud. + + For more information, see [Migrate existing data and incremental data](/tidb-cloud/migrate-from-mysql-using-data-migration.md#migrate-existing-data-and-incremental-data). 
+ +## November 14, 2023 + +**General changes** + +- When you restore data from TiDB Dedicated clusters, the default behavior is now modified from restoring without user accounts to restoring with all user accounts. + + For more information, see [Back Up and Restore TiDB Dedicated Data](/tidb-cloud/backup-and-restore.md). + +- Introduce event filters for changefeeds. + + This enhancement empowers you to easily manage event filters for changefeeds directly through the [TiDB Cloud console](https://tidbcloud.com/), streamlining the process of excluding specific events from changefeeds and providing better control over data replication downstream. + + For more information, see [Changefeed](/tidb-cloud/changefeed-overview.md#edit-a-changefeed). + +## November 7, 2023 + +**General changes** + +- Add the following resource usage alerts. The new alerts are disabled by default. You can enable them as needed. + + - Max memory utilization across TiDB nodes exceeded 70% for 10 minutes + - Max memory utilization across TiKV nodes exceeded 70% for 10 minutes + - Max CPU utilization across TiDB nodes exceeded 80% for 10 minutes + - Max CPU utilization across TiKV nodes exceeded 80% for 10 minutes + + For more information, see [TiDB Cloud Built-in Alerting](/tidb-cloud/monitor-built-in-alerting.md#resource-usage-alerts). + +## October 31, 2023 + +**General changes** + +- Support directly upgrading to the Enterprise support plan in the TiDB Cloud console without contacting sales. + + For more information, see [TiDB Cloud Support](/tidb-cloud/tidb-cloud-support.md). + +## October 25, 2023 + +**General changes** + +- [TiDB Dedicated](/tidb-cloud/select-cluster-tier.md#tidb-dedicated) supports dual region backup (beta) on Google Cloud. + + TiDB Dedicated clusters hosted on Google Cloud work seamlessly with Google Cloud Storage. 
Similar to the [Dual-regions](https://cloud.google.com/storage/docs/locations#location-dr) feature of Google Cloud Storage, the pair of regions that you use for the dual-region in TiDB Dedicated must be within the same multi-region. For example, Tokyo and Osaka are in the same multi-region `ASIA` so they can be used together for dual-region storage. + + For more information, see [Back Up and Restore TiDB Dedicated Data](/tidb-cloud/backup-and-restore.md#turn-on-dual-region-backup). + +- The feature of [streaming data change logs to Apache Kafka](/tidb-cloud/changefeed-sink-to-apache-kafka.md) is now in General Availability (GA). + + After a successful 10-month beta trial, the feature of streaming data change logs from TiDB Cloud to Apache Kafka becomes generally available. Streaming data from TiDB to a message queue is a common need in data integration scenarios. You can use Kafka sink to integrate with other data processing systems (such as Snowflake) or support business consumption. + + For more information, see [Changefeed overview](/tidb-cloud/changefeed-overview.md). + +## October 11, 2023 + +**General changes** + +- Support [dual region backup (beta)](/tidb-cloud/backup-and-restore.md#turn-on-dual-region-backup) for [TiDB Dedicated](/tidb-cloud/select-cluster-tier.md#tidb-dedicated) clusters deployed on AWS. + + You can now replicate backups across geographic regions within your cloud provider. This feature provides an additional layer of data protection and disaster recovery capabilities. + + For more information, see [Back up and restore TiDB Dedicated data](/tidb-cloud/backup-and-restore.md). + +- Data Migration now supports both physical mode and logical mode for migrating existing data. + + In physical mode, the migration speed can reach up to 110 MiB/s. Compared with 45 MiB/s in logical mode, the migration performance has improved significantly. 
+ + For more information, see [Migrate existing data and incremental data](/tidb-cloud/migrate-from-mysql-using-data-migration.md#migrate-existing-data-and-incremental-data). + +## October 10, 2023 + +**General changes** + +- Support using TiDB Serverless branches in [Vercel Preview Deployments](https://vercel.com/docs/deployments/preview-deployments), with TiDB Cloud Vercel integration. + + For more information, see [Connect with TiDB Serverless branching](/tidb-cloud/integrate-tidbcloud-with-vercel.md#connect-with-tidb-serverless-branching). + +## September 28, 2023 + +**API changes** + +- Introduce a TiDB Cloud Billing API endpoint to retrieve the bill for the given month of a specific organization. + + This Billing API endpoint is released in TiDB Cloud API v1beta1, which is the latest API version of TiDB Cloud. For more information, refer to the [API documentation (v1beta1)](https://docs.pingcap.com/tidbcloud/api/v1beta1#tag/Billing). + +## September 19, 2023 + +**General changes** + +- Remove 2 vCPU TiDB and TiKV nodes from [TiDB Dedicated](/tidb-cloud/select-cluster-tier.md#tidb-dedicated) clusters. + + The 2 vCPU option is no longer available on the **Create Cluster** page or the **Modify Cluster** page. + +- Release [TiDB Cloud serverless driver (beta)](/tidb-cloud/serverless-driver.md) for JavaScript. + + TiDB Cloud serverless driver for JavaScript allows you to connect to your [TiDB Serverless](/tidb-cloud/select-cluster-tier.md#tidb-serverless) cluster over HTTPS. It is particularly useful in edge environments where TCP connections are limited, such as [Vercel Edge Function](https://vercel.com/docs/functions/edge-functions) and [Cloudflare Workers](https://workers.cloudflare.com/). + + For more information, see [TiDB Cloud serverless driver (beta)](/tidb-cloud/serverless-driver.md). 
+ +**Console changes** + +- For [TiDB Serverless](/tidb-cloud/select-cluster-tier.md#tidb-serverless) clusters, you can get an estimation of cost in the **Usage This Month** panel or while setting up the spending limit. + +## September 5, 2023 + +**General changes** + +- [Data Service (beta)](https://tidbcloud.com/console/data-service) supports customizing the rate limit for each API key to meet specific rate-limiting requirements in different situations. + + You can adjust the rate limit of an API key when you [create](/tidb-cloud/data-service-api-key.md#create-an-api-key) or [edit](/tidb-cloud/data-service-api-key.md#edit-an-api-key) the key. + + For more information, see [Rate limiting](/tidb-cloud/data-service-api-key.md#rate-limiting). + +- Support a new AWS region for [TiDB Dedicated](/tidb-cloud/select-cluster-tier.md#tidb-dedicated) clusters: São Paulo (sa-east-1). + +- Support adding up to 100 IP addresses to the IP access list for each [TiDB Dedicated](/tidb-cloud/select-cluster-tier.md#tidb-dedicated) cluster. + + For more information, see [Configure an IP access list](/tidb-cloud/configure-ip-access-list.md). + +**Console changes** + +- Introduce the **Events** page for [TiDB Serverless](/tidb-cloud/select-cluster-tier.md#tidb-serverless) clusters, which provides the records of main changes to your cluster. + + On this page, you can view the event history for the last 7 days and track important details such as the trigger time and the user who initiated an action. + + For more information, see [TiDB Cloud cluster events](/tidb-cloud/tidb-cloud-events.md). 
+ +**API changes** + +- Release several TiDB Cloud API endpoints for managing the [AWS PrivateLink](https://aws.amazon.com/privatelink/?privatelink-blogs.sort-by=item.additionalFields.createdDate&privatelink-blogs.sort-order=desc) or [Google Cloud Private Service Connect](https://cloud.google.com/vpc/docs/private-service-connect) for [TiDB Dedicated](/tidb-cloud/select-cluster-tier.md#tidb-dedicated) clusters: + + - Create a private endpoint service for a cluster + - Retrieve the private endpoint service information of a cluster + - Create a private endpoint for a cluster + - List all private endpoints of a cluster + - List all private endpoints in a project + - Delete a private endpoint of a cluster + + For more information, refer to the [API documentation](https://docs.pingcap.com/tidbcloud/api/v1beta#tag/Cluster). + +## August 23, 2023 + +**General changes** + +- Support Google Cloud [Private Service Connect](https://cloud.google.com/vpc/docs/private-service-connect) for [TiDB Dedicated](/tidb-cloud/select-cluster-tier.md#tidb-dedicated) clusters. + + You can now create a private endpoint and establish a secure connection to a TiDB Dedicated cluster hosted on Google Cloud. + + Key benefits: + + - Intuitive operations: helps you create a private endpoint with only several steps. + - Enhanced security: establishes a secure connection to protect your data. + - Improved performance: provides low-latency and high-bandwidth connectivity. + + For more information, see [Connect via Private Endpoint with Google Cloud](/tidb-cloud/set-up-private-endpoint-connections-on-google-cloud.md). + +- Support using a changefeed to stream data from a [TiDB Dedicated](/tidb-cloud/select-cluster-tier.md#tidb-dedicated) cluster to [Google Cloud Storage (GCS)](https://cloud.google.com/storage). + + You can now stream data from TiDB Cloud to GCS by using your own account's bucket and providing precisely tailored permissions. 
After replicating data to GCS, you can analyze the changes in your data as you wish. + + For more information, see [Sink to Cloud Storage](/tidb-cloud/changefeed-sink-to-cloud-storage.md). + +## August 15, 2023 + +**General changes** + +- [Data Service (beta)](https://tidbcloud.com/console/data-service) supports pagination for `GET` requests to improve the development experience. + + For `GET` requests, you can paginate results by enabling **Pagination** in **Advance Properties** and specifying `page` and `page_size` as query parameters when calling the endpoint. For example, to get the second page with 10 items per page, you can use the following command: + + ```bash + curl --digest --user '<public-key>:<private-key>' \ + --request GET 'https://<region>.data.tidbcloud.com/api/v1beta/app/<app-id>/endpoint/<endpoint-path>?page=2&page_size=10' + ``` + + Note that this feature is available only for `GET` requests where the last query is a `SELECT` statement. + + For more information, see [Call an endpoint](/tidb-cloud/data-service-manage-endpoint.md#call-an-endpoint). + +- [Data Service (beta)](https://tidbcloud.com/console/data-service) supports caching endpoint responses of `GET` requests for a specified time-to-live (TTL) period. + + This feature decreases database load and optimizes endpoint latency. + + For an endpoint using the `GET` request method, you can enable **Cache Response** and configure the TTL period for the cache in **Advance Properties**. + + For more information, see [Advanced properties](/tidb-cloud/data-service-manage-endpoint.md#advanced-properties). + +- Disable the load balancing improvement for [TiDB Dedicated](/tidb-cloud/select-cluster-tier.md#tidb-dedicated) clusters that are hosted on AWS and created after August 15, 2023, including: + + - Disable automatically migrating existing connections to new TiDB nodes when you scale out TiDB nodes hosted on AWS. + - Disable automatically migrating existing connections to available TiDB nodes when you scale in TiDB nodes hosted on AWS. 
+ + This change avoids resource contention of hybrid deployments and does not affect existing clusters with this improvement enabled. If you want to enable the load balancing improvement for your new clusters, contact [TiDB Cloud Support](/tidb-cloud/tidb-cloud-support.md). + +## August 8, 2023 + +**General changes** + +- [Data Service (beta)](https://tidbcloud.com/console/data-service) now supports Basic Authentication. + + You can provide your public key as the username and private key as the password in requests using ['Basic' HTTP Authentication](https://datatracker.ietf.org/doc/html/rfc7617). Compared with Digest Authentication, Basic Authentication is simpler, enabling more straightforward usage when calling Data Service endpoints. + + For more information, see [Call an endpoint](/tidb-cloud/data-service-manage-endpoint.md#call-an-endpoint). + +## August 1, 2023 + +**General changes** + +- Support the OpenAPI Specification for Data Apps in TiDB Cloud [Data Service](https://tidbcloud.com/console/data-service). + + TiDB Cloud Data Service provides autogenerated OpenAPI documentation for each Data App. In the documentation, you can view the endpoints, parameters, and responses, and try out the endpoints. + + You can also download an OpenAPI Specification (OAS) for a Data App and its deployed endpoints in YAML or JSON format. The OAS provides standardized API documentation, simplified integration, and easy code generation, which enables faster development and improved collaboration. + + For more information, see [Use the OpenAPI Specification](/tidb-cloud/data-service-manage-data-app.md#use-the-openapi-specification) and [Use the OpenAPI Specification with Next.js](/tidb-cloud/data-service-oas-with-nextjs.md). + +- Support running Data App in [Postman](https://www.postman.com/). + + The Postman integration empowers you to import a Data App's endpoints as a collection into your preferred workspace.
Then you can benefit from enhanced collaboration and seamless API testing with support for both Postman web and desktop apps. + + For more information, see [Run Data App in Postman](/tidb-cloud/data-service-postman-integration.md). + +- Introduce a new **Pausing** status for [TiDB Dedicated](/tidb-cloud/select-cluster-tier.md#tidb-dedicated) clusters, allowing cost-effective pauses with no charges during this period. + + When you click **Pause** for a TiDB Dedicated cluster, the cluster will enter the **Pausing** status first. Once the pause operation is completed, the cluster status will transition to **Paused**. + + A cluster can only be resumed after its status transitions to **Paused**, which resolves the abnormal resumption issue caused by rapid clicks of **Pause** and **Resume**. + + For more information, see [Pause or resume a TiDB Dedicated cluster](/tidb-cloud/pause-or-resume-tidb-cluster.md). + +## July 26, 2023 + +**General changes** + +- Introduce a powerful feature in TiDB Cloud [Data Service](https://tidbcloud.com/console/data-service): Automatic endpoint generation. + + Developers can now effortlessly create HTTP endpoints with minimal clicks and configurations. Eliminate repetitive boilerplate code, simplify and accelerate endpoint creation, and reduce potential errors. + + For more information on how to use this feature, see [Generate an endpoint automatically](/tidb-cloud/data-service-manage-endpoint.md#generate-an-endpoint-automatically). + +- Support `PUT` and `DELETE` request methods for endpoints in TiDB Cloud [Data Service](https://tidbcloud.com/console/data-service). + + - Use the `PUT` method to update or modify data, similar to an `UPDATE` statement. + - Use the `DELETE` method to delete data, similar to a `DELETE` statement. + + For more information, see [Configure properties](/tidb-cloud/data-service-manage-endpoint.md#configure-properties). 
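To make the new request methods concrete, here is a small Python sketch that builds (but does not send) `PUT` and `DELETE` requests for a hypothetical Data Service endpoint. The host, app ID, endpoint path, and payload shape are placeholders for illustration only, not real values:

```python
# Illustrative only: build PUT and DELETE requests for a hypothetical
# Data Service endpoint. "<region>", "<app ID>", and "/users" are
# placeholders, not real values.
import json
import urllib.request

BASE = "https://<region>.data.tidbcloud.com/api/v1beta/app/<app ID>/endpoint"

def build_request(method, path, payload=None):
    """Build (but do not send) a Data Service request."""
    data = json.dumps(payload).encode("utf-8") if payload is not None else None
    req = urllib.request.Request(BASE + path, data=data, method=method)
    req.add_header("Content-Type", "application/json")
    return req

# PUT updates data, similar to an UPDATE statement.
update = build_request("PUT", "/users", {"id": 1, "name": "new-name"})
# DELETE deletes data, similar to a DELETE statement.
delete = build_request("DELETE", "/users", {"id": 1})

print(update.get_method(), delete.get_method())  # PUT DELETE
```

In practice, you would substitute your own region, Data App ID, and endpoint path, and authenticate with your API key as described in [Call an endpoint](/tidb-cloud/data-service-manage-endpoint.md#call-an-endpoint).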
+ +- Support **Batch Operation** for `POST`, `PUT`, and `DELETE` request methods in TiDB Cloud [Data Service](https://tidbcloud.com/console/data-service). + + When **Batch Operation** is enabled for an endpoint, you gain the ability to perform operations on multiple rows in a single request. For instance, you can insert multiple rows of data using a single `POST` request. + + For more information, see [Advanced properties](/tidb-cloud/data-service-manage-endpoint.md#advanced-properties). + +## July 25, 2023 + +**General changes** + +- Upgrade the default TiDB version of new [TiDB Dedicated](/tidb-cloud/select-cluster-tier.md#tidb-dedicated) clusters from [v6.5.3](https://docs.pingcap.com/tidb/v6.5/release-6.5.3) to [v7.1.1](https://docs.pingcap.com/tidb/v7.1/release-7.1.1). + +**Console changes** + +- Simplify access to PingCAP Support for TiDB Cloud users by optimizing support entries. Improvements include: + + - Add an entrance for **Support** in the lower-left corner. + - Revamp the menus of the **?** icon in the lower-right corner of the [TiDB Cloud console](https://tidbcloud.com/) to make them more intuitive. + + For more information, see [TiDB Cloud Support](/tidb-cloud/tidb-cloud-support.md). + +## July 18, 2023 + +**General changes** + +- Refine role-based access control at both the organization level and project level, which lets you grant roles with minimum permissions to users for better security, compliance, and productivity. + + - The organization roles include `Organization Owner`, `Organization Billing Admin`, `Organization Console Audit Admin`, and `Organization Member`. + - The project roles include `Project Owner`, `Project Data Access Read-Write`, and `Project Data Access Read-Only`. + - To manage clusters in a project (such as cluster creation, modification, and deletion), you need to be in the `Organization Owner` or `Project Owner` role.
+ + For more information about permissions of different roles, see [User roles](/tidb-cloud/manage-user-access.md#user-roles). + +- Support the Customer-Managed Encryption Key (CMEK) feature (beta) for [TiDB Dedicated](/tidb-cloud/select-cluster-tier.md#tidb-dedicated) clusters hosted on AWS. + + You can create CMEK based on AWS KMS to encrypt data stored in EBS and S3 directly from the TiDB Cloud console. This ensures that customer data is encrypted with a key managed by the customer, which enhances security. + + Note that this feature still has restrictions and is only available upon request. To apply for this feature, contact [TiDB Cloud Support](/tidb-cloud/tidb-cloud-support.md). + +- Optimize the Import feature in TiDB Cloud, aimed at enhancing the data import experience. The following improvements have been made: + + - Unified Import entry for TiDB Serverless: consolidate the entries for importing data, allowing you to seamlessly switch between importing local files and importing files from Amazon S3. + - Streamlined configuration: importing data from Amazon S3 now only requires a single step, saving time and effort. + - Enhanced CSV configuration: the CSV configuration settings are now located under the file type option, making it easier for you to quickly configure the necessary parameters. + - Enhanced target table selection: support choosing the desired target tables for data import by clicking checkboxes. This improvement eliminates the need for complex expressions and simplifies the target table selection. + - Refined display information: resolve issues related to inaccurate information displayed during the import process. In addition, the Preview feature has been removed to prevent incomplete data display and avoid misleading information. + - Improved source files mapping: support defining mapping relationships between source files and target tables. It addresses the challenge of modifying source file names to meet specific naming requirements. 
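The source-file mapping idea can be pictured with a short sketch. This is not TiDB Cloud's implementation: the fnmatch-style `*` wildcard below is only an illustration, and the exact pattern syntax is defined in the import documentation.

```python
# Illustrative sketch: wildcard patterns route source files to target
# tables. The patterns and table names below are made-up examples.
from fnmatch import fnmatch

mappings = [
    ("orders.*.csv", "sales.orders"),
    ("users_*.csv", "app.users"),
]

def target_table(filename, mappings):
    """Return the first table whose pattern matches the file name."""
    for pattern, table in mappings:
        if fnmatch(filename, pattern):
            return table
    return None

print(target_table("orders.0001.csv", mappings))  # sales.orders
print(target_table("users_2023.csv", mappings))   # app.users
```

The benefit is exactly what the release note describes: source files no longer need to be renamed to match a fixed naming convention, because the mapping rules adapt to the existing names.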
+ +## July 11, 2023 + +**General changes** + +- [TiDB Serverless](/tidb-cloud/select-cluster-tier.md#tidb-serverless) is now Generally Available. + +- Introduce TiDB Bot (beta), an OpenAI-powered chatbot that offers multi-language support, 24/7 real-time response, and integrated documentation access. + + TiDB Bot provides you with the following benefits: + + - Continuous support: always available to assist and answer your questions for an enhanced support experience. + - Improved efficiency: automated responses reduce latency, improving overall operations. + - Seamless documentation access: direct access to TiDB Cloud documentation for easy information retrieval and quick issue resolution. + + To use TiDB Bot, click **?** in the lower-right corner of the [TiDB Cloud console](https://tidbcloud.com), and select **Ask TiDB Bot** to start a chat. + +- Support [the branching feature (beta)](/tidb-cloud/branch-overview.md) for [TiDB Serverless](/tidb-cloud/select-cluster-tier.md#tidb-serverless) clusters. + + TiDB Cloud lets you create branches for TiDB Serverless clusters. A branch for a cluster is a separate instance that contains a diverged copy of data from the original cluster. It provides an isolated environment, allowing you to connect to it and experiment freely without worrying about affecting the original cluster. + + You can create branches for TiDB Serverless clusters created after July 5, 2023 by using either the [TiDB Cloud console](/tidb-cloud/branch-manage.md) or the [TiDB Cloud CLI](/tidb-cloud/ticloud-branch-create.md). + + If you use GitHub for application development, you can integrate TiDB Serverless branching into your GitHub CI/CD pipeline, which lets you automatically test your pull requests with branches without affecting the production database. For more information, see [Integrate TiDB Serverless Branching (Beta) with GitHub](/tidb-cloud/branch-github-integration.md).
+ +- Support weekly backup for [TiDB Dedicated](/tidb-cloud/select-cluster-tier.md#tidb-dedicated) clusters. For more information, see [Back up and restore TiDB Dedicated data](/tidb-cloud/backup-and-restore.md#turn-on-auto-backup). + +## July 4, 2023 + +**General changes** + +- Support point-in-time recovery (PITR) (beta) for [TiDB Serverless](/tidb-cloud/select-cluster-tier.md#tidb-serverless) clusters. + + You can now restore your TiDB Serverless cluster to any point in time within the last 90 days. This feature enhances the data recovery capability of TiDB Serverless clusters. For example, you can use PITR when data write errors occur and you want to restore the data to an earlier state. + + For more information, see [Back up and restore TiDB Serverless data](/tidb-cloud/backup-and-restore-serverless.md#restore). + +**Console changes** + +- Enhance the **Usage This Month** panel on the cluster overview page for [TiDB Serverless](/tidb-cloud/select-cluster-tier.md#tidb-serverless) clusters to provide a clearer view of your current resource usage. + +- Enhance the overall navigation experience by making the following changes: + + - Consolidate **Organization** and **Account** in the upper-right corner into the left navigation bar. + - Consolidate **Admin** in the left navigation bar into **Project** in the left navigation bar, and remove the ☰ hover menu in the upper-left corner. Now you can click to switch between projects and modify project settings. + - Consolidate all the help and support information for TiDB Cloud into the menu of the **?** icon in the lower-right corner, such as documentation, interactive tutorials, self-paced training, and support entries. + +- TiDB Cloud console now supports Dark Mode, which provides a more comfortable, eye-friendly experience. You can switch between light mode and dark mode from the bottom of the left navigation bar. 
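As a side note on the 90-day PITR window announced above for TiDB Serverless clusters, a client-side sanity check could look like the following sketch. This is illustrative only; the actual validation is performed by TiDB Cloud.

```python
# Illustrative sketch: check whether a requested restore time falls
# inside the 90-day PITR window mentioned above.
from datetime import datetime, timedelta, timezone

PITR_WINDOW = timedelta(days=90)

def in_restore_window(restore_time, now=None):
    """Return True if restore_time is within the last 90 days."""
    now = now or datetime.now(timezone.utc)
    return now - PITR_WINDOW <= restore_time <= now

now = datetime(2023, 7, 4, tzinfo=timezone.utc)
print(in_restore_window(datetime(2023, 5, 1, tzinfo=timezone.utc), now))  # True
print(in_restore_window(datetime(2023, 1, 1, tzinfo=timezone.utc), now))  # False
```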
+ +## June 27, 2023 + +**General changes** + +- Remove the pre-built sample dataset for newly created [TiDB Serverless](/tidb-cloud/select-cluster-tier.md#tidb-serverless) clusters. + +## June 20, 2023 + +**General changes** + +- Upgrade the default TiDB version of new [TiDB Dedicated](/tidb-cloud/select-cluster-tier.md#tidb-dedicated) clusters from [v6.5.2](https://docs.pingcap.com/tidb/v6.5/release-6.5.2) to [v6.5.3](https://docs.pingcap.com/tidb/v6.5/release-6.5.3). + +## June 13, 2023 + +**General changes** + +- Support using changefeeds to stream data to Amazon S3. + + This enables seamless integration between TiDB Cloud and Amazon S3. It allows real-time data capture and replication from [TiDB Dedicated](/tidb-cloud/select-cluster-tier.md#tidb-dedicated) clusters to Amazon S3, ensuring that downstream applications and analytics have access to up-to-date data. + + For more information, see [Sink to cloud storage](/tidb-cloud/changefeed-sink-to-cloud-storage.md). + +- Increase the maximum node storage of 16 vCPU TiKV for [TiDB Dedicated](/tidb-cloud/select-cluster-tier.md#tidb-dedicated) clusters from 4 TiB to 6 TiB. + + This enhancement increases the data storage capacity of your TiDB Dedicated cluster, improves workload scaling efficiency, and accommodates growing data requirements. + + For more information, see [Size your cluster](/tidb-cloud/size-your-cluster.md). + +- Extend the [monitoring metrics retention period](/tidb-cloud/built-in-monitoring.md#metrics-retention-policy) for [TiDB Serverless](/tidb-cloud/select-cluster-tier.md#tidb-serverless) clusters from 3 days to 7 days. + + By extending the metrics retention period, now you have access to more historical data. This helps you identify trends and patterns of the cluster for better decision-making and faster troubleshooting. 
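The retention change is easy to picture: a sample that had already expired under the old 3-day policy may still be queryable under the 7-day policy. A minimal sketch, purely illustrative (metric storage is handled by TiDB Cloud):

```python
# Illustrative sketch of the metrics retention window extension
# (3 days -> 7 days) described above.
from datetime import datetime, timedelta, timezone

RETENTION = timedelta(days=7)  # previously timedelta(days=3)

def visible_samples(samples, now):
    """Keep only (timestamp, value) samples inside the retention window."""
    cutoff = now - RETENTION
    return [(ts, v) for ts, v in samples if ts >= cutoff]

now = datetime(2023, 6, 13, tzinfo=timezone.utc)
samples = [
    (now - timedelta(days=1), 0.42),   # kept under both policies
    (now - timedelta(days=5), 0.37),   # kept now, dropped under the old 3-day policy
    (now - timedelta(days=10), 0.99),  # still outside the window
]
print(len(visible_samples(samples, now)))  # 2
```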
+ +**Console changes** + +- Release a new native web infrastructure for the [**Key Visualizer**](/tidb-cloud/tune-performance.md#key-visualizer) page of [TiDB Dedicated](/tidb-cloud/select-cluster-tier.md#tidb-dedicated) clusters. + + With the new infrastructure, you can easily navigate through the **Key Visualizer** page and access the necessary information in a more intuitive and efficient manner. The new infrastructure also resolves many problems on UX, making the SQL diagnosis process more user-friendly. + +## June 6, 2023 + +**General changes** + +- Introduce [Index Insight (beta)](/tidb-cloud/index-insight.md) for [TiDB Dedicated](/tidb-cloud/select-cluster-tier.md#tidb-dedicated) clusters, which optimizes query performance by providing index recommendations for slow queries. + + With Index Insight, you can improve the overall application performance and efficiency of your database operations in the following ways: + + - Enhanced query performance: Index Insight identifies slow queries and suggests appropriate indexes for them, thereby speeding up query execution, reducing response time, and improving user experience. + - Cost efficiency: By using Index Insight to optimize query performance, the need for extra computing resources is reduced, enabling you to use existing infrastructure more effectively. This can potentially lead to operational cost savings. + - Simplified optimization process: Index Insight simplifies the identification and implementation of index improvements, eliminating the need for manual analysis and guesswork. As a result, you can save time and effort with accurate index recommendations. + - Improved application efficiency: By using Index Insight to optimize database performance, applications running on TiDB Cloud can handle larger workloads and serve more users concurrently, which makes scaling operations of applications more efficient. 
+ + To use Index Insight, navigate to the **Diagnosis** page of your TiDB Dedicated cluster and click the **Index Insight BETA** tab. + + For more information, see [Use Index Insight (beta)](/tidb-cloud/index-insight.md). + +- Introduce [TiDB Playground](https://play.tidbcloud.com/?utm_source=docs&utm_medium=tidb_cloud_release_notes), an interactive platform for experiencing the full capabilities of TiDB, without registration or installation. + + TiDB Playground is an interactive platform designed to provide a one-stop-shop experience for exploring the capabilities of TiDB, such as scalability, MySQL compatibility, and real-time analytics. + + With TiDB Playground, you can try out TiDB features in real time in a controlled environment free from complex configurations, making it an ideal way to understand the features in TiDB. + + To get started with TiDB Playground, go to the [**TiDB Playground**](https://play.tidbcloud.com/?utm_source=docs&utm_medium=tidb_cloud_release_notes) page, select a feature you want to explore, and begin your exploration. + +## June 5, 2023 + +**General changes** + +- Support connecting your [Data App](/tidb-cloud/tidb-cloud-glossary.md#data-app) to GitHub. + + By [connecting your Data App to GitHub](/tidb-cloud/data-service-manage-github-connection.md), you can manage all configurations of the Data App as [code files](/tidb-cloud/data-service-app-config-files.md) on GitHub, which integrates TiDB Cloud Data Service seamlessly with your system architecture and DevOps process. + + With this feature, you can easily accomplish the following tasks, which improves the CI/CD experience of developing Data Apps: + + - Automatically deploy Data App changes with GitHub. + - Configure CI/CD pipelines of your Data App changes on GitHub with version control. + - Disconnect from a connected GitHub repository. + - Review endpoint changes before the deployment. + - View deployment history and take necessary actions in the event of a failure.
+ - Re-deploy a commit to roll back to an earlier deployment. + + For more information, see [Deploy Data App automatically with GitHub](/tidb-cloud/data-service-manage-github-connection.md). + +## June 2, 2023 + +**General changes** + +- In our pursuit to simplify and clarify, we have updated the names of our products: + + - "TiDB Cloud Serverless Tier" is now called "TiDB Serverless". + - "TiDB Cloud Dedicated Tier" is now called "TiDB Dedicated". + - "TiDB On-Premises" is now called "TiDB Self-Hosted". + + Enjoy the same great performance under these refreshed names. Your experience is our priority. + +## May 30, 2023 + +**General changes** + +- Enhance support for incremental data migration for the Data Migration feature in TiDB Cloud. + + You can now specify a binlog position or a global transaction identifier (GTID) to replicate only incremental data generated after the specified position to TiDB Cloud. This enhancement empowers you with greater flexibility to select and replicate the data you need, aligning with your specific requirements. + + For details, refer to [Migrate Only Incremental Data from MySQL-Compatible Databases to TiDB Cloud Using Data Migration](/tidb-cloud/migrate-incremental-data-from-mysql-using-data-migration.md). + +- Add a new event type (`ImportData`) to the [**Events**](/tidb-cloud/tidb-cloud-events.md) page. + +- Remove **Playground** from the TiDB Cloud console. + + Stay tuned for the new standalone Playground with an optimized experience. + +## May 23, 2023 + +**General changes** + +- When uploading a CSV file to TiDB, you can use not only English letters and numbers, but also characters such as Chinese and Japanese to define column names. However, for special characters, only underscore (`_`) is supported. + + For details, refer to [Import Local Files to TiDB Cloud](/tidb-cloud/tidb-cloud-import-local-files.md). 
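The CSV column-name rule above (letters, including CJK characters, plus digits, with underscore as the only permitted special character) maps neatly onto Unicode word characters. Here is a sketch of a client-side pre-check; this is not TiDB Cloud's actual validator:

```python
# Illustrative sketch of the column-name rule: Unicode letters, digits,
# and underscore are allowed; other special characters are not.
import re

def is_valid_column_name(name):
    # In Python 3, \w matches Unicode letters, digits, and underscore.
    return re.fullmatch(r"\w+", name) is not None

print(is_valid_column_name("user_id"))    # True
print(is_valid_column_name("用户名"))      # True
print(is_valid_column_name("price-usd"))  # False: hyphen is not allowed
```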
+ +## May 16, 2023 + +**Console changes** + +- Introduce the left navigation entries organized by functional categories for both Dedicated and Serverless tiers. + + The new navigation makes it easier and more intuitive for you to discover the feature entries. To view the new navigation, access the overview page of your cluster. + +- Release a new native web infrastructure for the following two tabs on the **Diagnosis** page of Dedicated Tier clusters. + + - [Slow Query](/tidb-cloud/tune-performance.md#slow-query) + - [SQL Statement](/tidb-cloud/tune-performance.md#statement-analysis) + + With the new infrastructure, you can easily navigate through the two tabs and access the necessary information in a more intuitive and efficient manner. The new infrastructure also improves user experience, making the SQL diagnosis process more user-friendly. + +## May 9, 2023 + +**General changes** + +- Support changing node sizes for GCP-hosted clusters created after April 26, 2023. + + With this feature, you can upgrade to higher-performance nodes for increased demand or downgrade to lower-performance nodes for cost savings. With this added flexibility, you can adjust your cluster's capacity to align with your workloads and optimize costs. + + For detailed steps, see [Change node size](/tidb-cloud/scale-tidb-cluster.md#change-vcpu-and-ram). + +- Support importing compressed files. You can import CSV and SQL files in the following formats: `.gzip`, `.gz`, `.zstd`, `.zst`, and `.snappy`. This feature provides a more efficient and cost-effective way to import data and reduces your data transfer costs. + + For more information, see [Import CSV Files from Amazon S3 or GCS into TiDB Cloud](/tidb-cloud/import-csv-files.md) and [Import Sample Data](/tidb-cloud/import-sample-data.md). + +- Support AWS PrivateLink-powered endpoint connection as a new network access management option for TiDB Cloud [Serverless Tier](/tidb-cloud/select-cluster-tier.md#tidb-serverless) clusters. 
+ + The private endpoint connection does not expose your data to the public internet. In addition, the endpoint connection supports CIDR overlap and is easier for network management. + + For more information, see [Set Up Private Endpoint Connections](/tidb-cloud/set-up-private-endpoint-connections.md). + +**Console changes** + +- Add new event types to the [**Event**](/tidb-cloud/tidb-cloud-events.md) page to record backup, restore, and changefeed actions for [Dedicated Tier](/tidb-cloud/select-cluster-tier.md#tidb-dedicated) clusters. + + To get a full list of the events that can be recorded, see [Logged events](/tidb-cloud/tidb-cloud-events.md#logged-events). + +- Introduce the **SQL Statement** tab on the [**SQL Diagnosis**](/tidb-cloud/tune-performance.md) page for [Serverless Tier](/tidb-cloud/select-cluster-tier.md#tidb-serverless) clusters. + + The **SQL Statement** tab provides the following: + + - A comprehensive overview of all SQL statements executed by your TiDB database, allowing you to easily identify and diagnose slow queries. + - Detailed information on each SQL statement, such as the query time, execution plan, and the database server response, helping you optimize your database performance. + - A user-friendly interface that makes it easy to sort, filter, and search through large amounts of data, enabling you to focus on the most critical queries. + + For more information, see [Statement Analysis](/tidb-cloud/tune-performance.md#statement-analysis). + +## May 6, 2023 + +**General changes** + +- Support directly accessing the [Data Service endpoint](/tidb-cloud/tidb-cloud-glossary.md#endpoint) in the region where a TiDB [Serverless Tier](/tidb-cloud/select-cluster-tier.md#tidb-serverless) cluster is located. + + For newly created Serverless Tier clusters, the endpoint URL now includes the cluster region information. 
By requesting the regional domain `<region>.data.tidbcloud.com`, you can directly access the endpoint in the region where the TiDB cluster is located. + + Alternatively, you can also request the global domain `data.tidbcloud.com` without specifying a region. In this way, TiDB Cloud will internally redirect the request to the target region, but this might result in additional latency. If you choose this way, make sure to add the `--location-trusted` option to your curl command when calling an endpoint. + + For more information, see [Call an endpoint](/tidb-cloud/data-service-manage-endpoint.md#call-an-endpoint). + +## April 25, 2023 + +**General changes** + +- For the first five [Serverless Tier](/tidb-cloud/select-cluster-tier.md#tidb-serverless) clusters in your organization, TiDB Cloud provides a free usage quota for each of them as follows: + + - Row storage: 5 GiB + - [Request Units (RUs)](/tidb-cloud/tidb-cloud-glossary.md#request-unit): 50 million RUs per month + + Until May 31, 2023, Serverless Tier clusters remain free, with a 100% discount. After that, usage beyond the free quota will be charged. + + You can easily [monitor your cluster usage or increase your usage quota](/tidb-cloud/manage-serverless-spend-limit.md#manage-spending-limit-for-tidb-serverless-scalable-clusters) in the **Usage This Month** area of your cluster **Overview** page. Once the free quota of a cluster is reached, the read and write operations on this cluster will be throttled until you increase the quota or the usage is reset upon the start of a new month. + + For more information about the RU consumption of different resources (including read, write, SQL CPU, and network egress), the pricing details, and the throttling behavior, see [TiDB Cloud Serverless Tier Pricing Details](https://www.pingcap.com/tidb-cloud-serverless-pricing-details). + +- Support backup and restore for TiDB Cloud [Serverless Tier](/tidb-cloud/select-cluster-tier.md#tidb-serverless) clusters.
+ + For more information, see [Back up and Restore TiDB Cluster Data](/tidb-cloud/backup-and-restore-serverless.md). + +- Upgrade the default TiDB version of new [Dedicated Tier](/tidb-cloud/select-cluster-tier.md#tidb-dedicated) clusters from [v6.5.1](https://docs.pingcap.com/tidb/v6.5/release-6.5.1) to [v6.5.2](https://docs.pingcap.com/tidb/v6.5/release-6.5.2). + +- Provide a maintenance window feature to enable you to easily schedule and manage planned maintenance activities for [Dedicated Tier](/tidb-cloud/select-cluster-tier.md#tidb-dedicated) clusters. + + A maintenance window is a designated timeframe during which planned maintenance activities, such as operating system updates, security patches, and infrastructure upgrades, are performed automatically to ensure the reliability, security, and performance of the TiDB Cloud service. + + During a maintenance window, temporary connection disruptions or QPS fluctuations might occur, but the clusters remain available, and SQL operations, the existing data import, backup, restore, migration, and replication tasks can still run normally. See [a list of allowed and disallowed operations](/tidb-cloud/configure-maintenance-window.md#allowed-and-disallowed-operations-during-a-maintenance-window) during maintenance. + + We will strive to minimize the frequency of maintenance. If a maintenance window is planned, the default start time is 03:00 Wednesday (based on the time zone of your TiDB Cloud organization) of the target week. To avoid potential disruptions, it is important to be aware of the maintenance schedules and plan your operations accordingly. + + - To keep you informed, TiDB Cloud will send you three email notifications for every maintenance window: one before, one starting, and one after the maintenance tasks. + - To minimize the maintenance impact, you can modify the maintenance start time to your preferred time or defer maintenance activities on the **Maintenance** page. 
+ + For more information, see [Configure maintenance window](/tidb-cloud/configure-maintenance-window.md). + +- Improve load balancing of TiDB and reduce connection drops when you scale TiDB nodes of [Dedicated Tier](/tidb-cloud/select-cluster-tier.md#tidb-dedicated) clusters that are hosted on AWS and created after April 25, 2023. + + - Support automatically migrating existing connections to new TiDB nodes when you scale out TiDB nodes. + - Support automatically migrating existing connections to available TiDB nodes when you scale in TiDB nodes. + + Currently, this feature is provided for all Dedicated Tier clusters hosted on AWS. + +**Console changes** + +- Release a new native web infrastructure for the [Monitoring](/tidb-cloud/built-in-monitoring.md#view-the-metrics-page) page of [Dedicated Tier](/tidb-cloud/select-cluster-tier.md#tidb-dedicated) clusters. + + With the new infrastructure, you can easily navigate through the [Monitoring](/tidb-cloud/built-in-monitoring.md#view-the-metrics-page) page and access the necessary information in a more intuitive and efficient manner. The new infrastructure also resolves many problems on UX, making the monitoring process more user-friendly. + +## April 18, 2023 + +**General changes** + +- Support scaling up or down [Data Migration job specifications](/tidb-cloud/tidb-cloud-billing-dm.md#specifications-for-data-migration) for [Dedicated Tier](/tidb-cloud/select-cluster-tier.md#tidb-dedicated) clusters. + + With this feature, you can improve migration performance by scaling up specifications or reduce costs by scaling down specifications. + + For more information, see [Migrate MySQL-Compatible Databases to TiDB Cloud Using Data Migration](/tidb-cloud/migrate-from-mysql-using-data-migration.md#scale-a-migration-job-specification). 
+ +**Console changes** + +- Revamp the UI to make the [cluster creation](https://tidbcloud.com/console/clusters/create-cluster) experience more user-friendly, enabling you to create and configure clusters with just a few clicks. + + The new design focuses on simplicity, reducing visual clutter, and providing clear instructions. After clicking **Create** on the cluster creation page, you will be directed to the cluster overview page without having to wait for the cluster creation to be completed. + + For more information, see [Create a cluster](/tidb-cloud/create-tidb-cluster.md). + +- Introduce the **Discounts** tab on the **Billing** page to show the discount information for organization owners and billing administrators. + + For more information, see [Discounts](/tidb-cloud/tidb-cloud-billing.md#discounts). + +## April 11, 2023 + +**General changes** + +- Improve the load balancing of TiDB and reduce connection drops when you scale TiDB nodes of [Dedicated Tier](/tidb-cloud/select-cluster-tier.md#tidb-dedicated) clusters hosted on AWS. + + - Support automatically migrating existing connections to new TiDB nodes when you scale out TiDB nodes. + - Support automatically migrating existing connections to available TiDB nodes when you scale in TiDB nodes. + + Currently, this feature is only provided for Dedicated Tier clusters that are hosted in the AWS `Oregon (us-west-2)` region. + +- Support the [New Relic](https://newrelic.com/) integration for [Dedicated Tier](/tidb-cloud/select-cluster-tier.md#tidb-dedicated) clusters. + + With the New Relic integration, you can configure TiDB Cloud to send metric data of your TiDB clusters to [New Relic](https://newrelic.com/). Then, you can monitor and analyze both your application performance and your TiDB database performance on [New Relic](https://newrelic.com/). This feature can help you quickly identify and troubleshoot potential issues and reduce the resolution time.
+ + For integration steps and available metrics, see [Integrate TiDB Cloud with New Relic](/tidb-cloud/monitor-new-relic-integration.md). + +- Add the following [changefeed](/tidb-cloud/changefeed-overview.md) metrics to the Prometheus integration for Dedicated Tier clusters. + + - `tidbcloud_changefeed_latency` + - `tidbcloud_changefeed_replica_rows` + + If you have [integrated TiDB Cloud with Prometheus](/tidb-cloud/monitor-prometheus-and-grafana-integration.md), you can monitor the performance and health of changefeeds in real time using these metrics. Additionally, you can easily create alerts to monitor the metrics using Prometheus. + +**Console changes** + +- Update the [Monitoring](/tidb-cloud/built-in-monitoring.md#view-the-metrics-page) page for [Dedicated Tier](/tidb-cloud/select-cluster-tier.md#tidb-dedicated) clusters to use [node-level resource metrics](/tidb-cloud/built-in-monitoring.md#server). + + With node-level resource metrics, you can see a more accurate representation of resource consumption to better understand the actual usage of purchased services. + + To access these metrics, navigate to the [Monitoring](/tidb-cloud/built-in-monitoring.md#view-the-metrics-page) page of your cluster, and then check the **Server** category under the **Metrics** tab. + +- Optimize the [Billing](/tidb-cloud/tidb-cloud-billing.md#billing-details) page by reorganizing the billing items in **Summary by Project** and **Summary by Service**, which makes the billing information clearer. + +## April 4, 2023 + +**General changes** + +- Remove the following two alerts from [TiDB Cloud built-in alerts](/tidb-cloud/monitor-built-in-alerting.md#tidb-cloud-built-in-alert-conditions) to prevent false positives. This is because temporary offline or out-of-memory (OOM) issues on one of the nodes do not significantly affect the overall health of a cluster. + + - At least one TiDB node in the cluster has run out of memory. + - One or more cluster nodes are offline. 
+ +**Console changes** + +- Introduce the [Alerts](/tidb-cloud/monitor-built-in-alerting.md) page for [Dedicated Tier](/tidb-cloud/select-cluster-tier.md#tidb-dedicated) clusters, which lists both active and closed alerts for each Dedicated Tier cluster. + + The **Alerts** page provides the following: + + - An intuitive, user-friendly interface. You can view alerts for your clusters on this page even if you have not subscribed to the alert notification emails. + - Advanced filtering options to help you quickly find and sort alerts based on their severity, status, and other attributes. It also allows you to view the historical data for the last 7 days, which makes it easier to track alert history. + - The **Edit Rule** feature. You can customize alert rule settings to meet your cluster's specific needs. + + For more information, see [TiDB Cloud built-in alerts](/tidb-cloud/monitor-built-in-alerting.md). + +- Consolidate the help-related information and actions of TiDB Cloud into a single place. + + Now, you can get all the [TiDB Cloud help information](/tidb-cloud/tidb-cloud-support.md) and contact support by clicking **?** in the lower-right corner of the [TiDB Cloud console](https://tidbcloud.com/). + +- Introduce the [Getting Started](https://tidbcloud.com/console/getting-started) page to help you learn about TiDB Cloud. + + The **Getting Started** page provides you with interactive tutorials, essential guides, and useful links. By following interactive tutorials, you can easily explore TiDB Cloud features and HTAP capabilities with pre-built industry-specific datasets (Steam Game Dataset and S&P 500 Dataset). + + To access the **Getting Started** page, click **Getting Started** in the left navigation bar of the [TiDB Cloud console](https://tidbcloud.com/). On this page, you can click **Query Sample Dataset** to open the interactive tutorials or click other links to explore TiDB Cloud. 
Alternatively, you can click **?** in the lower-right corner and click **Interactive Tutorials**. + +## March 29, 2023 + +**General changes** + +- [Data Service (beta)](/tidb-cloud/data-service-overview.md) supports more fine-grained access control for Data Apps. + + On the Data App details page, you can now link clusters to your Data App and specify the role for each API key. The role controls whether the API key can read or write data to the linked clusters and can be set to `ReadOnly` or `ReadAndWrite`. This feature provides cluster-level and permission-level access control for Data Apps, giving you more flexibility to control the access scope according to your business needs. + + For more information, see [Manage linked clusters](/tidb-cloud/data-service-manage-data-app.md#manage-linked-data-sources) and [Manage API keys](/tidb-cloud/data-service-api-key.md). + +## March 28, 2023 + +**General changes** + +- Add 2 RCU, 4 RCU, and 8 RCU specifications for [changefeeds](/tidb-cloud/changefeed-overview.md), and support choosing your desired specification when you [create a changefeed](/tidb-cloud/changefeed-overview.md#create-a-changefeed). + + Using these new specifications, the data replication costs can be reduced by up to 87.5% compared to scenarios where 16 RCUs were previously required. + +- Support scaling specifications up or down for [changefeeds](/tidb-cloud/changefeed-overview.md) created after March 28, 2023. + + You can improve replication performance by choosing a higher specification or reduce replication costs by choosing a lower specification. + + For more information, see [Scale a changefeed](/tidb-cloud/changefeed-overview.md#scale-a-changefeed). + +- Support replicating incremental data in real time from a [Dedicated Tier](/tidb-cloud/select-cluster-tier.md#tidb-dedicated) cluster in AWS to a [Serverless Tier](/tidb-cloud/select-cluster-tier.md#tidb-serverless) cluster in the same project and same region. 
+ + For more information, see [Sink to TiDB Cloud](/tidb-cloud/changefeed-sink-to-tidb-cloud.md). + +- Support two new GCP regions for the [Data Migration](/tidb-cloud/migrate-from-mysql-using-data-migration.md) feature of [Dedicated Tier](/tidb-cloud/select-cluster-tier.md#tidb-dedicated) clusters: `Singapore (asia-southeast1)` and `Oregon (us-west1)`. + + With these new regions, you have more options for migrating your data to TiDB Cloud. If your upstream data is stored in or near these regions, you can now take advantage of faster and more reliable data migration from GCP to TiDB Cloud. + + For more information, see [Migrate MySQL-compatible databases to TiDB Cloud using Data Migration](/tidb-cloud/migrate-from-mysql-using-data-migration.md). + +**Console changes** + +- Release a new native web infrastructure for the [Slow Query](/tidb-cloud/tune-performance.md#slow-query) page of [Serverless Tier](/tidb-cloud/select-cluster-tier.md#tidb-serverless) clusters. + + With this new infrastructure, you can easily navigate through the [Slow Query](/tidb-cloud/tune-performance.md#slow-query) page and access the necessary information in a more intuitive and efficient manner. The new infrastructure also resolves many UX issues, making the SQL diagnosis process more user-friendly. + +## March 21, 2023 + +**General changes** + +- Introduce [Data Service (beta)](https://tidbcloud.com/console/data-service) for [Serverless Tier](/tidb-cloud/select-cluster-tier.md#tidb-serverless) clusters, which enables you to access data via an HTTPS request using a custom API endpoint. + + With Data Service, you can seamlessly integrate TiDB Cloud with any application or service that is compatible with HTTPS. The following are some common scenarios: + + - Access the database of your TiDB cluster directly from a mobile or web application. + - Use serverless edge functions to call endpoints and avoid scalability issues caused by database connection pooling. 
+ - Integrate TiDB Cloud with data visualization projects by using Data Service as a data source. + - Connect to your database from an environment that does not support the MySQL protocol. + + In addition, TiDB Cloud provides the [Chat2Query API](/tidb-cloud/use-chat2query-api.md), a RESTful interface that allows you to generate and execute SQL statements using AI. + + To access Data Service, navigate to the [**Data Service**](https://tidbcloud.com/console/data-service) page in the left navigation pane. For more information, see the following documentation: + + - [Data Service Overview](/tidb-cloud/data-service-overview.md) + - [Get Started with Data Service](/tidb-cloud/data-service-get-started.md) + - [Get Started with Chat2Query API](/tidb-cloud/use-chat2query-api.md) + +- Support decreasing the size of TiDB, TiKV, and TiFlash nodes to scale in a [Dedicated Tier](/tidb-cloud/select-cluster-tier.md#tidb-dedicated) cluster that is hosted on AWS and created after December 31, 2022. + + You can decrease the node size [via the TiDB Cloud console](/tidb-cloud/scale-tidb-cluster.md#change-vcpu-and-ram) or [via the TiDB Cloud API (beta)](https://docs.pingcap.com/tidbcloud/api/v1beta#tag/Cluster/operation/UpdateCluster). + +- Support a new GCP region for the [Data Migration](/tidb-cloud/migrate-from-mysql-using-data-migration.md) feature of [Dedicated Tier](/tidb-cloud/select-cluster-tier.md#tidb-dedicated) clusters: `Tokyo (asia-northeast1)`. + + The feature can help you migrate data from MySQL-compatible databases in Google Cloud Platform (GCP) to your TiDB cluster easily and efficiently. + + For more information, see [Migrate MySQL-compatible databases to TiDB Cloud using Data Migration](/tidb-cloud/migrate-from-mysql-using-data-migration.md). + +**Console changes** + +- Introduce the **Events** page for [Dedicated Tier](/tidb-cloud/select-cluster-tier.md#tidb-dedicated) clusters, which records the main changes to your cluster. 
+ + On this page, you can view the event history for the last 7 days and track important details such as the trigger time and the user who initiated an action. For example, you can view events such as when a cluster was paused or who modified the cluster size. + + For more information, see [TiDB Cloud cluster events](/tidb-cloud/tidb-cloud-events.md). + +- Add the **Database Status** tab to the **Monitoring** page for [Serverless Tier](/tidb-cloud/select-cluster-tier.md#tidb-serverless) clusters, which displays the following database-level metrics: + + - QPS Per DB + - Average Query Duration Per DB + - Failed Queries Per DB + + With these metrics, you can monitor the performance of individual databases, make data-driven decisions, and take action to improve the performance of your applications. + + For more information, see [Monitoring metrics for Serverless Tier clusters](/tidb-cloud/built-in-monitoring.md). + +## March 14, 2023 + +**General changes** + +- Upgrade the default TiDB version of new [Dedicated Tier](/tidb-cloud/select-cluster-tier.md#tidb-dedicated) clusters from [v6.5.0](https://docs.pingcap.com/tidb/v6.5/release-6.5.0) to [v6.5.1](https://docs.pingcap.com/tidb/v6.5/release-6.5.1). + +- Support modifying column names of the target table to be created by TiDB Cloud when uploading a local CSV file with a header row. + + When importing a local CSV file with a header row to a [Serverless Tier](/tidb-cloud/select-cluster-tier.md#tidb-serverless) cluster, if you need TiDB Cloud to create the target table and the column names in the header row do not follow the TiDB Cloud column naming conventions, you will see a warning icon next to the corresponding column name. To resolve the warning, you can hover over the icon and follow the message to edit the existing column names or enter new column names. + + For information about column naming conventions, see [Import local files](/tidb-cloud/tidb-cloud-import-local-files.md#import-local-files). 
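If you prefer to fix non-conforming header names before uploading, you can rewrite the header row locally with a short script. The sketch below is illustrative only: the normalization rule it applies (ASCII letters, digits, and underscores, not starting with a digit) is an assumption for demonstration, not the exact TiDB Cloud column naming convention, which is described in [Import local files](/tidb-cloud/tidb-cloud-import-local-files.md#import-local-files).

```ts
// Sketch: normalize CSV header names before upload.
// The naming rule below is an assumed convention for illustration only.
export function normalizeColumnName(name: string): string {
  // Replace characters outside [A-Za-z0-9_] with underscores.
  let normalized = name.trim().replace(/[^A-Za-z0-9_]/g, '_');
  // Prefix names that start with a digit.
  if (/^[0-9]/.test(normalized)) {
    normalized = '_' + normalized;
  }
  return normalized;
}

// Rewrite the header row (first line) of a CSV string.
export function normalizeCsvHeader(csv: string): string {
  const [header, ...rows] = csv.split('\n');
  const fixed = header.split(',').map(normalizeColumnName).join(',');
  return [fixed, ...rows].join('\n');
}
```

For example, a header row `Full Name,Phone #` is rewritten as `Full_Name,Phone__`.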
+ +## March 7, 2023 + +**General changes** + +- Upgrade the default TiDB version of all [Serverless Tier](/tidb-cloud/select-cluster-tier.md#tidb-serverless) clusters from [v6.4.0](https://docs.pingcap.com/tidb/v6.4/release-6.4.0) to [v6.6.0](https://docs.pingcap.com/tidb/v6.6/release-6.6.0). + +## February 28, 2023 + +**General changes** + +- Add the [SQL Diagnosis](/tidb-cloud/tune-performance.md) feature for [Serverless Tier](/tidb-cloud/select-cluster-tier.md#tidb-serverless) clusters. + + With SQL Diagnosis, you can gain deep insights into SQL-related runtime status, which makes SQL performance tuning more efficient. Currently, the SQL Diagnosis feature for Serverless Tier only provides slow query data. + + To use SQL Diagnosis, click **SQL Diagnosis** on the left navigation bar of your Serverless Tier cluster page. + +**Console changes** + +- Optimize the left navigation. + + You can navigate pages more efficiently, for example: + + - You can hover over the upper-left corner to quickly switch between clusters or projects. + - You can switch between the **Clusters** page and the **Admin** page. + +**API changes** + +- Release several TiDB Cloud API endpoints for data import: + + - List all import tasks + - Get an import task + - Create an import task + - Update an import task + - Upload a local file for an import task + - Preview data before starting an import task + - Get the role information for import tasks + + For more information, refer to the [API documentation](https://docs.pingcap.com/tidbcloud/api/v1beta#tag/Import). + +## February 22, 2023 + +**General changes** + +- Support using the [console audit logging](/tidb-cloud/tidb-cloud-console-auditing.md) feature to track various activities performed by members within your organization in the [TiDB Cloud console](https://tidbcloud.com/). + + The console audit logging feature is only visible to users with the `Owner` or `Audit Admin` role and is disabled by default. 
To enable it, click **Organization** > **Console Audit Logging** in the upper-right corner of the [TiDB Cloud console](https://tidbcloud.com/). + + By analyzing console audit logs, you can identify suspicious operations performed within your organization, thereby improving the security of your organization's resources and data. + + For more information, see [Console audit logging](/tidb-cloud/tidb-cloud-console-auditing.md). + +**CLI changes** + +- Add a new command `ticloud cluster connect-info` for [TiDB Cloud CLI](/tidb-cloud/cli-reference.md). + + `ticloud cluster connect-info` is a command that allows you to get the connection string of a cluster. To use this command, [update `ticloud`](/tidb-cloud/ticloud-update.md) to v0.3.2 or a later version. + +## February 21, 2023 + +**General changes** + +- Support using the AWS access keys of an IAM user to access your Amazon S3 bucket when importing data to TiDB Cloud. + + This method is simpler than using a Role ARN. For more information, refer to [Configure Amazon S3 access](/tidb-cloud/config-s3-and-gcs-access.md#configure-amazon-s3-access). + +- Extend the [monitoring metrics retention period](/tidb-cloud/built-in-monitoring.md#metrics-retention-policy) from 2 days to a longer period: + + - For Dedicated Tier clusters, you can view metrics data for the past 7 days. + - For Serverless Tier clusters, you can view metrics data for the past 3 days. + + By extending the metrics retention period, you now have access to more historical data. This helps you identify trends and patterns in your cluster for better decision-making and faster troubleshooting. + +**Console changes** + +- Release a new native web infrastructure on the Monitoring page of [Serverless Tier](/tidb-cloud/select-cluster-tier.md#tidb-serverless) clusters. + + With the new infrastructure, you can easily navigate through the Monitoring page and access the necessary information in a more intuitive and efficient manner. 
The new infrastructure also resolves many UX issues, making the monitoring process a lot more user-friendly. + +## February 17, 2023 + +**CLI changes** + +- Add a new command [`ticloud connect`](/tidb-cloud/ticloud-serverless-shell.md) for [TiDB Cloud CLI](/tidb-cloud/cli-reference.md). + + `ticloud connect` is a command that allows you to connect to your TiDB Cloud cluster from your local machine without installing any SQL clients. After connecting to your TiDB Cloud cluster, you can execute SQL statements in the TiDB Cloud CLI. + +## February 14, 2023 + +**General changes** + +- Support decreasing the number of TiKV and TiFlash nodes to scale in a TiDB [Dedicated Tier](/tidb-cloud/select-cluster-tier.md#tidb-dedicated) cluster. + + You can decrease the node number [via the TiDB Cloud console](/tidb-cloud/scale-tidb-cluster.md#change-node-number) or [via the TiDB Cloud API (beta)](https://docs.pingcap.com/tidbcloud/api/v1beta#tag/Cluster/operation/UpdateCluster). + +**Console changes** + +- Introduce the **Monitoring** page for [Serverless Tier](/tidb-cloud/select-cluster-tier.md#tidb-serverless) clusters. + + The **Monitoring** page provides a range of metrics and data, such as the number of SQL statements executed per second, the average duration of queries, and the number of failed queries, which helps you better understand the overall performance of SQL statements in your Serverless Tier cluster. + + For more information, see [TiDB Cloud built-in monitoring](/tidb-cloud/built-in-monitoring.md). + +## February 2, 2023 + +**CLI changes** + +- Introduce the TiDB Cloud CLI client [`ticloud`](/tidb-cloud/cli-reference.md). + + Using `ticloud`, you can easily manage your TiDB Cloud resources from a terminal or in automated workflows with a few commands. For GitHub Actions in particular, we provide [`setup-tidbcloud-cli`](https://github.com/marketplace/actions/set-up-tidbcloud-cli) for you to easily set up `ticloud`. 
+ + For more information, see [TiDB Cloud CLI Quick Start](/tidb-cloud/get-started-with-cli.md) and [TiDB Cloud CLI Reference](/tidb-cloud/cli-reference.md). + +## January 18, 2023 + +**General changes** + +* Support [signing up](https://tidbcloud.com/free-trial) for TiDB Cloud with a Microsoft account. + +## January 17, 2023 + +**General changes** + +- Upgrade the default TiDB version of new [Dedicated Tier](/tidb-cloud/select-cluster-tier.md#tidb-dedicated) clusters from [v6.1.3](https://docs.pingcap.com/tidb/stable/release-6.1.3) to [v6.5.0](https://docs.pingcap.com/tidb/stable/release-6.5.0). + +- For new sign-up users, TiDB Cloud will automatically create a free [Serverless Tier](/tidb-cloud/select-cluster-tier.md#tidb-serverless) cluster so that you can quickly start a data exploration journey with TiDB Cloud. + +- Support a new AWS region for [Dedicated Tier](/tidb-cloud/select-cluster-tier.md#tidb-dedicated) clusters: `Seoul (ap-northeast-2)`. + + The following features are enabled for this region: + + - [Migrate MySQL-compatible databases to TiDB Cloud using Data Migration](/tidb-cloud/migrate-from-mysql-using-data-migration.md) + - [Stream data from TiDB Cloud to other data services using changefeed](/tidb-cloud/changefeed-overview.md) + - [Back up and restore TiDB cluster data](/tidb-cloud/backup-and-restore.md) + +## January 10, 2023 + +**General changes** + +- Optimize the feature of importing data from local CSV files to TiDB to improve the user experience for [Serverless Tier](/tidb-cloud/select-cluster-tier.md#tidb-serverless) clusters. + + - To upload a CSV file, you can now simply drag and drop it to the upload area on the **Import** page. + - When creating an import task, if your target database or table does not exist, you can enter a name to let TiDB Cloud create it for you automatically. For the target table to be created, you can specify a primary key or select multiple fields to form a composite primary key. 
+ - After the import is completed, you can explore your data with [AI-powered Chat2Query](/tidb-cloud/explore-data-with-chat2query.md) by clicking **Explore your data by Chat2Query** or clicking the target table name in the task list. + + For more information, see [Import local files to TiDB Cloud](/tidb-cloud/tidb-cloud-import-local-files.md). + +**Console changes** + +- Add the **Get Support** option for each cluster to simplify the process of requesting support for a specific cluster. + + You can request support for a cluster in either of the following ways: + + - On the [**Clusters**](https://tidbcloud.com/console/clusters) page of your project, click **...** in the row of your cluster and select **Get Support**. + - On your cluster overview page, click **...** in the upper-right corner and select **Get Support**. + +## January 5, 2023 + +**Console changes** + +- Rename SQL Editor (beta) to Chat2Query (beta) for [Serverless Tier](/tidb-cloud/select-cluster-tier.md#tidb-serverless) clusters and support generating SQL queries using AI. + + In Chat2Query, you can either let AI generate SQL queries or write them manually, and then run them against your databases without a terminal. + + To access Chat2Query, go to the [**Clusters**](https://tidbcloud.com/console/clusters) page of your project, click your cluster name, and then click **Chat2Query** in the left navigation pane. + +## January 4, 2023 + +**General changes** + +- Support scaling up TiDB, TiKV, and TiFlash nodes by increasing the **Node Size(vCPU + RAM)** for TiDB Dedicated clusters hosted on AWS and created after December 31, 2022. + + You can increase the node size [using the TiDB Cloud console](/tidb-cloud/scale-tidb-cluster.md#change-vcpu-and-ram) or [using the TiDB Cloud API (beta)](https://docs.pingcap.com/tidbcloud/api/v1beta#tag/Cluster/operation/UpdateCluster). + +- Extend the metrics retention period on the [**Monitoring**](/tidb-cloud/built-in-monitoring.md) page to two days. 
+ + You now have access to metrics data for the last two days, giving you more flexibility and visibility into your cluster performance and trends. + + This improvement comes at no additional cost and can be accessed on the **Diagnosis** tab of the [**Monitoring**](/tidb-cloud/built-in-monitoring.md) page for your cluster. This will help you identify and troubleshoot performance issues and monitor the overall health of your cluster more effectively. + +- Support customizing Grafana dashboard JSON for Prometheus integration. + + If you have [integrated TiDB Cloud with Prometheus](/tidb-cloud/monitor-prometheus-and-grafana-integration.md), you can now import a pre-built Grafana dashboard to monitor TiDB Cloud clusters and customize the dashboard to your needs. This feature enables easy and fast monitoring of your TiDB Cloud clusters and helps you identify performance issues quickly. + + For more information, see [Use Grafana GUI dashboards to visualize the metrics](/tidb-cloud/monitor-prometheus-and-grafana-integration.md#step-3-use-grafana-gui-dashboards-to-visualize-the-metrics). + +- Upgrade the default TiDB version of all [Serverless Tier](/tidb-cloud/select-cluster-tier.md#tidb-serverless) clusters from [v6.3.0](https://docs.pingcap.com/tidb/v6.3/release-6.3.0) to [v6.4.0](https://docs.pingcap.com/tidb/v6.4/release-6.4.0). The cold start issue after upgrading the default TiDB version of Serverless Tier clusters to v6.4.0 has been resolved. + +**Console changes** + +- Simplify the display of the [**Clusters**](https://tidbcloud.com/console/clusters) page and the cluster overview page. + + - You can click the cluster name on the [**Clusters**](https://tidbcloud.com/console/clusters) page to enter the cluster overview page and start operating the cluster. + - Remove the **Connection** and **Import** panes from the cluster overview page. 
You can click **Connect** in the upper-right corner to get the connection information and click **Import** in the left navigation pane to import data. diff --git a/tidb-cloud/secure-connections-to-serverless-clusters.md b/tidb-cloud/secure-connections-to-serverless-clusters.md index 6da4afd53739b..78bd39153b71d 100644 --- a/tidb-cloud/secure-connections-to-serverless-clusters.md +++ b/tidb-cloud/secure-connections-to-serverless-clusters.md @@ -49,7 +49,7 @@ If you are using a GUI client, such as DBeaver, which does not accept a certific ### Root certificate default path -In different operating systems, the default storage paths of the root certificate are as follows: +In different operating systems, the default storage paths of the root certificate are as follows: **MacOS** diff --git a/tidb-cloud/select-cluster-tier.md b/tidb-cloud/select-cluster-tier.md index 607cfa002a3dd..1bac55a241245 100644 --- a/tidb-cloud/select-cluster-tier.md +++ b/tidb-cloud/select-cluster-tier.md @@ -18,18 +18,40 @@ TiDB Cloud provides the following two options of cluster tiers. Before creating TiDB Serverless is a fully managed, multi-tenant TiDB offering. It delivers an instant, autoscaling MySQL-compatible database and offers a generous free tier and consumption based billing once free limits are exceeded. +### Cluster plans + +TiDB Serverless offers two service plans to meet different user requirements. Whether you are just getting started or scaling to meet the increasing application demands, these service plans provide the flexibility and capability you need. + +#### Free cluster plan + +The free cluster plan is ideal for those who are getting started with TiDB Serverless. It provides developers and small teams with the following essential features: + +- **No cost**: This plan is completely free, with no credit card required to get started. +- **Storage**: Provides an initial 5 GiB of row-based storage and 5 GiB of columnar storage. 
+- **Request Units**: Includes 50 million [Request Units (RUs)](/tidb-cloud/tidb-cloud-glossary.md#request-unit) per month for database operations. +- **Easy upgrade**: Offers a smooth transition to the [scalable cluster plan](#scalable-cluster-plan) as your needs grow. + +#### Scalable cluster plan + +For applications with growing workloads that need real-time scalability, the scalable cluster plan provides the flexibility and performance to keep pace with your business growth, with the following features: + +- **Enhanced capabilities**: Includes all capabilities of the free cluster plan, along with the capacity to handle larger and more complex workloads, as well as advanced security features. +- **Automatic scaling**: Automatically adjusts storage and computing resources to efficiently meet changing workload demands. +- **Predictable pricing**: Although this plan requires a credit card, you are only charged for the resources you use, ensuring cost-effective scalability. + +### Usage quota -For each organization in TiDB Cloud, you can create a maximum of five TiDB Serverless clusters by default. To create more TiDB Serverless clusters, you need to add a credit card and set a [spending limit](/tidb-cloud/tidb-cloud-glossary.md#spending-limit) for the usage. +For each organization in TiDB Cloud, you can create a maximum of five [free clusters](#free-cluster-plan) by default. To create more TiDB Serverless clusters, you need to add a credit card and create [scalable clusters](#scalable-cluster-plan). 
-For the first five TiDB Serverless clusters in your organization, TiDB Cloud provides a free usage quota for each of them as follows: +For the first five TiDB Serverless clusters in your organization, whether they are free or scalable, TiDB Cloud provides a free usage quota for each of them as follows: - Row-based storage: 5 GiB -- [Request Units (RUs)](/tidb-cloud/tidb-cloud-glossary.md#request-unit): 50 million RUs per month +- Columnar storage: 5 GiB +- Request Units (RUs): 50 million RUs per month A Request Unit (RU) is a unit of measure used to represent the amount of resources consumed by a single request to the database. The amount of RUs consumed by a request depends on various factors, such as the operation type or the amount of data being retrieved or modified. -Once the free quota of a cluster is reached, the read and write operations on this cluster will be throttled until you [increase the quota](/tidb-cloud/manage-serverless-spend-limit.md#update-spending-limit) or the usage is reset upon the start of a new month. For example, when the storage of a cluster exceeds 5 GiB, the maximum size limit of a single transaction is reduced from 10 MiB to 1 MiB. +Once a cluster reaches its usage quota, it immediately denies any new connection attempts until you [increase the quota](/tidb-cloud/manage-serverless-spend-limit.md#update-spending-limit) or the usage is reset upon the start of a new month. Existing connections established before reaching the quota will remain active but will experience throttling. For example, when the row-based storage of a free cluster exceeds 5 GiB, the cluster automatically restricts any new connection attempts. To learn more about the RU consumption of different resources (including read, write, SQL CPU, and network egress), the pricing details, and the throttling behavior, see [TiDB Serverless Pricing Details](https://www.pingcap.com/tidb-cloud-serverless-pricing-details). 
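A back-of-the-envelope way to check whether a workload fits the free quota is to project monthly RU consumption from the average RU cost per request and the daily request volume. The helper below is an illustrative sketch with assumed numbers; actual RU consumption varies with the operation type and the amount of data retrieved or modified, as described above.

```ts
// Sketch: project monthly RU consumption against the 50 million RUs/month
// free quota. The inputs are rough estimates, not measured values.
const FREE_MONTHLY_RU_QUOTA = 50_000_000;

export function estimateMonthlyRUs(
  avgRUsPerRequest: number,
  requestsPerDay: number,
  daysPerMonth: number = 30
): number {
  return avgRUsPerRequest * requestsPerDay * daysPerMonth;
}

export function withinFreeQuota(avgRUsPerRequest: number, requestsPerDay: number): boolean {
  return estimateMonthlyRUs(avgRUsPerRequest, requestsPerDay) <= FREE_MONTHLY_RU_QUOTA;
}
```

For example, at an assumed average of 10 RUs per request and 100,000 requests per day, the projection is 30 million RUs over a 30-day month, which stays within the free quota.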
diff --git a/tidb-cloud/serverless-driver-drizzle-example.md b/tidb-cloud/serverless-driver-drizzle-example.md new file mode 100644 index 0000000000000..ee2768135b41b --- /dev/null +++ b/tidb-cloud/serverless-driver-drizzle-example.md @@ -0,0 +1,272 @@ +--- +title: TiDB Cloud Serverless Driver Drizzle Tutorial +summary: Learn how to use TiDB Cloud serverless driver with Drizzle. +--- + +# TiDB Cloud Serverless Driver Drizzle Tutorial + +[Drizzle ORM](https://orm.drizzle.team/) is a lightweight and performant TypeScript ORM with developer experience in mind. Starting from `drizzle-orm@0.31.2`, it supports [drizzle-orm/tidb-serverless](https://orm.drizzle.team/docs/get-started-mysql#tidb-serverless), enabling you to use Drizzle over HTTPS with [TiDB Cloud serverless driver](/tidb-cloud/serverless-driver.md). + +This tutorial describes how to use TiDB Cloud serverless driver with Drizzle in Node.js environments and edge environments. + +## Use Drizzle and TiDB Cloud serverless driver in Node.js environments + +This section describes how to use TiDB Cloud serverless driver with Drizzle in Node.js environments. + +### Before you begin + +To complete this tutorial, you need the following: + +- [Node.js](https://nodejs.org/en) >= 18.0.0. +- [npm](https://docs.npmjs.com/downloading-and-installing-node-js-and-npm) or your preferred package manager. +- A TiDB Serverless cluster. If you don't have any, you can [create a TiDB Serverless cluster](/develop/dev-guide-build-cluster-in-cloud.md). + +### Step 1. Create a project + +1. Create a project named `drizzle-node-example`: + + ```shell + mkdir drizzle-node-example + cd drizzle-node-example + ``` + +2. Install the `drizzle-orm` and `@tidbcloud/serverless` packages: + + ```shell + npm install drizzle-orm @tidbcloud/serverless + ``` + +3. 
In the root directory of your project, locate the `package.json` file, and then specify the ES module by adding `"type": "module"` to the file: + + ```json + { + "type": "module", + "dependencies": { + "@tidbcloud/serverless": "^0.1.1", + "drizzle-orm": "^0.31.2" + } + } + ``` + +4. In the root directory of your project, add a `tsconfig.json` file to define the TypeScript compiler options. Here is an example file: + + ```json + { + "compilerOptions": { + "module": "ES2022", + "target": "ES2022", + "moduleResolution": "node", + "strict": false, + "declaration": true, + "outDir": "dist", + "removeComments": true, + "allowJs": true, + "esModuleInterop": true, + "resolveJsonModule": true + } + } + ``` + +### Step 2. Set the environment + +1. In the [TiDB Cloud console](https://tidbcloud.com/), navigate to the [**Clusters**](https://tidbcloud.com/console/clusters) page of your project, and then click the name of your target TiDB Serverless cluster to go to its overview page. + +2. On the overview page, click **Connect** in the upper-right corner, select `Serverless Driver` in the **Connect With** drop-down box, and then click **Generate Password** to create a random password. + + > **Tip:** + > + > If you have created a password before, you can either use the original password or click **Reset Password** to generate a new one. + + The connection string looks like this: + + ``` + mysql://[username]:[password]@[host]/[database] + ``` + +3. Set the environment variable `DATABASE_URL` in your local environment. For example, in Linux or macOS, you can run the following command: + + ```shell + export DATABASE_URL='mysql://[username]:[password]@[host]/[database]' + ``` + +### Step 3. Use Drizzle to query data + +1. Create a table in your TiDB Serverless cluster. + + You can use [SQL Editor in the TiDB Cloud console](/tidb-cloud/explore-data-with-chat2query.md) to execute SQL statements. 
Here is an example: + + ```sql + CREATE TABLE `test`.`users` ( + `id` BIGINT PRIMARY KEY auto_increment, + `full_name` TEXT, + `phone` VARCHAR(256) + ); + ``` + +2. In the root directory of your project, create a file named `hello-world.ts` and add the following code: + + ```ts + import { connect } from '@tidbcloud/serverless'; + import { drizzle } from 'drizzle-orm/tidb-serverless'; + import { mysqlTable, serial, text, varchar } from 'drizzle-orm/mysql-core'; + + // Initialize + const client = connect({ url: process.env.DATABASE_URL }); + const db = drizzle(client); + + // Define schema + export const users = mysqlTable('users', { + id: serial('id').primaryKey(), + fullName: text('full_name'), + phone: varchar('phone', { length: 256 }), + }); + export type User = typeof users.$inferSelect; // return type when queried + export type NewUser = typeof users.$inferInsert; // insert type + + // Insert and select data + const user: NewUser = { fullName: 'John Doe', phone: '123-456-7890' }; + await db.insert(users).values(user); + const result: User[] = await db.select().from(users); + console.log(result); + ``` + +### Step 4. Run the TypeScript code + +1. Install `ts-node` to transform TypeScript into JavaScript, and then install `@types/node` to provide TypeScript type definitions for Node.js. + + ```shell + npm install -g ts-node + npm i --save-dev @types/node + ``` + +2. Run the TypeScript code with the following command: + + ```shell + ts-node --esm hello-world.ts + ``` + +## Use Drizzle and TiDB Cloud serverless driver in edge environments + +This section takes Vercel Edge Functions as an example. + +### Before you begin + +To complete this tutorial, you need the following: + +- A [Vercel](https://vercel.com/docs) account that provides an edge environment. +- [npm](https://docs.npmjs.com/downloading-and-installing-node-js-and-npm) or your preferred package manager. +- A TiDB Serverless cluster. 
If you don't have any, you can [create a TiDB Serverless cluster](/develop/dev-guide-build-cluster-in-cloud.md). + +### Step 1. Create a project + +1. Install the Vercel CLI: + + ```shell + npm i -g vercel@latest + ``` + +2. Create a [Next.js](https://nextjs.org/) project called `drizzle-example` using the following terminal command: + + ```shell + npx create-next-app@latest drizzle-example --ts --no-eslint --tailwind --no-src-dir --app --import-alias "@/*" + ``` + +3. Navigate to the `drizzle-example` directory: + + ```shell + cd drizzle-example + ``` + +4. Install the `drizzle-orm` and `@tidbcloud/serverless` packages: + + ```shell + npm install drizzle-orm @tidbcloud/serverless --force + ``` + +### Step 2. Set the environment + +1. In the [TiDB Cloud console](https://tidbcloud.com/), navigate to the [**Clusters**](https://tidbcloud.com/console/clusters) page of your project, and then click the name of your target TiDB Serverless cluster to go to its overview page. + +2. On the overview page, click **Connect** in the upper-right corner, select `Serverless Driver` in the **Connect With** drop-down box, and then click **Generate Password** to create a random password. + + > **Tip:** + > + > If you have created a password before, you can either use the original password or click **Reset Password** to generate a new one. + + The connection string looks like this: + + ``` + mysql://[username]:[password]@[host]/[database] + ``` + +### Step 3. Create an edge function + +1. Create a table in your TiDB Serverless cluster. + + You can use [SQL Editor in the TiDB Cloud console](/tidb-cloud/explore-data-with-chat2query.md) to execute SQL statements. Here is an example: + + ```sql + CREATE TABLE `test`.`users` ( + `id` BIGINT PRIMARY KEY auto_increment, + `full_name` TEXT, + `phone` VARCHAR(256) + ); + ``` + +2. 
In the `app` directory of your project, create a file `/api/edge-function-example/route.ts` and add the following code: + + ```ts + import { NextResponse } from 'next/server'; + import type { NextRequest } from 'next/server'; + import { connect } from '@tidbcloud/serverless'; + import { drizzle } from 'drizzle-orm/tidb-serverless'; + import { mysqlTable, serial, text, varchar } from 'drizzle-orm/mysql-core'; + export const runtime = 'edge'; + + // Initialize + const client = connect({ url: process.env.DATABASE_URL }); + const db = drizzle(client); + + // Define schema + export const users = mysqlTable('users', { + id: serial("id").primaryKey(), + fullName: text('full_name'), + phone: varchar('phone', { length: 256 }), + }); + export type User = typeof users.$inferSelect; // return type when queried + export type NewUser = typeof users.$inferInsert; // insert type + + export async function GET(request: NextRequest) { + // Insert and select data + const user: NewUser = { fullName: 'John Doe', phone: '123-456-7890' }; + await db.insert(users).values(user) + const result: User[] = await db.select().from(users); + return NextResponse.json(result); + } + ``` + +3. Test your code locally: + + ```shell + export DATABASE_URL='mysql://[username]:[password]@[host]/[database]' + next dev + ``` + +4. Navigate to `http://localhost:3000/api/edge-function-example` to get the response from your route. + +### Step 4. Deploy your code to Vercel + +1. Deploy your code to Vercel with the `DATABASE_URL` environment variable: + + ```shell + vercel -e DATABASE_URL='mysql://[username]:[password]@[host]/[database]' --prod + ``` + + After the deployment is complete, you will get the URL of your project. + +2. Navigate to the `${Your-URL}/api/edge-function-example` page to get the response from your route. + +## What's next + +- Learn more about [Drizzle](https://orm.drizzle.team/docs/overview) and [drizzle-orm/tidb-serverless](https://orm.drizzle.team/docs/get-started-mysql#tidb-serverless). 
+- Learn how to [integrate TiDB Cloud with Vercel](/tidb-cloud/integrate-tidbcloud-with-vercel.md). diff --git a/tidb-cloud/serverless-driver-kysely-example.md b/tidb-cloud/serverless-driver-kysely-example.md index ded6989ccf6af..5b38806848913 100644 --- a/tidb-cloud/serverless-driver-kysely-example.md +++ b/tidb-cloud/serverless-driver-kysely-example.md @@ -39,7 +39,7 @@ To complete this tutorial, you need the following: npm install kysely @tidbcloud/kysely @tidbcloud/serverless ``` -3. In the root directory of your project, locate the `package.json` file, and then specify the ES module by adding `type: "module"` to the file: +3. In the root directory of your project, locate the `package.json` file, and then specify the ES module by adding `"type": "module"` to the file: ```json { @@ -89,7 +89,7 @@ To complete this tutorial, you need the following: 1. Create a table in your TiDB Serverless cluster and insert some data. - You can use [Chat2Query in the TiDB Cloud console](/tidb-cloud/explore-data-with-chat2query.md) to execute SQL statements. Here is an example: + You can use [SQL Editor in the TiDB Cloud console](/tidb-cloud/explore-data-with-chat2query.md) to execute SQL statements. Here is an example: ```sql CREATE TABLE `test`.`person` ( @@ -102,7 +102,7 @@ To complete this tutorial, you need the following: insert into test.person values (1,'pingcap','male') ``` -2. In the root directory of your project, create a file named `hello-word.ts` and add the following code: +2. In the root directory of your project, create a file named `hello-world.ts` and add the following code: ```ts import { Kysely,GeneratedAlways,Selectable } from 'kysely' @@ -201,7 +201,7 @@ mysql://[username]:[password]@[host]/[database] 1. Create a table in your TiDB Serverless cluster and insert some data. - You can use [Chat2Query in the TiDB Cloud console](/tidb-cloud/explore-data-with-chat2query.md) to execute SQL statements. 
Here is an example: + You can use [SQL Editor in the TiDB Cloud console](/tidb-cloud/explore-data-with-chat2query.md) to execute SQL statements. Here is an example: ```sql CREATE TABLE `test`.`person` ( @@ -296,4 +296,4 @@ mysql://[username]:[password]@[host]/[database] ## What's next - Learn more about [Kysely](https://kysely.dev/docs/intro) and [@tidbcloud/kysely](https://github.com/tidbcloud/kysely) -- Learn how to [integrate TiDB Cloud with Vercel](/tidb-cloud/integrate-tidbcloud-with-vercel.md) \ No newline at end of file +- Learn how to [integrate TiDB Cloud with Vercel](/tidb-cloud/integrate-tidbcloud-with-vercel.md) diff --git a/tidb-cloud/serverless-driver-prisma-example.md b/tidb-cloud/serverless-driver-prisma-example.md index 49b3ffd9a7c88..a7f3fbe8aab28 100644 --- a/tidb-cloud/serverless-driver-prisma-example.md +++ b/tidb-cloud/serverless-driver-prisma-example.md @@ -7,13 +7,59 @@ summary: Learn how to use TiDB Cloud serverless driver with Prisma ORM. [Prisma](https://www.prisma.io/docs) is an open source next-generation ORM (Object-Relational Mapping) that helps developers interact with their database in an intuitive, efficient, and safe way. TiDB Cloud offers [@tidbcloud/prisma-adapter](https://github.com/tidbcloud/prisma-adapter), enabling you to use [Prisma Client](https://www.prisma.io/docs/concepts/components/prisma-client) over HTTPS with [TiDB Cloud serverless driver](/tidb-cloud/serverless-driver.md). Compared with the traditional TCP way, [@tidbcloud/prisma-adapter](https://github.com/tidbcloud/prisma-adapter) brings the following benefits: -- Better performance in serverless environments -- Possibility of using Prisma client in the edge environments (see [#21394](https://github.com/prisma/prisma/issues/21394) for more information) +- Better performance of Prisma Client in serverless environments +- Ability to use Prisma Client in edge environments -This tutorial describes how to use TiDB Cloud serverless driver with the Prisma adapter. 
+This tutorial describes how to use [@tidbcloud/prisma-adapter](https://github.com/tidbcloud/prisma-adapter) in serverless environments and edge environments. + +## Install + +You need to install both [@tidbcloud/prisma-adapter](https://github.com/tidbcloud/prisma-adapter) and [TiDB Cloud serverless driver](/tidb-cloud/serverless-driver.md). You can install them using [npm](https://docs.npmjs.com/downloading-and-installing-node-js-and-npm) or your preferred package manager. + +Taking npm as an example, you can run the following commands for installation: + +```shell +npm install @tidbcloud/prisma-adapter +npm install @tidbcloud/serverless +``` + +## Enable `driverAdapters` + +To use the Prisma adapter, you need to enable the `driverAdapters` feature in the `schema.prisma` file. For example: + +```prisma +generator client { + provider = "prisma-client-js" + previewFeatures = ["driverAdapters"] +} + +datasource db { + provider = "mysql" + url = env("DATABASE_URL") +} +``` + +## Initialize Prisma Client + +Before using Prisma Client, you need to initialize it with `@tidbcloud/prisma-adapter`. For example: + +```js +import { connect } from '@tidbcloud/serverless'; +import { PrismaTiDBCloud } from '@tidbcloud/prisma-adapter'; +import { PrismaClient } from '@prisma/client'; + +// Initialize Prisma Client +const connection = connect({ url: process.env.DATABASE_URL }); +const adapter = new PrismaTiDBCloud(connection); +const prisma = new PrismaClient({ adapter }); +``` + +Then, queries from Prisma Client can be sent to the TiDB Cloud serverless driver for processing. ## Use the Prisma adapter in Node.js environments +This section provides an example of how to use `@tidbcloud/prisma-adapter` in Node.js environments. + ### Before you begin To complete this tutorial, you need the following: @@ -40,7 +86,7 @@ To complete this tutorial, you need the following: npm install @tidbcloud/serverless npm install prisma --save-dev ``` - + 3.
In the `package.json` file, specify the ES module by adding `type: "module"`: ```json @@ -125,8 +171,8 @@ To complete this tutorial, you need the following: ``` npx prisma db push ``` - - This command will create the `user` table in your TiDB Serverless cluster through the traditional TCP connection, rather than through the HTTPS connection using `@tidbcloud/prisma-adapter`. This is because it uses the same engine as Prisma Migrate. For more information about this command, see [Prototype your schema](https://www.prisma.io/docs/concepts/components/prisma-migrate/db-push). + + This command will create the `user` table in your TiDB Serverless cluster through the traditional TCP connection, rather than through the HTTPS connection using `@tidbcloud/prisma-adapter`. This is because it uses the same engine as Prisma Migrate. For more information about this command, see [Prototype your schema](https://www.prisma.io/docs/concepts/components/prisma-migrate/db-push). 4. Generate Prisma Client: @@ -134,7 +180,7 @@ To complete this tutorial, you need the following: npx prisma generate ``` - This command will generate Prisma Client based on the Prisma schema. + This command will generate Prisma Client based on the Prisma schema. ### Step 4. Execute CRUD operations @@ -178,7 +224,7 @@ To complete this tutorial, you need the following: }, }) ``` - + 3. Execute some transaction operations with Prisma Client. For example: ```js @@ -213,7 +259,10 @@ To complete this tutorial, you need the following: console.log(e) } ``` - + ## Use the Prisma adapter in edge environments -Currently, `@tidbcloud/prisma-adapter` is not compatible with edge environments such as Vercel Edge Function and Cloudflare Workers. However, there are plans to support these environments. For more information, see [#21394](https://github.com/prisma/prisma/issues/21394). 
\ No newline at end of file +You can use `@tidbcloud/prisma-adapter` v5.11.0 or a later version in edge environments such as Vercel Edge Functions and Cloudflare Workers. + +- [Vercel Edge Function example](https://github.com/tidbcloud/serverless-driver-example/tree/main/prisma/prisma-vercel-example) +- [Cloudflare Workers example](https://github.com/tidbcloud/serverless-driver-example/tree/main/prisma/prisma-cloudflare-worker-example) diff --git a/tidb-cloud/serverless-driver.md b/tidb-cloud/serverless-driver.md index 04904e50075d9..c8e90733b2d4c 100644 --- a/tidb-cloud/serverless-driver.md +++ b/tidb-cloud/serverless-driver.md @@ -152,16 +152,17 @@ You can configure TiDB Cloud serverless driver at both the connection level and At the connection level, you can make the following configurations: -| Name | Type | Default value | Description | -|--------------|------------|---------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| -| `username` | string | N/A | Username of TiDB Serverless | -| `password` | string | N/A | Password of TiDB Serverless | -| `host` | string | N/A | Hostname of TiDB Serverless | -| `database` | string | `test` | Database of TiDB Serverless | -| `url` | string | N/A | The URL for the database, in the `mysql://[username]:[password]@[host]/[database]` format, where `database` can be skipped if you intend to connect to the default database. | -| `fetch` | function | global fetch | Custom fetch function. For example, you can use the `undici` fetch in node.js. | -| `arrayMode` | bool | `false` | Whether to return results as arrays instead of objects. To get better performance, set it to `true`. | -| `fullResult` | bool | `false` | Whether to return full result object instead of just rows. To get more detailed results, set it to `true`. 
| +| Name | Type | Default value | Description | +|--------------|----------|---------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| +| `username` | string | N/A | Username of TiDB Serverless | +| `password` | string | N/A | Password of TiDB Serverless | +| `host` | string | N/A | Hostname of TiDB Serverless | +| `database` | string | `test` | Database of TiDB Serverless | +| `url` | string | N/A | The URL for the database, in the `mysql://[username]:[password]@[host]/[database]` format, where `database` can be skipped if you intend to connect to the default database. | +| `fetch` | function | global fetch | Custom fetch function. For example, you can use the `undici` fetch in node.js. | +| `arrayMode` | bool | `false` | Whether to return results as arrays instead of objects. To get better performance, set it to `true`. | +| `fullResult` | bool | `false` | Whether to return full result object instead of just rows. To get more detailed results, set it to `true`. | +| `decoders` | object | `{}` | A collection of key-value pairs, which enables you to customize the decoding process for different column types. In each pair, you can specify a column type as the key and specify a corresponding function as the value. This function takes the raw string value received from TiDB Cloud serverless driver as an argument and returns the decoded value. 
| **Database URL** @@ -200,31 +201,64 @@ const conn = connect(config) At the SQL level, you can configure the following options: -| Option | Type | Default value | Description | -|--------------|------|---------------|------------------------------------------------------------------------------------------------------------| -| `arrayMode` | bool | `false` | Whether to return results as arrays instead of objects. To get better performance, set it to `true`. | -| `fullResult` | bool | `false` | Whether to return full result object instead of just rows. To get more detailed results, set it to `true`. | +| Option | Type | Default value | Description | +|--------------|--------|-------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| +| `arrayMode` | bool | `false` | Whether to return results as arrays instead of objects. To get better performance, set it to `true`. | +| `fullResult` | bool | `false` | Whether to return the full result object instead of just rows. To get more detailed results, set it to `true`. | +| `isolation` | string | `REPEATABLE READ` | The transaction isolation level, which can be set to `READ COMMITTED` or `REPEATABLE READ`. | +| `decoders` | object | `{}` | A collection of key-value pairs, which enables you to customize the decoding process for different column types. 
In each pair, you can specify a column type as the key and specify a corresponding function as the value. This function takes the raw string value received from TiDB Cloud serverless driver as an argument and returns the decoded value. If you have configured `decoders` at both the connection and SQL levels, the key-value pairs with different keys configured at the connection level will be merged into the SQL level to take effect. If the same key (that is, the column type) is specified at both levels, the value at the SQL level takes precedence. | -For example: +**arrayMode and fullResult** + +To return the full result object as arrays, you can configure the `arrayMode` and `fullResult` options as follows: ```ts const conn = connect({url: process.env['DATABASE_URL'] || 'mysql://[username]:[password]@[host]/[database]'}) const results = await conn.execute('select * from test',null,{arrayMode:true,fullResult:true}) ``` -Starting from TiDB Cloud serverless driver v0.0.7, you can also configure the following SQL level option when you use transactions: - -| Option | Type | Default value | Description | -|--------------|--------|-------------------|------------------------------------------------------------------------------------| -| `isolation` | string | `REPEATABLE READ` | The transaction isolation level, which can be set to `READ COMMITTED` or `REPEATABLE READ`. | +**isolation** -The `isolation` option can only be used in the `begin` function.
```ts const conn = connect({url: 'mysql://[username]:[password]@[host]/[database]'}) const tx = await conn.begin({isolation:"READ COMMITTED"}) ``` +**decoders** + +To customize the format of returned column values, you can configure the `decoders` option in the `connect()` method as follows: + +```ts +import { connect, ColumnType } from '@tidbcloud/serverless'; + +const conn = connect({ + url: 'mysql://[username]:[password]@[host]/[database]', + decoders: { + // By default, TiDB Cloud serverless driver returns the BIGINT type as text value. This decoder converts BIGINT to the JavaScript built-in BigInt type. + [ColumnType.BIGINT]: (rawValue: string) => BigInt(rawValue), + + // By default, TiDB Cloud serverless driver returns the DATETIME type as the text value in the 'yyyy-MM-dd HH:mm:ss' format. This decoder converts the DATETIME text to the JavaScript native Date object. + [ColumnType.DATETIME]: (rawValue: string) => new Date(rawValue), + } +}) + +// You can also configure the decoders option at the SQL level to override the decoders with the same keys at the connection level. +conn.execute(`select ...`, [], { + decoders: { + // ... + } +}) +``` + +> **Note:** +> +> TiDB Cloud serverless driver configuration changes: +> +> - v0.0.7: add the SQL level option `isolation`. +> - v0.0.10: add the connection level configuration `decoders` and the SQL level option `decoders`.
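The merge behavior of connection-level and SQL-level `decoders` can be sketched independently of the driver. The following TypeScript is an illustration only, not part of `@tidbcloud/serverless`: the plain string keys and the helper names `mergeDecoders` and `decodeRow` are hypothetical stand-ins for the driver's `ColumnType` enum and internal decoding logic, used to show how each raw string value is transformed and why the SQL level wins for duplicate keys.

```typescript
// Hypothetical stand-in types: the real @tidbcloud/serverless package exports
// a ColumnType enum; plain strings are used here for illustration only.
type Decoder = (rawValue: string) => unknown;
type Decoders = { [columnType: string]: Decoder };

// Pairs with different keys are combined; for the same key, the SQL-level
// function wins because it is spread last.
function mergeDecoders(connectionLevel: Decoders, sqlLevel: Decoders): Decoders {
  return { ...connectionLevel, ...sqlLevel };
}

// Apply the merged decoders to one raw row, looking up each column's type.
function decodeRow(
  row: { [column: string]: string },
  columnTypes: { [column: string]: string },
  decoders: Decoders
): { [column: string]: unknown } {
  const decoded: { [column: string]: unknown } = {};
  for (const column in row) {
    const decoder = decoders[columnTypes[column]];
    // Columns without a decoder keep the raw text value.
    decoded[column] = decoder ? decoder(row[column]) : row[column];
  }
  return decoded;
}

// Connection-level decoders: numbers for BIGINT, Date objects for DATETIME.
const connectionLevel: Decoders = {
  BIGINT: (raw) => Number(raw),
  DATETIME: (raw) => new Date(raw.replace(' ', 'T')),
};

// SQL-level decoders: keep BIGINT as text, overriding the connection level.
const sqlLevel: Decoders = {
  BIGINT: (raw) => raw,
};

const result = decodeRow(
  { id: '42', created_at: '2024-01-01 00:00:00', name: 'tidb' },
  { id: 'BIGINT', created_at: 'DATETIME', name: 'VARCHAR' },
  mergeDecoders(connectionLevel, sqlLevel)
);
console.log(result);
```

Spreading the SQL-level map last is what gives its entries precedence, while connection-level entries with other keys still apply.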
+ ## Features ### Supported SQL statements @@ -253,27 +287,37 @@ The type mapping between TiDB Serverless and Javascript is as follows: | DECIMAL | string | | CHAR | string | | VARCHAR | string | -| BINARY | string | -| VARBINARY | string | +| BINARY | Uint8Array | +| VARBINARY | Uint8Array | | TINYTEXT | string | | TEXT | string | | MEDIUMTEXT | string | | LONGTEXT | string | -| TINYBLOB | string | -| BLOB | string | -| MEDIUMBLOB | string | -| LONGBLOB | string | +| TINYBLOB | Uint8Array | +| BLOB | Uint8Array | +| MEDIUMBLOB | Uint8Array | +| LONGBLOB | Uint8Array | | DATE | string | | TIME | string | | DATETIME | string | | TIMESTAMP | string | | ENUM | string | | SET | string | -| BIT | string | +| BIT | Uint8Array | | JSON | object | | NULL | null | | Others | string | +> **Note:** +> +> Make sure to use the default `utf8mb4` character set in TiDB Serverless for the type conversion to JavaScript strings, because TiDB Cloud serverless driver uses the UTF-8 encoding to decode them to strings. + +> **Note:** +> +> TiDB Cloud serverless driver data type mapping changes: +> +> - v0.1.0: the `BINARY`, `VARBINARY`, `TINYBLOB`, `BLOB`, `MEDIUMBLOB`, `LONGBLOB`, and `BIT` types are now returned as a `Uint8Array` instead of a `string`. + ### ORM integrations TiDB Cloud serverless driver has been integrated with the following ORMs: diff --git a/tidb-cloud/serverless-export.md b/tidb-cloud/serverless-export.md new file mode 100644 index 0000000000000..199fd7c93c0d4 --- /dev/null +++ b/tidb-cloud/serverless-export.md @@ -0,0 +1,128 @@ +--- +title: Export Data from TiDB Serverless +summary: Learn how to export data from TiDB Serverless clusters. +--- + +# Export Data from TiDB Serverless + +TiDB Serverless Export (Beta) is a service that enables you to export data from a TiDB Serverless cluster to local storage or an external storage service. You can use the exported data for backup, migration, data analysis, or other purposes. 
+ +While you can also export data using tools such as [mysqldump](https://dev.mysql.com/doc/refman/8.0/en/mysqldump.html) and TiDB [Dumpling](https://docs.pingcap.com/tidb/dev/dumpling-overview), TiDB Serverless Export offers a more convenient and efficient way to export data from a TiDB Serverless cluster. It brings the following benefits: + +- Convenience: the export service provides a simple and easy-to-use way to export data from a TiDB Serverless cluster, eliminating the need for additional tools or resources. +- Isolation: the export service uses separate computing resources, ensuring isolation from the resources used by your online services. +- Consistency: the export service ensures the consistency of the exported data without causing locks, which does not affect your online services. + +## Features + +This section describes the features of TiDB Serverless Export. + +### Export location + +You can export data to local storage or [Amazon S3](https://aws.amazon.com/s3/). + +> **Note:** +> +> If the size of the data to be exported is large (more than 100 GiB), it is recommended that you export it to Amazon S3. + +**Local storage** + +Exporting data to local storage has the following limitations: + +- Exporting multiple databases to local storage at the same time is not supported. +- Exported data is saved in the stashing area and will expire after two days. You need to download the exported data in time. +- If the storage space of the stashing area is full, you will not be able to export data to local storage. + +**Amazon S3** + +To export data to Amazon S3, you need to provide an [access key](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html) for your S3 bucket. Make sure the access key has read and write access for your S3 bucket, including at least these permissions: `s3:PutObject` and `s3:ListBucket`. + +### Data filtering + +You can filter data by specifying the database and table you want to export.
If you specify a database without specifying a table, all tables in that specified database will be exported. If you do not specify a database when you export data to Amazon S3, all databases in the cluster will be exported. + +> **Note:** +> +> You must specify the database when you export data to local storage. + +### Data formats + +You can export data in the following formats: + +- `SQL` (default): export data in SQL format. +- `CSV`: export data in CSV format. + +The schema and data are exported according to the following naming conventions: + +| Item | Not compressed | Compressed | +|-----------------|------------------------------------------|-----------------------------------------------------| +| Database schema | {database}-schema-create.sql | {database}-schema-create.sql.{compression-type} | +| Table schema | {database}.{table}-schema.sql | {database}.{table}-schema.sql.{compression-type} | +| Data | {database}.{table}.{0001}.{sql\|csv} | {database}.{table}.{0001}.{sql\|csv}.{compression-type} | + +### Data compression + +You can compress the exported data using the following algorithms: + +- `gzip` (default): compress the exported data with gzip. +- `snappy`: compress the exported data with snappy. +- `zstd`: compress the exported data with zstd. +- `none`: do not compress the exported data. + +### Cancel export + +You can cancel an export task that is in the running state. + +## Examples + +Currently, you can manage export tasks using [TiDB Cloud CLI](/tidb-cloud/cli-reference.md). + +### Export data to local storage + +1. Create an export task that specifies the database and table you want to export: + + ```shell + ticloud serverless export create -c <cluster-id> --database <database> --table <table> + ``` + + You will get an export ID from the output. + +2.
After the export is successful, download the exported data to your local storage: + + ```shell + ticloud serverless export download -c <cluster-id> -e <export-id> + ``` + +### Export data to Amazon S3 + +```shell +ticloud serverless export create -c <cluster-id> --bucket-uri <bucket-uri> --access-key-id <access-key-id> --secret-access-key <secret-access-key> +``` + +### Export with the CSV format + +```shell +ticloud serverless export create -c <cluster-id> --file-type CSV +``` + +### Export the whole database + +```shell +ticloud serverless export create -c <cluster-id> --database <database> +``` + +### Export with snappy compression + +```shell +ticloud serverless export create -c <cluster-id> --compress snappy +``` + +### Cancel an export task + +```shell +ticloud serverless export cancel -c <cluster-id> -e <export-id> +``` + +## Pricing + +The export service is free during the beta period. You only need to pay for the [Request Units (RUs)](/tidb-cloud/tidb-cloud-glossary.md#request-unit) generated during the export process of successful or canceled tasks. For failed export tasks, you will not be charged. \ No newline at end of file diff --git a/tidb-cloud/serverless-faqs.md b/tidb-cloud/serverless-faqs.md index 8aa876ddc0b61..e5b60bd561739 100644 --- a/tidb-cloud/serverless-faqs.md +++ b/tidb-cloud/serverless-faqs.md @@ -22,7 +22,7 @@ Get started with the 5-minute [TiDB Cloud Quick Start](/tidb-cloud/tidb-cloud-quickstart.md). ### How many TiDB Serverless clusters can I create in TiDB Cloud? -For each organization in TiDB Cloud, you can create a maximum of five TiDB Serverless clusters by default. To create more TiDB Serverless clusters, you need to add a credit card and set a [spending limit](/tidb-cloud/tidb-cloud-glossary.md#spending-limit) for usage. +For each organization in TiDB Cloud, you can create a maximum of five [free clusters](/tidb-cloud/select-cluster-tier.md#free-cluster-plan) by default. To create more TiDB Serverless clusters, you need to add a credit card and create [scalable clusters](/tidb-cloud/select-cluster-tier.md#scalable-cluster-plan).
### Are all TiDB Cloud features fully supported on TiDB Serverless? @@ -36,48 +36,66 @@ We are actively working on expanding TiDB Serverless to other cloud platforms, i Yes, your Developer Tier cluster has been automatically migrated to the TiDB Serverless cluster, providing you with an improved user experience without any disruptions to your prior usage. -## Billing and metering FAQs +### What is columnar storage in TiDB Serverless? -### What are Request Units? +Columnar storage in TiDB Serverless acts as an additional replica of row-based storage, ensuring strong consistency. Unlike traditional row-based storage, which stores data in rows, columnar storage organizes data in columns, optimizing it for data analytics tasks. -TiDB Serverless adopts a pay-as-you-go model, meaning that you only pay for the storage space and cluster usage. In this model, all cluster activities such as SQL queries, bulk operations, and background jobs are quantified in [Request Units (RUs)](/tidb-cloud/tidb-cloud-glossary.md#request-unit). RU is an abstract measurement for the size and intricacy of requests initiated on your cluster. For more information, see [TiDB Serverless Pricing Details](https://www.pingcap.com/tidb-cloud-serverless-pricing-details/). +Columnar storage is a key feature that enables the Hybrid Transactional and Analytical Processing (HTAP) capabilities of TiDB by seamlessly blending transactional and analytical workloads. + +To efficiently manage columnar storage data, TiDB Serverless uses a separate elastic TiFlash engine. During query execution, the optimizer guides the cluster to automatically decide whether to retrieve data from row-based or columnar storage. + +### When should I use columnar storage in TiDB Serverless? + +Consider using columnar storage in TiDB Serverless in the following scenarios: + +- Your workload involves analytical tasks that require efficient data scanning and aggregation. 
+- You prioritize improved performance, especially for analytics workloads. +- You want to isolate analytical processing from transactional processing to prevent performance impact on your transactional processing (TP) workload. The separate columnar storage helps optimize these distinct workload patterns. + +In these scenarios, columnar storage can significantly improve query performance and provide a seamless experience for mixed workloads in your system. + +### How to use columnar storage in TiDB Serverless? + +Using columnar storage in TiDB Serverless is similar to using it in TiFlash. You can enable columnar storage at both the table and database levels: -### How can I view the RU costs for my SQL statements in TiDB Serverless? +- Table level: Assign a TiFlash replica to a table to enable columnar storage for that specific table. +- Database level: Configure TiFlash replicas for all tables in a database to use columnar storage across the entire database. -You can view both the **Total RU** and **Mean RU** costs per SQL statement in [TiDB Serverless](/tidb-cloud/select-cluster-tier.md#tidb-serverless). This feature helps in identifying and analyzing RU costs, which can lead to potential cost savings in your operations. +Once a TiFlash replica is set up for a table, TiDB automatically replicates data from the row-based storage to the columnar storage for that table. This ensures data consistency and optimizes performance for analytical queries. -To check your SQL statement RU details, perform the following steps: +For more information about how to set up TiFlash replicas, see [Create TiFlash replicas](/tiflash/create-tiflash-replicas.md). -1. Log in to the [TiDB Cloud console](https://tidbcloud.com/) and navigate to the [**Clusters**](https://tidbcloud.com/console/clusters) page of your project. +## Billing and metering FAQs -2. Navigate to the **Diagnosis** page of [your TiDB Serverless cluster](https://tidbcloud.com/console/clusters). 
+### What are Request Units? -3. Click the **SQL Statement** tab. +TiDB Serverless adopts a pay-as-you-go model, meaning that you only pay for the storage space and cluster usage. In this model, all cluster activities such as SQL queries, bulk operations, and background jobs are quantified in [Request Units (RUs)](/tidb-cloud/tidb-cloud-glossary.md#request-unit). RU is an abstract measurement for the size and intricacy of requests initiated on your cluster. For more information, see [TiDB Serverless Pricing Details](https://www.pingcap.com/tidb-cloud-serverless-pricing-details/). ### Is there any free plan available for TiDB Serverless? For the first five TiDB Serverless clusters in your organization, TiDB Cloud provides a free usage quota for each of them as follows: - Row-based storage: 5 GiB +- Columnar storage: 5 GiB - [Request Units (RUs)](/tidb-cloud/tidb-cloud-glossary.md#request-unit): 50 million RUs per month -Usage beyond the free quota will be charged. Once the free quota of a cluster is reached, the read and write operations on this cluster will be throttled until you [increase the quota](/tidb-cloud/manage-serverless-spend-limit.md#update-spending-limit) or the usage is reset upon the start of a new month. +If you are using a scalable cluster, usage beyond the free quota will be charged. For a free cluster, once the free quota is reached, the read and write operations on this cluster will be throttled until you upgrade to a scalable cluster or the usage is reset upon the start of a new month. For more information, see [TiDB Serverless usage quota](/tidb-cloud/select-cluster-tier.md#usage-quota). ### What are the limitations of the free plan? -Under the free plan, cluster performance is capped at a maximum of 10,000 RUs per second based on actual workload. Additionally, memory allocation per query is limited to 256 MiB. 
To maximize cluster performance, you can choose to enable the commercial offering by [increasing your spending limit](/tidb-cloud/manage-serverless-spend-limit.md#update-spending-limit). +Under the free plan, cluster performance is limited because its resources do not scale. As a result, memory allocation per query is restricted to 256 MiB, and you might observe bottlenecks in request units (RUs) per second. To maximize cluster performance and avoid these limitations, you can upgrade to a [scalable cluster](/tidb-cloud/select-cluster-tier.md#scalable-cluster-plan). ### How can I estimate the number of RUs required by my workloads and plan my monthly budget? To get the RU consumption of individual SQL statements, you can use the [`EXPLAIN ANALYZE`](/sql-statements/sql-statement-explain-analyze.md#ru-request-unit-consumption) SQL statement. However, it is important to note that the RUs usage returned in `EXPLAIN ANALYZE` does not incorporate egress RUs, as egress usage is measured separately in the gateway, which is unknown to the TiDB server. -To get the RUs and storage used by your cluster, view the **Usage this month** pane on your cluster overview page. With your past resource usage data and real-time resource usage in this pane, you can track your cluster's resource consumption and estimate a reasonable spending limit. If the free quota cannot meet your requirement, you can edit the spending limit easily. For more information, see [Manage Spending Limit for TiDB Serverless clusters](/tidb-cloud/manage-serverless-spend-limit.md). +To get the RUs and storage used by your cluster, view the **Usage this month** pane on your cluster overview page. With your past resource usage data and real-time resource usage in this pane, you can track your cluster's resource consumption and estimate a reasonable spending limit.
If the free quota cannot meet your requirement, you can upgrade to a [scalable cluster](/tidb-cloud/select-cluster-tier.md#scalable-cluster-plan) and edit the spending limit. For more information, see [TiDB Serverless usage quota](/tidb-cloud/select-cluster-tier.md#usage-quota). ### How can I optimize my workload to minimize the number of RUs consumed? -Ensure that your queries have been carefully optimized for optimal performance by following the guidelines in [Optimizing SQL Performance](/develop/dev-guide-optimize-sql-overview.md). In addition, minimizing the amount of egress traffic is also crucial for reducing RUs consumption. To achieve this, it is recommended to return only the necessary columns and rows in your query, which in turn helps reduce network egress traffic. This can be achieved by carefully selecting and filtering the columns and rows to be returned, thereby optimizing network utilization. +Ensure that your queries are well optimized by following the guidelines in [Optimizing SQL Performance](/develop/dev-guide-optimize-sql-overview.md). To identify the SQL statements that consume the most RUs, navigate to **Diagnosis > SQL Statements** on your cluster overview page, where you can observe SQL execution and view the top statements sorted by **Total RU** or **Mean RU**. For more information, see [Statement Analysis](/tidb-cloud/tune-performance.md#statement-analysis). In addition, minimizing egress traffic is also crucial for reducing RU consumption. To achieve this, return only the necessary columns and rows in your queries, which reduces network egress traffic. ### How storage is metered for TiDB Serverless? @@ -97,12 +115,26 @@ A spike in RU usage can occur due to necessary background jobs in TiDB.
These jo ### What happens when my cluster exhausts its free quota or exceeds its spending limit? -Once a cluster reaches its free quota or spending limit, the cluster will enforce throttling measures on its read and write operations. These operations will be limited until the quota is increased or the usage is reset at the start of a new month. For more information, see [TiDB Serverless Limitations and Quotas](/tidb-cloud/serverless-limitations.md#usage-quota). +Once a cluster reaches its free quota or spending limit, it immediately denies any new connection attempts until the quota is increased or the usage is reset at the start of a new month. Existing connections established before reaching the quota will remain active but will experience throttling. For more information, see [TiDB Serverless Limitations and Quotas](/tidb-cloud/serverless-limitations.md#usage-quota). ### Why do I observe spikes in RU usage while importing data? During the data import process of a TiDB Serverless cluster, RU consumption occurs only when the data is successfully imported, which leads to spikes in RU usage. +### What costs are involved when using columnar storage in TiDB Serverless? + +The pricing for columnar storage in TiDB Serverless is similar to that for row-based storage. When you use columnar storage, an additional replica is created to store your data (without indexes). The replication of data from row-based to columnar storage does not incur extra charges. + +For detailed pricing information, see [TiDB Serverless pricing details](https://www.pingcap.com/tidb-serverless-pricing-details/). + +### Is using columnar storage more expensive? + +Columnar storage in TiDB Serverless incurs additional costs due to the extra replica, which requires more storage and resources for data replication. However, columnar storage becomes more cost-effective when running analytical queries.
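The storage-versus-compute trade-off can be sketched with back-of-the-envelope arithmetic. All prices and workload figures below are hypothetical placeholders, not published TiDB Serverless rates, and the one-third compute ratio is an assumption for illustration:

```python
# Back-of-the-envelope cost comparison of row-based vs. columnar storage
# for an analytics-heavy workload. All figures are hypothetical examples,
# not actual TiDB Serverless prices.

def monthly_cost(storage_gib, analytics_rus, price_per_gib=0.20,
                 price_per_million_rus=0.15):
    """Rough monthly cost: storage charge plus compute (RU) charge."""
    return storage_gib * price_per_gib + analytics_rus / 1_000_000 * price_per_million_rus

data_gib = 100                 # base row-based data size (hypothetical)
row_rus = 300_000_000          # monthly analytical RUs on row-based storage
columnar_rus = row_rus / 3     # assumed: columnar analytics use ~1/3 the RUs

row_only = monthly_cost(data_gib, row_rus)
# The extra columnar replica roughly doubles storage,
# but analytical compute drops sharply.
with_columnar = monthly_cost(data_gib * 2, columnar_rus)

print(f"row-only:      ${row_only:.2f}")
print(f"with columnar: ${with_columnar:.2f}")
```

Under these assumed numbers the compute savings outweigh the extra replica's storage cost; the break-even point depends on how analytics-heavy the workload actually is.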
+ +According to the TPC-H benchmark test, the cost of running analytical queries on columnar storage is about one-third of the cost when using row-based storage. + +Therefore, while the extra replica adds some upfront cost, the reduced compute cost during analytics can make columnar storage more cost-effective overall, especially for users with analytical demands. + ## Security FAQs ### Is my TiDB Serverless shared or dedicated? diff --git a/tidb-cloud/serverless-limitations.md b/tidb-cloud/serverless-limitations.md index dce4940d0f72d..96c7797b0bafe 100644 --- a/tidb-cloud/serverless-limitations.md +++ b/tidb-cloud/serverless-limitations.md @@ -48,27 +48,31 @@ We are constantly filling in the feature gaps between TiDB Serverless and TiDB D - [Changefeed](/tidb-cloud/changefeed-overview.md) is not supported for TiDB Serverless currently. - [Data Migration](/tidb-cloud/migrate-from-mysql-using-data-migration.md) is not supported for TiDB Serverless currently. +### Time to live (TTL) + +- In TiDB Serverless, the [`TTL_JOB_INTERVAL`](/time-to-live.md#ttl-job) attribute for a table is fixed at `15m` and cannot be modified. This means that TiDB Serverless schedules a background job every 15 minutes to clean up expired data. + ### Others -- [Time to live (TTL)](/time-to-live.md) is currently unavailable. - Transaction can not last longer than 30 minutes. - For more details about SQL limitations, refer to [Limited SQL Features](/tidb-cloud/limited-sql-features.md). ## Usage quota -For each organization in TiDB Cloud, you can create a maximum of five TiDB Serverless clusters by default. To create more TiDB Serverless clusters, you need to add a credit card and set a [spending limit](/tidb-cloud/tidb-cloud-glossary.md#spending-limit) for the usage.
+For each organization in TiDB Cloud, you can create a maximum of five [free clusters](/tidb-cloud/select-cluster-tier.md#free-cluster-plan) by default. To create more TiDB Serverless clusters, you need to add a credit card and create [scalable clusters](/tidb-cloud/select-cluster-tier.md#scalable-cluster-plan). -For the first five TiDB Serverless clusters in your organization, TiDB Cloud provides a free usage quota for each of them as follows: +For the first five TiDB Serverless clusters in your organization, whether they are free or scalable, TiDB Cloud provides a free usage quota for each of them as follows: - Row-based storage: 5 GiB +- Columnar storage: 5 GiB - [Request Units (RUs)](/tidb-cloud/tidb-cloud-glossary.md#request-unit): 50 million RUs per month The Request Unit (RU) is a unit of measurement used to track the resource consumption of a query or transaction. It is a metric that allows you to estimate the computational resources required to process a specific request in the database. The request unit is also the billing unit for TiDB Cloud Serverless service. -Once the free quota of a cluster is reached, the read and write operations on this cluster will be throttled until you [increase the quota](/tidb-cloud/manage-serverless-spend-limit.md#update-spending-limit) or the usage is reset upon the start of a new month. +Once a cluster reaches its usage quota, it immediately denies any new connection attempts until you [increase the quota](/tidb-cloud/manage-serverless-spend-limit.md#update-spending-limit) or the usage is reset upon the start of a new month. Existing connections established before reaching the quota will remain active but will experience throttling. To learn more about the RU consumption of different resources (including read, write, SQL CPU, and network egress), the pricing details, and the throttled information, see [TiDB Serverless Pricing Details](https://www.pingcap.com/tidb-cloud-serverless-pricing-details).
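As a rough planning aid, the monthly RU consumption of a steady workload can be estimated with simple arithmetic and compared against the 50 million RU free quota. The per-query RU cost and query rate below are hypothetical; measure real values with `EXPLAIN ANALYZE` and add headroom for egress RUs, which it does not report:

```python
# Estimate monthly RU consumption for capacity planning.
# avg_rus_per_query and queries_per_second are hypothetical figures;
# measure your own with EXPLAIN ANALYZE and allow headroom for egress RUs.

FREE_QUOTA_RUS = 50_000_000          # free quota per cluster per month

avg_rus_per_query = 5
queries_per_second = 20
seconds_per_month = 30 * 24 * 3600   # roughly one month

monthly_rus = avg_rus_per_query * queries_per_second * seconds_per_month
print(f"estimated monthly RUs: {monthly_rus:,}")
if monthly_rus > FREE_QUOTA_RUS:
    print("exceeds the free quota; consider a scalable cluster")
```

With these example numbers the workload lands well above the free quota, which is exactly the situation where setting a spending limit on a scalable cluster matters.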
-If you want to create a TiDB Serverless cluster with an additional quota, you can edit the spending limit on the cluster creation page. For more information, see [Create a TiDB Serverless cluster](/tidb-cloud/create-tidb-cluster-serverless.md). +If you want to create a TiDB Serverless cluster with an additional quota, you can select the scalable cluster plan and edit the spending limit on the cluster creation page. For more information, see [Create a TiDB Serverless cluster](/tidb-cloud/create-tidb-cluster-serverless.md). -After creating a TiDB Serverless, you can still check and edit the spending limit on your cluster overview page. For more information, see [Manage Spending Limit for TiDB Serverless Clusters](/tidb-cloud/manage-serverless-spend-limit.md). +After creating a TiDB Serverless cluster, you can still check and edit the spending limit on your cluster overview page. For more information, see [Manage Spending Limit for TiDB Serverless Clusters](/tidb-cloud/manage-serverless-spend-limit.md). diff --git a/tidb-cloud/set-up-private-endpoint-connections-on-google-cloud.md b/tidb-cloud/set-up-private-endpoint-connections-on-google-cloud.md index ad728d54d0071..c00709e54a460 100644 --- a/tidb-cloud/set-up-private-endpoint-connections-on-google-cloud.md +++ b/tidb-cloud/set-up-private-endpoint-connections-on-google-cloud.md @@ -45,8 +45,8 @@ In most scenarios, it is recommended that you use private endpoint connection ov To connect to your TiDB Dedicated cluster via a private endpoint, complete the [prerequisites](#prerequisites) and follow these steps: -1. [Choose a TiDB cluster](#step-1-choose-a-tidb-cluster) -2. [Provide the information for creating an endpoint](#step-2-provide-the-information-for-creating-an-endpoint) +1. [Select a TiDB cluster](#step-1-select-a-tidb-cluster) +2. [Create a Google Cloud private endpoint](#step-2-create-a-google-cloud-private-endpoint) 3. [Accept endpoint access](#step-3-accept-endpoint-access) 4. 
[Connect to your TiDB cluster](#step-4-connect-to-your-tidb-cluster) @@ -74,12 +74,12 @@ Perform the following steps to go to the **Google Cloud Private Endpoint** page: 1. Log in to the [TiDB Cloud console](https://tidbcloud.com). 2. Click in the lower-left corner, switch to the target project if you have multiple projects, and then click **Project Settings**. -3. On the **Project Settings** page of your project, click **Network Access** in the left navigation pane, and click the **Private Endpoint** tab. -4. Click **Create Private Endpoint** in the upper-right corner, and then select **Google Cloud Private Endpoint**. +3. On the **Project Settings** page of your project, click **Network Access** in the left navigation pane, and click the **Private Endpoint** > **Google Cloud** tab to view the Google Cloud private endpoints. +4. In the upper-right corner, click **Create Private Endpoint Connection**. -### Step 1. Choose a TiDB cluster +### Step 1. Select a TiDB cluster -Click the drop-down list and choose an available TiDB Dedicated cluster. +In the **Cluster** list, select the TiDB Dedicated cluster that you want to establish a private endpoint connection with. You can select a cluster with any of the following statuses: @@ -88,7 +88,7 @@ You can select a cluster with any of the following statuses: - **Modifying** - **Importing** -### Step 2. Provide the information for creating an endpoint +### Step 2. Create a Google Cloud private endpoint 1. Provide the following information to generate the command for private endpoint creation: - **Google Cloud Project ID**: the Project ID associated with your Google Cloud account. You can find the ID on the [Google Cloud **Dashboard** page](https://console.cloud.google.com/home/dashboard). @@ -96,8 +96,8 @@ You can select a cluster with any of the following statuses: - **Google Cloud Subnet Name**: the name of the subnet in the specified VPC. You can find it on the **VPC network details** page.
- **Private Service Connect Endpoint Name**: enter a unique name for the private endpoint that will be created. 2. After entering the information, click **Generate Command**. -3. Copy the command. -4. Go to [Google Cloud Shell](https://console.cloud.google.com/home/dashboard) to execute the command. +3. Copy the generated command. +4. Open [Google Cloud Shell](https://console.cloud.google.com/home/dashboard) and execute the command to create the private endpoint. ### Step 3. Accept endpoint access @@ -133,7 +133,7 @@ The possible statuses of a private endpoint service are explained as follows: ### TiDB Cloud fails to create an endpoint service. What should I do? -The endpoint service is created automatically after you open the **Create Google Cloud Private Endpoint** page and choose the TiDB cluster. If it shows as failed or remains in the **Creating** state for a long time, submit a [support ticket](/tidb-cloud/tidb-cloud-support.md) for assistance. +The endpoint service is created automatically after you open the **Create Google Cloud Private Endpoint Connection** page and choose the TiDB cluster. If it shows as failed or remains in the **Creating** state for a long time, submit a [support ticket](/tidb-cloud/tidb-cloud-support.md) for assistance. ### Fail to create an endpoint in Google Cloud. What should I do? @@ -147,6 +147,6 @@ If you have already executed the command to create a private endpoint in Google ### Why can't I see the endpoints generated by directly copying the service attachment in the TiDB Cloud console? -In the TiDB Cloud console, you can only view endpoints that are created through the command generated on the **Create Google Cloud Private Endpoint** page. +In the TiDB Cloud console, you can only view endpoints that are created through the command generated on the **Create Google Cloud Private Endpoint Connection** page. 
However, endpoints generated by directly copying the service attachment (that is, not created through the command generated in the TiDB Cloud console) are not displayed in the TiDB Cloud console. diff --git a/tidb-cloud/set-up-private-endpoint-connections.md b/tidb-cloud/set-up-private-endpoint-connections.md index ff75084628416..3f03450ec5a60 100644 --- a/tidb-cloud/set-up-private-endpoint-connections.md +++ b/tidb-cloud/set-up-private-endpoint-connections.md @@ -40,12 +40,11 @@ In most scenarios, you are recommended to use private endpoint connection over V To connect to your TiDB Dedicated cluster via a private endpoint, complete the [prerequisites](#prerequisites) and follow these steps: -1. [Choose a TiDB cluster](#step-1-choose-a-tidb-cluster) -2. [Check the service endpoint region](#step-2-check-the-service-endpoint-region) -3. [Create an AWS interface endpoint](#step-3-create-an-aws-interface-endpoint) -4. [Accept the endpoint connection](#step-4-accept-the-endpoint-connection) -5. [Enable private DNS](#step-5-enable-private-dns) -6. [Connect to your TiDB cluster](#step-6-connect-to-your-tidb-cluster) +1. [Select a TiDB cluster](#step-1-select-a-tidb-cluster) +2. [Create an AWS interface endpoint](#step-2-create-an-aws-interface-endpoint) +3. [Fill in your endpoint ID](#step-3-fill-in-your-endpoint-id) +4. [Enable private DNS and create connection](#step-4-enable-private-dns-and-create-connection) +5. [Connect to your TiDB cluster](#step-5-connect-to-your-tidb-cluster) If you have multiple clusters, you need to repeat these steps for each cluster that you want to connect to using AWS PrivateLink. @@ -53,35 +52,29 @@ If you have multiple clusters, you need to repeat these steps for each cluster t 1. Log in to the [TiDB Cloud console](https://tidbcloud.com). 2. Click in the lower-left corner, switch to the target project if you have multiple projects, and then click **Project Settings**. -3. 
On the **Project Settings** page of your project, click **Network Access** in the left navigation pane, and click the **Private Endpoint** tab. -4. Click **Create Private Endpoint** in the upper-right corner, and then select **AWS Private Endpoint**. +3. On the **Project Settings** page of your project, click **Network Access** in the left navigation pane, and click the **Private Endpoint** > **AWS** tab to view the AWS private endpoints. +4. In the upper-right corner, click **Create Private Endpoint Connection**. -### Step 1. Choose a TiDB cluster +### Step 1. Select a TiDB cluster -1. Click the drop-down list and choose an available TiDB Dedicated cluster. -2. Click **Next**. +In the **Cluster** list, select the TiDB Dedicated cluster that you want to establish a private endpoint connection with. -### Step 2. Check the service endpoint region - -Your service endpoint region is selected by default. Have a quick check and click **Next**. - -> **Note:** > -> The default region is where your cluster is located. Do not change it. Cross-region private endpoint is currently not supported. - -### Step 3. Create an AWS interface endpoint +### Step 2. Create an AWS interface endpoint > **Note:** > > For each TiDB Dedicated cluster created after March 28, 2023, the corresponding endpoint service is automatically created 3 to 4 minutes after the cluster creation. -If you see the `Endpoint Service Ready` message, take note of your endpoint service name from the command in the lower area of the console for later use. Otherwise, wait 3 to 4 minutes to let TiDB Cloud create an endpoint service for your cluster. +If you see the `TiDB Private Link Service is ready` message, the corresponding endpoint service is ready. You can provide the following information to create the endpoint.
-```bash -aws ec2 create-vpc-endpoint --vpc-id ${your_vpc_id} --region ${your_region} --service-name ${your_endpoint_service_name} --vpc-endpoint-type Interface --subnet-ids ${your_application_subnet_ids} -``` +1. On the **Create AWS Private Endpoint Connection** page, fill in the **Your VPC ID** and **Your Subnet IDs** fields. You can get the IDs from your [AWS Management Console](https://console.aws.amazon.com/). +2. Click **Generate Command** to get the following endpoint creation command. -Then create an AWS interface endpoint either using the AWS Management Console or using the AWS CLI. + ```bash + aws ec2 create-vpc-endpoint --vpc-id ${your_vpc_id} --region ${your_region} --service-name ${your_endpoint_service_name} --vpc-endpoint-type Interface --subnet-ids ${your_application_subnet_ids} + ``` + +Then, you can create an AWS interface endpoint either using the [AWS Management Console](https://aws.amazon.com/console/) or using the AWS CLI.
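The **Generate Command** step described above amounts to filling your VPC details into a fixed `aws ec2 create-vpc-endpoint` template. A hypothetical sketch of that templating, with all parameter values as placeholders rather than real identifiers:

```python
# Sketch of how the console's endpoint-creation command can be rendered
# from your VPC details. All values below are hypothetical placeholders.

def build_endpoint_command(vpc_id, region, service_name, subnet_ids):
    """Render the aws CLI command shown in the console."""
    return (
        "aws ec2 create-vpc-endpoint"
        f" --vpc-id {vpc_id}"
        f" --region {region}"
        f" --service-name {service_name}"
        " --vpc-endpoint-type Interface"
        f" --subnet-ids {' '.join(subnet_ids)}"
    )

cmd = build_endpoint_command(
    vpc_id="vpc-0a1b2c3d",
    region="us-west-2",
    service_name="com.amazonaws.vpce.us-west-2.vpce-svc-example",
    subnet_ids=["subnet-1111", "subnet-2222"],
)
print(cmd)
```

In practice you copy the command that TiDB Cloud generates rather than building it yourself; this sketch only shows which fields end up where in the final CLI invocation.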
@@ -96,7 +89,7 @@ To use the AWS Management Console to create a VPC interface endpoint, perform th ![Verify endpoint service](/media/tidb-cloud/private-endpoint/create-endpoint-2.png) 3. Select **Other endpoint services**. -4. Enter the service name that you found in the TiDB Cloud console. +4. Enter the service name `${your_endpoint_service_name}` from the generated command (`--service-name ${your_endpoint_service_name}`). 5. Click **Verify service**. 6. Select your VPC in the drop-down list. 7. Select the availability zones where your TiDB cluster is located in the **Subnets** area. @@ -109,7 +102,7 @@ To use the AWS Management Console to create a VPC interface endpoint, perform th > **Note:** > - > Make sure the selected security group allows inbound access from your EC2 instances on Port 4000 or a customer-defined port. + > Make sure the selected security group allows inbound access from your EC2 instances on Port 4000 or a customer-defined port. 9. Click **Create endpoint**. @@ -118,27 +111,24 @@ To use the AWS Management Console to create a VPC interface endpoint, perform th To use the AWS CLI to create a VPC interface endpoint, perform the following steps: -1. Fill in the **VPC ID** and **Subnet IDs** fields on the private endpoint creation page. You can get the IDs from your AWS Management Console. -2. Copy the command in the lower area of the page and run it in your terminal. Then click **Next**. +1. Copy the generated command and run it in your terminal. +2. Record the VPC endpoint ID you just created. > **Tip:** > > - Before running the command, you need to have AWS CLI installed and configured. See [AWS CLI configuration basics](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-quickstart.html) for details. > > - If your service is spanning across more than three availability zones (AZs), you will get an error message indicating that the VPC endpoint service does not support the AZ of the subnet. 
This issue occurs when there is an extra AZ in your selected region in addition to the AZs where your TiDB cluster is located. In this case, you can contact [PingCAP Technical Support](https://docs.pingcap.com/tidbcloud/tidb-cloud-support). -> -> - You cannot copy the command until TiDB Cloud finishes creating an endpoint service in the background.
-### Step 4. Accept the endpoint connection +### Step 3. Fill in your endpoint ID 1. Go back to the TiDB Cloud console. -2. Fill in the box with your VPC endpoint ID on the **Create Private Endpoint** page. -3. Click **Next**. +2. On the **Create AWS Private Endpoint Connection** page, enter your VPC endpoint ID. -### Step 5. Enable private DNS +### Step 4. Enable private DNS and create connection Enable private DNS in AWS. You can either use the AWS Management Console or the AWS CLI. @@ -166,11 +156,11 @@ aws ec2 modify-vpc-endpoint --vpc-endpoint-id ${your_vpc_endpoint_id} --private- -Click **Create** in the TiDB Cloud console to finalize the creation of the private endpoint. +Click **Create Private Endpoint Connection** in the TiDB Cloud console to finalize the creation of the private endpoint. -Then you can connect to the endpoint service. +Then you can connect to your TiDB cluster. -### Step 6. Connect to your TiDB cluster +### Step 5. Connect to your TiDB cluster After you have enabled the private DNS, go back to the TiDB Cloud console and take the following steps: diff --git a/tidb-cloud/set-up-vpc-peering-connections.md b/tidb-cloud/set-up-vpc-peering-connections.md index 23f6986e2b777..2edbec2391000 100644 --- a/tidb-cloud/set-up-vpc-peering-connections.md +++ b/tidb-cloud/set-up-vpc-peering-connections.md @@ -13,41 +13,45 @@ To connect your application to TiDB Cloud via VPC peering, you need to set up [V VPC peering connection is a networking connection between two VPCs that enables you to route traffic between them using private IP addresses. Instances in either VPC can communicate with each other as if they are within the same network. -Currently, TiDB Cloud only supports VPC peering in the same region for the same project. TiDB clusters of the same project in the same region are created in the same VPC. 
Therefore, once VPC peering is set up in a region of a project, all the TiDB clusters created in the same region of this project can be connected in your VPC. VPC peering setup differs among cloud providers. +Currently, TiDB clusters of the same project in the same region are created in the same VPC. Therefore, once VPC peering is set up in a region of a project, all the TiDB clusters created in the same region of this project can be connected in your VPC. VPC peering setup differs among cloud providers. > **Tip:** > > To connect your application to TiDB Cloud, you can also set up [private endpoint connection](/tidb-cloud/set-up-private-endpoint-connections.md) with TiDB Cloud, which is secure and private, and does not expose your data to the public internet. It is recommended to use private endpoints over VPC peering connections. -## Prerequisite: Set a Project CIDR +## Prerequisite: Set a CIDR for a region -Project CIDR (Classless Inter-Domain Routing) is the CIDR block used for network peering in a project. +CIDR (Classless Inter-Domain Routing) is the IP address block used for creating the VPC for TiDB Dedicated clusters. -Before adding VPC Peering requests to a region, you need to set a project CIDR for your project's cloud provider (AWS or Google Cloud) to establish a peering link to your application's VPC. +Before adding VPC Peering requests to a region, you must set a CIDR for that region and create an initial TiDB Dedicated cluster in that region. Once the first Dedicated cluster is created, TiDB Cloud creates the VPC of the cluster, allowing you to establish a peering link to your application's VPC. -You can set the project CIDR when creating the first TiDB Dedicated of your project. If you want to set the project CIDR before creating the cluster, perform the following operations: +You can set the CIDR when creating the first TiDB Dedicated cluster. If you want to set the CIDR before creating the cluster, perform the following operations: 1.
Log in to the [TiDB Cloud console](https://tidbcloud.com). 2. Click in the lower-left corner, switch to the target project if you have multiple projects, and then click **Project Settings**. -3. On the **Project Settings** page of your project, click **Network Access** in the left navigation pane, and then click the **Project CIDR** tab. -4. Click **Add a project CIDR for AWS** or **Add a project CIDR for Google Cloud** according to your cloud provider, specify one of the following network addresses in the **Project CIDR** field, and then click **Confirm**. +3. On the **Project Settings** page of your project, click **Network Access** in the left navigation pane, click the **Project CIDR** tab, and then select **AWS** or **Google Cloud** according to your cloud provider. +4. In the upper-right corner, click **Create CIDR**. Specify the region and CIDR value in the **Create AWS CIDR** or **Create Google Cloud CIDR** dialog, and then click **Confirm**. + + ![Project-CIDR4](/media/tidb-cloud/Project-CIDR4.png) > **Note:** > - > To avoid any conflicts with the CIDR of the VPC where your application is located, you need to set a different project CIDR in this field. + > - To avoid any conflicts with the CIDR of the VPC where your application is located, you need to set a different project CIDR in this field. + > - For AWS Region, it is recommended to configure an IP range size between `/16` and `/23`. Supported network addresses include: + > - 10.250.0.0 - 10.251.255.255 + > - 172.16.0.0 - 172.31.255.255 + > - 192.168.0.0 - 192.168.255.255 - - 10.250.0.0/16 - - 10.250.0.0/17 - - 10.250.128.0/17 - - 172.30.0.0/16 - - 172.30.0.0/17 - - 172.30.128.0/17 + > - For Google Cloud Region, it is recommended to configure an IP range size between `/19` and `/20`. If you want to configure an IP range size between `/16` and `/18`, contact [TiDB Cloud Support](/tidb-cloud/tidb-cloud-support.md). 
Supported network addresses include: + > - 10.250.0.0 - 10.251.255.255 + > - 172.16.0.0 - 172.17.255.255 + > - 172.30.0.0 - 172.31.255.255 - ![Project-CIDR4](/media/tidb-cloud/Project-CIDR4.png) + > - TiDB Cloud limits the number of TiDB Cloud nodes in a region of a project based on the CIDR block size of the region. -5. View the CIDR of the cloud provider and the specific region. +5. View the CIDR of the cloud provider and the specific region. - The region CIDR is inactive by default. To activate the region CIDR, you need to create a cluster in the target region. When the region CIDR is active, you can create VPC Peering for the region. + The CIDR is inactive by default. To activate the CIDR, you need to create a cluster in the target region. When the region CIDR is active, you can create VPC Peering for the region. ![Project-CIDR2](/media/tidb-cloud/Project-CIDR2.png) @@ -59,22 +63,24 @@ This section describes how to set up VPC peering connections on AWS. For Google 1. Log in to the [TiDB Cloud console](https://tidbcloud.com). 2. Click in the lower-left corner, switch to the target project if you have multiple projects, and then click **Project Settings**. -3. On the **Project Settings** page of your project, click **Network Access** in the left navigation pane, and then click the **VPC Peering** tab. +3. On the **Project Settings** page of your project, click **Network Access** in the left navigation pane, and click the **VPC Peering** > **AWS** tab. The **VPC Peering** configuration is displayed by default. -4. Click **Add**, choose the AWS icon, and then fill in the required information of your existing AWS VPC: +4. In the upper-right corner, click **Create VPC Peering**, select the **TiDB Cloud VPC Region**, and then fill in the required information of your existing AWS VPC: - - Region + - Your VPC Region - AWS Account ID - VPC ID - VPC CIDR - You can get these information from your VPC details on the VPC dashboard. 
+ You can get this information on the VPC details page in the [AWS Management Console](https://console.aws.amazon.com/). TiDB Cloud supports creating VPC peerings between VPCs in the same region or from two different regions. ![VPC peering](/media/tidb-cloud/vpc-peering/vpc-peering-creating-infos.png) -5. Click **Initialize**. The **Approve VPC Peerings** dialog is displayed. +5. Click **Create** to send the VPC peering request, and then view the VPC peering information on the **VPC Peering** > **AWS** tab. The status of the newly created VPC peering is **System Checking**. + +6. To view detailed information about your newly created VPC peering, click **...** > **View** in the **Action** column. The **VPC Peering Details** page is displayed. ### Step 2. Approve and configure the VPC peering @@ -141,7 +147,7 @@ Use either of the following two options to approve and configure the VPC peering aws ec2 describe-route-tables --region "$app_region" --filters Name=vpc-id,Values="$app_vpc_id" --query 'RouteTables[*].RouteTableId' --output text | tr "\t" "\n" | while read row do app_route_table_id="$row" - aws ec2 create-route --route-table-id "$app_route_table_id" --destination-cidr-block "$tidbcloud_project_cidr" --vpc-peering-connection-id "$pcx_tidb_to_app_id" + aws ec2 create-route --region "$app_region" --route-table-id "$app_route_table_id" --destination-cidr-block "$tidbcloud_project_cidr" --vpc-peering-connection-id "$pcx_tidb_to_app_id" done ``` @@ -163,15 +169,15 @@ After finishing the configuration, the VPC peering has been created. You can [co You can also use the AWS dashboard to configure the VPC peering connection. -1. Confirm to accept the peer connection request in your AWS console. +1. Confirm and accept the peering connection request in your [AWS Management Console](https://console.aws.amazon.com/). - 1.
Sign in to the [AWS Management Console](https://console.aws.amazon.com/) and click **Services** on the top menu bar. Enter `VPC` in the search box and go to the VPC service page. ![AWS dashboard](/media/tidb-cloud/vpc-peering/aws-vpc-guide-1.jpg) 2. From the left navigation bar, open the **Peering Connections** page. On the **Create Peering Connection** tab, a peering connection is in the **Pending Acceptance** status. - 3. Confirm the requester owner is TiDB Cloud (`380838443567`). Right-click the peering connection and select **Accept Request** to accept the request in the **Accept VPC peering connection request** dialog. + 3. Confirm that the requester owner and the requester VPC match **TiDB Cloud AWS Account ID** and **TiDB Cloud VPC ID** on the **VPC Peering Details** page of the [TiDB Cloud console](https://tidbcloud.com). Right-click the peering connection and select **Accept Request** to accept the request in the **Accept VPC peering connection request** dialog. ![AWS VPC peering requests](/media/tidb-cloud/vpc-peering/aws-vpc-guide-3.png) @@ -183,7 +189,7 @@ You can also use the AWS dashboard to configure the VPC peering connection. ![Search all route tables related to VPC](/media/tidb-cloud/vpc-peering/aws-vpc-guide-4.png) - 3. Right-click each route table and select **Edit routes**. On the edit page, add a route with a destination to the Project CIDR (by checking the **VPC Peering** configuration page in the TiDB Cloud console) and fill in your peering connection ID in the **Target** column. + 3. Right-click each route table and select **Edit routes**. On the edit page, add a route with a destination to the TiDB Cloud CIDR (by checking the **VPC Peering** configuration page in the TiDB Cloud console) and fill in your peering connection ID in the **Target** column. ![Edit all route tables](/media/tidb-cloud/vpc-peering/aws-vpc-guide-5.png) @@ -205,26 +211,23 @@ Now you have successfully set up the VPC peering connection. Next, [connect to t 1. 
Log in to the [TiDB Cloud console](https://tidbcloud.com). 2. Click in the lower-left corner, switch to the target project if you have multiple projects, and then click **Project Settings**. -3. On the **Project Settings** page of your project, click **Network Access** in the left navigation pane, and then click the **VPC Peering** tab. +3. On the **Project Settings** page of your project, click **Network Access** in the left navigation pane, and click the **VPC Peering** > **Google Cloud** tab. The **VPC Peering** configuration is displayed by default. -4. Click **Add**, choose the Google Cloud icon, and then fill in the required information of your existing Google Cloud VPC: +4. In the upper-right corner, click **Create VPC Peering**, select the **TiDB Cloud VPC Region**, and then fill in the required information of your existing Google Cloud VPC: > **Tip:** > > You can follow instructions next to the **Application Google Cloud Project ID** and **VPC Network Name** fields to find the project ID and VPC network name. - - Region - - Application Google Cloud Project ID + - Google Cloud Project ID - VPC Network Name - VPC CIDR -5. Click **Initialize**. The **Approve VPC Peerings** dialog is displayed. - -6. Check the connection information of your TiDB VPC peerings. +5. Click **Create** to send the VPC peering request, and then view the VPC peering information on the **VPC Peering** > **Google Cloud** tab. The status of the newly created VPC peering is **System Checking**. - ![VPC-Peering](/media/tidb-cloud/VPC-Peering3.png) +6. To view detailed information about your newly created VPC peering, click **...** > **View** in the **Action** column. The **VPC Peering Details** page is displayed. 7. 
Execute the following command to finish the setup of VPC peerings: diff --git a/tidb-cloud/size-your-cluster.md b/tidb-cloud/size-your-cluster.md index 26a2ac7e3ed90..128cc679708e0 100644 --- a/tidb-cloud/size-your-cluster.md +++ b/tidb-cloud/size-your-cluster.md @@ -23,12 +23,17 @@ To learn performance test results of different cluster scales, see [TiDB Cloud P The supported vCPU and RAM sizes include the following: -- 4 vCPU, 16 GiB -- 8 vCPU, 16 GiB -- 16 vCPU, 32 GiB +| Standard size | High memory size | +|:---------:|:----------------:| +| 4 vCPU, 16 GiB | N/A | +| 8 vCPU, 16 GiB | 8 vCPU, 32 GiB | +| 16 vCPU, 32 GiB | 16 vCPU, 64 GiB | +| 32 vCPU, 64 GiB | 32 vCPU, 128 GiB | > **Note:** > +> To use the **32 vCPU, 128 GiB** size of TiDB, contact [TiDB Cloud Support](/tidb-cloud/tidb-cloud-support.md). +> > If the vCPU and RAM size of TiDB is set as **4 vCPU, 16 GiB**, note the following restrictions: > > - The node number of TiDB can only be set to 1 or 2, and the node number of TiKV is fixed to 3. @@ -84,10 +89,12 @@ To learn performance test results of different cluster scales, see [TiDB Cloud P The supported vCPU and RAM sizes include the following: -- 4 vCPU, 16 GiB -- 8 vCPU, 32 GiB -- 8 vCPU, 64 GiB -- 16 vCPU, 64 GiB +| Standard size | High memory size | +|:---------:|:----------------:| +| 4 vCPU, 16 GiB | N/A | +| 8 vCPU, 32 GiB | 8 vCPU, 64 GiB | +| 16 vCPU, 64 GiB | Coming soon | +| 32 vCPU, 128 GiB | N/A | > **Note:** > @@ -169,6 +176,7 @@ The supported node storage of different TiKV vCPUs is as follows: | 4 vCPU | 200 GiB | 2048 GiB | 500 GiB | | 8 vCPU | 200 GiB | 4096 GiB | 500 GiB | | 16 vCPU | 200 GiB | 6144 GiB | 500 GiB | +| 32 vCPU | 200 GiB | 6144 GiB | 500 GiB | > **Note:** > @@ -186,6 +194,7 @@ The supported vCPU and RAM sizes include the following: - 8 vCPU, 64 GiB - 16 vCPU, 128 GiB +- 32 vCPU, 256 GiB Note that TiFlash is unavailable when the vCPU and RAM size of TiDB or TiKV is set as **4 vCPU, 16 GiB**. 
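The TiKV node storage bounds above reduce to a simple rule: at least 200 GiB per node, with a per-vCPU maximum (2048 GiB for 4 vCPU, 4096 GiB for 8 vCPU, 6144 GiB for 16 and 32 vCPU). A bash sketch of that check (`check_tikv_storage` is an illustrative helper, not part of any TiDB tooling):

```shell
#!/usr/bin/env bash
# Validate a requested per-node TiKV storage size (in GiB) against the
# minimum (200 GiB) and the per-vCPU maximum from the sizing table.

check_tikv_storage() {
  local vcpu=$1 gib=$2
  local max
  case $vcpu in
    4)     max=2048 ;;
    8)     max=4096 ;;
    16|32) max=6144 ;;
    *) echo "unsupported TiKV vCPU size: $vcpu" >&2; return 2 ;;
  esac
  [ "$gib" -ge 200 ] && [ "$gib" -le "$max" ]
}

# Example: the default 500 GiB is valid for 16 vCPU nodes.
check_tikv_storage 16 500 && echo "valid"
```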
@@ -208,7 +217,8 @@ The supported node storage of different TiFlash vCPUs is as follows: | TiFlash vCPU | Min node storage | Max node storage | Default node storage | |:---------:|:----------------:|:----------------:|:--------------------:| | 8 vCPU | 200 GiB | 2048 GiB | 500 GiB | -| 16 vCPU | 200 GiB | 2048 GiB | 500 GiB | +| 16 vCPU | 200 GiB | 4096 GiB | 500 GiB | +| 32 vCPU | 200 GiB | 4096 GiB | 500 GiB | > **Note:** > diff --git a/tidb-cloud/terraform-get-tidbcloud-provider.md b/tidb-cloud/terraform-get-tidbcloud-provider.md index feebb0ae99d27..1b2637dfc2a00 100644 --- a/tidb-cloud/terraform-get-tidbcloud-provider.md +++ b/tidb-cloud/terraform-get-tidbcloud-provider.md @@ -48,7 +48,7 @@ For detailed steps, see [TiDB Cloud API documentation](https://docs.pingcap.com/ required_providers { tidbcloud = { source = "tidbcloud/tidbcloud" - version = "~> 0.1.0" + version = "~> 0.3.0" } } required_version = ">= 1.0.0" @@ -103,12 +103,28 @@ provider "tidbcloud" { `public_key` and `private_key` are the API key's public key and private key. You can also pass them through the environment variables: ``` -export TIDBCLOUD_PUBLIC_KEY = ${public_key} -export TIDBCLOUD_PRIVATE_KEY = ${private_key} +export TIDBCLOUD_PUBLIC_KEY=${public_key} +export TIDBCLOUD_PRIVATE_KEY=${private_key} ``` Now, you can use the TiDB Cloud Terraform Provider. +## Step 5. Configure TiDB Cloud Terraform Provider with sync configuration + +Terraform provider (>= 0.3.0) supports an optional parameter `sync`. + +By setting `sync` to `true`, you can create, update, or delete resources synchronously. Here is an example: + +``` +provider "tidbcloud" { + public_key = "your_public_key" + private_key = "your_private_key" + sync = true +} +``` + +Setting `sync` to `true` is recommended, but note that `sync` currently only works with the cluster resource. If you need synchronous operations for other resources, [contact TiDB Cloud Support](/tidb-cloud/tidb-cloud-support.md). 
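The `~> 0.3.0` constraint shown above is Terraform's pessimistic operator: it accepts any `0.3.x` release at or above `0.3.0` but rejects `0.4.0` and later. The matching rule can be sketched in bash (illustrative only; Terraform's own resolver handles the general constraint syntax):

```shell
#!/usr/bin/env bash
# Illustrates "~> MAJOR.MINOR.PATCH": the candidate version must keep the
# same MAJOR.MINOR and have a patch level >= the constraint's patch level.

satisfies_pessimistic() {
  local version=$1 constraint=$2
  local v_major=${version%%.*} v_rest=${version#*.}
  local v_minor=${v_rest%%.*}  v_patch=${v_rest#*.}
  local c_major=${constraint%%.*} c_rest=${constraint#*.}
  local c_minor=${c_rest%%.*}  c_patch=${c_rest#*.}
  [ "$v_major" = "$c_major" ] && [ "$v_minor" = "$c_minor" ] && [ "$v_patch" -ge "$c_patch" ]
}

satisfies_pessimistic 0.3.2 0.3.0 && echo "0.3.2 matches ~> 0.3.0"
satisfies_pessimistic 0.4.0 0.3.0 || echo "0.4.0 does not match ~> 0.3.0"
```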
+ ## Next step -Get started by managing a cluster with the [cluster resource](/tidb-cloud/terraform-use-cluster-resource.md). \ No newline at end of file +Get started by managing a cluster with the [cluster resource](/tidb-cloud/terraform-use-cluster-resource.md). diff --git a/tidb-cloud/terraform-use-cluster-resource.md b/tidb-cloud/terraform-use-cluster-resource.md index 1f223d17ce471..7ca347bcb2975 100644 --- a/tidb-cloud/terraform-use-cluster-resource.md +++ b/tidb-cloud/terraform-use-cluster-resource.md @@ -39,6 +39,7 @@ To view the information of all available projects, you can use the `tidbcloud_pr provider "tidbcloud" { public_key = "your_public_key" private_key = "your_private_key" + sync = true } data "tidbcloud_projects" "example_project" { @@ -137,6 +138,7 @@ To get the cluster specification information, you can use the `tidbcloud_cluster provider "tidbcloud" { public_key = "your_public_key" private_key = "your_private_key" + sync = true } data "tidbcloud_cluster_specs" "example_cluster_spec" { } @@ -267,7 +269,7 @@ In the results: > **Note:** > -> Before you begin, make sure that you have set a Project CIDR in the TiDB Cloud console. For more information, see [Set a Project CIDR](/tidb-cloud/set-up-vpc-peering-connections.md#prerequisite-set-a-project-cidr). +> Before you begin, make sure that you have set a CIDR in the TiDB Cloud console. For more information, see [Set a CIDR](/tidb-cloud/set-up-vpc-peering-connections.md#prerequisite-set-a-cidr-for-a-region). You can create a cluster using the `tidbcloud_cluster` resource. @@ -289,6 +291,7 @@ The following example shows how to create a TiDB Dedicated cluster. 
provider "tidbcloud" { public_key = "your_public_key" private_key = "your_private_key" + sync = true } resource "tidbcloud_cluster" "example_cluster" { diff --git a/tidb-cloud/third-party-monitoring-integrations.md b/tidb-cloud/third-party-monitoring-integrations.md index d0cd59441a9cf..761de322beb24 100644 --- a/tidb-cloud/third-party-monitoring-integrations.md +++ b/tidb-cloud/third-party-monitoring-integrations.md @@ -35,7 +35,7 @@ For the detailed integration steps and a list of metrics that Datadog tracks, re ### Prometheus and Grafana integration (beta) -With the Prometheus and Grafana integration, you can get a scrape_config file for Prometheus from TiDB Cloud and use the content from the file to configure Prometheus. You can view these metrics in your Grafana dashboards. +With the Prometheus and Grafana integration, you can get a `scrape_config` file for Prometheus from TiDB Cloud and use the content from the file to configure Prometheus. You can view these metrics in your Grafana dashboards. For the detailed integration steps and a list of metrics that Prometheus tracks, see [Integrate TiDB Cloud with Prometheus and Grafana](/tidb-cloud/monitor-prometheus-and-grafana-integration.md). diff --git a/tidb-cloud/ticloud-ai.md b/tidb-cloud/ticloud-ai.md new file mode 100644 index 0000000000000..7f2a6b705c498 --- /dev/null +++ b/tidb-cloud/ticloud-ai.md @@ -0,0 +1,47 @@ +--- +title: ticloud ai +summary: The reference of `ticloud ai`. +--- + +# ticloud ai + +Chat with TiDB Bot: + +```shell +ticloud ai [flags] +``` + +## Examples + +Chat with TiDB Bot in interactive mode: + +```shell +ticloud ai +``` + +Chat with TiDB Bot in non-interactive mode: + +```shell +ticloud ai -q "How to create a cluster?" +``` + +## Flags + +In non-interactive mode, you need to manually enter the required flags. In interactive mode, you can just follow CLI prompts to fill them in. 
+ +| Flag | Description | Required | Note | +|--------------------|-----------------------------------|----------|------------------------------------------------------| +| -q, --query string | Specifies your query to TiDB Bot. | Yes | Only works in non-interactive mode. | +| -h, --help | Shows help information for this command. | No | Works in both non-interactive and interactive modes. | + +## Inherited flags + +| Flag | Description | Required | Note | +|----------------------|--------------------------------------------------------------------------------------------|----------|------------------------------------------------------------------------------------------------------------------| +| --no-color | Disables color in output. | No | Only works in non-interactive mode. In interactive mode, disabling color might not work with some UI components. | +| -P, --profile string | Specifies the active [user profile](/tidb-cloud/cli-reference.md#user-profile) used in this command. | No | Works in both non-interactive and interactive modes. | +| -D, --debug | Enables debug mode. | No | Works in both non-interactive and interactive modes. | + +## Feedback + +If you have any questions or suggestions on the TiDB Cloud CLI, feel free to create an [issue](https://github.com/tidbcloud/tidbcloud-cli/issues/new/choose). Also, we welcome any contributions. diff --git a/tidb-cloud/ticloud-auth-login.md b/tidb-cloud/ticloud-auth-login.md new file mode 100644 index 0000000000000..3ed324f317c67 --- /dev/null +++ b/tidb-cloud/ticloud-auth-login.md @@ -0,0 +1,47 @@ +--- +title: ticloud auth login +summary: The reference of `ticloud auth login`. 
+--- + +# ticloud auth login + +Authenticate with TiDB Cloud: + +```shell +ticloud auth login [flags] +``` + +## Examples + +To log into TiDB Cloud: + +```shell +ticloud auth login +``` + +To log into TiDB Cloud with insecure storage: + +```shell +ticloud auth login --insecure-storage +``` + +## Flags + +In non-interactive mode, you need to manually enter the required flags. In interactive mode, you can just follow CLI prompts to fill them in. + +| Flag | Description | Required | Note | +|--------------------|---------------------------------------------------------------------------|----------|------------------------------------------------------| +| --insecure-storage | Saves authentication credentials in plain text instead of credential store. | No | Works in both non-interactive and interactive modes. | +| -h, --help | Shows help information for this command. | No | Works in both non-interactive and interactive modes. | + +## Inherited flags + +| Flag | Description | Required | Note | +|----------------------|--------------------------------------------------------------------------------------------|----------|------------------------------------------------------------------------------------------------------------------| +| --no-color | Disables color in output. | No | Only works in non-interactive mode. In interactive mode, disabling color might not work with some UI components. | +| -P, --profile string | Specifies the active [user profile](/tidb-cloud/cli-reference.md#user-profile) used in this command. | No | Works in both non-interactive and interactive modes. | +| -D, --debug | Enables debug mode. | No | Works in both non-interactive and interactive modes. | + +## Feedback + +If you have any questions or suggestions on the TiDB Cloud CLI, feel free to create an [issue](https://github.com/tidbcloud/tidbcloud-cli/issues/new/choose). Also, we welcome any contributions. 
diff --git a/tidb-cloud/ticloud-auth-logout.md b/tidb-cloud/ticloud-auth-logout.md new file mode 100644 index 0000000000000..39bfa978c3c51 --- /dev/null +++ b/tidb-cloud/ticloud-auth-logout.md @@ -0,0 +1,32 @@ +--- +title: ticloud auth logout +summary: The reference of `ticloud auth logout`. +--- + +# ticloud auth logout + +Log out of TiDB Cloud: + +```shell +ticloud auth logout [flags] +``` + +## Examples + +To log out of TiDB Cloud: + +```shell +ticloud auth logout +``` + +## Inherited flags + +| Flag | Description | Required | Note | +|----------------------|--------------------------------------------------------------------------------------------|----------|------------------------------------------------------------------------------------------------------------------| +| --no-color | Disables color in output. | No | Only works in non-interactive mode. In interactive mode, disabling color might not work with some UI components. | +| -P, --profile string | Specifies the active [user profile](/tidb-cloud/cli-reference.md#user-profile) used in this command. | No | Works in both non-interactive and interactive modes. | +| -D, --debug | Enables debug mode. | No | Works in both non-interactive and interactive modes. | + +## Feedback + +If you have any questions or suggestions on the TiDB Cloud CLI, feel free to create an [issue](https://github.com/tidbcloud/tidbcloud-cli/issues/new/choose). Also, we welcome any contributions. diff --git a/tidb-cloud/ticloud-branch-connect-info.md b/tidb-cloud/ticloud-branch-connect-info.md deleted file mode 100644 index f2aac9ad26e80..0000000000000 --- a/tidb-cloud/ticloud-branch-connect-info.md +++ /dev/null @@ -1,49 +0,0 @@ ---- -title: ticloud branch connect-info -summary: The reference of `ticloud branch connect-info`. 
---- - -# ticloud branch connect-info - -Get the connection string of a branch: - -```shell -ticloud branch connect-info [flags] -``` - -## Examples - -Get the connection string of a branch in interactive mode: - -```shell -ticloud branch connect-info -``` - -Get the connection string of a branch in non-interactive mode: - -```shell -ticloud branch connect-info --branch-id --cluster-id --client --operating-system -``` - -## Flags - -In non-interactive mode, you need to manually enter the required flags. In interactive mode, you can just follow CLI prompts to fill them in. - -| Flag | Description | Required | Note | -|---------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------|------------------------------------------------------| -| -c, --cluster-id string | The ID of the cluster, in which the branch is created | Yes | Only works in non-interactive mode. | -| -b, --branch-id string | The ID of the branch | Yes | Only works in non-interactive mode. | -| --client string | The desired client used for the connection. Supported clients include `general`, `mysql_cli`, `mycli`, `libmysqlclient`, `python_mysqlclient`, `pymysql`, `mysql_connector_python`, `mysql_connector_java`, `go_mysql_driver`, `node_mysql2`, `ruby_mysql2`, `php_mysqli`, `rust_mysql`, `mybatis`, `hibernate`, `spring_boot`, `gorm`, `prisma`, `sequelize_mysql2`, `django_tidb`, `sqlalchemy_mysqlclient`, and `active_record`. | Yes | Only works in non-interactive mode. | -| --operating-system string | The operating system name. 
Supported operating systems include `macOS`, `Windows`, `Ubuntu`, `CentOS`, `RedHat`, `Fedora`, `Debian`, `Arch`, `OpenSUSE`, `Alpine`, and `Others`. | Yes | Only works in non-interactive mode. | -| -h, --help | Help information for this command | No | Works in both non-interactive and interactive modes. | - -## Inherited flags - -| Flag | Description | Required | Note | -|----------------------|-------------------------------------------------------------------------------------------------------|----------|-------------------------------------------------------------------------------------------------------------------| -| --no-color | Disables color in output. | No | Only works in non-interactive mode. In interactive mode, disabling color might not work with some UI components. | -| -P, --profile string | Specifies the active [user profile](/tidb-cloud/cli-reference.md#user-profile) used in this command. | No | Works in both non-interactive and interactive modes. | - -## Feedback - -If you have any questions or suggestions on the TiDB Cloud CLI, feel free to create an [issue](https://github.com/tidbcloud/tidbcloud-cli/issues/new/choose). Also, we welcome any contributions. diff --git a/tidb-cloud/ticloud-branch-create.md b/tidb-cloud/ticloud-branch-create.md index d73a7dc20853b..c2f97c8b6ed77 100644 --- a/tidb-cloud/ticloud-branch-create.md +++ b/tidb-cloud/ticloud-branch-create.md @@ -1,50 +1,47 @@ --- -title: ticloud branch create -summary: The reference of `ticloud branch create`. +title: ticloud serverless branch create +summary: The reference of `ticloud serverless branch create`. 
--- -# ticloud branch create +# ticloud serverless branch create -Create a branch for a cluster: +Create a [branch](/tidb-cloud/branch-overview.md) for a TiDB Serverless cluster: ```shell -ticloud branch create [flags] +ticloud serverless branch create [flags] ``` -> **Note:** -> -> Currently, you can only create branches for [TiDB Serverless](/tidb-cloud/select-cluster-tier.md#tidb-serverless) cluster. - ## Examples -Create a branch in interactive mode: +Create a branch for a TiDB Serverless cluster in interactive mode: ```shell -ticloud branch create +ticloud serverless branch create ``` -Create a branch in non-interactive mode: +Create a branch for a TiDB Serverless cluster in non-interactive mode: ```shell -ticloud branch create --cluster-id --branch-name +ticloud serverless branch create --cluster-id --display-name ``` ## Flags In non-interactive mode, you need to manually enter the required flags. In interactive mode, you can just follow CLI prompts to fill them in. -| Flag | Description | Required | Note | -|-------------------------|------------------------------------------------------------|----------|-----------------------------------------------------| -| -c, --cluster-id string | The ID of the cluster, in which the branch will be created | Yes | Only works in non-interactive mode. | -| --branch-name string | The name of the branch to be created | Yes | Only works in non-interactive mode. | -| -h, --help | Get help information for this command | No | Works in both non-interactive and interactive modes | +| Flag | Description | Required | Note | +|---------------------------|------------------------------------------------------------|----------|-----------------------------------------------------| +| -c, --cluster-id string | Specifies the ID of the cluster, in which the branch will be created. | Yes | Only works in non-interactive mode. | +| -n, --display-name string | Specifies the name of the branch to be created. 
| Yes | Only works in non-interactive mode. | +| -h, --help | Shows help information for this command. | No | Works in both non-interactive and interactive modes | ## Inherited flags -| Flag | Description | Required | Note | -|----------------------|------------------------------------------------------------------------------------------------------|----------|--------------------------------------------------------------------------------------------------------------------| -| --no-color | Disables color in output. | No | Only works in non-interactive mode. In interactive mode, disabling color might not work with some UI components. | -| -P, --profile string | Specifies the active [user profile](/tidb-cloud/cli-reference.md#user-profile) used in this command. | No | Works in both non-interactive and interactive modes. | +| Flag | Description | Required | Note | +|----------------------|------------------------------------------------------------------------------------------------------|----------|------------------------------------------------------------------------------------------------------------------| +| --no-color | Disables color in output. | No | Only works in non-interactive mode. In interactive mode, disabling color might not work with some UI components. | +| -P, --profile string | Specifies the active [user profile](/tidb-cloud/cli-reference.md#user-profile) used in this command. | No | Works in both non-interactive and interactive modes. | +| -D, --debug | Enables debug mode. | No | Works in both non-interactive and interactive modes. | ## Feedback diff --git a/tidb-cloud/ticloud-branch-delete.md b/tidb-cloud/ticloud-branch-delete.md index 678d7e64bf770..51f5d5e204af5 100644 --- a/tidb-cloud/ticloud-branch-delete.md +++ b/tidb-cloud/ticloud-branch-delete.md @@ -1,31 +1,31 @@ --- -title: ticloud branch delete -summary: The reference of `ticloud branch delete`. 
+title: ticloud serverless branch delete +summary: The reference of `ticloud serverless branch delete`. --- -# ticloud branch delete +# ticloud serverless branch delete -Delete a branch from your cluster: +Delete a branch from your TiDB Serverless cluster: ```shell -ticloud branch delete [flags] +ticloud serverless branch delete [flags] ``` Or use the following alias command: ```shell -ticloud branch rm [flags] +ticloud serverless branch rm [flags] ``` ## Examples -Delete a branch in interactive mode: +Delete a TiDB Serverless branch in interactive mode: ```shell -ticloud branch delete +ticloud serverless branch delete ``` -Delete a branch in non-interactive mode: +Delete a TiDB Serverless branch in non-interactive mode: ```shell ticloud serverless branch delete --branch-id <branch-id> --cluster-id <cluster-id> @@ -37,17 +37,18 @@ In non-interactive mode, you need to manually enter the required flags. In inter | Flag | Description | Required | Note | |-------------------------|--------------------------------------------|----------|------------------------------------------------------| -| -b, --branch-id string | The ID of the branch to be deleted | Yes | Only works in non-interactive mode. | -| --force | Deletes a branch without confirmation | No | Works in both non-interactive and interactive modes. | -| -h, --help | Help information for this command | No | Works in both non-interactive and interactive modes. | -| -c, --cluster-id string | The cluster ID of the branch to be deleted | Yes | Only works in non-interactive mode. | +| -b, --branch-id string | Specifies the ID of the branch to be deleted. | Yes | Only works in non-interactive mode. | +| --force | Deletes a branch without confirmation. | No | Works in both non-interactive and interactive modes. | +| -h, --help | Shows help information for this command. | No | Works in both non-interactive and interactive modes. | +| -c, --cluster-id string | Specifies the ID of the cluster. | Yes | Only works in non-interactive mode. 
| ## Inherited flags -| Flag | Description | Required | Note | -|----------------------|--------------------------------------------------------------------------------------------|----------|-------------------------------------------------------------------------------------------------------------------| -| --no-color | Disables color in output. | No | Only works in non-interactive mode. In interactive mode, disabling color might not work with some UI components. | -| -P, --profile string | The active [user profile](/tidb-cloud/cli-reference.md#user-profile) used in this command. | No | Works in both non-interactive and interactive modes. | +| Flag | Description | Required | Note | +|----------------------|--------------------------------------------------------------------------------------------|----------|------------------------------------------------------------------------------------------------------------------| +| --no-color | Disables color in output. | No | Only works in non-interactive mode. In interactive mode, disabling color might not work with some UI components. | +| -P, --profile string | Specifies the active [user profile](/tidb-cloud/cli-reference.md#user-profile) used in this command. | No | Works in both non-interactive and interactive modes. | +| -D, --debug | Enables debug mode. | No | Works in both non-interactive and interactive modes. | ## Feedback diff --git a/tidb-cloud/ticloud-branch-describe.md b/tidb-cloud/ticloud-branch-describe.md index c5d8299a42c86..f11a21fd2d96b 100644 --- a/tidb-cloud/ticloud-branch-describe.md +++ b/tidb-cloud/ticloud-branch-describe.md @@ -1,34 +1,34 @@ --- -title: ticloud branch describe -summary: The reference of `ticloud branch describe`. +title: ticloud serverless branch describe +summary: The reference of `ticloud serverless branch describe`. 
--- -# ticloud branch describe +# ticloud serverless branch describe Get information about a branch (such as the endpoints, [user name prefix](/tidb-cloud/select-cluster-tier.md#user-name-prefix), and usages): ```shell -ticloud branch describe [flags] +ticloud serverless branch describe [flags] ``` Or use the following alias command: ```shell -ticloud branch get [flags] +ticloud serverless branch get [flags] ``` ## Examples -Get the branch information in interactive mode: +Get branch information of a TiDB Serverless cluster in interactive mode: ```shell -ticloud branch describe +ticloud serverless branch describe ``` -Get the branch information in non-interactive mode: +Get branch information of a TiDB Serverless cluster in non-interactive mode: ```shell -ticloud branch describe --branch-id --cluster-id +ticloud serverless branch describe --branch-id --cluster-id ``` ## Flags @@ -37,16 +37,17 @@ In non-interactive mode, you need to manually enter the required flags. In inter | Flag | Description | Required | Note | |-------------------------|-----------------------------------|----------|------------------------------------------------------| -| -b, --branch-id string | The ID of the branch | Yes | Only works in non-interactive mode. | -| -h, --help | Help information for this command | No | Works in both non-interactive and interactive modes. | -| -c, --cluster-id string | The cluster ID of the branch | Yes | Only works in non-interactive mode. | +| -b, --branch-id string | Specifies the ID of the branch. | Yes | Only works in non-interactive mode. | +| -h, --help | Shows help information for this command. | No | Works in both non-interactive and interactive modes. | +| -c, --cluster-id string | Specifies the ID of the cluster. | Yes | Only works in non-interactive mode. 
| ## Inherited flags -| Flag | Description | Required | Note | -|----------------------|------------------------------------------------------------------------------------------------------|----------|--------------------------------------------------------------------------------------------------------------------| -| --no-color | Disables color in output. | No | Only works in non-interactive mode. In interactive mode, disabling color might not work with some UI components. | -| -P, --profile string | Specifies the active [user profile](/tidb-cloud/cli-reference.md#user-profile) used in this command. | No | Works in both non-interactive and interactive modes. | +| Flag | Description | Required | Note | +|----------------------|------------------------------------------------------------------------------------------------------|----------|------------------------------------------------------------------------------------------------------------------| +| --no-color | Disables color in output. | No | Only works in non-interactive mode. In interactive mode, disabling color might not work with some UI components. | +| -P, --profile string | Specifies the active [user profile](/tidb-cloud/cli-reference.md#user-profile) used in this command. | No | Works in both non-interactive and interactive modes. | +| -D, --debug | Enables debug mode. | No | Works in both non-interactive and interactive modes. | ## Feedback diff --git a/tidb-cloud/ticloud-branch-list.md b/tidb-cloud/ticloud-branch-list.md index a3cacf5d665d5..49285d1e93946 100644 --- a/tidb-cloud/ticloud-branch-list.md +++ b/tidb-cloud/ticloud-branch-list.md @@ -1,65 +1,59 @@ --- -title: ticloud branch list -summary: The reference of `ticloud branch list`. +title: ticloud serverless branch list +summary: The reference of `ticloud serverless branch list`. 
--- -# ticloud branch list +# ticloud serverless branch list -List all branches for a cluster: +List all branches for a TiDB Serverless cluster: ```shell -ticloud branch list [flags] +ticloud serverless branch list [flags] ``` Or use the following alias command: ```shell -ticloud branch ls [flags] +ticloud serverless branch ls [flags] ``` ## Examples -List all branches for a cluster (interactive mode): +List all branches for a TiDB Serverless cluster in interactive mode: ```shell -ticloud branch list +ticloud serverless branch list ``` -List all branches for a specified cluster (non-interactive mode): +List all branches for a specific TiDB Serverless cluster in non-interactive mode: ```shell -ticloud branch list <cluster-id> +ticloud serverless branch list -c <cluster-id> ``` -List all branches for a specified cluster in the JSON format: +List all branches for a specific TiDB Serverless cluster in the JSON format: ```shell -ticloud branch list -o json +ticloud serverless branch list -o json ``` -## Arguments - -The `branch list` command has the following arguments: - -| Argument Index | Description | Required | Note | -|----------------|-----------------------------------------------------|----------|---------------------------------------| -| `<cluster-id>` | The cluster ID of the branches which will be listed | Yes | Only works in non-interactive mode. | - ## Flags In non-interactive mode, you need to manually enter the required flags. In interactive mode, you can just follow CLI prompts to fill them in. -| Flag | Description | Required | Note | -|---------------------|--------------------------------------------------------------------------------------------------------------------------|----------|------------------------------------------------------| -| -h, --help | Help information for this command | No | Works in both non-interactive and interactive modes. | -| -o, --output string | Output format (`human` by default). Valid values are `human` or `json`. 
To get a complete result, use the `json` format. | No | Works in both non-interactive and interactive modes. | +| Flag | Description | Required | Note | +|-------------------------|--------------------------------------------------------------------------------------------------------------------------|----------|------------------------------------------------------| +| -c, --cluster-id string | Specifies the ID of the cluster. | Yes | Only works in non-interactive mode. | +| -h, --help | Shows help information for this command. | No | Works in both non-interactive and interactive modes. | +| -o, --output string | Specifies the output format (`human` by default). Valid values are `human` or `json`. To get a complete result, use the `json` format. | No | Works in both non-interactive and interactive modes. | ## Inherited flags -| Flag | Description | Required | Note | -|----------------------|------------------------------------------------------------------------------------------------------|----------|--------------------------------------------------------------------------------------------------------------------------| -| --no-color | Disables color in output. | No | Only works in non-interactive mode. In interactive mode, disabling color might not work with some UI components. | -| -P, --profile string | Specifies the active [user profile](/tidb-cloud/cli-reference.md#user-profile) used in this command. | No | Works in both non-interactive and interactive modes. | +| Flag | Description | Required | Note | +|----------------------|------------------------------------------------------------------------------------------------------|----------|------------------------------------------------------------------------------------------------------------------| +| --no-color | Disables color in output. | No | Only works in non-interactive mode. In interactive mode, disabling color might not work with some UI components. 
| +| -P, --profile string | Specifies the active [user profile](/tidb-cloud/cli-reference.md#user-profile) used in this command. | No | Works in both non-interactive and interactive modes. | +| -D, --debug | Enables debug mode. | No | Works in both non-interactive and interactive modes. | ## Feedback diff --git a/tidb-cloud/ticloud-branch-shell.md b/tidb-cloud/ticloud-branch-shell.md new file mode 100644 index 0000000000000..50c8d637025bf --- /dev/null +++ b/tidb-cloud/ticloud-branch-shell.md @@ -0,0 +1,63 @@ +--- +title: ticloud serverless branch shell +summary: The reference of `ticloud serverless branch shell`. +aliases: ['/tidbcloud/ticloud-connect'] +--- + +# ticloud serverless branch shell + +Connect to a branch of a TiDB Serverless cluster: + +```shell +ticloud serverless branch shell [flags] +``` + +## Examples + +Connect to a TiDB Serverless branch in interactive mode: + +```shell +ticloud serverless branch shell +``` + +Connect to a TiDB Serverless branch with the default user in non-interactive mode: + +```shell +ticloud serverless branch shell -c <cluster-id> -b <branch-id> +``` + +Connect to a TiDB Serverless branch with the default user and password in non-interactive mode: + +```shell +ticloud serverless branch shell -c <cluster-id> -b <branch-id> --password +``` + +Connect to a TiDB Serverless branch with a specific user and password in non-interactive mode: + +```shell +ticloud serverless branch shell -c <cluster-id> -b <branch-id> -u <user-name> --password +``` + +## Flags + +In non-interactive mode, you need to manually enter the required flags. In interactive mode, you can just follow CLI prompts to fill them in. + +| Flag | Description | Required | Note | +|-------------------------|-----------------------------------|----------|------------------------------------------------------| +| -b, --branch-id string | Specifies the ID of the branch. | Yes | Only works in non-interactive mode. | +| -c, --cluster-id string | Specifies the ID of the cluster. | Yes | Only works in non-interactive mode.
| +| -h, --help | Shows help information for this command. | No | Works in both non-interactive and interactive modes. | +| --password | Specifies the password of the user. | No | Only works in non-interactive mode. | +| -u, --user string | Specifies the user for login. | No | Only works in non-interactive mode. | + +## Inherited flags + +| Flag | Description | Required | Note | +|----------------------|------------------------------------------------------------------------------------------------------|----------|------------------------------------------------------------------------------------------------------------------| +| --no-color | Disables color in output. | No | Only works in non-interactive mode. In interactive mode, disabling color might not work with some UI components. | +| -P, --profile string | Specifies the active [user profile](/tidb-cloud/cli-reference.md#user-profile) used in this command. | No | Works in both non-interactive and interactive modes. | +| -D, --debug | Enables debug mode. | No | Works in both non-interactive and interactive modes. | + +## Feedback + +If you have any questions or suggestions on the TiDB Cloud CLI, feel free to create an [issue](https://github.com/tidbcloud/tidbcloud-cli/issues/new/choose). Also, we welcome any contributions. \ No newline at end of file diff --git a/tidb-cloud/ticloud-cluster-connect-info.md b/tidb-cloud/ticloud-cluster-connect-info.md deleted file mode 100644 index 0e2c4ec1e28e1..0000000000000 --- a/tidb-cloud/ticloud-cluster-connect-info.md +++ /dev/null @@ -1,53 +0,0 @@ ---- -title: ticloud cluster connect-info -summary: The reference of `ticloud cluster connect-info`. ---- - -# ticloud cluster connect-info - -Get the connection string of a cluster: - -```shell -ticloud cluster connect-info [flags] -``` - -> **Note:** -> -> Currently, this command only supports getting the connection string of a [TiDB Serverless](/tidb-cloud/select-cluster-tier.md#tidb-serverless) cluster. 
- -## Examples - -Get the connection string of a cluster in interactive mode: - -```shell -ticloud cluster connect-info -``` - -Get the connection string of a cluster in non-interactive mode: - -```shell -ticloud cluster connect-info --project-id --cluster-id --client --operating-system -``` - -## Flags - -In non-interactive mode, you need to manually enter the required flags. In interactive mode, you can just follow CLI prompts to fill them in. - -| Flag | Description | Required | Note | -|----------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------|------------------------------------------------------| -| -p, --project-id string | The ID of the project, in which the cluster is created | Yes | Only works in non-interactive mode. | -| -c, --cluster-id string | The ID of the cluster | Yes | Only works in non-interactive mode. | -| --client string | The desired client used for the connection. Supported clients include `general`, `mysql_cli`, `mycli`, `libmysqlclient`, `python_mysqlclient`, `pymysql`, `mysql_connector_python`, `mysql_connector_java`, `go_mysql_driver`, `node_mysql2`, `ruby_mysql2`, `php_mysqli`, `rust_mysql`, `mybatis`, `hibernate`, `spring_boot`, `gorm`, `prisma`, `sequelize_mysql2`, `django_tidb`, `sqlalchemy_mysqlclient`, and `active_record`. | Yes | Only works in non-interactive mode. | -| --operating-system string | The operating system name. Supported operating systems include `macOS`, `Windows`, `Ubuntu`, `CentOS`, `RedHat`, `Fedora`, `Debian`, `Arch`, `OpenSUSE`, `Alpine`, and `Others`. | Yes | Only works in non-interactive mode. 
| -| -h, --help | Help information for this command | No | Works in both non-interactive and interactive modes. | - -## Inherited flags - -| Flag | Description | Required | Note | -|----------------------|-------------------------------------------------------------------------------------------------------|----------|-------------------------------------------------------------------------------------------------------------------| -| --no-color | Disables color in output. | No | Only works in non-interactive mode. In interactive mode, disabling color might not work with some UI components. | -| -P, --profile string | Specifies the active [user profile](/tidb-cloud/cli-reference.md#user-profile) used in this command. | No | Works in both non-interactive and interactive modes. | - -## Feedback - -If you have any questions or suggestions on the TiDB Cloud CLI, feel free to create an [issue](https://github.com/tidbcloud/tidbcloud-cli/issues/new/choose). Also, we welcome any contributions. diff --git a/tidb-cloud/ticloud-cluster-create.md b/tidb-cloud/ticloud-cluster-create.md index f9dbd9ba85b10..571c92c744bd0 100644 --- a/tidb-cloud/ticloud-cluster-create.md +++ b/tidb-cloud/ticloud-cluster-create.md @@ -1,54 +1,49 @@ --- -title: ticloud cluster create -summary: The reference of `ticloud cluster create`. +title: ticloud serverless create +summary: The reference of `ticloud serverless create`. --- -# ticloud cluster create +# ticloud serverless create -Create a cluster: +Create a TiDB Serverless cluster: ```shell -ticloud cluster create [flags] +ticloud serverless create [flags] ``` -> **Note:** -> -> Currently, you can only create a [TiDB Serverless](/tidb-cloud/select-cluster-tier.md#tidb-serverless) cluster using the preceding command. 
- -## Examples -Create a cluster in interactive mode: +Create a TiDB Serverless cluster in interactive mode: ```shell -ticloud cluster create +ticloud serverless create ``` -Create a cluster in non-interactive mode: +Create a TiDB Serverless cluster in non-interactive mode: ```shell -ticloud cluster create --project-id <project-id> --cluster-name <cluster-name> --cloud-provider <cloud-provider> --region <region> --root-password <root-password> --cluster-type <cluster-type> +ticloud serverless create --project-id <project-id> --display-name <display-name> --region <region> ``` ## Flags In non-interactive mode, you need to manually enter the required flags. In interactive mode, you can just follow CLI prompts to fill them in. -| Flag | Description | Required | Note | -|-------------------------|-------------------------------------------------------------|----------|-----------------------------------| -| --cloud-provider string | Cloud provider (Currently, only `AWS` is supported) | Yes | Only works in non-interactive mode. | -| --cluster-name string | Name of the cluster to be created | Yes | Only works in non-interactive mode. | -| --cluster-type string | Cluster type (Currently, only `SERVERLESS` is supported) | Yes | Only works in non-interactive mode. | -| -h, --help | Get help information for this command | No | Works in both non-interactive and interactive modes | -| -p, --project-id string | The ID of the project, in which the cluster will be created | Yes | Only works in non-interactive mode. | -| -r, --region string | Cloud region | Yes | Only works in non-interactive mode. | -| --root-password string | The root password of the cluster | Yes | Only works in non-interactive mode. | +| Flag | Description | Required | Note | +|------------------------------|--------------------------------------------------------------------------------------------|-----------|-----------------------------------------------------| +| -n, --display-name string | Specifies the name of the cluster to be created. | Yes | Only works in non-interactive mode.
| +| --spending-limit-monthly int | Specifies the maximum monthly spending limit in USD cents. | No | Only works in non-interactive mode. | +| -p, --project-id string | Specifies the ID of the project in which the cluster will be created. If not specified, the cluster is created in the default project. | No | Only works in non-interactive mode. | +| -r, --region string | Specifies the cloud region. | Yes | Only works in non-interactive mode. | +| -h, --help | Shows help information for this command. | No | Works in both non-interactive and interactive modes. | ## Inherited flags -| Flag | Description | Required | Note | -|----------------------|-------------------------------------------------------------------------------------------|----------|--------------------------------------------------------------------------------------------------------------------------| -| --no-color | Disables color in output. | No | Only works in non-interactive mode. In interactive mode, disabling color might not work with some UI components. | -| -P, --profile string | Specifies the active [user profile](/tidb-cloud/cli-reference.md#user-profile) used in this command. | No | Works in both non-interactive and interactive modes. | +| Flag | Description | Required | Note | +|----------------------|------------------------------------------------------------------------------------------------------|----------|------------------------------------------------------------------------------------------------------------------| +| --no-color | Disables color in output. | No | Only works in non-interactive mode. In interactive mode, disabling color might not work with some UI components. | +| -P, --profile string | Specifies the active [user profile](/tidb-cloud/cli-reference.md#user-profile) used in this command. | No | Works in both non-interactive and interactive modes. | +| -D, --debug | Enables debug mode. | No | Works in both non-interactive and interactive modes.
| ## Feedback diff --git a/tidb-cloud/ticloud-cluster-delete.md b/tidb-cloud/ticloud-cluster-delete.md index 080e92df19a11..9bda957d4f747 100644 --- a/tidb-cloud/ticloud-cluster-delete.md +++ b/tidb-cloud/ticloud-cluster-delete.md @@ -1,53 +1,53 @@ --- -title: ticloud cluster delete -summary: The reference of `ticloud cluster delete`. +title: ticloud serverless delete +summary: The reference of `ticloud serverless delete`. --- -# ticloud cluster delete +# ticloud serverless delete -Delete a cluster from your project: +Delete a TiDB Serverless cluster from your project: ```shell -ticloud cluster delete [flags] +ticloud serverless delete [flags] ``` Or use the following alias command: ```shell -ticloud cluster rm [flags] +ticloud serverless rm [flags] ``` ## Examples -Delete a cluster in interactive mode: +Delete a TiDB Serverless cluster in interactive mode: ```shell -ticloud cluster delete +ticloud serverless delete ``` -Delete a cluster in non-interactive mode: +Delete a TiDB Serverless cluster in non-interactive mode: ```shell -ticloud cluster delete --project-id <project-id> --cluster-id <cluster-id> +ticloud serverless delete --cluster-id <cluster-id> ``` ## Flags In non-interactive mode, you need to manually enter the required flags. In interactive mode, you can just follow CLI prompts to fill them in. -| Flag | Description | Required | Note | -|-------------------------|---------------------------------------------|----------|-----------------------------------------------------| -| -c, --cluster-id string | The ID of the cluster to be deleted | Yes | Only works in non-interactive mode. | -| --force | Deletes a cluster without confirmation | No | Works in both non-interactive and interactive modes. | -| -h, --help | Help information for this command | No | Works in both non-interactive and interactive modes. | -| -p, --project-id string | The project ID of the cluster to be deleted | Yes | Only works in non-interactive mode.
| +| Flag | Description | Required | Note | +|-------------------------|----------------------------------------|----------|------------------------------------------------------| +| -c, --cluster-id string | Specifies the ID of the cluster to be deleted. | Yes | Only works in non-interactive mode. | +| --force | Deletes a cluster without confirmation. | No | Works in both non-interactive and interactive modes. | +| -h, --help | Shows help information for this command. | No | Works in both non-interactive and interactive modes. | ## Inherited flags -| Flag | Description | Required | Note | -|----------------------|-------------------------------------------------------------------------------------------|----------|--------------------------------------------------------------------------------------------------------------------------| +| Flag | Description | Required | Note | +|----------------------|--------------------------------------------------------------------------------------------|----------|------------------------------------------------------------------------------------------------------------------| | --no-color | Disables color in output. | No | Only works in non-interactive mode. In interactive mode, disabling color might not work with some UI components. | -| -P, --profile string | The active [user profile](/tidb-cloud/cli-reference.md#user-profile) used in this command. | No | Works in both non-interactive and interactive modes. | +| -P, --profile string | Specifies the active [user profile](/tidb-cloud/cli-reference.md#user-profile) used in this command. | No | Works in both non-interactive and interactive modes. | +| -D, --debug | Enables debug mode. | No | Works in both non-interactive and interactive modes. 
| ## Feedback diff --git a/tidb-cloud/ticloud-cluster-describe.md b/tidb-cloud/ticloud-cluster-describe.md index 2215ecccb1264..622a2c64d01e7 100644 --- a/tidb-cloud/ticloud-cluster-describe.md +++ b/tidb-cloud/ticloud-cluster-describe.md @@ -1,52 +1,52 @@ --- -title: ticloud cluster describe -summary: The reference of `ticloud cluster describe`. +title: ticloud serverless describe +summary: The reference of `ticloud serverless describe`. --- -# ticloud cluster describe +# ticloud serverless describe -Get information about a cluster (such as the cloud provider, cluster type, cluster configurations, and cluster status): +Get information about a TiDB Serverless cluster (such as the cluster configurations and cluster status): ```shell -ticloud cluster describe [flags] +ticloud serverless describe [flags] ``` Or use the following alias command: ```shell -ticloud cluster get [flags] +ticloud serverless get [flags] ``` ## Examples -Get the cluster information in interactive mode: +Get information about a TiDB Serverless cluster in interactive mode: ```shell -ticloud cluster describe +ticloud serverless describe ``` -Get the cluster information in non-interactive mode: +Get information about a TiDB Serverless cluster in non-interactive mode: ```shell -ticloud cluster describe --project-id <project-id> --cluster-id <cluster-id> +ticloud serverless describe --cluster-id <cluster-id> ``` ## Flags In non-interactive mode, you need to manually enter the required flags. In interactive mode, you can just follow CLI prompts to fill them in. -| Flag | Description | Required | Note | -|-------------------------|-------------------------------|----------|-----------------------------------| -| -c, --cluster-id string | The ID of the cluster | Yes | Only works in non-interactive mode. | -| -h, --help | Help information for this command | No | Works in both non-interactive and interactive modes. | -| -p, --project-id string | The project ID of the cluster | Yes | Only works in non-interactive mode.
| +| Flag | Description | Required | Note | +|-------------------------|-----------------------------------|----------|------------------------------------------------------| +| -c, --cluster-id string | Specifies the ID of the cluster. | Yes | Only works in non-interactive mode. | +| -h, --help | Shows help information for this command. | No | Works in both non-interactive and interactive modes. | ## Inherited flags -| Flag | Description | Required | Note | -|----------------------|-------------------------------------------------------------------------------------------|----------|--------------------------------------------------------------------------------------------------------------------------| -| --no-color | Disables color in output. | No | Only works in non-interactive mode. In interactive mode, disabling color might not work with some UI components. | -| -P, --profile string | Specifies the active [user profile](/tidb-cloud/cli-reference.md#user-profile) used in this command. | No | Works in both non-interactive and interactive modes. | +| Flag | Description | Required | Note | +|----------------------|------------------------------------------------------------------------------------------------------|----------|------------------------------------------------------------------------------------------------------------------| +| --no-color | Disables color in output. | No | Only works in non-interactive mode. In interactive mode, disabling color might not work with some UI components. | +| -P, --profile string | Specifies the active [user profile](/tidb-cloud/cli-reference.md#user-profile) used in this command. | No | Works in both non-interactive and interactive modes. | +| -D, --debug | Enables debug mode. | No | Works in both non-interactive and interactive modes. 
| ## Feedback diff --git a/tidb-cloud/ticloud-cluster-list.md b/tidb-cloud/ticloud-cluster-list.md index a8d05cea36a75..2fc466e8c290c 100644 --- a/tidb-cloud/ticloud-cluster-list.md +++ b/tidb-cloud/ticloud-cluster-list.md @@ -1,57 +1,59 @@ --- -title: ticloud cluster list -summary: The reference of `ticloud cluster list`. +title: ticloud serverless list +summary: The reference of `ticloud serverless list`. --- -# ticloud cluster list +# ticloud serverless list -List all clusters in a project: +List all TiDB Serverless clusters in a project: ```shell -ticloud cluster list [flags] +ticloud serverless list [flags] ``` Or use the following alias command: ```shell -ticloud cluster ls [flags] +ticloud serverless ls [flags] ``` ## Examples -List all clusters in a project (interactive mode): +List all TiDB Serverless clusters in interactive mode: ```shell -ticloud cluster list +ticloud serverless list ``` -List all clusters in a specified project (non-interactive mode): +List all TiDB Serverless clusters in a specified project in non-interactive mode: ```shell -ticloud cluster list <project-id> +ticloud serverless list -p <project-id> ``` -List all clusters in a specified project in the JSON format: +List all TiDB Serverless clusters in a specified project in the JSON format in non-interactive mode: ```shell -ticloud cluster list <project-id> -o json +ticloud serverless list -p <project-id> -o json ``` ## Flags In non-interactive mode, you need to manually enter the required flags. In interactive mode, you can just follow CLI prompts to fill them in. -| Flag | Description | Required | Note | -|---------------------|--------------------------------------------------------------------------------------------------------|----------|-----------------------------------------------------| -| -h, --help | Help information for this command | No | Works in both non-interactive and interactive modes. | -| -o, --output string | Output format (`human` by default). Valid values are `human` or `json`.
To get a complete result, use the `json` format. | No | Works in both non-interactive and interactive modes. | +| Flag | Description | Required | Note | +|-------------------------|--------------------------------------------------------------------------------------------------------------------------|----------|------------------------------------------------------| +| -p, --project-id string | Specifies the ID of the project. | Yes | Only works in non-interactive mode. | +| -h, --help | Shows help information for this command. | No | Works in both non-interactive and interactive modes. | +| -o, --output string | Specifies the output format (`human` by default). Valid values are `human` or `json`. To get a complete result, use the `json` format. | No | Works in both non-interactive and interactive modes. | ## Inherited flags -| Flag | Description | Required | Note | -|----------------------|-------------------------------------------------------------------------------------------|----------|--------------------------------------------------------------------------------------------------------------------------| -| --no-color | Disables color in output. | No | Only works in non-interactive mode. In interactive mode, disabling color might not work with some UI components. | -| -P, --profile string | Specifies the active [user profile](/tidb-cloud/cli-reference.md#user-profile) used in this command. | No | Works in both non-interactive and interactive modes. | +| Flag | Description | Required | Note | +|----------------------|------------------------------------------------------------------------------------------------------|----------|------------------------------------------------------------------------------------------------------------------| +| --no-color | Disables color in output. | No | Only works in non-interactive mode. In interactive mode, disabling color might not work with some UI components. 
| +| -P, --profile string | Specifies the active [user profile](/tidb-cloud/cli-reference.md#user-profile) used in this command. | No | Works in both non-interactive and interactive modes. | +| -D, --debug | Enables debug mode. | No | Works in both non-interactive and interactive modes. | ## Feedback diff --git a/tidb-cloud/ticloud-completion.md b/tidb-cloud/ticloud-completion.md new file mode 100644 index 0000000000000..2c563df51d004 --- /dev/null +++ b/tidb-cloud/ticloud-completion.md @@ -0,0 +1,46 @@ +--- +title: ticloud completion +summary: The reference of `ticloud completion`. +--- + +# ticloud completion + +Generate the autocompletion script for the specified shell of TiDB Cloud CLI: + +```shell +ticloud completion [command] +``` + +## Examples + +Generate the autocompletion script for bash: + +```shell +ticloud completion bash +``` + +Generate the autocompletion script for zsh: + +```shell +ticloud completion zsh +``` + +## Flags + +In non-interactive mode, you need to manually enter the required flags. In interactive mode, you can just follow CLI prompts to fill them in. + +| Flag | Description | Required | Note | +|-------------------------|---------------------------------------------------------------|----------|------------------------------------------------------| +| -h, --help | Shows help information for this command. | No | Works in both non-interactive and interactive modes. | + +## Inherited flags + +| Flag | Description | Required | Note | +|----------------------|-------------------------------------------------------------------------------------------|----------|--------------------------------------------------------------------------------------------------------------------------| +| --no-color | Disables color in output. | No | Only works in non-interactive mode. In interactive mode, disabling color might not work with some UI components. 
| +| -P, --profile string | Specifies the active [user profile](/tidb-cloud/cli-reference.md#user-profile) used in this command. | No | Works in both non-interactive and interactive modes. | +| -D, --debug | Enables debug mode. | No | Works in both non-interactive and interactive modes. | + +## Feedback + +If you have any questions or suggestions on the TiDB Cloud CLI, feel free to create an [issue](https://github.com/tidbcloud/tidbcloud-cli/issues/new/choose). Also, we welcome any contributions. \ No newline at end of file diff --git a/tidb-cloud/ticloud-config-create.md b/tidb-cloud/ticloud-config-create.md index c5d2f566984ce..74d7a64720323 100644 --- a/tidb-cloud/ticloud-config-create.md +++ b/tidb-cloud/ticloud-config-create.md @@ -35,10 +35,10 @@ In non-interactive mode, you need to manually enter the required flags. In inter | Flag | Description | Required | Note | |-----------------------|-----------------------------------------------|----------|-----------------------------------| -| -h, --help | Help information for this command | No | Works in both non-interactive and interactive modes. | -| --private-key string | The private key of the TiDB Cloud API | Yes | Only works in non-interactive mode. | -| --profile-name string | The name of the profile, which must not contain `.` | Yes | Only works in non-interactive mode. | -| --public-key string | The public key of the TiDB Cloud API | Yes | Only works in non-interactive mode. | +| -h, --help | Shows help information for this command. | No | Works in both non-interactive and interactive modes. | +| --private-key string | Specifies the private key of the TiDB Cloud API. | Yes | Only works in non-interactive mode. | +| --profile-name string | Specifies the name of the profile (which must not contain `.`). | Yes | Only works in non-interactive mode. | +| --public-key string | Specifies the public key of the TiDB Cloud API. | Yes | Only works in non-interactive mode. 
| ## Inherited flags @@ -46,6 +46,7 @@ In non-interactive mode, you need to manually enter the required flags. In inter |----------------------|----------------------------------------------|----------|--------------------------------------------------------------------------------------------------------------------------| | --no-color | Disables color in output. | No | Only works in non-interactive mode. In interactive mode, disabling color might not work with some UI components. | | -P, --profile string | Specifies the active [user profile](/tidb-cloud/cli-reference.md#user-profile) used in this command. | No | Works in both non-interactive and interactive modes. | +| -D, --debug | Enables debug mode. | No | Works in both non-interactive and interactive modes. | ## Feedback diff --git a/tidb-cloud/ticloud-config-delete.md b/tidb-cloud/ticloud-config-delete.md index 6e039e49c0075..e9e35fb5759ba 100644 --- a/tidb-cloud/ticloud-config-delete.md +++ b/tidb-cloud/ticloud-config-delete.md @@ -29,8 +29,8 @@ ticloud config delete | Flag | Description | |------------|---------------------------------------| -| --force | Deletes a profile without confirmation | -| -h, --help | Help information for this command | +| --force | Deletes a profile without confirmation. | +| -h, --help | Shows help information for this command. | ## Inherited flags @@ -38,6 +38,7 @@ ticloud config delete |----------------------|-----------------------------------------------|----------|--------------------------------------------------------------------------------------------------------------------------| | --no-color | Disables color in output. | No | Only works in non-interactive mode. In interactive mode, disabling color might not work with some UI components. | | -P, --profile string | Specifies the active [user profile](/tidb-cloud/cli-reference.md#user-profile) used in this command. | No | Works in both non-interactive and interactive modes. | +| -D, --debug | Enables debug mode. 
| No | Works in both non-interactive and interactive modes. | ## Feedback diff --git a/tidb-cloud/ticloud-config-describe.md b/tidb-cloud/ticloud-config-describe.md index acdf34e102a6b..802421ce671a9 100644 --- a/tidb-cloud/ticloud-config-describe.md +++ b/tidb-cloud/ticloud-config-describe.md @@ -29,7 +29,7 @@ ticloud config describe | Flag | Description | |------------|--------------------------| -| -h, --help | Help information for this command | +| -h, --help | Shows help information for this command. | ## Inherited flags @@ -37,6 +37,7 @@ ticloud config describe |----------------------|-----------------------------------------------|----------|--------------------------------------------------------------------------------------------------------------------------| | --no-color | Disables color in output. | No | Only works in non-interactive mode. In interactive mode, disabling color might not work with some UI components. | | -P, --profile string | Specifies the active [user profile](/tidb-cloud/cli-reference.md#user-profile) used in this command. | No | Works in both non-interactive and interactive modes. | +| -D, --debug | Enables debug mode. | No | Works in both non-interactive and interactive modes. | ## Feedback diff --git a/tidb-cloud/ticloud-config-edit.md b/tidb-cloud/ticloud-config-edit.md index 6fb82659297b8..83389b8c148db 100644 --- a/tidb-cloud/ticloud-config-edit.md +++ b/tidb-cloud/ticloud-config-edit.md @@ -29,7 +29,7 @@ ticloud config edit | Flag | Description | |------------|--------------------------| - | -h, --help | Help information for this command | + | -h, --help | Shows help information for this command. 
| ## Inherited flags @@ -37,6 +37,7 @@ ticloud config edit |----------------------|-------------------------------------------------------------------------------------------|----------|--------------------------------------------------------------------------------------------------------------------------| | --no-color | Disables color in output. | No | Only works in non-interactive mode. In interactive mode, disabling color might not work with some UI components. | | -P, --profile string | Specifies the active [user profile](/tidb-cloud/cli-reference.md#user-profile) used in this command. | No | Works in both non-interactive and interactive modes. | +| -D, --debug | Enables debug mode. | No | Works in both non-interactive and interactive modes. | ## Feedback diff --git a/tidb-cloud/ticloud-config-list.md b/tidb-cloud/ticloud-config-list.md index 766f7c8e99f2d..549005ed9fcea 100644 --- a/tidb-cloud/ticloud-config-list.md +++ b/tidb-cloud/ticloud-config-list.md @@ -29,7 +29,7 @@ ticloud config list | Flag | Description | |------------|--------------------------| -| -h, --help | Help information for this command | +| -h, --help | Shows help information for this command. | ## Inherited flags @@ -37,6 +37,7 @@ ticloud config list |----------------------|-----------------------------------------------|----------|--------------------------------------------------------------------------------------------------------------------------| | --no-color | Disables color in output. | No | Only works in non-interactive mode. In interactive mode, disabling color might not work with some UI components. | | -P, --profile string | Specifies the active [user profile](/tidb-cloud/cli-reference.md#user-profile) used in this command. | No | Works in both non-interactive and interactive modes. | +| -D, --debug | Enables debug mode. | No | Works in both non-interactive and interactive modes. 
| ## Feedback diff --git a/tidb-cloud/ticloud-config-set.md b/tidb-cloud/ticloud-config-set.md index 8059c79e9fcc7..bd8d5f8227221 100644 --- a/tidb-cloud/ticloud-config-set.md +++ b/tidb-cloud/ticloud-config-set.md @@ -15,11 +15,11 @@ The properties that can be configured include `public-key`, `private-key`, and ` | Properties | Description | Required | |-------------|--------------------------------------------------------------------|----------| -| public-key | The public key of the TiDB Cloud API | Yes | -| private-key | The private key of the TiDB Cloud API | Yes | -| api-url | The base API URL of TiDB Cloud (`https://api.tidbcloud.com` by default) | No | +| public-key | Specifies the public key of the TiDB Cloud API. | Yes | +| private-key | Specifies the private key of the TiDB Cloud API. | Yes | +| api-url | Specifies the base API URL of TiDB Cloud (`https://api.tidbcloud.com` by default). | No | -> **Notes:** +> **Note:** > > If you want to configure properties for a specific user profile, you can add the `-P` flag and specify the target user profile name in the command. @@ -51,7 +51,7 @@ ticloud config set api-url https://api.tidbcloud.com | Flag | Description | |------------|--------------------------| -| -h, --help | Help information for this command | +| -h, --help | Shows help information for this command. | ## Inherited flags @@ -59,6 +59,7 @@ ticloud config set api-url https://api.tidbcloud.com |----------------------|-----------------------------------------------|----------|--------------------------------------------------------------------------------------------------------------------------| | --no-color | Disables color in output. | No | Only works in non-interactive mode. In interactive mode, disabling color might not work with some UI components. | | -P, --profile string | Specifies the active [user profile](/tidb-cloud/cli-reference.md#user-profile) used in this command. | No | Works in both non-interactive and interactive modes. 
| +| -D, --debug | Enables debug mode. | No | Works in both non-interactive and interactive modes. | ## Feedback diff --git a/tidb-cloud/ticloud-config-use.md b/tidb-cloud/ticloud-config-use.md index d73b9c76860f1..197735abface4 100644 --- a/tidb-cloud/ticloud-config-use.md +++ b/tidb-cloud/ticloud-config-use.md @@ -23,7 +23,7 @@ ticloud config use test | Flag | Description | |------------|--------------------------| -| -h, --help | Help information for this command | +| -h, --help | Shows help information for this command. | ## Inherited flags @@ -31,6 +31,7 @@ ticloud config use test |----------------------|-----------------------------------------------|----------|--------------------------------------------------------------------------------------------------------------------------| | --no-color | Disables color in output. | No | Only works in non-interactive mode. In interactive mode, disabling color might not work with some UI components. | | -P, --profile string | Specifies the active [user profile](/tidb-cloud/cli-reference.md#user-profile) used in this command. | No | Works in both non-interactive and interactive modes. | +| -D, --debug | Enables debug mode. | No | Works in both non-interactive and interactive modes. | ## Feedback diff --git a/tidb-cloud/ticloud-connect.md b/tidb-cloud/ticloud-connect.md deleted file mode 100644 index 93a631f3f7eb7..0000000000000 --- a/tidb-cloud/ticloud-connect.md +++ /dev/null @@ -1,70 +0,0 @@ ---- -title: ticloud connect -summary: The reference of `ticloud connect`. ---- - -# ticloud connect - -Connect to a TiDB Cloud cluster or branch: - -```shell -ticloud connect [flags] -``` - -> **Note:** -> -> - If you are prompted about whether to use the default user, you can choose `Y` to use the default root user or choose `n` to specify another user. 
For [TiDB Serverless](/tidb-cloud/select-cluster-tier.md#tidb-serverless) clusters, the name of the default root user has a [prefix](/tidb-cloud/select-cluster-tier.md#user-name-prefix) such as `3pTAoNNegb47Uc8`. -> - The connection forces the [ANSI SQL mode](https://dev.mysql.com/doc/refman/8.0/en/sql-mode.html#sqlmode_ansi) for the session. To exit the session, enter `\q`. - -## Examples - -Connect to a TiDB Cloud cluster or branch in interactive mode: - -```shell -ticloud connect -``` - -Use the default user to connect to a TiDB Cloud cluster or branch in non-interactive mode: - -```shell -ticloud connect -p <project-id> -c <cluster-id> -ticloud connect -p <project-id> -c <cluster-id> -b <branch-id> -``` - -Use the default user to connect to the TiDB Cloud cluster or branch with password in non-interactive mode: - -```shell -ticloud connect -p <project-id> -c <cluster-id> --password -ticloud connect -p <project-id> -c <cluster-id> -b <branch-id> --password -``` - -Use a specific user to connect to the TiDB Cloud cluster or branch in non-interactive mode: - -```shell -ticloud connect -p <project-id> -c <cluster-id> -u <user-name> -ticloud connect -p <project-id> -c <cluster-id> -b <branch-id> -u <user-name> -``` - -## Flags - -In non-interactive mode, you need to manually enter the required flags. In interactive mode, you can just follow CLI prompts to fill them in. - -| Flag | Description | Required | Note | -|-------------------------|-----------------------------------|----------|------------------------------------------------------| -| -c, --cluster-id string | Cluster ID | Yes | Only works in non-interactive mode. | -| -b, --branch-id string | Branch ID | No | Only works in non-interactive mode. | -| -h, --help | Help information for this command | No | Works in both non-interactive and interactive modes. | -| --password | The password of the user | No | Only works in non-interactive mode. | -| -p, --project-id string | Project ID | Yes | Only works in non-interactive mode. | -| -u, --user string | A specific user for login | No | Only works in non-interactive mode.
| - -## Inherited flags - -| Flag | Description | Required | Note | -|----------------------|------------------------------------------------------------------------------------------------------|----------|--------------------------------------------------------------------------------------------------------------------------| -| --no-color | Disables color in output. | No | Only works in non-interactive mode. In interactive mode, disabling color might not work with some UI components. | -| -P, --profile string | Specifies the active [user profile](/tidb-cloud/cli-reference.md#user-profile) used in this command. | No | Works in both non-interactive and interactive modes. | - -## Feedback - -If you have any questions or suggestions on the TiDB Cloud CLI, feel free to create an [issue](https://github.com/tidbcloud/tidbcloud-cli/issues/new/choose). Also, we welcome any contributions. diff --git a/tidb-cloud/ticloud-help.md b/tidb-cloud/ticloud-help.md new file mode 100644 index 0000000000000..88966528564c0 --- /dev/null +++ b/tidb-cloud/ticloud-help.md @@ -0,0 +1,46 @@ +--- +title: ticloud help +summary: The reference of `ticloud help`. +--- + +# ticloud help + +Get help information for any command in the TiDB Cloud CLI: + +```shell +ticloud help [command] [flags] +``` + +## Examples + +To get help for the `auth` command: + +```shell +ticloud help auth +``` + +To get help for the `serverless create` command: + +```shell +ticloud help serverless create +``` + +## Flags + +In non-interactive mode, you need to manually enter the required flags. In interactive mode, you can just follow CLI prompts to fill them in. + +| Flag | Description | Required | Note | +|-------------------------|---------------------------------------------------------------|----------|------------------------------------------------------| +| -h, --help | Shows help information for this command. | No | Works in both non-interactive and interactive modes.
| + +## Inherited flags + +| Flag | Description | Required | Note | +|----------------------|-------------------------------------------------------------------------------------------|----------|--------------------------------------------------------------------------------------------------------------------------| +| --no-color | Disables color in output. | No | Only works in non-interactive mode. In interactive mode, disabling color might not work with some UI components. | +| -P, --profile string | Specifies the active [user profile](/tidb-cloud/cli-reference.md#user-profile) used in this command. | No | Works in both non-interactive and interactive modes. | +| -D, --debug | Enables debug mode. | No | Works in both non-interactive and interactive modes. | + +## Feedback + +If you have any questions or suggestions on the TiDB Cloud CLI, feel free to create an [issue](https://github.com/tidbcloud/tidbcloud-cli/issues/new/choose). Also, we welcome any contributions. \ No newline at end of file diff --git a/tidb-cloud/ticloud-import-cancel.md b/tidb-cloud/ticloud-import-cancel.md index bb42e358db4c7..8ce763008620d 100644 --- a/tidb-cloud/ticloud-import-cancel.md +++ b/tidb-cloud/ticloud-import-cancel.md @@ -1,14 +1,14 @@ --- -title: ticloud import cancel -summary: The reference of `ticloud import cancel`. +title: ticloud serverless import cancel +summary: The reference of `ticloud serverless import cancel`. 
--- -# ticloud import cancel +# ticloud serverless import cancel Cancel a data import task: ```shell -ticloud import cancel [flags] +ticloud serverless import cancel [flags] ``` ## Examples @@ -16,26 +16,25 @@ ticloud import cancel [flags] Cancel an import task in interactive mode: ```shell -ticloud import cancel +ticloud serverless import cancel ``` Cancel an import task in non-interactive mode: ```shell -ticloud import cancel --project-id <project-id> --cluster-id <cluster-id> --import-id <import-id> +ticloud serverless import cancel --cluster-id <cluster-id> --import-id <import-id> ``` ## Flags In non-interactive mode, you need to manually enter the required flags. In interactive mode, you can just follow CLI prompts to fill them in. -| Flag | Description | Required | Note | -|-------------------------|---------------------------------------|----------|-----------------------------------------------------| -| -c, --cluster-id string | Cluster ID | Yes | Only works in non-interactive mode. | -| --force | Deletes a profile without confirmation | No | Works in both non-interactive and interactive modes. | -| -h, --help | Help information for this command | No | Works in both non-interactive and interactive modes. | -| --import-id string | The ID of the import task | Yes | Only works in non-interactive mode. | -| -p, --project-id string | Project ID | Yes | Only works in non-interactive mode. | +| Flag | Description | Required | Note | +|-------------------------|----------------------------------------------|----------|------------------------------------------------------| +| -c, --cluster-id string | Specifies the ID of the cluster. | Yes | Only works in non-interactive mode. | +| --force | Cancels an import task without confirmation. | No | Works in both non-interactive and interactive modes. | +| -h, --help | Shows help information for this command. | No | Works in both non-interactive and interactive modes. | +| --import-id string | Specifies the ID of the import task. | Yes | Only works in non-interactive mode.
| ## Inherited flags @@ -43,6 +42,7 @@ In non-interactive mode, you need to manually enter the required flags. In inter |----------------------|-------------------------------------------------------------------------------------------|----------|--------------------------------------------------------------------------------------------------------------------------| | --no-color | Disables color in output. | No | Only works in non-interactive mode. In interactive mode, disabling color might not work with some UI components. | | -P, --profile string | Specifies the active [user profile](/tidb-cloud/cli-reference.md#user-profile) used in this command. | No | Works in both non-interactive and interactive modes. | +| -D, --debug | Enables debug mode. | No | Works in both non-interactive and interactive modes. | ## Feedback diff --git a/tidb-cloud/ticloud-import-describe.md b/tidb-cloud/ticloud-import-describe.md index c01062e039c2f..bcbf64fcdb025 100644 --- a/tidb-cloud/ticloud-import-describe.md +++ b/tidb-cloud/ticloud-import-describe.md @@ -1,20 +1,20 @@ --- -title: ticloud import describe -summary: The reference of `ticloud import describe`. +title: ticloud serverless import describe +summary: The reference of `ticloud serverless import describe`. 
--- -# ticloud import describe +# ticloud serverless import describe -Get the import details of a data import task: +Describe a data import task: ```shell -ticloud import describe [flags] +ticloud serverless import describe [flags] ``` Or use the following alias command: ```shell -ticloud import get [flags] +ticloud serverless import get [flags] ``` ## Examples @@ -22,32 +22,32 @@ ticloud import get [flags] Describe an import task in interactive mode: ```shell -ticloud import describe +ticloud serverless import describe ``` Describe an import task in non-interactive mode: ```shell -ticloud import describe --project-id <project-id> --cluster-id <cluster-id> --import-id <import-id> +ticloud serverless import describe --cluster-id <cluster-id> --import-id <import-id> ``` ## Flags In non-interactive mode, you need to manually enter the required flags. In interactive mode, you can just follow CLI prompts to fill them in. -| Flag | Description | Required | Note | -|-------------------------|--------------------------|----------|-----------------------------------| -| -c, --cluster-id string | Cluster ID | Yes | Only works in non-interactive mode. | -| -h, --help | Help information for this command | No | Works in both non-interactive and interactive modes. | -| --import-id string | The ID of the import task | Yes | Only works in non-interactive mode. | -| -p, --project-id string | Project ID | Yes | Only works in non-interactive mode. | +| Flag | Description | Required | Note | -|-------------------------|-----------------------------------|----------|------------------------------------------------------| +| -c, --cluster-id string | Specifies the ID of the cluster. | Yes | Only works in non-interactive mode. | +| -h, --help | Shows help information for this command. | No | Works in both non-interactive and interactive modes. | +| --import-id string | Specifies the ID of the import task. | Yes | Only works in non-interactive mode.
| ## Inherited flags -| Flag | Description | Required | Note | -|----------------------|-------------------------------------------------------------------------------------------|----------|--------------------------------------------------------------------------------------------------------------------------| -| --no-color | Disables color in output. | No | Only works in non-interactive mode. In interactive mode, disabling color might not work with some UI components. | -| -P, --profile string | Specifies the active [user profile](/tidb-cloud/cli-reference.md#user-profile) used in this command. | No | Works in both non-interactive and interactive modes. | +| Flag | Description | Required | Note | +|----------------------|------------------------------------------------------------------------------------------------------|----------|------------------------------------------------------------------------------------------------------------------| +| --no-color | Disables color in output. | No | Only works in non-interactive mode. In interactive mode, disabling color might not work with some UI components. | +| -P, --profile string | Specifies the active [user profile](/tidb-cloud/cli-reference.md#user-profile) used in this command. | No | Works in both non-interactive and interactive modes. | +| -D, --debug | Enables debug mode. | No | Works in both non-interactive and interactive modes. | ## Feedback diff --git a/tidb-cloud/ticloud-import-list.md b/tidb-cloud/ticloud-import-list.md index d02d5c26b6798..55cd733c718d8 100644 --- a/tidb-cloud/ticloud-import-list.md +++ b/tidb-cloud/ticloud-import-list.md @@ -1,20 +1,20 @@ --- -title: ticloud import list -summary: The reference of `ticloud import list`. +title: ticloud serverless import list +summary: The reference of `ticloud serverless import list`. 
--- -# ticloud import list +# ticloud serverless import list List data import tasks: ```shell -ticloud import list [flags] +ticloud serverless import list [flags] ``` Or use the following alias command: ```shell -ticloud import ls [flags] +ticloud serverless import ls [flags] ``` ## Examples @@ -22,38 +22,38 @@ ticloud import ls [flags] List import tasks in interactive mode: ```shell -ticloud import list +ticloud serverless import list ``` List import tasks in non-interactive mode: ```shell -ticloud import list --project-id <project-id> --cluster-id <cluster-id> +ticloud serverless import list --cluster-id <cluster-id> ``` List import tasks for a specified cluster in the JSON format: ```shell -ticloud import list --project-id <project-id> --cluster-id <cluster-id> --output json +ticloud serverless import list --cluster-id <cluster-id> --output json ``` ## Flags In non-interactive mode, you need to manually enter the required flags. In interactive mode, you can just follow CLI prompts to fill them in. -| Flag | Description | Required | Note | -|-------------------------|--------------------------------------------------------------------------------------------------------|----------|-----------------------------------------------------| -| -c, --cluster-id string | Cluster ID | Yes | Only works in non-interactive mode. | -| -h, --help | Help information for this command | No | Works in both non-interactive and interactive modes. | -| -o, --output string | Output format (`human` by default). Valid values are `human` or `json`. To get a complete result, use the `json` format. | No | Works in both non-interactive and interactive modes. | -| -p, --project-id string | Project ID | Yes | Only works in non-interactive mode. | +| Flag | Description | Required | Note | +|-------------------------|--------------------------------------------------------------------------------------------------------------------------|----------|------------------------------------------------------| +| -c, --cluster-id string | Specifies the ID of the cluster.
| Yes | Only works in non-interactive mode. | +| -h, --help | Shows help information for this command. | No | Works in both non-interactive and interactive modes. | +| -o, --output string | Specifies the output format (`human` by default). Valid values are `human` or `json`. To get a complete result, use the `json` format. | No | Works in both non-interactive and interactive modes. | ## Inherited flags -| Flag | Description | Required | Note | -|----------------------|-------------------------------------------------------------------------------------------|----------|--------------------------------------------------------------------------------------------------------------------------| -| --no-color | Disables color in output. | No | Only works in non-interactive mode. In interactive mode, disabling color might not work with some UI components. | -| -P, --profile string | Specifies the active [user profile](/tidb-cloud/cli-reference.md#user-profile) used in this command. | No | Works in both non-interactive and interactive modes. | +| Flag | Description | Required | Note | +|----------------------|------------------------------------------------------------------------------------------------------|----------|------------------------------------------------------------------------------------------------------------------| +| --no-color | Disables color in output. | No | Only works in non-interactive mode. In interactive mode, disabling color might not work with some UI components. | +| -P, --profile string | Specifies the active [user profile](/tidb-cloud/cli-reference.md#user-profile) used in this command. | No | Works in both non-interactive and interactive modes. | +| -D, --debug | Enables debug mode. | No | Works in both non-interactive and interactive modes. 
| ## Feedback diff --git a/tidb-cloud/ticloud-import-start-local.md b/tidb-cloud/ticloud-import-start-local.md deleted file mode 100644 index c3576a4f901ae..0000000000000 --- a/tidb-cloud/ticloud-import-start-local.md +++ /dev/null @@ -1,64 +0,0 @@ ---- -title: ticloud import start local -summary: The reference of `ticloud import start local`. ---- - -# ticloud import start local - -Import a local file to a [TiDB Serverless](/tidb-cloud/select-cluster-tier.md#tidb-serverless) cluster: - -```shell -ticloud import start local <file-path> [flags] -``` - -> **Note:** -> -> Currently, you can only import one CSV file for one import task. - -## Examples - -Start an import task in interactive mode: - -```shell -ticloud import start local -``` - -Start an import task in non-interactive mode: - -```shell -ticloud import start local <file-path> --project-id <project-id> --cluster-id <cluster-id> --data-format <data-format> --target-database <target-database> --target-table <target-table> -``` - -Start an import task with a custom CSV format: - -```shell -ticloud import start local <file-path> --project-id <project-id> --cluster-id <cluster-id> --data-format CSV --target-database <target-database> --target-table <target-table> --separator \" --delimiter \' --backslash-escape=false --trim-last-separator=true -``` - -## Flags - -In non-interactive mode, you need to manually enter the required flags. In interactive mode, you can just follow CLI prompts to fill them in. - -| Flag | Description | Required | Note | -|---|---|---|---| -| --backslash-escape | Whether to parse backslashes inside fields as escape characters for CSV files. The default value is `true`. | No | Only works in non-interactive mode when `--data-format CSV` is specified. | -| -c, --cluster-id string | Specifies the cluster ID. | Yes | Only works in non-interactive mode. | -| --data-format string | Specifies the data format. Currently, only `CSV` is supported. | Yes | Only works in non-interactive mode. | -| --delimiter string | Specifies the delimiter used for quoting for CSV files. The default value is `"`.
| No | Only works in non-interactive mode when `--data-format CSV` is specified. | -| -h, --help | Displays help information for this command. | No | Works in both non-interactive and interactive modes. | -| -p, --project-id string | Specifies the project ID. | Yes | Only works in non-interactive mode. | -| --separator string | Specifies the field separator for CSV files. The default value is `,`. | No | Only works in non-interactive mode when `--data-format CSV` is specified. | -| --target-database string | Specifies the target database to import data to. | Yes | Only works in non-interactive mode. | -| --target-table string | Specifies the target table to import data to. | Yes | Only works in non-interactive mode. | -| --trim-last-separator | Whether to treat separators as line terminators and trim all trailing separators for CSV files. The default value is `false`. | No | Only works in non-interactive mode when `--data-format CSV` is specified. | - -## Inherited flags - -| Flag | Description | Required | Note | -|---|---|---|---| -| --no-color | Disables color in output. | No | Only works in non-interactive mode. In interactive mode, disabling color might not work with some UI components. | -| -P, --profile string | Specifies the active [user profile](/tidb-cloud/cli-reference.md#user-profile) used in this command. | No | Works in both non-interactive and interactive modes. | - -## Feedback - -If you have any questions or suggestions on the TiDB Cloud CLI, feel free to create an [issue](https://github.com/tidbcloud/tidbcloud-cli/issues/new/choose). Also, we welcome any contributions. diff --git a/tidb-cloud/ticloud-import-start-mysql.md b/tidb-cloud/ticloud-import-start-mysql.md deleted file mode 100644 index 3c67cc007d95a..0000000000000 --- a/tidb-cloud/ticloud-import-start-mysql.md +++ /dev/null @@ -1,79 +0,0 @@ ---- -title: ticloud import start mysql -summary: The reference of `ticloud import start mysql`. 
--- - -# ticloud import start mysql - -Import a table from a MySQL-compatible database to a [TiDB Serverless](/tidb-cloud/select-cluster-tier.md#tidb-serverless) cluster: - -```shell -ticloud import start mysql [flags] -``` - -> **Note:** -> -> - Before running this command, make sure that you have installed the MySQL command-line tool first. For more details, see [Installation](/tidb-cloud/get-started-with-cli.md#installation). -> - If the target table already exists in the target database, to use this command for table import, make sure that the target table name is the same as the source table name and add the `skip-create-table` flag in the command. -> - If the target table does not exist in the target database, executing this command automatically creates a table with the same name as the source table in the target database. - -## Examples - -- Start an import task in interactive mode: - - ```shell - ticloud import start mysql - ``` - -- Start an import task in non-interactive mode (using the TiDB Serverless cluster default user `<prefix>.root`): - - ```shell - ticloud import start mysql --project-id <project-id> --cluster-id <cluster-id> --source-host <source-host> --source-port <source-port> --source-user <source-user> --source-password <source-password> --source-database <source-database> --source-table <source-table> --target-database <target-database> --target-password <target-password> - ``` - -- Start an import task in non-interactive mode (using a specific user): - - ```shell - ticloud import start mysql --project-id <project-id> --cluster-id <cluster-id> --source-host <source-host> --source-port <source-port> --source-user <source-user> --source-password <source-password> --source-database <source-database> --source-table <source-table> --target-database <target-database> --target-password <target-password> --target-user <target-user> - ``` - -- Start an import task that skips creating the target table if it already exists in the target database: - - ```shell - ticloud import start mysql --project-id <project-id> --cluster-id <cluster-id> --source-host <source-host> --source-port <source-port> --source-user <source-user> --source-password <source-password> --source-database <source-database> --source-table <source-table> --target-database <target-database> --target-password <target-password> --skip-create-table - ``` - -> **Note:** -> -> MySQL 8.0 uses `utf8mb4_0900_ai_ci` as the default collation, which is currently not
supported by TiDB. If your source table uses the `utf8mb4_0900_ai_ci` collation, before the import, you need to either alter the source table collation to a [supported collation of TiDB](/character-set-and-collation.md#character-sets-and-collations-supported-by-tidb) or manually create the target table in TiDB. - -## Flags - -In non-interactive mode, you need to manually enter the required flags. In interactive mode, you can just follow CLI prompts to fill them in. - -| Flag | Description | Required | Note | -|---|---|---|---| -| -c, --cluster-id string | Specifies the cluster ID. | Yes | Only works in non-interactive mode. | -| -h, --help | Displays help information for this command. | No | Works in both non-interactive and interactive modes. | -| -p, --project-id string | Specifies the project ID. | Yes | Only works in non-interactive mode. | -| --skip-create-table | Skips creating the target table if it already exists in the target database. | No | Only works in non-interactive mode. | -| --source-database string | The name of the source MySQL database. | Yes | Only works in non-interactive mode. | -| --source-host string | The host of the source MySQL instance. | Yes | Only works in non-interactive mode. | -| --source-password string | The password for the source MySQL instance. | Yes | Only works in non-interactive mode. | -| --source-port int | The port of the source MySQL instance. | Yes | Only works in non-interactive mode. | -| --source-table string | The source table name in the source MySQL database. | Yes | Only works in non-interactive mode. | -| --source-user string | The user to log in to the source MySQL instance. | Yes | Only works in non-interactive mode. | -| --target-database string | The target database name in the TiDB Serverless cluster. | Yes | Only works in non-interactive mode. | -| --target-password string | The password for the target TiDB Serverless cluster. | Yes | Only works in non-interactive mode. 
| -| --target-user string | The user to log in to the target TiDB Serverless cluster. | No | Only works in non-interactive mode. | - -## Inherited flags - -| Flag | Description | Required | Note | -|---|---|---|---| -| --no-color | Disables color in output. | No | Only works in non-interactive mode. In interactive mode, disabling color might not work with some UI components. | -| -P, --profile string | Specifies the active [user profile](/tidb-cloud/cli-reference.md#user-profile) used in this command. | No | Works in both non-interactive and interactive modes. | - -## Feedback - -If you have any questions or suggestions on the TiDB Cloud CLI, feel free to create an [issue](https://github.com/tidbcloud/tidbcloud-cli/issues/new/choose). Also, we welcome any contributions. diff --git a/tidb-cloud/ticloud-import-start-s3.md b/tidb-cloud/ticloud-import-start-s3.md deleted file mode 100644 index d42f84def8128..0000000000000 --- a/tidb-cloud/ticloud-import-start-s3.md +++ /dev/null @@ -1,64 +0,0 @@ ---- -title: ticloud import start s3 -summary: The reference of `ticloud import start s3`. ---- - -# ticloud import start s3 - -Import files from Amazon S3 into TiDB Cloud: - -```shell -ticloud import start s3 [flags] -``` - -> **Note:** -> -> Before importing files from Amazon S3 into TiDB Cloud, you need to configure the Amazon S3 bucket access for TiDB Cloud and get the Role ARN. For more information, see [Configure Amazon S3 access](/tidb-cloud/config-s3-and-gcs-access.md#configure-amazon-s3-access). 
- -## Examples - -Start an import task in interactive mode: - -```shell -ticloud import start s3 -``` - -Start an import task in non-interactive mode: - -```shell -ticloud import start s3 --project-id <project-id> --cluster-id <cluster-id> --aws-role-arn <aws-role-arn> --data-format <data-format> --source-url <source-url> -``` - -Start an import task with a custom CSV format: - -```shell -ticloud import start s3 --project-id <project-id> --cluster-id <cluster-id> --aws-role-arn <aws-role-arn> --data-format CSV --source-url <source-url> --separator \" --delimiter \' --backslash-escape=false --trim-last-separator=true -``` - -## Flags - -In non-interactive mode, you need to manually enter the required flags. In interactive mode, you can just follow CLI prompts to fill them in. - -| Flag | Description | Required | Note | -|---|---|---|---| -| --aws-role-arn string | Specifies the AWS role ARN that is used to access the Amazon S3 data source. | Yes | Only works in non-interactive mode. | -| --backslash-escape | Whether to parse backslashes inside fields as escape characters for CSV files. The default value is `true`. | No | Only works in non-interactive mode when `--data-format CSV` is specified. | -| -c, --cluster-id string | Specifies the cluster ID. | Yes | Only works in non-interactive mode. | -| --data-format string | Specifies the data format. Valid values are `CSV`, `SqlFile`, `Parquet`, or `AuroraSnapshot`. | Yes | Only works in non-interactive mode. | -| --delimiter string | Specifies the delimiter used for quoting for CSV files. The default value is `"`. | No | Only works in non-interactive mode when `--data-format CSV` is specified. | -| -h, --help | Displays help information for this command. | No | Works in both non-interactive and interactive modes. | -| -p, --project-id string | Specifies the project ID. | Yes | Only works in non-interactive mode. | -| --separator string | Specifies the field separator for CSV files. The default value is `,`. | No | Only works in non-interactive mode when `--data-format CSV` is specified.
| -| --source-url string | The S3 path where the source data files are stored. | Yes | Only works in non-interactive mode. | -| --trim-last-separator | Whether to treat separators as line terminators and trim all trailing separators for CSV files. The default value is `false`. | No | Only works in non-interactive mode when `--data-format CSV` is specified. | - -## Inherited flags - -| Flag | Description | Required | Note | -|---|---|---|---| -| --no-color | Disables color in output | No | Only works in non-interactive mode. In interactive mode, disabling color might not work with some UI components. | -| -P, --profile string | Specifies the active [user profile](/tidb-cloud/cli-reference.md#user-profile) used in this command. | No | Works in both non-interactive and interactive modes. | - -## Feedback - -If you have any questions or suggestions on the TiDB Cloud CLI, feel free to create an [issue](https://github.com/tidbcloud/tidbcloud-cli/issues/new/choose). Also, we welcome any contributions. diff --git a/tidb-cloud/ticloud-import-start.md b/tidb-cloud/ticloud-import-start.md new file mode 100644 index 0000000000000..c610a76d5e8ad --- /dev/null +++ b/tidb-cloud/ticloud-import-start.md @@ -0,0 +1,80 @@ +--- +title: ticloud serverless import start +summary: The reference of `ticloud serverless import start`. +aliases: ['/tidbcloud/ticloud-import-start-local','/tidbcloud/ticloud-import-start-mysql','/tidbcloud/ticloud-import-start-s3'] +--- + +# ticloud serverless import start + +Start a data import task: + +```shell +ticloud serverless import start [flags] +``` + +Or use the following alias command: + +```shell +ticloud serverless import create [flags] +``` + +> **Note:** +> +> Currently, you can only import one CSV file for one import task. 
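Because each task imports exactly one CSV file, loading several files means starting one task per file. The following sketch loops over a hypothetical list of file names and echoes a placeholder where the real `ticloud serverless import start` invocation would go; the file names, database, and table are assumptions for illustration only:

```shell
# Hypothetical CSV files to import; in practice this list could come from a
# glob such as ./data/*.csv.
files="orders.csv users.csv"

count=0
for f in $files; do
  # Placeholder for the real invocation, for example:
  # ticloud serverless import start --local.file-path "$f" --cluster-id <cluster-id> \
  #   --file-type CSV --local.target-database <db> --local.target-table "${f%.csv}"
  echo "starting import task for $f"
  count=$((count + 1))
done
```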
+ +## Examples + +Start an import task in interactive mode: + +```shell +ticloud serverless import start +``` + +Start a local import task in non-interactive mode: + +```shell +ticloud serverless import start --local.file-path --cluster-id --file-type --local.target-database --local.target-table +``` + +Start a local import task with custom upload concurrency: + +```shell +ticloud serverless import start --local.file-path --cluster-id --file-type --local.target-database --local.target-table --local.concurrency 10 +``` + +Start a local import task with custom CSV format: + +```shell +ticloud serverless import start --local.file-path --cluster-id --file-type CSV --local.target-database --local.target-table --csv.separator \" --csv.delimiter \' --csv.backslash-escape=false --csv.trim-last-separator=true +``` + +## Flags + +In non-interactive mode, you need to manually enter the required flags. In interactive mode, you can just follow CLI prompts to fill them in. + +| Flag | Description | Required | Note | +|--------------------------------|---------------------------------------------------------------------------------------------------------------------|----------|---------------------------------------------------------------------------| +| -c, --cluster-id string | Specifies the cluster ID. | Yes | Only works in non-interactive mode. | +| --source-type string | Specifies the import source type. The default value is `LOCAL`. | No | Only works in non-interactive mode. | +| --local.concurrency int | Specifies the concurrency for uploading files. The default value is `5`. | No | Only works in non-interactive mode. | +| --local.file-path string | Specifies the path of the local file to be imported. | No | Only works in non-interactive mode. | +| --local.target-database string | Specifies the target database to which the data is imported. | No | Only works in non-interactive mode. 
| +| --local.target-table string | Specifies the target table to which the data is imported. | No | Only works in non-interactive mode. | +| --file-type string | Specifies the import file type. Currently, only `CSV` is supported. | Yes | Only works in non-interactive mode. | +| --csv.backslash-escape | Specifies whether to parse backslashes inside fields as escape characters in a CSV file. The default value is `true`. | No | Only works in non-interactive mode. | +| --csv.delimiter string | Specifies the delimiter used for quoting in a CSV file. The default value is `"`. | No | Only works in non-interactive mode. | +| --csv.separator string | Specifies the field separator in a CSV file. The default value is `,`. | No | Only works in non-interactive mode. | +| --csv.trim-last-separator | Specifies whether to treat the separator as the line terminator and trim all trailing separators in a CSV file. | No | Only works in non-interactive mode. | +| -h, --help | Shows help information for this command. | No | Works in both non-interactive and interactive modes. | + +## Inherited flags + +| Flag | Description | Required | Note | +|----------------------|------------------------------------------------------------------------------------------------------|----------|------------------------------------------------------------------------------------------------------------------| +| --no-color | Disables color in output. | No | Only works in non-interactive mode. In interactive mode, disabling color might not work with some UI components. | +| -P, --profile string | Specifies the active [user profile](/tidb-cloud/cli-reference.md#user-profile) used in this command. | No | Works in both non-interactive and interactive modes.
| + +## Feedback + +If you have any questions or suggestions on the TiDB Cloud CLI, feel free to create an [issue](https://github.com/tidbcloud/tidbcloud-cli/issues/new/choose). Also, we welcome any contributions. diff --git a/tidb-cloud/ticloud-project-list.md b/tidb-cloud/ticloud-project-list.md index eee14b32cd698..da1fba930bcd9 100644 --- a/tidb-cloud/ticloud-project-list.md +++ b/tidb-cloud/ticloud-project-list.md @@ -35,17 +35,18 @@ ticloud project list -o json In non-interactive mode, you need to manually enter the required flags. In interactive mode, you can just follow CLI prompts to fill them in. -| Flag | Description | Required | Note | -|---------------------|--------------------------------------------------------------------------------------------------------|----------|-----------------------------------------------------| -| -h, --help | Help information for this command | No | Works in both non-interactive and interactive modes. | -| -o, --output string | Output format (`human` by default). Valid values are `human` or `json`. To get a complete result, use the `json` format. | No | Works in both non-interactive and interactive modes. | +| Flag | Description | Required | Note | +|---------------------|--------------------------------------------------------------------------------------------------------------------------|----------|------------------------------------------------------| +| -h, --help | Shows help information for this command. | No | Works in both non-interactive and interactive modes. | +| -o, --output string | Specifies the output format (`human` by default). Valid values are `human` or `json`. To get a complete result, use the `json` format. | No | Works in both non-interactive and interactive modes. 
| ## Inherited flags -| Flag | Description | Required | Note | -|----------------------|-------------------------------------------------------------------------------------------|----------|--------------------------------------------------------------------------------------------------------------------------| -| --no-color | Disables color in output. | No | Only works in non-interactive mode. In interactive mode, disabling color might not work with some UI components. | -| -P, --profile string | Specifies the active [user profile](/tidb-cloud/cli-reference.md#user-profile) used in this command. | No | Works in both non-interactive and interactive modes. | +| Flag | Description | Required | Note | +|----------------------|------------------------------------------------------------------------------------------------------|----------|------------------------------------------------------------------------------------------------------------------| +| --no-color | Disables color in output. | No | Only works in non-interactive mode. In interactive mode, disabling color might not work with some UI components. | +| -P, --profile string | Specifies the active [user profile](/tidb-cloud/cli-reference.md#user-profile) used in this command. | No | Works in both non-interactive and interactive modes. | +| -D, --debug | Enables debug mode. | No | Works in both non-interactive and interactive modes. | ## Feedback diff --git a/tidb-cloud/ticloud-serverless-export-cancel.md b/tidb-cloud/ticloud-serverless-export-cancel.md new file mode 100644 index 0000000000000..777ec6686b79a --- /dev/null +++ b/tidb-cloud/ticloud-serverless-export-cancel.md @@ -0,0 +1,49 @@ +--- +title: ticloud serverless export cancel +summary: The reference of `ticloud serverless export cancel`. 
+--- + +# ticloud serverless export cancel + +Cancel a data export task: + +```shell +ticloud serverless export cancel [flags] +``` + +## Examples + +Cancel an export task in interactive mode: + +```shell +ticloud serverless export cancel +``` + +Cancel an export task in non-interactive mode: + +```shell +ticloud serverless export cancel -c -e +``` + +## Flags + +In non-interactive mode, you need to manually enter the required flags. In interactive mode, you can just follow CLI prompts to fill them in. + +| Flag | Description | Required | Note | +|-------------------------|----------------------------------------------|----------|------------------------------------------------------| +| -c, --cluster-id string | Specifies the ID of the cluster. | Yes | Only works in non-interactive mode. | +| -e, --export-id string | Specifies the ID of the export task. | Yes | Only works in non-interactive mode. | +| --force | Cancels an export task without confirmation. | No | Works in both non-interactive and interactive modes. | +| -h, --help | Shows help information for this command. | No | Works in both non-interactive and interactive modes. | + +## Inherited flags + +| Flag | Description | Required | Note | +|----------------------|------------------------------------------------------------------------------------------------------|----------|------------------------------------------------------------------------------------------------------------------| +| --no-color | Disables color in output. | No | Only works in non-interactive mode. In interactive mode, disabling color might not work with some UI components. | +| -P, --profile string | Specifies the active [user profile](/tidb-cloud/cli-reference.md#user-profile) used in this command. | No | Works in both non-interactive and interactive modes. | +| -D, --debug | Enables debug mode. | No | Works in both non-interactive and interactive modes.
| + +## Feedback + +If you have any questions or suggestions on the TiDB Cloud CLI, feel free to create an [issue](https://github.com/tidbcloud/tidbcloud-cli/issues/new/choose). Also, we welcome any contributions. diff --git a/tidb-cloud/ticloud-serverless-export-create.md b/tidb-cloud/ticloud-serverless-export-create.md new file mode 100644 index 0000000000000..5595477ad87ef --- /dev/null +++ b/tidb-cloud/ticloud-serverless-export-create.md @@ -0,0 +1,61 @@ +--- +title: ticloud serverless export create +summary: The reference of `ticloud serverless export create`. +--- + +# ticloud serverless export create + +Export data from a TiDB Serverless cluster: + +```shell +ticloud serverless export create [flags] +``` + +## Examples + +Export data from a TiDB Serverless cluster in interactive mode: + +```shell +ticloud serverless export create +``` + +Export data from a TiDB Serverless cluster to local storage in non-interactive mode: + +```shell +ticloud serverless export create -c --database +``` + +Export data from a TiDB Serverless cluster to Amazon S3 in non-interactive mode: + +```shell +ticloud serverless export create -c --s3.bucket-uri --s3.access-key-id --s3.secret-access-key +``` + +## Flags + +In non-interactive mode, you need to manually enter the required flags. In interactive mode, you can just follow CLI prompts to fill them in. + +| Flag | Description | Required | Note | +|-------------------------------|--------------------------------------------------------------------------------------------------------|----------|------------------------------------------------------| +| -c, --cluster-id string | Specifies the ID of the cluster from which you want to export data. | Yes | Only works in non-interactive mode. | +| --file-type string | Specifies the format of the exported file. The supported formats include `CSV` and `SQL`. The default value is `SQL`. | No | Only works in non-interactive mode.
| +| --database string | Specifies the database from which you want to export data. The default value is `*`. This flag is required when you export data to local storage. | No | Only works in non-interactive mode. | +| --table string | Specifies the table from which you want to export data. The default value is `*`. | No | Only works in non-interactive mode. | +| --target-type string | Specifies the target location of the exported data. The supported locations include `LOCAL` and `S3`. The default value is `LOCAL`. | No | Only works in non-interactive mode. | +| --s3.bucket-uri string | Specifies the URI of the Amazon S3 bucket. This flag is required when you export data to Amazon S3. | No | Only works in non-interactive mode. | +| --s3.access-key-id string | Specifies the access key ID of the S3 bucket. This flag is required when you export data to Amazon S3. | No | Only works in non-interactive mode. | +| --s3.secret-access-key string | Specifies the secret access key of the S3 bucket. This flag is required when you export data to Amazon S3. | No | Only works in non-interactive mode. | +| --compression string | Specifies the compression algorithm of the exported file. The supported algorithms include `GZIP`, `SNAPPY`, `ZSTD`, and `NONE`. The default value is `GZIP`. | No | Only works in non-interactive mode. | +| -h, --help | Shows help information for this command. | No | Works in both non-interactive and interactive modes. | + +## Inherited flags + +| Flag | Description | Required | Note | +|----------------------|------------------------------------------------------------------------------------------------------|----------|------------------------------------------------------------------------------------------------------------------| +| --no-color | Disables color in output. | No | Only works in non-interactive mode. In interactive mode, disabling color might not work with some UI components.
| +| -P, --profile string | Specifies the active [user profile](/tidb-cloud/cli-reference.md#user-profile) used in this command. | No | Works in both non-interactive and interactive modes. | +| -D, --debug | Enables debug mode. | No | Works in both non-interactive and interactive modes. | + +## Feedback + +If you have any questions or suggestions on the TiDB Cloud CLI, feel free to create an [issue](https://github.com/tidbcloud/tidbcloud-cli/issues/new/choose). Also, we welcome any contributions. diff --git a/tidb-cloud/ticloud-serverless-export-describe.md b/tidb-cloud/ticloud-serverless-export-describe.md new file mode 100644 index 0000000000000..4a074c721595d --- /dev/null +++ b/tidb-cloud/ticloud-serverless-export-describe.md @@ -0,0 +1,54 @@ +--- +title: ticloud serverless export describe +summary: The reference of `ticloud serverless export describe`. +--- + +# ticloud serverless export describe + +Get the export information of a TiDB Serverless cluster: + +```shell +ticloud serverless export describe [flags] +``` + +Or use the following alias command: + +```shell +ticloud serverless export get [flags] +``` + +## Examples + +Get the export information in interactive mode: + +```shell +ticloud serverless export describe +``` + +Get the export information in non-interactive mode: + +```shell +ticloud serverless export describe -c -e +``` + +## Flags + +In non-interactive mode, you need to manually enter the required flags. In interactive mode, you can just follow CLI prompts to fill them in. + +| Flag | Description | Required | Note | +|-------------------------|----------------------------------------------|----------|------------------------------------------------------| +| -c, --cluster-id string | Specifies the ID of the cluster. | Yes | Only works in non-interactive mode. | +| -e, --export-id string | Specifies the ID of the export task. | Yes | Only works in non-interactive mode. | +| -h, --help | Shows help information for this command. 
| No | Works in both non-interactive and interactive modes. | + +## Inherited flags + +| Flag | Description | Required | Note | +|----------------------|------------------------------------------------------------------------------------------------------|----------|------------------------------------------------------------------------------------------------------------------| +| --no-color | Disables color in output. | No | Only works in non-interactive mode. In interactive mode, disabling color might not work with some UI components. | +| -P, --profile string | Specifies the active [user profile](/tidb-cloud/cli-reference.md#user-profile) used in this command. | No | Works in both non-interactive and interactive modes. | +| -D, --debug | Enables debug mode. | No | Works in both non-interactive and interactive modes. | + +## Feedback + +If you have any questions or suggestions on the TiDB Cloud CLI, feel free to create an [issue](https://github.com/tidbcloud/tidbcloud-cli/issues/new/choose). Also, we welcome any contributions. diff --git a/tidb-cloud/ticloud-serverless-export-download.md b/tidb-cloud/ticloud-serverless-export-download.md new file mode 100644 index 0000000000000..c1ecb764b669c --- /dev/null +++ b/tidb-cloud/ticloud-serverless-export-download.md @@ -0,0 +1,50 @@ +--- +title: ticloud serverless export download +summary: The reference of `ticloud serverless export download`. +--- + +# ticloud serverless export download + +Download the exported data from a TiDB Serverless cluster to your local storage: + +```shell +ticloud serverless export download [flags] +``` + +## Examples + +Download the exported data in interactive mode: + +```shell +ticloud serverless export download +``` + +Download the exported data in non-interactive mode: + +```shell +ticloud serverless export download -c -e +``` + +## Flags + +In non-interactive mode, you need to manually enter the required flags. In interactive mode, you can just follow CLI prompts to fill them in. 
+ +| Flag | Description | Required | Note | +|-------------------------|------------------------------------------------------------------------------------|----------|------------------------------------------------------| +| -c, --cluster-id string | Specifies the ID of the cluster. | Yes | Only works in non-interactive mode. | +| -e, --export-id string | Specifies the ID of the export task. | Yes | Only works in non-interactive mode. | +| --output-path string | Specifies the destination path for saving the downloaded data. If not specified, the data is downloaded to the current directory. | No | Only works in non-interactive mode. | +| --force | Downloads the exported data without confirmation. | No | Works in both non-interactive and interactive modes. | +| -h, --help | Shows help information for this command. | No | Works in both non-interactive and interactive modes. | + +## Inherited flags + +| Flag | Description | Required | Note | +|----------------------|------------------------------------------------------------------------------------------------------|----------|------------------------------------------------------------------------------------------------------------------| +| --no-color | Disables color in output. | No | Only works in non-interactive mode. In interactive mode, disabling color might not work with some UI components. | +| -P, --profile string | Specifies the active [user profile](/tidb-cloud/cli-reference.md#user-profile) used in this command. | No | Works in both non-interactive and interactive modes. | +| -D, --debug | Enables debug mode. | No | Works in both non-interactive and interactive modes. | + +## Feedback + +If you have any questions or suggestions on the TiDB Cloud CLI, feel free to create an [issue](https://github.com/tidbcloud/tidbcloud-cli/issues/new/choose). Also, we welcome any contributions. 
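In scripts, a common pattern (an assumption here rather than a documented workflow) is to poll the export status and download only after the task completes. The sketch below stubs the status check with a `get_status` function; in practice you would replace it with a real `ticloud serverless export describe` call piped through a JSON parser:

```shell
# Poll a status command (passed by name) until it prints SUCCEEDED,
# giving up after 30 attempts.
wait_for_export() {
  attempts=0
  while [ "$("$1")" != "SUCCEEDED" ]; do
    attempts=$((attempts + 1))
    if [ "$attempts" -ge 30 ]; then
      return 1
    fi
    sleep 2  # back off between status checks
  done
}

# Stubbed status check; replace with a real call such as
# `ticloud serverless export describe -c <cluster-id> -e <export-id>`
# plus JSON parsing of the task state field.
get_status() { echo "SUCCEEDED"; }

if wait_for_export get_status; then
  echo "export finished, starting download"
  # ticloud serverless export download -c <cluster-id> -e <export-id> --force
fi
```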
diff --git a/tidb-cloud/ticloud-serverless-export-list.md b/tidb-cloud/ticloud-serverless-export-list.md new file mode 100644 index 0000000000000..68976f86fae7f --- /dev/null +++ b/tidb-cloud/ticloud-serverless-export-list.md @@ -0,0 +1,60 @@ +--- +title: ticloud serverless export list +summary: The reference of `ticloud serverless export list`. +--- + +# ticloud serverless export list + +List data export tasks of TiDB Serverless clusters: + +```shell +ticloud serverless export list [flags] +``` + +Or use the following alias command: + +```shell +ticloud serverless export ls [flags] +``` + +## Examples + +List all export tasks in interactive mode: + +```shell +ticloud serverless export list +``` + +List export tasks for a specified cluster in non-interactive mode: + +```shell +ticloud serverless export list -c +``` + +List export tasks for a specified cluster in the JSON format in non-interactive mode: + +```shell +ticloud serverless export list -c -o json +``` + +## Flags + +In non-interactive mode, you need to manually enter the required flags. In interactive mode, you can just follow CLI prompts to fill them in. + +| Flag | Description | Required | Note | +|-------------------------|--------------------------------------------------------------------------------------------------------------------------|----------|------------------------------------------------------| +| -c, --cluster-id string | Specifies the ID of the cluster. | Yes | Only works in non-interactive mode. | +| -o, --output string | Specifies the output format (`human` by default). Valid values are `human` or `json`. To get a complete result, use the `json` format. | No | Works in both non-interactive and interactive modes. | +| -h, --help | Shows help information for this command. | No | Works in both non-interactive and interactive modes. 
| + +## Inherited flags + +| Flag | Description | Required | Note | +|----------------------|------------------------------------------------------------------------------------------------------|----------|------------------------------------------------------------------------------------------------------------------| +| --no-color | Disables color in output. | No | Only works in non-interactive mode. In interactive mode, disabling color might not work with some UI components. | +| -P, --profile string | Specifies the active [user profile](/tidb-cloud/cli-reference.md#user-profile) used in this command. | No | Works in both non-interactive and interactive modes. | +| -D, --debug | Enables debug mode. | No | Works in both non-interactive and interactive modes. | + +## Feedback + +If you have any questions or suggestions on the TiDB Cloud CLI, feel free to create an [issue](https://github.com/tidbcloud/tidbcloud-cli/issues/new/choose). Also, we welcome any contributions. diff --git a/tidb-cloud/ticloud-serverless-region.md b/tidb-cloud/ticloud-serverless-region.md new file mode 100644 index 0000000000000..5fb28b2f0e97c --- /dev/null +++ b/tidb-cloud/ticloud-serverless-region.md @@ -0,0 +1,48 @@ +--- +title: ticloud serverless region +summary: The reference of `ticloud serverless region`. +aliases: ['/tidbcloud/ticloud-serverless-regions'] +--- + +# ticloud serverless region + +List all available regions for TiDB Serverless: + +```shell +ticloud serverless region [flags] +``` + +## Examples + +List all available regions for TiDB Serverless: + +```shell +ticloud serverless region +``` + +List all available regions for TiDB Serverless clusters in the JSON format: + +```shell +ticloud serverless region -o json +``` + +## Flags + +In non-interactive mode, you need to manually enter the required flags. In interactive mode, you can just follow CLI prompts to fill them in. 
+ +| Flag | Description | Required | Note | +|---------------------|--------------------------------------------------------------------------------------------------------------------------|----------|------------------------------------------------------| +| -o, --output string | Specifies the output format (`human` by default). Valid values are `human` or `json`. To get a complete result, use the `json` format. | No | Works in both non-interactive and interactive modes. | +| -h, --help | Shows help information for this command. | No | Works in both non-interactive and interactive modes. | + +## Inherited flags + +| Flag | Description | Required | Note | +|----------------------|--------------------------------------------------------------------------------------------|----------|------------------------------------------------------------------------------------------------------------------| +| --no-color | Disables color in output. | No | Only works in non-interactive mode. In interactive mode, disabling color might not work with some UI components. | +| -P, --profile string | Specifies the active [user profile](/tidb-cloud/cli-reference.md#user-profile) used in this command. | No | Works in both non-interactive and interactive modes. | +| -D, --debug | Enables debug mode. | No | Works in both non-interactive and interactive modes. | + +## Feedback + +If you have any questions or suggestions on the TiDB Cloud CLI, feel free to create an [issue](https://github.com/tidbcloud/tidbcloud-cli/issues/new/choose). Also, we welcome any contributions. diff --git a/tidb-cloud/ticloud-serverless-shell.md b/tidb-cloud/ticloud-serverless-shell.md new file mode 100644 index 0000000000000..1267f0f4e4902 --- /dev/null +++ b/tidb-cloud/ticloud-serverless-shell.md @@ -0,0 +1,62 @@ +--- +title: ticloud serverless shell +summary: The reference of `ticloud serverless shell`. 
+aliases: ['/tidbcloud/ticloud-connect'] +--- + +# ticloud serverless shell + +Connect to a TiDB Serverless cluster: + +```shell +ticloud serverless shell [flags] +``` + +## Examples + +Connect to a TiDB Serverless cluster in interactive mode: + +```shell +ticloud serverless shell +``` + +Connect to a TiDB Serverless cluster with the default user in non-interactive mode: + +```shell +ticloud serverless shell -c +``` + +Connect to a TiDB Serverless cluster with the default user and password in non-interactive mode: + +```shell +ticloud serverless shell -c --password +``` + +Connect to a TiDB Serverless cluster with a specific user and password in non-interactive mode: + +```shell +ticloud serverless shell -c -u --password +``` + +## Flags + +In non-interactive mode, you need to manually enter the required flags. In interactive mode, you can just follow CLI prompts to fill them in. + +| Flag | Description | Required | Note | +|-------------------------|-----------------------------------|----------|------------------------------------------------------| +| -c, --cluster-id string | Specifies the ID of the cluster. | Yes | Only works in non-interactive mode. | +| -h, --help | Shows help information for this command. | No | Works in both non-interactive and interactive modes. | +| --password | Specifies the password of the user. | No | Only works in non-interactive mode. | +| -u, --user string | Specifies the user for login. | No | Only works in non-interactive mode. | + +## Inherited flags + +| Flag | Description | Required | Note | +|----------------------|------------------------------------------------------------------------------------------------------|----------|------------------------------------------------------------------------------------------------------------------| +| --no-color | Disables color in output. | No | Only works in non-interactive mode. In interactive mode, disabling color might not work with some UI components.
| +| -P, --profile string | Specifies the active [user profile](/tidb-cloud/cli-reference.md#user-profile) used in this command. | No | Works in both non-interactive and interactive modes. | +| -D, --debug | Enables debug mode. | No | Works in both non-interactive and interactive modes. | + +## Feedback + +If you have any questions or suggestions on the TiDB Cloud CLI, feel free to create an [issue](https://github.com/tidbcloud/tidbcloud-cli/issues/new/choose). Also, we welcome any contributions. \ No newline at end of file diff --git a/tidb-cloud/ticloud-serverless-spending-limit.md b/tidb-cloud/ticloud-serverless-spending-limit.md new file mode 100644 index 0000000000000..b42c0781eb6ba --- /dev/null +++ b/tidb-cloud/ticloud-serverless-spending-limit.md @@ -0,0 +1,48 @@ +--- +title: ticloud serverless spending-limit +summary: The reference of `ticloud serverless spending-limit`. +--- + +# ticloud serverless spending-limit + +Set the maximum monthly [spending limit](/tidb-cloud/manage-serverless-spend-limit.md) for a TiDB Serverless cluster: + +```shell +ticloud serverless spending-limit [flags] +``` + +## Examples + +Set the spending limit for a TiDB Serverless cluster in interactive mode: + +```shell +ticloud serverless spending-limit +``` + +Set the spending limit for a TiDB Serverless cluster in non-interactive mode: + +```shell +ticloud serverless spending-limit -c --monthly +``` + +## Flags + +In non-interactive mode, you need to manually enter the required flags. In interactive mode, you can just follow CLI prompts to fill them in. + +| Flag | Description | Required | Note | +|-------------------------|---------------------------------------------|----------|------------------------------------------------------| +| -c, --cluster-id string | Specifies the ID of the cluster. | Yes | Only works in non-interactive mode. | +| --monthly int32 | Specifies the maximum monthly spending limit in USD cents. | Yes | Only works in non-interactive mode. 
| +| -h, --help | Shows help information for this command. | No | Works in both non-interactive and interactive modes. | + +## Inherited flags + +| Flag | Description | Required | Note | +|----------------------|--------------------------------------------------------------------------------------------|----------|------------------------------------------------------------------------------------------------------------------| +| --no-color | Disables color in output. | No | Only works in non-interactive mode. In interactive mode, disabling color might not work with some UI components. | +| -P, --profile string | Specifies the active [user profile](/tidb-cloud/cli-reference.md#user-profile) used in this command. | No | Works in both non-interactive and interactive modes. | +| -D, --debug | Enables debug mode. | No | Works in both non-interactive and interactive modes. | + +## Feedback + +If you have any questions or suggestions on the TiDB Cloud CLI, feel free to create an [issue](https://github.com/tidbcloud/tidbcloud-cli/issues/new/choose). Also, we welcome any contributions. diff --git a/tidb-cloud/ticloud-serverless-update.md b/tidb-cloud/ticloud-serverless-update.md new file mode 100644 index 0000000000000..3191dc0451125 --- /dev/null +++ b/tidb-cloud/ticloud-serverless-update.md @@ -0,0 +1,56 @@ +--- +title: ticloud serverless update +summary: The reference of `ticloud serverless update`. 
+--- + +# ticloud serverless update + +Update a TiDB Serverless cluster: + +```shell +ticloud serverless update [flags] +``` + +## Examples + +Update a TiDB Serverless cluster in interactive mode: + +```shell +ticloud serverless update +``` + +Update the name of a TiDB Serverless cluster in non-interactive mode: + +```shell +ticloud serverless update -c --display-name +``` + +Update labels of a TiDB Serverless cluster in non-interactive mode: + +```shell +ticloud serverless update -c --labels "{\"label1\":\"value1\"}" +``` + +## Flags + +In non-interactive mode, you need to manually enter the required flags. In interactive mode, you can just follow CLI prompts to fill them in. + +| Flag | Description | Required | Note | +|--------------------------|-------------------------------------------------------|----------|------------------------------------------------------| +| -c, --cluster-id string | Specifies the ID of the cluster. | Yes | Only works in non-interactive mode. | +| -n, --display-name string | Specifies a new name for the cluster. | No | Only works in non-interactive mode. | +| --annotations string | Specifies new annotations for the cluster. | No | Only works in non-interactive mode. | +| --labels string | Specifies new labels for the cluster. | No | Only works in non-interactive mode. | +| -h, --help | Shows help information for this command. | No | Works in both non-interactive and interactive modes. | + +## Inherited flags + +| Flag | Description | Required | Note | +|----------------------|------------------------------------------------------------------------------------------------------|----------|------------------------------------------------------------------------------------------------------------------| +| --no-color | Disables color in output.
| +| -P, --profile string | Specifies the active [user profile](/tidb-cloud/cli-reference.md#user-profile) used in this command. | No | Works in both non-interactive and interactive modes. | +| -D, --debug | Enables debug mode. | No | Works in both non-interactive and interactive modes. | + +## Feedback + +If you have any questions or suggestions on the TiDB Cloud CLI, feel free to create an [issue](https://github.com/tidbcloud/tidbcloud-cli/issues/new/choose). Also, we welcome any contributions. diff --git a/tidb-cloud/ticloud-update.md b/tidb-cloud/ticloud-update.md index 0a2c85dd3eca7..cd00acbf63a69 100644 --- a/tidb-cloud/ticloud-update.md +++ b/tidb-cloud/ticloud-update.md @@ -21,16 +21,17 @@ ticloud update ## Flags -| Flag | Description | -|------------|--------------------------| - | -h, --help | Help information for this command | +| Flag | Description | +|------------|-----------------------------------| + | -h, --help | Shows help information for this command. | ## Inherited flags -| Flag | Description | Required | Note | -|----------------------|-------------------------------------------------------------------------------------------|----------|--------------------------------------------------------------------------------------------------------------------------| -| --no-color | Disables color in output. | No | Only works in non-interactive mode. In interactive mode, disabling color might not work with some UI components. | -| -P, --profile string | Specifies the active [user profile](/tidb-cloud/cli-reference.md#user-profile) used in this command. | No | Works in both non-interactive and interactive modes. | +| Flag | Description | Required | Note | +|----------------------|------------------------------------------------------------------------------------------------------|----------|------------------------------------------------------------------------------------------------------------------| +| --no-color | Disables color in output. 
| No | Only works in non-interactive mode. In interactive mode, disabling color might not work with some UI components. | +| -P, --profile string | Specifies the active [user profile](/tidb-cloud/cli-reference.md#user-profile) used in this command. | No | Works in both non-interactive and interactive modes. | +| -D, --debug | Enables debug mode. | No | Works in both non-interactive and interactive modes. | ## Feedback diff --git a/tidb-cloud/tidb-cloud-billing-ticdc-rcu.md b/tidb-cloud/tidb-cloud-billing-ticdc-rcu.md index 14930b63a97ad..27128bc02f828 100644 --- a/tidb-cloud/tidb-cloud-billing-ticdc-rcu.md +++ b/tidb-cloud/tidb-cloud-billing-ticdc-rcu.md @@ -24,7 +24,7 @@ The following table lists the specifications and corresponding replication perfo > **Note:** > -> The preceding performance data is for reference only and might vary in different scenarios. +> The preceding performance data is for reference only and might vary in different scenarios. It is strongly recommended that you conduct a real workload test before using the changefeed feature in a production environment. For further assistance, contact [TiDB Cloud support](/tidb-cloud/tidb-cloud-support.md#get-support-for-a-cluster). ## Price diff --git a/tidb-cloud/tidb-cloud-billing.md b/tidb-cloud/tidb-cloud-billing.md index d456a1a4278e0..4ca9266baa951 100644 --- a/tidb-cloud/tidb-cloud-billing.md +++ b/tidb-cloud/tidb-cloud-billing.md @@ -46,7 +46,9 @@ To view the list of invoices, perform the following steps: > > If you are in multiple organizations, switch to your target organization by clicking its name. -2. Click **Billing**. The invoices page is displayed. +2. In the left navigation pane, click the **Billing** tab. + +3. Click the **Invoices** tab. The invoices page is displayed. ## Billing details @@ -78,6 +80,48 @@ The billing details page shows the billing summary by project and by service. Yo > - The total amount in the monthly bill is rounded off to the 2nd decimal place. 
> - The total amount in the daily usage details is accurate to the 6th decimal place. +## Cost explorer + +If you are in the `Organization Owner` or `Organization Billing Admin` role of your organization, you can view and analyze the usage costs of TiDB Cloud. Otherwise, skip this section. + +To analyze and customize the cost reports of your organization, perform the following steps: + +1. In the lower-left corner of the [TiDB Cloud console](https://tidbcloud.com), click , and then click **Billing**. + + > **Note:** + > + > If you are in multiple organizations, switch to your target organization by clicking its name. + +2. On the **Billing** page, click the **Cost Explorer** tab. +3. On the **Cost Explorer** page, expand the **Filter** section in the upper-right corner to customize your report. You can set the time range, select a grouping option (such as by service, project, cluster, region, product type, and charge type), and apply filters by selecting specific services, projects, clusters, or regions. The cost explorer displays the following information: + + - **Cost Graph**: visualizes the cost trends over the selected time range. You can switch between **Monthly**, **Daily**, and **Total** views. + - **Cost Breakdown**: displays a detailed breakdown of your costs according to the selected grouping option. For further analysis, you can download the data in CSV format. + +## Billing profile + +Paid organizations can create a billing profile. The information in this profile is used to determine tax calculation. + +To view or update the billing profile of your organization, click in the lower-left corner and then click **Billing** > **Billing Profile**. + +There are four fields in the billing profile. + +### Company name (optional) + +If this field is specified, this name will appear on invoices instead of your organization name.
+ +### Billing email (optional) + +If this field is specified, invoices and other billing-related notifications will be sent to this email address. + +### Primary business address + +This is the address of the company that purchases TiDB Cloud services. It is used to calculate any applicable taxes. + +### Business tax ID (optional) + +If your business is registered for VAT/GST, fill in a valid VAT/GST ID. By providing this information, you can be exempted from VAT/GST charges where applicable. This is important for businesses operating in regions where VAT/GST registration allows for certain tax exemptions or refunds. + ## Credits TiDB Cloud offers a certain number of credits for Proof of Concept (PoC) users. One credit is equivalent to one U.S. dollar. You can use credits to pay TiDB cluster fees before the credits become expired. @@ -136,8 +180,6 @@ If you are in the `Organization Owner` or `Organization Billing Admin` role of y > > If you sign up for TiDB Cloud through [AWS Marketplace](https://aws.amazon.com/marketplace) or [Google Cloud Marketplace](https://console.cloud.google.com/marketplace), you can pay through your AWS account or Google Cloud account directly but cannot add payment methods or download invoices in the TiDB Cloud console. -### Add a credit card - The fee is deducted from a bound credit card according to your cluster usage. To add a valid credit card, you can use either of the following methods: - When you are creating a cluster: @@ -156,7 +198,9 @@ The fee is deducted from a bound credit card according to your cluster usage. To 2. Click **Billing**. 3. Under the **Payment Method** tab, click **Add a New Card**. - 4. Fill in the billing address and card information, and then click **Save**. + 4. Fill in the credit card information and credit card address, and then click **Save Card**.
+ + If you do not specify a primary business address in [**Billing profile**](#billing-profile), the credit card address will be used as your primary business address for tax calculation. You can update your primary business address in **Billing profile** anytime. > **Note:** > @@ -176,22 +220,6 @@ To set the default credit card, perform the following steps: 3. Click the **Payment Method** tab. 4. Select a credit card in the credit card list, and click **Set as default**. -### Edit billing profile information - -The billing profile information includes the business legal address and tax registration information. By providing your tax registration number, certain taxes might be exempted from your invoice. - -To edit the billing profile information, perform the following steps: - -1. Click in the lower-left corner of the TiDB Cloud console. - - > **Note:** - > - > If you are in multiple organizations, switch to your target organization by clicking its name. - -2. Click **Billing**. -3. Click the **Payment Method** tab. -4. Edit the billing profile information, and then click **Save**. - ## Contract If you are in the `Organization Owner` or `Organization Billing Admin` role of your organization, you can manage your customized TiDB Cloud subscriptions in the TiDB Cloud console to meet compliance requirements. Otherwise, skip this section. diff --git a/tidb-cloud/tidb-cloud-budget.md b/tidb-cloud/tidb-cloud-budget.md new file mode 100644 index 0000000000000..6b6e0840ca324 --- /dev/null +++ b/tidb-cloud/tidb-cloud-budget.md @@ -0,0 +1,113 @@ +--- +title: Manage Budgets for TiDB Cloud +summary: Learn about how to use the budget feature of TiDB Cloud to monitor your costs. +--- + +# Manage Budgets for TiDB Cloud + +In TiDB Cloud, you can use the budget feature to monitor your costs and keep your spending under control. 
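Conceptually, each budget alert is simple percentage math: a threshold fires once the tracked monthly total reaches that fraction of the budget amount, optionally after deducting credits and discounts. The following Python sketch illustrates the arithmetic only — the function and parameter names are hypothetical and this is not TiDB Cloud's implementation:

```python
# Illustration of budget alert thresholds; names are hypothetical,
# not part of TiDB Cloud.
def crossed_thresholds(actual_cost, budget_amount,
                       thresholds=(0.75, 0.90, 1.00),
                       credits=0.0, discounts=0.0):
    """Return the threshold fractions that the tracked total has reached."""
    # Credits and discounts reduce the tracked total only when applied.
    tracked = actual_cost - credits - discounts
    return [t for t in thresholds if tracked >= t * budget_amount]

# A $400 budget with $390 spent and $20 in credits crosses 75% and 90%.
print(crossed_thresholds(390.0, 400.0, credits=20.0))  # [0.75, 0.9]
```

In the scheme described on this page, each fraction returned would correspond to one alert email.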
+ +When your monthly actual costs exceed the percentage thresholds of your specified budget, alert emails are sent to your organization owners and billing administrators. These notifications help you stay informed and take proactive measures to manage your spending, aligning your expenses with your budget. + +TiDB Cloud provides two types of budgets to help you track your spending: + +- **Serverless Spending Limit** budget: for each TiDB Serverless scalable cluster, TiDB Cloud automatically creates a **Serverless Spending Limit** budget. This budget helps you track the actual cost against the [spending limit](/tidb-cloud/manage-serverless-spend-limit.md) configured on that cluster. It includes three threshold rules: 75%, 90%, and 100% of the budget, which are not editable. + +- **Custom** budget: you can create custom budgets to track actual costs for an entire organization or specific projects. For each budget, you can specify a budget scope, set a target spending amount, and configure alert thresholds. After creating a custom budget, you can compare your monthly actual costs with your planned costs to ensure you stay within budget. + +## Prerequisites + +To view, create, edit, or delete budgets of your organization or projects, you must be in the `Organization Owner` or `Organization Billing Admin` role of your organization. + +## View the budget information + +To view the budget page of your organization, take the following steps: + +1. In the lower-left corner of the TiDB Cloud console, click , and then click **Billing**. + + > **Note:** + > + > If you are in multiple organizations, switch to your target organization by clicking its name. + +2. On the **Billing** page, click the **Budgets** tab. + +For each budget, you can view its name, type, status, amount used, budget amount, period, and scope. + +## Create a custom budget + +To create a custom budget to monitor the spending of your organization or specific projects, take the following steps: + +1. 
In the lower-left corner of the TiDB Cloud console, click , and then click **Billing**. + + > **Note:** + > + > If you are in multiple organizations, switch to your target organization by clicking its name. + +2. On the **Billing** page, click the **Budgets** tab. + +3. On the **Budgets** page, click **Create Custom Budget**. You can create up to five custom budgets. + +4. Provide the basic settings for the budget. + + - **Name**: enter a name for the budget. + - **Period**: select a time range for tracking costs. Currently, you can only select **Monthly**, which starts on the first day of each month and resets at the beginning of each month. TiDB Cloud tracks your actual spending during the time range against your budget amount (your planned spending). + - **Budget scope**: apply the scope to all projects (which means the entire TiDB Cloud organization) or a specific project as needed. + +5. Set the budget amount. + + - **Budget Amount**: enter a planned spending amount for the selected period. + - **Apply credits**: choose whether to apply credits to the running total cost. Credits are used to reduce the cost of your TiDB Cloud usage. When this option is enabled, the budget tracks the running total cost minus credits. + - **Apply discounts**: choose whether to apply discounts to the running total cost. Discounts are reductions in the regular price of TiDB Cloud service. When this option is enabled, the budget tracks the running total cost minus discounts. + +6. Configure alert thresholds for the budget. If your actual spending exceeds specified thresholds during the selected period, TiDB Cloud sends a budget notification email to your organization owners and billing administrators. + + - By default, TiDB Cloud provides three alert thresholds: 75%, 90%, and 100% of the budget amount. You can modify these percentages as needed. + - To add a new alert threshold, click **Add alert threshold**. + - To remove a threshold, click the delete icon next to the threshold. + +7.
Click **Create**. + +## Edit a custom budget + +> **Note:** +> +> The **Serverless Spending Limit** budget cannot be edited because it is automatically created by TiDB Cloud to help you track the cost of a TiDB Serverless scalable cluster against its [spending limit](/tidb-cloud/manage-serverless-spend-limit.md). + +To edit a custom budget, take the following steps: + +1. In the lower-left corner of the TiDB Cloud console, click , and then click **Billing**. + + > **Note:** + > + > If you are in multiple organizations, switch to your target organization by clicking its name. + +2. On the **Billing** page, click the **Budgets** tab. + +3. On the **Budgets** page, locate the row of your budget, click **...** in that row, and then click **Edit**. + +4. Edit the budget name, budget scope, budget amount, and alert thresholds as needed. + + > **Note:** + > + > Editing the budget period and whether to apply credits and discounts is not supported. + +5. Click **Update**. + +## Delete a custom budget + +> **Note:** +> +> - Once a custom budget is deleted, you will no longer receive any alert emails related to it. +> - The **Serverless Spending Limit** budget cannot be deleted because it is automatically created by TiDB Cloud to help you track the cost of a TiDB Serverless scalable cluster against its [spending limit](/tidb-cloud/manage-serverless-spend-limit.md). + +To delete a custom budget, take the following steps: + +1. In the lower-left corner of the TiDB Cloud console, click , and then click **Billing**. + + > **Note:** + > + > If you are in multiple organizations, switch to your target organization by clicking its name. + +2. On the **Billing** page, click the **Budgets** tab. + +3. On the **Budgets** page, locate the row of your budget, click **...** in that row, and then click **Delete**. 
\ No newline at end of file diff --git a/tidb-cloud/tidb-cloud-console-auditing.md b/tidb-cloud/tidb-cloud-console-auditing.md index 7c73d1836aa7c..9a474d2517b77 100644 --- a/tidb-cloud/tidb-cloud-console-auditing.md +++ b/tidb-cloud/tidb-cloud-console-auditing.md @@ -147,7 +147,7 @@ The console audit logs record various user activities on the TiDB Cloud console | BindSupportPlan | Bind a support plan | | CancelSupportPlan | Cancel a support plan | | UpdateOrganizationName | Update the organization name | -| SetSpendLimit | Edit the spending limit of a TiDB Serverless cluster | +| SetSpendLimit | Edit the spending limit of a TiDB Serverless scalable cluster | | UpdateMaintenanceWindow | Modify maintenance window start time | | DeferMaintenanceTask | Defer a maintenance task | | CreateBranch | Create a TiDB Serverless branch | diff --git a/tidb-cloud/tidb-cloud-encrypt-cmek.md b/tidb-cloud/tidb-cloud-encrypt-cmek.md index 361d0b8e34fc8..de24cbce5dc50 100644 --- a/tidb-cloud/tidb-cloud-encrypt-cmek.md +++ b/tidb-cloud/tidb-cloud-encrypt-cmek.md @@ -5,7 +5,7 @@ summary: Learn about how to use Customer-Managed Encryption Key (CMEK) in TiDB C # Encryption at Rest Using Customer-Managed Encryption Keys -Customer-Managed Encryption Key (CMEK) allows you to secure your static data in a TiDB Dedicated cluster by utilizing a cryptographic key that is under your complete control. This key is referred to as the CMEK key. +Customer-Managed Encryption Key (CMEK) allows you to secure your static data in a TiDB Dedicated cluster by utilizing a symmetric encryption key that is under your complete control. This key is referred to as the CMEK key. Once CMEK is enabled for a project, all clusters created within that project encrypt their static data using the CMEK key. Additionally, any backup data generated by these clusters is encrypted using the same key. If CMEK is not enabled, TiDB Cloud employs an escrow key to encrypt all data in your cluster when it is at rest. 
@@ -34,8 +34,8 @@ If you are in the `Organization Owner` role of your organization, you can create To create a CMEK-enabled project, take the following steps: 1. Click in the lower-left corner of the TiDB Cloud console. -2. Click **Organization Settings**. -3. On the **Organization Settings** page, click **Create New Project** to open the project creation dialog. +2. Click **Organization Settings**, and then click the **Projects** tab in the left navigation pane. The **Projects** tab is displayed. +3. Click **Create New Project** to open the project creation dialog. 4. Fill in a project name. 5. Choose to enable the CMEK capability of the project. 6. Click **Confirm** to complete the project creation. @@ -67,7 +67,7 @@ To complete the CMEK configuration of the project, take the following steps: 2. Click **Encryption Access** to enter the encryption management page of the project. 3. Click **Create Encryption Key** to enter the key creation page. 4. The key provider only supports AWS KMS. You can choose the region where the encryption key can be used. -5. Copy and save the JSON file as `ROLE-TRUST-POLICY.JSON`. This file describes the trust relationship. +5. Copy and save the JSON file as `ROLE-TRUST-POLICY.JSON`. This file describes the trust relationship. 6. Add this trust relationship to the key policy of AWS KMS. For more information, refer to [Key policies in AWS KMS](https://docs.aws.amazon.com/kms/latest/developerguide/key-policies.html). 7. In the TiDB Cloud console, scroll to the bottom of the key creation page, and then fill in the **KMS Key ARN** obtained from AWS KMS. 8. Click **Create** to create the key. 
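For orientation, the trust relationship you add to the AWS KMS key policy in steps 5 and 6 generally takes the standard key policy statement shape shown below. This is an illustrative sketch only — the principal ARN and action list are placeholders, and you should use the exact JSON generated by the TiDB Cloud console:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowTiDBCloudAccess",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::<tidb-cloud-account-id>:role/<tidb-cloud-role>"
      },
      "Action": [
        "kms:Decrypt",
        "kms:DescribeKey",
        "kms:GenerateDataKey*"
      ],
      "Resource": "*"
    }
  ]
}
```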
diff --git a/tidb-cloud/tidb-cloud-events.md b/tidb-cloud/tidb-cloud-events.md index a774c258e05ba..02256ca8eadc3 100644 --- a/tidb-cloud/tidb-cloud-events.md +++ b/tidb-cloud/tidb-cloud-events.md @@ -42,7 +42,7 @@ TiDB Cloud logs the following types of cluster events: | ScaleChangefeed | Scale the specification of a changefeed | | FailedChangefeed | Changefeed failures | | ImportData | Import data to a cluster | -| UpdateSpendingLimit | Update spending limit of a TiDB Serverless cluster | +| UpdateSpendingLimit | Update spending limit of a TiDB Serverless scalable cluster | | ResourceLimitation | Update resource limitation of a TiDB Serverless cluster | For each event, the following information is logged: diff --git a/tidb-cloud/tidb-cloud-faq.md b/tidb-cloud/tidb-cloud-faq.md index bae227574bc71..222a4a60e87e5 100644 --- a/tidb-cloud/tidb-cloud-faq.md +++ b/tidb-cloud/tidb-cloud-faq.md @@ -41,8 +41,8 @@ No. ### What versions of TiDB are supported on TiDB Cloud? -- Starting from October 31, 2023, the default TiDB version for new TiDB Dedicated clusters is v7.1.2. -- Starting from March 7, 2023, the default TiDB version for new TiDB Serverless clusters is v6.6.0. +- Starting from August 6, 2024, the default TiDB version for new TiDB Dedicated clusters is v7.5.3. +- Starting from February 21, 2024, the TiDB version for TiDB Serverless clusters is v7.1.3. For more information, see [TiDB Cloud Release Notes](/tidb-cloud/tidb-cloud-release-notes.md). @@ -197,3 +197,7 @@ For more information, see [Connect to Your TiDB Serverless Cluster](/tidb-cloud/ ### What support is available for customers? TiDB Cloud is supported by the same team behind TiDB, which has run mission-critical use cases for over 1500 global enterprises across industries including financial services, e-commerce, enterprise applications, and gaming. TiDB Cloud offers a free basic support plan for each user and you can upgrade to a paid plan for extended services. 
For more information, see [TiDB Cloud Support](/tidb-cloud/tidb-cloud-support.md). + +### How do I check if TiDB Cloud is down? + +You can check the current uptime status of TiDB Cloud on the [System Status](https://status.tidbcloud.com/) page. \ No newline at end of file diff --git a/tidb-cloud/tidb-cloud-glossary.md b/tidb-cloud/tidb-cloud-glossary.md index 703d70b33ee0c..e8331e53991d3 100644 --- a/tidb-cloud/tidb-cloud-glossary.md +++ b/tidb-cloud/tidb-cloud-glossary.md @@ -25,9 +25,7 @@ ACID refers to the four key properties of a transaction: atomicity, consistency, ### Chat2Query -TiDB Cloud is powered by AI. You can use Chat2Query (beta), an AI-powered SQL editor in the [TiDB Cloud console](https://tidbcloud.com/), to maximize your data value. - -In Chat2Query, you can either simply type `--` followed by your instructions to let AI generate SQL queries automatically or write SQL queries manually, and then run SQL queries against databases without a terminal. You can find the query results in tables intuitively and check the query logs easily. For more information, see [Chat2Query (beta)](/tidb-cloud/explore-data-with-chat2query.md). +Chat2Query is an AI-powered feature integrated into SQL Editor that assists users in generating, debugging, or rewriting SQL queries using natural language instructions. For more information, see [Explore your data with AI-assisted SQL Editor](/tidb-cloud/explore-data-with-chat2query.md). In addition, TiDB Cloud provides a Chat2Query API for TiDB Serverless clusters. After it is enabled, TiDB Cloud will automatically create a system Data App called **Chat2Query** and a Chat2Data endpoint in Data Service. You can call this endpoint to let AI generate and execute SQL statements by providing instructions. For more information, see [Get started with Chat2Query API](/tidb-cloud/use-chat2query-api.md). 
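To make the shape of such a call concrete, Data Service endpoints are invoked over HTTPS with digest authentication using an API key pair. The host, app ID, endpoint path, and body fields below are placeholders — copy the exact values from the code example generated for your own Chat2Query Data App in the console:

```shell
# All values are placeholders; take the real host, app ID, endpoint path,
# and keys from your Chat2Query Data App in the TiDB Cloud console.
curl --digest --user ${PUBLIC_KEY}:${PRIVATE_KEY} \
  --request POST 'https://<region>.data.tidbcloud.com/api/v1beta/app/<chat2query-app-id>/endpoint/<chat2data-endpoint>' \
  --header 'content-type: application/json' \
  --data-raw '{
    "cluster_id": "<cluster-id>",
    "database": "<database-name>",
    "question": "How many rows are in the users table?"
  }'
```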
@@ -49,6 +47,10 @@ Data Service (beta) enables you to access TiDB Cloud data via an HTTPS request u For more information, see [Data Service Overview](/tidb-cloud/data-service-overview.md). +### Direct Customer + +A direct customer is an end customer who purchases TiDB Cloud directly from PingCAP and pays invoices to PingCAP. This is distinguished from an [MSP customer](#msp-customer). + ## E ### Endpoint @@ -67,6 +69,14 @@ A user that has been invited to an organization, with access to the organization Starting from v5.0, TiDB introduces Massively Parallel Processing (MPP) architecture through TiFlash nodes, which shares the execution workloads of large join queries among TiFlash nodes. When the MPP mode is enabled, TiDB, based on cost, determines whether to use the MPP framework to perform the calculation. In the MPP mode, the join keys are redistributed through the Exchange operation while being calculated, which distributes the calculation pressure to each TiFlash node and speeds up the calculation. For more information, see [Use TiFlash MPP Mode](/tiflash/use-tiflash-mpp-mode.md). +### MSP Customer + +A managed service provider (MSP) customer is an end customer who purchases TiDB Cloud and pays invoices through the MSP channel. This is distinguished from a [direct customer](#direct-customer). + +### Managed Service Provider (MSP) + +A managed service provider (MSP) is a partner who resells TiDB Cloud and provides value-added services, including but not limited to TiDB Cloud organization management, billing services, and technical support. + ## N ### node @@ -129,7 +139,7 @@ A Request Unit (RU) is a unit of measure used to represent the amount of resourc ### Spending limit -Spending limit refers to the maximum amount of money that you are willing to spend on a particular workload in a month. It is a cost-control mechanism that allows you to set a budget for your TiDB Serverless clusters.
When the spending limit of a cluster is greater than 0, the cluster is considered a paid cluster. Also, the paid cluster can have a free quota if it meets the qualifications. The paid cluster with a free quota will consume the free quota first. +Spending limit refers to the maximum amount of money that you are willing to spend on a particular workload in a month. It is a cost-control mechanism that enables you to set a budget for your TiDB Serverless clusters. For [scalable clusters](/tidb-cloud/select-cluster-tier.md#scalable-cluster-plan), the spending limit must be set to a minimum of $0.01. Also, the scalable cluster can have a free quota if it meets the qualifications. The scalable cluster with a free quota will consume the free quota first. ## T diff --git a/tidb-cloud/tidb-cloud-htap-quickstart.md b/tidb-cloud/tidb-cloud-htap-quickstart.md index 0427f7a97a0b6..a65ce78aa6fef 100644 --- a/tidb-cloud/tidb-cloud-htap-quickstart.md +++ b/tidb-cloud/tidb-cloud-htap-quickstart.md @@ -12,7 +12,7 @@ This tutorial guides you through an easy way to experience the Hybrid Transactio ## Before you begin -Before experiencing the HTAP feature, follow [TiDB Cloud Quick Start](/tidb-cloud/tidb-cloud-quickstart.md) to create a cluster with TiFlash nodes, connect to the TiDB cluster, and import the Capital Bikeshare sample data to the cluster. +Before experiencing the HTAP feature, follow [TiDB Cloud Quick Start](/tidb-cloud/tidb-cloud-quickstart.md) to create a TiDB Serverless cluster and import the **Steam Game Stats** sample dataset to the cluster. ## Steps @@ -20,28 +20,28 @@ Before experiencing the HTAP feature, follow [TiDB Cloud Quick Start](/tidb-clou After a cluster with TiFlash nodes is created, TiKV does not replicate data to TiFlash by default. You need to execute DDL statements in a MySQL client of TiDB to specify the tables to be replicated. After that, TiDB will create the specified table replicas in TiFlash accordingly. 
-For example, to replicate the `trips` table (in the Capital Bikeshare sample data) to TiFlash, execute the following statements: +For example, to replicate the `games` table (in the **Steam Game Stats** sample dataset) to TiFlash, execute the following statements: ```sql -USE bikeshare; +USE game; ``` ```sql -ALTER TABLE trips SET TIFLASH REPLICA 1; +ALTER TABLE games SET TIFLASH REPLICA 2; ``` To check the replication progress, execute the following statement: ```sql -SELECT * FROM information_schema.tiflash_replica WHERE TABLE_SCHEMA = 'bikeshare' and TABLE_NAME = 'trips'; +SELECT TABLE_SCHEMA, TABLE_NAME, TABLE_ID, REPLICA_COUNT, LOCATION_LABELS, AVAILABLE, PROGRESS FROM information_schema.tiflash_replica WHERE TABLE_SCHEMA = 'game' and TABLE_NAME = 'games'; ``` ```sql -+--------------+------------+----------+---------------+-----------------+-----------+----------+------------+ -| TABLE_SCHEMA | TABLE_NAME | TABLE_ID | REPLICA_COUNT | LOCATION_LABELS | AVAILABLE | PROGRESS | TABLE_MODE | -+--------------+------------+----------+---------------+-----------------+-----------+----------+------------+ -| bikeshare | trips | 88 | 1 | | 1 | 1 | NORMAL | -+--------------+------------+----------+---------------+-----------------+-----------+----------+------------+ ++--------------+------------+----------+---------------+-----------------+-----------+----------+ +| TABLE_SCHEMA | TABLE_NAME | TABLE_ID | REPLICA_COUNT | LOCATION_LABELS | AVAILABLE | PROGRESS | ++--------------+------------+----------+---------------+-----------------+-----------+----------+ +| game | games | 88 | 2 | | 1 | 1 | ++--------------+------------+----------+---------------+-----------------+-----------+----------+ 1 row in set (0.20 sec) ``` @@ -54,12 +54,20 @@ In the result of the preceding statement: When the process of replication is completed, you can start to run some queries. 
-For example, you can check the number of trips by different start and end stations: +For example, you can check the number of games released every year, as well as the average price and average playtime: ```sql -SELECT start_station_name, end_station_name, COUNT(ride_id) as count from `trips` -GROUP BY start_station_name, end_station_name -ORDER BY count ASC; +SELECT + YEAR(`release_date`) AS `release_year`, + COUNT(*) AS `games_released`, + AVG(`price`) AS `average_price`, + AVG(`average_playtime_forever`) AS `average_playtime` +FROM + `games` +GROUP BY + `release_year` +ORDER BY + `release_year` DESC; ``` ### Step 3. Compare the query performance between row-based storage and columnar storage @@ -69,12 +77,20 @@ In this step, you can compare the execution statistics between TiKV (row-based s - To get the execution statistics of this query using TiKV, execute the following statement: ```sql - EXPLAIN ANALYZE SELECT /*+ READ_FROM_STORAGE(TIKV[trips]) */ start_station_name, end_station_name, COUNT(ride_id) as count from `trips` - GROUP BY start_station_name, end_station_name - ORDER BY count ASC; + EXPLAIN ANALYZE SELECT /*+ READ_FROM_STORAGE(TIKV[games]) */ + YEAR(`release_date`) AS `release_year`, + COUNT(*) AS `games_released`, + AVG(`price`) AS `average_price`, + AVG(`average_playtime_forever`) AS `average_playtime` + FROM + `games` + GROUP BY + `release_year` + ORDER BY + `release_year` DESC; ``` - For tables with TiFlash replicas, the TiDB optimizer automatically determines whether to use either TiKV or TiFlash replicas based on the cost estimation. In the preceding `EXPLAIN ANALYZE` statement, `HINT /*+ READ_FROM_STORAGE(TIKV[trips]) */` is used to force the optimizer to choose TiKV so you can check the execution statistics of TiKV. + For tables with TiFlash replicas, the TiDB optimizer automatically determines whether to use either TiKV or TiFlash replicas based on the cost estimation. 
In the preceding `EXPLAIN ANALYZE` statement, the `/*+ READ_FROM_STORAGE(TIKV[games]) */` hint is used to force the optimizer to choose TiKV so you can check the execution statistics of TiKV. > **Note:** > @@ -83,41 +99,50 @@ In this step, you can compare the execution statistics between TiKV (row-based s In the output, you can get the execution time from the `execution info` column. ```sql - id | estRows | actRows | task | access object | execution info | operator info | memory | disk - ---------------------------+-----------+---------+-----------+---------------+-------------------------------------------+-----------------------------------------------+---------+--------- - Sort_5 | 633.00 | 73633 | root | | time:1.62s, loops:73 | Column#15 | 6.88 MB | 0 Bytes - └─Projection_7 | 633.00 | 73633 | root | | time:1.57s, loops:76, Concurrency:OFF... | bikeshare.trips.start_station_name... | 6.20 MB | N/A | 6.20 MB | N/A - └─HashAgg_15 | 633.00 | 73633 | root | | time:1.57s, loops:76, partial_worker:... | group by:bikeshare.trips.end_station_name... | 58.0 MB | N/A - └─TableReader_16 | 633.00 | 111679 | root | | time:1.34s, loops:3, cop_task: {num: ... | data:HashAgg_8 | 7.55 MB | N/A - └─HashAgg_8 | 633.00 | 111679 | cop[tikv] | | tikv_task:{proc max:830ms, min:470ms,... | group by:bikeshare.trips.end_station_name... | N/A | N/A - └─TableFullScan_14 | 816090.00 | 816090 | cop[tikv] | table:trips | tikv_task:{proc max:490ms, min:310ms,... 
| keep order:false | N/A | N/A + id | estRows | actRows | task | access object | execution info | operator info | memory | disk + ---------------------------+----------+---------+-----------+---------------+--------------------------------------------+-----------------------------------------------+---------+--------- + Sort_5 | 4019.00 | 28 | root | | time:672.7ms, loops:2, RU:1159.679690 | Column#36:desc | 18.0 KB | 0 Bytes + └─Projection_7 | 4019.00 | 28 | root | | time:672.7ms, loops:6, Concurrency:5 | year(game.games.release_date)->Column#36, ... | 35.5 KB | N/A + └─HashAgg_15 | 4019.00 | 28 | root | | time:672.6ms, loops:6, partial_worker:... | group by:Column#38, funcs:count(Column#39)... | 56.7 KB | N/A + └─TableReader_16 | 4019.00 | 28 | root | | time:672.4ms, loops:2, cop_task: {num:... | data:HashAgg_9 | 3.60 KB | N/A + └─HashAgg_9 | 4019.00 | 28 | cop[tikv] | | tikv_task:{proc max:300ms, min:0s, avg... | group by:year(game.games.release_date), ... | N/A | N/A + └─TableFullScan_14 | 68223.00 | 68223 | cop[tikv] | table:games | tikv_task:{proc max:290ms, min:0s, avg... | keep order:false | N/A | N/A (6 rows) ``` - To get the execution statistics of this query using TiFlash, execute the following statement: ```sql - EXPLAIN ANALYZE SELECT start_station_name, end_station_name, COUNT(ride_id) as count from `trips` - GROUP BY start_station_name, end_station_name - ORDER BY count ASC; + EXPLAIN ANALYZE SELECT + YEAR(`release_date`) AS `release_year`, + COUNT(*) AS `games_released`, + AVG(`price`) AS `average_price`, + AVG(`average_playtime_forever`) AS `average_playtime` + FROM + `games` + GROUP BY + `release_year` + ORDER BY + `release_year` DESC; ``` In the output, you can get the execution time from the `execution info` column. 
```sql - id | estRows | actRows | task | access object | execution info | operator info | memory | disk - -----------------------------------+-----------+---------+--------------+---------------+-------------------------------------------+------------------------------------+---------+--------- - Sort_5 | 633.00 | 73633 | root | | time:420.2ms, loops:73 | Column#15 | 5.61 MB | 0 Bytes - └─Projection_7 | 633.00 | 73633 | root | | time:368.7ms, loops:73, Concurrency:OFF | bikeshare.trips.start_station_... | 4.94 MB | N/A - └─TableReader_34 | 633.00 | 73633 | root | | time:368.6ms, loops:73, cop_task: {num... | data:ExchangeSender_33 | N/A | N/A - └─ExchangeSender_33 | 633.00 | 73633 | mpp[tiflash] | | tiflash_task:{time:360.7ms, loops:1,... | ExchangeType: PassThrough | N/A | N/A - └─Projection_29 | 633.00 | 73633 | mpp[tiflash] | | tiflash_task:{time:330.7ms, loops:1,... | Column#15, bikeshare.trips.star... | N/A | N/A - └─HashAgg_30 | 633.00 | 73633 | mpp[tiflash] | | tiflash_task:{time:330.7ms, loops:1,... | group by:bikeshare.trips.end_st... | N/A | N/A - └─ExchangeReceiver_32 | 633.00 | 73633 | mpp[tiflash] | | tiflash_task:{time:280.7ms, loops:12,... | | N/A | N/A - └─ExchangeSender_31 | 633.00 | 73633 | mpp[tiflash] | | tiflash_task:{time:272.3ms, loops:256,... | ExchangeType: HashPartition, Ha... | N/A | N/A - └─HashAgg_12 | 633.00 | 73633 | mpp[tiflash] | | tiflash_task:{time:252.3ms, loops:256,... | group by:bikeshare.trips.end_st... | N/A | N/A - └─TableFullScan_28 | 816090.00 | 816090 | mpp[tiflash] | table:trips | tiflash_task:{time:92.3ms, loops:16,... 
| keep order:false | N/A | N/A - (10 rows) + id | estRows | actRows | task | access object | execution info | operator info | memory | disk + -------------------------------------+----------+---------+--------------+---------------+-------------------------------------------------------+--------------------------------------------+---------+--------- + Sort_5 | 4019.00 | 28 | root | | time:222.2ms, loops:2, RU:25.609675 | Column#36:desc | 3.77 KB | 0 Bytes + └─TableReader_39 | 4019.00 | 28 | root | | time:222.1ms, loops:2, cop_task: {num: 2, max: 0s,... | MppVersion: 1, data:ExchangeSender_38 | 4.64 KB | N/A + └─ExchangeSender_38 | 4019.00 | 28 | mpp[tiflash] | | tiflash_task:{time:214.8ms, loops:1, threads:1} | ExchangeType: PassThrough | N/A | N/A + └─Projection_8 | 4019.00 | 28 | mpp[tiflash] | | tiflash_task:{time:214.8ms, loops:1, threads:1} | year(game.games.release_date)->Column#3... | N/A | N/A + └─Projection_34 | 4019.00 | 28 | mpp[tiflash] | | tiflash_task:{time:214.8ms, loops:1, threads:1} | Column#33, div(Column#34, cast(case(eq(... | N/A | N/A + └─HashAgg_35 | 4019.00 | 28 | mpp[tiflash] | | tiflash_task:{time:214.8ms, loops:1, threads:1} | group by:Column#63, funcs:sum(Column#64... | N/A | N/A + └─ExchangeReceiver_37 | 4019.00 | 28 | mpp[tiflash] | | tiflash_task:{time:214.8ms, loops:1, threads:8} | | N/A | N/A + └─ExchangeSender_36 | 4019.00 | 28 | mpp[tiflash] | | tiflash_task:{time:210.6ms, loops:1, threads:1} | ExchangeType: HashPartition, Compressio... | N/A | N/A + └─HashAgg_33 | 4019.00 | 28 | mpp[tiflash] | | tiflash_task:{time:210.6ms, loops:1, threads:1} | group by:Column#75, funcs:count(1)->Col... | N/A | N/A + └─Projection_40 | 68223.00 | 68223 | mpp[tiflash] | | tiflash_task:{time:210.6ms, loops:2, threads:8} | game.games.price, game.games.price, gam... | N/A | N/A + └─TableFullScan_23 | 68223.00 | 68223 | mpp[tiflash] | table:games | tiflash_task:{time:210.6ms, loops:2, threads:8}, ... 
| keep order:false | N/A | N/A + (11 rows) ``` > **Note:** diff --git a/tidb-cloud/tidb-cloud-import-local-files.md b/tidb-cloud/tidb-cloud-import-local-files.md index 89c0e514a1a7c..51bfed6038dc8 100644 --- a/tidb-cloud/tidb-cloud-import-local-files.md +++ b/tidb-cloud/tidb-cloud-import-local-files.md @@ -76,7 +76,7 @@ Currently, this method supports importing one CSV file for one task into either You can view the import progress on the **Import Task Detail** page. If there are warnings or failed tasks, you can check to view the details and solve them. -9. After the import task is completed, you can click **Explore your data by Chat2Query** to query your imported data. For more information about how to use Chat2Query, see [Explore Your Data with AI-Powered Chat2Query](/tidb-cloud/explore-data-with-chat2query.md). +9. After the import task is completed, you can click **Explore your data in SQL Editor** to query your imported data. For more information about how to use SQL Editor, see [Explore your data with AI-assisted SQL Editor](/tidb-cloud/explore-data-with-chat2query.md). 10. On the **Import** page, you can click **View** in the **Action** column to check the import task detail. @@ -98,7 +98,7 @@ CREATE TABLE `import_test` ( LOAD DATA LOCAL INFILE 'load.txt' INTO TABLE import_test FIELDS TERMINATED BY ',' (name, address); ``` -If you use the `mysql` command-line client and encounter `ERROR 2068 (HY000): LOAD DATA LOCAL INFILE file request rejected due to restrictions on access.`, you can add `--local-infile=true` in the connection string. +If you use `mysql` and encounter `ERROR 2068 (HY000): LOAD DATA LOCAL INFILE file request rejected due to restrictions on access.`, you can add `--local-infile=true` in the connection string. ### Why can't I query a column with a reserved keyword after importing data into TiDB Cloud? 
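The usual cause is that the column name collides with a reserved keyword (for example, `order`), so an unquoted reference fails to parse; enclosing the identifier in backticks makes it queryable. Here is a minimal sketch of the behavior — it uses an in-memory SQLite database as a stand-in engine purely for illustration (SQLite accepts the same backtick quoting as TiDB and MySQL), and the `order` column name is a hypothetical example:

```python
import sqlite3

# `order` is a reserved keyword, so an unquoted `SELECT order FROM ...` is a
# syntax error; backtick-quoting the identifier (as in TiDB/MySQL) resolves it.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE `import_test` (`id` INT, `order` INT)")
conn.execute("INSERT INTO `import_test` VALUES (1, 42)")
row = conn.execute("SELECT `order` FROM `import_test` WHERE `id` = 1").fetchone()
print(row)  # (42,)
```

The same quoting works in any TiDB client: wrap the offending column name in backticks wherever it appears in your query.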
diff --git a/tidb-cloud/tidb-cloud-intro.md b/tidb-cloud/tidb-cloud-intro.md index 03a38a2e03bd9..8a97852a839d2 100644 --- a/tidb-cloud/tidb-cloud-intro.md +++ b/tidb-cloud/tidb-cloud-intro.md @@ -80,7 +80,7 @@ For feature comparison between TiDB Serverless and TiDB Dedicated, see [TiDB: An - TiDB VPC (Virtual Private Cloud) - For each TiDB Cloud cluster, all TiDB nodes and auxiliary nodes, including TiDB Operator nodes and logging nodes, are deployed in an independent VPC. + For each TiDB Cloud cluster, all TiDB nodes and auxiliary nodes, including TiDB Operator nodes and logging nodes, are deployed in the same VPC. - TiDB Cloud Central Services diff --git a/tidb-cloud/tidb-cloud-org-sso-authentication.md b/tidb-cloud/tidb-cloud-org-sso-authentication.md index 760dc50020f41..35526f55b5b02 100644 --- a/tidb-cloud/tidb-cloud-org-sso-authentication.md +++ b/tidb-cloud/tidb-cloud-org-sso-authentication.md @@ -77,7 +77,7 @@ To enable Cloud Organization SSO, take the following steps: 1. Log in to [TiDB Cloud console](https://tidbcloud.com) as a user with the `Organization Owner` role. 2. In the lower-left corner of the TiDB Cloud console, click , and then click **Organization Settings**. -3. On the **Organization Settings** page, click the **Authentication** tab, and then click **Enable**. +3. In the left navigation pane, click the **Authentication** tab, and then click **Enable**. 4. In the dialog, fill in the custom URL for your organization, which must be unique in TiDB Cloud. > **Note:** @@ -168,7 +168,7 @@ In TiDB Cloud, the SAML authentication method is disabled by default. After enab - Sign on URL - Signing Certificate -2. On the **Organization Settings** page, click the **Authentication** tab, locate the row of SAML in the **Authentication Methods** area, and then click to show the SAML method details. +2. 
On the **Organization Settings** page, click the **Authentication** tab in the left navigation pane, locate the row of SAML in the **Authentication Methods** area, and then click to show the SAML method details. 3. In the method details, you can configure the following: - **Name** @@ -232,8 +232,9 @@ In TiDB Cloud, the SAML authentication method is disabled by default. After enab 3. In TiDB Cloud, view groups pushed from your identity provider. 1. In the lower-left corner of the [TiDB Cloud console](https://tidbcloud.com), click , and then click **Organization Settings**. - 2. On the **Organization Settings** page, click the **Groups** tab. The groups synchronized from your identity provider are displayed. - 3. To view users in a group, click **View**. + 2. In the left navigation pane, click the **Authentication** tab. + 3. Click the **Groups** tab. The groups synchronized from your identity provider are displayed. + 4. To view users in a group, click **View**. 4. In TiDB Cloud, grant roles to the groups pushed from your identity provider. diff --git a/tidb-cloud/tidb-cloud-performance-reference.md b/tidb-cloud/tidb-cloud-performance-reference.md index d94905929c80c..d16663ebde40c 100644 --- a/tidb-cloud/tidb-cloud-performance-reference.md +++ b/tidb-cloud/tidb-cloud-performance-reference.md @@ -9,9 +9,25 @@ This document provides [Sysbench](https://github.com/akopytov/sysbench) performa > **Note:** > -> The tests are performed on TiDB v6.1.1, and the test results are based on the condition that the P95 latency is below 105 ms. - -In this document, the transaction models `Read Only`, `Read Write`, and `Write Only` represent read workloads, mixed workloads, and write workloads. +> The tests are performed on TiDB v6.1.1, and the test results are based on the condition that the P95 transaction latency is below 105 ms. 
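The reported figures are internally consistent and can be sanity-checked with Little's law (TPS ≈ threads / average transaction latency) together with Sysbench's per-transaction statement counts. The following sketch uses the "TiDB (32 vCPU, 64 GiB) \* 1" figures from the result tables later in this document; the statement counts (16, 20, and 6, with `BEGIN`/`COMMIT` included in QPS) assume the default `oltp_read_only`, `oltp_read_write`, and `oltp_write_only` Lua scripts:

```python
# Figures copied from the "TiDB (32 vCPU, 64 GiB) * 1; TiKV (32 vCPU, 128 GiB) * 3"
# table. Assumes the default Sysbench oltp_* scripts, which issue 16, 20, and 6
# statements per transaction respectively (BEGIN/COMMIT counted in QPS).
results = {
    # model: (threads, QPS, TPS, avg latency ms, P95 latency ms, stmts per txn)
    "Read Only":  (300, 83941.16, 5246, 57.20, 87.6, 16),
    "Read Write": (250, 71290.31, 3565, 70.10, 105.0, 20),
    "Write Only": (700, 72199.56, 12033, 58.20, 101.0, 6),
}

for model, (threads, qps, tps, avg_ms, p95_ms, stmts) in results.items():
    assert p95_ms <= 105.0                       # the test-pass condition above
    assert abs(qps / tps - stmts) < 0.1          # QPS = TPS x statements per txn
    expected_tps = threads / (avg_ms / 1000)     # Little's law estimate
    assert abs(expected_tps - tps) / tps < 0.05  # within 5% of the reported TPS
    print(f"{model}: QPS/TPS = {qps / tps:.2f}, Little's-law TPS ~ {expected_tps:.0f}")
```

The same cross-check applies to every table in this document, which is a quick way to catch transcription errors when you compare your own benchmark runs against these baselines.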
+ +Here is an example of a Sysbench configuration file: + +```txt +mysql-host={TIDB_HOST} +mysql-port=4000 +mysql-user=root +mysql-password=password +mysql-db=sbtest +time=1200 +threads={100} +report-interval=10 +db-driver=mysql +mysql-ignore-errors=1062,2013,8028,9002,9007 +auto-inc=false +``` + +In this document, the transaction models `Read Only`, `Read Write`, and `Write Only` represent read workloads, mixed workloads, and write workloads. ## 4 vCPU performance @@ -24,7 +40,7 @@ Test results: **TiDB (4 vCPU, 16 GiB) \* 1; TiKV (4 vCPU, 16 GiB) \* 3** -| Transaction model | Threads | QPS | TPS | Average latency (ms) | P95 latency (ms) | +| Transaction model | Threads | QPS | TPS | Average transaction latency (ms) | P95 transaction latency (ms) | |-------------------|---------|----------|----------|----------------------|------------------| | Read Only | 35 | 8,064.89 | 504.06 | 69.43 | 104.84 | | Read Write | 25 | 6,747.60 | 337.38 | 74.10 | 102.97 | @@ -32,7 +48,7 @@ Test results: **TiDB (4 vCPU, 16 GiB) \* 2; TiKV (4 vCPU, 16 GiB) \* 3** -| Transaction model | Threads | QPS | TPS | Average latency (ms) | P95 latency (ms) | +| Transaction model | Threads | QPS | TPS | Average transaction latency (ms) | P95 transaction latency (ms) | |-------------------|---------|-----------|----------|----------------------|------------------| | Read Only | 65 | 16,805.76 | 1,050.36 | 61.88 | 95.81 | | Read Write | 45 | 12,940.36 | 647.02 | 69.55 | 99.33 | @@ -53,7 +69,7 @@ Test results: **TiDB (8 vCPU, 16 GiB) \* 2; TiKV (8 vCPU, 32 GiB) \* 3** -| Transaction model | Threads | QPS | TPS | Average latency (ms) | P95 latency (ms) | +| Transaction model | Threads | QPS | TPS | Average transaction latency (ms) | P95 transaction latency (ms) | |-------------------|---------|-----------|----------|----------------------|------------------| | Read Only | 150 | 37,863.64 | 2,366.48 | 63.38 | 99.33 | | Read Write | 100 | 30,218.42 | 1,510.92 | 66.18 | 94.10 | @@ -61,7 +77,7 @@ Test 
results: **TiDB (8 vCPU, 16 GiB) \* 4; TiKV (8 vCPU, 32 GiB) \* 3** -| Transaction model | Threads | QPS | TPS | Average latency (ms) | P95 latency (ms) | +| Transaction model | Threads | QPS | TPS | Average transaction latency (ms) | P95 transaction latency (ms) | |-------------------|---------|-----------|----------|----------------------|------------------| | Read Only | 300 | 74,190.40 | 4,636.90 | 64.69 | 104.84 | | Read Write | 200 | 53,351.84 | 2,667.59 | 74.97 | 97.55 | @@ -69,7 +85,7 @@ Test results: **TiDB (8 vCPU, 16 GiB) \* 4; TiKV (8 vCPU, 32 GiB) \* 6** -| Transaction model | Threads | QPS | TPS | Average latency (ms) | P95 latency (ms) | +| Transaction model | Threads | QPS | TPS | Average transaction latency (ms) | P95 transaction latency (ms) | |-------------------|---------|-----------|-----------|----------------------|------------------| | Read Only | 300 | 75,713.04 | 4,732.06 | 63.39 | 102.97 | | Read Write | 200 | 62,640.62 | 3,132.03 | 63.85 | 95.81 | @@ -77,7 +93,7 @@ Test results: **TiDB (8 vCPU, 16 GiB) \* 6; TiKV (8 vCPU, 32 GiB) \* 9** -| Transaction model | Threads | QPS | TPS | Average latency (ms) | P95 latency (ms) | +| Transaction model | Threads | QPS | TPS | Average transaction latency (ms) | P95 transaction latency (ms) | |-------------------|---------|------------|-----------|----------------------|------------------| | Read Only | 450 | 113,407.94 | 7,088.00 | 63.48 | 104.84 | | Read Write | 300 | 92,387.31 | 4,619.37 | 64.93 | 99.33 | @@ -85,7 +101,7 @@ Test results: **TiDB (8 vCPU, 16 GiB) \* 9; TiKV (8 vCPU, 32 GiB) \* 6** -| Transaction model | Threads | QPS | TPS | Average latency (ms) | P95 latency (ms) | +| Transaction model | Threads | QPS | TPS | Average transaction latency (ms) | P95 transaction latency (ms) | |-------------------|---------|------------|-----------|----------------------|------------------| | Read Only | 650 | 168,486.65 | 10,530.42 | 61.72 | 101.13 | | Read Write | 400 | 106,853.63 | 5,342.68 | 
74.86 | 101.13 | @@ -93,7 +109,7 @@ Test results: **TiDB (8 vCPU, 16 GiB) \* 12; TiKV (8 vCPU, 32 GiB) \* 9** -| Transaction model | Threads | QPS | TPS | Average latency (ms) | P95 latency (ms) | +| Transaction model | Threads | QPS | TPS | Average transaction latency (ms) | P95 transaction latency (ms) | |-------------------|---------|------------|-----------|----------------------|------------------| | Read Only | 800 | 211,882.77 | 13,242.67 | 60.40 | 101.13 | | Read Write | 550 | 139,393.46 | 6,969.67 | 78.90 | 104.84 | @@ -110,7 +126,7 @@ Test results: **TiDB (16 vCPU, 32 GiB) \* 1; TiKV (16 vCPU, 64 GiB) \* 3** -| Transaction model | Threads | QPS | TPS | Average latency (ms) | P95 latency (ms) | +| Transaction model | Threads | QPS | TPS | Average transaction latency (ms) | P95 transaction latency (ms) | |-------------------|---------|----------|---------|----------------------|------------------| | Read Only | 125 | 37448.41 | 2340.53 | 53.40 | 89.16 | | Read Write | 100 | 28903.99 | 1445.20 | 69.19 | 104.84 | @@ -118,8 +134,33 @@ Test results: **TiDB (16 vCPU, 32 GiB) \* 2; TiKV (16 vCPU, 64 GiB) \* 3** -| Transaction model | Threads | QPS | TPS | Average latency (ms) | P95 latency (ms) | +| Transaction model | Threads | QPS | TPS | Average transaction latency (ms) | P95 transaction latency (ms) | |-------------------|---------|----------|----------|----------------------|------------------| | Read Only | 300 | 77238.30 | 4827.39 | 62.14 | 102.97 | | Read Write | 200 | 58241.15 | 2912.06 | 68.67 | 97.55 | | Write Only | 700 | 68829.89 | 11471.65 | 61.01 | 101.13 | + +## 32 vCPU performance + +Test scales: + +- TiDB (32 vCPU, 64 GiB) \* 1; TiKV (32 vCPU, 128 GiB) \* 3 +- TiDB (32 vCPU, 64 GiB) \* 2; TiKV (32 vCPU, 128 GiB) \* 3 + +Test results: + +**TiDB (32 vCPU, 64 GiB) \* 1; TiKV (32 vCPU, 128 GiB) \* 3** + +| Transaction model | Threads | QPS | TPS | Average transaction latency (ms) | P95 transaction latency (ms) | 
+|-------------------|---------|----------|---------|----------------------|------------------| +| Read Only | 300 | 83941.16 | 5246 | 57.20 | 87.6 | +| Read Write | 250 | 71290.31 | 3565 | 70.10 | 105.0 | +| Write Only | 700 | 72199.56 | 12033 | 58.20 | 101.0 | + +**TiDB (32 vCPU, 64 GiB) \* 2; TiKV (32 vCPU, 128 GiB) \* 3** + +| Transaction model | Threads | QPS | TPS | Average transaction latency (ms) | P95 transaction latency (ms) | +|-------------------|---------|-----------|----------|----------------------|------------------| +| Read Only | 650 | 163101.68 | 10194 | 63.8 | 99.3 | +| Read Write | 450 | 123152.74 | 6158 | 73.1 | 101 | +| Write Only | 1200 | 112333.16 | 18722 | 64.1 | 101 | diff --git a/tidb-cloud/tidb-cloud-quickstart.md b/tidb-cloud/tidb-cloud-quickstart.md index 7ffedf227e8eb..c08475afd50b7 100644 --- a/tidb-cloud/tidb-cloud-quickstart.md +++ b/tidb-cloud/tidb-cloud-quickstart.md @@ -26,23 +26,25 @@ Additionally, you can try out TiDB features on [TiDB Playground](https://play.ti 3. For new sign-up users, TiDB Cloud automatically creates a default TiDB Serverless cluster named `Cluster0` for you. - - To instantly try out TiDB Cloud features with this default cluster, proceed to [Step 2: Try AI-powered Chat2Query (beta)](#step-2-try-ai-powered-chat2query-beta). + - To instantly try out TiDB Cloud features with this default cluster, proceed to [Step 2: Try AI-assisted SQL Editor](#step-2-try-ai-assisted-sql-editor). - To create a new TiDB Serverless cluster on your own, follow these steps: 1. Click **Create Cluster**. - 2. On the **Create Cluster** page, **Serverless** is selected by default. Select the target region for your cluster, update the default cluster name if necessary, and then click **Create**. Your TiDB Serverless cluster will be created in approximately 30 seconds. + 2. On the **Create Cluster** page, **Serverless** is selected by default. 
Select the target region for your cluster, update the default cluster name if necessary, select your [cluster plan](/tidb-cloud/select-cluster-tier.md#cluster-plans), and then click **Create**. Your TiDB Serverless cluster will be created in approximately 30 seconds. -## Step 2: Try AI-powered Chat2Query (beta) +## Step 2: Try AI-assisted SQL Editor -TiDB Cloud is powered by AI. You can use Chat2Query (beta), an AI-powered SQL editor in the TiDB Cloud console, to maximize the value of your data. +You can use the built-in AI-assisted SQL Editor in the TiDB Cloud console to maximize your data value. This enables you to run SQL queries against databases without a local SQL client. You can intuitively view the query results in tables or charts and easily check the query logs. -In Chat2Query, you can either simply type `--` followed by your instructions to let AI automatically generate SQL queries, or write SQL queries manually and run them against databases without using a terminal. - -1. On the [**Clusters**](https://tidbcloud.com/console/clusters) page, click on a cluster name to go to its overview page, and then click **Chat2Query** in the left navigation pane. +1. On the [**Clusters**](https://tidbcloud.com/console/clusters) page, click on a cluster name to go to its overview page, and then click **SQL Editor** in the left navigation pane. 2. To try the AI capacity of TiDB Cloud, follow the on-screen instructions to allow PingCAP and OpenAI to use your code snippets for research and service improvement, and then click **Save and Get Started**. -3. In the editor, you can either simply type `--` followed by your instructions to let AI automatically generate SQL queries, or write SQL queries manually. +3. In SQL Editor, press ⌘ + I on macOS (or Control + I on Windows or Linux) to instruct [Chat2Query (beta)](/tidb-cloud/tidb-cloud-glossary.md#chat2query) to generate SQL queries automatically. 
+ + For example, to create a new table `test.t` with two columns (column `id` and column `name`), you can type `use test;` to specify the database, press ⌘ + I, type `create a new table t with id and name` as the instruction, and then press **Enter** to let AI generate a SQL statement accordingly. + + For the generated statement, you can accept it by clicking **Accept** and then further edit it if needed, or reject it by clicking **Discard**. > **Note:** > @@ -76,7 +78,32 @@ In Chat2Query, you can either simply type `--` followed by your instructions to -After running the queries, you can immediately see the query logs and results at the bottom of the page. +After running the queries, you can immediately see the query logs and results at the bottom of the page. + +To let AI generate more SQL statements, you can type more instructions as shown in the following example: + +```sql +use test; + +-- create a new table t with id and name +CREATE TABLE + `t` (`id` INT, `name` VARCHAR(255)); + +-- add 3 rows +INSERT INTO + `t` (`id`, `name`) +VALUES + (1, 'row1'), + (2, 'row2'), + (3, 'row3'); + +-- query all +SELECT + `id`, + `name` +FROM + `t`; +``` ## Step 3: Try interactive tutorials @@ -90,6 +117,6 @@ TiDB Cloud offers interactive tutorials with carefully crafted sample datasets t ## What's next - To learn how to connect to your cluster using different methods, see [Connect to a TiDB Serverless cluster](/tidb-cloud/connect-to-tidb-cluster-serverless.md). -- For more information about how to use Chat2Query to explore your data, see [Chat2Query](/tidb-cloud/explore-data-with-chat2query.md). +- For more information about how to use SQL Editor and Chat2Query to explore your data, see [Explore your data with AI-assisted SQL Editor](/tidb-cloud/explore-data-with-chat2query.md). - For TiDB SQL usage, see [Explore SQL with TiDB](/basic-sql-operations.md). 
- For production use with the benefits of cross-zone high availability, horizontal scaling, and [HTAP](https://en.wikipedia.org/wiki/Hybrid_transactional/analytical_processing), see [Create a TiDB Dedicated cluster](/tidb-cloud/create-tidb-cluster.md). diff --git a/tidb-cloud/tidb-cloud-release-notes.md b/tidb-cloud/tidb-cloud-release-notes.md index 08e350c01121d..3724a880f9217 100644 --- a/tidb-cloud/tidb-cloud-release-notes.md +++ b/tidb-cloud/tidb-cloud-release-notes.md @@ -1,1004 +1,313 @@ --- -title: TiDB Cloud Release Notes in 2023 -summary: Learn about the release notes of TiDB Cloud in 2023. +title: TiDB Cloud Release Notes in 2024 +summary: Learn about the release notes of TiDB Cloud in 2024. aliases: ['/tidbcloud/supported-tidb-versions','/tidbcloud/release-notes'] --- -# TiDB Cloud Release Notes in 2023 +# TiDB Cloud Release Notes in 2024 -This page lists the release notes of [TiDB Cloud](https://www.pingcap.com/tidb-cloud/) in 2023. +This page lists the release notes of [TiDB Cloud](https://www.pingcap.com/tidb-cloud/) in 2024. -## December 5, 2023 - -**General changes** - -- [TiDB Dedicated](/tidb-cloud/select-cluster-tier.md#tidb-dedicated) enables you to resume a failed changefeed, which saves your effort to recreate a new one. - - For more information, see [Changefeed states](/tidb-cloud/changefeed-overview.md#changefeed-states). - -**Console changes** - -- Enhance the connection experience for [TiDB Serverless](/tidb-cloud/select-cluster-tier.md#tidb-serverless). - - Refine the **Connect** dialog interface to offer TiDB Serverless users a smoother and more efficient connection experience. In addition, TiDB Serverless introduces more client types and allows you to select the desired branch for connection. - - For more information, see [Connect to TiDB Serverless](/tidb-cloud/connect-via-standard-connection-serverless.md). 
- -## November 28, 2023 - -**General changes** - -- [TiDB Dedicated](/tidb-cloud/select-cluster-tier.md#tidb-dedicated) supports restoring SQL bindings from backups. - - TiDB Dedicated now restores user accounts and SQL bindings by default when restoring from a backup. This enhancement is available for clusters of v6.2.0 or later versions, streamlining the data restoration process. The restoration of SQL bindings ensures the smooth reintegration of query-related configurations and optimizations, providing you with a more comprehensive and efficient recovery experience. - - For more information, see [Back up and restore TiDB Dedicated data](/tidb-cloud/backup-and-restore.md). +## August 20, 2024 **Console changes** -- [TiDB Serverless](/tidb-cloud/select-cluster-tier.md#tidb-serverless) supports monitoring SQL statement RU costs. - - TiDB Serverless now provides detailed insights into each SQL statement's [Request Units (RUs)](/tidb-cloud/tidb-cloud-glossary.md#request-unit). You can view both the **Total RU** and **Mean RU** costs per SQL statement. This feature helps you identify and analyze RU costs, offering opportunities for potential cost savings in your operations. - - To check your SQL statement RU details, navigate to the **Diagnosis** page of [your TiDB Serverless cluster](https://tidbcloud.com/console/clusters) and then click the **SQL Statement** tab. - -## November 21, 2023 +- Refine the layout of the **Create Private Endpoint Connection** page to improve the user experience for creating new private endpoint connections in [TiDB Dedicated](/tidb-cloud/select-cluster-tier.md#tidb-dedicated) clusters. -**General changes** - -- [Data Migration](/tidb-cloud/migrate-from-mysql-using-data-migration.md) supports high-speed physical mode for TiDB clusters deployed on Google Cloud. - - Now you can use physical mode for TiDB clusters deployed on AWS and Google Cloud. 
The migration speed of physical mode can reach up to 110 MiB/s, which is 2.4 times faster than logical mode. The improved performance is suitable for quickly migrating large datasets to TiDB Cloud. + For more information, see [Connect to a TiDB Dedicated Cluster via Private Endpoint with AWS](/tidb-cloud/set-up-private-endpoint-connections.md) and [Connect to a TiDB Dedicated Cluster via Google Cloud Private Service Connect](/tidb-cloud/set-up-private-endpoint-connections-on-google-cloud.md). - For more information, see [Migrate existing data and incremental data](/tidb-cloud/migrate-from-mysql-using-data-migration.md#migrate-existing-data-and-incremental-data). - -## November 14, 2023 +## August 6, 2024 **General changes** -- When you restore data from TiDB Dedicated clusters, the default behavior is now modified from restoring without user accounts to restoring with all user accounts. - - For more information, see [Back Up and Restore TiDB Dedicated Data](/tidb-cloud/backup-and-restore.md). - -- Introduce event filters for changefeeds. +- [TiDB Dedicated](/tidb-cloud/select-cluster-tier.md#tidb-dedicated) billing changes for load balancing on AWS. - This enhancement empowers you to easily manage event filters for changefeeds directly through the [TiDB Cloud console](https://tidbcloud.com/), streamlining the process of excluding specific events from changefeeds and providing better control over data replication downstream. + Starting from August 1, 2024, TiDB Dedicated bills include new AWS charges for public IPv4 addresses, aligned with [AWS pricing changes effective from February 1, 2024](https://aws.amazon.com/blogs/aws/new-aws-public-ipv4-address-charge-public-ip-insights/). The charge for each public IPv4 address is $0.005 per hour, which will result in approximately $10 per month for each TiDB Dedicated cluster hosted on AWS. - For more information, see [Changefeed](/tidb-cloud/changefeed-overview.md#edit-a-changefeed). 
- -## November 7, 2023 - -**General changes** - -- Add the following resource usage alerts. The new alerts are disabled by default. You can enable them as needed. - - - Max memory utilization across TiDB nodes exceeded 70% for 10 minutes - - Max memory utilization across TiKV nodes exceeded 70% for 10 minutes - - Max CPU utilization across TiDB nodes exceeded 80% for 10 minutes - - Max CPU utilization across TiKV nodes exceeded 80% for 10 minutes - - For more information, see [TiDB Cloud Built-in Alerting](/tidb-cloud/monitor-built-in-alerting.md#resource-usage-alerts). - -## October 31, 2023 - -**General changes** - -- Support directly upgrading to the Enterprise support plan in the TiDB Cloud console without contacting sales. - - For more information, see [TiDB Cloud Support](/tidb-cloud/tidb-cloud-support.md). - -## October 25, 2023 - -**General changes** - -- [TiDB Dedicated](/tidb-cloud/select-cluster-tier.md#tidb-dedicated) supports dual region backup (beta) on Google Cloud. - - TiDB Dedicated clusters hosted on Google Cloud work seamlessly with Google Cloud Storage. Similar to the [Dual-regions](https://cloud.google.com/storage/docs/locations#location-dr) feature of Google Cloud Storage, the pair of regions that you use for the dual-region in TiDB Dedicated must be within the same multi-region. For example, Tokyo and Osaka are in the same multi-region `ASIA` so they can be used together for dual-region storage. - - For more information, see [Back Up and Restore TiDB Dedicated Data](/tidb-cloud/backup-and-restore.md#turn-on-dual-region-backup-beta). - -- The feature of [streaming data change logs to Apache Kafka](/tidb-cloud/changefeed-sink-to-apache-kafka.md) is now in General Availability (GA). - - After a successful 10-month beta trial, the feature of streaming data change logs from TiDB Cloud to Apache Kafka becomes generally available. Streaming data from TiDB to a message queue is a common need in data integration scenarios. 
You can use Kafka sink to integrate with other data processing systems (such as Snowflake) or support business consumption. - - For more information, see [Changefeed overview](/tidb-cloud/changefeed-overview.md). - -## October 11, 2023 - -**General changes** + This charge will appear under the existing **TiDB Dedicated - Data Transfer - Load Balancing** service in your [billing details](/tidb-cloud/tidb-cloud-billing.md#billing-details). -- Support [dual region backup (beta)](/tidb-cloud/backup-and-restore.md#turn-on-dual-region-backup-beta) for [TiDB Dedicated](/tidb-cloud/select-cluster-tier.md#tidb-dedicated) clusters deployed on AWS. - - You can now replicate backups across geographic regions within your cloud provider. This feature provides an additional layer of data protection and disaster recovery capabilities. - - For more information, see [Back up and restore TiDB Dedicated data](/tidb-cloud/backup-and-restore.md). - -- Data Migration now supports both physical mode and logical mode for migrating existing data. - - In physical mode, the migration speed can reach up to 110 MiB/s. Compared with 45 MiB/s in logical mode, the migration performance has improved significantly. - - For more information, see [Migrate existing data and incremental data](/tidb-cloud/migrate-from-mysql-using-data-migration.md#migrate-existing-data-and-incremental-data). - -## October 10, 2023 - -**General changes** - -- Support using TiDB Serverless branches in [Vercel Preview Deployments](https://vercel.com/docs/deployments/preview-deployments), with TiDB Cloud Vercel integration. - - For more information, see [Connect with TiDB Serverless branching](/tidb-cloud/integrate-tidbcloud-with-vercel.md#connect-with-tidb-serverless-branching). - -## September 28, 2023 - -**API changes** - -- Introduce a TiDB Cloud Billing API endpoint to retrieve the bill for the given month of a specific organization. 
- - This Billing API endpoint is released in TiDB Cloud API v1beta1, which is the latest API version of TiDB Cloud. For more information, refer to the [API documentation (v1beta1)](https://docs.pingcap.com/tidbcloud/api/v1beta1#tag/Billing). - -## September 19, 2023 - -**General changes** - -- Remove 2 vCPU TiDB and TiKV nodes from [TiDB Dedicated](/tidb-cloud/select-cluster-tier.md#tidb-dedicated) clusters. - - The 2 vCPU option is no longer available on the **Create Cluster** page or the **Modify Cluster** page. - -- Release [TiDB Cloud serverless driver (beta)](/tidb-cloud/serverless-driver.md) for JavaScript. - - TiDB Cloud serverless driver for JavaScript allows you to connect to your [TiDB Serverless](/tidb-cloud/select-cluster-tier.md#tidb-serverless) cluster over HTTPS. It is particularly useful in edge environments where TCP connections are limited, such as [Vercel Edge Function](https://vercel.com/docs/functions/edge-functions) and [Cloudflare Workers](https://workers.cloudflare.com/). - - For more information, see [TiDB Cloud serverless driver (beta)](/tidb-cloud/serverless-driver.md). +- Upgrade the default TiDB version of new [TiDB Dedicated](/tidb-cloud/select-cluster-tier.md#tidb-dedicated) clusters from [v7.5.2](https://docs.pingcap.com/tidb/v7.5/release-7.5.2) to [v7.5.3](https://docs.pingcap.com/tidb/v7.5/release-7.5.3). **Console changes** -- For [TiDB Serverless](/tidb-cloud/select-cluster-tier.md#tidb-serverless) clusters, you can get an estimation of cost in the **Usage This Month** panel or while setting up the spending limit. - -## September 5, 2023 - -**General changes** - -- [Data Service (beta)](https://tidbcloud.com/console/data-service) supports customizing the rate limit for each API key to meet specific rate-limiting requirements in different situations. 
-
-    You can adjust the rate limit of an API key when you [create](/tidb-cloud/data-service-api-key.md#create-an-api-key) or [edit](/tidb-cloud/data-service-api-key.md#edit-an-api-key) the key.
-
-    For more information, see [Rate limiting](/tidb-cloud/data-service-api-key.md#rate-limiting).
-
-- Support a new AWS region for [TiDB Dedicated](/tidb-cloud/select-cluster-tier.md#tidb-dedicated) clusters: São Paulo (sa-east-1).
-
-- Support adding up to 100 IP addresses to the IP access list for each [TiDB Dedicated](/tidb-cloud/select-cluster-tier.md#tidb-dedicated) cluster.
-
-    For more information, see [Configure an IP access list](/tidb-cloud/configure-ip-access-list.md).
-
-**Console changes**
-
-- Introduce the **Events** page for [TiDB Serverless](/tidb-cloud/select-cluster-tier.md#tidb-serverless) clusters, which provides the records of main changes to your cluster.
-
-    On this page, you can view the event history for the last 7 days and track important details such as the trigger time and the user who initiated an action.
-
-    For more information, see [TiDB Cloud cluster events](/tidb-cloud/tidb-cloud-events.md).
-
-**API changes**
-
-- Release several TiDB Cloud API endpoints for managing the [AWS PrivateLink](https://aws.amazon.com/privatelink/?privatelink-blogs.sort-by=item.additionalFields.createdDate&privatelink-blogs.sort-order=desc) or [Google Cloud Private Service Connect](https://cloud.google.com/vpc/docs/private-service-connect) for [TiDB Dedicated](/tidb-cloud/select-cluster-tier.md#tidb-dedicated) clusters:
-
-    - Create a private endpoint service for a cluster
-    - Retrieve the private endpoint service information of a cluster
-    - Create a private endpoint for a cluster
-    - List all private endpoints of a cluster
-    - List all private endpoints in a project
-    - Delete a private endpoint of a cluster
-
-    For more information, refer to the [API documentation](https://docs.pingcap.com/tidbcloud/api/v1beta#tag/Cluster).
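These management endpoints are called with the TiDB Cloud API's HTTP Digest authentication and an API key pair. A minimal Python sketch follows; the resource path is an assumption for illustration only (the release note lists the operations but not the paths), so verify it against the linked v1beta API documentation:

```python
# Sketch: calling a TiDB Cloud API (v1beta) endpoint with HTTP Digest
# authentication, for example to list a cluster's private endpoints.
# The resource path below is a hypothetical placeholder, not a documented path.
import urllib.request

API_HOST = "https://api.tidbcloud.com"

def private_endpoints_url(project_id: str, cluster_id: str) -> str:
    # Hypothetical path; check the v1beta API reference for the real one.
    return (f"{API_HOST}/api/v1beta/projects/{project_id}"
            f"/clusters/{cluster_id}/private_endpoints")

def digest_opener(public_key: str, private_key: str):
    # For the TiDB Cloud API, the key's public part is the username and the
    # private part is the password.
    mgr = urllib.request.HTTPPasswordMgrWithDefaultRealm()
    mgr.add_password(None, API_HOST, public_key, private_key)
    return urllib.request.build_opener(urllib.request.HTTPDigestAuthHandler(mgr))

url = private_endpoints_url("1234", "5678")
print(url)
# opener = digest_opener("<public_key>", "<private_key>")
# with opener.open(url) as resp:  # real network call, not executed here
#     print(resp.read().decode())
```

The same opener pattern works for the create and delete operations by switching the HTTP method on the `urllib.request.Request` object.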
-
-## August 23, 2023
-
-**General changes**
-
-- Support Google Cloud [Private Service Connect](https://cloud.google.com/vpc/docs/private-service-connect) for [TiDB Dedicated](/tidb-cloud/select-cluster-tier.md#tidb-dedicated) clusters.
+- Enhance the cluster size configuration experience for [TiDB Dedicated](/tidb-cloud/select-cluster-tier.md#tidb-dedicated).

-    You can now create a private endpoint and establish a secure connection to a TiDB Dedicated cluster hosted on Google Cloud.
+    Refine the layout of the **Cluster Size** section on the [**Create Cluster**](/tidb-cloud/create-tidb-cluster.md) and [**Modify Cluster**](/tidb-cloud/scale-tidb-cluster.md) pages for TiDB Dedicated clusters. In addition, the **Cluster Size** section now includes links to node size recommendation documents, which helps you select an appropriate cluster size.

-    Key benefits:
-
-    - Intuitive operations: helps you create a private endpoint with only several steps.
-    - Enhanced security: establishes a secure connection to protect your data.
-    - Improved performance: provides low-latency and high-bandwidth connectivity.
-
-    For more information, see [Connect via Private Endpoint with Google Cloud](/tidb-cloud/set-up-private-endpoint-connections-on-google-cloud.md).
-
-- Support using a changefeed to stream data from a [TiDB Dedicated](/tidb-cloud/select-cluster-tier.md#tidb-dedicated) cluster to [Google Cloud Storage (GCS)](https://cloud.google.com/storage).
-
-    You can now stream data from TiDB Cloud to GCS by using your own account's bucket and providing precisely tailored permissions. After replicating data to GCS, you can analyze the changes in your data as you wish.
-
-    For more information, see [Sink to Cloud Storage](/tidb-cloud/changefeed-sink-to-cloud-storage.md).
-
-## August 15, 2023
+## July 23, 2024

**General changes**

-- [Data Service (beta)](https://tidbcloud.com/console/data-service) supports pagination for `GET` requests to improve the development experience.
-
-    For `GET` requests, you can paginate results by enabling **Pagination** in **Advance Properties** and specifying `page` and `page_size` as query parameters when calling the endpoint. For example, to get the second page with 10 items per page, you can use the following command:
-
-    ```bash
-    curl --digest --user '<public_key>:<private_key>' \
-      --request GET 'https://<region>.data.tidbcloud.com/api/v1beta/app/<app_id>/endpoint/<endpoint_path>?page=2&page_size=10'
-    ```
-
-    Note that this feature is available only for `GET` requests where the last query is a `SELECT` statement.
-
-    For more information, see [Call an endpoint](/tidb-cloud/data-service-manage-endpoint.md#call-an-endpoint).
-
-- [Data Service (beta)](https://tidbcloud.com/console/data-service) supports caching endpoint response of `GET` requests for a specified time-to-live (TTL) period.
-
-    This feature decreases database load and optimizes endpoint latency.
+- [Data Service (beta)](https://tidbcloud.com/console/data-service) supports automatically generating vector search endpoints.

-    For an endpoint using the `GET` request method, you can enable **Cache Response** and configure the TTL period for the cache in **Advance Properties**.
+    If your table contains [vector data types](/tidb-cloud/vector-search-data-types.md), you can automatically generate a vector search endpoint that calculates vector distances based on your selected distance function.

-    For more information, see [Advanced properties](/tidb-cloud/data-service-manage-endpoint.md#advanced-properties).
+    This feature enables seamless integration with AI platforms such as [Dify](https://docs.dify.ai/guides/tools) and [GPTs](https://openai.com/blog/introducing-gpts), enhancing your applications with advanced natural language processing and AI capabilities for more complex tasks and intelligent solutions.
-- Disable the load balancing improvement for [TiDB Dedicated](/tidb-cloud/select-cluster-tier.md#tidb-dedicated) clusters that are hosted on AWS and created after August 15, 2023, including:
+    For more information, see [Generate an endpoint automatically](/tidb-cloud/data-service-manage-endpoint.md#generate-an-endpoint-automatically) and [Integrate a Data App with Third-Party Tools](/tidb-cloud/data-service-integrations.md).

-    - Disable automatically migrating existing connections to new TiDB nodes when you scale out TiDB nodes hosted on AWS.
-    - Disable automatically migrating existing connections to available TiDB nodes when you scale in TiDB nodes hosted on AWS.
+- Introduce the budget feature to help you track actual TiDB Cloud costs against planned expenses, preventing unexpected costs.

-    This change avoids resource contention of hybrid deployments and does not affect existing clusters with this improvement enabled. If you want to enable the load balancing improvement for your new clusters, contact [TiDB Cloud Support](/tidb-cloud/tidb-cloud-support.md).
+    To access this feature, you must be in the `Organization Owner` or `Organization Billing Admin` role of your organization.

-## August 8, 2023
+    For more information, see [Manage budgets for TiDB Cloud](/tidb-cloud/tidb-cloud-budget.md).

-**General changes**
-
-- [Data Service (beta)](https://tidbcloud.com/console/data-service) now supports Basic Authentication.
-
-    You can provide your public key as the username and private key as the password in requests using the ['Basic' HTTP Authentication](https://datatracker.ietf.org/doc/html/rfc7617). Compared with Digest Authentication, the Basic Authentication is simpler, enabling more straightforward usage when calling Data Service endpoints.
-
-    For more information, see [Call an endpoint](/tidb-cloud/data-service-manage-endpoint.md#call-an-endpoint).
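The Basic credential described above is simply `base64(public_key:private_key)` carried in the `Authorization` header, per RFC 7617. A minimal sketch; the endpoint URL pattern in the trailing comment is a placeholder, not a real endpoint:

```python
# Sketch: building the RFC 7617 Basic credential for a Data Service call.
# Username = the API key's public key, password = the private key.
import base64

def basic_auth_header(public_key: str, private_key: str) -> str:
    token = base64.b64encode(f"{public_key}:{private_key}".encode()).decode()
    return f"Basic {token}"

print(basic_auth_header("pk", "sk"))  # -> Basic cGs6c2s=
# Equivalent curl (placeholder URL pattern):
#   curl --user '<public_key>:<private_key>' \
#     'https://<region>.data.tidbcloud.com/api/v1beta/app/<app_id>/endpoint/<endpoint_path>'
```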
-
-## August 1, 2023
+## July 9, 2024

**General changes**

-- Support the OpenAPI Specification for Data Apps in TiDB Cloud [Data Service](https://tidbcloud.com/console/data-service).
-
-    TiDB Cloud Data Service provides autogenerated OpenAPI documentation for each Data App. In the documentation, you can view the endpoints, parameters, and responses, and try out the endpoints.
-
-    You can also download an OpenAPI Specification (OAS) for a Data App and its deployed endpoints in YAML or JSON format. The OAS provides standardized API documentation, simplified integration, and easy code generation, which enables faster development and improved collaboration.
-
-    For more information, see [Use the OpenAPI Specification](/tidb-cloud/data-service-manage-data-app.md#use-the-openapi-specification) and [Use the OpenAPI Specification with Next.js](/tidb-cloud/data-service-oas-with-nextjs.md).
-
-- Support running Data App in [Postman](https://www.postman.com/).
-
-    The Postman integration empowers you to import a Data App's endpoints as a collection into your preferred workspace. Then you can benefit from enhanced collaboration and seamless API testing with support for both Postman web and desktop apps.
-
-    For more information, see [Run Data App in Postman](/tidb-cloud/data-service-postman-integration.md).
-
-- Introduce a new **Pausing** status for [TiDB Dedicated](/tidb-cloud/select-cluster-tier.md#tidb-dedicated) clusters, allowing cost-effective pauses with no charges during this period.
+- Enhance the [System Status](https://status.tidbcloud.com/) page to provide better insights into TiDB Cloud system health and performance.

-    When you click **Pause** for a TiDB Dedicated cluster, the cluster will enter the **Pausing** status first. Once the pause operation is completed, the cluster status will transition to **Paused**.
-
-    A cluster can only be resumed after its status transitions to **Paused**, which resolves the abnormal resumption issue caused by rapid clicks of **Pause** and **Resume**.
-
-    For more information, see [Pause or resume a TiDB Dedicated cluster](/tidb-cloud/pause-or-resume-tidb-cluster.md).
-
-## July 26, 2023
-
-**General changes**
-
-- Introduce a powerful feature in TiDB Cloud [Data Service](https://tidbcloud.com/console/data-service): Automatic endpoint generation.
-
-    Developers can now effortlessly create HTTP endpoints with minimal clicks and configurations. Eliminate repetitive boilerplate code, simplify and accelerate endpoint creation, and reduce potential errors.
-
-    For more information on how to use this feature, see [Generate an endpoint automatically](/tidb-cloud/data-service-manage-endpoint.md#generate-an-endpoint-automatically).
-
-- Support `PUT` and `DELETE` request methods for endpoints in TiDB Cloud [Data Service](https://tidbcloud.com/console/data-service).
-
-    - Use the `PUT` method to update or modify data, similar to an `UPDATE` statement.
-    - Use the `DELETE` method to delete data, similar to a `DELETE` statement.
-
-    For more information, see [Configure properties](/tidb-cloud/data-service-manage-endpoint.md#configure-properties).
-
-- Support **Batch Operation** for `POST`, `PUT`, and `DELETE` request methods in TiDB Cloud [Data Service](https://tidbcloud.com/console/data-service).
-
-    When **Batch Operation** is enabled for an endpoint, you gain the ability to perform operations on multiple rows in a single request. For instance, you can insert multiple rows of data using a single `POST` request.
-
-    For more information, see [Advanced properties](/tidb-cloud/data-service-manage-endpoint.md#advanced-properties).
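A batch `POST` body for such an endpoint can be sketched as follows. The `items` wrapper and the row fields are illustrative assumptions only; check the endpoint's generated documentation for the exact body shape:

```python
# Sketch: one POST body inserting multiple rows when Batch Operation is
# enabled. The "items" key and row columns are illustrative, not a
# documented schema.
import json

rows = [
    {"name": "alice", "score": 90},
    {"name": "bob", "score": 85},
]
body = json.dumps({"items": rows})
print(body)  # one request, two rows
```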
-
-## July 25, 2023
-
-**General changes**
-
-- Upgrade the default TiDB version of new [TiDB Dedicated](/tidb-cloud/select-cluster-tier.md#tidb-dedicated) clusters from [v6.5.3](https://docs.pingcap.com/tidb/v6.5/release-6.5.3) to [v7.1.1](https://docs.pingcap.com/tidb/v7.1/release-7.1.1).
+    To access it, visit [https://status.tidbcloud.com/](https://status.tidbcloud.com/) directly, or navigate via the [TiDB Cloud console](https://tidbcloud.com) by clicking **?** in the lower-right corner and selecting **System Status**.

**Console changes**

-- Simplify access to PingCAP Support for TiDB Cloud users by optimizing support entries. Improvements include:
+- Refine the **VPC Peering** page layout to improve the user experience for [creating VPC Peering connections](/tidb-cloud/set-up-vpc-peering-connections.md) in [TiDB Dedicated](/tidb-cloud/select-cluster-tier.md#tidb-dedicated) clusters.

-    - Add an entrance for **Support** in the lower-left corner.
-    - Revamp the menus of the **?** icon in the lower-right corner of the [TiDB Cloud console](https://tidbcloud.com/) to make them more intuitive.
-
-    For more information, see [TiDB Cloud Support](/tidb-cloud/tidb-cloud-support.md).
-
-## July 18, 2023
+## July 2, 2024

**General changes**

-- Refine role-based access control at both the organization level and project level, which lets you grant roles with minimum permissions to users for better security, compliance, and productivity.
-
-    - The organization roles include `Organization Owner`, `Organization Billing Admin`, `Organization Console Audit Admin`, and `Organization Member`.
-    - The project roles include `Project Owner`, `Project Data Access Read-Write`, and `Project Data Access Read-Only`.
-    - To manage clusters in a project (such as cluster creation, modification, and deletion), you need to be in the `Organization Owner` or `Project Owner` role.
-
-    For more information about permissions of different roles, see [User roles](/tidb-cloud/manage-user-access.md#user-roles).
-
-- Support the Customer-Managed Encryption Key (CMEK) feature (beta) for [TiDB Dedicated](/tidb-cloud/select-cluster-tier.md#tidb-dedicated) clusters hosted on AWS.
+- [Data Service (beta)](https://tidbcloud.com/console/data-service) provides an endpoint library with predefined system endpoints that you can directly add to your Data App, reducing the effort in your endpoint development.

-    You can create CMEK based on AWS KMS to encrypt data stored in EBS and S3 directly from the TiDB Cloud console. This ensures that customer data is encrypted with a key managed by the customer, which enhances security.
+    Currently, the library only includes the `/system/query` endpoint, which enables you to execute any SQL statement by simply passing the statement in the predefined `sql` parameter. This endpoint facilitates the immediate execution of SQL queries, enhancing flexibility and efficiency.

-    Note that this feature still has restrictions and is only available upon request. To apply for this feature, contact [TiDB Cloud Support](/tidb-cloud/tidb-cloud-support.md).
+    For more information, see [Add a predefined system endpoint](/tidb-cloud/data-service-manage-endpoint.md#add-a-predefined-system-endpoint).

-- Optimize the Import feature in TiDB Cloud, aimed at enhancing the data import experience. The following improvements have been made:
+- Enhance slow query data storage.

-    - Unified Import entry for TiDB Serverless: consolidate the entries for importing data, allowing you to seamlessly switch between importing local files and importing files from Amazon S3.
-    - Streamlined configuration: importing data from Amazon S3 now only requires a single step, saving time and effort.
-    - Enhanced CSV configuration: the CSV configuration settings are now located under the file type option, making it easier for you to quickly configure the necessary parameters.
-    - Enhanced target table selection: support choosing the desired target tables for data import by clicking checkboxes.
This improvement eliminates the need for complex expressions and simplifies the target table selection.
-    - Refined display information: resolve issues related to inaccurate information displayed during the import process. In addition, the Preview feature has been removed to prevent incomplete data display and avoid misleading information.
-    - Improved source files mapping: support defining mapping relationships between source files and target tables. It addresses the challenge of modifying source file names to meet specific naming requirements.
+    The slow query access on the [TiDB Cloud console](https://tidbcloud.com) is now more stable and does not affect database performance.

-## July 11, 2023
+## June 25, 2024

**General changes**

-- [TiDB Serverless](/tidb-cloud/select-cluster-tier.md#tidb-serverless) is now Generally Available.
+- [TiDB Serverless](/tidb-cloud/select-cluster-tier.md#tidb-serverless) supports vector search (beta).

-- Introduce TiDB Bot (beta), an OpenAI-powered chatbot that offers multi-language support, 24/7 real-time response, and integrated documentation access.
+    The vector search (beta) feature provides an advanced search solution for performing semantic similarity searches across various data types, including documents, images, audio, and video. This feature enables developers to easily build scalable applications with generative artificial intelligence (AI) capabilities using familiar MySQL skills. Key features include:

-    TiDB Bot provides you with the following benefits:
+    - [Vector data types](/tidb-cloud/vector-search-data-types.md), [vector index](/tidb-cloud/vector-search-index.md), and [vector functions and operators](/tidb-cloud/vector-search-functions-and-operators.md).
+    - Ecosystem integrations with [LangChain](/tidb-cloud/vector-search-integrate-with-langchain.md), [LlamaIndex](/tidb-cloud/vector-search-integrate-with-llamaindex.md), and [JinaAI](/tidb-cloud/vector-search-integrate-with-jinaai-embedding.md).
+    - Programming language support for Python: [SQLAlchemy](/tidb-cloud/vector-search-integrate-with-sqlalchemy.md), [Peewee](/tidb-cloud/vector-search-integrate-with-peewee.md), and [Django ORM](/tidb-cloud/vector-search-integrate-with-django-orm.md).
+    - Sample applications and tutorials: perform semantic searches for documents using [Python](/tidb-cloud/vector-search-get-started-using-python.md) or [SQL](/tidb-cloud/vector-search-get-started-using-sql.md).

-    - Continuous support: always available to assist and answer your questions for an enhanced support experience.
-    - Improved efficiency: automated responses reduce latency, improving overall operations.
-    - Seamless documentation access: direct access to TiDB Cloud documentation for easy information retrieval and quick issue resolution.
+    For more information, see [Vector search (beta) overview](/tidb-cloud/vector-search-overview.md).

-    To use TiDB Bot, click **?** in the lower-right corner of the [TiDB Cloud console](https://tidbcloud.com), and select **Ask TiDB Bot** to start a chat.
+- [TiDB Serverless](/tidb-cloud/select-cluster-tier.md#tidb-serverless) now offers weekly email reports for organization owners.

-- Support [the branching feature (beta)](/tidb-cloud/branch-overview.md) for [TiDB Serverless](/tidb-cloud/select-cluster-tier.md#tidb-serverless) clusters.
+    These reports provide insights into the performance and activity of your clusters. By receiving automatic weekly updates, you can stay informed about your clusters and make data-driven decisions to optimize your clusters.

-    TiDB Cloud lets you create branches for TiDB Serverless clusters. A branch for a cluster is a separate instance that contains a diverged copy of data from the original cluster. It provides an isolated environment, allowing you to connect to it and experiment freely without worrying about affecting the original cluster.
+- Release Chat2Query API v3 endpoints and deprecate the Chat2Query API v1 endpoint `/v1/chat2data`.
-    You can create branches for TiDB Serverless clusters created after July 5, 2023 by using either [TiDB Cloud console](/tidb-cloud/branch-manage.md) or [TiDB Cloud CLI](/tidb-cloud/ticloud-branch-create.md).
+    With Chat2Query API v3 endpoints, you can start multi-round Chat2Query by using sessions.

-    If you use GitHub for application development, you can integrate TiDB Serverless branching into your GitHub CI/CD pipeline, which lets you automatically test your pull requests with branches without affecting the production database. For more information, see [Integrate TiDB Serverless Branching (Beta) with GitHub](/tidb-cloud/branch-github-integration.md).
-
-- Support weekly backup for [TiDB Dedicated](/tidb-cloud/select-cluster-tier.md#tidb-dedicated) clusters. For more information, see [Back up and restore TiDB Dedicated data](/tidb-cloud/backup-and-restore.md#turn-on-auto-backup).
-
-## July 4, 2023
-
-**General changes**
-
-- Support point-in-time recovery (PITR) (beta) for [TiDB Serverless](/tidb-cloud/select-cluster-tier.md#tidb-serverless) clusters.
-
-    You can now restore your TiDB Serverless cluster to any point in time within the last 90 days. This feature enhances the data recovery capability of TiDB Serverless clusters. For example, you can use PITR when data write errors occur and you want to restore the data to an earlier state.
-
-    For more information, see [Back up and restore TiDB Serverless data](/tidb-cloud/backup-and-restore-serverless.md#restore).
+    For more information, see [Get started with Chat2Query API](/tidb-cloud/use-chat2query-api.md).

**Console changes**

-- Enhance the **Usage This Month** panel on the cluster overview page for [TiDB Serverless](/tidb-cloud/select-cluster-tier.md#tidb-serverless) clusters to provide a clearer view of your current resource usage.
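The multi-round flow mentioned for the Chat2Query API v3 endpoints can be sketched as two steps: create a session once, then send follow-up questions that reference it so context carries over between rounds. All URL paths and body fields below are illustrative assumptions, not documented shapes; see the Chat2Query API documentation for the real ones:

```python
# Sketch of multi-round Chat2Query using sessions. Paths, hosts, and field
# names are hypothetical placeholders for illustration only.
import json

BASE = "https://<region>.data.tidbcloud.com/api/v3"  # placeholder host

def create_session_request():
    # Round 1: open a session (hypothetical path and empty body).
    return {"method": "POST", "url": f"{BASE}/sessions", "body": {}}

def chat_in_session_request(session_id: int, question: str):
    # Later rounds reuse the session ID so prior turns inform the answer.
    return {
        "method": "POST",
        "url": f"{BASE}/sessions/{session_id}/chat2data",
        "body": {"question": question},
    }

first = create_session_request()
follow_up = chat_in_session_request(42, "Filter the previous result to 2024 only")
print(json.dumps(follow_up["body"]))
```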
-
-- Enhance the overall navigation experience by making the following changes:
-
-    - Consolidate **Organization** and **Account** in the upper-right corner into the left navigation bar.
-    - Consolidate **Admin** in the left navigation bar into **Project** in the left navigation bar, and remove the ☰ hover menu in the upper-left corner. Now you can click to switch between projects and modify project settings.
-    - Consolidate all the help and support information for TiDB Cloud into the menu of the **?** icon in the lower-right corner, such as documentation, interactive tutorials, self-paced training, and support entries.
-
-- TiDB Cloud console now supports Dark Mode, which provides a more comfortable, eye-friendly experience. You can switch between light mode and dark mode from the bottom of the left navigation bar.
-
-## June 27, 2023
-
-**General changes**
-
-- Remove the pre-built sample dataset for newly created [TiDB Serverless](/tidb-cloud/select-cluster-tier.md#tidb-serverless) clusters.
-
-## June 20, 2023
-
-**General changes**
-
-- Upgrade the default TiDB version of new [TiDB Dedicated](/tidb-cloud/select-cluster-tier.md#tidb-dedicated) clusters from [v6.5.2](https://docs.pingcap.com/tidb/v6.5/release-6.5.2) to [v6.5.3](https://docs.pingcap.com/tidb/v6.5/release-6.5.3).
-
-## June 13, 2023
-
-**General changes**
-
-- Support using changefeeds to stream data to Amazon S3.
-
-    This enables seamless integration between TiDB Cloud and Amazon S3. It allows real-time data capture and replication from [TiDB Dedicated](/tidb-cloud/select-cluster-tier.md#tidb-dedicated) clusters to Amazon S3, ensuring that downstream applications and analytics have access to up-to-date data.
+- Rename Chat2Query (beta) to SQL Editor (beta).

-    For more information, see [Sink to cloud storage](/tidb-cloud/changefeed-sink-to-cloud-storage.md).
+    The interface previously known as Chat2Query is renamed to SQL Editor.
This change clarifies the distinction between manual SQL editing and AI-assisted query generation, enhancing usability and your overall experience.
-- Increase the maximum node storage of 16 vCPU TiKV for [TiDB Dedicated](/tidb-cloud/select-cluster-tier.md#tidb-dedicated) clusters from 4 TiB to 6 TiB.
+    - **SQL Editor**: the default interface for manually writing and executing SQL queries in the TiDB Cloud console.
+    - **Chat2Query**: the AI-assisted text-to-query feature, which enables you to interact with your databases using natural language to generate, rewrite, and optimize SQL queries.

-    This enhancement increases the data storage capacity of your TiDB Dedicated cluster, improves workload scaling efficiency, and accommodates growing data requirements.
+    For more information, see [Explore your data with AI-assisted SQL Editor](/tidb-cloud/explore-data-with-chat2query.md).

-    For more information, see [Size your cluster](/tidb-cloud/size-your-cluster.md).
-
-- Extend the [monitoring metrics retention period](/tidb-cloud/built-in-monitoring.md#metrics-retention-policy) for [TiDB Serverless](/tidb-cloud/select-cluster-tier.md#tidb-serverless) clusters from 3 days to 7 days.
-
-    By extending the metrics retention period, now you have access to more historical data. This helps you identify trends and patterns of the cluster for better decision-making and faster troubleshooting.
-
-**Console changes**
-
-- Release a new native web infrastructure for the [**Key Visualizer**](/tidb-cloud/tune-performance.md#key-visualizer) page of [TiDB Dedicated](/tidb-cloud/select-cluster-tier.md#tidb-dedicated) clusters.
-
-    With the new infrastructure, you can easily navigate through the **Key Visualizer** page and access the necessary information in a more intuitive and efficient manner. The new infrastructure also resolves many problems on UX, making the SQL diagnosis process more user-friendly.
-
-## June 6, 2023
+## June 18, 2024

**General changes**

-- Introduce [Index Insight (beta)](/tidb-cloud/index-insight.md) for [TiDB Dedicated](/tidb-cloud/select-cluster-tier.md#tidb-dedicated) clusters, which optimizes query performance by providing index recommendations for slow queries.
-
-    With Index Insight, you can improve the overall application performance and efficiency of your database operations in the following ways:
-
-    - Enhanced query performance: Index Insight identifies slow queries and suggests appropriate indexes for them, thereby speeding up query execution, reducing response time, and improving user experience.
-    - Cost efficiency: By using Index Insight to optimize query performance, the need for extra computing resources is reduced, enabling you to use existing infrastructure more effectively. This can potentially lead to operational cost savings.
-    - Simplified optimization process: Index Insight simplifies the identification and implementation of index improvements, eliminating the need for manual analysis and guesswork. As a result, you can save time and effort with accurate index recommendations.
-    - Improved application efficiency: By using Index Insight to optimize database performance, applications running on TiDB Cloud can handle larger workloads and serve more users concurrently, which makes scaling operations of applications more efficient.
-
-    To use Index Insight, navigate to the **Diagnosis** page of your TiDB Dedicated cluster and click the **Index Insight BETA** tab.
-
-    For more information, see [Use Index Insight (beta)](/tidb-cloud/index-insight.md).
-
-- Introduce [TiDB Playground](https://play.tidbcloud.com/?utm_source=docs&utm_medium=tidb_cloud_release_notes), an interactive platform for experiencing the full capabilities of TiDB, without registration or installation.
+- Increase the maximum node storage of 16 vCPU TiFlash and 32 vCPU TiFlash for [TiDB Dedicated](/tidb-cloud/select-cluster-tier.md#tidb-dedicated) clusters from 2048 GiB to 4096 GiB.

-    TiDB Playground is an interactive platform designed to provide a one-stop-shop experience for exploring the capabilities of TiDB, such as scalability, MySQL compatibility, and real-time analytics.
+    This enhancement increases the analytics data storage capacity of your TiDB Dedicated cluster, improves workload scaling efficiency, and accommodates growing data requirements.

-    With TiDB Playground, you can try out TiDB features in a controlled environment free from complex configurations in real-time, making it ideal to understand the features in TiDB.
+    For more information, see [TiFlash node storage](/tidb-cloud/size-your-cluster.md#tiflash-node-storage).

-    To get started with TiDB Playground, go to the [**TiDB Playground**](https://play.tidbcloud.com/?utm_source=docs&utm_medium=tidb_cloud_release_notes) page, select a feature you want to explore, and begin your exploration.
+- Upgrade the default TiDB version of new [TiDB Dedicated](/tidb-cloud/select-cluster-tier.md#tidb-dedicated) clusters from [v7.5.1](https://docs.pingcap.com/tidb/v7.5/release-7.5.1) to [v7.5.2](https://docs.pingcap.com/tidb/v7.5/release-7.5.2).

-## June 5, 2023
+## June 4, 2024

**General changes**

-- Support connecting your [Data App](/tidb-cloud/tidb-cloud-glossary.md#data-app) to GitHub.
+- Introduce the Recovery Group feature (beta) for disaster recovery of [TiDB Dedicated](/tidb-cloud/select-cluster-tier.md#tidb-dedicated) clusters deployed on AWS.

-    By [connecting your Data App to GitHub](/tidb-cloud/data-service-manage-github-connection.md), you can manage all configurations of the Data App as [code files](/tidb-cloud/data-service-app-config-files.md) on GitHub, which integrates TiDB Cloud Data Service seamlessly with your system architecture and DevOps process.
+    This feature enables you to replicate your databases between TiDB Dedicated clusters, ensuring rapid recovery in the event of a regional disaster. If you are in the `Project Owner` role, you can enable this feature by creating a new recovery group and assigning databases to the group. By replicating databases with recovery groups, you can improve disaster readiness, meet stricter availability SLAs, and achieve more aggressive Recovery Point Objectives (RPO) and Recovery Time Objectives (RTO).

-    With this feature, you can easily accomplish the following tasks, which improves the CI/CD experience of developing Data Apps:
+    For more information, see [Get started with recovery groups](/tidb-cloud/recovery-group-get-started.md).

-    - Automatically deploy Data App changes with GitHub.
-    - Configure CI/CD pipelines of your Data App changes on GitHub with version control.
-    - Disconnect from a connected GitHub repository.
-    - Review endpoint changes before the deployment.
-    - View deployment history and take necessary actions in the event of a failure.
-    - Re-deploy a commit to roll back to an earlier deployment.
+- Introduce billing and metering (beta) for the [TiDB Serverless](/tidb-cloud/select-cluster-tier.md#tidb-serverless) columnar storage [TiFlash](/tiflash/tiflash-overview.md).

-    For more information, see [Deploy Data App automatically with GitHub](/tidb-cloud/data-service-manage-github-connection.md).
+    Until June 30, 2024, columnar storage in TiDB Serverless clusters remains free with a 100% discount. After this date, each TiDB Serverless cluster will include a free quota of 5 GiB for columnar storage. Usage beyond the free quota will be charged.

-## June 2, 2023
+    For more information, see [TiDB Serverless pricing details](https://www.pingcap.com/tidb-serverless-pricing-details/#storage).
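The columnar storage quota described above is simple to reason about: only usage beyond the 5 GiB free quota is billable. A small sketch (the rates themselves live on the pricing page and are not computed here):

```python
# Sketch: estimating billable TiDB Serverless columnar storage after the
# 5 GiB free quota. Rates are intentionally omitted; see the pricing page.
FREE_QUOTA_GIB = 5

def billable_columnar_gib(used_gib: float) -> float:
    # Only usage beyond the free quota is charged.
    return max(0.0, used_gib - FREE_QUOTA_GIB)

print(billable_columnar_gib(12.0))  # 7.0 GiB billable
print(billable_columnar_gib(3.5))   # 0.0 -- fully inside the free quota
```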
-**General changes**
-
-- In our pursuit to simplify and clarify, we have updated the names of our products:
-
-    - "TiDB Cloud Serverless Tier" is now called "TiDB Serverless".
-    - "TiDB Cloud Dedicated Tier" is now called "TiDB Dedicated".
-    - "TiDB On-Premises" is now called "TiDB Self-Hosted".
+- [TiDB Serverless](/tidb-cloud/select-cluster-tier.md#tidb-serverless) supports [Time to live (TTL)](/time-to-live.md).

-    Enjoy the same great performance under these refreshed names. Your experience is our priority.
-
-## May 30, 2023
+## May 28, 2024

**General changes**

-- Enhance support for incremental data migration for the Data Migration feature in TiDB Cloud.
-
-    You can now specify a binlog position or a global transaction identifier (GTID) to replicate only incremental data generated after the specified position to TiDB Cloud. This enhancement empowers you with greater flexibility to select and replicate the data you need, aligning with your specific requirements.
-
-    For details, refer to [Migrate Only Incremental Data from MySQL-Compatible Databases to TiDB Cloud Using Data Migration](/tidb-cloud/migrate-incremental-data-from-mysql-using-data-migration.md).
+- Google Cloud `Taiwan (asia-east1)` region supports the [Data Migration](/tidb-cloud/migrate-from-mysql-using-data-migration.md) feature.

-- Add a new event type (`ImportData`) to the [**Events**](/tidb-cloud/tidb-cloud-events.md) page.
-
-- Remove **Playground** from the TiDB Cloud console.
-
-    Stay tuned for the new standalone Playground with an optimized experience.
-
-## May 23, 2023
-
-**General changes**
+    The [TiDB Dedicated](/tidb-cloud/select-cluster-tier.md#tidb-dedicated) clusters hosted in the Google Cloud `Taiwan (asia-east1)` region now support the Data Migration (DM) feature. If your upstream data is stored in or near this region, you can now take advantage of faster and more reliable data migration from Google Cloud to TiDB Cloud.
-- When uploading a CSV file to TiDB, you can use not only English letters and numbers, but also characters such as Chinese and Japanese to define column names. However, for special characters, only underscore (`_`) is supported.
+- Provide a new [TiDB node size](/tidb-cloud/size-your-cluster.md#tidb-vcpu-and-ram) for [TiDB Dedicated](/tidb-cloud/select-cluster-tier.md#tidb-dedicated) clusters hosted on AWS and Google Cloud: `16 vCPU, 64 GiB`.
-
-    For details, refer to [Import Local Files to TiDB Cloud](/tidb-cloud/tidb-cloud-import-local-files.md).
-
-## May 16, 2023
-
-**Console changes**
-
-- Introduce the left navigation entries organized by functional categories for both Dedicated and Serverless tiers.
-
-    The new navigation makes it easier and more intuitive for you to discover the feature entries. To view the new navigation, access the overview page of your cluster.
-
-- Release a new native web infrastructure for the following two tabs on the **Diagnosis** page of Dedicated Tier clusters.
-
-    - [Slow Query](/tidb-cloud/tune-performance.md#slow-query)
-    - [SQL Statement](/tidb-cloud/tune-performance.md#statement-analysis)
-
-    With the new infrastructure, you can easily navigate through the two tabs and access the necessary information in a more intuitive and efficient manner. The new infrastructure also improves user experience, making the SQL diagnosis process more user-friendly.
-
-## May 9, 2023
-
-**General changes**
-
-- Support changing node sizes for GCP-hosted clusters created after April 26, 2023.
-
-    With this feature, you can upgrade to higher-performance nodes for increased demand or downgrade to lower-performance nodes for cost savings. With this added flexibility, you can adjust your cluster's capacity to align with your workloads and optimize costs.
-
-    For detailed steps, see [Change node size](/tidb-cloud/scale-tidb-cluster.md#change-vcpu-and-ram).
-
-- Support importing compressed files.
-    You can import CSV and SQL files in the following formats: `.gzip`, `.gz`, `.zstd`, `.zst`, and `.snappy`. This feature provides a more efficient and cost-effective way to import data and reduces your data transfer costs.
-
-    For more information, see [Import CSV Files from Amazon S3 or GCS into TiDB Cloud](/tidb-cloud/import-csv-files.md) and [Import Sample Data](/tidb-cloud/import-sample-data.md).
-
-- Support AWS PrivateLink-powered endpoint connection as a new network access management option for TiDB Cloud [Serverless Tier](/tidb-cloud/select-cluster-tier.md#tidb-serverless) clusters.
-
-    The private endpoint connection does not expose your data to the public internet. In addition, the endpoint connection supports CIDR overlap and is easier for network management.
-
-    For more information, see [Set Up Private Endpoint Connections](/tidb-cloud/set-up-private-endpoint-connections.md).
-
-**Console changes**
-
-- Add new event types to the [**Event**](/tidb-cloud/tidb-cloud-events.md) page to record backup, restore, and changefeed actions for [Dedicated Tier](/tidb-cloud/select-cluster-tier.md#tidb-dedicated) clusters.
-
-    To get a full list of the events that can be recorded, see [Logged events](/tidb-cloud/tidb-cloud-events.md#logged-events).
-
-- Introduce the **SQL Statement** tab on the [**SQL Diagnosis**](/tidb-cloud/tune-performance.md) page for [Serverless Tier](/tidb-cloud/select-cluster-tier.md#tidb-serverless) clusters.
-
-    The **SQL Statement** tab provides the following:
-
-    - A comprehensive overview of all SQL statements executed by your TiDB database, allowing you to easily identify and diagnose slow queries.
-    - Detailed information on each SQL statement, such as the query time, execution plan, and the database server response, helping you optimize your database performance.
-    - A user-friendly interface that makes it easy to sort, filter, and search through large amounts of data, enabling you to focus on the most critical queries.
-
-    For more information, see [Statement Analysis](/tidb-cloud/tune-performance.md#statement-analysis).
-
-## May 6, 2023
-
-**General changes**
-
-- Support directly accessing the [Data Service endpoint](/tidb-cloud/tidb-cloud-glossary.md#endpoint) in the region where a TiDB [Serverless Tier](/tidb-cloud/select-cluster-tier.md#tidb-serverless) cluster is located.
-
-    For newly created Serverless Tier clusters, the endpoint URL now includes the cluster region information. By requesting the regional domain `<region>.data.tidbcloud.com`, you can directly access the endpoint in the region where the TiDB cluster is located.
-
-    Alternatively, you can also request the global domain `data.tidbcloud.com` without specifying a region. In this way, TiDB Cloud will internally redirect the request to the target region, but this might result in additional latency. If you choose this way, make sure to add the `--location-trusted` option to your curl command when calling an endpoint.
-
-    For more information, see [Call an endpoint](/tidb-cloud/data-service-manage-endpoint.md#call-an-endpoint).
-
-## April 25, 2023
-
-**General changes**
-
-- For the first five [Serverless Tier](/tidb-cloud/select-cluster-tier.md#tidb-serverless) clusters in your organization, TiDB Cloud provides a free usage quota for each of them as follows:
-
-    - Row storage: 5 GiB
-    - [Request Units (RUs)](/tidb-cloud/tidb-cloud-glossary.md#request-unit): 50 million RUs per month
-
-    Until May 31, 2023, Serverless Tier clusters are still free, with a 100% discount off. After that, usage beyond the free quota will be charged.
-
-    You can easily [monitor your cluster usage or increase your usage quota](/tidb-cloud/manage-serverless-spend-limit.md#manage-spending-limit-for-tidb-serverless-clusters) in the **Usage This Month** area of your cluster **Overview** page.
-    Once the free quota of a cluster is reached, the read and write operations on this cluster will be throttled until you increase the quota or the usage is reset upon the start of a new month.
-
-    For more information about the RU consumption of different resources (including read, write, SQL CPU, and network egress), the pricing details, and the throttled information, see [TiDB Cloud Serverless Tier Pricing Details](https://www.pingcap.com/tidb-cloud-serverless-pricing-details).
-
-- Support backup and restore for TiDB Cloud [Serverless Tier](/tidb-cloud/select-cluster-tier.md#tidb-serverless) clusters.
-
-    For more information, see [Back up and Restore TiDB Cluster Data](/tidb-cloud/backup-and-restore-serverless.md).
-
-- Upgrade the default TiDB version of new [Dedicated Tier](/tidb-cloud/select-cluster-tier.md#tidb-dedicated) clusters from [v6.5.1](https://docs.pingcap.com/tidb/v6.5/release-6.5.1) to [v6.5.2](https://docs.pingcap.com/tidb/v6.5/release-6.5.2).
-
-- Provide a maintenance window feature to enable you to easily schedule and manage planned maintenance activities for [Dedicated Tier](/tidb-cloud/select-cluster-tier.md#tidb-dedicated) clusters.
-
-    A maintenance window is a designated timeframe during which planned maintenance activities, such as operating system updates, security patches, and infrastructure upgrades, are performed automatically to ensure the reliability, security, and performance of the TiDB Cloud service.
-
-    During a maintenance window, temporary connection disruptions or QPS fluctuations might occur, but the clusters remain available, and SQL operations, the existing data import, backup, restore, migration, and replication tasks can still run normally. See [a list of allowed and disallowed operations](/tidb-cloud/configure-maintenance-window.md#allowed-and-disallowed-operations-during-a-maintenance-window) during maintenance.
-
-    We will strive to minimize the frequency of maintenance.
-    If a maintenance window is planned, the default start time is 03:00 Wednesday (based on the time zone of your TiDB Cloud organization) of the target week. To avoid potential disruptions, it is important to be aware of the maintenance schedules and plan your operations accordingly.
-
-    - To keep you informed, TiDB Cloud will send you three email notifications for every maintenance window: one before, one starting, and one after the maintenance tasks.
-    - To minimize the maintenance impact, you can modify the maintenance start time to your preferred time or defer maintenance activities on the **Maintenance** page.
-
-    For more information, see [Configure maintenance window](/tidb-cloud/configure-maintenance-window.md).
-
-- Improve load balancing of TiDB and reduce connection drops when you scale TiDB nodes of [Dedicated Tier](/tidb-cloud/select-cluster-tier.md#tidb-dedicated) clusters that are hosted on AWS and created after April 25, 2023.
-
-    - Support automatically migrating existing connections to new TiDB nodes when you scale out TiDB nodes.
-    - Support automatically migrating existing connections to available TiDB nodes when you scale in TiDB nodes.
-
-    Currently, this feature is provided for all Dedicated Tier clusters hosted on AWS.
-
-**Console changes**
-
-- Release a new native web infrastructure for the [Monitoring](/tidb-cloud/built-in-monitoring.md#view-the-metrics-page) page of [Dedicated Tier](/tidb-cloud/select-cluster-tier.md#tidb-dedicated) clusters.
-
-    With the new infrastructure, you can easily navigate through the [Monitoring](/tidb-cloud/built-in-monitoring.md#view-the-metrics-page) page and access the necessary information in a more intuitive and efficient manner. The new infrastructure also resolves many problems on UX, making the monitoring process more user-friendly.
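The default maintenance start time described in these notes (03:00 Wednesday, in the time zone of the TiDB Cloud organization) can be sketched as a small date computation. Treating a Wednesday "today" as the maintenance day itself is an assumption made here for illustration.

```python
# Sketch: computing the default maintenance start time described above
# (03:00 on the upcoming Wednesday). Time-zone handling is omitted; the
# result is interpreted in the organization's time zone.
from datetime import date, datetime, time, timedelta

def next_default_maintenance(today: date) -> datetime:
    # Monday is 0 in Python's weekday(), so Wednesday is 2.
    days_ahead = (2 - today.weekday()) % 7
    return datetime.combine(today + timedelta(days=days_ahead), time(3, 0))

print(next_default_maintenance(date(2023, 4, 25)))  # a Tuesday
```

With this convention, a Tuesday input yields 03:00 the next day, and a Wednesday input yields 03:00 that same day.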
-
-## April 18, 2023
-
-**General changes**
-
-- Support scaling up or down [Data Migration job specifications](/tidb-cloud/tidb-cloud-billing-dm.md#specifications-for-data-migration) for [Dedicated Tier](/tidb-cloud/select-cluster-tier.md#tidb-dedicated) clusters.
-
-    With this feature, you can improve migration performance by scaling up specifications or reduce costs by scaling down specifications.
-
-    For more information, see [Migrate MySQL-Compatible Databases to TiDB Cloud Using Data Migration](/tidb-cloud/migrate-from-mysql-using-data-migration.md#scale-a-migration-job-specification).
-
-**Console changes**
-
-- Revamp the UI to make [cluster creation](https://tidbcloud.com/console/clusters/create-cluster) experience more user-friendly, enabling you to create and configure clusters with just a few clicks.
-
-    The new design focuses on simplicity, reducing visual clutter, and providing clear instructions. After clicking **Create** on the cluster creation page, you will be directed to the cluster overview page without having to wait for the cluster creation to be completed.
-
-    For more information, see [Create a cluster](/tidb-cloud/create-tidb-cluster.md).
-
-- Introduce the **Discounts** tab on the **Billing** page to show the discount information for organization owners and billing administrators.
-
-    For more information, see [Discounts](/tidb-cloud/tidb-cloud-billing.md#discounts).
-
-## April 11, 2023
-
-**General changes**
-
-- Improve the load balance of TiDB and reduce connection drops when you scale TiDB nodes of [Dedicated Tier](/tidb-cloud/select-cluster-tier.md#tidb-dedicated) clusters hosted on AWS.
-
-    - Support automatically migrating existing connections to new TiDB nodes when you scale out TiDB nodes.
-    - Support automatically migrating existing connections to available TiDB nodes when you scale in TiDB nodes.
-
-    Currently, this feature is only provided for Dedicated Tier clusters that are hosted on the AWS `Oregon (us-west-2)` region.
-
-- Support the [New Relic](https://newrelic.com/) integration for [Dedicated Tier](/tidb-cloud/select-cluster-tier.md#tidb-dedicated) clusters.
-
-    With the New Relic integration, you can configure TiDB Cloud to send metric data of your TiDB clusters to [New Relic](https://newrelic.com/). Then, you can monitor and analyze both your application performance and your TiDB database performance on [New Relic](https://newrelic.com/). This feature can help you quickly identify and troubleshoot potential issues and reduce the resolution time.
-
-    For integration steps and available metrics, see [Integrate TiDB Cloud with New Relic](/tidb-cloud/monitor-new-relic-integration.md).
-
-- Add the following [changefeed](/tidb-cloud/changefeed-overview.md) metrics to the Prometheus integration for Dedicated Tier clusters.
-
-    - `tidbcloud_changefeed_latency`
-    - `tidbcloud_changefeed_replica_rows`
-
-    If you have [integrated TiDB Cloud with Prometheus](/tidb-cloud/monitor-prometheus-and-grafana-integration.md), you can monitor the performance and health of changefeeds in real time using these metrics. Additionally, you can easily create alerts to monitor the metrics using Prometheus.
-
-**Console changes**
-
-- Update the [Monitoring](/tidb-cloud/built-in-monitoring.md#view-the-metrics-page) page for [Dedicated Tier](/tidb-cloud/select-cluster-tier.md#tidb-dedicated) clusters to use [node-level resource metrics](/tidb-cloud/built-in-monitoring.md#server).
-
-    With node-level resource metrics, you can see a more accurate representation of resource consumption to better understand the actual usage of purchased services.
-
-    To access these metrics, navigate to the [Monitoring](/tidb-cloud/built-in-monitoring.md#view-the-metrics-page) page of your cluster, and then check the **Server** category under the **Metrics** tab.
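Once the changefeed metrics listed in these notes are scraped through the Prometheus integration, alerting rules can be built around them. A sketch follows; only the metric name comes from the release notes, while the rule name, threshold, and duration are placeholders.

```python
# Sketch of a Prometheus alerting rule built around the changefeed metrics
# mentioned above. The threshold, duration, and rule name are hypothetical;
# tidbcloud_changefeed_latency is the metric name from the release notes.

LATENCY_THRESHOLD_SECONDS = 600  # hypothetical threshold

alert_rule = {
    "alert": "ChangefeedLatencyHigh",
    "expr": f"tidbcloud_changefeed_latency > {LATENCY_THRESHOLD_SECONDS}",
    "for": "5m",
    "labels": {"severity": "warning"},
    "annotations": {
        "summary": "Changefeed replication latency is above the threshold",
    },
}

print(alert_rule["expr"])
```

A rule like this would normally be serialized into a Prometheus rule file; the dictionary form is used here only to keep the sketch self-contained.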
-
-- Optimize the [Billing](/tidb-cloud/tidb-cloud-billing.md#billing-details) page by reorganizing the billing items in **Summary by Project** and **Summary by Service**, which makes the billing information clearer.
-
-## April 4, 2023
-
-**General changes**
-
-- Remove the following two alerts from [TiDB Cloud built-in alerts](/tidb-cloud/monitor-built-in-alerting.md#tidb-cloud-built-in-alert-conditions) to prevent false positives. This is because temporary offline or out-of-memory (OOM) issues on one of the nodes do not significantly affect the overall health of a cluster.
-
-    - At least one TiDB node in the cluster has run out of memory.
-    - One or more cluster nodes are offline.
-
-**Console changes**
-
-- Introduce the [Alerts](/tidb-cloud/monitor-built-in-alerting.md) page for [Dedicated Tier](/tidb-cloud/select-cluster-tier.md#tidb-dedicated) clusters, which lists both active and closed alerts for each Dedicated Tier cluster.
-
-    The **Alerts** page provides the following:
-
-    - An intuitive and user-friendly user interface. You can view alerts for your clusters on this page even if you have not subscribed to the alert notification emails.
-    - Advanced filtering options to help you quickly find and sort alerts based on their severity, status, and other attributes. It also allows you to view the historical data for the last 7 days, which eases the alert history tracking.
-    - The **Edit Rule** feature. You can customize alert rule settings to meet your cluster's specific needs.
-
-    For more information, see [TiDB Cloud built-in alerts](/tidb-cloud/monitor-built-in-alerting.md).
-
-- Consolidate the help-related information and actions of TiDB Cloud into a single place.
+**API changes**
-
-    Now, you can get all the [TiDB Cloud help information](/tidb-cloud/tidb-cloud-support.md) and contact support by clicking **?** in the lower-right corner of the [TiDB Cloud console](https://tidbcloud.com/).
+- Introduce TiDB Cloud Data Service API for managing the following resources automatically and efficiently:
-
-- Introduce the [Getting Started](https://tidbcloud.com/console/getting-started) page to help you learn about TiDB Cloud.
+
+    * **Data App**: a collection of endpoints that you can use to access data for a specific application.
+    * **Data Source**: clusters linked to Data Apps for data manipulation and retrieval.
+    * **Endpoint**: a web API that you can customize to execute SQL statements.
+    * **Data API Key**: used for secure endpoint access.
+    * **OpenAPI Specification**: Data Service supports generating the OpenAPI Specification 3.0 for each Data App, which enables you to interact with your endpoints in a standardized format.
-
-    The **Getting Started** page provides you with interactive tutorials, essential guides, and useful links. By following interactive tutorials, you can easily explore TiDB Cloud features and HTAP capabilities with pre-built industry-specific datasets (Steam Game Dataset and S&P 500 Dataset).
+
+    These TiDB Cloud Data Service API endpoints are released in TiDB Cloud API v1beta1, which is the latest API version of TiDB Cloud.
-
-    To access the **Getting Started** page, click **Getting Started** in the left navigation bar of the [TiDB Cloud console](https://tidbcloud.com/). On this page, you can click **Query Sample Dataset** to open the interactive tutorials or click other links to explore TiDB Cloud. Alternatively, you can click **?** in the lower-right corner and click **Interactive Tutorials**.
+
+    For more information, see [API documentation (v1beta1)](https://docs.pingcap.com/tidbcloud/api/v1beta1/dataservice).
-
-## March 29, 2023
+## May 21, 2024

 **General changes**

-- [Data Service (beta)](/tidb-cloud/data-service-overview.md) supports more fine-grained access control for Data Apps.
+- Provide a new [TiDB node size](/tidb-cloud/size-your-cluster.md#tidb-vcpu-and-ram) for [TiDB Dedicated](/tidb-cloud/select-cluster-tier.md#tidb-dedicated) clusters hosted on Google Cloud: `8 vCPU, 16 GiB`.
-
-    On the Data App details page, now you can link clusters to your Data App and specify the role for each API key. The role controls whether the API key can read or write data to the linked clusters and can be set to `ReadOnly` or `ReadAndWrite`. This feature provides cluster-level and permission-level access control for Data Apps, giving you more flexibility to control the access scope according to your business needs.
-
-    For more information, see [Manage linked clusters](/tidb-cloud/data-service-manage-data-app.md#manage-linked-data-sources) and [Manage API keys](/tidb-cloud/data-service-api-key.md).
-
-## March 28, 2023
+## May 14, 2024

 **General changes**

-- Add 2 RCUs, 4 RCUs, and 8 RCUs specifications for [changefeeds](/tidb-cloud/changefeed-overview.md), and support choosing your desired specification when you [create a changefeed](/tidb-cloud/changefeed-overview.md#create-a-changefeed).
-
-    Using these new specifications, the data replication costs can be reduced by up to 87.5% compared to scenarios where 16 RCUs were previously required.
+- Expand the selection of time zones in the [**Time Zone**](/tidb-cloud/manage-user-access.md#set-the-time-zone-for-your-organization) section to better accommodate customers from diverse regions.
-
-- Support scaling up or down specifications for [changefeeds](/tidb-cloud/changefeed-overview.md) created after March 28, 2023.
+
+- Support [creating a VPC peering](/tidb-cloud/set-up-vpc-peering-connections.md) when your VPC is in a different region from the VPC of TiDB Cloud.
-
-    You can improve replication performance by choosing a higher specification or reduce replication costs by choosing a lower specification.
+- [Data Service (beta)](https://tidbcloud.com/console/data-service) supports path parameters alongside query parameters.
-
-    For more information, see [Scale a changefeed](/tidb-cloud/changefeed-overview.md#scale-a-changefeed).
+
+    This feature enhances resource identification with structured URLs and improves user experience, search engine optimization (SEO), and client integration, offering developers more flexibility and better alignment with industry standards.
-
-- Support replicating incremental data in real time from a [Dedicated Tier](/tidb-cloud/select-cluster-tier.md#tidb-dedicated) cluster in AWS to a [Serverless Tier](/tidb-cloud/select-cluster-tier.md#tidb-serverless) cluster in the same project and same region.
+
+    For more information, see [Basic properties](/tidb-cloud/data-service-manage-endpoint.md#basic-properties).
-
-    For more information, see [Sink to TiDB Cloud](/tidb-cloud/changefeed-sink-to-tidb-cloud.md).
+
+## April 16, 2024
-
-- Support two new GCP regions for the [Data Migration](/tidb-cloud/migrate-from-mysql-using-data-migration.md) feature of [Dedicated Tier](/tidb-cloud/select-cluster-tier.md#tidb-dedicated) clusters: `Singapore (asia-southeast1)` and `Oregon (us-west1)`.
-
-    With these new regions, you have more options for migrating your data to TiDB Cloud. If your upstream data is stored in or near these regions, you can now take advantage of faster and more reliable data migration from GCP to TiDB Cloud.
-
-    For more information, see [Migrate MySQL-compatible databases to TiDB Cloud using Data Migration](/tidb-cloud/migrate-from-mysql-using-data-migration.md).
+
+**CLI changes**

-**Console changes**
+
+- Introduce [TiDB Cloud CLI 1.0.0-beta.1](https://github.com/tidbcloud/tidbcloud-cli), built upon the new [TiDB Cloud API](/tidb-cloud/api-overview.md).
+    The new CLI brings the following new features:
-
-- Release a new native web infrastructure for the [Slow Query](/tidb-cloud/tune-performance.md#slow-query) page of [Serverless Tier](/tidb-cloud/select-cluster-tier.md#tidb-serverless) clusters.
+
+    - [Export data from TiDB Serverless clusters](/tidb-cloud/serverless-export.md)
+    - [Import data from local storage into TiDB Serverless clusters](/tidb-cloud/ticloud-import-start.md)
+    - [Authenticate via OAuth](/tidb-cloud/ticloud-auth-login.md)
+    - [Ask questions via TiDB Bot](/tidb-cloud/ticloud-ai.md)
-
-    With this new infrastructure, you can easily navigate through the [Slow Query](/tidb-cloud/tune-performance.md#slow-query) page and access the necessary information in a more intuitive and efficient manner. The new infrastructure also resolves many problems on UX, making the SQL diagnosis process more user-friendly.
+
+    Before upgrading your TiDB Cloud CLI, note that this new CLI is incompatible with previous versions. For example, `ticloud cluster` in CLI commands is now updated to `ticloud serverless`. For more information, see [TiDB Cloud CLI reference](/tidb-cloud/cli-reference.md).
-
-## March 21, 2023
+## April 9, 2024

 **General changes**

-- Introduce [Data Service (beta)](https://tidbcloud.com/console/data-service) for [Serverless Tier](/tidb-cloud/select-cluster-tier.md#tidb-serverless) clusters, which enables you to access data via an HTTPS request using a custom API endpoint.
-
-    With Data Service, you can seamlessly integrate TiDB Cloud with any application or service that is compatible with HTTPS. The following are some common scenarios:
-
-    - Access the database of your TiDB cluster directly from a mobile or web application.
-    - Use serverless edge functions to call endpoints and avoid scalability issues caused by database connection pooling.
-    - Integrate TiDB Cloud with data visualization projects by using Data Service as a data source.
-    - Connect to your database from an environment that the MySQL interface does not support.
-
-    In addition, TiDB Cloud provides the [Chat2Query API](/tidb-cloud/use-chat2query-api.md), a RESTful interface that allows you to generate and execute SQL statements using AI.
-
-    To access Data Service, navigate to the [**Data Service**](https://tidbcloud.com/console/data-service) page in the left navigation pane. For more information, see the following documentation:
-
-    - [Data Service Overview](/tidb-cloud/data-service-overview.md)
-    - [Get Started with Data Service](/tidb-cloud/data-service-get-started.md)
-    - [Get Started with Chat2Query API](/tidb-cloud/use-chat2query-api.md)
-
-- Support decreasing the size of TiDB, TiKV, and TiFlash nodes to scale in a [Dedicated Tier](/tidb-cloud/select-cluster-tier.md#tidb-dedicated) cluster that is hosted on AWS and created after December 31, 2022.
-
-    You can decrease the node size [via the TiDB Cloud console](/tidb-cloud/scale-tidb-cluster.md#change-vcpu-and-ram) or [via the TiDB Cloud API (beta)](https://docs.pingcap.com/tidbcloud/api/v1beta#tag/Cluster/operation/UpdateCluster).
+- Provide a new [TiDB node size](/tidb-cloud/size-your-cluster.md#tidb-vcpu-and-ram) for [TiDB Dedicated](/tidb-cloud/select-cluster-tier.md#tidb-dedicated) clusters hosted on AWS: `8 vCPU, 32 GiB`.
-
-- Support a new GCP region for the [Data Migration](/tidb-cloud/migrate-from-mysql-using-data-migration.md) feature of [Dedicated Tier](/tidb-cloud/select-cluster-tier.md#tidb-dedicated) clusters: `Tokyo (asia-northeast1)`.
-
-    The feature can help you migrate data from MySQL-compatible databases in Google Cloud Platform (GCP) to your TiDB cluster easily and efficiently.
-
-    For more information, see [Migrate MySQL-compatible databases to TiDB Cloud using Data Migration](/tidb-cloud/migrate-from-mysql-using-data-migration.md).
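Data Service endpoints such as those discussed in these notes are called over plain HTTPS. A sketch of composing an endpoint URL follows, using the regional-domain pattern mentioned earlier in these notes; the region, base path, and app identifier are made-up placeholders, not real values.

```python
# Sketch: composing the URL of a hypothetical Data Service endpoint.
# The host follows the regional-domain pattern described in these notes
# (<region>.data.tidbcloud.com); REGION and BASE_PATH are placeholders.

REGION = "us-east-1"
BASE_PATH = "/api/v1beta/app/dataapp-example/endpoint"

def endpoint_url(path: str, region: str = REGION) -> str:
    """Build the HTTPS URL for an endpoint path under a hypothetical app."""
    return f"https://{region}.data.tidbcloud.com{BASE_PATH}{path}"

print(endpoint_url("/orders"))
```

In practice such a URL would be called with the Data App's API key; the exact base path for a real app is shown in the Data Service console.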
-
-**Console changes**
-
-- Introduce the **Events** page for [Dedicated Tier](/tidb-cloud/select-cluster-tier.md#tidb-dedicated) clusters, which provides the records of main changes to your cluster.
-
-    On this page, you can view the event history for the last 7 days and track important details such as the trigger time and the user who initiated an action. For example, you can view events such as when a cluster was paused or who modified the cluster size.
-
-    For more information, see [TiDB Cloud cluster events](/tidb-cloud/tidb-cloud-events.md).
-
-- Add the **Database Status** tab to the **Monitoring** page for [Serverless Tier](/tidb-cloud/select-cluster-tier.md#tidb-serverless) clusters, which displays the following database-level metrics:
-
-    - QPS Per DB
-    - Average Query Duration Per DB
-    - Failed Queries Per DB
-
-    With these metrics, you can monitor the performance of individual databases, make data-driven decisions, and take actions to improve the performance of your applications.
-
-    For more information, see [Monitoring metrics for Serverless Tier clusters](/tidb-cloud/built-in-monitoring.md).
-
-## March 14, 2023
+## April 2, 2024

 **General changes**

-- Upgrade the default TiDB version of new [Dedicated Tier](/tidb-cloud/select-cluster-tier.md#tidb-dedicated) clusters from [v6.5.0](https://docs.pingcap.com/tidb/v6.5/release-6.5.0) to [v6.5.1](https://docs.pingcap.com/tidb/v6.5/release-6.5.1).
+- Introduce two service plans for [TiDB Serverless](/tidb-cloud/select-cluster-tier.md#tidb-serverless) clusters: **Free** and **Scalable**.
-
-- Support modifying column names of the target table to be created by TiDB Cloud when uploading a local CSV file with a header row.
+
+    To meet different user requirements, TiDB Serverless offers the Free and Scalable service plans. Whether you are just getting started or scaling to meet the increasing application demands, these plans provide the flexibility and capabilities you need.
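The usage quota that these notes attach to free TiDB Serverless clusters (5 GiB of row storage and 50 million Request Units per month) can be pictured as a simple gate. The denial-of-new-connections behavior follows the throttling description in these notes; the check itself is an illustrative simplification, not the actual admission logic.

```python
# Sketch of the usage-quota gate described in these notes: once a TiDB
# Serverless cluster reaches its quota, new connection attempts are denied
# while existing sessions keep running. The quota figures come from the
# release notes; the boolean check is an illustrative simplification.

FREE_ROW_STORAGE_GIB = 5
FREE_RUS_PER_MONTH = 50_000_000

def allow_new_connection(used_storage_gib: float, used_rus: int) -> bool:
    """Return True while the cluster is still within its free quota."""
    return used_storage_gib < FREE_ROW_STORAGE_GIB and used_rus < FREE_RUS_PER_MONTH

print(allow_new_connection(1.2, 10_000_000))  # within quota
print(allow_new_connection(6.0, 10_000_000))  # storage quota exceeded
```

Raising the spending limit, or the monthly usage reset, would flip the gate back open.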
-
-    When importing a local CSV file with a header row to a [Serverless Tier](/tidb-cloud/select-cluster-tier.md#tidb-serverless) cluster, if you need TiDB Cloud to create the target table and the column names in the header row do not follow the TiDB Cloud column naming conventions, you will see a warning icon next to the corresponding column name. To resolve the warning, you can move the cursor over the icon and follow the message to edit the existing column names or enter new column names.
+
+    For more information, see [Cluster plans](/tidb-cloud/select-cluster-tier.md#cluster-plans).
-
-    For information about column naming conventions, see [Import local files](/tidb-cloud/tidb-cloud-import-local-files.md#import-local-files).
+
+- Modify the throttling behavior for TiDB Serverless clusters upon reaching their usage quota. Now, once a cluster reaches its usage quota, it immediately denies any new connection attempts, thereby ensuring uninterrupted service for existing operations.
-
-## March 7, 2023
+
+    For more information, see [Usage quota](/tidb-cloud/serverless-limitations.md#usage-quota).
-
-**General changes**
-
-- Upgrade the default TiDB version of all [Serverless Tier](/tidb-cloud/select-cluster-tier.md#tidb-serverless) clusters from [v6.4.0](https://docs.pingcap.com/tidb/v6.4/release-6.4.0) to [v6.6.0](https://docs.pingcap.com/tidb/v6.6/release-6.6.0).
-
-## February 28, 2023
+## March 5, 2024

 **General changes**

-- Add the [SQL Diagnosis](/tidb-cloud/tune-performance.md) feature for [Serverless Tier](/tidb-cloud/select-cluster-tier.md#tidb-serverless) clusters.
-
-    With SQL Diagnosis, you can gain deep insights into SQL-related runtime status, which makes the SQL performance tuning more efficient. Currently, the SQL Diagnosis feature for Serverless Tier only provides slow query data.
-
-    To use SQL Diagnosis, click **SQL Diagnosis** on the left navigation bar of your Serverless Tier cluster page.
+- Upgrade the default TiDB version of new [TiDB Dedicated](/tidb-cloud/select-cluster-tier.md#tidb-dedicated) clusters from [v7.5.0](https://docs.pingcap.com/tidb/v7.5/release-7.5.0) to [v7.5.1](https://docs.pingcap.com/tidb/v7.5/release-7.5.1).

 **Console changes**

-- Optimize the left navigation.
-
-    You can navigate pages more efficiently, for example:
-
-    - You can hover the mouse in the upper-left corner to quickly switch between clusters or projects.
-    - You can switch between the **Clusters** page and the **Admin** page.
-
-**API changes**
-
-- Release several TiDB Cloud API endpoints for data import:
-
-    - List all import tasks
-    - Get an import task
-    - Create an import task
-    - Update an import task
-    - Upload a local file for an import task
-    - Preview data before starting an import task
-    - Get the role information for import tasks
+- Introduce the **Cost Explorer** tab on the [**Billing**](https://tidbcloud.com/console/org-settings/billing/payments) page, which provides an intuitive interface for analyzing and customizing cost reports for your organization over time.
-
-    For more information, refer to the [API documentation](https://docs.pingcap.com/tidbcloud/api/v1beta#tag/Import).
+
+    To use this feature, navigate to the **Billing** page of your organization and click the **Cost Explorer** tab.
-
-## February 22, 2023
+
+    For more information, see [Cost Explorer](/tidb-cloud/tidb-cloud-billing.md#cost-explorer).
-
-**General changes**
-
-- Support using the [console audit logging](/tidb-cloud/tidb-cloud-console-auditing.md) feature to track various activities performed by members within your organization in the [TiDB Cloud console](https://tidbcloud.com/).
-
-    The console audit logging feature is only visible to users with the `Owner` or `Audit Admin` role and is disabled by default. To enable it, click **Organization** > **Console Audit Logging** in the upper-right corner of the [TiDB Cloud console](https://tidbcloud.com/).
+- [TiDB Dedicated](/tidb-cloud/select-cluster-tier.md#tidb-dedicated) displays a **limit** label for [node-level resource metrics](/tidb-cloud/built-in-monitoring.md#server).
-
-    By analyzing console audit logs, you can identify suspicious operations performed within your organization, thereby improving the security of your organization's resources and data.
-
-    For more information, see [Console audit logging](/tidb-cloud/tidb-cloud-console-auditing.md).
-
-**CLI changes**
+
+    The **limit** label shows the maximum usage of resources such as CPU, memory, and storage for each component in a cluster. This enhancement simplifies the process of monitoring the resource usage rate of your cluster.
-
-- Add a new command [`ticloud cluster connect-info`](/tidb-cloud/ticloud-cluster-connect-info.md) for [TiDB Cloud CLI](/tidb-cloud/cli-reference.md).
+
+    To access these metric limits, navigate to the **Monitoring** page of your cluster, and then check the **Server** category under the **Metrics** tab.
-
-    `ticloud cluster connect-info` is a command that allows you to get the connection string of a cluster. To use this command, [update `ticloud`](/tidb-cloud/ticloud-update.md) to v0.3.2 or a later version.
+
+    For more information, see [Metrics for TiDB Dedicated clusters](/tidb-cloud/built-in-monitoring.md#server).
-
-## February 21, 2023
+## February 21, 2024

 **General changes**

-- Support using the AWS access keys of an IAM user to access your Amazon S3 bucket when importing data to TiDB Cloud.
-
-    This method is simpler than using Role ARN. For more information, refer to [Configure Amazon S3 access](/tidb-cloud/config-s3-and-gcs-access.md#configure-amazon-s3-access).
-
-- Extend the [monitoring metrics retention period](/tidb-cloud/built-in-monitoring.md#metrics-retention-policy) from 2 days to a longer period:
-
-    - For Dedicated Tier clusters, you can view metrics data for the past 7 days.
-    - For Serverless Tier clusters, you can view metrics data for the past 3 days.
- - By extending the metrics retention period, now you have access to more historical data. This helps you identify trends and patterns of the cluster for better decision-making and faster troubleshooting. - -**Console changes** +- Upgrade the TiDB version of [TiDB Serverless](/tidb-cloud/select-cluster-tier.md#tidb-serverless) clusters from [v6.6.0](https://docs.pingcap.com/tidb/v6.6/release-6.6.0) to [v7.1.3](https://docs.pingcap.com/tidb/v7.1/release-7.1.3). -- Release a new native web infrastructure on the Monitoring page of [Serverless Tier](/tidb-cloud/select-cluster-tier.md#tidb-serverless) clusters. - - With the new infrastructure, you can easily navigate through the Monitoring page and access the necessary information in a more intuitive and efficient manner. The new infrastructure also resolves many problems on UX, making the monitoring process a lot more user-friendly. - -## February 17, 2023 - -**CLI changes** - -- Add a new command [`ticloud connect`](/tidb-cloud/ticloud-connect.md) for [TiDB Cloud CLI](/tidb-cloud/cli-reference.md). - - `ticloud connect` is a command that allows you to connect to your TiDB Cloud cluster from your local machine without installing any SQL clients. After connecting to your TiDB Cloud cluster, you can execute SQL statements in the TiDB Cloud CLI. - -## February 14, 2023 +## February 20, 2024 **General changes** -- Support decreasing the number of TiKV and TiFlash nodes to scale in a TiDB [Dedicated Tier](/tidb-cloud/select-cluster-tier.md#tidb-dedicated) cluster. - - You can decrease the node number [via the TiDB Cloud console](/tidb-cloud/scale-tidb-cluster.md#change-node-number) or [via the TiDB Cloud API (beta)](https://docs.pingcap.com/tidbcloud/api/v1beta#tag/Cluster/operation/UpdateCluster). - -**Console changes** - -- Introduce the **Monitoring** page for [Serverless Tier](/tidb-cloud/select-cluster-tier.md#tidb-serverless) clusters. +- Support creating more TiDB Cloud nodes on Google Cloud. 
- The **Monitoring** page provides a range of metrics and data, such as the number of SQL statements executed per second, the average duration of queries, and the number of failed queries, which helps you better understand the overall performance of SQL statements in your Serverless Tier cluster. + - By [configuring a regional CIDR size](/tidb-cloud/set-up-vpc-peering-connections.md#prerequisite-set-a-cidr-for-a-region) of `/19` for Google Cloud, you can now create up to 124 TiDB Cloud nodes within any region of a project. + - If you want to create more than 124 nodes in any region of a project, you can contact [TiDB Cloud Support](/tidb-cloud/tidb-cloud-support.md) for assistance in customizing an IP range size ranging from `/16` to `/18`. - For more information, see [TiDB Cloud built-in monitoring](/tidb-cloud/built-in-monitoring.md). - -## February 2, 2023 - -**CLI changes** - -- Introduce the TiDB Cloud CLI client [`ticloud`](/tidb-cloud/cli-reference.md). - - Using `ticloud`, you can easily manage your TiDB Cloud resources from a terminal or other automatic workflows with a few lines of commands. Especially for GitHub Actions, we have provided [`setup-tidbcloud-cli`](https://github.com/marketplace/actions/set-up-tidbcloud-cli) for you to easily set up `ticloud`. - - For more information, see [TiDB Cloud CLI Quick Start](/tidb-cloud/get-started-with-cli.md) and [TiDB Cloud CLI Reference](/tidb-cloud/cli-reference.md). - -## January 18, 2023 - -**General changes** - -* Support [signing up](https://tidbcloud.com/free-trial) TiDB Cloud with a Microsoft account. - -## January 17, 2023 +## January 23, 2024 **General changes** -- Upgrade the default TiDB version of new [Dedicated Tier](/tidb-cloud/select-cluster-tier.md#tidb-dedicated) clusters from [v6.1.3](https://docs.pingcap.com/tidb/stable/release-6.1.3) to [v6.5.0](https://docs.pingcap.com/tidb/stable/release-6.5.0). 
- -- For new sign-up users, TiDB Cloud will automatically create a free [Serverless Tier](/tidb-cloud/select-cluster-tier.md#tidb-serverless) cluster so that you can quickly start a data exploration journey with TiDB Cloud. +- Add 32 vCPU as a node size option for TiDB, TiKV, and TiFlash. -- Support a new AWS region for [Dedicated Tier](/tidb-cloud/select-cluster-tier.md#tidb-dedicated) clusters: `Seoul (ap-northeast-2)`. + For each `32 vCPU, 128 GiB` TiKV node, the node storage ranges from 200 GiB to 6144 GiB. - The following features are enabled for this region: + It is recommended to use such nodes in the following scenarios: - - [Migrate MySQL-compatible databases to TiDB Cloud using Data Migration](/tidb-cloud/migrate-from-mysql-using-data-migration.md) - - [Stream data from TiDB Cloud to other data services using changefeed](/tidb-cloud/changefeed-overview.md) - - [Back up and restore TiDB cluster data](/tidb-cloud/backup-and-restore.md) + - High-workload production environments + - Workloads that require extremely high performance -## January 10, 2023 +## January 16, 2024 **General changes** -- Optimize the feature of importing data from local CSV files to TiDB to improve the user experience for [Serverless Tier](/tidb-cloud/select-cluster-tier.md#tidb-serverless) clusters. +- Enhance CIDR configuration for projects. - - To upload a CSV file, now you can simply drag and drop it to the upload area on the **Import** page. - - When creating an import task, if your target database or table does not exist, you can enter a name to let TiDB Cloud create it for you automatically. For the target table to be created, you can specify a primary key or select multiple fields to form a composite primary key. - - After the import is completed, you can explore your data with [AI-powered Chat2Query](/tidb-cloud/explore-data-with-chat2query.md) by clicking **Explore your data by Chat2Query** or clicking the target table name in the task list.
+ - You can directly set a region-level CIDR for each project. + - You can choose your CIDR configurations from a broader range of CIDR values. - For more information, see [Import local files to TiDB Cloud](/tidb-cloud/tidb-cloud-import-local-files.md). + Note: The previous global-level CIDR settings for projects are retired, but all existing regional CIDRs in the active state remain unaffected. There will be no impact on the network of existing clusters. -**Console changes** - -- Add the **Get Support** option for each cluster to simplify the process of requesting support for a specific cluster. - - You can request support for a cluster in either of the following ways: + For more information, see [Set a CIDR for a region](/tidb-cloud/set-up-vpc-peering-connections.md#prerequisite-set-a-cidr-for-a-region). - - On the [**Clusters**](https://tidbcloud.com/console/clusters) page of your project, click **...** in the row of your cluster and select **Get Support**. - On your cluster overview page, click **...** in the upper-right corner and select **Get Support**. +- TiDB Serverless users can now disable public endpoints for their clusters. -## January 5, 2023 + For more information, see [Disable a Public Endpoint](/tidb-cloud/connect-via-standard-connection-serverless.md#disable-a-public-endpoint). -**Console changes** - -- Rename SQL Editor (beta) to Chat2Query (beta) for [Serverless Tier](/tidb-cloud/select-cluster-tier.md#tidb-serverless) clusters and support generating SQL queries using AI. +- [Data Service (beta)](https://tidbcloud.com/console/data-service) supports configuring a custom domain to access endpoints in a Data App. - In Chat2Query, you can either let AI generate SQL queries automatically or write SQL queries manually, and run SQL queries against databases without a terminal. + By default, TiDB Cloud Data Service provides a domain `.data.tidbcloud.com` to access each Data App's endpoints.
For enhanced personalization and flexibility, you can now configure a custom domain for your Data App instead of using the default domain. This feature enables you to use branded URLs for your database services and enhances security. - To access Chat2Query, go to the [**Clusters**](https://tidbcloud.com/console/clusters) page of your project, click your cluster name, and then click **Chat2Query** in the left navigation pane. + For more information, see [Custom domain in Data Service](/tidb-cloud/data-service-custom-domain.md). -## January 4, 2023 +## January 3, 2024 **General changes** -- Support scaling up TiDB, TiKV, and TiFlash nodes by increasing the **Node Size(vCPU + RAM)** for TiDB Dedicated clusters hosted on AWS and created after December 31, 2022. - - You can increase the node size [using the TiDB Cloud console](/tidb-cloud/scale-tidb-cluster.md#change-vcpu-and-ram) or [using the TiDB Cloud API (beta)](https://docs.pingcap.com/tidbcloud/api/v1beta#tag/Cluster/operation/UpdateCluster). +- Support [Organization SSO](https://tidbcloud.com/console/preferences/authentication) to streamline enterprise authentication processes. -- Extend the metrics retention period on the [**Monitoring**](/tidb-cloud/built-in-monitoring.md) page to two days. + With this feature, you can seamlessly integrate TiDB Cloud with any identity provider (IdP) using [Security Assertion Markup Language (SAML)](https://en.wikipedia.org/wiki/Security_Assertion_Markup_Language) or [OpenID Connect (OIDC)](https://openid.net/developers/how-connect-works/). - Now you have access to metrics data of the last two days, giving you more flexibility and visibility into your cluster performance and trends. + For more information, see [Organization SSO Authentication](/tidb-cloud/tidb-cloud-org-sso-authentication.md). - This improvement comes at no additional cost and can be accessed on the **Diagnosis** tab of the [**Monitoring**](/tidb-cloud/built-in-monitoring.md) page for your cluster. 
This will help you identify and troubleshoot performance issues and monitor the overall health of your cluster more effectively. +- Upgrade the default TiDB version of new [TiDB Dedicated](/tidb-cloud/select-cluster-tier.md#tidb-dedicated) clusters from [v7.1.1](https://docs.pingcap.com/tidb/v7.1/release-7.1.1) to [v7.5.0](https://docs.pingcap.com/tidb/v7.5/release-7.5.0). -- Support customizing Grafana dashboard JSON for Prometheus integration. - - If you have [integrated TiDB Cloud with Prometheus](/tidb-cloud/monitor-prometheus-and-grafana-integration.md), you can now import a pre-built Grafana dashboard to monitor TiDB Cloud clusters and customize the dashboard to your needs. This feature enables easy and fast monitoring of your TiDB Cloud clusters and helps you identify any performance issues quickly. - - For more information, see [Use Grafana GUI dashboards to visualize the metrics](/tidb-cloud/monitor-prometheus-and-grafana-integration.md#step-3-use-grafana-gui-dashboards-to-visualize-the-metrics). - -- Upgrade the default TiDB version of all [Serverless Tier](/tidb-cloud/select-cluster-tier.md#tidb-serverless) clusters from [v6.3.0](https://docs.pingcap.com/tidb/v6.3/release-6.3.0) to [v6.4.0](https://docs.pingcap.com/tidb/v6.4/release-6.4.0). The cold start issue after upgrading the default TiDB version of Serverless Tier clusters to v6.4.0 has been resolved. - -**Console changes** +- The dual region backup feature for [TiDB Dedicated](/tidb-cloud/select-cluster-tier.md#tidb-dedicated) is now in General Availability (GA). -- Simplify the display of the [**Clusters**](https://tidbcloud.com/console/clusters) page and the cluster overview page. + By using this feature, you can replicate backups across geographic regions within AWS or Google Cloud. This feature provides an additional layer of data protection and disaster recovery capabilities. 
- - You can click the cluster name on the [**Clusters**](https://tidbcloud.com/console/clusters) page to enter the cluster overview page and start operating the cluster. - - Remove the **Connection** and **Import** panes from the cluster overview page. You can click **Connect** in the upper-right corner to get the connection information and click **Import** in the left navigation pane to import data. + For more information, see [Dual region backup](/tidb-cloud/backup-and-restore.md#turn-on-dual-region-backup). diff --git a/tidb-cloud/tidb-cloud-sql-tuning-overview.md b/tidb-cloud/tidb-cloud-sql-tuning-overview.md index 1f7d130561cb8..0058552892dfc 100644 --- a/tidb-cloud/tidb-cloud-sql-tuning-overview.md +++ b/tidb-cloud/tidb-cloud-sql-tuning-overview.md @@ -27,13 +27,21 @@ The TiDB Cloud console provides a **[SQL Statement](/tidb-cloud/tune-performance Note that on this sub-tab, SQL queries with the same structure (even if the query parameters do not match) are grouped into the same SQL statement. For example, `SELECT * FROM employee WHERE id IN (1, 2, 3)` and `select * from EMPLOYEE where ID in (4, 5)` are both part of the same SQL statement `select * from employee where id in (...)`. -You can view some key information in **Statement**. +You can view some key information in **SQL Statement**. + +- **SQL Template**: including SQL digest, SQL template ID, the time range currently viewed, the number of execution plans, and the database where the execution takes place. + + ![Details0](/media/dashboard/dashboard-statement-detail0.png) -- SQL statement overview: including SQL digest, SQL template ID, the time range currently viewed, the number of execution plans, and the database where the execution takes place. - Execution plan list: if a SQL statement has more than one execution plan, the list is displayed. You can select different execution plans and the details of the selected execution plan are displayed at the bottom of the list. 
If there is only one execution plan, the list will not be displayed. -- Execution plan details: shows the details of the selected execution plan. It collects the execution plans of such SQL type and the corresponding execution time from several perspectives to help you get more information. See [Execution plan in details](https://docs.pingcap.com/tidb/stable/dashboard-statement-details#statement-execution-details-of-tidb-dashboard) (area 3 in the image below). -![Details](/media/dashboard/dashboard-statement-detail.png) + ![Details1](/media/dashboard/dashboard-statement-detail1.png) + +- Execution plan details: shows the details of the selected execution plan. It collects the execution plans of each SQL type and the corresponding execution time from several perspectives to help you get more information. See [Execution plans](https://docs.pingcap.com/tidb/stable/dashboard-statement-details#execution-plans). + + ![Details2](/media/dashboard/dashboard-statement-detail2.png) + +- Related slow query In addition to the information in the **Statement** dashboard, there are also some SQL best practices for TiDB Cloud as described in the following sections. diff --git a/tidb-cloud/tidb-cloud-tls-connect-to-dedicated.md b/tidb-cloud/tidb-cloud-tls-connect-to-dedicated.md index 2366f2909e454..93eebf02a3cb4 100644 --- a/tidb-cloud/tidb-cloud-tls-connect-to-dedicated.md +++ b/tidb-cloud/tidb-cloud-tls-connect-to-dedicated.md @@ -27,9 +27,9 @@ In the [TiDB Cloud console](https://tidbcloud.com/), you can get examples of dif 2. Click **Connect** in the upper-right corner. A dialog is displayed. 3. On the **Standard Connection** tab of this dialog, follow the three steps to set up the TLS connection. - - Step 1: Create traffic filter - - Step 2: Download CA cert - - Step 3: Connect with an SQL client + - Step 1:Create traffic filter + - Step 2:Download CA cert + - Step 3:Connect with an SQL client 4. 
Under **Step 1: Create traffic filter** in the dialog, configure the IP addresses that are allowed to access your cluster. For more information, see [Configure an IP access list in standard connection](/tidb-cloud/configure-ip-access-list.md#configure-an-ip-access-list-in-standard-connection). @@ -53,7 +53,7 @@ MySQL CLI client attempts to establish a TLS connection by default. When you con mysql --connect-timeout 15 --ssl-mode=VERIFY_IDENTITY --ssl-ca=ca.pem --tls-version="TLSv1.2" -u root -h tidb.eqlfbdgthh8.clusters.staging.tidb-cloud.com -P 4000 -D test -p ``` -Parameter description: +Parameter description: - With `--ssl-mode=VERIFY_IDENTITY`, MySQL CLI client forces to enable TLS and validate TiDB Dedicated clusters. - Use `--ssl-ca=` to specify your local path of the downloaded TiDB cluster `ca.pem`. @@ -69,7 +69,7 @@ Parameter description: mycli --ssl-ca=ca.pem --ssl-verify-server-cert -u root -h tidb.eqlfbdgthh8.clusters.staging.tidb-cloud.com -P 4000 -D test ``` -Parameter descriptions: +Parameter descriptions: - Use `--ssl-ca=` to specify your local path of the downloaded TiDB cluster `ca.pem`. - With `--ssl-verify-server-cert` to validate TiDB Dedicated clusters. @@ -115,7 +115,7 @@ class Main { } ``` -Parameter description: +Parameter description: - Set `sslMode=VERIFY_IDENTITY` to enable TLS and validate TiDB Dedicated clusters. - Set `enabledTLSProtocols=TLSv1.2` to restrict the versions of the TLS protocol. If you want to use TLS 1.3, you can set the version to `TLSv1.3`. @@ -146,7 +146,7 @@ with connection: print(m[0]) ``` -Parameter descriptions: +Parameter descriptions: - Set `ssl_mode="VERIFY_IDENTITY"` to enable TLS and validate TiDB Dedicated clusters. - Use `ssl={"ca": ""}` to specify your local path of the downloaded TiDB cluster `ca.pem`. @@ -218,7 +218,7 @@ func main() { } ``` -Parameter descriptions: +Parameter descriptions: - Register `tls.Config` in the TLS connection configuration to enable TLS and validate TiDB Dedicated clusters. 
- Set `MinVersion: tls.VersionTLS12` to restrict the versions of TLS protocol. @@ -277,7 +277,7 @@ connection.connect(function(err) { }); ``` -Parameter descriptions: +Parameter descriptions: - Set `ssl: {minVersion: 'TLSv1.2'}` to restrict the versions of the TLS protocol. If you want to use TLS 1.3, you can set the version to `TLSv1.3`. - Set `ssl: {ca: fs.readFileSync('')}` to read your local CA path of the downloaded TiDB cluster `ca.pem`. diff --git a/tidb-cloud/use-chat2query-api.md b/tidb-cloud/use-chat2query-api.md index a8b2a8dd6fa52..9938a008c30fb 100644 --- a/tidb-cloud/use-chat2query-api.md +++ b/tidb-cloud/use-chat2query-api.md @@ -13,7 +13,11 @@ Chat2Query API can only be accessed through HTTPS, ensuring that all data transm > > Chat2Query API is available for [TiDB Serverless](/tidb-cloud/select-cluster-tier.md#tidb-serverless) clusters. To use the Chat2Query API on [TiDB Dedicated](/tidb-cloud/select-cluster-tier.md#tidb-dedicated) clusters, contact [TiDB Cloud support](/tidb-cloud/tidb-cloud-support.md). -## Step 1. Create a Chat2Query Data App +## Before you begin + +Before calling Chat2Query endpoints, you need to create a Chat2Query Data App and create an API key for the Data App. + +### Create a Chat2Query Data App To create a Data App for your project, perform the following steps: @@ -21,7 +25,7 @@ To create a Data App for your project, perform the following steps: > **Tip:** > - > If you are on the **Chat2Query** page of your cluster, you can also open the data app creation dialog by clicking **...** in the upper-right corner, choosing **Access Chat2Query via API**, and clicking **New Chat2Query Data App**. + > If you are on the **SQL Editor** page of your cluster, you can also open the data app creation dialog by clicking **...** in the upper-right corner, choosing **Access Chat2Query via API**, and clicking **New Chat2Query Data App**. 2. 
In the dialog, define a name for your Data App, choose the desired clusters as the data sources, and select **Chat2Query Data App** as the **Data App** type. Optionally, you can also write a description for the App. @@ -29,7 +33,7 @@ To create a Data App for your project, perform the following steps: The newly created Chat2Query Data App is displayed in the left pane. Under this Data App, you can find a list of Chat2Query endpoints. -## Step 2. Create an API key +### Create an API key Before calling an endpoint, you need to create an API key for the Chat2Query Data App, which is used by the endpoint to access data in your TiDB Cloud clusters. @@ -49,13 +53,15 @@ To create an API key, perform the following steps: - `Chat2Query SQL ReadOnly`: only allows the API key to generate SQL statements based on provided instructions and execute `SELECT` SQL statements. - `Chat2Query SQL ReadWrite`: allows the API key to generate SQL statements based on provided instructions and execute any SQL statements. -4. Click **Next**. The public key and private key are displayed. +4. By default, an API key never expires. If you prefer to set an expiration time for the key, click **Expires in**, select a time unit (`Minutes`, `Days`, or `Months`), and then fill in a desired number for the time unit. + +5. Click **Next**. The public key and private key are displayed. Make sure that you have copied and saved the private key in a secure location. After leaving this page, you will not be able to get the full private key again. -5. Click **Done**. +6. Click **Done**. -## Step 3. 
Call Chat2Query endpoints +## Call Chat2Query endpoints > **Note:** > @@ -63,12 +69,13 @@ To create an API key, perform the following steps: In each Chat2Query Data App, you can find the following endpoints: -- Chat2Query v1 endpoint: `/v1/chat2data` +- Chat2Query v3 endpoints: the endpoints whose names start with `/v3`, such as `/v3/dataSummaries` and `/v3/chat2data` (recommended) - Chat2Query v2 endpoints: the endpoints whose names starting with `/v2`, such as `/v2/dataSummaries` and `/v2/chat2data` +- Chat2Query v1 endpoint: `/v1/chat2data` (deprecated) > **Tip:** > -> Compared with `/v1/chat2data`, `/v2/chat2data` requires you to analyze your database first by calling `/v2/dataSummaries`, so the results returned by `/v2/chat2data` are generally more accurate. +> Compared with `/v1/chat2data`, `/v3/chat2data` and `/v2/chat2data` require you to analyze your database first by calling `/v3/dataSummaries` or `/v2/dataSummaries`. Consequently, the results returned by `/v3/chat2data` and `/v2/chat2data` are generally more accurate. ### Get the code example of an endpoint @@ -84,34 +91,60 @@ TiDB Cloud provides code examples to help you quickly call Chat2Query endpoints. > **Note:** > - > For `/v2/chat2data` and `/v2/jobs/{job_id}`, you only need to select the authentication method. + > For some endpoints, such as `/v2/jobs/{job_id}`, you only need to select the authentication method. 4. To call the endpoint, you can paste the example in your application, replace the parameters in the example with your own (such as replacing the `${PUBLIC_KEY}` and `${PRIVATE_KEY}` placeholders with your API key), and then run it. -### Call Chat2Query v2 endpoints - -TiDB Cloud Data Service provides the following Chat2Query v2 endpoints: - -| Method | Endpoint| Description | -| ---- | ---- |---- | -| POST | `/v2/dataSummaries` | This endpoint generates a data summary for your database schema, table schema, and column schema by using artificial intelligence for analysis.
| -| POST | `/v2/chat2data` | This endpoint enables you to generate and execute SQL statements using artificial intelligence by providing the data summary ID and instructions. | -| GET | `/v2/jobs/{job_id}` | This endpoint enables you to query the status of the data summary generation job. | - -In the subsequent sections, you will learn how to call these endpoints. - -#### 1. Generate a data summary by calling `/v2/dataSummaries` - -Before calling `/v2/chat2data`, let AI analyze the database and generate a data summary first by calling `/v2/dataSummaries`, so `/v2/chat2data` can get a better performance in SQL generation later. - -The following is a code example of calling `/v2/chat2data` to analyze the `sp500insight` database and generate a data summary for the database: +### Call Chat2Query v3 endpoints or v2 endpoints + +TiDB Cloud Data Service provides the following Chat2Query v3 endpoints and v2 endpoints: + +| Method | Endpoint | Description | +| ------ | -------- | ----------- | +| POST | `/v3/dataSummaries` | This endpoint generates a data summary for your database schema, table schema, and column schema by using artificial intelligence for analysis. | +| GET | `/v3/dataSummaries` | This endpoint retrieves all data summaries of your database. | +| GET | `/v3/dataSummaries/{data_summary_id}` | This endpoint retrieves a specific data summary. | +| PUT | `/v3/dataSummaries/{data_summary_id}` | This endpoint updates a specific data summary. | +| PUT | `/v3/dataSummaries/{data_summary_id}/tables/{table_name}` | This endpoint updates the description of a specific table in a specific data summary. | +| PUT | `/v3/dataSummaries/{data_summary_id}/tables/{table_name}/columns` | This endpoint updates the description of columns for a specific table in a specific data summary. | +| POST | `/v3/knowledgeBases` | This endpoint creates a new knowledge base. 
For more information about the usage of knowledge base related endpoints, see [Use knowledge bases](/tidb-cloud/use-chat2query-knowledge.md). | +| GET | `/v3/knowledgeBases` | This endpoint retrieves all knowledge bases. | +| GET | `/v3/knowledgeBases/{knowledge_base_id}` | This endpoint retrieves a specific knowledge base. | +| PUT | `/v3/knowledgeBases/{knowledge_base_id}` | This endpoint updates a specific knowledge base. | +| POST | `/v3/knowledgeBases/{knowledge_base_id}/data` | This endpoint adds data to a specific knowledge base. | +| GET | `/v3/knowledgeBases/{knowledge_base_id}/data` | This endpoint retrieves data from a specific knowledge base. | +| PUT | `/v3/knowledgeBases/{knowledge_base_id}/data/{knowledge_data_id}` | This endpoint updates specific data in a knowledge base. | +| DEL | `/v3/knowledgeBases/{knowledge_base_id}/data/{knowledge_data_id}` | This endpoint deletes specific data from a knowledge base. | +| POST | `/v3/sessions` | This endpoint creates a new session. For more information about the usage of session-related endpoints, see [Start multi-round Chat2Query](/tidb-cloud/use-chat2query-sessions.md). | +| GET | `/v3/sessions` | This endpoint retrieves a list of all sessions. | +| GET | `/v3/sessions/{session_id}` | This endpoint retrieves the details of a specific session. | +| PUT | `/v3/sessions/{session_id}` | This endpoint updates a specific session. | +| PUT | `/v3/sessions/{session_id}/reset` | This endpoint resets a specific session. | +| POST | `/v3/sessions/{session_id}/chat2data` | This endpoint generates and executes SQL statements within a specific session using artificial intelligence. For more information, see [Start multi-round Chat2Query by using sessions](/tidb-cloud/use-chat2query-sessions.md). | +| POST | `/v3/chat2data` | This endpoint enables you to generate and execute SQL statements using artificial intelligence by providing the data summary ID and instructions. 
| +| POST | `/v3/refineSql` | This endpoint refines existing SQL queries using artificial intelligence. | +| POST | `/v3/suggestQuestions` | This endpoint suggests questions based on the provided data summary. | +| POST | `/v2/dataSummaries` | This endpoint generates a data summary for your database schema, table schema, and column schema using artificial intelligence. | +| GET | `/v2/dataSummaries` | This endpoint retrieves all data summaries. | +| POST | `/v2/chat2data` | This endpoint enables you to generate and execute SQL statements using artificial intelligence by providing the data summary ID and instructions. | +| GET | `/v2/jobs/{job_id}` | This endpoint enables you to query the status of a specific data summary generation job. | + +The steps to call `/v3/chat2data` and `/v2/chat2data` are the same. The following sections take `/v3/chat2data` as an example to show how to call it. + +#### 1. Generate a data summary by calling `/v3/dataSummaries` + +Before calling `/v3/chat2data`, let AI analyze the database and generate a data summary first by calling `/v3/dataSummaries`, so that `/v3/chat2data` can achieve better performance in SQL generation later.
+ +The following is a code example of calling `/v3/dataSummaries` to analyze the `sp500insight` database and generate a data summary for the database: ```bash -curl --digest --user ${PUBLIC_KEY}:${PRIVATE_KEY} --request POST 'https://.data.dev.tidbcloud.com/api/v1beta/app/chat2query-/endpoint/v2/dataSummaries'\ +curl --digest --user ${PUBLIC_KEY}:${PRIVATE_KEY} --request POST 'https://.data.tidbcloud.com/api/v1beta/app/chat2query-/endpoint/v3/dataSummaries'\ --header 'content-type: application/json'\ --data-raw '{ - "cluster_id": "10939961583884005252", - "database": "sp500insight" + "cluster_id": "10140100115280519574", + "database": "sp500insight", + "description": "Data summary for SP500 Insight", + "reuse": false }' ``` @@ -119,23 +152,25 @@ In the preceding example, the request body is a JSON object with the following p - `cluster_id`: _string_. A unique identifier of the TiDB cluster. - `database`: _string_. The name of the database. +- `description`: _string_. A description of the data summary. +- `reuse`: _boolean_. Specifies whether to reuse an existing data summary. If you set it to `true`, the API will reuse an existing data summary. If you set it to `false`, the API will generate a new data summary. An example response is as follows: -```json +```js { "code": 200, "msg": "", "result": { - "data_summary_id": 481235, - "job_id": "79c2b3d36c074943ab06a29e45dd5887" + "data_summary_id": 304823, + "job_id": "fb99ef785da640ab87bf69afed60903d" } } ``` #### 2. Check the analysis status by calling `/v2/jobs/{job_id}` -The `/v2/dataSummaries` API is asynchronous. For a database with a large dataset, it might take a few minutes to complete the database analysis and return the full data summary. +The `/v3/dataSummaries` API is asynchronous. For a database with a large dataset, it might take a few minutes to complete the database analysis and return the full data summary. 
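If you prefer calling the endpoint from an application rather than `curl`, the same digest-authenticated request can be sketched in Python with only the standard library. This is an illustrative sketch, not part of the Chat2Query API: the host, App ID, and key values are placeholders you must replace with your Data App's own values, and the helper function names are invented for this example.

```python
import json
import urllib.request

# Placeholders -- replace with your Data App's endpoint host, App ID, and API key pair.
BASE_URL = ("https://<region>.data.tidbcloud.com"
            "/api/v1beta/app/chat2query-<app-id>/endpoint")


def build_data_summary_request(cluster_id: str, database: str,
                               description: str, reuse: bool = False) -> dict:
    """Assemble the request body fields documented for POST /v3/dataSummaries."""
    return {
        "cluster_id": cluster_id,
        "database": database,
        "description": description,
        "reuse": reuse,
    }


def create_data_summary(public_key: str, private_key: str, body: dict) -> dict:
    """POST the body with HTTP Digest authentication and decode the JSON reply."""
    password_mgr = urllib.request.HTTPPasswordMgrWithDefaultRealm()
    password_mgr.add_password(None, BASE_URL, public_key, private_key)
    opener = urllib.request.build_opener(
        urllib.request.HTTPDigestAuthHandler(password_mgr))
    request = urllib.request.Request(
        BASE_URL + "/v3/dataSummaries",
        data=json.dumps(body).encode("utf-8"),
        headers={"content-type": "application/json"},
        method="POST",
    )
    with opener.open(request) as response:
        return json.loads(response.read().decode("utf-8"))
```

On success, the decoded reply carries `result.data_summary_id` and `result.job_id`, as shown in the example response above.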
To check the analysis status of your database, you can call the `/v2/jobs/{job_id}` endpoint as follows: @@ -146,103 +181,102 @@ curl --digest --user ${PUBLIC_KEY}:${PRIVATE_KEY} --request GET 'https://.data.dev.tidbcloud.com/api/v1beta/app/chat2query-/endpoint/v2/chat2data'\ +curl --digest --user ${PUBLIC_KEY}:${PRIVATE_KEY} --request POST 'https://.data.tidbcloud.com/api/v1beta/app/chat2query-/endpoint/v3/chat2data'\ --header 'content-type: application/json'\ --data-raw '{ - "cluster_id": "10939961583884005252", - "database": "sp500insight", - "raw_question": "" + "cluster_id": "10140100115280519574", + "database": "sp500insight", + "question": "", + "sql_generate_mode": "direct" }' ``` -In the preceding code, the request body is a JSON object with the following properties: +The request body is a JSON object with the following properties: - `cluster_id`: _string_. A unique identifier of the TiDB cluster. - `database`: _string_. The name of the database. -- `raw_question`: _string_. A natural language describing the query you want. +- `data_summary_id`: _integer_. The ID of the data summary used to generate SQL. This property only takes effect if `cluster_id` and `database` are not provided. If you specify both `cluster_id` and `database`, the API uses the default data summary of the database. +- `question`: _string_. A question in natural language describing the query you want. +- `sql_generate_mode`: _string_. The mode to generate SQL statements. The value can be `direct` or `auto_breakdown`. If you set it to `direct`, the API will generate SQL statements directly based on the `question` you provided. If you set it to `auto_breakdown`, the API will break down the `question` into multiple tasks and generate SQL statements for each task. 
An example response is as follows: -```json +```js { "code": 200, "msg": "", "result": { - "job_id": "3966d5bd95324a6283445e3a02ccd97c" + "cluster_id": "10140100115280519574", + "database": "sp500insight", + "job_id": "20f7577088154d7889964f1a5b12cb26", + "session_id": 304832 } } ``` If you receive a response with the status code `400` as follows, it means that you need to wait a moment for the data summary to be ready. -```json +```js { "code": 400, "msg": "Data summary is not ready, please wait for a while and retry", @@ -250,7 +284,7 @@ If you receive a response with the status code `400` as follows, it means that y } ``` -The `/v2/chat2data` API is asynchronous. You can check the job status by calling the `/v2/jobs/{job_id}` endpoint: +The `/v3/chat2data` API is asynchronous. You can check the job status by calling the `/v2/jobs/{job_id}` endpoint: ```bash curl --digest --user ${PUBLIC_KEY}:${PRIVATE_KEY} --request GET 'https://.data.dev.tidbcloud.com/api/v1beta/app/chat2query-/endpoint/v2/jobs/{job_id}'\ @@ -259,44 +293,55 @@ curl --digest --user ${PUBLIC_KEY}:${PRIVATE_KEY} --request GET 'https:// **Note:** +> +> The Chat2Data v1 endpoint is deprecated. It is recommended that you call Chat2Data v3 endpoints instead. TiDB Cloud Data Service provides the following Chat2Query v1 endpoint: @@ -318,7 +363,6 @@ When calling `/v1/chat2data`, you need to replace the following parameters: > > Each Chat2Query Data App has a rate limit of 100 requests per day. If you exceed the rate limit, the API returns a `429` error. For more quota, you can [submit a request](https://support.pingcap.com/hc/en-us/requests/new?ticket_form_id=7800003722519) to our support team. > An API Key with the role `Chat2Query Data Summary Management Role` cannot call the Chat2Data v1 endpoint. 
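Because each Chat2Data submission returns a `job_id` rather than the final result, clients usually wrap the status check in a small polling loop. The sketch below is an assumption-laden illustration: `fetch_job` stands in for the digest-authenticated `GET /v2/jobs/{job_id}` call shown above, and the status field name and terminal values are hypothetical — adapt them to the actual fields in your endpoint's response.

```python
import time


def wait_for_job(fetch_job, interval=5.0, timeout=300.0,
                 done_states=("done", "failed")):
    """Poll a job-status callable until it reports a terminal state.

    fetch_job() should return the decoded JSON body of GET /v2/jobs/{job_id}.
    The "result.status" field name and the values in done_states are
    assumptions for this sketch; check your endpoint's real response.
    """
    deadline = time.monotonic() + timeout
    while True:
        job = fetch_job()
        status = job.get("result", {}).get("status")
        if status in done_states:
            return job
        if time.monotonic() >= deadline:
            raise TimeoutError(f"job still {status!r} after {timeout}s")
        time.sleep(interval)
```

A longer `interval` keeps you well under the Data App's daily request quota while waiting for large analyses to finish.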
- The following code example is used to count how many users are in the `sp500insight.users` table: ```bash @@ -337,7 +381,7 @@ In the preceding example, the request body is a JSON object with the following p - `cluster_id`: _string_. A unique identifier of the TiDB cluster. - `database`: _string_. The name of the database. - `tables`: _array_. (optional) A list of table names to be queried. -- `instruction`: _string_. A natural language instruction describing the query you want. +- `instruction`: _string_. An instruction in natural language describing the query you want. The response is as follows: @@ -398,4 +442,6 @@ If your API call is not successful, you will receive a status code other than `2 ## Learn more - [Manage an API key](/tidb-cloud/data-service-api-key.md) +- [Start Multi-round Chat2Query](/tidb-cloud/use-chat2query-sessions.md) +- [Use Knowledge Bases](/tidb-cloud/use-chat2query-knowledge.md) - [Response and Status Codes of Data Service](/tidb-cloud/data-service-response-and-status-code.md) diff --git a/tidb-cloud/use-chat2query-knowledge.md b/tidb-cloud/use-chat2query-knowledge.md new file mode 100644 index 0000000000000..e11f32e6ad53b --- /dev/null +++ b/tidb-cloud/use-chat2query-knowledge.md @@ -0,0 +1,218 @@ +--- +title: Use Knowledge Bases +summary: Learn how to improve your Chat2Query results by using Chat2Query knowledge base APIs. +--- + +# Use Knowledge Bases + +A knowledge base is a collection of structured data that can be used to enhance the SQL generation capabilities of Chat2Query. + +Starting from v3, the Chat2Query API enables you to add or modify knowledge bases by calling knowledge base related endpoints of your Chat2Query Data App. + +> **Note:** +> +> Knowledge base related endpoints are available for [TiDB Serverless](/tidb-cloud/select-cluster-tier.md#tidb-serverless) clusters by default. 
To use knowledge base related endpoints on [TiDB Dedicated](/tidb-cloud/select-cluster-tier.md#tidb-dedicated) clusters, contact [TiDB Cloud support](/tidb-cloud/tidb-cloud-support.md). + +## Before you begin + +Before creating a knowledge base for your database, make sure that you have the following: + +- A [Chat2Query Data App](/tidb-cloud/use-chat2query-api.md#create-a-chat2query-data-app) +- An [API key for the Chat2Query Data App](/tidb-cloud/use-chat2query-api.md#create-an-api-key) + +## Step 1. Create a knowledge base for the linked database + +> **Note:** +> +> The knowledge used by Chat2Query is **structured according to the database dimension**. You can connect multiple Chat2Query Data Apps to the same database, but each Chat2Query Data App can only use knowledge from a specific database it is linked to. + +In your Chat2Query Data App, you can create a knowledge base for a specific database by calling the `/v3/knowledgeBases` endpoint. After creation, you will get a `knowledge_base_id` for future knowledge management. + +The following is a general code example for calling this endpoint. + +> **Tip:** +> +> To get a specific code example for your endpoint, click the endpoint name in the left pane of your Data App, and then click **Show Code Example**. For more information, see [Get the example code of an endpoint](/tidb-cloud/use-chat2query-api.md#get-the-code-example-of-an-endpoint). + +```bash +curl --digest --user ${PUBLIC_KEY}:${PRIVATE_KEY} --request POST 'https://.data.tidbcloud.com/api/v1beta/app/chat2query-/endpoint/v3/knowledgeBases'\ + --header 'content-type: application/json'\ + --data-raw '{ + "cluster_id": "", + "database": "", + "description": "" +}' +``` + +An example response is as follows: + +```json +{ + "code":200, + "msg":"", + "result": + { + "default":true, + "description":"", + "knowledge_base_id":2 + } +} +``` + +After getting the response, record the `knowledge_base_id` value in your response for later use. + +## Step 2. 
Choose a knowledge type + +The knowledge base of each database can contain multiple types of knowledge. Before adding knowledge to your knowledge base, you need to choose a knowledge type that best suits your use case. + +Currently, Chat2Query knowledge bases support the following knowledge types. Each type is specifically designed for different scenarios and has a unique knowledge structure. + +- [Few-shot example](#few-shot-example) +- [Term-sheet explanation](#term-sheet-explanation) +- [Instruction](#instruction) + +### Few-shot example + +Few-shot examples are Q&A learning samples provided to Chat2Query, which include sample questions and their corresponding answers. These examples help Chat2Query handle new tasks more effectively. + +> **Note:** +> +> Ensure the accuracy of newly added examples, because the quality of examples affects how well Chat2Query learns. Poor examples, such as mismatched questions and answers, can degrade the performance of Chat2Query on new tasks. + +#### Knowledge structure + +Each example consists of a sample question and its corresponding answer. + +For example: + +```json +{ + "question": "How many records are in the 'test' table?", + "answer": "SELECT COUNT(*) FROM `test`;" +} +``` + +#### Use cases + +Few-shot examples can significantly improve the performance of Chat2Query in various scenarios, including but not limited to the following: + +1. **When dealing with rare or complex questions**: if Chat2Query encounters infrequent or complex questions, adding few-shot examples can enhance its understanding and improve the accuracy of the results. + +2. **When struggling with a certain type of question**: if Chat2Query frequently makes mistakes or has difficulty with specific questions, adding few-shot examples can help improve its performance on these questions. 
+ +### Term-sheet explanation + +Term-sheet explanation refers to a comprehensive explanation of a specific term or a group of similar terms, helping Chat2Query understand the meaning and usage of these terms. + +> **Note:** +> +> Ensure the accuracy of newly added term explanations, because the quality of explanations affects how well Chat2Query learns. Incorrect interpretations not only fail to improve Chat2Query results but can also lead to adverse effects. + +#### Knowledge structure + +Each explanation includes either a single term or a list of similar terms and their detailed descriptions. + +For example: + +```json +{ + "term": ["OSS"], + "description": "OSS Insight is a powerful tool that provides online data analysis for users based on nearly 6 billion rows of GitHub event data." +} +``` + +#### Use cases + +Term-sheet explanation is primarily used to improve Chat2Query's comprehension of user queries, especially in these situations: + +- **Dealing with industry-specific terminology or acronyms**: when your query contains industry-specific terminology or acronyms that might not be universally recognized, using a term-sheet explanation can help Chat2Query understand the meaning and usage of these terms. +- **Dealing with ambiguities in user queries**: when your query contains ambiguous concepts that are confusing, using a term-sheet explanation can help Chat2Query clarify these ambiguities. +- **Dealing with terms with various meanings**: when your query contains terms that carry different meanings in various contexts, using a term-sheet explanation can assist Chat2Query in discerning the correct interpretation. + +### Instruction + +An instruction is a textual command used to guide or control the behavior of Chat2Query, specifically instructing it on how to generate SQL according to specific requirements or conditions. + +> **Note:** +> +> - The instruction has a length limit of 512 characters. 
+> - Provide instructions that are as clear and specific as possible to ensure that Chat2Query can understand and execute them effectively. + +#### Knowledge structure + +An instruction includes only a piece of textual command. + +For example: + +```json +{ + "instruction": "If the task requires calculating the sequential growth rate, use the LAG function with the OVER clause in SQL" +} +``` + +#### Use cases + +Instructions can be used in many scenarios to guide Chat2Query to output according to your requirements, including but not limited to the following: + +- **Limiting query scope**: if you want the SQL to consider only certain tables or columns, use an instruction to specify this. +- **Guiding SQL structure**: if you have specific requirements for the SQL structure, use an instruction to guide Chat2Query. + +## Step 3. Add knowledge to the newly created knowledge base + +To add new knowledge, you can call the `/v3/knowledgeBases/{knowledge_base_id}/data` endpoint. + +### Add a few-shot example type of knowledge + +For example, if you want Chat2Query to generate SQL statements that count the rows of a table using a specific structure, you can add a few-shot example type of knowledge by calling `/v3/knowledgeBases/{knowledge_base_id}/data` as follows: + +```bash +curl --digest --user ${PUBLIC_KEY}:${PRIVATE_KEY} --request POST 'https://.data.tidbcloud.com/api/v1beta/app/chat2query-/endpoint/v3/knowledgeBases//data'\ + --header 'content-type: application/json'\ + --data-raw '{ + "type": "few-shot", + "meta_data": {}, + "raw_data": { + "question": "How many records are in the 'test' table?", + "answer": "SELECT COUNT(*) FROM `test`;" + } +}' +``` + +In the preceding example code, `"type": "few-shot"` represents the few-shot example knowledge type. 
+ +### Add a term-sheet explanation type of knowledge + +For example, if you want Chat2Query to comprehend the meaning of the term `OSS` using your provided explanation, you can add a term-sheet explanation type of knowledge by calling `/v3/knowledgeBases/{knowledge_base_id}/data` as follows: + +```bash +curl --digest --user ${PUBLIC_KEY}:${PRIVATE_KEY} --request POST 'https://.data.tidbcloud.com/api/v1beta/app/chat2query-/endpoint/v3/knowledgeBases//data'\ + --header 'content-type: application/json'\ + --data-raw '{ + "type": "term-sheet", + "meta_data": {}, + "raw_data": { + "term": ["OSS"], + "description": "OSS Insight is a powerful tool that provides online data analysis for users based on nearly 6 billion rows of GitHub event data." + } +}' +``` + +In the preceding example code, `"type": "term-sheet"` represents the term-sheet explanation knowledge type. + +### Add an instruction type of knowledge + +For example, if you want Chat2Query to consistently use the `LAG` function with the `OVER` clause in SQL queries when dealing with questions about sequential growth rate calculation, you can add an instruction type of knowledge by calling `/v3/knowledgeBases/{knowledge_base_id}/data` as follows: + +```bash +curl --digest --user ${PUBLIC_KEY}:${PRIVATE_KEY} --request POST 'https://.data.tidbcloud.com/api/v1beta/app/chat2query-/endpoint/v3/knowledgeBases//data'\ + --header 'content-type: application/json'\ + --data-raw '{ + "type": "instruction", + "meta_data": {}, + "raw_data": { + "instruction": "If the task requires calculating the sequential growth rate, use the LAG function with the OVER clause in SQL" + } +}' +``` + +In the preceding example code, `"type": "instruction"` represents the instruction knowledge type. 
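The three request bodies above share the same shape (`type`, `meta_data`, `raw_data`), so a client can build them with small constructors. The helper names below are illustrative and hypothetical; only the payload field names come from this page.

```python
# Hypothetical constructors for the knowledge payloads sent to
# /v3/knowledgeBases/{knowledge_base_id}/data.

def few_shot_knowledge(question, answer):
    return {"type": "few-shot", "meta_data": {},
            "raw_data": {"question": question, "answer": answer}}

def term_sheet_knowledge(terms, description):
    # "term" is a list of one or more similar terms.
    return {"type": "term-sheet", "meta_data": {},
            "raw_data": {"term": list(terms), "description": description}}

def instruction_knowledge(text):
    if len(text) > 512:  # documented 512-character limit for instructions
        raise ValueError("instruction exceeds the 512-character limit")
    return {"type": "instruction", "meta_data": {},
            "raw_data": {"instruction": text}}

print(few_shot_knowledge("How many records are in the 'test' table?",
                         "SELECT COUNT(*) FROM `test`;"))
```

Each returned dictionary can be serialized as the JSON body of a `POST` request to the endpoint.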
\ No newline at end of file diff --git a/tidb-cloud/use-chat2query-sessions.md b/tidb-cloud/use-chat2query-sessions.md new file mode 100644 index 0000000000000..de8dbd8deb8b5 --- /dev/null +++ b/tidb-cloud/use-chat2query-sessions.md @@ -0,0 +1,101 @@ +--- +title: Start Multi-round Chat2Query +summary: Learn how to start multi-round chat by using Chat2Query session-related APIs. +--- + +# Start Multi-round Chat2Query + +Starting from v3, the Chat2Query API enables you to start multi-round chats by calling session related endpoints. You can use the `session_id` returned by the `/v3/chat2data` endpoint to continue your conversation in the next round. + +## Before you begin + +Before starting multi-round Chat2Query, make sure that you have the following: + +- A [Chat2Query Data App](/tidb-cloud/use-chat2query-api.md#create-a-chat2query-data-app). +- An [API key for the Chat2Query Data App](/tidb-cloud/use-chat2query-api.md#create-an-api-key). +- A [data summary for your target database](/tidb-cloud/use-chat2query-api.md#1-generate-a-data-summary-by-calling-v3datasummaries). + +## Step 1. Start a session + +To start a session, you can call the `/v3/sessions` endpoint of your Chat2Query Data App. + +The following is a general code example for calling this endpoint. + +> **Tip:** +> +> To get a specific code example for your endpoint, click the endpoint name in the left pane of your Data App, and then click **Show Code Example**. For more information, see [Get the example code of an endpoint](/tidb-cloud/use-chat2query-api.md#get-the-code-example-of-an-endpoint). 
+ +```bash +curl --digest --user ${PUBLIC_KEY}:${PRIVATE_KEY} --request POST 'https://.data.tidbcloud.com/api/v1beta/app/chat2query-/endpoint/v3/sessions'\ + --header 'content-type: application/json'\ + --data-raw '{ + "cluster_id": "10140100115280519574", + "database": "sp500insight", + "name": "" +}' +``` + +In the preceding code, the request body is a JSON object with the following properties: + +- `cluster_id`: _string_. A unique identifier of the TiDB cluster. +- `database`: _string_. The name of the database. +- `name`: _string_. The name of the session. + +An example response is as follows: + +```json +{ + "code": 200, + "msg": "", + "result": { + "messages": [], + "meta": { + "created_at": 1718948875, // A UNIX timestamp indicating when the session is created + "creator": "", // The creator of the session + "name": "", // The name of the session + "org_id": "1", // The organization ID + "updated_at": 1718948875 // A UNIX timestamp indicating when the session is updated + }, + "session_id": 305685 // The session ID + } +} +``` + +## Step 2. Call Chat2Data endpoints with the session + +After starting a session, you can call `/v3/sessions/{session_id}/chat2data` to continue your conversation in the next round. + +The following is a general code example: + +```bash +curl --digest --user ${PUBLIC_KEY}:${PRIVATE_KEY} --request POST 'https://eu-central-1.data.tidbcloud.com/api/v1beta/app/chat2query-YqAvnlRj/endpoint/v3/sessions/{session_id}/chat2data'\ + --header 'content-type: application/json'\ + --data-raw '{ + "question": "", + "feedback_answer_id": "", + "feedback_task_id": "", + "sql_generate_mode": "direct" +}' +``` + +In the preceding code, the request body is a JSON object with the following properties: + +- `question`: _string_. A question in natural language describing the query you want. +- `feedback_answer_id`: _string_. The feedback answer ID. This field is optional and is only used for feedback. +- `feedback_task_id`: _string_. The feedback task ID. 
This field is optional and is only used for feedback. +- `sql_generate_mode`: _string_. The mode to generate SQL statements. The value can be `direct` or `auto_breakdown`. If you set it to `direct`, the API will generate SQL statements directly based on the `question` you provided. If you set it to `auto_breakdown`, the API will break down the `question` into multiple tasks and generate SQL statements for each task. + +An example response is as follows: + +```json +{ + "code": 200, + "msg": "", + "result": { + "job_id": "d96b6fd23c5f445787eb5fd067c14c0b", + "session_id": 305685 + } +} +``` + +The response is similar to the response of the `/v3/chat2data` endpoint. You can check the job status by calling the `/v2/jobs/{job_id}` endpoint. For more information, see [Check the analysis status by calling `/v2/jobs/{job_id}`](/tidb-cloud/use-chat2query-api.md#2-check-the-analysis-status-by-calling-v2jobsjob_id). \ No newline at end of file diff --git a/tidb-cloud/v6.5-performance-benchmarking-with-sysbench.md b/tidb-cloud/v6.5-performance-benchmarking-with-sysbench.md new file mode 100644 index 0000000000000..3fafc1b68db4c --- /dev/null +++ b/tidb-cloud/v6.5-performance-benchmarking-with-sysbench.md @@ -0,0 +1,155 @@ +--- +title: TiDB Cloud Sysbench Performance Test Report for TiDB v6.5.6 +summary: Introduce the Sysbench performance test results for a TiDB Dedicated cluster with the TiDB version of v6.5.6. +--- + +# TiDB Cloud Sysbench Performance Test Report for TiDB v6.5.6 + +This document provides the Sysbench performance test steps and results for a TiDB Dedicated cluster with the TiDB version of v6.5.6. This report can also be used as a reference for the performance of TiDB Self-Hosted v6.5.6 clusters. + +## Test overview + +This test aims at showing the Sysbench performance of TiDB v6.5.6 in the Online Transactional Processing (OLTP) scenario. 
+ +## Test environment + +### TiDB cluster + +The test is conducted on a TiDB cluster with the following settings: + +- Cluster type: [TiDB Dedicated](/tidb-cloud/select-cluster-tier.md#tidb-dedicated) +- Cluster version: v6.5.6 +- Cloud provider: AWS (us-west-2) +- Cluster configuration: + + | Node type | Node size | Node quantity | Node storage | + | :-------- | :-------------- | :------------ | :----------- | + | TiDB | 16 vCPU, 32 GiB | 2 | N/A | + | TiKV | 16 vCPU, 64 GiB | 3 | 1000 GiB | + +### Benchmark executor + +The benchmark executor sends SQL queries to the TiDB cluster. In this test, its hardware configuration is as follows: + +- Machine type: Amazon EC2 (us-west-2) +- Instance type: c6a.2xlarge +- Sysbench version: sysbench 1.0.20 (using bundled LuaJIT 2.1.0-beta2) + +## Test steps + +This section introduces how to perform the Sysbench performance test step by step. + +1. In the [TiDB Cloud console](https://tidbcloud.com/), create a TiDB Dedicated cluster that meets the [test environment](#tidb-cluster) requirements. + + For more information, see [Create a TiDB Dedicated cluster](/tidb-cloud/create-tidb-cluster.md). + +2. On the benchmark executor, connect to the newly created cluster and create a database named `sbtest`. + + To connect to the cluster, see [Connect to TiDB Dedicated via Private Endpoint](/tidb-cloud/set-up-private-endpoint-connections.md). + + To create the `sbtest` database, execute the following SQL statement: + + ```sql + CREATE DATABASE sbtest; + ``` + +3. Load Sysbench data to the `sbtest` database. + + 1. The test in this document is implemented based on [sysbench](https://github.com/akopytov/sysbench). To install sysbench, see [Building and installing from source](https://github.com/akopytov/sysbench#building-and-installing-from-source). + + 2. Run the following `sysbench prepare` command to import 32 tables and 10,000,000 rows to the `sbtest` database. 
Replace `${HOST}`, `${PORT}`, `${THREAD}`, and `${PASSWORD}` with your actual values. + + ```shell + sysbench oltp_common \ + --threads=${THREAD} \ + --db-driver=mysql \ + --mysql-db=sbtest \ + --mysql-host=${HOST} \ + --mysql-port=${PORT} \ + --mysql-user=root \ + --mysql-password=${PASSWORD} \ + prepare --tables=32 --table-size=10000000 + ``` + +4. Run the following `sysbench run` command to conduct Sysbench performance tests on different workloads. This document conducts tests on five workloads: `oltp_point_select`, `oltp_read_write`, `oltp_update_non_index`, `oltp_update_index`, and `oltp_insert`. For each workload, this document conducts three tests with the `${THREAD}` value of `100`, `200`, and `400`. For each concurrency, the test takes 20 minutes. + + ```shell + sysbench ${WORKLOAD} run \ + --mysql-host=${HOST} \ + --mysql-port=${PORT} \ + --mysql-user=root \ + --db-driver=mysql \ + --mysql-db=sbtest \ + --threads=${THREAD} \ + --time=1200 \ + --report-interval=10 \ + --tables=32 \ + --table-size=10000000 \ + --mysql-ignore-errors=1062,2013,8028,9007 \ + --auto-inc=false \ + --mysql-password=${PASSWORD} + ``` + +## Test results + +This section introduces the Sysbench performance of v6.5.6 in the [test environment](#test-environment). 
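The test steps above run five workloads at three concurrency levels each, which amounts to fifteen `sysbench run` invocations; a small wrapper script can generate them all. The sketch below is a dry run that only prints each command (remove `echo` to actually execute them), and the connection values are placeholders you would replace with your own.

```shell
#!/bin/sh
# Dry-run generator for the sysbench workload matrix used in this test.
# Replace the placeholder connection values before running for real.
HOST=127.0.0.1 PORT=4000 PASSWORD=secret

for WORKLOAD in oltp_point_select oltp_read_write oltp_update_non_index oltp_update_index oltp_insert; do
  for THREAD in 100 200 400; do
    echo sysbench "$WORKLOAD" run \
      --mysql-host="$HOST" --mysql-port="$PORT" --mysql-user=root \
      --db-driver=mysql --mysql-db=sbtest --threads="$THREAD" \
      --time=1200 --report-interval=10 --tables=32 --table-size=10000000 \
      --mysql-ignore-errors=1062,2013,8028,9007 --auto-inc=false \
      --mysql-password="$PASSWORD"
  done
done
```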
+ +### Point select performance + +The performance on the `oltp_point_select` workload is as follows: + +| Threads | TPS | 95% latency (ms) | +| :------ | :----- | :--------------- | +| 50 | 34125 | 2.03 | +| 100 | 64987 | 2.07 | +| 200 | 121656 | 2.14 | + +![Sysbench point select performance](/media/tidb-cloud/v6.5.6-oltp_select_point.png) + +### Read write performance + +The performance on the `oltp_read_write` workload is as follows: + +| Threads | TPS | 95% latency (ms) | +| :------ | :--- | :--------------- | +| 50 | 1232 | 46.6 | +| 100 | 2266 | 51.9 | +| 200 | 3578 | 81.5 | + +![Sysbench read write performance](/media/tidb-cloud/v6.5.6-oltp_read_write.png) + +### Update non-index performance + +The performance on the `oltp_update_non_index` workload is as follows: + +| Threads | TPS | 95% latency (ms) | +| :------ | :---- | :--------------- | +| 100 | 11016 | 11.0 | +| 200 | 20640 | 12.1 | +| 400 | 36830 | 13.5 | + +![Sysbench update non-index performance](/media/tidb-cloud/v6.5.6-oltp_update_non_index.png) + +### Update index performance + +The performance on the `oltp_update_index` workload is as follows: + +| Threads | TPS | 95% latency (ms) | +| :------ | :---- | :--------------- | +| 100 | 9270 | 14.0 | +| 200 | 14466 | 18.0 | +| 400 | 22194 | 24.8 | + +![Sysbench update index performance](/media/tidb-cloud/v6.5.6-oltp_update_index.png) + +### Insert performance + +The performance on the `oltp_insert` workload is as follows: + +| Threads | TPS | 95% latency (ms) | +| :------ | :---- | :--------------- | +| 100 | 16008 | 8.13 | +| 200 | 27143 | 10.1 | +| 400 | 40884 | 15.0 | + +![Sysbench insert performance](/media/tidb-cloud/v6.5.6-oltp_insert.png) diff --git a/tidb-cloud/v7.1.0-performance-benchmarking-with-tpcc.md b/tidb-cloud/v6.5-performance-benchmarking-with-tpcc.md similarity index 80% rename from tidb-cloud/v7.1.0-performance-benchmarking-with-tpcc.md rename to tidb-cloud/v6.5-performance-benchmarking-with-tpcc.md index 
eb4f988afea16..903e648dd58cf 100644 --- a/tidb-cloud/v7.1.0-performance-benchmarking-with-tpcc.md +++ b/tidb-cloud/v6.5-performance-benchmarking-with-tpcc.md @@ -1,15 +1,15 @@ --- -title: TiDB Cloud TPC-C Performance Test Report for TiDB v7.1.0 -summary: Introduce the TPC-C performance test results for a TiDB Dedicated cluster with the TiDB version of v7.1.0. +title: TiDB Cloud TPC-C Performance Test Report for TiDB v6.5.6 +summary: Introduce the TPC-C performance test results for a TiDB Dedicated cluster with the TiDB version of v6.5.6. --- -# TiDB Cloud TPC-C Performance Test Report for TiDB v7.1.0 +# TiDB Cloud TPC-C Performance Test Report for TiDB v6.5.6 -This document provides the TPC-C performance test steps and results for a TiDB Dedicated cluster with the TiDB version of v7.1.0. This report can also be used as a reference for the performance of TiDB Self-Hosted v7.1.0 clusters. +This document provides the TPC-C performance test steps and results for a TiDB Dedicated cluster with the TiDB version of v6.5.6. This report can also be used as a reference for the performance of TiDB Self-Hosted v6.5.6 clusters. ## Test overview -This test aims at showing the TPC-C performance of TiDB v7.1.0 in the Online Transactional Processing (OLTP) scenario. +This test aims at showing the TPC-C performance of TiDB v6.5.6 in the Online Transactional Processing (OLTP) scenario. 
## Test environment @@ -18,14 +18,14 @@ This test aims at showing the TPC-C performance of TiDB v7.1.0 in the Online Tra The test is conducted on a TiDB cluster with the following settings: - Cluster type: [TiDB Dedicated](/tidb-cloud/select-cluster-tier.md#tidb-dedicated) -- Cluster version: v7.1.0 +- Cluster version: v6.5.6 - Cloud provider: AWS (us-west-2) - Cluster configuration: - | Node type | Node size | Node quantity | Node storage | - |:----------|:----------|:----------|:----------| - | TiDB | 16 vCPU, 32 GiB | 2 | N/A | - | TiKV | 16 vCPU, 64 GiB | 3 | 1000 GiB | + | Node type | Node size | Node quantity | Node storage | + | :-------- | :-------------- | :------------ | :----------- | + | TiDB | 16 vCPU, 32 GiB | 2 | N/A | + | TiKV | 16 vCPU, 64 GiB | 3 | 1000 GiB | ### Benchmark executor @@ -100,12 +100,12 @@ This section introduces how to perform the TPC-C performance test step by step. ## Test results -The TPC-C performance of v7.1.0 in the [test environment](#test-environment) is as follows: +The TPC-C performance of v6.5.6 in the [test environment](#test-environment) is as follows: -| Threads | v7.1.0 tpmC | -|:--------|:----------| -| 50 | 36,159 | -| 100 | 66,742 | -| 200 | 94,406 | +| Threads | v6.5.6 tpmC | +| :------ | :---------- | +| 50 | 44183 | +| 100 | 74424 | +| 200 | 101545 | -![TPC-C](/media/tidb-cloud/v7.1.0-tpmC.png) +![TPC-C](/media/tidb-cloud/v6.5.6-tpmC.png) diff --git a/tidb-cloud/v7.1.0-performance-benchmarking-with-sysbench.md b/tidb-cloud/v7.1-performance-benchmarking-with-sysbench.md similarity index 71% rename from tidb-cloud/v7.1.0-performance-benchmarking-with-sysbench.md rename to tidb-cloud/v7.1-performance-benchmarking-with-sysbench.md index 5f11db1dd6080..b8d3e656cc1b8 100644 --- a/tidb-cloud/v7.1.0-performance-benchmarking-with-sysbench.md +++ b/tidb-cloud/v7.1-performance-benchmarking-with-sysbench.md @@ -1,15 +1,16 @@ --- -title: TiDB Cloud Sysbench Performance Test Report for TiDB v7.1.0 -summary: Introduce the 
Sysbench performance test results for a TiDB Dedicated cluster with the TiDB version of v7.1.0. +title: TiDB Cloud Sysbench Performance Test Report for TiDB v7.1.3 +summary: Introduce the Sysbench performance test results for a TiDB Dedicated cluster with the TiDB version of v7.1.3. +aliases: ['/tidbcloud/v7.1.0-performance-benchmarking-with-sysbench'] --- -# TiDB Cloud Sysbench Performance Test Report for TiDB v7.1.0 +# TiDB Cloud Sysbench Performance Test Report for TiDB v7.1.3 -This document provides the Sysbench performance test steps and results for a TiDB Dedicated cluster with the TiDB version of v7.1.0. This report can also be used as a reference for the performance of TiDB Self-Hosted v7.1.0 clusters. +This document provides the Sysbench performance test steps and results for a TiDB Dedicated cluster with the TiDB version of v7.1.3. This report can also be used as a reference for the performance of TiDB Self-Hosted v7.1.3 clusters. ## Test overview -This test aims at showing the Sysbench performance of TiDB v7.1.0 in the Online Transactional Processing (OLTP) scenario. +This test aims at showing the Sysbench performance of TiDB v7.1.3 in the Online Transactional Processing (OLTP) scenario. 
## Test environment @@ -18,14 +19,14 @@ This test aims at showing the Sysbench performance of TiDB v7.1.0 in the Online The test is conducted on a TiDB cluster with the following settings: - Cluster type: [TiDB Dedicated](/tidb-cloud/select-cluster-tier.md#tidb-dedicated) -- Cluster version: v7.1.0 +- Cluster version: v7.1.3 - Cloud provider: AWS (us-west-2) - Cluster configuration: - | Node type | Node size | Node quantity | Node storage | - |:----------|:----------|:----------|:----------| - | TiDB | 16 vCPU, 32 GiB | 2 | N/A | - | TiKV | 16 vCPU, 64 GiB | 3 | 1000 GiB | + | Node type | Node size | Node quantity | Node storage | + | :-------- | :-------------- | :------------ | :----------- | + | TiDB | 16 vCPU, 32 GiB | 2 | N/A | + | TiKV | 16 vCPU, 64 GiB | 3 | 1000 GiB | ### Parameter configuration @@ -112,64 +113,64 @@ This section introduces how to perform the Sysbench performance test step by ste ## Test results -This section introduces the Sysbench performance of v7.1.0 in the [test environment](#test-environment). +This section introduces the Sysbench performance of v7.1.3 in the [test environment](#test-environment). 
### Point select performance The performance on the `oltp_point_select` workload is as follows: -| Threads | TPS | 95% latency (ms)| -|:--------|:----------|:----------| -| 100 | 56,039 | 2.34 | -| 200 | 95,908 | 2.78 | -| 400 | 111,810 | 5.57 | +| Threads | TPS | 95% latency (ms) | +| :------ | :----- | :--------------- | +| 50 | 35309 | 1.93 | +| 100 | 64853 | 2.00 | +| 200 | 118462 | 2.22 | -![Sysbench point select performance](/media/tidb-cloud/v7.1.0-oltp_select_point.png) +![Sysbench point select performance](/media/tidb-cloud/v7.1.3-oltp_select_point.png) ### Read write performance The performance on the `oltp_read_write` workload is as follows: -| Threads | TPS | 95% latency (ms)| -|:--------|:----------|:----------| -| 100 | 1,789 | 66.8 | -| 200 | 2,842 | 97.6 | -| 400 | 3,090 | 191 | +| Threads | TPS | 95% latency (ms) | +| :------ | :--- | :--------------- | +| 50 | 1218 | 48.3 | +| 100 | 2235 | 53.9 | +| 200 | 3380 | 87.6 | -![Sysbench read write performance](/media/tidb-cloud/v.7.1.0-oltp_read_write.png) +![Sysbench read write performance](/media/tidb-cloud/v7.1.3-oltp_read_write.png) ### Update non-index performance The performance on the `oltp_update_non_index` workload is as follows: -| Threads | TPS | 95% latency (ms)| -|:--------|:----------|:----------| -| 100 | 7,944 | 16.7 | -| 200 | 13,844 | 19.0 | -| 400 | 29,063 | 20.4 | +| Threads | TPS | 95% latency (ms) | +| :------ | :---- | :--------------- | +| 100 | 10928 | 11.7 | +| 200 | 19985 | 12.8 | +| 400 | 35621 | 14.7 | -![Sysbench update non-index performance](/media/tidb-cloud/v7.1.0-oltp_update_non_index.png) +![Sysbench update non-index performance](/media/tidb-cloud/v7.1.3-oltp_update_non_index.png) ### Update index performance The performance on the `oltp_update_index` workload is as follows: -| Threads | TPS | 95% latency (ms)| -|:--------|:----------|:----------| -| 100 | 6,389 | 20 | -| 200 | 12,583 | 22.3 | -| 400 | 22,393 | 25.7 | +| Threads | TPS | 95% latency (ms) | +| :------ | 
:---- | :--------------- | +| 100 | 8854 | 14.7 | +| 200 | 14414 | 18.6 | +| 400 | 21997 | 25.3 | -![Sysbench update index performance](/media/tidb-cloud/v7.1.0-oltp_update_index.png) +![Sysbench update index performance](/media/tidb-cloud/v7.1.3-oltp_update_index.png) ### Insert performance The performance on the `oltp_insert` workload is as follows: -| Threads | TPS | 95% latency (ms)| -|:--------|:----------|:----------| -| 100 | 7,671 | 17.3 | -| 200 | 13,584 | 19.7 | -| 400 | 31,252 | 20 | +| Threads | TPS | 95% latency (ms) | +| :------ | :---- | :--------------- | +| 100 | 15575 | 8.13 | +| 200 | 25078 | 11.0 | +| 400 | 38436 | 15.6 | -![Sysbench insert performance](/media/tidb-cloud/v7.1.0-oltp_insert.png) +![Sysbench insert performance](/media/tidb-cloud/v7.1.3-oltp_insert.png) diff --git a/tidb-cloud/v7.1-performance-benchmarking-with-tpcc.md b/tidb-cloud/v7.1-performance-benchmarking-with-tpcc.md new file mode 100644 index 0000000000000..91fbc788782b5 --- /dev/null +++ b/tidb-cloud/v7.1-performance-benchmarking-with-tpcc.md @@ -0,0 +1,112 @@ +--- +title: TiDB Cloud TPC-C Performance Test Report for TiDB v7.1.3 +summary: Introduce the TPC-C performance test results for a TiDB Dedicated cluster with the TiDB version of v7.1.3. +aliases: ['/tidbcloud/v7.1.0-performance-benchmarking-with-tpcc'] +--- + +# TiDB Cloud TPC-C Performance Test Report for TiDB v7.1.3 + +This document provides the TPC-C performance test steps and results for a TiDB Dedicated cluster with the TiDB version of v7.1.3. This report can also be used as a reference for the performance of TiDB Self-Hosted v7.1.3 clusters. + +## Test overview + +This test aims at showing the TPC-C performance of TiDB v7.1.3 in the Online Transactional Processing (OLTP) scenario. 
+ +## Test environment + +### TiDB cluster + +The test is conducted on a TiDB cluster with the following settings: + +- Cluster type: [TiDB Dedicated](/tidb-cloud/select-cluster-tier.md#tidb-dedicated) +- Cluster version: v7.1.3 +- Cloud provider: AWS (us-west-2) +- Cluster configuration: + + | Node type | Node size | Node quantity | Node storage | + | :-------- | :-------------- | :------------ | :----------- | + | TiDB | 16 vCPU, 32 GiB | 2 | N/A | + | TiKV | 16 vCPU, 64 GiB | 3 | 1000 GiB | + +### Benchmark executor + +The benchmark executor sends SQL queries to the TiDB cluster. In this test, its hardware configuration is as follows: + +- Machine type: Amazon EC2 (us-west-2) +- Instance type: c6a.2xlarge + +## Test steps + +This section introduces how to perform the TPC-C performance test step by step. + +1. In the [TiDB Cloud console](https://tidbcloud.com/), create a TiDB Dedicated cluster that meets the [test environment](#tidb-cluster) requirements. + + For more information, see [Create a TiDB Dedicated cluster](/tidb-cloud/create-tidb-cluster.md). + +2. On the benchmark executor, connect to the newly created cluster and create a database named `tpcc`. + + To connect to the cluster, see [Connect to TiDB Dedicated via Private Endpoint](/tidb-cloud/set-up-private-endpoint-connections.md). + + To create the `tpcc` database, execute the following SQL statement: + + ```sql + CREATE DATABASE tpcc; + ``` + +3. Load TPC-C data to the `tpcc` database. + + 1. The test in this document is implemented based on [go-tpc](https://github.com/pingcap/go-tpc). You can download the test program using the following command: + + ```shell + curl --proto '=https' --tlsv1.2 -sSf https://raw.githubusercontent.com/pingcap/go-tpc/master/install.sh | sh + ``` + + 2. Run the following `go-tpc tpcc` command to import 1,000 warehouses to the `tpcc` database. Replace `${HOST}`, `${THREAD}`, and `${PASSWORD}` with your actual values. 
This document conducts three tests with the `${THREAD}` value of `50`, `100`, and `200`. + + ```shell + go-tpc tpcc --host ${HOST} --warehouses 1000 prepare -P 4000 -D tpcc -T ${THREAD} --time 2h0m0s -p ${PASSWORD} --ignore-error + ``` + +4. To ensure that the TiDB optimizer can generate the optimal execution plan, execute the following SQL statements to collect statistics before conducting the TPC-C test: + + ```sql + ANALYZE TABLE customer; + ANALYZE TABLE district; + ANALYZE TABLE history; + ANALYZE TABLE item; + ANALYZE TABLE new_order; + ANALYZE TABLE order_line; + ANALYZE TABLE orders; + ANALYZE TABLE stock; + ANALYZE TABLE warehouse; + ``` + + To accelerate the collection of statistics, execute the following SQL statements before collecting: + + ```sql + SET tidb_build_stats_concurrency=16; + SET tidb_distsql_scan_concurrency=16; + SET tidb_index_serial_scan_concurrency=16; + ``` + +5. Run the following `go-tpc tpcc` command to conduct stress tests on the TiDB Dedicated cluster. For each concurrency, the test takes two hours. + + ```shell + go-tpc tpcc --host ${HOST} -P 4000 --warehouses 1000 run -D tpcc -T ${THREAD} --time 2h0m0s -p ${PASSWORD} --ignore-error + ``` + +6. Extract the tpmC data of `NEW_ORDER` from the result. + + TPC-C uses tpmC (transactions per minute) to measure the maximum qualified throughput (MQTh, Max Qualified Throughput). The transactions are the NewOrder transactions and the final unit of measure is the number of new orders processed per minute. 
+ +## Test results + +The TPC-C performance of v7.1.3 in the [test environment](#test-environment) is as follows: + +| Threads | v7.1.3 tpmC | +| :------ | :---------- | +| 50 | 42839 | +| 100 | 72895 | +| 200 | 97924 | + +![TPC-C](/media/tidb-cloud/v7.1.3-tpmC.png) diff --git a/tidb-cloud/v7.5.0-performance-benchmarking-with-sysbench.md b/tidb-cloud/v7.5-performance-benchmarking-with-sysbench.md similarity index 76% rename from tidb-cloud/v7.5.0-performance-benchmarking-with-sysbench.md rename to tidb-cloud/v7.5-performance-benchmarking-with-sysbench.md index 3e797f1d6072e..719171e9f77e6 100644 --- a/tidb-cloud/v7.5.0-performance-benchmarking-with-sysbench.md +++ b/tidb-cloud/v7.5-performance-benchmarking-with-sysbench.md @@ -1,6 +1,7 @@ --- title: TiDB Cloud Sysbench Performance Test Report for TiDB v7.5.0 summary: Introduce the Sysbench performance test results for a TiDB Dedicated cluster with the TiDB version of v7.5.0. +aliases: ['/tidbcloud/v7.5.0-performance-benchmarking-with-sysbench'] --- # TiDB Cloud Sysbench Performance Test Report for TiDB v7.5.0 @@ -22,10 +23,10 @@ The test is conducted on a TiDB cluster with the following settings: - Cloud provider: AWS (us-west-2) - Cluster configuration: - | Node type | Node size | Node quantity | Node storage | - |:----------|:----------|:----------|:----------| - | TiDB | 16 vCPU, 32 GiB | 2 | N/A | - | TiKV | 16 vCPU, 64 GiB | 3 | 1000 GiB | + | Node type | Node size | Node quantity | Node storage | + | :-------- | :-------------- | :------------ | :----------- | + | TiDB | 16 vCPU, 32 GiB | 2 | N/A | + | TiKV | 16 vCPU, 64 GiB | 3 | 1000 GiB | ### Parameter configuration @@ -118,58 +119,58 @@ This section introduces the Sysbench performance of v7.5.0 in the [test environm The performance on the `oltp_point_select` workload is as follows: -| Threads | TPS | 95% latency (ms)| -|:--------|:----------|:----------| -| 100 | 64,810 | 2.03 | -| 200 | 118,651 | 2.22 | -| 400 | 153,580 | 3.96 | +| Threads | TPS | 95% 
latency (ms) | +| :------ | :------ | :--------------- | +| 50 | 33,344 | 1.96 | +| 100 | 64,810 | 2.03 | +| 200 | 118,651 | 2.22 | -![Sysbench point select performance](/media/tidb-cloud/v7.5.0_oltp_point_select.png) +![Sysbench point select performance](/media/tidb-cloud/v7.5.0-oltp_point_select.png) ### Read write performance The performance on the `oltp_read_write` workload is as follows: -| Threads | TPS | 95% latency (ms)| -|:--------|:----------|:----------| -| 100 | 2,134 | 54.8 | -| 200 | 3,020 | 99.3 | -| 400 | 3,251 | 193 | +| Threads | TPS | 95% latency (ms) | +| :------ | :---- | :--------------- | +| 50 | 1,181 | 49.2 | +| 100 | 2,162 | 54.8 | +| 200 | 3,169 | 92.4 | -![Sysbench read write performance](/media/tidb-cloud/v7.5.0_oltp_read_write.png) +![Sysbench read write performance](/media/tidb-cloud/v7.5.0-oltp_read_write.png) ### Update non-index performance The performance on the `oltp_update_non_index` workload is as follows: -| Threads | TPS | 95% latency (ms)| -|:--------|:----------|:----------| -| 100 | 10,567 | 11.7 | -| 200 | 20,223 | 13.0 | -| 400 | 34,011 | 14.7 | +| Threads | TPS | 95% latency (ms) | +| :------ | :----- | :--------------- | +| 100 | 10,567 | 11.7 | +| 200 | 20,223 | 13.0 | +| 400 | 34,011 | 14.7 | -![Sysbench update non-index performance](/media/tidb-cloud/v7.5.0_oltp_update_non_index.png) +![Sysbench update non-index performance](/media/tidb-cloud/v7.5.0-oltp_update_non_index.png) ### Update index performance The performance on the `oltp_update_index` workload is as follows: -| Threads | TPS | 95% latency (ms)| -|:--------|:----------|:----------| -| 100 | 8,896 | 14.7 | -| 200 | 1,3718 | 19.0 | -| 400 | 2,0377 | 26.9 | +| Threads | TPS | 95% latency (ms) | +| :------ | :----- | :--------------- | +| 100 | 8,896 | 14.7 | +| 200 | 13,718 | 19.0 | +| 400 | 20,377 | 26.9 | -![Sysbench update index performance](/media/tidb-cloud/v7.5.0_oltp_update_index.png) +![Sysbench update index 
performance](/media/tidb-cloud/v7.5.0-oltp_update_index.png) ### Insert performance The performance on the `oltp_insert` workload is as follows: -| Threads | TPS | 95% latency (ms)| -|:--------|:----------|:----------| -| 100 | 15,132 | 8.58 | -| 200 | 24,756 | 10.8 | -| 400 | 37,247 | 16.4 | +| Threads | TPS | 95% latency (ms) | +| :------ | :----- | :--------------- | +| 100 | 15,132 | 8.58 | +| 200 | 24,756 | 10.8 | +| 400 | 37,247 | 16.4 | -![Sysbench insert performance](/media/tidb-cloud/v7.5.0_oltp_insert.png) +![Sysbench insert performance](/media/tidb-cloud/v7.5.0-oltp_insert.png) diff --git a/tidb-cloud/v7.5.0-performance-benchmarking-with-tpcc.md b/tidb-cloud/v7.5-performance-benchmarking-with-tpcc.md similarity index 89% rename from tidb-cloud/v7.5.0-performance-benchmarking-with-tpcc.md rename to tidb-cloud/v7.5-performance-benchmarking-with-tpcc.md index 545fb8eb8a62a..202e00a647386 100644 --- a/tidb-cloud/v7.5.0-performance-benchmarking-with-tpcc.md +++ b/tidb-cloud/v7.5-performance-benchmarking-with-tpcc.md @@ -1,6 +1,7 @@ --- title: TiDB Cloud TPC-C Performance Test Report for TiDB v7.5.0 summary: Introduce the TPC-C performance test results for a TiDB Dedicated cluster with the TiDB version of v7.5.0. 
+aliases: ['/tidbcloud/v7.5.0-performance-benchmarking-with-tpcc'] --- # TiDB Cloud TPC-C Performance Test Report for TiDB v7.5.0 @@ -22,10 +23,10 @@ The test is conducted on a TiDB cluster with the following settings: - Cloud provider: AWS (us-west-2) - Cluster configuration: - | Node type | Node size | Node quantity | Node storage | - |:----------|:----------|:----------|:----------| - | TiDB | 16 vCPU, 32 GiB | 2 | N/A | - | TiKV | 16 vCPU, 64 GiB | 3 | 1000 GiB | + | Node type | Node size | Node quantity | Node storage | + | :-------- | :-------------- | :------------ | :----------- | + | TiDB | 16 vCPU, 32 GiB | 2 | N/A | + | TiKV | 16 vCPU, 64 GiB | 3 | 1000 GiB | ### Benchmark executor @@ -102,10 +103,10 @@ This section introduces how to perform the TPC-C performance test step by step. The TPC-C performance of v7.5.0 in the [test environment](#test-environment) is as follows: -| Threads | v7.5.0 tpmC | -|:--------|:----------| -| 50 | 37,443 | -| 100 | 67,899 | -| 200 | 93,038 | +| Threads | v7.5.0 tpmC | +| :------ | :---------- | +| 50 | 41,426 | +| 100 | 71,499 | +| 200 | 97,389 | ![TPC-C](/media/tidb-cloud/v7.5.0_tpcc.png) diff --git a/tidb-cloud/v8.1-performance-benchmarking-with-sysbench.md b/tidb-cloud/v8.1-performance-benchmarking-with-sysbench.md new file mode 100644 index 0000000000000..ce9a0b136f3a0 --- /dev/null +++ b/tidb-cloud/v8.1-performance-benchmarking-with-sysbench.md @@ -0,0 +1,181 @@ +--- +title: TiDB Cloud Sysbench Performance Test Report for TiDB v8.1.0 +summary: Introduce the Sysbench performance test results for a TiDB Dedicated cluster with the TiDB version of v8.1.0. +--- + +# TiDB Cloud Sysbench Performance Test Report for TiDB v8.1.0 + +This document provides the Sysbench performance test steps and results for a TiDB Dedicated cluster with the TiDB version of v8.1.0. This report can also be used as a reference for the performance of TiDB Self-Hosted v8.1.0 clusters. 
+ +## Test overview + +This test aims at showing the Sysbench performance of TiDB v8.1.0 in the Online Transactional Processing (OLTP) scenario. + +## Test environment + +### TiDB cluster + +The test is conducted on a TiDB cluster with the following settings: + +- Cluster type: [TiDB Dedicated](/tidb-cloud/select-cluster-tier.md#tidb-dedicated) +- Cluster version: v8.1.0 +- Cloud provider: AWS (us-west-2) +- Cluster configuration: + + | Node type | Node size | Node quantity | Node storage | + |:----------|:----------|:----------|:----------| + | TiDB | 16 vCPU, 32 GiB | 2 | N/A | + | TiKV | 16 vCPU, 64 GiB | 3 | 1000 GiB | + +### Parameter configuration + +The system variable [`tidb_session_plan_cache_size`](https://docs.pingcap.com/tidb/stable/system-variables#tidb_session_plan_cache_size-new-in-v710) controls the maximum number of plans that can be cached. The default value is `100`. For each workload, this document conducts tests with `tidb_session_plan_cache_size` set to `1000`: + +```sql +SET GLOBAL tidb_session_plan_cache_size = 1000; +``` + +> **Note:** +> +> For TiDB Cloud, to modify the TiKV parameters of your cluster, you can contact [PingCAP Support](/tidb-cloud/tidb-cloud-support.md) for help. + +The TiKV parameter [`prefill-for-recycle`](https://docs.pingcap.com/tidb/stable/tikv-configuration-file#prefill-for-recycle-new-in-v700) can make log recycling effective immediately after initialization. 
This document conducts tests based on different workloads with the following `prefill-for-recycle` configuration: + +- For the `oltp_point_select` workload, use the default value of the [`prefill-for-recycle`](https://docs.pingcap.com/tidb/stable/tikv-configuration-file#prefill-for-recycle-new-in-v700) parameter: + + ```yaml + raft-engine.prefill-for-recycle = false + ``` + +- For `oltp_insert`, `oltp_read_write`, `oltp_update_index`, and `oltp_update_non_index` workloads, enable the [`prefill-for-recycle`](https://docs.pingcap.com/tidb/stable/tikv-configuration-file#prefill-for-recycle-new-in-v700) parameter: + + ```yaml + raft-engine.prefill-for-recycle = true + ``` + +### Benchmark executor + +The benchmark executor sends SQL queries to the TiDB cluster. In this test, its hardware configuration is as follows: + +- Machine type: Amazon EC2 (us-west-2) +- Instance type: c6a.2xlarge +- Sysbench version: sysbench 1.0.20 (using bundled LuaJIT 2.1.0-beta2) + +## Test steps + +This section introduces how to perform the Sysbench performance test step by step. + +1. In the [TiDB Cloud console](https://tidbcloud.com/), create a TiDB Dedicated cluster that meets the [test environment](#tidb-cluster) requirements. + + For more information, see [Create a TiDB Dedicated cluster](/tidb-cloud/create-tidb-cluster.md). + +2. On the benchmark executor, connect to the newly created cluster and create a database named `sbtest`. + + To connect to the cluster, see [Connect to TiDB Dedicated via Private Endpoint](/tidb-cloud/set-up-private-endpoint-connections.md). + + To create the `sbtest` database, execute the following SQL statement: + + ```sql + CREATE DATABASE sbtest; + ``` + +3. Load Sysbench data to the `sbtest` database. + + 1. The test in this document is implemented based on [sysbench](https://github.com/akopytov/sysbench). To install sysbench, see [Building and installing from source](https://github.com/akopytov/sysbench#building-and-installing-from-source). + + 2. 
Run the following `sysbench prepare` command to import 32 tables and 10,000,000 rows to the `sbtest` database. Replace `${HOST}`, `${PORT}`, `${THREAD}`, and `${PASSWORD}` with your actual values. + + ```shell + sysbench oltp_common \ + --threads=${THREAD} \ + --db-driver=mysql \ + --mysql-db=sbtest \ + --mysql-host=${HOST} \ + --mysql-port=${PORT} \ + --mysql-user=root \ + --mysql-password=${PASSWORD} \ + prepare --tables=32 --table-size=10000000 + ``` + +4. Run the following `sysbench run` command to conduct Sysbench performance tests on different workloads. This document conducts tests on five workloads: `oltp_point_select`, `oltp_read_write`, `oltp_update_non_index`, `oltp_update_index`, and `oltp_insert`. For each workload, this document conducts three tests with different values for the `${THREAD}` variable. For `oltp_point_select` and `oltp_read_write`, the values are `50`, `100`, and `200`. For other workloads, the values are `100`, `200`, and `400`. For each concurrency, the test takes 20 minutes. + + ```shell + sysbench ${WORKLOAD} run \ + --mysql-host=${HOST} \ + --mysql-port=${PORT} \ + --mysql-user=root \ + --db-driver=mysql \ + --mysql-db=sbtest \ + --threads=${THREAD} \ + --time=1200 \ + --report-interval=10 \ + --tables=32 \ + --table-size=10000000 \ + --mysql-ignore-errors=1062,2013,8028,9007 \ + --auto-inc=false \ + --mysql-password=${PASSWORD} + ``` + +## Test results + +This section introduces the Sysbench performance of v8.1.0 in the [test environment](#test-environment). 
+ +### Point select performance + +The performance on the `oltp_point_select` workload is as follows: + +| Threads | TPS | 95% latency (ms)| +|:--------|:----------|:----------| +| 50 | 32,741 | 1.96 | +| 100 | 62,545 | 2.03 | +| 200 | 111,470 | 2.48 | + +![Sysbench point select performance](/media/tidb-cloud/v8.1.0_oltp_point_select.png) + +### Read write performance + +The performance on the `oltp_read_write` workload is as follows: + +| Threads | TPS | 95% latency (ms)| +|:--------|:----------|:----------| +| 50 | 1,232 | 46.6 | +| 100 | 2,341 | 51 | +| 200 | 3,240 | 109 | + +![Sysbench read write performance](/media/tidb-cloud/v8.1.0_oltp_read_write.png) + +### Update non-index performance + +The performance on the `oltp_update_non_index` workload is as follows: + +| Threads | TPS | 95% latency (ms)| +|:--------|:----------|:----------| +| 100 | 14,000 | 9.39 | +| 200 | 25,215 | 10.5 | +| 400 | 42,550 | 12.8 | + +![Sysbench update non-index performance](/media/tidb-cloud/v8.1.0_oltp_update_non_index.png) + +### Update index performance + +The performance on the `oltp_update_index` workload is as follows: + +| Threads | TPS | 95% latency (ms)| +|:--------|:----------|:----------| +| 100 | 11,188 | 11.7 | +| 200 | 17,805 | 14.7 | +| 400 | 24,575 | 23.5 | + +![Sysbench update index performance](/media/tidb-cloud/v8.1.0_oltp_update_index.png) + +### Insert performance + +The performance on the `oltp_insert` workload is as follows: + +| Threads | TPS | 95% latency (ms)| +|:--------|:----------|:----------| +| 100 | 18,339 | 7.3| +| 200 | 29,387 | 9.73 | +| 400 | 42,712 | 14.2 | + +![Sysbench insert performance](/media/tidb-cloud/v8.1.0_oltp_insert.png) diff --git a/tidb-cloud/v8.1-performance-benchmarking-with-tpcc.md b/tidb-cloud/v8.1-performance-benchmarking-with-tpcc.md new file mode 100644 index 0000000000000..18f30272e1e71 --- /dev/null +++ b/tidb-cloud/v8.1-performance-benchmarking-with-tpcc.md @@ -0,0 +1,123 @@ +--- +title: TiDB Cloud TPC-C Performance Test 
Report for TiDB v8.1.0 +summary: Introduce the TPC-C performance test results for a TiDB Dedicated cluster with the TiDB version of v8.1.0. +--- + +# TiDB Cloud TPC-C Performance Test Report for TiDB v8.1.0 + +This document provides the TPC-C performance test steps and results for a TiDB Dedicated cluster with the TiDB version of v8.1.0. This report can also be used as a reference for the performance of TiDB Self-Hosted v8.1.0 clusters. + +## Test overview + +This test aims at showing the TPC-C performance of TiDB v8.1.0 in the Online Transactional Processing (OLTP) scenario. + +## Test environment + +### TiDB cluster + +The test is conducted on a TiDB cluster with the following settings: + +- Cluster type: [TiDB Dedicated](/tidb-cloud/select-cluster-tier.md#tidb-dedicated) +- Cluster version: v8.1.0 +- Cloud provider: AWS (us-west-2) +- Cluster configuration: + + | Node type | Node size | Node quantity | Node storage | + |:----------|:----------|:----------|:----------| + | TiDB | 16 vCPU, 32 GiB | 2 | N/A | + | TiKV | 16 vCPU, 64 GiB | 3 | 1000 GiB | + +### Parameter configuration + +> **Note:** +> +> For TiDB Cloud, to modify the TiKV parameters of your cluster, you can contact [PingCAP Support](/tidb-cloud/tidb-cloud-support.md) for help. + +The TiKV parameter [`prefill-for-recycle`](https://docs.pingcap.com/tidb/stable/tikv-configuration-file#prefill-for-recycle-new-in-v700) can make log recycling effective immediately after initialization. This document conducts tests with `prefill-for-recycle` enabled: + +```yaml +raft-engine.prefill-for-recycle = true +``` + +### Benchmark executor + +The benchmark executor sends SQL queries to the TiDB cluster. In this test, its hardware configuration is as follows: + +- Machine type: Amazon EC2 (us-west-2) +- Instance type: c6a.2xlarge + +## Test steps + +This section introduces how to perform the TPC-C performance test step by step. + +1. 
In the [TiDB Cloud console](https://tidbcloud.com/), create a TiDB Dedicated cluster that meets the [test environment](#tidb-cluster) requirements. + + For more information, see [Create a TiDB Dedicated cluster](/tidb-cloud/create-tidb-cluster.md). + +2. On the benchmark executor, connect to the newly created cluster and create a database named `tpcc`. + + To connect to the cluster, see [Connect to TiDB Dedicated via Private Endpoint](/tidb-cloud/set-up-private-endpoint-connections.md). + + To create the `tpcc` database, execute the following SQL statement: + + ```sql + CREATE DATABASE tpcc; + ``` + +3. Load TPC-C data to the `tpcc` database. + + 1. The test in this document is implemented based on [go-tpc](https://github.com/pingcap/go-tpc). You can download the test program using the following command: + + ```shell + curl --proto '=https' --tlsv1.2 -sSf https://raw.githubusercontent.com/pingcap/go-tpc/master/install.sh | sh + ``` + + 2. Run the following `go-tpc tpcc` command to import 1,000 warehouses to the `tpcc` database. Replace `${HOST}`, `${THREAD}`, and `${PASSWORD}` with your actual values. This document conducts three tests with the `${THREAD}` value of `50`, `100`, and `200`. + + ```shell + go-tpc tpcc --host ${HOST} --warehouses 1000 prepare -P 4000 -D tpcc -T ${THREAD} --time 2h0m0s -p ${PASSWORD} --ignore-error + ``` + +4. 
To ensure that the TiDB optimizer can generate the optimal execution plan, execute the following SQL statements to collect statistics before conducting the TPC-C test: + + ```sql + ANALYZE TABLE customer; + ANALYZE TABLE district; + ANALYZE TABLE history; + ANALYZE TABLE item; + ANALYZE TABLE new_order; + ANALYZE TABLE order_line; + ANALYZE TABLE orders; + ANALYZE TABLE stock; + ANALYZE TABLE warehouse; + ``` + + To accelerate the collection of statistics, execute the following SQL statements before collecting: + + ```sql + SET tidb_build_stats_concurrency=16; + SET tidb_distsql_scan_concurrency=16; + SET tidb_index_serial_scan_concurrency=16; + ``` + +5. Run the following `go-tpc tpcc` command to conduct stress tests on the TiDB Dedicated cluster. For each concurrency, the test takes two hours. + + ```shell + go-tpc tpcc --host ${HOST} -P 4000 --warehouses 1000 run -D tpcc -T ${THREAD} --time 2h0m0s -p ${PASSWORD} --ignore-error + ``` + +6. Extract the tpmC data of `NEW_ORDER` from the result. + + TPC-C uses tpmC (transactions per minute) to measure the maximum qualified throughput (MQTh, Max Qualified Throughput). The transactions are the NewOrder transactions and the final unit of measure is the number of new orders processed per minute. + +## Test results + +The TPC-C performance of v8.1.0 in the [test environment](#test-environment) is as follows: + +| Threads | v8.1.0 tpmC | +|:--------|:----------| +| 50 | 43,660 | +| 100 | 75,495 | +| 200 | 102,013 | + +![TPC-C](/media/tidb-cloud/v8.1.0_tpcc.png) diff --git a/tidb-cloud/vector-search-changelogs.md b/tidb-cloud/vector-search-changelogs.md new file mode 100644 index 0000000000000..9083ef7027f0d --- /dev/null +++ b/tidb-cloud/vector-search-changelogs.md @@ -0,0 +1,14 @@ +--- +title: Vector Search Changelogs +summary: Learn about the new features, compatibility changes, improvements, and bug fixes for the TiDB vector search feature. 
+--- + +# Vector Search Changelogs + +## June 25, 2024 + +- TiDB Vector Search (beta) is now available for TiDB Serverless clusters in all regions for all users. + +## April 1, 2024 + +- TiDB Vector Search (beta) is now available for TiDB Serverless clusters in EU regions for invited users. diff --git a/tidb-cloud/vector-search-data-types.md b/tidb-cloud/vector-search-data-types.md new file mode 100644 index 0000000000000..542fd3327c873 --- /dev/null +++ b/tidb-cloud/vector-search-data-types.md @@ -0,0 +1,253 @@ +--- +title: Vector Data Types +summary: Learn about the Vector data types in TiDB. +--- + +# Vector Data Types + +TiDB provides the Vector data type, which is specifically optimized for AI vector embedding use cases. By using the Vector data type, you can store and query a sequence of floating-point numbers efficiently, such as `[0.3, 0.5, -0.1, ...]`. + +The following Vector data types are currently available: + +- `VECTOR`: A sequence of single-precision floating-point numbers. The dimensions can be different for each row. +- `VECTOR(D)`: A sequence of single-precision floating-point numbers with a fixed dimension `D`. + +The Vector data type provides these advantages over storing in a `JSON` column: + +- Vector Index support. A [Vector Search Index](/tidb-cloud/vector-search-index.md) can be built to speed up vector searching. +- Dimension enforcement. A dimension can be specified to forbid inserting vectors with different dimensions. +- Optimized storage format. Vector data types are stored more space-efficiently than the `JSON` data type. + +> **Note:** +> +> Vector data types are only available for [TiDB Serverless](/tidb-cloud/select-cluster-tier.md#tidb-serverless) clusters. + +## Value syntax + +A Vector value contains an arbitrary number of floating-point numbers. 
You can use a string in the following syntax to represent a Vector value: + +```sql +'[<float>, <float>, ...]' +``` + +Example: + +```sql +CREATE TABLE vector_table ( + id INT PRIMARY KEY, + embedding VECTOR(3) +); + +INSERT INTO vector_table VALUES (1, '[0.3, 0.5, -0.1]'); + +INSERT INTO vector_table VALUES (2, NULL); +``` + +Inserting vector values with invalid syntax will result in an error: + +```sql +[tidb]> INSERT INTO vector_table VALUES (3, '[5, ]'); +ERROR 1105 (HY000): Invalid vector text: [5, ] +``` + +As dimension 3 is enforced for the `embedding` column in the preceding example, inserting a vector with a different dimension will result in an error: + +```sql +[tidb]> INSERT INTO vector_table VALUES (4, '[0.3, 0.5]'); +ERROR 1105 (HY000): vector has 2 dimensions, does not fit VECTOR(3) +``` + +See [Vector Functions and Operators](/tidb-cloud/vector-search-functions-and-operators.md) for available functions and operators over the Vector data type. + +See [Vector Search Index](/tidb-cloud/vector-search-index.md) for building and using a vector search index. + +## Vectors with different dimensions + +You can store vectors with different dimensions in the same column by omitting the dimension parameter in the `VECTOR` type: + +```sql +CREATE TABLE vector_table ( + id INT PRIMARY KEY, + embedding VECTOR +); + +INSERT INTO vector_table VALUES (1, '[0.3, 0.5, -0.1]'); -- 3-dimensional vector, OK +INSERT INTO vector_table VALUES (2, '[0.3, 0.5]'); -- 2-dimensional vector, OK +``` + +However, you cannot build a [Vector Search Index](/tidb-cloud/vector-search-index.md) for this column, as vector distances can only be calculated between vectors with the same dimensions. + +## Comparison + +You can compare vector data types using [comparison operators](/functions-and-operators/operators.md) such as `=`, `!=`, `<`, `>`, `<=`, and `>=`. 
For a complete list of comparison operators and functions for vector data types, see [Vector Functions and Operators](/tidb-cloud/vector-search-functions-and-operators.md). + +Vector data types are compared element-wise numerically. Examples: + +- `[1] < [12]` +- `[1,2,3] < [1,2,5]` +- `[1,2,3] = [1,2,3]` +- `[2,2,3] > [1,2,3]` + +Vectors with different dimensions are compared using lexicographical comparison, with the following properties: + +- Two vectors are compared element by element, and each element is compared numerically. +- The first mismatching element determines which vector is lexicographically _less_ or _greater_ than the other. +- If one vector is a prefix of another, the shorter vector is lexicographically _less_ than the other. +- Vectors of the same length with identical elements are lexicographically _equal_. +- An empty vector is lexicographically _less_ than any non-empty vector. +- Two empty vectors are lexicographically _equal_. + +Examples: + +- `[] < [1]` +- `[1,2,3] < [1,2,3,0]` + +When comparing vector constants, consider performing an [explicit cast](#cast) from string to vector to avoid comparisons based on string values: + +```sql +-- Because string is given, TiDB is comparing strings: +[tidb]> SELECT '[12.0]' < '[4.0]'; ++--------------------+ +| '[12.0]' < '[4.0]' | ++--------------------+ +| 1 | ++--------------------+ +1 row in set (0.01 sec) + +-- Cast to vector explicitly to compare by vectors: +[tidb]> SELECT VEC_FROM_TEXT('[12.0]') < VEC_FROM_TEXT('[4.0]'); ++--------------------------------------------------+ +| VEC_FROM_TEXT('[12.0]') < VEC_FROM_TEXT('[4.0]') | ++--------------------------------------------------+ +| 0 | ++--------------------------------------------------+ +1 row in set (0.01 sec) +``` + +## Arithmetic + +Vector data types support element-wise arithmetic operations `+` (addition) and `-` (subtraction). However, performing arithmetic operations between vectors with different dimensions results in an error. 
+ +Examples: + +```sql +[tidb]> SELECT VEC_FROM_TEXT('[4]') + VEC_FROM_TEXT('[5]'); ++---------------------------------------------+ +| VEC_FROM_TEXT('[4]') + VEC_FROM_TEXT('[5]') | ++---------------------------------------------+ +| [9] | ++---------------------------------------------+ +1 row in set (0.01 sec) + +mysql> SELECT VEC_FROM_TEXT('[2,3,4]') - VEC_FROM_TEXT('[1,2,3]'); ++-----------------------------------------------------+ +| VEC_FROM_TEXT('[2,3,4]') - VEC_FROM_TEXT('[1,2,3]') | ++-----------------------------------------------------+ +| [1,1,1] | ++-----------------------------------------------------+ +1 row in set (0.01 sec) + +[tidb]> SELECT VEC_FROM_TEXT('[4]') + VEC_FROM_TEXT('[1,2,3]'); +ERROR 1105 (HY000): vectors have different dimensions: 1 and 3 +``` + +## Cast + +### Cast between Vector ⇔ String + +To cast between Vector and String, use the following functions: + +- `CAST(... AS VECTOR)`: String ⇒ Vector +- `CAST(... AS CHAR)`: Vector ⇒ String +- `VEC_FROM_TEXT`: String ⇒ Vector +- `VEC_AS_TEXT`: Vector ⇒ String + +There are implicit casts when calling functions receiving vector data types: + +```sql +-- There is an implicit cast here, since VEC_DIMS only accepts VECTOR arguments: +[tidb]> SELECT VEC_DIMS('[0.3, 0.5, -0.1]'); ++------------------------------+ +| VEC_DIMS('[0.3, 0.5, -0.1]') | ++------------------------------+ +| 3 | ++------------------------------+ +1 row in set (0.01 sec) + +-- Cast explicitly using VEC_FROM_TEXT: +[tidb]> SELECT VEC_DIMS(VEC_FROM_TEXT('[0.3, 0.5, -0.1]')); ++---------------------------------------------+ +| VEC_DIMS(VEC_FROM_TEXT('[0.3, 0.5, -0.1]')) | ++---------------------------------------------+ +| 3 | ++---------------------------------------------+ +1 row in set (0.01 sec) + +-- Cast explicitly using CAST(... 
AS VECTOR): +[tidb]> SELECT VEC_DIMS(CAST('[0.3, 0.5, -0.1]' AS VECTOR)); ++----------------------------------------------+ +| VEC_DIMS(CAST('[0.3, 0.5, -0.1]' AS VECTOR)) | ++----------------------------------------------+ +| 3 | ++----------------------------------------------+ +1 row in set (0.01 sec) +``` + +Use explicit casts when operators or functions accept multiple data types. For example, in comparisons, use explicit casts to compare vector numeric values instead of string values: + +```sql +-- Because string is given, TiDB is comparing strings: +[tidb]> SELECT '[12.0]' < '[4.0]'; ++--------------------+ +| '[12.0]' < '[4.0]' | ++--------------------+ +| 1 | ++--------------------+ +1 row in set (0.01 sec) + +-- Cast to vector explicitly to compare by vectors: +[tidb]> SELECT VEC_FROM_TEXT('[12.0]') < VEC_FROM_TEXT('[4.0]'); ++--------------------------------------------------+ +| VEC_FROM_TEXT('[12.0]') < VEC_FROM_TEXT('[4.0]') | ++--------------------------------------------------+ +| 0 | ++--------------------------------------------------+ +1 row in set (0.01 sec) +``` + +To cast vector into its string representation explicitly, use the `VEC_AS_TEXT()` function: + +```sql +-- String representation is normalized: +[tidb]> SELECT VEC_AS_TEXT('[0.3, 0.5, -0.1]'); ++--------------------------------------+ +| VEC_AS_TEXT('[0.3, 0.5, -0.1]') | ++--------------------------------------+ +| [0.3,0.5,-0.1] | ++--------------------------------------+ +1 row in set (0.01 sec) +``` + +For additional cast functions, see [Vector Functions and Operators](/tidb-cloud/vector-search-functions-and-operators.md). + +### Cast between Vector ⇔ other data types + +It is currently not possible to cast between Vector and other data types (like `JSON`) directly. You need to use String as an intermediate type. + +## Restrictions + +- The maximum supported Vector dimension is 16000. +- You cannot store `NaN`, `Infinity`, or `-Infinity` values in the vector data type. 
+- Currently, Vector data types cannot store double-precision floating-point numbers. This will be supported in a future release. + +For other limitations, see [Vector Search Limitations](/tidb-cloud/vector-search-limitations.md). + +## MySQL compatibility + +Vector data types are TiDB-specific and are not supported in MySQL. + +## See also + +- [Vector Functions and Operators](/tidb-cloud/vector-search-functions-and-operators.md) +- [Vector Search Index](/tidb-cloud/vector-search-index.md) +- [Improve Vector Search Performance](/tidb-cloud/vector-search-improve-performance.md) diff --git a/tidb-cloud/vector-search-functions-and-operators.md b/tidb-cloud/vector-search-functions-and-operators.md new file mode 100644 index 0000000000000..b54070ed34f9f --- /dev/null +++ b/tidb-cloud/vector-search-functions-and-operators.md @@ -0,0 +1,282 @@ +--- +title: Vector Functions and Operators +summary: Learn about functions and operators available for Vector Data Types. +--- + +# Vector Functions and Operators + +> **Note:** +> +> Vector data types and these vector functions are only available for [TiDB Serverless](/tidb-cloud/select-cluster-tier.md#tidb-serverless) clusters. + +## Vector functions + +
+ +**Vector Distance Functions:** + +| Function Name | Description | +| --------------------------------------------------------- | ---------------------------------------------------------------- | +| [VEC_L2_DISTANCE](#vec_l2_distance) | Calculates L2 distance (Euclidean distance) between two vectors | +| [VEC_COSINE_DISTANCE](#vec_cosine_distance) | Calculates the cosine distance between two vectors | +| [VEC_NEGATIVE_INNER_PRODUCT](#vec_negative_inner_product) | Calculates the negative of the inner product between two vectors | +| [VEC_L1_DISTANCE](#vec_l1_distance) | Calculates L1 distance (Manhattan distance) between two vectors | + +**Other Vector Functions:** + +| Function Name | Description | +| ------------------------------- | --------------------------------------------------- | +| [VEC_DIMS](#vec_dims) | Returns the dimension of a vector | +| [VEC_L2_NORM](#vec_l2_norm) | Calculates the L2 norm (Euclidean norm) of a vector | +| [VEC_FROM_TEXT](#vec_from_text) | Converts a string into a vector | +| [VEC_AS_TEXT](#vec_as_text) | Converts a vector into a string | + +## Extended built-in functions and operators + +The following built-in functions and operators are extended, supporting operating on [Vector Data Types](/tidb-cloud/vector-search-data-types.md). + +**Arithmetic operators:** + +| Name | Description | +| :-------------------------------------------------------------------------------------- | :--------------------------------------- | +| [`+`](https://dev.mysql.com/doc/refman/8.0/en/arithmetic-functions.html#operator_plus) | Vector element-wise addition operator | +| [`-`](https://dev.mysql.com/doc/refman/8.0/en/arithmetic-functions.html#operator_minus) | Vector element-wise subtraction operator | + +For more information about how vector arithmetic works, see [Vector Data Type | Arithmetic](/tidb-cloud/vector-search-data-types.md#arithmetic). 
+ +**Aggregate (GROUP BY) functions:** + +| Name | Description | +| :------------------------------------------------------------------------------------------------------------ | :----------------------------------------------- | +| [`COUNT()`](https://dev.mysql.com/doc/refman/8.0/en/aggregate-functions.html#function_count) | Return a count of the number of rows returned | +| [`COUNT(DISTINCT)`](https://dev.mysql.com/doc/refman/8.0/en/aggregate-functions.html#function_count-distinct) | Return the count of a number of different values | +| [`MAX()`](https://dev.mysql.com/doc/refman/8.0/en/aggregate-functions.html#function_max) | Return the maximum value | +| [`MIN()`](https://dev.mysql.com/doc/refman/8.0/en/aggregate-functions.html#function_min) | Return the minimum value | + +**Comparison functions and operators:** + +| Name | Description | +| ------------------------------------------------------------------------------------------------------------------- | ----------------------------------------------------- | +| [`BETWEEN ... 
AND ...`](https://dev.mysql.com/doc/refman/8.0/en/comparison-operators.html#operator_between) | Check whether a value is within a range of values | +| [`COALESCE()`](https://dev.mysql.com/doc/refman/8.0/en/comparison-operators.html#function_coalesce) | Return the first non-NULL argument | +| [`=`](https://dev.mysql.com/doc/refman/8.0/en/comparison-operators.html#operator_equal) | Equal operator | +| [`<=>`](https://dev.mysql.com/doc/refman/8.0/en/comparison-operators.html#operator_equal-to) | NULL-safe equal to operator | +| [`>`](https://dev.mysql.com/doc/refman/8.0/en/comparison-operators.html#operator_greater-than) | Greater than operator | +| [`>=`](https://dev.mysql.com/doc/refman/8.0/en/comparison-operators.html#operator_greater-than-or-equal) | Greater than or equal operator | +| [`GREATEST()`](https://dev.mysql.com/doc/refman/8.0/en/comparison-operators.html#function_greatest) | Return the largest argument | +| [`IN()`](https://dev.mysql.com/doc/refman/8.0/en/comparison-operators.html#operator_in) | Check whether a value is within a set of values | +| [`IS NULL`](https://dev.mysql.com/doc/refman/8.0/en/comparison-operators.html#operator_is-null) | NULL value test | +| [`ISNULL()`](https://dev.mysql.com/doc/refman/8.0/en/comparison-operators.html#function_isnull) | Test whether the argument is NULL | +| [`LEAST()`](https://dev.mysql.com/doc/refman/8.0/en/comparison-operators.html#function_least) | Return the smallest argument | +| [`<`](https://dev.mysql.com/doc/refman/8.0/en/comparison-operators.html#operator_less-than) | Less than operator | +| [`<=`](https://dev.mysql.com/doc/refman/8.0/en/comparison-operators.html#operator_less-than-or-equal) | Less than or equal operator | +| [`NOT BETWEEN ... 
AND ...`](https://dev.mysql.com/doc/refman/8.0/en/comparison-operators.html#operator_not-between) | Check whether a value is not within a range of values | +| [`!=`, `<>`](https://dev.mysql.com/doc/refman/8.0/en/comparison-operators.html#operator_not-equal) | Not equal operator | +| [`NOT IN()`](https://dev.mysql.com/doc/refman/8.0/en/comparison-operators.html#operator_not-in) | Check whether a value is not within a set of values | + +For more information about how vectors are compared, see [Vector Data Type | Comparison](/tidb-cloud/vector-search-data-types.md#comparison). + +**Control flow functions:** + +| Name | Description | +| :------------------------------------------------------------------------------------------------ | :--------------------------- | +| [`CASE`](https://dev.mysql.com/doc/refman/8.0/en/flow-control-functions.html#operator_case) | Case operator | +| [`IF()`](https://dev.mysql.com/doc/refman/8.0/en/flow-control-functions.html#function_if) | If/else construct | +| [`IFNULL()`](https://dev.mysql.com/doc/refman/8.0/en/flow-control-functions.html#function_ifnull) | Null if/else construct | +| [`NULLIF()`](https://dev.mysql.com/doc/refman/8.0/en/flow-control-functions.html#function_nullif) | Return NULL if expr1 = expr2 | + +**Cast functions:** + +| Name | Description | +| :------------------------------------------------------------------------------------------ | :----------------------------- | +| [`CAST()`](https://dev.mysql.com/doc/refman/8.0/en/cast-functions.html#function_cast) | Cast a value as a certain type | +| [`CONVERT()`](https://dev.mysql.com/doc/refman/8.0/en/cast-functions.html#function_convert) | Cast a value as a certain type | + +For more information about how to use `CAST()`, see [Vector Data Type | Cast](/tidb-cloud/vector-search-data-types.md#cast). 
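For intuition about what the distance functions listed above compute, the four formulas can be sketched in plain Python. This is illustrative only — TiDB evaluates these functions natively on `VECTOR` columns:

```python
import math

# Plain-Python sketches of the four vector distance formulas.
# Illustrative only: TiDB computes these natively on VECTOR columns.

def vec_l2_distance(p, q):
    """L2 (Euclidean) distance: sqrt(sum((p_i - q_i)^2))."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def vec_cosine_distance(p, q):
    """Cosine distance: 1 - dot(p, q) / (norm(p) * norm(q))."""
    dot = sum(a * b for a, b in zip(p, q))
    norm_p = math.sqrt(sum(a * a for a in p))
    norm_q = math.sqrt(sum(b * b for b in q))
    return 1.0 - dot / (norm_p * norm_q)

def vec_negative_inner_product(p, q):
    """Negative inner product: -sum(p_i * q_i)."""
    return -sum(a * b for a, b in zip(p, q))

def vec_l1_distance(p, q):
    """L1 (Manhattan) distance: sum(|p_i - q_i|)."""
    return sum(abs(a - b) for a, b in zip(p, q))

print(vec_l2_distance([0, 3], [4, 0]))             # 5.0
print(vec_negative_inner_product([1, 2], [3, 4]))  # -11
print(vec_l1_distance([0, 0], [3, 4]))             # 7
```

The arguments mirror the SQL examples in the full references below; `vec_cosine_distance([1, 1], [-1, -1])` likewise evaluates to `2`, up to floating-point rounding.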
+ +## Full references + +### VEC_L2_DISTANCE + +```sql +VEC_L2_DISTANCE(vector1, vector2) +``` + +Calculates the L2 distance (Euclidean distance) between two vectors using the following formula: + +$DISTANCE(p,q)=\sqrt {\sum \limits _{i=1}^{n}{(p_{i}-q_{i})^{2}}}$ + +The two vectors must have the same dimension. Otherwise an error is returned. + +Examples: + +```sql +[tidb]> select VEC_L2_DISTANCE('[0,3]', '[4,0]'); ++-----------------------------------+ +| VEC_L2_DISTANCE('[0,3]', '[4,0]') | ++-----------------------------------+ +| 5 | ++-----------------------------------+ +``` + +### VEC_COSINE_DISTANCE + +```sql +VEC_COSINE_DISTANCE(vector1, vector2) +``` + +Calculates the cosine distance between two vectors using the following formula: + +$DISTANCE(p,q)=1.0 - {\frac {\sum \limits _{i=1}^{n}{p_{i}q_{i}}}{{\sqrt {\sum \limits _{i=1}^{n}{p_{i}^{2}}}}\cdot {\sqrt {\sum \limits _{i=1}^{n}{q_{i}^{2}}}}}}$ + +The two vectors must have the same dimension. Otherwise an error is returned. + +Examples: + +```sql +[tidb]> select VEC_COSINE_DISTANCE('[1, 1]', '[-1, -1]'); ++-------------------------------------------+ +| VEC_COSINE_DISTANCE('[1, 1]', '[-1, -1]') | ++-------------------------------------------+ +| 2 | ++-------------------------------------------+ +``` + +### VEC_NEGATIVE_INNER_PRODUCT + +```sql +VEC_NEGATIVE_INNER_PRODUCT(vector1, vector2) +``` + +Calculates the distance by using the negative of the inner product between two vectors, using the following formula: + +$DISTANCE(p,q)=- INNER\_PROD(p,q)=-\sum \limits _{i=1}^{n}{p_{i}q_{i}}$ + +The two vectors must have the same dimension. Otherwise an error is returned. 
+ +Examples: + +```sql +[tidb]> select VEC_NEGATIVE_INNER_PRODUCT('[1,2]', '[3,4]'); ++----------------------------------------------+ +| VEC_NEGATIVE_INNER_PRODUCT('[1,2]', '[3,4]') | ++----------------------------------------------+ +| -11 | ++----------------------------------------------+ +``` + +### VEC_L1_DISTANCE + +```sql +VEC_L1_DISTANCE(vector1, vector2) +``` + +Calculates the L1 distance (Manhattan distance) between two vectors using the following formula: + +$DISTANCE(p,q)=\sum \limits _{i=1}^{n}{|p_{i}-q_{i}|}$ + +The two vectors must have the same dimension. Otherwise an error is returned. + +Examples: + +```sql +[tidb]> select VEC_L1_DISTANCE('[0,0]', '[3,4]'); ++-----------------------------------+ +| VEC_L1_DISTANCE('[0,0]', '[3,4]') | ++-----------------------------------+ +| 7 | ++-----------------------------------+ +``` + +### VEC_DIMS + +```sql +VEC_DIMS(vector) +``` + +Returns the dimension of a vector. + +Examples: + +```sql +[tidb]> select VEC_DIMS('[1,2,3]'); ++---------------------+ +| VEC_DIMS('[1,2,3]') | ++---------------------+ +| 3 | ++---------------------+ + +[tidb]> select VEC_DIMS('[]'); ++----------------+ +| VEC_DIMS('[]') | ++----------------+ +| 0 | ++----------------+ +``` + +### VEC_L2_NORM + +```sql +VEC_L2_NORM(vector) +``` + +Calculates the L2 norm (Euclidean norm) of a vector using the following formula: + +$NORM(p)=\sqrt {\sum \limits _{i=1}^{n}{p_{i}^{2}}}$ + +Examples: + +```sql +[tidb]> select VEC_L2_NORM('[3,4]'); ++----------------------+ +| VEC_L2_NORM('[3,4]') | ++----------------------+ +| 5 | ++----------------------+ +``` + +### VEC_FROM_TEXT + +```sql +VEC_FROM_TEXT(string) +``` + +Converts a string into a vector. 
+ +Examples: + +```sql +[tidb]> select VEC_FROM_TEXT('[1,2]') + VEC_FROM_TEXT('[3,4]'); ++-------------------------------------------------+ +| VEC_FROM_TEXT('[1,2]') + VEC_FROM_TEXT('[3,4]') | ++-------------------------------------------------+ +| [4,6] | ++-------------------------------------------------+ +``` + +### VEC_AS_TEXT + +```sql +VEC_AS_TEXT(vector) +``` + +Converts a vector into a string. + +Examples: + +```sql +[tidb]> select VEC_AS_TEXT('[1.000, 2.5]'); ++-------------------------------+ +| VEC_AS_TEXT('[1.000, 2.5]') | ++-------------------------------+ +| [1,2.5] | ++-------------------------------+ +``` + +## MySQL compatibility + +The vector functions and the extended usage of built-in functions and operators over vector data types are TiDB specific, and are not supported in MySQL. + +## See also + +- [Vector Data Types](/tidb-cloud/vector-search-data-types.md) diff --git a/tidb-cloud/vector-search-get-started-using-python.md b/tidb-cloud/vector-search-get-started-using-python.md new file mode 100644 index 0000000000000..08a3e315db98b --- /dev/null +++ b/tidb-cloud/vector-search-get-started-using-python.md @@ -0,0 +1,195 @@ +--- +title: Get Started with TiDB + AI via Python +summary: Learn how to quickly develop an AI application that performs semantic search using Python and TiDB Vector Search. +--- + +# Get Started with TiDB + AI via Python + +This tutorial demonstrates how to develop a simple AI application that provides **semantic search** features. Unlike traditional keyword search, semantic search intelligently understands the meaning behind your query. For example, if you have documents titled "dog", "fish", and "tree", and you search for "a swimming animal", the application would identify "fish" as the most relevant result. 
+ +Throughout this tutorial, you will develop this AI application using [TiDB Vector Search](/tidb-cloud/vector-search-overview.md), Python, [TiDB Vector SDK for Python](https://github.com/pingcap/tidb-vector-python), and AI models. + +> **Note** +> +> TiDB Vector Search is currently in beta and only available for [TiDB Serverless](/tidb-cloud/select-cluster-tier.md#tidb-serverless) clusters. + +## Prerequisites + +To complete this tutorial, you need: + +- [Python 3.8 or higher](https://www.python.org/downloads/) installed. +- [Git](https://git-scm.com/downloads) installed. +- A TiDB Serverless cluster. Follow [creating a TiDB Serverless cluster](/tidb-cloud/create-tidb-cluster-serverless.md) to create your own TiDB Cloud cluster if you don't have one. + +## Get started + +To run the demo directly, check out the sample code in the [pingcap/tidb-vector-python](https://github.com/pingcap/tidb-vector-python/blob/main/examples/python-client-quickstart) repository. + +### Step 1. Create a new Python project + +In your preferred directory, create a new Python project and a file named `example.py`: + +```shell +mkdir python-client-quickstart +cd python-client-quickstart +touch example.py +``` + +### Step 2. Install required dependencies + +In your project directory, run the following command to install the required packages: + +```shell +pip install sqlalchemy pymysql sentence-transformers tidb-vector python-dotenv +``` + +- `tidb-vector`: the Python client for interacting with Vector Search in TiDB Cloud. +- [`sentence-transformers`](https://sbert.net): a Python library that provides pre-trained models for generating [vector embeddings](/tidb-cloud/vector-search-overview.md#vector-embedding) from text. + +### Step 3. Configure the connection string to the TiDB cluster + +1. Navigate to the [**Clusters**](https://tidbcloud.com/console/clusters) page, and then click the name of your target cluster to go to its overview page. + +2. 
Click **Connect** in the upper-right corner. A connection dialog is displayed. + +3. Ensure the configurations in the connection dialog match your operating environment. + + - **Endpoint Type** is set to `Public`. + - **Branch** is set to `main`. + - **Connect With** is set to `SQLAlchemy`. + - **Operating System** matches your environment. + + > **Tip:** + > + > If your program is running in Windows Subsystem for Linux (WSL), switch to the corresponding Linux distribution. + +4. Click the **PyMySQL** tab and copy the connection string. + + > **Tip:** + > + > If you have not set a password yet, click **Generate Password** to generate a random password. + +5. In the root directory of your Python project, create a `.env` file and paste the connection string into it. + + The following is an example for macOS: + + ```dotenv + TIDB_DATABASE_URL="mysql+pymysql://.root:@gateway01..prod.aws.tidbcloud.com:4000/test?ssl_ca=/etc/ssl/cert.pem&ssl_verify_cert=true&ssl_verify_identity=true" + ``` + +### Step 4. Initialize the embedding model + +An [embedding model](/tidb-cloud/vector-search-overview.md#embedding-model) transforms data into [vector embeddings](/tidb-cloud/vector-search-overview.md#vector-embedding). This example uses the pre-trained model [**msmarco-MiniLM-L12-cos-v5**](https://huggingface.co/sentence-transformers/msmarco-MiniLM-L12-cos-v5) for text embedding. This lightweight model, provided by the `sentence-transformers` library, transforms text data into 384-dimensional vector embeddings. + +To set up the model, copy the following code into the `example.py` file. This code initializes a `SentenceTransformer` instance and defines a `text_to_embedding()` function for later use. 
+ +```python +from sentence_transformers import SentenceTransformer + +print("Downloading and loading the embedding model...") +embed_model = SentenceTransformer("sentence-transformers/msmarco-MiniLM-L12-cos-v5", trust_remote_code=True) +embed_model_dims = embed_model.get_sentence_embedding_dimension() + +def text_to_embedding(text): + """Generates vector embeddings for the given text.""" + embedding = embed_model.encode(text) + return embedding.tolist() +``` + +### Step 5. Connect to the TiDB cluster + +Use the `TiDBVectorClient` class to connect to your TiDB cluster and create a table `embedded_documents` with a vector column to serve as the vector store. + +> **Note** +> +> Ensure the dimension of your vector column matches the dimension of the vectors produced by your embedding model. For example, the **msmarco-MiniLM-L12-cos-v5** model generates vectors with 384 dimensions. + +```python +import os +from tidb_vector.integrations import TiDBVectorClient +from dotenv import load_dotenv + +# Load the connection string from the .env file +load_dotenv() + +vector_store = TiDBVectorClient( + # The table which will store the vector data. + table_name='embedded_documents', + # The connection string to the TiDB cluster. + connection_string=os.environ.get('TIDB_DATABASE_URL'), + # The dimension of the vector generated by the embedding model. + vector_dimension=embed_model_dims, + # Determine whether to recreate the table if it already exists. + drop_existing_table=True, +) +``` + +### Step 6. Embed text data and store the vectors + +In this step, you will prepare sample documents containing single words, such as "dog", "fish", and "tree". The following code uses the `text_to_embedding()` function to transform these text documents into vector embeddings, and then inserts them into the vector store. 
+ +```python +documents = [ + { + "id": "f8e7dee2-63b6-42f1-8b60-2d46710c1971", + "text": "dog", + "embedding": text_to_embedding("dog"), + "metadata": {"category": "animal"}, + }, + { + "id": "8dde1fbc-2522-4ca2-aedf-5dcb2966d1c6", + "text": "fish", + "embedding": text_to_embedding("fish"), + "metadata": {"category": "animal"}, + }, + { + "id": "e4991349-d00b-485c-a481-f61695f2b5ae", + "text": "tree", + "embedding": text_to_embedding("tree"), + "metadata": {"category": "plant"}, + }, +] + +vector_store.insert( + ids=[doc["id"] for doc in documents], + texts=[doc["text"] for doc in documents], + embeddings=[doc["embedding"] for doc in documents], + metadatas=[doc["metadata"] for doc in documents], +) +``` + +### Step 7. Perform semantic search + +In this step, you will search for "a swimming animal", which doesn't directly match any words in existing documents. + +The following code uses the `text_to_embedding()` function again to convert the query text into a vector embedding, and then queries with the embedding to find the top three closest matches. + +```python +def print_result(query, result): + print(f"Search result (\"{query}\"):") + for r in result: + print(f"- text: \"{r.document}\", distance: {r.distance}") + +query = "a swimming animal" +query_embedding = text_to_embedding(query) +search_result = vector_store.query(query_embedding, k=3) +print_result(query, search_result) +``` + +Run the `example.py` file and the output is as follows: + +```plain +Search result ("a swimming animal"): +- text: "fish", distance: 0.4562914811223072 +- text: "dog", distance: 0.6469335836410557 +- text: "tree", distance: 0.798545178640937 +``` + +From the output, the swimming animal is most likely a fish, or a dog with a gift for swimming. + +This demonstration shows how vector search can efficiently locate the most relevant documents, with search results organized by the proximity of the vectors: the smaller the distance, the more relevant the document. 
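In a real application, you often want to discard weak matches rather than always returning the top K. The following is a minimal post-processing sketch; the `Result` shape mirrors the `(document, distance)` fields printed above, and the `0.7` cutoff is an arbitrary illustrative value, not part of the `tidb-vector` API:

```python
from collections import namedtuple

# Mirrors the (document, distance) shape of the search results printed above.
Result = namedtuple("Result", ["document", "distance"])

def filter_relevant(results, max_distance=0.7):
    """Keep only results below a distance threshold: a smaller cosine
    distance means a closer match, so larger values are weak matches.
    The 0.7 cutoff is an arbitrary illustrative value."""
    return [r for r in results if r.distance <= max_distance]

results = [
    Result("fish", 0.456),
    Result("dog", 0.647),
    Result("tree", 0.799),
]
print([r.document for r in filter_relevant(results)])  # ['fish', 'dog']
```

With this post-processing, "tree" is dropped as irrelevant even though it is among the top three nearest neighbors.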
+ +## See also + +- [Vector Data Types](/tidb-cloud/vector-search-data-types.md) +- [Vector Search Index](/tidb-cloud/vector-search-index.md) diff --git a/tidb-cloud/vector-search-get-started-using-sql.md b/tidb-cloud/vector-search-get-started-using-sql.md new file mode 100644 index 0000000000000..5da699c224bae --- /dev/null +++ b/tidb-cloud/vector-search-get-started-using-sql.md @@ -0,0 +1,148 @@ +--- +title: Get Started with Vector Search via SQL +summary: Learn how to quickly get started with Vector Search in TiDB Cloud using SQL statements and power the generative AI application. +--- + +# Get Started with Vector Search via SQL + +TiDB extends MySQL syntax to support [Vector Search](/tidb-cloud/vector-search-overview.md) and introduce new [Vector data types](/tidb-cloud/vector-search-data-types.md) and several [vector functions](/tidb-cloud/vector-search-functions-and-operators.md). + +This tutorial demonstrates how to get started with TiDB Vector Search just using SQL statements. You will learn how to use the [MySQL command-line client](https://dev.mysql.com/doc/refman/8.4/en/mysql.html) to: + +- Connect to your TiDB cluster. +- Create a vector table. +- Store vector embeddings. +- Perform vector search queries. + +> **Note** +> +> TiDB Vector Search is currently in beta and only available for [TiDB Serverless](/tidb-cloud/select-cluster-tier.md#tidb-serverless) clusters. + +## Prerequisites + +To complete this tutorial, you need: + +- [MySQL command-line client](https://dev.mysql.com/doc/refman/8.4/en/mysql.html) (MySQL CLI) installed on your machine. +- A TiDB Serverless cluster. Follow [creating a TiDB Serverless cluster](/tidb-cloud/create-tidb-cluster-serverless.md) to create your own TiDB Cloud cluster if you don't have one. + +## Get started + +### Step 1. Connect to the TiDB cluster + +1. Navigate to the [**Clusters**](https://tidbcloud.com/console/clusters) page, and then click the name of your target cluster to go to its overview page. + +2. 
Click **Connect** in the upper-right corner. A connection dialog is displayed. + +3. In the connection dialog, select **MySQL CLI** from the **Connect With** drop-down list and keep the default setting of the **Endpoint Type** as **Public**. + +4. If you have not set a password yet, click **Generate Password** to generate a random password. + +5. Copy the connection command and paste it into your terminal. The following is an example for macOS: + + ```bash + mysql -u '.root' -h '' -P 4000 -D 'test' --ssl-mode=VERIFY_IDENTITY --ssl-ca=/etc/ssl/cert.pem -p'' + ``` + +### Step 2. Create a vector table + +With vector search support, you can use the `VECTOR` type column to store [vector embeddings](/tidb-cloud/vector-search-overview.md#vector-embedding) in TiDB. + +To create a table with a three-dimensional `VECTOR` column, execute the following SQL statements using your MySQL CLI: + +```sql +USE test; +CREATE TABLE embedded_documents ( + id INT PRIMARY KEY, + -- Column to store the original content of the document. + document TEXT, + -- Column to store the vector representation of the document. + embedding VECTOR(3) +); +``` + +The expected output is as follows: + +```text +Query OK, 0 rows affected (0.27 sec) +``` + +### Step 3. Store the vector embeddings + +Insert three documents with their [vector embeddings](/tidb-cloud/vector-search-overview.md#vector-embedding) into the `embedded_documents` table: + +```sql +INSERT INTO embedded_documents +VALUES + (1, 'dog', '[1,2,1]'), + (2, 'fish', '[1,2,4]'), + (3, 'tree', '[1,0,0]'); +``` + +The expected output is as follows: + +``` +Query OK, 3 rows affected (0.15 sec) +Records: 3 Duplicates: 0 Warnings: 0 +``` + +> **Note** +> +> This example simplifies the dimensions of the vector embeddings and uses only 3-dimensional vectors for demonstration purposes. 
+> +> In real-world applications, [embedding models](/tidb-cloud/vector-search-overview.md#embedding-model) often produce vector embeddings with hundreds or thousands of dimensions. + +### Step 4. Query the vector table + +To verify that the documents have been inserted correctly, query the `embedded_documents` table: + +```sql +SELECT * FROM embedded_documents; +``` + +The expected output is as follows: + +```sql ++----+----------+-----------+ +| id | document | embedding | ++----+----------+-----------+ +| 1 | dog | [1,2,1] | +| 2 | fish | [1,2,4] | +| 3 | tree | [1,0,0] | ++----+----------+-----------+ +3 rows in set (0.15 sec) +``` + +### Step 5. Perform a vector search query + +Similar to full-text search, users provide search terms to the application when using vector search. + +In this example, the search term is "a swimming animal", and its corresponding vector embedding is `[1,2,3]`. In practical applications, you need to use an embedding model to convert the user's search term into a vector embedding. + +Execute the following SQL statement and TiDB will identify the top three documents closest to the search term by calculating and sorting the cosine distances (`vec_cosine_distance`) between the vector embeddings. + +```sql +SELECT id, document, vec_cosine_distance(embedding, '[1,2,3]') AS distance +FROM embedded_documents +ORDER BY distance +LIMIT 3; +``` + +The expected output is as follows: + +```plain ++----+----------+---------------------+ +| id | document | distance | ++----+----------+---------------------+ +| 2 | fish | 0.00853986601633272 | +| 1 | dog | 0.12712843905603044 | +| 3 | tree | 0.7327387580875756 | ++----+----------+---------------------+ +3 rows in set (0.15 sec) +``` + +From the output, the swimming animal is most likely a fish, or a dog with a gift for swimming. 
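To see where these distance values come from, you can reproduce the calculation outside the database. The following is a plain-Python sketch of the cosine distance formula, for illustration only — TiDB computes `vec_cosine_distance` natively:

```python
import math

def cosine_distance(p, q):
    """Cosine distance: 1 - dot(p, q) / (norm(p) * norm(q))."""
    dot = sum(a * b for a, b in zip(p, q))
    norm_p = math.sqrt(sum(a * a for a in p))
    norm_q = math.sqrt(sum(b * b for b in q))
    return 1.0 - dot / (norm_p * norm_q)

# The query embedding and the three document embeddings inserted above.
query = [1, 2, 3]
for document, embedding in [("dog", [1, 2, 1]), ("fish", [1, 2, 4]), ("tree", [1, 0, 0])]:
    print(document, round(cosine_distance(query, embedding), 6))
# dog 0.127128
# fish 0.00854
# tree 0.732739
```

The values match the `distance` column in the SQL output, confirming that `fish` (`[1,2,4]`) is the closest vector to the query `[1,2,3]`.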
+
+## See also
+
+- [Vector Data Types](/tidb-cloud/vector-search-data-types.md)
+- [Vector Search Index](/tidb-cloud/vector-search-index.md)
diff --git a/tidb-cloud/vector-search-improve-performance.md b/tidb-cloud/vector-search-improve-performance.md
new file mode 100644
index 0000000000000..651bc94251370
--- /dev/null
+++ b/tidb-cloud/vector-search-improve-performance.md
@@ -0,0 +1,36 @@
+---
+title: Improve Vector Search Performance
+summary: Learn best practices for improving the performance of TiDB Vector Search.
+---
+
+# Improve Vector Search Performance
+
+TiDB Vector Search allows you to perform approximate nearest neighbor (ANN) queries that search for results similar to a given object, such as an image or a document. To improve query performance, review the following best practices.
+
+## Add vector search index for vector columns
+
+The [vector search index](/tidb-cloud/vector-search-index.md) dramatically improves the performance of vector search queries, usually by 10x or more, at the cost of only a small decrease in recall.
+
+## Ensure vector indexes are fully built
+
+Vector indexes are built asynchronously. Until all vector data is indexed, vector search performance is suboptimal. To check the index build progress, see [View index build progress](/tidb-cloud/vector-search-index.md#view-index-build-progress).
+
+## Reduce vector dimensions or shorten embeddings
+
+The computational complexity of vector search indexing and queries increases significantly as the size of vectors grows, because larger vectors require more floating-point comparisons.
+
+To optimize performance, consider reducing the vector dimensions whenever feasible. This usually requires switching to a different embedding model. When switching, make sure to measure the impact of the new embedding model on the accuracy of your vector queries.
+
+Certain embedding models, such as OpenAI `text-embedding-3-large`, support [shortening embeddings](https://openai.com/index/new-embedding-models-and-api-updates/), which removes some numbers from the end of vector sequences without losing the embedding's concept-representing properties. You can also use such an embedding model to reduce the vector dimensions.
+
+## Exclude vector columns from the results
+
+Vector embedding data is usually large and is only used during the search process. By excluding vector columns from the query results, you can greatly reduce the amount of data transferred between the TiDB server and your SQL client, thereby improving query performance.
+
+To exclude vector columns, explicitly list the columns you want to retrieve in the `SELECT` clause, instead of using `SELECT *`.
+
+## Warm up the index
+
+When an index is accessed cold, it takes time to load the whole index from S3 or from disk (instead of from memory). Such processes usually result in high tail latency. Additionally, if there are no SQL queries on a cluster for a long time (for example, several hours), the compute resource is reclaimed, which results in cold access the next time.
+
+To avoid such tail latencies, warm up your index before the actual workload by running similar vector search queries that hit the vector index.
diff --git a/tidb-cloud/vector-search-index.md b/tidb-cloud/vector-search-index.md
new file mode 100644
index 0000000000000..5efbedfa48fdc
--- /dev/null
+++ b/tidb-cloud/vector-search-index.md
@@ -0,0 +1,274 @@
+---
+title: Vector Search Index
+summary: Learn how to build and use the vector search index to accelerate K-Nearest neighbors (KNN) queries in TiDB.
+---
+
+# Vector Search Index
+
+K-nearest neighbors (KNN) search is the problem of finding the K closest points for a given point in a vector space. The most straightforward approach to solving this problem is a brute force search, where the distance between all points in the vector space and the reference point is computed.
This method guarantees perfect accuracy, but it is usually too slow for practical applications. Thus, nearest neighbor search problems are often solved with approximate algorithms.
+
+In TiDB, you can create and utilize vector search indexes for such approximate nearest neighbor (ANN) searches over columns with [vector data types](/tidb-cloud/vector-search-data-types.md). By using vector search indexes, vector search queries can be completed in milliseconds.
+
+TiDB currently supports the following vector search index algorithms:
+
+- HNSW
+
+> **Note:**
+>
+> Vector search index is only available for [TiDB Serverless](/tidb-cloud/select-cluster-tier.md#tidb-serverless) clusters.
+
+## Create the HNSW vector index
+
+[HNSW](https://en.wikipedia.org/wiki/Hierarchical_navigable_small_world) is one of the most popular vector indexing algorithms. The HNSW index provides good performance with relatively high accuracy (> 98% in typical cases).
+
+To create an HNSW vector index, specify the index definition in the comment of a column with a [vector data type](/tidb-cloud/vector-search-data-types.md) when creating the table:
+
+```sql
+CREATE TABLE vector_table_with_index (
+    id INT PRIMARY KEY, doc TEXT,
+    embedding VECTOR(3) COMMENT "hnsw(distance=cosine)"
+);
+```
+
+> **Note:**
+>
+> The syntax to create a vector index might change in future releases.
+
+You must specify the distance metric via the `distance=` configuration when creating the vector index:
+
+- Cosine Distance: `COMMENT "hnsw(distance=cosine)"`
+- L2 Distance: `COMMENT "hnsw(distance=l2)"`
+
+The vector index can only be created for fixed-dimensional vector columns like `VECTOR(3)`. It cannot be created for mixed-dimensional vector columns like `VECTOR`, because vector distances can only be calculated between vectors with the same dimensions.
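As a point of reference, the exact brute-force search that the vector index approximates can be sketched in a few lines of Python. It computes one distance per row, which is the O(n) work per query that an HNSW index avoids. This is illustrative code, not how TiDB implements the search:

```python
import math

def l2_distance(p, q):
    """L2 (Euclidean) distance between two equal-dimension vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def brute_force_knn(query, rows, k):
    """Exact KNN: compute the distance to every row, then keep the K closest.
    This scans all n rows per query -- the cost an HNSW index avoids."""
    return sorted(rows, key=lambda row: l2_distance(query, row["embedding"]))[:k]

# Hypothetical in-memory rows standing in for a table with a VECTOR(3) column.
rows = [
    {"id": 1, "embedding": [1, 2, 1]},
    {"id": 2, "embedding": [1, 2, 4]},
    {"id": 3, "embedding": [1, 0, 0]},
]
print([row["id"] for row in brute_force_knn([1, 2, 3], rows, k=2)])  # [2, 1]
```

An ANN index such as HNSW returns the same top-K ordering with high probability while visiting only a small fraction of the rows.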
+
+If you are using programming language SDKs or ORMs, refer to the following documentation for creating vector indexes:
+
+- Python: [TiDB Vector SDK for Python](https://github.com/pingcap/tidb-vector-python)
+- Python: [SQLAlchemy](/tidb-cloud/vector-search-integrate-with-sqlalchemy.md)
+- Python: [Peewee](/tidb-cloud/vector-search-integrate-with-peewee.md)
+- Python: [Django](/tidb-cloud/vector-search-integrate-with-django-orm.md)
+
+Be aware of the following limitations when creating the vector index. These limitations might be removed in future releases:
+
+- L1 distance and inner product are not supported for the vector index yet.
+
+- You can only define and create a vector index when the table is created. You cannot create the vector index on demand using DDL statements after the table is created, nor can you drop the vector index using DDL statements.
+
+## Use the vector index
+
+The vector search index can be used in K-nearest neighbor search queries written in the `ORDER BY ... LIMIT` form, as follows:
+
+```sql
+SELECT *
+FROM vector_table_with_index
+ORDER BY Vec_Cosine_Distance(embedding, '[1, 2, 3]')
+LIMIT 10
+```
+
+To utilize the index in a vector search, you must use the same distance metric as you defined when creating the vector index.
+
+## Use the vector index with filters
+
+Queries that contain a pre-filter (using the `WHERE` clause) cannot utilize the vector index, because they are not querying for K-nearest neighbors according to the SQL semantics.
For example: + +```sql +-- Filter is performed before kNN, so Vector Index cannot be used: + +SELECT * FROM vec_table +WHERE category = "document" +ORDER BY Vec_Cosine_distance(embedding, '[1, 2, 3]') +LIMIT 5; +``` + +Several workarounds are as follows: + +**Post-Filter after Vector Search:** Query for the K-Nearest neighbors first, then filter out unwanted results: + +```sql +-- The filter is performed after kNN for these queries, so Vector Index can be used: + +SELECT * FROM +( + SELECT * FROM vec_table + ORDER BY Vec_Cosine_distance(embedding, '[1, 2, 3]') + LIMIT 5 +) t +WHERE category = "document"; + +-- Note that this query may return less than 5 results if some are filtered out. +``` + +**Use Table Partitioning**: Queries within the [table partition](/partitioned-table.md) can fully utilize the vector index. This can be useful if you want to perform equality filters, as equality filters can be turned into accessing specified partitions. + +Example: Suppose you want to find the closest documentation for a specific product version. + +```sql +-- Filter is performed before kNN, so Vector Index cannot be used: +SELECT * FROM docs +WHERE ver = "v2.0" +ORDER BY Vec_Cosine_distance(embedding, '[1, 2, 3]') +LIMIT 5; +``` + +Instead of writing a query using the `WHERE` clause, you can partition the table and then query within the partition using the [`PARTITION` keyword](/partitioned-table.md#partition-selection): + +```sql +CREATE TABLE docs ( + id INT, + ver VARCHAR(10), + doc TEXT, + embedding VECTOR(3) COMMENT "hnsw(distance=cosine)" +) PARTITION BY LIST COLUMNS (ver) ( + PARTITION p_v1_0 VALUES IN ('v1.0'), + PARTITION p_v1_1 VALUES IN ('v1.1'), + PARTITION p_v1_2 VALUES IN ('v1.2'), + PARTITION p_v2_0 VALUES IN ('v2.0') +); + +SELECT * FROM docs +PARTITION (p_v2_0) +ORDER BY Vec_Cosine_distance(embedding, '[1, 2, 3]') +LIMIT 5; +``` + +See [Table Partitioning](/partitioned-table.md) for more information. 
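The semantic difference between the two query shapes can be illustrated without a database. In the following sketch, the rows and their precomputed distances are hypothetical; pre-filtering and post-filtering over the same data return different result sets:

```python
def top_k(rows, k, key):
    """Keep the k rows with the smallest key value (the kNN step)."""
    return sorted(rows, key=key)[:k]

# Hypothetical rows with precomputed distances to some query vector.
rows = [
    {"category": "image",    "distance": 0.1},
    {"category": "document", "distance": 0.2},
    {"category": "image",    "distance": 0.3},
    {"category": "document", "distance": 0.4},
    {"category": "document", "distance": 0.5},
]

# Pre-filter (WHERE before kNN): filter first, then take the K nearest.
pre = top_k([r for r in rows if r["category"] == "document"], 2,
            key=lambda r: r["distance"])

# Post-filter (kNN first): take the K nearest, then filter.
# This can return fewer than K rows.
post = [r for r in top_k(rows, 2, key=lambda r: r["distance"])
        if r["category"] == "document"]

print([r["distance"] for r in pre])   # [0.2, 0.4]
print([r["distance"] for r in post])  # [0.2]
```

Only the post-filter form maps to an index-accelerated `ORDER BY ... LIMIT` query, which is why it can return fewer rows than requested.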
+ +## View index build progress + +Unlike other indexes, vector indexes are built asynchronously. Therefore, vector indexes might not be immediately available after bulk data insertion. This does not affect data correctness or consistency, and you can perform vector searches at any time and get complete results. However, performance will be suboptimal until vector indexes are fully built. + +To view the index build progress, you can query the `INFORMATION_SCHEMA.TIFLASH_INDEXES` table as follows: + +```sql +SELECT * FROM INFORMATION_SCHEMA.TIFLASH_INDEXES; ++---------------+------------+----------------+----------+--------------------+-------------+-----------+------------+---------------------+-------------------------+--------------------+------------------------+------------------+ +| TIDB_DATABASE | TIDB_TABLE | TIDB_PARTITION | TABLE_ID | BELONGING_TABLE_ID | COLUMN_NAME | COLUMN_ID | INDEX_KIND | ROWS_STABLE_INDEXED | ROWS_STABLE_NOT_INDEXED | ROWS_DELTA_INDEXED | ROWS_DELTA_NOT_INDEXED | TIFLASH_INSTANCE | ++---------------+------------+----------------+----------+--------------------+-------------+-----------+------------+---------------------+-------------------------+--------------------+------------------------+------------------+ +| test | sample | NULL | 106 | -1 | vec | 2 | HNSW | 0 | 13000 | 0 | 2000 | store-6ba728d2 | +| test | sample | NULL | 106 | -1 | vec | 2 | HNSW | 10500 | 0 | 0 | 4500 | store-7000164f | ++---------------+------------+----------------+----------+--------------------+-------------+-----------+------------+---------------------+-------------------------+--------------------+------------------------+------------------+ +``` + +- The `ROWS_STABLE_INDEXED` and `ROWS_STABLE_NOT_INDEXED` columns show the index build progress. When `ROWS_STABLE_NOT_INDEXED` becomes 0, the index build is complete. + + As a reference, indexing a 500 MiB vector dataset might take up to 20 minutes. The indexer can run in parallel for multiple tables. 
Currently, adjusting the indexer priority or speed is not supported.
+
+- The `ROWS_DELTA_NOT_INDEXED` column shows the number of rows in the Delta layer. The Delta layer stores _recently_ inserted or updated rows and is periodically merged into the Stable layer according to the write workload. This merge process is called Compaction.
+
+  The Delta layer is never indexed. To achieve optimal performance, you can force the merge of the Delta layer into the Stable layer so that all data can be indexed:
+
+  ```sql
+  ALTER TABLE <TABLE_NAME> COMPACT;
+  ```
+
+  For more information, see [`ALTER TABLE ... COMPACT`](/sql-statements/sql-statement-alter-table-compact.md).
+
+## Check whether the vector index is used
+
+Use the [`EXPLAIN`](/sql-statements/sql-statement-explain.md) or [`EXPLAIN ANALYZE`](/sql-statements/sql-statement-explain-analyze.md) statement to check whether a query is using the vector index. When `annIndex:` is present in the `operator info` column for the `TableFullScan` executor, it means that the table scan is utilizing the vector index.
+
+**Example: The vector index is used**
+
+```sql
+[tidb]> EXPLAIN SELECT * FROM vector_table_with_index
+ORDER BY Vec_Cosine_Distance(embedding, '[1, 2, 3]')
+LIMIT 10;
++-----+-------------------------------------------------------------------------------------+
+| ... | operator info                                                                       |
++-----+-------------------------------------------------------------------------------------+
+| ... | ...                                                                                 |
+| ... | Column#5, offset:0, count:10                                                        |
+| ... | ..., vec_cosine_distance(test.vector_table_with_index.embedding, [1,2,3])->Column#5 |
+| ... | MppVersion: 1, data:ExchangeSender_16                                               |
+| ... | ExchangeType: PassThrough                                                           |
+| ... | ...                                                                                 |
+| ... | Column#4, offset:0, count:10                                                        |
+| ... | ..., vec_cosine_distance(test.vector_table_with_index.embedding, [1,2,3])->Column#4 |
+| ... | annIndex:COSINE(test.vector_table_with_index.embedding..[1,2,3], limit:10), ...     |
++-----+-------------------------------------------------------------------------------------+
+9 rows in set (0.01 sec)
+```
+
+**Example: The vector index is not used because no Top K is specified**
+
+```sql
+[tidb]> EXPLAIN SELECT * FROM vector_table_with_index
+    -> ORDER BY Vec_Cosine_Distance(embedding, '[1, 2, 3]');
++--------------------------------+-----+--------------------------------------------------+
+| id                             | ... | operator info                                    |
++--------------------------------+-----+--------------------------------------------------+
+| Projection_15                  | ... | ...                                              |
+| └─Sort_4                       | ... | Column#4                                         |
+|   └─Projection_16              | ... | ..., vec_cosine_distance(..., [1,2,3])->Column#4 |
+|     └─TableReader_14           | ... | MppVersion: 1, data:ExchangeSender_13            |
+|       └─ExchangeSender_13      | ... | ExchangeType: PassThrough                        |
+|         └─TableFullScan_12     | ... | keep order:false, stats:pseudo                   |
++--------------------------------+-----+--------------------------------------------------+
+6 rows in set, 1 warning (0.01 sec)
+```
+
+When the vector index cannot be used, in some cases a warning is generated to help you identify the cause:
+
+```sql
+-- Using a wrong distance metric:
+[tidb]> EXPLAIN SELECT * FROM vector_table_with_index
+ORDER BY Vec_L2_Distance(embedding, '[1, 2, 3]')
+LIMIT 10;
+
+[tidb]> SHOW WARNINGS;
+ANN index not used: not ordering by COSINE distance
+
+-- Using a wrong order:
+[tidb]> EXPLAIN SELECT * FROM vector_table_with_index
+ORDER BY Vec_Cosine_Distance(embedding, '[1, 2, 3]') DESC
+LIMIT 10;
+
+[tidb]> SHOW WARNINGS;
+ANN index not used: index can be used only when ordering by vec_cosine_distance() in ASC order
+```
+
+## Analyze vector search performance
+
+The [`EXPLAIN ANALYZE`](/sql-statements/sql-statement-explain-analyze.md) statement contains detailed information about how the vector index is used in the `execution info` column:
+
+```sql
+[tidb]> EXPLAIN ANALYZE SELECT * FROM vector_table_with_index
+ORDER BY Vec_Cosine_Distance(embedding, '[1, 2, 3]')
+LIMIT 10;
+
++-----+--------------------------------------------------------+-----+
+|     | execution info                                         |     |
++-----+--------------------------------------------------------+-----+
+| ... | time:339.1ms, loops:2, RU:0.000000, Concurrency:OFF    | ... |
+| ... | time:339ms, loops:2                                    | ... |
+| ... | time:339ms, loops:3, Concurrency:OFF                   | ... |
+| ... | time:339ms, loops:3, cop_task: {...}                   | ... |
+| ... | tiflash_task:{time:327.5ms, loops:1, threads:4}        | ... |
+| ... | tiflash_task:{time:327.5ms, loops:1, threads:4}        | ... |
+| ... | tiflash_task:{time:327.5ms, loops:1, threads:4}        | ... |
+| ... | tiflash_task:{time:327.5ms, loops:1, threads:4}        | ... |
+| ... | tiflash_task:{...}, vector_idx:{                       | ... |
+|     |   load:{total:68ms,from_s3:1,from_disk:0,from_cache:0},|     |
+|     |   search:{total:0ms,visited_nodes:2,discarded_nodes:0},|     |
+|     |   read:{vec_total:0ms,others_total:0ms}},...}          |     |
++-----+--------------------------------------------------------+-----+
+```
+
+> **Note:**
+>
+> The execution information is internal. Fields and formats are subject to change without notice. Do not rely on them.
+
+Explanation of some important fields:
+
+- `vector_index.load.total`: The total duration of loading the index. This field could be larger than the actual query time because multiple vector indexes might be loaded in parallel.
+- `vector_index.load.from_s3`: Number of indexes loaded from S3.
+- `vector_index.load.from_disk`: Number of indexes loaded from disk. The index was downloaded from S3 previously.
+- `vector_index.load.from_cache`: Number of indexes loaded from the cache. The index was downloaded from S3 previously.
+- `vector_index.search.total`: The total duration of searching in the index. Large latency usually means the index is cold (never accessed before, or accessed long ago), so there is heavy I/O when searching through the index. This field could be larger than the actual query time because multiple vector indexes might be searched in parallel.
+- `vector_index.search.discarded_nodes`: Number of vector rows visited but discarded during the search. These discarded vectors are not considered in the search result. Large values usually indicate that there are many stale rows caused by UPDATE or DELETE statements. + +See [`EXPLAIN`](/sql-statements/sql-statement-explain.md), [`EXPLAIN ANALYZE`](/sql-statements/sql-statement-explain-analyze.md), and [EXPLAIN Walkthrough](/explain-walkthrough.md) for interpreting the output. + +## See also + +- [Improve Vector Search Performance](/tidb-cloud/vector-search-improve-performance.md) +- [Vector Data Types](/tidb-cloud/vector-search-data-types.md) diff --git a/tidb-cloud/vector-search-integrate-with-django-orm.md b/tidb-cloud/vector-search-integrate-with-django-orm.md new file mode 100644 index 0000000000000..cc11b69ad1d4e --- /dev/null +++ b/tidb-cloud/vector-search-integrate-with-django-orm.md @@ -0,0 +1,237 @@ +--- +title: Integrate TiDB Vector Search with Django ORM +summary: Learn how to integrate TiDB Vector Search with Django ORM to store embeddings and perform semantic search. +--- + +# Integrate TiDB Vector Search with Django ORM + +This tutorial walks you through how to use [Django](https://www.djangoproject.com/) ORM to interact with the TiDB Vector Search, store embeddings, and perform vector search queries. + +> **Note** +> +> TiDB Vector Search is currently in beta and only available for [TiDB Serverless](/tidb-cloud/select-cluster-tier.md#tidb-serverless) clusters. + +## Prerequisites + +To complete this tutorial, you need: + +- [Python 3.8 or higher](https://www.python.org/downloads/) installed. +- [Git](https://git-scm.com/downloads) installed. +- A TiDB Serverless cluster. Follow [creating a TiDB Serverless cluster](/tidb-cloud/create-tidb-cluster-serverless.md) to create your own TiDB Cloud cluster if you don't have one. 
+
+## Run the sample app
+
+You can quickly learn how to integrate TiDB Vector Search with Django ORM by following the steps below.
+
+### Step 1. Clone the repository
+
+Clone the `tidb-vector-python` repository to your local machine:
+
+```shell
+git clone https://github.com/pingcap/tidb-vector-python.git
+```
+
+### Step 2. Create a virtual environment
+
+Create a virtual environment for your project:
+
+```bash
+cd tidb-vector-python/examples/orm-django-quickstart
+python3 -m venv .venv
+source .venv/bin/activate
+```
+
+### Step 3. Install required dependencies
+
+Install the required dependencies for the demo project:
+
+```bash
+pip install -r requirements.txt
+```
+
+For your existing project, you can install the following packages:
+
+```bash
+pip install Django django-tidb mysqlclient numpy python-dotenv
+```
+
+If you encounter issues when installing `mysqlclient`, refer to the official `mysqlclient` documentation.
+
+#### What is `django-tidb`?
+
+`django-tidb` is a TiDB dialect for Django that enhances the Django ORM to support TiDB-specific features (for example, vector search) and resolves compatibility issues between TiDB and Django.
+
+To install `django-tidb`, choose a version that matches your Django version. For example, if you are using `django==4.2.*`, install `django-tidb==4.2.*`. The minor version does not need to be the same. It is recommended to use the latest minor version.
+
+For more information, refer to the [django-tidb repository](https://github.com/pingcap/django-tidb).
+
+### Step 4. Configure the environment variables
+
+1. Navigate to the [**Clusters**](https://tidbcloud.com/console/clusters) page, and then click the name of your target cluster to go to its overview page.
+
+2. Click **Connect** in the upper-right corner. A connection dialog is displayed.
+
+3. Ensure the configurations in the connection dialog match your operating environment.
+ + - **Endpoint Type** is set to `Public` + - **Branch** is set to `main` + - **Connect With** is set to `General` + - **Operating System** matches your environment. + + > **Tip:** + > + > If your program is running in Windows Subsystem for Linux (WSL), switch to the corresponding Linux distribution. + +4. Copy the connection parameters from the connection dialog. + + > **Tip:** + > + > If you have not set a password yet, click **Generate Password** to generate a random password. + +5. In the root directory of your Python project, create a `.env` file and paste the connection parameters to the corresponding environment variables. + + - `TIDB_HOST`: The host of the TiDB cluster. + - `TIDB_PORT`: The port of the TiDB cluster. + - `TIDB_USERNAME`: The username to connect to the TiDB cluster. + - `TIDB_PASSWORD`: The password to connect to the TiDB cluster. + - `TIDB_DATABASE`: The database name to connect to. + - `TIDB_CA_PATH`: The path to the root certificate file. + + The following is an example for macOS: + + ```dotenv + TIDB_HOST=gateway01.****.prod.aws.tidbcloud.com + TIDB_PORT=4000 + TIDB_USERNAME=********.root + TIDB_PASSWORD=******** + TIDB_DATABASE=test + TIDB_CA_PATH=/etc/ssl/cert.pem + ``` + +### Step 5. Run the demo + +Migrate the database schema: + +```bash +python manage.py migrate +``` + +Run the Django development server: + +```bash +python manage.py runserver +``` + +Open your browser and visit `http://127.0.0.1:8000` to try the demo application. Here are the available API paths: + +| API Path | Description | +| --------------------------------------- | ---------------------------------------- | +| `POST: /insert_documents` | Insert documents with embeddings. | +| `GET: /get_nearest_neighbors_documents` | Get the 3-nearest neighbor documents. | +| `GET: /get_documents_within_distance` | Get documents within a certain distance. 
|
+
+## Sample code snippets
+
+You can refer to the following sample code snippets to complete your own application development.
+
+### Connect to the TiDB cluster
+
+In the file `sample_project/settings.py`, add the following configurations:
+
+```python
+dotenv.load_dotenv()
+
+DATABASES = {
+    "default": {
+        # https://github.com/pingcap/django-tidb
+        "ENGINE": "django_tidb",
+        "HOST": os.environ.get("TIDB_HOST", "127.0.0.1"),
+        "PORT": int(os.environ.get("TIDB_PORT", 4000)),
+        "USER": os.environ.get("TIDB_USERNAME", "root"),
+        "PASSWORD": os.environ.get("TIDB_PASSWORD", ""),
+        "NAME": os.environ.get("TIDB_DATABASE", "test"),
+        "OPTIONS": {
+            "charset": "utf8mb4",
+        },
+    }
+}
+
+TIDB_CA_PATH = os.environ.get("TIDB_CA_PATH", "")
+if TIDB_CA_PATH:
+    DATABASES["default"]["OPTIONS"]["ssl_mode"] = "VERIFY_IDENTITY"
+    DATABASES["default"]["OPTIONS"]["ssl"] = {
+        "ca": TIDB_CA_PATH,
+    }
+```
+
+You can create a `.env` file in the root directory of your project and set up the environment variables `TIDB_HOST`, `TIDB_PORT`, `TIDB_USERNAME`, `TIDB_PASSWORD`, `TIDB_DATABASE`, and `TIDB_CA_PATH` with the actual values of your TiDB cluster.
+
+### Create vector tables
+
+#### Define a vector column
+
+`django-tidb` provides a `VectorField` to store vector embeddings in a table.
+
+Create a table with a column named `embedding` that stores a 3-dimensional vector.
+
+```python
+class Document(models.Model):
+    content = models.TextField()
+    embedding = VectorField(dimensions=3)
+```
+
+#### Define a vector column optimized with an index
+
+Define a 3-dimensional vector column and optimize it with a [vector search index](/tidb-cloud/vector-search-index.md) (HNSW index).
+
+```python
+class DocumentWithIndex(models.Model):
+    content = models.TextField()
+    # Note:
+    # - Using a comment to add an HNSW index is a temporary solution. In the future, it will use the `CREATE INDEX` syntax.
+    # - Currently, the HNSW index cannot be changed after the table has been created.
+    # - Only Django >= 4.2 supports `db_comment`.
+    embedding = VectorField(dimensions=3, db_comment="hnsw(distance=cosine)")
+```
+
+TiDB will use this index to speed up vector search queries based on the cosine distance function.
+
+### Store documents with embeddings
+
+```python
+Document.objects.create(content="dog", embedding=[1, 2, 1])
+Document.objects.create(content="fish", embedding=[1, 2, 4])
+Document.objects.create(content="tree", embedding=[1, 0, 0])
+```
+
+### Search the nearest neighbor documents
+
+TiDB Vector supports the following distance functions:
+
+- `L1Distance`
+- `L2Distance`
+- `CosineDistance`
+- `NegativeInnerProduct`
+
+Search for the top-3 documents that are semantically closest to the query vector `[1, 2, 3]` based on the cosine distance function.
+
+```python
+results = Document.objects.annotate(
+    distance=CosineDistance('embedding', [1, 2, 3])
+).order_by('distance')[:3]
+```
+
+### Search documents within a certain distance
+
+Search for the documents whose cosine distance from the query vector `[1, 2, 3]` is less than 0.2.
+
+```python
+results = Document.objects.annotate(
+    distance=CosineDistance('embedding', [1, 2, 3])
+).filter(distance__lt=0.2).order_by('distance')[:3]
+```
+
+## See also
+
+- [Vector Data Types](/tidb-cloud/vector-search-data-types.md)
+- [Vector Search Index](/tidb-cloud/vector-search-index.md)
diff --git a/tidb-cloud/vector-search-integrate-with-jinaai-embedding.md b/tidb-cloud/vector-search-integrate-with-jinaai-embedding.md
new file mode 100644
index 0000000000000..1ec86cf0d1017
--- /dev/null
+++ b/tidb-cloud/vector-search-integrate-with-jinaai-embedding.md
@@ -0,0 +1,246 @@
+---
+title: Integrate TiDB Vector Search with Jina AI Embeddings API
+summary: Learn how to integrate TiDB Vector Search with Jina AI Embeddings API to store embeddings and perform semantic search.
+--- + +# Integrate TiDB Vector Search with Jina AI Embeddings API + +This tutorial walks you through how to use [Jina AI](https://jina.ai/) to generate embeddings for text data, and then store the embeddings in TiDB Vector Storage and search similar texts based on embeddings. + +> **Note** +> +> TiDB Vector Search is currently in beta and only available for [TiDB Serverless](/tidb-cloud/select-cluster-tier.md#tidb-serverless) clusters. + +## Prerequisites + +To complete this tutorial, you need: + +- [Python 3.8 or higher](https://www.python.org/downloads/) installed. +- [Git](https://git-scm.com/downloads) installed. +- A TiDB Serverless cluster. Follow [creating a TiDB Serverless cluster](/tidb-cloud/create-tidb-cluster-serverless.md) to create your own TiDB Cloud cluster if you don't have one. + +## Run the sample app + +You can quickly learn about how to integrate TiDB Vector Search with JinaAI Embedding by following the steps below. + +### Step 1. Clone the repository + +Clone the `tidb-vector-python` repository to your local machine: + +```shell +git clone https://github.com/pingcap/tidb-vector-python.git +``` + +### Step 2. Create a virtual environment + +Create a virtual environment for your project: + +```bash +cd tidb-vector-python/examples/jina-ai-embeddings-demo +python3 -m venv .venv +source .venv/bin/activate +``` + +### Step 3. Install required dependencies + +Install the required dependencies for the demo project: + +```bash +pip install -r requirements.txt +``` + +### Step 4. Configure the environment variables + +#### 4.1 Get the Jina AI API key + +Get the Jina AI API key from the [Jina AI Embeddings API](https://jina.ai/embeddings/) page. + +#### 4.2 Get the TiDB connection parameters + +1. Navigate to the [**Clusters**](https://tidbcloud.com/console/clusters) page, and then click the name of your target cluster to go to its overview page. + +2. Click **Connect** in the upper-right corner. A connection dialog is displayed. + +3. 
Ensure the configurations in the connection dialog match your operating environment. + + - **Endpoint Type** is set to `Public` + - **Branch** is set to `main` + - **Connect With** is set to `SQLAlchemy` + - **Operating System** matches your environment. + + > **Tip:** + > + > If your program is running in Windows Subsystem for Linux (WSL), switch to the corresponding Linux distribution. + +4. Switch to the **PyMySQL** tab and click the **Copy** icon to copy the connection string. + + > **Tip:** + > + > If you have not set a password yet, click **Create password** to generate a random password. + +#### 4.3 Set the environment variables + +Set the environment variables in your terminal, or create a `.env` file with the above environment variables. + +```dotenv +JINAAI_API_KEY="****" +TIDB_DATABASE_URL="{tidb_connection_string}" +``` + +For example, the connection string on macOS looks like: + +```dotenv +TIDB_DATABASE_URL="mysql+pymysql://.root:@gateway01..prod.aws.tidbcloud.com:4000/test?ssl_ca=/etc/ssl/cert.pem&ssl_verify_cert=true&ssl_verify_identity=true" +``` + +### Step 5. Run the demo + +```bash +python jina-ai-embeddings-demo.py +``` + +Example output: + +```text +- Inserting Data to TiDB... + - Inserting: Jina AI offers best-in-class embeddings, reranker and prompt optimizer, enabling advanced multimodal AI. + - Inserting: TiDB is an open-source MySQL-compatible database that supports Hybrid Transactional and Analytical Processing (HTAP) workloads. +- List All Documents and Their Distances to the Query: + - distance: 0.3585317326132522 + content: Jina AI offers best-in-class embeddings, reranker and prompt optimizer, enabling advanced multimodal AI. + - distance: 0.10858102967720984 + content: TiDB is an open-source MySQL-compatible database that supports Hybrid Transactional and Analytical Processing (HTAP) workloads. 
+- The Most Relevant Document and Its Distance to the Query:
+  - distance: 0.10858102967720984
+    content: TiDB is an open-source MySQL-compatible database that supports Hybrid Transactional and Analytical Processing (HTAP) workloads.
+```
+
+## Sample code snippets
+
+### Get embeddings from Jina AI
+
+Define a `generate_embeddings` helper function to call the Jina AI embeddings API:
+
+```python
+import os
+import requests
+import dotenv
+
+dotenv.load_dotenv()
+
+JINAAI_API_KEY = os.getenv('JINAAI_API_KEY')
+
+def generate_embeddings(text: str):
+    JINAAI_API_URL = 'https://api.jina.ai/v1/embeddings'
+    JINAAI_HEADERS = {
+        'Content-Type': 'application/json',
+        'Authorization': f'Bearer {JINAAI_API_KEY}'
+    }
+    JINAAI_REQUEST_DATA = {
+        'input': [text],
+        'model': 'jina-embeddings-v2-base-en'  # with dimension 768.
+    }
+    response = requests.post(JINAAI_API_URL, headers=JINAAI_HEADERS, json=JINAAI_REQUEST_DATA)
+    return response.json()['data'][0]['embedding']
+```
+
+### Connect to TiDB Serverless
+
+Connect to TiDB Serverless through SQLAlchemy:
+
+```python
+import os
+import dotenv
+
+from sqlalchemy import create_engine
+from tidb_vector.sqlalchemy import VectorType
+from sqlalchemy.orm import Session, declarative_base
+
+dotenv.load_dotenv()
+
+TIDB_DATABASE_URL = os.getenv('TIDB_DATABASE_URL')
+assert TIDB_DATABASE_URL is not None
+engine = create_engine(url=TIDB_DATABASE_URL, pool_recycle=300)
+```
+
+### Define the vector table schema
+
+Create a table named `jinaai_tidb_demo_documents` with a `content` column for storing texts and a vector column named `content_vec` for storing embeddings:
+
+```python
+from sqlalchemy import Column, Integer, String, create_engine
+from sqlalchemy.orm import declarative_base
+
+Base = declarative_base()
+
+class Document(Base):
+    __tablename__ = "jinaai_tidb_demo_documents"
+
+    id = Column(Integer, primary_key=True)
+    content = Column(String(255), nullable=False)
+    content_vec = Column(
+        # DIMENSIONS is determined by the embedding model,
+        # for Jina AI's jina-embeddings-v2-base-en model it's 768.
+        VectorType(dim=768),
+        comment="hnsw(distance=cosine)"
+    )
+```
+
+> **Note:**
+>
+> - The dimension of the vector column must match the dimension of the embeddings generated by the embedding model.
+> - In this example, the dimension of embeddings generated by the `jina-embeddings-v2-base-en` model is `768`.
+
+### Create embeddings with Jina AI embeddings and TiDB
+
+Use the Jina AI Embeddings API to generate embeddings for each piece of text and store the embeddings in TiDB:
+
+```python
+TEXTS = [
+    'Jina AI offers best-in-class embeddings, reranker and prompt optimizer, enabling advanced multimodal AI.',
+    'TiDB is an open-source MySQL-compatible database that supports Hybrid Transactional and Analytical Processing (HTAP) workloads.',
+]
+data = []
+
+for text in TEXTS:
+    # Generate embeddings for the texts via Jina AI API.
+    embedding = generate_embeddings(text)
+    data.append({
+        'text': text,
+        'embedding': embedding
+    })
+
+with Session(engine) as session:
+    print('- Inserting Data to TiDB...')
+    for item in data:
+        print(f'  - Inserting: {item["text"]}')
+        session.add(Document(
+            content=item['text'],
+            content_vec=item['embedding']
+        ))
+    session.commit()
+```
+
+### Perform semantic search with Jina AI embeddings and TiDB
+
+Generate embeddings for the query text via the Jina AI embeddings API, and then search for the most relevant document based on the cosine distance between the query embedding and the document embeddings:
+
+```python
+query = 'What is TiDB?'
+# Generate embeddings for the query via Jina AI API.
+query_embedding = generate_embeddings(query) + +with Session(engine) as session: + print('- The Most Relevant Document and Its Distance to the Query:') + doc, distance = session.query( + Document, + Document.content_vec.cosine_distance(query_embedding).label('distance') + ).order_by( + 'distance' + ).limit(1).first() + print(f' - distance: {distance}\n' + f' content: {doc.content}') +``` + +## See also + +- [Vector Data Types](/tidb-cloud/vector-search-data-types.md) +- [Vector Search Index](/tidb-cloud/vector-search-index.md) diff --git a/tidb-cloud/vector-search-integrate-with-langchain.md b/tidb-cloud/vector-search-integrate-with-langchain.md new file mode 100644 index 0000000000000..21be2741ca4bb --- /dev/null +++ b/tidb-cloud/vector-search-integrate-with-langchain.md @@ -0,0 +1,585 @@ +--- +title: Integrate Vector Search with LangChain +summary: Learn how to integrate Vector Search in TiDB Cloud with LangChain. +--- + +# Integrate Vector Search with LangChain + +This tutorial demonstrates how to integrate the [vector search](/tidb-cloud/vector-search-overview.md) feature in TiDB Cloud with [LangChain](https://python.langchain.com/). + +> **Note** +> +> - TiDB Vector Search is currently in beta and only available for [TiDB Serverless](/tidb-cloud/select-cluster-tier.md#tidb-serverless) clusters. +> - You can view the complete [sample code](https://github.com/langchain-ai/langchain/blob/master/docs/docs/integrations/vectorstores/tidb_vector.ipynb) on Jupyter Notebook, or run the sample code directly in the [Colab](https://colab.research.google.com/github/langchain-ai/langchain/blob/master/docs/docs/integrations/vectorstores/tidb_vector.ipynb) online environment. + +## Prerequisites + +To complete this tutorial, you need: + +- [Python 3.8 or higher](https://www.python.org/downloads/) installed. +- [Jupyter Notebook](https://jupyter.org/install) installed. +- [Git](https://git-scm.com/downloads) installed. +- A TiDB Serverless cluster. 
Follow [creating a TiDB Serverless cluster](/tidb-cloud/create-tidb-cluster-serverless.md) to create your own TiDB Cloud cluster if you don't have one. + +## Get started + +This section provides step-by-step instructions for integrating TiDB Vector Search with LangChain to perform semantic searches. + +### Step 1. Create a new Jupyter Notebook file + +In your preferred directory, create a new Jupyter Notebook file named `integrate_with_langchain.ipynb`: + +```shell +touch integrate_with_langchain.ipynb +``` + +### Step 2. Install required dependencies + +In your project directory, run the following command to install the required packages: + +```shell +!pip install langchain langchain-community +!pip install langchain-openai +!pip install pymysql +!pip install tidb-vector +``` + +Open the `integrate_with_langchain.ipynb` file in Jupyter Notebook and add the following code to import the required packages: + +```python +from langchain_community.document_loaders import TextLoader +from langchain_community.vectorstores import TiDBVectorStore +from langchain_openai import OpenAIEmbeddings +from langchain_text_splitters import CharacterTextSplitter +``` + +### Step 3. Set up your environment + +#### Step 3.1 Obtain the connection string to the TiDB cluster + +1. Navigate to the [**Clusters**](https://tidbcloud.com/console/clusters) page, and then click the name of your target cluster to go to its overview page. + +2. Click **Connect** in the upper-right corner. A connection dialog is displayed. + +3. Ensure the configurations in the connection dialog match your operating environment. + + - **Endpoint Type** is set to `Public`. + - **Branch** is set to `main`. + - **Connect With** is set to `SQLAlchemy`. + - **Operating System** matches your environment. + +4. Click the **PyMySQL** tab and copy the connection string. + + > **Tip:** + > + > If you have not set a password yet, click **Generate Password** to generate a random password. 
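After copying the connection string, you can optionally sanity-check its shape before pasting it into the next step. The following is a hypothetical helper (not part of the tutorial), and the hostname below is a made-up placeholder in the same format as the copied string:

```python
from urllib.parse import urlsplit

def looks_like_tidb_url(url: str) -> bool:
    # Expected shape: mysql+pymysql://<user>:<password>@<host>:4000/<db>?ssl_ca=...
    parts = urlsplit(url)
    return (
        parts.scheme == "mysql+pymysql"
        and parts.hostname is not None
        and parts.port == 4000            # TiDB Serverless public endpoint port
        and "ssl_ca=" in parts.query      # TLS parameters are required
    )

# A made-up example; use your own copied connection string instead.
example = (
    "mysql+pymysql://user.root:secret@gateway01.example.prod.aws.tidbcloud.com:4000/test"
    "?ssl_ca=/etc/ssl/cert.pem&ssl_verify_cert=true&ssl_verify_identity=true"
)
print(looks_like_tidb_url(example))  # prints True
```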
+ +#### Step 3.2 Configure environment variables + +To establish a secure and efficient database connection, use the standard connection method provided by TiDB Cloud. + +This document uses [OpenAI](https://platform.openai.com/docs/introduction) as the embedding model provider. In this step, you need to provide the connection string obtained from step 3.1 and your [OpenAI API key](https://platform.openai.com/docs/quickstart/step-2-set-up-your-api-key). + +To configure the environment variables, run the following code. You will be prompted to enter your connection string and OpenAI API key: + +```python +# Use getpass to securely prompt for environment variables in your terminal. +import getpass +import os + +# Copy your connection string from the TiDB Cloud console. +# Connection string format: "mysql+pymysql://:@:4000/?ssl_ca=/etc/ssl/cert.pem&ssl_verify_cert=true&ssl_verify_identity=true" +tidb_connection_string = getpass.getpass("TiDB Connection String:") +os.environ["OPENAI_API_KEY"] = getpass.getpass("OpenAI API Key:") +``` + +### Step 4. Load the sample document + +#### Step 4.1 Download the sample document + +In your project directory, create a directory named `data/how_to/` and download the sample document [`state_of_the_union.txt`](https://github.com/langchain-ai/langchain/blob/master/docs/docs/how_to/state_of_the_union.txt) from the [langchain-ai/langchain](https://github.com/langchain-ai/langchain) GitHub repository. + +```shell +!mkdir -p 'data/how_to/' +!wget 'https://raw.githubusercontent.com/langchain-ai/langchain/master/docs/docs/how_to/state_of_the_union.txt' -O 'data/how_to/state_of_the_union.txt' +``` + +#### Step 4.2 Load and split the document + +Load the sample document from `data/how_to/state_of_the_union.txt` and split it into chunks of approximately 1,000 characters each using a `CharacterTextSplitter`. 
+
+```python
+loader = TextLoader("data/how_to/state_of_the_union.txt")
+documents = loader.load()
+text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
+docs = text_splitter.split_documents(documents)
+```
+
+### Step 5. Embed and store document vectors
+
+TiDB vector store supports both cosine distance (`cosine`) and Euclidean distance (`l2`) for measuring similarity between vectors. The default strategy is cosine distance.
+
+The following code creates a table named `embedded_documents` in TiDB, which is optimized for vector search.
+
+```python
+embeddings = OpenAIEmbeddings()
+vector_store = TiDBVectorStore.from_documents(
+    documents=docs,
+    embedding=embeddings,
+    table_name="embedded_documents",
+    connection_string=tidb_connection_string,
+    distance_strategy="cosine",  # default, another option is "l2"
+)
+```
+
+Upon successful execution, you can directly view and access the `embedded_documents` table in your TiDB database.
+
+### Step 6. Perform a vector search
+
+This step demonstrates how to query "What did the president say about Ketanji Brown Jackson" from the document `state_of_the_union.txt`.
+
+```python
+query = "What did the president say about Ketanji Brown Jackson"
+```
+
+#### Option 1: Use `similarity_search_with_score()`
+
+The `similarity_search_with_score()` method calculates the vector space distance between the documents and the query. This distance serves as a similarity score, determined by the chosen `distance_strategy`. The method returns the top `k` documents with the lowest scores. A lower score indicates a higher similarity between a document and your query.

+
+```python
+docs_with_score = vector_store.similarity_search_with_score(query, k=3)
+for doc, score in docs_with_score:
+    print("-" * 80)
+    print("Score: ", score)
+    print(doc.page_content)
+    print("-" * 80)
+```
+
+ Expected output + +```plain +-------------------------------------------------------------------------------- +Score: 0.18472413652518527 +Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. + +Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. + +One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. + +And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence. +-------------------------------------------------------------------------------- +-------------------------------------------------------------------------------- +Score: 0.21757513022785557 +A former top litigator in private practice. A former federal public defender. And from a family of public school educators and police officers. A consensus builder. Since she’s been nominated, she’s received a broad range of support—from the Fraternal Order of Police to former judges appointed by Democrats and Republicans. + +And if we are to advance liberty and justice, we need to secure the Border and fix the immigration system. + +We can do both. At our border, we’ve installed new technology like cutting-edge scanners to better detect drug smuggling. + +We’ve set up joint patrols with Mexico and Guatemala to catch more human traffickers. + +We’re putting in place dedicated immigration judges so families fleeing persecution and violence can have their cases heard faster. 
+ +We’re securing commitments and supporting partners in South and Central America to host more refugees and secure their own borders. +-------------------------------------------------------------------------------- +-------------------------------------------------------------------------------- +Score: 0.22676987253721725 +And for our LGBTQ+ Americans, let’s finally get the bipartisan Equality Act to my desk. The onslaught of state laws targeting transgender Americans and their families is wrong. + +As I said last year, especially to our younger transgender Americans, I will always have your back as your President, so you can be yourself and reach your God-given potential. + +While it often appears that we never agree, that isn’t true. I signed 80 bipartisan bills into law last year. From preventing government shutdowns to protecting Asian-Americans from still-too-common hate crimes to reforming military justice. + +And soon, we’ll strengthen the Violence Against Women Act that I first wrote three decades ago. It is important for us to show the nation that we can come together and do big things. + +So tonight I’m offering a Unity Agenda for the Nation. Four big things we can do together. + +First, beat the opioid epidemic. +-------------------------------------------------------------------------------- +``` + +
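The distance values shown above are cosine distances between the query embedding and each document embedding. As a purely illustrative sketch (using tiny example vectors rather than the real 1536-dimensional OpenAI embeddings), cosine distance can be computed as follows:

```python
import math

def cosine_distance(a, b):
    # Cosine distance = 1 - cosine similarity.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return 1.0 - dot / (norm_a * norm_b)

# Vectors pointing in the same direction have distance ~0;
# orthogonal vectors have distance 1.
print(round(cosine_distance([1.0, 2.0, 3.0], [2.0, 4.0, 6.0]), 6))  # 0.0
print(cosine_distance([1.0, 0.0], [0.0, 1.0]))                      # 1.0
```

A distance of 0 means the vectors point in the same direction, so lower scores in the output above indicate closer semantic matches.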
+ +#### Option 2: Use `similarity_search_with_relevance_scores()` + +The `similarity_search_with_relevance_scores()` method returns the top `k` documents with the highest relevance scores. A higher score indicates a higher degree of similarity between a document and your query. + +```python +docs_with_relevance_score = vector_store.similarity_search_with_relevance_scores(query, k=2) +for doc, score in docs_with_relevance_score: + print("-" * 80) + print("Score: ", score) + print(doc.page_content) + print("-" * 80) +``` + +
+ Expected output + +```plain +-------------------------------------------------------------------------------- +Score: 0.8152758634748147 +Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. + +Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. + +One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. + +And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence. +-------------------------------------------------------------------------------- +-------------------------------------------------------------------------------- +Score: 0.7824248697721444 +A former top litigator in private practice. A former federal public defender. And from a family of public school educators and police officers. A consensus builder. Since she’s been nominated, she’s received a broad range of support—from the Fraternal Order of Police to former judges appointed by Democrats and Republicans. + +And if we are to advance liberty and justice, we need to secure the Border and fix the immigration system. + +We can do both. At our border, we’ve installed new technology like cutting-edge scanners to better detect drug smuggling. + +We’ve set up joint patrols with Mexico and Guatemala to catch more human traffickers. + +We’re putting in place dedicated immigration judges so families fleeing persecution and violence can have their cases heard faster. 
+ +We’re securing commitments and supporting partners in South and Central America to host more refugees and secure their own borders. +-------------------------------------------------------------------------------- +``` + +
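Comparing the two outputs above for the cosine strategy, the relevance score in Option 2 appears to be the complement of the distance in Option 1 (for example, 0.8152758634748147 for the first document versus a distance of 0.18472413652518527). The following snippet only checks that observation against the numbers printed above; treat it as an observation about these outputs, not a documented API contract:

```python
# Distance (Option 1) and relevance score (Option 2) for the same two documents.
pairs = [
    (0.18472413652518527, 0.8152758634748147),
    (0.21757513022785557, 0.7824248697721444),
]

# With the cosine strategy, relevance appears to equal 1 - distance.
for distance, relevance in pairs:
    print(abs((1.0 - distance) - relevance) < 1e-12)  # True
```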
+
+### Use as a retriever
+
+In Langchain, a [retriever](https://python.langchain.com/v0.2/docs/concepts/#retrievers) is an interface that retrieves documents in response to an unstructured query, providing more functionality than a vector store. The following code demonstrates how to use the TiDB vector store as a retriever.
+
+```python
+retriever = vector_store.as_retriever(
+    search_type="similarity_score_threshold",
+    search_kwargs={"k": 3, "score_threshold": 0.8},
+)
+docs_retrieved = retriever.invoke(query)
+for doc in docs_retrieved:
+    print("-" * 80)
+    print(doc.page_content)
+    print("-" * 80)
+```
+
+The expected output is as follows:
+
+```plain
+--------------------------------------------------------------------------------
+Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections.
+
+Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service.
+
+One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court.
+
+And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.
+--------------------------------------------------------------------------------
+```
+
+### Remove the vector store
+
+To remove an existing TiDB vector store, use the `drop_vectorstore()` method:
+
+```python
+vector_store.drop_vectorstore()
+```
+
+## Search with metadata filters
+
+To refine your searches, you can use metadata filters to retrieve specific nearest-neighbor results that match the applied filters.
+
+### Supported metadata types
+
+Each document in the TiDB vector store can be paired with metadata, structured as key-value pairs within a JSON object. Keys are always strings, while values can be any of the following types:
+
+- String
+- Number: integer or floating point
+- Boolean: `true` or `false`
+
+For example, the following is a valid metadata payload:
+
+```json
+{
+    "page": 12,
+    "book_title": "Siddhartha"
+}
+```
+
+### Metadata filter syntax
+
+Available filters include the following:
+
+- `$or`: Selects vectors that match any one of the specified conditions.
+- `$and`: Selects vectors that match all the specified conditions.
+- `$eq`: Equal to the specified value.
+- `$ne`: Not equal to the specified value.
+- `$gt`: Greater than the specified value.
+- `$gte`: Greater than or equal to the specified value.
+- `$lt`: Less than the specified value.
+- `$lte`: Less than or equal to the specified value.
+- `$in`: In the specified array of values.
+- `$nin`: Not in the specified array of values.
+
+If the metadata of a document is as follows:
+
+```json
+{
+    "page": 12,
+    "book_title": "Siddhartha"
+}
+```
+
+The following metadata filters can match this document:
+
+```json
+{ "page": 12 }
+```
+
+```json
+{ "page": { "$eq": 12 } }
+```
+
+```json
+{
+    "page": {
+        "$in": [11, 12, 13]
+    }
+}
+```
+
+```json
+{ "page": { "$nin": [13] } }
+```
+
+```json
+{ "page": { "$lt": 13 } }
+```
+
+```json
+{
+    "$or": [{ "page": 11 }, { "page": 12 }],
+    "$and": [{ "page": 12 }, { "book_title": "Siddhartha" }]
+}
+```
+
+Each key-value pair in the metadata filters is treated as a separate filter clause, and these clauses are combined using the `AND` logical operator.
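To make the evaluation rules above concrete, the following `matches()` helper re-implements them in plain Python. This is an illustrative sketch only, not the library's actual implementation, but it evaluates the operators and the implicit `AND` combination the way the filter syntax describes:

```python
def matches(metadata: dict, flt: dict) -> bool:
    """Illustrative re-implementation of the metadata filter semantics."""
    ops = {
        "$eq": lambda v, t: v == t,
        "$ne": lambda v, t: v != t,
        "$gt": lambda v, t: v > t,
        "$gte": lambda v, t: v >= t,
        "$lt": lambda v, t: v < t,
        "$lte": lambda v, t: v <= t,
        "$in": lambda v, t: v in t,
        "$nin": lambda v, t: v not in t,
    }
    for key, cond in flt.items():
        if key == "$or":  # at least one sub-filter must match
            if not any(matches(metadata, c) for c in cond):
                return False
        elif key == "$and":  # every sub-filter must match
            if not all(matches(metadata, c) for c in cond):
                return False
        elif isinstance(cond, dict):  # operator form, e.g. {"$lt": 13}
            value = metadata.get(key)
            if not all(ops[op](value, target) for op, target in cond.items()):
                return False
        elif metadata.get(key) != cond:  # shorthand for $eq
            return False
    return True

doc = {"page": 12, "book_title": "Siddhartha"}
print(matches(doc, {"page": {"$in": [11, 12, 13]}}))            # True
print(matches(doc, {"$or": [{"page": 11}, {"page": 12}]}))      # True
print(matches(doc, {"page": 12, "book_title": "The Odyssey"}))  # False (clauses are AND-ed)
```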
+ +### Example + +The following example adds two documents to `TiDBVectorStore` and adds a `title` field to each document as the metadata: + +```python +vector_store.add_texts( + texts=[ + "TiDB Vector offers advanced, high-speed vector processing capabilities, enhancing AI workflows with efficient data handling and analytics support.", + "TiDB Vector, starting as low as $10 per month for basic usage", + ], + metadatas=[ + {"title": "TiDB Vector functionality"}, + {"title": "TiDB Vector Pricing"}, + ], +) +``` + +The expected output is as follows: + +```plain +[UUID('c782cb02-8eec-45be-a31f-fdb78914f0a7'), + UUID('08dcd2ba-9f16-4f29-a9b7-18141f8edae3')] +``` + +Perform a similarity search with metadata filters: + +```python +docs_with_score = vector_store.similarity_search_with_score( + "Introduction to TiDB Vector", filter={"title": "TiDB Vector functionality"}, k=4 +) +for doc, score in docs_with_score: + print("-" * 80) + print("Score: ", score) + print(doc.page_content) + print("-" * 80) +``` + +The expected output is as follows: + +```plain +-------------------------------------------------------------------------------- +Score: 0.12761409169211535 +TiDB Vector offers advanced, high-speed vector processing capabilities, enhancing AI workflows with efficient data handling and analytics support. +-------------------------------------------------------------------------------- +``` + +## Advanced usage example: travel agent + +This section demonstrates an advanced use case of integrating vector search with Langchain for a travel agent. The goal is to create personalized travel reports for clients seeking airports with specific amenities, such as clean lounges and vegetarian options. + +The process involves two main steps: + +1. Perform a semantic search across airport reviews to identify airport codes that match the desired amenities. +2. 
Execute an SQL query to merge these codes with route information, highlighting airlines and destinations that align with the user's preferences.
+
+### Prepare data
+
+First, create a table to store airport route data:
+
+```python
+# Create a table to store airplane route data.
+vector_store.tidb_vector_client.execute(
+    """CREATE TABLE airplan_routes (
+        id INT AUTO_INCREMENT PRIMARY KEY,
+        airport_code VARCHAR(10),
+        airline_code VARCHAR(10),
+        destination_code VARCHAR(10),
+        route_details TEXT,
+        duration TIME,
+        frequency INT,
+        airplane_type VARCHAR(50),
+        price DECIMAL(10, 2),
+        layover TEXT
+    );"""
+)
+
+# Insert some sample data into airplan_routes and the vector table.
+vector_store.tidb_vector_client.execute(
+    """INSERT INTO airplan_routes (
+        airport_code,
+        airline_code,
+        destination_code,
+        route_details,
+        duration,
+        frequency,
+        airplane_type,
+        price,
+        layover
+    ) VALUES
+    ('JFK', 'DL', 'LAX', 'Non-stop from JFK to LAX.', '06:00:00', 5, 'Boeing 777', 299.99, 'None'),
+    ('LAX', 'AA', 'ORD', 'Direct LAX to ORD route.', '04:00:00', 3, 'Airbus A320', 149.99, 'None'),
+    ('EFGH', 'UA', 'SEA', 'Daily flights from SFO to SEA.', '02:30:00', 7, 'Boeing 737', 129.99, 'None');
+    """
+)
+vector_store.add_texts(
+    texts=[
+        "Clean lounges and excellent vegetarian dining options. Highly recommended.",
+        "Comfortable seating in lounge areas and diverse food selections, including vegetarian.",
+        "Small airport with basic facilities.",
+    ],
+    metadatas=[
+        {"airport_code": "JFK"},
+        {"airport_code": "LAX"},
+        {"airport_code": "EFGH"},
+    ],
+)
+```
+
+The expected output is as follows:
+
+```plain
+[UUID('6dab390f-acd9-4c7d-b252-616606fbc89b'),
+ UUID('9e811801-0e6b-4893-8886-60f4fb67ce69'),
+ UUID('f426747c-0f7b-4c62-97ed-3eeb7c8dd76e')]
+```
+
+### Perform a semantic search
+
+The following code searches for airports with clean facilities and vegetarian options:
+
+```python
+retriever = vector_store.as_retriever(
+    search_type="similarity_score_threshold",
+    search_kwargs={"k": 3, "score_threshold": 0.85},
+)
+semantic_query = "Could you recommend a US airport with clean lounges and good vegetarian dining options?"
+reviews = retriever.invoke(semantic_query)
+for r in reviews:
+    print("-" * 80)
+    print(r.page_content)
+    print(r.metadata)
+    print("-" * 80)
+```
+
+The expected output is as follows:
+
+```plain
+--------------------------------------------------------------------------------
+Clean lounges and excellent vegetarian dining options. Highly recommended.
+{'airport_code': 'JFK'}
+--------------------------------------------------------------------------------
+--------------------------------------------------------------------------------
+Comfortable seating in lounge areas and diverse food selections, including vegetarian.
+{'airport_code': 'LAX'} +-------------------------------------------------------------------------------- +``` + +### Retrieve detailed airport information + +Extract airport codes from the search results and query the database for detailed route information: + +```python +# Extracting airport codes from the metadata +airport_codes = [review.metadata["airport_code"] for review in reviews] + +# Executing a query to get the airport details +search_query = "SELECT * FROM airplan_routes WHERE airport_code IN :codes" +params = {"codes": tuple(airport_codes)} + +airport_details = vector_store.tidb_vector_client.execute(search_query, params) +airport_details.get("result") +``` + +The expected output is as follows: + +```plain +[(1, 'JFK', 'DL', 'LAX', 'Non-stop from JFK to LAX.', datetime.timedelta(seconds=21600), 5, 'Boeing 777', Decimal('299.99'), 'None'), + (2, 'LAX', 'AA', 'ORD', 'Direct LAX to ORD route.', datetime.timedelta(seconds=14400), 3, 'Airbus A320', Decimal('149.99'), 'None')] +``` + +### Streamline the process + +Alternatively, you can streamline the entire process using a single SQL query: + +```python +search_query = f""" + SELECT + VEC_Cosine_Distance(se.embedding, :query_vector) as distance, + ar.*, + se.document as airport_review + FROM + airplan_routes ar + JOIN + {TABLE_NAME} se ON ar.airport_code = JSON_UNQUOTE(JSON_EXTRACT(se.meta, '$.airport_code')) + ORDER BY distance ASC + LIMIT 5; +""" +query_vector = embeddings.embed_query(semantic_query) +params = {"query_vector": str(query_vector)} +airport_details = vector_store.tidb_vector_client.execute(search_query, params) +airport_details.get("result") +``` + +The expected output is as follows: + +```plain +[(0.1219207353407008, 1, 'JFK', 'DL', 'LAX', 'Non-stop from JFK to LAX.', datetime.timedelta(seconds=21600), 5, 'Boeing 777', Decimal('299.99'), 'None', 'Clean lounges and excellent vegetarian dining options. 
Highly recommended.'), + (0.14613754359804654, 2, 'LAX', 'AA', 'ORD', 'Direct LAX to ORD route.', datetime.timedelta(seconds=14400), 3, 'Airbus A320', Decimal('149.99'), 'None', 'Comfortable seating in lounge areas and diverse food selections, including vegetarian.'), + (0.19840519342700513, 3, 'EFGH', 'UA', 'SEA', 'Daily flights from SFO to SEA.', datetime.timedelta(seconds=9000), 7, 'Boeing 737', Decimal('129.99'), 'None', 'Small airport with basic facilities.')] +``` + +### Clean up + +Finally, clean up the resources by dropping the created table: + +```python +vector_store.tidb_vector_client.execute("DROP TABLE airplan_routes") +``` + +The expected output is as follows: + +```plain +{'success': True, 'result': 0, 'error': None} +``` + +## See also + +- [Vector Data Types](/tidb-cloud/vector-search-data-types.md) +- [Vector Search Index](/tidb-cloud/vector-search-index.md) diff --git a/tidb-cloud/vector-search-integrate-with-llamaindex.md b/tidb-cloud/vector-search-integrate-with-llamaindex.md new file mode 100644 index 0000000000000..094388563de34 --- /dev/null +++ b/tidb-cloud/vector-search-integrate-with-llamaindex.md @@ -0,0 +1,264 @@ +--- +title: Integrate Vector Search with LlamaIndex +summary: Learn how to integrate Vector Search in TiDB Cloud with LlamaIndex. +--- + +# Integrate Vector Search with LlamaIndex + +This tutorial demonstrates how to integrate the [vector search](/tidb-cloud/vector-search-overview.md) feature in TiDB Cloud with [LlamaIndex](https://www.llamaindex.ai). + +> **Note** +> +> - TiDB Vector Search is currently in beta and only available for [TiDB Serverless](/tidb-cloud/select-cluster-tier.md#tidb-serverless) clusters. 
+> - You can view the complete [sample code](https://github.com/run-llama/llama_index/blob/main/docs/docs/examples/vector_stores/TiDBVector.ipynb) on Jupyter Notebook, or run the sample code directly in the [Colab](https://colab.research.google.com/github/run-llama/llama_index/blob/main/docs/docs/examples/vector_stores/TiDBVector.ipynb) online environment. + +## Prerequisites + +To complete this tutorial, you need: + +- [Python 3.8 or higher](https://www.python.org/downloads/) installed. +- [Jupyter Notebook](https://jupyter.org/install) installed. +- [Git](https://git-scm.com/downloads) installed. +- A TiDB Serverless cluster. Follow [creating a TiDB Serverless cluster](/tidb-cloud/create-tidb-cluster-serverless.md) to create your own TiDB Cloud cluster if you don't have one. + +## Get started + +This section provides step-by-step instructions for integrating TiDB Vector Search with LlamaIndex to perform semantic searches. + +### Step 1. Create a new Jupyter Notebook file + +In your preferred directory, create a new Jupyter Notebook file named `integrate_with_llamaindex.ipynb`: + +```shell +touch integrate_with_llamaindex.ipynb +``` + +### Step 2. Install required dependencies + +In your project directory, run the following command to install the required packages: + +```shell +pip install llama-index-vector-stores-tidbvector +pip install llama-index +``` + +Open the `integrate_with_llamaindex.ipynb` file in Jupyter Notebook and add the following code to import the required packages: + +```python +import textwrap + +from llama_index.core import SimpleDirectoryReader, StorageContext +from llama_index.core import VectorStoreIndex +from llama_index.vector_stores.tidbvector import TiDBVectorStore +``` + +### Step 3. Set up your environment + +#### Step 3.1 Obtain the connection string to the TiDB cluster + +1. Navigate to the [**Clusters**](https://tidbcloud.com/console/clusters) page, and then click the name of your target cluster to go to its overview page. + +2. 
Click **Connect** in the upper-right corner. A connection dialog is displayed. + +3. Ensure the configurations in the connection dialog match your operating environment. + + - **Endpoint Type** is set to `Public`. + - **Branch** is set to `main`. + - **Connect With** is set to `SQLAlchemy`. + - **Operating System** matches your environment. + +4. Click the **PyMySQL** tab and copy the connection string. + + > **Tip:** + > + > If you have not set a password yet, click **Generate Password** to generate a random password. + +#### Step 3.2 Configure environment variables + +To establish a secure and efficient database connection, use the standard connection method provided by TiDB Cloud. + +This document uses [OpenAI](https://platform.openai.com/docs/introduction) as the embedding model provider. In this step, you need to provide the connection string obtained from step 3.1 and your [OpenAI API key](https://platform.openai.com/docs/quickstart/step-2-set-up-your-api-key). + +To configure the environment variables, run the following code. You will be prompted to enter your connection string and OpenAI API key: + +```python +import getpass +import os + +tidb_connection_url = getpass.getpass( + "TiDB connection URL (format - mysql+pymysql://root@127.0.0.1:4000/test): " +) +os.environ["OPENAI_API_KEY"] = getpass.getpass("OpenAI API Key:") +``` + +### Step 4. Load the sample document + +#### Step 4.1 Download the sample document + +In your project directory, create a directory named `data/paul_graham/` and download the sample document [`paul_graham_essay.txt`](https://github.com/run-llama/llama_index/blob/main/docs/docs/examples/data/paul_graham/paul_graham_essay.txt) from the [run-llama/llama_index](https://github.com/run-llama/llama_index) GitHub repository. 
+ +```shell +!mkdir -p 'data/paul_graham/' +!wget 'https://raw.githubusercontent.com/run-llama/llama_index/main/docs/docs/examples/data/paul_graham/paul_graham_essay.txt' -O 'data/paul_graham/paul_graham_essay.txt' +``` + +#### Step 4.2 Load the document + +Load the sample document from `data/paul_graham/paul_graham_essay.txt` using the `SimpleDirectoryReader` class. + +```python +documents = SimpleDirectoryReader("./data/paul_graham").load_data() +print("Document ID:", documents[0].doc_id) + +for index, document in enumerate(documents): + document.metadata = {"book": "paul_graham"} +``` + +### Step 5. Embed and store document vectors + +#### Step 5.1 Initialize the TiDB vector store + +The following code creates a table named `paul_graham_test` in TiDB, which is optimized for vector search. + +```python +tidbvec = TiDBVectorStore( + connection_string=tidb_connection_url, + table_name="paul_graham_test", + distance_strategy="cosine", + vector_dimension=1536, + drop_existing_table=False, +) +``` + +Upon successful execution, you can directly view and access the `paul_graham_test` table in your TiDB database. + +#### Step 5.2 Generate and store embeddings + +The following code parses the documents, generates embeddings, and stores them in the TiDB vector store. + +```python +storage_context = StorageContext.from_defaults(vector_store=tidbvec) +index = VectorStoreIndex.from_documents( + documents, storage_context=storage_context, show_progress=True +) +``` + +The expected output is as follows: + +```plain +Parsing nodes: 100%|██████████| 1/1 [00:00<00:00, 8.76it/s] +Generating embeddings: 100%|██████████| 21/21 [00:02<00:00, 8.22it/s] +``` + +### Step 6. Perform a vector search + +The following creates a query engine based on the TiDB vector store and performs a semantic similarity search. 
+ +```python +query_engine = index.as_query_engine() +response = query_engine.query("What did the author do?") +print(textwrap.fill(str(response), 100)) +``` + +> **Note** +> +> `TiDBVectorStore` only supports the [`default`](https://docs.llamaindex.ai/en/stable/api_reference/storage/vector_store/?h=vectorstorequerymode#llama_index.core.vector_stores.types.VectorStoreQueryMode) query mode. + +The expected output is as follows: + +```plain +The author worked on writing, programming, building microcomputers, giving talks at conferences, +publishing essays online, developing spam filters, painting, hosting dinner parties, and purchasing +a building for office use. +``` + +### Step 7. Search with metadata filters + +To refine your searches, you can use metadata filters to retrieve specific nearest-neighbor results that match the applied filters. + +#### Query with `book != "paul_graham"` filter + +The following example excludes results where the `book` metadata field is `"paul_graham"`: + +```python +from llama_index.core.vector_stores.types import ( + MetadataFilter, + MetadataFilters, +) + +query_engine = index.as_query_engine( + filters=MetadataFilters( + filters=[ + MetadataFilter(key="book", value="paul_graham", operator="!="), + ] + ), + similarity_top_k=2, +) +response = query_engine.query("What did the author learn?") +print(textwrap.fill(str(response), 100)) +``` + +The expected output is as follows: + +```plain +Empty Response +``` + +#### Query with `book == "paul_graham"` filter + +The following example filters results to include only documents where the `book` metadata field is `"paul_graham"`: + +```python +from llama_index.core.vector_stores.types import ( + MetadataFilter, + MetadataFilters, +) + +query_engine = index.as_query_engine( + filters=MetadataFilters( + filters=[ + MetadataFilter(key="book", value="paul_graham", operator="=="), + ] + ), + similarity_top_k=2, +) +response = query_engine.query("What did the author learn?") 
+print(textwrap.fill(str(response), 100))
+```
+
+The expected output is as follows:
+
+```plain
+The author learned programming on an IBM 1401 using an early version of Fortran in 9th grade, then
+later transitioned to working with microcomputers like the TRS-80 and Apple II. Additionally, the
+author studied philosophy in college but found it unfulfilling, leading to a switch to studying AI.
+Later on, the author attended art school in both the US and Italy, where they observed a lack of
+substantial teaching in the painting department.
+```
+
+### Step 8. Delete documents
+
+Delete the first document from the index:
+
+```python
+tidbvec.delete(documents[0].doc_id)
+```
+
+Check whether the document has been deleted:
+
+```python
+query_engine = index.as_query_engine()
+response = query_engine.query("What did the author learn?")
+print(textwrap.fill(str(response), 100))
+```
+
+The expected output is as follows:
+
+```plain
+Empty Response
+```
+
+## See also
+
+- [Vector Data Types](/tidb-cloud/vector-search-data-types.md)
+- [Vector Search Index](/tidb-cloud/vector-search-index.md)
diff --git a/tidb-cloud/vector-search-integrate-with-peewee.md b/tidb-cloud/vector-search-integrate-with-peewee.md
new file mode 100644
index 0000000000000..0af42329cc0f1
--- /dev/null
+++ b/tidb-cloud/vector-search-integrate-with-peewee.md
@@ -0,0 +1,227 @@
+---
+title: Integrate TiDB Vector Search with peewee
+summary: Learn how to integrate TiDB Vector Search with peewee to store embeddings and perform semantic searches.
+---
+
+# Integrate TiDB Vector Search with peewee
+
+This tutorial walks you through how to use [peewee](https://docs.peewee-orm.com/) to interact with [TiDB Vector Search](/tidb-cloud/vector-search-overview.md), store embeddings, and perform vector search queries.
+
+> **Note**
+>
+> TiDB Vector Search is currently in beta and only available for [TiDB Serverless](/tidb-cloud/select-cluster-tier.md#tidb-serverless) clusters.
+ +## Prerequisites + +To complete this tutorial, you need: + +- [Python 3.8 or higher](https://www.python.org/downloads/) installed. +- [Git](https://git-scm.com/downloads) installed. +- A TiDB Serverless cluster. Follow [creating a TiDB Serverless cluster](/tidb-cloud/create-tidb-cluster-serverless.md) to create your own TiDB Cloud cluster if you don't have one. + +## Run the sample app + +You can quickly learn about how to integrate TiDB Vector Search with peewee by following the steps below. + +### Step 1. Clone the repository + +Clone the [`tidb-vector-python`](https://github.com/pingcap/tidb-vector-python) repository to your local machine: + +```shell +git clone https://github.com/pingcap/tidb-vector-python.git +``` + +### Step 2. Create a virtual environment + +Create a virtual environment for your project: + +```bash +cd tidb-vector-python/examples/orm-peewee-quickstart +python3 -m venv .venv +source .venv/bin/activate +``` + +### Step 3. Install required dependencies + +Install the required dependencies for the demo project: + +```bash +pip install -r requirements.txt +``` + +For your existing project, you can install the following packages: + +```bash +pip install peewee pymysql python-dotenv tidb-vector +``` + +### Step 4. Configure the environment variables + +1. Navigate to the [**Clusters**](https://tidbcloud.com/console/clusters) page, and then click the name of your target cluster to go to its overview page. + +2. Click **Connect** in the upper-right corner. A connection dialog is displayed. + +3. Ensure the configurations in the connection dialog match your operating environment. + + - **Endpoint Type** is set to `Public`. + - **Branch** is set to `main`. + - **Connect With** is set to `General`. + - **Operating System** matches your environment. + + > **Tip:** + > + > If your program is running in Windows Subsystem for Linux (WSL), switch to the corresponding Linux distribution. + +4. Copy the connection parameters from the connection dialog. 
+ + > **Tip:** + > + > If you have not set a password yet, click **Generate Password** to generate a random password. + +5. In the root directory of your Python project, create a `.env` file and paste the connection parameters to the corresponding environment variables. + + - `TIDB_HOST`: The host of the TiDB cluster. + - `TIDB_PORT`: The port of the TiDB cluster. + - `TIDB_USERNAME`: The username to connect to the TiDB cluster. + - `TIDB_PASSWORD`: The password to connect to the TiDB cluster. + - `TIDB_DATABASE`: The database name to connect to. + - `TIDB_CA_PATH`: The path to the root certificate file. + + The following is an example for macOS: + + ```dotenv + TIDB_HOST=gateway01.****.prod.aws.tidbcloud.com + TIDB_PORT=4000 + TIDB_USERNAME=********.root + TIDB_PASSWORD=******** + TIDB_DATABASE=test + TIDB_CA_PATH=/etc/ssl/cert.pem + ``` + +### Step 5. Run the demo + +```bash +python peewee-quickstart.py +``` + +Example output: + +```text +Get 3-nearest neighbor documents: + - distance: 0.00853986601633272 + document: fish + - distance: 0.12712843905603044 + document: dog + - distance: 0.7327387580875756 + document: tree +Get documents within a certain distance: + - distance: 0.00853986601633272 + document: fish + - distance: 0.12712843905603044 + document: dog +``` + +## Sample code snippets + +You can refer to the following sample code snippets to develop your application. + +### Create vector tables + +#### Connect to TiDB cluster + +```python +import os +import dotenv + +from peewee import Model, MySQLDatabase, SQL, TextField +from tidb_vector.peewee import VectorField + +dotenv.load_dotenv() + +# Using `pymysql` as the driver. +connect_kwargs = { + 'ssl_verify_cert': True, + 'ssl_verify_identity': True, +} + +# Using `mysqlclient` as the driver. 
+# connect_kwargs = {
+#     'ssl_mode': 'VERIFY_IDENTITY',
+#     'ssl': {
+#         # Root certificate default path
+#         # https://docs.pingcap.com/tidbcloud/secure-connections-to-serverless-clusters/#root-certificate-default-path
+#         'ca': os.environ.get('TIDB_CA_PATH', '/path/to/ca.pem'),
+#     },
+# }
+
+db = MySQLDatabase(
+    database=os.environ.get('TIDB_DATABASE', 'test'),
+    user=os.environ.get('TIDB_USERNAME', 'root'),
+    password=os.environ.get('TIDB_PASSWORD', ''),
+    host=os.environ.get('TIDB_HOST', 'localhost'),
+    port=int(os.environ.get('TIDB_PORT', '4000')),
+    **connect_kwargs,
+)
+```
+
+#### Define a vector column
+
+Create a table named `peewee_demo_documents` with a column named `embedding` that stores a 3-dimensional vector.
+
+```python
+class Document(Model):
+    class Meta:
+        database = db
+        table_name = 'peewee_demo_documents'
+
+    content = TextField()
+    embedding = VectorField(3)
+```
+
+#### Define a vector column optimized with index
+
+Define a 3-dimensional vector column and optimize it with a [vector search index](/tidb-cloud/vector-search-index.md) (HNSW index).
+
+```python
+class DocumentWithIndex(Model):
+    class Meta:
+        database = db
+        table_name = 'peewee_demo_documents_with_index'
+
+    content = TextField()
+    embedding = VectorField(3, constraints=[SQL("COMMENT 'hnsw(distance=cosine)'")])
+```
+
+TiDB will use this index to accelerate vector search queries based on the cosine distance function.
+
+### Store documents with embeddings
+
+```python
+Document.create(content='dog', embedding=[1, 2, 1])
+Document.create(content='fish', embedding=[1, 2, 4])
+Document.create(content='tree', embedding=[1, 0, 0])
+```
+
+### Search the nearest neighbor documents
+
+Search for the top-3 documents that are semantically closest to the query vector `[1, 2, 3]` based on the cosine distance function.
+ +```python +distance = Document.embedding.cosine_distance([1, 2, 3]).alias('distance') +results = Document.select(Document, distance).order_by(distance).limit(3) +``` + +### Search documents within a certain distance + +Search for the documents whose cosine distance from the query vector `[1, 2, 3]` is less than 0.2. + +```python +distance_expression = Document.embedding.cosine_distance([1, 2, 3]) +distance = distance_expression.alias('distance') +results = Document.select(Document, distance).where(distance_expression < 0.2).order_by(distance).limit(3) +``` + +## See also + +- [Vector Data Types](/tidb-cloud/vector-search-data-types.md) +- [Vector Search Index](/tidb-cloud/vector-search-index.md) diff --git a/tidb-cloud/vector-search-integrate-with-sqlalchemy.md b/tidb-cloud/vector-search-integrate-with-sqlalchemy.md new file mode 100644 index 0000000000000..f66cc6c97f676 --- /dev/null +++ b/tidb-cloud/vector-search-integrate-with-sqlalchemy.md @@ -0,0 +1,199 @@ +--- +title: Integrate TiDB Vector Search with SQLAlchemy +summary: Learn how to integrate TiDB Vector Search with SQLAlchemy to store embeddings and perform semantic searches. +--- + +# Integrate TiDB Vector Search with SQLAlchemy + +This tutorial walks you through how to use [SQLAlchemy](https://www.sqlalchemy.org/) to interact with [TiDB Vector Search](/tidb-cloud/vector-search-overview.md), store embeddings, and perform vector search queries. + +> **Note** +> +> TiDB Vector Search is currently in beta and only available for [TiDB Serverless](/tidb-cloud/select-cluster-tier.md#tidb-serverless) clusters. + +## Prerequisites + +To complete this tutorial, you need: + +- [Python 3.8 or higher](https://www.python.org/downloads/) installed. +- [Git](https://git-scm.com/downloads) installed. +- A TiDB Serverless cluster. Follow [creating a TiDB Serverless cluster](/tidb-cloud/create-tidb-cluster-serverless.md) to create your own TiDB Cloud cluster if you don't have one. 
+
+## Run the sample app
+
+You can quickly learn how to integrate TiDB Vector Search with SQLAlchemy by following the steps below.
+
+### Step 1. Clone the repository
+
+Clone the `tidb-vector-python` repository to your local machine:
+
+```bash
+git clone https://github.com/pingcap/tidb-vector-python.git
+```
+
+### Step 2. Create a virtual environment
+
+Create a virtual environment for your project:
+
+```bash
+cd tidb-vector-python/examples/orm-sqlalchemy-quickstart
+python3 -m venv .venv
+source .venv/bin/activate
+```
+
+### Step 3. Install the required dependencies
+
+Install the required dependencies for the demo project:
+
+```bash
+pip install -r requirements.txt
+```
+
+For your existing project, you can install the following packages:
+
+```bash
+pip install pymysql python-dotenv sqlalchemy tidb-vector
+```
+
+### Step 4. Configure the environment variables
+
+1. Navigate to the [**Clusters**](https://tidbcloud.com/console/clusters) page, and then click the name of your target cluster to go to its overview page.
+
+2. Click **Connect** in the upper-right corner. A connection dialog is displayed.
+
+3. Ensure the configurations in the connection dialog match your environment.
+
+    - **Endpoint Type** is set to `Public`.
+    - **Branch** is set to `main`.
+    - **Connect With** is set to `SQLAlchemy`.
+    - **Operating System** matches your environment.
+
+    > **Tip:**
+    >
+    > If your program is running in Windows Subsystem for Linux (WSL), switch to the corresponding Linux distribution.
+
+4. Click the **PyMySQL** tab and copy the connection string.
+
+    > **Tip:**
+    >
+    > If you have not set a password yet, click **Generate Password** to generate a random password.
+
+5. In the root directory of your Python project, create a `.env` file and paste the connection string into it.
+ + The following is an example for macOS: + + ```dotenv + TIDB_DATABASE_URL="mysql+pymysql://.root:@gateway01..prod.aws.tidbcloud.com:4000/test?ssl_ca=/etc/ssl/cert.pem&ssl_verify_cert=true&ssl_verify_identity=true" + ``` + +### Step 5. Run the demo + +```bash +python sqlalchemy-quickstart.py +``` + +Example output: + +```text +Get 3-nearest neighbor documents: + - distance: 0.00853986601633272 + document: fish + - distance: 0.12712843905603044 + document: dog + - distance: 0.7327387580875756 + document: tree +Get documents within a certain distance: + - distance: 0.00853986601633272 + document: fish + - distance: 0.12712843905603044 + document: dog +``` + +## Sample code snippets + +You can refer to the following sample code snippets to develop your application. + +### Create vector tables + +#### Connect to TiDB cluster + +```python +import os +import dotenv + +from sqlalchemy import Column, Integer, create_engine, Text +from sqlalchemy.orm import declarative_base, Session +from tidb_vector.sqlalchemy import VectorType + +dotenv.load_dotenv() + +tidb_connection_string = os.environ['TIDB_DATABASE_URL'] +engine = create_engine(tidb_connection_string) +``` + +#### Define a vector column + +Create a table with a column named `embedding` that stores a 3-dimensional vector. + +```python +Base = declarative_base() + +class Document(Base): + __tablename__ = 'sqlalchemy_demo_documents' + id = Column(Integer, primary_key=True) + content = Column(Text) + embedding = Column(VectorType(3)) +``` + +#### Define a vector column optimized with index + +Define a 3-dimensional vector column and optimize it with a [vector search index](/tidb-cloud/vector-search-index.md) (HNSW index). 
+ +```python +class DocumentWithIndex(Base): + __tablename__ = 'sqlalchemy_demo_documents_with_index' + id = Column(Integer, primary_key=True) + content = Column(Text) + embedding = Column(VectorType(3), comment="hnsw(distance=cosine)") +``` + +TiDB will use this index to accelerate vector search queries based on the cosine distance function. + +### Store documents with embeddings + +```python +with Session(engine) as session: + session.add(Document(content="dog", embedding=[1, 2, 1])) + session.add(Document(content="fish", embedding=[1, 2, 4])) + session.add(Document(content="tree", embedding=[1, 0, 0])) + session.commit() +``` + +### Search the nearest neighbor documents + +Search for the top-3 documents that are semantically closest to the query vector `[1, 2, 3]` based on the cosine distance function. + +```python +with Session(engine) as session: + distance = Document.embedding.cosine_distance([1, 2, 3]).label('distance') + results = session.query( + Document, distance + ).order_by(distance).limit(3).all() +``` + +### Search documents within a certain distance + +Search for documents whose cosine distance from the query vector `[1, 2, 3]` is less than 0.2. + +```python +with Session(engine) as session: + distance = Document.embedding.cosine_distance([1, 2, 3]).label('distance') + results = session.query( + Document, distance + ).filter(distance < 0.2).order_by(distance).limit(3).all() +``` + +## See also + +- [Vector Data Types](/tidb-cloud/vector-search-data-types.md) +- [Vector Search Index](/tidb-cloud/vector-search-index.md) diff --git a/tidb-cloud/vector-search-integration-overview.md b/tidb-cloud/vector-search-integration-overview.md new file mode 100644 index 0000000000000..1bf586375259a --- /dev/null +++ b/tidb-cloud/vector-search-integration-overview.md @@ -0,0 +1,71 @@ +--- +title: Vector Search Integration Overview +summary: An overview of TiDB vector search integration, including supported AI frameworks, embedding models, and ORM libraries. 
+---
+
+# Vector Search Integration Overview
+
+This document provides an overview of TiDB vector search integration, including supported AI frameworks, embedding models, and Object Relational Mapping (ORM) libraries.
+
+> **Note**
+>
+> TiDB Vector Search is currently in beta and is only available for [TiDB Serverless](/tidb-cloud/select-cluster-tier.md#tidb-serverless) clusters.
+
+## AI frameworks
+
+TiDB provides official support for the following AI frameworks, enabling you to easily integrate AI applications developed based on these frameworks with TiDB Vector Search.
+
+| AI frameworks | Tutorial |
+|---------------|---------------------------------------------------------------------------------------------------|
+| LangChain | [Integrate Vector Search with LangChain](/tidb-cloud/vector-search-integrate-with-langchain.md) |
+| LlamaIndex | [Integrate Vector Search with LlamaIndex](/tidb-cloud/vector-search-integrate-with-llamaindex.md) |
+
+You can also use TiDB for various purposes, such as document storage and knowledge graph storage for AI applications.
+
+## Embedding models and services
+
+TiDB Vector Search supports storing vectors of up to 16,000 dimensions, which accommodates most embedding models.
+
+You can use either self-deployed open-source embedding models or embedding APIs provided by third-party providers to generate vectors.
+
+The following table lists some mainstream embedding service providers and the corresponding integration tutorials.
+
+| Embedding service providers | Tutorial |
+|-----------------------------|---------------------------------------------------------------------------------------------------------------------|
+| Jina AI | [Integrate Vector Search with Jina AI Embeddings API](/tidb-cloud/vector-search-integrate-with-jinaai-embedding.md) |
+
+## Object Relational Mapping (ORM) libraries
+
+You can integrate TiDB Vector Search with your ORM library to interact with the TiDB database.
+
+The following table lists the supported ORM libraries and the corresponding integration tutorials:
+
+| Language | ORM/Client | How to install | Tutorial |
+|----------|--------------------|-----------------------------------|--------------------------------------------------------------------------------------------------------|
+| Python | TiDB Vector Client | `pip install tidb-vector[client]` | [Get Started with Vector Search Using Python](/tidb-cloud/vector-search-get-started-using-python.md) |
+| Python | SQLAlchemy | `pip install tidb-vector` | [Integrate TiDB Vector Search with SQLAlchemy](/tidb-cloud/vector-search-integrate-with-sqlalchemy.md) |
+| Python | peewee | `pip install tidb-vector` | [Integrate TiDB Vector Search with peewee](/tidb-cloud/vector-search-integrate-with-peewee.md) |
+| Python | Django | `pip install django-tidb[vector]` | [Integrate TiDB Vector Search with Django](/tidb-cloud/vector-search-integrate-with-django-orm.md) |
diff --git a/tidb-cloud/vector-search-limitations.md b/tidb-cloud/vector-search-limitations.md new file mode 100644 index 0000000000000..de84f5e50d8c5 --- /dev/null +++ b/tidb-cloud/vector-search-limitations.md @@ -0,0 +1,23 @@ +--- +title: Vector Search Limitations +summary: Learn the limitations of the TiDB Vector Search. +--- + +# Vector Search Limitations + +This document describes the known limitations of TiDB Vector Search. We are continuously working to enhance your experience by adding more features. + +- TiDB Vector Search is only available for [TiDB Serverless](/tidb-cloud/select-cluster-tier.md#tidb-serverless) clusters. It is not available for TiDB Dedicated or TiDB Self-Hosted. + +- Each [vector](/tidb-cloud/vector-search-data-types.md) supports up to 16,000 dimensions. + +- Vector data supports only single-precision floating-point numbers (Float32). + +- Only cosine distance and L2 distance are supported when you create a [vector search index](/tidb-cloud/vector-search-index.md). + +## Feedback + +We value your feedback and are always here to help: + +- [Join our Discord](https://discord.gg/zcqexutz2R) +- [Visit our Support Portal](https://tidb.support.pingcap.com/) diff --git a/tidb-cloud/vector-search-overview.md b/tidb-cloud/vector-search-overview.md new file mode 100644 index 0000000000000..1b9c3ddaa1d03 --- /dev/null +++ b/tidb-cloud/vector-search-overview.md @@ -0,0 +1,65 @@ +--- +title: Vector Search (Beta) Overview +summary: Learn about Vector Search in TiDB Cloud. This feature provides an advanced search solution for performing semantic similarity searches across various data types, including documents, images, audio, and video. +--- + +# Vector Search (Beta) Overview + +TiDB Vector Search (beta) provides an advanced search solution for performing semantic similarity searches across various data types, including documents, images, audio, and video. 
This feature enables developers to easily build scalable applications with generative artificial intelligence (AI) capabilities using familiar MySQL skills. + +> **Note** +> +> TiDB Vector Search is currently in beta and only available for [TiDB Serverless](/tidb-cloud/select-cluster-tier.md#tidb-serverless) clusters. + +## Concepts + +Vector search is a search method that prioritizes the meaning of your data to deliver relevant results. This differs from traditional full-text search, which relies primarily on exact keyword matches and word frequency. + +For example, a full-text search for "a swimming animal" only returns results with those exact keywords. In contrast, vector search can return results for other swimming animals, such as fish or ducks, even if the exact keywords are not present. + +### Vector embedding + +A vector embedding, also known as an embedding, is a sequence of numbers that represents real-world objects in a high-dimensional space. It captures the meaning and context of unstructured data, such as documents, images, audio, and videos. + +Vector embeddings are essential in machine learning and serve as the foundation for semantic similarity searches. + +TiDB introduces [Vector data types](/tidb-cloud/vector-search-data-types.md) designed to optimize the storage and retrieval of vector embeddings, enhancing their use in AI applications. You can store vector embeddings in TiDB and perform vector search queries to find the most relevant data using these data types. + +### Embedding model + +Embedding models are algorithms that transform data into [vector embeddings](#vector-embedding). + +Selecting an appropriate embedding model is crucial for ensuring the accuracy and relevance of semantic search results. For unstructured text data, you can find top-performing text embedding models on the [Massive Text Embedding Benchmark (MTEB) Leaderboard](https://huggingface.co/spaces/mteb/leaderboard). 
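If you want a feel for how distances over embeddings rank results, the following standalone Python sketch (an illustration, not part of the official examples) computes cosine distances over toy 3-dimensional vectors; real embedding models produce vectors with hundreds or thousands of dimensions:

```python
import math

def cosine_distance(a, b):
    # Cosine distance = 1 - cosine similarity; a smaller value means more similar.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return 1 - dot / (norm_a * norm_b)

# Toy document embeddings (illustrative values only, not from a real model).
documents = {"dog": [1, 2, 1], "fish": [1, 2, 4], "tree": [1, 0, 0]}
query = [1, 2, 3]

# Print documents from most to least similar to the query vector.
for name, embedding in sorted(documents.items(),
                              key=lambda item: cosine_distance(query, item[1])):
    print(f"{name}: {cosine_distance(query, embedding):.4f}")
```

Running the sketch prints `fish`, `dog`, and `tree` in ascending order of distance, illustrating how a distance function turns embeddings into a similarity ranking.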
+ +To learn how to generate vector embeddings for your specific data types, refer to the embedding provider integration tutorials or examples. + +## How vector search works + +After converting raw data into vector embeddings and storing them in TiDB, your application can execute vector search queries to find the data most semantically or contextually relevant to a user's query. + +Vector Search in TiDB Cloud identifies the top-k nearest neighbor (KNN) vectors by using a [distance function](/tidb-cloud/vector-search-functions-and-operators.md) to calculate the distance between the given vector and vectors stored in the database. The vectors closest to the query represent the most similar data in meaning. + +![The Schematic TiDB Vector Search](/media/vector-search/embedding-search.png) + +As a relational database with integrated vector search capabilities, TiDB enables you to store data and their corresponding vector embeddings together in one database. You can store them in the same table using different columns, or separate them into different tables and combine them using `JOIN` queries when retrieving. + +## Use cases + +### Retrieval-Augmented Generation (RAG) + +Retrieval-Augmented Generation (RAG) is an architecture designed to optimize the output of Large Language Models (LLMs). By using vector search, RAG applications can store vector embeddings in the database and retrieve relevant documents as additional context when the LLM generates responses, thereby improving the quality and relevance of the answers. + +### Semantic search + +Semantic search is a search technology that returns results based on the meaning of a query, rather than simply matching keywords. It interprets the meaning across different languages and various types of data (such as text, images, and audio) using embeddings. Vector search algorithms then use these embeddings to find the most relevant data that satisfies the user's query. 
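Semantic search ultimately reduces to the nearest-neighbor computation described earlier. The following minimal, self-contained Python sketch (illustration only, with made-up vector values) shows the brute-force version of that search using Euclidean (L2) distance, one of the supported distance functions; TiDB performs the equivalent computation server-side and can accelerate it with a vector index:

```python
import math

def l2_distance(a, b):
    # Euclidean (L2) distance between two vectors of equal dimension.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def top_k_nearest(query, documents, k):
    # Rank every stored embedding by its distance to the query vector
    # and keep the k closest ones -- a brute-force KNN search.
    ranked = sorted(documents, key=lambda name: l2_distance(query, documents[name]))
    return ranked[:k]

# Toy document embeddings (illustrative values only).
documents = {"dog": [1, 2, 1], "fish": [1, 2, 4], "tree": [1, 0, 0]}

print(top_k_nearest([1, 2, 3], documents, k=2))  # ['fish', 'dog']
```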
+ +### Recommendation engine + +A recommendation engine is a system that proactively suggests content, products, or services that are relevant and personalized to users. It accomplishes this by creating embeddings that represent user behavior and preferences. These embeddings help the system identify similar items that other users have interacted with or shown interest in. This increases the likelihood that the recommendations will be both relevant and appealing to the user. + +## See also + +To get started with TiDB Vector Search, see the following documents: + +- [Get started with vector search using Python](/tidb-cloud/vector-search-get-started-using-python.md) +- [Get started with vector search using SQL](/tidb-cloud/vector-search-get-started-using-sql.md)