Commit 83160e2: refine descriptions

qiancai committed Apr 7, 2024 · 1 parent f59d72f

Showing 1 changed file with 43 additions and 43 deletions: tidb-cloud/serverless-export.md

summary: Learn how to export data from TiDB Serverless clusters.

TiDB Serverless Export (Beta) is a service that enables you to export data from a TiDB Serverless cluster to local storage or an external storage service. You can use the exported data for backup, migration, data analysis, or other purposes.

While you can also export data using tools such as [mysqldump](https://dev.mysql.com/doc/refman/8.0/en/mysqldump.html) and TiDB [Dumpling](https://docs.pingcap.com/tidb/dev/dumpling-overview), TiDB Serverless Export offers a more convenient and efficient way to export data from a TiDB Serverless cluster. It brings the following benefits:

- Convenience: the export service provides a simple and easy-to-use way to export data from a TiDB Serverless cluster, eliminating the need for additional tools or resources.
- Isolation: the export service uses separate computing resources, ensuring isolation from the resources used by your online services.
- Consistency: the export service ensures the consistency of the exported data without causing locks, which does not affect your online services.

## Features

This section describes the features of TiDB Serverless Export.

### Export location

You can export data to local storage or [Amazon S3](https://aws.amazon.com/s3/).

> **Note:**
>
> If the size of the data to be exported is large, it is recommended that you export it to Amazon S3.

**Local storage**

Exporting data to local storage has the following limitations:

- Exporting multiple databases to local storage at the same time is not supported.
- Exported data is saved in the stashing area and expires after two days. Make sure to download the exported data before it expires.
- TiDB Cloud offers 250 GiB of storage space in the stashing area for each organization per region. If the storage space is full, you will not be able to export data to local storage.

**Amazon S3**

To export data to Amazon S3, you need to provide an [access key](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html) for your S3 bucket and make sure the access key has the necessary permissions for your S3 bucket. It is recommended that you create a new bucket with full S3 access.
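As a reference, the bucket and access key can be prepared with the AWS CLI. The following is a minimal sketch, not part of the export service itself; the user name and bucket name are hypothetical, and in production you may prefer a narrower bucket-scoped policy over the broad managed policy used here.

```shell
# Sketch: create a dedicated bucket, IAM user, and access key for exports.
# All names below are hypothetical examples.
prepare_export_credentials() {
  local user_name="$1"    # e.g. tidb-export-user
  local bucket_name="$2"  # e.g. my-export-bucket

  # Create a new bucket dedicated to exports.
  aws s3api create-bucket --bucket "$bucket_name"

  # Create an IAM user and grant it S3 access. AmazonS3FullAccess matches
  # the "full S3 access" recommendation above; a bucket-scoped inline
  # policy is safer in production.
  aws iam create-user --user-name "$user_name"
  aws iam attach-user-policy --user-name "$user_name" \
    --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess

  # Generate the access key pair to pass to the export service.
  aws iam create-access-key --user-name "$user_name"
}
```

The last command prints the access key ID and secret access key, which you can then supply to the export job.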

### Data filtering

You can filter data by specifying the database and table you want to export. If you specify a database without specifying a table, all tables in that database will be exported. If you do not specify a database when you export data to Amazon S3, all databases in the cluster will be exported.

> **Note:**
>
> You must specify the database when you export data to local storage.
### Data formats

You can export data in the following formats:

- `SQL` (default): export data in SQL format.
- `CSV`: export data in CSV format.
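To illustrate the practical difference between the two formats, the following sketch uses made-up sample rows; the exact file layout produced by the export service may differ.

```shell
# Made-up sample rows for illustration only.
cat > users.sql <<'EOF'
INSERT INTO `users` VALUES (1,'alice'),(2,'bob');
EOF

cat > users.csv <<'EOF'
id,name
1,alice
2,bob
EOF

# SQL output can be replayed directly with a MySQL client, while CSV
# output is convenient for ad-hoc processing, for example extracting a
# column with awk:
awk -F, 'NR > 1 { print $2 }' users.csv   # prints "alice" then "bob"
```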

### Data compression

You can compress the exported data using the following algorithms:

- `gzip` (default): compress the exported data with gzip.
- `snappy`: compress the exported data with snappy.
- `zstd`: compress the exported data with zstd.
- `none`: do not compress the exported data.
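A quick local illustration of what compression implies for consumers of the exported files: a gzip-compressed file must be decompressed before use. The file name below is a made-up example, not the service's naming scheme.

```shell
# Create a small sample file and compress it with gzip.
printf 'id,name\n1,alice\n' > sample.csv
gzip sample.csv            # replaces sample.csv with sample.csv.gz

# Decompress to stdout, keeping the .gz file intact:
gunzip -c sample.csv.gz
```

Files compressed with zstd can be handled analogously with `zstd -d`; snappy-compressed files require a snappy-capable tool.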

### Cancel export

You can cancel an export job that is in the running state.

## Examples

Currently, you can manage export jobs using the [TiDB Cloud CLI](/tidb-cloud/cli-reference.md).

### Export data to local storage

1. Create an export job that specifies the database and table you want to export:

    ```shell
    ticloud serverless export create -c <cluster-id> --database <database> --table <table>
    ```

    You will get an export ID from the output.

2. After the export is successful, download the exported data to your local storage:

    ```shell
    ticloud serverless export download -c <cluster-id> -e <export-id>
    ```
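The two steps above can be scripted together. This is a sketch under an assumption: exactly how the export ID appears in the CLI output is not documented here, so the extraction pattern below (looking for an `export-id: <id>` token) is hypothetical and should be adjusted to the real `ticloud` output.

```shell
# Sketch: create an export job, parse the export ID from the CLI output,
# then download the result. The "export-id: <id>" output format is an
# assumption; adjust the sed pattern to the actual ticloud output.
export_table_to_local() {
  local cluster_id="$1" database="$2" table="$3"
  local export_id

  export_id=$(ticloud serverless export create -c "$cluster_id" \
      --database "$database" --table "$table" \
    | sed -n 's/.*export-id: *\([^ ]*\).*/\1/p' | head -n 1)
  [ -n "$export_id" ] || { echo "failed to parse export ID" >&2; return 1; }

  # You may need to wait for the job to succeed before downloading;
  # add polling here if the CLI does not block until completion.
  ticloud serverless export download -c "$cluster_id" -e "$export_id"
}
```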

### Export data to Amazon S3

```shell
ticloud serverless export create -c <cluster-id> --bucket-uri <bucket-uri> --access-key-id <access-key-id> --secret-access-key <secret-access-key>
```

### Export with the CSV format

```shell
ticloud serverless export create -c <cluster-id> --file-type CSV
```

### Export the whole database

```shell
ticloud serverless export create -c <cluster-id> --database <database>
```

### Export with snappy compression

```shell
ticloud serverless export create -c <cluster-id> --compress snappy
```

### Cancel an export job

```shell
ticloud serverless export cancel -c <cluster-id> -e <export-id>
```

## Pricing

The export service is free during the beta period. You only need to pay for the [Request Units (RUs)](/tidb-cloud/tidb-cloud-glossary.md#request-unit) generated during the export process of successful or canceled jobs. For failed export jobs, you will not be charged.
