+
+
+If you grant permissions by accessKey and secretKey, you can create the `VolumeBackup` CR as follows:
+
+```shell
+kubectl apply -f backup-fed.yaml
+```
+
+The `backup-fed.yaml` file has the following content:
+
+```yaml
+---
+apiVersion: federation.pingcap.com/v1alpha1
+kind: VolumeBackup
+metadata:
+  name: ${backup-name}
+spec:
+  clusters:
+  - k8sClusterName: ${k8s-name1}
+    tcName: ${tc-name1}
+    tcNamespace: ${tc-namespace1}
+  - k8sClusterName: ${k8s-name2}
+    tcName: ${tc-name2}
+    tcNamespace: ${tc-namespace2}
+  - ... # other clusters
+  template:
+    br:
+      sendCredToTikv: true
+    s3:
+      provider: aws
+      secretName: s3-secret
+      region: ${region-name}
+      bucket: ${bucket-name}
+      prefix: ${backup-path}
+    toolImage: ${br-image}
+    cleanPolicy: Delete
+    calcSizeLevel: ${snapshot-size-calculation-level}
+```
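+
+The `s3-secret` Secret referenced by `spec.template.s3.secretName` must already exist in each data plane. As a minimal sketch, assuming the same `access_key` and `secret_key` key names used by the non-federated backup documentation, you can create it as follows:
+
+```shell
+kubectl create secret generic s3-secret --from-literal=access_key=${access-key} --from-literal=secret_key=${secret-key} -n ${namespace}
+```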
+
+If you grant permissions by associating Pod with IAM, you can create the `VolumeBackup` CR as follows:
+
+```shell
+kubectl apply -f backup-fed.yaml
+```
+
+The `backup-fed.yaml` file has the following content:
+
+```yaml
+---
+apiVersion: federation.pingcap.com/v1alpha1
+kind: VolumeBackup
+metadata:
+  name: ${backup-name}
+  annotations:
+    iam.amazonaws.com/role: arn:aws:iam::123456789012:role/role-name
+spec:
+  clusters:
+  - k8sClusterName: ${k8s-name1}
+    tcName: ${tc-name1}
+    tcNamespace: ${tc-namespace1}
+  - k8sClusterName: ${k8s-name2}
+    tcName: ${tc-name2}
+    tcNamespace: ${tc-namespace2}
+  - ... # other clusters
+  template:
+    br:
+      sendCredToTikv: false
+    s3:
+      provider: aws
+      region: ${region-name}
+      bucket: ${bucket-name}
+      prefix: ${backup-path}
+    toolImage: ${br-image}
+    cleanPolicy: Delete
+    calcSizeLevel: ${snapshot-size-calculation-level}
+```
+
+If you grant permissions by associating ServiceAccount with IAM, you can create the `VolumeBackup` CR as follows:
+
+```shell
+kubectl apply -f backup-fed.yaml
+```
+
+The `backup-fed.yaml` file has the following content:
+
+```yaml
+---
+apiVersion: federation.pingcap.com/v1alpha1
+kind: VolumeBackup
+metadata:
+  name: ${backup-name}
+spec:
+  clusters:
+  - k8sClusterName: ${k8s-name1}
+    tcName: ${tc-name1}
+    tcNamespace: ${tc-namespace1}
+  - k8sClusterName: ${k8s-name2}
+    tcName: ${tc-name2}
+    tcNamespace: ${tc-namespace2}
+  - ... # other clusters
+  template:
+    br:
+      sendCredToTikv: false
+    s3:
+      provider: aws
+      region: ${region-name}
+      bucket: ${bucket-name}
+      prefix: ${backup-path}
+    toolImage: ${br-image}
+    serviceAccount: tidb-backup-manager
+    cleanPolicy: Delete
+    calcSizeLevel: ${snapshot-size-calculation-level}
+```
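+
+To confirm that the `tidb-backup-manager` ServiceAccount in each data plane is bound to an IAM role (for example, through the IRSA annotation `eks.amazonaws.com/role-arn` on EKS — an assumption about how the association was configured), you can inspect it as follows:
+
+```shell
+kubectl get serviceaccount tidb-backup-manager -n ${namespace} -o yaml
+```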
+
+> **Note:**
+>
+> The value of the `spec.clusters.k8sClusterName` field in the `VolumeBackup` CR must be the same as the **context name** of the kubeconfig used by the br-federation-manager.
+
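+To list the context names defined in a kubeconfig, you can run the following command (`${kubeconfig-path}` is a placeholder for the kubeconfig file that the br-federation-manager uses):
+
+```shell
+kubectl config get-contexts --kubeconfig ${kubeconfig-path}
+```
+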
+### Step 3. View the backup status
+
+After creating the `VolumeBackup` CR, the BR Federation automatically starts the backup process in each data plane.
+
+To check the volume backup status, use the following command:
+
+```shell
+kubectl get vbk -n ${namespace} -o wide
+```
+
+Once the volume backup is complete, you can obtain the information of all the data planes from the `status.backups` field. This information can be used for volume restore.
+
+To obtain the information, use the following command:
+
+```shell
+kubectl get vbk ${backup-name} -n ${namespace} -o yaml
+```
+
+The information is as follows:
+
+```yaml
+status:
+  backups:
+  - backupName: fed-${backup-name}-${k8s-name1}
+    backupPath: s3://${bucket-name}/${backup-path}-${k8s-name1}
+    commitTs: "ts1"
+    k8sClusterName: ${k8s-name1}
+    tcName: ${tc-name1}
+    tcNamespace: ${tc-namespace1}
+  - backupName: fed-${backup-name}-${k8s-name2}
+    backupPath: s3://${bucket-name}/${backup-path}-${k8s-name2}
+    commitTs: "ts2"
+    k8sClusterName: ${k8s-name2}
+    tcName: ${tc-name2}
+    tcNamespace: ${tc-namespace2}
+  - ... # other backups
+```
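+
+For example, to pull out just the commit timestamps recorded for each data plane (a jsonpath sketch based on the field layout shown above), run:
+
+```shell
+kubectl get vbk ${backup-name} -n ${namespace} -o jsonpath='{range .status.backups[*]}{.k8sClusterName}{"\t"}{.commitTs}{"\n"}{end}'
+```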
+
+### Delete the `VolumeBackup` CR
+
+If you set `spec.template.cleanPolicy` to `Delete`, when you delete the `VolumeBackup` CR, the BR Federation will clean up the backup files and the volume snapshots on AWS.
+
+To delete the `VolumeBackup` CR, run the following command:
+
+```shell
+kubectl delete vbk ${backup-name} -n ${namespace}
+```
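+
+If you want to keep the backup data when the CR is deleted, you can set the clean policy to `Retain` instead, assuming the federated CR supports the same clean policies as the non-federated `Backup` CR:
+
+```yaml
+spec:
+  template:
+    cleanPolicy: Retain
+```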
+
+## Scheduled volume backup
+
+To ensure regular backups of the TiDB cluster and prevent an excessive number of backup items, you can set a backup policy and retention policy.
+
+To do this, create a `VolumeBackupSchedule` CR object that describes the scheduled volume backup. Each backup time point triggers a volume backup. The underlying implementation is the ad-hoc volume backup.
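+
+For example, a schedule that takes a volume backup every day at 2:00 AM and keeps backups for three days might set the scheduling fields as follows (the concrete values are only illustrative):
+
+```yaml
+spec:
+  maxReservedTime: "72h"
+  schedule: "0 2 * * *"
+```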
+
+### Perform a scheduled volume backup
+
+**You must execute the following steps in the control plane**.
+
+Depending on the authorization method that you chose for granting remote storage access, perform a scheduled volume backup using one of the following methods:
+
+If you grant permissions by accessKey and secretKey, create the `VolumeBackupSchedule` CR to back up cluster data as follows:
+
+```shell
+kubectl apply -f volume-backup-scheduler.yaml
+```
+
+The content of `volume-backup-scheduler.yaml` is as follows:
+
+```yaml
+---
+apiVersion: federation.pingcap.com/v1alpha1
+kind: VolumeBackupSchedule
+metadata:
+  name: ${scheduler-name}
+  namespace: ${namespace-name}
+spec:
+  #maxBackups: ${number}
+  #pause: ${bool}
+  maxReservedTime: ${duration}
+  schedule: ${cron-expression}
+  backupTemplate:
+    clusters:
+    - k8sClusterName: ${k8s-name1}
+      tcName: ${tc-name1}
+      tcNamespace: ${tc-namespace1}
+    - k8sClusterName: ${k8s-name2}
+      tcName: ${tc-name2}
+      tcNamespace: ${tc-namespace2}
+    - ... # other clusters
+    template:
+      br:
+        sendCredToTikv: true
+      s3:
+        provider: aws
+        secretName: s3-secret
+        region: ${region-name}
+        bucket: ${bucket-name}
+        prefix: ${backup-path}
+      toolImage: ${br-image}
+      cleanPolicy: Delete
+      calcSizeLevel: ${snapshot-size-calculation-level}
+```
+
+If you grant permissions by associating Pod with IAM, create the `VolumeBackupSchedule` CR to back up cluster data as follows:
+
+```shell
+kubectl apply -f volume-backup-scheduler.yaml
+```
+
+The content of `volume-backup-scheduler.yaml` is as follows:
+
+```yaml
+---
+apiVersion: federation.pingcap.com/v1alpha1
+kind: VolumeBackupSchedule
+metadata:
+  name: ${scheduler-name}
+  namespace: ${namespace-name}
+  annotations:
+    iam.amazonaws.com/role: arn:aws:iam::123456789012:role/role-name
+spec:
+  #maxBackups: ${number}
+  #pause: ${bool}
+  maxReservedTime: ${duration}
+  schedule: ${cron-expression}
+  backupTemplate:
+    clusters:
+    - k8sClusterName: ${k8s-name1}
+      tcName: ${tc-name1}
+      tcNamespace: ${tc-namespace1}
+    - k8sClusterName: ${k8s-name2}
+      tcName: ${tc-name2}
+      tcNamespace: ${tc-namespace2}
+    - ... # other clusters
+    template:
+      br:
+        sendCredToTikv: false
+      s3:
+        provider: aws
+        region: ${region-name}
+        bucket: ${bucket-name}
+        prefix: ${backup-path}
+      toolImage: ${br-image}
+      cleanPolicy: Delete
+      calcSizeLevel: ${snapshot-size-calculation-level}
+```
+
+If you grant permissions by associating ServiceAccount with IAM, create the `VolumeBackupSchedule` CR to back up cluster data as follows:
+
+```shell
+kubectl apply -f volume-backup-scheduler.yaml
+```
+
+The content of `volume-backup-scheduler.yaml` is as follows:
+
+```yaml
+---
+apiVersion: federation.pingcap.com/v1alpha1
+kind: VolumeBackupSchedule
+metadata:
+  name: ${scheduler-name}
+  namespace: ${namespace-name}
+spec:
+  #maxBackups: ${number}
+  #pause: ${bool}
+  maxReservedTime: ${duration}
+  schedule: ${cron-expression}
+  backupTemplate:
+    clusters:
+    - k8sClusterName: ${k8s-name1}
+      tcName: ${tc-name1}
+      tcNamespace: ${tc-namespace1}
+    - k8sClusterName: ${k8s-name2}
+      tcName: ${tc-name2}
+      tcNamespace: ${tc-namespace2}
+    - ... # other clusters
+    template:
+      br:
+        sendCredToTikv: false
+      s3:
+        provider: aws
+        region: ${region-name}
+        bucket: ${bucket-name}
+        prefix: ${backup-path}
+      serviceAccount: tidb-backup-manager
+      toolImage: ${br-image}
+      cleanPolicy: Delete
+      calcSizeLevel: ${snapshot-size-calculation-level}
+```
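+
+After the schedule is created, you can check it and the `VolumeBackup` CRs that it creates over time by running the following commands:
+
+```shell
+kubectl get volumebackupschedule -n ${namespace}
+kubectl get vbk -n ${namespace}
+```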
+
diff --git a/en/backup-restore-by-ebs-snapshot-faq.md b/en/backup-restore-by-ebs-snapshot-faq.md
new file mode 100644
index 0000000000..24873e096c
--- /dev/null
+++ b/en/backup-restore-by-ebs-snapshot-faq.md
@@ -0,0 +1,45 @@
+---
+title: FAQs on EBS Snapshot Backup and Restore across Multiple Kubernetes
+summary: Learn about the common questions and solutions for EBS snapshot backup and restore across multiple Kubernetes.
+---
+
+# FAQs on EBS Snapshot Backup and Restore across Multiple Kubernetes
+
+This document addresses common questions and solutions related to EBS snapshot backup and restore across multiple Kubernetes environments.
+
+## New tags on snapshots and restored volumes
+
+**Symptom:** Some tags are automatically added to generated snapshots and restored EBS volumes.
+
+**Explanation:** The new tags are added for traceability. Snapshots inherit all tags from the individual source EBS volumes, while restored EBS volumes inherit tags from the source snapshots but prefix keys with `snapshot/`. Additionally, new tags such as `