` 或 Markdown 文章中的 `# `。
+* 每个文件都有一个唯一的 ID。默认情况下,文档 ID 是该文档相对于根文档目录的文件名(不带扩展名)。
+
+### 链接到其他文档
+
+你可以通过添加以下任意链接轻松跳转到其他位置:
+* 你可以使用以下 Markdown 标记指向 `https://github.com` 或 `https://k8s.io` 等外部站点的绝对 URL:
+ * `<https://k8s.io>` 或
+ * `[kubernetes](https://k8s.io)`
+* 链接到 Markdown 文件或生成的路径。
+ 你可以使用相对路径来索引相应的文件。
+* 链接到图片或其他资源。
+ 如果你的文章包含图片或其他资源,你可以在 `/docs/resources` 中创建相应的目录,并将文章相关的文件放置在该目录中。
+ 现在我们将有关 Karmada 的公开图片存放在 `/docs/resources/general` 中。你可以使用以下方式链接到图片:
+ * `![Git 工作流](../resources/contributor/git_workflow.png)`
+
+### 目录组成
+
+Docusaurus 2 使用一个侧边栏来管理文档。
+
+创建侧边栏可用于:
+* 对多个相关的文档分组
+* 为每个文档显示侧边栏
+* 提供分页导航,有 Next/Previous 按钮
+
+对于 Karmada 文档,你可以查阅 `sidebars.js` 了解文档的组成结构。
+
+```
+module.exports = {
+ docs: [
+ {
+ type: "category",
+ label: "Core Concepts",
+ collapsed: false,
+ items: [
+ "core-concepts/introduction",
+ "core-concepts/concepts",
+ "core-concepts/architecture",
+ ],
+ },
+ {
+ type: "doc",
+ id: "key-features/features",
+ },
+ {
+ type: "category",
+ label: "Get Started",
+ items: [
+ "get-started/nginx-example"
+ ],
+ },
+....
+```
+
+目录中文档的顺序严格按照 items 的顺序排列。
+```
+type: "category",
+label: "Core Concepts",
+collapsed: false,
+items: [
+ "core-concepts/introduction",
+ "core-concepts/concepts",
+ "core-concepts/architecture",
+],
+```
+
+如果新增一篇文档,必须将其添加到 `sidebars.js` 中才能正常显示。如果你不确定将文档放在哪儿,请在 PR 中询问社区成员。
+
+### 有关中文文档
+
+贡献中文文档有以下两种情况:
+* 你想将现有的英文文档翻译成中文。
+ 在这种情况下,你需要修改 `i18n/zh/docusaurus-plugin-content-docs/current` 目录中相应的文件内容。
+ 该目录的组织结构与英文完全相同。`current.json` 保存文档目录的中文译稿。如果要翻译目录名称,可以对其进行编辑。
+* 你想贡献没有英文版本的中文文档。任何类型的文章都是受欢迎的。
+ 在这种情况下,你可以先将文章和标题添加到英文目录。
+ 文章内容可以先待定。
+ 然后将对应的中文内容添加到中文目录中。
+
+## 调试文档
+
+假设现在你已经完成了文档编辑。对 `karmada.io/website` 发起 PR 后,如果通过了 CI,就可以在网站上预览你的文档。
+
+点击红色标记的 **Details**,可以看到网站的预览视图。
+
+![文档 CI](../resources/contributor/debug-docs.png)
+
+点击 **Next** 切换至当前版本,然后你可以看到你修改的文档相应的变更。
+如果你有与中文版本相关的更改,请点击旁边的语言下拉框切换到中文。
+
+![点击 Next](../resources/contributor/click-next.png)
+
+如果预览的页面与预期不符,请再次检查文档。
+
+### 拼写检查 (可选)
+
+更新文件后,你可以使用 [拼写检查工具](https://github.com/crate-ci/typos) 查找并纠正拼写错误。
+
+要安装拼写检查工具,可以参考 [安装](https://github.com/crate-ci/typos?tab=readme-ov-file#install)。
+
+然后,只需在本地命令行的仓库根目录下执行 `typos . --config ./typos.toml` 命令。
+
+## FAQ
+
+### 版本控制
+
+对于各版本新补充的文档,我们将在各版本发布之日同步到最新版本,旧版本文档不做修改。
+对于文档中发现的错误,我们将在每次发布时进行修复。
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/contributor/count-contributions.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/contributor/count-contributions.md
new file mode 100644
index 000000000..a68a2e5ac
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/contributor/count-contributions.md
@@ -0,0 +1,21 @@
+---
+title: 更正您的信息,以便更好地贡献
+---
+
+通过问题、评论、拉取请求等向 [karmada-io](https://github.com/karmada-io) 做出贡献后,您可以在[此处](https://karmada.devstats.cncf.io/d/66/developer-activity-counts-by-companies)查看您的贡献。
+如果您发现公司栏中的信息错误或为空,我们强烈建议您更正。
+例如,应该使用 `Huawei Technologies Co. Ltd` 而不是 `HUAWEI`:
+![Wrong Information](../resources/contributor/contributions_list.png)
+
+以下是解决此问题的步骤。
+
+## 验证您在 CNCF 系统中的组织信息
+首先,访问您的个人资料[页面](https://openprofile.dev/edit/profile)并确保您的组织是准确的。
+![organization-check](../resources/contributor/organization_check.png)
+* 如果组织不正确,请选择正确的组织。
+* 如果您的组织不在列表中,请单击 **Add** 添加您的组织。
+
+## 更新用于计算贡献的 CNCF 存储库
+一旦您在 CNCF 系统中验证了您的组织,您还需要在 gitdm 仓库中创建一个拉取请求来更新您的所属关系信息。
+为此,您需要修改两个文件:`company_developers*.txt` 和 `developers_affiliations*.txt`。请参考这个示例拉取请求:[PR Example](https://github.com/cncf/gitdm/pull/183)。
+拉取请求合并成功后,更改同步可能需要最多四周的时间。
\ No newline at end of file
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/contributor/github-workflow.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/contributor/github-workflow.md
new file mode 100644
index 000000000..f8eae90d8
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/contributor/github-workflow.md
@@ -0,0 +1,275 @@
+---
+title: "GitHub Workflow"
+description: An overview of the GitHub workflow used by the Karmada project. It includes some tips and suggestions on things such as keeping your local environment in sync with upstream and commit hygiene.
+---
+
+> This doc is lifted from [Kubernetes github-workflow](https://github.com/kubernetes/community/blob/master/contributors/guide/github-workflow.md).
+
+![Git workflow](../resources/contributor/git_workflow.png)
+
+### 1 Fork in the cloud
+
+1. Visit https://github.com/karmada-io/karmada
+2. Click `Fork` button (top right) to establish a cloud-based fork.
+
+### 2 Clone fork to local storage
+
+Per Go's [workspace instructions][go-workspace], place Karmada's code on your
+`GOPATH` using the following cloning procedure.
+
+[go-workspace]: https://golang.org/doc/code.html#Workspaces
+
+Define a local working directory:
+
+```sh
+# If your GOPATH has multiple paths, pick
+# just one and use it instead of $GOPATH here.
+# You must follow exactly this pattern,
+# neither `$GOPATH/src/github.com/${your github profile name}/`
+# nor any other pattern will work.
+export working_dir="$(go env GOPATH)/src/github.com/karmada-io"
+```
+
+Set `user` to match your github profile name:
+
+```sh
+export user={your github profile name}
+```
+
+Both `$working_dir` and `$user` are mentioned in the figure above.
+
+Create your clone:
+
+```sh
+mkdir -p $working_dir
+cd $working_dir
+git clone https://github.com/$user/karmada.git
+# or: git clone git@github.com:$user/karmada.git
+
+cd $working_dir/karmada
+git remote add upstream https://github.com/karmada-io/karmada.git
+# or: git remote add upstream git@github.com:karmada-io/karmada.git
+
+# Never push to upstream master
+git remote set-url --push upstream no_push
+
+# Confirm that your remotes make sense:
+git remote -v
+```
+
+### 3 Branch
+
+Get your local master up to date:
+
+```sh
+# Depending on which repository you are working from,
+# the default branch may be called 'main' instead of 'master'.
+
+cd $working_dir/karmada
+git fetch upstream
+git checkout master
+git rebase upstream/master
+```
+
+Branch from it:
+```sh
+git checkout -b myfeature
+```
+
+Then edit code on the `myfeature` branch.
+
+### 4 Keep your branch in sync
+
+```sh
+# Depending on which repository you are working from,
+# the default branch may be called 'main' instead of 'master'.
+
+# While on your myfeature branch
+git fetch upstream
+git rebase upstream/master
+```
+
+Please don't use `git pull` instead of the above `fetch` / `rebase`. `git pull`
+does a merge, which leaves merge commits. These make the commit history messy
+and violate the principle that commits ought to be individually understandable
+and useful (see below). You can also consider changing your `.git/config` file via
+`git config branch.autoSetupRebase always` to change the behavior of `git pull`, or another non-merge option such as `git pull --rebase`.
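+
+For example, a minimal sketch of that optional, repository-local configuration (adjust to your own preferences):
+
+```sh
+# optional: make `git pull` rebase instead of merge in this repository
+git config pull.rebase true
+git config branch.autoSetupRebase always
+```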
+
+### 5 Commit
+
+Commit your changes.
+
+```sh
+git commit --signoff
+```
+Likely you go back and edit/build/test some more then `commit --amend`
+in a few cycles.
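+
+A typical cycle might look like the following sketch:
+
+```sh
+# ...edit/build/test...
+git add -u                      # stage the updated files
+git commit --amend --signoff    # fold the fix into the previous commit
+```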
+
+### 6 Push
+
+When ready to review (or just to establish an offsite backup of your work),
+push your branch to your fork on `github.com`:
+
+```sh
+git push -f ${your_remote_name} myfeature
+```
+
+### 7 Create a pull request
+
+1. Visit your fork at `https://github.com/$user/karmada`
+2. Click the `Compare & Pull Request` button next to your `myfeature` branch.
+
+_If you have upstream write access_, please refrain from using the GitHub UI for
+creating PRs, because GitHub will create the PR branch inside the main
+repository rather than inside your fork.
+
+#### Get a code review
+
+Once your pull request has been opened it will be assigned to one or more
+reviewers. Those reviewers will do a thorough code review, looking for
+correctness, bugs, opportunities for improvement, documentation and comments,
+and style.
+
+Commit changes made in response to review comments to the same branch on your
+fork.
+
+Very small PRs are easy to review. Very large PRs are very difficult to review.
+
+#### Squash commits
+
+After a review, prepare your PR for merging by squashing your commits.
+
+All commits left on your branch after a review should represent meaningful milestones or units of work. Use commits to add clarity to the development and review process.
+
+Before merging a PR, squash the following kinds of commits:
+
+- Fixes/review feedback
+- Typos
+- Merges and rebases
+- Work in progress
+
+Aim to have every commit in a PR compile and pass tests independently if you can, but it's not a requirement. In particular, `merge` commits must be removed, as they will not pass tests.
+
+To squash your commits, perform an [interactive
+rebase](https://git-scm.com/book/en/v2/Git-Tools-Rewriting-History):
+
+1. Check your git branch:
+
+ ```
+ git status
+ ```
+
+Output is similar to:
+
+ ```
+ On branch your-contribution
+ Your branch is up to date with 'origin/your-contribution'.
+ ```
+
+2. Start an interactive rebase using a specific commit hash, or count backwards from your last commit using `HEAD~<n>`, where `<n>` represents the number of commits to include in the rebase.
+
+ ```
+ git rebase -i HEAD~3
+ ```
+
+Output is similar to:
+
+ ```
+ pick 2ebe926 Original commit
+ pick 31f33e9 Address feedback
+ pick b0315fe Second unit of work
+
+ # Rebase 7c34fc9..b0315ff onto 7c34fc9 (3 commands)
+ #
+ # Commands:
+ # p, pick = use commit
+ # r, reword = use commit, but edit the commit message
+ # e, edit = use commit, but stop for amending
+ # s, squash = use commit, but meld into previous commit
+ # f, fixup = like "squash", but discard this commit's log message
+
+ ...
+
+ ```
+
+3. Use a command line text editor to change the word `pick` to `squash` for the commits you want to squash, then save your changes and continue the rebase:
+
+ ```
+ pick 2ebe926 Original commit
+ squash 31f33e9 Address feedback
+ pick b0315fe Second unit of work
+
+ ...
+
+ ```
+
+Output (after saving changes) is similar to:
+
+ ```
+ [detached HEAD 61fdded] Second unit of work
+ Date: Thu Mar 5 19:01:32 2020 +0100
+ 2 files changed, 15 insertions(+), 1 deletion(-)
+
+ ...
+
+ Successfully rebased and updated refs/heads/master.
+ ```
+4. Force push your changes to your remote branch:
+
+ ```
+ git push --force
+ ```
+
+For mass automated fixups (e.g. automated doc formatting), use one or more
+commits for the changes to tooling and a final commit to apply the fixup en
+masse. This makes reviews easier.
+
+### Merging a commit
+
+Once you've received review and approval, your commits are squashed, your PR is ready for merging.
+
+Merging happens automatically after both a Reviewer and Approver have approved the PR. If you haven't squashed your commits, they may ask you to do so before approving a PR.
+
+### Reverting a commit
+
+In case you wish to revert a commit, use the following instructions.
+
+_If you have upstream write access_, please refrain from using the
+`Revert` button in the GitHub UI for creating the PR, because GitHub
+will create the PR branch inside the main repository rather than inside your fork.
+
+- Create a branch and sync it with upstream.
+
+ ```sh
+ # Depending on which repository you are working from,
+ # the default branch may be called 'main' instead of 'master'.
+
+ # create a branch
+ git checkout -b myrevert
+
+ # sync the branch with upstream
+ git fetch upstream
+ git rebase upstream/master
+ ```
+- If the commit you wish to revert is a:
+ - **merge commit:**
+
+ ```sh
+ # SHA is the hash of the merge commit you wish to revert
+ git revert -m 1 SHA
+ ```
+
+ - **single commit:**
+
+ ```sh
+ # SHA is the hash of the single commit you wish to revert
+ git revert SHA
+ ```
+
+- This will create a new commit reverting the changes. Push this new commit to your remote.
+
+```sh
+git push ${your_remote_name} myrevert
+```
+
+- [Create a Pull Request](#7-create-a-pull-request) using this branch.
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/contributor/lifted.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/contributor/lifted.md
new file mode 100644
index 000000000..d6fb8a996
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/contributor/lifted.md
@@ -0,0 +1,121 @@
+---
+title: 如何管理 Lift 代码
+---
+
+本文讲解如何管理 Lift 代码。
+此任务的常见用户场景是开发者将代码从其他代码仓库 Lift 到 `pkg/util/lifted` 目录。
+
+- [Lift 代码的步骤](#lift-代码的步骤)
+- [如何编写 Lift 注释](#如何编写-lift-注释)
+- [示例](#示例)
+
+## Lift 代码的步骤
+- 从另一个代码仓库拷贝代码并将其保存到 `pkg/util/lifted` 的一个 Go 文件中。
+- 可选择更改 Lift 代码。
+- [参照指南](#如何编写-lift-注释)为代码添加 Lift 注释。
+- 运行 `hack/update-lifted.sh` 以更新 Lift 文档 `pkg/util/lifted/doc.go`。
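+
+下面是上述步骤的一个操作示意(假设代码已拷贝至 `pkg/util/lifted` 并按下文说明添加了 Lift 注释):
+
+```shell
+cd karmada/
+hack/update-lifted.sh                # 重新生成 pkg/util/lifted/doc.go
+git diff pkg/util/lifted/doc.go      # 确认自动生成的文档变更
+```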
+
+## 如何编写 Lift 注释
+Lift 注释应放在 Lift 代码(可以是函数、类型、变量或常量)前面。
+在 Lift 注释和 Lift 代码之间只允许空行和注释。
+
+Lift 注释由一行或多行注释组成,每行的格式为 `+lifted:KEY[=VALUE]`。
+对某些键而言,值是可选的。
+
+有效的键如下:
+
+- source:
+
+ `source` 键是必需的。其值表明 Lift 代码的来源。
+
+- changed:
+
+ `changed` 键是可选的。它表明代码是否被更改。
+ 值是可选的(`true` 或 `false`,默认为 `true`)。
+ 不添加此键或将其设为 `false` 意味着不变更代码。
+
+## 示例
+### Lift 函数
+
+将 `IsQuotaHugePageResourceName` 函数 Lift 到 `corehelpers.go`:
+
+```go
+// +lifted:source=https://github.com/kubernetes/kubernetes/blob/release-1.23/pkg/apis/core/helper/helpers.go#L57-L61
+
+// IsQuotaHugePageResourceName returns true if the resource name has the quota
+// related huge page resource prefix.
+func IsQuotaHugePageResourceName(name corev1.ResourceName) bool {
+ return strings.HasPrefix(string(name), corev1.ResourceHugePagesPrefix) || strings.HasPrefix(string(name), corev1.ResourceRequestsHugePagesPrefix)
+}
+```
+
+添加到 `doc.go` 中:
+
+```markdown
+| Lift 的文件 | 源文件 | 常量/变量/类型/函数 | 是否变更 |
+|--------------------------|-------------------------------------------------------------------------------------------------------------------------------|-----------------------------------------|---------|
+| corehelpers.go | https://github.com/kubernetes/kubernetes/blob/release-1.23/pkg/apis/core/helper/helpers.go#L57-L61 | func IsQuotaHugePageResourceName | N |
+```
+
+### 变更 Lift 函数
+
+将 `GetNewReplicaSet` 函数 Lift 并变更为 `deployment.go`:
+
+```go
+// +lifted:source=https://github.com/kubernetes/kubernetes/blob/release-1.22/pkg/controller/deployment/util/deployment_util.go#L536-L544
+// +lifted:changed
+
+// GetNewReplicaSet returns a replica set that matches the intent of the given deployment; get ReplicaSetList from client interface.
+// Returns nil if the new replica set doesn't exist yet.
+func GetNewReplicaSet(deployment *appsv1.Deployment, f ReplicaSetListFunc) (*appsv1.ReplicaSet, error) {
+ rsList, err := ListReplicaSetsByDeployment(deployment, f)
+ if err != nil {
+ return nil, err
+ }
+ return FindNewReplicaSet(deployment, rsList), nil
+}
+```
+
+添加到 `doc.go` 中:
+
+```markdown
+| Lift 的文件 | 源文件 | 常量/变量/类型/函数 | 是否变更 |
+|--------------------------|-------------------------------------------------------------------------------------------------------------------------------|-----------------------------------------|---------|
+| deployment.go | https://github.com/kubernetes/kubernetes/blob/release-1.22/pkg/controller/deployment/util/deployment_util.go#L536-L544 | func GetNewReplicaSet | Y |
+```
+
+### Lift 常量
+
+将 `isNegativeErrorMsg` 常量 Lift 到 `corevalidation.go`:
+
+```go
+// +lifted:source=https://github.com/kubernetes/kubernetes/blob/release-1.22/pkg/apis/core/validation/validation.go#L59
+const isNegativeErrorMsg string = apimachineryvalidation.IsNegativeErrorMsg
+```
+
+添加到 `doc.go` 中:
+
+```markdown
+| Lift 的文件 | 源文件 | 常量/变量/类型/函数 | 是否变更 |
+|--------------------------|-------------------------------------------------------------------------------------------------------------------------------|-----------------------------------------|---------|
+| corevalidation.go | https://github.com/kubernetes/kubernetes/blob/release-1.22/pkg/apis/core/validation/validation.go#L59 | const isNegativeErrorMsg | N |
+```
+
+### Lift 类型
+
+将 `Visitor` 类型 Lift 到 `visitpod.go`:
+
+```go
+// +lifted:source=https://github.com/kubernetes/kubernetes/blob/release-1.23/pkg/api/v1/pod/util.go#L82-L83
+
+// Visitor 随每个对象名称被调用,如果应继续 visiting,则返回 true
+type Visitor func(name string) (shouldContinue bool)
+```
+
+添加到 `doc.go` 中:
+
+```markdown
+| Lift 的文件 | 源文件 | 常量/变量/类型/函数 | 是否变更 |
+|--------------------------|-------------------------------------------------------------------------------------------------------------------------------|-----------------------------------------|---------|
+| visitpod.go | https://github.com/kubernetes/kubernetes/blob/release-1.23/pkg/api/v1/pod/util.go#L82-L83 | type Visitor | N |
+```
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/core-concepts/architecture.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/core-concepts/architecture.md
new file mode 100644
index 000000000..ca8f40d04
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/core-concepts/architecture.md
@@ -0,0 +1,22 @@
+---
+title: 架构设计
+---
+
+Karmada 的总体架构如下所示:
+
+![Architecture](../resources/general/architecture.png)
+
+Karmada 控制平面包括以下组件:
+
+- Karmada API Server
+- Karmada Controller Manager
+- Karmada Scheduler
+
+ETCD 存储了 karmada API 对象,API Server 是所有其他组件通讯的 REST 端点,Karmada Controller Manager 根据您通过 API 服务器创建的 API 对象执行操作。
+
+Karmada Controller Manager 在管理面运行各种 Controller,这些 Controller 监视 karmada 对象,然后与成员集群的 API Server 通信以创建常规的 Kubernetes 资源。
+
+1. Cluster Controller:将 Kubernetes 集群连接到 Karmada,通过创建集群对象来管理集群的生命周期。
+2. Policy Controller:监视 PropagationPolicy 对象。当添加 PropagationPolicy 对象时,Controller 将选择与 resourceSelector 匹配的一组资源,并为每个单独的资源对象创建 ResourceBinding。
+3. Binding Controller:监视 ResourceBinding 对象,并为每个集群创建对应的 Work 对象,其中包含单个资源清单。
+4. Execution Controller:监视 Work 对象。当创建 Work 对象时,Controller 将把资源分发到成员集群。
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/core-concepts/components.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/core-concepts/components.md
new file mode 100644
index 000000000..99bbd45b9
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/core-concepts/components.md
@@ -0,0 +1,108 @@
+---
+title: 关键组件
+---
+本文档概述了使 Karmada 正常提供完整功能所需的各个组件。
+
+![components](../resources/general/components.png)
+
+## 控制平面组件(Control Plane Components)
+
+一个完整且可工作的 Karmada 控制平面由以下组件组成。其中 karmada-agent 是可选的,
+是否需要取决于[集群注册模式](../userguide/clustermanager/cluster-registration)。
+
+### karmada-apiserver
+
+API 服务器是 Karmada 控制平面的一个组件,对外暴露 Karmada API 以及 Kubernetes 原生API,API 服务器是 Karmada 控制平面的前端。
+
+Karmada API 服务器是直接使用 Kubernetes 的 kube-apiserver 实现的,因此 Karmada 与 Kubernetes API 自然兼容。
+这也使得 Karmada 更容易实现与 Kubernetes 生态系统的集成,例如允许用户使用 kubectl 来操作 Karmada、
+[与 ArgoCD 集成](../userguide/cicd/working-with-argocd)、[与 Flux 集成](../userguide/cicd/working-with-flux)等等。
+
+### karmada-aggregated-apiserver
+
+聚合 API 服务器是使用 [Kubernetes API 聚合层](https://kubernetes.io/zh-cn/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation/)技术实现的扩展 API 服务器。
+它提供了[集群 API](https://github.com/karmada-io/karmada/blob/master/pkg/apis/cluster/types.go) 以及相应的子资源,
+例如 cluster/status 和 cluster/proxy,实现了[聚合 Kubernetes API Endpoint](../userguide/globalview/aggregated-api-endpoint) 等可以通过 karmada-apiserver 访问成员集群的高级功能。
+
+### kube-controller-manager
+
+kube-controller-manager 由一组控制器组成,Karmada 只是从 Kubernetes 的官方版本中挑选了一些控制器,以保持与原生控制器一致的用户体验和行为。
+
+值得注意的是,并非所有的原生控制器都是 Karmada 所需要的,
+推荐的控制器请参阅 [Recommended Controllers](../administrator/configuration/configure-controllers#required-controllers)。
+
+> 注意:当用户向 Karmada API 服务器提交 Deployment 或其他 Kubernetes 标准资源时,它们只记录在 Karmada 控制平面的 etcd 中。
+> 随后,这些资源会向成员集群同步。然而,这些部署资源不会在 Karmada 控制平面集群中进行 reconcile 过程(例如创建Pod)。
+
+### karmada-controller-manager
+
+Karmada 控制器管理器运行了各种自定义控制器进程。
+
+控制器负责监视Karmada对象,并与底层集群的API服务器通信,以创建原生的 Kubernetes 资源。
+
+所有的控制器列举在 [Karmada 控制器](../administrator/configuration/configure-controllers/#karmada-controllers)。
+
+### karmada-scheduler
+
+karmada-scheduler 负责将 Kubernetes 原生API资源对象(以及CRD资源)调度到成员集群。
+
+调度器依据策略约束和可用资源来确定哪些集群对调度队列中的资源是可用的,然后调度器对每个可用集群进行打分排序,并将资源绑定到最合适的集群。
+
+### karmada-webhook
+
+karmada-webhook 是用于接收 karmada/Kubernetes API 请求的 HTTP 回调,并对请求进行处理。你可以定义两种类型的 karmada-webhook,即验证性质的 webhook 和修改性质的 webhook。
+修改性质的准入 webhook 会先被调用。它们可以更改发送到 Karmada API 服务器的对象以执行自定义的设置默认值操作。
+
+在完成了所有对象修改并且 Karmada API 服务器也验证了所传入的对象之后,验证性质的 webhook 会被调用,并通过拒绝请求的方式来强制实施自定义的策略。
+
+> 说明:如果 Webhook 需要保证它们所看到的是对象的最终状态以实施某种策略。则应使用验证性质的 webhook,因为对象被修改性质 webhook 看到之后仍然可能被修改。
+
+### etcd
+
+一致且高可用的键值存储,用作 Karmada 的所有 Karmada/Kubernetes 资源对象数据的后台数据库。
+
+如果你的 Karmada 使用 etcd 作为其后台数据库,请确保你针对这些数据有一份备份计划。
+
+你可以在官方[文档](https://etcd.io/docs/)中找到有关 etcd 的深入知识。
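+
+下面是一个最简的备份示意(假设可以直接访问 etcd 端点,端点与证书路径仅为示例,请按你的实际部署调整):
+
+```shell
+# 使用 etcdctl 为 Karmada 的 etcd 创建一次快照备份
+ETCDCTL_API=3 etcdctl snapshot save karmada-etcd-backup.db \
+  --endpoints=https://127.0.0.1:2379 \
+  --cacert=/etc/karmada/pki/etcd-ca.crt \
+  --cert=/etc/karmada/pki/etcd-client.crt \
+  --key=/etc/karmada/pki/etcd-client.key
+```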
+
+### karmada-agent
+
+Karmada 有 Push 和 Pull 两种[集群注册模式](../userguide/clustermanager/cluster-registration),karmada-agent 应部署在每个 Pull 模式的成员集群上。
+它可以将特定集群注册到 Karmada 控制平面,并将工作负载清单从 Karmada 控制平面同步到成员集群。
+此外,它也负责将成员集群及其资源的状态同步到 Karmada 控制平面。
+
+## 插件(Addons)
+
+### karmada-scheduler-estimator
+
+Karmada 调度估计器为每个成员集群运行精确的调度预估,它为调度器提供了更准确的集群资源信息。
+
+> 注意:早期的 Karmada 调度器只支持根据集群资源的总量来决策可调度副本的数量。
+> 在这种情况下,当集群资源的总量足够但每个节点资源不足时,会发生调度失败。
+> 为了解决这个问题,引入了估计器组件,该组件根据资源请求计算每个节点的可调度副本的数量,从而计算出真正的整个集群的可调度副本的数量。
+
+### karmada-descheduler
+
+Karmada 重调度组件负责定时(默认每两分钟)检测所有副本,并根据成员集群中副本实例状态的变化触发重新调度。
+
+该组件是通过调用 karmada-scheduler-estimator 来感知有多少副本实例状态发生了变化,并且只有当副本的调度策略为动态划分时,它才会发挥作用。
+
+### karmada-search
+
+Karmada 搜索组件以聚合服务的形式,提供了在多云环境中进行全局搜索和资源代理等功能。
+
+其中,[全局搜索](../tutorials/karmada-search/)能力是用来跨多个集群缓存资源对象和事件,以及通过搜索 API 对外提供图形化的检索服务;
+[资源代理](../userguide/globalview/proxy-global-resource/)能力使用户既可以访问 Karmada 控制平面所有资源,又可以访问成员集群中的所有资源。
+
+## CLI 工具
+
+### karmadactl
+
+Karmada 提供了一个命令行工具 karmadactl,用于使用 Karmada API 与 Karmada 的控制平面进行通信。
+
+你可以使用 karmadactl 执行成员集群的添加/剔除,将成员集群标记/取消标记为不可调度,等等。
+有关包括 karmadactl 操作完整列表在内的更多信息,请参阅 [karmadactl](../reference/karmadactl/karmadactl-commands/karmadactl)。
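+
+下面列举几个常见操作的示意(集群名与 kubeconfig 路径均为示例值):
+
+```shell
+# 将集群标记为不可调度 / 恢复可调度
+karmadactl cordon member1
+karmadactl uncordon member1
+
+# 以 Push 模式接入 / 剔除一个成员集群
+karmadactl join member1 --cluster-kubeconfig=$HOME/.kube/member1.config
+karmadactl unjoin member1
+```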
+
+### kubectl karmada
+
+kubectl karmada 以 kubectl 插件的形式提供功能,但它的实现与 karmadactl 完全相同。
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/core-concepts/concepts.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/core-concepts/concepts.md
new file mode 100644
index 000000000..0e8e0d56f
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/core-concepts/concepts.md
@@ -0,0 +1,29 @@
+---
+title: 核心概念
+---
+
+本页面介绍了有关 Karmada 的一些核心概念。
+
+## 资源模板
+
+Karmada 使用 Kubernetes 原生 API 定义联邦资源模板,以便轻松与现有 Kubernetes 采用的工具进行集成。
+
+## 调度策略
+
+Karmada 提供了一个独立的 Propagation(placement) Policy API 来定义多集群的调度要求。
+
+- 支持 1:N 的策略映射机制。用户无需每次创建联邦应用时都标明调度约束。
+
+- 在使用默认策略的情况下,用户可以直接与 Kubernetes API 交互。
+
+## 差异化策略
+
+Karmada 为不同的集群提供了一个可自动化生成独立配置的 Override Policy API。例如:
+
+- 基于成员集群所在区域自动配置不同镜像仓库地址。
+
+- 根据集群不同的云厂商,可以使用不同的存储类。
+
+下图显示了 Karmada 资源如何调度到成员集群。
+
+![karmada-resource-relation](../resources/general/karmada-resource-relation.png)
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/core-concepts/introduction.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/core-concepts/introduction.md
new file mode 100644
index 000000000..71615242c
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/core-concepts/introduction.md
@@ -0,0 +1,51 @@
+---
+title: Karmada 是什么?
+slug: /
+
+---
+
+## Karmada:开放的、多云的、多集群的 Kubernetes 编排
+
+Karmada(Kubernetes Armada)是一个 Kubernetes 管理系统,使您能够在多个 Kubernetes 集群和云中运行云原生应用程序,而无需更改应用程序。通过使用 Kubernetes 原生 API 并提供先进的调度功能,Karmada 实现了真正的开放式、多云 Kubernetes。
+
+Karmada 旨在为多云和混合云场景下的多集群应用程序管理提供即插即用的自动化,具有集中式多云管理、高可用性、故障恢复和流量调度等关键功能。
+
+Karmada 是[Cloud Native Computing Foundation](https://cncf.io/)(CNCF)的孵化项目。
+
+## 为什么选择 Karmada
+
+- __兼容 K8s 原生 API__
+ - 从单集群到多集群的无侵入式升级
+ - 现有 K8s 工具链的无缝集成
+
+- __开箱即用__
+ - 针对场景内置策略集,包括:Active-active, Remote DR, Geo Redundant 等。
+ - 在多集群上进行跨集群应用程序自动伸缩、故障转移和负载均衡。
+
+- __避免供应商锁定__
+ - 与主流云提供商集成
+ - 在集群之间自动分配、迁移
+ - 未绑定专有供应商编排
+
+- __集中式管理__
+ - 位置无关的集群管理
+ - 支持公有云、本地或边缘上的集群。
+
+- __丰富多集群调度策略__
+ - 集群亲和性、实例在多集群中的拆分调度/再平衡,
+ - 多维 HA:区域/AZ/集群/提供商
+
+- __开放和中立__
+ - 由互联网、金融、制造业、电信、云提供商等联合发起。
+ - 目标是与 CNCF 一起进行开放治理。
+
+**注意:此项目是在 Kubernetes [Federation v1](https://github.com/kubernetes-retired/federation)和[v2](https://github.com/kubernetes-sigs/kubefed)基础之上开发的。某些基本概念从这两个版本继承而来。**
+
+## 接下来做什么
+
+以下是一些建议的下一步:
+
+- 了解 Karmada 的[核心概念](./concepts.md)。
+- 了解 Karmada 的[架构](./architecture.md)。
+- 开始[安装 Karmada](../installation/installation.md)。
+- 开始使用[交互式教程](https://killercoda.com/karmada/)。
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/developers/customize-karmada-scheduler.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/developers/customize-karmada-scheduler.md
new file mode 100644
index 000000000..e27d986ee
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/developers/customize-karmada-scheduler.md
@@ -0,0 +1,353 @@
+---
+title: 自定义开发调度器
+---
+
+Karmada 自带了一个默认调度器,其详细描述请查阅[这里](../reference/components/karmada-scheduler.md)。如果默认调度器不适合你的需求,你可以实现自己的调度器。
+Karmada 的调度器框架与 Kubernetes 类似,但与 K8s 不同的是,Karmada 需要将应用部署至一组集群上,而不是单一的目标对象:它会根据用户调度策略中的 placement 字段以及内部的调度插件算法,将用户应用部署到预期的集群组上。
+
+调度流程可以分为如下四步:
+* Predicate阶段:过滤不合适的集群
+* Priority阶段:为集群打分
+* SelectClusters选取阶段:根据集群得分以及SpreadConstraint选取集群组
+* ReplicaScheduling阶段:根据配置的副本调度策略将用户作业副本部署在选取的集群组上
+
+![schedule process](../resources/developers/schedule-process.png)
+
+其中过滤与打分的插件可以基于调度器框架进行自定义的开发与配置。
+
+Karmada默认的调度器有几个内置的插件:
+* APIEnablement: 一个过滤插件,用于检查需要下发的API资源(CRD)是否已在目标集群中被安装。
+* TaintToleration: 一个过滤插件,用于检查调度策略是否容忍集群的污点。
+* ClusterAffinity: 一个过滤和打分插件,用于实现集群的亲和性调度,支持通过names、labels、cluster的字段进行集群过滤。
+* SpreadConstraint: 一个过滤插件,用于检查集群是否满足调度策略的分发属性。
+* ClusterLocality: 一个打分插件,用于检查集群是否已存在被调度的资源,实现资源的聚合调度。
+
+用户可以基于自身的场景自定义插件,并且通过Karmada的调度器框架实现自身的调度器。
+以下给出了一个自定义开发调度器的具体例子。
+
+## 开发前的准备
+
+你需要已经安装Karmada,并拉取了Karmada的代码。 如果你想要安装Karmada,你可以按照这里的[安装指南](../installation/installation.md)。
+如果你想要试用Karmada,我们推荐通过```hack/local-up-karmada.sh```来部署一个开发环境。
+
+```sh
+git clone https://github.com/karmada-io/karmada
+cd karmada
+hack/local-up-karmada.sh
+```
+
+## 开发一个插件
+
+假设你想要开发一个名为`TestFilter`的过滤插件,你可以参考Karmada源代码中的调度器实现,代码位于[pkg/scheduler/framework/plugins](https://github.com/karmada-io/karmada/tree/master/pkg/scheduler/framework/plugins)。
+开发完成后的目录结构类似于:
+
+```
+.
+├── apienablement
+├── clusteraffinity
+├── clusterlocality
+├── spreadconstraint
+├── tainttoleration
+└── testfilter
+    └── test_filter.go
+```
+
+其中test_filter.go文件的内容如下,隐去了具体的过滤逻辑实现。
+
+```go
+package testfilter
+
+import (
+ "context"
+
+ clusterv1alpha1 "github.com/karmada-io/karmada/pkg/apis/cluster/v1alpha1"
+ policyv1alpha1 "github.com/karmada-io/karmada/pkg/apis/policy/v1alpha1"
+ workv1alpha2 "github.com/karmada-io/karmada/pkg/apis/work/v1alpha2"
+ "github.com/karmada-io/karmada/pkg/scheduler/framework"
+)
+
+const (
+ // Name is the name of the plugin used in the plugin registry and configurations.
+ Name = "TestFilter"
+)
+
+type TestFilter struct{}
+
+var _ framework.FilterPlugin = &TestFilter{}
+
+// New instantiates the TestFilter plugin.
+func New() (framework.Plugin, error) {
+ return &TestFilter{}, nil
+}
+
+// Name returns the plugin name.
+func (p *TestFilter) Name() string {
+ return Name
+}
+
+// Filter implements the filtering logic of the TestFilter plugin.
+func (p *TestFilter) Filter(ctx context.Context,
+ bindingSpec *workv1alpha2.ResourceBindingSpec, bindingStatus *workv1alpha2.ResourceBindingStatus, cluster *clusterv1alpha1.Cluster) *framework.Result {
+
+ // implementation
+
+ return framework.NewResult(framework.Success)
+}
+```
+
+作为一个过滤插件,你需要实现`framework.FilterPlugin`接口。而作为一个打分插件,你需要实现`framework.ScorePlugin`接口。
+
+## 注册插件
+
+你需要编辑调度器的main函数 [cmd/scheduler/main.go](https://github.com/karmada-io/karmada/blob/master/cmd/scheduler/main.go),在`NewSchedulerCommand`函数中传入自定义的插件配置。
+
+```go
+package main
+
+import (
+ "os"
+
+ "k8s.io/component-base/cli"
+ _ "k8s.io/component-base/logs/json/register" // for JSON log format registration
+ controllerruntime "sigs.k8s.io/controller-runtime"
+ _ "sigs.k8s.io/controller-runtime/pkg/metrics"
+
+ "github.com/karmada-io/karmada/cmd/scheduler/app"
+ "github.com/karmada-io/karmada/pkg/scheduler/framework/plugins/testfilter"
+)
+
+func main() {
+ stopChan := controllerruntime.SetupSignalHandler().Done()
+ command := app.NewSchedulerCommand(stopChan, app.WithPlugin(testfilter.Name, testfilter.New))
+ code := cli.Run(command)
+ os.Exit(code)
+}
+
+```
+
+## 打包调度器
+
+在你注册插件之后,你需要将你的调度器的二进制的调度器文件打包进一个容器镜像,并将上述镜像替换掉默认调度器的镜像。
+
+```shell
+cd karmada
+export VERSION=## Your Image Tag
+make image-karmada-scheduler
+```
+
+```shell
+kubectl --kubeconfig ~/.kube/karmada.config --context karmada-host edit deploy/karmada-scheduler -nkarmada-system
+...
+ spec:
+ automountServiceAccountToken: false
+ containers:
+ - command:
+ - /bin/karmada-scheduler
+ - --kubeconfig=/etc/kubeconfig
+ - --bind-address=0.0.0.0
+ - --secure-port=10351
+ - --enable-scheduler-estimator=true
+ - --v=4
+ image: ## Your Image Address
+...
+```
+
+当你启动调度器后,你可以从调度器的日志中发现`TestFilter`插件已启用。
+
+```
+I0105 09:50:11.809137 1 scheduler.go:109] karmada-scheduler version: version.Info{GitVersion:"v1.4.0-141-g119cb8e1", GitCommit:"119cb8e1e8be0142ca3d32c619c25e5ec4b0a1b6", GitTreeState:"dirty", BuildDate:"2023-01-05T09:42:41Z", GoVersion:"go1.19.3", Compiler:"gc", Platform:"linux/amd64"}
+I0105 09:50:11.813339 1 registry.go:63] Enable Scheduler plugin "SpreadConstraint"
+I0105 09:50:11.813470 1 registry.go:63] Enable Scheduler plugin "ClusterLocality"
+I0105 09:50:11.813483 1 registry.go:63] Enable Scheduler plugin "TestFilter"
+I0105 09:50:11.813489 1 registry.go:63] Enable Scheduler plugin "APIEnablement"
+I0105 09:50:11.813545 1 registry.go:63] Enable Scheduler plugin "TaintToleration"
+I0105 09:50:11.813596 1 registry.go:63] Enable Scheduler plugin "ClusterAffinity"
+```
+
+## 配置插件的启停
+
+你可以通过配置`--plugins`选项来配置插件的启停。
+例如,以下的配置将会关闭`TestFilter`插件。
+
+```shell
+kubectl --kubeconfig ~/.kube/karmada.config --context karmada-host edit deploy/karmada-scheduler -nkarmada-system
+...
+ spec:
+ automountServiceAccountToken: false
+ containers:
+ - command:
+ - /bin/karmada-scheduler
+ - --kubeconfig=/etc/kubeconfig
+ - --bind-address=0.0.0.0
+ - --secure-port=10351
+ - --enable-scheduler-estimator=true
+ - --plugins=*,-TestFilter
+ - --v=4
+ image: ## Your Image Address
+...
+```
+
+## 配置多个调度器
+
+### 运行第二个调度器
+
+你可以和默认调度器一起同时运行多个调度器,并告诉 Karmada 为每个工作负载使用哪个调度器。
+以下是一个示例的调度器配置文件。 你可以将它保存为 `my-scheduler.yaml`。
+
+```yaml
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+ name: my-karmada-scheduler
+ namespace: karmada-system
+ labels:
+ app: my-karmada-scheduler
+spec:
+ replicas: 1
+ selector:
+ matchLabels:
+ app: my-karmada-scheduler
+ template:
+ metadata:
+ labels:
+ app: my-karmada-scheduler
+ spec:
+ automountServiceAccountToken: false
+ tolerations:
+ - key: node-role.kubernetes.io/master
+ operator: Exists
+ containers:
+ - name: karmada-scheduler
+ image: docker.io/karmada/karmada-scheduler:latest
+ imagePullPolicy: IfNotPresent
+ livenessProbe:
+ httpGet:
+ path: /healthz
+ port: 10351
+ scheme: HTTP
+ failureThreshold: 3
+ initialDelaySeconds: 15
+ periodSeconds: 15
+ timeoutSeconds: 5
+ command:
+ - /bin/karmada-scheduler
+ - --kubeconfig=/etc/kubeconfig
+ - --bind-address=0.0.0.0
+ - --secure-port=10351
+ - --enable-scheduler-estimator=true
+ - --leader-elect-resource-name=my-scheduler # 你的自定义调度器名称
+ - --scheduler-name=my-scheduler # 你的自定义调度器名称
+ - --v=4
+ volumeMounts:
+ - name: kubeconfig
+ subPath: kubeconfig
+ mountPath: /etc/kubeconfig
+ volumes:
+ - name: kubeconfig
+ secret:
+ secretName: kubeconfig
+```
+
+> Note: 对于 `--leader-elect-resource-name` 选项,默认为 `karmada-scheduler`。如果你将另一个调度器与默认的调度器一起部署,
+> 需要指定此选项,并且建议使用你的自定义调度器名称作为值。
+
+为了在 Karmada 中运行我们的第二个调度器,在 host 集群中创建上面配置中指定的 Deployment:
+
+```shell
+kubectl --context karmada-host create -f my-scheduler.yaml
+```
+
+验证调度器 Pod 正在运行:
+
+```
+kubectl --context karmada-host get pods --namespace=karmada-system
+```
+
+输出类似于:
+
+```
+NAME READY STATUS RESTARTS AGE
+....
+my-karmada-scheduler-lnf4s-4744f 1/1 Running 0 2m
+...
+```
+
+此列表中,除了默认的 karmada-scheduler Pod 之外,你应该还能看到处于 “Running” 状态的 my-karmada-scheduler Pod。
+
+### 为 Deployment 指定调度器
+
+现在第二个调度器正在运行,创建一些 Deployment,并指定它们由默认调度器或部署的调度器进行调度。 为了使用特定的调度器调度给定的 Deployment,在命中那个 Deployment 的 Propagation spec 中指定调度器的名称。让我们看看三个例子。
+
+* PropagationPolicy spec 没有任何调度器名称
+
+```yaml
+apiVersion: policy.karmada.io/v1alpha1
+kind: PropagationPolicy
+metadata:
+ name: nginx-propagation
+spec:
+ resourceSelectors:
+ - apiVersion: apps/v1
+ kind: Deployment
+ name: nginx
+ placement:
+ clusterAffinity:
+ clusterNames:
+ - member1
+ - member2
+```
+
+如果未提供调度器名称,则会使用 default-scheduler 自动调度 Deployment。
+
+* PropagationPolicy spec 设置为 `default-scheduler`
+
+```yaml
+apiVersion: policy.karmada.io/v1alpha1
+kind: PropagationPolicy
+metadata:
+ name: nginx-propagation
+spec:
+ schedulerName: default-scheduler
+ resourceSelectors:
+ - apiVersion: apps/v1
+ kind: Deployment
+ name: nginx
+ placement:
+ clusterAffinity:
+ clusterNames:
+ - member1
+ - member2
+```
+
+通过将调度器名称作为 `spec.schedulerName` 参数的值来指定调度器。 我们提供默认调度器的名称,即 `default-scheduler`。
+
+* PropagationPolicy spec 设置为 `my-scheduler`
+
+```yaml
+apiVersion: policy.karmada.io/v1alpha1
+kind: PropagationPolicy
+metadata:
+ name: nginx-propagation
+spec:
+ schedulerName: my-scheduler
+ resourceSelectors:
+ - apiVersion: apps/v1
+ kind: Deployment
+ name: nginx
+ placement:
+ clusterAffinity:
+ clusterNames:
+ - member1
+ - member2
+```
+
+在这种情况下,我们指定此 Deployment 使用我们部署的 `my-scheduler` 来进行调度。
+请注意, `spec.schedulerName` 参数的值应该与调度器提供的选项中的 `schedulerName` 相匹配。
+
+### 验证是否使用所需的调度器调度了 Deployment
+
+为了更容易地完成这些示例, 你可以查看与此 Deployment 相关的事件日志,以验证是否由所需的调度器调度了该 Deployment。
+
+```shell
+kubectl --context karmada-apiserver describe deploy/nginx
+```
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/developers/document-releasing.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/developers/document-releasing.md
new file mode 100644
index 000000000..786c243dd
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/developers/document-releasing.md
@@ -0,0 +1,94 @@
+---
+title: 文档发布
+---
+
+每个小版本都有相应的文档版本。本指南是对整个发布过程的介绍。
+
+## 保持多语言文档同步(手动)
+
+有时贡献者不会更新所有语言文档的内容。在发布之前,请确保多语种文档同步。这将通过一个 issue 来跟踪。issue 应遵循以下格式:
+
+```
+This issue is to track documents which needs to sync zh for release 1.x:
+* #268
+```
+
+## 更新参考文档(手动)
+
+在发布之前,我们需要更新网站中的参考文档,包括 CLI 引用和组件引用。整个过程由脚本自动完成。按照以下步骤更新参考文档。
+
+1. 将 `karmada-io/karmada` 和 `karmada-io/website` 克隆到本地环境。建议将这两个项目放在同一个文件夹中。
+
+```text
+$ git clone https://github.com/karmada-io/karmada.git
+$ git clone https://github.com/karmada-io/website.git
+
+
+$ tree -L 1
+#.
+#├── karmada
+#├── website
+```
+
+2. 在 karmada 根目录下执行 generate 命令。
+
+```shell
+cd karmada/
+go run ./hack/tools/genkarmadactldocs/gen_karmadactl_docs.go ../website/docs/reference/karmadactl/karmadactl-commands/
+go run ./hack/tools/genkarmadactldocs/gen_karmadactl_docs.go ../website/i18n/zh/docusaurus-plugin-content-docs/current/reference/karmadactl/karmadactl-commands/
+```
+
+3. 逐一生成每个组件的参考文档。这里我们以 `karmada-apiserver` 为例
+
+```shell
+cd karmada/
+go build ./hack/tools/gencomponentdocs/.
+./gencomponentdocs ../website/docs/reference/components/ karmada-apiserver
+./gencomponentdocs ../website/i18n/zh/docusaurus-plugin-content-docs/current/reference/components/ karmada-apiserver
+```
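+
+如果需要为其余组件重复上述步骤,可以参考下面的示意(组件列表仅作示例,请以实际发布包含的组件为准):
+
+```shell
+for comp in karmada-controller-manager karmada-scheduler karmada-webhook karmada-agent; do
+  ./gencomponentdocs ../website/docs/reference/components/ $comp
+  ./gencomponentdocs ../website/i18n/zh/docusaurus-plugin-content-docs/current/reference/components/ $comp
+done
+```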
+
+## 建立 release-1.x(手动)
+
+1. 更新 versions.json
+
+```shell
+cd website/
+vim versions.json
+
+[
+ v1.5 # add a new version tag
+ v1.4
+ v1.3
+]
+```
+
+2. 更新 versioned_docs
+
+```shell
+mkdir versioned_docs/version-v1.5
+cp docs/* versioned_docs/version-v1.5 -r
+```
+
+3. 更新 versioned_sidebars
+
+```shell
+cp versioned_sidebars/version-v1.4-sidebars.json versioned_sidebars/version-v1.5-sidebars.json
+sed -i'' -e "s/version-v1.4/version-v1.5/g" versioned_sidebars/version-v1.5-sidebars.json
+# update version-v1.5-sidebars.json based on sidebars.js
+```
+
+4. 更新中文的 versioned_docs
+
+```shell
+mkdir i18n/zh/docusaurus-plugin-content-docs/version-v1.5
+cp i18n/zh/docusaurus-plugin-content-docs/current/* i18n/zh/docusaurus-plugin-content-docs/version-v1.5 -r
+```
+
+5. 更新中文的 versioned_sidebars
+
+```shell
+cp i18n/zh/docusaurus-plugin-content-docs/current.json i18n/zh/docusaurus-plugin-content-docs/version-v1.5.json
+sed -i'' -e "s/Next/v1.5/g" i18n/zh/docusaurus-plugin-content-docs/version-v1.5.json
+```
+
+## 检查变更的文件和主仓库的差异部分,并创建 Pull Request(手动)
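+
+此步骤没有固定脚本,下面仅给出一个可能的操作示意(分支名与提交信息仅作示例):
+
+```shell
+cd website/
+git status                        # 确认新增的 versioned_docs、versioned_sidebars 等文件
+git checkout -b cut-docs-v1.5
+git add .
+git commit -s -m "Cut docs for release 1.5"
+git push origin cut-docs-v1.5     # 随后在 GitHub 上向 karmada-io/website 发起 Pull Request
+```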
\ No newline at end of file
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/developers/performance-test-setup-for-karmada.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/developers/performance-test-setup-for-karmada.md
new file mode 100644
index 000000000..d24245a48
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/developers/performance-test-setup-for-karmada.md
@@ -0,0 +1,335 @@
+---
+title: Performance Test Setup for Karmada
+---
+
+## Abstract
+
+As Karmada is being implemented in more and more enterprises and organizations, the scalability and scale of Karmada are gradually becoming new concerns for the community. In this article, we will introduce how to conduct large-scale testing for Karmada and how to monitor metrics from the Karmada control plane.
+
+## Build large scale environment
+
+### Create member clusters using kind
+
+#### Why kind
+
+[Kind](https://sigs.k8s.io/kind) is a tool for running local Kubernetes clusters using Docker containers. Kind was primarily designed for testing Kubernetes itself, so it plays a good role in simulating member clusters.
+
+#### Usage
+
+> Follow the [kind installation](https://kind.sigs.k8s.io/docs/user/quick-start#installation) guide.
+
+Create 10 member clusters:
+
+```shell
+for ((i=1; i<=10; i ++)); do
+ kind create cluster --name member$i
+done;
+```
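+
+To let Karmada manage these clusters, you can then register them to the control plane, for example in Push mode (a sketch; the kubeconfig paths and kind context names depend on your environment):
+
+```shell
+# assumes the Karmada control plane kubeconfig is at $HOME/.kube/karmada.config
+for ((i=1; i<=10; i ++)); do
+  karmadactl join member$i --kubeconfig=$HOME/.kube/karmada.config \
+    --cluster-kubeconfig=$HOME/.kube/config --cluster-context=kind-member$i
+done;
+```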
+
+
+
+### Simulate a large number of fake nodes using fake-kubelet
+
+#### Why fake-kubelet
+
+##### Compare to Kubemark
+
+**Kubemark** is directly implemented with the code of kubelet, replacing only the runtime part: it does not actually start containers, but all other behaviors are exactly the same as kubelet. It is mainly used for Kubernetes' own e2e tests, and simulating a large number of nodes and pods will **occupy the same memory as a real scenario**.
+
+**Fake-kubelet** is a tool used to simulate any number of nodes and maintain pods on those nodes. It only does the minimum work of maintaining nodes and pods, so that it is very suitable for simulating a large number of nodes and pods for pressure testing on the control plane.
+
+#### Usage
+
+Deploy the fake-kubelet:
+
+> Note: Set container ENV `GENERATE_REPLICAS` in fake-kubelet deployment to set node replicas you want to create
+
+```shell
+export GENERATE_REPLICAS=your_replicas
+curl https://raw.githubusercontent.com/wzshiming/fake-kubelet/master/deploy.yaml > fakekubelet.yml
+# GENERATE_REPLICAS default value is 5
+sed -i "s/5/$GENERATE_REPLICAS/g" fakekubelet.yml
+kubectl apply -f fakekubelet.yml
+```
+
+
+`kubectl get node` You will find fake nodes.
+
+```shell
+> kubectl get node -o wide
+NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
+fake-0 Ready agent 10s fake 10.88.0.136
+fake-1 Ready agent 10s fake 10.88.0.136
+fake-2 Ready agent 10s fake 10.88.0.136
+fake-3 Ready agent 10s fake 10.88.0.136
+fake-4 Ready agent 10s fake 10.88.0.136
+```
+
+Deploy a sample deployment to test:
+
+```shell
+> kubectl apply -f - <<EOF
+...
+EOF
+```
+
+```shell
+> kubectl get pod -o wide
+NAME                        READY   STATUS    RESTARTS   AGE   IP          NODE     NOMINATED NODE   READINESS GATES
+fake-pod-78884479b7-52qcx   1/1     Running   0          6s    10.0.0.23   fake-4
+fake-pod-78884479b7-bd6nk   1/1     Running   0          6s    10.0.0.13   fake-2
+fake-pod-78884479b7-dqjtn   1/1     Running   0          6s    10.0.0.15   fake-2
+fake-pod-78884479b7-h2fv6   1/1     Running   0          6s    10.0.0.31   fake-0
+```
+
+
+
+## Distribute resources using ClusterLoader2
+
+### ClusterLoader2
+
+[ClusterLoader2](https://github.com/kubernetes/perf-tests/tree/master/clusterloader2) is an open source Kubernetes cluster testing tool. It tests against Kubernetes-defined SLIs/SLOs metrics to verify that clusters meet various quality of service standards. ClusterLoader2 is oriented to a single cluster, so it is complex to test the Karmada control plane while distributing resources to member clusters. Therefore, we just use ClusterLoader2 to distribute resources to the clusters managed by Karmada.
+
+### Prepare a simple config
+
+Let's prepare our config (config.yaml) to distribute resources. This config will:
+
+- Create 10 namespaces
+
+- Create 20 deployments, each with 1000 pods, inside each of those namespaces
+
+
+We will create file `config.yaml` that describes this test. First we need to start with defining test name:
+
+```yaml
+name: test
+```
+
+ClusterLoader2 will create namespaces automatically, but we need to specify how many namespaces we want and whether delete the namespaces after distributing resources:
+
+```yaml
+namespace:
+ number: 10
+ deleteAutomanagedNamespaces: false
+```
+
+Next, we need to specify TuningSets. A TuningSet describes how actions are executed; with a qpsLoad tuning set, the interval between two consecutive actions is 1/qps seconds. In order to distribute resources slowly and relieve the pressure on the apiserver, the qps of Uniformtinyqps is set to 0.1, which means that after distributing a deployment, we wait 10s before distributing the next one.
+
+```yaml
+tuningSets:
+- name: Uniformtinyqps
+ qpsLoad:
+ qps: 0.1
+- name: Uniform1qps
+ qpsLoad:
+ qps: 1
+```
+
+Finally, we will create a phase that creates the deployments and propagation policies. We need to specify in which namespaces we want them to be created and how many of these deployments per namespace. Also, we will need to specify templates for our deployment and propagation policy, which we will do later. For now, let's assume that these templates allow us to specify the number of replicas in the deployment and propagation policy.
+
+```yaml
+steps:
+- name: Create deployment
+ phases:
+ - namespaceRange:
+ min: 1
+ max: 10
+ replicasPerNamespace: 20
+ tuningSet: Uniformtinyqps
+ objectBundle:
+ - basename: test-deployment
+ objectTemplatePath: "deployment.yaml"
+ templateFillMap:
+ Replicas: 1000
+ - namespaceRange:
+ min: 1
+ max: 10
+ replicasPerNamespace: 1
+ tuningSet: Uniform1qps
+ objectBundle:
+ - basename: test-policy
+ objectTemplatePath: "policy.yaml"
+ templateFillMap:
+ Replicas: 1
+
+```
+
+The whole `config.yaml` will look like this:
+
+```yaml
+name: test
+
+namespace:
+ number: 10
+ deleteAutomanagedNamespaces: false
+
+tuningSets:
+- name: Uniformtinyqps
+ qpsLoad:
+ qps: 0.1
+- name: Uniform1qps
+ qpsLoad:
+ qps: 1
+
+steps:
+- name: Create deployment
+ phases:
+ - namespaceRange:
+ min: 1
+ max: 10
+ replicasPerNamespace: 20
+ tuningSet: Uniformtinyqps
+ objectBundle:
+ - basename: test-deployment
+ objectTemplatePath: "deployment.yaml"
+ templateFillMap:
+ Replicas: 1000
+ - namespaceRange:
+ min: 1
+ max: 10
+ replicasPerNamespace: 1
+ tuningSet: Uniform1qps
+ objectBundle:
+ - basename: test-policy
+ objectTemplatePath: "policy.yaml"
+```
+
+
+Now, we need to specify deployment and propagation template. ClusterLoader2 by default adds parameter `Name` that you can use in your template. In our config, we also passed `Replicas` parameter. So our template for deployment and propagation policy will look like following:
+
+```yaml
+# deployment.yaml
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+ name: {{.Name}}
+ labels:
+ group: test-deployment
+spec:
+ replicas: {{.Replicas}}
+ selector:
+ matchLabels:
+ app: fake-pod
+ template:
+ metadata:
+ labels:
+ app: fake-pod
+ spec:
+ affinity:
+ nodeAffinity:
+ requiredDuringSchedulingIgnoredDuringExecution:
+ nodeSelectorTerms:
+ - matchExpressions:
+ - key: type
+ operator: In
+ values:
+ - fake-kubelet
+ tolerations: # A taints was added to an automatically created Node. You can remove taints of Node or add this tolerations
+ - key: "fake-kubelet/provider"
+ operator: "Exists"
+ effect: "NoSchedule"
+ containers:
+ - image: fake-pod
+ name: {{.Name}}
+```
+
+```yaml
+# policy.yaml
+apiVersion: policy.karmada.io/v1alpha1
+kind: PropagationPolicy
+metadata:
+ name: test
+spec:
+ resourceSelectors:
+ - apiVersion: apps/v1
+ kind: Deployment
+ placement:
+ replicaScheduling:
+ replicaDivisionPreference: Weighted
+ replicaSchedulingType: Divided
+```
+
+
+
+### Start Distributing
+
+To distributing resources, run:
+
+```shell
+export KARMADA_APISERVERCONFIG=your_config
+export KARMADA_APISERVERIP=your_ip
+cd clusterloader2/
+go run cmd/clusterloader.go --testconfig=config.yaml --provider=local --kubeconfig=$KARMADA_APISERVERCONFIG --v=2 --k8s-clients-number=1 --skip-cluster-verification=true --masterip=$KARMADA_APISERVERIP --enable-exec-service=false
+```
+
+The meaning of args above shows as following:
+
+- k8s-clients-number: the number of karmada apiserver client number.
+- skip-cluster-verification: whether to skip the cluster verification, which expects at least one schedulable node in the cluster.
+- enable-exec-service: whether to enable exec service that allows executing arbitrary commands from a pod running in the cluster.
+
+Since the resources of member cluster cannot be accessed in karmada control plane, we have to turn off enable-exec-service and cluster-verification.
+
+> Note: If the `deleteAutomanagedNamespaces` parameter in config file is set to true, when the whole distribution of resources is complete, the resources will be immediately deleted.
+
+## Monitor Karmada control plane using Prometheus and Grafana
+
+### Deploy Prometheus and Grafana
+
+> Follow the [Prometheus and Grafana Deploy Guide](https://karmada.io/docs/administrator/monitoring/working-with-prometheus-in-control-plane)
+
+### Create Grafana DashBoards to observe Karmada control plane metrics
+
+Here's an example to monitor the mutating api call latency for works and resourcebindings of the karmada apiserver through grafana. Monitor the metrics you want by modifying the Query statement.
+
+#### Create a dashboard
+
+> Follow the [Grafana support For Prometheus](https://prometheus.io/docs/visualization/grafana/) document.
+
+#### Modify Query Statement
+
+Enter the following Prometheus expression into the `Query` field.
+
+```
+histogram_quantile(0.99, sum(rate(apiserver_request_duration_seconds_bucket{verb!~"WATCH|GET|LIST", resource=~"works|resourcebindings"}[5m])) by (resource, verb, le))
+```
+
+The graph will show as follows:
+
+![grafana-dashboard](../resources/developers/grafana_metrics.png)
+
+
+
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/developers/profiling-karmada.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/developers/profiling-karmada.md
new file mode 100644
index 000000000..4353c27cb
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/developers/profiling-karmada.md
@@ -0,0 +1,63 @@
+---
+title: Profiling Karmada
+---
+
+## Enable profiling
+
+To profile Karmada components running inside a Kubernetes pod, set --enable-pprof flag to true in the yaml of Karmada components.
+The default profiling address is 127.0.0.1:6060, and it can be configured via `--profiling-bind-address`.
+The components which are compiled by the Karmada source code support the flag above, including `Karmada-agent`, `Karmada-aggregated-apiserver`, `Karmada-controller-manager`, `Karmada-descheduler`, `Karmada-search`, `Karmada-scheduler`, `Karmada-scheduler-estimator`, `Karmada-webhook`.
+
+```
+--enable-pprof
+ Enable profiling via web interface host:port/debug/pprof/.
+--profiling-bind-address string
+ The TCP address for serving profiling(e.g. 127.0.0.1:6060, :6060). This is only applicable if profiling is enabled. (default ":6060")
+
+```
+
+## Expose the endpoint at the local port
+
+You can get at the application in the pod by port forwarding with kubectl, for example:
+
+```shell
+$ kubectl -n karmada-system get pod
+NAME READY STATUS RESTARTS AGE
+karmada-controller-manager-7567b44b67-8kt59 1/1 Running 0 19s
+...
+```
+
+```shell
+$ kubectl -n karmada-system port-forward karmada-controller-manager-7567b44b67-8kt59 6060
+Forwarding from 127.0.0.1:6060 -> 6060
+Forwarding from [::1]:6060 -> 6060
+```
+
+The HTTP endpoint will now be available as a local port.
+
+## Generate the data
+
+You can then generate the file for the memory profile with curl and pipe the data to a file:
+
+```shell
+$ curl http://localhost:6060/debug/pprof/heap > heap.pprof
+```
+
+Generate the file for the CPU profile with curl and pipe the data to a file (7200 seconds is two hours):
+
+```shell
+curl "http://localhost:6060/debug/pprof/profile?seconds=7200" > cpu.pprof
+```
+
+## Analyze the data
+
+To analyze the data:
+
+```shell
+go tool pprof heap.pprof
+```
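+
+If you prefer an interactive view, pprof can also serve a web UI on a local port, for example:
+
+```shell
+# opens an interactive UI (top, graph, flame graph) at http://localhost:8080
+go tool pprof -http=:8080 heap.pprof
+```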
+
+## Read more about profiling
+
+1. [Profiling Golang Programs on Kubernetes](https://danlimerick.wordpress.com/2017/01/24/profiling-golang-programs-on-kubernetes/)
+2. [Official Go blog](https://blog.golang.org/pprof)
\ No newline at end of file
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/developers/releasing.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/developers/releasing.md
new file mode 100644
index 000000000..3df3d4c29
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/developers/releasing.md
@@ -0,0 +1,106 @@
+---
+title: 版本发布
+---
+
+Karmada版本发布可分为minor版本发布和patch版本发布。例如,`v1.3.0`是一个minor版本,`v1.3.1`是一个patch版本。Minor版本表示此版本有新特性添加,同时兼容之前的版本;Patch版本表示此版本主要为Bug修复,同时兼容之前版本。release,tag和分支之间的关系如下图:
+![img](../resources/developers/releasing.png)
+
+对于不同的版本发布,操作步骤也不同。
+
+## Minor版本发布
+Minor版本应该从对应的minor分支发布,发布步骤描述如下。
+
+### 创建release分支(手动)
+首先,确保所有必要PR都已被合入master 分支,然后从master分支创建minor release 分支。Minor release分支的命名应该符合`release-{major}.{minor}`格式,例如,`release-1.4`。
+
+### 准备发布说明(手动)
+每个版本发布都需要格式正确的版本发布说明。版本发布说明应该遵循如下格式:
+```text
+# What's New
+此版本发布的重点更新,例如,一些关键特性的支持。此部分内容需要手动整理收集。
+
+# Other Notable Changes
+## API Changes
+* API更改列表,如API版本更新。此部分内容需要手动整理收集。
+
+## Bug Fixes
+* Bug修复列表。此部分内容需要手动整理收集。
+
+## Features & Enhancements
+* 新特性和功能增强。此部分内容需要手动整理收集。
+
+## Security
+* 安全相关修复。此部分内容需要手动整理收集。
+
+## Other
+### Dependencies
+* 依赖相关更新,如golang版本更新。此部分内容需要手动整理收集。
+
+### Instrumentation
+* 可观测性相关更新,例如,增加监控数据/事件记录。此部分内容需要手动整理收集。
+```
+为了获取如上所有相关内容,需要对比新创建的minor release分支和上一版本的minor release tag,例如,比较`release-1.4`分支和`v1.3.0`tag。然后从这些对比修改中提取为如上不同类型的发布说明。例如,从[此修改](https://github.com/karmada-io/karmada/pull/2675)提取如下发布说明:
+```text
+## Bug Fixes
+* `karmada-controller-manager`: Fixed the panic when cluster ImpersonatorSecretRef is nil.
+```
+
+### 提交发布说明(手动)
+在发布说明准备好后,提交到minor release分支的`docs/CHANGELOG/CHANGELOG-{major}.{minor}.md`。
+
+### 准备贡献者列表(手动)
+每个版本发布都需要指明贡献者。比较新创建的minor release分支和前一个minor release tag获取贡献者的Github ID列表,例如,比较`release-1.4`分支和`v1.3.0` tag。此列表需要按照字母序排列,如:
+```text
+## Contributors
+Thank you to everyone who contributed to this release!
+
+Users whose commits are in this release (alphabetically by username)
+@a
+@B
+@c
+@D
+...
+```
+
+### 更新描述文件(手动)
+安装`Karmada`时,对应镜像需要从DockerHub/SWR拉取,所以我们需要更新描述文件中的镜像tag为最新的minor版本。如下文件需要更新:
+* `charts/karmada/values.yaml`: 更新 `Karmada` 相关的镜像tag为即将发布的版本。
+* `charts/index.yaml`: 增加对应版本的helm仓库索引。
+
+### 添加升级文档(手动)
+新minor版本发布时,对应升级文档`docs/administrator/upgrading/v{major}.{minor_previous}-v{major}.{minor_new}.md`需要被添加到 [website](https://github.com/karmada-io/website) 仓库。例如,发布minor版本`v1.4.0`时,需要添加升级文档`docs/administrator/upgrading/v1.3-v1.4.md`。
+
+### 创建发布(手动)
+现在,所有准备工作都已完成,让我们在Github发布页面上创建发布。
+* 创建一个新的minor release tag,tag命名应该遵循`v{major}.{minor}.{patch}`格式,例如,`v1.4.0`。
+* 目标分支为新创建的minor release分支。
+* `Describe this release`的内容应该为章节`准备发布说明`和`准备贡献者列表`内容的合并。
+
+### 添加发布产物(自动)
+在版本发布后,GitHub会运行流水线`.github/workflows/release.yml`,构建`karmadactl`和`kubectl-karmada`二进制,并且将其添加到新发布的产物中。
+
+### 构建/发布镜像(自动)
+在版本发布后,Github会运行流水线`.github/workflows/swr-released-image.yml` 和 `.github/workflows/dockerhub-released-image.yml`,构建所有`Karmada`相关的组件镜像,并推送到DockerHub/SWR。
+
+### 验证发布(手动)
+在所有流水线完成后,你应该执行手动检查,确认所有发布产物都正确构建:
+* 检查所有产物是否都被添加。
+* 检查所有镜像是否都被推送到DockerHub/SWR。
+
+## Patch版本发布
+Patch版本应该从对应的minor release分支发布。
+
+### 准备发布说明(手动)
+此步骤和minor版本发布几乎一致,只是我们需要比较对应的minor release分支和minor tag来提取对应的发布说明,例如,对比`release-1.3` 分支和`v1.3.0` tag获取`1.3.1`patch版本发布说明。
+
+### 创建发布(手动)
+此步骤和minor版本发布几乎一致,只是target分支为对应的minor release分支,例如,从`release-1.3`分支创建release tag `v1.3.1`。同时,我们也不需要指明贡献者,Github会自动在release note中添加贡献者列表。
+
+### 添加发布产物(自动)
+此步骤和minor版本发布一致。
+
+### 构建/发布镜像(自动)
+此步骤和minor版本发布一致。
+
+### 验证发布(手动)
+此步骤和minor版本发布一致。
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/faq/faq.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/faq/faq.md
new file mode 100644
index 000000000..531b9d024
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/faq/faq.md
@@ -0,0 +1,57 @@
+---
+title: 常见问题
+---
+
+## PropagationPolicy 与 ClusterPropagationPolicy 有什么区别?
+
+`PropagationPolicy` 是一种作用于命名空间的资源类型,意味着这种类型的对象必须处于一个命名空间中。
+而 `ClusterPropagationPolicy` 是作用于集群的资源类型,意味着这种类型的对象没有命名空间。
+
+二者都用于承载资源分发的声明,但其作用范围有所不同:
+- PropagationPolicy:只能表示同一命名空间中资源的分发策略。
+- ClusterPropagationPolicy:可以表示所有资源的分发策略,包括作用于命名空间和作用于集群的资源。
+
+## 集群的 'Push' 和 'Pull' 模式有何区别?
+
+请参阅 [Push 和 Pull 概述](../userguide/clustermanager/cluster-registration.md#overview-of-cluster-mode)。
+
+## 为什么 Karmada 需要 `kube-controller-manager`?
+
+`kube-controller-manager` 由许多控制器组成,Karmada 从其继承了一些控制器以保持一致的用户体验和行为。
+
+值得注意的是,Karmada 并不需要所有控制器。
+有关推荐的控制器,请参阅[Kubernetes 控制器](../administrator/configuration/configure-controllers.md#kubernetes-控制器)。
+
+
+## 我可以在 Kubernetes 集群中安装 Karmada 并将 kube-apiserver 重用为 Karmada apiserver 吗?
+
+答案是 `yes`。在这种情况下,你可以在部署
+[karmada-apiserver](https://github.com/karmada-io/karmada/blob/master/artifacts/deploy/karmada-apiserver.yaml)
+时节省不少时间,只需在 Kubernetes 和 Karmada 之间共享 APIServer 即可。
+此外,这样可以无缝继承原集群的高可用能力。我们确实有一些用户以这种方式使用 Karmada。
+
+不过在此之前你需要注意以下几点:
+
+- 这种方法尚未经过 Karmada 社区的全面测试,也没有相关测试计划。
+- 这种方法会增加 Karmada 系统的计算成本。
+ 以 `Deployment` 为例,采用 `resource template` 后,`kube-controller` 会为 Deployment 创建 `Pods` 并持续更新其状态,而 Karmada 系统也会对这些变化进行协调,因此可能会发生冲突。
+
+待办事项:一旦有相关使用案例,我们将添加相应链接。
+
+## 为什么 Cluster API 没有 CRD YAML 文件?
+
+Kubernetes 提供了两种方式来扩展 API:**定制资源**、**Kubernetes API 聚合层**。更多详细信息,您可以参考[扩展 Kubernetes API](https://kubernetes.io/docs/concepts/extend-kubernetes/)。
+
+Karmada 使用了这两种扩展方式,例如,`PropagationPolicy` 和 `ResourceBinding` 使用**定制资源**,`Cluster` 资源使用**Kubernetes API 聚合层**。
+
+因此,`Cluster` 资源没有 CRD YAML 文件,当执行 `kubectl get crd` 命令时也无法获取 `Cluster` 资源。
+
+那么,为什么我们要使用**Kubernetes API 聚合层**来扩展 `Cluster` 资源,而不是使用**定制资源**呢?
+
+这是因为我们需要为 `Cluster` 资源设置 `Proxy` 子资源,通过使用 `Proxy`,您可以访问成员集群中的资源,具体内容可以参考[聚合层 APIServer](https://karmada.io/zh/docs/next/userguide/globalview/aggregated-api-endpoint)。目前,**定制资源**还不支持设置 `Proxy` 子资源,这也是我们没有选择它的原因。
+
+## 如何防止 Namespace 自动分发到所有成员集群?
+
+Karmada 会默认将用户创建的 Namespace 资源分发到成员集群中,这个功能是由 `Karmada-controller-manager` 组件中的 `namespace` 控制器负责的,可以通过参考[配置 Karmada 控制器](../administrator/configuration/configure-controllers.md#配置-karmada-控制器)来进行配置。
+
+当禁用掉 `namespace` 控制器之后,用户可以通过 `ClusterPropagationPolicy` 资源将 `Namespace` 资源分发到指定的集群中。
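+
+下面给出一个分发 Namespace 的示意(Namespace 名称与集群名称均为假设值,请按实际情况修改):
+
+```shell
+kubectl --context karmada-apiserver apply -f - <<EOF
+apiVersion: policy.karmada.io/v1alpha1
+kind: ClusterPropagationPolicy
+metadata:
+  name: example-ns-propagation
+spec:
+  resourceSelectors:
+    - apiVersion: v1
+      kind: Namespace
+      name: example-ns
+  placement:
+    clusterAffinity:
+      clusterNames:
+        - member1
+EOF
+```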
\ No newline at end of file
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/get-started/nginx-example.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/get-started/nginx-example.md
new file mode 100644
index 000000000..8951026fd
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/get-started/nginx-example.md
@@ -0,0 +1,88 @@
+---
+title: 通过 Karmada 分发 Deployment
+---
+
+本指南涵盖了:
+- 在名为 `host cluster` 的 Kubernetes 集群中安装 `karmada` 控制面组件。
+- 将一个成员集群接入到 `karmada` 控制面。
+- 通过使用 `karmada` 分发应用程序。
+
+### 前提条件
+- [Go](https://golang.org/) v1.18+
+- [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/) v1.19+
+- [kind](https://kind.sigs.k8s.io/) v0.14.0+
+
+### 安装 Karmada 控制面
+
+#### 1. 克隆此代码仓库到你的机器
+```
+git clone https://github.com/karmada-io/karmada
+```
+
+#### 2. 更改到 karmada 目录
+```
+cd karmada
+```
+
+#### 3. 部署并运行 Karmada 控制面
+
+运行以下脚本:
+
+```
+# hack/local-up-karmada.sh
+```
+该脚本将为你执行以下任务:
+- 启动一个 Kubernetes 集群来运行 Karmada 控制面,即 `host cluster`。
+- 根据当前代码库构建 Karmada 控制面组件。
+- 在 `host cluster` 上部署 Karmada 控制面组件。
+- 创建成员集群并接入 Karmada。
+
+如果一切良好,在脚本输出结束时你将看到以下类似消息:
+```
+Local Karmada is running.
+
+To start using your Karmada environment, run:
+ export KUBECONFIG="$HOME/.kube/karmada.config"
+Please use 'kubectl config use-context karmada-host/karmada-apiserver' to switch the host and control plane cluster.
+
+To manage your member clusters, run:
+ export KUBECONFIG="$HOME/.kube/members.config"
+Please use 'kubectl config use-context member1/member2/member3' to switch to the different member cluster.
+```
+
+Karmada 中有两个上下文环境:
+- karmada-apiserver `kubectl config use-context karmada-apiserver`
+- karmada-host `kubectl config use-context karmada-host`
+
+`karmada-apiserver` 是与 Karmada 控制面交互时要使用的 **主要 kubeconfig**,
+而 `karmada-host` 仅用于调试 Karmada 对 `host cluster` 的安装。
+你可以通过运行 `kubectl config view` 随时查看所有集群。
+要切换集群上下文,请运行 `kubectl config use-context [CONTEXT_NAME]`
+
+
+### Demo
+
+![Demo](../resources/general/sample-nginx.svg)
+
+### 分发应用程序
+在以下步骤中,我们将通过 Karmada 分发一个 Deployment。
+
+#### 1. 在 Karmada 中创建 nginx deployment
+首先创建名为 `nginx` 的 [deployment](https://github.com/karmada-io/karmada/blob/master/samples/nginx/deployment.yaml):
+```
+kubectl create -f samples/nginx/deployment.yaml
+```
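+
+该文件的内容大致如下(以仓库中的 `samples/nginx/deployment.yaml` 为准):
+
+```yaml
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: nginx
+  labels:
+    app: nginx
+spec:
+  replicas: 2
+  selector:
+    matchLabels:
+      app: nginx
+  template:
+    metadata:
+      labels:
+        app: nginx
+    spec:
+      containers:
+        - name: nginx
+          image: nginx
+```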
+
+#### 2. 创建将 nginx 分发到成员集群的 PropagationPolicy
+随后我们需要创建一个策略将 Deployment 分发到成员集群。
+```
+kubectl create -f samples/nginx/propagationpolicy.yaml
+```
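+
+该策略的内容大致如下(以仓库中的 `samples/nginx/propagationpolicy.yaml` 为准),它把名为 `nginx` 的 Deployment 分发到 `member1` 和 `member2` 两个成员集群:
+
+```yaml
+apiVersion: policy.karmada.io/v1alpha1
+kind: PropagationPolicy
+metadata:
+  name: nginx-propagation
+spec:
+  resourceSelectors:
+    - apiVersion: apps/v1
+      kind: Deployment
+      name: nginx
+  placement:
+    clusterAffinity:
+      clusterNames:
+        - member1
+        - member2
+```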
+
+#### 3. 从 Karmada 查看 Deployment 状态
+你可以从 Karmada 查看 Deployment 状态,无需访问成员集群:
+```
+$ kubectl get deployment
+NAME READY UP-TO-DATE AVAILABLE AGE
+nginx 2/2 2 2 20s
+```
\ No newline at end of file
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/installation/fromsource.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/installation/fromsource.md
new file mode 100644
index 000000000..4105c3976
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/installation/fromsource.md
@@ -0,0 +1,57 @@
+---
+title: 从源代码安装
+---
+
+本文说明如何使用 `hack/remote-up-karmada.sh` 脚本通过代码库将 Karmada 安装到你的集群。
+
+## 选择一种暴露 karmada-apiserver 的方式
+
+`hack/remote-up-karmada.sh` 将安装 `karmada-apiserver` 并提供两种暴露 karmada-apiserver 服务器的方式:
+
+### 1. 通过 `HostNetwork` 类型
+
+默认情况下,`hack/remote-up-karmada.sh` 将通过 `HostNetwork` 暴露 `karmada-apiserver`。
+
+这种方式无需额外的操作。
+
+### 2. 通过 `LoadBalancer` 类型的服务
+
+如果你不想使用 `HostNetwork`,可以让 `hack/remote-up-karmada.sh` 脚本通过 `LoadBalancer` 类型的服务暴露 `karmada-apiserver`,
+这种方式 **要求你的集群已部署 `Load Balancer`**。你需要做的是设置一个环境变量:
+```bash
+export LOAD_BALANCER=true
+```
+
+## 安装
+从 `karmada` 仓库的 `root` 目录,执行以下命令安装 Karmada:
+```bash
+hack/remote-up-karmada.sh <kubeconfig> <context_name>
+```
+- `kubeconfig` 是你要安装的目标集群的 kubeconfig
+- `context_name` 是 'kubeconfig' 中上下文的名称
+
+例如:
+```bash
+hack/remote-up-karmada.sh $HOME/.kube/config mycluster
+```
+
+如果一切正常,脚本输出结束后,你将看到类似以下的消息:
+```
+------------------------------------------------------------------------------------------------------
+█████ ████ █████████ ███████████ ██████ ██████ █████████ ██████████ █████████
+░░███ ███░ ███░░░░░███ ░░███░░░░░███ ░░██████ ██████ ███░░░░░███ ░░███░░░░███ ███░░░░░███
+░███ ███ ░███ ░███ ░███ ░███ ░███░█████░███ ░███ ░███ ░███ ░░███ ░███ ░███
+░███████ ░███████████ ░██████████ ░███░░███ ░███ ░███████████ ░███ ░███ ░███████████
+░███░░███ ░███░░░░░███ ░███░░░░░███ ░███ ░░░ ░███ ░███░░░░░███ ░███ ░███ ░███░░░░░███
+░███ ░░███ ░███ ░███ ░███ ░███ ░███ ░███ ░███ ░███ ░███ ███ ░███ ░███
+█████ ░░████ █████ █████ █████ █████ █████ █████ █████ █████ ██████████ █████ █████
+░░░░░ ░░░░ ░░░░░ ░░░░░ ░░░░░ ░░░░░ ░░░░░ ░░░░░ ░░░░░ ░░░░░ ░░░░░░░░░░ ░░░░░ ░░░░░
+------------------------------------------------------------------------------------------------------
+Karmada is installed successfully.
+
+Kubeconfig for karmada in file: /root/.kube/karmada.config, so you can run:
+ export KUBECONFIG="/root/.kube/karmada.config"
+Or use kubectl with --kubeconfig=/root/.kube/karmada.config
+Please use 'kubectl config use-context karmada-apiserver' to switch the cluster of karmada control plane
+And use 'kubectl config use-context your-host' for debugging karmada installation
+```
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/installation/install-binary.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/installation/install-binary.md
new file mode 100644
index 000000000..7b75085e5
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/installation/install-binary.md
@@ -0,0 +1,1192 @@
+---
+title: 通过二进制方式安装
+---
+
+分步安装二进制高可用 `karmada` 集群。
+
+## 前提条件
+
+### 服务器
+
+需要 3 个服务器,例如:
+
+```shell
++---------------+-----------------+-----------------+
+| HostName | Host IP | Public IP |
++---------------+-----------------+-----------------+
+| karmada-01 | 172.31.209.245 | 47.242.88.82 |
++---------------+-----------------+-----------------+
+| karmada-02 | 172.31.209.246 | |
++---------------+-----------------+-----------------+
+| karmada-03 | 172.31.209.247 | |
++---------------+-----------------+-----------------+
+```
+
+> 公共 IP 不是必需的。这个 IP 用于从公网下载某些 `karmada` 依赖组件,并通过公网连接到 `karmada` ApiServer。
+
+### DNS 解析
+
+对 `karmada-01`、`karmada-02`、`karmada-03` 执行操作。
+
+```bash
+vi /etc/hosts
+172.31.209.245 karmada-01
+172.31.209.246 karmada-02
+172.31.209.247 karmada-03
+```
+
+你也可以使用“Linux 虚拟服务器”进行负载均衡,而无需更改 `/etc/hosts` 文件。
+
+### 环境
+
+`karmada-01` 需要以下环境。
+
+* **Golang**:编译 karmada 二进制文件
+* **GCC**:编译 nginx(使用云负载均衡时忽略此项)
+
+## 编译并下载二进制文件
+
+对 `karmada-01` 执行操作。
+
+### Kubernetes 二进制文件
+
+下载 `kubernetes` 二进制文件包。
+
+参阅本页下载不同版本和不同架构的二进制文件:
+
+```bash
+wget https://dl.k8s.io/v1.23.3/kubernetes-server-linux-amd64.tar.gz
+tar -zxvf kubernetes-server-linux-amd64.tar.gz --no-same-owner
+cd kubernetes/server/bin
+mv kube-apiserver kube-controller-manager kubectl /usr/local/sbin/
+```
+
+### etcd 二进制文件
+
+下载 `etcd` 二进制文件包。
+
+若要使用较新版本的 etcd,请参阅:
+
+```bash
+wget https://github.com/etcd-io/etcd/releases/download/v3.5.1/etcd-v3.5.1-linux-amd64.tar.gz
+tar -zxvf etcd-v3.5.1-linux-amd64.tar.gz --no-same-owner
+cd etcd-v3.5.1-linux-amd64/
+mv etcdctl etcd /usr/local/sbin/
+```
+
+### Karmada 二进制文件
+
+从源代码编译 `karmada` 二进制文件。
+
+```bash
+git clone https://github.com/karmada-io/karmada
+cd karmada
+make karmada-aggregated-apiserver karmada-controller-manager karmada-scheduler karmada-webhook karmadactl kubectl-karmada
+mv _output/bin/linux/amd64/* /usr/local/sbin/
+```
+
+### Nginx 二进制文件
+
+从源代码编译 `nginx` 二进制文件。
+
+```bash
+wget http://nginx.org/download/nginx-1.21.6.tar.gz
+tar -zxvf nginx-1.21.6.tar.gz
+cd nginx-1.21.6
+./configure --with-stream --without-http --prefix=/usr/local/karmada-nginx --without-http_uwsgi_module --without-http_scgi_module --without-http_fastcgi_module
+make && make install
+mv /usr/local/karmada-nginx/sbin/nginx /usr/local/karmada-nginx/sbin/karmada-nginx
+```
+
+### 分发二进制文件
+
+上传二进制文件到 `karmada-02`、`karmada-03` 服务器。
+
+## 生成证书
+
+### 步骤 1:创建 Bash 脚本和配置文件
+
+此脚本将使用 `openssl` 命令生成证书。
+下载[此目录](https://github.com/karmada-io/website/tree/main/docs/resources/installation/install-binary/generate_cert)。
+
+我们将 CA 证书与叶证书的生成脚本分开。若你需要更改叶证书的主体备用名称(即负载均衡器 IP),可以重用 CA 证书,只需运行 `generate_leaf.sh` 重新生成叶证书。
+
+
+
+有 3 个 CA:front-proxy-ca、server-ca、etcd/ca。
+为什么我们需要 3 个 CA,请参见 [PKI 证书和要求](https://kubernetes.io/zh-cn/docs/setup/best-practices/certificates/)、[CA 重用和冲突](https://kubernetes.io/zh-cn/docs/tasks/extend-kubernetes/configure-aggregation-layer/#ca-reusage-and-conflicts)。
+
+如果你使用他人提供的 etcd,可以忽略 `generate_etcd.sh` 和 `csr_config/etcd`。
+
+### 步骤 2:更改证书配置中的 IP 占位符
+
+你需要将 `csr_config/**/*.conf` 文件中的 IP 占位符更改为“负载均衡器 IP”和“服务器 IP”。
+如果你仅通过负载均衡器访问服务器,则只需要填写“负载均衡器 IP”。
+
+通常你不需要更改 `*.sh` 脚本文件。
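+
+作为参考,`csr_config/**/*.conf` 中需要修改的通常是 `alt_names` 一节,形如下面这样(IP 条目仅为示意,请替换为你的实际地址):
+
+```
+[ alt_names ]
+DNS.1 = localhost
+IP.1 = 127.0.0.1
+IP.2 = <负载均衡器 IP>
+IP.3 = <服务器 IP>
+```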
+
+
+### 步骤 3:运行 Shell 脚本
+
+```bash
+$ ./generate_ca.sh
+$ ./generate_leaf.sh ca_cert/
+$ ./generate_etcd.sh
+```
+
+
+
+### 步骤 4:检查证书
+
+你可以查看证书的配置,以 `karmada.crt` 为例。
+
+```bash
+openssl x509 -noout -text -in karmada.crt
+```
+
+### 步骤 5:创建 Karmada 配置目录
+
+复制证书到 `/etc/karmada/pki` 目录。
+
+```bash
+mkdir -p /etc/karmada/pki
+
+cd ca_cert
+cp -r * /etc/karmada/pki
+
+cd ../cert
+cp -r * /etc/karmada/pki
+```
+
+
+
+## 创建 Karmada kubeconfig 文件和 etcd 加密密钥
+
+对 `karmada-01` 执行操作。
+
+### 创建 kubeconfig 文件
+
+**步骤 1:下载 bash 脚本**
+
+下载[此文件](https://github.com/karmada-io/website/tree/main/docs/resources/installation/install-binary/other_scripts/create_kubeconfig_file.sh)。
+
+**步骤 2:执行 bash 脚本**
+
+`172.31.209.245:5443` 是针对 `karmada-apiserver` 的 `nginx` 代理的地址,我们将在后续设置。
+你应将其替换为负载均衡器提供的 "host:port"。
+
+```bash
+./create_kubeconfig_file.sh "https://172.31.209.245:5443"
+```
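+
+脚本执行完成后,可以检查 kubeconfig 文件是否已生成(具体文件名以脚本实际输出为准;若脚本未直接写入 `/etc/karmada`,请将生成的文件复制到该目录,后文的 systemd 配置会引用其中的 `karmada.kubeconfig`、`kube-controller-manager.kubeconfig` 等):
+
+```bash
+ls /etc/karmada/*.kubeconfig
+```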
+
+### 创建 etcd 加密密钥
+
+如果你不需要加密 etcd 中的内容,请忽略本节和对应的 kube-apiserver 启动参数。
+
+```bash
+export ENCRYPTION_KEY=$(head -c 32 /dev/urandom | base64)
+cat > /etc/karmada/encryption-config.yaml <<EOF
+apiVersion: apiserver.config.k8s.io/v1
+kind: EncryptionConfiguration
+resources:
+  - resources:
+      - secrets
+    providers:
+      - aescbc:
+          keys:
+            - name: key
+              secret: ${ENCRYPTION_KEY}
+      - identity: {}
+EOF
+```
+
+## 安装 etcd
+
+### 创建 etcd Systemd 服务
+
+对 `karmada-01`、`karmada-02`、`karmada-03` 执行操作。以 `karmada-01` 为例,创建 /usr/lib/systemd/system/etcd.service。
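+
+下面给出该 unit 文件的一个参考写法(仅为示意:IP 取自上文服务器表,证书文件名需与 `generate_etcd.sh` 实际生成的文件保持一致):
+
+```bash
+[Unit]
+Description=etcd
+Documentation=https://etcd.io/docs
+After=network.target
+
+[Service]
+Type=notify
+# 证书路径与文件名为假设值,请以实际生成的证书为准
+ExecStart=/usr/local/sbin/etcd \
+  --name karmada-01 \
+  --data-dir /var/lib/etcd \
+  --listen-peer-urls https://172.31.209.245:2380 \
+  --initial-advertise-peer-urls https://172.31.209.245:2380 \
+  --listen-client-urls https://172.31.209.245:2379,https://127.0.0.1:2379 \
+  --advertise-client-urls https://172.31.209.245:2379 \
+  --initial-cluster "karmada-01=https://172.31.209.245:2380,karmada-02=https://172.31.209.246:2380,karmada-03=https://172.31.209.247:2380" \
+  --initial-cluster-state new \
+  --client-cert-auth \
+  --trusted-ca-file /etc/karmada/pki/etcd/ca.crt \
+  --cert-file /etc/karmada/pki/etcd/server.crt \
+  --key-file /etc/karmada/pki/etcd/server.key \
+  --peer-client-cert-auth \
+  --peer-trusted-ca-file /etc/karmada/pki/etcd/ca.crt \
+  --peer-cert-file /etc/karmada/pki/etcd/peer.crt \
+  --peer-key-file /etc/karmada/pki/etcd/peer.key
+Restart=on-failure
+RestartSec=5
+LimitNOFILE=65536
+
+[Install]
+WantedBy=multi-user.target
+```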
+>
+>--name
+>
+>--initial-advertise-peer-urls
+>
+>--listen-peer-urls
+>
+>--listen-client-urls
+>
+>--advertise-client-urls
+>
+>
+>
+>你可以使用 `EnvironmentFile` 将可变配置与不可变配置分开。
+
+### 启动 etcd 集群
+
+3 个服务器必须执行以下命令创建 etcd 存储目录。
+
+```bash
+mkdir /var/lib/etcd/
+chmod 700 /var/lib/etcd
+```
+
+启动 etcd:
+
+```bash
+systemctl daemon-reload
+systemctl enable etcd.service
+systemctl start etcd.service
+systemctl status etcd.service
+```
+
+### 验证
+
+```bash
+etcdctl --cacert /etc/karmada/pki/etcd/ca.crt \
+ --cert /etc/karmada/pki/etcd/healthcheck-client.crt \
+ --key /etc/karmada/pki/etcd/healthcheck-client.key \
+ --endpoints "172.31.209.245:2379,172.31.209.246:2379,172.31.209.247:2379" \
+ endpoint status --write-out="table"
+
++---------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
+| ENDPOINT | ID | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
++---------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
+| 172.31.209.245:2379 | 689151f8cbf4ee95 | 3.5.1 | 20 kB | false | false | 2 | 9 | 9 | |
+| 172.31.209.246:2379 | 5db4dfb6ecc14de7 | 3.5.1 | 20 kB | true | false | 2 | 9 | 9 | |
+| 172.31.209.247:2379 | 7e59eef3c816aa57 | 3.5.1 | 20 kB | false | false | 2 | 9 | 9 | |
++---------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
+```
+
+## 安装 kube-apiserver
+
+### 配置 Nginx
+
+对 `karmada-01` 执行操作。
+
+为 `karmada apiserver` 配置负载均衡。
+
+
+
+/usr/local/karmada-nginx/conf/nginx.conf
+
+```bash
+worker_processes 2;
+
+events {
+ worker_connections 1024;
+}
+
+stream {
+ upstream backend {
+ hash consistent;
+ server 172.31.209.245:6443 max_fails=3 fail_timeout=30s;
+ server 172.31.209.246:6443 max_fails=3 fail_timeout=30s;
+ server 172.31.209.247:6443 max_fails=3 fail_timeout=30s;
+ }
+
+ server {
+ listen 172.31.209.245:5443;
+ proxy_connect_timeout 1s;
+ proxy_pass backend;
+ }
+}
+```
+
+/lib/systemd/system/karmada-nginx.service
+
+```bash
+[Unit]
+Description=The karmada karmada-apiserver nginx proxy server
+After=syslog.target network-online.target remote-fs.target nss-lookup.target
+Wants=network-online.target
+
+[Service]
+Type=forking
+ExecStartPre=/usr/local/karmada-nginx/sbin/karmada-nginx -t
+ExecStart=/usr/local/karmada-nginx/sbin/karmada-nginx
+ExecReload=/usr/local/karmada-nginx/sbin/karmada-nginx -s reload
+ExecStop=/bin/kill -s QUIT $MAINPID
+PrivateTmp=true
+Restart=always
+RestartSec=5
+StartLimitInterval=0
+LimitNOFILE=65536
+
+[Install]
+WantedBy=multi-user.target
+```
+
+启动 `karmada nginx`。
+
+```bash
+systemctl daemon-reload
+systemctl enable karmada-nginx.service
+systemctl start karmada-nginx.service
+systemctl status karmada-nginx.service
+```
+
+### 创建 kube-apiserver Systemd 服务
+
+对 `karmada-01`、`karmada-02`、`karmada-03` 执行操作。以 `karmada-01` 为例。
+
+
+
+/usr/lib/systemd/system/kube-apiserver.service
+
+```bash
+[Unit]
+Description=Kubernetes API Server
+Documentation=https://kubernetes.io/docs/home/
+After=network.target
+
+[Service]
+# 如果你不需要加密 etcd,移除 --encryption-provider-config
+ExecStart=/usr/local/sbin/kube-apiserver \
+ --allow-privileged=true \
+ --anonymous-auth=false \
+ --audit-webhook-batch-buffer-size 30000 \
+ --audit-webhook-batch-max-size 800 \
+ --authorization-mode "Node,RBAC" \
+ --bind-address 0.0.0.0 \
+ --client-ca-file /etc/karmada/pki/server-ca.crt \
+ --default-watch-cache-size 200 \
+ --delete-collection-workers 2 \
+ --disable-admission-plugins "StorageObjectInUseProtection,ServiceAccount" \
+ --enable-admission-plugins "NodeRestriction" \
+ --enable-bootstrap-token-auth \
+ --encryption-provider-config "/etc/karmada/encryption-config.yaml" \
+ --etcd-cafile /etc/karmada/pki/etcd/ca.crt \
+ --etcd-certfile /etc/karmada/pki/etcd/apiserver-etcd-client.crt \
+ --etcd-keyfile /etc/karmada/pki/etcd/apiserver-etcd-client.key \
+ --etcd-servers "https://172.31.209.245:2379,https://172.31.209.246:2379,https://172.31.209.247:2379" \
+ --insecure-port 0 \
+ --logtostderr=true \
+ --max-mutating-requests-inflight 2000 \
+ --max-requests-inflight 4000 \
+ --proxy-client-cert-file /etc/karmada/pki/front-proxy-client.crt \
+ --proxy-client-key-file /etc/karmada/pki/front-proxy-client.key \
+ --requestheader-allowed-names "front-proxy-client" \
+ --requestheader-client-ca-file /etc/karmada/pki/front-proxy-ca.crt \
+ --requestheader-extra-headers-prefix "X-Remote-Extra-" \
+ --requestheader-group-headers "X-Remote-Group" \
+ --requestheader-username-headers "X-Remote-User" \
+ --runtime-config "api/all=true" \
+ --secure-port 6443 \
+ --service-account-issuer "https://kubernetes.default.svc.cluster.local" \
+ --service-account-key-file /etc/karmada/pki/sa.pub \
+ --service-account-signing-key-file /etc/karmada/pki/sa.key \
+ --service-cluster-ip-range "10.254.0.0/16" \
+ --tls-cert-file /etc/karmada/pki/kube-apiserver.crt \
+ --tls-private-key-file /etc/karmada/pki/kube-apiserver.key \
+
+Restart=on-failure
+RestartSec=5
+Type=notify
+LimitNOFILE=65536
+
+[Install]
+WantedBy=multi-user.target
+```
+
+### 启动 kube-apiserver
+
+3 个服务器必须执行以下命令:
+
+``` bash
+systemctl daemon-reload
+systemctl enable kube-apiserver.service
+systemctl start kube-apiserver.service
+systemctl status kube-apiserver.service
+```
+
+### 验证
+
+```bash
+$ ./check_status.sh
+###### 开始检查 kube-apiserver
+[+]ping ok
+[+]log ok
+[+]etcd ok
+[+]poststarthook/start-kube-apiserver-admission-initializer ok
+[+]poststarthook/generic-apiserver-start-informers ok
+[+]poststarthook/priority-and-fairness-config-consumer ok
+[+]poststarthook/priority-and-fairness-filter ok
+[+]poststarthook/start-apiextensions-informers ok
+[+]poststarthook/start-apiextensions-controllers ok
+[+]poststarthook/crd-informer-synced ok
+[+]poststarthook/bootstrap-controller ok
+[+]poststarthook/rbac/bootstrap-roles ok
+[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
+[+]poststarthook/priority-and-fairness-config-producer ok
+[+]poststarthook/start-cluster-authentication-info-controller ok
+[+]poststarthook/aggregator-reload-proxy-client-cert ok
+[+]poststarthook/start-kube-aggregator-informers ok
+[+]poststarthook/apiservice-registration-controller ok
+[+]poststarthook/apiservice-status-available-controller ok
+[+]poststarthook/kube-apiserver-autoregistration ok
+[+]autoregister-completion ok
+[+]poststarthook/apiservice-openapi-controller ok
+livez check passed
+
+###### kube-apiserver 检查成功
+```
+
+## 安装 karmada-aggregated-apiserver
+
+首先,创建 `namespace` 并绑定 `cluster admin role`。对 `karmada-01` 执行操作。
+
+```bash
+kubectl create ns karmada-system
+kubectl create clusterrolebinding cluster-admin:karmada --clusterrole=cluster-admin --user system:karmada
+```
+
+然后,与 `karmada-webhook` 类似,使用 `nginx` 来实现高可用。
+
+修改 `nginx` 配置并添加以下配置。对 `karmada-01` 执行以下操作。
+
+```bash
+cat /usr/local/karmada-nginx/conf/nginx.conf
+worker_processes 2;
+
+events {
+ worker_connections 1024;
+}
+
+stream {
+ upstream backend {
+ hash consistent;
+ server 172.31.209.245:6443 max_fails=3 fail_timeout=30s;
+ server 172.31.209.246:6443 max_fails=3 fail_timeout=30s;
+ server 172.31.209.247:6443 max_fails=3 fail_timeout=30s;
+ }
+
+ upstream webhook {
+ hash consistent;
+ server 172.31.209.245:8443 max_fails=3 fail_timeout=30s;
+ server 172.31.209.246:8443 max_fails=3 fail_timeout=30s;
+ server 172.31.209.247:8443 max_fails=3 fail_timeout=30s;
+ }
+
+ upstream aa {
+ hash consistent;
+ server 172.31.209.245:7443 max_fails=3 fail_timeout=30s;
+ server 172.31.209.246:7443 max_fails=3 fail_timeout=30s;
+ server 172.31.209.247:7443 max_fails=3 fail_timeout=30s;
+ }
+
+ server {
+ listen 172.31.209.245:5443;
+ proxy_connect_timeout 1s;
+ proxy_pass backend;
+ }
+
+ server {
+ listen 172.31.209.245:4443;
+ proxy_connect_timeout 1s;
+ proxy_pass webhook;
+ }
+
+ server {
+ listen 172.31.209.245:443;
+ proxy_connect_timeout 1s;
+ proxy_pass aa;
+ }
+}
+```
+
+重新加载 `nginx` 配置。
+
+```bash
+systemctl restart karmada-nginx
+```
+
+### 创建 Systemd 服务
+
+对 `karmada-01`、`karmada-02`、`karmada-03` 执行操作。以 `karmada-01` 为例。
+
+/usr/lib/systemd/system/karmada-aggregated-apiserver.service
+
+```bash
+[Unit]
+Description=Karmada Aggregated ApiServer
+Documentation=https://github.com/karmada-io/karmada
+
+[Service]
+ExecStart=/usr/local/sbin/karmada-aggregated-apiserver \
+ --audit-log-maxage 0 \
+ --audit-log-maxbackup 0 \
+ --audit-log-path - \
+ --authentication-kubeconfig /etc/karmada/karmada.kubeconfig \
+ --authorization-kubeconfig /etc/karmada/karmada.kubeconfig \
+ --etcd-cafile /etc/karmada/pki/etcd/ca.crt \
+ --etcd-certfile /etc/karmada/pki/etcd/apiserver-etcd-client.crt \
+ --etcd-keyfile /etc/karmada/pki/etcd/apiserver-etcd-client.key \
+ --etcd-servers "https://172.31.209.245:2379,https://172.31.209.246:2379,https://172.31.209.247:2379" \
+ --feature-gates "APIPriorityAndFairness=false" \
+ --kubeconfig /etc/karmada/karmada.kubeconfig \
+ --logtostderr=true \
+ --secure-port 7443 \
+ --tls-cert-file /etc/karmada/pki/karmada.crt \
+ --tls-private-key-file /etc/karmada/pki/karmada.key \
+
+Restart=on-failure
+RestartSec=5
+LimitNOFILE=65536
+
+[Install]
+WantedBy=multi-user.target
+```
+
+### 启动 karmada-aggregated-apiserver
+
+```bash
+systemctl daemon-reload
+systemctl enable karmada-aggregated-apiserver.service
+systemctl start karmada-aggregated-apiserver.service
+systemctl status karmada-aggregated-apiserver.service
+```
+
+### 创建 `APIService`
+
+`externalName` 是 `nginx` 所在的主机名 (`karmada-01`)。
+
+
+
+(1) 创建文件:`karmada-aggregated-apiserver-apiservice.yaml`
+
+```yaml
+apiVersion: apiregistration.k8s.io/v1
+kind: APIService
+metadata:
+ name: v1alpha1.cluster.karmada.io
+ labels:
+ app: karmada-aggregated-apiserver
+ apiserver: "true"
+spec:
+ insecureSkipTLSVerify: true
+ group: cluster.karmada.io
+ groupPriorityMinimum: 2000
+ service:
+ name: karmada-aggregated-apiserver
+ namespace: karmada-system
+ port: 443
+ version: v1alpha1
+ versionPriority: 10
+---
+apiVersion: v1
+kind: Service
+metadata:
+ name: karmada-aggregated-apiserver
+ namespace: karmada-system
+spec:
+ type: ExternalName
+ externalName: karmada-01
+```
+
+(2) `kubectl create -f karmada-aggregated-apiserver-apiservice.yaml`
+
+### 验证
+
+```bash
+$ ./check_status.sh
+###### 开始检查 karmada-aggregated-apiserver
+[+]ping ok
+[+]log ok
+[+]etcd ok
+[+]poststarthook/generic-apiserver-start-informers ok
+[+]poststarthook/max-in-flight-filter ok
+[+]poststarthook/start-aggregated-server-informers ok
+livez check passed
+
+###### karmada-aggregated-apiserver 检查成功
+```
+
+## 安装 kube-controller-manager
+
+对 `karmada-01`、`karmada-02`、`karmada-03` 执行操作。以 `karmada-01` 为例。
+
+### 创建 Systemd 服务
+
+/usr/lib/systemd/system/kube-controller-manager.service
+
+```bash
+[Unit]
+Description=Kubernetes Controller Manager
+Documentation=https://kubernetes.io/docs/home/
+After=network.target
+
+[Service]
+ExecStart=/usr/local/sbin/kube-controller-manager \
+ --authentication-kubeconfig /etc/karmada/kube-controller-manager.kubeconfig \
+ --authorization-kubeconfig /etc/karmada/kube-controller-manager.kubeconfig \
+ --bind-address "0.0.0.0" \
+ --client-ca-file /etc/karmada/pki/server-ca.crt \
+ --cluster-name karmada \
+ --cluster-signing-cert-file /etc/karmada/pki/server-ca.crt \
+ --cluster-signing-key-file /etc/karmada/pki/server-ca.key \
+ --concurrent-deployment-syncs 10 \
+ --concurrent-gc-syncs 30 \
+ --concurrent-service-syncs 1 \
+ --controllers "namespace,garbagecollector,serviceaccount-token" \
+ --feature-gates "RotateKubeletServerCertificate=true" \
+ --horizontal-pod-autoscaler-sync-period 10s \
+ --kube-api-burst 2000 \
+ --kube-api-qps 1000 \
+ --kubeconfig /etc/karmada/kube-controller-manager.kubeconfig \
+ --leader-elect \
+ --logtostderr=true \
+ --node-cidr-mask-size 24 \
+ --pod-eviction-timeout 5m \
+ --requestheader-allowed-names "front-proxy-client" \
+ --requestheader-client-ca-file /etc/karmada/pki/front-proxy-ca.crt \
+ --requestheader-extra-headers-prefix "X-Remote-Extra-" \
+ --requestheader-group-headers "X-Remote-Group" \
+ --requestheader-username-headers "X-Remote-User" \
+ --root-ca-file /etc/karmada/pki/server-ca.crt \
+ --service-account-private-key-file /etc/karmada/pki/sa.key \
+ --service-cluster-ip-range "10.254.0.0/16" \
+ --terminated-pod-gc-threshold 10000 \
+ --tls-cert-file /etc/karmada/pki/kube-controller-manager.crt \
+ --tls-private-key-file /etc/karmada/pki/kube-controller-manager.key \
+ --use-service-account-credentials \
+ --v 4 \
+
+Restart=on-failure
+RestartSec=5
+LimitNOFILE=65536
+
+[Install]
+WantedBy=multi-user.target
+```
+
+### 启动 kube-controller-manager
+
+```bash
+systemctl daemon-reload
+systemctl enable kube-controller-manager.service
+systemctl start kube-controller-manager.service
+systemctl status kube-controller-manager.service
+```
+
+### 验证
+
+```bash
+$ ./check_status.sh
+###### 开始检查 kube-controller-manager
+[+]leaderElection ok
+healthz check passed
+
+###### kube-controller-manager 检查成功
+```
+
+## 安装 karmada-controller-manager
+
+### 创建 Systemd 服务
+
+对 `karmada-01`、`karmada-02`、`karmada-03` 执行操作。以 `karmada-01` 为例。
+
+/usr/lib/systemd/system/karmada-controller-manager.service
+
+```bash
+[Unit]
+Description=Karmada Controller Manager
+Documentation=https://github.com/karmada-io/karmada
+
+[Service]
+ExecStart=/usr/local/sbin/karmada-controller-manager \
+ --bind-address 0.0.0.0 \
+ --cluster-status-update-frequency 10s \
+ --kubeconfig /etc/karmada/karmada.kubeconfig \
+ --logtostderr=true \
+ --metrics-bind-address ":10358" \
+ --secure-port 10357 \
+ --v=4 \
+
+Restart=on-failure
+RestartSec=5
+LimitNOFILE=65536
+
+[Install]
+WantedBy=multi-user.target
+```
+
+### 启动 karmada-controller-manager
+
+```bash
+systemctl daemon-reload
+systemctl enable karmada-controller-manager.service
+systemctl start karmada-controller-manager.service
+systemctl status karmada-controller-manager.service
+```
+
+### 验证
+
+```bash
+$ ./check_status.sh
+###### 开始检查 karmada-controller-manager
+[+]ping ok
+healthz check passed
+
+###### karmada-controller-manager 检查成功
+```
+
+## 安装 karmada-scheduler
+
+### 创建 Systemd Service
+
+对 `karmada-01`、`karmada-02`、`karmada-03` 执行操作。以 `karmada-01` 为例。
+
+/usr/lib/systemd/system/karmada-scheduler.service
+
+```bash
+[Unit]
+Description=Karmada Scheduler
+Documentation=https://github.com/karmada-io/karmada
+
+[Service]
+ExecStart=/usr/local/sbin/karmada-scheduler \
+ --bind-address 0.0.0.0 \
+ --enable-scheduler-estimator=true \
+ --kubeconfig /etc/karmada/karmada.kubeconfig \
+ --logtostderr=true \
+ --scheduler-estimator-port 10352 \
+ --secure-port 10511 \
+ --v=4 \
+
+Restart=on-failure
+RestartSec=5
+LimitNOFILE=65536
+
+[Install]
+WantedBy=multi-user.target
+```
+
+### 启动 karmada-scheduler
+
+```bash
+systemctl daemon-reload
+systemctl enable karmada-scheduler.service
+systemctl start karmada-scheduler.service
+systemctl status karmada-scheduler.service
+```
+
+### 验证
+
+```bash
+$ ./check_status.sh
+###### 开始检查 karmada-scheduler
+ok
+###### karmada-scheduler 检查成功
+```
+
+## 安装 karmada-webhook
+
+`karmada-webhook` 不同于 `scheduler` 和 `controller-manager`,其高可用需要用 `nginx` 实现。
+
+修改 `nginx` 配置并添加以下配置。对 `karmada-01` 执行以下操作。
+
+```bash
+cat /usr/local/karmada-nginx/conf/nginx.conf
+worker_processes 2;
+
+events {
+ worker_connections 1024;
+}
+
+stream {
+ upstream backend {
+ hash consistent;
+ server 172.31.209.245:6443 max_fails=3 fail_timeout=30s;
+ server 172.31.209.246:6443 max_fails=3 fail_timeout=30s;
+ server 172.31.209.247:6443 max_fails=3 fail_timeout=30s;
+ }
+
+ upstream webhook {
+ hash consistent;
+ server 172.31.209.245:8443 max_fails=3 fail_timeout=30s;
+ server 172.31.209.246:8443 max_fails=3 fail_timeout=30s;
+ server 172.31.209.247:8443 max_fails=3 fail_timeout=30s;
+ }
+
+ server {
+ listen 172.31.209.245:5443;
+ proxy_connect_timeout 1s;
+ proxy_pass backend;
+ }
+
+ server {
+ listen 172.31.209.245:4443;
+ proxy_connect_timeout 1s;
+ proxy_pass webhook;
+ }
+}
+```
+
+重新加载 `nginx` 配置。
+
+```bash
+systemctl restart karmada-nginx
+```
+
+### 创建 Systemd 服务
+
+对 `karmada-01`、`karmada-02`、`karmada-03` 执行操作。以 `karmada-01` 为例。
+
+/usr/lib/systemd/system/karmada-webhook.service
+
+```bash
+[Unit]
+Description=Karmada Webhook
+Documentation=https://github.com/karmada-io/karmada
+
+[Service]
+ExecStart=/usr/local/sbin/karmada-webhook \
+ --bind-address 0.0.0.0 \
+ --cert-dir /etc/karmada/pki \
+ --health-probe-bind-address ":8444" \
+ --kubeconfig /etc/karmada/karmada.kubeconfig \
+ --logtostderr=true \
+ --metrics-bind-address ":8445" \
+ --secure-port 8443 \
+ --tls-cert-file-name "karmada.crt" \
+ --tls-private-key-file-name "karmada.key" \
+ --v=4 \
+
+Restart=on-failure
+RestartSec=5
+LimitNOFILE=65536
+
+[Install]
+WantedBy=multi-user.target
+```
+
+### 启动 karmada-webhook
+
+```bash
+systemctl daemon-reload
+systemctl enable karmada-webhook.service
+systemctl start karmada-webhook.service
+systemctl status karmada-webhook.service
+```
+
+### 配置 karmada-webhook
+
+下载 `webhook-configuration.yaml` 文件:
+
+```bash
+ca_string=$(cat /etc/karmada/pki/server-ca.crt | base64 | tr "\n" " "|sed s/[[:space:]]//g)
+sed -i "s/{{caBundle}}/${ca_string}/g" webhook-configuration.yaml
+# 你需要将 172.31.209.245:4443 更改为你的负载均衡器 host:port。
+sed -i 's/karmada-webhook.karmada-system.svc:443/172.31.209.245:4443/g' webhook-configuration.yaml
+
+kubectl create -f webhook-configuration.yaml
+```
+
+### 验证
+
+```bash
+$ ./check_status.sh
+###### 开始检查 karmada-webhook
+ok
+###### karmada-webhook 检查成功
+```
+
+## 初始化 Karmada
+
+对 `karmada-01` 执行以下操作。
+
+```bash
+git clone https://github.com/karmada-io/karmada
+cd karmada/charts/karmada/_crds/bases
+
+kubectl apply -f .
+
+cd ../patches/
+ca_string=$(cat /etc/karmada/pki/server-ca.crt | base64 | tr "\n" " "|sed s/[[:space:]]//g)
+sed -i "s/{{caBundle}}/${ca_string}/g" webhook_in_resourcebindings.yaml
+sed -i "s/{{caBundle}}/${ca_string}/g" webhook_in_clusterresourcebindings.yaml
+# 你需要将 172.31.209.245:4443 更改为你的负载均衡器 host:port。
+sed -i 's/karmada-webhook.karmada-system.svc:443/172.31.209.245:4443/g' webhook_in_resourcebindings.yaml
+sed -i 's/karmada-webhook.karmada-system.svc:443/172.31.209.245:4443/g' webhook_in_clusterresourcebindings.yaml
+
+kubectl patch CustomResourceDefinition resourcebindings.work.karmada.io --patch-file webhook_in_resourcebindings.yaml
+kubectl patch CustomResourceDefinition clusterresourcebindings.work.karmada.io --patch-file webhook_in_clusterresourcebindings.yaml
+```
+
+此时,Karmada 的基础组件已经安装完毕,你可以开始接入成员集群。如果想使用 karmadactl 聚合查询,还需要为 `cluster.karmada.io` 组的 `clusters/proxy` 子资源配置 RBAC 授权,示例如下(角色与绑定名称可按需调整,subjects 中的用户名需与你的管理员证书 CN 保持一致):
+```sh
+cat <<EOF | kubectl apply -f -
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRole
+metadata:
+  name: cluster-proxy-clusterrole
+rules:
+- apiGroups:
+  - 'cluster.karmada.io'
+  resources:
+  - clusters/proxy
+  verbs:
+  - '*'
+---
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRoleBinding
+metadata:
+  name: cluster-proxy-clusterrolebinding
+roleRef:
+  apiGroup: rbac.authorization.k8s.io
+  kind: ClusterRole
+  name: cluster-proxy-clusterrole
+subjects:
+  - kind: User
+    name: "system:admin"
+EOF
+```
+
+## 通过 Karmadactl 安装 Karmada
+
+> 注:从 v1.0 开始可以使用 `init` 命令。运行 `init` 命令需要较高的权限,以便将多个用户的公共配置(证书、CRD)存储在默认位置 `/etc/karmada` 下,你可以通过标志 `--karmada-data` 和 `--karmada-pki` 覆盖此位置。有关更多详细信息或用法,请参阅 CLI。
+
+运行以下命令进行安装:
+```bash
+kubectl karmada init
+```
+安装过程需要大约 5 分钟。如果一切正常,你将看到类似的输出:
+```
+I1121 19:33:10.270959 2127786 tlsbootstrap.go:61] [bootstrap-token] configured RBAC rules to allow certificate rotation for all agent client certificates in the member cluster
+I1121 19:33:10.275041 2127786 deploy.go:127] Initialize karmada bootstrap token
+I1121 19:33:10.281426 2127786 deploy.go:397] create karmada kube controller manager Deployment
+I1121 19:33:10.288232 2127786 idempotency.go:276] Service karmada-system/kube-controller-manager has been created or updated.
+...
+...
+------------------------------------------------------------------------------------------------------
+ █████ ████ █████████ ███████████ ██████ ██████ █████████ ██████████ █████████
+░░███ ███░ ███░░░░░███ ░░███░░░░░███ ░░██████ ██████ ███░░░░░███ ░░███░░░░███ ███░░░░░███
+ ░███ ███ ░███ ░███ ░███ ░███ ░███░█████░███ ░███ ░███ ░███ ░░███ ░███ ░███
+ ░███████ ░███████████ ░██████████ ░███░░███ ░███ ░███████████ ░███ ░███ ░███████████
+ ░███░░███ ░███░░░░░███ ░███░░░░░███ ░███ ░░░ ░███ ░███░░░░░███ ░███ ░███ ░███░░░░░███
+ ░███ ░░███ ░███ ░███ ░███ ░███ ░███ ░███ ░███ ░███ ░███ ███ ░███ ░███
+ █████ ░░████ █████ █████ █████ █████ █████ █████ █████ █████ ██████████ █████ █████
+░░░░░ ░░░░ ░░░░░ ░░░░░ ░░░░░ ░░░░░ ░░░░░ ░░░░░ ░░░░░ ░░░░░ ░░░░░░░░░░ ░░░░░ ░░░░░
+------------------------------------------------------------------------------------------------------
+Karmada is installed successfully.
+Register Kubernetes cluster to Karmada control plane.
+Register cluster with 'Push' mode
+Step 1: Use "kubectl karmada join" command to register the cluster to Karmada control plane. --cluster-kubeconfig is kubeconfig of the member cluster.
+(In karmada)~# MEMBER_CLUSTER_NAME=$(cat ~/.kube/config | grep current-context | sed 's/: /\n/g'| sed '1d')
+(In karmada)~# kubectl karmada --kubeconfig /etc/karmada/karmada-apiserver.config join ${MEMBER_CLUSTER_NAME} --cluster-kubeconfig=$HOME/.kube/config
+Step 2: Show members of karmada
+(In karmada)~# kubectl --kubeconfig /etc/karmada/karmada-apiserver.config get clusters
+Register cluster with 'Pull' mode
+Step 1: Use "kubectl karmada register" command to register the cluster to Karmada control plane. "--cluster-name" is set to cluster of current-context by default.
+(In member cluster)~# kubectl karmada register 172.18.0.3:32443 --token lm6cdu.lcm4wafod2jmjvty --discovery-token-ca-cert-hash sha256:9bf5aa53d2716fd9b5568c85db9461de6429ba50ef7ade217f55275d89e955e4
+Step 2: Show members of karmada
+(In karmada)~# kubectl --kubeconfig /etc/karmada/karmada-apiserver.config get clusters
+
+```
+
+Karmada 的组件默认安装在 `karmada-system` 命名空间中,你可以通过以下命令查看:
+```bash
+kubectl get deployments -n karmada-system
+NAME READY UP-TO-DATE AVAILABLE AGE
+karmada-aggregated-apiserver 1/1 1 1 102s
+karmada-apiserver 1/1 1 1 2m34s
+karmada-controller-manager 1/1 1 1 116s
+karmada-scheduler 1/1 1 1 119s
+karmada-webhook 1/1 1 1 113s
+kube-controller-manager 1/1 1 1 2m3s
+```
+`karmada-etcd` 被安装为 `StatefulSet`,通过以下命令查看:
+```bash
+kubectl get statefulsets -n karmada-system
+NAME READY AGE
+etcd 1/1 28m
+```
+
+Karmada 的配置文件默认创建到 `/etc/karmada/karmada-apiserver.config`。
+
+#### 离线安装
+
+安装 Karmada 时,`kubectl karmada init` 默认将从 Karmada 官网 release 页面(例如 `https://github.com/karmada-io/karmada/releases/tag/v0.10.1`)下载 API(CRD),并从官方镜像仓库加载镜像。
+
+如果你要离线安装 Karmada,你可能必须指定 API tar 文件和镜像。
+
+使用 `--crds` 标志指定 CRD 文件,例如:
+```bash
+kubectl karmada init --crds /$HOME/crds.tar.gz
+```
+
+你可以指定 Karmada 组件的镜像,以 `karmada-controller-manager` 为例:
+```bash
+kubectl karmada init --karmada-controller-manager-image=example.registry.com/library/karmada-controller-manager:1.0
+```
+
+#### 高可用部署
+使用 `--karmada-apiserver-replicas` 和 `--etcd-replicas` 标志指定副本数(默认为 `1`)。
+```bash
+kubectl karmada init --karmada-apiserver-replicas 3 --etcd-replicas 3
+```
+
+### 在 Kind 集群中安装 Karmada
+
+> kind 是一个使用 Docker 容器“节点”运行本地 Kubernetes 集群的工具。
+> 它主要设计用于测试 Kubernetes 本身,并非用于生产。
+
+通过 `hack/create-cluster.sh` 创建一个名为 `host` 的集群:
+```bash
+hack/create-cluster.sh host $HOME/.kube/host.config
+```
+
+通过命令 `kubectl karmada init` 安装 Karmada v1.2.0:
+```bash
+kubectl karmada init --crds https://github.com/karmada-io/karmada/releases/download/v1.2.0/crds.tar.gz --kubeconfig=$HOME/.kube/host.config
+```
+
+检查已安装的组件:
+```bash
+kubectl get pods -n karmada-system --kubeconfig=$HOME/.kube/host.config
+NAME READY STATUS RESTARTS AGE
+etcd-0 1/1 Running 0 2m55s
+karmada-aggregated-apiserver-84b45bf9b-n5gnk 1/1 Running 0 109s
+karmada-apiserver-6dc4cf6964-cz4jh 1/1 Running 0 2m40s
+karmada-controller-manager-556cf896bc-79sxz 1/1 Running 0 2m3s
+karmada-scheduler-7b9d8b5764-6n48j 1/1 Running 0 2m6s
+karmada-webhook-7cf7986866-m75jw 1/1 Running 0 2m
+kube-controller-manager-85c789dcfc-k89f8 1/1 Running 0 2m10s
+```
+
+## 通过 Helm Chart Deployment 安装 Karmada
+
+请参阅[通过 Helm 安装](https://github.com/karmada-io/karmada/tree/master/charts/karmada)。
+
+## 通过 Karmada Operator 安装 Karmada
+
+请参阅[通过 Karmada Operator 安装](https://github.com/karmada-io/karmada/blob/master/operator/README.md)。
+
+## 通过二进制安装 Karmada
+
+请参阅[通过二进制安装](./install-binary.md)。
+
+## 从源代码安装 Karmada
+
+请参阅[从源代码安装](./fromsource.md)。
+
+[1]: https://kubernetes.io/zh-cn/docs/tasks/extend-kubectl/kubectl-plugins/
+
+## 为开发环境安装 Karmada
+
+如果你要试用 Karmada,我们推荐用 `hack/local-up-karmada.sh` 构建一个开发环境,该脚本将为你执行以下任务:
+- 通过 [kind](https://kind.sigs.k8s.io/) 启动一个 Kubernetes 集群以运行 Karmada 控制面(也称为 `host cluster`)。
+- 基于当前代码库构建 Karmada 控制面组件。
+- 在 `host cluster` 上部署 Karmada 控制面组件。
+- 创建成员集群并接入 Karmada。
+
+**1. 克隆 Karmada 仓库到你的机器:**
+```
+git clone https://github.com/karmada-io/karmada
+```
+或替换你的 `GitHub ID` 来使用你的 fork 仓库:
+```
+git clone https://github.com/<GitHub ID>/karmada
+```
+
+**2. 进入 karmada 目录:**
+```
+cd karmada
+```
+
+**3. 部署并运行 Karmada 控制面:**
+
+运行以下脚本:
+
+```
+hack/local-up-karmada.sh
+```
+如果一切良好,在脚本输出结束时,你将看到类似以下的消息:
+```
+Local Karmada is running.
+
+To start using your Karmada environment, run:
+ export KUBECONFIG="$HOME/.kube/karmada.config"
+Please use 'kubectl config use-context karmada-host/karmada-apiserver' to switch the host and control plane cluster.
+
+To manage your member clusters, run:
+ export KUBECONFIG="$HOME/.kube/members.config"
+Please use 'kubectl config use-context member1/member2/member3' to switch to the different member cluster.
+```
+
+**4. 检查注册的集群**
+
+```
+kubectl get clusters --kubeconfig=/$HOME/.kube/karmada.config
+```
+
+你将看到类似以下的输出:
+```
+NAME VERSION MODE READY AGE
+member1 v1.23.4 Push True 7m38s
+member2 v1.23.4 Push True 7m35s
+member3 v1.23.4 Pull True 7m27s
+```
+
+有 3 个名为 `member1`、`member2` 和 `member3` 的集群已使用 `Push` 或 `Pull` 模式进行了注册。
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/key-features/features.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/key-features/features.md
new file mode 100644
index 000000000..0c0ec1a71
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/key-features/features.md
@@ -0,0 +1,120 @@
+---
+title: 关键特性
+---
+
+## 跨云多集群多模式管理
+
+Karmada 支持:
+
+* 安全隔离:
+ * 为每个集群创建一个 namespace,以`karmada-es-`为前缀。
+* [多模式](../userguide/clustermanager/cluster-registration.md) :
+ * Push:Karmada 与成员集群的 kube-apiserver 直连。
+ * Pull:在成员集群中安装 agent 组件,Karmada 将任务委托给 agent 组件执行。
+* 多云支持(符合 Kubernetes 规范)
+ * 支持各公有云厂商。
+ * 支持私有云。
+ * 支持自建集群。
+
+成员集群和控制面的整体关系如下图所示:
+
+![overall-relationship.png](../resources/key-features/overall-relationship.png)
+
+## 多策略的多集群调度
+
+Karmada 支持:
+
+* 不同 [调度策略](../userguide/scheduling/resource-propagating.md) 下的集群分发能力:
+ * ClusterAffinity:基于 ClusterName、Label、Field 的定向调度。
+ * Toleration:基于 Taint 和 Toleration 的调度。
+ * SpreadConstraint:基于集群拓扑的调度。
+ * ReplicasScheduling:针对有实例的工作负载的复制模式与拆分模式。
+* 差异化配置( [OverridePolicy](../userguide/scheduling/override-policy.md) ):
+ * ImageOverrider:镜像的差异化配置。
+ * ArgsOverrider:运行参数的差异化配置。
+ * CommandOverrider:运行命令的差异化配置。
+ * PlainText:自定义的差异化配置。
+* 支持 [重调度](../userguide/scheduling/descheduler.md) :
+ * Descheduler(karmada-descheduler):根据成员集群内实例状态变化触发重调度。
+ * Scheduler-estimator(karmada-scheduler-estimator):为调度器提供更精确的成员集群运行实例的期望状态。
+
+像 k8s 调度一样,Karmada 支持不同的调度策略。整体的调度流程如下图所示:
+
+![overall-relationship.png](../resources/key-features/overall-scheduling.png)
+
+如果一个成员集群没有足够的资源容纳其中的 Pod,Karmada 会重新调度 Pod。整体的重调度流程如下图所示:
+
+![overall-relationship.png](../resources/key-features/overall-rescheduling.png)
+
+## 应用的跨集群故障迁移
+
+Karmada 支持:
+
+* [集群故障迁移](../userguide/failover/failover-overview.md) :
+ * Karmada 支持用户设置分发策略,在集群发生故障后,将故障集群实例进行自动的集中式或分散式的迁移。
+* 集群污点设置:
+ * 当用户为集群设置污点,且资源分发策略无法容忍污点时,Karmada 也会自动触发集群实例的迁移。
+* 服务不中断:
+ * 在实例迁移过程中,Karmada 能够保证服务实例数不降为零,从而确保服务不中断。
+
+Karmada 支持成员集群的故障迁移,一个成员集群故障会导致集群实例的迁移,如下图所示:
+
+![overall-relationship.png](../resources/key-features/cluster-failover.png)
+
+## 全局统一资源视图
+
+Karmada 支持:
+
+* [资源状态收集与聚合](../userguide/globalview/customizing-resource-interpreter.md) :借助资源解释器(Resource Interpreter),将状态收集并聚合到资源模板
+ * 用户自定义,触发 Webhook 远程调用。
+ * 对于一些常见资源,其解释逻辑已内置在 Karmada 中。
+* [统一资源管理](../userguide/globalview/aggregated-api-endpoint.md) :统一管理资源的创建、更新、删除、查询。
+* [统一运维](../userguide/globalview/proxy-global-resource.md) :可以在同一个 k8s 上下文中执行`describe`、`exec`、`logs`。
+* [资源、事件全局搜索](../tutorials/karmada-search.md) :
+ * 缓存查询:支持全局模糊搜索、全局精确搜索。
+ * 第三方存储:支持搜索引擎(Elasticsearch 或 OpenSearch)、关系型数据库、图数据库。
+
+用户可以通过 karmada-apiserver 连接和操作所有成员集群:
+
+![overall-relationship.png](../resources/key-features/unified-operation.png)
+
+用户也可以通过 karmada-apiserver 检查和搜索所有成员集群的资源:
+
+![overall-relationship.png](../resources/key-features/unified-resourcequota.png)
+
+## 最佳生产实践
+
+Karmada 支持:
+
+* [统一认证鉴权](../userguide/bestpractices/unified-auth.md) :
+ * 聚合 API 统一访问入口。
+ * 访问权限控制与成员集群一致。
+* 全局资源配额(`FederatedResourceQuota`):
+ * 全局配置各成员集群的 ResourceQuota。
+ * 配置联邦级别的 ResourceQuota。
+ * 实时收集各成员集群的资源使用量。
+* 可复用调度策略:
+ * 资源模板与调度策略解耦,即插即用。
+
+用户可以通过统一认证连接所有成员集群:
+
+![overall-relationship.png](../resources/key-features/unified-access.png)
+
+用户也可以通过`FederatedResourceQuota`定义全局资源配额:
+
+![overall-relationship.png](../resources/key-features/unified-resourcequota.png)
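+
+一个 `FederatedResourceQuota` 的写法大致如下(命名空间、数值与集群名仅为示意):
+
+```yaml
+apiVersion: policy.karmada.io/v1alpha1
+kind: FederatedResourceQuota
+metadata:
+  name: example
+  namespace: test
+spec:
+  overall:
+    cpu: "10"
+    memory: 10Gi
+  staticAssignments:
+    - clusterName: member1
+      hard:
+        cpu: "4"
+        memory: 4Gi
+    - clusterName: member2
+      hard:
+        cpu: "6"
+        memory: 6Gi
+```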
+
+## 跨集群服务治理
+
+Karmada 支持:
+
+* [多集群服务发现](../userguide/service/multi-cluster-service.md) :
+ * 使用 ServiceExport 和 ServiceImport,实现跨集群的服务发现。
+* [多集群网络支持](../userguide/network/working-with-submariner.md) :
+ * 使用`Submariner`打通集群间容器网络。
+* [使用 ErieCanal 实现跨集群的服务治理](../userguide/service/working-with-eriecanal.md)
+ * 与 `ErieCanal` 集成支持跨集群的服务治理。
+
+用户可以使用 Karmada,开启跨集群服务治理:
+
+![overall-relationship.png](../resources/key-features/service-governance.png)
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/components/karmada-agent.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/components/karmada-agent.md
new file mode 100644
index 000000000..772a07167
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/components/karmada-agent.md
@@ -0,0 +1,91 @@
+---
+title: karmada-agent
+---
+
+
+
+### Synopsis
+
+The karmada-agent is the agent of member clusters. It can register a specific cluster to the Karmada control
+plane and sync manifests from the Karmada control plane to the member cluster. In addition, it also syncs the status of member
+cluster and manifests to the Karmada control plane.
+
+```
+karmada-agent [flags]
+```
+
+### Options
+
+```
+ --add_dir_header If true, adds the file directory to the header of the log messages
+ --alsologtostderr log to standard error as well as files (no effect when -logtostderr=true)
+ --bind-address string The IP address on which to listen for the --secure-port port. (default "0.0.0.0")
+ --cert-rotation-checking-interval duration The interval of checking if the certificate need to be rotated. This is only applicable if cert rotation is enabled (default 5m0s)
+ --cert-rotation-remaining-time-threshold float The threshold of remaining time of the valid certificate. This is only applicable if cert rotation is enabled. (default 0.2)
+ --cluster-api-burst int Burst to use while talking with cluster kube-apiserver. Doesn't cover events and node heartbeat apis which rate limiting is controlled by a different set of flags. (default 60)
+ --cluster-api-endpoint string APIEndpoint of the cluster.
+ --cluster-api-qps float32 QPS to use while talking with cluster kube-apiserver. Doesn't cover events and node heartbeat apis which rate limiting is controlled by a different set of flags. (default 40)
+ --cluster-cache-sync-timeout duration Timeout period waiting for cluster cache to sync. (default 2m0s)
+ --cluster-failure-threshold duration The duration of failure for the cluster to be considered unhealthy. (default 30s)
+ --cluster-lease-duration duration Specifies the expiration period of a cluster lease. (default 40s)
+ --cluster-lease-renew-interval-fraction float Specifies the cluster lease renew interval fraction. (default 0.25)
+ --cluster-name string Name of member cluster that the agent serves for.
+ --cluster-namespace string Namespace in the control plane where member cluster secrets are stored. (default "karmada-cluster")
+ --cluster-provider string Provider of the joining cluster. The Karmada scheduler can use this information to spread workloads across providers for higher availability.
+ --cluster-region string The region of the joining cluster. The Karmada scheduler can use this information to spread workloads across regions for higher availability.
+ --cluster-status-update-frequency duration Specifies how often karmada-agent posts cluster status to karmada-apiserver. Note: be cautious when changing the constant, it must work with ClusterMonitorGracePeriod in karmada-controller-manager. (default 10s)
+ --cluster-success-threshold duration The duration of successes for the cluster to be considered healthy after recovery. (default 30s)
+ --cluster-zones strings The zones of the joining cluster. The Karmada scheduler can use this information to spread workloads across zones for higher availability.
+ --concurrent-cluster-syncs int The number of Clusters that are allowed to sync concurrently. (default 5)
+ --concurrent-work-syncs int The number of Works that are allowed to sync concurrently. (default 5)
+ --controllers strings A list of controllers to enable. '*' enables all on-by-default controllers, 'foo' enables the controller named 'foo', '-foo' disables the controller named 'foo'. All controllers: certRotation, clusterStatus, endpointsliceCollect, execution, serviceExport, workStatus. (default [*])
+ --enable-cluster-resource-modeling Enable means controller would build resource modeling for each cluster by syncing Nodes and Pods resources.
+ The resource modeling might be used by the scheduler to make scheduling decisions in scenario of dynamic replica assignment based on cluster free resources.
+ Disable if it does not fit your cases for better performance. (default true)
+ --enable-pprof Enable profiling via web interface host:port/debug/pprof/.
+ --feature-gates mapStringBool A set of key=value pairs that describe feature gates for alpha/experimental features. Options are:
+ AllAlpha=true|false (ALPHA - default=false)
+ AllBeta=true|false (BETA - default=false)
+ CustomizedClusterResourceModeling=true|false (BETA - default=true)
+ Failover=true|false (BETA - default=true)
+ GracefulEviction=true|false (BETA - default=true)
+ MultiClusterService=true|false (ALPHA - default=false)
+ PropagateDeps=true|false (BETA - default=true)
+ PropagationPolicyPreemption=true|false (ALPHA - default=false)
+ ResourceQuotaEstimate=true|false (ALPHA - default=false)
+ -h, --help help for karmada-agent
+ --karmada-context string Name of the cluster context in karmada control plane kubeconfig file.
+ --karmada-kubeconfig string Path to karmada control plane kubeconfig file.
+ --karmada-kubeconfig-namespace string Namespace of the secret containing karmada-agent certificate. This is only applicable if cert rotation is enabled. (default "karmada-system")
+ --kube-api-burst int Burst to use while talking with karmada-apiserver. Doesn't cover events and node heartbeat apis which rate limiting is controlled by a different set of flags. (default 60)
+ --kube-api-qps float32 QPS to use while talking with karmada-apiserver. Doesn't cover events and node heartbeat apis which rate limiting is controlled by a different set of flags. (default 40)
+ --kubeconfig string Paths to a kubeconfig. Only required if out-of-cluster.
+ --leader-elect Start a leader election client and gain leadership before executing the main loop. Enable this when running replicated components for high availability. (default true)
+ --leader-elect-lease-duration duration The duration that non-leader candidates will wait after observing a leadership renewal until attempting to acquire leadership of a led but unrenewed leader slot. This is effectively the maximum duration that a leader can be stopped before it is replaced by another candidate. This is only applicable if leader election is enabled. (default 15s)
+ --leader-elect-renew-deadline duration The interval between attempts by the acting master to renew a leadership slot before it stops leading. This must be less than or equal to the lease duration. This is only applicable if leader election is enabled. (default 10s)
+ --leader-elect-resource-namespace string The namespace of resource object that is used for locking during leader election. (default "karmada-system")
+ --leader-elect-retry-period duration The duration the clients should wait between attempting acquisition and renewal of a leadership. This is only applicable if leader election is enabled. (default 2s)
+ --log_backtrace_at traceLocation when logging hits line file:N, emit a stack trace (default :0)
+ --log_dir string If non-empty, write log files in this directory (no effect when -logtostderr=true)
+ --log_file string If non-empty, use this log file (no effect when -logtostderr=true)
+ --log_file_max_size uint Defines the maximum size a log file can grow to (no effect when -logtostderr=true). Unit is megabytes. If the value is 0, the maximum file size is unlimited. (default 1800)
+ --logtostderr log to standard error instead of files (default true)
+ --metrics-bind-address string The TCP address that the controller should bind to for serving prometheus metrics(e.g. 127.0.0.1:8080, :8080). It can be set to "0" to disable the metrics serving. (default ":8080")
+ --one_output If true, only write logs to their native severity level (vs also writing to each lower severity level; no effect when -logtostderr=true)
+ --profiling-bind-address string The TCP address for serving profiling(e.g. 127.0.0.1:6060, :6060). This is only applicable if profiling is enabled. (default ":6060")
+ --proxy-server-address string Address of the proxy server that is used to proxy to the cluster.
+ --rate-limiter-base-delay duration The base delay for rate limiter. (default 5ms)
+ --rate-limiter-bucket-size int The bucket size for rate limier. (default 100)
+ --rate-limiter-max-delay duration The max delay for rate limiter. (default 16m40s)
+ --rate-limiter-qps int The QPS for rate limier. (default 10)
+ --report-secrets strings The secrets that are allowed to be reported to the Karmada control plane during registering. Valid values are 'KubeCredentials', 'KubeImpersonator' and 'None'. e.g 'KubeCredentials,KubeImpersonator' or 'None'. (default [KubeCredentials,KubeImpersonator])
+ --resync-period duration Base frequency the informers are resynced.
+ --secure-port int The secure port on which to serve HTTPS. (default 10357)
+ --skip_headers If true, avoid header prefixes in the log messages
+ --skip_log_headers If true, avoid headers when opening log files (no effect when -logtostderr=true)
+ --stderrthreshold severity logs at or above this threshold go to stderr when writing to files and stderr (no effect when -logtostderr=true or -alsologtostderr=false) (default 2)
+ -v, --v Level number for the log level verbosity
+ --vmodule moduleSpec comma-separated list of pattern=N settings for file-filtered logging
+```
+
+###### Auto generated by [spf13/cobra script in Karmada](https://github.com/karmada-io/karmada/tree/master/hack/tools/gencomponentdocs)
\ No newline at end of file
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/components/karmada-aggregated-apiserver.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/components/karmada-aggregated-apiserver.md
new file mode 100644
index 000000000..9fd3ddb33
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/components/karmada-aggregated-apiserver.md
@@ -0,0 +1,157 @@
+---
+title: karmada-aggregated-apiserver
+---
+
+
+
+### Synopsis
+
+The karmada-aggregated-apiserver starts an aggregated server.
+It is responsible for registering the Cluster API and provides the ability to aggregate APIs,
+allowing users to access member clusters from the control plane directly.
+
+```
+karmada-aggregated-apiserver [flags]
+```
+
+### Options
+
+```
+ --add_dir_header If true, adds the file directory to the header of the log messages
+ --admission-control-config-file string File with admission control configuration.
+ --alsologtostderr log to standard error as well as files (no effect when -logtostderr=true)
+ --audit-log-batch-buffer-size int The size of the buffer to store events before batching and writing. Only used in batch mode. (default 10000)
+ --audit-log-batch-max-size int The maximum size of a batch. Only used in batch mode. (default 1)
+ --audit-log-batch-max-wait duration The amount of time to wait before force writing the batch that hadn't reached the max size. Only used in batch mode.
+ --audit-log-batch-throttle-burst int Maximum number of requests sent at the same moment if ThrottleQPS was not utilized before. Only used in batch mode.
+ --audit-log-batch-throttle-enable Whether batching throttling is enabled. Only used in batch mode.
+ --audit-log-batch-throttle-qps float32 Maximum average number of batches per second. Only used in batch mode.
+ --audit-log-compress If set, the rotated log files will be compressed using gzip.
+ --audit-log-format string Format of saved audits. "legacy" indicates 1-line text format for each event. "json" indicates structured json format. Known formats are legacy,json. (default "json")
+ --audit-log-maxage int The maximum number of days to retain old audit log files based on the timestamp encoded in their filename.
+ --audit-log-maxbackup int The maximum number of old audit log files to retain. Setting a value of 0 will mean there's no restriction on the number of files.
+ --audit-log-maxsize int The maximum size in megabytes of the audit log file before it gets rotated.
+ --audit-log-mode string Strategy for sending audit events. Blocking indicates sending events should block server responses. Batch causes the backend to buffer and write events asynchronously. Known modes are batch,blocking,blocking-strict. (default "blocking")
+ --audit-log-path string If set, all requests coming to the apiserver will be logged to this file. '-' means standard out.
+ --audit-log-truncate-enabled Whether event and batch truncating is enabled.
+ --audit-log-truncate-max-batch-size int Maximum size of the batch sent to the underlying backend. Actual serialized size can be several hundreds of bytes greater. If a batch exceeds this limit, it is split into several batches of smaller size. (default 10485760)
+ --audit-log-truncate-max-event-size int Maximum size of the audit event sent to the underlying backend. If the size of an event is greater than this number, first request and response are removed, and if this doesn't reduce the size enough, event is discarded. (default 102400)
+ --audit-log-version string API group and version used for serializing audit events written to log. (default "audit.k8s.io/v1")
+ --audit-policy-file string Path to the file that defines the audit policy configuration.
+ --audit-webhook-batch-buffer-size int The size of the buffer to store events before batching and writing. Only used in batch mode. (default 10000)
+ --audit-webhook-batch-max-size int The maximum size of a batch. Only used in batch mode. (default 400)
+ --audit-webhook-batch-max-wait duration The amount of time to wait before force writing the batch that hadn't reached the max size. Only used in batch mode. (default 30s)
+ --audit-webhook-batch-throttle-burst int Maximum number of requests sent at the same moment if ThrottleQPS was not utilized before. Only used in batch mode. (default 15)
+ --audit-webhook-batch-throttle-enable Whether batching throttling is enabled. Only used in batch mode. (default true)
+ --audit-webhook-batch-throttle-qps float32 Maximum average number of batches per second. Only used in batch mode. (default 10)
+ --audit-webhook-config-file string Path to a kubeconfig formatted file that defines the audit webhook configuration.
+ --audit-webhook-initial-backoff duration The amount of time to wait before retrying the first failed request. (default 10s)
+ --audit-webhook-mode string Strategy for sending audit events. Blocking indicates sending events should block server responses. Batch causes the backend to buffer and write events asynchronously. Known modes are batch,blocking,blocking-strict. (default "batch")
+ --audit-webhook-truncate-enabled Whether event and batch truncating is enabled.
+ --audit-webhook-truncate-max-batch-size int Maximum size of the batch sent to the underlying backend. Actual serialized size can be several hundreds of bytes greater. If a batch exceeds this limit, it is split into several batches of smaller size. (default 10485760)
+ --audit-webhook-truncate-max-event-size int Maximum size of the audit event sent to the underlying backend. If the size of an event is greater than this number, first request and response are removed, and if this doesn't reduce the size enough, event is discarded. (default 102400)
+ --audit-webhook-version string API group and version used for serializing audit events written to webhook. (default "audit.k8s.io/v1")
+ --authentication-kubeconfig string kubeconfig file pointing at the 'core' kubernetes server with enough rights to create tokenreviews.authentication.k8s.io.
+ --authentication-skip-lookup If false, the authentication-kubeconfig will be used to lookup missing authentication configuration from the cluster.
+ --authentication-token-webhook-cache-ttl duration The duration to cache responses from the webhook token authenticator. (default 10s)
+ --authentication-tolerate-lookup-failure If true, failures to look up missing authentication configuration from the cluster are not considered fatal. Note that this can result in authentication that treats all requests as anonymous.
+ --authorization-always-allow-paths strings A list of HTTP paths to skip during authorization, i.e. these are authorized without contacting the 'core' kubernetes server. (default [/healthz,/readyz,/livez])
+ --authorization-kubeconfig string kubeconfig file pointing at the 'core' kubernetes server with enough rights to create subjectaccessreviews.authorization.k8s.io.
+ --authorization-webhook-cache-authorized-ttl duration The duration to cache 'authorized' responses from the webhook authorizer. (default 10s)
+ --authorization-webhook-cache-unauthorized-ttl duration The duration to cache 'unauthorized' responses from the webhook authorizer. (default 10s)
+ --bind-address ip The IP address on which to listen for the --secure-port port. The associated interface(s) must be reachable by the rest of the cluster, and by CLI/web clients. If blank or an unspecified address (0.0.0.0 or ::), all interfaces and IP address families will be used. (default 0.0.0.0)
+ --cert-dir string The directory where the TLS certs are located. If --tls-cert-file and --tls-private-key-file are provided, this flag will be ignored. (default "apiserver.local.config/certificates")
+ --client-ca-file string If set, any request presenting a client certificate signed by one of the authorities in the client-ca-file is authenticated with an identity corresponding to the CommonName of the client certificate.
+ --contention-profiling Enable block profiling, if profiling is enabled
+ --debug-socket-path string Use an unprotected (no authn/authz) unix-domain socket for profiling with the given path
+ --delete-collection-workers int Number of workers spawned for DeleteCollection call. These are used to speed up namespace cleanup. (default 1)
+ --disable-admission-plugins strings admission plugins that should be disabled although they are in the default enabled plugins list (NamespaceLifecycle, MutatingAdmissionWebhook, ValidatingAdmissionPolicy, ValidatingAdmissionWebhook). Comma-delimited list of admission plugins: MutatingAdmissionWebhook, NamespaceLifecycle, ValidatingAdmissionPolicy, ValidatingAdmissionWebhook. The order of plugins in this flag does not matter.
+ --egress-selector-config-file string File with apiserver egress selector configuration.
+ --enable-admission-plugins strings admission plugins that should be enabled in addition to default enabled ones (NamespaceLifecycle, MutatingAdmissionWebhook, ValidatingAdmissionPolicy, ValidatingAdmissionWebhook). Comma-delimited list of admission plugins: MutatingAdmissionWebhook, NamespaceLifecycle, ValidatingAdmissionPolicy, ValidatingAdmissionWebhook. The order of plugins in this flag does not matter.
+ --enable-garbage-collector Enables the generic garbage collector. MUST be synced with the corresponding flag of the kube-controller-manager. (default true)
+ --enable-pprof Enable profiling via web interface host:port/debug/pprof/.
+ --encryption-provider-config string The file containing configuration for encryption providers to be used for storing secrets in etcd
+ --encryption-provider-config-automatic-reload Determines if the file set by --encryption-provider-config should be automatically reloaded if the disk contents change. Setting this to true disables the ability to uniquely identify distinct KMS plugins via the API server healthz endpoints.
+ --etcd-cafile string SSL Certificate Authority file used to secure etcd communication.
+ --etcd-certfile string SSL certification file used to secure etcd communication.
+ --etcd-compaction-interval duration The interval of compaction requests. If 0, the compaction request from apiserver is disabled. (default 5m0s)
+ --etcd-count-metric-poll-period duration Frequency of polling etcd for number of resources per type. 0 disables the metric collection. (default 1m0s)
+ --etcd-db-metric-poll-interval duration The interval of requests to poll etcd and update metric. 0 disables the metric collection (default 30s)
+ --etcd-healthcheck-timeout duration The timeout to use when checking etcd health. (default 2s)
+ --etcd-keyfile string SSL key file used to secure etcd communication.
+ --etcd-prefix string The prefix to prepend to all resource paths in etcd. (default "/registry")
+ --etcd-readycheck-timeout duration The timeout to use when checking etcd readiness (default 2s)
+ --etcd-servers strings List of etcd servers to connect with (scheme://ip:port), comma separated.
+ --etcd-servers-overrides strings Per-resource etcd servers overrides, comma separated. The individual override format: group/resource#servers, where servers are URLs, semicolon separated. Note that this applies only to resources compiled into this server binary.
+ --feature-gates mapStringBool A set of key=value pairs that describe feature gates for alpha/experimental features. Options are:
+ APIListChunking=true|false (BETA - default=true)
+ APIPriorityAndFairness=true|false (BETA - default=true)
+ APIResponseCompression=true|false (BETA - default=true)
+ APIServerIdentity=true|false (BETA - default=true)
+ APIServerTracing=true|false (BETA - default=true)
+ AdmissionWebhookMatchConditions=true|false (BETA - default=true)
+ AggregatedDiscoveryEndpoint=true|false (BETA - default=true)
+ AllAlpha=true|false (ALPHA - default=false)
+ AllBeta=true|false (BETA - default=false)
+ ComponentSLIs=true|false (BETA - default=true)
+ ConsistentListFromCache=true|false (ALPHA - default=false)
+ CustomResourceValidationExpressions=true|false (BETA - default=true)
+ CustomizedClusterResourceModeling=true|false (BETA - default=true)
+ Failover=true|false (BETA - default=true)
+ GracefulEviction=true|false (BETA - default=true)
+ InPlacePodVerticalScaling=true|false (ALPHA - default=false)
+ KMSv2=true|false (BETA - default=true)
+ KMSv2KDF=true|false (BETA - default=false)
+ MultiClusterService=true|false (ALPHA - default=false)
+ OpenAPIEnums=true|false (BETA - default=true)
+ PropagateDeps=true|false (BETA - default=true)
+ PropagationPolicyPreemption=true|false (ALPHA - default=false)
+ RemainingItemCount=true|false (BETA - default=true)
+ ResourceQuotaEstimate=true|false (ALPHA - default=false)
+ StorageVersionAPI=true|false (ALPHA - default=false)
+ StorageVersionHash=true|false (BETA - default=true)
+ UnauthenticatedHTTP2DOSMitigation=true|false (BETA - default=false)
+ ValidatingAdmissionPolicy=true|false (BETA - default=false)
+ WatchList=true|false (ALPHA - default=false)
+ -h, --help help for karmada-aggregated-apiserver
+ --http2-max-streams-per-connection int The limit that the server gives to clients for the maximum number of streams in an HTTP/2 connection. Zero means to use golang's default. (default 1000)
+ --kube-api-burst int Burst to use while talking with karmada-apiserver. Doesn't cover events and node heartbeat apis which rate limiting is controlled by a different set of flags. (default 60)
+ --kube-api-qps float32 QPS to use while talking with karmada-apiserver. Doesn't cover events and node heartbeat apis which rate limiting is controlled by a different set of flags. (default 40)
+ --kubeconfig string Path to karmada control plane kubeconfig file.
+ --lease-reuse-duration-seconds int The time in seconds that each lease is reused. A lower value could avoid large number of objects reusing the same lease. Notice that a too small value may cause performance problems at storage layer. (default 60)
+ --log_backtrace_at traceLocation when logging hits line file:N, emit a stack trace (default :0)
+ --log_dir string If non-empty, write log files in this directory (no effect when -logtostderr=true)
+ --log_file string If non-empty, use this log file (no effect when -logtostderr=true)
+ --log_file_max_size uint Defines the maximum size a log file can grow to (no effect when -logtostderr=true). Unit is megabytes. If the value is 0, the maximum file size is unlimited. (default 1800)
+ --logtostderr log to standard error instead of files (default true)
+ --one_output If true, only write logs to their native severity level (vs also writing to each lower severity level; no effect when -logtostderr=true)
+ --permit-address-sharing If true, SO_REUSEADDR will be used when binding the port. This allows binding to wildcard IPs like 0.0.0.0 and specific IPs in parallel, and it avoids waiting for the kernel to release sockets in TIME_WAIT state. [default=false]
+ --permit-port-sharing If true, SO_REUSEPORT will be used when binding the port, which allows more than one instance to bind on the same address and port. [default=false]
+ --profiling Enable profiling via web interface host:port/debug/pprof/ (default true)
+ --profiling-bind-address string The TCP address for serving profiling(e.g. 127.0.0.1:6060, :6060). This is only applicable if profiling is enabled. (default ":6060")
+ --requestheader-allowed-names strings List of client certificate common names to allow to provide usernames in headers specified by --requestheader-username-headers. If empty, any client certificate validated by the authorities in --requestheader-client-ca-file is allowed.
+ --requestheader-client-ca-file string Root certificate bundle to use to verify client certificates on incoming requests before trusting usernames in headers specified by --requestheader-username-headers. WARNING: generally do not depend on authorization being already done for incoming requests.
+ --requestheader-extra-headers-prefix strings List of request header prefixes to inspect. X-Remote-Extra- is suggested. (default [x-remote-extra-])
+ --requestheader-group-headers strings List of request headers to inspect for groups. X-Remote-Group is suggested. (default [x-remote-group])
+ --requestheader-username-headers strings List of request headers to inspect for usernames. X-Remote-User is common. (default [x-remote-user])
+ --secure-port int The port on which to serve HTTPS with authentication and authorization. If 0, don't serve HTTPS at all. (default 443)
+ --skip_headers If true, avoid header prefixes in the log messages
+ --skip_log_headers If true, avoid headers when opening log files (no effect when -logtostderr=true)
+ --stderrthreshold severity logs at or above this threshold go to stderr when writing to files and stderr (no effect when -logtostderr=true or -alsologtostderr=false) (default 2)
+ --storage-backend string The storage backend for persistence. Options: 'etcd3' (default).
+ --storage-media-type string The media type to use to store objects in storage. Some resources or storage backends may only support a specific media type and will ignore this setting. Supported media types: [application/json, application/yaml, application/vnd.kubernetes.protobuf] (default "application/json")
+ --tls-cert-file string File containing the default x509 Certificate for HTTPS. (CA cert, if any, concatenated after server cert). If HTTPS serving is enabled, and --tls-cert-file and --tls-private-key-file are not provided, a self-signed certificate and key are generated for the public address and saved to the directory specified by --cert-dir.
+ --tls-cipher-suites strings Comma-separated list of cipher suites for the server. If omitted, the default Go cipher suites will be used.
+ Preferred values: TLS_AES_128_GCM_SHA256, TLS_AES_256_GCM_SHA384, TLS_CHACHA20_POLY1305_SHA256, TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA, TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA, TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384, TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305, TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256, TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384, TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305, TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256, TLS_RSA_WITH_AES_128_CBC_SHA, TLS_RSA_WITH_AES_128_GCM_SHA256, TLS_RSA_WITH_AES_256_CBC_SHA, TLS_RSA_WITH_AES_256_GCM_SHA384.
+ Insecure values: TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256, TLS_ECDHE_ECDSA_WITH_RC4_128_SHA, TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256, TLS_ECDHE_RSA_WITH_RC4_128_SHA, TLS_RSA_WITH_3DES_EDE_CBC_SHA, TLS_RSA_WITH_AES_128_CBC_SHA256, TLS_RSA_WITH_RC4_128_SHA.
+ --tls-min-version string Minimum TLS version supported. Possible values: VersionTLS10, VersionTLS11, VersionTLS12, VersionTLS13
+ --tls-private-key-file string File containing the default x509 private key matching --tls-cert-file.
+ --tls-sni-cert-key namedCertKey A pair of x509 certificate and private key file paths, optionally suffixed with a list of domain patterns which are fully qualified domain names, possibly with prefixed wildcard segments. The domain patterns also allow IP addresses, but IPs should only be used if the apiserver has visibility to the IP address requested by a client. If no domain patterns are provided, the names of the certificate are extracted. Non-wildcard matches trump over wildcard matches, explicit domain patterns trump over extracted names. For multiple key/certificate pairs, use the --tls-sni-cert-key multiple times. Examples: "example.crt,example.key" or "foo.crt,foo.key:*.foo.com,foo.com". (default [])
+ --tracing-config-file string File with apiserver tracing configuration.
+ -v, --v Level number for the log level verbosity
+ --vmodule moduleSpec comma-separated list of pattern=N settings for file-filtered logging
+ --watch-cache Enable watch caching in the apiserver (default true)
+ --watch-cache-sizes strings Watch cache size settings for some resources (pods, nodes, etc.), comma separated. The individual setting format: resource[.group]#size, where resource is lowercase plural (no version), group is omitted for resources of apiVersion v1 (the legacy core API) and included for others, and size is a number. This option is only meaningful for resources built into the apiserver, not ones defined by CRDs or aggregated from external servers, and is only consulted if the watch-cache is enabled. The only meaningful size setting to supply here is zero, which means to disable watch caching for the associated resource; all non-zero values are equivalent and mean to not disable watch caching for that resource
+```
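+
+For illustration only, the following sketch shows how a few of the options above might be combined when starting the aggregated API server; the kubeconfig path, certificate paths, and the etcd endpoint are placeholders for your own environment:
+
+```
+karmada-aggregated-apiserver \
+  --kubeconfig=/etc/karmada/karmada.config \
+  --etcd-servers=https://etcd-client.karmada-system.svc:2379 \
+  --etcd-cafile=/etc/karmada/pki/etcd-ca.crt \
+  --etcd-certfile=/etc/karmada/pki/etcd-client.crt \
+  --etcd-keyfile=/etc/karmada/pki/etcd-client.key \
+  --tls-cert-file=/etc/karmada/pki/karmada.crt \
+  --tls-private-key-file=/etc/karmada/pki/karmada.key \
+  --secure-port=443
+```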
+
+###### Auto generated by [spf13/cobra script in Karmada](https://github.com/karmada-io/karmada/tree/master/hack/tools/gencomponentdocs)
\ No newline at end of file
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/components/karmada-controller-manager.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/components/karmada-controller-manager.md
new file mode 100644
index 000000000..441db73ec
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/components/karmada-controller-manager.md
@@ -0,0 +1,107 @@
+---
+title: karmada-controller-manager
+---
+
+
+
+### Synopsis
+
+The karmada-controller-manager runs various controllers.
+The controllers watch Karmada objects and then talk to the underlying clusters' API servers
+to create regular Kubernetes resources.
+
+```
+karmada-controller-manager [flags]
+```
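+
+For illustration only, a minimal invocation is sketched below using a few of the flags documented under Options; the kubeconfig path is a placeholder:
+
+```
+karmada-controller-manager \
+  --kubeconfig=/etc/karmada/karmada.config \
+  --bind-address=0.0.0.0 \
+  --cluster-status-update-frequency=10s \
+  --leader-elect=true \
+  --v=4
+```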
+
+### Options
+
+```
+ --add_dir_header If true, adds the file directory to the header of the log messages
+ --alsologtostderr log to standard error as well as files (no effect when -logtostderr=true)
+ --bind-address string The IP address on which to listen for the --secure-port port. (default "0.0.0.0")
+ --cluster-api-burst int Burst to use while talking with cluster kube-apiserver. Doesn't cover events and node heartbeat apis which rate limiting is controlled by a different set of flags. (default 60)
+ --cluster-api-context string Name of the cluster context in cluster-api management cluster kubeconfig file.
+ --cluster-api-kubeconfig string Path to the cluster-api management cluster kubeconfig file.
+ --cluster-api-qps float32 QPS to use while talking with cluster kube-apiserver. Doesn't cover events and node heartbeat apis which rate limiting is controlled by a different set of flags. (default 40)
+ --cluster-cache-sync-timeout duration Timeout period waiting for cluster cache to sync. (default 2m0s)
+ --cluster-failure-threshold duration The duration of failure for the cluster to be considered unhealthy. (default 30s)
+ --cluster-lease-duration duration Specifies the expiration period of a cluster lease. (default 40s)
+ --cluster-lease-renew-interval-fraction float Specifies the cluster lease renew interval fraction. (default 0.25)
+ --cluster-monitor-grace-period duration Specifies the grace period of allowing a running cluster to be unresponsive before marking it unhealthy. (default 40s)
+ --cluster-monitor-period duration Specifies how often karmada-controller-manager monitors cluster health status. (default 5s)
+ --cluster-startup-grace-period duration Specifies the grace period of allowing a cluster to be unresponsive during startup before marking it unhealthy. (default 1m0s)
+ --cluster-status-update-frequency duration Specifies how often karmada-controller-manager posts cluster status to karmada-apiserver. (default 10s)
+ --cluster-success-threshold duration The duration of successes for the cluster to be considered healthy after recovery. (default 30s)
+ --concurrent-cluster-propagation-policy-syncs int The number of ClusterPropagationPolicy that are allowed to sync concurrently. (default 1)
+ --concurrent-cluster-syncs int The number of Clusters that are allowed to sync concurrently. (default 5)
+ --concurrent-clusterresourcebinding-syncs int The number of ClusterResourceBindings that are allowed to sync concurrently. (default 5)
+ --concurrent-namespace-syncs int The number of Namespaces that are allowed to sync concurrently. (default 1)
+ --concurrent-propagation-policy-syncs int The number of PropagationPolicy that are allowed to sync concurrently. (default 1)
+ --concurrent-resource-template-syncs int The number of resource templates that are allowed to sync concurrently. (default 5)
+ --concurrent-resourcebinding-syncs int The number of ResourceBindings that are allowed to sync concurrently. (default 5)
+ --concurrent-work-syncs int The number of Works that are allowed to sync concurrently. (default 5)
+ --controllers strings A list of controllers to enable. '*' enables all on-by-default controllers, 'foo' enables the controller named 'foo', '-foo' disables the controller named 'foo'.
+ All controllers: applicationFailover, binding, bindingStatus, cluster, clusterStatus, cronFederatedHorizontalPodAutoscaler, endpointSlice, endpointsliceCollect, endpointsliceDispatch, execution, federatedHorizontalPodAutoscaler, federatedResourceQuotaStatus, federatedResourceQuotaSync, gracefulEviction, hpaReplicasSyncer, multiclusterservice, namespace, remedy, serviceExport, serviceImport, unifiedAuth, workStatus.
+ Disabled-by-default controllers: hpaReplicasSyncer (default [*])
+ --enable-cluster-resource-modeling Enable means controller would build resource modeling for each cluster by syncing Nodes and Pods resources.
+ The resource modeling might be used by the scheduler to make scheduling decisions in scenario of dynamic replica assignment based on cluster free resources.
+ Disable if it does not fit your cases for better performance. (default true)
+ --enable-pprof Enable profiling via web interface host:port/debug/pprof/.
+ --enable-taint-manager If set to true enables NoExecute Taints and will evict all not-tolerating objects propagating on Clusters tainted with this kind of Taints. (default true)
+ --failover-eviction-timeout duration Specifies the grace period for deleting scheduling result on failed clusters. (default 5m0s)
+ --feature-gates mapStringBool A set of key=value pairs that describe feature gates for alpha/experimental features. Options are:
+ AllAlpha=true|false (ALPHA - default=false)
+ AllBeta=true|false (BETA - default=false)
+ CustomizedClusterResourceModeling=true|false (BETA - default=true)
+ Failover=true|false (BETA - default=true)
+ GracefulEviction=true|false (BETA - default=true)
+ MultiClusterService=true|false (ALPHA - default=false)
+ PropagateDeps=true|false (BETA - default=true)
+ PropagationPolicyPreemption=true|false (ALPHA - default=false)
+ ResourceQuotaEstimate=true|false (ALPHA - default=false)
+ --graceful-eviction-timeout duration Specifies the timeout period waiting for the graceful-eviction-controller performs the final removal since the workload(resource) has been moved to the graceful eviction tasks. (default 10m0s)
+ -h, --help help for karmada-controller-manager
+ --horizontal-pod-autoscaler-cpu-initialization-period duration The period after pod start when CPU samples might be skipped. (default 5m0s)
+ --horizontal-pod-autoscaler-downscale-delay duration The period since last downscale, before another downscale can be performed in horizontal pod autoscaler. (default 5m0s)
+ --horizontal-pod-autoscaler-downscale-stabilization duration The period for which autoscaler will look backwards and not scale down below any recommendation it made during that period. (default 5m0s)
+ --horizontal-pod-autoscaler-initial-readiness-delay duration The period after pod start during which readiness changes will be treated as initial readiness. (default 30s)
+ --horizontal-pod-autoscaler-sync-period duration The period for syncing the number of pods in horizontal pod autoscaler. (default 15s)
+ --horizontal-pod-autoscaler-tolerance float The minimum change (from 1.0) in the desired-to-actual metrics ratio for the horizontal pod autoscaler to consider scaling. (default 0.1)
+ --horizontal-pod-autoscaler-upscale-delay duration The period since last upscale, before another upscale can be performed in horizontal pod autoscaler. (default 3m0s)
+ --kube-api-burst int Burst to use while talking with karmada-apiserver. Doesn't cover events and node heartbeat apis which rate limiting is controlled by a different set of flags. (default 60)
+ --kube-api-qps float32 QPS to use while talking with karmada-apiserver. Doesn't cover events and node heartbeat apis which rate limiting is controlled by a different set of flags. (default 40)
+ --kubeconfig string Path to karmada control plane kubeconfig file.
+ --leader-elect Start a leader election client and gain leadership before executing the main loop. Enable this when running replicated components for high availability. (default true)
+ --leader-elect-lease-duration duration The duration that non-leader candidates will wait after observing a leadership renewal until attempting to acquire leadership of a led but unrenewed leader slot. This is effectively the maximum duration that a leader can be stopped before it is replaced by another candidate. This is only applicable if leader election is enabled. (default 15s)
+ --leader-elect-renew-deadline duration The interval between attempts by the acting master to renew a leadership slot before it stops leading. This must be less than or equal to the lease duration. This is only applicable if leader election is enabled. (default 10s)
+ --leader-elect-resource-namespace string The namespace of resource object that is used for locking during leader election. (default "karmada-system")
+ --leader-elect-retry-period duration The duration the clients should wait between attempting acquisition and renewal of a leadership. This is only applicable if leader election is enabled. (default 2s)
+ --log_backtrace_at traceLocation when logging hits line file:N, emit a stack trace (default :0)
+ --log_dir string If non-empty, write log files in this directory (no effect when -logtostderr=true)
+ --log_file string If non-empty, use this log file (no effect when -logtostderr=true)
+ --log_file_max_size uint Defines the maximum size a log file can grow to (no effect when -logtostderr=true). Unit is megabytes. If the value is 0, the maximum file size is unlimited. (default 1800)
+ --logtostderr log to standard error instead of files (default true)
+ --metrics-bind-address string The TCP address that the controller should bind to for serving prometheus metrics(e.g. 127.0.0.1:8080, :8080). It can be set to "0" to disable the metrics serving. (default ":8080")
+ --one_output If true, only write logs to their native severity level (vs also writing to each lower severity level; no effect when -logtostderr=true)
+ --profiling-bind-address string The TCP address for serving profiling(e.g. 127.0.0.1:6060, :6060). This is only applicable if profiling is enabled. (default ":6060")
+ --rate-limiter-base-delay duration The base delay for rate limiter. (default 5ms)
+ --rate-limiter-bucket-size int The bucket size for rate limier. (default 100)
+ --rate-limiter-max-delay duration The max delay for rate limiter. (default 16m40s)
+ --rate-limiter-qps int The QPS for rate limier. (default 10)
+ --resync-period duration Base frequency the informers are resynced.
+ --secure-port int The secure port on which to serve HTTPS. (default 10357)
+ --skip_headers If true, avoid header prefixes in the log messages
+ --skip_log_headers If true, avoid headers when opening log files (no effect when -logtostderr=true)
+ --skipped-propagating-apis string Semicolon separated resources that should be skipped from propagating in addition to the default skip list(cluster.karmada.io;policy.karmada.io;work.karmada.io). Supported formats are:
+ for skip resources with a specific API group(e.g. networking.k8s.io),
+ / for skip resources with a specific API version(e.g. networking.k8s.io/v1beta1),
+ //, for skip one or more specific resource(e.g. networking.k8s.io/v1beta1/Ingress,IngressClass) where the kinds are case-insensitive.
+ --skipped-propagating-namespaces strings Comma-separated namespaces that should be skipped from propagating.
+ Note: 'karmada-system', 'karmada-cluster' and 'karmada-es-.*' are Karmada reserved namespaces that will always be skipped. (default [kube-.*])
+ --stderrthreshold severity logs at or above this threshold go to stderr when writing to files and stderr (no effect when -logtostderr=true or -alsologtostderr=false) (default 2)
+ -v, --v Level number for the log level verbosity
+ --vmodule moduleSpec comma-separated list of pattern=N settings for file-filtered logging
+```
+
+###### Auto generated by [spf13/cobra script in Karmada](https://github.com/karmada-io/karmada/tree/master/hack/tools/gencomponentdocs)
\ No newline at end of file
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/components/karmada-descheduler.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/components/karmada-descheduler.md
new file mode 100644
index 000000000..4d7687e6f
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/components/karmada-descheduler.md
@@ -0,0 +1,51 @@
+---
+title: karmada-descheduler
+---
+
+
+
+### Synopsis
+
+The karmada-descheduler evicts replicas from member clusters
+if they have failed to be scheduled for a period of time. It relies on
+the karmada-scheduler-estimator to get replica status.
+
+```
+karmada-descheduler [flags]
+```
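+
+For illustration only, a minimal invocation might look like the sketch below; the kubeconfig path is a placeholder, and the two intervals simply restate the defaults listed under Options:
+
+```
+karmada-descheduler \
+  --kubeconfig=/etc/karmada/karmada.config \
+  --descheduling-interval=2m \
+  --unschedulable-threshold=5m \
+  --leader-elect=true
+```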
+
+### Options
+
+```
+ --add_dir_header If true, adds the file directory to the header of the log messages
+ --alsologtostderr log to standard error as well as files (no effect when -logtostderr=true)
+ --bind-address string The IP address on which to listen for the --secure-port port. (default "0.0.0.0")
+ --descheduling-interval duration Time interval between two consecutive descheduler executions. Setting this value instructs the descheduler to run in a continuous loop at the interval specified. (default 2m0s)
+ --enable-pprof Enable profiling via web interface host:port/debug/pprof/.
+ -h, --help help for karmada-descheduler
+ --kube-api-burst int Burst to use while talking with karmada-apiserver. Doesn't cover events and node heartbeat apis which rate limiting is controlled by a different set of flags. (default 60)
+ --kube-api-qps float32 QPS to use while talking with karmada-apiserver. Doesn't cover events and node heartbeat apis which rate limiting is controlled by a different set of flags. (default 40)
+ --kubeconfig string Path to karmada control plane kubeconfig file.
+ --leader-elect Enable leader election, which must be true when running multi instances. (default true)
+ --leader-elect-resource-namespace string The namespace of resource object that is used for locking during leader election. (default "karmada-system")
+ --log_backtrace_at traceLocation when logging hits line file:N, emit a stack trace (default :0)
+ --log_dir string If non-empty, write log files in this directory (no effect when -logtostderr=true)
+ --log_file string If non-empty, use this log file (no effect when -logtostderr=true)
+ --log_file_max_size uint Defines the maximum size a log file can grow to (no effect when -logtostderr=true). Unit is megabytes. If the value is 0, the maximum file size is unlimited. (default 1800)
+ --logtostderr log to standard error instead of files (default true)
+ --master string The address of the Kubernetes API server. Overrides any value in KubeConfig. Only required if out-of-cluster.
+ --one_output If true, only write logs to their native severity level (vs also writing to each lower severity level; no effect when -logtostderr=true)
+ --profiling-bind-address string The TCP address for serving profiling(e.g. 127.0.0.1:6060, :6060). This is only applicable if profiling is enabled. (default ":6060")
+ --scheduler-estimator-port int The secure port on which to connect the accurate scheduler estimator. (default 10352)
+ --scheduler-estimator-service-prefix string The prefix of scheduler estimator service name (default "karmada-scheduler-estimator")
+ --scheduler-estimator-timeout duration Specifies the timeout period of calling the scheduler estimator service. (default 3s)
+ --secure-port int The secure port on which to serve HTTPS. (default 10358)
+ --skip_headers If true, avoid header prefixes in the log messages
+ --skip_log_headers If true, avoid headers when opening log files (no effect when -logtostderr=true)
+ --stderrthreshold severity logs at or above this threshold go to stderr when writing to files and stderr (no effect when -logtostderr=true or -alsologtostderr=false) (default 2)
+ --unschedulable-threshold duration The period of pod unschedulable condition. This value is considered as a classification standard of unschedulable replicas. (default 5m0s)
+ -v, --v Level number for the log level verbosity
+ --vmodule moduleSpec comma-separated list of pattern=N settings for file-filtered logging
+```
+
+###### Auto generated by [spf13/cobra script in Karmada](https://github.com/karmada-io/karmada/tree/master/hack/tools/gencomponentdocs)
\ No newline at end of file
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/components/karmada-metrics-adapter.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/components/karmada-metrics-adapter.md
new file mode 100644
index 000000000..c332169db
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/components/karmada-metrics-adapter.md
@@ -0,0 +1,96 @@
+---
+title: karmada-metrics-adapter
+---
+
+
+
+### Synopsis
+
+The karmada-metrics-adapter is an adapter that aggregates metrics from member clusters.
+
+```
+karmada-metrics-adapter [flags]
+```
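+
+For illustration only, the sketch below combines a few of the flags documented under Options; all file paths are placeholders for your own environment:
+
+```
+karmada-metrics-adapter \
+  --kubeconfig=/etc/karmada/karmada.config \
+  --authentication-kubeconfig=/etc/karmada/karmada.config \
+  --authorization-kubeconfig=/etc/karmada/karmada.config \
+  --client-ca-file=/etc/karmada/pki/ca.crt \
+  --secure-port=443
+```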
+
+### Options
+
+```
+ --add_dir_header If true, adds the file directory to the header of the log messages
+ --alsologtostderr log to standard error as well as files (no effect when -logtostderr=true)
+ --audit-log-batch-buffer-size int The size of the buffer to store events before batching and writing. Only used in batch mode. (default 10000)
+ --audit-log-batch-max-size int The maximum size of a batch. Only used in batch mode. (default 1)
+ --audit-log-batch-max-wait duration The amount of time to wait before force writing the batch that hadn't reached the max size. Only used in batch mode.
+ --audit-log-batch-throttle-burst int Maximum number of requests sent at the same moment if ThrottleQPS was not utilized before. Only used in batch mode.
+ --audit-log-batch-throttle-enable Whether batching throttling is enabled. Only used in batch mode.
+ --audit-log-batch-throttle-qps float32 Maximum average number of batches per second. Only used in batch mode.
+ --audit-log-compress If set, the rotated log files will be compressed using gzip.
+ --audit-log-format string Format of saved audits. "legacy" indicates 1-line text format for each event. "json" indicates structured json format. Known formats are legacy,json. (default "json")
+ --audit-log-maxage int The maximum number of days to retain old audit log files based on the timestamp encoded in their filename.
+ --audit-log-maxbackup int The maximum number of old audit log files to retain. Setting a value of 0 will mean there's no restriction on the number of files.
+ --audit-log-maxsize int The maximum size in megabytes of the audit log file before it gets rotated.
+ --audit-log-mode string Strategy for sending audit events. Blocking indicates sending events should block server responses. Batch causes the backend to buffer and write events asynchronously. Known modes are batch,blocking,blocking-strict. (default "blocking")
+ --audit-log-path string If set, all requests coming to the apiserver will be logged to this file. '-' means standard out.
+ --audit-log-truncate-enabled Whether event and batch truncating is enabled.
+ --audit-log-truncate-max-batch-size int Maximum size of the batch sent to the underlying backend. Actual serialized size can be several hundreds of bytes greater. If a batch exceeds this limit, it is split into several batches of smaller size. (default 10485760)
+ --audit-log-truncate-max-event-size int Maximum size of the audit event sent to the underlying backend. If the size of an event is greater than this number, first request and response are removed, and if this doesn't reduce the size enough, event is discarded. (default 102400)
+ --audit-log-version string API group and version used for serializing audit events written to log. (default "audit.k8s.io/v1")
+ --audit-policy-file string Path to the file that defines the audit policy configuration.
+ --audit-webhook-batch-buffer-size int The size of the buffer to store events before batching and writing. Only used in batch mode. (default 10000)
+ --audit-webhook-batch-max-size int The maximum size of a batch. Only used in batch mode. (default 400)
+ --audit-webhook-batch-max-wait duration The amount of time to wait before force writing the batch that hadn't reached the max size. Only used in batch mode. (default 30s)
+ --audit-webhook-batch-throttle-burst int Maximum number of requests sent at the same moment if ThrottleQPS was not utilized before. Only used in batch mode. (default 15)
+ --audit-webhook-batch-throttle-enable Whether batching throttling is enabled. Only used in batch mode. (default true)
+ --audit-webhook-batch-throttle-qps float32 Maximum average number of batches per second. Only used in batch mode. (default 10)
+ --audit-webhook-config-file string Path to a kubeconfig formatted file that defines the audit webhook configuration.
+ --audit-webhook-initial-backoff duration The amount of time to wait before retrying the first failed request. (default 10s)
+ --audit-webhook-mode string Strategy for sending audit events. Blocking indicates sending events should block server responses. Batch causes the backend to buffer and write events asynchronously. Known modes are batch,blocking,blocking-strict. (default "batch")
+ --audit-webhook-truncate-enabled Whether event and batch truncating is enabled.
+ --audit-webhook-truncate-max-batch-size int Maximum size of the batch sent to the underlying backend. Actual serialized size can be several hundreds of bytes greater. If a batch exceeds this limit, it is split into several batches of smaller size. (default 10485760)
+ --audit-webhook-truncate-max-event-size int Maximum size of the audit event sent to the underlying backend. If the size of an event is greater than this number, first request and response are removed, and if this doesn't reduce the size enough, event is discarded. (default 102400)
+ --audit-webhook-version string API group and version used for serializing audit events written to webhook. (default "audit.k8s.io/v1")
+ --authentication-kubeconfig string kubeconfig file pointing at the 'core' kubernetes server with enough rights to create tokenreviews.authentication.k8s.io.
+ --authentication-skip-lookup If false, the authentication-kubeconfig will be used to lookup missing authentication configuration from the cluster.
+ --authentication-token-webhook-cache-ttl duration The duration to cache responses from the webhook token authenticator. (default 10s)
+ --authentication-tolerate-lookup-failure If true, failures to look up missing authentication configuration from the cluster are not considered fatal. Note that this can result in authentication that treats all requests as anonymous.
+ --authorization-always-allow-paths strings A list of HTTP paths to skip during authorization, i.e. these are authorized without contacting the 'core' kubernetes server. (default [/healthz,/readyz,/livez])
+ --authorization-kubeconfig string kubeconfig file pointing at the 'core' kubernetes server with enough rights to create subjectaccessreviews.authorization.k8s.io.
+ --authorization-webhook-cache-authorized-ttl duration The duration to cache 'authorized' responses from the webhook authorizer. (default 10s)
+ --authorization-webhook-cache-unauthorized-ttl duration The duration to cache 'unauthorized' responses from the webhook authorizer. (default 10s)
+ --bind-address ip The IP address on which to listen for the --secure-port port. The associated interface(s) must be reachable by the rest of the cluster, and by CLI/web clients. If blank or an unspecified address (0.0.0.0 or ::), all interfaces and IP address families will be used. (default 0.0.0.0)
+ --cert-dir string The directory where the TLS certs are located. If --tls-cert-file and --tls-private-key-file are provided, this flag will be ignored. (default "apiserver.local.config/certificates")
+ --client-ca-file string If set, any request presenting a client certificate signed by one of the authorities in the client-ca-file is authenticated with an identity corresponding to the CommonName of the client certificate.
+ --contention-profiling Enable block profiling, if profiling is enabled
+ --debug-socket-path string Use an unprotected (no authn/authz) unix-domain socket for profiling with the given path
+ -h, --help help for karmada-metrics-adapter
+ --http2-max-streams-per-connection int The limit that the server gives to clients for the maximum number of streams in an HTTP/2 connection. Zero means to use golang's default.
+ --kubeconfig string Path to karmada control plane kubeconfig file.
+ --log_backtrace_at traceLocation when logging hits line file:N, emit a stack trace (default :0)
+ --log_dir string If non-empty, write log files in this directory (no effect when -logtostderr=true)
+ --log_file string If non-empty, use this log file (no effect when -logtostderr=true)
+ --log_file_max_size uint Defines the maximum size a log file can grow to (no effect when -logtostderr=true). Unit is megabytes. If the value is 0, the maximum file size is unlimited. (default 1800)
+ --logtostderr log to standard error instead of files (default true)
+ --one_output If true, only write logs to their native severity level (vs also writing to each lower severity level; no effect when -logtostderr=true)
+ --permit-address-sharing If true, SO_REUSEADDR will be used when binding the port. This allows binding to wildcard IPs like 0.0.0.0 and specific IPs in parallel, and it avoids waiting for the kernel to release sockets in TIME_WAIT state. [default=false]
+ --permit-port-sharing If true, SO_REUSEPORT will be used when binding the port, which allows more than one instance to bind on the same address and port. [default=false]
+ --profiling Enable profiling via web interface host:port/debug/pprof/ (default true)
+ --requestheader-allowed-names strings List of client certificate common names to allow to provide usernames in headers specified by --requestheader-username-headers. If empty, any client certificate validated by the authorities in --requestheader-client-ca-file is allowed.
+ --requestheader-client-ca-file string Root certificate bundle to use to verify client certificates on incoming requests before trusting usernames in headers specified by --requestheader-username-headers. WARNING: generally do not depend on authorization being already done for incoming requests.
+ --requestheader-extra-headers-prefix strings List of request header prefixes to inspect. X-Remote-Extra- is suggested. (default [x-remote-extra-])
+ --requestheader-group-headers strings List of request headers to inspect for groups. X-Remote-Group is suggested. (default [x-remote-group])
+ --requestheader-username-headers strings List of request headers to inspect for usernames. X-Remote-User is common. (default [x-remote-user])
+ --secure-port int The port on which to serve HTTPS with authentication and authorization. If 0, don't serve HTTPS at all. (default 443)
+ --skip_headers If true, avoid header prefixes in the log messages
+ --skip_log_headers If true, avoid headers when opening log files (no effect when -logtostderr=true)
+ --stderrthreshold severity logs at or above this threshold go to stderr when writing to files and stderr (no effect when -logtostderr=true or -alsologtostderr=false) (default 2)
+ --tls-cert-file string File containing the default x509 Certificate for HTTPS. (CA cert, if any, concatenated after server cert). If HTTPS serving is enabled, and --tls-cert-file and --tls-private-key-file are not provided, a self-signed certificate and key are generated for the public address and saved to the directory specified by --cert-dir.
+ --tls-cipher-suites strings Comma-separated list of cipher suites for the server. If omitted, the default Go cipher suites will be used.
+ Preferred values: TLS_AES_128_GCM_SHA256, TLS_AES_256_GCM_SHA384, TLS_CHACHA20_POLY1305_SHA256, TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA, TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA, TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384, TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305, TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256, TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384, TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305, TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256, TLS_RSA_WITH_AES_128_CBC_SHA, TLS_RSA_WITH_AES_128_GCM_SHA256, TLS_RSA_WITH_AES_256_CBC_SHA, TLS_RSA_WITH_AES_256_GCM_SHA384.
+ Insecure values: TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256, TLS_ECDHE_ECDSA_WITH_RC4_128_SHA, TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256, TLS_ECDHE_RSA_WITH_RC4_128_SHA, TLS_RSA_WITH_3DES_EDE_CBC_SHA, TLS_RSA_WITH_AES_128_CBC_SHA256, TLS_RSA_WITH_RC4_128_SHA.
+ --tls-min-version string Minimum TLS version supported. Possible values: VersionTLS10, VersionTLS11, VersionTLS12, VersionTLS13
+ --tls-private-key-file string File containing the default x509 private key matching --tls-cert-file.
+ --tls-sni-cert-key namedCertKey A pair of x509 certificate and private key file paths, optionally suffixed with a list of domain patterns which are fully qualified domain names, possibly with prefixed wildcard segments. The domain patterns also allow IP addresses, but IPs should only be used if the apiserver has visibility to the IP address requested by a client. If no domain patterns are provided, the names of the certificate are extracted. Non-wildcard matches trump over wildcard matches, explicit domain patterns trump over extracted names. For multiple key/certificate pairs, use the --tls-sni-cert-key multiple times. Examples: "example.crt,example.key" or "foo.crt,foo.key:*.foo.com,foo.com". (default [])
+ -v, --v Level number for the log level verbosity
+ --vmodule moduleSpec comma-separated list of pattern=N settings for file-filtered logging
+```
+
+###### Auto generated by [spf13/cobra script in Karmada](https://github.com/karmada-io/karmada/tree/master/hack/tools/gencomponentdocs)
\ No newline at end of file
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/components/karmada-scheduler-estimator.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/components/karmada-scheduler-estimator.md
new file mode 100644
index 000000000..659899ac9
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/components/karmada-scheduler-estimator.md
@@ -0,0 +1,56 @@
+---
+title: karmada-scheduler-estimator
+---
+
+
+
+### Synopsis
+
+The karmada-scheduler-estimator runs an accurate scheduler estimator for a member cluster. It
+provides the scheduler with more accurate cluster resource information.
+
+```
+karmada-scheduler-estimator [flags]
+```
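+
+For illustration only, a minimal invocation for a member cluster named member1 might look like the sketch below; the kubeconfig path and cluster name are placeholders:
+
+```
+karmada-scheduler-estimator \
+  --kubeconfig=/etc/karmada/member1.kubeconfig \
+  --cluster-name=member1 \
+  --server-port=10352 \
+  --parallelism=16
+```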
+
+### Options
+
+```
+ --add_dir_header If true, adds the file directory to the header of the log messages
+ --alsologtostderr log to standard error as well as files (no effect when -logtostderr=true)
+ --bind-address string The IP address on which to listen for the --secure-port port. (default "0.0.0.0")
+ --cluster-name string Name of member cluster that the estimator serves for.
+ --enable-pprof Enable profiling via web interface host:port/debug/pprof/.
+ --feature-gates mapStringBool A set of key=value pairs that describe feature gates for alpha/experimental features. Options are:
+ AllAlpha=true|false (ALPHA - default=false)
+ AllBeta=true|false (BETA - default=false)
+ CustomizedClusterResourceModeling=true|false (BETA - default=true)
+ Failover=true|false (BETA - default=true)
+ GracefulEviction=true|false (BETA - default=true)
+ MultiClusterService=true|false (ALPHA - default=false)
+ PropagateDeps=true|false (BETA - default=true)
+ PropagationPolicyPreemption=true|false (ALPHA - default=false)
+ ResourceQuotaEstimate=true|false (ALPHA - default=false)
+ -h, --help help for karmada-scheduler-estimator
+ --kube-api-burst int Burst to use while talking with apiserver. Doesn't cover events and node heartbeat apis which rate limiting is controlled by a different set of flags. (default 30)
+ --kube-api-qps float32 QPS to use while talking with apiserver. Doesn't cover events and node heartbeat apis which rate limiting is controlled by a different set of flags. (default 20)
+ --kubeconfig string Path to member cluster's kubeconfig file.
+ --log_backtrace_at traceLocation when logging hits line file:N, emit a stack trace (default :0)
+ --log_dir string If non-empty, write log files in this directory (no effect when -logtostderr=true)
+ --log_file string If non-empty, use this log file (no effect when -logtostderr=true)
+ --log_file_max_size uint Defines the maximum size a log file can grow to (no effect when -logtostderr=true). Unit is megabytes. If the value is 0, the maximum file size is unlimited. (default 1800)
+ --logtostderr log to standard error instead of files (default true)
+ --master string The address of the member Kubernetes API server. Overrides any value in KubeConfig. Only required if out-of-cluster.
+ --one_output If true, only write logs to their native severity level (vs also writing to each lower severity level; no effect when -logtostderr=true)
+ --parallelism int Parallelism defines the amount of parallelism in algorithms for estimating. Must be greater than 0. Defaults to 16.
+ --profiling-bind-address string The TCP address for serving profiling(e.g. 127.0.0.1:6060, :6060). This is only applicable if profiling is enabled. (default ":6060")
+ --secure-port int The secure port on which to serve HTTPS. (default 10351)
+ --server-port int The secure port on which to serve gRPC. (default 10352)
+ --skip_headers If true, avoid header prefixes in the log messages
+ --skip_log_headers If true, avoid headers when opening log files (no effect when -logtostderr=true)
+ --stderrthreshold severity logs at or above this threshold go to stderr when writing to files and stderr (no effect when -logtostderr=true or -alsologtostderr=false) (default 2)
+ -v, --v Level number for the log level verbosity
+ --vmodule moduleSpec comma-separated list of pattern=N settings for file-filtered logging
+```
+
+###### Auto generated by [spf13/cobra script in Karmada](https://github.com/karmada-io/karmada/tree/master/hack/tools/gencomponentdocs)
\ No newline at end of file
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/components/karmada-scheduler.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/components/karmada-scheduler.md
new file mode 100644
index 000000000..49a4337b5
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/components/karmada-scheduler.md
@@ -0,0 +1,74 @@
+---
+title: karmada-scheduler
+---
+
+
+
+### Synopsis
+
+The karmada-scheduler is a control plane process which assigns resources to the clusters it manages.
+The scheduler determines which clusters are valid placements for each resource in the scheduling queue according to
+constraints and available resources. The scheduler then ranks each valid cluster and binds the resource to
+the most suitable cluster.
+
+```
+karmada-scheduler [flags]
+```
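+
+For illustration only, the sketch below enables the scheduler estimator and disables one built-in plugin, using flags documented under Options; the kubeconfig path is a placeholder:
+
+```
+karmada-scheduler \
+  --kubeconfig=/etc/karmada/karmada.config \
+  --enable-scheduler-estimator=true \
+  --scheduler-estimator-timeout=3s \
+  --plugins=*,-ClusterLocality \
+  --leader-elect=true
+```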
+
+### Options
+
+```
+ --add_dir_header If true, adds the file directory to the header of the log messages
+ --alsologtostderr log to standard error as well as files (no effect when -logtostderr=true)
+ --bind-address string The IP address on which to listen for the --secure-port port. (default "0.0.0.0")
+ --disable-scheduler-estimator-in-pull-mode Disable the scheduler estimator for clusters in pull mode, which takes effect only when enable-scheduler-estimator is true.
+ --enable-empty-workload-propagation Enable workload with replicas 0 to be propagated to member clusters.
+ --enable-pprof Enable profiling via web interface host:port/debug/pprof/.
+ --enable-scheduler-estimator Enable calling cluster scheduler estimator for adjusting replicas.
+ --feature-gates mapStringBool A set of key=value pairs that describe feature gates for alpha/experimental features. Options are:
+ AllAlpha=true|false (ALPHA - default=false)
+ AllBeta=true|false (BETA - default=false)
+ CustomizedClusterResourceModeling=true|false (BETA - default=true)
+ Failover=true|false (BETA - default=true)
+ GracefulEviction=true|false (BETA - default=true)
+ MultiClusterService=true|false (ALPHA - default=false)
+ PropagateDeps=true|false (BETA - default=true)
+ PropagationPolicyPreemption=true|false (ALPHA - default=false)
+ ResourceQuotaEstimate=true|false (ALPHA - default=false)
+ -h, --help help for karmada-scheduler
+ --kube-api-burst int Burst to use while talking with karmada-apiserver. Doesn't cover events and node heartbeat apis which rate limiting is controlled by a different set of flags. (default 60)
+ --kube-api-qps float32 QPS to use while talking with karmada-apiserver. Doesn't cover events and node heartbeat apis which rate limiting is controlled by a different set of flags. (default 40)
+ --kubeconfig string Path to karmada control plane kubeconfig file.
+ --leader-elect Enable leader election, which must be true when running multi instances. (default true)
+ --leader-elect-lease-duration duration The duration that non-leader candidates will wait after observing a leadership renewal until attempting to acquire leadership of a led but unrenewed leader slot. This is effectively the maximum duration that a leader can be stopped before it is replaced by another candidate. This is only applicable if leader election is enabled. (default 15s)
+ --leader-elect-renew-deadline duration The interval between attempts by the acting master to renew a leadership slot before it stops leading. This must be less than or equal to the lease duration. This is only applicable if leader election is enabled. (default 10s)
+ --leader-elect-resource-name string The name of resource object that is used for locking during leader election. (default "karmada-scheduler")
+ --leader-elect-resource-namespace string The namespace of resource object that is used for locking during leader election. (default "karmada-system")
+ --leader-elect-retry-period duration The duration the clients should wait between attempting acquisition and renewal of a leadership. This is only applicable if leader election is enabled. (default 2s)
+ --log_backtrace_at traceLocation when logging hits line file:N, emit a stack trace (default :0)
+ --log_dir string If non-empty, write log files in this directory (no effect when -logtostderr=true)
+ --log_file string If non-empty, use this log file (no effect when -logtostderr=true)
+ --log_file_max_size uint Defines the maximum size a log file can grow to (no effect when -logtostderr=true). Unit is megabytes. If the value is 0, the maximum file size is unlimited. (default 1800)
+ --logtostderr log to standard error instead of files (default true)
+ --master string The address of the Kubernetes API server. Overrides any value in KubeConfig. Only required if out-of-cluster.
+ --one_output If true, only write logs to their native severity level (vs also writing to each lower severity level; no effect when -logtostderr=true)
+ --plugins strings A list of plugins to enable. '*' enables all build-in and customized plugins, 'foo' enables the plugin named 'foo', '*,-foo' disables the plugin named 'foo'.
+ All build-in plugins: APIEnablement,ClusterAffinity,ClusterEviction,ClusterLocality,SpreadConstraint,TaintToleration. (default [*])
+ --profiling-bind-address string The TCP address for serving profiling(e.g. 127.0.0.1:6060, :6060). This is only applicable if profiling is enabled. (default ":6060")
+ --rate-limiter-base-delay duration The base delay for rate limiter. (default 5ms)
+ --rate-limiter-bucket-size int The bucket size for rate limier. (default 100)
+ --rate-limiter-max-delay duration The max delay for rate limiter. (default 16m40s)
+ --rate-limiter-qps int The QPS for rate limier. (default 10)
+ --scheduler-estimator-port int The secure port on which to connect the accurate scheduler estimator. (default 10352)
+ --scheduler-estimator-service-prefix string The prefix of scheduler estimator service name (default "karmada-scheduler-estimator")
+ --scheduler-estimator-timeout duration Specifies the timeout period of calling the scheduler estimator service. (default 3s)
+ --scheduler-name string SchedulerName represents the name of the scheduler. default is 'default-scheduler'. (default "default-scheduler")
+ --secure-port int The secure port on which to serve HTTPS. (default 10351)
+ --skip_headers If true, avoid header prefixes in the log messages
+ --skip_log_headers If true, avoid headers when opening log files (no effect when -logtostderr=true)
+ --stderrthreshold severity logs at or above this threshold go to stderr when writing to files and stderr (no effect when -logtostderr=true or -alsologtostderr=false) (default 2)
+ -v, --v Level number for the log level verbosity
+ --vmodule moduleSpec comma-separated list of pattern=N settings for file-filtered logging
+```
+
+###### Auto generated by [spf13/cobra script in Karmada](https://github.com/karmada-io/karmada/tree/master/hack/tools/gencomponentdocs)
\ No newline at end of file
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/components/karmada-search.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/components/karmada-search.md
new file mode 100644
index 000000000..7d735a600
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/components/karmada-search.md
@@ -0,0 +1,151 @@
+---
+title: karmada-search
+---
+
+
+
+### Synopsis
+
+The karmada-search starts an aggregated server. It provides
+capabilities such as global search and resource proxy in a multi-cloud environment.
+
+```
+karmada-search [flags]
+```
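+
+For illustration only, the sketch below shows a deployment that keeps the search feature but turns off the resource proxy, using flags documented under Options; the file paths are placeholders:
+
+```
+karmada-search \
+  --authentication-kubeconfig=/etc/karmada/karmada.config \
+  --authorization-kubeconfig=/etc/karmada/karmada.config \
+  --client-ca-file=/etc/karmada/pki/ca.crt \
+  --disable-proxy=true
+```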
+
+### Options
+
+```
+ --add_dir_header If true, adds the file directory to the header of the log messages
+ --admission-control-config-file string File with admission control configuration.
+ --alsologtostderr log to standard error as well as files (no effect when -logtostderr=true)
+ --audit-log-batch-buffer-size int The size of the buffer to store events before batching and writing. Only used in batch mode. (default 10000)
+ --audit-log-batch-max-size int The maximum size of a batch. Only used in batch mode. (default 1)
+ --audit-log-batch-max-wait duration The amount of time to wait before force writing the batch that hadn't reached the max size. Only used in batch mode.
+ --audit-log-batch-throttle-burst int Maximum number of requests sent at the same moment if ThrottleQPS was not utilized before. Only used in batch mode.
+ --audit-log-batch-throttle-enable Whether batching throttling is enabled. Only used in batch mode.
+ --audit-log-batch-throttle-qps float32 Maximum average number of batches per second. Only used in batch mode.
+ --audit-log-compress If set, the rotated log files will be compressed using gzip.
+ --audit-log-format string Format of saved audits. "legacy" indicates 1-line text format for each event. "json" indicates structured json format. Known formats are legacy,json. (default "json")
+ --audit-log-maxage int The maximum number of days to retain old audit log files based on the timestamp encoded in their filename.
+ --audit-log-maxbackup int The maximum number of old audit log files to retain. Setting a value of 0 will mean there's no restriction on the number of files.
+ --audit-log-maxsize int The maximum size in megabytes of the audit log file before it gets rotated.
+ --audit-log-mode string Strategy for sending audit events. Blocking indicates sending events should block server responses. Batch causes the backend to buffer and write events asynchronously. Known modes are batch,blocking,blocking-strict. (default "blocking")
+ --audit-log-path string If set, all requests coming to the apiserver will be logged to this file. '-' means standard out.
+ --audit-log-truncate-enabled Whether event and batch truncating is enabled.
+ --audit-log-truncate-max-batch-size int Maximum size of the batch sent to the underlying backend. Actual serialized size can be several hundreds of bytes greater. If a batch exceeds this limit, it is split into several batches of smaller size. (default 10485760)
+ --audit-log-truncate-max-event-size int Maximum size of the audit event sent to the underlying backend. If the size of an event is greater than this number, first request and response are removed, and if this doesn't reduce the size enough, event is discarded. (default 102400)
+ --audit-log-version string API group and version used for serializing audit events written to log. (default "audit.k8s.io/v1")
+ --audit-policy-file string Path to the file that defines the audit policy configuration.
+ --audit-webhook-batch-buffer-size int The size of the buffer to store events before batching and writing. Only used in batch mode. (default 10000)
+ --audit-webhook-batch-max-size int The maximum size of a batch. Only used in batch mode. (default 400)
+ --audit-webhook-batch-max-wait duration The amount of time to wait before force writing the batch that hadn't reached the max size. Only used in batch mode. (default 30s)
+ --audit-webhook-batch-throttle-burst int Maximum number of requests sent at the same moment if ThrottleQPS was not utilized before. Only used in batch mode. (default 15)
+ --audit-webhook-batch-throttle-enable Whether batching throttling is enabled. Only used in batch mode. (default true)
+ --audit-webhook-batch-throttle-qps float32 Maximum average number of batches per second. Only used in batch mode. (default 10)
+ --audit-webhook-config-file string Path to a kubeconfig formatted file that defines the audit webhook configuration.
+ --audit-webhook-initial-backoff duration The amount of time to wait before retrying the first failed request. (default 10s)
+ --audit-webhook-mode string Strategy for sending audit events. Blocking indicates sending events should block server responses. Batch causes the backend to buffer and write events asynchronously. Known modes are batch,blocking,blocking-strict. (default "batch")
+ --audit-webhook-truncate-enabled Whether event and batch truncating is enabled.
+ --audit-webhook-truncate-max-batch-size int Maximum size of the batch sent to the underlying backend. Actual serialized size can be several hundreds of bytes greater. If a batch exceeds this limit, it is split into several batches of smaller size. (default 10485760)
+ --audit-webhook-truncate-max-event-size int Maximum size of the audit event sent to the underlying backend. If the size of an event is greater than this number, first request and response are removed, and if this doesn't reduce the size enough, event is discarded. (default 102400)
+ --audit-webhook-version string API group and version used for serializing audit events written to webhook. (default "audit.k8s.io/v1")
+ --authentication-kubeconfig string kubeconfig file pointing at the 'core' kubernetes server with enough rights to create tokenreviews.authentication.k8s.io.
+ --authentication-skip-lookup If false, the authentication-kubeconfig will be used to lookup missing authentication configuration from the cluster.
+ --authentication-token-webhook-cache-ttl duration The duration to cache responses from the webhook token authenticator. (default 10s)
+ --authentication-tolerate-lookup-failure If true, failures to look up missing authentication configuration from the cluster are not considered fatal. Note that this can result in authentication that treats all requests as anonymous.
+ --authorization-always-allow-paths strings A list of HTTP paths to skip during authorization, i.e. these are authorized without contacting the 'core' kubernetes server. (default [/healthz,/readyz,/livez])
+ --authorization-kubeconfig string kubeconfig file pointing at the 'core' kubernetes server with enough rights to create subjectaccessreviews.authorization.k8s.io.
+ --authorization-webhook-cache-authorized-ttl duration The duration to cache 'authorized' responses from the webhook authorizer. (default 10s)
+ --authorization-webhook-cache-unauthorized-ttl duration The duration to cache 'unauthorized' responses from the webhook authorizer. (default 10s)
+ --bind-address ip The IP address on which to listen for the --secure-port port. The associated interface(s) must be reachable by the rest of the cluster, and by CLI/web clients. If blank or an unspecified address (0.0.0.0 or ::), all interfaces and IP address families will be used. (default 0.0.0.0)
+ --cert-dir string The directory where the TLS certs are located. If --tls-cert-file and --tls-private-key-file are provided, this flag will be ignored. (default "apiserver.local.config/certificates")
+ --client-ca-file string If set, any request presenting a client certificate signed by one of the authorities in the client-ca-file is authenticated with an identity corresponding to the CommonName of the client certificate.
+ --contention-profiling Enable block profiling, if profiling is enabled
+ --debug-socket-path string Use an unprotected (no authn/authz) unix-domain socket for profiling with the given path
+ --delete-collection-workers int Number of workers spawned for DeleteCollection call. These are used to speed up namespace cleanup. (default 1)
+ --disable-admission-plugins strings admission plugins that should be disabled although they are in the default enabled plugins list (NamespaceLifecycle, MutatingAdmissionWebhook, ValidatingAdmissionPolicy, ValidatingAdmissionWebhook). Comma-delimited list of admission plugins: MutatingAdmissionWebhook, NamespaceLifecycle, ValidatingAdmissionPolicy, ValidatingAdmissionWebhook. The order of plugins in this flag does not matter.
+ --disable-proxy Disable proxy feature that would save memory usage significantly.
+ --disable-search Disable search feature that would save memory usage significantly.
+ --egress-selector-config-file string File with apiserver egress selector configuration.
+ --enable-admission-plugins strings admission plugins that should be enabled in addition to default enabled ones (NamespaceLifecycle, MutatingAdmissionWebhook, ValidatingAdmissionPolicy, ValidatingAdmissionWebhook). Comma-delimited list of admission plugins: MutatingAdmissionWebhook, NamespaceLifecycle, ValidatingAdmissionPolicy, ValidatingAdmissionWebhook. The order of plugins in this flag does not matter.
+ --enable-garbage-collector Enables the generic garbage collector. MUST be synced with the corresponding flag of the kube-controller-manager. (default true)
+ --enable-pprof Enable profiling via web interface host:port/debug/pprof/.
+ --encryption-provider-config string The file containing configuration for encryption providers to be used for storing secrets in etcd
+ --encryption-provider-config-automatic-reload Determines if the file set by --encryption-provider-config should be automatically reloaded if the disk contents change. Setting this to true disables the ability to uniquely identify distinct KMS plugins via the API server healthz endpoints.
+ --etcd-cafile string SSL Certificate Authority file used to secure etcd communication.
+ --etcd-certfile string SSL certification file used to secure etcd communication.
+ --etcd-compaction-interval duration The interval of compaction requests. If 0, the compaction request from apiserver is disabled. (default 5m0s)
+ --etcd-count-metric-poll-period duration Frequency of polling etcd for number of resources per type. 0 disables the metric collection. (default 1m0s)
+ --etcd-db-metric-poll-interval duration The interval of requests to poll etcd and update metric. 0 disables the metric collection (default 30s)
+ --etcd-healthcheck-timeout duration The timeout to use when checking etcd health. (default 2s)
+ --etcd-keyfile string SSL key file used to secure etcd communication.
+ --etcd-prefix string The prefix to prepend to all resource paths in etcd. (default "/registry")
+ --etcd-readycheck-timeout duration The timeout to use when checking etcd readiness (default 2s)
+ --etcd-servers strings List of etcd servers to connect with (scheme://ip:port), comma separated.
+ --etcd-servers-overrides strings Per-resource etcd servers overrides, comma separated. The individual override format: group/resource#servers, where servers are URLs, semicolon separated. Note that this applies only to resources compiled into this server binary.
+ --feature-gates mapStringBool A set of key=value pairs that describe feature gates for alpha/experimental features. Options are:
+ APIListChunking=true|false (BETA - default=true)
+ APIPriorityAndFairness=true|false (BETA - default=true)
+ APIResponseCompression=true|false (BETA - default=true)
+ APIServerIdentity=true|false (BETA - default=true)
+ APIServerTracing=true|false (BETA - default=true)
+ AdmissionWebhookMatchConditions=true|false (BETA - default=true)
+ AggregatedDiscoveryEndpoint=true|false (BETA - default=true)
+ AllAlpha=true|false (ALPHA - default=false)
+ AllBeta=true|false (BETA - default=false)
+ ComponentSLIs=true|false (BETA - default=true)
+ ConsistentListFromCache=true|false (ALPHA - default=false)
+ CustomResourceValidationExpressions=true|false (BETA - default=true)
+ InPlacePodVerticalScaling=true|false (ALPHA - default=false)
+ KMSv2=true|false (BETA - default=true)
+ KMSv2KDF=true|false (BETA - default=false)
+ OpenAPIEnums=true|false (BETA - default=true)
+ RemainingItemCount=true|false (BETA - default=true)
+ StorageVersionAPI=true|false (ALPHA - default=false)
+ StorageVersionHash=true|false (BETA - default=true)
+ UnauthenticatedHTTP2DOSMitigation=true|false (BETA - default=false)
+ ValidatingAdmissionPolicy=true|false (BETA - default=false)
+ WatchList=true|false (ALPHA - default=false)
+ -h, --help help for karmada-search
+ --http2-max-streams-per-connection int The limit that the server gives to clients for the maximum number of streams in an HTTP/2 connection. Zero means to use golang's default. (default 1000)
+ --kube-api-burst int Burst to use while talking with karmada-apiserver. Doesn't cover events and node heartbeat apis which rate limiting is controlled by a different set of flags. (default 60)
+ --kube-api-qps float32 QPS to use while talking with karmada-apiserver. Doesn't cover events and node heartbeat apis which rate limiting is controlled by a different set of flags. (default 40)
+ --kubeconfig string Path to karmada control plane kubeconfig file.
+ --lease-reuse-duration-seconds int The time in seconds that each lease is reused. A lower value could avoid large number of objects reusing the same lease. Notice that a too small value may cause performance problems at storage layer. (default 60)
+ --log_backtrace_at traceLocation when logging hits line file:N, emit a stack trace (default :0)
+ --log_dir string If non-empty, write log files in this directory (no effect when -logtostderr=true)
+ --log_file string If non-empty, use this log file (no effect when -logtostderr=true)
+ --log_file_max_size uint Defines the maximum size a log file can grow to (no effect when -logtostderr=true). Unit is megabytes. If the value is 0, the maximum file size is unlimited. (default 1800)
+ --logtostderr log to standard error instead of files (default true)
+ --one_output If true, only write logs to their native severity level (vs also writing to each lower severity level; no effect when -logtostderr=true)
+ --permit-address-sharing If true, SO_REUSEADDR will be used when binding the port. This allows binding to wildcard IPs like 0.0.0.0 and specific IPs in parallel, and it avoids waiting for the kernel to release sockets in TIME_WAIT state. [default=false]
+ --permit-port-sharing If true, SO_REUSEPORT will be used when binding the port, which allows more than one instance to bind on the same address and port. [default=false]
+ --profiling Enable profiling via web interface host:port/debug/pprof/ (default true)
+ --profiling-bind-address string The TCP address for serving profiling(e.g. 127.0.0.1:6060, :6060). This is only applicable if profiling is enabled. (default ":6060")
+ --requestheader-allowed-names strings List of client certificate common names to allow to provide usernames in headers specified by --requestheader-username-headers. If empty, any client certificate validated by the authorities in --requestheader-client-ca-file is allowed.
+ --requestheader-client-ca-file string Root certificate bundle to use to verify client certificates on incoming requests before trusting usernames in headers specified by --requestheader-username-headers. WARNING: generally do not depend on authorization being already done for incoming requests.
+ --requestheader-extra-headers-prefix strings List of request header prefixes to inspect. X-Remote-Extra- is suggested. (default [x-remote-extra-])
+ --requestheader-group-headers strings List of request headers to inspect for groups. X-Remote-Group is suggested. (default [x-remote-group])
+ --requestheader-username-headers strings List of request headers to inspect for usernames. X-Remote-User is common. (default [x-remote-user])
+ --secure-port int The port on which to serve HTTPS with authentication and authorization. If 0, don't serve HTTPS at all. (default 443)
+ --skip_headers If true, avoid header prefixes in the log messages
+ --skip_log_headers If true, avoid headers when opening log files (no effect when -logtostderr=true)
+ --stderrthreshold severity logs at or above this threshold go to stderr when writing to files and stderr (no effect when -logtostderr=true or -alsologtostderr=false) (default 2)
+ --storage-backend string The storage backend for persistence. Options: 'etcd3' (default).
+ --storage-media-type string The media type to use to store objects in storage. Some resources or storage backends may only support a specific media type and will ignore this setting. Supported media types: [application/json, application/yaml, application/vnd.kubernetes.protobuf] (default "application/json")
+ --tls-cert-file string File containing the default x509 Certificate for HTTPS. (CA cert, if any, concatenated after server cert). If HTTPS serving is enabled, and --tls-cert-file and --tls-private-key-file are not provided, a self-signed certificate and key are generated for the public address and saved to the directory specified by --cert-dir.
+ --tls-cipher-suites strings Comma-separated list of cipher suites for the server. If omitted, the default Go cipher suites will be used.
+ Preferred values: TLS_AES_128_GCM_SHA256, TLS_AES_256_GCM_SHA384, TLS_CHACHA20_POLY1305_SHA256, TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA, TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA, TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384, TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305, TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256, TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384, TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305, TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256, TLS_RSA_WITH_AES_128_CBC_SHA, TLS_RSA_WITH_AES_128_GCM_SHA256, TLS_RSA_WITH_AES_256_CBC_SHA, TLS_RSA_WITH_AES_256_GCM_SHA384.
+ Insecure values: TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256, TLS_ECDHE_ECDSA_WITH_RC4_128_SHA, TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256, TLS_ECDHE_RSA_WITH_RC4_128_SHA, TLS_RSA_WITH_3DES_EDE_CBC_SHA, TLS_RSA_WITH_AES_128_CBC_SHA256, TLS_RSA_WITH_RC4_128_SHA.
+ --tls-min-version string Minimum TLS version supported. Possible values: VersionTLS10, VersionTLS11, VersionTLS12, VersionTLS13
+ --tls-private-key-file string File containing the default x509 private key matching --tls-cert-file.
+ --tls-sni-cert-key namedCertKey A pair of x509 certificate and private key file paths, optionally suffixed with a list of domain patterns which are fully qualified domain names, possibly with prefixed wildcard segments. The domain patterns also allow IP addresses, but IPs should only be used if the apiserver has visibility to the IP address requested by a client. If no domain patterns are provided, the names of the certificate are extracted. Non-wildcard matches trump over wildcard matches, explicit domain patterns trump over extracted names. For multiple key/certificate pairs, use the --tls-sni-cert-key multiple times. Examples: "example.crt,example.key" or "foo.crt,foo.key:*.foo.com,foo.com". (default [])
+ --tracing-config-file string File with apiserver tracing configuration.
+ -v, --v Level number for the log level verbosity
+ --vmodule moduleSpec comma-separated list of pattern=N settings for file-filtered logging
+ --watch-cache Enable watch caching in the apiserver (default true)
+ --watch-cache-sizes strings Watch cache size settings for some resources (pods, nodes, etc.), comma separated. The individual setting format: resource[.group]#size, where resource is lowercase plural (no version), group is omitted for resources of apiVersion v1 (the legacy core API) and included for others, and size is a number. This option is only meaningful for resources built into the apiserver, not ones defined by CRDs or aggregated from external servers, and is only consulted if the watch-cache is enabled. The only meaningful size setting to supply here is zero, which means to disable watch caching for the associated resource; all non-zero values are equivalent and mean to not disable watch caching for that resource
+```
+
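+下面给出一个最小的启动示例,仅用于说明上述部分参数的用法;其中的 kubeconfig、etcd 地址与证书路径均为假设值,实际部署中这些参数通常写在容器的启动 args 中,请按环境替换:
+
+```
+karmada-search \
+  --kubeconfig=/etc/karmada/karmada-apiserver.config \
+  --etcd-servers=https://127.0.0.1:2379 \
+  --etcd-cafile=/etc/karmada/pki/etcd-ca.crt \
+  --etcd-certfile=/etc/karmada/pki/etcd-client.crt \
+  --etcd-keyfile=/etc/karmada/pki/etcd-client.key \
+  --secure-port=443
+```
+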
+###### Auto generated by [spf13/cobra script in Karmada](https://github.com/karmada-io/karmada/tree/master/hack/tools/gencomponentdocs)
\ No newline at end of file
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/components/karmada-webhook.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/components/karmada-webhook.md
new file mode 100644
index 000000000..319993f60
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/components/karmada-webhook.md
@@ -0,0 +1,50 @@
+---
+title: karmada-webhook
+---
+
+
+
+### Synopsis
+
+The karmada-webhook starts a webhook server and manages policies about how to mutate and validate
+Karmada resources including 'PropagationPolicy', 'OverridePolicy' and so on.
+
+```
+karmada-webhook [flags]
+```
+
+### Options
+
+```
+ --add_dir_header If true, adds the file directory to the header of the log messages
+ --alsologtostderr log to standard error as well as files (no effect when -logtostderr=true)
+ --bind-address string The IP address on which to listen for the --secure-port port. (default "0.0.0.0")
+ --cert-dir string The directory that contains the server key and certificate. (default "/tmp/k8s-webhook-server/serving-certs")
+ --default-not-ready-toleration-seconds int Indicates the tolerationSeconds of the propagation policy toleration for notReady:NoExecute that is added by default to every propagation policy that does not already have such a toleration. (default 300)
+ --default-unreachable-toleration-seconds int Indicates the tolerationSeconds of the propagation policy toleration for unreachable:NoExecute that is added by default to every propagation policy that does not already have such a toleration. (default 300)
+ --enable-pprof Enable profiling via web interface host:port/debug/pprof/.
+ --health-probe-bind-address string The TCP address that the controller should bind to for serving health probes(e.g. 127.0.0.1:8000, :8000) (default ":8000")
+ -h, --help help for karmada-webhook
+ --kube-api-burst int Burst to use while talking with karmada-apiserver. Doesn't cover events and node heartbeat apis which rate limiting is controlled by a different set of flags. (default 60)
+ --kube-api-qps float32 QPS to use while talking with karmada-apiserver. Doesn't cover events and node heartbeat apis which rate limiting is controlled by a different set of flags. (default 40)
+ --kubeconfig string Path to karmada control plane kubeconfig file.
+ --log_backtrace_at traceLocation when logging hits line file:N, emit a stack trace (default :0)
+ --log_dir string If non-empty, write log files in this directory (no effect when -logtostderr=true)
+ --log_file string If non-empty, use this log file (no effect when -logtostderr=true)
+ --log_file_max_size uint Defines the maximum size a log file can grow to (no effect when -logtostderr=true). Unit is megabytes. If the value is 0, the maximum file size is unlimited. (default 1800)
+ --logtostderr log to standard error instead of files (default true)
+ --metrics-bind-address string The TCP address that the controller should bind to for serving prometheus metrics(e.g. 127.0.0.1:8080, :8080). It can be set to "0" to disable the metrics serving. (default ":8080")
+ --one_output If true, only write logs to their native severity level (vs also writing to each lower severity level; no effect when -logtostderr=true)
+ --profiling-bind-address string The TCP address for serving profiling(e.g. 127.0.0.1:6060, :6060). This is only applicable if profiling is enabled. (default ":6060")
+ --secure-port int The secure port on which to serve HTTPS. (default 8443)
+ --skip_headers If true, avoid header prefixes in the log messages
+ --skip_log_headers If true, avoid headers when opening log files (no effect when -logtostderr=true)
+ --stderrthreshold severity logs at or above this threshold go to stderr when writing to files and stderr (no effect when -logtostderr=true or -alsologtostderr=false) (default 2)
+ --tls-cert-file-name string The name of server certificate. (default "tls.crt")
+ --tls-min-version string Minimum TLS version supported. Possible values: 1.0, 1.1, 1.2, 1.3. (default "1.3")
+ --tls-private-key-file-name string The name of server key. (default "tls.key")
+ -v, --v Level number for the log level verbosity
+ --vmodule moduleSpec comma-separated list of pattern=N settings for file-filtered logging
+```
+
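+下面给出一个最小的启动示例,仅用于说明部分参数的用法;其中的 kubeconfig 与证书目录路径均为假设值,实际部署中这些参数通常写在容器的启动 args 中:
+
+```
+karmada-webhook \
+  --kubeconfig=/etc/karmada/karmada-apiserver.config \
+  --bind-address=0.0.0.0 \
+  --secure-port=8443 \
+  --cert-dir=/tmp/k8s-webhook-server/serving-certs \
+  --health-probe-bind-address=:8000 \
+  --metrics-bind-address=:8080
+```
+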
+###### Auto generated by [spf13/cobra script in Karmada](https://github.com/karmada-io/karmada/tree/master/hack/tools/gencomponentdocs)
\ No newline at end of file
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/glossary.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/glossary.md
new file mode 100644
index 000000000..a1648d6e9
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/glossary.md
@@ -0,0 +1,68 @@
+---
+title: 词汇表
+---
+
+此术语表旨在提供一份完整、规范的 Karmada 术语列表,其中既包含 Karmada 特有的技术术语,也包含有助于理解上下文的一般性术语。
+
+* Aggregated API
+
+ 聚合 API,由 `karmada-aggregated-apiserver` 提供。它能够聚合所有注册的集群,并允许用户通过 Karmada 的 `cluster/proxy` 端点统一访问不同的成员集群。
+
+* ClusterAffinity
+
+ 类似于 K8s,ClusterAffinity 是指一组规则,它们为调度器提供在哪个集群部署应用的提示信息。
+
+* GracefulEviction
+
+ 优雅驱逐,指工作负载在集群间迁移时,驱逐将被推迟到工作负载在新集群上启动或是达到最大宽限期之后才被执行。
+ 优雅驱逐可以帮助应用在多集群故障迁移过程中保持服务不中断、实例不跌零。
+
+* OverridePolicy
+
+ 跨集群可重用的差异化配置策略。
+
+* Overrider
+
+ Overrider 是指 Karmada 提供的一系列差异化配置规则,如 ImageOverrider 差异化配置工作负载的镜像。
+
+* Propagate Dependencies(PropagateDeps)
+
+ 依赖跟随分发,是指应用在下发至某个集群时,Karmada 能自动将它的依赖同时分发至同一个集群。依赖不经过调度流程,而是复用主体应用的调度结果。
+ 复杂应用的依赖可以通过资源解释器的 `InterpretDependency` 操作进行解析。
+
+* PropagationPolicy
+
+ 可重用的应用多集群调度策略(本文末尾附有一个最小的 PropagationPolicy 示例)。
+
+* Pull Mode
+
+ Karmada 管理集群的一种模式,Karmada 控制面将不会直接访问成员集群,而是将职责委托给部署于成员集群的 `karmada-agent`。
+
+* Push Mode
+
+ Karmada 管理集群的一种模式,Karmada 控制面将直接访问成员集群的 `kube-apiserver` 来获取集群状态和部署应用。
+
+* ResourceBinding
+
+ Karmada 的通用类型,驱动内部流程,包含应用的模板信息和调度策略信息,是 Karmada 调度器在调度应用时的处理对象。
+
+* Resource Interpreter
+
+ 在将资源从 `karmada-apiserver` 分发到成员集群的过程中,Karmada 需要了解资源的定义结构,例如在 Deployment 的拆分调度时,Karmada 需要解析 Deployment 资源的 `replicas` 字段。
+ 资源解释器专为解释资源结构而设计,它包括两类解释器,内置解释器用于解释常见的 Kubernetes 原生资源或一些知名的扩展资源,由社区实现并维护,而自定义解释器用于解释自定义资源或覆盖内置解释器,由用户实现和维护。
+
+* Resource Model
+
+ 资源模型,是成员集群的资源状态在 Karmada 控制面的抽象,在根据集群余量进行实例的调度过程中,Karmada 调度器会基于集群的资源模型做出决策。
+
+* Resource Template
+
+ 资源模板,指包含 CRD 在内的 K8s 原生 API 定义,泛指多集群应用的模板。
+
+* SpreadConstraint
+
+ 分发约束,指基于集群拓扑的调度约束,可根据集群所在的 region、provider、zone 等信息进行调度。
+
+* Work
+
+ 成员集群最终资源在联邦层的映射,不同成员集群通过命名空间隔离。
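+
+为便于理解上述术语之间的关系,下面给出一个最小的 PropagationPolicy 示例(仅作示意:其中的工作负载名称 `nginx` 与集群名称 `member1`、`member2` 均为假设值),它同时用到了 ClusterAffinity 与 PropagateDeps:
+
+```
+# 假设当前 kubeconfig 已指向 karmada-apiserver
+kubectl apply -f - <<EOF
+apiVersion: policy.karmada.io/v1alpha1
+kind: PropagationPolicy
+metadata:
+  name: nginx-propagation
+spec:
+  propagateDeps: true          # 依赖跟随分发(PropagateDeps)
+  resourceSelectors:
+    - apiVersion: apps/v1
+      kind: Deployment
+      name: nginx              # 资源模板(Resource Template)
+  placement:
+    clusterAffinity:           # ClusterAffinity:指定候选集群
+      clusterNames:
+        - member1
+        - member2
+EOF
+```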
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/instrumentation/event.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/instrumentation/event.md
new file mode 100644
index 000000000..eb369c341
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/instrumentation/event.md
@@ -0,0 +1,46 @@
+---
+title: Karmada 事件参考
+---
+
+## 事件
+
+本章节详细介绍了 Karmada 中记录关键过程的事件,更多详细信息请参考 [这里](https://kubernetes.io/docs/reference/kubernetes-api/cluster-resources/event-v1/)。
+
+
+| 原因 | 关联对象 | 类型 | 源组件 |
+|-----------------------------------------|---------------------------------------------------------------------------------------------|---------|---------------------------------------------------------------------------------------------------------------------------|
+| CreateExecutionSpaceFailed | Cluster | Warning | cluster-controller |
+| CreateExecutionSpaceSucceed | Cluster | Normal | cluster-controller |
+| RemoveExecutionSpaceFailed | Cluster | Warning | cluster-controller |
+| RemoveExecutionSpaceSucceed | Cluster | Normal | cluster-controller |
+| TaintClusterByConditionFailed | Cluster | Warning | cluster-controller |
+| RemoveTargetClusterFailed | Cluster | Warning | cluster-controller |
+| SyncImpersonationConfigFailed | Cluster | Warning | unified-auth-controller |
+| SyncImpersonationConfigSucceed | Cluster | Normal | unified-auth-controller |
+| ReflectStatusFailed | Work | Warning | work-status-controller |
+| ReflectStatusSucceed | Work | Normal | work-status-controller |
+| InterpretHealthFailed | Work | Warning | work-status-controller |
+| InterpretHealthSucceed | Work | Normal | work-status-controller |
+| SyncFailed | Work | Warning | execution-controller |
+| SyncSucceed | Work | Normal | execution-controller |
+| CleanupWorkFailed | ResourceBinding ClusterResourceBinding | Warning | binding-controller cluster-resource-binding-controller |
+| SyncScheduleResultToDependenciesSucceed | ResourceBinding ClusterResourceBinding | Normal | dependencies-distributor |
+| SyncScheduleResultToDependenciesFailed | ResourceBinding ClusterResourceBinding | Warning | dependencies-distributor |
+| SyncWorkFailed | ResourceBinding ClusterResourceBinding resource template FederatedResourceQuota | Warning | binding-controller cluster-resource-binding-controller |
+| SyncWorkSucceed | ResourceBinding ClusterResourceBinding resource template FederatedResourceQuota | Normal | binding-controller cluster-resource-binding-controller |
+| AggregateStatusFailed | ResourceBinding ClusterResourceBinding resource template FederatedResourceQuota | Warning | binding-controller cluster-resource-binding-controller |
+| AggregateStatusSucceed | ResourceBinding ClusterResourceBinding resource template FederatedResourceQuota | Normal | binding-controller cluster-resource-binding-controller |
+| ScheduleBindingFailed | ResourceBinding ClusterResourceBinding resource template | Warning | karmada-scheduler |
+| ScheduleBindingSucceed | ResourceBinding ClusterResourceBinding resource template | Normal | karmada-scheduler |
+| DescheduleBindingFailed | ResourceBinding ClusterResourceBinding resource template | Warning | karmada-descheduler |
+| DescheduleBindingSucceed | ResourceBinding ClusterResourceBinding resource template | Normal | karmada-descheduler |
+| EvictWorkloadFromClusterSucceed | ResourceBinding ClusterResourceBinding resource template | Normal | taint-manager resource-binding-graceful-eviction-controller cluster-resource-binding-graceful-eviction-controller |
+| EvictWorkloadFromClusterFailed | ResourceBinding ClusterResourceBinding resource template | Warning | taint-manager resource-binding-graceful-eviction-controller cluster-resource-binding-graceful-eviction-controller |
+| ApplyPolicyFailed | resource template | Warning | resource-detector |
+| ApplyPolicySucceed | resource template | Normal | resource-detector |
+| ApplyOverridePolicyFailed | resource template | Warning | override-manager |
+| ApplyOverridePolicySucceed | resource template | Normal | override-manager |
+| GetDependenciesFailed | resource template | Warning | dependencies-distributor |
+| GetDependenciesSucceed | resource template | Normal | dependencies-distributor |
+| SyncDerivedServiceFailed | ServiceImport | Warning | service-import-controller |
+| SyncDerivedServiceSucceed | ServiceImport | Normal | service-import-controller |
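+
+下面给出一个按事件原因(reason)过滤上述事件的示例,便于排查问题;其中的 kubeconfig 路径为假设值,请替换为实际指向 karmada-apiserver 的配置文件:
+
+```
+# 列出所有命名空间中原因为 ApplyPolicyFailed 的事件
+kubectl --kubeconfig /etc/karmada/karmada-apiserver.config \
+  get events --all-namespaces --field-selector reason=ApplyPolicyFailed
+```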
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/instrumentation/metrics.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/instrumentation/metrics.md
new file mode 100644
index 000000000..ff1a975dd
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/instrumentation/metrics.md
@@ -0,0 +1,42 @@
+---
+title: Karmada Metrics 参考
+---
+
+## Metrics
+
+本章节详细介绍了 Karmada 不同组件导出的 Metrics 指标。
+
+您可以通过 HTTP 抓取(scrape)这些组件的指标端点,以 Prometheus 格式获取当前指标数据(参见文末的抓取示例)。
+
+| 名称 | 类型 | 帮助 | 标签 | 源组件 |
+|------------------------------------------------|-----------|---------------------------------------------------------------------------------------------|---------------------------------------|----------------------------------------------|
+| schedule_attempts_total | Counter | 尝试调度 resourceBinding 的次数 | result schedule_type | karmada-scheduler |
+| e2e_scheduling_duration_seconds | Histogram | E2E 调度延迟 (单位秒) | result schedule_type | karmada-scheduler |
+| scheduling_algorithm_duration_seconds | Histogram | 调度算法延迟 (单位秒,不包括 scale 调度器) | schedule_step | karmada-scheduler |
+| queue_incoming_bindings_total | Counter | 按事件类型添加到调度队列的 bindings 数量 | event | karmada-scheduler |
+| framework_extension_point_duration_seconds | Histogram | 运行特定扩展点的所有插件的延迟 | extension_point result | karmada-scheduler |
+| plugin_execution_duration_seconds | Histogram | 在特定扩展点运行插件的持续时间 | plugin extension_point result | karmada-scheduler |
+| estimating_request_total                        | Counter   | 调度器估算器的请求数                                                                  | result type                            | karmada-scheduler-estimator                   |
+| estimating_algorithm_duration_seconds           | Histogram | 估算算法每个步骤的延迟(单位秒)                                                          | result type step                       | karmada-scheduler-estimator                   |
+| cluster_ready_state | Gauge | 集群的状态 (1 代表就绪, 0 代表其他) | cluster_name | karmada-controller-manager karmada-agent |
+| cluster_node_number | Gauge | 集群中节点的数量 | cluster_name | karmada-controller-manager karmada-agent |
+| cluster_ready_node_number | Gauge | 集群中就绪节点的数量 | cluster_name | karmada-controller-manager karmada-agent |
+| cluster_memory_allocatable_bytes | Gauge | 集群中可分配的内存资源 (单位字节) | cluster_name | karmada-controller-manager karmada-agent |
+| cluster_cpu_allocatable_number | Gauge | 集群中可分配的 CPU 数量 | cluster_name | karmada-controller-manager karmada-agent |
+| cluster_pod_allocatable_number | Gauge | 集群中可分配的 Pod 数量 | cluster_name | karmada-controller-manager karmada-agent |
+| cluster_memory_allocated_bytes | Gauge | 集群中已分配的内存资源 (单位字节) | cluster_name | karmada-controller-manager karmada-agent |
+| cluster_cpu_allocated_number | Gauge | 集群中已分配的 CPU 数量 | cluster_name | karmada-controller-manager karmada-agent |
+| cluster_pod_allocated_number | Gauge | 集群中已分配的 Pod 数量 | cluster_name | karmada-controller-manager karmada-agent |
+| cluster_sync_status_duration_seconds | Histogram | 同步一次群集状态的持续时间 (单位秒) | cluster_name | karmada-controller-manager karmada-agent |
+| resource_match_policy_duration_seconds | Histogram | 为资源模板找到匹配的调度策略的持续时间 (单位秒) | / | karmada-controller-manager |
+| resource_apply_policy_duration_seconds | Histogram | 为资源模板应用调度策略的持续时间 (单位秒),"error" 代表资源模板应用该策略失败,否则为 "success" | result | karmada-controller-manager |
+| policy_apply_attempts_total                     | Counter   | 为资源模板应用调度策略的尝试次数,"error" 代表资源模板应用该策略失败,否则为 "success"                      | result                                 | karmada-controller-manager                     |
+| binding_sync_work_duration_seconds | Histogram | 为 binding 对象同步 work 的持续时间 (单位秒),"error" 代表为 binding 同步 work 失败,否则为 "success" | result | karmada-controller-manager |
+| work_sync_workload_duration_seconds | Histogram | 将 workload 对象同步到目标群集的持续时间 (单位秒),"error" 代表同步 workload 失败,否则为 "success" | result | karmada-controller-manager karmada-agent |
+| policy_preemption_total | Counter | 资源模板的抢占次数,"error" 代表资源模版抢占失败,否则为 "success" | result | karmada-controller-manager |
+| cronfederatedhpa_process_duration_seconds | Histogram | 处理 CronFederatedHPA 的持续时间 (单位秒),"error" 代表处理 CronFederatedHPA 失败,否则为 "success" | result | karmada-controller-manager |
+| cronfederatedhpa_rule_process_duration_seconds | Histogram | 处理 CronFederatedHPA 规则的持续时间 (单位秒),"error" 代表处理 CronFederatedHPA 规则失败,否则为 "success" | result | karmada-controller-manager |
+| federatedhpa_process_duration_seconds | Histogram | 处理 FederatedHPA 的持续时间 (单位秒),"error" 代表处理 FederatedHPA 失败,否则为 "success" | result | karmada-controller-manager |
+| federatedhpa_pull_metrics_duration_seconds | Histogram | FederatedHPA 拉取 metrics 指标所需的时间 (单位秒),"error" 代表 FederatedHPA 拉取 metrics 指标失败,否则为 "success" | result metricType | karmada-controller-manager |
+| pool_get_operation_total | Counter | 从池中拉数据的总次数 | name from | karmada-controller-manager karmada-agent |
+| pool_put_operation_total | Counter | 向池中推数据的总次数 | name to | karmada-controller-manager karmada-agent |
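+
+下面给出一个手动抓取指标的示例(以 karmada-scheduler 为例)。示例假设组件以默认方式部署在 `karmada-system` 命名空间,本地 10351 端口仅为演示取值,请以实际暴露的 metrics 端口和协议为准:
+
+```
+# 将 karmada-scheduler 的指标端口转发到本地(端口号为示例值,可在另一个终端执行或追加 & 后台运行)
+kubectl -n karmada-system port-forward deploy/karmada-scheduler 10351:10351
+
+# 以 Prometheus 文本格式抓取指标,并筛选上表中的调度相关指标
+# 若组件以 HTTPS 暴露指标,请改用 https 并携带相应证书
+curl -s http://127.0.0.1:10351/metrics | grep -E 'schedule_attempts_total|queue_incoming_bindings_total'
+```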
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/karmada-api/auto-scaling-resources/cron-federated-hpa-v1alpha1.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/karmada-api/auto-scaling-resources/cron-federated-hpa-v1alpha1.md
new file mode 100644
index 000000000..29eccb357
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/karmada-api/auto-scaling-resources/cron-federated-hpa-v1alpha1.md
@@ -0,0 +1,709 @@
+---
+api_metadata:
+ apiVersion: "autoscaling.karmada.io/v1alpha1"
+ import: "github.com/karmada-io/karmada/pkg/apis/autoscaling/v1alpha1"
+ kind: "CronFederatedHPA"
+content_type: "api_reference"
+description: "CronFederatedHPA represents a collection of repeating schedule to scale replica number of a specific workload."
+title: "CronFederatedHPA v1alpha1"
+weight: 2
+auto_generated: true
+---
+
+[//]: # (The file is auto-generated from the Go source code of the component using a generic generator,)
+[//]: # (which is forked from [reference-docs](https://github.com/kubernetes-sigs/reference-docs.)
+[//]: # (To update the reference content, please follow the `reference-api.sh`.)
+
+`apiVersion: autoscaling.karmada.io/v1alpha1`
+
+`import "github.com/karmada-io/karmada/pkg/apis/autoscaling/v1alpha1"`
+
+## CronFederatedHPA
+
+CronFederatedHPA 表示一组可重复的计划,这些计划用于伸缩特定工作负载的副本数量。CronFederatedHPA 可以伸缩任何实现了 scale 子资源的资源,也可以是 FederatedHPA。
+
+
+
+- **apiVersion**: autoscaling.karmada.io/v1alpha1
+
+- **kind**: CronFederatedHPA
+
+- **metadata** ([ObjectMeta](../common-definitions/object-meta#objectmeta))
+
+- **spec** ([CronFederatedHPASpec](../auto-scaling-resources/cron-federated-hpa-v1alpha1#cronfederatedhpaspec)),必选
+
+ Spec 表示 CronFederatedHPA 的规范。
+
+- **status** ([CronFederatedHPAStatus](../auto-scaling-resources/cron-federated-hpa-v1alpha1#cronfederatedhpastatus))
+
+ Status 是 CronFederatedHPA 当前的状态。
+
+## CronFederatedHPASpec
+
+CronFederatedHPASpec 表示 CronFederatedHPA 的规范。
+
+
+
+- **rules** ([]CronFederatedHPARule),必选
+
+ Rules 是一组计划,用于声明伸缩引用目标资源的时间和动作。
+
+
+
+ *CronFederatedHPARule 声明伸缩计划及伸缩动作。*
+
+ - **rules.name** (string),必选
+
+ 规则名称。CronFederatedHPA 中的每条规则必须有唯一的名称。
+
+ 注意:每条规则的名称是记录其执行历史记录的标识符。如果更改某条规则的名称,将被视为删掉该规则,并添加一条新规则,这意味着原始执行历史将被丢弃。
+
+ - **rules.schedule** (string),必选
+
+ Schedule 是表示周期时间的 cron 表达式。欲了解其语法,请浏览 https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/#schedule-syntax
+
+ - **rules.failedHistoryLimit** (int32)
+
+ FailedHistoryLimit 表示每条规则保留的失败执行记录条数。取值只能为正整数,默认值为 3。
+
+ - **rules.successfulHistoryLimit** (int32)
+
+ SuccessfulHistoryLimit 表示每条规则保留的成功执行记录条数。取值只能为正整数,默认值为 3。
+
+ - **rules.suspend** (boolean)
+
+ Suspend 通知控制器暂停后续执行。默认值为 false。
+
+ - **rules.targetMaxReplicas** (int32)
+
+ TargetMaxReplicas 是为 FederatedHPA 设置的最大副本数(MaxReplicas)。此字段只有当引用资源是 FederatedHPA 才需要。TargetMaxReplicas 可与 TargetMinReplicas 同时指定,也可单独指定。nil 表示不会更新引用 FederatedHPA 的 MaxReplicas(.spec.maxReplicas)。
+
+ - **rules.targetMinReplicas** (int32)
+
+ TargetMinReplicas 是为 FederatedHPA 设置的最小副本数(MinReplicas)。此字段只有当引用资源是 FederatedHPA 才需要。TargetMinReplicas 可与 TargetMaxReplicas 同时指定,也可单独指定。nil 表示不会更新引用 FederatedHPA 的 MinReplicas(.spec.minReplicas)。
+
+ - **rules.targetReplicas** (int32)
+
+ TargetReplicas 是由 CronFederatedHPA 的 ScaleTargetRef 所引用资源的目标副本数。此字段只有当引用资源不是 FederatedHPA 时才需要。
+
+ - **rules.timeZone** (string)
+
+ TimeZone 表示计划所用的时区。如果未指定,默认使用 karmada-controller-manager 进程的时区。当应用此资源模板时,无效的 TimeZone 会被 karmada-webhook 拒绝。欲了解所有时区,请浏览 https://en.wikipedia.org/wiki/List_of_tz_database_time_zones
+
+- **scaleTargetRef** (CrossVersionObjectReference),必选
+
+ ScaleTargetRef 指向待伸缩的目标资源。目标资源可以是任何实现了 scale 子资源的资源(比如 Deployment),也可以是 FederatedHPA。
+
+
+
+ *CrossVersionObjectReference 包含可以识别被引用资源的足够信息。*
+
+ - **scaleTargetRef.kind** (string),必选
+
+ kind 表示被引用资源的类别。更多信息,请浏览 https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
+
+ - **scaleTargetRef.name** (string),必选
+
+ name 表示被引用资源的名称。更多信息,请浏览 https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
+
+ - **scaleTargetRef.apiVersion** (string)
+
+ apiVersion 是被引用资源的 API 版本。
+
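+综合上述字段,下面给出一个最小的 CronFederatedHPA 示例(仅作示意:其中的 Deployment 名称 `nginx`、命名空间与副本数均为假设值),表示在工作日早上 9 点将目标工作负载扩容到 5 个副本:
+
+```
+# 假设当前 kubeconfig 已指向 karmada-apiserver
+kubectl apply -f - <<EOF
+apiVersion: autoscaling.karmada.io/v1alpha1
+kind: CronFederatedHPA
+metadata:
+  name: nginx-cronfhpa
+  namespace: default
+spec:
+  scaleTargetRef:
+    apiVersion: apps/v1
+    kind: Deployment
+    name: nginx
+  rules:
+    - name: "scale-up-workday-morning"
+      schedule: "0 9 * * 1-5"     # 工作日(周一至周五)早上 9 点
+      targetReplicas: 5
+EOF
+```
+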
+## CronFederatedHPAStatus
+
+CronFederatedHPAStatus 表示 CronFederatedHPA 当前的状态。
+
+
+
+- **executionHistories** ([]ExecutionHistory)
+
+ ExecutionHistories 记录 CronFederatedHPARule 的执行历史。
+
+
+
+ *ExecutionHistory 记录特定 CronFederatedHPARule 的执行历史。*
+
+ - **executionHistories.ruleName** (string),必选
+
+ RuleName 是 CronFederatedHPARule 的名称。
+
+ - **executionHistories.failedExecutions** ([]FailedExecution)
+
+ FailedExecutions 是失败的执行记录。
+
+
+
+ *FailedExecution 记录了一次失败的执行。*
+
+ - **executionHistories.failedExecutions.executionTime** (Time),必选
+
+ ExecutionTime 表示 CronFederatedHPARule 的实际执行时间。任务可能并不总是在 ScheduleTime 执行。ExecutionTime 用于评估控制器执行的效率。
+
+
+
+ *Time 是 time.Time 的包装器,它支持对 YAML 和 JSON 的正确编组。time 包的许多工厂方法提供了包装器。*
+
+ - **executionHistories.failedExecutions.message** (string),必选
+
+ Message 是有关失败的详细信息(人类可读消息)。
+
+ - **executionHistories.failedExecutions.scheduleTime** (Time),必选
+
+ ScheduleTime 是 CronFederatedHPARule 中声明的期待执行时间。
+
+
+
+ *Time 是 time.Time 的包装器,它支持对 YAML 和 JSON 的正确编组。time 包的许多工厂方法提供了包装器。*
+
+ - **executionHistories.nextExecutionTime** (Time)
+
+ NextExecutionTime 是下一次执行的时间。nil 表示规则已被暂停。
+
+
+
+ *Time 是 time.Time 的包装器,它支持对 YAML 和 JSON 的正确编组。time 包的许多工厂方法提供了包装器。*
+
+ - **executionHistories.successfulExecutions** ([]SuccessfulExecution)
+
+ SuccessfulExecutions 是成功的执行记录。
+
+
+
+ *SuccessfulExecution 记录了一次成功的执行。*
+
+ - **executionHistories.successfulExecutions.executionTime** (Time),必选
+
+ ExecutionTime 表示 CronFederatedHPARule 的实际执行时间。任务可能并不总是在 ScheduleTime 执行。ExecutionTime 用于评估控制器执行的效率。
+
+
+
+ *Time 是 time.Time 的包装器,它支持对 YAML 和 JSON 的正确编组。time 包的许多工厂方法提供了包装器。*
+
+ - **executionHistories.successfulExecutions.scheduleTime** (Time),必选
+
+ ScheduleTime 是 CronFederatedHPARule 中声明的期待执行时间。
+
+
+
+ *Time 是 time.Time 的包装器,它支持对 YAML 和 JSON 的正确编组。time 包的许多工厂方法提供了包装器。*
+
+ - **executionHistories.successfulExecutions.appliedMaxReplicas** (int32)
+
+ AppliedMaxReplicas 表示已应用的最大副本数(MaxReplicas)。此字段只有在 .spec.rules[*].targetMaxReplicas 未留空时需要。
+
+ - **executionHistories.successfulExecutions.appliedMinReplicas** (int32)
+
+ AppliedMinReplicas 表示已应用的最小副本数(MinReplicas)。此字段只有在 .spec.rules[*].targetMinReplicas 未留空时需要。
+
+ - **executionHistories.successfulExecutions.appliedReplicas** (int32)
+
+ AppliedReplicas 表示已应用的副本。此字段只有在 .spec.rules[*].targetReplicas 未留空时需要。
+
+## CronFederatedHPAList
+
+CronFederatedHPAList 罗列 CronFederatedHPA。
+
+
+
+- **apiVersion**: autoscaling.karmada.io/v1alpha1
+
+- **kind**: CronFederatedHPAList
+
+- **metadata** ([ListMeta](../common-definitions/list-meta#listmeta))
+
+- **items** ([][CronFederatedHPA](../auto-scaling-resources/cron-federated-hpa-v1alpha1#cronfederatedhpa)),必选
+
+## 操作
+
+
+
+### `get`:查询指定的 CronFederatedHPA
+
+#### HTTP 请求
+
+GET /apis/autoscaling.karmada.io/v1alpha1/namespaces/{namespace}/cronfederatedhpas/{name}
+
+#### 参数
+
+- **名称**(*路径参数*):string,必选
+
+ CronFederatedHPA 的名称
+
+- **namespace**(*路径参数*):string,必选
+
+ [namespace](../common-parameter/common-parameters#namespace)
+
+- **pretty**(*查询参数*):string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+#### 响应
+
+200 ([CronFederatedHPA](../auto-scaling-resources/cron-federated-hpa-v1alpha1#cronfederatedhpa)): OK
+
+### `get`:查询指定 CronFederatedHPA 的状态
+
+#### HTTP 请求
+
+GET /apis/autoscaling.karmada.io/v1alpha1/namespaces/{namespace}/cronfederatedhpas/{name}/status
+
+#### 参数
+
+- **名称**(*路径参数*):string,必选
+
+ CronFederatedHPA 的名称
+
+- **namespace**(*路径参数*):string,必选
+
+ [namespace](../common-parameter/common-parameters#namespace)
+
+- **pretty**(*查询参数*):string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+#### 响应
+
+200 ([CronFederatedHPA](../auto-scaling-resources/cron-federated-hpa-v1alpha1#cronfederatedhpa)): OK
+
+### `list`:查询指定命名空间内的所有 CronFederatedHPA
+
+#### HTTP 请求
+
+GET /apis/autoscaling.karmada.io/v1alpha1/namespaces/{namespace}/cronfederatedhpas
+
+#### 参数
+
+- **namespace**(*路径参数*):string,必选
+
+ [namespace](../common-parameter/common-parameters#namespace)
+
+- **allowWatchBookmarks**(*查询参数*):boolean
+
+ [allowWatchBookmarks](../common-parameter/common-parameters#allowwatchbookmarks)
+
+- **continue**(*查询参数*):string
+
+ [continue](../common-parameter/common-parameters#continue)
+
+- **fieldSelector**(*查询参数*):string
+
+ [fieldSelector](../common-parameter/common-parameters#fieldselector)
+
+- **labelSelector**(*查询参数*):string
+
+ [labelSelector](../common-parameter/common-parameters#labelselector)
+
+- **limit**(*查询参数*):integer
+
+ [limit](../common-parameter/common-parameters#limit)
+
+- **pretty**(*查询参数*):string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+- **resourceVersion**(*查询参数*):string
+
+ [resourceVersion](../common-parameter/common-parameters#resourceversion)
+
+- **resourceVersionMatch**(*查询参数*):string
+
+ [resourceVersionMatch](../common-parameter/common-parameters#resourceversionmatch)
+
+- **sendInitialEvents**(*查询参数*):boolean
+
+ [sendInitialEvents](../common-parameter/common-parameters#sendinitialevents)
+
+- **timeoutSeconds**(*查询参数*):integer
+
+ [timeoutSeconds](../common-parameter/common-parameters#timeoutseconds)
+
+- **watch**(*查询参数*):boolean
+
+ [watch](../common-parameter/common-parameters#watch)
+
+#### 响应
+
+200 ([CronFederatedHPAList](../auto-scaling-resources/cron-federated-hpa-v1alpha1#cronfederatedhpalist)): OK
+
+### `list`:查询所有 CronFederatedHPA
+
+#### HTTP 请求
+
+GET /apis/autoscaling.karmada.io/v1alpha1/cronfederatedhpas
+
+#### 参数
+
+- **allowWatchBookmarks**(*查询参数*):boolean
+
+ [allowWatchBookmarks](../common-parameter/common-parameters#allowwatchbookmarks)
+
+- **continue**(*查询参数*):string
+
+ [continue](../common-parameter/common-parameters#continue)
+
+- **fieldSelector**(*查询参数*):string
+
+ [fieldSelector](../common-parameter/common-parameters#fieldselector)
+
+- **labelSelector**(*查询参数*):string
+
+ [labelSelector](../common-parameter/common-parameters#labelselector)
+
+- **limit**(*查询参数*):integer
+
+ [limit](../common-parameter/common-parameters#limit)
+
+- **pretty**(*查询参数*):string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+- **resourceVersion**(*查询参数*):string
+
+ [resourceVersion](../common-parameter/common-parameters#resourceversion)
+
+- **resourceVersionMatch**(*查询参数*):string
+
+ [resourceVersionMatch](../common-parameter/common-parameters#resourceversionmatch)
+
+- **sendInitialEvents**(*查询参数*):boolean
+
+ [sendInitialEvents](../common-parameter/common-parameters#sendinitialevents)
+
+- **timeoutSeconds**(*查询参数*):integer
+
+ [timeoutSeconds](../common-parameter/common-parameters#timeoutseconds)
+
+- **watch**(*查询参数*):boolean
+
+ [watch](../common-parameter/common-parameters#watch)
+
+#### 响应
+
+200 ([CronFederatedHPAList](../auto-scaling-resources/cron-federated-hpa-v1alpha1#cronfederatedhpalist)): OK
+
+### `create`:创建一个 CronFederatedHPA
+
+#### HTTP 请求
+
+POST /apis/autoscaling.karmada.io/v1alpha1/namespaces/{namespace}/cronfederatedhpas
+
+#### 参数
+
+- **namespace**(*路径参数*):string,必选
+
+ [namespace](../common-parameter/common-parameters#namespace)
+
+- **body**: [CronFederatedHPA](../auto-scaling-resources/cron-federated-hpa-v1alpha1#cronfederatedhpa),必选
+
+
+
+- **dryRun**(*查询参数*):string
+
+ [dryRun](../common-parameter/common-parameters#dryrun)
+
+- **fieldManager**(*查询参数*):string
+
+ [fieldManager](../common-parameter/common-parameters#fieldmanager)
+
+- **fieldValidation**(*查询参数*):string
+
+ [fieldValidation](../common-parameter/common-parameters#fieldvalidation)
+
+- **pretty**(*查询参数*):string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+#### 响应
+
+200 ([CronFederatedHPA](../auto-scaling-resources/cron-federated-hpa-v1alpha1#cronfederatedhpa)): OK
+
+201 ([CronFederatedHPA](../auto-scaling-resources/cron-federated-hpa-v1alpha1#cronfederatedhpa)): Created
+
+202 ([CronFederatedHPA](../auto-scaling-resources/cron-federated-hpa-v1alpha1#cronfederatedhpa)): Accepted
+
+### `update`:更新指定的 CronFederatedHPA
+
+#### HTTP 请求
+
+PUT /apis/autoscaling.karmada.io/v1alpha1/namespaces/{namespace}/cronfederatedhpas/{name}
+
+#### 参数
+
+- **名称**(*路径参数*):string,必选
+
+ CronFederatedHPA 的名称
+
+- **namespace**(*路径参数*):string,必选
+
+ [namespace](../common-parameter/common-parameters#namespace)
+
+- **body**: [CronFederatedHPA](../auto-scaling-resources/cron-federated-hpa-v1alpha1#cronfederatedhpa),必选
+
+
+
+- **dryRun**(*查询参数*):string
+
+ [dryRun](../common-parameter/common-parameters#dryrun)
+
+- **fieldManager**(*查询参数*):string
+
+ [fieldManager](../common-parameter/common-parameters#fieldmanager)
+
+- **fieldValidation**(*查询参数*):string
+
+ [fieldValidation](../common-parameter/common-parameters#fieldvalidation)
+
+- **pretty**(*查询参数*):string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+#### 响应
+
+200 ([CronFederatedHPA](../auto-scaling-resources/cron-federated-hpa-v1alpha1#cronfederatedhpa)): OK
+
+201 ([CronFederatedHPA](../auto-scaling-resources/cron-federated-hpa-v1alpha1#cronfederatedhpa)): Created
+
+### `update`:更新指定 CronFederatedHPA 的状态
+
+#### HTTP 请求
+
+PUT /apis/autoscaling.karmada.io/v1alpha1/namespaces/{namespace}/cronfederatedhpas/{name}/status
+
+#### 参数
+
+- **名称**(*路径参数*):string,必选
+
+ CronFederatedHPA 的名称
+
+- **namespace**(*路径参数*):string,必选
+
+ [namespace](../common-parameter/common-parameters#namespace)
+
+- **body**: [CronFederatedHPA](../auto-scaling-resources/cron-federated-hpa-v1alpha1#cronfederatedhpa),必选
+
+
+
+- **dryRun**(*查询参数*):string
+
+ [dryRun](../common-parameter/common-parameters#dryrun)
+
+- **fieldManager**(*查询参数*):string
+
+ [fieldManager](../common-parameter/common-parameters#fieldmanager)
+
+- **fieldValidation**(*查询参数*):string
+
+ [fieldValidation](../common-parameter/common-parameters#fieldvalidation)
+
+- **pretty**(*查询参数*):string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+#### 响应
+
+200 ([CronFederatedHPA](../auto-scaling-resources/cron-federated-hpa-v1alpha1#cronfederatedhpa)): OK
+
+201 ([CronFederatedHPA](../auto-scaling-resources/cron-federated-hpa-v1alpha1#cronfederatedhpa)): Created
+
+### `patch`:更新指定 CronFederatedHPA 的部分信息
+
+#### HTTP 请求
+
+PATCH /apis/autoscaling.karmada.io/v1alpha1/namespaces/{namespace}/cronfederatedhpas/{name}
+
+#### 参数
+
+- **名称**(*路径参数*):string,必选
+
+ CronFederatedHPA 的名称
+
+- **namespace**(*路径参数*):string,必选
+
+ [namespace](../common-parameter/common-parameters#namespace)
+
+- **body**: [Patch](../common-definitions/patch#patch),必选
+
+
+
+- **dryRun**(*查询参数*):string
+
+ [dryRun](../common-parameter/common-parameters#dryrun)
+
+- **fieldManager**(*查询参数*):string
+
+ [fieldManager](../common-parameter/common-parameters#fieldmanager)
+
+- **fieldValidation**(*查询参数*):string
+
+ [fieldValidation](../common-parameter/common-parameters#fieldvalidation)
+
+- **force**(*查询参数*):boolean
+
+ [force](../common-parameter/common-parameters#force)
+
+- **pretty**(*查询参数*):string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+#### 响应
+
+200 ([CronFederatedHPA](../auto-scaling-resources/cron-federated-hpa-v1alpha1#cronfederatedhpa)): OK
+
+201 ([CronFederatedHPA](../auto-scaling-resources/cron-federated-hpa-v1alpha1#cronfederatedhpa)): Created
+
+### `patch`:更新指定 CronFederatedHPA 状态的部分信息
+
+#### HTTP 请求
+
+PATCH /apis/autoscaling.karmada.io/v1alpha1/namespaces/{namespace}/cronfederatedhpas/{name}/status
+
+#### 参数
+
+- **名称**(*路径参数*):string,必选
+
+ CronFederatedHPA 的名称
+
+- **namespace**(*路径参数*):string,必选
+
+ [namespace](../common-parameter/common-parameters#namespace)
+
+- **body**: [Patch](../common-definitions/patch#patch),必选
+
+
+
+- **dryRun**(*查询参数*):string
+
+ [dryRun](../common-parameter/common-parameters#dryrun)
+
+- **fieldManager**(*查询参数*):string
+
+ [fieldManager](../common-parameter/common-parameters#fieldmanager)
+
+- **fieldValidation**(*查询参数*):string
+
+ [fieldValidation](../common-parameter/common-parameters#fieldvalidation)
+
+- **force**(*查询参数*):boolean
+
+ [force](../common-parameter/common-parameters#force)
+
+- **pretty**(*查询参数*):string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+#### 响应
+
+200 ([CronFederatedHPA](../auto-scaling-resources/cron-federated-hpa-v1alpha1#cronfederatedhpa)): OK
+
+201 ([CronFederatedHPA](../auto-scaling-resources/cron-federated-hpa-v1alpha1#cronfederatedhpa)): Created
+
+### `delete`:删除一个 CronFederatedHPA
+
+#### HTTP 请求
+
+DELETE /apis/autoscaling.karmada.io/v1alpha1/namespaces/{namespace}/cronfederatedhpas/{name}
+
+#### 参数
+
+- **名称**(*路径参数*):string,必选
+
+ CronFederatedHPA 的名称
+
+- **namespace**(*路径参数*):string,必选
+
+ [namespace](../common-parameter/common-parameters#namespace)
+
+- **body**: [DeleteOptions](../common-definitions/delete-options#deleteoptions)
+
+
+
+- **dryRun**(*查询参数*):string
+
+ [dryRun](../common-parameter/common-parameters#dryrun)
+
+- **gracePeriodSeconds**(*查询参数*):integer
+
+ [gracePeriodSeconds](../common-parameter/common-parameters#graceperiodseconds)
+
+- **pretty**(*查询参数*):string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+- **propagationPolicy**(*查询参数*):string
+
+ [propagationPolicy](../common-parameter/common-parameters#propagationpolicy)
+
+#### 响应
+
+200 ([Status](../common-definitions/status#status)): OK
+
+202 ([Status](../common-definitions/status#status)): Accepted
+
+### `deletecollection`:删除指定命名空间内的所有 CronFederatedHPA
+
+#### HTTP 请求
+
+DELETE /apis/autoscaling.karmada.io/v1alpha1/namespaces/{namespace}/cronfederatedhpas
+
+#### 参数
+
+- **namespace**(*路径参数*):string,必选
+
+ [namespace](../common-parameter/common-parameters#namespace)
+
+- **body**: [DeleteOptions](../common-definitions/delete-options#deleteoptions)
+
+
+
+- **continue**(*查询参数*):string
+
+ [continue](../common-parameter/common-parameters#continue)
+
+- **dryRun**(*查询参数*):string
+
+ [dryRun](../common-parameter/common-parameters#dryrun)
+
+- **fieldSelector**(*查询参数*):string
+
+ [fieldSelector](../common-parameter/common-parameters#fieldselector)
+
+- **gracePeriodSeconds**(*查询参数*):integer
+
+ [gracePeriodSeconds](../common-parameter/common-parameters#graceperiodseconds)
+
+- **labelSelector**(*查询参数*):string
+
+ [labelSelector](../common-parameter/common-parameters#labelselector)
+
+- **limit**(*查询参数*):integer
+
+ [limit](../common-parameter/common-parameters#limit)
+
+- **pretty**(*查询参数*):string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+- **propagationPolicy**(*查询参数*):string
+
+ [propagationPolicy](../common-parameter/common-parameters#propagationpolicy)
+
+- **resourceVersion**(*查询参数*):string
+
+ [resourceVersion](../common-parameter/common-parameters#resourceversion)
+
+- **resourceVersionMatch**(*查询参数*):string
+
+ [resourceVersionMatch](../common-parameter/common-parameters#resourceversionmatch)
+
+- **sendInitialEvents**(*查询参数*):boolean
+
+ [sendInitialEvents](../common-parameter/common-parameters#sendinitialevents)
+
+- **timeoutSeconds**(*查询参数*):integer
+
+ [timeoutSeconds](../common-parameter/common-parameters#timeoutseconds)
+
+#### 响应
+
+200 ([Status](../common-definitions/status#status)): OK
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/karmada-api/auto-scaling-resources/federated-hpa-v1alpha1.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/karmada-api/auto-scaling-resources/federated-hpa-v1alpha1.md
new file mode 100644
index 000000000..acfaae9e1
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/karmada-api/auto-scaling-resources/federated-hpa-v1alpha1.md
@@ -0,0 +1,1214 @@
+---
+api_metadata:
+ apiVersion: "autoscaling.karmada.io/v1alpha1"
+ import: "github.com/karmada-io/karmada/pkg/apis/autoscaling/v1alpha1"
+ kind: "FederatedHPA"
+content_type: "api_reference"
+description: "FederatedHPA is centralized HPA that can aggregate the metrics in multiple clusters."
+title: "FederatedHPA v1alpha1"
+weight: 1
+auto_generated: true
+---
+
+[//]: # (The file is auto-generated from the Go source code of the component using a generic generator,)
+[//]: # (which is forked from [reference-docs](https://github.com/kubernetes-sigs/reference-docs.)
+[//]: # (To update the reference content, please follow the `reference-api.sh`.)
+
+`apiVersion: autoscaling.karmada.io/v1alpha1`
+
+`import "github.com/karmada-io/karmada/pkg/apis/autoscaling/v1alpha1"`
+
+## FederatedHPA
+
+FederatedHPA 是一个可以聚合多个集群指标的 HPA。当系统负载增加时,它会从多个集群查询指标,并增加副本。当系统负载减少时,它会从多个集群查询指标,并减少副本。副本增加或减少后,karmada-scheduler 将根据策略调度副本。
+
+
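+作为参考,下面先给出一个最小的 FederatedHPA 示例(仅作示意:其中的 Deployment 名称 `nginx`、副本数与利用率阈值均为假设值)。FederatedHPA 的 spec 字段与 Kubernetes `autoscaling/v2` 的 HorizontalPodAutoscaler 基本一致:
+
+```
+# 假设当前 kubeconfig 已指向 karmada-apiserver
+kubectl apply -f - <<EOF
+apiVersion: autoscaling.karmada.io/v1alpha1
+kind: FederatedHPA
+metadata:
+  name: nginx-fhpa
+  namespace: default
+spec:
+  scaleTargetRef:
+    apiVersion: apps/v1
+    kind: Deployment
+    name: nginx
+  minReplicas: 1
+  maxReplicas: 10
+  metrics:
+    - type: Resource
+      resource:
+        name: cpu
+        target:
+          type: Utilization
+          averageUtilization: 80    # 目标 CPU 平均利用率 80%
+EOF
+```
+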
+
+- **apiVersion**: autoscaling.karmada.io/v1alpha1
+
+- **kind**: FederatedHPA
+
+- **metadata** ([ObjectMeta](../common-definitions/object-meta#objectmeta))
+
+- **spec** ([FederatedHPASpec](../auto-scaling-resources/federated-hpa-v1alpha1#federatedhpaspec)),必选
+
+ Spec 表示 FederatedHPA 的规范。
+
+- **status** (HorizontalPodAutoscalerStatus)
+
+ Status 是 FederatedHPA 当前的状态。
+
+
+
+ *HorizontalPodAutoscalerStatus 描述 pod 水平伸缩器当前的状态。*
+
+ - **status.desiredReplicas** (int32),必选
+
+ desiredReplicas 是此自动伸缩器最后一次计算得出的、其所管理的 pod 的期望副本数量。
+
+ - **status.conditions** ([]HorizontalPodAutoscalerCondition)
+
+ *补丁策略:根据键 `type` 进行合并。*
+
+ *Map:合并过程中将保留键 `type` 的唯一值。*
+
+ conditions 是自动伸缩器伸缩其目标所需的状况,并表明是否满足这些状况。
+
+
+
+ *HorizontalPodAutoscalerCondition 描述 HorizontalPodAutoscaler 在特定时刻的状态。*
+
+ - **status.conditions.status** (string),必选
+
+ status 表示状况的状态(True、False 和 Unknown)。
+
+ - **status.conditions.type** (string),必选
+
+ type 描述当前的状况。
+
+ - **status.conditions.lastTransitionTime** (Time)
+
+ lastTransitionTime 是状况最后一次从一种状态转换到另一种状态的时间。
+
+
+
+ *Time 是 time.Time 的包装器,它支持对 YAML 和 JSON 的正确编组。time 包的许多工厂方法提供了包装器。*
+
+ - **status.conditions.message** (string)
+
+ message 解释了有关状态转换的细节(人类可读消息)。
+
+ - **status.conditions.reason** (string)
+
+ reason 是状况(Condition)最后一次转换的原因。
+
+ - **status.currentMetrics** ([]MetricStatus)
+
+ *Atomic:将在合并过程中被替换掉。*
+
+ currentMetrics 是自动伸缩器所用指标最后读取的状态。
+
+
+
+ *MetricStatus 描述单个指标最后读取的状态。*
+
+ - **status.currentMetrics.type** (string),必选
+
+ type 表示指标源的类别。指标源可能是 ContainerResource、External、Object、Pods 或 Resource,均对应对象中的一个匹配字段。注意:ContainerResource 只有在特性开关 HPAContainerMetrics 启用时可用。
+
+ - **status.currentMetrics.containerResource** (ContainerResourceMetricStatus)
+
+ 容器资源是 Kubernetes 已知的资源指标(如 request 与 limit),用于描述当前伸缩目标(如 CPU 或内存)中每个 pod 中单个容器的资源使用情况。这类指标是 Kubernetes 内置指标,除了使用 Pods 源的 pod 粒度的正常指标以外,还有一些特殊的伸缩选项。
+
+
+
+ *ContainerResourceMetricStatus 表示 Kubernetes 已知的资源指标的当前值,如 request 与 limit,描述当前伸缩目标(如 CPU 或内存)中每个 Pod 中的单个容器的资源使用情况。这类指标是 Kubernetes 内置指标,除了使用 Pods 源的 pod 粒度的正常指标以外,还有一些特殊的伸缩选项。*
+
+ - **status.currentMetrics.containerResource.container** (string),必选
+
+ container 是伸缩目标的 pod 中容器的名称。
+
+ - **status.currentMetrics.containerResource.current** (MetricValueStatus),必选
+
+ current 是给定指标的当前值。
+
+
+
+ *MetricValueStatus 表示指标的当前值。*
+
+ - **status.currentMetrics.containerResource.current.averageUtilization** (int32)
+
+ currentAverageUtilization 是所有相关 pod 中资源指标当前的平均值,表示为 pod 请求的资源值的百分比。
+
+ - **status.currentMetrics.containerResource.current.averageValue** ([Quantity](../common-definitions/quantity#quantity))
+
+ averageValue 是所有相关 pod 中资源指标当前的平均值(数量)。
+
+ - **status.currentMetrics.containerResource.current.value** ([Quantity](../common-definitions/quantity#quantity))
+
+ value 是指标的当前值(数量)。
+
+ - **status.currentMetrics.containerResource.name** (string),必选
+
+ name 是伸缩资源的名称。
+
+ - **status.currentMetrics.external** (ExternalMetricStatus)
+
+ external 是指不与任何 Kubernetes 对象关联的全局指标。它允许根据集群外运行的组件的信息(例如,云消息传递服务中的队列长度,或集群外运行的负载平衡器的 QPS)进行自动伸缩。
+
+
+
+ *ExternalMetricStatus 表示与任何 Kubernetes 对象无关的全局指标的当前值。*
+
+ - **status.currentMetrics.external.current** (MetricValueStatus),必选
+
+ current 是给定指标的当前值。
+
+
+
+ *MetricValueStatus 表示指标的当前值。*
+
+ - **status.currentMetrics.external.current.averageUtilization** (int32)
+
+ currentAverageUtilization 是所有相关 pod 中资源指标当前的平均值,表示为 pod 请求的资源值的百分比。
+
+ - **status.currentMetrics.external.current.averageValue** ([Quantity](../common-definitions/quantity#quantity))
+
+ averageValue 是所有相关 pod 中资源指标当前的平均值。
+
+ - **status.currentMetrics.external.current.value** ([Quantity](../common-definitions/quantity#quantity))
+
+ value 是指标的当前值(数量)。
+
+ - **status.currentMetrics.external.metric** (MetricIdentifier),必选
+
+ metric 通过名称和选择器标识目标指标。
+
+
+
+ *MetricIdentifier 定义指标的名称和可选的选择器。*
+
+ - **status.currentMetrics.external.metric.name** (string),必选
+
+ name 是给定指标的名称。
+
+ - **status.currentMetrics.external.metric.selector** ([LabelSelector](../common-definitions/label-selector#labelselector))
+
+ selector 是给定指标的标准 Kubernetes 标签选择器的字符串编码形式。如果设置,将作为附加参数传递给指标服务器,以实现更具体的指标范围。如果未设置,只使用 metricName 收集指标。
+
+ - **status.currentMetrics.object** (ObjectMetricStatus)
+
+ object 是描述单个 Kubernetes 对象的指标(例如,Ingress 对象每秒的点击量)。
+
+
+
+ *ObjectMetricStatus 是 Kubernetes 对象指标(例如,Ingress对象每秒的点击量)的当前值。*
+
+ - **status.currentMetrics.object.current** (MetricValueStatus),必选
+
+ current 是给定指标的当前值。
+
+
+
+ *MetricValueStatus 表示指标的当前值。*
+
+ - **status.currentMetrics.object.current.averageUtilization** (int32)
+
+ currentAverageUtilization 是所有相关 pod 中资源指标当前的平均值,表示为 pod 请求的资源值的百分比。
+
+ - **status.currentMetrics.object.current.averageValue** ([Quantity](../common-definitions/quantity#quantity))
+
+ averageValue 是所有相关 pod 中资源指标当前的平均值。
+
+ - **status.currentMetrics.object.current.value** ([Quantity](../common-definitions/quantity#quantity))
+
+ value 是指标的当前值(数量)。
+
+ - **status.currentMetrics.object.describedObject** (CrossVersionObjectReference),必选
+
+ DescribedObject 是对象的描述,如类别、名称和 apiVersion。
+
+
+
+ *CrossVersionObjectReference 包含可以识别被引用资源的足够信息。*
+
+ - **status.currentMetrics.object.describedObject.kind** (string),必选
+
+ kind 表示引用资源的类别。更多信息,请浏览 https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
+
+ - **status.currentMetrics.object.describedObject.name** (string),必选
+
+ name 表示引用资源的名称。更多信息,请浏览 https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
+
+ - **status.currentMetrics.object.describedObject.apiVersion** (string)
+
+ apiVersion 是引用资源的 API 版本。
+
+ - **status.currentMetrics.object.metric** (MetricIdentifier),必选
+
+ metric 通过名称和选择器标识目标指标。
+
+
+
+ *MetricIdentifier 定义指标的名称和可选的选择器。*
+
+ - **status.currentMetrics.object.metric.name** (string),必选
+
+ name 是给定指标的名称。
+
+ - **status.currentMetrics.object.metric.selector** ([LabelSelector](../common-definitions/label-selector#labelselector))
+
+ selector 是给定指标的标准 Kubernetes 标签选择器的字符串编码形式。如果设置,将作为附加参数传递给指标服务器,以实现更具体的指标范围。如果未设置,只使用 metricName 收集指标。
+
+ - **status.currentMetrics.pods** (PodsMetricStatus)
+
+ pods 是指描述当前伸缩目标中每个 pod 的指标(例如,每秒处理的事务)。在与目标值进行比较之前,会将所有指标值进行平均。
+
+
+
+ *PodsMetricStatus 表示描述当前伸缩目标中每个 pod 的指标的当前值(例如,每秒处理的事务)。*
+
+ - **status.currentMetrics.pods.current** (MetricValueStatus),必选
+
+ current 是给定指标的当前值。
+
+
+
+ *MetricValueStatus 表示指标的当前值。*
+
+ - **status.currentMetrics.pods.current.averageUtilization** (int32)
+
+ currentAverageUtilization 是所有相关 pod 中资源指标当前的平均值,表示为 pod 请求的资源值的百分比。
+
+ - **status.currentMetrics.pods.current.averageValue** ([Quantity](../common-definitions/quantity#quantity))
+
+ averageValue 是所有相关 pod 中资源指标当前的平均值。
+
+ - **status.currentMetrics.pods.current.value** ([Quantity](../common-definitions/quantity#quantity))
+
+ value 是指标的当前值(数量)。
+
+ - **status.currentMetrics.pods.metric** (MetricIdentifier),必选
+
+ metric 通过名称和选择器标识目标指标。
+
+
+
+ *MetricIdentifier 定义指标的名称和可选的选择器。*
+
+ - **status.currentMetrics.pods.metric.name** (string),必选
+
+ name 是给定指标的名称。
+
+ - **status.currentMetrics.pods.metric.selector** ([LabelSelector](../common-definitions/label-selector#labelselector))
+
+ selector 是给定指标的标准 Kubernetes 标签选择器的字符串编码形式。如果设置,将作为附加参数传递给指标服务器,以实现更具体的指标范围。如果未设置,只使用 metricName 收集指标。
+
+ - **status.currentMetrics.resource** (ResourceMetricStatus)
+
+ resource 表示 Kubernetes 已知的资源指标(如 request 与 limit),用于描述当前伸缩目标(如 CPU 或内存)中每个 pod 的资源使用情况。这类指标是 Kubernetes 内置指标,除了使用 Pods 源的 pod 粒度的正常指标以外,还有一些特殊的伸缩选项。
+
+
+
+ *ResourceMetricStatus 表示 Kubernetes 已知的资源指标的当前值,如 request 与 limit,描述当前伸缩目标(如 CPU 或内存)中每个 pod 的资源使用情况。这类指标是 Kubernetes 内置指标,除了使用 Pods 源的 pod 粒度的正常指标以外,还有一些特殊的伸缩选项。*
+
+ - **status.currentMetrics.resource.current** (MetricValueStatus),必选
+
+ current 是给定指标的当前值。
+
+
+
+ *MetricValueStatus 表示指标的当前值。*
+
+ - **status.currentMetrics.resource.current.averageUtilization** (int32)
+
+ currentAverageUtilization 是所有相关 pod 中资源指标当前的平均值,表示为 pod 请求的资源值的百分比。
+
+ - **status.currentMetrics.resource.current.averageValue** ([Quantity](../common-definitions/quantity#quantity))
+
+ averageValue 是所有相关 pod 中资源指标当前的平均值。
+
+ - **status.currentMetrics.resource.current.value** ([Quantity](../common-definitions/quantity#quantity))
+
+ value 是指标的当前值(数量)。
+
+ - **status.currentMetrics.resource.name** (string),必选
+
+ name 是伸缩资源的名称。
+
+ - **status.currentReplicas** (int32)
+
+ currentReplicas 是指从自动伸缩器上次计算后,其所管理的 pod 当前的副本数。
+
+ - **status.lastScaleTime** (Time)
+
+ lastScaleTime 是 HorizontalPodAutoscaler 最后一次伸缩 pod 的时间,自动伸缩器用此控制 pod 数量更改的频率。
+
+
+
+ *Time 是 time.Time 的包装器,它支持对 YAML 和 JSON 的正确编组。time 包的许多工厂方法提供了包装器。*
+
+ - **status.observedGeneration** (int64)
+
+ observedGeneration 是此自动伸缩器观察到的最新一代。
+
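+下面给出一个 status.currentMetrics 片段的示意(仅为示例:其中的数值、时间戳以及 `type: Resource` 判别字段均为假设,实际内容由自动伸缩器填充):
+
+```
+status:
+  observedGeneration: 2
+  currentReplicas: 3
+  lastScaleTime: "2024-01-01T08:00:00Z"
+  currentMetrics:
+  - type: Resource                 # 假设的判别字段,表示这是一个 Resource 指标
+    resource:
+      name: cpu
+      current:
+        averageUtilization: 60     # 所有相关 pod 的平均利用率(请求值的百分比)
+        averageValue: 120m         # 对应的平均值(数量)
+```
+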
+## FederatedHPASpec
+
+FederatedHPASpec 描述了 FederatedHPA 的所需功能。
+
+
+
+- **maxReplicas** (int32),必选
+
+ MaxReplicas 是自动伸缩器可增加的副本量的上限。它不能小于 minReplicas。
+
+- **scaleTargetRef** (CrossVersionObjectReference),必选
+
+  ScaleTargetRef 指向要伸缩的目标资源,用于收集 pod 的指标,以及实际更改副本的数量。
+
+
+
+ *CrossVersionObjectReference 包含可以识别被引用资源的足够信息。*
+
+ - **scaleTargetRef.kind** (string),必选
+
+ kind 表示引用资源的类别。更多信息,请浏览 https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
+
+ - **scaleTargetRef.name** (string),必选
+
+ name 表示引用资源的名称。更多信息,请浏览 https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
+
+ - **scaleTargetRef.apiVersion** (string)
+
+ apiVersion 是引用资源的API版本。
+
+- **behavior** (HorizontalPodAutoscalerBehavior)
+
+ Behavior 表示目标的伸缩行为(scaleUp 或 scaleDown)。如果未设置,则使用默认的 HPAScalingRules 完成伸缩。
+
+
+
+ *HorizontalPodAutoscalerBehavior 表示目标的伸缩行为(scaleUp 或 scaleDown)。*
+
+ - **behavior.scaleDown** (HPAScalingRules)
+
+    scaleDown 是用于缩容的伸缩策略。如果未设置,默认允许缩容至 minReplicas 个 pod,稳定窗口为 300 秒(即使用过去 300 秒内的最高建议值)。
+
+
+
+ *HPAScalingRules 表示一个方向伸缩行为。在根据 HPA 的指标计算 DesiredReplicas 后应用这些规则。可以通过指定伸缩策略来限制伸缩速度,也可以通过指定稳定窗口来防止抖动,这样就不会立即设置副本的数量,而是选择稳定窗口中最安全的值。*
+
+ - **behavior.scaleDown.policies** ([]HPAScalingPolicy)
+
+ *Atomic:将在合并过程中被替换掉。*
+
+ policies 罗列伸缩过程中可用的伸缩策略。必须至少指定一条策略,否则 HPAScalingRules 将被视为无效而丢弃。
+
+
+
+ *HPAScalingPolicy 表示单条策略,在指定的过去间隔内取值必须为 true。*
+
+ - **behavior.scaleDown.policies.periodSeconds** (int32),必选
+
+ periodSeconds 表示策略取值为 true 的时间窗口。periodSeconds 必须大于 0,且小于或等于 1800 秒(30 分钟)。
+
+ - **behavior.scaleDown.policies.type** (string),必选
+
+ type 用于指定伸缩策略。
+
+ - **behavior.scaleDown.policies.value** (int32),必选
+
+ value 包含策略允许的变化数量。取值必须大于 0。
+
+ - **behavior.scaleDown.selectPolicy** (string)
+
+ selectPolicy 用于指定应使用的策略。如果未设置,则使用默认值 Max。
+
+ - **behavior.scaleDown.stabilizationWindowSeconds** (int32)
+
+    stabilizationWindowSeconds 是指伸缩时应考虑之前建议的秒数。取值必须大于等于 0 且小于等于 3600 秒(即一个小时)。如果未设置,则使用默认值:扩容为 0(不设置稳定窗口);缩容为 300(即稳定窗口为 300 秒)。
+
+ - **behavior.scaleUp** (HPAScalingRules)
+
+    scaleUp 是用于扩容的伸缩策略。如果未设置,默认值取以下两者中的较高值:
+    * 每 60 秒增加不超过 4 个 pod
+    * 每 60 秒 pod 数量翻倍
+    不使用稳定窗口。
+
+
+
+ *HPAScalingRules 表示一个方向的伸缩行为。在根据 HPA 的指标计算 DesiredReplicas 后应用这些规则。可以通过指定伸缩策略来限制伸缩速度,也可以通过指定稳定窗口来防止抖动,这样就不会立即设置副本的数量,而是选择稳定窗口中最安全的值。*
+
+ - **behavior.scaleUp.policies** ([]HPAScalingPolicy)
+
+ *Atomic:将在合并过程中被替换掉。*
+
+ policies 罗列伸缩过程中可用的伸缩策略。必须至少指定一条策略,否则 HPAScalingRules 将被视为无效而丢弃。
+
+
+
+ *HPAScalingPolicy 表示单条策略,在指定的过去间隔内取值必须为 true。*
+
+ - **behavior.scaleUp.policies.periodSeconds** (int32),必选
+
+ periodSeconds 表示策略取值为 true 的时间窗口。periodSeconds 必须大于 0,且小于或等于 1800 秒(即30 分钟)。
+
+ - **behavior.scaleUp.policies.type** (string),必选
+
+ type 用于指定伸缩策略。
+
+ - **behavior.scaleUp.policies.value** (int32),必选
+
+ value 包含策略允许的伸缩数量。取值必须大于 0。
+
+ - **behavior.scaleUp.selectPolicy** (string)
+
+ selectPolicy 用于指定应使用的策略。如果未设置,则使用默认值 Max。
+
+ - **behavior.scaleUp.stabilizationWindowSeconds** (int32)
+
+    stabilizationWindowSeconds 是指伸缩时应考虑之前建议的秒数。取值必须大于等于 0 且小于等于 3600 秒(即一个小时)。如果未设置,则使用默认值:扩容为 0(不设置稳定窗口);缩容为 300(即稳定窗口为 300 秒)。
+
+- **metrics** ([]MetricSpec)
+
+ Metrics 包含用于计算所需副本数的规范(将使用所有指标中的最大副本数)。所需的副本数是目标值和当前值之间的比率与当前 pod 数的乘积。因此,指标必须随 pod 数的增加而减少,反之亦然。有关每种类型的指标源详细信息,参见各个指标源类型。如果未设置,默认指标为平均 CPU 利用率的 80%。
+
+
+
+ *MetricSpec 是如何基于单个指标进行伸缩的规范(每次只应设置* *`type`* *和一个其他匹配字段)。*
+
+ - **metrics.type** (string),必选
+
+ type 表示指标源的类别。指标源类别可以是 ContainerResource、External、Object、Pods 或 Resource,每个类别映射对象中的一个对应字段。注意:ContainerResource 只有在特性开关 HPAContainerMetrics 启用时可用。
+
+ - **metrics.containerResource** (ContainerResourceMetricSource)
+
+ containerResource 表示 Kubernetes 已知的资源指标(如 request 与 limit),用于描述当前伸缩目标(如 CPU 或内存)中每个 pod 中单个容器的资源使用情况。这类指标是 Kubernetes 内置指标,除了使用 Pods 源的 pod 粒度的正常指标以外,还有一些特殊的伸缩选项。这是一个 alpha 特性,可以通过 HPAContainerMetrics 特性标志启用。
+
+
+
+ *ContainerResourceMetricSource 表明在 Kubernetes 已知资源指标(如 request 与 limit)的基础上进行伸缩的方式,该指标描述当前伸缩目标(例如CPU或内存)中每个Pod的资源使用情况。在与目标值进行比较之前,会将所有指标值进行平均。这类指标是 Kubernetes 内置指标,除了使用 Pods 源的 pod 粒度的正常指标以外,还有一些特殊的伸缩选项。只应设置一种 “target” 类别。*
+
+ - **metrics.containerResource.container** (string),必选
+
+ container 是伸缩目标的 pod 中容器的名称。
+
+ - **metrics.containerResource.name** (string),必选
+
+ name 是伸缩资源的名称。
+
+ - **metrics.containerResource.target** (MetricTarget),必选
+
+      target 是给定指标的目标值。
+
+
+
+ *MetricTarget 定义特定指标的目标值、平均值或平均利用率。*
+
+ - **metrics.containerResource.target.type** (string),必选
+
+ type 表示指标类型:利用率(Utilization)、值(Value)和平均值(AverageValue)。
+
+ - **metrics.containerResource.target.averageUtilization** (int32)
+
+ averageUtilization 是所有相关 pod 中资源指标均值的目标值,表示为 pod 资源请求值的百分比。目前仅对 Resource 指标源类别有效。
+
+ - **metrics.containerResource.target.averageValue** ([Quantity](../common-definitions/quantity#quantity))
+
+ averageValue 是所有相关 pod 中资源指标均值的目标值 (数量)。
+
+ - **metrics.containerResource.target.value** ([Quantity](../common-definitions/quantity#quantity))
+
+ value 是指标的目标值(数量)。
+
+ - **metrics.external** (ExternalMetricSource)
+
+ external 是指不与任何 Kubernetes 对象关联的全局指标。它允许根据集群外运行的组件的信息(例如,云消息传递服务中的队列长度,或集群外运行的负载平衡器的 QPS)进行自动伸缩。
+
+
+
+ *ExternalMetricSource 表示基于任何与 Kubernetes 对象无关的指标(例如,云消息传递服务中的队列长度,或集群外的负载平衡器的QPS)进行伸缩的方式。*
+
+ - **metrics.external.metric** (MetricIdentifier),必选
+
+ metric 通过名称和选择器标识目标指标。
+
+
+
+ *MetricIdentifier 定义指标的名称和可选的选择器。*
+
+ - **metrics.external.metric.name** (string),必选
+
+ name 是给定指标的名称。
+
+ - **metrics.external.metric.selector** ([LabelSelector](../common-definitions/label-selector#labelselector))
+
+ selector 是给定指标的标准 Kubernetes 标签选择器的字符串编码形式。如果设置,将作为附加参数传递给指标服务器,以实现更具体的指标范围。如果未设置,只使用 metricName 收集指标。
+
+ - **metrics.external.target** (MetricTarget),必选
+
+ target 是给定指标的目标值。
+
+
+
+ *MetricTarget 定义特定指标的目标值、平均值或平均利用率。*
+
+ - **metrics.external.target.type** (string),必选
+
+ type 表示指标类型:利用率(Utilization)、值(Value)和平均值(AverageValue)。
+
+ - **metrics.external.target.averageUtilization** (int32)
+
+ averageUtilization 是所有相关 pod 中资源指标均值的目标值,表示为 pod 资源请求值的百分比。目前仅对 Resource 指标源类别有效。
+
+ - **metrics.external.target.averageValue** ([Quantity](../common-definitions/quantity#quantity))
+
+ averageValue 是所有相关 pod 中资源指标均值的目标值 (数量)。
+
+ - **metrics.external.target.value** ([Quantity](../common-definitions/quantity#quantity))
+
+ value 是指标的目标值(数量)。
+
+ - **metrics.object** (ObjectMetricSource)
+
+ object 是描述单个 Kubernetes 对象的指标(例如,Ingress 对象每秒的点击量)。
+
+
+
+ *ObjectMetricSource 是 Kubernetes 对象(例如,Ingress对象每秒的点击量)指标的伸缩方式。*
+
+ - **metrics.object.describedObject** (CrossVersionObjectReference),必选
+
+ DescribedObject 是对象的描述,如类别、名称和 apiVersion。
+
+
+
+ *CrossVersionObjectReference 包含可以识别被引用资源的足够信息。*
+
+ - **metrics.object.describedObject.kind** (string),必选
+
+ kind 表示被引用资源的类别。更多信息,请浏览 https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
+
+ - **metrics.object.describedObject.name** (string),必选
+
+ name 表示被引用资源的名称。更多信息,请浏览 https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
+
+ - **metrics.object.describedObject.apiVersion** (string)
+
+ apiVersion 是被引用资源的API版本。
+
+ - **metrics.object.metric** (MetricIdentifier),必选
+
+ metric 通过名称和选择器标识目标指标。
+
+
+
+ *MetricIdentifier 定义指标的名称和可选的选择器。*
+
+ - **metrics.object.metric.name** (string),必选
+
+ name 是给定指标的名称。
+
+ - **metrics.object.metric.selector** ([LabelSelector](../common-definitions/label-selector#labelselector))
+
+ selector 是给定指标的标准 Kubernetes 标签选择器的字符串编码形式。如果设置,将作为附加参数传递给指标服务器,以实现更具体的指标范围。如果未设置,只使用 metricName 收集指标。
+
+ - **metrics.object.target** (MetricTarget),必选
+
+ target 是给定指标的目标值。
+
+
+
+ *MetricTarget 定义特定指标的目标值、平均值或平均利用率。*
+
+ - **metrics.object.target.type** (string),必选
+
+ type 表示指标类型:利用率(Utilization)、值(Value)和平均值(AverageValue)。
+
+ - **metrics.object.target.averageUtilization** (int32)
+
+ averageUtilization 是所有相关 pod 中资源指标均值的目标值,表示为 pod 资源请求值的百分比。目前仅对 Resource 指标源类别有效。
+
+ - **metrics.object.target.averageValue** ([Quantity](../common-definitions/quantity#quantity))
+
+ averageValue 是所有相关 pod 中资源指标均值的目标值 (数量)。
+
+ - **metrics.object.target.value** ([Quantity](../common-definitions/quantity#quantity))
+
+ value 是指标的目标值(数量)。
+
+ - **metrics.pods** (PodsMetricSource)
+
+ pods 是指描述当前伸缩目标(例如,每秒处理的事务)中每个 pod 的指标。在与目标值进行比较之前,会将所有指标值进行平均。
+
+
+
+    *PodsMetricSource 表示根据描述当前伸缩目标中每个 pod 的指标(例如,每秒处理的事务)进行伸缩的方式。在与目标值进行比较之前,会将所有指标值进行平均。*
+
+ - **metrics.pods.metric** (MetricIdentifier),必选
+
+ metric 通过名称和选择器标识目标指标。
+
+
+
+ *MetricIdentifier 定义指标的名称和可选的选择器。*
+
+ - **metrics.pods.metric.name** (string),必选
+
+ name 是给定指标的名称。
+
+ - **metrics.pods.metric.selector** ([LabelSelector](../common-definitions/label-selector#labelselector))
+
+ selector 是给定指标的标准 Kubernetes 标签选择器的字符串编码形式。如果设置,将作为附加参数传递给指标服务器,以实现更具体的指标范围。如果未设置,只使用 metricName 收集指标。
+
+ - **metrics.pods.target** (MetricTarget),必选
+
+ target 是给定指标的目标值。
+
+
+
+ *MetricTarget 定义特定指标的目标值、平均值或平均利用率。*
+
+ - **metrics.pods.target.type** (string),必选
+
+ type 表示指标类型:利用率(Utilization)、值(Value)和平均值(AverageValue)。
+
+ - **metrics.pods.target.averageUtilization** (int32)
+
+ averageUtilization 是所有相关 pod 中资源指标均值的目标值,表示为 pod 资源请求值的百分比。目前仅对 Resource 指标源类别有效。
+
+ - **metrics.pods.target.averageValue** ([Quantity](../common-definitions/quantity#quantity))
+
+ averageValue 是所有相关 pod 中资源指标均值的目标值 (数量)。
+
+ - **metrics.pods.target.value** ([Quantity](../common-definitions/quantity#quantity))
+
+ value 是指标的目标值(数量)。
+
+ - **metrics.resource** (ResourceMetricSource)
+
+ resource 表示 Kubernetes 已知的资源指标(如 request 与 limit),用于描述当前伸缩目标(如 CPU 或内存)中每个 pod 的资源使用情况。这类指标是 Kubernetes 内置指标,除了使用 Pods 源的 pod 粒度的正常指标以外,还有一些特殊的伸缩选项。
+
+
+
+ *ResourceMetricSource 表明在 Kubernetes 已知资源指标(如 request 与 limit)的基础上进行伸缩的方式,该指标描述当前伸缩目标(例如 CPU 或内存)中每个Pod的资源使用情况。在与目标值进行比较之前,会将所有指标值进行平均。这类指标是 Kubernetes 内置指标,除了使用 Pods 源的 pod 粒度的正常指标以外,还有一些特殊的伸缩选项。只应设置一种 target 类别。*
+
+ - **metrics.resource.name** (string),必选
+
+ name 是伸缩资源的名称。
+
+ - **metrics.resource.target** (MetricTarget),必选
+
+ target 是给定指标的目标值。
+
+
+
+ *MetricTarget 定义特定指标的目标值、平均值或平均利用率。*
+
+ - **metrics.resource.target.type** (string),必选
+
+ type 表示指标类型:利用率(Utilization)、值(Value)和平均值(AverageValue)。
+
+ - **metrics.resource.target.averageUtilization** (int32)
+
+ averageUtilization 是所有相关 pod 中资源指标均值的目标值,表示为 pod 资源请求值的百分比。目前仅对 Resource 指标源类别有效。
+
+ - **metrics.resource.target.averageValue** ([Quantity](../common-definitions/quantity#quantity))
+
+ averageValue 是所有相关 pod 中资源指标均值的目标值 (数量)。
+
+ - **metrics.resource.target.value** ([Quantity](../common-definitions/quantity#quantity))
+
+ value 是指标的目标值(数量)。
+
+- **minReplicas** (int32)
+
+  MinReplicas 是自动伸缩器可减少的副本量的下限。默认值为 1。
+
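+下面是一个按本节字段组织的 FederatedHPA 清单示意(仅供参考:其中的名称 nginx、命名空间 default 以及各数值均为假设值,并非出自本参考文档):
+
+```
+apiVersion: autoscaling.karmada.io/v1alpha1
+kind: FederatedHPA
+metadata:
+  name: nginx                         # 假设的名称
+  namespace: default
+spec:
+  scaleTargetRef:                     # 指向要伸缩的目标资源
+    apiVersion: apps/v1
+    kind: Deployment
+    name: nginx
+  minReplicas: 1
+  maxReplicas: 10
+  metrics:                            # 未设置时默认为 80% 的平均 CPU 利用率
+  - type: Resource
+    resource:
+      name: cpu
+      target:
+        type: Utilization
+        averageUtilization: 80
+  behavior:
+    scaleDown:
+      stabilizationWindowSeconds: 300 # 缩容稳定窗口,默认为 300 秒
+      policies:
+      - type: Pods
+        value: 1
+        periodSeconds: 60
+```
+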
+## FederatedHPAList
+
+FederatedHPAList 罗列 FederatedHPA。
+
+
+
+- **apiVersion**: autoscaling.karmada.io/v1alpha1
+
+- **kind**: FederatedHPAList
+
+- **metadata** ([ListMeta](../common-definitions/list-meta#listmeta))
+
+- **items** ([][FederatedHPA](../auto-scaling-resources/federated-hpa-v1alpha1#federatedhpa)),必选
+
+## 操作
+
+
+
+### `get`:查询指定的 FederatedHPA
+
+#### HTTP请求
+
+GET /apis/autoscaling.karmada.io/v1alpha1/namespaces/{namespace}/federatedhpas/{name}
+
+#### 参数
+
+- **name**(*路径参数*):string,必选
+
+  FederatedHPA 的名称
+
+- **namespace**(*路径参数*):string,必选
+
+ [namespace](../common-parameter/common-parameters#namespace)
+
+- **pretty**(*查询参数*):string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+#### 响应
+
+200 ([FederatedHPA](../auto-scaling-resources/federated-hpa-v1alpha1#federatedhpa)): OK
+
+### `get`:查询指定 FederatedHPA 的状态
+
+#### HTTP请求
+
+GET /apis/autoscaling.karmada.io/v1alpha1/namespaces/{namespace}/federatedhpas/{name}/status
+
+#### 参数
+
+- **name**(*路径参数*):string,必选
+
+ FederatedHPA 的名称
+
+- **namespace**(*路径参数*):string,必选
+
+ [namespace](../common-parameter/common-parameters#namespace)
+
+- **pretty**(*查询参数*):string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+#### 响应
+
+200 ([FederatedHPA](../auto-scaling-resources/federated-hpa-v1alpha1#federatedhpa)): OK
+
+### `list`:查询指定命名空间内的所有 FederatedHPA
+
+#### HTTP请求
+
+GET /apis/autoscaling.karmada.io/v1alpha1/namespaces/{namespace}/federatedhpas
+
+#### 参数
+
+- **namespace**(*路径参数*):string,必选
+
+ [namespace](../common-parameter/common-parameters#namespace)
+
+- **allowWatchBookmarks**(*查询参数*):boolean
+
+ [allowWatchBookmarks](../common-parameter/common-parameters#allowwatchbookmarks)
+
+- **continue**(*查询参数*):string
+
+ [continue](../common-parameter/common-parameters#continue)
+
+- **fieldSelector**(*查询参数*):string
+
+ [fieldSelector](../common-parameter/common-parameters#fieldselector)
+
+- **labelSelector**(*查询参数*):string
+
+ [labelSelector](../common-parameter/common-parameters#labelselector)
+
+- **limit**(*查询参数*):integer
+
+ [limit](../common-parameter/common-parameters#limit)
+
+- **pretty**(*查询参数*):string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+- **resourceVersion**(*查询参数*):string
+
+ [resourceVersion](../common-parameter/common-parameters#resourceversion)
+
+- **resourceVersionMatch**(*查询参数*):string
+
+ [resourceVersionMatch](../common-parameter/common-parameters#resourceversionmatch)
+
+- **sendInitialEvents**(*查询参数*):boolean
+
+ [sendInitialEvents](../common-parameter/common-parameters#sendinitialevents)
+
+- **timeoutSeconds**(*查询参数*):integer
+
+ [timeoutSeconds](../common-parameter/common-parameters#timeoutseconds)
+
+- **watch**(*查询参数*):boolean
+
+ [watch](../common-parameter/common-parameters#watch)
+
+#### 响应
+
+200 ([FederatedHPAList](../auto-scaling-resources/federated-hpa-v1alpha1#federatedhpalist)): OK
+
+### `list`:查询所有 FederatedHPA
+
+#### HTTP请求
+
+GET /apis/autoscaling.karmada.io/v1alpha1/federatedhpas
+
+#### 参数
+
+- **allowWatchBookmarks**(*查询参数*):boolean
+
+ [allowWatchBookmarks](../common-parameter/common-parameters#allowwatchbookmarks)
+
+- **continue**(*查询参数*):string
+
+ [continue](../common-parameter/common-parameters#continue)
+
+- **fieldSelector**(*查询参数*):string
+
+ [fieldSelector](../common-parameter/common-parameters#fieldselector)
+
+- **labelSelector**(*查询参数*):string
+
+ [labelSelector](../common-parameter/common-parameters#labelselector)
+
+- **limit**(*查询参数*):integer
+
+ [limit](../common-parameter/common-parameters#limit)
+
+- **pretty**(*查询参数*):string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+- **resourceVersion**(*查询参数*):string
+
+ [resourceVersion](../common-parameter/common-parameters#resourceversion)
+
+- **resourceVersionMatch**(*查询参数*):string
+
+ [resourceVersionMatch](../common-parameter/common-parameters#resourceversionmatch)
+
+- **sendInitialEvents**(*查询参数*):boolean
+
+ [sendInitialEvents](../common-parameter/common-parameters#sendinitialevents)
+
+- **timeoutSeconds**(*查询参数*):integer
+
+ [timeoutSeconds](../common-parameter/common-parameters#timeoutseconds)
+
+- **watch**(*查询参数*):boolean
+
+ [watch](../common-parameter/common-parameters#watch)
+
+#### 响应
+
+200 ([FederatedHPAList](../auto-scaling-resources/federated-hpa-v1alpha1#federatedhpalist)): OK
+
+### `create`:创建一个 FederatedHPA
+
+#### HTTP请求
+
+POST /apis/autoscaling.karmada.io/v1alpha1/namespaces/{namespace}/federatedhpas
+
+#### 参数
+
+- **namespace**(*路径参数*):string,必选
+
+ [namespace](../common-parameter/common-parameters#namespace)
+
+- **body**: [FederatedHPA](../auto-scaling-resources/federated-hpa-v1alpha1#federatedhpa),必选
+
+
+
+- **dryRun**(*查询参数*):string
+
+ [dryRun](../common-parameter/common-parameters#dryrun)
+
+- **fieldManager**(*查询参数*):string
+
+ [fieldManager](../common-parameter/common-parameters#fieldmanager)
+
+- **fieldValidation**(*查询参数*):string
+
+ [fieldValidation](../common-parameter/common-parameters#fieldvalidation)
+
+- **pretty**(*查询参数*):string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+#### 响应
+
+200 ([FederatedHPA](../auto-scaling-resources/federated-hpa-v1alpha1#federatedhpa)): OK
+
+201 ([FederatedHPA](../auto-scaling-resources/federated-hpa-v1alpha1#federatedhpa)): Created
+
+202 ([FederatedHPA](../auto-scaling-resources/federated-hpa-v1alpha1#federatedhpa)): Accepted
+
+### `update`:更新指定的 FederatedHPA
+
+#### HTTP请求
+
+PUT /apis/autoscaling.karmada.io/v1alpha1/namespaces/{namespace}/federatedhpas/{name}
+
+#### 参数
+
+- **name**(*路径参数*):string,必选
+
+  FederatedHPA 的名称
+
+- **namespace**(*路径参数*):string,必选
+
+ [namespace](../common-parameter/common-parameters#namespace)
+
+- **body**: [FederatedHPA](../auto-scaling-resources/federated-hpa-v1alpha1#federatedhpa),必选
+
+
+
+- **dryRun**(*查询参数*):string
+
+ [dryRun](../common-parameter/common-parameters#dryrun)
+
+- **fieldManager**(*查询参数*):string
+
+ [fieldManager](../common-parameter/common-parameters#fieldmanager)
+
+- **fieldValidation**(*查询参数*):string
+
+ [fieldValidation](../common-parameter/common-parameters#fieldvalidation)
+
+- **pretty**(*查询参数*):string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+#### 响应
+
+200 ([FederatedHPA](../auto-scaling-resources/federated-hpa-v1alpha1#federatedhpa)): OK
+
+201 ([FederatedHPA](../auto-scaling-resources/federated-hpa-v1alpha1#federatedhpa)): Created
+
+### `update`:更新指定 FederatedHPA 的状态
+
+#### HTTP请求
+
+PUT /apis/autoscaling.karmada.io/v1alpha1/namespaces/{namespace}/federatedhpas/{name}/status
+
+#### 参数
+
+- **name**(*路径参数*):string,必选
+
+  FederatedHPA 的名称
+
+- **namespace**(*路径参数*):string,必选
+
+ [namespace](../common-parameter/common-parameters#namespace)
+
+- **body**: [FederatedHPA](../auto-scaling-resources/federated-hpa-v1alpha1#federatedhpa),必选
+
+
+
+- **dryRun**(*查询参数*):string
+
+ [dryRun](../common-parameter/common-parameters#dryrun)
+
+- **fieldManager**(*查询参数*):string
+
+ [fieldManager](../common-parameter/common-parameters#fieldmanager)
+
+- **fieldValidation**(*查询参数*):string
+
+ [fieldValidation](../common-parameter/common-parameters#fieldvalidation)
+
+- **pretty**(*查询参数*):string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+#### 响应
+
+200 ([FederatedHPA](../auto-scaling-resources/federated-hpa-v1alpha1#federatedhpa)): OK
+
+201 ([FederatedHPA](../auto-scaling-resources/federated-hpa-v1alpha1#federatedhpa)): Created
+
+### `patch`:更新指定 FederatedHPA 的部分信息
+
+#### HTTP请求
+
+PATCH /apis/autoscaling.karmada.io/v1alpha1/namespaces/{namespace}/federatedhpas/{name}
+
+#### 参数
+
+- **name**(*路径参数*):string,必选
+
+  FederatedHPA 的名称
+
+- **namespace**(*路径参数*):string,必选
+
+ [namespace](../common-parameter/common-parameters#namespace)
+
+- **body**: [Patch](../common-definitions/patch#patch),必选
+
+
+
+- **dryRun**(*查询参数*):string
+
+ [dryRun](../common-parameter/common-parameters#dryrun)
+
+- **fieldManager**(*查询参数*):string
+
+ [fieldManager](../common-parameter/common-parameters#fieldmanager)
+
+- **fieldValidation**(*查询参数*):string
+
+ [fieldValidation](../common-parameter/common-parameters#fieldvalidation)
+
+- **force**(*查询参数*):boolean
+
+ [force](../common-parameter/common-parameters#force)
+
+- **pretty**(*查询参数*):string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+#### 响应
+
+200 ([FederatedHPA](../auto-scaling-resources/federated-hpa-v1alpha1#federatedhpa)): OK
+
+201 ([FederatedHPA](../auto-scaling-resources/federated-hpa-v1alpha1#federatedhpa)): Created
+
+### `patch`:更新指定 FederatedHPA 状态的部分信息
+
+#### HTTP请求
+
+PATCH /apis/autoscaling.karmada.io/v1alpha1/namespaces/{namespace}/federatedhpas/{name}/status
+
+#### 参数
+
+- **name**(*路径参数*):string,必选
+
+ FederatedHPA 的名称
+
+- **namespace**(*路径参数*):string,必选
+
+ [namespace](../common-parameter/common-parameters#namespace)
+
+- **body**: [Patch](../common-definitions/patch#patch),必选
+
+
+
+- **dryRun**(*查询参数*):string
+
+ [dryRun](../common-parameter/common-parameters#dryrun)
+
+- **fieldManager**(*查询参数*):string
+
+ [fieldManager](../common-parameter/common-parameters#fieldmanager)
+
+- **fieldValidation**(*查询参数*):string
+
+ [fieldValidation](../common-parameter/common-parameters#fieldvalidation)
+
+- **force**(*查询参数*):boolean
+
+ [force](../common-parameter/common-parameters#force)
+
+- **pretty**(*查询参数*):string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+#### 响应
+
+200 ([FederatedHPA](../auto-scaling-resources/federated-hpa-v1alpha1#federatedhpa)): OK
+
+201 ([FederatedHPA](../auto-scaling-resources/federated-hpa-v1alpha1#federatedhpa)): Created
+
+### `delete`:删除一个 FederatedHPA
+
+#### HTTP请求
+
+DELETE /apis/autoscaling.karmada.io/v1alpha1/namespaces/{namespace}/federatedhpas/{name}
+
+#### 参数
+
+- **name**(*路径参数*):string,必选
+
+ FederatedHPA 的名称
+
+- **namespace**(*路径参数*):string,必选
+
+ [namespace](../common-parameter/common-parameters#namespace)
+
+- **body**: [DeleteOptions](../common-definitions/delete-options#deleteoptions)
+
+
+
+- **dryRun**(*查询参数*):string
+
+ [dryRun](../common-parameter/common-parameters#dryrun)
+
+- **gracePeriodSeconds**(*查询参数*):integer
+
+ [gracePeriodSeconds](../common-parameter/common-parameters#graceperiodseconds)
+
+- **pretty**(*查询参数*):string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+- **propagationPolicy**(*查询参数*):string
+
+ [propagationPolicy](../common-parameter/common-parameters#propagationpolicy)
+
+#### 响应
+
+200 ([Status](../common-definitions/status#status)): OK
+
+202 ([Status](../common-definitions/status#status)): Accepted
+
+### `deletecollection`:删除指定命名空间内的所有 FederatedHPA
+
+#### HTTP请求
+
+DELETE /apis/autoscaling.karmada.io/v1alpha1/namespaces/{namespace}/federatedhpas
+
+#### 参数
+
+- **namespace**(*路径参数*):string,必选
+
+ [namespace](../common-parameter/common-parameters#namespace)
+
+- **body**: [DeleteOptions](../common-definitions/delete-options#deleteoptions)
+
+
+
+- **continue**(*查询参数*):string
+
+ [continue](../common-parameter/common-parameters#continue)
+
+- **dryRun**(*查询参数*):string
+
+ [dryRun](../common-parameter/common-parameters#dryrun)
+
+- **fieldSelector**(*查询参数*):string
+
+ [fieldSelector](../common-parameter/common-parameters#fieldselector)
+
+- **gracePeriodSeconds**(*查询参数*):integer
+
+ [gracePeriodSeconds](../common-parameter/common-parameters#graceperiodseconds)
+
+- **labelSelector**(*查询参数*):string
+
+ [labelSelector](../common-parameter/common-parameters#labelselector)
+
+- **limit**(*查询参数*):integer
+
+ [limit](../common-parameter/common-parameters#limit)
+
+- **pretty**(*查询参数*):string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+- **propagationPolicy**(*查询参数*):string
+
+ [propagationPolicy](../common-parameter/common-parameters#propagationpolicy)
+
+- **resourceVersion**(*查询参数*):string
+
+ [resourceVersion](../common-parameter/common-parameters#resourceversion)
+
+- **resourceVersionMatch**(*查询参数*):string
+
+ [resourceVersionMatch](../common-parameter/common-parameters#resourceversionmatch)
+
+- **sendInitialEvents**(*查询参数*):boolean
+
+ [sendInitialEvents](../common-parameter/common-parameters#sendinitialevents)
+
+- **timeoutSeconds**(*查询参数*):integer
+
+ [timeoutSeconds](../common-parameter/common-parameters#timeoutseconds)
+
+#### 响应
+
+200 ([Status](../common-definitions/status#status)): OK
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/karmada-api/cluster-resources/cluster-v1alpha1.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/karmada-api/cluster-resources/cluster-v1alpha1.md
new file mode 100644
index 000000000..fec388d80
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/karmada-api/cluster-resources/cluster-v1alpha1.md
@@ -0,0 +1,773 @@
+---
+api_metadata:
+ apiVersion: "cluster.karmada.io/v1alpha1"
+ import: "github.com/karmada-io/karmada/pkg/apis/cluster/v1alpha1"
+ kind: "Cluster"
+content_type: "api_reference"
+description: "Cluster represents the desire state and status of a member cluster."
+title: "Cluster v1alpha1"
+weight: 1
+auto_generated: true
+---
+
+[//]: # (The file is auto-generated from the Go source code of the component using a generic generator,)
+[//]: # (which is forked from [reference-docs](https://github.com/kubernetes-sigs/reference-docs.)
+[//]: # (To update the reference content, please follow the `reference-api.sh`.)
+
+`apiVersion: cluster.karmada.io/v1alpha1`
+
+`import "github.com/karmada-io/karmada/pkg/apis/cluster/v1alpha1"`
+
+## Cluster
+
+Cluster 表示成员集群的预期状态和当前状态。
+
+
+
+- **apiVersion**: cluster.karmada.io/v1alpha1
+
+- **kind**: Cluster
+
+- **metadata** ([ObjectMeta](../common-definitions/object-meta#objectmeta))
+
+- **spec** ([ClusterSpec](../cluster-resources/cluster-v1alpha1#clusterspec)),必选
+
+ Spec 表示成员集群的规范。
+
+- **status** ([ClusterStatus](../cluster-resources/cluster-v1alpha1#clusterstatus))
+
+ Status 表示成员集群的状态。
+
+## ClusterSpec
+
+ClusterSpec 定义成员集群的预期状态。
+
+
+
+- **syncMode**(string),必选
+
+ SyncMode 描述集群从 Karmada 控制面同步资源的方式。
+
+- **apiEndpoint** (string)
+
+ 成员集群的 API 端点。取值包括 hostname、hostname:port、IP 和 IP:port。
+
+- **id** (string)
+
+ ID 是集群的唯一标识符。它不同于 uid(.metadata.uid),通常会在注册过程中自动从成员集群收集。
+
+ 收集顺序如下:
+
+ 1. 如果注册集群启用了 ClusterProperty API 并通过创建名为 cluster.clusterset.k8s.io 的 ClusterProperty 对象来定义集群 ID,则 Karmada 将在 ClusterProperty 对象中获取定义的值。有关 ClusterProperty API 的更多详情,请浏览:https://github.com/kubernetes-sigs/about-api
+
+ 2. 在注册集群上获取命名空间 kube-system 的 UID。
+
+ 此 UID 有以下用途:
+ - 是识别 Karmada 系统中的集群的唯一标识;
+ - 组成多集群服务的 DNS 名称。
+  一般情况下不会更新此 UID,如需更新请谨慎操作。
+
+- **impersonatorSecretRef** (LocalSecretReference)
+
+ ImpersonatorSecretRef 表示包含用于伪装的令牌的密钥。密钥应包含以下凭据:- secret.data.token
+
+
+
+ *LocalSecretReference 是指封闭命名空间内的密钥引用。*
+
+ - **impersonatorSecretRef.name** (string),必选
+
+ Name 指被引用资源的名称。
+
+ - **impersonatorSecretRef.namespace** (string),必选
+
+ Namespace 指所引用资源的命名空间。
+
+- **insecureSkipTLSVerification** (boolean)
+
+ InsecureSkipTLSVerification 表示 Karmada 控制平面不应确认其所连接的集群的服务证书的有效性,这样会导致 Karmada 控制面与成员集群之间的 HTTPS 连接不安全。默认值为 false。
+
+- **provider** (string)
+
+ Provider 表示成员集群的云提供商名称。
+
+- **proxyHeader** (map[string]string)
+
+ ProxyHeader 是代理服务器所需的 HTTP 头。其中,键为 HTTP 头键,值为HTTP 头的负载。如果 HTTP 头有多个值,所有的值使用逗号分隔(例如,k1: v1,v2,v3)。
+
+- **proxyURL** (string)
+
+ ProxyURL 是集群的代理URL。如果不为空,则 Karmada 控制面会使用此代理与集群通信。更多详情,请参考:https://github.com/kubernetes/client-go/issues/351
+
+- **region** (string)
+
+ Region 表示成员集群所在的区域。
+
+- **resourceModels** ([]ResourceModel)
+
+ ResourceModels 是集群中资源建模的列表。每个建模配额都可以由用户自定义。建模名称必须是 cpu、memory、storage 或 ephemeral-storage。如果用户未定义建模名称和建模配额,将使用默认模型。默认模型的等级为 0 到 8。当 grade 设置为 0 或 1 时,默认模型的 CPU 配额和内存配额为固定值。当 grade 大于或等于 2 时,每个默认模型的 CPU 配额为:[2^(grade-1), 2^grade), 2 <= grade <= 7。每个默认模型的内存配额为:[2^(grade + 2), 2^(grade + 3)), 2 <= grade <= 7。例如:
+
+ - grade: 0
+ ranges:
+ - name: "cpu"
+ min: 0 C
+ max: 1 C
+ - name: "memory"
+ min: 0 GB
+ max: 4 GB
+
+ - grade: 1
+ ranges:
+ - name: "cpu"
+ min: 1 C
+ max: 2 C
+ - name: "memory"
+ min: 4 GB
+ max: 16 GB
+
+ - grade: 2
+ ranges:
+ - name: "cpu"
+ min: 2 C
+ max: 4 C
+ - name: "memory"
+ min: 16 GB
+ max: 32 GB
+
+ - grade: 7
+ range:
+ - name: "cpu"
+ min: 64 C
+ max: 128 C
+ - name: "memory"
+ min: 512 GB
+ max: 1024 GB
+
+ 如果 grade 为 8,无论设置的 Max 值为多少,该等级中 Max 值的含义都表示无限。因此,可以设置任何大于 Min 值的数字。
+
+  - grade: 8
+    range:
+    - name: "cpu"
+      min: 128 C
+      max: MAXINT
+    - name: "memory"
+      min: 1024 GB
+      max: MAXINT
+
+
+
+*ResourceModel 描述要统计的建模。*
+
+- **resourceModels.grade** (int32),必选
+
+ Grade 是资源建模的索引。
+
+- **resourceModels.ranges** ([]ResourceModelRange),必选
+
+ Ranges 描述资源配额范围。
+
+
+
+  *ResourceModelRange 描述每个建模配额从 min 到 max 的详细信息。注意:默认情况下,包含 min 值,但不包含 max 值。例如,设置 min = 2、max = 10,则区间为 [2, 10)。此规则能确保所有区间具有相同的含义。如果最后一个区间是无限的,那么它肯定无法达到,因此我们将右端定义为开区间。对于有效的区间,右侧的值必须大于左侧的值,即 max 值必须大于 min 值。建议所有 ResourceModelRanges 的 [Min, Max) 能构成连续的区间。*
+
+ - **resourceModels.ranges.max** ([Quantity](../common-definitions/quantity#quantity)),必选
+
+ Max 指定资源的最大数量,由资源名称表示。特别说明,对于最后一个 ResourceModelRange ,无论传递的 Max 值是什么,都表示无限。因为对于最后一项,任何大于 Min 值的 ResourceModelRange 配额都将归为最后一项。任何情况下,Max 的值都大于 Min 的值。
+
+ - **resourceModels.ranges.min** ([Quantity](../common-definitions/quantity#quantity)),必选
+
+    Min 指定资源的最小数量,由资源名称表示。注意:第一个等级的 Min 值(通常为 0)始终按 0 处理,例如 [1, 2) 等同于 [0, 2)。
+
+ - **resourceModels.ranges.name** (string),必选
+
+ Name 是要分类的资源的名称。
+
+- **secretRef** (LocalSecretReference)
+
+  SecretRef 表示包含访问成员集群所需凭据的密钥。该密钥应包含以下凭据:- secret.data.token - secret.data.caBundle
+
+
+
+ *LocalSecretReference 指封闭命名空间内的密钥引用。*
+
+ - **secretRef.name** (string),必选
+
+ Name 指所引用资源的名称。
+
+ - **secretRef.namespace** (string),必选
+
+ Namespace 指所引用资源的命名空间。
+
+- **taints** ([]Taint)
+
+ 附加到成员集群的污点。集群的污点对任何不容忍该污点的资源都有“影响”。
+
+
+
+ *此污点所在的节点对任何不容忍污点的 Pod 都有“影响”。*
+
+ - **taints.effect** (string),必选
+
+ 必选。污点对不容忍该污点的 Pod 的影响。有效取值包括 NoSchedule、PreferNoSchedule 和 NoExecute。
+
+ 枚举值包括:
+ - `"NoExecute"`:任何不能容忍该污点的 Pod 都会被驱逐。当前由 NodeController 强制执行。
+    - `"NoSchedule"`:如果新 pod 无法容忍该污点,则不允许将其调度到该节点上,但允许所有未经调度器而直接提交给 Kubelet 的 pod 启动,并允许节点上已存在的 Pod 继续运行。由调度器强制执行。
+ - `"PreferNoSchedule"`:和 TaintEffectNoSchedule 相似,不同的是调度器尽量避免将新 Pod 调度到具有该污点的节点上,除非没有其他节点可调度。由调度器强制执行。
+
+ - **taints.key** (string),必选
+
+ 必选。应用到节点上的污点的键。
+
+ - **taints.timeAdded** (Time)
+
+ TimeAdded 表示添加污点的时间。仅适用于 NoExecute 的污点。
+
+
+
+ *Time 是 time.Time 的包装器,它支持对 YAML 和 JSON 的正确编组。time 包的许多工厂方法提供了包装器。*
+
+ - **taints.value** (string)
+
+ 与污点键对应的污点值。
+
+- **zone** (string)
+
+  Zone 表示成员集群所在的可用区域。Deprecated:Karmada 从未使用过该字段;为保持向后兼容,不会从 v1alpha1 中删除该字段,请改用 Zones。
+
+- **zones** ([]string)
+
+ Zones 表示成员集群的故障区域(也称为可用区域)。这些区域以切片的形式显示,这样集群便可跨多个故障区域运行。欲了解在多个区域运行 Kubernetes的更多细节,请浏览:https://kubernetes.io/docs/setup/best-practices/multiple-zones/
+
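+下面是一个按 ClusterSpec 字段组织的 Cluster 对象示意(仅为示例:集群名 member1、密钥所在命名空间 karmada-cluster、端点地址与资源模型数值均为假设值):
+
+```
+apiVersion: cluster.karmada.io/v1alpha1
+kind: Cluster
+metadata:
+  name: member1
+spec:
+  syncMode: Push                       # 从 Karmada 控制面同步资源的方式
+  apiEndpoint: https://10.0.0.1:6443   # 成员集群的 API 端点
+  secretRef:                           # 包含访问成员集群凭据的密钥
+    namespace: karmada-cluster
+    name: member1
+  impersonatorSecretRef:               # 包含用于伪装的令牌的密钥
+    namespace: karmada-cluster
+    name: member1-impersonator
+  region: cn-north-1
+  zones:
+  - zone-1
+  resourceModels:                      # 自定义资源建模(可选)
+  - grade: 0
+    ranges:
+    - name: cpu
+      min: "0"
+      max: "1"
+    - name: memory
+      min: "0"
+      max: 4Gi
+```
+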
+## ClusterStatus
+
+ClusterStatus 包含有关集群当前状态的信息,由集群控制器定期更新。
+
+
+
+- **apiEnablements** ([]APIEnablement)
+
+ APIEnablements 表示成员集群的 API 列表。
+
+
+
+ *APIEnablement 表示 API 列表,用于公开特定群组和版本中支持的资源的名称*。
+
+ - **apiEnablements.groupVersion** (string),必选
+
+ GroupVersion 是此 APIEnablement 的群组和版本。
+
+ - **apiEnablements.resources** ([]APIResource)
+
+ Resources 是 APIResource 的列表。
+
+
+
+ *APIResource 指定资源的名称和类别。*
+
+ - **apiEnablements.resources.kind** (string),必选
+
+      Kind 是资源的类别(例如,资源 deployments 的类别是 Deployment)。
+
+ - **apiEnablements.resources.name** (string),必选
+
+ Name 表示资源的复数名称。
+
+- **conditions** ([]Condition)
+
+ Conditions 表示当前集群的状况(数组结构)。
+
+
+
+ *Condition 包含此 API 资源当前状态某个方面的详细信息。*
+
+ - **conditions.lastTransitionTime** (Time),必选
+
+ lastTransitionTime 是状况最近一次从一种状态转换到另一种状态的时间。这种变化通常出现在下层状况发生变化的时候。如果无法了解下层状况变化,使用 API 字段更改的时间也是可以接受的。
+
+
+
+ *Time 是 time.Time 的包装器,它支持对 YAML 和 JSON 的正确编组。time 包的许多工厂方法提供了包装器。*
+
+ - **conditions.message**(string),必选
+
+ message 是有关转换的详细信息(人类可读消息)。可以是空字符串。
+
+ - **conditions.reason**(string),必选
+
+ reason 是一个程序标识符,表明状况最后一次转换的原因。特定状况类型的生产者可以定义该字段的预期值和含义,以及这些值是否可被视为有保证的 API。取值应该是一个 CamelCase 字符串。此字段不能为空。
+
+ - **conditions.status**(string),必选
+
+ status 表示状况的状态。取值为True、False或Unknown。
+
+ - **conditions.type**(string),必选
+
+ type 表示状况的类型,采用 CamelCase 或 foo.example.com/CamelCase 形式。
+
+ - **conditions.observedGeneration** (int64)
+
+ observedGeneration 表示设置状况时所基于的 .metadata.generation。例如,如果 .metadata.generation 为 12,但 .status.conditions[x].observedGeneration 为 9,则状况相对于实例的当前状态已过期。
+
+- **kubernetesVersion** (string)
+
+ KubernetesVersion 表示成员集群的版本。
+
+- **nodeSummary** (NodeSummary)
+
+ NodeSummary 表示成员集群中节点状态的汇总。
+
+
+
+ *NodeSummary 表示特定集群中节点状态的汇总。*
+
+ - **nodeSummary.readyNum** (int32)
+
+ ReadyNum 指集群中就绪节点的数量。
+
+ - **nodeSummary.totalNum** (int32)
+
+ TotalNum 指集群中的节点总数。
+
+- **resourceSummary** (ResourceSummary)
+
+ ResourceSummary 表示成员集群中资源的汇总。
+
+
+
+ *ResourceSummary 表示成员集群中资源的汇总。*
+
+ - **resourceSummary.allocatable** (map[string][Quantity](../common-definitions/quantity#quantity))
+
+ Allocatable 表示集群中可用于调度的资源,是所有节点上可分配资源的总量。
+
+ - **resourceSummary.allocatableModelings** ([]AllocatableModeling)
+
+ AllocatableModelings 表示统计资源建模。
+
+
+
+ *AllocatableModeling 表示特定资源模型等级中可分配资源的节点数。例如,AllocatableModeling[Grade: 2, Count: 10] 表示有 10 个节点属于等级为 2 的资源模型。*
+
+ - **resourceSummary.allocatableModelings.count** (int32),必选
+
+ Count 统计能使用此建模所划定的资源的节点数。
+
+ - **resourceSummary.allocatableModelings.grade** (int32),必选
+
+ Grade 是 ResourceModel 的索引。
+
+ - **resourceSummary.allocated** (map[string][Quantity](../common-definitions/quantity#quantity))
+
+ Allocated 表示集群中已调度的资源,是已调度到节点的所有 Pod 所需资源的总和。
+
+ - **resourceSummary.allocating** (map[string][Quantity](../common-definitions/quantity#quantity))
+
+ Allocating 表示集群中待调度的资源,是所有等待调度的 Pod 所需资源的总和。
+
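+下面是一个 ClusterStatus 片段的示意(仅为示例,数值与时间均为假设,实际内容由集群控制器定期更新):
+
+```
+status:
+  kubernetesVersion: v1.27.3
+  apiEnablements:
+  - groupVersion: apps/v1
+    resources:
+    - kind: Deployment
+      name: deployments
+  conditions:
+  - type: Ready                        # 假设的状况类型
+    status: "True"
+    reason: ClusterReady
+    message: cluster is healthy and ready to accept workloads
+    lastTransitionTime: "2024-01-01T08:00:00Z"
+  nodeSummary:
+    totalNum: 3
+    readyNum: 3
+  resourceSummary:
+    allocatable:
+      cpu: "12"
+      memory: 24Gi
+    allocated:
+      cpu: "6"
+      memory: 8Gi
+```
+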
+## ClusterList
+
+ClusterList 罗列成员集群。
+
+
+
+- **apiVersion**: cluster.karmada.io/v1alpha1
+
+- **kind**: ClusterList
+
+- **metadata** ([ListMeta](../common-definitions/list-meta#listmeta))
+
+- **items** ([][Cluster](../cluster-resources/cluster-v1alpha1#cluster)),必选
+
+ Items 中包含 Cluster 列表。
+
+## 操作
+
+
+
+### `get`:查询指定的集群
+
+#### HTTP 请求
+
+GET /apis/cluster.karmada.io/v1alpha1/clusters/{name}
+
+#### 参数
+
+- **name**(*路径参数*):string,必选
+
+ Cluster 名称
+
+- **pretty**(*查询参数*):string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+#### 响应
+
+200 ([Cluster](../cluster-resources/cluster-v1alpha1#cluster)): OK
+
+### `get`:查询指定集群的状态
+
+#### HTTP 请求
+
+GET /apis/cluster.karmada.io/v1alpha1/clusters/{name}/status
+
+#### 参数
+
+- **name**(*路径参数*):string,必选
+
+ 集群的名称
+
+- **pretty**(*查询参数*):string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+#### 响应
+
+200 ([Cluster](../cluster-resources/cluster-v1alpha1#cluster)): OK
+
+### `list`:查询所有集群
+
+#### HTTP 请求
+
+GET /apis/cluster.karmada.io/v1alpha1/clusters
+
+#### 参数
+
+- **allowWatchBookmarks**(*查询参数*):boolean
+
+ [allowWatchBookmarks](../common-parameter/common-parameters#allowwatchbookmarks)
+
+- **continue**(*查询参数*):string
+
+ [continue](../common-parameter/common-parameters#continue)
+
+- **fieldSelector**(*查询参数*):string
+
+ [fieldSelector](../common-parameter/common-parameters#fieldselector)
+
+- **labelSelector**(*查询参数*):string
+
+ [labelSelector](../common-parameter/common-parameters#labelselector)
+
+- **limit**(*查询参数*):integer
+
+ [limit](../common-parameter/common-parameters#limit)
+
+- **pretty**(*查询参数*):string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+- **resourceVersion**(*查询参数*):string
+
+ [resourceVersion](../common-parameter/common-parameters#resourceversion)
+
+- **resourceVersionMatch**(*查询参数*):string
+
+ [resourceVersionMatch](../common-parameter/common-parameters#resourceversionmatch)
+
+- **sendInitialEvents**(*查询参数*):boolean
+
+ [sendInitialEvents](../common-parameter/common-parameters#sendinitialevents)
+
+- **timeoutSeconds**(*查询参数*):integer
+
+ [timeoutSeconds](../common-parameter/common-parameters#timeoutseconds)
+
+- **watch**(*查询参数*):boolean
+
+ [watch](../common-parameter/common-parameters#watch)
+
+#### 响应
+
+200 ([ClusterList](../cluster-resources/cluster-v1alpha1#clusterlist)): OK
+
+### `create`:创建一个集群
+
+#### HTTP 请求
+
+POST /apis/cluster.karmada.io/v1alpha1/clusters
+
+#### 参数
+
+- **body**: [Cluster](../cluster-resources/cluster-v1alpha1#cluster),必选
+
+
+- **dryRun**(*查询参数*):string
+
+ [dryRun](../common-parameter/common-parameters#dryrun)
+
+- **fieldManager**(*查询参数*):string
+
+ [fieldManager](../common-parameter/common-parameters#fieldmanager)
+
+- **fieldValidation**(*查询参数*):string
+
+ [fieldValidation](../common-parameter/common-parameters#fieldvalidation)
+
+- **pretty**(*查询参数*):string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+#### 响应
+
+200 ([Cluster](../cluster-resources/cluster-v1alpha1#cluster)): OK
+
+201 ([Cluster](../cluster-resources/cluster-v1alpha1#cluster)): Created
+
+202 ([Cluster](../cluster-resources/cluster-v1alpha1#cluster)): Accepted
+
+### `update`:更新指定的集群
+
+#### HTTP 请求
+
+PUT /apis/cluster.karmada.io/v1alpha1/clusters/{name}
+
+#### 参数
+
+- **name**(*路径参数*):string,必选
+
+ 集群的名称
+
+- **body**: [Cluster](../cluster-resources/cluster-v1alpha1#cluster),必选
+
+
+- **dryRun**(*查询参数*):string
+
+ [dryRun](../common-parameter/common-parameters#dryrun)
+
+- **fieldManager**(*查询参数*):string
+
+ [fieldManager](../common-parameter/common-parameters#fieldmanager)
+
+- **fieldValidation**(*查询参数*):string
+
+ [fieldValidation](../common-parameter/common-parameters#fieldvalidation)
+
+- **pretty**(*查询参数*):string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+#### 响应
+
+200 ([Cluster](../cluster-resources/cluster-v1alpha1#cluster)): OK
+
+201 ([Cluster](../cluster-resources/cluster-v1alpha1#cluster)): Created
+
+### `update`:更新指定集群的状态
+
+#### HTTP 请求
+
+PUT /apis/cluster.karmada.io/v1alpha1/clusters/{name}/status
+
+#### 参数
+
+- **name**(*路径参数*):string,必选
+
+ 集群的名称
+
+- **body**: [Cluster](../cluster-resources/cluster-v1alpha1#cluster),必选
+
+
+- **dryRun**(*查询参数*):string
+
+ [dryRun](../common-parameter/common-parameters#dryrun)
+
+- **fieldManager**(*查询参数*):string
+
+ [fieldManager](../common-parameter/common-parameters#fieldmanager)
+
+- **fieldValidation**(*查询参数*):string
+
+ [fieldValidation](../common-parameter/common-parameters#fieldvalidation)
+
+- **pretty**(*查询参数*):string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+#### 响应
+
+200 ([Cluster](../cluster-resources/cluster-v1alpha1#cluster)): OK
+
+201 ([Cluster](../cluster-resources/cluster-v1alpha1#cluster)): Created
+
+### `patch`:更新指定集群的部分信息
+
+#### HTTP 请求
+
+PATCH /apis/cluster.karmada.io/v1alpha1/clusters/{name}
+
+#### 参数
+
+- **name**(*路径参数*):string,必选
+
+ 集群的名称
+
+- **body**: [Patch](../common-definitions/patch#patch),必选
+
+
+- **dryRun**(*查询参数*):string
+
+ [dryRun](../common-parameter/common-parameters#dryrun)
+
+- **fieldManager**(*查询参数*):string
+
+ [fieldManager](../common-parameter/common-parameters#fieldmanager)
+
+- **fieldValidation**(*查询参数*):string
+
+ [fieldValidation](../common-parameter/common-parameters#fieldvalidation)
+
+- **force**(*查询参数*):boolean
+
+ [force](../common-parameter/common-parameters#force)
+
+- **pretty**(*查询参数*):string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+#### 响应
+
+200 ([Cluster](../cluster-resources/cluster-v1alpha1#cluster)): OK
+
+201 ([Cluster](../cluster-resources/cluster-v1alpha1#cluster)): Created
+
+### `patch`:更新指定集群状态的部分信息
+
+#### HTTP 请求
+
+PATCH /apis/cluster.karmada.io/v1alpha1/clusters/{name}/status
+
+#### 参数
+
+- **name**(*路径参数*):string,必选
+
+ 集群的名称
+
+- **body**: [Patch](../common-definitions/patch#patch),必选
+
+
+- **dryRun**(*查询参数*):string
+
+ [dryRun](../common-parameter/common-parameters#dryrun)
+
+- **fieldManager**(*查询参数*):string
+
+ [fieldManager](../common-parameter/common-parameters#fieldmanager)
+
+- **fieldValidation**(*查询参数*):string
+
+ [fieldValidation](../common-parameter/common-parameters#fieldvalidation)
+
+- **force**(*查询参数*):boolean
+
+ [force](../common-parameter/common-parameters#force)
+
+- **pretty**(*查询参数*):string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+#### 响应
+
+200 ([Cluster](../cluster-resources/cluster-v1alpha1#cluster)): OK
+
+201 ([Cluster](../cluster-resources/cluster-v1alpha1#cluster)): Created
+
+### `delete`:删除一个集群
+
+#### HTTP 请求
+
+DELETE /apis/cluster.karmada.io/v1alpha1/clusters/{name}
+
+#### 参数
+
+- **name**(*路径参数*):string,必选
+
+ Cluster 名称
+
+- **body**: [DeleteOptions](../common-definitions/delete-options#deleteoptions)
+
+
+- **dryRun**(*查询参数*):string
+
+ [dryRun](../common-parameter/common-parameters#dryrun)
+
+- **gracePeriodSeconds**(*查询参数*):integer
+
+ [gracePeriodSeconds](../common-parameter/common-parameters#graceperiodseconds)
+
+- **pretty**(*查询参数*):string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+- **propagationPolicy**(*查询参数*):string
+
+ [propagationPolicy](../common-parameter/common-parameters#propagationpolicy)
+
+#### 响应
+
+200 ([Status](../common-definitions/status#status)): OK
+
+202 ([Status](../common-definitions/status#status)): Accepted
+
+### `deletecollection`:删除所有集群
+
+#### HTTP 请求
+
+DELETE /apis/cluster.karmada.io/v1alpha1/clusters
+
+#### 参数
+
+- **body**: [DeleteOptions](../common-definitions/delete-options#deleteoptions)
+
+
+- **continue**(*查询参数*):string
+
+ [continue](../common-parameter/common-parameters#continue)
+
+- **dryRun**(*查询参数*):string
+
+ [dryRun](../common-parameter/common-parameters#dryrun)
+
+- **fieldSelector**(*查询参数*):string
+
+ [fieldSelector](../common-parameter/common-parameters#fieldselector)
+
+- **gracePeriodSeconds**(*查询参数*):integer
+
+ [gracePeriodSeconds](../common-parameter/common-parameters#graceperiodseconds)
+
+- **labelSelector**(*查询参数*):string
+
+ [labelSelector](../common-parameter/common-parameters#labelselector)
+
+- **limit**(*查询参数*):integer
+
+ [limit](../common-parameter/common-parameters#limit)
+
+- **pretty**(*查询参数*):string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+- **propagationPolicy**(*查询参数*):string
+
+ [propagationPolicy](../common-parameter/common-parameters#propagationpolicy)
+
+- **resourceVersion**(*查询参数*):string
+
+ [resourceVersion](../common-parameter/common-parameters#resourceversion)
+
+- **resourceVersionMatch**(*查询参数*):string
+
+ [resourceVersionMatch](../common-parameter/common-parameters#resourceversionmatch)
+
+- **sendInitialEvents**(*查询参数*):boolean
+
+ [sendInitialEvents](../common-parameter/common-parameters#sendinitialevents)
+
+- **timeoutSeconds**(*查询参数*):integer
+
+ [timeoutSeconds](../common-parameter/common-parameters#timeoutseconds)
+
+#### 响应
+
+200 ([Status](../common-definitions/status#status)): OK
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/karmada-api/common-definitions/delete-options.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/karmada-api/common-definitions/delete-options.md
new file mode 100644
index 000000000..50de42a1a
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/karmada-api/common-definitions/delete-options.md
@@ -0,0 +1,65 @@
+---
+api_metadata:
+ apiVersion: ""
+ import: "k8s.io/apimachinery/pkg/apis/meta/v1"
+ kind: "DeleteOptions"
+content_type: "api_reference"
+description: "DeleteOptions may be provided when deleting an API object."
+title: "DeleteOptions"
+weight: 1
+auto_generated: true
+---
+
+[//]: # (The file is auto-generated from the Go source code of the component using a generic generator,)
+[//]: # (which is forked from [reference-docs](https://github.com/kubernetes-sigs/reference-docs.)
+[//]: # (To update the reference content, please follow the `reference-api.sh`.)
+
+`import "k8s.io/apimachinery/pkg/apis/meta/v1"`
+
+删除 API 对象时,可以提供 DeleteOptions。
+
+
+
+- **apiVersion** (string)
+
+ APIVersion 定义对象的版本化模式。服务器应将已识别的模式转换为最新的内部值,可能拒绝无法识别的值。更多信息,请浏览 https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
+
+- **dryRun** ([]string)
+
+ 如果存在,表示修改不会被保留。无效或无法识别的 dryRun 指令会导致错误响应,也不会进一步处理请求。有效值为:
+ - All:将处理所有空运行阶段。
+
+- **gracePeriodSeconds** (int64)
+
+  此字段表示删除对象之前的持续时间(单位为秒)。取值必须为非负整数。取值为 0 表示立即删除。取值为 nil 时,使用指定类型的默认宽限期。如果未指定,则默认为各对象自身的取值。
+
+- **kind** (string)
+
+ Kind 是对象所表示的 REST 资源的字符串值。服务器可从客户端提交请求的端点推断出字符串的值。此字段无法更新,必须采用驼峰形式( CamelCase)表示。更多信息,请浏览 https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
+
+- **orphanDependents** (boolean)
+
+ Deprecated:此字段将在1.7中废弃,请使用 PropagationPolicy。此字段表示是否孤立依赖项。如果取值为 true,会向对象的终结器列表中添加 orphan 终结器。如果取值为 false,会删除对象终结器列表中的 orphan 终结器。可设置此字段或 PropagationPolicy,但不可两个同时设置。
+
+- **preconditions** (Preconditions)
+
+ 删除前必须满足先决条件。如果无法满足,将返回 409 Conflict。
+
+
+
+ *在执行操作(更新、删除等)之前,必须满足先决条件。*
+
+ - **preconditions.resourceVersion** (string)
+
+ 目标资源版本。
+
+ - **preconditions.uid** (string)
+
+ 目标UID。
+
+- **propagationPolicy** (string)
+
+ 是否以及如何执行垃圾收集。可以设置此字段或 OrphanDependents,但不能同时设置。默认策略由 metadata.finalizers 中现有的终结器和特定资源的默认策略设置所决定。可接受的值是:
+ - Orphan:孤立依赖项;
+ - Background:允许垃圾回收器后台删除依赖;
+ - Foreground:一个级联策略,前台删除所有依赖项。
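+
+下面是一个删除请求体中 DeleteOptions 字段取值的示意(以 YAML 形式表示,实际 HTTP 请求通常以 JSON 序列化;各取值均为假设):
+
+```
+gracePeriodSeconds: 30          # 删除前等待 30 秒优雅终止
+propagationPolicy: Foreground   # 前台级联删除所有依赖项
+preconditions:
+  uid: 7c9a0e2f-0000-0000-0000-000000000000   # 仅当 UID 匹配时才执行删除
+```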
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/karmada-api/common-definitions/label-selector.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/karmada-api/common-definitions/label-selector.md
new file mode 100644
index 000000000..1fcdf3d55
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/karmada-api/common-definitions/label-selector.md
@@ -0,0 +1,47 @@
+---
+api_metadata:
+ apiVersion: ""
+ import: "k8s.io/apimachinery/pkg/apis/meta/v1"
+ kind: "LabelSelector"
+content_type: "api_reference"
+description: "A label selector is a label query over a set of resources."
+title: "LabelSelector"
+weight: 2
+auto_generated: true
+---
+
+[//]: # (The file is auto-generated from the Go source code of the component using a generic generator,)
+[//]: # (which is forked from [reference-docs](https://github.com/kubernetes-sigs/reference-docs.)
+[//]: # (To update the reference content, please follow the `reference-api.sh`.)
+
+`import "k8s.io/apimachinery/pkg/apis/meta/v1"`
+
+标签选择器是对一组资源的标签进行查询。matchLabels 和 matchExpressions 的结果之间是与的关系。如果留空,表示匹配所有对象。null 表示不匹配任何对象。
+
+
+
+- **matchExpressions** ([]LabelSelectorRequirement)
+
+ matchExpressions 是标签选择器要求的列表。要求之间是与的关系(即所有的要求都要满足)。
+
+
+
+ *标签选择器要求由键、值和关联键与值的运算符组成。*
+
+ - **matchExpressions.key** (string),必选
+
+ *补丁策略:根据键 `key` 进行合并。*
+
+ key 是选择器应用的标签键。
+
+ - **matchExpressions.operator** (string),必选
+
+ operator 表示一个键与其值的关系。有效的运算符包括 In、NotIn、Exists 和 DoesNotExist。
+
+ - **matchExpressions.values** ([]string)
+
+ values 是字符串值的数组。如果运算符为 In 或 NotIn ,则 values 的数组必须非空。如果运算符为 Exists 或 DoesNotExist,则 values 的数组必须为空。在策略性合并补丁期间会替换此数组。
+
+- **matchLabels** (map[string]string)
+
+ matchLabels 是键值对的映射。matchLabels 映射中的单个键值对相当于 matchExpressions 的一个元素,键的字段为 key,运算符为 In,值为包含 value 的 values 数组。要求之间是与的关系(即所有的要求都要满足)。
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/karmada-api/common-definitions/list-meta.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/karmada-api/common-definitions/list-meta.md
new file mode 100644
index 000000000..60bb173f0
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/karmada-api/common-definitions/list-meta.md
@@ -0,0 +1,37 @@
+---
+api_metadata:
+ apiVersion: ""
+ import: "k8s.io/apimachinery/pkg/apis/meta/v1"
+ kind: "ListMeta"
+content_type: "api_reference"
+description: "ListMeta describes metadata that synthetic resources must have, including lists and various status objects."
+title: "ListMeta"
+weight: 3
+auto_generated: true
+---
+
+[//]: # (The file is auto-generated from the Go source code of the component using a generic generator,)
+[//]: # (which is forked from [reference-docs](https://github.com/kubernetes-sigs/reference-docs.)
+[//]: # (To update the reference content, please follow the `reference-api.sh`.)
+
+`import "k8s.io/apimachinery/pkg/apis/meta/v1"`
+
+ListMeta 描述合成资源必须具有的元数据,包括列表和各种状态对象。一个资源只能拥有 {ObjectMeta, ListMeta} 中的一个。
+
+
+
+- **continue** (string)
+
+ 如果用户对返回的项目数量设置了限制,可能会设置 continue,表示服务器有更多可用数据。取值是不透明的,可用于向列表的端点发出另一个请求,以检索下一组可用对象。如果服务器配置已更改或已过去几分钟,可能无法继续提供一致的列表。除非您从错误的消息中收到 continue 值,否则使用 continue 的值时返回的 resourceVersion 字段将与第一个响应中的值相同。
+
+- **remainingItemCount** (int64)
+
+  remainingItemCount 是列表中未包含在此列表响应中的后续项目的数量。如果列表请求包含标签或字段选择器,则剩余项的数量是未知的,在序列化过程中此字段将不设置并被省略。如果列表是完整的(既没有分块,也不是最后一个分块),则没有剩余项,在序列化过程中此字段同样不设置并被省略。早于 v1.15 的服务器不设置此字段。remainingItemCount 用于*估计*集合的大小。客户端不应依赖此字段被设置,也不应依赖其取值准确。
+
+- **resourceVersion** (string)
+
+  resourceVersion 是标识此对象的服务器内部版本的字符串,客户端可使用此字段确定对象何时更改。对客户端而言,此字段的取值是不透明的,必须不做修改地传回服务器。取值由系统填充,而且只读。更多信息,请浏览 https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency
+
+- **selfLink** (string)
+
+  Deprecated:selfLink 是遗留的只读字段,系统不再自动填充。
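+
+下面是一个列表响应中 ListMeta 片段的示意(各取值均为假设,continue 的内容对客户端不透明):
+
+```
+metadata:
+  resourceVersion: "123456"
+  continue: ENCODED_CONTINUE_TOKEN   # 不透明的分页令牌,原样用于下一次 list 请求
+  remainingItemCount: 40             # 估计的剩余项目数,客户端不应依赖其准确性
+```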
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/karmada-api/common-definitions/node-selector-requirement.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/karmada-api/common-definitions/node-selector-requirement.md
new file mode 100644
index 000000000..3b846faa3
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/karmada-api/common-definitions/node-selector-requirement.md
@@ -0,0 +1,41 @@
+---
+api_metadata:
+ apiVersion: ""
+ import: "k8s.io/api/core/v1"
+ kind: "NodeSelectorRequirement"
+content_type: "api_reference"
+description: "A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values."
+title: "NodeSelectorRequirement"
+weight: 4
+auto_generated: true
+---
+
+[//]: # (The file is auto-generated from the Go source code of the component using a generic generator,)
+[//]: # (which is forked from [reference-docs](https://github.com/kubernetes-sigs/reference-docs.)
+[//]: # (To update the reference content, please follow the `reference-api.sh`.)
+
+`import "k8s.io/api/core/v1"`
+
+节点选择器要求由键、值和关联键与值的运算符组成。
+
+
+
+- **key** (string),必选
+
+ key 是选择器应用的标签键。
+
+- **operator** (string),必选
+
+ operator 表示一个键与其值的关系。有效的运算符包括 In、NotIn、Exists、DoesNotExist、Gt 和 Lt。
+
+ 枚举值包括:
+ - `"DoesNotExist"`
+ - `"Exists"`
+ - `"Gt"`
+ - `"In"`
+ - `"Lt"`
+ - `"NotIn"`
+
+- **values** ([]string)
+
+ 字符串值的数组。如果运算符为 In 或 NotIn ,则 values 的数组必须非空。如果运算符为 Exists 或 DoesNotExist,则 values 的数组必须为空。如果运算符是 Gt 或 Lt,则 values 的数组只能包含一个元素,该元素将被解释为整数。此数组会在策略性合并补丁期间被替换。
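+
+下面是一个节点选择器要求列表的示意(标签键与取值均为假设):
+
+```
+- key: topology.kubernetes.io/zone
+  operator: In                       # 运算符为 In 时,values 必须非空
+  values:
+  - zone-1
+  - zone-2
+- key: node-role.kubernetes.io/control-plane
+  operator: DoesNotExist             # 运算符为 DoesNotExist 时,values 必须为空
+```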
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/karmada-api/common-definitions/object-meta.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/karmada-api/common-definitions/object-meta.md
new file mode 100644
index 000000000..8455e188d
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/karmada-api/common-definitions/object-meta.md
@@ -0,0 +1,177 @@
+---
+api_metadata:
+ apiVersion: ""
+ import: "k8s.io/apimachinery/pkg/apis/meta/v1"
+ kind: "ObjectMeta"
+content_type: "api_reference"
+description: "ObjectMeta is metadata that all persisted resources must have, which includes all objects users must create."
+title: "ObjectMeta"
+weight: 5
+auto_generated: true
+---
+
+[//]: # (The file is auto-generated from the Go source code of the component using a generic generator,)
+[//]: # (which is forked from [reference-docs](https://github.com/kubernetes-sigs/reference-docs.)
+[//]: # (To update the reference content, please follow the `reference-api.sh`.)
+
+`import "k8s.io/apimachinery/pkg/apis/meta/v1"`
+
+ObjectMeta 是所有持久化资源必须具有的元数据,其中包括用户必须创建的所有对象。
+
+
+
+- **annotations** (map[string]string)
+
+ Annotations 是一个非结构化键值映射,与资源一起存储,该资源可使用外部工具设置,以存储和检索任意元数据。Annotations 都是不可查询的,在修改对象时应该保留。更多信息,请浏览 https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations
+
+- **creationTimestamp** (Time)
+
+ CreationTimestamp 是一个时间戳,表示创建此对象时的服务器时间。无法保证在不同的操作中按发生前的顺序设置此字段。客户端不能设置此值。此字段以 RFC3339 形式表示,时间采用 UTC 格式。
+
+  取值由系统填充,而且只读。对于列表,此字段为 null。更多信息,请浏览 https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
+
+
+
+ *Time 是 time.Time 的包装器,它支持对 YAML 和 JSON 的正确编组。time 包的许多工厂方法提供了包装器。*
+
+- **deletionGracePeriodSeconds** (int64)
+
+ 对象从系统中删除之前允许优雅终止的时间(单位为秒)。此字段只有在 deletionTimestamp 设置时需要,只能被缩短,而且只读。
+
+- **deletionTimestamp** (Time)
+
+ DeletionTimestamp 是资源的删除时间,以 RFC 3339 形式表示。当用户请求优雅删除时,此字段由服务器设置,客户端不能直接设置。一旦终结器列表为空,资源将在此字段中的时间之后被删除(资源列表中不可见,也无法通过名称访问)。只要终结器列表包含项目,删除就会被阻止。一旦设置了 deletionTimestamp,取值不会被取消设置或在未来设置,尽管它可能会缩短或在此之前资源可能会删除。例如,用户可能会请求在 30 秒内删除 pod。Kubelet 会向 pod 中的容器发送优雅终止信号。30 秒后,Kubelet 向容器发送硬终止信号(SIGKILL),并在清理后,从 API 中删除 pod。如果存在网络分区,对象在此时间戳之后也可能依然存在,直到管理员或自动进程确定资源已完全终止。如果未设置,表示未请求优雅删除对象。
+
+ 请求优雅删除时,取值由系统填充,而且只读。更多信息,请浏览 https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
+
+
+
+ *Time 是 time.Time 的包装器,它支持对 YAML 和 JSON 的正确编组。time 包的许多工厂方法提供了包装器。*
+
+- **finalizers** ([]string)
+
+  在从注册表中删除对象之前,此字段必须为空。每个条目都是负责组件的标识符,该组件负责从列表中删除对应的条目。如果对象的 deletionTimestamp 不为 nil,则只能删除列表中的条目。终结器可以按任何顺序处理和删除。之所以不强制顺序,是因为强制顺序会带来终结器被卡住的重大风险。finalizers 是一个共享字段,任何具有权限的参与者都可以对其重新排序。如果按顺序处理终结器列表,列表中负责第一个终结器的组件可能会一直等待列表中靠后的终结器所属组件产生的信号(字段值、外部系统或其他),从而导致死锁。在不强制顺序的情况下,终结器可以自由排序,且不易受到列表中顺序变化的影响。
+
+- **generateName** (string)
+
+ GenerateName 是服务器使用的可选前缀,仅在没有 Name 字段的情况下生成唯一名称。如果使用此字段,返回给客户端的名称会与传递的名称不同。取值还会与唯一后缀组合。取值具有与 Name 字段相同的验证规则,且后缀的长度可能会被截断,使名称在服务器上唯一。
+
+ 如果指定此字段,且存在生成的名称,服务器将返回 409。
+
+ 仅在未指定 Name 时应用。更多信息,请浏览 https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency
+
+- **generation** (int64)
+
+ 表示所需状态的特定序列号。取值由系统填充,而且只读。
+
+- **labels** (map[string]string)
+
+ 字符串键与值之间的映射,可用于对象的组织和分类(范围和选择)。可匹配复制控制器和服务的选择器。更多信息,请浏览 https://kubernetes.io/docs/concepts/overview/working-with-objects/labels
+
+- **managedFields** ([]ManagedFieldsEntry)
+
+  ManagedFields 将工作流 ID 和版本映射到由该工作流管理的字段集。主要用于内部管理,用户通常不需要设置或理解此字段。工作流可以是用户名、控制器名称或特定应用路径的名称,如 ci-cd。字段集始终采用工作流修改对象时所使用的版本。
+
+
+
+ *ManagedFieldsEntry 是一个工作流 ID,一个字段集(FieldSet),也是该字段集适用的资源的组版本。*
+
+ - **managedFields.apiVersion** (string)
+
+ APIVersion 定义字段集适用的资源的版本。格式为“组/版本(group/version)”,就像顶级 APIVersion 字段一样。字段集的版本无法自动转换,所以必须跟踪字段集的版本。
+
+ - **managedFields.fieldsType** (string)
+
+ FieldsType 是不同字段格式和版本的标识符。目前取值只有一个:FieldsV1。
+
+ - **managedFields.fieldsV1** (FieldsV1)
+
+ FieldsV1 是 FieldsV1 类别中所描述的第一个 JSON 版本格式。
+
+
+
+ *FieldsV1 以 JSON 格式将字段存储在 Trie 等数据结构中。
+
+    每个键要么是表示字段本身的 '.'(并且总是映射到一个空集),要么是表示子字段或列表项的字符串。字符串有四种格式:- 'f:<name>',其中 <name> 是结构体中字段的名称或映射中的键。- 'v:<value>',其中 <value> 是列表项的实际值(JSON 格式)。- 'i:<index>',其中 <index> 是列表项在列表中的位置。- 'k:<keys>',其中 <keys> 是列表项的键字段与其唯一值之间的映射。如果一个键映射到空的字段值,则该键所表示的字段是集合的一部分。
+
+ sigs.k8s.io/structured-merge-diff 中定义了具体格式*
+
+ - **managedFields.manager** (string)
+
+ Manager 是管理字段的工作流的标识符。
+
+ - **managedFields.operation** (string)
+
+ Operation 是导致创建 ManagedFieldsEntry 的操作类型。取值包括 Apply 和 Update。
+
+ - **managedFields.subresource** (string)
+
+ Subresource 是用于更新对象的子资源的名称,如果对象是通过主资源更新的,则此字段为空字符串。字段的取值可用于区分管理器,即使它们共享相同的名称。例如,状态更新不同于使用相同管理器名称的常规更新。注意:APIVersion 与 Subresource 无关,APIVersion 始终与主资源的版本有关。
+
+ - **managedFields.time** (Time)
+
+    Time 是添加 ManagedFields 条目的时间戳。如果添加了字段、管理器更改了任何所属字段的值或删除了字段,时间戳也会更新。如果某个字段因被其他管理器接管而从条目中移除,时间戳不会更新。
+
+
+
+ *Time 是 time.Time 的包装器,它支持对 YAML 和 JSON 的正确编组。time 包的许多工厂方法提供了包装器。*
+
+- **name** (string)
+
+ 名称在命名空间中必须是唯一的。名称是创建资源时必需的,尽管某些资源可能允许客户端自动请求生成适当的名称。名称主要用于表示创建幂等性和配置定义。此字段无法更新。更多信息,请浏览 https://kubernetes.io/docs/concepts/overview/working-with-objects/names#names
+
+- **namespace** (string)
+
+ Namespace 定义一个空间,其中每个名称必须唯一。空命名空间相当于 default 命名空间,但 default 才是默认命名空间的规范表示。并非所有对象都需要限定在命名空间内,对于这些对象,此字段值为空。
+
+ 必须是 DNS_LABEL,而且无法更新。更多信息,请浏览 https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces
+
+- **ownerReferences** ([]OwnerReference)
+
+ *补丁策略:根据键 `uid` 进行合并。*
+
+ 对象所依赖的对象列表。如果列表中的所有对象都已删除,对象会被视为垃圾收集。如果对象由控制器管理,则列表中的条目将指向此控制器,控制器字段设置为 true。管理对象的控制器不能有多个。
+
+
+
+ *OwnerReference 包含足够可以让你识别属主对象的信息。属主对象必须与依赖对象处于同一命名空间内或同一集群内,因此没有命名空间字段。*
+
+ - **ownerReferences.apiVersion** (string),必选
+
+ 被引用资源的 API 版本。
+
+ - **ownerReferences.kind** (string),必选
+
+ 被引用资源的 API 类别。更多信息,请浏览 https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
+
+ - **ownerReferences.name** (string),必选
+
+ 被引用资源的名称。更多信息,请浏览 https://kubernetes.io/docs/concepts/overview/working-with-objects/names#names
+
+ - **ownerReferences.uid** (string),必选
+
+ 被引用资源的 UID。更多信息,请浏览 https://kubernetes.io/docs/concepts/overview/working-with-objects/names#uids
+
+ - **ownerReferences.blockOwnerDeletion** (boolean)
+
+    如果取值为 true,且属主具有 “foregroundDeletion” 终结器,则在删除此引用之前,无法从键值存储中删除属主。有关垃圾收集器与此字段的交互方式以及强制执行前台删除的方式,请浏览 https://kubernetes.io/docs/concepts/architecture/garbage-collection/#foreground-deletion 默认值为 false。要设置此字段,用户需要属主的“删除”权限,否则将返回 422 (Unprocessable Entity)。
+
+ - **ownerReferences.controller** (boolean)
+
+ 如果取值为 true,引用指向管理控制器。
+
+- **resourceVersion** (string)
+
+  这是一个不透明的值,表示此对象的内部版本,客户端可以用它来确定对象何时发生了变化。可用于乐观并发、变更检测以及对资源或资源集的监视操作。客户端必须将该值视为不透明,并原样传回服务器。该值可能仅对特定资源或资源集有效。
+
+ 取值由系统填充,而且只读。客户端必须将字段取值视为不透明。 更多信息,请浏览 https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency
+
+- **selfLink** (string)
+
+ Deprecated:selfLink 是遗留的只读字段,系统不再自动填充。
+
+- **uid** (string)
+
+  UID 是该对象在时间和空间上唯一的值。通常由服务器在成功创建资源时生成,不允许在 PUT 操作中更改。
+
+ 取值由系统填充,而且只读。更多信息,请浏览 https://kubernetes.io/docs/concepts/overview/working-with-objects/names#uids
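+
+下面是一个仅作示意的 metadata 片段(其中的名称、标签、终结器和 UID 均为虚构的假设值),用于直观展示上文介绍的 generateName、labels、finalizers 和 ownerReferences 等字段在清单中的常见写法:
+
+```yaml
+metadata:
+  generateName: nginx-                 # 服务器会在此前缀后追加唯一后缀生成名称
+  namespace: default
+  labels:
+    app: nginx                         # 可被选择器(selector)匹配
+  finalizers:
+    - example.com/cleanup              # 该条目被移除之前,对象不会真正从注册表中删除
+  ownerReferences:
+    - apiVersion: apps/v1
+      kind: Deployment
+      name: nginx
+      uid: 00000000-0000-0000-0000-000000000000   # 示意用的占位 UID
+      controller: true
+      blockOwnerDeletion: true
+```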
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/karmada-api/common-definitions/patch.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/karmada-api/common-definitions/patch.md
new file mode 100644
index 000000000..36c800a9f
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/karmada-api/common-definitions/patch.md
@@ -0,0 +1,21 @@
+---
+api_metadata:
+ apiVersion: ""
+ import: "k8s.io/apimachinery/pkg/apis/meta/v1"
+ kind: "Patch"
+content_type: "api_reference"
+description: "Patch is provided to give a concrete name and type to the Kubernetes PATCH request body."
+title: "Patch"
+weight: 6
+auto_generated: true
+---
+
+[//]: # (The file is auto-generated from the Go source code of the component using a generic generator,)
+[//]: # (which is forked from [reference-docs](https://github.com/kubernetes-sigs/reference-docs.)
+[//]: # (To update the reference content, please follow the `reference-api.sh`.)
+
+`import "k8s.io/apimachinery/pkg/apis/meta/v1"`
+
+Patch 为 Kubernetes PATCH 请求体提供具体的名称和类型。
+
+
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/karmada-api/common-definitions/quantity.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/karmada-api/common-definitions/quantity.md
new file mode 100644
index 000000000..09a7fea55
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/karmada-api/common-definitions/quantity.md
@@ -0,0 +1,68 @@
+---
+api_metadata:
+ apiVersion: ""
+ import: "k8s.io/apimachinery/pkg/api/resource"
+ kind: "Quantity"
+content_type: "api_reference"
+description: "Quantity is a fixed-point representation of a number."
+title: "Quantity"
+weight: 7
+auto_generated: true
+---
+
+[//]: # (The file is auto-generated from the Go source code of the component using a generic generator,)
+[//]: # (which is forked from [reference-docs](https://github.com/kubernetes-sigs/reference-docs.)
+[//]: # (To update the reference content, please follow the `reference-api.sh`.)
+
+`import "k8s.io/apimachinery/pkg/api/resource"`
+
+Quantity 是数字的定点表示。除了 String() 和 AsInt64() 的访问接口之外,此字段还在 JSON 和 YAML 中提供了方便的序列化(marshaling)和反序列化(unmarshalling)。
+
+序列化格式如下:
+
+```
+<quantity>        ::= <signedNumber><suffix>
+
+  (注意 <suffix> 可能为空,例如 <decimalSI> 的 "" 情形。)
+
+<digit>           ::= 0 | 1 | ... | 9
+<digits>          ::= <digit> | <digit><digits>
+<number>          ::= <digits> | <digits>.<digits> | <digits>. | .<digits>
+<sign>            ::= "+" | "-"
+<signedNumber>    ::= <number> | <sign><number>
+<suffix>          ::= <binarySI> | <decimalExponent> | <decimalSI>
+<binarySI>        ::= Ki | Mi | Gi | Ti | Pi | Ei
+
+  (国际单位制度,请浏览: http://physics.nist.gov/cuu/Units/binary.html)
+
+<decimalSI>       ::= m | "" | k | M | G | T | P | E
+
+  (注意,1024 = 1Ki,1000 = 1k)
+
+<decimalExponent> ::= "e" <signedNumber> | "E" <signedNumber>
+```
+
+无论使用三种指数形式中的哪一种,任何数量都不可能是大于 2^63-1 的数字,小数位也不能超过 3 位。如果有较大的数字,则会取最大值(即 2^63-1);如果有更精确的数字,则会向上取整(例如:0.1m 将向上取整为 1m)。如果需要更大或更小的数量,可以后续进行扩展。
+
+从字符串解析 Quantity 时,字符串的后缀类型会被记住,并在序列化时再次使用相同的类型。
+
+在序列化之前,Quantity 将以“规范形式”呈现。这意味着指数或后缀将向上或向下调整(尾数相应增加或减少),以便:
+
+- 无精确度丢失。
+- 无小数数字。
+- 指数(或后缀)尽可能大。
+
+除非数量为负数,否则正负号将被省略。
+
+示例:
+
+- 1.5 将被序列化成 “1500m”。
+- 1.5Gi 将被序列化成 “1536Mi”。
+
+注意,数量在内部永远不会由浮点数表示。这是本设计的重中之重。
+
+只要非规范值格式正确,仍会被解析,但将以其规范形式重新发出 (所以一定要使用规范形式,不要执行 diff 比较)。
+
+采用这种格式,就很难在不编写某种特殊处理的代码的情况下使用这些数字,进而希望实现者也使用定点实现。
+
+
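+下面是一个仅作示意的容器资源配置片段(数值本身为假设值),展示 Quantity 两类常见后缀的写法:
+
+```yaml
+resources:
+  requests:
+    cpu: 250m        # 十进制 SI 后缀:0.25 个 CPU
+    memory: 128Mi    # 二进制 SI 后缀:128 × 2^20 字节
+  limits:
+    cpu: "1"         # 1 个完整 CPU,等价于 1000m
+    memory: 1Gi
+```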
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/karmada-api/common-definitions/status.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/karmada-api/common-definitions/status.md
new file mode 100644
index 000000000..8efb024de
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/karmada-api/common-definitions/status.md
@@ -0,0 +1,101 @@
+---
+api_metadata:
+ apiVersion: ""
+ import: "k8s.io/apimachinery/pkg/apis/meta/v1"
+ kind: "Status"
+content_type: "api_reference"
+description: "Status is a return value for calls that don't return other objects."
+title: "Status"
+weight: 8
+auto_generated: true
+---
+
+[//]: # (The file is auto-generated from the Go source code of the component using a generic generator,)
+[//]: # (which is forked from [reference-docs](https://github.com/kubernetes-sigs/reference-docs.)
+[//]: # (To update the reference content, please follow the `reference-api.sh`.)
+
+`import "k8s.io/apimachinery/pkg/apis/meta/v1"`
+
+Status 是那些不返回其他对象的调用的返回值。
+
+
+
+- **apiVersion** (string)
+
+ APIVersion 定义对象的版本模式。服务器将识别的模式转换为最新的内部值,可能拒绝未识别的值。更多信息,请浏览 https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
+
+- **code** (int32)
+
+  此状态建议的 HTTP 返回码。如果未设置,则为 0。
+
+- **details** (StatusDetails)
+
+ 与原因有关的扩展数据。每个原因都可以定义自己的扩展细节。此字段是可选的,且不保证返回的数据符合除原因类型定义外的任何模式。
+
+
+
+ *StatusDetails 是一组可以由服务器设置的附加属性,以提供有关响应的附加信息。Status 对象的原因(Reason)字段定义了要设置的属性。客户端必须忽略与每个属性的定义类型不匹配的字段,并假定任何属性都可能为空、无效或未定义。*
+
+ - **details.causes** ([]StatusCause)
+
+ Causes 数组包含与 StatusReason 失败有关的更多详细信息。并非所有的 StatusReason 都可以提供详细的原因。
+
+
+
+ *StatusCause 提供了有关 api.Status 失败的更多信息,包括多个错误的情况。*
+
+ - **details.causes.field** (string)
+
+      导致错误的资源中的字段,以其 JSON 序列化名称命名,可能包括用于表示嵌套属性的点号和后缀表示法。数组的索引从零开始。如果某个字段有多个错误,它可能会在 causes 数组中出现多次。此字段可选。
+
+ 示例:
+ “name”:当前资源的字段 “name”。
+ "items[0].name":"items" 中第一个数组条目的字段 "name"。
+
+ - **details.causes.message** (string)
+
+ 错误原因的描述(人类可读消息)。此字段可以直接呈现给读者。
+
+ - **details.causes.reason** (string)
+
+ 错误原因的描述(机器可读消息)。如果留空,表示没有可用的信息。
+
+ - **details.group** (string)
+
+ 与 StatusReason 关联的资源的组属性。
+
+ - **details.kind** (string)
+
+ 与 StatusReason 关联的资源的类别属性。在某些操作上可能与请求的资源类别不同。更多信息,请浏览 https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
+
+ - **details.name** (string)
+
+ 与 StatusReason 关联的资源的名称属性(当有单个名称可以描述时)。
+
+ - **details.retryAfterSeconds** (int32)
+
+ 表示重试操作前的时间(以秒为单位)。某些错误可能指出客户端必须采取替代操作,对于这些错误,此字段可能表示在采取替代操作之前等待的时间。
+
+ - **details.uid** (string)
+
+ 资源的UID (当有单个资源可以描述时)。更多信息,请浏览 https://kubernetes.io/docs/concepts/overview/working-with-objects/names#uids
+
+- **kind** (string)
+
+ Kind 是对象所表示的 REST 资源的字符串值。服务器可从客户端提交请求的端点推断出字符串的值。此字段无法更新,必须采用驼峰形式表示。更多信息,请浏览 https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
+
+- **message** (string)
+
+ 操作状态的描述(人类可读消息)。
+
+- **metadata** ([ListMeta](../common-definitions/list-meta#listmeta))
+
+ 标准的列表元数据。更多信息,请浏览 https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
+
+- **reason** (string)
+
+ 解释操作处于 Failure 状态的原因(机器可读消息)。如果留空,表示没有可用的信息。Reason 对 HTTP 状态码进行解释,但不会覆盖状态码。
+
+- **status** (string)
+
+ 操作的状态。取值为 Success 或 Failure。更多信息,请浏览 https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status
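+
+例如,删除一个不存在的资源时,API 服务器通常会返回类似下面的 Status 对象(其中的资源名 foo 仅为假设值,用于示意):
+
+```yaml
+apiVersion: v1
+kind: Status
+metadata: {}
+status: Failure
+message: 'resourceinterpretercustomizations.config.karmada.io "foo" not found'
+reason: NotFound
+details:
+  name: foo
+  group: config.karmada.io
+  kind: resourceinterpretercustomizations
+code: 404
+```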
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/karmada-api/common-definitions/typed-local-object-reference.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/karmada-api/common-definitions/typed-local-object-reference.md
new file mode 100644
index 000000000..bd8d3c610
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/karmada-api/common-definitions/typed-local-object-reference.md
@@ -0,0 +1,33 @@
+---
+api_metadata:
+ apiVersion: ""
+ import: "k8s.io/api/core/v1"
+ kind: "TypedLocalObjectReference"
+content_type: "api_reference"
+description: "TypedLocalObjectReference contains enough information to let you locate the typed referenced object inside the same namespace."
+title: "TypedLocalObjectReference"
+weight: 9
+auto_generated: true
+---
+
+[//]: # (The file is auto-generated from the Go source code of the component using a generic generator,)
+[//]: # (which is forked from [reference-docs](https://github.com/kubernetes-sigs/reference-docs.)
+[//]: # (To update the reference content, please follow the `reference-api.sh`.)
+
+`import "k8s.io/api/core/v1"`
+
+TypedLocalObjectReference 包含足够的信息,使您能够在同一命名空间内按类别定位被引用的对象。
+
+
+
+- **kind** (string),必选
+
+ Kind 是被引用资源的类别。
+
+- **name** (string),必选
+
+ Name 是被引用资源的名称。
+
+- **apiGroup** (string)
+
+ APIGroup 是被引用资源所在的组。如果未指定 APIGroup,则核心 API 组中必须包含指定的 Kind。对于任何其他第三方类型,APIGroup 都是必需的。
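+
+下面是一个仅作示意的引用片段(example.io 组和 StorageBucket 类别均为虚构),展示在诸如 Ingress 的 defaultBackend.resource 这类字段中,这三个属性的组合方式:
+
+```yaml
+resource:
+  apiGroup: example.io     # 核心 API 组之外的资源必须显式指定 apiGroup
+  kind: StorageBucket
+  name: static-assets
+```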
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/karmada-api/common-parameter/common-parameters.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/karmada-api/common-parameter/common-parameters.md
new file mode 100644
index 000000000..12ce54cc1
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/karmada-api/common-parameter/common-parameters.md
@@ -0,0 +1,144 @@
+---
+api_metadata:
+ apiVersion: ""
+ import: ""
+ kind: "Common Parameters"
+content_type: "api_reference"
+description: ""
+title: "Common Parameters"
+weight: 9
+auto_generated: true
+---
+
+[//]: # (The file is auto-generated from the Go source code of the component using a generic generator,)
+[//]: # (which is forked from [reference-docs](https://github.com/kubernetes-sigs/reference-docs.)
+[//]: # (To update the reference content, please follow the `reference-api.sh`.)
+
+## allowWatchBookmarks
+
+allowWatchBookmarks 请求类型为 BOOKMARK 的监视事件。没有实现书签的服务器可能会忽略这个标志,并根据服务器的判断发送书签。客户端不应该假设书签会在任何特定的时间间隔返回,也不应该假设服务器会在会话期间发送任何 BOOKMARK 事件。如果当前不是 watch 请求,则忽略该字段。
+
+
+
+## continue
+
+当需要从服务器检索更多结果时,应该设置 continue 选项。由于这个值是服务器定义的,客户端只能使用先前查询结果中具有相同查询参数的 continue 值(continue 值除外),服务器可能拒绝它识别不到的 continue 值。如果指定的 continue 值不再有效,无论是由于过期(通常是 5 到 15 分钟)还是服务器上的配置更改,服务器将响应 "410 ResourceExpired" 错误和一个 continue 令牌。如果客户端需要一个一致的列表,它必须在没有 continue 字段的情况下重新发起 list 请求。否则,客户端可能会发送另一个带有 410 错误中收到的令牌的 list 请求,服务器将响应从下一个键开始的列表,但列表数据来自最新的快照,这与之前的列表结果不一致:在第一个 list 请求之后创建、修改或删除的对象,只要其键在"下一个键"之后,就会包含在响应中。
+
+当 watch 字段为 true 时,不支持此字段。客户端可以从服务器返回的最后一个 resourceVersion 值开始监视,就不会错过任何修改。
+
+
+
+## dryRun
+
+表示不应该持久化所请求的修改。无效或无法识别的 dryRun 指令将导致错误响应,并且服务器不再对请求进行进一步处理。有效值为:
+- All,表示将处理所有的演练阶段。
+
+
+
+## fieldManager
+
+fieldManager 是与进行这些变更的参与者或实体相关联的名称。长度小于或等于 128 个字符且仅包含可打印字符,如 https://golang.org/pkg/unicode/#IsPrint 所定义。
+
+
+
+## fieldSelector
+
+通过字段限制返回对象列表的选择器。默认为返回所有对象。
+
+
+
+## fieldValidation
+
+fieldValidation 指示服务器如何处理请求(POST/PUT/PATCH)中包含未知或重复字段的对象。有效值为:
+- Ignore:将忽略从对象中默默删除的所有未知字段,并将忽略除解码器遇到的最后一个重复字段之外的所有字段。这是在 v1.23 之前的默认行为。
+- Warn:将针对从对象中删除的各个未知字段以及所遇到的各个重复字段,分别通过标准警告响应头发出警告。如果没有其他错误,请求仍然会成功,并且只会保留所有重复字段中的最后一个。这是 v1.23+ 版本中的默认设置。
+- Strict:如果从对象中删除任何未知字段,或者存在任何重复字段,将使请求失败并返回 BadRequest 错误。从服务器返回的错误将包含遇到的所有未知和重复字段。
+
+
+
+## force
+
+Force 将“强制”执行 Apply 请求。这意味着用户将重新获取由其他人所拥有的冲突字段。对于非 apply 的补丁请求,不得设置 Force 标志。
+
+
+
+## gracePeriodSeconds
+
+删除对象前的持续时间(秒数)。值必须为非负整数。取值为 0 表示立即删除。如果该值为 nil,将使用指定类型的默认宽限期。如果没有指定,默认为每个对象的设置值。0 表示立即删除。
+
+
+
+## labelSelector
+
+通过标签限制返回对象列表的选择器。默认为返回所有对象。
+
+
+
+## limit
+
+limit 是一个列表调用返回的最大响应数。如果有更多的条目,服务器会将列表元数据上的 `continue` 字段设置为一个值,该值可以用于相同的初始查询来检索下一组结果。设置 limit 可能会在所有请求的对象被过滤掉的情况下返回少于请求的条目数量(下限为零),并且客户端应该只根据 continue 字段是否存在来确定是否有更多的结果可用。服务器可能选择不支持 limit 参数,并将返回所有可用的结果。如果指定了 limit 并且 continue 字段为空,客户端可能会认为没有更多的结果可用。如果 watch 为 true,则不支持此字段。
+
+服务器保证在使用 continue 时返回的对象将与不带 limit 发起单次列表调用所返回的对象相同。也就是说,在发出第一个请求后所创建、修改或删除的对象将不包含在任何后续的继续请求中。这有时被称为一致性快照,确保使用 limit 分块接收非常大结果集的客户端能够看到所有可能的对象。如果对象在分块列表期间被更新,则返回计算第一个列表结果时存在的对象版本。
+
+
+
+## namespace
+
+对象名称和身份验证范围,例如用于团队和项目。
+
+
+
+## pretty
+
+如果设置为 'true',那么输出将以美化格式(pretty-print)打印。
+
+
+
+## propagationPolicy
+
+是否以及如何执行垃圾收集。可以设置此字段或 OrphanDependents,但不能同时设置。默认策略由 metadata.finalizers 和特定资源的默认策略设置决定。可接受的值是:
+- Orphan:孤立依赖项;
+- Background:允许垃圾回收器后台删除依赖;
+- Foreground:一个级联策略,前台删除所有依赖项。
+
+
+
+## resourceVersion
+
+resourceVersion 对请求所针对的资源版本设置约束。详情请参见:https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions
+
+默认不设置。
+
+
+
+## resourceVersionMatch
+
+resourceVersionMatch 决定如何将 resourceVersion 应用于列表调用。强烈建议对设置了 resourceVersion 的列表调用设置 resourceVersionMatch,具体请参见:https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions
+
+默认不设置。
+
+
+
+## sendInitialEvents
+
+`sendInitialEvents=true` 可以和 `watch=true` 一起设置。在这种情况下,监视通知流将从合成事件开始,以生成集合中对象的当前状态。一旦发送了所有此类事件,将发送合成的 Bookmark 事件。bookmark 将报告对象集合对应的 ResourceVersion(RV),并标有 `"k8s.io/initial-events-end": "true"` 注解。之后,监视通知流将照常进行,发送与所监视的对象的变更(在 RV 之后)对应的监视事件。
+
+当设置了 `sendInitialEvents` 选项时,我们还需要设置 `resourceVersionMatch` 选项。watch 请求的语义如下:
+- `resourceVersionMatch` = NotOlderThan 被解释为"数据至少与提供的 `resourceVersion` 一样新",最迟当状态同步到与 ListOptions 提供的版本一样新的 `resourceVersion` 时,发送 bookmark 事件。如果 `resourceVersion` 未设置,这将被解释为"一致读取",最迟当状态同步到开始处理请求的那一刻时,发送 bookmark 事件。
+- `resourceVersionMatch` 设置为任何其他值或未设置时,将返回 Invalid 错误。
+
+如果 `resourceVersion=""` 或 `resourceVersion="0"`(出于向后兼容原因),默认为 true,否则默认为 false。
+
+
+
+## timeoutSeconds
+
+list/watch 调用的超时秒数。限制调用的持续时间,无论是否有活动。
+
+
+
+## watch
+
+监视对所述资源的变更,并将此类变更以添加、更新和删除通知流的形式返回。指定 resourceVersion。
+
+
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/karmada-api/config-resources/resource-interpreter-customization-v1alpha1.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/karmada-api/config-resources/resource-interpreter-customization-v1alpha1.md
new file mode 100644
index 000000000..3397431e0
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/karmada-api/config-resources/resource-interpreter-customization-v1alpha1.md
@@ -0,0 +1,683 @@
+---
+api_metadata:
+ apiVersion: "config.karmada.io/v1alpha1"
+ import: "github.com/karmada-io/karmada/pkg/apis/config/v1alpha1"
+ kind: "ResourceInterpreterCustomization"
+content_type: "api_reference"
+description: "ResourceInterpreterCustomization describes the configuration of a specific resource for Karmada to get the structure."
+title: "ResourceInterpreterCustomization v1alpha1"
+weight: 1
+auto_generated: true
+---
+
+[//]: # (The file is auto-generated from the Go source code of the component using a generic generator,)
+[//]: # (which is forked from [reference-docs](https://github.com/kubernetes-sigs/reference-docs.)
+[//]: # (To update the reference content, please follow the `reference-api.sh`.)
+
+`apiVersion: config.karmada.io/v1alpha1`
+
+`import "github.com/karmada-io/karmada/pkg/apis/config/v1alpha1"`
+
+## ResourceInterpreterCustomization
+
+ResourceInterpreterCustomization 描述特定资源的配置,方便 Karmada 获取结构。它的优先级高于默认解释器和 webhook 解释器。
+
+
+
+- **apiVersion**: config.karmada.io/v1alpha1
+
+- **kind**: ResourceInterpreterCustomization
+
+- **metadata** ([ObjectMeta](../common-definitions/object-meta#objectmeta))
+
+- **spec** ([ResourceInterpreterCustomizationSpec](../config-resources/resource-interpreter-customization-v1alpha1#resourceinterpretercustomizationspec)),必选
+
+ Spec 是配置的详情。
+
+## ResourceInterpreterCustomizationSpec
+
+ResourceInterpreterCustomizationSpec 是配置的详情。
+
+
+
+- **customizations** (CustomizationRules),必选
+
+ Customizations 是对解释规则的描述。
+
+
+
+ *CustomizationRules 是对解释规则的描述。*
+
+ - **customizations.dependencyInterpretation** (DependencyInterpretation)
+
+ DependencyInterpretation 描述了 Karmada 分析依赖资源的规则。Karmada 为几种标准的 Kubernetes 类型提供了内置规则。如果设置了 DependencyInterpretation,内置规则将被忽略。更多信息,请浏览:https://karmada.io/docs/userguide/globalview/customizing-resource-interpreter/#interpretdependency
+
+
+
+ *DependencyInterpretation 是用于解释特定资源的依赖资源的规则。*
+
+ - **customizations.dependencyInterpretation.luaScript** (string),必选
+
+ LuaScript 是用于解释特定资源的依赖关系的 Lua 脚本。该脚本应实现以下功能:
+ ```yaml
+ luaScript: >
+ function GetDependencies(desiredObj)
+          dependencies = {}
+ if desiredObj.spec.serviceAccountName ~= nil and desiredObj.spec.serviceAccountName ~= "default" then
+            dependency = {}
+ dependency.apiVersion = "v1"
+ dependency.kind = "ServiceAccount"
+ dependency.name = desiredObj.spec.serviceAccountName
+ dependency.namespace = desiredObj.namespace
+            dependencies[1] = dependency
+ end
+ return dependencies
+ end
+ ```
+
+ LuaScript 的内容是一个完整的函数,包括声明和定义。
+
+ 以下参数将由系统提供:
+ - desiredObj:将应用于成员集群的配置。
+
+ 返回值由 DependentObjectReference 的列表表示。
+
+- **customizations.healthInterpretation** (HealthInterpretation)
+
+ HealthInterpretation 描述了健康评估规则,Karmada 可以通过这些规则评估各类资源的健康状态。
+
+
+
+ *HealthInterpretation 是解释特定资源健康状态的规则。*
+
+ - **customizations.healthInterpretation.luaScript** (string),必选
+
+ LuaScript 是评估特定资源的健康状态的 Lua 脚本。该脚本应实现以下功能:
+ ```yaml
+ luaScript: >
+ function InterpretHealth(observedObj)
+ if observedObj.status.readyReplicas == observedObj.spec.replicas then
+ return true
+ end
+ end
+ ```
+
+ LuaScript 的内容是一个完整的函数,包括声明和定义。
+
+ 以下参数将由系统提供:
+ - observedObj:从特定成员集群观测到的配置。
+
+ 返回的 boolean 值表示健康状态。
+
+- **customizations.replicaResource** (ReplicaResourceRequirement)
+
+ ReplicaResource 描述了 Karmada 发现资源副本及资源需求的规则。对于声明式工作负载类型(如 Deployment)的 CRD 资源,可能会有用。由于 Karmada 知晓发现 Kubernetes 本机资源信息的方式,因此 Kubernetes 本机资源(Deployment、Job)通常不需要该字段。但如果已设置该字段,内置的发现规则将被忽略。
+
+
+
+ *ReplicaResourceRequirement 保存了获取所需副本及每个副本资源要求的脚本。*
+
+ - **customizations.replicaResource.luaScript** (string),必选
+
+ LuaScript 是发现资源所用的副本以及资源需求的 Lua 脚本。
+
+ 该脚本应实现以下功能:
+ ```yaml
+ luaScript: >
+ function GetReplicas(desiredObj)
+ replica = desiredObj.spec.replicas
+          requirement = {}
+          requirement.nodeClaim = {}
+ requirement.nodeClaim.nodeSelector = desiredObj.spec.template.spec.nodeSelector
+ requirement.nodeClaim.tolerations = desiredObj.spec.template.spec.tolerations
+ requirement.resourceRequest = desiredObj.spec.template.spec.containers[1].resources.limits
+ return replica, requirement
+ end
+ ```
+
+ LuaScript 的内容是一个完整的函数,包括声明和定义。
+
+ 以下参数将由系统提供:
+ - desiredObj:待应用于成员集群的配置。
+
+ 该函数有两个返回值:
+      - replica:声明的副本数。
+ - requirement:每个副本所需的资源,使用 ResourceBindingSpec.ReplicaRequirements 表示。
+
+ 返回值将被 ResourceBinding 或 ClusterResourceBinding 使用。
+
+- **customizations.replicaRevision** (ReplicaRevision)
+
+ ReplicaRevision 描述了 Karmada 修改资源副本的规则。对于声明式工作负载类型(如 Deployment)的 CRD 资源,可能会有用。由于 Karmada 知晓修改 Kubernetes 本机资源副本的方式,因此 Kubernetes 本机资源(Deployment、Job)通常不需要该字段。但如果已设置该字段,内置的修改规则将被忽略。
+
+
+
+ *ReplicaRevision 保存了用于修改所需副本的脚本。*
+
+ - **customizations.replicaRevision.luaScript** (string),必选
+
+ LuaScript 是修改所需规范中的副本的 Lua 脚本。该脚本应实现以下功能:
+ ```yaml
+ luaScript: >
+ function ReviseReplica(desiredObj, desiredReplica)
+ desiredObj.spec.replicas = desiredReplica
+ return desiredObj
+ end
+ ```
+
+ LuaScript 的内容是一个完整的函数,包括声明和定义。
+
+ 以下参数将由系统提供:
+ - desiredObj:待应用于成员集群的配置。
+
+ - desiredReplica:待应用于成员集群的期望副本数。
+
+ 返回的是修订后的配置,最终将应用于成员集群。
+
+ - **customizations.retention** (LocalValueRetention)
+
+ Retention 描述了 Karmada 对成员集群组件变化的预期反应。这样可以避免系统进入无意义循环,即 Karmada 资源控制器和成员集群组件,对同一个字段采用不同的值。例如,成员群集的 HPA 控制器可能会更改 Deployment 的 replicas。在这种情况下,Karmada 会保留 replicas,而不会去更改它。
+
+
+
+ *LocalValueRetention 保存了要保留的脚本。当前只支持 Lua 脚本。*
+
+ - **customizations.retention.luaScript** (string),必选
+
+ LuaScript 是将运行时值保留到所需规范的 Lua 脚本。
+
+ 该脚本应实现以下功能:
+ ```yaml
+ luaScript: >
+ function Retain(desiredObj, observedObj)
+ desiredObj.spec.fieldFoo = observedObj.spec.fieldFoo
+ return desiredObj
+ end
+ ```
+
+ LuaScript 的内容是一个完整的函数,包括声明和定义。
+
+ 以下参数将由系统提供:
+ - desiredObj:待应用于成员集群的配置。
+
+ - observedObj:从特定成员集群观测到的配置。
+
+ 返回的是保留的配置,最终将应用于成员集群。
+
+- **customizations.statusAggregation** (StatusAggregation)
+
+ StatusAggregation 描述了 Karmada 从成员集群收集的状态汇总到资源模板的规则。Karmada 为几种标准的 Kubernetes 类型提供了内置规则。如果设置了 StatusAggregation,内置规则将被忽略。更多信息,请浏览:https://karmada.io/docs/userguide/globalview/customizing-resource-interpreter/#aggregatestatus
+
+
+
+ *StatusAggregation 保存了用于聚合多个分散状态的脚本。*
+
+ - **customizations.statusAggregation.luaScript** (string),必选
+
+ LuaScript 是将分散状态聚合到所需规范的 Lua 脚本。该脚本应实现以下功能:
+ ```yaml
+ luaScript: >
+ function AggregateStatus(desiredObj, statusItems)
+ for i = 1, #statusItems do
+            desiredObj.status.readyReplicas = desiredObj.status.readyReplicas + statusItems[i].readyReplicas
+ end
+ return desiredObj
+ end
+ ```
+
+ LuaScript 的内容是一个完整的函数,包括声明和定义。
+
+ 以下参数将由系统提供:
+ - desiredObj:资源模板。
+ - statusItems:用 AggregatedStatusItem 表示的状态列表。
+
+ 返回的是状态聚合成的完整对象。
+
+ - **customizations.statusReflection** (StatusReflection)
+
+ StatusReflection 描述了 Karmada 挑选资源状态的规则。Karmada 为几种标准的 Kubernetes 类型提供了内置规则。如果设置了 StatusReflection,内置规则将被忽略。更多信息,请浏览:https://karmada.io/docs/userguide/globalview/customizing-resource-interpreter/#interpretstatus
+
+
+
+ *StatusReflection 保存了用于获取状态的脚本。*
+
+ - **customizations.statusReflection.luaScript** (string),必选
+
+ LuaScript 是从观测到的规范中获取状态的 Lua 脚本。该脚本应实现以下功能:
+ ```yaml
+ luaScript: >
+ function ReflectStatus(observedObj)
+          status = {}
+          status.readyReplicas = observedObj.status.readyReplicas
+ return status
+ end
+ ```
+
+ LuaScript 的内容是一个完整的函数,包括声明和定义。
+
+ 以下参数将由系统提供:
+ - observedObj:从特定成员集群观测到的配置。
+
+ 返回的是整个状态,也可以是状态的一部分,并会被 Work 和 ResourceBinding(ClusterResourceBinding) 使用。
+
+- **target** (CustomizationTarget),必选
+
+ CustomizationTarget 表示自定义的资源类型。
+
+
+
+ *CustomizationTarget 表示自定义的资源类型。*
+
+ - **target.apiVersion**(string),必选
+
+ APIVersion 表示目标资源的 API 版本。
+
+ - **target.kind**(string),必选
+
+ Kind 表示目标资源的类别。
+
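+下面是一个仅作示意的完整清单(资源名 declarative-deployment-demo 为假设值),将 target 与 customizations 中的一条规则组合在一起,便于与上述字段说明对照:
+
+```yaml
+apiVersion: config.karmada.io/v1alpha1
+kind: ResourceInterpreterCustomization
+metadata:
+  name: declarative-deployment-demo
+spec:
+  target:
+    apiVersion: apps/v1
+    kind: Deployment
+  customizations:
+    replicaRevision:
+      luaScript: >
+        function ReviseReplica(desiredObj, desiredReplica)
+          desiredObj.spec.replicas = desiredReplica
+          return desiredObj
+        end
+```
+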
+## ResourceInterpreterCustomizationList
+
+ResourceInterpreterCustomizationList 包含 ResourceInterpreterCustomization 的列表。
+
+
+
+- **apiVersion**: config.karmada.io/v1alpha1
+
+- **kind**: ResourceInterpreterCustomizationList
+
+- **metadata** ([ListMeta](../common-definitions/list-meta#listmeta))
+
+- **items** ([][ResourceInterpreterCustomization](../config-resources/resource-interpreter-customization-v1alpha1#resourceinterpretercustomization)),必选
+
+## 操作
+
+
+
+### `get`:查询指定的 ResourceInterpreterCustomization
+
+#### HTTP 请求
+
+GET /apis/config.karmada.io/v1alpha1/resourceinterpretercustomizations/{name}
+
+#### 参数
+
+- **name**(*路径参数*):string,必选
+
+ ResourceInterpreterCustomization 的名称
+
+- **pretty**(*查询参数*):string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+#### 响应
+
+200 ([ResourceInterpreterCustomization](../config-resources/resource-interpreter-customization-v1alpha1#resourceinterpretercustomization)): OK
+
+### `get`:查询指定 ResourceInterpreterCustomization 的状态
+
+#### HTTP 请求
+
+GET /apis/config.karmada.io/v1alpha1/resourceinterpretercustomizations/{name}/status
+
+#### 参数
+
+- **name**(*路径参数*):string,必选
+
+ ResourceInterpreterCustomization 名称
+
+- **pretty**(*查询参数*):string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+#### 响应
+
+200 ([ResourceInterpreterCustomization](../config-resources/resource-interpreter-customization-v1alpha1#resourceinterpretercustomization)): OK
+
+### `list`:查询所有 ResourceInterpreterCustomization
+
+#### HTTP 请求
+
+GET /apis/config.karmada.io/v1alpha1/resourceinterpretercustomizations
+
+#### 参数
+
+- **allowWatchBookmarks**(*查询参数*):boolean
+
+ [allowWatchBookmarks](../common-parameter/common-parameters#allowwatchbookmarks)
+
+- **continue**(*查询参数*):string
+
+ [continue](../common-parameter/common-parameters#continue)
+
+- **fieldSelector**(*查询参数*):string
+
+ [fieldSelector](../common-parameter/common-parameters#fieldselector)
+
+- **labelSelector**(*查询参数*):string
+
+ [labelSelector](../common-parameter/common-parameters#labelselector)
+
+- **limit**(*查询参数*):integer
+
+ [limit](../common-parameter/common-parameters#limit)
+
+- **pretty**(*查询参数*):string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+- **resourceVersion**(*查询参数*):string
+
+ [resourceVersion](../common-parameter/common-parameters#resourceversion)
+
+- **resourceVersionMatch**(*查询参数*):string
+
+ [resourceVersionMatch](../common-parameter/common-parameters#resourceversionmatch)
+
+- **sendInitialEvents**(*查询参数*):boolean
+
+ [sendInitialEvents](../common-parameter/common-parameters#sendinitialevents)
+
+- **timeoutSeconds**(*查询参数*):integer
+
+ [timeoutSeconds](../common-parameter/common-parameters#timeoutseconds)
+
+- **watch**(*查询参数*):boolean
+
+ [watch](../common-parameter/common-parameters#watch)
+
+#### 响应
+
+200 ([ResourceInterpreterCustomizationList](../config-resources/resource-interpreter-customization-v1alpha1#resourceinterpretercustomizationlist)): OK
+
+### `create`:创建一个 ResourceInterpreterCustomization
+
+#### HTTP 请求
+
+POST /apis/config.karmada.io/v1alpha1/resourceinterpretercustomizations
+
+#### 参数
+
+- **body**: [ResourceInterpreterCustomization](../config-resources/resource-interpreter-customization-v1alpha1#resourceinterpretercustomization),必选
+
+
+- **dryRun**(*查询参数*):string
+
+ [dryRun](../common-parameter/common-parameters#dryrun)
+
+- **fieldManager**(*查询参数*):string
+
+ [fieldManager](../common-parameter/common-parameters#fieldmanager)
+
+- **fieldValidation**(*查询参数*):string
+
+ [fieldValidation](../common-parameter/common-parameters#fieldvalidation)
+
+- **pretty**(*查询参数*):string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+#### 响应
+
+200 ([ResourceInterpreterCustomization](../config-resources/resource-interpreter-customization-v1alpha1#resourceinterpretercustomization)): OK
+
+201 ([ResourceInterpreterCustomization](../config-resources/resource-interpreter-customization-v1alpha1#resourceinterpretercustomization)): Created
+
+202 ([ResourceInterpreterCustomization](../config-resources/resource-interpreter-customization-v1alpha1#resourceinterpretercustomization)): Accepted
+
+### `update`:更新指定的 ResourceInterpreterCustomization
+
+#### HTTP 请求
+
+PUT /apis/config.karmada.io/v1alpha1/resourceinterpretercustomizations/{name}
+
+#### 参数
+
+- **name**(*路径参数*):string,必选
+
+ ResourceInterpreterCustomization 名称
+
+- **body**: [ResourceInterpreterCustomization](../config-resources/resource-interpreter-customization-v1alpha1#resourceinterpretercustomization),必选
+
+
+- **dryRun**(*查询参数*):string
+
+ [dryRun](../common-parameter/common-parameters#dryrun)
+
+- **fieldManager**(*查询参数*):string
+
+ [fieldManager](../common-parameter/common-parameters#fieldmanager)
+
+- **fieldValidation**(*查询参数*):string
+
+ [fieldValidation](../common-parameter/common-parameters#fieldvalidation)
+
+- **pretty**(*查询参数*):string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+#### 响应
+
+200 ([ResourceInterpreterCustomization](../config-resources/resource-interpreter-customization-v1alpha1#resourceinterpretercustomization)): OK
+
+201 ([ResourceInterpreterCustomization](../config-resources/resource-interpreter-customization-v1alpha1#resourceinterpretercustomization)): Created
+
+### `update`:更新指定 ResourceInterpreterCustomization 的状态
+
+#### HTTP 请求
+
+PUT /apis/config.karmada.io/v1alpha1/resourceinterpretercustomizations/{name}/status
+
+#### 参数
+
+- **name**(*路径参数*):string,必选
+
+ ResourceInterpreterCustomization 的名称
+
+- **body**: [ResourceInterpreterCustomization](../config-resources/resource-interpreter-customization-v1alpha1#resourceinterpretercustomization),必选
+
+
+- **dryRun**(*查询参数*):string
+
+ [dryRun](../common-parameter/common-parameters#dryrun)
+
+- **fieldManager**(*查询参数*):string
+
+ [fieldManager](../common-parameter/common-parameters#fieldmanager)
+
+- **fieldValidation**(*查询参数*):string
+
+ [fieldValidation](../common-parameter/common-parameters#fieldvalidation)
+
+- **pretty**(*查询参数*):string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+#### 响应
+
+200 ([ResourceInterpreterCustomization](../config-resources/resource-interpreter-customization-v1alpha1#resourceinterpretercustomization)): OK
+
+201 ([ResourceInterpreterCustomization](../config-resources/resource-interpreter-customization-v1alpha1#resourceinterpretercustomization)): Created
+
+### `patch`:更新指定 ResourceInterpreterCustomization 的部分信息
+
+#### HTTP 请求
+
+PATCH /apis/config.karmada.io/v1alpha1/resourceinterpretercustomizations/{name}
+
+#### 参数
+
+- **name**(*路径参数*):string,必选
+
+ ResourceInterpreterCustomization 的名称
+
+- **body**: [Patch](../common-definitions/patch#patch),必选
+
+
+- **dryRun**(*查询参数*):string
+
+ [dryRun](../common-parameter/common-parameters#dryrun)
+
+- **fieldManager**(*查询参数*):string
+
+ [fieldManager](../common-parameter/common-parameters#fieldmanager)
+
+- **fieldValidation**(*查询参数*):string
+
+ [fieldValidation](../common-parameter/common-parameters#fieldvalidation)
+
+- **force**(*查询参数*):boolean
+
+ [force](../common-parameter/common-parameters#force)
+
+- **pretty**(*查询参数*):string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+#### 响应
+
+200 ([ResourceInterpreterCustomization](../config-resources/resource-interpreter-customization-v1alpha1#resourceinterpretercustomization)): OK
+
+201 ([ResourceInterpreterCustomization](../config-resources/resource-interpreter-customization-v1alpha1#resourceinterpretercustomization)): Created
+
+### `patch`:更新指定 ResourceInterpreterCustomization 状态的部分信息
+
+#### HTTP 请求
+
+PATCH /apis/config.karmada.io/v1alpha1/resourceinterpretercustomizations/{name}/status
+
+#### 参数
+
+- **name**(*路径参数*):string,必选
+
+ ResourceInterpreterCustomization 的名称
+
+- **body**: [Patch](../common-definitions/patch#patch),必选
+
+
+- **dryRun**(*查询参数*):string
+
+ [dryRun](../common-parameter/common-parameters#dryrun)
+
+- **fieldManager**(*查询参数*):string
+
+ [fieldManager](../common-parameter/common-parameters#fieldmanager)
+
+- **fieldValidation**(*查询参数*):string
+
+ [fieldValidation](../common-parameter/common-parameters#fieldvalidation)
+
+- **force**(*查询参数*):boolean
+
+ [force](../common-parameter/common-parameters#force)
+
+- **pretty**(*查询参数*):string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+#### 响应
+
+200 ([ResourceInterpreterCustomization](../config-resources/resource-interpreter-customization-v1alpha1#resourceinterpretercustomization)): OK
+
+201 ([ResourceInterpreterCustomization](../config-resources/resource-interpreter-customization-v1alpha1#resourceinterpretercustomization)): Created
+
+### `delete`:删除一个 ResourceInterpreterCustomization
+
+#### HTTP 请求
+
+DELETE /apis/config.karmada.io/v1alpha1/resourceinterpretercustomizations/{name}
+
+#### 参数
+
+- **name**(*路径参数*):string,必选
+
+ ResourceInterpreterCustomization 名称
+
+- **body**: [DeleteOptions](../common-definitions/delete-options#deleteoptions)
+
+
+- **dryRun**(*查询参数*):string
+
+ [dryRun](../common-parameter/common-parameters#dryrun)
+
+- **gracePeriodSeconds**(*查询参数*):integer
+
+ [gracePeriodSeconds](../common-parameter/common-parameters#graceperiodseconds)
+
+- **pretty**(*查询参数*):string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+- **propagationPolicy**(*查询参数*):string
+
+ [propagationPolicy](../common-parameter/common-parameters#propagationpolicy)
+
+#### 响应
+
+200 ([Status](../common-definitions/status#status)): OK
+
+202 ([Status](../common-definitions/status#status)): Accepted
+
+### `deletecollection`:删除所有 ResourceInterpreterCustomization
+
+#### HTTP 请求
+
+DELETE /apis/config.karmada.io/v1alpha1/resourceinterpretercustomizations
+
+#### 参数
+
+- **body**: [DeleteOptions](../common-definitions/delete-options#deleteoptions)
+
+
+- **continue**(*查询参数*):string
+
+ [continue](../common-parameter/common-parameters#continue)
+
+- **dryRun**(*查询参数*):string
+
+ [dryRun](../common-parameter/common-parameters#dryrun)
+
+- **fieldSelector**(*查询参数*):string
+
+ [fieldSelector](../common-parameter/common-parameters#fieldselector)
+
+- **gracePeriodSeconds**(*查询参数*):integer
+
+ [gracePeriodSeconds](../common-parameter/common-parameters#graceperiodseconds)
+
+- **labelSelector**(*查询参数*):string
+
+ [labelSelector](../common-parameter/common-parameters#labelselector)
+
+- **limit**(*查询参数*):integer
+
+ [limit](../common-parameter/common-parameters#limit)
+
+- **pretty**(*查询参数*):string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+- **propagationPolicy**(*查询参数*):string
+
+ [propagationPolicy](../common-parameter/common-parameters#propagationpolicy)
+
+- **resourceVersion**(*查询参数*):string
+
+ [resourceVersion](../common-parameter/common-parameters#resourceversion)
+
+- **resourceVersionMatch**(*查询参数*):string
+
+ [resourceVersionMatch](../common-parameter/common-parameters#resourceversionmatch)
+
+- **sendInitialEvents**(*查询参数*):boolean
+
+ [sendInitialEvents](../common-parameter/common-parameters#sendinitialevents)
+
+- **timeoutSeconds**(*查询参数*):integer
+
+ [timeoutSeconds](../common-parameter/common-parameters#timeoutseconds)
+
+#### 响应
+
+200 ([Status](../common-definitions/status#status)): OK
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/karmada-api/config-resources/resource-interpreter-webhook-configuration-v1alpha1.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/karmada-api/config-resources/resource-interpreter-webhook-configuration-v1alpha1.md
new file mode 100644
index 000000000..79b87e572
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/karmada-api/config-resources/resource-interpreter-webhook-configuration-v1alpha1.md
@@ -0,0 +1,549 @@
+---
+api_metadata:
+ apiVersion: "config.karmada.io/v1alpha1"
+ import: "github.com/karmada-io/karmada/pkg/apis/config/v1alpha1"
+ kind: "ResourceInterpreterWebhookConfiguration"
+content_type: "api_reference"
+description: "ResourceInterpreterWebhookConfiguration describes the configuration of webhooks which take the responsibility to tell karmada the details of the resource object, especially for custom resources."
+title: "ResourceInterpreterWebhookConfiguration v1alpha1"
+weight: 2
+auto_generated: true
+---
+
+[//]: # (The file is auto-generated from the Go source code of the component using a generic generator,)
+[//]: # (which is forked from [reference-docs](https://github.com/kubernetes-sigs/reference-docs.)
+[//]: # (To update the reference content, please follow the `reference-api.sh`.)
+
+`apiVersion: config.karmada.io/v1alpha1`
+
+`import "github.com/karmada-io/karmada/pkg/apis/config/v1alpha1"`
+
+## ResourceInterpreterWebhookConfiguration
+
+ResourceInterpreterWebhookConfiguration 描述 webhook 的配置,这些 webhook 负责向 Karmada 说明资源对象的详情,特别是自定义资源的详情。
+
+
+
+- **apiVersion**: config.karmada.io/v1alpha1
+
+- **kind**: ResourceInterpreterWebhookConfiguration
+
+- **metadata** ([ObjectMeta](../common-definitions/object-meta#objectmeta))
+
+- **webhooks** ([]ResourceInterpreterWebhook),必选
+
+ Webhooks 罗列 webhook 及其所影响的资源和操作。
+
+
+
+ *ResourceInterpreterWebhook 描述 webhook 及其适用的资源和操作。*
+
+ - **webhooks.clientConfig** (WebhookClientConfig),必选
+
+ ClientConfig 定义与钩子通信的方式。
+
+
+
+ *WebhookClientConfig 包含与 webhook 建立 TLS 连接的信息。*
+
+ - **webhooks.clientConfig.caBundle** ([]byte)
+
+ `caBundle` 是一个 PEM 编码的 CA 包,用于验证 webhook 的服务器证书。如果未指定,则使用API服务器上的系统信任根。
+
+ - **webhooks.clientConfig.service** (ServiceReference)
+
+ `service` 是对此 webhook 的服务的引用。必须指定 `service` 或 `url`。
+
+ 如果 webhook 在集群中运行,应使用 `service` 字段。
+
+
+
+ *ServiceReference 包含对 Service.legacy.k8s.io 的引用。*
+
+ - **webhooks.clientConfig.service.name** (string),必选
+
+ `name` 是服务的名称。必选。
+
+ - **webhooks.clientConfig.service.namespace** (string),必选
+
+ `namespace` 是服务的命名空间。必选。
+
+ - **webhooks.clientConfig.service.path** (string)
+
+ `path` 是一个可选的 URL 路径,在发送给服务的所有请求中都会包含此路径。
+
+ - **webhooks.clientConfig.service.port** (int32)
+
+ 如果指定,则为托管 webhook 的服务的端口。默认为 443 以实现向后兼容。 `port` 应该是一个有效端口(端口范围 1-65535)。
+
+ - **webhooks.clientConfig.url** (string)
+
+ `url` 以标准 URL 形式(`scheme://host:port/path`)给出了 webhook 的位置。必须指定 `url` 或 `service`。
+
+ `host` 不能用来表示集群中运行的服务,应改用 `service` 字段。在某些 API 服务器上,可能会通过外部 DNS 解析 host 值。(例如,`kube-apiserver` 无法解析集群内的域名,因为这会违反分层原理)。`host` 也可以是 IP 地址。
+
+ 注意:使用 `localhost` 或 `127.0.0.1` 作为 `host` 是有风险的,在运行 API 服务器的所有主机上运行此 webhook 时必须非常小心, 这些 API 服务器可能需要调用此 webhook。此类部署可能是不可移植的,即不易在新集群中重复安装。
+
+ 该方案必须是 “https”;URL 必须以 “https://” 开头。
+
+ 路径是可选的,如果有路径,可以是 URL 中允许的任何字符串。可以使用路径将任意字符串传递给 webhook,例如集群标识符。
+
+ 不允许使用用户或基本身份验证,例如,不允许使用 “user:password@”。不允许使用片段(“#...”)和查询参数(“?...”)。
+
+ - **webhooks.interpreterContextVersions** ([]string),必选
+
+ InterpreterContextVersions 是 Webhook 期望的优选的 `ResourceInterpreterContext` 版本的有序列表。Karmada 将尝试使用列表中的第一个版本。如果 Karmada 不支持此列表中的版本,则此对象的验证将失败。如果持久化的 webhook 配置指定了支持的版本,且不包括 Karmada 已知的任何版本,则对 webhook 的调用将失败,并受失败策略的约束。
+
+ - **webhooks.name**(string),必选
+
+ Name 是 webhook 的全限定名。
+
+ - **webhooks.rules** ([]RuleWithOperations)
+
+      Rules 描述了 webhook 所关心的资源上的操作。只要某个操作匹配任意一条 Rule,webhook 就会关注该操作。
+
+
+
+ *RuleWithOperations 是操作和资源的元组。建议确保所有元组组合都是有效的。*
+
+ - **webhooks.rules.apiGroups** ([]string),必选
+
+ APIGroups 是资源所属的 API 组。'*' 表示所有组。如果存在 '*',则列表的长度必须为 1。例如:
+ ["apps", "batch", "example.io"]:匹配3个组。
+ ["*"]:匹配所有组。
+
+ 注意:组可留空,例如,对于 Kubernetes 的 core 组,使用 [""]。
+
+ - **webhooks.rules.apiVersions** ([]string),必选
+
+ APIVersions 是资源所属的API版本。'*' 表示所有版本。如果存在 '*',则列表的长度必须为 1。例如:
+ ["v1alpha1", "v1beta1"]:匹配2个版本。
+ ["*"]:匹配所有版本。
+
+ - **webhooks.rules.kinds** ([]string),必选
+
+ Kinds 是规则适用的资源列表。如果存在 '*',则列表的长度必须为 1。例如:
+ ["Deployment", "Pod"]:匹配 Deployment 和 Pod。
+ ["*"]:应用于所有资源。
+
+ - **webhooks.rules.operations** ([]string),必选
+
+ Operations 是钩子所关心的操作。如果存在 '*',则列表的长度必须为 1。
+
+ - **webhooks.timeoutSeconds** (int32)
+
+ TimeoutSeconds 指定此 webhook 的超时时间。超时后,webhook 的调用将被忽略或 API 调用将根据失败策略失败。取值必须在 1 到 30 秒之间,默认为 10 秒。
+
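+下面是一个仅作示意的配置清单,其中 webhook 名称、资源组 workload.example.io 以及服务地址均为假设值,实际取值取决于你部署的解释器 webhook:
+
+```yaml
+apiVersion: config.karmada.io/v1alpha1
+kind: ResourceInterpreterWebhookConfiguration
+metadata:
+  name: examples
+webhooks:
+  - name: workloads.example.com
+    rules:
+      - operations: ["InterpretReplica", "ReviseReplica", "Retain", "AggregateStatus"]
+        apiGroups: ["workload.example.io"]
+        apiVersions: ["v1alpha1"]
+        kinds: ["Workload"]
+    clientConfig:
+      url: https://karmada-interpreter-webhook-example.karmada-system.svc:443/interpreter-workload
+      caBundle: {{caBundle}}            # 替换为 base64 编码的 CA 证书(占位值)
+    interpreterContextVersions: ["v1alpha1"]
+    timeoutSeconds: 3
+```
+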
+## ResourceInterpreterWebhookConfigurationList
+
+ResourceInterpreterWebhookConfigurationList 包含 ResourceInterpreterWebhookConfiguration 的列表。
+
+
+
+- **apiVersion**: config.karmada.io/v1alpha1
+
+- **kind**: ResourceInterpreterWebhookConfigurationList
+
+- **metadata** ([ListMeta](../common-definitions/list-meta#listmeta))
+
+- **items** ([][ResourceInterpreterWebhookConfiguration](../config-resources/resource-interpreter-webhook-configuration-v1alpha1#resourceinterpreterwebhookconfiguration)),必选
+
+  Items 包含 ResourceInterpreterWebhookConfiguration 的列表。
+
+## 操作
+
+
+
+### `get`:查询指定的 ResourceInterpreterWebhookConfiguration
+
+#### HTTP 请求
+
+GET /apis/config.karmada.io/v1alpha1/resourceinterpreterwebhookconfigurations/{name}
+
+#### 参数
+
+- **name**(*路径参数*):string,必选
+
+ ResourceInterpreterWebhookConfiguration 名称
+
+- **pretty**(*查询参数*):string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+#### 响应
+
+200 ([ResourceInterpreterWebhookConfiguration](../config-resources/resource-interpreter-webhook-configuration-v1alpha1#resourceinterpreterwebhookconfiguration)): OK
+
+### `get`:查询指定 ResourceInterpreterWebhookConfiguration 的状态
+
+#### HTTP 请求
+
+GET /apis/config.karmada.io/v1alpha1/resourceinterpreterwebhookconfigurations/{name}/status
+
+#### 参数
+
+- **name**(*路径参数*):string,必选
+
+ ResourceInterpreterWebhookConfiguration 的名称
+
+- **pretty**(*查询参数*):string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+#### 响应
+
+200 ([ResourceInterpreterWebhookConfiguration](../config-resources/resource-interpreter-webhook-configuration-v1alpha1#resourceinterpreterwebhookconfiguration)): OK
+
+### `list`:查询所有 ResourceInterpreterWebhookConfiguration
+
+#### HTTP 请求
+
+GET /apis/config.karmada.io/v1alpha1/resourceinterpreterwebhookconfigurations
+
+#### 参数
+
+- **allowWatchBookmarks**(*查询参数*):boolean
+
+ [allowWatchBookmarks](../common-parameter/common-parameters#allowwatchbookmarks)
+
+- **continue**(*查询参数*):string
+
+ [continue](../common-parameter/common-parameters#continue)
+
+- **fieldSelector**(*查询参数*):string
+
+ [fieldSelector](../common-parameter/common-parameters#fieldselector)
+
+- **labelSelector**(*查询参数*):string
+
+ [labelSelector](../common-parameter/common-parameters#labelselector)
+
+- **limit**(*查询参数*):integer
+
+ [limit](../common-parameter/common-parameters#limit)
+
+- **pretty**(*查询参数*):string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+- **resourceVersion**(*查询参数*):string
+
+ [resourceVersion](../common-parameter/common-parameters#resourceversion)
+
+- **resourceVersionMatch**(*查询参数*):string
+
+ [resourceVersionMatch](../common-parameter/common-parameters#resourceversionmatch)
+
+- **sendInitialEvents**(*查询参数*):boolean
+
+ [sendInitialEvents](../common-parameter/common-parameters#sendinitialevents)
+
+- **timeoutSeconds**(*查询参数*):integer
+
+ [timeoutSeconds](../common-parameter/common-parameters#timeoutseconds)
+
+- **watch**(*查询参数*):boolean
+
+ [watch](../common-parameter/common-parameters#watch)
+
+#### 响应
+
+200 ([ResourceInterpreterWebhookConfigurationList](../config-resources/resource-interpreter-webhook-configuration-v1alpha1#resourceinterpreterwebhookconfigurationlist)): OK
+
+### `create`:创建一个 ResourceInterpreterWebhookConfiguration
+
+#### HTTP 请求
+
+POST /apis/config.karmada.io/v1alpha1/resourceinterpreterwebhookconfigurations
+
+#### 参数
+
+- **body**: [ResourceInterpreterWebhookConfiguration](../config-resources/resource-interpreter-webhook-configuration-v1alpha1#resourceinterpreterwebhookconfiguration),必选
+
+
+
+- **dryRun**(*查询参数*):string
+
+ [dryRun](../common-parameter/common-parameters#dryrun)
+
+- **fieldManager**(*查询参数*):string
+
+ [fieldManager](../common-parameter/common-parameters#fieldmanager)
+
+- **fieldValidation**(*查询参数*):string
+
+ [fieldValidation](../common-parameter/common-parameters#fieldvalidation)
+
+- **pretty**(*查询参数*):string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+#### 响应
+
+200 ([ResourceInterpreterWebhookConfiguration](../config-resources/resource-interpreter-webhook-configuration-v1alpha1#resourceinterpreterwebhookconfiguration)): OK
+
+201 ([ResourceInterpreterWebhookConfiguration](../config-resources/resource-interpreter-webhook-configuration-v1alpha1#resourceinterpreterwebhookconfiguration)): Created
+
+202 ([ResourceInterpreterWebhookConfiguration](../config-resources/resource-interpreter-webhook-configuration-v1alpha1#resourceinterpreterwebhookconfiguration)): Accepted
+
+### `update`:更新指定的 ResourceInterpreterWebhookConfiguration
+
+#### HTTP 请求
+
+PUT /apis/config.karmada.io/v1alpha1/resourceinterpreterwebhookconfigurations/{name}
+
+#### 参数
+
+- **name**(*路径参数*):string,必选
+
+ ResourceInterpreterWebhookConfiguration 的名称
+
+- **body**: [ResourceInterpreterWebhookConfiguration](../config-resources/resource-interpreter-webhook-configuration-v1alpha1#resourceinterpreterwebhookconfiguration),必选
+
+
+
+- **dryRun**(*查询参数*):string
+
+ [dryRun](../common-parameter/common-parameters#dryrun)
+
+- **fieldManager**(*查询参数*):string
+
+ [fieldManager](../common-parameter/common-parameters#fieldmanager)
+
+- **fieldValidation**(*查询参数*):string
+
+ [fieldValidation](../common-parameter/common-parameters#fieldvalidation)
+
+- **pretty**(*查询参数*):string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+#### 响应
+
+200 ([ResourceInterpreterWebhookConfiguration](../config-resources/resource-interpreter-webhook-configuration-v1alpha1#resourceinterpreterwebhookconfiguration)): OK
+
+201 ([ResourceInterpreterWebhookConfiguration](../config-resources/resource-interpreter-webhook-configuration-v1alpha1#resourceinterpreterwebhookconfiguration)): Created
+
+### `update`:更新指定 ResourceInterpreterWebhookConfiguration 的状态
+
+#### HTTP 请求
+
+PUT /apis/config.karmada.io/v1alpha1/resourceinterpreterwebhookconfigurations/{name}/status
+
+#### 参数
+
+- **name**(*路径参数*):string,必选
+
+ ResourceInterpreterWebhookConfiguration 的名称
+
+- **body**: [ResourceInterpreterWebhookConfiguration](../config-resources/resource-interpreter-webhook-configuration-v1alpha1#resourceinterpreterwebhookconfiguration),必选
+
+
+
+- **dryRun**(*查询参数*):string
+
+ [dryRun](../common-parameter/common-parameters#dryrun)
+
+- **fieldManager**(*查询参数*):string
+
+ [fieldManager](../common-parameter/common-parameters#fieldmanager)
+
+- **fieldValidation**(*查询参数*):string
+
+ [fieldValidation](../common-parameter/common-parameters#fieldvalidation)
+
+- **pretty**(*查询参数*):string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+#### 响应
+
+200 ([ResourceInterpreterWebhookConfiguration](../config-resources/resource-interpreter-webhook-configuration-v1alpha1#resourceinterpreterwebhookconfiguration)): OK
+
+201 ([ResourceInterpreterWebhookConfiguration](../config-resources/resource-interpreter-webhook-configuration-v1alpha1#resourceinterpreterwebhookconfiguration)): Created
+
+### `patch`:更新指定 ResourceInterpreterWebhookConfiguration 的部分信息
+
+#### HTTP 请求
+
+PATCH /apis/config.karmada.io/v1alpha1/resourceinterpreterwebhookconfigurations/{name}
+
+#### 参数
+
+- **name**(*路径参数*):string,必选
+
+ ResourceInterpreterWebhookConfiguration 的名称
+
+- **body**: [Patch](../common-definitions/patch#patch),必选
+
+
+
+- **dryRun**(*查询参数*):string
+
+ [dryRun](../common-parameter/common-parameters#dryrun)
+
+- **fieldManager**(*查询参数*):string
+
+ [fieldManager](../common-parameter/common-parameters#fieldmanager)
+
+- **fieldValidation**(*查询参数*):string
+
+ [fieldValidation](../common-parameter/common-parameters#fieldvalidation)
+
+- **force**(*查询参数*):boolean
+
+ [force](../common-parameter/common-parameters#force)
+
+- **pretty**(*查询参数*):string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+#### 响应
+
+200 ([ResourceInterpreterWebhookConfiguration](../config-resources/resource-interpreter-webhook-configuration-v1alpha1#resourceinterpreterwebhookconfiguration)): OK
+
+201 ([ResourceInterpreterWebhookConfiguration](../config-resources/resource-interpreter-webhook-configuration-v1alpha1#resourceinterpreterwebhookconfiguration)): Created
+
+### `patch`:更新指定 ResourceInterpreterWebhookConfiguration 状态的部分信息
+
+#### HTTP 请求
+
+PATCH /apis/config.karmada.io/v1alpha1/resourceinterpreterwebhookconfigurations/{name}/status
+
+#### 参数
+
+- **name**(*路径参数*):string,必选
+
+ ResourceInterpreterWebhookConfiguration 的名称
+
+- **body**: [Patch](../common-definitions/patch#patch),必选
+
+
+
+- **dryRun**(*查询参数*):string
+
+ [dryRun](../common-parameter/common-parameters#dryrun)
+
+- **fieldManager**(*查询参数*):string
+
+ [fieldManager](../common-parameter/common-parameters#fieldmanager)
+
+- **fieldValidation**(*查询参数*):string
+
+ [fieldValidation](../common-parameter/common-parameters#fieldvalidation)
+
+- **force**(*查询参数*):boolean
+
+ [force](../common-parameter/common-parameters#force)
+
+- **pretty**(*查询参数*):string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+#### 响应
+
+200 ([ResourceInterpreterWebhookConfiguration](../config-resources/resource-interpreter-webhook-configuration-v1alpha1#resourceinterpreterwebhookconfiguration)): OK
+
+201 ([ResourceInterpreterWebhookConfiguration](../config-resources/resource-interpreter-webhook-configuration-v1alpha1#resourceinterpreterwebhookconfiguration)): Created
+
+### `delete`:删除一个 ResourceInterpreterWebhookConfiguration
+
+#### HTTP 请求
+
+DELETE /apis/config.karmada.io/v1alpha1/resourceinterpreterwebhookconfigurations/{name}
+
+#### 参数
+
+- **name**(*路径参数*):string,必选
+
+ ResourceInterpreterWebhookConfiguration 名称
+
+- **body**: [DeleteOptions](../common-definitions/delete-options#deleteoptions)
+
+
+
+- **dryRun**(*查询参数*):string
+
+ [dryRun](../common-parameter/common-parameters#dryrun)
+
+- **gracePeriodSeconds**(*查询参数*):integer
+
+ [gracePeriodSeconds](../common-parameter/common-parameters#graceperiodseconds)
+
+- **pretty**(*查询参数*):string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+- **propagationPolicy**(*查询参数*):string
+
+ [propagationPolicy](../common-parameter/common-parameters#propagationpolicy)
+
+#### 响应
+
+200 ([Status](../common-definitions/status#status)): OK
+
+202 ([Status](../common-definitions/status#status)): Accepted
+
+### `deletecollection`:删除所有 ResourceInterpreterWebhookConfiguration
+
+#### HTTP 请求
+
+DELETE /apis/config.karmada.io/v1alpha1/resourceinterpreterwebhookconfigurations
+
+#### 参数
+
+- **body**: [DeleteOptions](../common-definitions/delete-options#deleteoptions)
+
+
+
+- **continue**(*查询参数*):string
+
+ [continue](../common-parameter/common-parameters#continue)
+
+- **dryRun**(*查询参数*):string
+
+ [dryRun](../common-parameter/common-parameters#dryrun)
+
+- **fieldSelector**(*查询参数*):string
+
+ [fieldSelector](../common-parameter/common-parameters#fieldselector)
+
+- **gracePeriodSeconds**(*查询参数*):integer
+
+ [gracePeriodSeconds](../common-parameter/common-parameters#graceperiodseconds)
+
+- **labelSelector**(*查询参数*):string
+
+ [labelSelector](../common-parameter/common-parameters#labelselector)
+
+- **limit**(*查询参数*):integer
+
+ [limit](../common-parameter/common-parameters#limit)
+
+- **pretty**(*查询参数*):string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+- **propagationPolicy**(*查询参数*):string
+
+ [propagationPolicy](../common-parameter/common-parameters#propagationpolicy)
+
+- **resourceVersion**(*查询参数*):string
+
+ [resourceVersion](../common-parameter/common-parameters#resourceversion)
+
+- **resourceVersionMatch**(*查询参数*):string
+
+ [resourceVersionMatch](../common-parameter/common-parameters#resourceversionmatch)
+
+- **sendInitialEvents**(*查询参数*):boolean
+
+ [sendInitialEvents](../common-parameter/common-parameters#sendinitialevents)
+
+- **timeoutSeconds**(*查询参数*):integer
+
+ [timeoutSeconds](../common-parameter/common-parameters#timeoutseconds)
+
+#### 响应
+
+200 ([Status](../common-definitions/status#status)): OK
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/karmada-api/networking-resources/multi-cluster-ingress-v1alpha1.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/karmada-api/networking-resources/multi-cluster-ingress-v1alpha1.md
new file mode 100644
index 000000000..ae804f0a6
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/karmada-api/networking-resources/multi-cluster-ingress-v1alpha1.md
@@ -0,0 +1,766 @@
+---
+api_metadata:
+ apiVersion: "networking.karmada.io/v1alpha1"
+ import: "github.com/karmada-io/karmada/pkg/apis/networking/v1alpha1"
+ kind: "MultiClusterIngress"
+content_type: "api_reference"
+description: "MultiClusterIngress is a collection of rules that allow inbound connections to reach the endpoints defined by a backend."
+title: "MultiClusterIngress v1alpha1"
+weight: 1
+auto_generated: true
+---
+
+[//]: # (The file is auto-generated from the Go source code of the component using a generic generator,)
+[//]: # (which is forked from [reference-docs](https://github.com/kubernetes-sigs/reference-docs.)
+[//]: # (To update the reference content, please follow the `reference-api.sh`.)
+
+`apiVersion: networking.karmada.io/v1alpha1`
+
+`import "github.com/karmada-io/karmada/pkg/apis/networking/v1alpha1"`
+
+## MultiClusterIngress
+
+MultiClusterIngress 是允许入站连接到达后端定义的端点的规则集合。MultiClusterIngress 的结构与 Ingress 相同,表示多集群中的 Ingress。
+
+
+
+- **apiVersion**: networking.karmada.io/v1alpha1
+
+- **kind**: MultiClusterIngress
+
+- **metadata** ([ObjectMeta](../common-definitions/object-meta#objectmeta))
+
+- **spec** (IngressSpec)
+
+ Spec 是 MultiClusterIngress 的期望状态。
+
+
+
+ *IngressSpec 描述用户希望存在的 Ingress。*
+
+ - **spec.defaultBackend** (IngressBackend)
+
+ defaultBackend 是负责处理与任何规则都不匹配的请求的后端。如果未指定 Rules,则必须指定 DefaultBackend。如果未设置 DefaultBackend,则与任何规则都不匹配的请求的处理将由 Ingress 控制器决定。
+
+
+
+ *IngressBackend 描述给定服务和端口的所有端点。*
+
+ - **spec.defaultBackend.resource** ([TypedLocalObjectReference](../common-definitions/typed-local-object-reference#typedlocalobjectreference))
+
+      Resource 是一个 ObjectRef 对象,指向与 Ingress 对象位于同一命名空间中的另一个 Kubernetes 资源。如果指定了 Resource,则不能指定 service.Name 和 service.Port。Resource 与 Service 互斥。
+
+ - **spec.defaultBackend.service** (IngressServiceBackend)
+
+ Service 引用一个服务作为后端,与 Resource 互斥。
+
+
+
+ *IngressServiceBackend 引用一个 Kubernetes Service 作为后端。*
+
+ - **spec.defaultBackend.service.name** (string),必选
+
+ Name 是引用的服务。该服务必须与 Ingress 对象在同一命名空间。
+
+ - **spec.defaultBackend.service.port** (ServiceBackendPort)
+
+ 所引用的服务的端口。IngressServiceBackend 需要端口名或端口号。
+
+
+
+ *ServiceBackendPort 是被引用的服务端口。*
+
+ - **spec.defaultBackend.service.port.name** (string)
+
+ Name 是服务上的端口名称,与 Number 互斥。
+
+ - **spec.defaultBackend.service.port.number** (int32)
+
+ Number 是服务上的数字形式端口号(例如 80),与 Name 互斥。
+
+ - **spec.ingressClassName** (string)
+
+ ingressClassName 是 IngressClass 集群资源的名称。Ingress 控制器实现使用此字段来了解它们是否应该通过传递连接(控制器 - IngressClass -> Ingress 资源)为该 Ingress 资源提供服务。尽管 `kubernetes.io/ingress.class` 注解(简单的常量名称)从未正式定义,但它被 Ingress 控制器广泛支持,以在 Ingress 控制器和 Ingress 资源之间创建直接绑定。新创建的 Ingress 资源应该优先选择使用该字段。但是,即使注解已被正式弃用,出于向后兼容的原因,Ingress 控制器仍应能够处理该注解(如果存在)。
+
+ - **spec.rules** ([]IngressRule)
+
+ *Atomic:将在合并期间被替换*
+
+ rules 是用于配置 Ingress 的主机规则列表。如果未指定或没有规则匹配,则所有流量都将发送到默认后端。
+
+
+
+ *IngressRule 表示将指定主机下的路径映射到相关后端服务的规则。传入请求首先评估主机匹配,然后路由到与匹配的 IngressRuleValue 关联的后端。*
+
+ - **spec.rules.host** (string)
+
+ host 是 RFC 3986 定义的网络主机的完全限定域名。请注意以下与 RFC 3986 中定义的 URI 的 host 部分的偏差:
+
+ - 不允许 IP。当前 IngressRuleValue 只能应用于父 Ingress Spec 中的 IP。
+
+ - 由于不允许使用端口,因此 `:` 分隔符会被忽略。当前 Ingress 的端口隐式为: :80 用于 http 和 :443 用于 https
+
+ 这两种情况在未来都可能发生变化。入站请求在通过 IngressRuleValue 处理之前会先进行主机匹配。如果主机未指定,Ingress 将根据指定的 IngressRuleValue 规则路由所有流量。
+
+ 主机可以是“精确”的,设置为一个不含终止句点的网络主机域名(例如, *foo.bar.com* ),也可以是一个“通配符”,设置为以单个通配符标签为前缀的域名(例如, *.foo.com* )。通配符 “*” 必须单独显示为第一个 DNS 标签,并且仅与单个标签匹配。不能单独使用通配符作为标签(例如,Host == "*")。请求将按以下方式与主机字段匹配:- 如果主机是精确匹配的,在 http host 头等于 Host 值的情况下,请求与此规则匹配。- 如果主机是用通配符给出的,在 http host 头与通配符规则的后缀(删除第一个标签)相同的情况下,请求与此规则匹配。
+
+ - **spec.rules.http** (HTTPIngressRuleValue)
+
+
+
+ *HTTPIngressRuleValue 是指向后端的 http 选择算符列表。例如 http://<host>/<path>?<searchpart> -> 后端,其中 url 的部分对应 RFC 3986,此资源将用于匹配最后一个 “/” 之后和第一个 “?” 之前的所有内容或 “#”。*
+
+ - **spec.rules.http.paths** ([]HTTPIngressPath),必选
+
+ *Atomic:将在合并期间被替换*
+
+ paths 是一个将请求映射到后端的路径集合。
+
+
+
+ *HTTPIngressPath 将路径与后端关联。与路径匹配的传入 URL 将转发到后端。*
+
+ - **spec.rules.http.paths.backend** (IngressBackend),必选
+
+ backend 定义将流量转发到的引用服务端点。
+
+
+
+ *IngressBackend 描述给定服务和端口的所有端点。*
+
+ - **spec.rules.http.paths.backend.resource** ([TypedLocalObjectReference](../common-definitions/typed-local-object-reference#typedlocalobjectreference))
+
+        resource 是一个 ObjectRef 对象,指向与 Ingress 对象位于同一命名空间中的另一个 Kubernetes 资源。如果指定了 resource,则不能指定 service.Name 和 service.Port。resource 与 Service 互斥。
+
+ - **spec.rules.http.paths.backend.service** (IngressServiceBackend)
+
+ service 引用一个服务作为后端,与 Resource互斥。
+
+
+
+ *IngressServiceBackend 引用一个 Kubernetes Service 作为后端。*
+
+ - **spec.rules.http.paths.backend.service.name** (string),必选
+
+ name 是引用的服务。该服务必须与 Ingress 对象在同一命名空间。
+
+ - **spec.rules.http.paths.backend.service.port** (ServiceBackendPort)
+
+ 所引用的服务的端口。IngressServiceBackend 需要端口名或端口号。
+
+
+
+ *ServiceBackendPort 是被引用的服务端口。*
+
+ - **spec.rules.http.paths.backend.service.port.name** (string)
+
+ name 是服务上的端口名称,与 Number 互斥。
+
+ - **spec.rules.http.paths.backend.service.port.number** (int32)
+
+ number 是服务上的数字形式端口号(例如 80),与 Name 互斥。
+
+ - **spec.rules.http.paths.pathType** (string),必选
+
+ pathType 决定如何解释路径匹配。取值包括: * Exact:与 URL 路径完全匹配。* Prefix:根据按 “/” 拆分的 URL 路径前缀进行匹配。匹配是按路径元素逐个元素完成。
+ 路径元素引用的是路径中由“/”分隔符拆分的标签列表。如果每个 p 都是请求路径 p 的元素前缀,则请求与路径 p 匹配。请注意,如果路径的最后一个元素是请求路径中的最后一个元素的子字符串,则匹配不成功(例如,/foo/bar 匹配 /foo/bar/baz,但不匹配 /foo/barbaz)。
+
+ * ImplementationSpecific:路径匹配的解释取决于 IngressClass。
+ 实现可以将其视为单独的路径类型,也可以将其视为前缀或确切的路径类型。
+
+ 实现需要支持所有路径类型。
+
+ 枚举值包括:
+ - `"Exact"`:与URL路径完全匹配,并区分大小写。
+ - `"ImplementationSpecific"`:匹配取决于 IngressClass。 实现可以将其视为单独的路径类型,也可以将其视为前缀或确切的路径类型。
+ - `"Prefix"`:根据按 “/” 拆分的 URL 路径前缀进行匹配。匹配区分大小写,是按路径元素逐个元素完成。路径元素引用的是路径中由“/”分隔符拆分的标签列表。如果每个 p 都是请求路径 p 的元素前缀,则请求与路径 p 匹配。请注意,如果路径的最后一个元素是请求路径中的最后一个元素的子字符串,则匹配不成功(例如,/foo/bar 匹配 /foo/bar/baz,但不匹配 /foo/barbaz)。如果 Ingress 规范中存在多个匹配路径,则匹配路径最长者优先。例如, - /foo/bar 不匹配 /foo/barbaz - /foo/bar 匹配 /foo/bar和/foo/bar/baz - /foo和/foo/均匹配 /foo 和 /foo/。如果仍然有两条同等的匹配路径,则匹配路径最长者(例如,/foo/)优先。
+
+ - **spec.rules.http.paths.path** (string)
+
+ path 要与传入请求的路径进行匹配。目前,它可以包含 RFC 3986 定义的 URL 的常规“路径”部分所不允许的字符。路径必须以 “/” 开头,并且在 pathType 值为 Exact 或 Prefix 时必须存在。
+
+- **spec.tls** ([]IngressTLS)
+
+ *Atomic:将在合并期间被替换*
+
+ tls 表示 TLS 配置。目前,Ingress 仅支持一个 TLS 端口 443。如果此列表的多个成员指定了不同的主机,在实现 Ingress 的 Ingress 控制器支持 SNI的情况下,它们将根据通过 SNI TLS 扩展指定的主机名在同一端口上多路复用。
+
+
+
+ *IngressTLS 描述与 Ingress 相关的传输层安全性。*
+
+ - **spec.tls.hosts** ([]string)
+
+ *Atomic:将在合并期间被替换*
+
+ hosts 是 TLS 证书中包含的主机列表。此列表中的值必须与 tlsSecret 中使用的名称匹配。如果未指定,默认为实现此 Ingress 的负载均衡控制器的通配符主机设置。
+
+ - **spec.tls.secretName** (string)
+
+ secretName 是用于终止端口 443 上 TLS 通信的 secret 的名称。字段是可选的,以允许仅基于 SNI 主机名的 TLS 路由。如果监听器中的 SNI 主机与 IngressRule 使用的 “Host” 头字段冲突,则 SNI 主机用于终止,Host 头的值用于路由。
+
+- **status** (IngressStatus)
+
+ Status 是 MultiClusterIngress 的当前状态。
+
+
+
+ *IngressStatus 描述 Ingress 的当前状态。*
+
+ - **status.loadBalancer** (IngressLoadBalancerStatus)
+
+ loadBalancer 包含负载均衡器的当前状态。
+
+
+
+ *IngressLoadBalancerStatus 表示负载均衡器的状态。*
+
+ - **status.loadBalancer.ingress** ([]IngressLoadBalancerIngress)
+
+ ingress 是一个包含负载均衡器入口点的列表。
+
+
+
+ *IngressLoadBalancerIngress 表示负载均衡器入口点的状态。*
+
+ - **status.loadBalancer.ingress.hostname** (string)
+
+ hostname 是为基于 DNS 的负载均衡器入口点所设置的主机名。
+
+ - **status.loadBalancer.ingress.ip** (string)
+
+ ip 是为基于 IP 的负载均衡器入口点设置的 IP。
+
+ - **status.loadBalancer.ingress.ports** ([]IngressPortStatus)
+
+ *Atomic:将在合并期间被替换*
+
+ ports 提供有关此 LoadBalancer 公开端口的信息。
+
+
+
+ *IngressPortStatus 表示服务端口的错误情况。*
+
+ - **status.loadBalancer.ingress.ports.port** (int32),必选
+
+      port 是入口端口的端口号。
+
+ - **status.loadBalancer.ingress.ports.protocol** (string),必选
+
+      protocol 是入口端口的协议。取值包括:TCP、UDP 和 SCTP。
+
+ 枚举值包括:
+      - `"SCTP"`:SCTP 协议。
+      - `"TCP"`:TCP 协议。
+      - `"UDP"`:UDP 协议。
+
+ - **status.loadBalancer.ingress.ports.error** (string)
+
+ error 用来记录服务端口的问题。错误的格式应符合以下规则:
+
+ - 应在此文件中指定内置错误码,并且错误码应使用驼峰法命名。
+
+ - 特定于云提供商的错误码名称必须符合 foo.example.com/CamelCase 格式。
+
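+以下是一个示意性的示例清单,演示上述 spec.rules 与 spec.tls 字段的组合方式。示例中的对象名、主机名、Service 名称和 Secret 名称均为假设值,仅供参考,并非自动生成内容:
+
+```yaml
+apiVersion: networking.karmada.io/v1alpha1
+kind: MultiClusterIngress
+metadata:
+  name: demo-ingress           # 假设的名称
+  namespace: default
+spec:
+  rules:
+    - host: demo.example.com
+      http:
+        paths:
+          - path: /web
+            pathType: Prefix   # 也可使用 Exact 或 ImplementationSpecific
+            backend:
+              service:
+                name: web      # 被引用的 Service 必须与本对象位于同一命名空间
+                port:
+                  number: 80   # port.number 与 port.name 互斥
+  tls:
+    - hosts:
+        - demo.example.com
+      secretName: demo-tls     # 用于在 443 端口终止 TLS 的 Secret
+```
+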
+## MultiClusterIngressList
+
+MultiClusterIngressList 是 MultiClusterIngress 的集合。
+
+
+
+- **apiVersion**: networking.karmada.io/v1alpha1
+
+- **kind**: MultiClusterIngressList
+
+- **metadata** ([ListMeta](../common-definitions/list-meta#listmeta))
+
+- **items** ([][MultiClusterIngress](../networking-resources/multi-cluster-ingress-v1alpha1#multiclusteringress)),必选
+
+ Items 是 MultiClusterIngress 的列表。
+
+## 操作
+
+
+
+### `get`:查询指定的 MultiClusterIngress
+
+#### HTTP 请求
+
+GET /apis/networking.karmada.io/v1alpha1/namespaces/{namespace}/multiclusteringresses/{name}
+
+#### 参数
+
+- **name**(*路径参数*):string,必选
+
+  MultiClusterIngress 的名称
+
+- **namespace**(*路径参数*):string,必选
+
+ [namespace](../common-parameter/common-parameters#namespace)
+
+- **pretty**(*查询参数*):string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+#### 响应
+
+200 ([MultiClusterIngress](../networking-resources/multi-cluster-ingress-v1alpha1#multiclusteringress)): OK
+
+### `get`:查询指定 MultiClusterIngress 的状态
+
+#### HTTP 请求
+
+GET /apis/networking.karmada.io/v1alpha1/namespaces/{namespace}/multiclusteringresses/{name}/status
+
+#### 参数
+
+- **name**(*路径参数*):string,必选
+
+ MultiClusterIngress 的名称
+
+- **namespace**(*路径参数*):string,必选
+
+ [namespace](../common-parameter/common-parameters#namespace)
+
+- **pretty**(*查询参数*):string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+#### 响应
+
+200 ([MultiClusterIngress](../networking-resources/multi-cluster-ingress-v1alpha1#multiclusteringress)): OK
+
+### `list`:查询指定命名空间内的所有 MultiClusterIngress
+
+#### HTTP 请求
+
+GET /apis/networking.karmada.io/v1alpha1/namespaces/{namespace}/multiclusteringresses
+
+#### 参数
+
+- **namespace**(*路径参数*):string,必选
+
+ [namespace](../common-parameter/common-parameters#namespace)
+
+- **allowWatchBookmarks**(*查询参数*):boolean
+
+ [allowWatchBookmarks](../common-parameter/common-parameters#allowwatchbookmarks)
+
+- **continue**(*查询参数*):string
+
+ [continue](../common-parameter/common-parameters#continue)
+
+- **fieldSelector**(*查询参数*):string
+
+ [fieldSelector](../common-parameter/common-parameters#fieldselector)
+
+- **labelSelector**(*查询参数*):string
+
+ [labelSelector](../common-parameter/common-parameters#labelselector)
+
+- **limit**(*查询参数*):integer
+
+ [limit](../common-parameter/common-parameters#limit)
+
+- **pretty**(*查询参数*):string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+- **resourceVersion**(*查询参数*):string
+
+ [resourceVersion](../common-parameter/common-parameters#resourceversion)
+
+- **resourceVersionMatch**(*查询参数*):string
+
+ [resourceVersionMatch](../common-parameter/common-parameters#resourceversionmatch)
+
+- **sendInitialEvents**(*查询参数*):boolean
+
+ [sendInitialEvents](../common-parameter/common-parameters#sendinitialevents)
+
+- **timeoutSeconds**(*查询参数*):integer
+
+ [timeoutSeconds](../common-parameter/common-parameters#timeoutseconds)
+
+- **watch**(*查询参数*):boolean
+
+ [watch](../common-parameter/common-parameters#watch)
+
+#### 响应
+
+200 ([MultiClusterIngressList](../networking-resources/multi-cluster-ingress-v1alpha1#multiclusteringresslist)): OK
+
+### `list`:查询所有 MultiClusterIngress
+
+#### HTTP 请求
+
+GET /apis/networking.karmada.io/v1alpha1/multiclusteringresses
+
+#### 参数
+
+- **allowWatchBookmarks**(*查询参数*):boolean
+
+ [allowWatchBookmarks](../common-parameter/common-parameters#allowwatchbookmarks)
+
+- **continue**(*查询参数*):string
+
+ [continue](../common-parameter/common-parameters#continue)
+
+- **fieldSelector**(*查询参数*):string
+
+ [fieldSelector](../common-parameter/common-parameters#fieldselector)
+
+- **labelSelector**(*查询参数*):string
+
+ [labelSelector](../common-parameter/common-parameters#labelselector)
+
+- **limit**(*查询参数*):integer
+
+ [limit](../common-parameter/common-parameters#limit)
+
+- **pretty**(*查询参数*):string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+- **resourceVersion**(*查询参数*):string
+
+ [resourceVersion](../common-parameter/common-parameters#resourceversion)
+
+- **resourceVersionMatch**(*查询参数*):string
+
+ [resourceVersionMatch](../common-parameter/common-parameters#resourceversionmatch)
+
+- **sendInitialEvents**(*查询参数*):boolean
+
+ [sendInitialEvents](../common-parameter/common-parameters#sendinitialevents)
+
+- **timeoutSeconds**(*查询参数*):integer
+
+ [timeoutSeconds](../common-parameter/common-parameters#timeoutseconds)
+
+- **watch**(*查询参数*):boolean
+
+ [watch](../common-parameter/common-parameters#watch)
+
+#### 响应
+
+200 ([MultiClusterIngressList](../networking-resources/multi-cluster-ingress-v1alpha1#multiclusteringresslist)): OK
+
+### `create`:创建一个 MultiClusterIngress
+
+#### HTTP 请求
+
+POST /apis/networking.karmada.io/v1alpha1/namespaces/{namespace}/multiclusteringresses
+
+#### 参数
+
+- **namespace**(*路径参数*):string,必选
+
+ [namespace](../common-parameter/common-parameters#namespace)
+
+- **body**: [MultiClusterIngress](../networking-resources/multi-cluster-ingress-v1alpha1#multiclusteringress),必选
+
+
+- **dryRun**(*查询参数*):string
+
+ [dryRun](../common-parameter/common-parameters#dryrun)
+
+- **fieldManager**(*查询参数*):string
+
+ [fieldManager](../common-parameter/common-parameters#fieldmanager)
+
+- **fieldValidation**(*查询参数*):string
+
+ [fieldValidation](../common-parameter/common-parameters#fieldvalidation)
+
+- **pretty**(*查询参数*):string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+#### 响应
+
+200 ([MultiClusterIngress](../networking-resources/multi-cluster-ingress-v1alpha1#multiclusteringress)): OK
+
+201 ([MultiClusterIngress](../networking-resources/multi-cluster-ingress-v1alpha1#multiclusteringress)): Created
+
+202 ([MultiClusterIngress](../networking-resources/multi-cluster-ingress-v1alpha1#multiclusteringress)): Accepted
+
+### `update`:更新指定的 MultiClusterIngress
+
+#### HTTP 请求
+
+PUT /apis/networking.karmada.io/v1alpha1/namespaces/{namespace}/multiclusteringresses/{name}
+
+#### 参数
+
+- **name**(*路径参数*):string,必选
+
+ MultiClusterIngress 的名称
+
+- **namespace**(*路径参数*):string,必选
+
+ [namespace](../common-parameter/common-parameters#namespace)
+
+- **body**: [MultiClusterIngress](../networking-resources/multi-cluster-ingress-v1alpha1#multiclusteringress),必选
+
+
+- **dryRun**(*查询参数*):string
+
+ [dryRun](../common-parameter/common-parameters#dryrun)
+
+- **fieldManager**(*查询参数*):string
+
+ [fieldManager](../common-parameter/common-parameters#fieldmanager)
+
+- **fieldValidation**(*查询参数*):string
+
+ [fieldValidation](../common-parameter/common-parameters#fieldvalidation)
+
+- **pretty**(*查询参数*):string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+#### 响应
+
+200 ([MultiClusterIngress](../networking-resources/multi-cluster-ingress-v1alpha1#multiclusteringress)): OK
+
+201 ([MultiClusterIngress](../networking-resources/multi-cluster-ingress-v1alpha1#multiclusteringress)): Created
+
+### `update`:更新指定 MultiClusterIngress 的状态
+
+#### HTTP 请求
+
+PUT /apis/networking.karmada.io/v1alpha1/namespaces/{namespace}/multiclusteringresses/{name}/status
+
+#### 参数
+
+- **name**(*路径参数*):string,必选
+
+ MultiClusterIngress 的名称
+
+- **namespace**(*路径参数*):string,必选
+
+ [namespace](../common-parameter/common-parameters#namespace)
+
+- **body**: [MultiClusterIngress](../networking-resources/multi-cluster-ingress-v1alpha1#multiclusteringress),必选
+
+
+- **dryRun**(*查询参数*):string
+
+ [dryRun](../common-parameter/common-parameters#dryrun)
+
+- **fieldManager**(*查询参数*):string
+
+ [fieldManager](../common-parameter/common-parameters#fieldmanager)
+
+- **fieldValidation**(*查询参数*):string
+
+ [fieldValidation](../common-parameter/common-parameters#fieldvalidation)
+
+- **pretty**(*查询参数*):string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+#### 响应
+
+200 ([MultiClusterIngress](../networking-resources/multi-cluster-ingress-v1alpha1#multiclusteringress)): OK
+
+201 ([MultiClusterIngress](../networking-resources/multi-cluster-ingress-v1alpha1#multiclusteringress)): Created
+
+### `patch`:更新指定 MultiClusterIngress 的部分信息
+
+#### HTTP 请求
+
+PATCH /apis/networking.karmada.io/v1alpha1/namespaces/{namespace}/multiclusteringresses/{name}
+
+#### 参数
+
+- **name**(*路径参数*):string,必选
+
+ MultiClusterIngress 的名称
+
+- **namespace**(*路径参数*):string,必选
+
+ [namespace](../common-parameter/common-parameters#namespace)
+
+- **body**: [Patch](../common-definitions/patch#patch),必选
+
+
+- **dryRun**(*查询参数*):string
+
+ [dryRun](../common-parameter/common-parameters#dryrun)
+
+- **fieldManager**(*查询参数*):string
+
+ [fieldManager](../common-parameter/common-parameters#fieldmanager)
+
+- **fieldValidation**(*查询参数*):string
+
+ [fieldValidation](../common-parameter/common-parameters#fieldvalidation)
+
+- **force**(*查询参数*):boolean
+
+ [force](../common-parameter/common-parameters#force)
+
+- **pretty**(*查询参数*):string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+#### 响应
+
+200 ([MultiClusterIngress](../networking-resources/multi-cluster-ingress-v1alpha1#multiclusteringress)): OK
+
+201 ([MultiClusterIngress](../networking-resources/multi-cluster-ingress-v1alpha1#multiclusteringress)): Created
+
+### `patch`:更新指定 MultiClusterIngress 状态的部分信息
+
+#### HTTP 请求
+
+PATCH /apis/networking.karmada.io/v1alpha1/namespaces/{namespace}/multiclusteringresses/{name}/status
+
+#### 参数
+
+- **name**(*路径参数*):string,必选
+
+ MultiClusterIngress 的名称
+
+- **namespace**(*路径参数*):string,必选
+
+ [namespace](../common-parameter/common-parameters#namespace)
+
+- **body**: [Patch](../common-definitions/patch#patch),必选
+
+
+- **dryRun**(*查询参数*):string
+
+ [dryRun](../common-parameter/common-parameters#dryrun)
+
+- **fieldManager**(*查询参数*):string
+
+ [fieldManager](../common-parameter/common-parameters#fieldmanager)
+
+- **fieldValidation**(*查询参数*):string
+
+ [fieldValidation](../common-parameter/common-parameters#fieldvalidation)
+
+- **force**(*查询参数*):boolean
+
+ [force](../common-parameter/common-parameters#force)
+
+- **pretty**(*查询参数*):string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+#### 响应
+
+200 ([MultiClusterIngress](../networking-resources/multi-cluster-ingress-v1alpha1#multiclusteringress)): OK
+
+201 ([MultiClusterIngress](../networking-resources/multi-cluster-ingress-v1alpha1#multiclusteringress)): Created
+
+### `delete`:删除一个 MultiClusterIngress
+
+#### HTTP 请求
+
+DELETE /apis/networking.karmada.io/v1alpha1/namespaces/{namespace}/multiclusteringresses/{name}
+
+#### 参数
+
+- **name**(*路径参数*):string,必选
+
+ MultiClusterIngress 的名称
+
+- **namespace**(*路径参数*):string,必选
+
+ [namespace](../common-parameter/common-parameters#namespace)
+
+- **body**: [DeleteOptions](../common-definitions/delete-options#deleteoptions)
+
+
+- **dryRun**(*查询参数*):string
+
+ [dryRun](../common-parameter/common-parameters#dryrun)
+
+- **gracePeriodSeconds**(*查询参数*):integer
+
+ [gracePeriodSeconds](../common-parameter/common-parameters#graceperiodseconds)
+
+- **pretty**(*查询参数*):string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+- **propagationPolicy**(*查询参数*):string
+
+ [propagationPolicy](../common-parameter/common-parameters#propagationpolicy)
+
+#### 响应
+
+200 ([Status](../common-definitions/status#status)): OK
+
+202 ([Status](../common-definitions/status#status)): Accepted
+
+### `deletecollection`:删除所有 MultiClusterIngress
+
+#### HTTP 请求
+
+DELETE /apis/networking.karmada.io/v1alpha1/namespaces/{namespace}/multiclusteringresses
+
+#### 参数
+
+- **namespace**(*路径参数*):string,必选
+
+ [namespace](../common-parameter/common-parameters#namespace)
+
+- **body**: [DeleteOptions](../common-definitions/delete-options#deleteoptions)
+
+
+- **continue**(*查询参数*):string
+
+ [continue](../common-parameter/common-parameters#continue)
+
+- **dryRun**(*查询参数*):string
+
+ [dryRun](../common-parameter/common-parameters#dryrun)
+
+- **fieldSelector**(*查询参数*):string
+
+ [fieldSelector](../common-parameter/common-parameters#fieldselector)
+
+- **gracePeriodSeconds**(*查询参数*):integer
+
+ [gracePeriodSeconds](../common-parameter/common-parameters#graceperiodseconds)
+
+- **labelSelector**(*查询参数*):string
+
+ [labelSelector](../common-parameter/common-parameters#labelselector)
+
+- **limit**(*查询参数*):integer
+
+ [limit](../common-parameter/common-parameters#limit)
+
+- **pretty**(*查询参数*):string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+- **propagationPolicy**(*查询参数*):string
+
+ [propagationPolicy](../common-parameter/common-parameters#propagationpolicy)
+
+- **resourceVersion**(*查询参数*):string
+
+ [resourceVersion](../common-parameter/common-parameters#resourceversion)
+
+- **resourceVersionMatch**(*查询参数*):string
+
+ [resourceVersionMatch](../common-parameter/common-parameters#resourceversionmatch)
+
+- **sendInitialEvents**(*查询参数*):boolean
+
+ [sendInitialEvents](../common-parameter/common-parameters#sendinitialevents)
+
+- **timeoutSeconds**(*查询参数*):integer
+
+ [timeoutSeconds](../common-parameter/common-parameters#timeoutseconds)
+
+#### 响应
+
+200 ([Status](../common-definitions/status#status)): OK
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/karmada-api/networking-resources/multi-cluster-service-v1alpha1.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/karmada-api/networking-resources/multi-cluster-service-v1alpha1.md
new file mode 100644
index 000000000..2e6099ff4
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/karmada-api/networking-resources/multi-cluster-service-v1alpha1.md
@@ -0,0 +1,687 @@
+---
+api_metadata:
+ apiVersion: "networking.karmada.io/v1alpha1"
+ import: "github.com/karmada-io/karmada/pkg/apis/networking/v1alpha1"
+ kind: "MultiClusterService"
+content_type: "api_reference"
+description: "MultiClusterService is a named abstraction of multi-cluster software service."
+title: "MultiClusterService v1alpha1"
+weight: 2
+auto_generated: true
+---
+
+[//]: # (The file is auto-generated from the Go source code of the component using a generic generator,)
+[//]: # (which is forked from [reference-docs](https://github.com/kubernetes-sigs/reference-docs).)
+[//]: # (To update the reference content, please follow the `reference-api.sh`.)
+
+`apiVersion: networking.karmada.io/v1alpha1`
+
+`import "github.com/karmada-io/karmada/pkg/apis/networking/v1alpha1"`
+
+## MultiClusterService
+
+MultiClusterService 是多集群软件服务的命名抽象。MultiClusterService 的 name 字段与 Service 的 name 字段相同。不同集群中同名的服务将被视为同一服务,会被关联到同一 MultiClusterService。MultiClusterService 可以控制服务向多集群外部暴露,还可以在集群之间启用服务发现。
+
+
+
+- **apiVersion**: networking.karmada.io/v1alpha1
+
+- **kind**: MultiClusterService
+
+- **metadata** ([ObjectMeta](../common-definitions/object-meta#objectmeta))
+
+- **spec** ([MultiClusterServiceSpec](../networking-resources/multi-cluster-service-v1alpha1#multiclusterservicespec)),必选
+
+ Spec 是 MultiClusterService 的期望状态。
+
+- **status** (ServiceStatus)
+
+ Status 是 MultiClusterService 的当前状态。
+
+
+
+ *ServiceStatus 表示服务的当前状态。*
+
+ - **status.conditions** ([]Condition)
+
+    *补丁策略:基于键 `type` 合并*
+
+    *Map:键 `type` 的唯一值将在合并期间被保留*
+
+ 服务的当前状态。
+
+
+
+ *Condition 包含此 API 资源当前状态某个方面的详细信息。*
+
+ - **status.conditions.lastTransitionTime** (Time),必选
+
+ lastTransitionTime 是状况最近一次从一种状态转换到另一种状态的时间。这种变化通常出现在下层状况发生变化的时候。如果无法了解下层状况变化,使用 API 字段更改的时间也是可以接受的。
+
+
+
+ *Time 是 time.Time 的包装器,它支持对 YAML 和 JSON 的正确编组。time 包的许多工厂方法提供了包装器。*
+
+ - **status.conditions.message** (string),必选
+
+ message 是有关转换的详细信息(人类可读消息)。可以是空字符串。
+
+ - **status.conditions.reason** (string),必选
+
+ reason 是一个程序标识符,表明状况最后一次转换的原因。特定状况类型的生产者可以定义该字段的预期值和含义,以及这些值是否可被视为有保证的 API。取值应该是一个 CamelCase 字符串。此字段不能为空。
+
+ - **status.conditions.status** (string),必选
+
+      status 表示状况的状态。取值为 True、False 或 Unknown。
+
+ - **status.conditions.type** (string),必选
+
+ type 表示状况的类型,采用 CamelCase 或 foo.example.com/CamelCase 形式。
+
+ - **status.conditions.observedGeneration** (int64)
+
+ observedGeneration 表示设置状况时所基于的 .metadata.generation。例如,如果 .metadata.generation 为 12,但 .status.conditions[x].observedGeneration 为 9,则状况相对于实例的当前状态已过期。
+
+ - **status.loadBalancer** (LoadBalancerStatus)
+
+ loadBalancer 包含负载均衡器的当前状态(如果存在)。
+
+
+
+ *LoadBalancerStatus 表示负载均衡器的状态。*
+
+ - **status.loadBalancer.ingress** ([]LoadBalancerIngress)
+
+ ingress 是一个包含负载均衡器入口点的列表。服务的流量需要被发送到这些入口点。
+
+
+
+      *LoadBalancerIngress 表示负载均衡器入口点的状态:发往服务的流量应被发送到这些入口点。*
+
+ - **status.loadBalancer.ingress.hostname** (string)
+
+ Hostname 为基于 DNS 的负载均衡器入口点(通常是 AWS 负载均衡器)所设置。
+
+ - **status.loadBalancer.ingress.ip** (string)
+
+ IP 为基于 IP 的负载均衡器入口点(通常是 GCE 或 OpenStack 负载均衡器)所设置。
+
+ - **status.loadBalancer.ingress.ports** ([]PortStatus)
+
+ *Atomic:将在合并期间被替换*
+
+ Ports 是服务的端口列表。如果设置了此字段,服务中定义的每个端口都应该在此列表中。
+
+
+
+        *PortStatus 表示服务端口的错误情况。*
+
+ - **status.loadBalancer.ingress.ports.port** (int32),必选
+
+ Port 是所记录的服务端口状态的端口号。
+
+ - **status.loadBalancer.ingress.ports.protocol** (string),必选
+
+ Protocol 是所记录的服务端口状态的协议。取值包括:TCP、UDP 和 SCTP。
+
+ 枚举值包括:
+          - `"SCTP"`:SCTP 协议。
+          - `"TCP"`:TCP 协议。
+          - `"UDP"`:UDP 协议。
+
+ - **status.loadBalancer.ingress.ports.error** (string)
+
+ error 用来记录服务端口的问题。错误的格式应符合以下规则:
+
+ - 应在此文件中指定内置错误码,并且错误码应使用驼峰法命名。
+
+ - 特定于云驱动的错误码名称必须符合 foo.example.com/CamelCase 格式。
+
+
+## MultiClusterServiceSpec
+
+MultiClusterServiceSpec 是 MultiClusterService 的期望状态。
+
+
+
+- **types** ([]string),必选
+
+  Types 指定如何公开此 MultiClusterService 所引用的服务。
+
+- **ports** ([]ExposurePort)
+
+  Ports 罗列了此 MultiClusterService 公开的端口。如果未指定任何端口,则在服务暴露和发现过程中不会过滤端口;默认情况下,所引用服务中的所有端口都将被公开。
+
+
+
+ *ExposurePort 描述了将暴露的端口。*
+
+ - **ports.port** (int32),必选
+
+ Port 表示暴露的服务端口。
+
+ - **ports.name** (string)
+
+ Name 是需要在服务中公开的端口的名称。端口名称必须与服务中定义的端口名称一致。
+
+- **range** (ExposureRange)
+
+  Range 指定所引用服务应公开的集群范围。该字段为可选,且仅在 Types 包含 CrossCluster 时有效。如果未设置且 Types 包含 CrossCluster,将选择所有集群,这意味着所引用的服务将在所有已注册的集群上公开。(字段的组合方式可参见字段说明末尾的示例清单。)
+
+
+
+  *ExposureRange 罗列了暴露服务的集群。当前支持按名称选择集群,并为扩展更多选择方式(如使用标签选择器)预留了空间。*
+
+ - **range.clusterNames** ([]string)
+
+ ClusterNames 罗列了待选择的集群。
+
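+以下是一个示意性的示例清单,演示 types、ports 与 range 字段的组合方式。示例中的服务名、端口和成员集群名(member1、member2)均为假设值,仅供参考:
+
+```yaml
+apiVersion: networking.karmada.io/v1alpha1
+kind: MultiClusterService
+metadata:
+  name: web                # 必须与被引用 Service 的名称一致
+  namespace: default
+spec:
+  types:
+    - CrossCluster         # 在集群之间启用服务发现
+  ports:
+    - name: http           # 端口名必须与 Service 中定义的端口名一致
+      port: 80
+  range:
+    clusterNames:          # 仅在 types 包含 CrossCluster 时有效;不设置则选择所有集群
+      - member1
+      - member2
+```
+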
+## MultiClusterServiceList
+
+MultiClusterServiceList 是 MultiClusterService 的集合。
+
+
+
+- **apiVersion**: networking.karmada.io/v1alpha1
+
+- **kind**: MultiClusterServiceList
+
+- **metadata** ([ListMeta](../common-definitions/list-meta#listmeta))
+
+- **items** ([][MultiClusterService](../networking-resources/multi-cluster-service-v1alpha1#multiclusterservice)),必选
+
+ Items 是 MultiClusterService 的列表。
+
+## 操作
+
+
+
+### `get`:查询指定的 MultiClusterService
+
+#### HTTP 请求
+
+GET /apis/networking.karmada.io/v1alpha1/namespaces/{namespace}/multiclusterservices/{name}
+
+#### 参数
+
+- **name**(*路径参数*):string,必选
+
+ MultiClusterService 的名称
+
+- **namespace**(*路径参数*):string,必选
+
+ [namespace](../common-parameter/common-parameters#namespace)
+
+- **pretty**(*查询参数*):string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+#### 响应
+
+200 ([MultiClusterService](../networking-resources/multi-cluster-service-v1alpha1#multiclusterservice)): OK
+
+### `get`:查询指定 MultiClusterService 的状态
+
+#### HTTP 请求
+
+GET /apis/networking.karmada.io/v1alpha1/namespaces/{namespace}/multiclusterservices/{name}/status
+
+#### 参数
+
+- **name**(*路径参数*):string,必选
+
+ MultiClusterService 的名称
+
+- **namespace**(*路径参数*):string,必选
+
+ [namespace](../common-parameter/common-parameters#namespace)
+
+- **pretty**(*查询参数*):string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+#### 响应
+
+200 ([MultiClusterService](../networking-resources/multi-cluster-service-v1alpha1#multiclusterservice)): OK
+
+### `list`:查询指定命名空间内的所有 MultiClusterService
+
+#### HTTP 请求
+
+GET /apis/networking.karmada.io/v1alpha1/namespaces/{namespace}/multiclusterservices
+
+#### 参数
+
+- **namespace**(*路径参数*):string,必选
+
+ [namespace](../common-parameter/common-parameters#namespace)
+
+- **allowWatchBookmarks**(*查询参数*):boolean
+
+ [allowWatchBookmarks](../common-parameter/common-parameters#allowwatchbookmarks)
+
+- **continue**(*查询参数*):string
+
+ [continue](../common-parameter/common-parameters#continue)
+
+- **fieldSelector**(*查询参数*):string
+
+ [fieldSelector](../common-parameter/common-parameters#fieldselector)
+
+- **labelSelector**(*查询参数*):string
+
+ [labelSelector](../common-parameter/common-parameters#labelselector)
+
+- **limit**(*查询参数*):integer
+
+ [limit](../common-parameter/common-parameters#limit)
+
+- **pretty**(*查询参数*):string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+- **resourceVersion**(*查询参数*):string
+
+ [resourceVersion](../common-parameter/common-parameters#resourceversion)
+
+- **resourceVersionMatch**(*查询参数*):string
+
+ [resourceVersionMatch](../common-parameter/common-parameters#resourceversionmatch)
+
+- **sendInitialEvents**(*查询参数*):boolean
+
+ [sendInitialEvents](../common-parameter/common-parameters#sendinitialevents)
+
+- **timeoutSeconds**(*查询参数*):integer
+
+ [timeoutSeconds](../common-parameter/common-parameters#timeoutseconds)
+
+- **watch**(*查询参数*):boolean
+
+ [watch](../common-parameter/common-parameters#watch)
+
+#### 响应
+
+200 ([MultiClusterServiceList](../networking-resources/multi-cluster-service-v1alpha1#multiclusterservicelist)): OK
+
+### `list`:查询所有 MultiClusterService
+
+#### HTTP 请求
+
+GET /apis/networking.karmada.io/v1alpha1/multiclusterservices
+
+#### 参数
+
+- **allowWatchBookmarks**(*查询参数*):boolean
+
+ [allowWatchBookmarks](../common-parameter/common-parameters#allowwatchbookmarks)
+
+- **continue**(*查询参数*):string
+
+ [continue](../common-parameter/common-parameters#continue)
+
+- **fieldSelector**(*查询参数*):string
+
+ [fieldSelector](../common-parameter/common-parameters#fieldselector)
+
+- **labelSelector**(*查询参数*):string
+
+ [labelSelector](../common-parameter/common-parameters#labelselector)
+
+- **limit**(*查询参数*):integer
+
+ [limit](../common-parameter/common-parameters#limit)
+
+- **pretty**(*查询参数*):string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+- **resourceVersion**(*查询参数*):string
+
+ [resourceVersion](../common-parameter/common-parameters#resourceversion)
+
+- **resourceVersionMatch**(*查询参数*):string
+
+ [resourceVersionMatch](../common-parameter/common-parameters#resourceversionmatch)
+
+- **sendInitialEvents**(*查询参数*):boolean
+
+ [sendInitialEvents](../common-parameter/common-parameters#sendinitialevents)
+
+- **timeoutSeconds**(*查询参数*):integer
+
+ [timeoutSeconds](../common-parameter/common-parameters#timeoutseconds)
+
+- **watch**(*查询参数*):boolean
+
+ [watch](../common-parameter/common-parameters#watch)
+
+#### 响应
+
+200 ([MultiClusterServiceList](../networking-resources/multi-cluster-service-v1alpha1#multiclusterservicelist)): OK
+
+### `create`:创建一个 MultiClusterService
+
+#### HTTP 请求
+
+POST /apis/networking.karmada.io/v1alpha1/namespaces/{namespace}/multiclusterservices
+
+#### 参数
+
+- **namespace**(*路径参数*):string,必选
+
+ [namespace](../common-parameter/common-parameters#namespace)
+
+- **body**: [MultiClusterService](../networking-resources/multi-cluster-service-v1alpha1#multiclusterservice),必选
+
+
+
+- **dryRun**(*查询参数*):string
+
+ [dryRun](../common-parameter/common-parameters#dryrun)
+
+- **fieldManager**(*查询参数*):string
+
+ [fieldManager](../common-parameter/common-parameters#fieldmanager)
+
+- **fieldValidation**(*查询参数*):string
+
+ [fieldValidation](../common-parameter/common-parameters#fieldvalidation)
+
+- **pretty**(*查询参数*):string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+#### 响应
+
+200 ([MultiClusterService](../networking-resources/multi-cluster-service-v1alpha1#multiclusterservice)): OK
+
+201 ([MultiClusterService](../networking-resources/multi-cluster-service-v1alpha1#multiclusterservice)): Created
+
+202 ([MultiClusterService](../networking-resources/multi-cluster-service-v1alpha1#multiclusterservice)): Accepted
+
+### `update`:更新指定的 MultiClusterService
+
+#### HTTP 请求
+
+PUT /apis/networking.karmada.io/v1alpha1/namespaces/{namespace}/multiclusterservices/{name}
+
+#### 参数
+
+- **name**(*路径参数*):string,必选
+
+ MultiClusterService 的名称
+
+- **namespace**(*路径参数*):string,必选
+
+ [namespace](../common-parameter/common-parameters#namespace)
+
+- **body**: [MultiClusterService](../networking-resources/multi-cluster-service-v1alpha1#multiclusterservice),必选
+
+
+
+- **dryRun**(*查询参数*):string
+
+ [dryRun](../common-parameter/common-parameters#dryrun)
+
+- **fieldManager**(*查询参数*):string
+
+ [fieldManager](../common-parameter/common-parameters#fieldmanager)
+
+- **fieldValidation**(*查询参数*):string
+
+ [fieldValidation](../common-parameter/common-parameters#fieldvalidation)
+
+- **pretty**(*查询参数*):string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+#### 响应
+
+200 ([MultiClusterService](../networking-resources/multi-cluster-service-v1alpha1#multiclusterservice)): OK
+
+201 ([MultiClusterService](../networking-resources/multi-cluster-service-v1alpha1#multiclusterservice)): Created
+
+### `update`:更新指定 MultiClusterService 的状态
+
+#### HTTP 请求
+
+PUT /apis/networking.karmada.io/v1alpha1/namespaces/{namespace}/multiclusterservices/{name}/status
+
+#### 参数
+
+- **name**(*路径参数*):string,必选
+
+ MultiClusterService 的名称
+
+- **namespace**(*路径参数*):string,必选
+
+ [namespace](../common-parameter/common-parameters#namespace)
+
+- **body**: [MultiClusterService](../networking-resources/multi-cluster-service-v1alpha1#multiclusterservice),必选
+
+
+
+- **dryRun**(*查询参数*):string
+
+ [dryRun](../common-parameter/common-parameters#dryrun)
+
+- **fieldManager**(*查询参数*):string
+
+ [fieldManager](../common-parameter/common-parameters#fieldmanager)
+
+- **fieldValidation**(*查询参数*):string
+
+ [fieldValidation](../common-parameter/common-parameters#fieldvalidation)
+
+- **pretty**(*查询参数*):string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+#### 响应
+
+200 ([MultiClusterService](../networking-resources/multi-cluster-service-v1alpha1#multiclusterservice)): OK
+
+201 ([MultiClusterService](../networking-resources/multi-cluster-service-v1alpha1#multiclusterservice)): Created
+
+### `patch`:更新指定 MultiClusterService 的部分信息
+
+#### HTTP 请求
+
+PATCH /apis/networking.karmada.io/v1alpha1/namespaces/{namespace}/multiclusterservices/{name}
+
+#### 参数
+
+- **name**(*路径参数*):string,必选
+
+ MultiClusterService 的名称
+
+- **namespace**(*路径参数*):string,必选
+
+ [namespace](../common-parameter/common-parameters#namespace)
+
+- **body**: [Patch](../common-definitions/patch#patch),必选
+
+
+
+- **dryRun**(*查询参数*):string
+
+ [dryRun](../common-parameter/common-parameters#dryrun)
+
+- **fieldManager**(*查询参数*):string
+
+ [fieldManager](../common-parameter/common-parameters#fieldmanager)
+
+- **fieldValidation**(*查询参数*):string
+
+ [fieldValidation](../common-parameter/common-parameters#fieldvalidation)
+
+- **force**(*查询参数*):boolean
+
+ [force](../common-parameter/common-parameters#force)
+
+- **pretty**(*查询参数*):string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+#### 响应
+
+200 ([MultiClusterService](../networking-resources/multi-cluster-service-v1alpha1#multiclusterservice)): OK
+
+201 ([MultiClusterService](../networking-resources/multi-cluster-service-v1alpha1#multiclusterservice)): Created
+
+### `patch`:更新指定 MultiClusterService 状态的部分信息
+
+#### HTTP 请求
+
+PATCH /apis/networking.karmada.io/v1alpha1/namespaces/{namespace}/multiclusterservices/{name}/status
+
+#### 参数
+
+- **name**(*路径参数*):string,必选
+
+ MultiClusterService 的名称
+
+- **namespace**(*路径参数*):string,必选
+
+ [namespace](../common-parameter/common-parameters#namespace)
+
+- **body**: [Patch](../common-definitions/patch#patch),必选
+
+
+
+- **dryRun**(*查询参数*):string
+
+ [dryRun](../common-parameter/common-parameters#dryrun)
+
+- **fieldManager**(*查询参数*):string
+
+ [fieldManager](../common-parameter/common-parameters#fieldmanager)
+
+- **fieldValidation**(*查询参数*):string
+
+ [fieldValidation](../common-parameter/common-parameters#fieldvalidation)
+
+- **force**(*查询参数*):boolean
+
+ [force](../common-parameter/common-parameters#force)
+
+- **pretty**(*查询参数*):string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+#### 响应
+
+200 ([MultiClusterService](../networking-resources/multi-cluster-service-v1alpha1#multiclusterservice)): OK
+
+201 ([MultiClusterService](../networking-resources/multi-cluster-service-v1alpha1#multiclusterservice)): Created
+
+### `delete`:删除一个 MultiClusterService
+
+#### HTTP 请求
+
+DELETE /apis/networking.karmada.io/v1alpha1/namespaces/{namespace}/multiclusterservices/{name}
+
+#### 参数
+
+- **name**(*路径参数*):string,必选
+
+ MultiClusterService 的名称
+
+- **namespace**(*路径参数*):string,必选
+
+ [namespace](../common-parameter/common-parameters#namespace)
+
+- **body**: [DeleteOptions](../common-definitions/delete-options#deleteoptions)
+
+
+
+- **dryRun**(*查询参数*):string
+
+ [dryRun](../common-parameter/common-parameters#dryrun)
+
+- **gracePeriodSeconds**(*查询参数*):integer
+
+ [gracePeriodSeconds](../common-parameter/common-parameters#graceperiodseconds)
+
+- **pretty**(*查询参数*):string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+- **propagationPolicy**(*查询参数*):string
+
+ [propagationPolicy](../common-parameter/common-parameters#propagationpolicy)
+
+#### 响应
+
+200 ([Status](../common-definitions/status#status)): OK
+
+202 ([Status](../common-definitions/status#status)): Accepted
+
+### `deletecollection`:删除所有 MultiClusterService
+
+#### HTTP 请求
+
+DELETE /apis/networking.karmada.io/v1alpha1/namespaces/{namespace}/multiclusterservices
+
+#### 参数
+
+- **namespace**(*路径参数*):string,必选
+
+ [namespace](../common-parameter/common-parameters#namespace)
+
+- **body**: [DeleteOptions](../common-definitions/delete-options#deleteoptions)
+
+
+
+- **continue**(*查询参数*):string
+
+ [continue](../common-parameter/common-parameters#continue)
+
+- **dryRun**(*查询参数*):string
+
+ [dryRun](../common-parameter/common-parameters#dryrun)
+
+- **fieldSelector**(*查询参数*):string
+
+ [fieldSelector](../common-parameter/common-parameters#fieldselector)
+
+- **gracePeriodSeconds**(*查询参数*):integer
+
+ [gracePeriodSeconds](../common-parameter/common-parameters#graceperiodseconds)
+
+- **labelSelector**(*查询参数*):string
+
+ [labelSelector](../common-parameter/common-parameters#labelselector)
+
+- **limit**(*查询参数*):integer
+
+ [limit](../common-parameter/common-parameters#limit)
+
+- **pretty**(*查询参数*):string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+- **propagationPolicy**(*查询参数*):string
+
+ [propagationPolicy](../common-parameter/common-parameters#propagationpolicy)
+
+- **resourceVersion**(*查询参数*):string
+
+ [resourceVersion](../common-parameter/common-parameters#resourceversion)
+
+- **resourceVersionMatch**(*查询参数*):string
+
+ [resourceVersionMatch](../common-parameter/common-parameters#resourceversionmatch)
+
+- **sendInitialEvents**(*查询参数*):boolean
+
+ [sendInitialEvents](../common-parameter/common-parameters#sendinitialevents)
+
+- **timeoutSeconds**(*查询参数*):integer
+
+ [timeoutSeconds](../common-parameter/common-parameters#timeoutseconds)
+
+#### 响应
+
+200 ([Status](../common-definitions/status#status)): OK
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/karmada-api/policy-resources/cluster-override-policy-v1alpha1.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/karmada-api/policy-resources/cluster-override-policy-v1alpha1.md
new file mode 100644
index 000000000..0d7635b2e
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/karmada-api/policy-resources/cluster-override-policy-v1alpha1.md
@@ -0,0 +1,850 @@
+---
+api_metadata:
+ apiVersion: "policy.karmada.io/v1alpha1"
+ import: "github.com/karmada-io/karmada/pkg/apis/policy/v1alpha1"
+ kind: "ClusterOverridePolicy"
+content_type: "api_reference"
+description: "ClusterOverridePolicy represents the cluster-wide policy that overrides a group of resources to one or more clusters."
+title: "ClusterOverridePolicy v1alpha1"
+weight: 3
+auto_generated: true
+---
+
+[//]: # (The file is auto-generated from the Go source code of the component using a generic generator,)
+[//]: # (which is forked from [reference-docs](https://github.com/kubernetes-sigs/reference-docs).)
+[//]: # (To update the reference content, please follow the `reference-api.sh`.)
+
+`apiVersion: policy.karmada.io/v1alpha1`
+
+`import "github.com/karmada-io/karmada/pkg/apis/policy/v1alpha1"`
+
+## ClusterOverridePolicy
+
+ClusterOverridePolicy 表示将一组资源覆盖到一个或多个集群的集群范围策略。
+
+
+
+- **apiVersion**:policy.karmada.io/v1alpha1
+
+- **kind**:ClusterOverridePolicy
+
+- **metadata** ([ObjectMeta](../common-definitions/object-meta#objectmeta))
+
+- **spec** (OverrideSpec),必选
+
+ Spec 表示 ClusterOverridePolicy 的规范。
+
+
+
+ *OverrideSpec 定义了 OverridePolicy 的规范。*
+
+ - **spec.overrideRules** ([]RuleWithCluster)
+
+ OverrideRules 定义了一组针对目标集群的覆盖规则。
+
+
+
+ *RuleWithCluster 定义集群的覆盖规则。*
+
+  - **spec.overrideRules.overriders** (Overriders),必选
+
+    Overriders 表示将应用于资源的覆盖规则。(覆盖规则的组合方式可参见字段说明末尾的示例清单。)
+
+
+
+ *Overriders 提供各种表示覆盖规则的替代方案。
+
+ 如果多个替代方案并存,将按以下顺序应用:ImageOverrider > CommandOverrider > ArgsOverrider > LabelsOverrider > AnnotationsOverrider > Plaintext*
+
+ - **spec.overrideRules.overriders.annotationsOverrider** ([]LabelAnnotationOverrider)
+
+ AnnotationsOverrider 表示用于处理工作负载注解的专属规则。
+
+
+
+ *LabelAnnotationOverrider 表示用于处理工作负载标签/注解的专属规则。*
+
+ - **spec.overrideRules.overriders.annotationsOverrider.operator** (string),必选
+
+ Operator 表示将应用于工作负载的运算符。
+
+ - **spec.overrideRules.overriders.annotationsOverrider.value** (map[string]string)
+
+      Value 表示工作负载 annotation/label 的值。当运算符为“add(添加)”时,Value 中的项会附加在 annotation/label 之后。当运算符为“remove(移除)”时,Value 中与 annotation/label 匹配的项将被删除。当运算符为“replace(替换)”时,Value 中与 annotation/label 匹配的项将被替换。
+
+ - **spec.overrideRules.overriders.argsOverrider** ([]CommandArgsOverrider)
+
+ ArgsOverrider 表示用于处理容器 args 的专属规则。
+
+
+
+ *CommandArgsOverrider 表示用于处理 command/args 覆盖的专属规则。*
+
+ - **spec.overrideRules.overriders.argsOverrider.containerName** (string),必选
+
+ 容器的名称。
+
+ - **spec.overrideRules.overriders.argsOverrider.operator** (string),必选
+
+ Operator 表示将应用于 command/args 的运算符。
+
+ - **spec.overrideRules.overriders.argsOverrider.value** ([]string)
+
+      Value 表示 command/args 的值。当运算符为“add(添加)”时,Value 中的项会附加在 command/args 之后。当运算符为“remove(移除)”时,Value 中与 command/args 匹配的项将被删除。如果 Value 为空,command/args 将保持不变。
+
+ - **spec.overrideRules.overriders.commandOverrider** ([]CommandArgsOverrider)
+
+ CommandOverrider 表示用于处理容器 command 的专属规则。
+
+
+
+ *CommandArgsOverrider 表示用于处理 command/args 覆盖的专属规则。*
+
+ - **spec.overrideRules.overriders.commandOverrider.containerName** (string),必选
+
+ 容器的名称。
+
+ - **spec.overrideRules.overriders.commandOverrider.operator** (string),必选
+
+ Operator 表示将应用于 command/args 的运算符。
+
+ - **spec.overrideRules.overriders.commandOverrider.value** ([]string)
+
+      Value 表示 command/args 的值。当运算符为“add(添加)”时,Value 中的项会附加在 command/args 之后。当运算符为“remove(移除)”时,Value 中与 command/args 匹配的项将被删除。如果 Value 为空,command/args 将保持不变。
+
+ - **spec.overrideRules.overriders.imageOverrider** ([]ImageOverrider)
+
+ ImageOverrider 表示用于处理镜像覆盖的专属规则。
+
+
+
+ *ImageOverrider 表示用于处理镜像覆盖的专属规则。*
+
+ - **spec.overrideRules.overriders.imageOverrider.component** (string),必选
+
+      Component 是镜像名称的一部分。镜像名称通常表示为 [registry/]repository[:tag]。registry 可以是 registry.k8s.io、fictional.registry.example:10443;repository 可以是 kube-apiserver、fictional/nginx;tag 可以是 latest、v1.19.1、@sha256:dbcc1c35ac38df41fd2f5e4130b32ffdb93ebae8b3dbe638c23575912276fc9c
+
+ - **spec.overrideRules.overriders.imageOverrider.operator** (string),必选
+
+ Operator 表示将应用于镜像的运算符。
+
+ - **spec.overrideRules.overriders.imageOverrider.predicate** (ImagePredicate)
+
+ 在应用规则之前,Predicate 会对镜像进行过滤。
+
+ 默认值为 nil。如果设置为默认值,并且资源类型为 Pod、ReplicaSet、Deployment、StatefulSet、DaemonSet 或 Job,系统将按照以下规则自动检测镜像字段:
+ - Pod: /spec/containers/<N>/image
+ - ReplicaSet: /spec/template/spec/containers/<N>/image
+ - Deployment: /spec/template/spec/containers/<N>/image
+ - DaemonSet: /spec/template/spec/containers/<N>/image
+ - StatefulSet: /spec/template/spec/containers/<N>/image
+ - Job: /spec/template/spec/containers/<N>/image
+
+ 此外,如果资源对象有多个容器,所有镜像都将被处理。
+
+ 如果值不是 nil,仅处理与过滤条件匹配的镜像。
+
+
+
+ *ImagePredicate 定义了镜像的过滤条件。*
+
+ - **spec.overrideRules.overriders.imageOverrider.predicate.path** (string),必选
+
+ Path 表示目标字段的路径。
+
+ - **spec.overrideRules.overriders.imageOverrider.value** (string)
+
+ Value 表示镜像的值。当运算符为“add(添加)”或“replace(替换)”时,不得为空。当运算符为“remove(删除)”时,默认为空且可忽略。
+
+ - **spec.overrideRules.overriders.labelsOverrider** ([]LabelAnnotationOverrider)
+
+ LabelsOverrider 表示用于处理工作负载标签的专属规则。
+
+
+
+      *LabelAnnotationOverrider 表示用于处理工作负载标签/注解的专属规则。*
+
+ - **spec.overrideRules.overriders.labelsOverrider.operator** (string),必选
+
+ Operator 表示将应用于工作负载的运算符。
+
+ - **spec.overrideRules.overriders.labelsOverrider.value** (map[string]string)
+
+      Value 表示工作负载的 annotation/label 的值。当运算符为“add(添加)”时,Value 中的项会附加在 annotation/label 之后。当运算符为“remove(移除)”时,Value 中与 annotation/label 匹配的项将被删除。当运算符为“replace(替换)”时,Value 中与 annotation/label 匹配的项将被替换。
+
+ - **spec.overrideRules.overriders.plaintext** ([]PlaintextOverrider)
+
+ Plaintext 表示用明文定义的覆盖规则。
+
+
+
+ *PlaintextOverrider 根据路径、运算符和值覆盖目标字段。*
+
+ - **spec.overrideRules.overriders.plaintext.operator** (string),必选
+
+ Operator 表示对目标字段进行的操作。可用的运算符有:添加(add)、替换(replace)和删除(remove)。
+
+ - **spec.overrideRules.overriders.plaintext.path** (string),必选
+
+ Path 表示目标字段的路径。
+
+ - **spec.overrideRules.overriders.plaintext.value** (JSON)
+
+ Value 表示目标字段的值。当操作符为“remove(删除)”时,必须为空。
+
+
+
+ *JSON 表示任何有效的 JSON 值。支持以下类型:bool、int64、float64、string、[]interface[]、map[string]interface[]和 nil*
+
+ - **spec.overrideRules.targetCluster** (ClusterAffinity)
+
+ TargetCluster 定义了对此覆盖策略的限制,此覆盖策略仅适用于分发到匹配集群的资源。nil 表示匹配所有集群。
+
+
+
+ *ClusterAffinity 表示用于选择集群的过滤条件。*
+
+ - **spec.overrideRules.targetCluster.clusterNames** ([]string)
+
+ ClusterNames 罗列待选择的集群。
+
+ - **spec.overrideRules.targetCluster.exclude** ([]string)
+
+ ExcludedClusters 罗列待忽略的集群。
+
+ - **spec.overrideRules.targetCluster.fieldSelector** (FieldSelector)
+
+ FieldSelector 是一个按字段选择成员集群的过滤器。匹配表达式的键(字段)为 provider、region 或 zone,匹配表达式的运算符为 In 或 NotIn。如果值不为 nil,也未留空,仅选择与此过滤器匹配的集群。
+
+
+
+ *FieldSelector 是一个字段过滤器。*
+
+ - **spec.overrideRules.targetCluster.fieldSelector.matchExpressions** ([][NodeSelectorRequirement](../common-definitions/node-selector-requirement#nodeselectorrequirement))
+
+ 字段选择器要求列表。
+
+ - **spec.overrideRules.targetCluster.labelSelector** ([LabelSelector](../common-definitions/label-selector#labelselector))
+
+ LabelSelector 是一个按标签选择成员集群的过滤器。如果值不为 nil,也未留空,仅选择与此过滤器匹配的集群。
+
+ - **spec.overriders** (Overriders)
+
+ Overriders 表示将应用于资源的覆盖规则。
+
+ Deprecated:此字段已在 v1.0 中被弃用,请改用 OverrideRules。
+
+
+
+ *Overriders 提供各种表示覆盖规则的替代方案。
+
+ 如果多个替代方案并存,将按以下顺序应用: ImageOverrider > CommandOverrider > ArgsOverrider > LabelsOverrider > AnnotationsOverrider > Plaintext*
+
+ - **spec.overriders.annotationsOverrider** ([]LabelAnnotationOverrider)
+
+ AnnotationsOverrider 表示用于处理工作负载注解的专属规则。
+
+
+
+    *LabelAnnotationOverrider 表示用于处理工作负载标签/注解的专属规则。*
+
+ - **spec.overriders.annotationsOverrider.operator** (string),必选
+
+ Operator 表示将应用于工作负载的运算符。
+
+ - **spec.overriders.annotationsOverrider.value** (map[string]string)
+
+      Value 表示工作负载的注解/标签的值。当运算符为“add(添加)”时,Value 中的项会附加在 annotation/label 之后。当运算符为“remove(移除)”时,Value 中与 annotation/label 匹配的项将被删除。当运算符为“replace(替换)”时,Value 中与 annotation/label 匹配的项将被替换。
+
+ - **spec.overriders.argsOverrider** ([]CommandArgsOverrider)
+
+ ArgsOverrider 表示用于处理容器 args 的专属规则。
+
+
+
+ *CommandArgsOverrider 表示用于处理 command/args 覆盖的专属规则。*
+
+ - **spec.overriders.argsOverrider.containerName** (string),必选
+
+ 容器的名称。
+
+ - **spec.overriders.argsOverrider.operator** (string),必选
+
+ Operator 表示将应用于 command/args 的运算符。
+
+ - **spec.overriders.argsOverrider.value** ([]string)
+
+      Value 表示 command/args 的值。当运算符为“add(添加)”时,Value 中的项会附加在 command/args 之后。当运算符为“remove(移除)”时,Value 中与 command/args 匹配的项将被删除。如果 Value 为空,则 command/args 将保持不变。
+
+ - **spec.overriders.commandOverrider** ([]CommandArgsOverrider)
+
+ CommandOverrider 表示用于处理容器命令的专属规则。
+
+
+
+ *CommandArgsOverrider 表示用于处理 command/args 覆盖的专属规则。*
+
+ - **spec.overriders.commandOverrider.containerName** (string),必选
+
+ 容器的名称。
+
+ - **spec.overriders.commandOverrider.operator** (string),必选
+
+ Operator 表示将应用于 command/args 的运算符。
+
+ - **spec.overriders.commandOverrider.value** ([]string)
+
+      Value 表示 command/args 的值。当运算符为“add(添加)”时,Value 中的项会附加在 command/args 之后。当运算符为“remove(移除)”时,Value 中与 command/args 匹配的项将被删除。如果 Value 为空,则 command/args 将保持不变。
+
+ - **spec.overriders.imageOverrider** ([]ImageOverrider)
+
+ ImageOverrider 表示用于处理镜像覆盖的专属规则。
+
+
+
+ *ImageOverrider 表示用于处理镜像覆盖的专属规则。*
+
+ - **spec.overriders.imageOverrider.component** (string),必选
+
+      Component 是镜像名称的一部分。镜像名称通常表示为 [registry/]repository[:tag]。registry 可以是 registry.k8s.io、fictional.registry.example:10443;repository 可以是 kube-apiserver、fictional/nginx;tag 可以是 latest、v1.19.1、@sha256:dbcc1c35ac38df41fd2f5e4130b32ffdb93ebae8b3dbe638c23575912276fc9c
+
+ - **spec.overriders.imageOverrider.operator** (string),必选
+
+ Operator 表示将应用于镜像的运算符。
+
+ - **spec.overriders.imageOverrider.predicate** (ImagePredicate)
+
+ 在应用规则之前,Predicate 会对镜像进行过滤。
+
+ 默认值为 nil。如果设置为默认值,并且资源类型为 Pod、ReplicaSet、Deployment、StatefulSet、DaemonSet 或 Job,系统将按照以下规则自动检测镜像字段:
+ - Pod: /spec/containers/<N>/image
+ - ReplicaSet: /spec/template/spec/containers/<N>/image
+ - Deployment: /spec/template/spec/containers/<N>/image
+ - DaemonSet: /spec/template/spec/containers/<N>/image
+ - StatefulSet: /spec/template/spec/containers/<N>/image
+ - Job: /spec/template/spec/containers/<N>/image
+
+ 此外,如果资源对象有多个容器,所有镜像都将被处理。
+
+ 如果值不是 nil,仅处理与过滤条件匹配的镜像。
+
+
+
+ *ImagePredicate 定义了镜像的过滤条件。*
+
+ - **spec.overriders.imageOverrider.predicate.path** (string),必选
+
+ Path 表示目标字段的路径。
+
+ - **spec.overriders.imageOverrider.value** (string)
+
+ Value 表示应用于镜像的值。当运算符为“add(添加)”或“replace(替换)”时,不得为空。当运算符为“remove(删除)”时,默认为空且可忽略。
+
+ - **spec.overriders.labelsOverrider** ([]LabelAnnotationOverrider)
+
+ LabelsOverrider 表示用于处理工作负载标签的专属规则。
+
+
+
+ *LabelAnnotationOverrider 表示用于处理工作负载 labels/annotations 的专属规则。*
+
+ - **spec.overriders.labelsOverrider.operator** (string),必选
+
+ Operator 表示将应用于工作负载的运算符。
+
+ - **spec.overriders.labelsOverrider.value** (map[string]string)
+
+      Value 表示工作负载的注解/标签的值。当运算符为“add(添加)”时,Value 中的项会附加在 annotation/label 之后。当运算符为“remove(移除)”时,Value 中与 annotation/label 匹配的项将被删除。当运算符为“replace(替换)”时,Value 中与 annotation/label 匹配的项将被替换。
+
+ - **spec.overriders.plaintext** ([]PlaintextOverrider)
+
+ Plaintext 表示用明文定义的覆盖规则。
+
+
+
+ *PlaintextOverrider 根据路径、运算符和值覆盖目标字段。*
+
+ - **spec.overriders.plaintext.operator** (string),必选
+
+ Operator 表示对目标字段进行的操作。可用的运算符有:添加(add)、替换(replace)和删除(remove)。
+
+ - **spec.overriders.plaintext.path** (string),必选
+
+ Path 表示目标字段的路径。
+
+ - **spec.overriders.plaintext.value** (JSON)
+
+ Value 表示应用于目标字段的值。当操作符为“remove(删除)”时,必须为空。
+
+
+
+ *JSON 表示任何有效的 JSON 值。支持以下类型:bool、int64、float64、string、[]interface[]、map[string]interface[]和 nil*
+
+ - **spec.resourceSelectors** ([]ResourceSelector)
+
+ ResourceSelectors 限制此覆盖策略适用的资源类型。nil 表示此覆盖策略适用于所有资源。
+
+
+
+ *ResourceSelector 用于选择资源。*
+
+ - **spec.resourceSelectors.apiVersion** (string),必选
+
+ APIVersion 表示目标资源的 API 版本。
+
+ - **spec.resourceSelectors.kind** (string),必选
+
+ Kind 表示目标资源的类别。
+
+ - **spec.resourceSelectors.labelSelector** ([LabelSelector](../common-definitions/label-selector#labelselector))
+
+ 查询一组资源的标签。如果 name 不为空,labelSelector 会被忽略。
+
+ - **spec.resourceSelectors.name** (string)
+
+ 目标资源的名称。默认值为空,表示选择所有资源。
+
+ - **spec.resourceSelectors.namespace** (string)
+
+ 目标资源的 namespace。默认值为空,表示从父对象作用域继承资源。
+
+ - **spec.targetCluster** (ClusterAffinity)
+
+    TargetCluster 定义对此覆盖策略的限制,此覆盖策略仅适用于分发到匹配集群的资源。nil 表示匹配所有集群。
+
+ Deprecated:此字段已在 v1.0 中被弃用,请改用 OverrideRules。
+
+
+
+ *ClusterAffinity 表示用于选择集群的过滤条件。*
+
+ - **spec.targetCluster.clusterNames** ([]string)
+
+ ClusterNames 罗列待选择的集群。
+
+ - **spec.targetCluster.exclude** ([]string)
+
+ ExcludedClusters 罗列待忽略的集群。
+
+ - **spec.targetCluster.fieldSelector** (FieldSelector)
+
+ FieldSelector 是一个按字段选择成员集群的过滤器。匹配表达式的键(字段)为 provider、region 或 zone,匹配表达式的运算符为 In 或 NotIn。如果值不为 nil,也未留空,仅选择与此过滤器匹配的集群。
+
+
+
+ *FieldSelector 是一个字段过滤器。*
+
+ - **spec.targetCluster.fieldSelector.matchExpressions** ([][NodeSelectorRequirement](../common-definitions/node-selector-requirement#nodeselectorrequirement))
+
+ 字段选择器要求列表。
+
+ - **spec.targetCluster.labelSelector** ([LabelSelector](../common-definitions/label-selector#labelselector))
+
+ LabelSelector 是一个按标签选择成员集群的过滤器。如果值不为 nil,也未留空,仅选择与此过滤器匹配的集群。
+
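+以下是一个示意性的示例清单,演示 resourceSelectors 与 overrideRules 字段的组合方式。示例中的策略名、资源名、集群名和镜像仓库地址均为假设值,仅供参考:
+
+```yaml
+apiVersion: policy.karmada.io/v1alpha1
+kind: ClusterOverridePolicy
+metadata:
+  name: nginx-override               # 假设的策略名称
+spec:
+  resourceSelectors:
+    - apiVersion: apps/v1
+      kind: Deployment
+      name: nginx                    # 假设的目标资源
+  overrideRules:
+    - targetCluster:
+        clusterNames:
+          - member1                  # 假设的成员集群
+      overriders:
+        imageOverrider:
+          - component: Registry      # Registry/Repository/Tag 为常见取值,此处仅作示意
+            operator: replace
+            value: fictional.registry.example:10443
+        plaintext:
+          - path: /spec/replicas
+            operator: replace
+            value: 2
+```
+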
+## ClusterOverridePolicyList
+
+ClusterOverridePolicyList 表示一组 ClusterOverridePolicy 的集合。
+
+
+
+- **apiVersion**:policy.karmada.io/v1alpha1
+
+- **kind**:ClusterOverridePolicyList
+
+- **metadata** ([ListMeta](../common-definitions/list-meta#listmeta))
+
+- **items** ([][ClusterOverridePolicy](../policy-resources/cluster-override-policy-v1alpha1#clusteroverridepolicy)),必选
+
+ Items 罗列 ClusterOverridePolicy。
+
+## 操作
+
+
+
+### `get`:查询指定的 ClusterOverridePolicy
+
+#### HTTP 请求
+
+GET /apis/policy.karmada.io/v1alpha1/clusteroverridepolicies/{name}
+
+#### 参数
+
+- **name**(*路径参数*):string,必选
+
+ ClusterOverridePolicy 的名称
+
+- **pretty**(*查询参数*):string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+#### 响应
+
+200 ([ClusterOverridePolicy](../policy-resources/cluster-override-policy-v1alpha1#clusteroverridepolicy)):OK
+
+### `get`:查询指定 ClusterOverridePolicy 的状态
+
+#### HTTP 请求
+
+GET /apis/policy.karmada.io/v1alpha1/clusteroverridepolicies/{name}/status
+
+#### 参数
+
+- **name**(*路径参数*):string,必选
+
+ ClusterOverridePolicy 的名称
+
+- **pretty**(*查询参数*):string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+#### 响应
+
+200 ([ClusterOverridePolicy](../policy-resources/cluster-override-policy-v1alpha1#clusteroverridepolicy)):OK
+
+### `list`:查询所有的 ClusterOverridePolicy
+
+#### HTTP 请求
+
+GET /apis/policy.karmada.io/v1alpha1/clusteroverridepolicies
+
+#### 参数
+
+- **allowWatchBookmarks**(*查询参数*):boolean
+
+ [allowWatchBookmarks](../common-parameter/common-parameters#allowwatchbookmarks)
+
+- **continue**(*查询参数*):string
+
+ [continue](../common-parameter/common-parameters#continue)
+
+- **fieldSelector**(*查询参数*):string
+
+ [fieldSelector](../common-parameter/common-parameters#fieldselector)
+
+- **labelSelector**(*查询参数*):string
+
+ [labelSelector](../common-parameter/common-parameters#labelselector)
+
+- **limit**(*查询参数*):integer
+
+ [limit](../common-parameter/common-parameters#limit)
+
+- **pretty**(*查询参数*):string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+- **resourceVersion**(*查询参数*):string
+
+ [resourceVersion](../common-parameter/common-parameters#resourceversion)
+
+- **resourceVersionMatch**(*查询参数*):string
+
+ [resourceVersionMatch](../common-parameter/common-parameters#resourceversionmatch)
+
+- **sendInitialEvents**(*查询参数*):boolean
+
+ [sendInitialEvents](../common-parameter/common-parameters#sendinitialevents)
+
+- **timeoutSeconds**(*查询参数*):integer
+
+ [timeoutSeconds](../common-parameter/common-parameters#timeoutseconds)
+
+- **watch**(*查询参数*):boolean
+
+ [watch](../common-parameter/common-parameters#watch)
+
+#### 响应
+
+200 ([ClusterOverridePolicyList](../policy-resources/cluster-override-policy-v1alpha1#clusteroverridepolicylist)):OK
+
+### `create`:创建一条 ClusterOverridePolicy
+
+#### HTTP 请求
+
+POST /apis/policy.karmada.io/v1alpha1/clusteroverridepolicies
+
+#### 参数
+
+- **body**: [ClusterOverridePolicy](../policy-resources/cluster-override-policy-v1alpha1#clusteroverridepolicy),必选
+
+
+- **dryRun**(*查询参数*):string
+
+ [dryRun](../common-parameter/common-parameters#dryrun)
+
+- **fieldManager**(*查询参数*):string
+
+ [fieldManager](../common-parameter/common-parameters#fieldmanager)
+
+- **fieldValidation**(*查询参数*):string
+
+ [fieldValidation](../common-parameter/common-parameters#fieldvalidation)
+
+- **pretty**(*查询参数*):string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+#### 响应
+
+200 ([ClusterOverridePolicy](../policy-resources/cluster-override-policy-v1alpha1#clusteroverridepolicy)):OK
+
+201 ([ClusterOverridePolicy](../policy-resources/cluster-override-policy-v1alpha1#clusteroverridepolicy)):Created
+
+202 ([ClusterOverridePolicy](../policy-resources/cluster-override-policy-v1alpha1#clusteroverridepolicy)):Accepted
+
+### `update`:更新指定的 ClusterOverridePolicy
+
+#### HTTP 请求
+
+PUT /apis/policy.karmada.io/v1alpha1/clusteroverridepolicies/{name}
+
+#### 参数
+
+- **name**(*路径参数*):string,必选
+
+ ClusterOverridePolicy 的名称
+
+- **body**: [ClusterOverridePolicy](../policy-resources/cluster-override-policy-v1alpha1#clusteroverridepolicy),必选
+
+
+- **dryRun**(*查询参数*):string
+
+ [dryRun](../common-parameter/common-parameters#dryrun)
+
+- **fieldManager**(*查询参数*):string
+
+ [fieldManager](../common-parameter/common-parameters#fieldmanager)
+
+- **fieldValidation**(*查询参数*):string
+
+ [fieldValidation](../common-parameter/common-parameters#fieldvalidation)
+
+- **pretty**(*查询参数*):string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+#### 响应
+
+200 ([ClusterOverridePolicy](../policy-resources/cluster-override-policy-v1alpha1#clusteroverridepolicy)):OK
+
+201 ([ClusterOverridePolicy](../policy-resources/cluster-override-policy-v1alpha1#clusteroverridepolicy)):Created
+
+### `update`:更新指定 ClusterOverridePolicy 的状态
+
+#### HTTP 请求
+
+PUT /apis/policy.karmada.io/v1alpha1/clusteroverridepolicies/{name}/status
+
+#### 参数
+
+- **name**(*路径参数*):string,必选
+
+ ClusterOverridePolicy 的名称
+
+- **body**: [ClusterOverridePolicy](../policy-resources/cluster-override-policy-v1alpha1#clusteroverridepolicy),必选
+
+
+- **dryRun**(*查询参数*):string
+
+ [dryRun](../common-parameter/common-parameters#dryrun)
+
+- **fieldManager**(*查询参数*):string
+
+ [fieldManager](../common-parameter/common-parameters#fieldmanager)
+
+- **fieldValidation**(*查询参数*):string
+
+ [fieldValidation](../common-parameter/common-parameters#fieldvalidation)
+
+- **pretty**(*查询参数*):string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+#### 响应
+
+200 ([ClusterOverridePolicy](../policy-resources/cluster-override-policy-v1alpha1#clusteroverridepolicy)):OK
+
+201 ([ClusterOverridePolicy](../policy-resources/cluster-override-policy-v1alpha1#clusteroverridepolicy)):Created
+
+### `patch`:更新指定 ClusterOverridePolicy 的部分信息
+
+#### HTTP 请求
+
+PATCH /apis/policy.karmada.io/v1alpha1/clusteroverridepolicies/{name}
+
+#### 参数
+
+- **name**(*路径参数*):string,必选
+
+ ClusterOverridePolicy 的名称
+
+- **body**: [Patch](../common-definitions/patch#patch),必选
+
+
+- **dryRun**(*查询参数*):string
+
+ [dryRun](../common-parameter/common-parameters#dryrun)
+
+- **fieldManager**(*查询参数*):string
+
+ [fieldManager](../common-parameter/common-parameters#fieldmanager)
+
+- **fieldValidation**(*查询参数*):string
+
+ [fieldValidation](../common-parameter/common-parameters#fieldvalidation)
+
+- **force**(*查询参数*):boolean
+
+ [force](../common-parameter/common-parameters#force)
+
+- **pretty**(*查询参数*):string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+#### 响应
+
+200 ([ClusterOverridePolicy](../policy-resources/cluster-override-policy-v1alpha1#clusteroverridepolicy)):OK
+
+201 ([ClusterOverridePolicy](../policy-resources/cluster-override-policy-v1alpha1#clusteroverridepolicy)):Created
+
+### `patch`:更新指定 ClusterOverridePolicy 状态的部分信息
+
+#### HTTP 请求
+
+PATCH /apis/policy.karmada.io/v1alpha1/clusteroverridepolicies/{name}/status
+
+#### 参数
+
+- **name**(*路径参数*):string,必选
+
+ ClusterOverridePolicy 的名称
+
+- **body**: [Patch](../common-definitions/patch#patch),必选
+
+
+- **dryRun**(*查询参数*):string
+
+ [dryRun](../common-parameter/common-parameters#dryrun)
+
+- **fieldManager**(*查询参数*):string
+
+ [fieldManager](../common-parameter/common-parameters#fieldmanager)
+
+- **fieldValidation**(*查询参数*):string
+
+ [fieldValidation](../common-parameter/common-parameters#fieldvalidation)
+
+- **force**(*查询参数*):boolean
+
+ [force](../common-parameter/common-parameters#force)
+
+- **pretty**(*查询参数*):string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+#### 响应
+
+200 ([ClusterOverridePolicy](../policy-resources/cluster-override-policy-v1alpha1#clusteroverridepolicy)):OK
+
+201 ([ClusterOverridePolicy](../policy-resources/cluster-override-policy-v1alpha1#clusteroverridepolicy)):Created
+
+### `delete`:删除一条 ClusterOverridePolicy
+
+#### HTTP 请求
+
+DELETE /apis/policy.karmada.io/v1alpha1/clusteroverridepolicies/{name}
+
+#### 参数
+
+- **name**(*路径参数*):string,必选
+
+ ClusterOverridePolicy 的名称
+
+- **body**: [DeleteOptions](../common-definitions/delete-options#deleteoptions)
+
+
+- **dryRun**(*查询参数*):string
+
+ [dryRun](../common-parameter/common-parameters#dryrun)
+
+- **gracePeriodSeconds**(*查询参数*):integer
+
+ [gracePeriodSeconds](../common-parameter/common-parameters#graceperiodseconds)
+
+- **pretty**(*查询参数*):string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+- **propagationPolicy**(*查询参数*):string
+
+ [propagationPolicy](../common-parameter/common-parameters#propagationpolicy)
+
+#### 响应
+
+200 ([Status](../common-definitions/status#status)):OK
+
+202 ([Status](../common-definitions/status#status)):Accepted
+
+### `deletecollection`:删除所有 ClusterOverridePolicy
+
+#### HTTP 请求
+
+DELETE /apis/policy.karmada.io/v1alpha1/clusteroverridepolicies
+
+#### 参数
+
+- **body**: [DeleteOptions](../common-definitions/delete-options#deleteoptions)
+
+
+- **continue**(*查询参数*):string
+
+ [continue](../common-parameter/common-parameters#continue)
+
+- **dryRun**(*查询参数*):string
+
+ [dryRun](../common-parameter/common-parameters#dryrun)
+
+- **fieldSelector**(*查询参数*):string
+
+ [fieldSelector](../common-parameter/common-parameters#fieldselector)
+
+- **gracePeriodSeconds**(*查询参数*):integer
+
+ [gracePeriodSeconds](../common-parameter/common-parameters#graceperiodseconds)
+
+- **labelSelector**(*查询参数*):string
+
+ [labelSelector](../common-parameter/common-parameters#labelselector)
+
+- **limit**(*查询参数*):integer
+
+ [limit](../common-parameter/common-parameters#limit)
+
+- **pretty**(*查询参数*):string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+- **propagationPolicy**(*查询参数*):string
+
+ [propagationPolicy](../common-parameter/common-parameters#propagationpolicy)
+
+- **resourceVersion**(*查询参数*):string
+
+ [resourceVersion](../common-parameter/common-parameters#resourceversion)
+
+- **resourceVersionMatch**(*查询参数*):string
+
+ [resourceVersionMatch](../common-parameter/common-parameters#resourceversionmatch)
+
+- **sendInitialEvents**(*查询参数*):boolean
+
+ [sendInitialEvents](../common-parameter/common-parameters#sendinitialevents)
+
+- **timeoutSeconds**(*查询参数*):integer
+
+ [timeoutSeconds](../common-parameter/common-parameters#timeoutseconds)
+
+#### 响应
+
+200 ([Status](../common-definitions/status#status)):OK
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/karmada-api/policy-resources/cluster-propagation-policy-v1alpha1.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/karmada-api/policy-resources/cluster-propagation-policy-v1alpha1.md
new file mode 100644
index 000000000..f5d6ac491
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/karmada-api/policy-resources/cluster-propagation-policy-v1alpha1.md
@@ -0,0 +1,778 @@
+---
+api_metadata:
+ apiVersion: "policy.karmada.io/v1alpha1"
+ import: "github.com/karmada-io/karmada/pkg/apis/policy/v1alpha1"
+ kind: "ClusterPropagationPolicy"
+content_type: "api_reference"
+description: "ClusterPropagationPolicy represents the cluster-wide policy that propagates a group of resources to one or more clusters."
+title: "ClusterPropagationPolicy v1alpha1"
+weight: 5
+auto_generated: true
+---
+
+[//]: # (The file is auto-generated from the Go source code of the component using a generic generator,)
+[//]: # (which is forked from [reference-docs](https://github.com/kubernetes-sigs/reference-docs).)
+[//]: # (To update the reference content, please follow the `reference-api.sh`.)
+
+`apiVersion: policy.karmada.io/v1alpha1`
+
+`import "github.com/karmada-io/karmada/pkg/apis/policy/v1alpha1"`
+
+## ClusterPropagationPolicy
+
+ClusterPropagationPolicy 表示将一组资源分发到一个或多个集群的集群范围策略。与只能在其所属命名空间内分发资源的 PropagationPolicy 相比,ClusterPropagationPolicy 既能分发集群级别的资源,也能分发除系统保留命名空间之外的任何命名空间中的资源。系统保留命名空间包括:karmada-system、karmada-cluster、karmada-es-*。
+
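+下面是一个示意性的最小示例清单,字段含义见下文的 spec 说明。示例中的策略名、资源名、集群组名和成员集群名均为假设值,仅供参考:
+
+```yaml
+apiVersion: policy.karmada.io/v1alpha1
+kind: ClusterPropagationPolicy
+metadata:
+  name: nginx-propagation        # 假设的策略名称
+spec:
+  resourceSelectors:
+    - apiVersion: apps/v1
+      kind: Deployment
+      name: nginx                # 假设的目标资源
+  placement:
+    clusterAffinities:
+      - affinityName: primary    # 假设的集群组名称
+        clusterNames:
+          - member1
+          - member2
+```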
+
+
+- **apiVersion**:policy.karmada.io/v1alpha1
+
+- **kind**:ClusterPropagationPolicy
+
+- **metadata** ([ObjectMeta](../common-definitions/object-meta#objectmeta))
+
+- **spec** (PropagationSpec),必选
+
+ Spec 表示 ClusterPropagationPolicy 的规范。
+
+
+
+ *PropagationSpec 表示 PropagationPolicy 的规范。*
+
+ - **spec.resourceSelectors** ([]ResourceSelector),必选
+
+    ResourceSelectors 用于选择资源。不允许设置为 nil 或者留空,留空也并不表示匹配所有类型的资源;这是出于安全考虑,避免 Secret 等敏感资源被策略无意分发。
+
+
+
+ *ResourceSelector 用于选择资源。*
+
+ - **spec.resourceSelectors.apiVersion** (string),必选
+
+ APIVersion 表示目标资源的 API 版本。
+
+ - **spec.resourceSelectors.kind** (string),必选
+
+ Kind 表示目标资源的类别。
+
+ - **spec.resourceSelectors.labelSelector** ([LabelSelector](../common-definitions/label-selector#labelselector))
+
+ 查询一组资源的标签。如果 name 不为空,labelSelector 会被忽略。
+
+ - **spec.resourceSelectors.name** (string)
+
+ 目标资源的名称。默认值为空,意味着将选择所有的资源。
+
+ - **spec.resourceSelectors.namespace** (string)
+
+ 目标资源的 namespace。默认值为空,意味着从父对象作用域继承资源。
+
+ - **spec.association** (boolean)
+
+    Association 表示是否自动选择相关资源,例如,被 Deployment 引用的 ConfigMap。默认值为 false。已弃用:请改用 PropagateDeps。
+
+ - **spec.conflictResolution** (string)
+
+ ConflictResolution 表示当目标集群中已存在正在分发的资源时,处理潜在冲突的方式。
+
+ 默认值为 Abort,表示停止分发资源以避免意外覆盖。将原集群资源迁移到 Karmada 时,可设置为“Overwrite”。此时,冲突是可预测的,且 Karmada 可通过覆盖来接管资源。
+
+ - **spec.dependentOverrides** ([]string)
+
+ DependentOverrides 罗列在当前 PropagationPolicy 生效之前必须出现的覆盖(OverridePolicy)。
+
+ 它指明当前 PropagationPolicy 所依赖的覆盖。当用户同时创建 OverridePolicy 和资源时,一般希望可以采用新创建的策略。
+
+ 注意:如果当前命名空间中的 OverridePolicy 和 ClusterOverridePolicy 与资源匹配,即使它们不在列表中,仍将被应用于覆盖。
+
+ - **spec.failover** (FailoverBehavior)
+
+ Failover 表示 Karmada 在故障场景中迁移应用的方式。如果值为 nil,则禁用故障转移。
+
+
+
+ *FailoverBehavior 表示应用或集群的故障转移。*
+
+ - **spec.failover.application** (ApplicationFailoverBehavior)
+
+ Application 表示应用的故障转移。如果值为 nil,则禁用故障转移。如果值不为 nil,则 PropagateDeps 应设置为 true,以便依赖项随应用一起迁移。
+
+
+
+ *ApplicationFailoverBehavior 表示应用的故障转移。*
+
+ - **spec.failover.application.decisionConditions** (DecisionConditions),必选
+
+ DecisionConditions 表示执行故障转移的先决条件。只有满足所有条件,才能执行故障转移。当前条件为 TolerationSeconds(可选)。
+
+
+
+ *DecisionConditions 表示执行故障转移的先决条件。*
+
+ - **spec.failover.application.decisionConditions.tolerationSeconds** (int32)
+
+ TolerationSeconds 表示应用达到预期状态后,Karmada 在执行故障转移之前应等待的时间。如果未指定,Karmada 将立即执行故障转移。默认为 300 秒。
+
+ - **spec.failover.application.gracePeriodSeconds** (int32)
+
+      GracePeriodSeconds 表示删除原集群中应用之前的最长等待时间(以秒为单位)。仅当 PurgeMode 设置为 Graciously 时才需要设置该字段,默认为 600 秒。如果新集群中的应用无法达到健康状态,Karmada 将在达到最长等待时间后删除该应用。取值只能为正整数。
+
+ - **spec.failover.application.purgeMode** (string)
+
+      PurgeMode 表示原集群中应用的处理方式。取值包括 Immediately、Graciously 和 Never。默认为 Graciously。
+
+ - **spec.placement** (Placement)
+
+ Placement 表示选择集群以分发资源的规则。
+
+
+
+ *Placement 表示选择集群的规则。*
+
+ - **spec.placement.clusterAffinities** ([]ClusterAffinityTerm)
+
+ ClusterAffinities 表示多个集群组的调度限制(ClusterAffinityTerm 指定每种限制)。
+
+    调度器将按照这些组在规范中出现的顺序逐个评估,不满足调度限制的组将被忽略,即该组中的集群都不会被选择,除非这些集群也属于下一个组(同一集群可以属于多个组)。
+
+    如果所有组都不满足调度限制,则调度失败,任何集群都不会被选择。
+
+ 注意:
+ 1. ClusterAffinities 不能与 ClusterAffinity 共存。
+    2. 如果 ClusterAffinities 和 ClusterAffinity 均未设置,则任何集群都可以作为调度候选集群。
+
+ 潜在用例1:本地数据中心的私有集群为主集群组,集群提供商的托管集群是辅助集群组。Karmada 调度器更愿意将工作负载调度到主集群组,只有在主集群组不满足限制(如缺乏资源)的情况下,才会考虑辅助集群组。
+
+ 潜在用例2:对于容灾场景,系统管理员可定义主集群组和备份集群组,工作负载将首先调度到主集群组,当主集群组中的集群发生故障(如数据中心断电)时,Karmada 调度器可以将工作负载迁移到备份集群组。
+
+
+
+ *ClusterAffinityTerm 用于选择集群。*
+
+ - **spec.placement.clusterAffinities.affinityName** (string),必选
+
+ AffinityName 是集群组的名称。
+
+ - **spec.placement.clusterAffinities.clusterNames** ([]string)
+
+ ClusterNames 罗列待选择的集群。
+
+ - **spec.placement.clusterAffinities.exclude** ([]string)
+
+ ExcludedClusters 罗列待忽略的集群。
+
+ - **spec.placement.clusterAffinities.fieldSelector** (FieldSelector)
+
+ FieldSelector 是一个按字段选择成员集群的过滤器。匹配表达式的键(字段)为 provider、region 或 zone,匹配表达式的运算符为 In 或 NotIn。如果值不为 nil,也未留空,仅选择与此过滤器匹配的集群。
+
+
+
+ *FieldSelector 是一个字段过滤器。*
+
+ - **spec.placement.clusterAffinities.fieldSelector.matchExpressions** ([][NodeSelectorRequirement](../common-definitions/node-selector-requirement#nodeselectorrequirement))
+
+ 字段选择器要求列表。
+
+ - **spec.placement.clusterAffinities.labelSelector** ([LabelSelector](../common-definitions/label-selector#labelselector))
+
+ LabelSelector 是一个按标签选择成员集群的过滤器。如果值不为 nil,也未留空,仅选择与此过滤器匹配的集群。
+
+ - **spec.placement.clusterAffinity** (ClusterAffinity)
+
+ ClusterAffinity 表示对某组集群的调度限制。注意:
+ 1. ClusterAffinity 不能与 ClusterAffinities 共存。
+    2. 如果 ClusterAffinities 和 ClusterAffinity 均未设置,则任何集群都可以作为调度候选集群。
+
+
+
+ *ClusterAffinity 是用于选择集群的过滤条件。*
+
+ - **spec.placement.clusterAffinity.clusterNames** ([]string)
+
+ ClusterNames 罗列待选择的集群。
+
+ - **spec.placement.clusterAffinity.exclude** ([]string)
+
+ ExcludedClusters 罗列待忽略的集群。
+
+ - **spec.placement.clusterAffinity.fieldSelector** (FieldSelector)
+
+ FieldSelector 是一个按字段选择成员集群的过滤器。匹配表达式的键(字段)为 provider、region 或 zone,匹配表达式的运算符为 In 或 NotIn。如果值不为 nil,也未留空,仅选择与此过滤器匹配的集群。
+
+
+
+ *FieldSelector 是一个字段过滤器。*
+
+ - **spec.placement.clusterAffinity.fieldSelector.matchExpressions** ([][NodeSelectorRequirement](../common-definitions/node-selector-requirement#nodeselectorrequirement))
+
+ 字段选择器要求列表。
+
+ - **spec.placement.clusterAffinity.labelSelector** ([LabelSelector](../common-definitions/label-selector#labelselector))
+
+ LabelSelector 是一个按标签选择成员集群的过滤器。如果值不为 nil,也未留空,仅选择与此过滤器匹配的集群。
+
+ - **spec.placement.clusterTolerations** ([]Toleration)
+
+ ClusterTolerations 表示容忍度。
+
+
+
+ *附加此容忍度的 Pod 能够容忍任何使用匹配运算符 <operator> 匹配三元组 <key,value,effect> 所得到的污点。*
+
+ - **spec.placement.clusterTolerations.effect** (string)
+
+ Effect 表示要匹配的污点效果。留空表示匹配所有污点效果。如果设置此字段,允许的值为 NoSchedule、PreferNoSchedule 或 NoExecute。
+
+ 枚举值包括:
+ - `"NoExecute"`:任何不能容忍该污点的 Pod 都会被驱逐。当前由 NodeController 强制执行。
+    - `"NoSchedule"`:如果新 Pod 无法容忍该污点,则不允许将其调度到该节点上,但允许所有不经过调度器而直接提交给 kubelet 的 Pod 启动,并允许节点上已在运行的 Pod 继续运行。由调度器强制执行。
+ - `"PreferNoSchedule"`:和 TaintEffectNoSchedule 相似,不同的是调度器尽量避免将新 Pod 调度到具有该污点的节点上,除非没有其他节点可调度。由调度器强制执行。
+
+ - **spec.placement.clusterTolerations.key** (string)
+
+    Key 是容忍度的污点键。留空表示匹配所有污点键。如果键为空,则运算符必须为 Exists,所有值和所有键都会被匹配。
+
+ - **spec.placement.clusterTolerations.operator** (string)
+
+ Operator 表示一个键与其值的关系。有效的运算符包括 Exists 和 Equal。默认为 Equal。Exists 相当于将值设置为通配符,因此一个 Pod 可以容忍特定类别的所有污点。
+
+ 枚举值包括:
+ - `"Equal"`
+ - `"Exists"`
+
+ - **spec.placement.clusterTolerations.tolerationSeconds** (int64)
+
+ TolerationSeconds 表示容忍度容忍污点的时间段(Effect 的取值为 NoExecute,否则忽略此字段)。默认情况下,不设置此字段,表示永远容忍污点(不驱逐)。零和负值将被系统视为 0(立即驱逐)。
+
+ - **spec.placement.clusterTolerations.value** (string)
+
+ Value 是容忍度匹配到的污点值。如果运算符为 Exists,则值留空,否则就是一个普通字符串。
+
+ - **spec.placement.replicaScheduling** (ReplicaSchedulingStrategy)
+
+    ReplicaScheduling 表示将 spec 中带有副本数的资源(例如 Deployment、StatefulSet)分发到成员集群时处理副本数量的调度策略。
+
+
+
+ *ReplicaSchedulingStrategy 表示副本的分配策略。*
+
+ - **spec.placement.replicaScheduling.replicaDivisionPreference** (string)
+
+ 当 ReplicaSchedulingType 设置为 Divided 时,由 ReplicaDivisionPreference 确定副本的分配策略。取值包括 Aggregated 和 Weighted。Aggregated:将副本分配给尽可能少的集群,同时考虑集群的资源可用性。Weighted:根据 WeightPreference 按权重分配副本。
+
+ - **spec.placement.replicaScheduling.replicaSchedulingType** (string)
+
+      ReplicaSchedulingType 确定 Karmada 分发资源时副本的调度方式。取值包括 Duplicated 和 Divided。Duplicated:将资源中相同的副本数复制到每个候选成员集群。Divided:根据有效候选成员集群的数量分配副本,每个集群的副本数由 ReplicaDivisionPreference 确定(按权重划分的配置示例见本节字段列表之后)。
+
+ - **spec.placement.replicaScheduling.weightPreference** (ClusterPreferences)
+
+ WeightPreference 描述每个集群或每组集群的权重。如果 ReplicaDivisionPreference 设置为 Weighted,但 WeightPreference 未设置,调度器将为所有集群设置相同的权重。
+
+
+
+ *ClusterPreferences 描述了每个集群或每组集群的权重。*
+
+ - **spec.placement.replicaScheduling.weightPreference.dynamicWeight** (string)
+
+ DynamicWeight 指生成动态权重列表的因子。如果指定,StaticWeightList 将被忽略。
+
+ - **spec.placement.replicaScheduling.weightPreference.staticWeightList** ([]StaticClusterWeight)
+
+ StaticWeightList 罗列静态集群权重。
+
+
+
+ *StaticClusterWeight 定义静态集群权重。*
+
+ - **spec.placement.replicaScheduling.weightPreference.staticWeightList.targetCluster** (ClusterAffinity),必选
+
+ TargetCluster 是选择集群的过滤器。
+
+
+
+ *ClusterAffinity 表示用于选择集群的过滤条件。*
+
+ - **spec.placement.replicaScheduling.weightPreference.staticWeightList.targetCluster.clusterNames** ([]string)
+
+ ClusterNames 罗列待选择的集群。
+
+ - **spec.placement.replicaScheduling.weightPreference.staticWeightList.targetCluster.exclude** ([]string)
+
+ ExcludedClusters 罗列待忽略的集群。
+
+ - **spec.placement.replicaScheduling.weightPreference.staticWeightList.targetCluster.fieldSelector** (FieldSelector)
+
+ FieldSelector 是一个按字段选择成员集群的过滤器。匹配表达式的键(字段)为 provider、region 或 zone,匹配表达式的运算符为 In 或 NotIn。如果值不为 nil,也未留空,仅选择与此过滤器匹配的集群。
+
+
+
+ *FieldSelector 是一个字段过滤器。*
+
+ - **spec.placement.replicaScheduling.weightPreference.staticWeightList.targetCluster.fieldSelector.matchExpressions** ([][NodeSelectorRequirement](../common-definitions/node-selector-requirement#nodeselectorrequirement))
+
+ 字段选择器要求列表。
+
+ - **spec.placement.replicaScheduling.weightPreference.staticWeightList.targetCluster.labelSelector** ([LabelSelector](../common-definitions/label-selector#labelselector))
+
+ LabelSelector 是一个按标签选择成员集群的过滤器。如果值不为 nil,也未留空,仅选择与此过滤器匹配的集群。
+
+    - **spec.placement.replicaScheduling.weightPreference.staticWeightList.weight** (int64),必选
+
+      Weight 表示优先选择 TargetCluster 指定集群的权重。
+
+ - **spec.placement.spreadConstraints** ([]SpreadConstraint)
+
+ SpreadConstraints 表示调度约束列表。
+
+
+
+ *SpreadConstraint 表示资源分布的约束。*
+
+ - **spec.placement.spreadConstraints.maxGroups** (int32)
+
+ MaxGroups 表示要选择的集群组的最大数量。
+
+ - **spec.placement.spreadConstraints.minGroups** (int32)
+
+ MinGroups 表示要选择的集群组的最小数量。默认值为 1。
+
+ - **spec.placement.spreadConstraints.spreadByField** (string)
+
+    SpreadByField 表示 Karmada 集群 API 中用于将成员集群动态划分到不同集群组的字段。资源将被分发到不同的集群组中。可用的字段包括 cluster、region、zone 和 provider。SpreadByField 不能与 SpreadByLabel 共存。如果两个字段都为空,SpreadByField 默认为 cluster。
+
+ - **spec.placement.spreadConstraints.spreadByLabel** (string)
+
+ SpreadByLabel 表示用于将成员集群分到不同集群组的标签键。资源将被分发到不同的集群组中。SpreadByLabel 不能与 SpreadByField 共存。
+
+ - **spec.preemption** (string)
+
+ Preemption 表示资源抢占。取值包括 Always 和 Never。
+
+ 枚举值包括:
+    - `"Always"`:允许抢占。如果 Always 应用于 PropagationPolicy,则会根据优先级抢占资源。只要 PropagationPolicy 和 ClusterPropagationPolicy 能匹配 ResourceSelector 中定义的规则,均可用于声明资源。此外,如果资源已经被 ClusterPropagationPolicy 声明,PropagationPolicy 仍然可以抢占该资源,无需考虑优先级。如果 Always 应用于 ClusterPropagationPolicy,则只能从 ClusterPropagationPolicy 抢占资源,不能从 PropagationPolicy 抢占。
+ - `"Never"`:PropagationPolicy(或 ClusterPropagationPolicy)不抢占资源。
+
+  - **spec.priority** (int32)
+
+    Priority 表示策略(PropagationPolicy 或 ClusterPropagationPolicy)的重要性。在处理资源模板时,如果没有其他优先级更高的策略,该策略将应用于匹配的资源模板。一旦资源模板被某个策略声明,默认情况下即使后续策略的优先级更高,该模板也不会被抢占。查看 Preemption 字段,了解更多信息。
+
+    如果两条策略有相同的优先级,会使用 ResourceSelector 中有更精确匹配规则的策略。
+    - 按 name(resourceSelector.name) 匹配的优先级高于按 selector(resourceSelector.labelSelector) 匹配。
+    - 按 selector(resourceSelector.labelSelector) 匹配的优先级又高于按 APIVersion(resourceSelector.apiVersion) 或 Kind(resourceSelector.kind) 匹配。
+
+    如果优先级仍然相同,则按字母顺序,名称排序靠前的策略胜出,比如,名为 bar 的策略的优先级高于名为 foo 的策略。
+
+    值越大,优先级越高。默认值为 0。
+
+  - **spec.propagateDeps** (boolean)
+
+    PropagateDeps 表示相关资源是否被自动分发。以引用 ConfigMap 和 Secret 的 Deployment 为例,当 propagateDeps 为 true 时,可以在 resourceSelectors 中省略被引用的资源(以减少配置工作),它们将与 Deployment 一起被分发。此外,在故障转移场景中,被引用的资源也将与 Deployment 一起迁移。
+
+    默认值为 false。
+
+ - **spec.schedulerName** (string)
+
+ SchedulerName 表示要继续进行调度的调度器。如果指定,将由指定的调度器调度策略。如果未指定,将由默认调度器调度策略。
+
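+下面是一个按静态权重划分副本的示例清单(仅作说明:其中的策略名称、资源名称、集群名称和权重均为假设值),对应上文的 placement.replicaScheduling 与 weightPreference 字段:
+
+```yaml
+apiVersion: policy.karmada.io/v1alpha1
+kind: ClusterPropagationPolicy
+metadata:
+  name: example-weighted-policy    # 假设的策略名称
+spec:
+  resourceSelectors:
+    - apiVersion: apps/v1
+      kind: Deployment
+      name: nginx                  # 假设的资源名称
+  placement:
+    clusterAffinity:
+      clusterNames:                # 假设的成员集群名称
+        - member1
+        - member2
+    replicaScheduling:
+      replicaSchedulingType: Divided        # 按集群划分副本
+      replicaDivisionPreference: Weighted   # 按权重划分
+      weightPreference:
+        staticWeightList:
+          - targetCluster:
+              clusterNames:
+                - member1
+            weight: 2              # member1 与 member2 按 2:1 分配副本
+          - targetCluster:
+              clusterNames:
+                - member2
+            weight: 1
+```
+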
+## ClusterPropagationPolicyList
+
+ClusterPropagationPolicyList 罗列 ClusterPropagationPolicy。
+
+
+
+- **apiVersion**:policy.karmada.io/v1alpha1
+
+- **kind**:ClusterPropagationPolicyList
+
+- **metadata** ([ListMeta](../common-definitions/list-meta#listmeta))
+
+- **items** ([][ClusterPropagationPolicy](../policy-resources/cluster-propagation-policy-v1alpha1#clusterpropagationpolicy)),必选
+
+## 操作
+
+
+
+### `get`:查询指定的 ClusterPropagationPolicy
+
+#### HTTP 请求
+
+GET /apis/policy.karmada.io/v1alpha1/clusterpropagationpolicies/{name}
+
+#### 参数
+
+- **名称**(*路径参数*):string,必选
+
+ ClusterPropagationPolicy 的名称
+
+- **pretty**(*查询参数*):string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+#### 响应
+
+200 ([ClusterPropagationPolicy](../policy-resources/cluster-propagation-policy-v1alpha1#clusterpropagationpolicy)):OK
+
+### `get`:查询指定 ClusterPropagationPolicy 的状态
+
+#### HTTP 请求
+
+GET /apis/policy.karmada.io/v1alpha1/clusterpropagationpolicies/{name}/status
+
+#### 参数
+
+- **名称**(*路径参数*):string,必选
+
+  ClusterPropagationPolicy 的名称
+
+- **pretty**(*查询参数*):string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+#### 响应
+
+200 ([ClusterPropagationPolicy](../policy-resources/cluster-propagation-policy-v1alpha1#clusterpropagationpolicy)):OK
+
+### `list`:查询或监听 ClusterPropagationPolicy 类型的对象
+
+#### HTTP 请求
+
+GET /apis/policy.karmada.io/v1alpha1/clusterpropagationpolicies
+
+#### 参数
+
+- **allowWatchBookmarks**(*查询参数*):boolean
+
+ [allowWatchBookmarks](../common-parameter/common-parameters#allowwatchbookmarks)
+
+- **continue**(*查询参数*):string
+
+ [continue](../common-parameter/common-parameters#continue)
+
+- **fieldSelector**(*查询参数*):string
+
+ [fieldSelector](../common-parameter/common-parameters#fieldselector)
+
+- **labelSelector**(*查询参数*):string
+
+ [labelSelector](../common-parameter/common-parameters#labelselector)
+
+- **limit**(*查询参数*):integer
+
+ [limit](../common-parameter/common-parameters#limit)
+
+- **pretty**(*查询参数*):string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+- **resourceVersion**(*查询参数*):string
+
+ [resourceVersion](../common-parameter/common-parameters#resourceversion)
+
+- **resourceVersionMatch**(*查询参数*):string
+
+ [resourceVersionMatch](../common-parameter/common-parameters#resourceversionmatch)
+
+- **sendInitialEvents**(*查询参数*):boolean
+
+  [sendInitialEvents](../common-parameter/common-parameters#sendinitialevents)
+
+- **timeoutSeconds**(*查询参数*):integer
+
+  [timeoutSeconds](../common-parameter/common-parameters#timeoutseconds)
+
+- **watch**(*查询参数*):boolean
+
+ [watch](../common-parameter/common-parameters#watch)
+
+#### 响应
+
+200 ([ClusterPropagationPolicyList](../policy-resources/cluster-propagation-policy-v1alpha1#clusterpropagationpolicylist)):OK
+
+### `create`:创建一条 ClusterPropagationPolicy
+
+#### HTTP 请求
+
+POST /apis/policy.karmada.io/v1alpha1/clusterpropagationpolicies
+
+#### 参数
+
+- **body**: [ClusterPropagationPolicy](../policy-resources/cluster-propagation-policy-v1alpha1#clusterpropagationpolicy),必选
+
+
+- **dryRun**(*查询参数*):string
+
+ [dryRun](../common-parameter/common-parameters#dryrun)
+
+- **fieldManager**(*查询参数*):string
+
+ [fieldManager](../common-parameter/common-parameters#fieldmanager)
+
+- **fieldValidation**(*查询参数*):string
+
+ [fieldValidation](../common-parameter/common-parameters#fieldvalidation)
+
+- **pretty**(*查询参数*):string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+#### 响应
+
+200 ([ClusterPropagationPolicy](../policy-resources/cluster-propagation-policy-v1alpha1#clusterpropagationpolicy)):OK
+
+201 ([ClusterPropagationPolicy](../policy-resources/cluster-propagation-policy-v1alpha1#clusterpropagationpolicy)):Created
+
+202 ([ClusterPropagationPolicy](../policy-resources/cluster-propagation-policy-v1alpha1#clusterpropagationpolicy)):Accepted
+
+### `update`:更新指定的 ClusterPropagationPolicy
+
+#### HTTP 请求
+
+PUT /apis/policy.karmada.io/v1alpha1/clusterpropagationpolicies/{name}
+
+#### 参数
+
+- **名称**(*路径参数*):string,必选
+
+ ClusterPropagationPolicy 的名称
+
+- **body**: [ClusterPropagationPolicy](../policy-resources/cluster-propagation-policy-v1alpha1#clusterpropagationpolicy),必选
+
+
+- **dryRun**(*查询参数*):string
+
+ [dryRun](../common-parameter/common-parameters#dryrun)
+
+- **fieldManager**(*查询参数*):string
+
+ [fieldManager](../common-parameter/common-parameters#fieldmanager)
+
+- **fieldValidation**(*查询参数*):string
+
+ [fieldValidation](../common-parameter/common-parameters#fieldvalidation)
+
+- **pretty**(*查询参数*):string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+#### 响应
+
+200 ([ClusterPropagationPolicy](../policy-resources/cluster-propagation-policy-v1alpha1#clusterpropagationpolicy)):OK
+
+201 ([ClusterPropagationPolicy](../policy-resources/cluster-propagation-policy-v1alpha1#clusterpropagationpolicy)):Created
+
+### `update`:更新指定 ClusterPropagationPolicy 的状态
+
+#### HTTP 请求
+
+PUT /apis/policy.karmada.io/v1alpha1/clusterpropagationpolicies/{name}/status
+
+#### 参数
+
+- **名称**(*路径参数*):string,必选
+
+ ClusterPropagationPolicy 的名称
+
+- **body**: [ClusterPropagationPolicy](../policy-resources/cluster-propagation-policy-v1alpha1#clusterpropagationpolicy),必选
+
+
+- **dryRun**(*查询参数*):string
+
+ [dryRun](../common-parameter/common-parameters#dryrun)
+
+- **fieldManager**(*查询参数*):string
+
+ [fieldManager](../common-parameter/common-parameters#fieldmanager)
+
+- **fieldValidation**(*查询参数*):string
+
+ [fieldValidation](../common-parameter/common-parameters#fieldvalidation)
+
+- **pretty**(*查询参数*):string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+#### 响应
+
+200 ([ClusterPropagationPolicy](../policy-resources/cluster-propagation-policy-v1alpha1#clusterpropagationpolicy)):OK
+
+201 ([ClusterPropagationPolicy](../policy-resources/cluster-propagation-policy-v1alpha1#clusterpropagationpolicy)):Created
+
+### `patch`:更新指定 ClusterPropagationPolicy 的部分信息
+
+#### HTTP 请求
+
+PATCH /apis/policy.karmada.io/v1alpha1/clusterpropagationpolicies/{name}
+
+#### 参数
+
+- **名称**(*路径参数*):string,必选
+
+ ClusterPropagationPolicy 的名称
+
+- **body**: [Patch](../common-definitions/patch#patch),必选
+
+
+- **dryRun**(*查询参数*):string
+
+ [dryRun](../common-parameter/common-parameters#dryrun)
+
+- **fieldManager**(*查询参数*):string
+
+ [fieldManager](../common-parameter/common-parameters#fieldmanager)
+
+- **fieldValidation**(*查询参数*):string
+
+ [fieldValidation](../common-parameter/common-parameters#fieldvalidation)
+
+- **force**(*查询参数*):boolean
+
+ [force](../common-parameter/common-parameters#force)
+
+- **pretty**(*查询参数*):string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+#### 响应
+
+200 ([ClusterPropagationPolicy](../policy-resources/cluster-propagation-policy-v1alpha1#clusterpropagationpolicy)):OK
+
+201 ([ClusterPropagationPolicy](../policy-resources/cluster-propagation-policy-v1alpha1#clusterpropagationpolicy)):Created
+
+### `patch`:更新指定 ClusterPropagationPolicy 状态的部分信息
+
+#### HTTP 请求
+
+PATCH /apis/policy.karmada.io/v1alpha1/clusterpropagationpolicies/{name}/status
+
+#### 参数
+
+- **名称**(*路径参数*):string,必选
+
+ ClusterPropagationPolicy 的名称
+
+- **body**: [Patch](../common-definitions/patch#patch),必选
+
+
+- **dryRun**(*查询参数*):string
+
+ [dryRun](../common-parameter/common-parameters#dryrun)
+
+- **fieldManager**(*查询参数*):string
+
+ [fieldManager](../common-parameter/common-parameters#fieldmanager)
+
+- **fieldValidation**(*查询参数*):string
+
+ [fieldValidation](../common-parameter/common-parameters#fieldvalidation)
+
+- **force**(*查询参数*):boolean
+
+ [force](../common-parameter/common-parameters#force)
+
+- **pretty**(*查询参数*):string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+#### 响应
+
+200 ([ClusterPropagationPolicy](../policy-resources/cluster-propagation-policy-v1alpha1#clusterpropagationpolicy)):OK
+
+201 ([ClusterPropagationPolicy](../policy-resources/cluster-propagation-policy-v1alpha1#clusterpropagationpolicy)):Created
+
+### `delete`:删除一条 ClusterPropagationPolicy
+
+#### HTTP 请求
+
+DELETE /apis/policy.karmada.io/v1alpha1/clusterpropagationpolicies/{name}
+
+#### 参数
+
+- **名称**(*路径参数*):string,必选
+
+ ClusterPropagationPolicy 的名称
+
+- **body**: [DeleteOptions](../common-definitions/delete-options#deleteoptions)
+
+
+- **dryRun**(*查询参数*):string
+
+ [dryRun](../common-parameter/common-parameters#dryrun)
+
+- **gracePeriodSeconds**(*查询参数*):integer
+
+ [gracePeriodSeconds](../common-parameter/common-parameters#graceperiodseconds)
+
+- **pretty**(*查询参数*):string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+- **propagationPolicy**(*查询参数*):string
+
+ [propagationPolicy](../common-parameter/common-parameters#propagationpolicy)
+
+#### 响应
+
+200 ([Status](../common-definitions/status#status)):OK
+
+202 ([Status](../common-definitions/status#status)):Accepted
+
+### `deletecollection`:删除所有 ClusterPropagationPolicy
+
+#### HTTP 请求
+
+DELETE /apis/policy.karmada.io/v1alpha1/clusterpropagationpolicies
+
+#### 参数
+
+- **body**: [DeleteOptions](../common-definitions/delete-options#deleteoptions)
+
+
+- **continue**(*查询参数*):string
+
+ [continue](../common-parameter/common-parameters#continue)
+
+- **dryRun**(*查询参数*):string
+
+ [dryRun](../common-parameter/common-parameters#dryrun)
+
+- **fieldSelector**(*查询参数*):string
+
+ [fieldSelector](../common-parameter/common-parameters#fieldselector)
+
+- **gracePeriodSeconds**(*查询参数*):integer
+
+ [gracePeriodSeconds](../common-parameter/common-parameters#graceperiodseconds)
+
+- **labelSelector**(*查询参数*):string
+
+ [labelSelector](../common-parameter/common-parameters#labelselector)
+
+- **limit**(*查询参数*):integer
+
+ [limit](../common-parameter/common-parameters#limit)
+
+- **pretty**(*查询参数*):string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+- **propagationPolicy**(*查询参数*):string
+
+ [propagationPolicy](../common-parameter/common-parameters#propagationpolicy)
+
+- **resourceVersion**(*查询参数*):string
+
+ [resourceVersion](../common-parameter/common-parameters#resourceversion)
+
+- **resourceVersionMatch**(*查询参数*):string
+
+ [resourceVersionMatch](../common-parameter/common-parameters#resourceversionmatch)
+
+- **sendInitialEvents**(*查询参数*):boolean
+
+ [sendInitialEvents](../common-parameter/common-parameters#sendinitialevents)
+
+- **timeoutSeconds**(*查询参数*):integer
+
+ [timeoutSeconds](../common-parameter/common-parameters#timeoutseconds)
+
+#### 响应
+
+200 ([Status](../common-definitions/status#status)):OK
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/karmada-api/policy-resources/federated-resource-quota-v1alpha1.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/karmada-api/policy-resources/federated-resource-quota-v1alpha1.md
new file mode 100644
index 000000000..7949c8672
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/karmada-api/policy-resources/federated-resource-quota-v1alpha1.md
@@ -0,0 +1,600 @@
+---
+api_metadata:
+ apiVersion: "policy.karmada.io/v1alpha1"
+ import: "github.com/karmada-io/karmada/pkg/apis/policy/v1alpha1"
+ kind: "FederatedResourceQuota"
+content_type: "api_reference"
+description: "FederatedResourceQuota sets aggregate quota restrictions enforced per namespace across all clusters."
+title: "FederatedResourceQuota v1alpha1"
+weight: 1
+auto_generated: true
+---
+
+[//]: # (The file is auto-generated from the Go source code of the component using a generic generator,)
+[//]: # (which is forked from [reference-docs](https://github.com/kubernetes-sigs/reference-docs).)
+[//]: # (To update the reference content, please follow the `reference-api.sh`.)
+
+`apiVersion: policy.karmada.io/v1alpha1`
+
+`import "github.com/karmada-io/karmada/pkg/apis/policy/v1alpha1"`
+
+## FederatedResourceQuota
+
+FederatedResourceQuota 用于设置所有集群每个命名空间内强制执行的聚合配额限制。
+
+
+
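+下面是一个 FederatedResourceQuota 清单示例(仅作说明:其中的名称、命名空间、集群名称和配额数值均为假设值,资源项以 cpu、memory 为例),演示 overall 整体硬限制与 staticAssignments 按集群静态分配的用法:
+
+```yaml
+apiVersion: policy.karmada.io/v1alpha1
+kind: FederatedResourceQuota
+metadata:
+  name: example-quota        # 假设的配额名称
+  namespace: default         # 配额在该命名空间内强制执行
+spec:
+  overall:                   # 必选:各命名资源的整体硬限制
+    cpu: "4"
+    memory: 8Gi
+  staticAssignments:         # 可选:为指定集群分配的硬限制
+    - clusterName: member1   # 假设的成员集群名称
+      hard:
+        cpu: "2"
+        memory: 4Gi
+```
+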
+- **apiVersion**:policy.karmada.io/v1alpha1
+
+- **kind**:FederatedResourceQuota
+
+- **metadata** ([ObjectMeta](../common-definitions/object-meta#objectmeta))
+
+- **spec** ([FederatedResourceQuotaSpec](../policy-resources/federated-resource-quota-v1alpha1#federatedresourcequotaspec)),必选
+
+ Spec 规定预期配额。
+
+- **status** ([FederatedResourceQuotaStatus](../policy-resources/federated-resource-quota-v1alpha1#federatedresourcequotastatus))
+
+ Status 表示实际的强制配额和已使用配额。
+
+## FederatedResourceQuotaSpec
+
+FederatedResourceQuotaSpec 定义强制配额的预期硬限制。
+
+
+
+- **overall** (map[string][Quantity](../common-definitions/quantity#quantity)),必选
+
+ Overall 是每个命名资源的预期硬限制。
+
+- **staticAssignments** ([]StaticClusterAssignment)
+
+ StaticAssignments 是每个集群的预期硬限制。注意:对于不在此列表中的集群,Karmada 会将其 ResourceQuota 留空,这些集群在引用的命名空间中没有配额。
+
+
+
+ *StaticClusterAssignment 表示某个指定集群的预期硬限制。*
+
+ - **staticAssignments.clusterName** (string),必选
+
+ ClusterName 表示将执行限制的集群的名称。
+
+ - **staticAssignments.hard** (map[string][Quantity](../common-definitions/quantity#quantity)),必选
+
+ Hard 表示每个命名资源的预期硬限制。
+
+## FederatedResourceQuotaStatus
+
+FederatedResourceQuotaStatus 表示强制硬限制和所观测到的使用情况。
+
+
+
+- **aggregatedStatus** ([]ClusterQuotaStatus)
+
+ AggregatedStatus 表示每个集群所观测到的配额使用情况。
+
+
+
+ *ClusterQuotaStatus 表示某个指定集群的预期限制和所观测到的使用情况。*
+
+ - **aggregatedStatus.clusterName** (string),必选
+
+ ClusterName 表示将执行限制的集群的名称。
+
+ - **aggregatedStatus.hard** (map[string][Quantity](../common-definitions/quantity#quantity))
+
+ Hard 表示每个命名资源的强制硬限制。更多信息,请浏览 https://kubernetes.io/docs/concepts/policy/resource-quotas/。
+
+ - **aggregatedStatus.used** (map[string][Quantity](../common-definitions/quantity#quantity))
+
+ Used 是当前所观测到的命名空间中资源的总体使用情况。
+
+- **overall** (map[string][Quantity](../common-definitions/quantity#quantity))
+
+ Overall 是每个命名资源的强制硬限制。
+
+- **overallUsed** (map[string][Quantity](../common-definitions/quantity#quantity))
+
+ OverallUsed 是当前所观测到的命名空间中资源的总体使用情况。
+
+## FederatedResourceQuotaList
+
+FederatedResourceQuotaList 罗列 FederatedResourceQuota。
+
+
+
+- **apiVersion**:policy.karmada.io/v1alpha1
+
+- **kind**:FederatedResourceQuotaList
+
+- **metadata** ([ListMeta](../common-definitions/list-meta#listmeta))
+
+- **items** ([][FederatedResourceQuota](../policy-resources/federated-resource-quota-v1alpha1#federatedresourcequota)),必选
+
+## 操作
+
+
+
+### `get`:查询指定的 FederatedResourceQuota
+
+#### HTTP 请求
+
+GET /apis/policy.karmada.io/v1alpha1/namespaces/{namespace}/federatedresourcequotas/{name}
+
+#### 参数
+
+- **名称**(*路径参数*):string,必选
+
+ FederatedResourceQuota 的名称
+
+- **namespace**(*路径参数*):string,必选
+
+ [namespace](../common-parameter/common-parameters#namespace)
+
+- **pretty**(*查询参数*):string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+#### 响应
+
+200 ([FederatedResourceQuota](../policy-resources/federated-resource-quota-v1alpha1#federatedresourcequota)):OK
+
+### `get`:查询指定 FederatedResourceQuota 的状态
+
+#### HTTP 请求
+
+GET /apis/policy.karmada.io/v1alpha1/namespaces/{namespace}/federatedresourcequotas/{name}/status
+
+#### 参数
+
+- **名称**(*路径参数*):string,必选
+
+ FederatedResourceQuota 的名称
+
+- **namespace**(*路径参数*):string,必选
+
+ [namespace](../common-parameter/common-parameters#namespace)
+
+- **pretty**(*查询参数*):string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+#### 响应
+
+200 ([FederatedResourceQuota](../policy-resources/federated-resource-quota-v1alpha1#federatedresourcequota)):OK
+
+### `list`:查询指定命名空间内的所有 FederatedResourceQuota
+
+#### HTTP 请求
+
+GET /apis/policy.karmada.io/v1alpha1/namespaces/{namespace}/federatedresourcequotas
+
+#### 参数
+
+- **namespace**(*路径参数*):string,必选
+
+ [namespace](../common-parameter/common-parameters#namespace)
+
+- **allowWatchBookmarks**(*查询参数*):boolean
+
+ [allowWatchBookmarks](../common-parameter/common-parameters#allowwatchbookmarks)
+
+- **continue**(*查询参数*):string
+
+ [continue](../common-parameter/common-parameters#continue)
+
+- **fieldSelector**(*查询参数*):string
+
+ [fieldSelector](../common-parameter/common-parameters#fieldselector)
+
+- **labelSelector**(*查询参数*):string
+
+ [labelSelector](../common-parameter/common-parameters#labelselector)
+
+- **limit**(*查询参数*):integer
+
+ [limit](../common-parameter/common-parameters#limit)
+
+- **pretty**(*查询参数*):string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+- **resourceVersion**(*查询参数*):string
+
+ [resourceVersion](../common-parameter/common-parameters#resourceversion)
+
+- **resourceVersionMatch**(*查询参数*):string
+
+ [resourceVersionMatch](../common-parameter/common-parameters#resourceversionmatch)
+
+- **sendInitialEvents**(*查询参数*):boolean
+
+ [sendInitialEvents](../common-parameter/common-parameters#sendinitialevents)
+
+- **timeoutSeconds**(*查询参数*):integer
+
+ [timeoutSeconds](../common-parameter/common-parameters#timeoutseconds)
+
+- **watch**(*查询参数*):boolean
+
+ [watch](../common-parameter/common-parameters#watch)
+
+#### 响应
+
+200 ([FederatedResourceQuotaList](../policy-resources/federated-resource-quota-v1alpha1#federatedresourcequotalist)):OK
+
+### `list`:查询所有的 FederatedResourceQuota
+
+#### HTTP 请求
+
+GET /apis/policy.karmada.io/v1alpha1/federatedresourcequotas
+
+#### 参数
+
+- **allowWatchBookmarks**(*查询参数*):boolean
+
+ [allowWatchBookmarks](../common-parameter/common-parameters#allowwatchbookmarks)
+
+- **continue**(*查询参数*):string
+
+ [continue](../common-parameter/common-parameters#continue)
+
+- **fieldSelector**(*查询参数*):string
+
+ [fieldSelector](../common-parameter/common-parameters#fieldselector)
+
+- **labelSelector**(*查询参数*):string
+
+ [labelSelector](../common-parameter/common-parameters#labelselector)
+
+- **limit**(*查询参数*):integer
+
+ [limit](../common-parameter/common-parameters#limit)
+
+- **pretty**(*查询参数*):string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+- **resourceVersion**(*查询参数*):string
+
+ [resourceVersion](../common-parameter/common-parameters#resourceversion)
+
+- **resourceVersionMatch**(*查询参数*):string
+
+ [resourceVersionMatch](../common-parameter/common-parameters#resourceversionmatch)
+
+- **sendInitialEvents**(*查询参数*):boolean
+
+ [sendInitialEvents](../common-parameter/common-parameters#sendinitialevents)
+
+- **timeoutSeconds**(*查询参数*):integer
+
+ [timeoutSeconds](../common-parameter/common-parameters#timeoutseconds)
+
+- **watch**(*查询参数*):boolean
+
+ [watch](../common-parameter/common-parameters#watch)
+
+#### 响应
+
+200 ([FederatedResourceQuotaList](../policy-resources/federated-resource-quota-v1alpha1#federatedresourcequotalist)):OK
+
+### `create`:创建一个 FederatedResourceQuota
+
+#### HTTP 请求
+
+POST /apis/policy.karmada.io/v1alpha1/namespaces/{namespace}/federatedresourcequotas
+
+#### 参数
+
+- **namespace**(*路径参数*):string,必选
+
+ [namespace](../common-parameter/common-parameters#namespace)
+
+- **body**: [FederatedResourceQuota](../policy-resources/federated-resource-quota-v1alpha1#federatedresourcequota),必选
+
+
+- **dryRun**(*查询参数*):string
+
+ [dryRun](../common-parameter/common-parameters#dryrun)
+
+- **fieldManager**(*查询参数*):string
+
+ [fieldManager](../common-parameter/common-parameters#fieldmanager)
+
+- **fieldValidation**(*查询参数*):string
+
+ [fieldValidation](../common-parameter/common-parameters#fieldvalidation)
+
+- **pretty**(*查询参数*):string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+#### 响应
+
+200 ([FederatedResourceQuota](../policy-resources/federated-resource-quota-v1alpha1#federatedresourcequota)):OK
+
+201 ([FederatedResourceQuota](../policy-resources/federated-resource-quota-v1alpha1#federatedresourcequota)):Created
+
+202 ([FederatedResourceQuota](../policy-resources/federated-resource-quota-v1alpha1#federatedresourcequota)):Accepted
+
+### `update`:更新指定的 FederatedResourceQuota
+
+#### HTTP 请求
+
+PUT /apis/policy.karmada.io/v1alpha1/namespaces/{namespace}/federatedresourcequotas/{name}
+
+#### 参数
+
+- **名称**(*路径参数*):string,必选
+
+ FederatedResourceQuota 的名称
+
+- **namespace**(*路径参数*):string,必选
+
+ [namespace](../common-parameter/common-parameters#namespace)
+
+- **body**: [FederatedResourceQuota](../policy-resources/federated-resource-quota-v1alpha1#federatedresourcequota),必选
+
+
+- **dryRun**(*查询参数*):string
+
+ [dryRun](../common-parameter/common-parameters#dryrun)
+
+- **fieldManager**(*查询参数*):string
+
+ [fieldManager](../common-parameter/common-parameters#fieldmanager)
+
+- **fieldValidation**(*查询参数*):string
+
+ [fieldValidation](../common-parameter/common-parameters#fieldvalidation)
+
+- **pretty**(*查询参数*):string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+#### 响应
+
+200 ([FederatedResourceQuota](../policy-resources/federated-resource-quota-v1alpha1#federatedresourcequota)):OK
+
+201 ([FederatedResourceQuota](../policy-resources/federated-resource-quota-v1alpha1#federatedresourcequota)):Created
+
+### `update`:更新指定 FederatedResourceQuota 的状态
+
+#### HTTP 请求
+
+PUT /apis/policy.karmada.io/v1alpha1/namespaces/{namespace}/federatedresourcequotas/{name}/status
+
+#### 参数
+
+- **名称**(*路径参数*):string,必选
+
+ FederatedResourceQuota 的名称
+
+- **namespace**(*路径参数*):string,必选
+
+ [namespace](../common-parameter/common-parameters#namespace)
+
+- **body**: [FederatedResourceQuota](../policy-resources/federated-resource-quota-v1alpha1#federatedresourcequota),必选
+
+
+- **dryRun**(*查询参数*):string
+
+ [dryRun](../common-parameter/common-parameters#dryrun)
+
+- **fieldManager**(*查询参数*):string
+
+ [fieldManager](../common-parameter/common-parameters#fieldmanager)
+
+- **fieldValidation**(*查询参数*):string
+
+ [fieldValidation](../common-parameter/common-parameters#fieldvalidation)
+
+- **pretty**(*查询参数*):string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+#### 响应
+
+200 ([FederatedResourceQuota](../policy-resources/federated-resource-quota-v1alpha1#federatedresourcequota)):OK
+
+201 ([FederatedResourceQuota](../policy-resources/federated-resource-quota-v1alpha1#federatedresourcequota)):Created
+
+### `patch`:更新指定 FederatedResourceQuota 的部分信息
+
+#### HTTP 请求
+
+PATCH /apis/policy.karmada.io/v1alpha1/namespaces/{namespace}/federatedresourcequotas/{name}
+
+#### 参数
+
+- **名称**(*路径参数*):string,必选
+
+ FederatedResourceQuota 的名称
+
+- **namespace**(*路径参数*):string,必选
+
+ [namespace](../common-parameter/common-parameters#namespace)
+
+- **body**: [Patch](../common-definitions/patch#patch),必选
+
+
+- **dryRun**(*查询参数*):string
+
+ [dryRun](../common-parameter/common-parameters#dryrun)
+
+- **fieldManager**(*查询参数*):string
+
+ [fieldManager](../common-parameter/common-parameters#fieldmanager)
+
+- **fieldValidation**(*查询参数*):string
+
+ [fieldValidation](../common-parameter/common-parameters#fieldvalidation)
+
+- **force**(*查询参数*):boolean
+
+ [force](../common-parameter/common-parameters#force)
+
+- **pretty**(*查询参数*):string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+#### 响应
+
+200 ([FederatedResourceQuota](../policy-resources/federated-resource-quota-v1alpha1#federatedresourcequota)):OK
+
+201 ([FederatedResourceQuota](../policy-resources/federated-resource-quota-v1alpha1#federatedresourcequota)):Created
+
+### `patch`:更新指定 FederatedResourceQuota 状态的部分信息
+
+#### HTTP 请求
+
+PATCH /apis/policy.karmada.io/v1alpha1/namespaces/{namespace}/federatedresourcequotas/{name}/status
+
+#### 参数
+
+- **名称**(*路径参数*):string,必选
+
+ FederatedResourceQuota 的名称
+
+- **namespace**(*路径参数*):string,必选
+
+ [namespace](../common-parameter/common-parameters#namespace)
+
+- **body**: [Patch](../common-definitions/patch#patch),必选
+
+
+- **dryRun**(*查询参数*):string
+
+ [dryRun](../common-parameter/common-parameters#dryrun)
+
+- **fieldManager**(*查询参数*):string
+
+ [fieldManager](../common-parameter/common-parameters#fieldmanager)
+
+- **fieldValidation**(*查询参数*):string
+
+ [fieldValidation](../common-parameter/common-parameters#fieldvalidation)
+
+- **force**(*查询参数*):boolean
+
+ [force](../common-parameter/common-parameters#force)
+
+- **pretty**(*查询参数*):string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+#### 响应
+
+200 ([FederatedResourceQuota](../policy-resources/federated-resource-quota-v1alpha1#federatedresourcequota)):OK
+
+201 ([FederatedResourceQuota](../policy-resources/federated-resource-quota-v1alpha1#federatedresourcequota)):Created
+
+### `delete`:删除一个 FederatedResourceQuota
+
+#### HTTP 请求
+
+DELETE /apis/policy.karmada.io/v1alpha1/namespaces/{namespace}/federatedresourcequotas/{name}
+
+#### 参数
+
+- **名称**(*路径参数*):string,必选
+
+  FederatedResourceQuota 的名称
+
+- **namespace**(*路径参数*):string,必选
+
+ [namespace](../common-parameter/common-parameters#namespace)
+
+- **body**: [DeleteOptions](../common-definitions/delete-options#deleteoptions)
+
+
+- **dryRun**(*查询参数*):string
+
+ [dryRun](../common-parameter/common-parameters#dryrun)
+
+- **gracePeriodSeconds**(*查询参数*):integer
+
+ [gracePeriodSeconds](../common-parameter/common-parameters#graceperiodseconds)
+
+- **pretty**(*查询参数*):string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+- **propagationPolicy**(*查询参数*):string
+
+ [propagationPolicy](../common-parameter/common-parameters#propagationpolicy)
+
+#### 响应
+
+200 ([Status](../common-definitions/status#status)):OK
+
+202 ([Status](../common-definitions/status#status)):Accepted
+
+### `deletecollection`:删除所有 FederatedResourceQuota
+
+#### HTTP 请求
+
+DELETE /apis/policy.karmada.io/v1alpha1/namespaces/{namespace}/federatedresourcequotas
+
+#### 参数
+
+- **namespace**(*路径参数*):string,必选
+
+ [namespace](../common-parameter/common-parameters#namespace)
+
+- **body**: [DeleteOptions](../common-definitions/delete-options#deleteoptions)
+
+
+- **continue**(*查询参数*):string
+
+ [continue](../common-parameter/common-parameters#continue)
+
+- **dryRun**(*查询参数*):string
+
+ [dryRun](../common-parameter/common-parameters#dryrun)
+
+- **fieldSelector**(*查询参数*):string
+
+ [fieldSelector](../common-parameter/common-parameters#fieldselector)
+
+- **gracePeriodSeconds**(*查询参数*):integer
+
+ [gracePeriodSeconds](../common-parameter/common-parameters#graceperiodseconds)
+
+- **labelSelector**(*查询参数*):string
+
+ [labelSelector](../common-parameter/common-parameters#labelselector)
+
+- **limit**(*查询参数*):integer
+
+ [limit](../common-parameter/common-parameters#limit)
+
+- **pretty**(*查询参数*):string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+- **propagationPolicy**(*查询参数*):string
+
+ [propagationPolicy](../common-parameter/common-parameters#propagationpolicy)
+
+- **resourceVersion**(*查询参数*):string
+
+ [resourceVersion](../common-parameter/common-parameters#resourceversion)
+
+- **resourceVersionMatch**(*查询参数*):string
+
+ [resourceVersionMatch](../common-parameter/common-parameters#resourceversionmatch)
+
+- **sendInitialEvents**(*查询参数*):boolean
+
+ [sendInitialEvents](../common-parameter/common-parameters#sendinitialevents)
+
+- **timeoutSeconds**(*查询参数*):integer
+
+ [timeoutSeconds](../common-parameter/common-parameters#timeoutseconds)
+
+#### 响应
+
+200 ([Status](../common-definitions/status#status)):OK
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/karmada-api/policy-resources/override-policy-v1alpha1.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/karmada-api/policy-resources/override-policy-v1alpha1.md
new file mode 100644
index 000000000..f38c13d8e
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/karmada-api/policy-resources/override-policy-v1alpha1.md
@@ -0,0 +1,946 @@
+---
+api_metadata:
+ apiVersion: "policy.karmada.io/v1alpha1"
+ import: "github.com/karmada-io/karmada/pkg/apis/policy/v1alpha1"
+ kind: "OverridePolicy"
+content_type: "api_reference"
+description: "OverridePolicy represents the policy that overrides a group of resources to one or more clusters."
+title: "OverridePolicy v1alpha1"
+weight: 2
+auto_generated: true
+---
+
+[//]: # (The file is auto-generated from the Go source code of the component using a generic generator,)
+[//]: # (which is forked from [reference-docs](https://github.com/kubernetes-sigs/reference-docs).)
+[//]: # (To update the reference content, please follow the `reference-api.sh`.)
+
+`apiVersion: policy.karmada.io/v1alpha1`
+
+`import "github.com/karmada-io/karmada/pkg/apis/policy/v1alpha1"`
+
+## OverridePolicy
+
+OverridePolicy 表示将一组资源覆盖到一个或多个集群的策略。
+
+
+
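+下面是一个 OverridePolicy 清单示例(仅作说明:其中的策略名称、资源名称和集群名称均为假设值),演示使用 plaintext 覆盖规则,按 path/operator/value 修改分发到指定集群的 Deployment 字段:
+
+```yaml
+apiVersion: policy.karmada.io/v1alpha1
+kind: OverridePolicy
+metadata:
+  name: example-override     # 假设的策略名称
+  namespace: default
+spec:
+  resourceSelectors:         # 限定此覆盖策略适用的资源
+    - apiVersion: apps/v1
+      kind: Deployment
+      name: nginx            # 假设的资源名称
+  overrideRules:
+    - targetCluster:
+        clusterNames:
+          - member1          # 假设的成员集群名称
+      overriders:
+        plaintext:           # 按 path/operator/value 覆盖目标字段
+          - path: /spec/replicas
+            operator: replace
+            value: 2
+```
+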
+- **apiVersion**:policy.karmada.io/v1alpha1
+
+- **kind**:OverridePolicy
+
+- **metadata** ([ObjectMeta](../common-definitions/object-meta#objectmeta))
+
+- **spec** (OverrideSpec),必选
+
+ Spec 表示 OverridePolicy 的规范。
+
+
+
+ *OverrideSpec 定义了 OverridePolicy 的规范。*
+
+ - **spec.overrideRules** ([]RuleWithCluster)
+
+ OverrideRules 定义一组针对目标群集的覆盖规则。
+
+
+
+ *RuleWithCluster 定义集群的覆盖规则。*
+
+ - **spec.overrideRules.overriders** (Overriders),必选
+
+ Overriders 表示将应用于资源的覆盖规则。
+
+
+
+ *Overriders 提供各种表示覆盖规则的替代方案。
+
+ 如果多个替代方案并存,将按以下顺序应用: ImageOverrider > CommandOverrider > ArgsOverrider > LabelsOverrider > AnnotationsOverrider > Plaintext*
+
+ - **spec.overrideRules.overriders.annotationsOverrider** ([]LabelAnnotationOverrider)
+
+ AnnotationsOverrider 表示用于处理工作负载注解的专属规则。
+
+
+
+ *LabelAnnotationOverrider 表示用于处理工作负载标签/注解的专属规则。*
+
+ - **spec.overrideRules.overriders.annotationsOverrider.operator** (string),必选
+
+ Operator 表示将应用于工作负载的运算符。
+
+ - **spec.overrideRules.overriders.annotationsOverrider.value** (map[string]string)
+
+      Value 表示工作负载的注解/标签的值。当运算符为“add(添加)”时,Value 中的项会附加在 annotation/label 之后。当运算符为“remove(移除)”时,Value 中与 annotation/label 匹配的项将被删除。当运算符为“replace(替换)”时,Value 中与 annotation/label 匹配的项将被替换。
+
+ - **spec.overrideRules.overriders.argsOverrider** ([]CommandArgsOverrider)
+
+ ArgsOverrider 表示专门处理容器 args 的规则。
+
+
+
+ *CommandArgsOverrider 表示用于处理 command/args 覆盖的专属规则。*
+
+ - **spec.overrideRules.overriders.argsOverrider.containerName** (string),必选
+
+ 容器的名称。
+
+ - **spec.overrideRules.overriders.argsOverrider.operator** (string),必选
+
+ Operator 表示将应用于 command/args 的运算符。
+
+ - **spec.overrideRules.overriders.argsOverrider.value** ([]string)
+
+      Value 表示 command/args 的值。当运算符为“add(添加)”时,Value 中的项会附加在 command/args 之后。当运算符为“remove(移除)”时,Value 中与 command/args 匹配的项将被删除。如果 Value 留空,则 command/args 将保持不变。
+
+ - **spec.overrideRules.overriders.commandOverrider** ([]CommandArgsOverrider)
+
+ CommandOverrider 表示用于处理容器命令的专属规则。
+
+
+
+ *CommandArgsOverrider 表示用于处理 command/args 覆盖的专属规则。*
+
+ - **spec.overrideRules.overriders.commandOverrider.containerName** (string),必选
+
+ 容器的名称。
+
+ - **spec.overrideRules.overriders.commandOverrider.operator** (string),必选
+
+ Operator 表示将应用于 command/args 的运算符。
+
+ - **spec.overrideRules.overriders.commandOverrider.value** ([]string)
+
+      Value 表示 command/args 的值。当运算符为“add(添加)”时,Value 中的项会附加在 command/args 之后。当运算符为“remove(移除)”时,Value 中与 command/args 匹配的项将被删除。如果 Value 留空,则 command/args 将保持不变。
+
+ - **spec.overrideRules.overriders.imageOverrider** ([]ImageOverrider)
+
+ ImageOverrider 表示用于处理镜像覆盖的专属规则。
+
+
+
+ *ImageOverrider 表示用于处理镜像覆盖的专属规则。*
+
+ - **spec.overrideRules.overriders.imageOverrider.component** (string),必选
+
+      Component 是镜像名称的一部分。镜像名称通常表示为 [registry/]repository[:tag]。registry 可能是 registry.k8s.io、fictional.registry.example:10443;repository 可能是 kube-apiserver、fictional/nginx;tag 可能是 latest、v1.19.1、@sha256:dbcc1c35ac38df41fd2f5e4130b32ffdb93ebae8b3dbe638c23575912276fc9c。镜像覆盖的配置示例见本节字段列表之后。
+
+ - **spec.overrideRules.overriders.imageOverrider.operator** (string),必选
+
+ Operator 表示将应用于镜像的运算符。
+
+ - **spec.overrideRules.overriders.imageOverrider.predicate** (ImagePredicate)
+
+ Predicate 在应用规则之前,会对镜像进行过滤。
+
+ 默认值为 nil。如果设置为默认值,并且资源类型为 Pod、ReplicaSet、Deployment、StatefulSet、DaemonSet 或 Job,系统将按照以下规则自动检测镜像字段:
+ - Pod: /spec/containers/<N>/image
+ - ReplicaSet: /spec/template/spec/containers/<N>/image
+ - Deployment: /spec/template/spec/containers/<N>/image
+ - DaemonSet: /spec/template/spec/containers/<N>/image
+ - StatefulSet: /spec/template/spec/containers/<N>/image
+ - Job: /spec/template/spec/containers/<N>/image
+
+ 此外,如果资源对象有多个容器,所有镜像都将被处理。
+
+ 如果值不是 nil,仅处理与过滤条件匹配的镜像。
+
+
+
+ *ImagePredicate 定义镜像的过滤条件。*
+
+ - **spec.overrideRules.overriders.imageOverrider.predicate.path** (string),必选
+
+ Path 表示目标字段的路径。
+
+ - **spec.overrideRules.overriders.imageOverrider.value** (string)
+
+ Value 表示镜像的值。当运算符为“add(添加)”或“replace(替换)”时,不得为空。当运算符为“remove(删除)”时,默认为空且可忽略。
+
+ - **spec.overrideRules.overriders.labelsOverrider** ([]LabelAnnotationOverrider)
+
+      LabelsOverrider 表示用于处理工作负载标签的专属规则。
+
+
+
+      *LabelAnnotationOverrider 表示用于处理工作负载标签/注解的专属规则。*
+
+ - **spec.overrideRules.overriders.labelsOverrider.operator** (string),必选
+
+      Operator 表示将应用于工作负载的运算符。
+
+ - **spec.overrideRules.overriders.labelsOverrider.value** (map[string]string)
+
+      Value 表示工作负载的注解/标签的值。当运算符为“add(添加)”时,Value 中的项会附加在 annotation/label 之后。当运算符为“remove(移除)”时,Value 中与 annotation/label 匹配的项将被删除。当运算符为“replace(替换)”时,Value 中与 annotation/label 匹配的项将被替换。
+
+ - **spec.overrideRules.overriders.plaintext** ([]PlaintextOverrider)
+
+ Plaintext 表示用明文定义的覆盖规则。
+
+
+
+ *PlaintextOverrider 根据路径、运算符和值覆盖目标字段。*
+
+ - **spec.overrideRules.overriders.plaintext.operator** (string),必选
+
+ Operator 表示对目标字段的操作。可用的运算符有:添加(add)、替换(replace)和删除(remove)。
+
+ - **spec.overrideRules.overriders.plaintext.path** (string),必选
+
+ Path 表示目标字段的路径。
+
+ - **spec.overrideRules.overriders.plaintext.value** (JSON)
+
+ Value 表示应用于目标字段的值。当操作符为“remove(删除)”时,必须为空。
+
+
+
+      *JSON 表示任何有效的 JSON 值。支持以下类型:bool、int64、float64、string、[]interface{}、map[string]interface{} 和 nil。*
+
+ - **spec.overrideRules.targetCluster** (ClusterAffinity)
+
+ TargetCluster 定义对此覆盖策略的限制,此覆盖策略仅适用于分发到匹配集群的资源。nil 表示匹配所有集群。
+
+
+
+ *ClusterAffinity 表示用于选择集群的过滤条件。*
+
+ - **spec.overrideRules.targetCluster.clusterNames** ([]string)
+
+ ClusterNames 罗列待选择的集群。
+
+ - **spec.overrideRules.targetCluster.exclude** ([]string)
+
+ ExcludedClusters 罗列待忽略的集群。
+
+ - **spec.overrideRules.targetCluster.fieldSelector** (FieldSelector)
+
+ FieldSelector 是一个按字段选择成员集群的过滤器。匹配表达式的键(字段)为 provider、region 或 zone,匹配表达式的运算符为 In 或 NotIn。如果值不为 nil,也未留空,仅选择与此过滤器匹配的集群。
+
+
+
+ *FieldSelector 是一个字段过滤器。*
+
+ - **spec.overrideRules.targetCluster.fieldSelector.matchExpressions** ([][NodeSelectorRequirement](../common-definitions/node-selector-requirement#nodeselectorrequirement))
+
+ 字段选择器要求列表。
+
+ - **spec.overrideRules.targetCluster.labelSelector** ([LabelSelector](../common-definitions/label-selector#labelselector))
+
+ LabelSelector 是一个按标签选择成员集群的过滤器。如果值不为 nil,也未留空,仅选择与此过滤器匹配的集群。
+
+ - **spec.overriders** (Overriders)
+
+ Overriders 表示将应用于资源的覆盖规则。
+
+ Deprecated:此字段已在 v1.0 中被弃用,请改用 OverrideRules。
+
+
+
+ *Overriders 提供了各种表示覆盖规则的替代方案。
+
+ 如果多个替代方案并存,将按以下顺序应用: ImageOverrider > CommandOverrider > ArgsOverrider > LabelsOverrider > AnnotationsOverrider > Plaintext*
+
+ - **spec.overriders.annotationsOverrider** ([]LabelAnnotationOverrider)
+
+ AnnotationsOverrider 表示用于处理工作负载注解的专属规则。
+
+
+
+ *LabelAnnotationOverrider 表示用于处理工作负载标签/注解的专属规则。*
+
+ - **spec.overriders.annotationsOverrider.operator** (string),必选
+
+ Operator 表示将应用于工作负载的运算符。
+
+ - **spec.overriders.annotationsOverrider.value** (map[string]string)
+
+      Value 表示工作负载的注解/标签的值。当运算符为“add(添加)”时,Value 中的项会附加在 annotation/label 之后。当运算符为“remove(移除)”时,Value 中与 annotation/label 匹配的项将被删除。当运算符为“replace(替换)”时,Value 中与 annotation/label 匹配的项将被替换。
+
+ - **spec.overriders.argsOverrider** ([]CommandArgsOverrider)
+
+ ArgsOverrider 表示专门处理容器 args 的规则。
+
+
+
+ *CommandArgsOverrider 表示用于处理 command/args 覆盖的专属规则。*
+
+ - **spec.overriders.argsOverrider.containerName** (string),必选
+
+ 容器的名称。
+
+ - **spec.overriders.argsOverrider.operator** (string),必选
+
+ Operator 表示将应用于 command/args 的运算符。
+
+ - **spec.overriders.argsOverrider.value** ([]string)
+
+      Value 表示 command/args 的值。当运算符为“add(添加)”时,Value 中的项会附加在 command/args 之后。当运算符为“remove(移除)”时,Value 中与 command/args 匹配的项将被删除。如果 Value 为空,则 command/args 将保持不变。
+
+ - **spec.overriders.commandOverrider** ([]CommandArgsOverrider)
+
+ CommandOverrider 表示用于处理容器命令的专属规则。
+
+
+
+ *CommandArgsOverrider 表示用于处理 command/args 覆盖的专属规则。*
+
+ - **spec.overriders.commandOverrider.containerName** (string),必选
+
+ 容器的名称。
+
+ - **spec.overriders.commandOverrider.operator** (string),必选
+
+ Operator 表示将应用于 command/args 的运算符。
+
+ - **spec.overriders.commandOverrider.value** ([]string)
+
+      Value 表示 command/args 的值。当运算符为“add(添加)”时,Value 中的项会附加在 command/args 之后。当运算符为“remove(移除)”时,Value 中与 command/args 匹配的项将被删除。如果 Value 为空,则 command/args 将保持不变。
+
+ - **spec.overriders.imageOverrider** ([]ImageOverrider)
+
+ ImageOverrider 表示用于处理镜像覆盖的专属规则。
+
+
+
+ *ImageOverrider 表示用于处理镜像覆盖的专属规则。*
+
+ - **spec.overriders.imageOverrider.component** (string),必选
+
+      Component 是镜像名称的一部分。镜像名称通常表示为 [registry/]repository[:tag]。registry 可能是 registry.k8s.io、fictional.registry.example:10443;repository 可能是 kube-apiserver、fictional/nginx;tag 可能是 latest、v1.19.1、@sha256:dbcc1c35ac38df41fd2f5e4130b32ffdb93ebae8b3dbe638c23575912276fc9c。
+
+ - **spec.overriders.imageOverrider.operator** (string),必选
+
+ Operator 表示将应用于镜像的运算符。
+
+ - **spec.overriders.imageOverrider.predicate** (ImagePredicate)
+
+ Predicate 在应用规则之前,会对镜像进行过滤。
+
+ 默认值为 nil。如果设置为默认值,并且资源类型为 Pod、ReplicaSet、Deployment、StatefulSet、DaemonSet 或 Job,系统将按照以下规则自动检测镜像字段:
+ - Pod: /spec/containers/<N>/image
+ - ReplicaSet: /spec/template/spec/containers/<N>/image
+ - Deployment: /spec/template/spec/containers/<N>/image
+ - DaemonSet: /spec/template/spec/containers/<N>/image
+ - StatefulSet: /spec/template/spec/containers/<N>/image
+ - Job: /spec/template/spec/containers/<N>/image
+
+ 此外,如果资源对象有多个容器,所有镜像都将被处理。
+
+ 如果值不是 nil,仅处理与过滤条件匹配的镜像。
+
+
+
+ *ImagePredicate 定义镜像的过滤条件。*
+
+ - **spec.overriders.imageOverrider.predicate.path** (string),必选
+
+ Path 表示目标字段的路径。
+
+ - **spec.overriders.imageOverrider.value** (string)
+
+ Value 表示镜像的值。当运算符为“add(添加)”或“replace(替换)”时,不得为空。当运算符为“remove(删除)”时,默认为空且可忽略。
+
+ - **spec.overriders.labelsOverrider** ([]LabelAnnotationOverrider)
+
+      LabelsOverrider 表示用于处理工作负载标签的专属规则。
+
+
+
+ *LabelAnnotationOverrider 表示用于处理工作负载标签/注解的专属规则。*
+
+ - **spec.overriders.labelsOverrider.operator** (string),必选
+
+ Operator 表示将应用于工作负载的运算符。
+
+ - **spec.overriders.labelsOverrider.value** (map[string]string)
+
+      Value 表示工作负载的注解/标签的值。当运算符为“add(添加)”时,Value 中的项会附加在 annotation/label 之后。当运算符为“remove(移除)”时,Value 中与 annotation/label 匹配的项将被删除。当运算符为“replace(替换)”时,Value 中与 annotation/label 匹配的项将被替换。
+
+ - **spec.overriders.plaintext** ([]PlaintextOverrider)
+
+ Plaintext 表示用明文定义的覆盖规则。
+
+
+
+ *PlaintextOverrider 根据路径、运算符和值覆盖目标字段。*
+
+ - **spec.overriders.plaintext.operator** (string),必选
+
+ Operator 表示对目标字段的操作。可用的运算符有:添加(add)、替换(replace)和删除(remove)。
+
+ - **spec.overriders.plaintext.path** (string),必选
+
+ Path 表示目标字段的路径。
+
+ - **spec.overriders.plaintext.value** (JSON)
+
+ Value 表示应用于目标字段的值。当操作符为“remove(删除)”时,必须为空。
+
+
+
+      *JSON 表示任何有效的 JSON 值。支持以下类型:bool、int64、float64、string、[]interface{}、map[string]interface{} 和 nil。*
+
+ - **spec.resourceSelectors** ([]ResourceSelector)
+
+ ResourceSelectors 限制此覆盖策略适用的资源类型。nil 表示此覆盖策略适用于所有资源。
+
+
+
+ *ResourceSelector 用于选择资源。*
+
+ - **spec.resourceSelectors.apiVersion** (string),必选
+
+ APIVersion 表示目标资源的 API 版本。
+
+ - **spec.resourceSelectors.kind** (string),必选
+
+ Kind 表示目标资源的类别。
+
+ - **spec.resourceSelectors.labelSelector** ([LabelSelector](../common-definitions/label-selector#labelselector))
+
+ 查询一组资源的标签。如果 name 不为空,labelSelector 会被忽略。
+
+ - **spec.resourceSelectors.name** (string)
+
+ 目标资源的名称。默认值为空,表示选择所有资源。
+
+ - **spec.resourceSelectors.namespace** (string)
+
+ 目标资源的命名空间。默认值为空,表示从父对象作用域继承资源。
+
+ - **spec.targetCluster** (ClusterAffinity)
+
+ TargetCluster 定义对此覆盖策略的限制,此覆盖策略仅适用于分发到匹配集群的资源。nil 表示匹配所有集群。
+
+    Deprecated:此字段已在 v1.0 中被弃用,请改用 OverrideRules。
+
+
+
+    *ClusterAffinity 表示用于选择集群的过滤条件。*
+
+ - **spec.targetCluster.clusterNames** ([]string)
+
+ ClusterNames 罗列待选择的集群。
+
+ - **spec.targetCluster.exclude** ([]string)
+
+ ExcludedClusters 罗列待忽略的集群。
+
+ - **spec.targetCluster.fieldSelector** (FieldSelector)
+
+ FieldSelector 是一个按字段选择成员集群的过滤器。匹配表达式的键(字段)为 provider、region 或 zone,匹配表达式的运算符为 In 或 NotIn。如果值不为 nil,也未留空,仅选择与此过滤器匹配的集群。
+
+
+
+ *FieldSelector 是一个字段过滤器。*
+
+ - **spec.targetCluster.fieldSelector.matchExpressions** ([][NodeSelectorRequirement](../common-definitions/node-selector-requirement#nodeselectorrequirement))
+
+ 字段选择器要求列表。
+
+ - **spec.targetCluster.labelSelector** ([LabelSelector](../common-definitions/label-selector#labelselector))
+
+ LabelSelector 是一个按标签选择成员集群的过滤器。如果值不为 nil,也未留空,仅选择与此过滤器匹配的集群。
+
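+下面是一个使用 imageOverrider 的 OverridePolicy 清单示例(仅作说明:其中的策略名称、资源名称、集群名称和镜像仓库地址均为假设值;component 取 Registry、operator 取 replace 为常见用法,具体可取值以 API 定义为准),演示将分发到指定集群的 Deployment 镜像的 registry 部分整体替换:
+
+```yaml
+apiVersion: policy.karmada.io/v1alpha1
+kind: OverridePolicy
+metadata:
+  name: example-image-override   # 假设的策略名称
+  namespace: default
+spec:
+  resourceSelectors:
+    - apiVersion: apps/v1
+      kind: Deployment
+      name: nginx                # 假设的资源名称
+  overrideRules:
+    - targetCluster:
+        clusterNames:
+          - member1              # 假设的成员集群名称
+      overriders:
+        imageOverrider:
+          - component: Registry          # 覆盖镜像名称中的 registry 部分
+            operator: replace
+            value: registry.example.com  # 假设的私有镜像仓库地址
+```
+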
+## OverridePolicyList
+
+OverridePolicyList 是一组 OverridePolicy 的集合。
+
+
+
+- **apiVersion**:policy.karmada.io/v1alpha1
+
+- **kind**:OverridePolicyList
+
+- **metadata** ([ListMeta](../common-definitions/list-meta#listmeta))
+
+- **items** ([][OverridePolicy](../policy-resources/override-policy-v1alpha1#overridepolicy)),必选
+
+ Items 罗列 OverridePolicy。
+
+## 操作
+
+
+
+### `get`:查询指定的 OverridePolicy
+
+#### HTTP 请求
+
+GET /apis/policy.karmada.io/v1alpha1/namespaces/{namespace}/overridepolicies/{name}
+
+#### 参数
+
+- **名称**(*路径参数*):string,必选
+
+ OverridePolicy 的名称
+
+- **namespace**(*路径参数*):string,必选
+
+ [namespace](../common-parameter/common-parameters#namespace)
+
+- **pretty**(*查询参数*):string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+#### 响应
+
+200 ([OverridePolicy](../policy-resources/override-policy-v1alpha1#overridepolicy)):OK
+
+### `get`:查询指定 OverridePolicy 的状态
+
+#### HTTP 请求
+
+GET /apis/policy.karmada.io/v1alpha1/namespaces/{namespace}/overridepolicies/{name}/status
+
+#### 参数
+
+- **名称**(*路径参数*):string,必选
+
+ OverridePolicy 的名称
+
+- **namespace**(*路径参数*):string,必选
+
+ [namespace](../common-parameter/common-parameters#namespace)
+
+- **pretty**(*查询参数*):string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+#### 响应
+
+200 ([OverridePolicy](../policy-resources/override-policy-v1alpha1#overridepolicy)):OK
+
+### `list`:查询某个命名空间内的所有 OverridePolicy
+
+#### HTTP 请求
+
+GET /apis/policy.karmada.io/v1alpha1/namespaces/{namespace}/overridepolicies
+
+#### 参数
+
+- **namespace**(*路径参数*):string,必选
+
+ [namespace](../common-parameter/common-parameters#namespace)
+
+- **allowWatchBookmarks**(*查询参数*):boolean
+
+ [allowWatchBookmarks](../common-parameter/common-parameters#allowwatchbookmarks)
+
+- **continue**(*查询参数*):string
+
+ [continue](../common-parameter/common-parameters#continue)
+
+- **fieldSelector**(*查询参数*):string
+
+ [fieldSelector](../common-parameter/common-parameters#fieldselector)
+
+- **labelSelector**(*查询参数*):string
+
+ [labelSelector](../common-parameter/common-parameters#labelselector)
+
+- **limit**(*查询参数*):integer
+
+ [limit](../common-parameter/common-parameters#limit)
+
+- **pretty**(*查询参数*):string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+- **resourceVersion**(*查询参数*):string
+
+ [resourceVersion](../common-parameter/common-parameters#resourceversion)
+
+- **resourceVersionMatch**(*查询参数*):string
+
+ [resourceVersionMatch](../common-parameter/common-parameters#resourceversionmatch)
+
+- **sendInitialEvents**(*查询参数*):boolean
+
+ [sendInitialEvents](../common-parameter/common-parameters#sendinitialevents)
+
+- **timeoutSeconds**(*查询参数*):integer
+
+ [timeoutSeconds](../common-parameter/common-parameters#timeoutseconds)
+
+- **watch**(*查询参数*):boolean
+
+ [watch](../common-parameter/common-parameters#watch)
+
+#### 响应
+
+200 ([OverridePolicyList](../policy-resources/override-policy-v1alpha1#overridepolicylist)):OK
+
+### `list`:查询所有 OverridePolicy
+
+#### HTTP 请求
+
+GET /apis/policy.karmada.io/v1alpha1/overridepolicies
+
+#### 参数
+
+- **allowWatchBookmarks**(*查询参数*):boolean
+
+ [allowWatchBookmarks](../common-parameter/common-parameters#allowwatchbookmarks)
+
+- **continue**(*查询参数*):string
+
+ [continue](../common-parameter/common-parameters#continue)
+
+- **fieldSelector**(*查询参数*):string
+
+ [fieldSelector](../common-parameter/common-parameters#fieldselector)
+
+- **labelSelector**(*查询参数*):string
+
+ [labelSelector](../common-parameter/common-parameters#labelselector)
+
+- **limit**(*查询参数*):integer
+
+ [limit](../common-parameter/common-parameters#limit)
+
+- **pretty**(*查询参数*):string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+- **resourceVersion**(*查询参数*):string
+
+ [resourceVersion](../common-parameter/common-parameters#resourceversion)
+
+- **resourceVersionMatch**(*查询参数*):string
+
+ [resourceVersionMatch](../common-parameter/common-parameters#resourceversionmatch)
+
+- **sendInitialEvents**(*查询参数*):boolean
+
+ [sendInitialEvents](../common-parameter/common-parameters#sendinitialevents)
+
+- **timeoutSeconds**(*查询参数*):integer
+
+ [timeoutSeconds](../common-parameter/common-parameters#timeoutseconds)
+
+- **watch**(*查询参数*):boolean
+
+ [watch](../common-parameter/common-parameters#watch)
+
+#### 响应
+
+200 ([OverridePolicyList](../policy-resources/override-policy-v1alpha1#overridepolicylist)):OK
+
+### `create`:创建一条 OverridePolicy
+
+#### HTTP 请求
+
+POST /apis/policy.karmada.io/v1alpha1/namespaces/{namespace}/overridepolicies
+
+#### 参数
+
+- **namespace**(*路径参数*):string,必选
+
+ [namespace](../common-parameter/common-parameters#namespace)
+
+- **body**: [OverridePolicy](../policy-resources/override-policy-v1alpha1#overridepolicy),必选
+
+
+- **dryRun**(*查询参数*):string
+
+ [dryRun](../common-parameter/common-parameters#dryrun)
+
+- **fieldManager**(*查询参数*):string
+
+ [fieldManager](../common-parameter/common-parameters#fieldmanager)
+
+- **fieldValidation**(*查询参数*):string
+
+ [fieldValidation](../common-parameter/common-parameters#fieldvalidation)
+
+- **pretty**(*查询参数*):string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+#### 响应
+
+200 ([OverridePolicy](../policy-resources/override-policy-v1alpha1#overridepolicy)):OK
+
+201 ([OverridePolicy](../policy-resources/override-policy-v1alpha1#overridepolicy)):Created
+
+202 ([OverridePolicy](../policy-resources/override-policy-v1alpha1#overridepolicy)):Accepted
+
+### `update`:更新指定的 OverridePolicy
+
+#### HTTP 请求
+
+PUT /apis/policy.karmada.io/v1alpha1/namespaces/{namespace}/overridepolicies/{name}
+
+#### 参数
+
+- **名称**(*路径参数*):string,必选
+
+ OverridePolicy 的名称
+
+- **namespace**(*路径参数*):string,必选
+
+ [namespace](../common-parameter/common-parameters#namespace)
+
+- **body**: [OverridePolicy](../policy-resources/override-policy-v1alpha1#overridepolicy),必选
+
+
+- **dryRun**(*查询参数*):string
+
+ [dryRun](../common-parameter/common-parameters#dryrun)
+
+- **fieldManager**(*查询参数*):string
+
+ [fieldManager](../common-parameter/common-parameters#fieldmanager)
+
+- **fieldValidation**(*查询参数*):string
+
+ [fieldValidation](../common-parameter/common-parameters#fieldvalidation)
+
+- **pretty**(*查询参数*):string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+#### 响应
+
+200 ([OverridePolicy](../policy-resources/override-policy-v1alpha1#overridepolicy)):OK
+
+201 ([OverridePolicy](../policy-resources/override-policy-v1alpha1#overridepolicy)):Created
+
+### `update`:更新指定 OverridePolicy 的状态
+
+#### HTTP 请求
+
+PUT /apis/policy.karmada.io/v1alpha1/namespaces/{namespace}/overridepolicies/{name}/status
+
+#### 参数
+
+- **name**(*路径参数*):string,必选
+
+ OverridePolicy 的名称
+
+- **namespace**(*路径参数*):string,必选
+
+ [namespace](../common-parameter/common-parameters#namespace)
+
+- **body**: [OverridePolicy](../policy-resources/override-policy-v1alpha1#overridepolicy),必选
+
+
+- **dryRun**(*查询参数*):string
+
+ [dryRun](../common-parameter/common-parameters#dryrun)
+
+- **fieldManager**(*查询参数*):string
+
+ [fieldManager](../common-parameter/common-parameters#fieldmanager)
+
+- **fieldValidation**(*查询参数*):string
+
+ [fieldValidation](../common-parameter/common-parameters#fieldvalidation)
+
+- **pretty**(*查询参数*):string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+#### 响应
+
+200 ([OverridePolicy](../policy-resources/override-policy-v1alpha1#overridepolicy)):OK
+
+201 ([OverridePolicy](../policy-resources/override-policy-v1alpha1#overridepolicy)):Created
+
+### `patch`:更新指定 OverridePolicy 的部分信息
+
+#### HTTP 请求
+
+PATCH /apis/policy.karmada.io/v1alpha1/namespaces/{namespace}/overridepolicies/{name}
+
+#### 参数
+
+- **name**(*路径参数*):string,必选
+
+ OverridePolicy 的名称
+
+- **namespace**(*路径参数*):string,必选
+
+ [namespace](../common-parameter/common-parameters#namespace)
+
+- **body**: [Patch](../common-definitions/patch#patch),必选
+
+
+- **dryRun**(*查询参数*):string
+
+ [dryRun](../common-parameter/common-parameters#dryrun)
+
+- **fieldManager**(*查询参数*):string
+
+ [fieldManager](../common-parameter/common-parameters#fieldmanager)
+
+- **fieldValidation**(*查询参数*):string
+
+ [fieldValidation](../common-parameter/common-parameters#fieldvalidation)
+
+- **force**(*查询参数*):boolean
+
+ [force](../common-parameter/common-parameters#force)
+
+- **pretty**(*查询参数*):string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+#### 响应
+
+200 ([OverridePolicy](../policy-resources/override-policy-v1alpha1#overridepolicy)):OK
+
+201 ([OverridePolicy](../policy-resources/override-policy-v1alpha1#overridepolicy)):Created
+
+### `patch`:更新指定 OverridePolicy 状态的部分信息
+
+#### HTTP 请求
+
+PATCH /apis/policy.karmada.io/v1alpha1/namespaces/{namespace}/overridepolicies/{name}/status
+
+#### 参数
+
+- **name**(*路径参数*):string,必选
+
+ OverridePolicy 的名称
+
+- **namespace**(*路径参数*):string,必选
+
+ [namespace](../common-parameter/common-parameters#namespace)
+
+- **body**: [Patch](../common-definitions/patch#patch),必选
+
+
+- **dryRun**(*查询参数*):string
+
+ [dryRun](../common-parameter/common-parameters#dryrun)
+
+- **fieldManager**(*查询参数*):string
+
+ [fieldManager](../common-parameter/common-parameters#fieldmanager)
+
+- **fieldValidation**(*查询参数*):string
+
+ [fieldValidation](../common-parameter/common-parameters#fieldvalidation)
+
+- **force**(*查询参数*):boolean
+
+ [force](../common-parameter/common-parameters#force)
+
+- **pretty**(*查询参数*):string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+#### 响应
+
+200 ([OverridePolicy](../policy-resources/override-policy-v1alpha1#overridepolicy)):OK
+
+201 ([OverridePolicy](../policy-resources/override-policy-v1alpha1#overridepolicy)):Created
+
+### `delete`:删除一条 OverridePolicy
+
+#### HTTP 请求
+
+DELETE /apis/policy.karmada.io/v1alpha1/namespaces/{namespace}/overridepolicies/{name}
+
+#### 参数
+
+- **name**(*路径参数*):string,必选
+
+ OverridePolicy 的名称
+
+- **namespace**(*路径参数*):string,必选
+
+ [namespace](../common-parameter/common-parameters#namespace)
+
+- **body**: [DeleteOptions](../common-definitions/delete-options#deleteoptions)
+
+
+- **dryRun**(*查询参数*):string
+
+ [dryRun](../common-parameter/common-parameters#dryrun)
+
+- **gracePeriodSeconds**(*查询参数*):integer
+
+ [gracePeriodSeconds](../common-parameter/common-parameters#graceperiodseconds)
+
+- **pretty**(*查询参数*):string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+- **propagationPolicy**(*查询参数*):string
+
+ [propagationPolicy](../common-parameter/common-parameters#propagationpolicy)
+
+#### 响应
+
+200 ([Status](../common-definitions/status#status)):OK
+
+202 ([Status](../common-definitions/status#status)):Accepted
+
+### `deletecollection`:删除所有 OverridePolicy
+
+#### HTTP 请求
+
+DELETE /apis/policy.karmada.io/v1alpha1/namespaces/{namespace}/overridepolicies
+
+#### 参数
+
+- **namespace**(*路径参数*):string,必选
+
+ [namespace](../common-parameter/common-parameters#namespace)
+
+- **body**: [DeleteOptions](../common-definitions/delete-options#deleteoptions)
+
+
+- **continue**(*查询参数*):string
+
+ [continue](../common-parameter/common-parameters#continue)
+
+- **dryRun**(*查询参数*):string
+
+ [dryRun](../common-parameter/common-parameters#dryrun)
+
+- **fieldSelector**(*查询参数*):string
+
+ [fieldSelector](../common-parameter/common-parameters#fieldselector)
+
+- **gracePeriodSeconds**(*查询参数*):integer
+
+ [gracePeriodSeconds](../common-parameter/common-parameters#graceperiodseconds)
+
+- **labelSelector**(*查询参数*):string
+
+ [labelSelector](../common-parameter/common-parameters#labelselector)
+
+- **limit**(*查询参数*):integer
+
+ [limit](../common-parameter/common-parameters#limit)
+
+- **pretty**(*查询参数*):string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+- **propagationPolicy**(*查询参数*):string
+
+ [propagationPolicy](../common-parameter/common-parameters#propagationpolicy)
+
+- **resourceVersion**(*查询参数*):string
+
+ [resourceVersion](../common-parameter/common-parameters#resourceversion)
+
+- **resourceVersionMatch**(*查询参数*):string
+
+ [resourceVersionMatch](../common-parameter/common-parameters#resourceversionmatch)
+
+- **sendInitialEvents**(*查询参数*):boolean
+
+ [sendInitialEvents](../common-parameter/common-parameters#sendinitialevents)
+
+- **timeoutSeconds**(*查询参数*):integer
+
+ [timeoutSeconds](../common-parameter/common-parameters#timeoutseconds)
+
+#### 响应
+
+200 ([Status](../common-definitions/status#status)):OK
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/karmada-api/policy-resources/propagation-policy-v1alpha1.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/karmada-api/policy-resources/propagation-policy-v1alpha1.md
new file mode 100644
index 000000000..df3d8c132
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/karmada-api/policy-resources/propagation-policy-v1alpha1.md
@@ -0,0 +1,875 @@
+---
+api_metadata:
+ apiVersion: "policy.karmada.io/v1alpha1"
+ import: "github.com/karmada-io/karmada/pkg/apis/policy/v1alpha1"
+ kind: "PropagationPolicy"
+content_type: "api_reference"
+description: "PropagationPolicy represents the policy that propagates a group of resources to one or more clusters."
+title: "PropagationPolicy v1alpha1"
+weight: 4
+auto_generated: true
+---
+
+[//]: # (The file is auto-generated from the Go source code of the component using a generic generator,)
+[//]: # (which is forked from [reference-docs](https://github.com/kubernetes-sigs/reference-docs.)
+[//]: # (To update the reference content, please follow the `reference-api.sh`.)
+
+`apiVersion: policy.karmada.io/v1alpha1`
+
+`import "github.com/karmada-io/karmada/pkg/apis/policy/v1alpha1"`
+
+## PropagationPolicy
+
+PropagationPolicy 表示将一组资源分发到一个或多个集群的策略。
+
+
+
+- **apiVersion**: policy.karmada.io/v1alpha1
+
+- **kind**: PropagationPolicy
+
+- **metadata** ([ObjectMeta](../common-definitions/object-meta#objectmeta))
+
+- **spec** (PropagationSpec),必选
+
+ Spec 表示 PropagationPolicy 的规范。
+
+
+
+ *PropagationSpec 表示 PropagationPolicy 的规范。*
+
+ - **spec.resourceSelectors** ([]ResourceSelector),必选
+
+ ResourceSelectors 用于选择资源。不允许设置为 nil 或者留空。为安全起见,避免 Secret 等敏感资源被无意分发,不会匹配全部的资源。
+
+
+
+ *ResourceSelector 用于选择资源。*
+
+ - **spec.resourceSelectors.apiVersion**(string),必选
+
+ APIVersion 表示目标资源的 API 版本。
+
+ - **spec.resourceSelectors.kind**(string),必选
+
+ Kind 表示目标资源的种类。
+
+ - **spec.resourceSelectors.labelSelector** ([LabelSelector](../common-definitions/label-selector#labelselector))
+
+ 查询一组资源的标签。如果 name 不为空,labelSelector 会被忽略。
+
+ - **spec.resourceSelectors.name**(string)
+
+ 目标资源的名称。默认值为空,意味着所有资源将被选中。
+
+ - **spec.resourceSelectors.namespace**(string)
+
+ 目标资源的命名空间。默认值为空,意味着从父对象作用域继承资源。
+
+ - **spec.association**(boolean)
+
+    Association 表示是否自动选择相关资源,例如,被 Deployment 引用的 ConfigMap。默认值为 false。已弃用:请改用 PropagateDeps。
+
+ - **spec.conflictResolution**(string)
+
+ ConflictResolution 表示当目标集群中已存在正在分发的资源时,处理潜在冲突的方式。
+
+ 默认值为 Abort,表示停止分发资源以避免意外覆盖。将原集群资源迁移到 Karmada 时,可设置为“Overwrite”。此时,冲突是可预测的,且 Karmada 可通过覆盖来接管资源。
+
+ - **spec.dependentOverrides**([]string)
+
+ DependentOverrides 罗列在当前 PropagationPolicy 生效之前必须出现的覆盖(OverridePolicy)。
+
+ 它指明当前 PropagationPolicy 所依赖的覆盖。当用户同时创建 OverridePolicy 和资源时,一般希望可以采用新创建的策略。
+
+ 注意:如果当前命名空间中的 OverridePolicy 和 ClusterOverridePolicy 与资源匹配,即使它们不在列表中,仍将被应用于覆盖。
+
+ - **spec.failover** (FailoverBehavior)
+
+ Failover 表示 Karmada 在故障场景中迁移应用的方式。如果值为 nil,则禁用故障转移。
+
+
+
+ *FailoverBehavior 表示应用或集群的故障转移。*
+
+ - **spec.failover.application** (ApplicationFailoverBehavior)
+
+ Application 表示应用的故障转移。如果值为 nil,则禁用故障转移。如果值不为 nil,则 PropagateDeps 应设置为 true,以便依赖项随应用一起迁移。
+
+
+
+ *ApplicationFailoverBehavior 表示应用的故障转移。*
+
+ - **spec.failover.application.decisionConditions** (DecisionConditions),必选
+
+ DecisionConditions 表示执行故障转移的先决条件。只有满足所有条件,才能执行故障转移。当前条件为 TolerationSeconds(可选)。
+
+
+
+ *DecisionConditions 表示执行故障转移的先决条件。*
+
+ - **spec.failover.application.decisionConditions.tolerationSeconds**(int32)
+
+ TolerationSeconds 表示应用达到预期状态后,Karmada 在执行故障转移之前应等待的时间。如果未指定等待时间,Karmada 将立即执行故障转移。默认为 300 秒。
+
+ - **spec.failover.application.gracePeriodSeconds**(int32)
+
+      GracePeriodSeconds 表示从新集群中删除应用之前的最长等待时间(以秒为单位)。仅当 PurgeMode 设置为 Graciously 时才需要设置该字段,默认为 600 秒。如果新集群中的应用无法达到健康状态,Karmada 将在达到最长等待时间后删除应用。取值只能为正整数。
+
+ - **spec.failover.application.purgeMode**(string)
+
+ PurgeMode 表示原集群中应用的处理方式。取值包括 Immediately、Graciously 和 Never。默认为 Graciously。
+
+ - **spec.placement** (Placement)
+
+ Placement 表示选择集群以分发资源的规则。
+
+
+
+ *Placement 表示选择集群的规则。*
+
+ - **spec.placement.clusterAffinities** ([]ClusterAffinityTerm)
+
+ ClusterAffinities 表示多个集群组的调度限制(ClusterAffinityTerm 指定每种限制)。
+
+      调度器将按照这些组在规范中出现的顺序逐个评估,不满足调度限制的组将被忽略;被忽略组中的集群不会被选择,除非它们也属于下一个组(同一集群可以属于多个组)。
+
+ 如果任何组都不满足调度限制,则调度失败,任何集群都不会被选择。
+
+ 注意:
+ 1. ClusterAffinities 不能与 ClusterAffinity 共存。
+ 2. 如果未同时设置 ClusterAffinities 和 ClusterAffinity,则任何集群都可以作为调度候选集群。
+
+ 潜在用例1:本地数据中心的私有集群为主集群组,集群提供商的托管集群是辅助集群组。Karmada 调度器更愿意将工作负载调度到主集群组,只有在主集群组不满足限制(如缺乏资源)的情况下,才会考虑辅助集群组。
+
+ 潜在用例2:对于容灾场景,系统管理员可定义主集群组和备份集群组,工作负载将首先调度到主集群组,当主集群组中的集群发生故障(如数据中心断电)时,Karmada 调度器可以将工作负载迁移到备份集群组。
+
+
+
+ *ClusterAffinityTerm 用于选择集群。*
+
+ - **spec.placement.clusterAffinities.affinityName**(string),必选
+
+ AffinityName 是集群组的名称。
+
+ - **spec.placement.clusterAffinities.clusterNames**([]string)
+
+ ClusterNames 罗列待选择的集群。
+
+ - **spec.placement.clusterAffinities.exclude**([]string)
+
+ ExcludedClusters 罗列待忽略的集群。
+
+ - **spec.placement.clusterAffinities.fieldSelector** (FieldSelector)
+
+ FieldSelector 是一个按字段选择成员集群的过滤器。匹配表达式的键(字段)为 provider、region 或 zone,匹配表达式的运算符为 In 或 NotIn。如果值不为 nil,也未留空,仅选择与此过滤器匹配的集群。
+
+
+
+ *FieldSelector 是一个字段过滤器。*
+
+ - **spec.placement.clusterAffinities.fieldSelector.matchExpressions** ([][NodeSelectorRequirement](../common-definitions/node-selector-requirement#nodeselectorrequirement))
+
+ 字段选择器要求列表。
+
+ - **spec.placement.clusterAffinities.labelSelector** ([LabelSelector](../common-definitions/label-selector#labelselector))
+
+ LabelSelector 是一个按标签选择成员集群的过滤器。如果值不为 nil,也未留空,仅选择与此过滤器匹配的集群。
+
+ - **spec.placement.clusterAffinity** (ClusterAffinity)
+
+ ClusterAffinity 表示对某组集群的调度限制。注意:
+ 1. ClusterAffinity 不能与 ClusterAffinities 共存。
+ 2. 如果未同时设置 ClusterAffinities 和 ClusterAffinity,则任何集群都可以作为调度候选集群。
+
+
+
+ *ClusterAffinity 是用于选择集群的过滤条件。*
+
+ - **spec.placement.clusterAffinity.clusterNames**([]string)
+
+ ClusterNames 罗列待选择的集群。
+
+ - **spec.placement.clusterAffinity.exclude**([]string)
+
+ ExcludedClusters 罗列待忽略的集群。
+
+ - **spec.placement.clusterAffinity.fieldSelector** (FieldSelector)
+
+ FieldSelector 是一个按字段选择成员集群的过滤器。匹配表达式的键(字段)为 provider、region 或 zone,匹配表达式的运算符为 In 或 NotIn。如果值不为 nil,也未留空,仅选择与此过滤器匹配的集群。
+
+
+
+ *FieldSelector 是一个字段过滤器。*
+
+ - **spec.placement.clusterAffinity.fieldSelector.matchExpressions** ([][NodeSelectorRequirement](../common-definitions/node-selector-requirement#nodeselectorrequirement))
+
+ 字段选择器要求列表。
+
+ - **spec.placement.clusterAffinity.labelSelector** ([LabelSelector](../common-definitions/label-selector#labelselector))
+
+ LabelSelector 是一个按标签选择成员集群的过滤器。如果值不为 nil,也未留空,仅选择与此过滤器匹配的集群。
+
+ - **spec.placement.clusterTolerations** ([]Toleration)
+
+ ClusterTolerations 表示容忍度。
+
+
+
+ *附加此容忍度的 Pod 能够容忍任何使用匹配运算符 <operator> 匹配三元组 <key,value,effect> 所得到的污点。*
+
+ - **spec.placement.clusterTolerations.effect**(string)
+
+ Effect 表示要匹配的污点效果。留空表示匹配所有污点效果。如果要设置此字段,允许的值为 NoSchedule、PreferNoSchedule 或 NoExecute。
+
+ 枚举值包括:
+ - `"NoExecute"`:任何不能容忍该污点的 Pod 都会被驱逐。当前由 NodeController 强制执行。
+      - `"NoSchedule"`:如果新 Pod 无法容忍该污点,则不允许其调度到该节点上,但允许所有未经调度器、直接提交给 Kubelet 的 Pod 启动,并允许节点上已存在的 Pod 继续运行。由调度器强制执行。
+ - `"PreferNoSchedule"`:和 TaintEffectNoSchedule 相似,不同的是调度器尽量避免将新 Pod 调度到具有该污点的节点上,除非没有其他节点可调度。由调度器强制执行。
+
+ - **spec.placement.clusterTolerations.key**(string)
+
+      Key 是容忍度的污点键。留空表示匹配所有污点键。如果键为空,则运算符必须为 Exists,即匹配所有值和所有键。
+
+ - **spec.placement.clusterTolerations.operator**(string)
+
+ Operator 表示一个键与值的关系。有效的运算符包括 Exists 和 Equal。默认为 Equal。Exists 相当于将值设置为通配符,因此一个 Pod 可以容忍特定类别的所有污点。
+
+ 枚举值包括:
+ - `"Equal"`
+ - `"Exists"`
+
+ - **spec.placement.clusterTolerations.tolerationSeconds**(int64)
+
+      TolerationSeconds 表示容忍度容忍污点的时间段(该容忍度的 Effect 必须为 NoExecute,否则忽略此字段)。默认情况下,不设置此字段,表示永远容忍污点(不驱逐)。零和负值将被系统视为 0(立即驱逐)。
+
+ - **spec.placement.clusterTolerations.value**(string)
+
+ Value 是容忍度匹配到的污点值。如果运算符为 Exists,则值留空,否则就是一个普通字符串。
+
+ - **spec.placement.replicaScheduling** (ReplicaSchedulingStrategy)
+
+      ReplicaScheduling 表示将 spec 中带有副本数的资源(例如 Deployment、StatefulSet)分发到成员集群时,处理副本数量的调度策略。
+
+
+
+ *ReplicaSchedulingStrategy 表示副本的分配策略。*
+
+ - **spec.placement.replicaScheduling.replicaDivisionPreference**(string)
+
+ 当 ReplicaSchedulingType 设置为 Divided 时,由 ReplicaDivisionPreference 确定副本的分配策略。取值包括 Aggregated 和 Weighted。Aggregated:将副本分配给尽可能少的集群,同时考虑集群的资源可用性。Weighted:根据 WeightPreference 按权重分配副本。
+
+ - **spec.placement.replicaScheduling.replicaSchedulingType**(string)
+
+      ReplicaSchedulingType 确定 Karmada 分发资源时副本的调度方式。取值包括 Duplicated 和 Divided。Duplicated:将资源中相同的副本数复制到每个候选成员集群。Divided:根据有效候选成员集群的数量分配副本,每个集群的副本数由 ReplicaDivisionPreference 确定。
+
+ - **spec.placement.replicaScheduling.weightPreference** (ClusterPreferences)
+
+ WeightPreference 描述每个集群或每组集群的权重。如果 ReplicaDivisionPreference 设置为 Weighted,但 WeightPreference 未设置,调度器将为所有集群设置相同的权重。
+
+
+
+ *ClusterPreferences 描述每个集群或每组集群的权重。*
+
+ - **spec.placement.replicaScheduling.weightPreference.dynamicWeight**(string)
+
+ DynamicWeight 指生成动态权重列表的因子。如果指定,StaticWeightList 将被忽略。
+
+ - **spec.placement.replicaScheduling.weightPreference.staticWeightList** ([]StaticClusterWeight)
+
+ StaticWeightList 罗列静态集群权重。
+
+
+
+ *StaticClusterWeight 定义静态集群权重。*
+
+ - **spec.placement.replicaScheduling.weightPreference.staticWeightList.targetCluster** (ClusterAffinity),必选
+
+ TargetCluster 是选择集群的过滤器。
+
+
+
+ *ClusterAffinity 是用于选择集群的过滤条件。*
+
+ - **spec.placement.replicaScheduling.weightPreference.staticWeightList.targetCluster.clusterNames** ([]string)
+
+ ClusterNames 罗列待选择的集群。
+
+ - **spec.placement.replicaScheduling.weightPreference.staticWeightList.targetCluster.exclude** ([]string)
+
+ ExcludedClusters 罗列待忽略的集群。
+
+ - **spec.placement.replicaScheduling.weightPreference.staticWeightList.targetCluster.fieldSelector** (FieldSelector)
+
+ FieldSelector 是一个按字段选择成员集群的过滤器。匹配表达式的键(字段)为 provider、region 或 zone,匹配表达式的运算符为 In 或 NotIn。如果值不为 nil,也未留空,仅选择与此过滤器匹配的集群。
+
+
+
+ *FieldSelector 是一个字段过滤器。*
+
+ - **spec.placement.replicaScheduling.weightPreference.staticWeightList.targetCluster.fieldSelector.matchExpressions** ([][NodeSelectorRequirement](../common-definitions/node-selector-requirement#nodeselectorrequirement))
+
+ 字段选择器要求列表。
+
+ - **spec.placement.replicaScheduling.weightPreference.staticWeightList.targetCluster.labelSelector** ([LabelSelector](../common-definitions/label-selector#labelselector))
+
+ LabelSelector 是一个按标签选择成员集群的过滤器。如果值不为 nil,也未留空,仅选择与此过滤器匹配的集群。
+
+ - **spec.placement.replicaScheduling.weightPreference.staticWeightList.weight**(int64),必选
+
+ Weight 表示优先选择 TargetCluster 指定的集群。
+
+ - **spec.placement.spreadConstraints** ([]SpreadConstraint)
+
+ SpreadConstraints 表示调度约束列表。
+
+
+
+ *SpreadConstraint 表示资源分布的约束。*
+
+ - **spec.placement.spreadConstraints.maxGroups**(int32)
+
+ MaxGroups 表示要选择的集群组的最大数量。
+
+ - **spec.placement.spreadConstraints.minGroups**(int32)
+
+ MinGroups 表示要选择的集群组的最小数量。默认值为 1。
+
+ - **spec.placement.spreadConstraints.spreadByField**(string)
+
+ SpreadByField 是 Karmada 集群 API 中的字段,该 API 用于将成员集群分到不同集群组。资源将被分发到不同的集群组中。可用的字段包括 cluster、region、zone 和 provider。SpreadByField 不能与 SpreadByLabel 共存。如果两个字段都为空,SpreadByField 默认为 cluster。
+
+ - **spec.placement.spreadConstraints.spreadByLabel**(string)
+
+ SpreadByLabel 表示用于将成员集群分到不同集群组的标签键。资源将被分发到不同的集群组中。SpreadByLabel 不能与 SpreadByField 共存。
+
+- **spec.preemption**(string)
+
+ Preemption 表示资源抢占。取值包括 Always 和 Never。
+
+ 枚举值包括:
+ - `"Always"`:允许抢占。如果 Always 应用于 PropagationPolicy,则会根据优先级抢占资源。只要 PropagationPolicy 和 ClusterPropagationPolicy 能匹配 ResourceSelector 中定义的规则,均可用于声明资源。此外,如果资源已被 ClusterPropagationPolicy 声明,PropagationPolicy 仍然可以抢占该资源,无需考虑优先级。如果 Always 应用于 ClusterPropagationPolicy,只有 ClusterPropagationPolicy 能抢占资源。
+ - `"Never"`:PropagationPolicy(或 ClusterPropagationPolicy)不抢占资源。
+
+- **spec.priority**(int32)
+
+  Priority 表示策略(PropagationPolicy 或 ClusterPropagationPolicy)的重要性。对于匹配的资源模板,在其被处理时如果没有其他优先级更高的策略与之匹配,则应用该策略。一旦资源模板被某个策略声明,默认情况下该模板不会被优先级更高的策略抢占。查看 Preemption 字段,了解更多信息。
+
+  如果两条策略的优先级相同,会使用 ResourceSelectors 中匹配规则更精确的策略:
+
+  - 按 name(resourceSelector.name) 匹配的优先级高于按 selector(resourceSelector.labelSelector) 匹配;
+  - 按 selector(resourceSelector.labelSelector) 匹配的优先级高于按 APIVersion(resourceSelector.apiVersion) 和 Kind(resourceSelector.kind) 匹配。
+
+ 如果优先级相同,则按字母顺序,会使用字母排名更前的策略,比如,名称以 bar 开头的策略优先级高于以 foo 开头的策略。
+
+ 值越大,优先级越高。默认值为 0。
+
+- **spec.propagateDeps**(boolean)
+
+  PropagateDeps 表示相关资源是否被自动分发。以引用了 ConfigMap 和 Secret 的 Deployment 为例,当 propagateDeps 为 true 时,被引用的资源可以不在 resourceSelectors 中列出(以减少配置),它们将与 Deployment 一起被分发。此外,在故障转移场景中,被引用的资源也将与 Deployment 一起迁移。
+
+ 默认值为 false。
+
+- **spec.schedulerName**(string)
+
+ SchedulerName 表示要继续进行调度的调度器。如果指定,将由指定的调度器调度策略。如果未指定,将由默认调度器调度策略。
+
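+结合上面的字段说明,下面给出一个 PropagationPolicy 清单示例,演示 resourceSelectors、placement 与 propagateDeps 等字段的典型组合。示例仅作示意:其中的资源名称、集群名称与权重均为假设值,并未覆盖全部可选字段。
+
+```yaml
+apiVersion: policy.karmada.io/v1alpha1
+kind: PropagationPolicy
+metadata:
+  name: nginx-propagation      # 假设的策略名称
+  namespace: default
+spec:
+  resourceSelectors:           # 选择要分发的资源模板
+    - apiVersion: apps/v1
+      kind: Deployment
+      name: nginx
+  propagateDeps: true          # 同时分发被引用的 ConfigMap、Secret 等依赖
+  conflictResolution: Abort    # 目标集群已存在同名资源时停止分发
+  placement:
+    clusterAffinity:
+      clusterNames:            # 候选成员集群(假设值)
+        - member1
+        - member2
+    replicaScheduling:
+      replicaSchedulingType: Divided
+      replicaDivisionPreference: Weighted
+      weightPreference:
+        staticWeightList:      # member1 与 member2 按 1:2 的权重分配副本
+          - targetCluster:
+              clusterNames:
+                - member1
+            weight: 1
+          - targetCluster:
+              clusterNames:
+                - member2
+            weight: 2
+```
+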
+## PropagationPolicyList
+
+PropagationPolicyList 罗列 PropagationPolicy。
+
+
+
+- **apiVersion**: policy.karmada.io/v1alpha1
+
+- **kind**: PropagationPolicyList
+
+- **metadata** ([ListMeta](../common-definitions/list-meta#listmeta))
+
+- **items** ([][PropagationPolicy](../policy-resources/propagation-policy-v1alpha1#propagationpolicy)),必选
+
+## 操作
+
+
+
+### `get`:查询指定的 PropagationPolicy
+
+#### HTTP 请求
+
+GET /apis/policy.karmada.io/v1alpha1/namespaces/{namespace}/propagationpolicies/{name}
+
+#### 参数
+
+- **name**(*路径参数*):string,必选
+
+ PropagationPolicy 名称
+
+- **namespace**(*路径参数*):string,必选
+
+ [namespace](../common-parameter/common-parameters#namespace)
+
+- **pretty**(*查询参数*):string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+#### 响应
+
+200 ([PropagationPolicy](../policy-resources/propagation-policy-v1alpha1#propagationpolicy)): OK
+
+### `get`:查询指定 PropagationPolicy 的状态
+
+#### HTTP 请求
+
+GET /apis/policy.karmada.io/v1alpha1/namespaces/{namespace}/propagationpolicies/{name}/status
+
+#### 参数
+
+- **name**(*路径参数*):string,必选
+
+ PropagationPolicy 名称
+
+- **namespace**(*路径参数*):string,必选
+
+ [namespace](../common-parameter/common-parameters#namespace)
+
+- **pretty**(*查询参数*):string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+#### 响应
+
+200 ([PropagationPolicy](../policy-resources/propagation-policy-v1alpha1#propagationpolicy)): OK
+
+### `list`:查询全部 PropagationPolicy
+
+#### HTTP 请求
+
+GET /apis/policy.karmada.io/v1alpha1/namespaces/{namespace}/propagationpolicies
+
+#### 参数
+
+- **namespace**(*路径参数*):string,必选
+
+ [namespace](../common-parameter/common-parameters#namespace)
+
+- **allowWatchBookmarks**(*查询参数*):boolean
+
+ [allowWatchBookmarks](../common-parameter/common-parameters#allowwatchbookmarks)
+
+- **continue**(*查询参数*):string
+
+ [continue](../common-parameter/common-parameters#continue)
+
+- **fieldSelector**(*查询参数*):string
+
+ [fieldSelector](../common-parameter/common-parameters#fieldselector)
+
+- **labelSelector**(*查询参数*):string
+
+ [labelSelector](../common-parameter/common-parameters#labelselector)
+
+- **limit**(*查询参数*):integer
+
+ [limit](../common-parameter/common-parameters#limit)
+
+- **pretty**(*查询参数*):string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+- **resourceVersion**(*查询参数*):string
+
+ [resourceVersion](../common-parameter/common-parameters#resourceversion)
+
+- **resourceVersionMatch**(*查询参数*):string
+
+ [resourceVersionMatch](../common-parameter/common-parameters#resourceversionmatch)
+
+- **sendInitialEvents**(*查询参数*):boolean
+
+ [sendInitialEvents](../common-parameter/common-parameters#sendinitialevents)
+
+- **timeoutSeconds**(*查询参数*):integer
+
+ [timeoutSeconds](../common-parameter/common-parameters#timeoutseconds)
+
+- **watch**(*查询参数*):boolean
+
+ [watch](../common-parameter/common-parameters#watch)
+
+#### 响应
+
+200 ([PropagationPolicyList](../policy-resources/propagation-policy-v1alpha1#propagationpolicylist)): OK
+
+### `list`:查询全部 PropagationPolicy
+
+#### HTTP 请求
+
+GET /apis/policy.karmada.io/v1alpha1/propagationpolicies
+
+#### 参数
+
+- **allowWatchBookmarks**(*查询参数*):boolean
+
+ [allowWatchBookmarks](../common-parameter/common-parameters#allowwatchbookmarks)
+
+- **continue**(*查询参数*):string
+
+ [continue](../common-parameter/common-parameters#continue)
+
+- **fieldSelector**(*查询参数*):string
+
+ [fieldSelector](../common-parameter/common-parameters#fieldselector)
+
+- **labelSelector**(*查询参数*):string
+
+ [labelSelector](../common-parameter/common-parameters#labelselector)
+
+- **limit**(*查询参数*):integer
+
+ [limit](../common-parameter/common-parameters#limit)
+
+- **pretty**(*查询参数*):string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+- **resourceVersion**(*查询参数*):string
+
+ [resourceVersion](../common-parameter/common-parameters#resourceversion)
+
+- **resourceVersionMatch**(*查询参数*):string
+
+ [resourceVersionMatch](../common-parameter/common-parameters#resourceversionmatch)
+
+- **sendInitialEvents**(*查询参数*):boolean
+
+ [sendInitialEvents](../common-parameter/common-parameters#sendinitialevents)
+
+- **timeoutSeconds**(*查询参数*):integer
+
+ [timeoutSeconds](../common-parameter/common-parameters#timeoutseconds)
+
+- **watch**(*查询参数*):boolean
+
+ [watch](../common-parameter/common-parameters#watch)
+
+#### 响应
+
+200 ([PropagationPolicyList](../policy-resources/propagation-policy-v1alpha1#propagationpolicylist)): OK
+
+### `create`:创建一个 PropagationPolicy
+
+#### HTTP 请求
+
+POST /apis/policy.karmada.io/v1alpha1/namespaces/{namespace}/propagationpolicies
+
+#### 参数
+
+- **namespace**(*路径参数*):string,必选
+
+ [namespace](../common-parameter/common-parameters#namespace)
+
+- **body**: [PropagationPolicy](../policy-resources/propagation-policy-v1alpha1#propagationpolicy),必选
+
+
+- **dryRun**(*查询参数*):string
+
+ [dryRun](../common-parameter/common-parameters#dryrun)
+
+- **fieldManager**(*查询参数*):string
+
+ [fieldManager](../common-parameter/common-parameters#fieldmanager)
+
+- **fieldValidation**(*查询参数*):string
+
+ [fieldValidation](../common-parameter/common-parameters#fieldvalidation)
+
+- **pretty**(*查询参数*):string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+#### 响应
+
+200 ([PropagationPolicy](../policy-resources/propagation-policy-v1alpha1#propagationpolicy)): OK
+
+201 ([PropagationPolicy](../policy-resources/propagation-policy-v1alpha1#propagationpolicy)): Created
+
+202 ([PropagationPolicy](../policy-resources/propagation-policy-v1alpha1#propagationpolicy)): Accepted
+
+### `update`:更新指定的 PropagationPolicy
+
+#### HTTP 请求
+
+PUT /apis/policy.karmada.io/v1alpha1/namespaces/{namespace}/propagationpolicies/{name}
+
+#### 参数
+
+- **name**(*路径参数*):string,必选
+
+ PropagationPolicy 名称
+
+- **namespace**(*路径参数*):string,必选
+
+ [namespace](../common-parameter/common-parameters#namespace)
+
+- **body**: [PropagationPolicy](../policy-resources/propagation-policy-v1alpha1#propagationpolicy),必选
+
+
+- **dryRun**(*查询参数*):string
+
+ [dryRun](../common-parameter/common-parameters#dryrun)
+
+- **fieldManager**(*查询参数*):string
+
+ [fieldManager](../common-parameter/common-parameters#fieldmanager)
+
+- **fieldValidation**(*查询参数*):string
+
+ [fieldValidation](../common-parameter/common-parameters#fieldvalidation)
+
+- **pretty**(*查询参数*):string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+#### 响应
+
+200 ([PropagationPolicy](../policy-resources/propagation-policy-v1alpha1#propagationpolicy)): OK
+
+201 ([PropagationPolicy](../policy-resources/propagation-policy-v1alpha1#propagationpolicy)): Created
+
+### `update`:更新指定 PropagationPolicy 的状态
+
+#### HTTP 请求
+
+PUT /apis/policy.karmada.io/v1alpha1/namespaces/{namespace}/propagationpolicies/{name}/status
+
+#### 参数
+
+- **name**(*路径参数*):string,必选
+
+  PropagationPolicy 名称
+
+- **namespace**(*路径参数*):string,必选
+
+ [namespace](../common-parameter/common-parameters#namespace)
+
+- **body**: [PropagationPolicy](../policy-resources/propagation-policy-v1alpha1#propagationpolicy),必选
+
+
+- **dryRun**(*查询参数*):string
+
+ [dryRun](../common-parameter/common-parameters#dryrun)
+
+- **fieldManager**(*查询参数*):string
+
+ [fieldManager](../common-parameter/common-parameters#fieldmanager)
+
+- **fieldValidation**(*查询参数*):string
+
+ [fieldValidation](../common-parameter/common-parameters#fieldvalidation)
+
+- **pretty**(*查询参数*):string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+#### 响应
+
+200 ([PropagationPolicy](../policy-resources/propagation-policy-v1alpha1#propagationpolicy)): OK
+
+201 ([PropagationPolicy](../policy-resources/propagation-policy-v1alpha1#propagationpolicy)): Created
+
+### `patch`:更新指定 PropagationPolicy 的部分信息
+
+#### HTTP 请求
+
+PATCH /apis/policy.karmada.io/v1alpha1/namespaces/{namespace}/propagationpolicies/{name}
+
+#### 参数
+
+- **name**(*路径参数*):string,必选
+
+ PropagationPolicy 名称
+
+- **namespace**(*路径参数*):string,必选
+
+ [namespace](../common-parameter/common-parameters#namespace)
+
+- **body**: [Patch](../common-definitions/patch#patch),必选
+
+
+- **dryRun**(*查询参数*):string
+
+ [dryRun](../common-parameter/common-parameters#dryrun)
+
+- **fieldManager**(*查询参数*):string
+
+ [fieldManager](../common-parameter/common-parameters#fieldmanager)
+
+- **fieldValidation**(*查询参数*):string
+
+ [fieldValidation](../common-parameter/common-parameters#fieldvalidation)
+
+- **force**(*查询参数*):boolean
+
+ [force](../common-parameter/common-parameters#force)
+
+- **pretty**(*查询参数*):string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+#### 响应
+
+200 ([PropagationPolicy](../policy-resources/propagation-policy-v1alpha1#propagationpolicy)): OK
+
+201 ([PropagationPolicy](../policy-resources/propagation-policy-v1alpha1#propagationpolicy)): Created
+
+### `patch`:更新指定 PropagationPolicy 状态的部分信息
+
+#### HTTP 请求
+
+PATCH /apis/policy.karmada.io/v1alpha1/namespaces/{namespace}/propagationpolicies/{name}/status
+
+#### 参数
+
+- **name**(*路径参数*):string,必选
+
+ PropagationPolicy 名称
+
+- **namespace**(*路径参数*):string,必选
+
+ [namespace](../common-parameter/common-parameters#namespace)
+
+- **body**: [Patch](../common-definitions/patch#patch),必选
+
+
+- **dryRun**(*查询参数*):string
+
+ [dryRun](../common-parameter/common-parameters#dryrun)
+
+- **fieldManager**(*查询参数*):string
+
+ [fieldManager](../common-parameter/common-parameters#fieldmanager)
+
+- **fieldValidation**(*查询参数*):string
+
+ [fieldValidation](../common-parameter/common-parameters#fieldvalidation)
+
+- **force**(*查询参数*):boolean
+
+ [force](../common-parameter/common-parameters#force)
+
+- **pretty**(*查询参数*):string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+#### 响应
+
+200 ([PropagationPolicy](../policy-resources/propagation-policy-v1alpha1#propagationpolicy)): OK
+
+201 ([PropagationPolicy](../policy-resources/propagation-policy-v1alpha1#propagationpolicy)): Created
+
+### `delete`:删除一个 PropagationPolicy
+
+#### HTTP 请求
+
+DELETE /apis/policy.karmada.io/v1alpha1/namespaces/{namespace}/propagationpolicies/{name}
+
+#### 参数
+
+- **name**(*路径参数*):string,必选
+
+ PropagationPolicy 名称
+
+- **namespace**(*路径参数*):string,必选
+
+ [namespace](../common-parameter/common-parameters#namespace)
+
+- **body**: [DeleteOptions](../common-definitions/delete-options#deleteoptions)
+
+
+- **dryRun**(*查询参数*):string
+
+ [dryRun](../common-parameter/common-parameters#dryrun)
+
+- **gracePeriodSeconds**(*查询参数*):integer
+
+ [gracePeriodSeconds](../common-parameter/common-parameters#graceperiodseconds)
+
+- **pretty**(*查询参数*):string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+- **propagationPolicy**(*查询参数*):string
+
+ [propagationPolicy](../common-parameter/common-parameters#propagationpolicy)
+
+#### 响应
+
+200 ([Status](../common-definitions/status#status)): OK
+
+202 ([Status](../common-definitions/status#status)): Accepted
+
+### `deletecollection`:删除 PropagationPolicy 的集合
+
+#### HTTP 请求
+
+DELETE /apis/policy.karmada.io/v1alpha1/namespaces/{namespace}/propagationpolicies
+
+#### 参数
+
+- **namespace**(*路径参数*):string,必选
+
+ [namespace](../common-parameter/common-parameters#namespace)
+
+- **body**: [DeleteOptions](../common-definitions/delete-options#deleteoptions)
+
+
+- **continue**(*查询参数*):string
+
+ [continue](../common-parameter/common-parameters#continue)
+
+- **dryRun**(*查询参数*):string
+
+ [dryRun](../common-parameter/common-parameters#dryrun)
+
+- **fieldSelector**(*查询参数*):string
+
+ [fieldSelector](../common-parameter/common-parameters#fieldselector)
+
+- **gracePeriodSeconds**(*查询参数*):integer
+
+ [gracePeriodSeconds](../common-parameter/common-parameters#graceperiodseconds)
+
+- **labelSelector**(*查询参数*):string
+
+ [labelSelector](../common-parameter/common-parameters#labelselector)
+
+- **limit**(*查询参数*):integer
+
+ [limit](../common-parameter/common-parameters#limit)
+
+- **pretty**(*查询参数*):string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+- **propagationPolicy**(*查询参数*):string
+
+ [propagationPolicy](../common-parameter/common-parameters#propagationpolicy)
+
+- **resourceVersion**(*查询参数*):string
+
+ [resourceVersion](../common-parameter/common-parameters#resourceversion)
+
+- **resourceVersionMatch**(*查询参数*):string
+
+ [resourceVersionMatch](../common-parameter/common-parameters#resourceversionmatch)
+
+- **sendInitialEvents**(*查询参数*):boolean
+
+ [sendInitialEvents](../common-parameter/common-parameters#sendinitialevents)
+
+- **timeoutSeconds**(*查询参数*):integer
+
+ [timeoutSeconds](../common-parameter/common-parameters#timeoutseconds)
+
+#### 响应
+
+200 ([Status](../common-definitions/status#status)): OK
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/karmada-api/search-resources/resource-registry-v1alpha1.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/karmada-api/search-resources/resource-registry-v1alpha1.md
new file mode 100644
index 000000000..8ab62fb0a
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/karmada-api/search-resources/resource-registry-v1alpha1.md
@@ -0,0 +1,582 @@
+---
+api_metadata:
+ apiVersion: "search.karmada.io/v1alpha1"
+ import: "github.com/karmada-io/karmada/pkg/apis/search/v1alpha1"
+ kind: "ResourceRegistry"
+content_type: "api_reference"
+description: "ResourceRegistry represents the configuration of the cache scope, mainly describes which resources in which clusters should be cached."
+title: "ResourceRegistry v1alpha1"
+weight: 1
+auto_generated: true
+---
+
+[//]: # (The file is auto-generated from the Go source code of the component using a generic generator,)
+[//]: # (which is forked from [reference-docs](https://github.com/kubernetes-sigs/reference-docs.)
+[//]: # (To update the reference content, please follow the `reference-api.sh`.)
+
+`apiVersion: search.karmada.io/v1alpha1`
+
+`import "github.com/karmada-io/karmada/pkg/apis/search/v1alpha1"`
+
+## ResourceRegistry
+
+ResourceRegistry 表示缓存范围的配置,主要描述应缓存哪些集群中的哪些资源。
+
+
+
+- **apiVersion**: search.karmada.io/v1alpha1
+
+- **kind**: ResourceRegistry
+
+- **metadata** ([ObjectMeta](../common-definitions/object-meta#objectmeta))
+
+- **spec** ([ResourceRegistrySpec](../search-resources/resource-registry-v1alpha1#resourceregistryspec))
+
+ Spec 表示 ResourceRegistry 的规范。
+
+- **status** ([ResourceRegistryStatus](../search-resources/resource-registry-v1alpha1#resourceregistrystatus))
+
+ Status 表示 ResourceRegistry 的状态。
+
+## ResourceRegistrySpec
+
+ResourceRegistrySpec 定义 ResourceRegistry 的预期状态。
+
+
+
+- **resourceSelectors** ([]ResourceSelector),必选
+
+ ResourceSelectors 指定缓存系统应缓存的资源类型。
+
+
+
+ *ResourceSelector 指定资源类型及其范围。*
+
+ - **resourceSelectors.apiVersion**(string),必选
+
+ APIVersion 表示目标资源的 API 版本。
+
+ - **resourceSelectors.kind**(string),必选
+
+ Kind 表示目标资源的类别。
+
+ - **resourceSelectors.namespace** (string)
+
+ 目标资源的命名空间。默认为空,表示所有命名空间。
+
+- **targetCluster** (ClusterAffinity),必选
+
+ TargetCluster 指定缓存系统收集资源的集群。
+
+
+
+ *ClusterAffinity 是用来选择集群的过滤器。*
+
+ - **targetCluster.clusterNames** ([]string)
+
+ ClusterNames 罗列了待选择的集群。
+
+ - **targetCluster.exclude** ([]string)
+
+ ExcludedClusters 罗列了待忽略的集群。
+
+ - **targetCluster.fieldSelector** (FieldSelector)
+
+ FieldSelector 是一个按字段选择成员集群的过滤器。匹配表达式的键(字段)为 provider、region 或 zone,匹配表达式的运算符为 In 或 NotIn。如果值不为 nil,也未留空,仅选择与此过滤器匹配的集群。
+
+
+
+ *FieldSelector 是一个字段过滤器。*
+
+ - **targetCluster.fieldSelector.matchExpressions** ([][NodeSelectorRequirement](../common-definitions/node-selector-requirement#nodeselectorrequirement))
+
+ 字段选择器要求列表。
+
+ - **targetCluster.labelSelector** ([LabelSelector](../common-definitions/label-selector#labelselector))
+
+ LabelSelector 是一个按标签选择成员集群的过滤器。如果值不为 nil,也未留空,仅选择与此过滤器匹配的集群。
+
+- **backendStore** (BackendStoreConfig)
+
+ BackendStore 指定缓存项的存储位置。
+
+
+
+ *BackendStoreConfig 表示后端存储。*
+
+ - **backendStore.openSearch** (OpenSearchConfig)
+
+ OpenSearch 是一款由社区驱动的开源搜索和分析套件。更多详情,请浏览:https://opensearch.org/
+
+
+
+ *OpenSearchConfig 包含了客户端访问 OpenSearch 服务器所需的配置。*
+
+ - **backendStore.openSearch.addresses** ([]string),必选
+
+ Addresses 罗列了待使用的节点端点(例如,https://localhost:9200)。有关“节点”的概念,请浏览:https://opensearch.org/docs/latest/opensearch/index/#clusters-and-nodes
+
+ - **backendStore.openSearch.secretRef** (LocalSecretReference)
+
+      SecretRef 表示包含访问服务器所需凭据的 Secret。该 Secret 应包含以下数据项:secret.data.userName 和 secret.data.password。
+
+
+
+ *LocalSecretReference 指封闭命名空间内的密钥引用。*
+
+    - **backendStore.openSearch.secretRef.name**(string),必选
+
+ Name 指被引用资源的名称。
+
+    - **backendStore.openSearch.secretRef.namespace**(string),必选
+
+ Namespace 指所引用资源的命名空间。
+
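+下面给出一个符合上述 spec 定义的 ResourceRegistry 清单示例,以及 backendStore.openSearch.secretRef 所引用的 Secret。示例仅作示意:其中的集群名称、OpenSearch 地址与凭据均为假设值。
+
+```yaml
+apiVersion: search.karmada.io/v1alpha1
+kind: ResourceRegistry
+metadata:
+  name: deployment-search            # 假设的名称
+spec:
+  resourceSelectors:                 # 需要缓存的资源类型
+    - apiVersion: apps/v1
+      kind: Deployment
+  targetCluster:                     # 从哪些成员集群收集资源
+    clusterNames:
+      - member1
+      - member2
+  backendStore:
+    openSearch:
+      addresses:
+        - "https://localhost:9200"   # 假设的 OpenSearch 节点端点
+      secretRef:
+        name: opensearch-credentials
+        namespace: karmada-system    # 假设 Secret 存放在 karmada-system 命名空间
+---
+apiVersion: v1
+kind: Secret
+metadata:
+  name: opensearch-credentials       # 与上面的 secretRef 对应
+  namespace: karmada-system
+stringData:
+  userName: admin                    # 对应 secret.data.userName
+  password: changeme                 # 对应 secret.data.password
+```
+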
+## ResourceRegistryStatus
+
+ResourceRegistryStatus 定义 ResourceRegistry 当前观测到的状态。
+
+
+
+- **conditions** ([]Condition)
+
+ Conditions 包含不同的状况。
+
+
+
+ *Condition 包含此 API 资源当前状态某个方面的详细信息。*
+
+ - **conditions.lastTransitionTime** (Time),必选
+
+ lastTransitionTime 是状况最近一次从一种状态转换到另一种状态的时间。这种变化通常出现在下层状况发生变化的时候。如果无法了解下层状况变化,使用 API 字段更改的时间也是可以接受的。
+
+
+
+ *Time 是 time.Time 的包装器,它支持对 YAML 和 JSON 的正确编组。time 包的许多工厂方法提供了包装器。*
+
+ - **conditions.message**(string),必选
+
+ message 是有关转换的详细信息(人类可读消息)。可以是空字符串。
+
+ - **conditions.reason**(string),必选
+
+ reason 是一个程序标识符,表明状况最后一次转换的原因。特定状况类型的生产者可以定义该字段的预期值和含义,以及这些值是否可被视为有保证的 API。取值应该是一个驼峰式(CamelCase)字符串。此字段不能为空。
+
+ - **conditions.status**(string),必选
+
+    status 表示状况的状态。取值为 True、False 或 Unknown。
+
+ - **conditions.type**(string),必选
+
+ type 表示状况的类型,采用 CamelCase 或 foo.example.com/CamelCase 形式。
+
+ - **conditions.observedGeneration** (int64)
+
+ observedGeneration 表示设置状况时所基于的 .metadata.generation。例如,如果 .metadata.generation 为 12,但 .status.conditions[x].observedGeneration 为 9,则状况相对于实例的当前状态已过期。
+
+## ResourceRegistryList
+
+ResourceRegistryList 是 ResourceRegistry 的集合。
+
+
+
+- **apiVersion**: search.karmada.io/v1alpha1
+
+- **kind**: ResourceRegistryList
+
+- **metadata** ([ListMeta](../common-definitions/list-meta#listmeta))
+
+- **items** ([][ResourceRegistry](../search-resources/resource-registry-v1alpha1#resourceregistry)),必选
+
+ Items 是 ResourceRegistry 的列表。
+
+## 操作
+
+
+
+### `get`:查询指定的 ResourceRegistry
+
+#### HTTP 请求
+
+GET /apis/search.karmada.io/v1alpha1/resourceregistries/{name}
+
+#### 参数
+
+- **name**(*路径参数*):string,必选
+
+ ResourceRegistry 名称
+
+- **pretty**(*查询参数*):string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+#### 响应
+
+200 ([ResourceRegistry](../search-resources/resource-registry-v1alpha1#resourceregistry)): OK
+
+### `get`:查询指定 ResourceRegistry 的状态
+
+#### HTTP 请求
+
+GET /apis/search.karmada.io/v1alpha1/resourceregistries/{name}/status
+
+#### 参数
+
+- **name**(*路径参数*):string,必选
+
+ ResourceRegistry 的名称
+
+- **pretty**(*查询参数*):string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+#### 响应
+
+200 ([ResourceRegistry](../search-resources/resource-registry-v1alpha1#resourceregistry)): OK
+
+### `list`:查询所有 ResourceRegistry
+
+#### HTTP 请求
+
+GET /apis/search.karmada.io/v1alpha1/resourceregistries
+
+#### 参数
+
+- **allowWatchBookmarks**(*查询参数*):boolean
+
+ [allowWatchBookmarks](../common-parameter/common-parameters#allowwatchbookmarks)
+
+- **continue**(*查询参数*):string
+
+ [continue](../common-parameter/common-parameters#continue)
+
+- **fieldSelector**(*查询参数*):string
+
+ [fieldSelector](../common-parameter/common-parameters#fieldselector)
+
+- **labelSelector**(*查询参数*):string
+
+ [labelSelector](../common-parameter/common-parameters#labelselector)
+
+- **limit**(*查询参数*):integer
+
+ [limit](../common-parameter/common-parameters#limit)
+
+- **pretty**(*查询参数*):string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+- **resourceVersion**(*查询参数*):string
+
+ [resourceVersion](../common-parameter/common-parameters#resourceversion)
+
+- **resourceVersionMatch**(*查询参数*):string
+
+ [resourceVersionMatch](../common-parameter/common-parameters#resourceversionmatch)
+
+- **sendInitialEvents**(*查询参数*):boolean
+
+ [sendInitialEvents](../common-parameter/common-parameters#sendinitialevents)
+
+- **timeoutSeconds**(*查询参数*):integer
+
+ [timeoutSeconds](../common-parameter/common-parameters#timeoutseconds)
+
+- **watch**(*查询参数*):boolean
+
+ [watch](../common-parameter/common-parameters#watch)
+
+#### 响应
+
+200 ([ResourceRegistryList](../search-resources/resource-registry-v1alpha1#resourceregistrylist)): OK
+
+### `create`:创建一个 ResourceRegistry
+
+#### HTTP 请求
+
+POST /apis/search.karmada.io/v1alpha1/resourceregistries
+
+#### 参数
+
+- **body**: [ResourceRegistry](../search-resources/resource-registry-v1alpha1#resourceregistry),必选
+
+
+- **dryRun**(*查询参数*):string
+
+ [dryRun](../common-parameter/common-parameters#dryrun)
+
+- **fieldManager**(*查询参数*):string
+
+ [fieldManager](../common-parameter/common-parameters#fieldmanager)
+
+- **fieldValidation**(*查询参数*):string
+
+ [fieldValidation](../common-parameter/common-parameters#fieldvalidation)
+
+- **pretty**(*查询参数*):string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+#### 响应
+
+200 ([ResourceRegistry](../search-resources/resource-registry-v1alpha1#resourceregistry)): OK
+
+201 ([ResourceRegistry](../search-resources/resource-registry-v1alpha1#resourceregistry)): Created
+
+202 ([ResourceRegistry](../search-resources/resource-registry-v1alpha1#resourceregistry)): Accepted
+
+### `update`:更新指定的 ResourceRegistry
+
+#### HTTP 请求
+
+PUT /apis/search.karmada.io/v1alpha1/resourceregistries/{name}
+
+#### 参数
+
+- **name**(*路径参数*):string,必选
+
+ ResourceRegistry 的名称
+
+- **body**: [ResourceRegistry](../search-resources/resource-registry-v1alpha1#resourceregistry),必选
+
+
+- **dryRun**(*查询参数*):string
+
+ [dryRun](../common-parameter/common-parameters#dryrun)
+
+- **fieldManager**(*查询参数*):string
+
+ [fieldManager](../common-parameter/common-parameters#fieldmanager)
+
+- **fieldValidation**(*查询参数*):string
+
+ [fieldValidation](../common-parameter/common-parameters#fieldvalidation)
+
+- **pretty**(*查询参数*):string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+#### 响应
+
+200 ([ResourceRegistry](../search-resources/resource-registry-v1alpha1#resourceregistry)): OK
+
+201 ([ResourceRegistry](../search-resources/resource-registry-v1alpha1#resourceregistry)): Created
+
+### `update`:更新指定 ResourceRegistry 的状态
+
+#### HTTP 请求
+
+PUT /apis/search.karmada.io/v1alpha1/resourceregistries/{name}/status
+
+#### 参数
+
+- **name**(*路径参数*):string,必选
+
+ ResourceRegistry 的名称
+
+- **body**: [ResourceRegistry](../search-resources/resource-registry-v1alpha1#resourceregistry),必选
+
+
+- **dryRun**(*查询参数*):string
+
+ [dryRun](../common-parameter/common-parameters#dryrun)
+
+- **fieldManager**(*查询参数*):string
+
+ [fieldManager](../common-parameter/common-parameters#fieldmanager)
+
+- **fieldValidation**(*查询参数*):string
+
+ [fieldValidation](../common-parameter/common-parameters#fieldvalidation)
+
+- **pretty**(*查询参数*):string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+#### 响应
+
+200 ([ResourceRegistry](../search-resources/resource-registry-v1alpha1#resourceregistry)): OK
+
+201 ([ResourceRegistry](../search-resources/resource-registry-v1alpha1#resourceregistry)): Created
+
+### `patch`:更新指定 ResourceRegistry 的部分信息
+
+#### HTTP 请求
+
+PATCH /apis/search.karmada.io/v1alpha1/resourceregistries/{name}
+
+#### 参数
+
+- **name**(*路径参数*):string,必选
+
+ ResourceRegistry 的名称
+
+- **body**: [Patch](../common-definitions/patch#patch),必选
+
+
+- **dryRun**(*查询参数*):string
+
+ [dryRun](../common-parameter/common-parameters#dryrun)
+
+- **fieldManager**(*查询参数*):string
+
+ [fieldManager](../common-parameter/common-parameters#fieldmanager)
+
+- **fieldValidation**(*查询参数*):string
+
+ [fieldValidation](../common-parameter/common-parameters#fieldvalidation)
+
+- **force**(*查询参数*):boolean
+
+ [force](../common-parameter/common-parameters#force)
+
+- **pretty**(*查询参数*):string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+#### 响应
+
+200 ([ResourceRegistry](../search-resources/resource-registry-v1alpha1#resourceregistry)): OK
+
+201 ([ResourceRegistry](../search-resources/resource-registry-v1alpha1#resourceregistry)): Created
+
+### `patch`:更新指定 ResourceRegistry 状态的部分信息
+
+#### HTTP 请求
+
+PATCH /apis/search.karmada.io/v1alpha1/resourceregistries/{name}/status
+
+#### 参数
+
+- **name**(*路径参数*):string,必选
+
+ ResourceRegistry 的名称
+
+- **body**: [Patch](../common-definitions/patch#patch),必选
+
+
+- **dryRun**(*查询参数*):string
+
+ [dryRun](../common-parameter/common-parameters#dryrun)
+
+- **fieldManager**(*查询参数*):string
+
+ [fieldManager](../common-parameter/common-parameters#fieldmanager)
+
+- **fieldValidation**(*查询参数*):string
+
+ [fieldValidation](../common-parameter/common-parameters#fieldvalidation)
+
+- **force**(*查询参数*):boolean
+
+ [force](../common-parameter/common-parameters#force)
+
+- **pretty**(*查询参数*):string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+#### 响应
+
+200 ([ResourceRegistry](../search-resources/resource-registry-v1alpha1#resourceregistry)): OK
+
+201 ([ResourceRegistry](../search-resources/resource-registry-v1alpha1#resourceregistry)): Created
+
+### `delete`:删除一个 ResourceRegistry
+
+#### HTTP 请求
+
+DELETE /apis/search.karmada.io/v1alpha1/resourceregistries/{name}
+
+#### 参数
+
+- **name**(*路径参数*):string,必选
+
+ ResourceRegistry 的名称
+
+- **body**: [DeleteOptions](../common-definitions/delete-options#deleteoptions)
+
+
+- **dryRun**(*查询参数*):string
+
+ [dryRun](../common-parameter/common-parameters#dryrun)
+
+- **gracePeriodSeconds**(*查询参数*):integer
+
+ [gracePeriodSeconds](../common-parameter/common-parameters#graceperiodseconds)
+
+- **pretty**(*查询参数*):string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+- **propagationPolicy**(*查询参数*):string
+
+ [propagationPolicy](../common-parameter/common-parameters#propagationpolicy)
+
+#### 响应
+
+200 ([Status](../common-definitions/status#status)): OK
+
+202 ([Status](../common-definitions/status#status)): Accepted
+
+### `deletecollection`:删除所有 ResourceRegistry
+
+#### HTTP 请求
+
+DELETE /apis/search.karmada.io/v1alpha1/resourceregistries
+
+#### 参数
+
+- **body**: [DeleteOptions](../common-definitions/delete-options#deleteoptions)
+
+
+- **continue**(*查询参数*):string
+
+ [continue](../common-parameter/common-parameters#continue)
+
+- **dryRun**(*查询参数*):string
+
+ [dryRun](../common-parameter/common-parameters#dryrun)
+
+- **fieldSelector**(*查询参数*):string
+
+ [fieldSelector](../common-parameter/common-parameters#fieldselector)
+
+- **gracePeriodSeconds**(*查询参数*):integer
+
+ [gracePeriodSeconds](../common-parameter/common-parameters#graceperiodseconds)
+
+- **labelSelector**(*查询参数*):string
+
+ [labelSelector](../common-parameter/common-parameters#labelselector)
+
+- **limit**(*查询参数*):integer
+
+ [limit](../common-parameter/common-parameters#limit)
+
+- **pretty**(*查询参数*):string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+- **propagationPolicy**(*查询参数*):string
+
+ [propagationPolicy](../common-parameter/common-parameters#propagationpolicy)
+
+- **resourceVersion**(*查询参数*):string
+
+ [resourceVersion](../common-parameter/common-parameters#resourceversion)
+
+- **resourceVersionMatch**(*查询参数*):string
+
+ [resourceVersionMatch](../common-parameter/common-parameters#resourceversionmatch)
+
+- **sendInitialEvents**(*查询参数*):boolean
+
+ [sendInitialEvents](../common-parameter/common-parameters#sendinitialevents)
+
+- **timeoutSeconds**(*查询参数*):integer
+
+ [timeoutSeconds](../common-parameter/common-parameters#timeoutseconds)
+
+#### 响应
+
+200 ([Status](../common-definitions/status#status)): OK
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/karmada-api/work-resources/cluster-resource-binding-v1alpha2.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/karmada-api/work-resources/cluster-resource-binding-v1alpha2.md
new file mode 100644
index 000000000..1db6f137a
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/karmada-api/work-resources/cluster-resource-binding-v1alpha2.md
@@ -0,0 +1,446 @@
+---
+api_metadata:
+ apiVersion: "work.karmada.io/v1alpha2"
+ import: "github.com/karmada-io/karmada/pkg/apis/work/v1alpha2"
+ kind: "ClusterResourceBinding"
+content_type: "api_reference"
+description: "ClusterResourceBinding represents a binding of a kubernetes resource with a ClusterPropagationPolicy."
+title: "ClusterResourceBinding v1alpha2"
+weight: 3
+auto_generated: true
+---
+
+[//]: # (The file is auto-generated from the Go source code of the component using a generic generator,)
+[//]: # (which is forked from [reference-docs](https://github.com/kubernetes-sigs/reference-docs.)
+[//]: # (To update the reference content, please follow the `reference-api.sh`.)
+
+`apiVersion: work.karmada.io/v1alpha2`
+
+`import "github.com/karmada-io/karmada/pkg/apis/work/v1alpha2"`
+
+## ClusterResourceBinding
+
+ClusterResourceBinding 表示某种 Kubernetes 资源与集群分发策略(ClusterPropagationPolicy)之间的绑定关系。
+
+
+
+- **apiVersion**: work.karmada.io/v1alpha2
+
+- **kind**: ClusterResourceBinding
+
+- **metadata** ([ObjectMeta](../common-definitions/object-meta#objectmeta))
+
+- **spec** ([ResourceBindingSpec](../work-resources/resource-binding-v1alpha2#resourcebindingspec)),必选
+
+  Spec 表示规范。
+
+- **status** ([ResourceBindingStatus](../work-resources/resource-binding-v1alpha2#resourcebindingstatus))
+
+  Status 表示 ResourceBinding 的最新状态。
+
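+ClusterResourceBinding 通常由 Karmada 控制面根据资源模板与 ClusterPropagationPolicy 自动创建和维护,一般不需要手工编写。下面仅给出一个示意性的清单片段,帮助理解 spec 中 resource 与 clusters 字段的形态;其中的资源与集群名称均为假设值,完整字段定义见 [ResourceBindingSpec](../work-resources/resource-binding-v1alpha2#resourcebindingspec)。
+
+```yaml
+apiVersion: work.karmada.io/v1alpha2
+kind: ClusterResourceBinding
+metadata:
+  name: example-clusterrole          # 假设的名称
+spec:
+  resource:                          # 被绑定的集群级资源
+    apiVersion: rbac.authorization.k8s.io/v1
+    kind: ClusterRole
+    name: example
+  clusters:                          # 调度结果:目标成员集群(假设值)
+    - name: member1
+    - name: member2
+```
+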
+## ClusterResourceBindingList
+
+ClusterResourceBindingList 中包含 ClusterResourceBinding 列表。
+
+
+
+- **apiVersion**: work.karmada.io/v1alpha2
+
+- **kind**: ClusterResourceBindingList
+
+- **metadata** ([ListMeta](../common-definitions/list-meta#listmeta))
+
+- **items** ([][ClusterResourceBinding](../work-resources/cluster-resource-binding-v1alpha2#clusterresourcebinding)),必选
+
+ items 表示 ClusterResourceBinding 列表。
+
+## 操作
+
+
+
+### `get`:查询指定的 ClusterResourceBinding
+
+#### HTTP 请求
+
+GET /apis/work.karmada.io/v1alpha2/clusterresourcebindings/{name}
+
+#### 参数
+
+- **name**(*路径参数*):string,必选
+
+ ClusterResourceBinding 名称
+
+- **pretty**(*查询参数*):string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+#### 响应
+
+200 ([ClusterResourceBinding](../work-resources/cluster-resource-binding-v1alpha2#clusterresourcebinding)): OK
+
+### `get`:查询指定 ClusterResourceBinding 的状态
+
+#### HTTP 请求
+
+GET /apis/work.karmada.io/v1alpha2/clusterresourcebindings/{name}/status
+
+#### 参数
+
+- **name**(*路径参数*):string,必选
+
+ ClusterResourceBinding 名称
+
+- **pretty**(*查询参数*):string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+#### 响应
+
+200 ([ClusterResourceBinding](../work-resources/cluster-resource-binding-v1alpha2#clusterresourcebinding)): OK
+
+### `list`:查询全部 ClusterResourceBinding
+
+#### HTTP 请求
+
+GET /apis/work.karmada.io/v1alpha2/clusterresourcebindings
+
+#### 参数
+
+- **allowWatchBookmarks**(*查询参数*):boolean
+
+ [allowWatchBookmarks](../common-parameter/common-parameters#allowwatchbookmarks)
+
+- **continue**(*查询参数*):string
+
+ [continue](../common-parameter/common-parameters#continue)
+
+- **fieldSelector**(*查询参数*):string
+
+ [fieldSelector](../common-parameter/common-parameters#fieldselector)
+
+- **labelSelector**(*查询参数*):string
+
+ [labelSelector](../common-parameter/common-parameters#labelselector)
+
+- **limit**(*查询参数*):integer
+
+ [limit](../common-parameter/common-parameters#limit)
+
+- **pretty**(*查询参数*):string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+- **resourceVersion**(*查询参数*):string
+
+ [resourceVersion](../common-parameter/common-parameters#resourceversion)
+
+- **resourceVersionMatch**(*查询参数*):string
+
+ [resourceVersionMatch](../common-parameter/common-parameters#resourceversionmatch)
+
+- **sendInitialEvents**(*查询参数*):boolean
+
+ [sendInitialEvents](../common-parameter/common-parameters#sendinitialevents)
+
+- **timeoutSeconds**(*查询参数*):integer
+
+ [timeoutSeconds](../common-parameter/common-parameters#timeoutseconds)
+
+- **watch**(*查询参数*):boolean
+
+ [watch](../common-parameter/common-parameters#watch)
+
+#### 响应
+
+200 ([ClusterResourceBindingList](../work-resources/cluster-resource-binding-v1alpha2#clusterresourcebindinglist)): OK
+
+### `create`:创建一个 ClusterResourceBinding
+
+#### HTTP 请求
+
+POST /apis/work.karmada.io/v1alpha2/clusterresourcebindings
+
+#### 参数
+
+- **body**: [ClusterResourceBinding](../work-resources/cluster-resource-binding-v1alpha2#clusterresourcebinding),必选
+
+
+- **dryRun**(*查询参数*):string
+
+ [dryRun](../common-parameter/common-parameters#dryrun)
+
+- **fieldManager**(*查询参数*):string
+
+ [fieldManager](../common-parameter/common-parameters#fieldmanager)
+
+- **fieldValidation**(*查询参数*):string
+
+ [fieldValidation](../common-parameter/common-parameters#fieldvalidation)
+
+- **pretty**(*查询参数*):string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+#### 响应
+
+200 ([ClusterResourceBinding](../work-resources/cluster-resource-binding-v1alpha2#clusterresourcebinding)): OK
+
+201 ([ClusterResourceBinding](../work-resources/cluster-resource-binding-v1alpha2#clusterresourcebinding)): Created
+
+202 ([ClusterResourceBinding](../work-resources/cluster-resource-binding-v1alpha2#clusterresourcebinding)): Accepted
+
+### `update`:更新指定的 ClusterResourceBinding
+
+#### HTTP 请求
+
+PUT /apis/work.karmada.io/v1alpha2/clusterresourcebindings/{name}
+
+#### 参数
+
+- **name**(*路径参数*):string,必选
+
+ ClusterResourceBinding 名称
+
+- **body**: [ClusterResourceBinding](../work-resources/cluster-resource-binding-v1alpha2#clusterresourcebinding),必选
+
+
+- **dryRun**(*查询参数*):string
+
+ [dryRun](../common-parameter/common-parameters#dryrun)
+
+- **fieldManager**(*查询参数*):string
+
+ [fieldManager](../common-parameter/common-parameters#fieldmanager)
+
+- **fieldValidation**(*查询参数*):string
+
+ [fieldValidation](../common-parameter/common-parameters#fieldvalidation)
+
+- **pretty**(*查询参数*):string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+#### 响应
+
+200 ([ClusterResourceBinding](../work-resources/cluster-resource-binding-v1alpha2#clusterresourcebinding)): OK
+
+201 ([ClusterResourceBinding](../work-resources/cluster-resource-binding-v1alpha2#clusterresourcebinding)): Created
+
+### `update`:更新指定 ClusterResourceBinding 的状态
+
+#### HTTP 请求
+
+PUT /apis/work.karmada.io/v1alpha2/clusterresourcebindings/{name}/status
+
+#### 参数
+
+- **name**(*路径参数*):string,必选
+
+ ClusterResourceBinding 名称
+
+- **body**: [ClusterResourceBinding](../work-resources/cluster-resource-binding-v1alpha2#clusterresourcebinding),必选
+
+
+- **dryRun**(*查询参数*):string
+
+ [dryRun](../common-parameter/common-parameters#dryrun)
+
+- **fieldManager**(*查询参数*):string
+
+ [fieldManager](../common-parameter/common-parameters#fieldmanager)
+
+- **fieldValidation**(*查询参数*):string
+
+ [fieldValidation](../common-parameter/common-parameters#fieldvalidation)
+
+- **pretty**(*查询参数*):string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+#### 响应
+
+200 ([ClusterResourceBinding](../work-resources/cluster-resource-binding-v1alpha2#clusterresourcebinding)): OK
+
+201 ([ClusterResourceBinding](../work-resources/cluster-resource-binding-v1alpha2#clusterresourcebinding)): Created
+
+### `patch`:更新指定 ClusterResourceBinding 的部分信息
+
+#### HTTP 请求
+
+PATCH /apis/work.karmada.io/v1alpha2/clusterresourcebindings/{name}
+
+#### 参数
+
+- **name**(*路径参数*):string,必选
+
+ ClusterResourceBinding 名称
+
+- **body**: [Patch](../common-definitions/patch#patch),必选
+
+
+- **dryRun**(*查询参数*):string
+
+ [dryRun](../common-parameter/common-parameters#dryrun)
+
+- **fieldManager**(*查询参数*):string
+
+ [fieldManager](../common-parameter/common-parameters#fieldmanager)
+
+- **fieldValidation**(*查询参数*):string
+
+ [fieldValidation](../common-parameter/common-parameters#fieldvalidation)
+
+- **force**(*查询参数*):boolean
+
+ [force](../common-parameter/common-parameters#force)
+
+- **pretty**(*查询参数*):string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+#### 响应
+
+200 ([ClusterResourceBinding](../work-resources/cluster-resource-binding-v1alpha2#clusterresourcebinding)): OK
+
+201 ([ClusterResourceBinding](../work-resources/cluster-resource-binding-v1alpha2#clusterresourcebinding)): Created
+
+### `patch`:更新指定 ClusterResourceBinding 状态的部分信息
+
+#### HTTP 请求
+
+PATCH /apis/work.karmada.io/v1alpha2/clusterresourcebindings/{name}/status
+
+#### 参数
+
+- **name**(*路径参数*):string,必选
+
+ ClusterResourceBinding 名称
+
+- **body**: [Patch](../common-definitions/patch#patch),必选
+
+
+- **dryRun**(*查询参数*):string
+
+ [dryRun](../common-parameter/common-parameters#dryrun)
+
+- **fieldManager**(*查询参数*):string
+
+ [fieldManager](../common-parameter/common-parameters#fieldmanager)
+
+- **fieldValidation**(*查询参数*):string
+
+ [fieldValidation](../common-parameter/common-parameters#fieldvalidation)
+
+- **force**(*查询参数*):boolean
+
+ [force](../common-parameter/common-parameters#force)
+
+- **pretty**(*查询参数*):string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+#### 响应
+
+200 ([ClusterResourceBinding](../work-resources/cluster-resource-binding-v1alpha2#clusterresourcebinding)): OK
+
+201 ([ClusterResourceBinding](../work-resources/cluster-resource-binding-v1alpha2#clusterresourcebinding)): Created
+
+### `delete`:删除一个 ClusterResourceBinding
+
+#### HTTP 请求
+
+DELETE /apis/work.karmada.io/v1alpha2/clusterresourcebindings/{name}
+
+#### 参数
+
+- **name**(*路径参数*):string,必选
+
+ ClusterResourceBinding 名称
+
+- **body**: [DeleteOptions](../common-definitions/delete-options#deleteoptions)
+
+
+- **dryRun**(*查询参数*):string
+
+ [dryRun](../common-parameter/common-parameters#dryrun)
+
+- **gracePeriodSeconds**(*查询参数*):integer
+
+ [gracePeriodSeconds](../common-parameter/common-parameters#graceperiodseconds)
+
+- **pretty**(*查询参数*):string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+- **propagationPolicy**(*查询参数*):string
+
+ [propagationPolicy](../common-parameter/common-parameters#propagationpolicy)
+
+#### 响应
+
+200 ([Status](../common-definitions/status#status)): OK
+
+202 ([Status](../common-definitions/status#status)): Accepted
+
+### `deletecollection`:删除 ClusterResourceBinding 的集合
+
+#### HTTP 请求
+
+DELETE /apis/work.karmada.io/v1alpha2/clusterresourcebindings
+
+#### 参数
+
+- **body**: [DeleteOptions](../common-definitions/delete-options#deleteoptions)
+
+
+- **continue**(*查询参数*):string
+
+ [continue](../common-parameter/common-parameters#continue)
+
+- **dryRun**(*查询参数*):string
+
+ [dryRun](../common-parameter/common-parameters#dryrun)
+
+- **fieldSelector**(*查询参数*):string
+
+ [fieldSelector](../common-parameter/common-parameters#fieldselector)
+
+- **gracePeriodSeconds**(*查询参数*):integer
+
+ [gracePeriodSeconds](../common-parameter/common-parameters#graceperiodseconds)
+
+- **labelSelector**(*查询参数*):string
+
+ [labelSelector](../common-parameter/common-parameters#labelselector)
+
+- **limit**(*查询参数*):integer
+
+ [limit](../common-parameter/common-parameters#limit)
+
+- **pretty**(*查询参数*):string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+- **propagationPolicy**(*查询参数*):string
+
+ [propagationPolicy](../common-parameter/common-parameters#propagationpolicy)
+
+- **resourceVersion**(*查询参数*):string
+
+ [resourceVersion](../common-parameter/common-parameters#resourceversion)
+
+- **resourceVersionMatch**(*查询参数*):string
+
+ [resourceVersionMatch](../common-parameter/common-parameters#resourceversionmatch)
+
+- **sendInitialEvents**(*查询参数*):boolean
+
+ [sendInitialEvents](../common-parameter/common-parameters#sendinitialevents)
+
+- **timeoutSeconds**(*查询参数*):integer
+
+ [timeoutSeconds](../common-parameter/common-parameters#timeoutseconds)
+
+#### 响应
+
+200 ([Status](../common-definitions/status#status)): OK
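+
+若要按标签批量删除(对应上文的 labelSelector 查询参数),可以直接调用该集合端点。以下是一个最小示例(仅作演示:假设 kubeconfig 已指向 Karmada apiserver,标签 app=demo 与端口 8001 均为假设值):
+
+```bash
+# 通过 kubectl proxy 在本地转发 Karmada apiserver
+kubectl --kubeconfig /etc/karmada/karmada-apiserver.config proxy --port=8001 &
+
+# 删除所有带有 app=demo 标签的 ClusterResourceBinding
+curl -X DELETE \
+  "http://127.0.0.1:8001/apis/work.karmada.io/v1alpha2/clusterresourcebindings?labelSelector=app%3Ddemo"
+```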
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/karmada-api/work-resources/resource-binding-v1alpha2.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/karmada-api/work-resources/resource-binding-v1alpha2.md
new file mode 100644
index 000000000..af6b5ee47
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/karmada-api/work-resources/resource-binding-v1alpha2.md
@@ -0,0 +1,1160 @@
+---
+api_metadata:
+ apiVersion: "work.karmada.io/v1alpha2"
+ import: "github.com/karmada-io/karmada/pkg/apis/work/v1alpha2"
+ kind: "ResourceBinding"
+content_type: "api_reference"
+description: "ResourceBinding represents a binding of a kubernetes resource with a propagation policy."
+title: "ResourceBinding v1alpha2"
+weight: 2
+auto_generated: true
+---
+
+[//]: # (The file is auto-generated from the Go source code of the component using a generic generator,)
+[//]: # (which is forked from [reference-docs](https://github.com/kubernetes-sigs/reference-docs.)
+[//]: # (To update the reference content, please follow the `reference-api.sh`.)
+
+`apiVersion: work.karmada.io/v1alpha2`
+
+`import "github.com/karmada-io/karmada/pkg/apis/work/v1alpha2"`
+
+## ResourceBinding
+
+ResourceBinding 表示某种 kubernetes 资源与分发策略之间的绑定关系。
+
+
+
+- **apiVersion**: work.karmada.io/v1alpha2
+
+- **kind**: ResourceBinding
+
+- **metadata** ([ObjectMeta](../common-definitions/object-meta#objectmeta))
+
+- **spec** ([ResourceBindingSpec](../work-resources/resource-binding-v1alpha2#resourcebindingspec)),必选
+
+ Spec 表示规范。
+
+- **status** ([ResourceBindingStatus](../work-resources/resource-binding-v1alpha2#resourcebindingstatus))
+
+  Status 表示 ResourceBinding 的最新状态。
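+
+ResourceBinding 通常由 Karmada 控制器根据分发策略自动创建和维护,一般无需手动创建。下面是一个查看 ResourceBinding 的最小示例(仅作演示:假设 kubeconfig 已指向 Karmada apiserver,对象名称 nginx-deployment 为假设值):
+
+```bash
+# 列出 default 命名空间下的 ResourceBinding
+kubectl get resourcebindings.work.karmada.io -n default
+
+# 查看某个 ResourceBinding 的调度结果(spec.clusters)
+kubectl get resourcebindings.work.karmada.io nginx-deployment -n default \
+  -o jsonpath='{.spec.clusters}'
+```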
+
+## ResourceBindingSpec
+
+ResourceBindingSpec 表示预期的 ResourceBinding。
+
+
+
+- **resource** (ObjectReference),必选
+
+ Resource 表示要分发的 Kubernetes 资源。
+
+
+
+ *ObjectReference 包含足够的信息以便定位当前集群所引用的资源。*
+
+ - **resource.apiVersion**(string),必选
+
+ **apiVersion**表示所引用资源的 API 版本。
+
+ - **resource.kind**(string),必选
+
+ Kind 表示所引用资源的类别。
+
+ - **resource.name**(string),必选
+
+ Name 表示所引用资源的名称。
+
+ - **resource.namespace**(string)
+
+    Namespace 表示所引用资源所在的命名空间。命名空间范围的资源必须指定命名空间;集群范围的资源(例如 ClusterRole)则无需指定。如果未指定命名空间,则表示该资源是集群范围的。
+
+ - **resource.resourceVersion**(string)
+
+ ResourceVersion 表示所引用资源的内部版本,客户端可用其确定资源的变更时间。
+
+ - **resource.uid**(string)
+
+ UID 表示所引用资源的唯一标识。
+
+- **clusters** ([]TargetCluster)
+
+ Clusters 表示待部署资源的目标成员集群。
+
+
+
+ *TargetCluster 表示成员集群的标识符。*
+
+ - **clusters.name**(string),必选
+
+ Name 表示目标集群的名称。
+
+ - **clusters.replicas**(int32)
+
+ Replicas 表示目标集群中的副本。
+
+- **conflictResolution**(string)
+
+ ConflictResolution 表示当目标集群中已存在正在分发的资源时,处理潜在冲突的方式。
+
+ 默认为 Abort,表示停止分发资源以避免意外覆盖。将原集群资源迁移到 Karmada 时,可设置为 Overwrite。此时,冲突是可预测的,且 Karmada 可通过覆盖来接管资源。
+
+- **failover** (FailoverBehavior)
+
+ Failover 表示 Karmada 在故障场景中迁移应用的方式。可直接从所关联的 PropagationPolicy(或 ClusterPropagationPolicy)继承。
+
+
+
+ *FailoverBehavior 表示应用或集群的故障转移。*
+
+ - **failover.application** (ApplicationFailoverBehavior)
+
+ Application 表示应用的故障转移。如果值为 nil,则禁用故障转移。如果值不为 nil,则 PropagateDeps 应设置为 true,以便依赖项随应用一起迁移。
+
+
+
+ *ApplicationFailoverBehavior 表示应用的故障转移。*
+
+ - **failover.application.decisionConditions** (DecisionConditions),必选
+
+ DecisionConditions 表示执行故障转移的先决条件。只有满足所有条件,才能执行故障转移。当前条件为 TolerationSeconds(可选)。
+
+
+
+ *DecisionConditions 表示执行故障转移的先决条件。*
+
+ - **failover.application.decisionConditions.tolerationSeconds**(int32)
+
+ TolerationSeconds 表示应用达到预期状态后,Karmada 在执行故障转移之前应等待的时间。如果未指定等待时间,Karmada 将立即执行故障转移。默认为 300 秒。
+
+ - **failover.application.gracePeriodSeconds**(int32)
+
+      GracePeriodSeconds 表示从新集群中删除应用之前的最长等待时间(以秒为单位)。仅当 PurgeMode 设置为 Graciously 时才需要设置该字段,默认为 600 秒。如果新集群中的应用无法达到健康状态,Karmada 将在达到最长等待时间后删除应用。取值必须为正整数。
+
+ - **failover.application.purgeMode**(string)
+
+ PurgeMode 表示原集群中应用的处理方式。取值包括 Immediately、Graciously 和 Never。默认为 Graciously。
+
+- **gracefulEvictionTasks** ([]GracefulEvictionTask)
+
+ GracefulEvictionTasks 表示驱逐任务,预期以优雅方式执行驱逐。工作流程如下:1. 一旦控制器(例如 taint-manager)决定从目标集群中驱逐当前 ResourceBinding 或 ClusterResourceBinding 所引用的资源,就会从 Clusters(.spec.Clusters)中删除副本,并构建一个优雅的驱逐任务。
+
+
+2. 调度器可以执行重新调度,并可能选择一个替代集群来接管正在驱逐的工作负载(资源)。
+
+3. 优雅驱逐控制器负责优雅驱逐任务,并在替代集群上的工作负载(资源)可用或超过宽限终止期(默认为 10 分钟)后执行最终删除。
+
+
+
+*GracefulEvictionTask 表示优雅驱逐任务。*
+
+- **gracefulEvictionTasks.fromCluster**(string),必选
+
+ FromCluster 表示需要执行驱逐的集群。
+
+- **gracefulEvictionTasks.producer**(string),必选
+
+ Producer 表示触发驱逐的控制器。
+
+- **gracefulEvictionTasks.reason**(string),必选
+
+ Reason 是一个程序标识符,说明驱逐的原因。生产者可以定义该字段的预期值和含义,以及这些值是否可被视为有保障的 API。取值应该是一个 CamelCase 字符串。此字段不能为空。
+
+- **gracefulEvictionTasks.creationTimestamp** (Time)
+
+ CreationTimestamp 是一个时间戳,表示创建对象时服务器上的时间。为避免时间不一致,客户端不得设置此值。它以 RFC3339 形式表示(如 2021-04-25T10:02:10Z),并采用 UTC 时间。
+
+ 由系统填充。只读。
+
+
+
+ *Time 是 time.Time 的包装器,它支持对 YAML 和 JSON 的正确编组。time 包的许多工厂方法提供了包装器。*
+
+- **gracefulEvictionTasks.gracePeriodSeconds**(int32)
+
+  GracePeriodSeconds 表示对象被删除前的最长等待时间(以秒为单位)。如果新集群中的应用无法达到健康状态,Karmada 将在达到最长等待时间后删除该对象。取值只能为正整数。它不能与 SuppressDeletion 共存。
+
+- **gracefulEvictionTasks.message**(string)
+
+ Message 是有关驱逐的详细信息(人类可读消息)。可以是空字符串。
+
+- **gracefulEvictionTasks.replicas**(int32)
+
+ Replicas 表示应驱逐的副本数量。对于没有副本的资源类型,忽略该字段。
+
+- **gracefulEvictionTasks.suppressDeletion**(boolean)
+
+ SuppressDeletion 表示宽限期将持续存在,直至工具或人工干预为止。它不能与 GracePeriodSeconds 共存。
+
+- **placement** (Placement)
+
+  Placement 表示选定集群以及分发资源的规则。
+
+
+
+ *Placement 表示选定集群的规则。*
+
+ - **placement.clusterAffinities** ([]ClusterAffinityTerm)
+
+    ClusterAffinities 表示多个集群组的调度限制(ClusterAffinityTerm 指定每种限制)。
+
+    调度器将按照这些组在规范中出现的顺序逐个评估;不满足调度限制的组将被忽略,这意味着该组中的所有集群都不会被选择,除非这些集群也属于下一个组(同一集群可以属于多个组)。
+
+ 如果任何组都不满足调度限制,则调度失败,这意味着不会选择任何集群。
+
+ 注意:
+ 1. ClusterAffinities 不能与 ClusterAffinity 共存。
+ 2. 如果未同时设置 ClusterAffinities 和 ClusterAffinity,则任何集群都可以作为调度候选集群。
+
+ 潜在用例1:本地数据中心的私有集群为主集群组,集群提供商的托管集群是辅助集群组。Karmada 调度器更愿意将工作负载调度到主集群组,只有在主集群组不满足限制(如缺乏资源)的情况下,才会考虑辅助集群组。
+
+ 潜在用例2:对于容灾场景,系统管理员可定义主集群组和备份集群组,工作负载将首先调度到主集群组,当主集群组中的集群发生故障(如数据中心断电)时,Karmada 调度器可以将工作负载迁移到备份集群组。
+
+
+
+ *ClusterAffinityTerm 选择集群。*
+
+ - **placement.clusterAffinities.affinityName**(string),必选
+
+ AffinityName 是集群组的名称。
+
+ - **placement.clusterAffinities.clusterNames** ([]string)
+
+ ClusterNames 罗列待选择的集群。
+
+ - **placement.clusterAffinities.exclude** ([]string)
+
+ ExcludedClusters 罗列待忽略的集群。
+
+ - **placement.clusterAffinities.fieldSelector** (FieldSelector)
+
+ FieldSelector 是一个按字段选择成员集群的过滤器。匹配表达式的键(字段)为 provider、region 或 zone,匹配表达式的运算符为 In 或 NotIn。如果值不为 nil,也未留空,仅选择与此过滤器匹配的集群。
+
+
+
+ *FieldSelector 是一个字段过滤器。*
+
+ - **placement.clusterAffinities.fieldSelector.matchExpressions** ([][NodeSelectorRequirement](../common-definitions/node-selector-requirement#nodeselectorrequirement))
+
+ 字段选择器要求列表。
+
+ - **placement.clusterAffinities.labelSelector** ([LabelSelector](../common-definitions/label-selector#labelselector))
+
+ LabelSelector 是一个按标签选择成员集群的过滤器。如果值不为 nil,也未留空,仅选择与此过滤器匹配的集群。
+
+- **placement.clusterAffinity** (ClusterAffinity)
+
+ ClusterAffinity 表示对某组集群的调度限制。注意:
+ 1. ClusterAffinity 不能与 ClusterAffinities 共存。
+ 2. 如果未同时设置 ClusterAffinities 和 ClusterAffinity,则任何集群都可以作为调度候选集群。
+
+
+
+ *ClusterAffinity 表示用于选择集群的过滤器。*
+
+ - **placement.clusterAffinity.clusterNames** ([]string)
+
+ ClusterNames 罗列待选择的集群。
+
+ - **placement.clusterAffinity.exclude** ([]string)
+
+ ExcludedClusters 罗列待忽略的集群。
+
+ - **placement.clusterAffinity.fieldSelector** (FieldSelector)
+
+ FieldSelector 是一个按字段选择成员集群的过滤器。匹配表达式的键(字段)为 provider、region 或 zone,匹配表达式的运算符为 In 或 NotIn。如果值不为 nil,也未留空,仅选择与此过滤器匹配的集群。
+
+
+
+ *FieldSelector 是一个字段过滤器。*
+
+ - **placement.clusterAffinity.fieldSelector.matchExpressions** ([][NodeSelectorRequirement](../common-definitions/node-selector-requirement#nodeselectorrequirement))
+
+ 字段选择器要求列表。
+
+ - **placement.clusterAffinity.labelSelector** ([LabelSelector](../common-definitions/label-selector#labelselector))
+
+ LabelSelector 是一个按标签选择成员集群的过滤器。如果值不为 nil,也未留空,仅选择与此过滤器匹配的集群。
+
+ - **placement.clusterTolerations** ([]Toleration)
+
+ ClusterTolerations 表示容忍度。
+
+
+
+ *附加此容忍度的 Pod 能够容忍任何使用匹配运算符 <operator> 匹配三元组 <key,value,effect> 所得到的污点。*
+
+ - **placement.clusterTolerations.effect**(string)
+
+ Effect 表示要匹配的污点效果。留空表示匹配所有污点效果。如果要设置此字段,允许的值为 NoSchedule、PreferNoSchedule 或 NoExecute。
+
+ 枚举值包括:
+ - `"NoExecute"`:任何不能容忍该污点的 Pod 都会被驱逐。当前由 NodeController 强制执行。
+      - `"NoSchedule"`:如果新 Pod 无法容忍该污点,则不允许将其调度到该节点上,但允许所有未经调度器而直接提交给 Kubelet 的 Pod 启动,并允许节点上已存在的 Pod 继续运行。由调度器强制执行。
+ - `"PreferNoSchedule"`:和 TaintEffectNoSchedule 相似,不同的是调度器尽量避免将新 Pod 调度到具有该污点的节点上,除非没有其他节点可调度。由调度器强制执行。
+
+ - **placement.clusterTolerations.key**(string)
+
+      Key 是容忍度的污点键。留空表示匹配所有污点键。如果键为空,则运算符必须为 Exists,所有值和所有键都会被匹配。
+
+ - **placement.clusterTolerations.operator**(string)
+
+ Operator 表示一个键与值的关系。有效的运算符包括 Exists 和 Equal。默认为 Equal。Exists 相当于将值设置为通配符,因此一个 Pod 可以容忍特定类别的所有污点。
+
+ 枚举值包括:
+ - `"Equal"`
+ - `"Exists"`
+
+ - **placement.clusterTolerations.tolerationSeconds**(int64)
+
+ TolerationSeconds 表示容忍度容忍污点的时间段(Effect 的取值为 NoExecute,否则忽略此字段)。默认情况下,不设置此字段,表示永远容忍污点(不驱逐)。零和负值将被系统视为 0(立即驱逐)。
+
+ - **placement.clusterTolerations.value**(string)
+
+ Value 是容忍度匹配到的污点值。如果运算符为 Exists,则值应为空,否则就是一个普通字符串。
+
+ - **placement.replicaScheduling** (ReplicaSchedulingStrategy)
+
+ ReplicaScheduling 表示将 spec 中规约的副本资源(例如 Deployments、Statefulsets)分发到成员集群时处理副本数量的调度策略。
+
+
+
+ *ReplicaSchedulingStrategy 表示副本的分配策略。*
+
+ - **placement.replicaScheduling.replicaDivisionPreference**(string)
+
+ 当 ReplicaSchedulingType 设置为 Divided 时,由 ReplicaDivisionPreference 确定副本的分配策略。取值包括 Aggregated 和 Weighted。Aggregated:将副本分配给尽可能少的集群,同时考虑集群的资源可用性。Weighted:根据 WeightPreference 按权重分配副本。
+
+ - **placement.replicaScheduling.replicaSchedulingType**(string)
+
+      ReplicaSchedulingType 确定 Karmada 分发资源副本的调度方式。取值包括 Duplicated 和 Divided。Duplicated:将资源的相同副本数原样复制到每个候选成员集群。Divided:根据有效候选成员集群的数量分配副本,每个集群的副本数由 ReplicaDivisionPreference 确定。
+
+ - **placement.replicaScheduling.weightPreference** (ClusterPreferences)
+
+ WeightPreference 描述每个集群或每组集群的权重。如果 ReplicaDivisionPreference 设置为 Weighted,但 WeightPreference 未设置,调度器将为所有集群设置相同的权重。
+
+
+
+ *ClusterPreferences 描述每个集群或每组集群的权重。*
+
+ - **placement.replicaScheduling.weightPreference.dynamicWeight**(string)
+
+ DynamicWeight 指生成动态权重列表的因子。如果指定,StaticWeightList 将被忽略。
+
+ - **placement.replicaScheduling.weightPreference.staticWeightList** ([]StaticClusterWeight)
+
+ StaticWeightList 罗列静态集群权重。
+
+
+
+ *StaticClusterWeight 定义静态集群权重。*
+
+ - **placement.replicaScheduling.weightPreference.staticWeightList.targetCluster** (ClusterAffinity),必选
+
+ TargetCluster 是用于选择集群的过滤条件。
+
+
+
+ *ClusterAffinity 是用于选择集群的过滤条件。*
+
+ - **placement.replicaScheduling.weightPreference.staticWeightList.targetCluster.clusterNames** ([]string)
+
+ ClusterNames 罗列待选择的集群。
+
+ - **placement.replicaScheduling.weightPreference.staticWeightList.targetCluster.exclude** ([]string)
+
+ ExcludedClusters 罗列待忽略的集群。
+
+ - **placement.replicaScheduling.weightPreference.staticWeightList.targetCluster.fieldSelector** (FieldSelector)
+
+ FieldSelector 是一个按字段选择成员集群的过滤器。匹配表达式的键(字段)为 provider、region 或 zone,匹配表达式的运算符为 In 或 NotIn。如果值不为 nil,也未留空,仅选择与此过滤器匹配的集群。
+
+
+
+ *FieldSelector 是一个字段过滤器。*
+
+ - **placement.replicaScheduling.weightPreference.staticWeightList.targetCluster.fieldSelector.matchExpressions** ([][NodeSelectorRequirement](../common-definitions/node-selector-requirement#nodeselectorrequirement))
+
+ 字段选择器要求列表。
+
+ - **placement.replicaScheduling.weightPreference.staticWeightList.targetCluster.labelSelector** ([LabelSelector](../common-definitions/label-selector#labelselector))
+
+ LabelSelector 是一个按标签选择成员集群的过滤器。如果值不为 nil,也未留空,仅选择与此过滤器匹配的集群。
+
+ - **placement.replicaScheduling.weightPreference.staticWeightList.weight**(int64),必选
+
+ Weight表示优先选择 TargetCluster 指定的集群。
+
+ - **placement.spreadConstraints** ([]SpreadConstraint)
+
+ SpreadConstraints 表示调度约束的列表。
+
+
+
+ *SpreadConstraint 表示资源分布的约束。*
+
+ - **placement.spreadConstraints.maxGroups**(int32)
+
+ MaxGroups 表示要选择的集群组的最大数量。
+
+ - **placement.spreadConstraints.minGroups**(int32)
+
+ MinGroups 表示要选择的集群组的最小数量。默认值为 1。
+
+ - **placement.spreadConstraints.spreadByField**(string)
+
+ SpreadByField 是 Karmada 集群 API 中的字段,该 API 用于将成员集群分到不同集群组。资源将被分发到不同的集群组中。可用的字段包括 cluster、region、zone 和 provider。SpreadByField 不能与 SpreadByLabel 共存。如果两个字段都为空,SpreadByField 默认为 cluster。
+
+ - **placement.spreadConstraints.spreadByLabel**(string)
+
+ SpreadByLabel 表示用于将成员集群分到不同集群组的标签键。资源将被分发到不同的集群组中。SpreadByLabel 不能与 SpreadByField 共存。
+
+ - **propagateDeps**(boolean)
+
+ PropagateDeps 表示相关资源是否被自动分发,继承自 PropagationPolicy 或 ClusterPropagationPolicy。默认值为 false。
+
+ - **replicaRequirements** (ReplicaRequirements)
+
+ ReplicaRequirements 表示每个副本的需求。
+
+
+
+ *ReplicaRequirements 表示每个副本的需求。*
+
+ - **replicaRequirements.nodeClaim** (NodeClaim)
+
+ NodeClaim 表示每个副本所需的节点声明 HardNodeAffinity、NodeSelector 和 Tolerations。
+
+
+
+ *NodeClaim 表示每个副本所需的节点声明 HardNodeAffinity、NodeSelector 和 Tolerations。*
+
+ - **replicaRequirements.nodeClaim.hardNodeAffinity** (NodeSelector)
+
+ 一个节点选择器可以匹配要调度到一组节点上的一个或多个标签。以节点选择条件形式表示的节点选择器之间是“或”的关系。注意:因为该字段对 Pod 调度有硬性限制,所以此处仅包含 PodSpec.Affinity.NodeAffinity 中 RequiredDuringSchedulingIgnoredDuringExecution。
+
+
+
+ *一个节点选择器可以匹配要调度到一组节点上的一个或多个标签。以节点选择条件为形式的节点选择器之间是“或”的关系。*
+
+ - **replicaRequirements.nodeClaim.hardNodeAffinity.nodeSelectorTerms** ([]NodeSelectorTerm),必选
+
+ 节点选择条件列表。这些条件之间是“或”的关系。
+
+
+
+ *如果取值为 null 或留空,不会匹配任何对象。这些条件的要求是“与”的关系。TopologySelectorTerm 类型是 NodeSelectorTerm 的子集。*
+
+ - **replicaRequirements.nodeClaim.hardNodeAffinity.nodeSelectorTerms.matchExpressions** ([][NodeSelectorRequirement](../common-definitions/node-selector-requirement#nodeselectorrequirement))
+
+ 按节点标签列出的节点选择器需求列表。
+
+ - **replicaRequirements.nodeClaim.hardNodeAffinity.nodeSelectorTerms.matchFields** ([][NodeSelectorRequirement](../common-definitions/node-selector-requirement#nodeselectorrequirement))
+
+ 按节点字段列出的节点选择器需求列表。
+
+ - **replicaRequirements.nodeClaim.nodeSelector** (map[string]string)
+
+      NodeSelector 是一个选择器,只有当它与节点匹配时,Pod 才适合在该节点上运行。该选择器必须与节点的标签匹配,才能将 Pod 调度到该节点上。
+
+ - **replicaRequirements.nodeClaim.tolerations** ([]Toleration)
+
+ 如果设置了此字段,则作为 Pod 的容忍度。
+
+
+
+ *附加此容忍度的 Pod 能够容忍任何使用匹配运算符 <operator> 匹配三元组 <key,value,effect> 所得到的污点。*
+
+ - **replicaRequirements.nodeClaim.tolerations.effect**(string)
+
+ Effect 表示要匹配的污点效果。留空表示匹配所有污点效果。如果要设置此字段,允许的值为 NoSchedule、PreferNoSchedule 或 NoExecute。
+
+ 枚举值包括:
+ - `"NoExecute"`:任何不能容忍该污点的 Pod 都会被驱逐。当前由 NodeController 强制执行。
+        - `"NoSchedule"`:如果新 Pod 无法容忍该污点,则不允许将其调度到该节点上,但允许所有未经调度器而直接提交给 Kubelet 的 Pod 启动,并允许节点上已存在的 Pod 继续运行。由调度器强制执行。
+ - `"PreferNoSchedule"`:和 TaintEffectNoSchedule 相似,不同的是调度器尽量避免将新 Pod 调度到具有该污点的节点上,除非没有其他节点可调度。由调度器强制执行。
+
+ - **replicaRequirements.nodeClaim.tolerations.key**(string)
+
+        Key 是容忍度的污点键。留空表示匹配所有污点键。如果键为空,则运算符必须为 Exists,所有值和所有键都会被匹配。
+
+ - **replicaRequirements.nodeClaim.tolerations.operator**(string)
+
+ Operator 表示一个键与值的关系。有效的运算符为 Exists 和 Equal。默认为 Equal。Exists 相当于将值设置为通配符,因此一个 Pod 可以容忍特定类别的所有污点。
+
+ 枚举值包括:
+ - `"Equal"`
+ - `"Exists"`
+
+ - **replicaRequirements.nodeClaim.tolerations.tolerationSeconds**(int64)
+
+ TolerationSeconds 表示容忍度容忍污点的时间段(Effect 的取值为 NoExecute,否则忽略此字段)。默认情况下,不设置此字段,表示永远容忍污点(不驱逐)。零和负值将被系统视为 0(立即驱逐)。
+
+ - **replicaRequirements.nodeClaim.tolerations.value**(string)
+
+ Value 是容忍度匹配到的污点值。如果运算符为 Exists,则值应为空,否则就是一个普通字符串。
+
+ - **replicaRequirements.resourceRequest** (map[string][Quantity](../common-definitions/quantity#quantity))
+
+ ResourceRequest 表示每个副本所需的资源。
+
+ - **replicas**(int32)
+
+    Replicas 表示所引用资源的副本数。
+
+ - **requiredBy** ([]BindingSnapshot)
+
+ RequiredBy 表示依赖于引用资源的 Bindings 列表。
+
+
+
+ *BindingSnapshot 是 ResourceBinding 或 ClusterResourceBinding 的快照。*
+
+ - **requiredBy.name**(string),必选
+
+ Name 表示 Binding 的名称。
+
+ - **requiredBy.clusters** ([]TargetCluster)
+
+      Clusters 表示调度结果。
+
+
+
+ *TargetCluster 是成员集群的标识符。*
+
+ - **requiredBy.clusters.name**(string),必选
+
+ Name 是目标集群的名称。
+
+ - **requiredBy.clusters.replicas**(int32)
+
+ Replicas 表示目标集群中的副本。
+
+ - **requiredBy.namespace**(string)
+
+      Namespace 表示 Binding 的命名空间,是 ResourceBinding 所必需的。如果未指定命名空间,则表示所引用的是 ClusterResourceBinding。
+
+ - **schedulerName**(string)
+
+ SchedulerName 表示要继续调度的调度器,可直接从所关联的 PropagationPolicy(或 ClusterPropagationPolicy)继承。
+
+## ResourceBindingStatus
+
+ResourceBindingStatus 表示策略及所引用资源的整体状态。
+
+
+
+- **aggregatedStatus** ([]AggregatedStatusItem)
+
+ AggregatedStatus 罗列每个成员集群中资源的状态。
+
+
+
+ *AggregatedStatusItem 表示某个成员集群中资源的状态。*
+
+ - **aggregatedStatus.clusterName**(string),必选
+
+ ClusterName 表示资源所在的成员集群。
+
+ - **aggregatedStatus.applied**(boolean)
+
+ **applied**表示 ResourceBinding 或 ClusterResourceBinding 引用的资源是否成功应用到集群中。
+
+ - **aggregatedStatus.appliedMessage**(string)
+
+ AppliedMessage 是有关应用状态的详细信息(人类可读消息)。通常是应用失败的错误信息。
+
+ - **aggregatedStatus.health**(string)
+
+ Health 表示当前资源的健康状态。可以设置不同规则来保障不同资源的健康。
+
+ - **aggregatedStatus.status** (RawExtension)
+
+ Status 反映当前清单的运行状态。
+
+
+
+ *RawExtension 用于在外部版本中保存扩展数据。
+
+ 要使用此字段,请生成一个字段,在外部、版本化结构中以 RawExtension 作为其类型,在内部结构中以 Object 作为其类型。此外,还需要注册各个插件类型。
+
+ //内部包:
+
+    type MyAPIObject struct {
+        runtime.TypeMeta `json:",inline"`
+        MyPlugin runtime.Object `json:"myPlugin"`
+    }
+
+    type PluginA struct {
+        AOption string `json:"aOption"`
+    }
+
+    //外部包:
+
+    type MyAPIObject struct {
+        runtime.TypeMeta `json:",inline"`
+        MyPlugin runtime.RawExtension `json:"myPlugin"`
+    }
+
+    type PluginA struct {
+        AOption string `json:"aOption"`
+    }
+
+    //在网络上,JSON 看起来像这样:
+
+    {
+        "kind":"MyAPIObject",
+        "apiVersion":"v1",
+        "myPlugin": {
+            "kind":"PluginA",
+            "aOption":"foo",
+        },
+    }
+
+ 那么会发生什么?解码首先需要使用 JSON 或 YAML 将序列化数据解组到外部 MyAPIObject 中。这会导致原始 JSON 被存储下来,但不会被解包。下一步是复制(使用 pkg/conversion)到内部结构中。runtime 包的 DefaultScheme 安装了转换函数,它将解析存储在 RawExtension 中的 JSON,将其转换为正确的对象类型,并将其存储在对象中。(TODO:如果对象是未知类型,将创建并存储一个 runtime.Unknown 对象。)*
+
+- **conditions** ([]Condition)
+
+ Conditions 包含不同的状况。
+
+
+
+ *Condition 包含此 API 资源当前状态某个方面的详细信息。*
+
+ - **conditions.lastTransitionTime** (Time),必选
+
+ lastTransitionTime 是最近一次从一种状态转换到另一种状态的时间。这种变化通常出现在下层状况发生变化的时候。如果无法了解下层状况变化,使用 API 字段更改的时间也是可以接受的。
+
+
+
+ *Time 是 time.Time 的包装器,它支持对 YAML 和 JSON 的正确编组。time 包的许多工厂方法提供了包装器。*
+
+ - **conditions.message**(string),必选
+
+ Message 是有关转换的详细信息(人类可读消息)。可以是空字符串。
+
+ - **conditions.reason**(string),必选
+
+ reason 是一个程序标识符,表明状况最后一次转换的原因。特定状况类型的生产者可以定义该字段的预期值和含义,以及这些值是否可被视为有保证的 API。取值应该是一个 CamelCase 字符串。此字段不能为空。
+
+ - **conditions.status**(string),必选
+
+ Status 表示状况的状态。取值为 True、False 或 Unknown。
+
+ - **conditions.type**(string),必选
+
+ type 表示 CamelCase 或 foo.example.com/CamelCase 形式的状况类型。
+
+ - **conditions.observedGeneration**(int64)
+
+ **observedGeneration**表示设置状况时所基于的 .metadata.generation。例如,如果 .metadata.generation 为 12,但 .status.conditions[x].observedGeneration 为 9,则状况相对于实例的当前状态已过期。
+
+- **schedulerObservedGeneration**(int64)
+
+  SchedulerObservedGeneration 表示调度器观测到的代次(.metadata.generation)。如果该字段的值小于 .metadata.generation,则表示调度器尚未确认调度结果或尚未完成调度。
+
+- **schedulerObservingAffinityName**(string)
+
+ SchedulerObservedAffinityName 表示亲和性规则的名称,是当前调度的基础。
+
+## ResourceBindingList
+
+ResourceBindingList 中包含 ResourceBinding 列表。
+
+
+
+- **apiVersion**: work.karmada.io/v1alpha2
+
+- **kind**: ResourceBindingList
+
+- **metadata** ([ListMeta](../common-definitions/list-meta#listmeta))
+
+- **items** ([][ResourceBinding](../work-resources/resource-binding-v1alpha2#resourcebinding)),必选
+
+ Items 表示 ResourceBinding 列表。
+
+## 操作
+
+
+
+### `get`:查询指定的 ResourceBinding
+
+#### HTTP 请求
+
+GET /apis/work.karmada.io/v1alpha2/namespaces/{namespace}/resourcebindings/{name}
+
+#### 参数
+
+- **name**(*路径参数*):string,必选
+
+  ResourceBinding 名称
+
+- **namespace**(*路径参数*):string,必选
+
+ [namespace](../common-parameter/common-parameters#namespace)
+
+- **pretty**(*查询参数*):string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+#### 响应
+
+200 ([ResourceBinding](../work-resources/resource-binding-v1alpha2#resourcebinding)): OK
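+
+下面是一个直接访问上述 GET 端点的最小示例(仅作演示:假设 kubeconfig 已指向 Karmada apiserver,名称与命名空间均为假设值):
+
+```bash
+# 通过 kubectl 的 --raw 参数直接请求 REST 路径
+kubectl get --raw \
+  "/apis/work.karmada.io/v1alpha2/namespaces/default/resourcebindings/nginx-deployment?pretty=true"
+```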
+
+### `get`:查询指定 ResourceBinding 的状态
+
+#### HTTP 请求
+
+GET /apis/work.karmada.io/v1alpha2/namespaces/{namespace}/resourcebindings/{name}/status
+
+#### 参数
+
+- **name**(*路径参数*):string,必选
+
+ ResourceBinding 名称
+
+- **namespace**(*路径参数*):string,必选
+
+ [namespace](../common-parameter/common-parameters#namespace)
+
+- **pretty**(*查询参数*):string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+#### 响应
+
+200 ([ResourceBinding](../work-resources/resource-binding-v1alpha2#resourcebinding)): OK
+
+### `list`:查询全部 ResourceBinding
+
+#### HTTP 请求
+
+GET /apis/work.karmada.io/v1alpha2/namespaces/{namespace}/resourcebindings
+
+#### 参数
+
+- **namespace**(*路径参数*):string,必选
+
+ [namespace](../common-parameter/common-parameters#namespace)
+
+- **allowWatchBookmarks**(*查询参数*):boolean
+
+ [allowWatchBookmarks](../common-parameter/common-parameters#allowwatchbookmarks)
+
+- **continue**(*查询参数*):string
+
+ [continue](../common-parameter/common-parameters#continue)
+
+- **fieldSelector**(*查询参数*):string
+
+ [fieldSelector](../common-parameter/common-parameters#fieldselector)
+
+- **labelSelector**(*查询参数*):string
+
+ [labelSelector](../common-parameter/common-parameters#labelselector)
+
+- **limit**(*查询参数*):integer
+
+ [limit](../common-parameter/common-parameters#limit)
+
+- **pretty**(*查询参数*):string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+- **resourceVersion**(*查询参数*):string
+
+ [resourceVersion](../common-parameter/common-parameters#resourceversion)
+
+- **resourceVersionMatch**(*查询参数*):string
+
+ [resourceVersionMatch](../common-parameter/common-parameters#resourceversionmatch)
+
+- **sendInitialEvents**(*查询参数*):boolean
+
+ [sendInitialEvents](../common-parameter/common-parameters#sendinitialevents)
+
+- **timeoutSeconds**(*查询参数*):integer
+
+ [timeoutSeconds](../common-parameter/common-parameters#timeoutseconds)
+
+- **watch**(*查询参数*):boolean
+
+ [watch](../common-parameter/common-parameters#watch)
+
+#### 响应
+
+200 ([ResourceBindingList](../work-resources/resource-binding-v1alpha2#resourcebindinglist)): OK
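+
+上述查询参数可以直接拼接到请求路径中。下面是一个带 labelSelector 与 limit 的列表请求示例(仅作演示:标签与数值均为假设值):
+
+```bash
+# 按标签过滤并限制返回数量;labelSelector 的值需要进行 URL 编码(app=demo 编码为 app%3Ddemo)
+kubectl get --raw \
+  "/apis/work.karmada.io/v1alpha2/namespaces/default/resourcebindings?labelSelector=app%3Ddemo&limit=50"
+```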
+
+### `list`:查询全部 ResourceBinding
+
+#### HTTP 请求
+
+GET /apis/work.karmada.io/v1alpha2/resourcebindings
+
+#### 参数
+
+- **allowWatchBookmarks**(*查询参数*):boolean
+
+ [allowWatchBookmarks](../common-parameter/common-parameters#allowwatchbookmarks)
+
+- **continue**(*查询参数*):string
+
+ [continue](../common-parameter/common-parameters#continue)
+
+- **fieldSelector**(*查询参数*):string
+
+ [fieldSelector](../common-parameter/common-parameters#fieldselector)
+
+- **labelSelector**(*查询参数*):string
+
+ [labelSelector](../common-parameter/common-parameters#labelselector)
+
+- **limit**(*查询参数*):integer
+
+ [limit](../common-parameter/common-parameters#limit)
+
+- **pretty**(*查询参数*):string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+- **resourceVersion**(*查询参数*):string
+
+ [resourceVersion](../common-parameter/common-parameters#resourceversion)
+
+- **resourceVersionMatch**(*查询参数*):string
+
+ [resourceVersionMatch](../common-parameter/common-parameters#resourceversionmatch)
+
+- **sendInitialEvents**(*查询参数*):boolean
+
+ [sendInitialEvents](../common-parameter/common-parameters#sendinitialevents)
+
+- **timeoutSeconds**(*查询参数*):integer
+
+ [timeoutSeconds](../common-parameter/common-parameters#timeoutseconds)
+
+- **watch**(*查询参数*):boolean
+
+ [watch](../common-parameter/common-parameters#watch)
+
+#### 响应
+
+200 ([ResourceBindingList](../work-resources/resource-binding-v1alpha2#resourcebindinglist)): OK
+
+### `create`:创建一个 ResourceBinding
+
+#### HTTP 请求
+
+POST /apis/work.karmada.io/v1alpha2/namespaces/{namespace}/resourcebindings
+
+#### 参数
+
+- **namespace**(*路径参数*):string,必选
+
+ [namespace](../common-parameter/common-parameters#namespace)
+
+- **body**: [ResourceBinding](../work-resources/resource-binding-v1alpha2#resourcebinding),必选
+
+
+- **dryRun**(*查询参数*):string
+
+ [dryRun](../common-parameter/common-parameters#dryrun)
+
+- **fieldManager**(*查询参数*):string
+
+ [fieldManager](../common-parameter/common-parameters#fieldmanager)
+
+- **fieldValidation**(*查询参数*):string
+
+ [fieldValidation](../common-parameter/common-parameters#fieldvalidation)
+
+- **pretty**(*查询参数*):string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+#### 响应
+
+200 ([ResourceBinding](../work-resources/resource-binding-v1alpha2#resourcebinding)): OK
+
+201 ([ResourceBinding](../work-resources/resource-binding-v1alpha2#resourcebinding)): Created
+
+202 ([ResourceBinding](../work-resources/resource-binding-v1alpha2#resourcebinding)): Accepted
+
+### `update`:更新指定的 ResourceBinding
+
+#### HTTP 请求
+
+PUT /apis/work.karmada.io/v1alpha2/namespaces/{namespace}/resourcebindings/{name}
+
+#### 参数
+
+- **name**(*路径参数*):string,必选
+
+  ResourceBinding 名称
+
+- **namespace**(*路径参数*):string,必选
+
+ [namespace](../common-parameter/common-parameters#namespace)
+
+- **body**: [ResourceBinding](../work-resources/resource-binding-v1alpha2#resourcebinding),必选
+
+
+- **dryRun**(*查询参数*):string
+
+ [dryRun](../common-parameter/common-parameters#dryrun)
+
+- **fieldManager**(*查询参数*):string
+
+ [fieldManager](../common-parameter/common-parameters#fieldmanager)
+
+- **fieldValidation**(*查询参数*):string
+
+ [fieldValidation](../common-parameter/common-parameters#fieldvalidation)
+
+- **pretty**(*查询参数*):string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+#### 响应
+
+200 ([ResourceBinding](../work-resources/resource-binding-v1alpha2#resourcebinding)): OK
+
+201 ([ResourceBinding](../work-resources/resource-binding-v1alpha2#resourcebinding)): Created
+
+### `update`:更新指定 ResourceBinding 的状态
+
+#### HTTP 请求
+
+PUT /apis/work.karmada.io/v1alpha2/namespaces/{namespace}/resourcebindings/{name}/status
+
+#### 参数
+
+- **name**(*路径参数*):string,必选
+
+  ResourceBinding 名称
+
+- **namespace**(*路径参数*):string,必选
+
+ [namespace](../common-parameter/common-parameters#namespace)
+
+- **body**: [ResourceBinding](../work-resources/resource-binding-v1alpha2#resourcebinding),必选
+
+
+- **dryRun**(*查询参数*):string
+
+ [dryRun](../common-parameter/common-parameters#dryrun)
+
+- **fieldManager**(*查询参数*):string
+
+ [fieldManager](../common-parameter/common-parameters#fieldmanager)
+
+- **fieldValidation**(*查询参数*):string
+
+ [fieldValidation](../common-parameter/common-parameters#fieldvalidation)
+
+- **pretty**(*查询参数*):string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+#### 响应
+
+200 ([ResourceBinding](../work-resources/resource-binding-v1alpha2#resourcebinding)): OK
+
+201 ([ResourceBinding](../work-resources/resource-binding-v1alpha2#resourcebinding)): Created
+
+### `patch`:更新指定 ResourceBinding 的部分信息
+
+#### HTTP 请求
+
+PATCH /apis/work.karmada.io/v1alpha2/namespaces/{namespace}/resourcebindings/{name}
+
+#### 参数
+
+- **name**(*路径参数*):string,必选
+
+ ResourceBinding 名称
+
+- **namespace**(*路径参数*):string,必选
+
+ [namespace](../common-parameter/common-parameters#namespace)
+
+- **body**: [Patch](../common-definitions/patch#patch),必选
+
+
+- **dryRun**(*查询参数*):string
+
+ [dryRun](../common-parameter/common-parameters#dryrun)
+
+- **fieldManager**(*查询参数*):string
+
+ [fieldManager](../common-parameter/common-parameters#fieldmanager)
+
+- **fieldValidation**(*查询参数*):string
+
+ [fieldValidation](../common-parameter/common-parameters#fieldvalidation)
+
+- **force**(*查询参数*):boolean
+
+ [force](../common-parameter/common-parameters#force)
+
+- **pretty**(*查询参数*):string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+#### 响应
+
+200 ([ResourceBinding](../work-resources/resource-binding-v1alpha2#resourcebinding)): OK
+
+201 ([ResourceBinding](../work-resources/resource-binding-v1alpha2#resourcebinding)): Created
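+
+fieldManager 与 force 参数常用于服务器端应用(Server-Side Apply),即以 `application/apply-patch+yaml` 作为请求体类型的 PATCH 请求。以下是一个最小示例(仅作演示:假设已通过 kubectl proxy 将 Karmada apiserver 代理到本地 8001 端口,对象名称、标签与 fieldManager 取值均为假设值,且仅修改标签以避免与控制器冲突):
+
+```bash
+curl -X PATCH \
+  -H "Content-Type: application/apply-patch+yaml" \
+  --data-binary '
+apiVersion: work.karmada.io/v1alpha2
+kind: ResourceBinding
+metadata:
+  name: nginx-deployment
+  labels:
+    demo.example.com/owner: team-a
+' \
+  "http://127.0.0.1:8001/apis/work.karmada.io/v1alpha2/namespaces/default/resourcebindings/nginx-deployment?fieldManager=demo-client&force=false"
+```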
+
+### `patch`:更新指定 ResourceBinding 状态的部分信息
+
+#### HTTP 请求
+
+PATCH /apis/work.karmada.io/v1alpha2/namespaces/{namespace}/resourcebindings/{name}/status
+
+#### 参数
+
+- **name**(*路径参数*):string,必选
+
+ ResourceBinding 名称
+
+- **namespace**(*路径参数*):string,必选
+
+ [namespace](../common-parameter/common-parameters#namespace)
+
+- **body**: [Patch](../common-definitions/patch#patch),必选
+
+
+- **dryRun**(*查询参数*):string
+
+ [dryRun](../common-parameter/common-parameters#dryrun)
+
+- **fieldManager**(*查询参数*):string
+
+ [fieldManager](../common-parameter/common-parameters#fieldmanager)
+
+- **fieldValidation**(*查询参数*):string
+
+ [fieldValidation](../common-parameter/common-parameters#fieldvalidation)
+
+- **force**(*查询参数*):boolean
+
+ [force](../common-parameter/common-parameters#force)
+
+- **pretty**(*查询参数*):string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+#### 响应
+
+200 ([ResourceBinding](../work-resources/resource-binding-v1alpha2#resourcebinding)): OK
+
+201 ([ResourceBinding](../work-resources/resource-binding-v1alpha2#resourcebinding)): Created
+
+### `delete`:删除一个 ResourceBinding
+
+#### HTTP 请求
+
+DELETE /apis/work.karmada.io/v1alpha2/namespaces/{namespace}/resourcebindings/{name}
+
+#### 参数
+
+- **name**(*路径参数*):string,必选
+
+  ResourceBinding 名称
+
+- **namespace**(*路径参数*):string,必选
+
+ [namespace](../common-parameter/common-parameters#namespace)
+
+- **body**: [DeleteOptions](../common-definitions/delete-options#deleteoptions)
+
+
+- **dryRun**(*查询参数*):string
+
+ [dryRun](../common-parameter/common-parameters#dryrun)
+
+- **gracePeriodSeconds**(*查询参数*):integer
+
+ [gracePeriodSeconds](../common-parameter/common-parameters#graceperiodseconds)
+
+- **pretty**(*查询参数*):string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+- **propagationPolicy**(*查询参数*):string
+
+ [propagationPolicy](../common-parameter/common-parameters#propagationpolicy)
+
+#### 响应
+
+200 ([Status](../common-definitions/status#status)): OK
+
+202 ([Status](../common-definitions/status#status)): Accepted
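+
+下面是一个删除指定 ResourceBinding 的示例(仅作演示:名称与命名空间均为假设值)。需要注意,ResourceBinding 由 Karmada 控制器维护,只要对应的分发策略仍然匹配,被删除的对象通常会被重新创建;可先使用服务器端试运行确认效果:
+
+```bash
+# 服务器端试运行,不会真正删除对象
+kubectl delete resourcebindings.work.karmada.io nginx-deployment -n default --dry-run=server
+
+# 确认无误后再实际删除
+kubectl delete resourcebindings.work.karmada.io nginx-deployment -n default
+```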
+
+### `deletecollection`:删除 ResourceBinding 的集合
+
+#### HTTP 请求
+
+DELETE /apis/work.karmada.io/v1alpha2/namespaces/{namespace}/resourcebindings
+
+#### 参数
+
+- **namespace**(*路径参数*):string,必选
+
+ [namespace](../common-parameter/common-parameters#namespace)
+
+- **body**: [DeleteOptions](../common-definitions/delete-options#deleteoptions)
+
+
+- **continue**(*查询参数*):string
+
+ [continue](../common-parameter/common-parameters#continue)
+
+- **dryRun**(*查询参数*):string
+
+ [dryRun](../common-parameter/common-parameters#dryrun)
+
+- **fieldSelector**(*查询参数*):string
+
+ [fieldSelector](../common-parameter/common-parameters#fieldselector)
+
+- **gracePeriodSeconds**(*查询参数*):integer
+
+ [gracePeriodSeconds](../common-parameter/common-parameters#graceperiodseconds)
+
+- **labelSelector**(*查询参数*):string
+
+ [labelSelector](../common-parameter/common-parameters#labelselector)
+
+- **limit**(*查询参数*):integer
+
+ [limit](../common-parameter/common-parameters#limit)
+
+- **pretty**(*查询参数*):string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+- **propagationPolicy**(*查询参数*):string
+
+ [propagationPolicy](../common-parameter/common-parameters#propagationpolicy)
+
+- **resourceVersion**(*查询参数*):string
+
+ [resourceVersion](../common-parameter/common-parameters#resourceversion)
+
+- **resourceVersionMatch**(*查询参数*):string
+
+ [resourceVersionMatch](../common-parameter/common-parameters#resourceversionmatch)
+
+- **sendInitialEvents**(*查询参数*):boolean
+
+ [sendInitialEvents](../common-parameter/common-parameters#sendinitialevents)
+
+- **timeoutSeconds**(*查询参数*):integer
+
+ [timeoutSeconds](../common-parameter/common-parameters#timeoutseconds)
+
+#### 响应
+
+200 ([Status](../common-definitions/status#status)): OK
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/karmada-api/work-resources/work-v1alpha1.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/karmada-api/work-resources/work-v1alpha1.md
new file mode 100644
index 000000000..e2f1dc664
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/karmada-api/work-resources/work-v1alpha1.md
@@ -0,0 +1,699 @@
+---
+api_metadata:
+ apiVersion: "work.karmada.io/v1alpha1"
+ import: "github.com/karmada-io/karmada/pkg/apis/work/v1alpha1"
+ kind: "Work"
+content_type: "api_reference"
+description: "Work defines a list of resources to be deployed on the member cluster."
+title: "Work v1alpha1"
+weight: 1
+auto_generated: true
+---
+
+[//]: # (The file is auto-generated from the Go source code of the component using a generic generator,)
+[//]: # (which is forked from [reference-docs](https://github.com/kubernetes-sigs/reference-docs.)
+[//]: # (To update the reference content, please follow the `reference-api.sh`.)
+
+`apiVersion: work.karmada.io/v1alpha1`
+
+`import "github.com/karmada-io/karmada/pkg/apis/work/v1alpha1"`
+
+## Work
+
+Work 罗列待部署到成员集群的资源。
+
+
+
+- **apiVersion**: work.karmada.io/v1alpha1
+
+- **kind**: Work
+
+- **metadata** ([ObjectMeta](../common-definitions/object-meta#objectmeta))
+
+- **spec** ([WorkSpec](../work-resources/work-v1alpha1#workspec)),必选
+
+ Spec 表示 Work 的规范。
+
+- **status** ([WorkStatus](../work-resources/work-v1alpha1#workstatus))
+
+ Status 表示 PropagationStatus 的状态。
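+
+Work 对象由 Karmada 控制面生成,通常位于名为 `karmada-es-<成员集群名>` 的执行命名空间中。下面是一个查看 Work 的最小示例(仅作演示:假设 kubeconfig 已指向 Karmada apiserver,集群名 member1 为假设值):
+
+```bash
+# 列出所有命名空间中的 Work
+kubectl get works.work.karmada.io --all-namespaces
+
+# 仅查看分发到 member1 集群的 Work
+kubectl get works.work.karmada.io -n karmada-es-member1
+```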
+
+## WorkSpec
+
+WorkSpec 定义 Work 的预期状态。
+
+
+
+- **workload** (WorkloadTemplate)
+
+ Workload 表示待部署在被管理集群上的 manifest 工作负载。
+
+
+
+ *WorkloadTemplate 表示待部署在被管理集群上的 manifest 工作负载。*
+
+ - **workload.manifests** ([]Manifest)
+
+ Manifests 表示待部署在被管理集群上的 Kubernetes 资源列表。
+
+
+
+ *Manifest 表示待部署在被管理集群上的某种资源。*
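+
+下面是一个查看 Work 中所封装 manifest 的示例(仅作演示:Work 名称与命名空间均为假设值),可用于确认即将下发到成员集群的资源类型:
+
+```bash
+# 列出某个 Work 中各 manifest 的资源类别(kind)
+kubectl get works.work.karmada.io nginx-687f7fb96f -n karmada-es-member1 \
+  -o jsonpath='{.spec.workload.manifests[*].kind}'
+```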
+
+## WorkStatus
+
+WorkStatus 定义 Work 的状态。
+
+
+
+- **conditions** ([]Condition)
+
+  Conditions 包含此 Work 的不同状况。取值可能为:1. Applied:工作负载在被管理集群上成功应用。2. Progressing:Work 中的工作负载正在被应用到被管理集群上。3. Available:被管理集群中存在 Work 中的工作负载。4. Degraded:工作负载的当前状态在一定时期内与所需状态不匹配。
+
+
+
+ *Condition 包含此 API 资源当前状态某个方面的详细信息。*
+
+ - **conditions.lastTransitionTime** (Time),必选
+
+ lastTransitionTime 是状况最近一次从一种状态转换到另一种状态的时间。这种变化通常出现在下层状况发生变化的时候。如果无法了解下层状况变化,使用 API 字段更改的时间也是可以接受的。
+
+
+
+ *Time 是 time.Time 的包装器,它支持对 YAML 和 JSON 的正确编组。time 包的许多工厂方法提供了包装器。*
+
+ - **conditions.message**(string),必选
+
+ message 是有关转换的详细信息(人类可读消息)。可以是空字符串。
+
+ - **conditions.reason**(string),必选
+
+ reason 是一个程序标识符,表明状况最后一次转换的原因。特定状况类型的生产者可以定义该字段的预期值和含义,以及这些值是否可被视为有保证的 API。取值应该是一个 CamelCase 字符串。此字段不能为空。
+
+ - **conditions.status**(string),必选
+
+ Status 表示状况的状态。取值为 True、False 或 Unknown。
+
+ - **conditions.type**(string),必选
+
+ **type**表示 CamelCase 或 foo.example.com/CamelCase 形式的状况类型。
+
+ - **conditions.observedGeneration**(int64)
+
+ **observedGeneration**表示设置状况时所基于的 .metadata.generation。例如,如果 .metadata.generation 为 12,但 .status.conditions[x].observedGeneration 为 9,则状况相对于实例的当前状态已过期。
+
+- **manifestStatuses** ([]ManifestStatus)
+
+ ManifestStatuses 是 spec 中 manifest 的运行状态。
+
+
+
+ *ManifestStatus 包含 spec 中特定 manifest 的运行状态。*
+
+ - **manifestStatuses.identifier** (ResourceIdentifier),必选
+
+ **identifier**表示 spec 中链接到清单的资源的标识。
+
+
+
+ *ResourceIdentifier 提供与任意资源交互所需的标识符。*
+
+ - **manifestStatuses.identifier.kind**(string),必选
+
+ Kind 表示资源的类别。
+
+ - **manifestStatuses.identifier.name**(string),必选
+
+ Name 表示资源名称。
+
+ - **manifestStatuses.identifier.ordinal**(int32),必选
+
+ Ordinal 是 manifests 中的索引。即使某个 manifest 无法解析,状况仍然可以链接到该 manifest。
+
+ - **manifestStatuses.identifier.resource**(string),必选
+
+ Resource 表示资源类型。
+
+ - **manifestStatuses.identifier.version**(string),必选
+
+ Version 表示资源版本。
+
+ - **manifestStatuses.identifier.group**(string),必选
+
+ Group 表示资源所在的组。
+
+ - **manifestStatuses.identifier.namespace**(string)
+
+ Namespace 是资源的命名空间。如果值为空,则表示资源在集群范围内。
+
+ - **manifestStatuses.health**(string)
+
+ Health 表示资源的健康状态。可以设置不同规则来保障不同资源的健康状态。
+
+ - **manifestStatuses.status** (RawExtension)
+
+ Status 反映当前 manifest 的运行状态。
+
+
+
+ *RawExtension 用于在外部版本中保存扩展数据。
+
+ 要使用此字段,请生成一个字段,在外部、版本化结构中以 RawExtension 作为其类型,在内部结构中以 Object 作为其类型。此外,还需要注册各个插件类型。
+
+ //内部包:
+
+    type MyAPIObject struct {
+        runtime.TypeMeta `json:",inline"`
+        MyPlugin runtime.Object `json:"myPlugin"`
+    }
+
+    type PluginA struct {
+        AOption string `json:"aOption"`
+    }
+
+    //外部包:
+
+    type MyAPIObject struct {
+        runtime.TypeMeta `json:",inline"`
+        MyPlugin runtime.RawExtension `json:"myPlugin"`
+    }
+
+    type PluginA struct {
+        AOption string `json:"aOption"`
+    }
+
+    //在网络上,JSON 看起来像这样:
+
+    {
+        "kind":"MyAPIObject",
+        "apiVersion":"v1",
+        "myPlugin": {
+            "kind":"PluginA",
+            "aOption":"foo",
+        },
+    }
+
+ 那么会发生什么?解码首先需要使用 JSON 或 YAML 将序列化数据解组到外部 MyAPIObject 中。这会导致原始 JSON 被存储下来,但不会被解包。下一步是复制(使用 pkg/conversion)到内部结构中。runtime 包的 DefaultScheme 安装了转换函数,它将解析存储在 RawExtension 中的 JSON,将其转换为正确的对象类型,并将其存储在对象中。(TODO:如果对象是未知类型,将创建并存储一个 runtime.Unknown 对象。)*
+
+## WorkList
+
+WorkList 是 Work 的集合。
+
+
+
+- **apiVersion**: work.karmada.io/v1alpha1
+
+- **kind**: WorkList
+
+- **metadata** ([ListMeta](../common-definitions/list-meta#listmeta))
+
+- **items** ([][Work](../work-resources/work-v1alpha1#work)),必选
+
+ Items 中包含 Work 列表。
+
+## 操作
+
+
+
+### `get`:查询指定的 Work
+
+#### HTTP 请求
+
+GET /apis/work.karmada.io/v1alpha1/namespaces/{namespace}/works/{name}
+
+#### 参数
+
+- **name**(*路径参数*):string,必选
+
+ Work 名称。
+
+- **namespace**(*路径参数*):string,必选
+
+ [namespace](../common-parameter/common-parameters#namespace)
+
+- **pretty**(*查询参数*):string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+#### 响应
+
+200 ([Work](../work-resources/work-v1alpha1#work)): OK
+
+### `get`:查询指定 Work 的状态
+
+#### HTTP 请求
+
+GET /apis/work.karmada.io/v1alpha1/namespaces/{namespace}/works/{name}/status
+
+#### 参数
+
+- **name**(*路径参数*):string,必选
+
+ Work 名称。
+
+- **namespace**(*路径参数*):string,必选
+
+ [namespace](../common-parameter/common-parameters#namespace)
+
+- **pretty**(*查询参数*):string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+#### 响应
+
+200 ([Work](../work-resources/work-v1alpha1#work)): OK
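+
+下面是一个读取 Work 状态的示例(仅作演示:名称与命名空间均为假设值),用于检查工作负载是否已成功应用到成员集群:
+
+```bash
+# 查看 Applied 状况的取值(True/False/Unknown)
+kubectl get works.work.karmada.io nginx-687f7fb96f -n karmada-es-member1 \
+  -o jsonpath='{.status.conditions[?(@.type=="Applied")].status}'
+```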
+
+### `list`:查询指定命名空间内的所有 Work
+
+#### HTTP 请求
+
+GET /apis/work.karmada.io/v1alpha1/namespaces/{namespace}/works
+
+#### 参数
+
+- **namespace**(*路径参数*):string,必选
+
+ [namespace](../common-parameter/common-parameters#namespace)
+
+- **allowWatchBookmarks**(*查询参数*):boolean
+
+ [allowWatchBookmarks](../common-parameter/common-parameters#allowwatchbookmarks)
+
+- **continue**(*查询参数*):string
+
+ [continue](../common-parameter/common-parameters#continue)
+
+- **fieldSelector**(*查询参数*):string
+
+ [fieldSelector](../common-parameter/common-parameters#fieldselector)
+
+- **labelSelector**(*查询参数*):string
+
+ [labelSelector](../common-parameter/common-parameters#labelselector)
+
+- **limit**(*查询参数*):integer
+
+ [limit](../common-parameter/common-parameters#limit)
+
+- **pretty**(*查询参数*):string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+- **resourceVersion**(*查询参数*):string
+
+ [resourceVersion](../common-parameter/common-parameters#resourceversion)
+
+- **resourceVersionMatch**(*查询参数*):string
+
+ [resourceVersionMatch](../common-parameter/common-parameters#resourceversionmatch)
+
+- **sendInitialEvents**(*查询参数*):boolean
+
+ [sendInitialEvents](../common-parameter/common-parameters#sendinitialevents)
+
+- **timeoutSeconds**(*查询参数*):integer
+
+ [timeoutSeconds](../common-parameter/common-parameters#timeoutseconds)
+
+- **watch**(*查询参数*):boolean
+
+ [watch](../common-parameter/common-parameters#watch)
+
+#### 响应
+
+200 ([WorkList](../work-resources/work-v1alpha1#worklist)): OK
+
+### `list`:查询所有 Work
+
+#### HTTP 请求
+
+GET /apis/work.karmada.io/v1alpha1/works
+
+#### 参数
+
+- **allowWatchBookmarks**(*查询参数*):boolean
+
+ [allowWatchBookmarks](../common-parameter/common-parameters#allowwatchbookmarks)
+
+- **continue**(*查询参数*):string
+
+ [continue](../common-parameter/common-parameters#continue)
+
+- **fieldSelector**(*查询参数*):string
+
+ [fieldSelector](../common-parameter/common-parameters#fieldselector)
+
+- **labelSelector**(*查询参数*):string
+
+ [labelSelector](../common-parameter/common-parameters#labelselector)
+
+- **limit**(*查询参数*):integer
+
+ [limit](../common-parameter/common-parameters#limit)
+
+- **pretty**(*查询参数*):string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+- **resourceVersion**(*查询参数*):string
+
+ [resourceVersion](../common-parameter/common-parameters#resourceversion)
+
+- **resourceVersionMatch**(*查询参数*):string
+
+ [resourceVersionMatch](../common-parameter/common-parameters#resourceversionmatch)
+
+- **sendInitialEvents**(*查询参数*):boolean
+
+ [sendInitialEvents](../common-parameter/common-parameters#sendinitialevents)
+
+- **timeoutSeconds**(*查询参数*):integer
+
+ [timeoutSeconds](../common-parameter/common-parameters#timeoutseconds)
+
+- **watch**(*查询参数*):boolean
+
+ [watch](../common-parameter/common-parameters#watch)
+
+#### 响应
+
+200 ([WorkList](../work-resources/work-v1alpha1#worklist)): OK
+
+### `create`:创建一个 Work
+
+#### HTTP 请求
+
+POST /apis/work.karmada.io/v1alpha1/namespaces/{namespace}/works
+
+#### 参数
+
+- **namespace**(*路径参数*):string,必选
+
+ [namespace](../common-parameter/common-parameters#namespace)
+
+- **body**: [Work](../work-resources/work-v1alpha1#work),必选
+
+
+- **dryRun**(*查询参数*):string
+
+ [dryRun](../common-parameter/common-parameters#dryrun)
+
+- **fieldManager**(*查询参数*):string
+
+ [fieldManager](../common-parameter/common-parameters#fieldmanager)
+
+- **fieldValidation**(*查询参数*):string
+
+ [fieldValidation](../common-parameter/common-parameters#fieldvalidation)
+
+- **pretty**(*查询参数*):string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+#### 响应
+
+200 ([Work](../work-resources/work-v1alpha1#work)): OK
+
+201 ([Work](../work-resources/work-v1alpha1#work)): Created
+
+202 ([Work](../work-resources/work-v1alpha1#work)): Accepted
+
+### `update`:更新指定的 Work
+
+#### HTTP 请求
+
+PUT /apis/work.karmada.io/v1alpha1/namespaces/{namespace}/works/{name}
+
+#### 参数
+
+- **name**(*路径参数*):string,必选
+
+ Work 名称。
+
+- **namespace**(*路径参数*):string,必选
+
+ [namespace](../common-parameter/common-parameters#namespace)
+
+- **body**: [Work](../work-resources/work-v1alpha1#work),必选
+
+
+- **dryRun**(*查询参数*):string
+
+ [dryRun](../common-parameter/common-parameters#dryrun)
+
+- **fieldManager**(*查询参数*):string
+
+ [fieldManager](../common-parameter/common-parameters#fieldmanager)
+
+- **fieldValidation**(*查询参数*):string
+
+ [fieldValidation](../common-parameter/common-parameters#fieldvalidation)
+
+- **pretty**(*查询参数*):string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+#### 响应
+
+200 ([Work](../work-resources/work-v1alpha1#work)): OK
+
+201 ([Work](../work-resources/work-v1alpha1#work)): Created
+
+### `update`:更新指定 Work 的状态
+
+#### HTTP 请求
+
+PUT /apis/work.karmada.io/v1alpha1/namespaces/{namespace}/works/{name}/status
+
+#### 参数
+
+- **name**(*路径参数*):string,必选
+
+ Work 名称。
+
+- **namespace**(*路径参数*):string,必选
+
+ [namespace](../common-parameter/common-parameters#namespace)
+
+- **body**: [Work](../work-resources/work-v1alpha1#work),必选
+
+
+- **dryRun**(*查询参数*):string
+
+ [dryRun](../common-parameter/common-parameters#dryrun)
+
+- **fieldManager**(*查询参数*):string
+
+ [fieldManager](../common-parameter/common-parameters#fieldmanager)
+
+- **fieldValidation**(*查询参数*):string
+
+ [fieldValidation](../common-parameter/common-parameters#fieldvalidation)
+
+- **pretty**(*查询参数*):string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+#### 响应
+
+200 ([Work](../work-resources/work-v1alpha1#work)): OK
+
+201 ([Work](../work-resources/work-v1alpha1#work)): Created
+
+### `patch`:更新指定 Work 的部分信息
+
+#### HTTP 请求
+
+PATCH /apis/work.karmada.io/v1alpha1/namespaces/{namespace}/works/{name}
+
+#### 参数
+
+- **name**(*路径参数*):string,必选
+
+ Work 名称。
+
+- **namespace**(*路径参数*):string,必选
+
+ [namespace](../common-parameter/common-parameters#namespace)
+
+- **body**: [Patch](../common-definitions/patch#patch),必选
+
+
+- **dryRun**(*查询参数*):string
+
+ [dryRun](../common-parameter/common-parameters#dryrun)
+
+- **fieldManager**(*查询参数*):string
+
+ [fieldManager](../common-parameter/common-parameters#fieldmanager)
+
+- **fieldValidation**(*查询参数*):string
+
+ [fieldValidation](../common-parameter/common-parameters#fieldvalidation)
+
+- **force**(*查询参数*):boolean
+
+ [force](../common-parameter/common-parameters#force)
+
+- **pretty**(*查询参数*):string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+#### 响应
+
+200 ([Work](../work-resources/work-v1alpha1#work)): OK
+
+201 ([Work](../work-resources/work-v1alpha1#work)): Created
+
+### `patch`:更新指定 Work 状态的部分信息
+
+#### HTTP 请求
+
+PATCH /apis/work.karmada.io/v1alpha1/namespaces/{namespace}/works/{name}/status
+
+#### 参数
+
+- **name**(*路径参数*):string,必选
+
+ Work 名称。
+
+- **namespace**(*路径参数*):string,必选
+
+ [namespace](../common-parameter/common-parameters#namespace)
+
+- **body**: [Patch](../common-definitions/patch#patch),必选
+
+
+- **dryRun**(*查询参数*):string
+
+ [dryRun](../common-parameter/common-parameters#dryrun)
+
+- **fieldManager**(*查询参数*):string
+
+ [fieldManager](../common-parameter/common-parameters#fieldmanager)
+
+- **fieldValidation**(*查询参数*):string
+
+ [fieldValidation](../common-parameter/common-parameters#fieldvalidation)
+
+- **force**(*查询参数*):boolean
+
+ [force](../common-parameter/common-parameters#force)
+
+- **pretty**(*查询参数*):string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+#### 响应
+
+200 ([Work](../work-resources/work-v1alpha1#work)): OK
+
+201 ([Work](../work-resources/work-v1alpha1#work)): Created
+
+### `delete`:删除一个 Work
+
+#### HTTP 请求
+
+DELETE /apis/work.karmada.io/v1alpha1/namespaces/{namespace}/works/{name}
+
+#### 参数
+
+- **name**(*路径参数*):string,必选
+
+ Work 名称。
+
+- **namespace**(*路径参数*):string,必选
+
+ [namespace](../common-parameter/common-parameters#namespace)
+
+- **body**: [DeleteOptions](../common-definitions/delete-options#deleteoptions)
+
+
+- **dryRun**(*查询参数*):string
+
+ [dryRun](../common-parameter/common-parameters#dryrun)
+
+- **gracePeriodSeconds**(*查询参数*):integer
+
+ [gracePeriodSeconds](../common-parameter/common-parameters#graceperiodseconds)
+
+- **pretty**(*查询参数*):string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+- **propagationPolicy**(*查询参数*):string
+
+ [propagationPolicy](../common-parameter/common-parameters#propagationpolicy)
+
+#### 响应
+
+200 ([Status](../common-definitions/status#status)): OK
+
+202 ([Status](../common-definitions/status#status)): Accepted
+
+### `deletecollection`:删除 Work 的集合
+
+#### HTTP 请求
+
+DELETE /apis/work.karmada.io/v1alpha1/namespaces/{namespace}/works
+
+#### 参数
+
+- **namespace**(*路径参数*):string,必选
+
+ [namespace](../common-parameter/common-parameters#namespace)
+
+- **body**: [DeleteOptions](../common-definitions/delete-options#deleteoptions)
+
+
+- **continue**(*查询参数*):string
+
+ [continue](../common-parameter/common-parameters#continue)
+
+- **dryRun**(*查询参数*):string
+
+ [dryRun](../common-parameter/common-parameters#dryrun)
+
+- **fieldSelector**(*查询参数*):string
+
+ [fieldSelector](../common-parameter/common-parameters#fieldselector)
+
+- **gracePeriodSeconds**(*查询参数*):integer
+
+ [gracePeriodSeconds](../common-parameter/common-parameters#graceperiodseconds)
+
+- **labelSelector**(*查询参数*):string
+
+ [labelSelector](../common-parameter/common-parameters#labelselector)
+
+- **limit**(*查询参数*):integer
+
+ [limit](../common-parameter/common-parameters#limit)
+
+- **pretty**(*查询参数*):string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+- **propagationPolicy**(*查询参数*):string
+
+ [propagationPolicy](../common-parameter/common-parameters#propagationpolicy)
+
+- **resourceVersion**(*查询参数*):string
+
+ [resourceVersion](../common-parameter/common-parameters#resourceversion)
+
+- **resourceVersionMatch**(*查询参数*):string
+
+ [resourceVersionMatch](../common-parameter/common-parameters#resourceversionmatch)
+
+- **sendInitialEvents**(*查询参数*):boolean
+
+ [sendInitialEvents](../common-parameter/common-parameters#sendinitialevents)
+
+- **timeoutSeconds**(*查询参数*):integer
+
+ [timeoutSeconds](../common-parameter/common-parameters#timeoutseconds)
+
+#### 响应
+
+200 ([Status](../common-definitions/status#status)): OK
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/karmadactl/bash-auto-completion-on-linux.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/karmadactl/bash-auto-completion-on-linux.md
new file mode 100644
index 000000000..489f46c46
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/karmadactl/bash-auto-completion-on-linux.md
@@ -0,0 +1,50 @@
+---
+title: Bash auto-completion on Linux
+---
+
+## Introduction
+The karmadactl completion script for Bash can be generated with the command karmadactl completion bash. Sourcing the completion script in your shell enables karmadactl autocompletion.
+
+However, the completion script depends on [bash-completion](https://github.com/scop/bash-completion), which means that you have to install this software first (you can test if you have bash-completion already installed by running `type _init_completion`).
+
+## Install bash-completion
+bash-completion is provided by many package managers (see [here](https://github.com/scop/bash-completion#installation)). You can install it with `apt-get install bash-completion` or `yum install bash-completion`, etc.
+
+The above commands create `/usr/share/bash-completion/bash_completion`, which is the main script of bash-completion. Depending on your package manager, you have to manually source this file in your `~/.bashrc` file.
+
+```bash
+source /usr/share/bash-completion/bash_completion
+```
+
+Reload your shell and verify that bash-completion is correctly installed by typing `type _init_completion`.
+
+## Enable karmadactl autocompletion
+You now need to ensure that the karmadactl completion script gets sourced in all your shell sessions. There are two ways in which you can do this:
+
+- Source the completion script in your ~/.bashrc file:
+
+```bash
+echo 'source <(karmadactl completion bash)' >>~/.bashrc
+```
+
+- Add the completion script to the /etc/bash_completion.d directory:
+
+```bash
+karmadactl completion bash >/etc/bash_completion.d/karmadactl
+```
+
+If you have an alias for karmadactl, you can extend shell completion to work with that alias:
+
+```bash
+echo 'alias km=karmadactl' >>~/.bashrc
+echo 'complete -F __start_karmadactl km' >>~/.bashrc
+```
+
+> **Note:** bash-completion sources all completion scripts in /etc/bash_completion.d.
+
+Both approaches are equivalent. After reloading your shell, karmadactl autocompletion should be working.
+
+## Enable kubectl-karmada autocompletion
+Currently, kubectl plugins do not support autocomplete, but it is already planned in [Command line completion for kubectl plugins](https://github.com/kubernetes/kubernetes/issues/74178).
+
+We will update the documentation as soon as it does.
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/karmadactl/karmadactl-commands/karmadactl.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/karmadactl/karmadactl-commands/karmadactl.md
new file mode 100644
index 000000000..aa769e0a6
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/karmadactl/karmadactl-commands/karmadactl.md
@@ -0,0 +1,61 @@
+---
+title: karmadactl
+---
+
+karmadactl controls a Kubernetes Cluster Federation.
+
+### Synopsis
+
+karmadactl controls a Kubernetes Cluster Federation.
+
+```
+karmadactl [flags]
+```
+
+### Options
+
+```
+ --add-dir-header If true, adds the file directory to the header of the log messages
+ --alsologtostderr log to standard error as well as files (no effect when -logtostderr=true)
+ -h, --help help for karmadactl
+ --kubeconfig string Paths to a kubeconfig. Only required if out-of-cluster.
+ --log-backtrace-at traceLocation when logging hits line file:N, emit a stack trace (default :0)
+ --log-dir string If non-empty, write log files in this directory (no effect when -logtostderr=true)
+ --log-file string If non-empty, use this log file (no effect when -logtostderr=true)
+ --log-file-max-size uint Defines the maximum size a log file can grow to (no effect when -logtostderr=true). Unit is megabytes. If the value is 0, the maximum file size is unlimited. (default 1800)
+ --logtostderr log to standard error instead of files (default true)
+ --one-output If true, only write logs to their native severity level (vs also writing to each lower severity level; no effect when -logtostderr=true)
+ --skip-headers If true, avoid header prefixes in the log messages
+ --skip-log-headers If true, avoid headers when opening log files (no effect when -logtostderr=true)
+ --stderrthreshold severity logs at or above this threshold go to stderr when writing to files and stderr (no effect when -logtostderr=true or -alsologtostderr=false) (default 2)
+ -v, --v Level number for the log level verbosity
+ --vmodule moduleSpec comma-separated list of pattern=N settings for file-filtered logging
+```
+
+### SEE ALSO
+
+* [karmadactl addons](karmadactl_addons.md) - Enable or disable a Karmada addon
+* [karmadactl apply](karmadactl_apply.md) - Apply a configuration to a resource by file name or stdin and propagate them into member clusters
+* [karmadactl cordon](karmadactl_cordon.md) - Mark cluster as unschedulable
+* [karmadactl deinit](karmadactl_deinit.md) - Remove the Karmada control plane from the Kubernetes cluster.
+* [karmadactl describe](karmadactl_describe.md) - Show details of a specific resource or group of resources in a cluster
+* [karmadactl exec](karmadactl_exec.md) - Execute a command in a container in a cluster
+* [karmadactl get](karmadactl_get.md) - Display one or many resources
+* [karmadactl init](karmadactl_init.md) - Install the Karmada control plane in a Kubernetes cluster
+* [karmadactl interpret](karmadactl_interpret.md) - Validate, test and edit interpreter customization before applying it to the control plane
+* [karmadactl join](karmadactl_join.md) - Register a cluster to Karmada control plane with Push mode
+* [karmadactl logs](karmadactl_logs.md) - Print the logs for a container in a pod in a cluster
+* [karmadactl options](karmadactl_options.md) - Print the list of flags inherited by all commands
+* [karmadactl promote](karmadactl_promote.md) - Promote resources from legacy clusters to Karmada control plane
+* [karmadactl register](karmadactl_register.md) - Register a cluster to Karmada control plane with Pull mode
+* [karmadactl taint](karmadactl_taint.md) - Update the taints on one or more clusters
+* [karmadactl token](karmadactl_token.md) - Manage bootstrap tokens
+* [karmadactl top](karmadactl_top.md) - Display resource (CPU/memory) usage of member clusters
+* [karmadactl uncordon](karmadactl_uncordon.md) - Mark cluster as schedulable
+* [karmadactl unjoin](karmadactl_unjoin.md) - Remove a cluster from Karmada control plane
+* [karmadactl version](karmadactl_version.md) - Print the version information
+
+#### Go Back to [Karmadactl Commands](karmadactl_index.md) Homepage.
+
+
+###### Auto generated by [spf13/cobra script in Karmada](https://github.com/karmada-io/karmada/tree/master/hack/tools/genkarmadactldocs).
\ No newline at end of file
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/karmadactl/karmadactl-commands/karmadactl_addons.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/karmadactl/karmadactl-commands/karmadactl_addons.md
new file mode 100644
index 000000000..cf2b4a196
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/karmadactl/karmadactl-commands/karmadactl_addons.md
@@ -0,0 +1,60 @@
+---
+title: karmadactl addons
+---
+
+Enable or disable a Karmada addon
+
+### Synopsis
+
+Enable or disable a Karmada addon.
+
+ These addons are currently supported:
+
+ 1. karmada-descheduler
+ 2. karmada-metrics-adapter
+ 3. karmada-scheduler-estimator
+ 4. karmada-search
+
+### Examples
+
+```
+ # Enable or disable Karmada addons to the karmada-host cluster
+ karmadactl addons enable karmada-search
+```
+
+### Options
+
+```
+ -h, --help help for addons
+```
+
+### Options inherited from parent commands
+
+```
+ --add-dir-header If true, adds the file directory to the header of the log messages
+ --alsologtostderr log to standard error as well as files (no effect when -logtostderr=true)
+ --kubeconfig string Paths to a kubeconfig. Only required if out-of-cluster.
+ --log-backtrace-at traceLocation when logging hits line file:N, emit a stack trace (default :0)
+ --log-dir string If non-empty, write log files in this directory (no effect when -logtostderr=true)
+ --log-file string If non-empty, use this log file (no effect when -logtostderr=true)
+ --log-file-max-size uint Defines the maximum size a log file can grow to (no effect when -logtostderr=true). Unit is megabytes. If the value is 0, the maximum file size is unlimited. (default 1800)
+ --logtostderr log to standard error instead of files (default true)
+ --one-output If true, only write logs to their native severity level (vs also writing to each lower severity level; no effect when -logtostderr=true)
+ --skip-headers If true, avoid header prefixes in the log messages
+ --skip-log-headers If true, avoid headers when opening log files (no effect when -logtostderr=true)
+ --stderrthreshold severity logs at or above this threshold go to stderr when writing to files and stderr (no effect when -logtostderr=true or -alsologtostderr=false) (default 2)
+ -v, --v Level number for the log level verbosity
+ --vmodule moduleSpec comma-separated list of pattern=N settings for file-filtered logging
+```
+
+### SEE ALSO
+
+* [karmadactl](karmadactl.md) - karmadactl controls a Kubernetes Cluster Federation.
+* [karmadactl addons disable](karmadactl_addons_disable.md) - Disable karmada addons from Kubernetes
+* [karmadactl addons enable](karmadactl_addons_enable.md) - Enable Karmada addons from Kubernetes
+* [karmadactl addons list](karmadactl_addons_list.md) - List karmada addons from Kubernetes
+
+#### Go Back to [Karmadactl Commands](karmadactl_index.md) Homepage.
+
+
+###### Auto generated by [spf13/cobra script in Karmada](https://github.com/karmada-io/karmada/tree/master/hack/tools/genkarmadactldocs).
\ No newline at end of file
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/karmadactl/karmadactl-commands/karmadactl_addons_disable.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/karmadactl/karmadactl-commands/karmadactl_addons_disable.md
new file mode 100644
index 000000000..73a3a852c
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/karmadactl/karmadactl-commands/karmadactl_addons_disable.md
@@ -0,0 +1,78 @@
+---
+title: karmadactl addons disable
+---
+
+Disable karmada addons from Kubernetes
+
+### Synopsis
+
+Disable Karmada addons from Kubernetes
+
+```
+karmadactl addons disable
+```
+
+### Examples
+
+```
+  # Disable all Karmada addons except karmada-scheduler-estimator on the Kubernetes cluster
+  karmadactl addons disable all
+
+  # Disable karmada-search on the Kubernetes cluster
+  karmadactl addons disable karmada-search
+
+  # Disable karmada-search and karmada-descheduler on the Kubernetes cluster
+  karmadactl addons disable karmada-search karmada-descheduler
+
+  # Disable karmada-search and the karmada-scheduler-estimator of member cluster member1 on the Kubernetes cluster
+  karmadactl addons disable karmada-search karmada-scheduler-estimator --cluster member1
+
+  # Specify the host cluster kubeconfig
+  karmadactl addons disable karmada-search --kubeconfig /root/.kube/config
+
+ # Specify the Karmada control plane kubeconfig
+ karmadactl addons disable karmada-search --karmada-kubeconfig /etc/karmada/karmada-apiserver.config
+
+ # Specify the namespace where Karmada components are installed
+ karmadactl addons disable karmada-search --namespace karmada-system
+```
+
+### Options
+
+```
+ -C, --cluster string Name of the member cluster that enables or disables the scheduler estimator.
+ --context string The name of the kubeconfig context to use.
+ -f, --force Disable addons without prompting for confirmation.
+ -h, --help help for disable
+ --karmada-context string The name of the karmada control plane kubeconfig context to use.
+ --karmada-kubeconfig string Path to the karmada control plane kubeconfig file. (default "/etc/karmada/karmada-apiserver.config")
+ --kubeconfig string Path to the host cluster kubeconfig file.
+ -n, --namespace string namespace where Karmada components are installed. (default "karmada-system")
+```
+
+### Options inherited from parent commands
+
+```
+ --add-dir-header If true, adds the file directory to the header of the log messages
+ --alsologtostderr log to standard error as well as files (no effect when -logtostderr=true)
+ --log-backtrace-at traceLocation when logging hits line file:N, emit a stack trace (default :0)
+ --log-dir string If non-empty, write log files in this directory (no effect when -logtostderr=true)
+ --log-file string If non-empty, use this log file (no effect when -logtostderr=true)
+ --log-file-max-size uint Defines the maximum size a log file can grow to (no effect when -logtostderr=true). Unit is megabytes. If the value is 0, the maximum file size is unlimited. (default 1800)
+ --logtostderr log to standard error instead of files (default true)
+ --one-output If true, only write logs to their native severity level (vs also writing to each lower severity level; no effect when -logtostderr=true)
+ --skip-headers If true, avoid header prefixes in the log messages
+ --skip-log-headers If true, avoid headers when opening log files (no effect when -logtostderr=true)
+ --stderrthreshold severity logs at or above this threshold go to stderr when writing to files and stderr (no effect when -logtostderr=true or -alsologtostderr=false) (default 2)
+ -v, --v Level number for the log level verbosity
+ --vmodule moduleSpec comma-separated list of pattern=N settings for file-filtered logging
+```
+
+### SEE ALSO
+
+* [karmadactl addons](karmadactl_addons.md) - Enable or disable a Karmada addon
+
+#### Go Back to [Karmadactl Commands](karmadactl_index.md) Homepage.
+
+
+###### Auto generated by [spf13/cobra script in Karmada](https://github.com/karmada-io/karmada/tree/master/hack/tools/genkarmadactldocs).
\ No newline at end of file
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/karmadactl/karmadactl-commands/karmadactl_addons_enable.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/karmadactl/karmadactl-commands/karmadactl_addons_enable.md
new file mode 100644
index 000000000..6596dfe7c
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/karmadactl/karmadactl-commands/karmadactl_addons_enable.md
@@ -0,0 +1,94 @@
+---
+title: karmadactl addons enable
+---
+
+Enable Karmada addons from Kubernetes
+
+### Synopsis
+
+Enable Karmada addons from Kubernetes
+
+```
+karmadactl addons enable
+```
+
+### Examples
+
+```
+  # Enable all Karmada addons except karmada-scheduler-estimator on the Kubernetes cluster
+  karmadactl addons enable all
+
+  # Enable karmada-search on the Kubernetes cluster
+  karmadactl addons enable karmada-search
+
+  # Enable karmada-search and karmada-descheduler on the Kubernetes cluster
+  karmadactl addons enable karmada-search karmada-descheduler
+
+  # Enable karmada-search and the karmada-scheduler-estimator for member cluster member1 on the Kubernetes cluster
+ karmadactl addons enable karmada-search karmada-scheduler-estimator -C member1 --member-kubeconfig /etc/karmada/member.config --member-context member1
+
+ # Specify the host cluster kubeconfig
+ karmadactl addons enable karmada-search --kubeconfig /root/.kube/config
+
+ # Specify the Karmada control plane kubeconfig
+ karmadactl addons enable karmada-search --karmada-kubeconfig /etc/karmada/karmada-apiserver.config
+
+ # Specify the karmada-search image
+ karmadactl addons enable karmada-search --karmada-search-image docker.io/karmada/karmada-search:latest
+
+ # Specify the namespace where Karmada components are installed
+ karmadactl addons enable karmada-search --namespace karmada-system
+```
+
+### Options
+
+```
+ --apiservice-timeout int Wait apiservice ready timeout. (default 30)
+ -C, --cluster string Name of the member cluster that enables or disables the scheduler estimator.
+ --context string The name of the kubeconfig context to use.
+ -h, --help help for enable
+ --host-cluster-domain string The cluster domain of karmada host cluster. (e.g. --host-cluster-domain=host.karmada) (default "cluster.local")
+ --karmada-context string The name of the karmada control plane kubeconfig context to use.
+ --karmada-descheduler-image string karmada-descheduler image (default "docker.io/karmada/karmada-descheduler:v0.0.0-master")
+ --karmada-descheduler-replicas int32 karmada descheduler replica set (default 1)
+ --karmada-estimator-replicas int32 karmada-scheduler-estimator replica set (default 1)
+ --karmada-kubeconfig string Path to the karmada control plane kubeconfig file. (default "/etc/karmada/karmada-apiserver.config")
+ --karmada-metrics-adapter-image string karmada-metrics-adapter image (default "docker.io/karmada/karmada-metrics-adapter:v0.0.0-master")
+ --karmada-metrics-adapter-replicas int32 karmada-metrics-adapter replica set (default 1)
+ --karmada-scheduler-estimator-image string karmada-scheduler-estimator image (default "docker.io/karmada/karmada-scheduler-estimator:v0.0.0-master")
+ --karmada-search-image string karmada-search image (default "docker.io/karmada/karmada-search:v0.0.0-master")
+ --karmada-search-replicas int32 Karmada-search replica set (default 1)
+ --kubeconfig string Path to the host cluster kubeconfig file.
+ --member-context string Member cluster's context which to deploy scheduler estimator
+ --member-kubeconfig string Member cluster's kubeconfig which to deploy scheduler estimator
+ -n, --namespace string namespace where Karmada components are installed. (default "karmada-system")
+ --pod-timeout int Wait pod ready timeout. (default 120)
+ --private-image-registry string Private image registry where pull images from. If set, all required images will be downloaded from it, it would be useful in offline installation scenarios.
+```
+
+### Options inherited from parent commands
+
+```
+ --add-dir-header If true, adds the file directory to the header of the log messages
+ --alsologtostderr log to standard error as well as files (no effect when -logtostderr=true)
+ --log-backtrace-at traceLocation when logging hits line file:N, emit a stack trace (default :0)
+ --log-dir string If non-empty, write log files in this directory (no effect when -logtostderr=true)
+ --log-file string If non-empty, use this log file (no effect when -logtostderr=true)
+ --log-file-max-size uint Defines the maximum size a log file can grow to (no effect when -logtostderr=true). Unit is megabytes. If the value is 0, the maximum file size is unlimited. (default 1800)
+ --logtostderr log to standard error instead of files (default true)
+ --one-output If true, only write logs to their native severity level (vs also writing to each lower severity level; no effect when -logtostderr=true)
+ --skip-headers If true, avoid header prefixes in the log messages
+ --skip-log-headers If true, avoid headers when opening log files (no effect when -logtostderr=true)
+ --stderrthreshold severity logs at or above this threshold go to stderr when writing to files and stderr (no effect when -logtostderr=true or -alsologtostderr=false) (default 2)
+ -v, --v Level number for the log level verbosity
+ --vmodule moduleSpec comma-separated list of pattern=N settings for file-filtered logging
+```
+
+### SEE ALSO
+
+* [karmadactl addons](karmadactl_addons.md) - Enable or disable a Karmada addon
+
+#### Go Back to [Karmadactl Commands](karmadactl_index.md) Homepage.
+
+
+###### Auto generated by [spf13/cobra script in Karmada](https://github.com/karmada-io/karmada/tree/master/hack/tools/genkarmadactldocs).
\ No newline at end of file
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/karmadactl/karmadactl-commands/karmadactl_addons_list.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/karmadactl/karmadactl-commands/karmadactl_addons_list.md
new file mode 100644
index 000000000..fdf6edcd0
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/karmadactl/karmadactl-commands/karmadactl_addons_list.md
@@ -0,0 +1,71 @@
+---
+title: karmadactl addons list
+---
+
+List karmada addons from Kubernetes
+
+### Synopsis
+
+List Karmada addons from Kubernetes
+
+```
+karmadactl addons list
+```
+
+### Examples
+
+```
+  # List all Karmada addons installed in the Kubernetes cluster
+  karmadactl addons list
+
+  # List all Karmada addons, including the scheduler estimator of member1, installed in the Kubernetes cluster
+ karmadactl addons list -C member1
+
+ # Specify the host cluster kubeconfig
+ karmadactl addons list --kubeconfig /root/.kube/config
+
+ # Specify the karmada control plane kubeconfig
+ karmadactl addons list --karmada-kubeconfig /etc/karmada/karmada-apiserver.config
+
+ # Specify the namespace where Karmada components are installed
+ karmadactl addons list --namespace karmada-system
+```
+
+### Options
+
+```
+ -C, --cluster string Name of the member cluster that enables or disables the scheduler estimator.
+ --context string The name of the kubeconfig context to use.
+ -h, --help help for list
+ --karmada-context string The name of the karmada control plane kubeconfig context to use.
+ --karmada-kubeconfig string Path to the karmada control plane kubeconfig file. (default "/etc/karmada/karmada-apiserver.config")
+ --kubeconfig string Path to the host cluster kubeconfig file.
+ -n, --namespace string namespace where Karmada components are installed. (default "karmada-system")
+```
+
+### Options inherited from parent commands
+
+```
+ --add-dir-header If true, adds the file directory to the header of the log messages
+ --alsologtostderr log to standard error as well as files (no effect when -logtostderr=true)
+ --log-backtrace-at traceLocation when logging hits line file:N, emit a stack trace (default :0)
+ --log-dir string If non-empty, write log files in this directory (no effect when -logtostderr=true)
+ --log-file string If non-empty, use this log file (no effect when -logtostderr=true)
+ --log-file-max-size uint Defines the maximum size a log file can grow to (no effect when -logtostderr=true). Unit is megabytes. If the value is 0, the maximum file size is unlimited. (default 1800)
+ --logtostderr log to standard error instead of files (default true)
+ --one-output If true, only write logs to their native severity level (vs also writing to each lower severity level; no effect when -logtostderr=true)
+ --skip-headers If true, avoid header prefixes in the log messages
+ --skip-log-headers If true, avoid headers when opening log files (no effect when -logtostderr=true)
+ --stderrthreshold severity logs at or above this threshold go to stderr when writing to files and stderr (no effect when -logtostderr=true or -alsologtostderr=false) (default 2)
+ -v, --v Level number for the log level verbosity
+ --vmodule moduleSpec comma-separated list of pattern=N settings for file-filtered logging
+```
+
+### SEE ALSO
+
+* [karmadactl addons](karmadactl_addons.md) - Enable or disable a Karmada addon
+
+#### Go Back to [Karmadactl Commands](karmadactl_index.md) Homepage.
+
+
+###### Auto generated by [spf13/cobra script in Karmada](https://github.com/karmada-io/karmada/tree/master/hack/tools/genkarmadactldocs).
\ No newline at end of file
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/karmadactl/karmadactl-commands/karmadactl_apply.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/karmadactl/karmadactl-commands/karmadactl_apply.md
new file mode 100644
index 000000000..d807dea06
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/karmadactl/karmadactl-commands/karmadactl_apply.md
@@ -0,0 +1,100 @@
+---
+title: karmadactl apply
+---
+
+Apply a configuration to a resource by file name or stdin and propagate them into member clusters
+
+### Synopsis
+
+Apply a configuration to a resource by file name or stdin and propagate them into member clusters. The resource name must be specified. This resource will be created if it doesn't exist yet. To use 'apply', always create the resource initially with either 'apply' or 'create --save-config'.
+
+ JSON and YAML formats are accepted.
+
+ Alpha Disclaimer: the --prune functionality is not yet complete. Do not use unless you are aware of what the current state is. See https://issues.k8s.io/34274.
+
+ Note: It implements the function of 'kubectl apply' by default. If you want to propagate the resources into member clusters, please use 'karmadactl apply --all-clusters'.
+
+```
+karmadactl apply (-f FILENAME | -k DIRECTORY)
+```
+
+### Examples
+
+```
+ # Apply the configuration without propagation into member clusters. It acts as 'kubectl apply'.
+ karmadactl apply -f manifest.yaml
+
+ # Apply the configuration with propagation into specific member clusters.
+ karmadactl apply -f manifest.yaml --cluster member1,member2
+
+ # Apply resources from a directory and propagate them into all member clusters.
+ karmadactl apply -f dir/ --all-clusters
+
+ # Apply resources from a directory containing kustomization.yaml - e.g.
+ # dir/kustomization.yaml, and propagate them into all member clusters
+ karmadactl apply -k dir/ --all-clusters
+```
+
+### Options
+
+```
+ --all Select all resources in the namespace of the specified resource types.
+ --all-clusters If present, propagates a group of resources to all member clusters.
+ --allow-missing-template-keys If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats. (default true)
+ --cascade string[="background"] Must be "background", "orphan", or "foreground". Selects the deletion cascading strategy for the dependents (e.g. Pods created by a ReplicationController). Defaults to background. (default "background")
+ -C, --cluster strings If present, propagates a group of resources to specified clusters.
+ --dry-run string[="unchanged"] Must be "none", "server", or "client". If client strategy, only print the object that would be sent, without sending it. If server strategy, submit server-side request without persisting the resource. (default "none")
+ --field-manager string Name of the manager used to track field ownership. (default "kubectl-client-side-apply")
+ -f, --filename strings The files that contain the configurations to apply.
+ --force If true, immediately remove resources from API and bypass graceful deletion. Note that immediate deletion of some resources may result in inconsistency or data loss and requires confirmation.
+ --force-conflicts If true, server-side apply will force the changes against conflicts.
+ --grace-period int Period of time in seconds given to the resource to terminate gracefully. Ignored if negative. Set to 1 for immediate shutdown. Can only be set to 0 when --force is true (force deletion). (default -1)
+ -h, --help help for apply
+ --karmada-context string The name of the kubeconfig context to use
+ --kubeconfig string Path to the kubeconfig file to use for CLI requests.
+ -k, --kustomize string Process a kustomization directory. This flag can't be used together with -f or -R.
+ -n, --namespace string If present, the namespace scope for this CLI request
+ --openapi-patch If true, use openapi to calculate diff when the openapi presents and the resource can be found in the openapi spec. Otherwise, fall back to use baked-in types. (default true)
+ -o, --output string Output format. One of: (json, yaml, name, go-template, go-template-file, template, templatefile, jsonpath, jsonpath-as-json, jsonpath-file).
+ --overwrite Automatically resolve conflicts between the modified and live configuration by using values from the modified configuration (default true)
+ --prune Automatically delete resource objects, that do not appear in the configs and are created by either apply or create --save-config. Should be used with either -l or --all.
+ --prune-allowlist stringArray Overwrite the default allowlist with for --prune
+ -R, --recursive Process the directory used in -f, --filename recursively. Useful when you want to manage related manifests organized within the same directory.
+ -l, --selector string Selector (label query) to filter on, supports '=', '==', and '!='.(e.g. -l key1=value1,key2=value2). Matching objects must satisfy all of the specified label constraints.
+ --server-side If true, apply runs in the server instead of the client.
+ --show-managed-fields If true, keep the managedFields when printing objects in JSON or YAML format.
+ --template string Template string or path to template file to use when -o=go-template, -o=go-template-file. The template format is golang templates [http://golang.org/pkg/text/template/#pkg-overview].
+ --timeout duration The length of time to wait before giving up on a delete, zero means determine a timeout from the size of the object
+ --validate string[="strict"] Must be one of: strict (or true), warn, ignore (or false).
+ "true" or "strict" will use a schema to validate the input and fail the request if invalid. It will perform server side validation if ServerSideFieldValidation is enabled on the api-server, but will fall back to less reliable client-side validation if not.
+ "warn" will warn about unknown or duplicate fields without blocking the request if server-side field validation is enabled on the API server, and behave as "ignore" otherwise.
+ "false" or "ignore" will not perform any schema validation, silently dropping any unknown or duplicate fields. (default "strict")
+ --wait If true, wait for resources to be gone before returning. This waits for finalizers.
+```
+
+### Options inherited from parent commands
+
+```
+ --add-dir-header If true, adds the file directory to the header of the log messages
+ --alsologtostderr log to standard error as well as files (no effect when -logtostderr=true)
+ --log-backtrace-at traceLocation when logging hits line file:N, emit a stack trace (default :0)
+ --log-dir string If non-empty, write log files in this directory (no effect when -logtostderr=true)
+ --log-file string If non-empty, use this log file (no effect when -logtostderr=true)
+ --log-file-max-size uint Defines the maximum size a log file can grow to (no effect when -logtostderr=true). Unit is megabytes. If the value is 0, the maximum file size is unlimited. (default 1800)
+ --logtostderr log to standard error instead of files (default true)
+ --one-output If true, only write logs to their native severity level (vs also writing to each lower severity level; no effect when -logtostderr=true)
+ --skip-headers If true, avoid header prefixes in the log messages
+ --skip-log-headers If true, avoid headers when opening log files (no effect when -logtostderr=true)
+ --stderrthreshold severity logs at or above this threshold go to stderr when writing to files and stderr (no effect when -logtostderr=true or -alsologtostderr=false) (default 2)
+ -v, --v Level number for the log level verbosity
+ --vmodule moduleSpec comma-separated list of pattern=N settings for file-filtered logging
+```
+
+### SEE ALSO
+
+* [karmadactl](karmadactl.md) - karmadactl controls a Kubernetes Cluster Federation.
+
+#### Go Back to [Karmadactl Commands](karmadactl_index.md) Homepage.
+
+
+###### Auto generated by [spf13/cobra script in Karmada](https://github.com/karmada-io/karmada/tree/master/hack/tools/genkarmadactldocs).
\ No newline at end of file
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/karmadactl/karmadactl-commands/karmadactl_cordon.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/karmadactl/karmadactl-commands/karmadactl_cordon.md
new file mode 100644
index 000000000..5c63abd94
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/karmadactl/karmadactl-commands/karmadactl_cordon.md
@@ -0,0 +1,56 @@
+---
+title: karmadactl cordon
+---
+
+Mark cluster as unschedulable
+
+### Synopsis
+
+Mark cluster as unschedulable.
+
+```
+karmadactl cordon CLUSTER
+```
+
+### Examples
+
+```
+ # Mark cluster "foo" as unschedulable.
+ karmadactl cordon foo
+```
+
+### Options
+
+```
+ --dry-run Run the command in dry-run mode, without making any server requests.
+ -h, --help help for cordon
+ --karmada-context string The name of the kubeconfig context to use
+ --kubeconfig string Path to the kubeconfig file to use for CLI requests.
+```
+
+### Options inherited from parent commands
+
+```
+ --add-dir-header If true, adds the file directory to the header of the log messages
+ --alsologtostderr log to standard error as well as files (no effect when -logtostderr=true)
+ --log-backtrace-at traceLocation when logging hits line file:N, emit a stack trace (default :0)
+ --log-dir string If non-empty, write log files in this directory (no effect when -logtostderr=true)
+ --log-file string If non-empty, use this log file (no effect when -logtostderr=true)
+ --log-file-max-size uint Defines the maximum size a log file can grow to (no effect when -logtostderr=true). Unit is megabytes. If the value is 0, the maximum file size is unlimited. (default 1800)
+ --logtostderr log to standard error instead of files (default true)
+ --one-output If true, only write logs to their native severity level (vs also writing to each lower severity level; no effect when -logtostderr=true)
+ --skip-headers If true, avoid header prefixes in the log messages
+ --skip-log-headers If true, avoid headers when opening log files (no effect when -logtostderr=true)
+ --stderrthreshold severity logs at or above this threshold go to stderr when writing to files and stderr (no effect when -logtostderr=true or -alsologtostderr=false) (default 2)
+ -v, --v Level number for the log level verbosity
+ --vmodule moduleSpec comma-separated list of pattern=N settings for file-filtered logging
+```
+
+### SEE ALSO
+
+* [karmadactl](karmadactl.md) - karmadactl controls a Kubernetes Cluster Federation.
+
+#### Go Back to [Karmadactl Commands](karmadactl_index.md) Homepage.
+
+
+###### Auto generated by [spf13/cobra script in Karmada](https://github.com/karmada-io/karmada/tree/master/hack/tools/genkarmadactldocs).
\ No newline at end of file
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/karmadactl/karmadactl-commands/karmadactl_deinit.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/karmadactl/karmadactl-commands/karmadactl_deinit.md
new file mode 100644
index 000000000..c79699956
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/karmadactl/karmadactl-commands/karmadactl_deinit.md
@@ -0,0 +1,59 @@
+---
+title: karmadactl deinit
+---
+
+Remove the Karmada control plane from the Kubernetes cluster.
+
+### Synopsis
+
+Remove the Karmada control plane from the Kubernetes cluster.
+
+```
+karmadactl deinit
+```
+
+### Examples
+
+```
+ # Remove Karmada from the Kubernetes cluster.
+ karmadactl deinit
+```
+
+### Options
+
+```
+ --context string The name of the kubeconfig context to use
+ --dry-run Run the command in dry-run mode, without making any server requests.
+ -f, --force Reset cluster without prompting for confirmation.
+ -h, --help help for deinit
+ --kubeconfig string Path to the host cluster kubeconfig file.
+ -n, --namespace string namespace where Karmada components are installed. (default "karmada-system")
+ --purge-namespace Run the command with purge-namespace, the namespace which Karmada components were installed will be deleted.
+```
+
+### Options inherited from parent commands
+
+```
+ --add-dir-header If true, adds the file directory to the header of the log messages
+ --alsologtostderr log to standard error as well as files (no effect when -logtostderr=true)
+ --log-backtrace-at traceLocation when logging hits line file:N, emit a stack trace (default :0)
+ --log-dir string If non-empty, write log files in this directory (no effect when -logtostderr=true)
+ --log-file string If non-empty, use this log file (no effect when -logtostderr=true)
+ --log-file-max-size uint Defines the maximum size a log file can grow to (no effect when -logtostderr=true). Unit is megabytes. If the value is 0, the maximum file size is unlimited. (default 1800)
+ --logtostderr log to standard error instead of files (default true)
+ --one-output If true, only write logs to their native severity level (vs also writing to each lower severity level; no effect when -logtostderr=true)
+ --skip-headers If true, avoid header prefixes in the log messages
+ --skip-log-headers If true, avoid headers when opening log files (no effect when -logtostderr=true)
+ --stderrthreshold severity logs at or above this threshold go to stderr when writing to files and stderr (no effect when -logtostderr=true or -alsologtostderr=false) (default 2)
+ -v, --v Level number for the log level verbosity
+ --vmodule moduleSpec comma-separated list of pattern=N settings for file-filtered logging
+```
+
+### SEE ALSO
+
+* [karmadactl](karmadactl.md) - karmadactl controls a Kubernetes Cluster Federation.
+
+#### Go Back to [Karmadactl Commands](karmadactl_index.md) Homepage.
+
+
+###### Auto generated by [spf13/cobra script in Karmada](https://github.com/karmada-io/karmada/tree/master/hack/tools/genkarmadactldocs).
\ No newline at end of file
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/karmadactl/karmadactl-commands/karmadactl_describe.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/karmadactl/karmadactl-commands/karmadactl_describe.md
new file mode 100644
index 000000000..9123ffe49
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/karmadactl/karmadactl-commands/karmadactl_describe.md
@@ -0,0 +1,83 @@
+---
+title: karmadactl describe
+---
+
+Show details of a specific resource or group of resources in a cluster
+
+### Synopsis
+
+Show details of a specific resource or group of resources in a member cluster.
+
+ Print a detailed description of the selected resources, including related resources such as events or controllers. You may select a single object by name, all objects of that type, provide a name prefix, or label selector. For example:
+
+ $ karmadactl describe TYPE NAME_PREFIX
+
+ will first check for an exact match on TYPE and NAME_PREFIX. If no such resource exists, it will output details for every resource that has a name prefixed with NAME_PREFIX.
+
+```
+karmadactl describe (-f FILENAME | TYPE [NAME_PREFIX | -l label] | TYPE/NAME) (-C CLUSTER)
+```
+
+### Examples
+
+```
+ # Describe a pod in cluster(member1)
+ karmadactl describe pods/nginx -C=member1
+
+ # Describe all pods in cluster(member1)
+ karmadactl describe pods -C=member1
+
+ # Describe a pod identified by type and name in "pod.json" in cluster(member1)
+ karmadactl describe -f pod.json -C=member1
+
+ # Describe pods by label name=myLabel in cluster(member1)
+ karmadactl describe po -l name=myLabel -C=member1
+
+ # Describe all pods managed by the 'frontend' replication controller in cluster(member1)
+ # (rc-created pods get the name of the rc as a prefix in the pod name)
+ karmadactl describe pods frontend -C=member1
+```
+
+### Options
+
+```
+ -A, --all-namespaces If present, list the requested object(s) across all namespaces. Namespace in current context is ignored even if specified with --namespace.
+ --chunk-size int Return large lists in chunks rather than all at once. Pass 0 to disable. This flag is beta and may change in the future. (default 500)
+ -C, --cluster string Specify a member cluster
+ -f, --filename strings Filename, directory, or URL to files containing the resource to describe
+ -h, --help help for describe
+ --karmada-context string The name of the kubeconfig context to use
+ --kubeconfig string Path to the kubeconfig file to use for CLI requests.
+ -k, --kustomize string Process the kustomization directory. This flag can't be used together with -f or -R.
+ -n, --namespace string If present, the namespace scope for this CLI request
+ -R, --recursive Process the directory used in -f, --filename recursively. Useful when you want to manage related manifests organized within the same directory.
+ -l, --selector string Selector (label query) to filter on, supports '=', '==', and '!='.(e.g. -l key1=value1,key2=value2). Matching objects must satisfy all of the specified label constraints.
+ --show-events If true, display events related to the described object. (default true)
+```
+
+### Options inherited from parent commands
+
+```
+ --add-dir-header If true, adds the file directory to the header of the log messages
+ --alsologtostderr log to standard error as well as files (no effect when -logtostderr=true)
+ --log-backtrace-at traceLocation when logging hits line file:N, emit a stack trace (default :0)
+ --log-dir string If non-empty, write log files in this directory (no effect when -logtostderr=true)
+ --log-file string If non-empty, use this log file (no effect when -logtostderr=true)
+ --log-file-max-size uint Defines the maximum size a log file can grow to (no effect when -logtostderr=true). Unit is megabytes. If the value is 0, the maximum file size is unlimited. (default 1800)
+ --logtostderr log to standard error instead of files (default true)
+ --one-output If true, only write logs to their native severity level (vs also writing to each lower severity level; no effect when -logtostderr=true)
+ --skip-headers If true, avoid header prefixes in the log messages
+ --skip-log-headers If true, avoid headers when opening log files (no effect when -logtostderr=true)
+ --stderrthreshold severity logs at or above this threshold go to stderr when writing to files and stderr (no effect when -logtostderr=true or -alsologtostderr=false) (default 2)
+ -v, --v Level number for the log level verbosity
+ --vmodule moduleSpec comma-separated list of pattern=N settings for file-filtered logging
+```
+
+### SEE ALSO
+
+* [karmadactl](karmadactl.md) - karmadactl controls a Kubernetes Cluster Federation.
+
+#### Go Back to [Karmadactl Commands](karmadactl_index.md) Homepage.
+
+
+###### Auto generated by [spf13/cobra script in Karmada](https://github.com/karmada-io/karmada/tree/master/hack/tools/genkarmadactldocs).
\ No newline at end of file
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/karmadactl/karmadactl-commands/karmadactl_exec.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/karmadactl/karmadactl-commands/karmadactl_exec.md
new file mode 100644
index 000000000..4e5f55c81
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/karmadactl/karmadactl-commands/karmadactl_exec.md
@@ -0,0 +1,79 @@
+---
+title: karmadactl exec
+---
+
+Execute a command in a container in a cluster
+
+### Synopsis
+
+Execute a command in a container in a cluster.
+
+```
+karmadactl exec (POD | TYPE/NAME) [-c CONTAINER] (-C CLUSTER) -- COMMAND [args...]
+```
+
+### Examples
+
+```
+ # Get output from running the 'date' command from pod mypod, using the first container by default in cluster(member1)
+ karmadactl exec mypod -C=member1 -- date
+
+ # Get output from running the 'date' command in ruby-container from pod mypod in cluster(member1)
+ karmadactl exec mypod -c ruby-container -C=member1 -- date
+
+ # Switch to raw terminal mode; sends stdin to 'bash' in ruby-container from pod mypod in cluster(member1)
+ # and sends stdout/stderr from 'bash' back to the client
+ karmadactl exec mypod -c ruby-container -C=member1 -i -t -- bash -il
+
+ # Get output from running 'date' command from the first pod of the deployment mydeployment, using the first container by default in cluster(member1)
+ karmadactl exec deploy/mydeployment -C=member1 -- date
+
+ # Get output from running 'date' command from the first pod of the service myservice, using the first container by default in cluster(member1)
+ karmadactl exec svc/myservice -C=member1 -- date
+```
+
+### Options
+
+```
+ -C, --cluster string Specify a member cluster
+ -c, --container string Container name. If omitted, use the kubectl.kubernetes.io/default-container annotation for selecting the container to be attached or the first container in the pod will be chosen
+ -f, --filename strings to use to exec into the resource
+ -h, --help help for exec
+ --karmada-context string The name of the kubeconfig context to use
+ --kubeconfig string Path to the kubeconfig file to use for CLI requests.
+ -n, --namespace string If present, the namespace scope for this CLI request
+ --pod-running-timeout duration The length of time (like 5s, 2m, or 3h, higher than zero) to wait until at least one pod is running (default 1m0s)
+ -q, --quiet Only print output from the remote session
+ -i, --stdin Pass stdin to the container
+ -t, --tty Stdin is a TTY
+```
+
+### Options inherited from parent commands
+
+```
+ --add-dir-header If true, adds the file directory to the header of the log messages
+ --alsologtostderr log to standard error as well as files (no effect when -logtostderr=true)
+ --log-backtrace-at traceLocation when logging hits line file:N, emit a stack trace (default :0)
+ --log-dir string If non-empty, write log files in this directory (no effect when -logtostderr=true)
+ --log-file string If non-empty, use this log file (no effect when -logtostderr=true)
+ --log-file-max-size uint Defines the maximum size a log file can grow to (no effect when -logtostderr=true). Unit is megabytes. If the value is 0, the maximum file size is unlimited. (default 1800)
+ --logtostderr log to standard error instead of files (default true)
+ --one-output If true, only write logs to their native severity level (vs also writing to each lower severity level; no effect when -logtostderr=true)
+ --skip-headers If true, avoid header prefixes in the log messages
+ --skip-log-headers If true, avoid headers when opening log files (no effect when -logtostderr=true)
+ --stderrthreshold severity logs at or above this threshold go to stderr when writing to files and stderr (no effect when -logtostderr=true or -alsologtostderr=false) (default 2)
+ -v, --v Level number for the log level verbosity
+ --vmodule moduleSpec comma-separated list of pattern=N settings for file-filtered logging
+```
+
+### SEE ALSO
+
+* [karmadactl](karmadactl.md) - karmadactl controls a Kubernetes Cluster Federation.
+
+#### Go Back to [Karmadactl Commands](karmadactl_index.md) Homepage.
+
+
+###### Auto generated by [spf13/cobra script in Karmada](https://github.com/karmada-io/karmada/tree/master/hack/tools/genkarmadactldocs).
\ No newline at end of file
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/karmadactl/karmadactl-commands/karmadactl_get.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/karmadactl/karmadactl-commands/karmadactl_get.md
new file mode 100644
index 000000000..29f53626f
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/karmadactl/karmadactl-commands/karmadactl_get.md
@@ -0,0 +1,97 @@
+---
+title: karmadactl get
+---
+
+Display one or many resources
+
+### Synopsis
+
+Display one or many resources in member clusters.
+
+ Prints a table of the most important information about the specified resources. You can filter the list using a label selector and the --selector flag. If the desired resource type is namespaced you will only see results in your current namespace unless you pass --all-namespaces.
+
+ By specifying the output as 'template' and providing a Go template as the value of the --template flag, you can filter the attributes of the fetched resources.
+
+```
+karmadactl get [NAME | -l label | -n namespace]
+```
+
+### Examples
+
+```
+ # List all pods in ps output format
+ karmadactl get pods
+
+ # List all pods in ps output format with more information (such as node name)
+ karmadactl get pods -o wide
+
+ # List all pods of member1 cluster in ps output format
+ karmadactl get pods -C member1
+
+  # List a single replicaset with the specified NAME in ps output format
+ karmadactl get replicasets nginx
+
+ # List deployments in JSON output format, in the "v1" version of the "apps" API group
+ karmadactl get deployments.v1.apps -o json
+
+ # Return only the phase value of the specified resource
+ karmadactl get -o template deployment/nginx -C member1 --template={{.spec.replicas}}
+
+ # List all replication controllers and services together in ps output format
+ karmadactl get rs,services
+
+ # List one or more resources by their type and names
+ karmadactl get rs/nginx-cb87b6d88 service/kubernetes
+```
+
+### Options
+
+```
+ -A, --all-namespaces If present, list the requested object(s) across all namespaces. Namespace in current context is ignored even if specified with --namespace.
+ --allow-missing-template-keys If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats. (default true)
+ -C, --clusters strings -C=member1,member2
+ -h, --help help for get
+ --ignore-not-found If the requested object does not exist the command will return exit code 0.
+ --karmada-context string The name of the kubeconfig context to use
+ --kubeconfig string Path to the kubeconfig file to use for CLI requests.
+ -L, --label-columns strings Accepts a comma separated list of labels that are going to be presented as columns. Names are case-sensitive. You can also use multiple flag options like -L label1 -L label2...
+ -l, --labels string -l=label or -l label
+ -n, --namespace string If present, the namespace scope for this CLI request
+ --no-headers When using the default or custom-column output format, don't print headers (default print headers).
+ -o, --output string Output format. One of: (json, yaml, name, go-template, go-template-file, template, templatefile, jsonpath, jsonpath-as-json, jsonpath-file, custom-columns, custom-columns-file, wide). See custom columns [https://kubernetes.io/docs/reference/kubectl/#custom-columns], golang template [http://golang.org/pkg/text/template/#pkg-overview] and jsonpath template [https://kubernetes.io/docs/reference/kubectl/jsonpath/].
+ --output-watch-events Output watch event objects when --watch or --watch-only is used. Existing objects are output as initial ADDED events.
+ --show-kind If present, list the resource type for the requested object(s).
+ --show-labels When printing, show all labels as the last column (default hide labels column)
+ --show-managed-fields If true, keep the managedFields when printing objects in JSON or YAML format.
+ --sort-by string If non-empty, sort list types using this field specification. The field specification is expressed as a JSONPath expression (e.g. '{.metadata.name}'). The field in the API resource specified by this JSONPath expression must be an integer or a string.
+ --template string Template string or path to template file to use when -o=go-template, -o=go-template-file. The template format is golang templates [http://golang.org/pkg/text/template/#pkg-overview].
+ -w, --watch After listing/getting the requested object, watch for changes. Uninitialized objects are excluded if no object name is provided.
+ --watch-only Watch for changes to the requested object(s), without listing/getting first.
+```
+
+### Options inherited from parent commands
+
+```
+ --add-dir-header If true, adds the file directory to the header of the log messages
+ --alsologtostderr log to standard error as well as files (no effect when -logtostderr=true)
+ --log-backtrace-at traceLocation when logging hits line file:N, emit a stack trace (default :0)
+ --log-dir string If non-empty, write log files in this directory (no effect when -logtostderr=true)
+ --log-file string If non-empty, use this log file (no effect when -logtostderr=true)
+ --log-file-max-size uint Defines the maximum size a log file can grow to (no effect when -logtostderr=true). Unit is megabytes. If the value is 0, the maximum file size is unlimited. (default 1800)
+ --logtostderr log to standard error instead of files (default true)
+ --one-output If true, only write logs to their native severity level (vs also writing to each lower severity level; no effect when -logtostderr=true)
+ --skip-headers If true, avoid header prefixes in the log messages
+ --skip-log-headers If true, avoid headers when opening log files (no effect when -logtostderr=true)
+ --stderrthreshold severity logs at or above this threshold go to stderr when writing to files and stderr (no effect when -logtostderr=true or -alsologtostderr=false) (default 2)
+ -v, --v Level number for the log level verbosity
+ --vmodule moduleSpec comma-separated list of pattern=N settings for file-filtered logging
+```
+
+### SEE ALSO
+
+* [karmadactl](karmadactl.md) - karmadactl controls a Kubernetes Cluster Federation.
+
+#### Go Back to [Karmadactl Commands](karmadactl_index.md) Homepage.
+
+
+###### Auto generated by [spf13/cobra script in Karmada](https://github.com/karmada-io/karmada/tree/master/hack/tools/genkarmadactldocs).
\ No newline at end of file
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/karmadactl/karmadactl-commands/karmadactl_index.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/karmadactl/karmadactl-commands/karmadactl_index.md
new file mode 100644
index 000000000..5f745f3d8
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/karmadactl/karmadactl-commands/karmadactl_index.md
@@ -0,0 +1,85 @@
+---
+title: Karmadactl Commands
+---
+
+
+## Basic Commands
+
+* [karmadactl get](karmadactl_get.md) - Display one or many resources in member clusters.
+
+ Prints a table of the most important information about the specified resources. You can filter the list using a label selector and the --selector flag. If the desired resource type is namespaced you will only see results in your current namespace unless you pass --all-namespaces.
+
+ By specifying the output as 'template' and providing a Go template as the value of the --template flag, you can filter the attributes of the fetched resources.
+
+## Cluster Registration Commands
+
+* [karmadactl addons](karmadactl_addons.md) - Enable or disable a Karmada addon.
+
+ These addons are currently supported:
+
+ 1. karmada-descheduler
+ 2. karmada-metrics-adapter
+ 3. karmada-scheduler-estimator
+ 4. karmada-search
+* [karmadactl deinit](karmadactl_deinit.md) - Remove the Karmada control plane from the Kubernetes cluster.
+* [karmadactl init](karmadactl_init.md) - Install the Karmada control plane in a Kubernetes cluster.
+
+ By default, the images and CRD tarball are downloaded remotely. For offline installation, you can set '--private-image-registry' and '--crds'.
+* [karmadactl join](karmadactl_join.md) - Register a cluster to Karmada control plane with Push mode.
+* [karmadactl register](karmadactl_register.md) - Register a cluster to Karmada control plane with Pull mode.
+* [karmadactl token](karmadactl_token.md) - This command manages bootstrap tokens. It is optional and needed only for advanced use cases.
+
+  In short, bootstrap tokens are used for establishing bidirectional trust between a client and a server. A bootstrap token can be used when a client (for example, a member cluster that is about to join the control plane) needs to trust the server it is talking to; a bootstrap token with the "signing" usage serves this purpose. Bootstrap tokens can also provide short-lived authentication to the API Server (the token serves as a way for the API Server to trust the client), for example for doing the TLS Bootstrap.
+
+  What is a bootstrap token, more exactly? It is a Secret in the kube-system namespace of type "bootstrap.kubernetes.io/token". A bootstrap token must be of the form "[a-z0-9]{6}.[a-z0-9]{16}"; the former part is the public token ID, while the latter is the token secret and must be kept private under all circumstances. The Secret must be named "bootstrap-token-(token-id)".
+
+  This command is the same as 'kubeadm token', but it creates tokens to be used by member clusters. A minimal token-based registration sketch is shown after this list.
+* [karmadactl unjoin](karmadactl_unjoin.md) - Remove a cluster from Karmada control plane.
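+
+A minimal sketch of the token-based registration flow described above. The endpoint, token, and CA cert hash are placeholders, and exact flags may vary between releases; check `karmadactl token --help` and `karmadactl register --help` for your version.
+
+```
+# On the Karmada control plane: create a short-lived bootstrap token and
+# print the matching register command for member clusters.
+karmadactl token create --print-register-command
+
+# List the bootstrap tokens that currently exist.
+karmadactl token list
+
+# On the member cluster: register in Pull mode using the printed command
+# (the endpoint, token, and CA cert hash below are placeholders).
+karmadactl register 10.10.0.1:32443 --token t2jgtm.9nybj0526mww1nxg --discovery-token-ca-cert-hash sha256:<hash>
+```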
+
+## Cluster Management Commands
+
+* [karmadactl cordon](karmadactl_cordon.md) - Mark cluster as unschedulable.
+* [karmadactl taint](karmadactl_taint.md) - Update the taints on one or more clusters.
+
+ * A taint consists of a key, value, and effect. As an argument here, it is expressed as key=value:effect.
+ * The key must begin with a letter or number, and may contain letters, numbers, hyphens, dots, and underscores, up to 253 characters.
+ * Optionally, the key can begin with a DNS subdomain prefix and a single '/', like example.com/my-app.
+ * The value is optional. If given, it must begin with a letter or number, and may contain letters, numbers, hyphens, dots, and underscores, up to 63 characters.
+ * The effect must be NoSchedule, PreferNoSchedule or NoExecute.
+  * Currently, taints can only be applied to clusters. A short taint example is shown after this list.
+* [karmadactl uncordon](karmadactl_uncordon.md) - Mark cluster as schedulable.
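+
+A minimal sketch of the taint syntax described above, assuming a member cluster named "foo" (the key, value, and cluster name are placeholders):
+
+```
+# Add a taint with key "dedicated", value "special-user", and effect NoSchedule to cluster "foo".
+karmadactl taint clusters foo dedicated=special-user:NoSchedule
+
+# Remove the taint with key "dedicated" and effect NoSchedule from cluster "foo", if one exists.
+karmadactl taint clusters foo dedicated:NoSchedule-
+```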
+
+## Troubleshooting and Debugging Commands
+
+* [karmadactl describe](karmadactl_describe.md) - Show details of a specific resource or group of resources in a member cluster.
+
+ Print a detailed description of the selected resources, including related resources such as events or controllers. You may select a single object by name, all objects of that type, provide a name prefix, or label selector. For example:
+
+ $ karmadactl describe TYPE NAME_PREFIX
+
+ will first check for an exact match on TYPE and NAME_PREFIX. If no such resource exists, it will output details for every resource that has a name prefixed with NAME_PREFIX.
+* [karmadactl exec](karmadactl_exec.md) - Execute a command in a container in a cluster.
+* [karmadactl interpret](karmadactl_interpret.md) - Validate, test and edit interpreter customization before applying it to the control plane.
+
+ 1. Validate the ResourceInterpreterCustomization configuration as per API schema
+ and try to load the scripts for syntax check.
+
+ 2. Run the rules locally and test if the result is expected. Similar to the dry run.
+
+  3. Edit customization. Similar to 'kubectl edit'.
+* [karmadactl logs](karmadactl_logs.md) - Print the logs for a container in a pod in a member cluster or specified resource. If the pod has only one container, the container name is optional.
+
+## Advanced Commands
+
+* [karmadactl apply](karmadactl_apply.md) - Apply a configuration to a resource by file name or stdin and propagate them into member clusters. The resource name must be specified. This resource will be created if it doesn't exist yet. To use 'apply', always create the resource initially with either 'apply' or 'create --save-config'.
+
+ JSON and YAML formats are accepted.
+
+ Alpha Disclaimer: the --prune functionality is not yet complete. Do not use unless you are aware of what the current state is. See https://issues.k8s.io/34274.
+
+  Note: It implements the function of 'kubectl apply' by default. If you want to propagate the resources into member clusters, please use 'karmadactl apply --all-clusters'.
+* [karmadactl promote](karmadactl_promote.md) - Promote resources from legacy clusters to the Karmada control plane. Requires that the cluster has already been joined or registered.
+
+  If the resource already exists in the Karmada control plane, please edit the PropagationPolicy and OverridePolicy to propagate it. See the short sketch below.
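+
+A minimal promote sketch, assuming member cluster "member1" has already been joined and runs an nginx Deployment in the default namespace (all names are placeholders):
+
+```
+# Take over the Deployment "nginx" from member cluster "member1"
+# so that it is managed by the Karmada control plane.
+karmadactl promote deployment nginx -n default -C member1
+```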
+
+###### Auto generated by [script in Karmada](https://github.com/karmada-io/karmada/tree/master/hack/tools/genkarmadactldocs).
\ No newline at end of file
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/karmadactl/karmadactl-commands/karmadactl_init.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/karmadactl/karmadactl-commands/karmadactl_init.md
new file mode 100644
index 000000000..3bd7dc370
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/karmadactl/karmadactl-commands/karmadactl_init.md
@@ -0,0 +1,132 @@
+---
+title: karmadactl init
+---
+
+Install the Karmada control plane in a Kubernetes cluster
+
+### Synopsis
+
+Install the Karmada control plane in a Kubernetes cluster.
+
+ By default, the images and CRD tarball are downloaded remotely. For offline installation, you can set '--private-image-registry' and '--crds'.
+
+```
+karmadactl init
+```
+
+### Examples
+
+```
+ # Install Karmada in Kubernetes cluster
+ # The karmada-apiserver binds the master node's IP by default
+ karmadactl init
+
+ # China mainland registry mirror can be specified by using kube-image-mirror-country
+ karmadactl init --kube-image-mirror-country=cn
+
+ # Kube registry can be specified by using kube-image-registry
+ karmadactl init --kube-image-registry=registry.cn-hangzhou.aliyuncs.com/google_containers
+
+ # Specify the URL to download CRD tarball
+ karmadactl init --crds https://github.com/karmada-io/karmada/releases/download/v0.0.0-master/crds.tar.gz
+
+ # Specify the local CRD tarball
+ karmadactl init --crds /root/crds.tar.gz
+
+  # Use a PVC for persistent etcd data storage
+  karmadactl init --etcd-storage-mode PVC --storage-classes-name {StorageClassesName}
+
+  # Use hostPath for persistent etcd data storage. For data safety, only one etcd pod can run in hostPath mode
+  karmadactl init --etcd-storage-mode hostPath --etcd-replicas 1
+
+  # Use hostPath for persistent etcd data storage, selecting nodes by labels
+  karmadactl init --etcd-storage-mode hostPath --etcd-node-selector-labels karmada.io/etcd=true
+
+ # Private registry can be specified for all images
+ karmadactl init --etcd-image local.registry.com/library/etcd:3.5.9-0
+
+ # Specify Karmada API Server IP address. If not set, the address on the master node will be used.
+ karmadactl init --karmada-apiserver-advertise-address 192.168.1.2
+
+  # Deploy a highly available (HA) Karmada
+  karmadactl init --karmada-apiserver-replicas 3 --etcd-replicas 3 --etcd-storage-mode PVC --storage-classes-name {StorageClassesName}
+
+  # Specify external IPs (load balancer or HA IPs) that are used to sign the certificate
+  karmadactl init --cert-external-ip 10.235.1.2 --cert-external-dns www.karmada.io
+```
+
+### Options
+
+```
+ --cert-external-dns string the external DNS of Karmada certificate (e.g localhost,localhost.com)
+ --cert-external-ip string the external IP of Karmada certificate (e.g 192.168.1.2,172.16.1.2)
+ --cert-validity-period duration the validity period of Karmada certificate (e.g 8760h0m0s, that is 365 days) (default 8760h0m0s)
+ --context string The name of the kubeconfig context to use
+ --crds string Karmada crds resource.(local file e.g. --crds /root/crds.tar.gz) (default "https://github.com/karmada-io/karmada/releases/download/v0.0.0-master/crds.tar.gz")
+ --etcd-data string etcd data path,valid in hostPath mode. (default "/var/lib/karmada-etcd")
+ --etcd-image string etcd image
+ --etcd-init-image string etcd init container image (default "docker.io/alpine:3.19.1")
+ --etcd-node-selector-labels string etcd pod select the labels of the node. valid in hostPath mode ( e.g. --etcd-node-selector-labels karmada.io/etcd=true)
+ --etcd-pvc-size string etcd data path,valid in pvc mode. (default "5Gi")
+ --etcd-replicas int32 etcd replica set, cluster 3,5...singular (default 1)
+ --etcd-storage-mode string etcd data storage mode(emptyDir,hostPath,PVC). value is PVC, specify --storage-classes-name (default "hostPath")
+ --external-etcd-ca-cert-path string The path of CA certificate of the external etcd cluster in pem format.
+ --external-etcd-client-cert-path string The path of client side certificate to the external etcd cluster in pem format.
+ --external-etcd-client-key-path string The path of client side private key to the external etcd cluster in pem format.
+ --external-etcd-key-prefix string The key prefix to be configured to kube-apiserver through --etcd-prefix.
+ --external-etcd-servers string The server urls of external etcd cluster, to be used by kube-apiserver through --etcd-servers.
+ -h, --help help for init
+ --host-cluster-domain string The cluster domain of karmada host cluster. (e.g. --host-cluster-domain=host.karmada) (default "cluster.local")
+ --image-pull-secrets strings Image pull secrets are used to pull images from the private registry, could be secret list separated by comma (e.g '--image-pull-secrets PullSecret1,PullSecret2', the secrets should be pre-settled in the namespace declared by '--namespace')
+ --karmada-aggregated-apiserver-image string Karmada aggregated apiserver image (default "docker.io/karmada/karmada-aggregated-apiserver:v0.0.0-master")
+ --karmada-aggregated-apiserver-replicas int32 Karmada aggregated apiserver replica set (default 1)
+ --karmada-apiserver-advertise-address string The IP address the Karmada API Server will advertise it's listening on. If not set, the address on the master node will be used.
+ --karmada-apiserver-image string Kubernetes apiserver image
+ --karmada-apiserver-replicas int32 Karmada apiserver replica set (default 1)
+ --karmada-controller-manager-image string Karmada controller manager image (default "docker.io/karmada/karmada-controller-manager:v0.0.0-master")
+ --karmada-controller-manager-replicas int32 Karmada controller manager replica set (default 1)
+ -d, --karmada-data string Karmada data path. kubeconfig cert and crds files (default "/etc/karmada")
+ --karmada-kube-controller-manager-image string Kubernetes controller manager image
+ --karmada-kube-controller-manager-replicas int32 Karmada kube controller manager replica set (default 1)
+ --karmada-pki string Karmada pki path. Karmada cert files (default "/etc/karmada/pki")
+ --karmada-scheduler-image string Karmada scheduler image (default "docker.io/karmada/karmada-scheduler:v0.0.0-master")
+ --karmada-scheduler-replicas int32 Karmada scheduler replica set (default 1)
+ --karmada-webhook-image string Karmada webhook image (default "docker.io/karmada/karmada-webhook:v0.0.0-master")
+ --karmada-webhook-replicas int32 Karmada webhook replica set (default 1)
+ --kube-image-mirror-country string Country code of the kube image registry to be used. For Chinese mainland users, set it to cn
+ --kube-image-registry string Kube image registry. For Chinese mainland users, you may use local gcr.io mirrors such as registry.cn-hangzhou.aliyuncs.com/google_containers to override default kube image registry
+ --kube-image-tag string Choose a specific Kubernetes version for the control plane. (default "v1.26.12")
+ --kubeconfig string absolute path to the kubeconfig file
+ -n, --namespace string Kubernetes namespace (default "karmada-system")
+ -p, --port int32 Karmada apiserver service node port (default 32443)
+ --private-image-registry string Private image registry where pull images from. If set, all required images will be downloaded from it, it would be useful in offline installation scenarios. In addition, you still can use --kube-image-registry to specify the registry for Kubernetes's images.
+ --storage-classes-name string Kubernetes StorageClasses Name
+ --wait-component-ready-timeout int Wait for karmada component ready timeout. 0 means wait forever (default 120)
+```
+
+### Options inherited from parent commands
+
+```
+ --add-dir-header If true, adds the file directory to the header of the log messages
+ --alsologtostderr log to standard error as well as files (no effect when -logtostderr=true)
+ --log-backtrace-at traceLocation when logging hits line file:N, emit a stack trace (default :0)
+ --log-dir string If non-empty, write log files in this directory (no effect when -logtostderr=true)
+ --log-file string If non-empty, use this log file (no effect when -logtostderr=true)
+ --log-file-max-size uint Defines the maximum size a log file can grow to (no effect when -logtostderr=true). Unit is megabytes. If the value is 0, the maximum file size is unlimited. (default 1800)
+ --logtostderr log to standard error instead of files (default true)
+ --one-output If true, only write logs to their native severity level (vs also writing to each lower severity level; no effect when -logtostderr=true)
+ --skip-headers If true, avoid header prefixes in the log messages
+ --skip-log-headers If true, avoid headers when opening log files (no effect when -logtostderr=true)
+ --stderrthreshold severity logs at or above this threshold go to stderr when writing to files and stderr (no effect when -logtostderr=true or -alsologtostderr=false) (default 2)
+ -v, --v Level number for the log level verbosity
+ --vmodule moduleSpec comma-separated list of pattern=N settings for file-filtered logging
+```
+
+### SEE ALSO
+
+* [karmadactl](karmadactl.md) - karmadactl controls a Kubernetes Cluster Federation.
+
+#### Go Back to [Karmadactl Commands](karmadactl_index.md) Homepage.
+
+
+###### Auto generated by [spf13/cobra script in Karmada](https://github.com/karmada-io/karmada/tree/master/hack/tools/genkarmadactldocs).
\ No newline at end of file
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/karmadactl/karmadactl-commands/karmadactl_interpret.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/karmadactl/karmadactl-commands/karmadactl_interpret.md
new file mode 100644
index 000000000..f95fb3b0c
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/karmadactl/karmadactl-commands/karmadactl_interpret.md
@@ -0,0 +1,103 @@
+---
+title: karmadactl interpret
+---
+
+Validate, test and edit interpreter customization before applying it to the control plane
+
+### Synopsis
+
+Validate, test and edit interpreter customization before applying it to the control plane.
+
+ 1. Validate the ResourceInterpreterCustomization configuration as per API schema
+ and try to load the scripts for syntax check.
+
+ 2. Run the rules locally and test if the result is expected. Similar to the dry run.
+
+  3. Edit customization. Similar to the kubectl edit.
+
+```
+karmadactl interpret (-f FILENAME) (--operation OPERATION) [--ARGS VALUE]...
+```
+
+### Examples
+
+```
+ # Check the customizations in file
+ karmadactl interpret -f customization.json --check
+
+ # Execute the retention rule
+ karmadactl interpret -f customization.yml --operation retain --desired-file desired.yml --observed-file observed.yml
+
+ # Execute the replicaResource rule
+ karmadactl interpret -f customization.yml --operation interpretReplica --observed-file observed.yml
+
+ # Execute the replicaRevision rule
+ karmadactl interpret -f customization.yml --operation reviseReplica --observed-file observed.yml --desired-replica 2
+
+ # Execute the statusReflection rule
+ karmadactl interpret -f customization.yml --operation interpretStatus --observed-file observed.yml
+
+ # Execute the healthInterpretation rule
+ karmadactl interpret -f customization.yml --operation interpretHealth --observed-file observed.yml
+
+ # Execute the dependencyInterpretation rule
+ karmadactl interpret -f customization.yml --operation interpretDependency --observed-file observed.yml
+
+ # Execute the statusAggregation rule
+ karmadactl interpret -f customization.yml --operation aggregateStatus --observed-file observed.yml --status-file status.yml
+
+ # Fetch observed object from url, and status items from stdin (specified with -)
+ karmadactl interpret -f customization.yml --operation aggregateStatus --observed-file https://example.com/observed.yml --status-file -
+
+ # Edit customization
+ karmadactl interpret -f customization.yml --edit
+```
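+
+As a further sketch (the directory path is a placeholder), `--check` can be combined with `-R` to validate every customization stored under a directory:
+
+```
+  # Validate all customizations found recursively under a local directory
+  karmadactl interpret -f ./customizations/ -R --check
+```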
+
+### Options
+
+```
+ --allow-missing-template-keys If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats. (default true)
+ --check Validates the given ResourceInterpreterCustomization configuration(s)
+ --desired-file string Filename, directory, or URL to files identifying the resource to use as desiredObj argument in rule script.
+ --desired-replica int32 The desiredReplica argument in rule script.
+ --edit Edit customizations
+ -f, --filename strings Filename, directory, or URL to files containing the customizations
+ -h, --help help for interpret
+ --karmada-context string The name of the kubeconfig context to use
+ --kubeconfig string Path to the kubeconfig file to use for CLI requests.
+ --observed-file string Filename, directory, or URL to files identifying the resource to use as observedObj argument in rule script.
+ --operation string The interpret operation to use. One of: (Retain,InterpretReplica,ReviseReplica,InterpretStatus,AggregateStatus,InterpretHealth,InterpretDependency)
+ -o, --output string Output format. One of: (json, yaml, name, go-template, go-template-file, template, templatefile, jsonpath, jsonpath-as-json, jsonpath-file).
+ -R, --recursive Process the directory used in -f, --filename recursively. Useful when you want to manage related manifests organized within the same directory.
+ --show-doc Show document of rules when editing
+ --show-managed-fields If true, keep the managedFields when printing objects in JSON or YAML format.
+ --status-file string Filename, directory, or URL to files identifying the resource to use as statusItems argument in rule script.
+ --template string Template string or path to template file to use when -o=go-template, -o=go-template-file. The template format is golang templates [http://golang.org/pkg/text/template/#pkg-overview].
+```
+
+### Options inherited from parent commands
+
+```
+ --add-dir-header If true, adds the file directory to the header of the log messages
+ --alsologtostderr log to standard error as well as files (no effect when -logtostderr=true)
+ --log-backtrace-at traceLocation when logging hits line file:N, emit a stack trace (default :0)
+ --log-dir string If non-empty, write log files in this directory (no effect when -logtostderr=true)
+ --log-file string If non-empty, use this log file (no effect when -logtostderr=true)
+ --log-file-max-size uint Defines the maximum size a log file can grow to (no effect when -logtostderr=true). Unit is megabytes. If the value is 0, the maximum file size is unlimited. (default 1800)
+ --logtostderr log to standard error instead of files (default true)
+ --one-output If true, only write logs to their native severity level (vs also writing to each lower severity level; no effect when -logtostderr=true)
+ --skip-headers If true, avoid header prefixes in the log messages
+ --skip-log-headers If true, avoid headers when opening log files (no effect when -logtostderr=true)
+ --stderrthreshold severity logs at or above this threshold go to stderr when writing to files and stderr (no effect when -logtostderr=true or -alsologtostderr=false) (default 2)
+ -v, --v Level number for the log level verbosity
+ --vmodule moduleSpec comma-separated list of pattern=N settings for file-filtered logging
+```
+
+### SEE ALSO
+
+* [karmadactl](karmadactl.md) - karmadactl controls a Kubernetes Cluster Federation.
+
+#### Go Back to [Karmadactl Commands](karmadactl_index.md) Homepage.
+
+
+###### Auto generated by [spf13/cobra script in Karmada](https://github.com/karmada-io/karmada/tree/master/hack/tools/genkarmadactldocs).
\ No newline at end of file
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/karmadactl/karmadactl-commands/karmadactl_join.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/karmadactl/karmadactl-commands/karmadactl_join.md
new file mode 100644
index 000000000..af9e97930
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/karmadactl/karmadactl-commands/karmadactl_join.md
@@ -0,0 +1,62 @@
+---
+title: karmadactl join
+---
+
+Register a cluster to Karmada control plane with Push mode
+
+### Synopsis
+
+Register a cluster to Karmada control plane with Push mode.
+
+```
+karmadactl join CLUSTER_NAME --cluster-kubeconfig=<CLUSTER_KUBECONFIG>
+```
+
+### Examples
+
+```
+ # Join cluster into karmada control plane, if '--cluster-context' not specified, take the cluster name as the context
+  karmadactl join CLUSTER_NAME --cluster-kubeconfig=<CLUSTER_KUBECONFIG>
+```
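+
+The `--cluster-provider`, `--cluster-region`, and `--cluster-zones` flags (see Options below) record topology information the Karmada scheduler can use to spread workloads. A sketch with placeholder kubeconfig path and topology values:
+
+```
+  # Join a cluster and record its provider, region, and zones for spread scheduling (placeholder values)
+  karmadactl join member1 --cluster-kubeconfig=/path/to/member1.kubeconfig \
+    --cluster-provider=aws --cluster-region=us-east-1 --cluster-zones=us-east-1a,us-east-1b
+```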
+
+### Options
+
+```
+ --cluster-context string Name of cluster context in kubeconfig. The current context is used by default.
+ --cluster-kubeconfig string Path of the cluster's kubeconfig.
+ --cluster-namespace string Namespace in the control plane where member cluster secrets are stored. (default "karmada-cluster")
+ --cluster-provider string Provider of the joining cluster. The Karmada scheduler can use this information to spread workloads across providers for higher availability.
+ --cluster-region string The region of the joining cluster. The Karmada scheduler can use this information to spread workloads across regions for higher availability.
+ --cluster-zones strings The zones of the joining cluster. The Karmada scheduler can use this information to spread workloads across zones for higher availability.
+ --dry-run Run the command in dry-run mode, without making any server requests.
+ -h, --help help for join
+ --karmada-context string The name of the kubeconfig context to use
+ --kubeconfig string Path to the kubeconfig file to use for CLI requests.
+```
+
+### Options inherited from parent commands
+
+```
+ --add-dir-header If true, adds the file directory to the header of the log messages
+ --alsologtostderr log to standard error as well as files (no effect when -logtostderr=true)
+ --log-backtrace-at traceLocation when logging hits line file:N, emit a stack trace (default :0)
+ --log-dir string If non-empty, write log files in this directory (no effect when -logtostderr=true)
+ --log-file string If non-empty, use this log file (no effect when -logtostderr=true)
+ --log-file-max-size uint Defines the maximum size a log file can grow to (no effect when -logtostderr=true). Unit is megabytes. If the value is 0, the maximum file size is unlimited. (default 1800)
+ --logtostderr log to standard error instead of files (default true)
+ --one-output If true, only write logs to their native severity level (vs also writing to each lower severity level; no effect when -logtostderr=true)
+ --skip-headers If true, avoid header prefixes in the log messages
+ --skip-log-headers If true, avoid headers when opening log files (no effect when -logtostderr=true)
+ --stderrthreshold severity logs at or above this threshold go to stderr when writing to files and stderr (no effect when -logtostderr=true or -alsologtostderr=false) (default 2)
+ -v, --v Level number for the log level verbosity
+ --vmodule moduleSpec comma-separated list of pattern=N settings for file-filtered logging
+```
+
+### SEE ALSO
+
+* [karmadactl](karmadactl.md) - karmadactl controls a Kubernetes Cluster Federation.
+
+#### Go Back to [Karmadactl Commands](karmadactl_index.md) Homepage.
+
+
+###### Auto generated by [spf13/cobra script in Karmada](https://github.com/karmada-io/karmada/tree/master/hack/tools/genkarmadactldocs).
\ No newline at end of file
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/karmadactl/karmadactl-commands/karmadactl_logs.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/karmadactl/karmadactl-commands/karmadactl_logs.md
new file mode 100644
index 000000000..b0cd9227d
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/karmadactl/karmadactl-commands/karmadactl_logs.md
@@ -0,0 +1,93 @@
+---
+title: karmadactl logs
+---
+
+Print the logs for a container in a pod in a cluster
+
+### Synopsis
+
+Print the logs for a container in a pod or specified resource in a member cluster. If the pod has only one container, the container name is optional.
+
+```
+karmadactl logs [-f] [-p] (POD | TYPE/NAME) [-c CONTAINER] (-C CLUSTER)
+```
+
+### Examples
+
+```
+ # Return snapshot logs from pod nginx with only one container in cluster(member1)
+ karmadactl logs nginx -C=member1
+
+ # Return snapshot logs from pod nginx with multi containers in cluster(member1)
+ karmadactl logs nginx --all-containers=true -C=member1
+
+ # Return snapshot logs from all containers in pods defined by label app=nginx in cluster(member1)
+ karmadactl logs -l app=nginx --all-containers=true -C=member1
+
+ # Return snapshot of previous terminated ruby container logs from pod web-1 in cluster(member1)
+ karmadactl logs -p -c ruby web-1 -C=member1
+
+ # Begin streaming the logs of the ruby container in pod web-1 in cluster(member1)
+ karmadactl logs -f -c ruby web-1 -C=member1
+
+ # Begin streaming the logs from all containers in pods defined by label app=nginx in cluster(member1)
+ karmadactl logs -f -l app=nginx --all-containers=true -C=member1
+
+ # Display only the most recent 20 lines of output in pod nginx in cluster(member1)
+ karmadactl logs --tail=20 nginx -C=member1
+
+ # Show all logs from pod nginx written in the last hour in cluster(member1)
+ karmadactl logs --since=1h nginx -C=member1
+```
+
+### Options
+
+```
+ --all-containers Get all containers' logs in the pod(s).
+ -C, --cluster string Specify a member cluster
+ -c, --container string Print the logs of this container
+ -f, --follow Specify if the logs should be streamed.
+ -h, --help help for logs
+ --ignore-errors If watching / following pod logs, allow for any errors that occur to be non-fatal
+ --insecure-skip-tls-verify-backend Skip verifying the identity of the kubelet that logs are requested from. In theory, an attacker could provide invalid log content back. You might want to use this if your kubelet serving certificates have expired.
+ --karmada-context string The name of the kubeconfig context to use
+ --kubeconfig string Path to the kubeconfig file to use for CLI requests.
+ --limit-bytes int Maximum bytes of logs to return. Defaults to no limit.
+ --max-log-requests int Specify maximum number of concurrent logs to follow when using by a selector. Defaults to 5. (default 5)
+ -n, --namespace string If present, the namespace scope for this CLI request
+ --pod-running-timeout duration The length of time (like 5s, 2m, or 3h, higher than zero) to wait until at least one pod is running (default 20s)
+ --prefix Prefix each log line with the log source (pod name and container name)
+ -p, --previous If true, print the logs for the previous instance of the container in a pod if it exists.
+ -l, --selector string Selector (label query) to filter on, supports '=', '==', and '!='.(e.g. -l key1=value1,key2=value2). Matching objects must satisfy all of the specified label constraints.
+ --since duration Only return logs newer than a relative duration like 5s, 2m, or 3h. Defaults to all logs. Only one of since-time / since may be used.
+ --since-time string Only return logs after a specific date (RFC3339). Defaults to all logs. Only one of since-time / since may be used.
+ --tail int Lines of recent log file to display. Defaults to -1 with no selector, showing all log lines otherwise 10, if a selector is provided. (default -1)
+ --timestamps Include timestamps on each line in the log output
+```
+
+### Options inherited from parent commands
+
+```
+ --add-dir-header If true, adds the file directory to the header of the log messages
+ --alsologtostderr log to standard error as well as files (no effect when -logtostderr=true)
+ --log-backtrace-at traceLocation when logging hits line file:N, emit a stack trace (default :0)
+ --log-dir string If non-empty, write log files in this directory (no effect when -logtostderr=true)
+ --log-file string If non-empty, use this log file (no effect when -logtostderr=true)
+ --log-file-max-size uint Defines the maximum size a log file can grow to (no effect when -logtostderr=true). Unit is megabytes. If the value is 0, the maximum file size is unlimited. (default 1800)
+ --logtostderr log to standard error instead of files (default true)
+ --one-output If true, only write logs to their native severity level (vs also writing to each lower severity level; no effect when -logtostderr=true)
+ --skip-headers If true, avoid header prefixes in the log messages
+ --skip-log-headers If true, avoid headers when opening log files (no effect when -logtostderr=true)
+ --stderrthreshold severity logs at or above this threshold go to stderr when writing to files and stderr (no effect when -logtostderr=true or -alsologtostderr=false) (default 2)
+ -v, --v Level number for the log level verbosity
+ --vmodule moduleSpec comma-separated list of pattern=N settings for file-filtered logging
+```
+
+### SEE ALSO
+
+* [karmadactl](karmadactl.md) - karmadactl controls a Kubernetes Cluster Federation.
+
+#### Go Back to [Karmadactl Commands](karmadactl_index.md) Homepage.
+
+
+###### Auto generated by [spf13/cobra script in Karmada](https://github.com/karmada-io/karmada/tree/master/hack/tools/genkarmadactldocs).
\ No newline at end of file
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/karmadactl/karmadactl-commands/karmadactl_options.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/karmadactl/karmadactl-commands/karmadactl_options.md
new file mode 100644
index 000000000..f0ee28c28
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/karmadactl/karmadactl-commands/karmadactl_options.md
@@ -0,0 +1,54 @@
+---
+title: karmadactl options
+---
+
+Print the list of flags inherited by all commands
+
+### Synopsis
+
+Print the list of flags inherited by all commands
+
+```
+karmadactl options [flags]
+```
+
+### Examples
+
+```
+ # Print flags inherited by all commands
+ karmadactl options
+```
+
+### Options
+
+```
+ -h, --help help for options
+```
+
+### Options inherited from parent commands
+
+```
+ --add-dir-header If true, adds the file directory to the header of the log messages
+ --alsologtostderr log to standard error as well as files (no effect when -logtostderr=true)
+ --kubeconfig string Paths to a kubeconfig. Only required if out-of-cluster.
+ --log-backtrace-at traceLocation when logging hits line file:N, emit a stack trace (default :0)
+ --log-dir string If non-empty, write log files in this directory (no effect when -logtostderr=true)
+ --log-file string If non-empty, use this log file (no effect when -logtostderr=true)
+ --log-file-max-size uint Defines the maximum size a log file can grow to (no effect when -logtostderr=true). Unit is megabytes. If the value is 0, the maximum file size is unlimited. (default 1800)
+ --logtostderr log to standard error instead of files (default true)
+ --one-output If true, only write logs to their native severity level (vs also writing to each lower severity level; no effect when -logtostderr=true)
+ --skip-headers If true, avoid header prefixes in the log messages
+ --skip-log-headers If true, avoid headers when opening log files (no effect when -logtostderr=true)
+ --stderrthreshold severity logs at or above this threshold go to stderr when writing to files and stderr (no effect when -logtostderr=true or -alsologtostderr=false) (default 2)
+ -v, --v Level number for the log level verbosity
+ --vmodule moduleSpec comma-separated list of pattern=N settings for file-filtered logging
+```
+
+### SEE ALSO
+
+* [karmadactl](karmadactl.md) - karmadactl controls a Kubernetes Cluster Federation.
+
+#### Go Back to [Karmadactl Commands](karmadactl_index.md) Homepage.
+
+
+###### Auto generated by [spf13/cobra script in Karmada](https://github.com/karmada-io/karmada/tree/master/hack/tools/genkarmadactldocs).
\ No newline at end of file
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/karmadactl/karmadactl-commands/karmadactl_promote.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/karmadactl/karmadactl-commands/karmadactl_promote.md
new file mode 100644
index 000000000..8e6fd884c
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/karmadactl/karmadactl-commands/karmadactl_promote.md
@@ -0,0 +1,84 @@
+---
+title: karmadactl promote
+---
+
+Promote resources from legacy clusters to Karmada control plane
+
+### Synopsis
+
+Promote resources from legacy clusters to the Karmada control plane. This requires that the cluster has already been joined or registered.
+
+ If the resource already exists in Karmada control plane, please edit PropagationPolicy and OverridePolicy to propagate it.
+
+```
+karmadactl promote <RESOURCE_TYPE> <RESOURCE_NAME> -n <NAMESPACE> -C <CLUSTER_NAME>
+```
+
+### Examples
+
+```
+ # Promote deployment(default/nginx) from cluster1 to Karmada
+ karmadactl promote deployment nginx -n default -C cluster1
+
+ # Promote deployment(default/nginx) with gvk from cluster1 to Karmada
+ karmadactl promote deployment.v1.apps nginx -n default -C cluster1
+
+ # Dumps the artifacts but does not deploy them to Karmada, same as 'dry run'
+ karmadactl promote deployment nginx -n default -C cluster1 -o yaml|json
+
+ # Promote secret(default/default-token) from cluster1 to Karmada
+ karmadactl promote secret default-token -n default -C cluster1
+
+  # Use '--dependencies=true' or '-d=true' to promote a resource together with its dependencies automatically (defaults to false)
+  karmadactl promote deployment nginx -n default -C cluster1 -d=true
+
+  # Use '--cluster-kubeconfig' to specify the kubeconfig of the member cluster
+  karmadactl promote deployment nginx -n default -C cluster1 --cluster-kubeconfig=<CLUSTER_KUBECONFIG>
+
+  # Use '--cluster-kubeconfig' and '--cluster-context' to specify the kubeconfig and context of the member cluster
+  karmadactl promote deployment nginx -n default -C cluster1 --cluster-kubeconfig=<CLUSTER_KUBECONFIG> --cluster-context=<CLUSTER_CONTEXT>
+```
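+
+As an additional sketch (the policy name is an arbitrary placeholder), the documented `--policy-name` flag controls the name of the PropagationPolicy created during promotion:
+
+```
+  # Promote a deployment and give the auto-created PropagationPolicy an explicit name (placeholder name)
+  karmadactl promote deployment nginx -n default -C cluster1 --policy-name=nginx-propagation
+```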
+
+### Options
+
+```
+ --auto-create-policy Automatically create a PropagationPolicy for namespace-scoped resources or create a ClusterPropagationPolicy for cluster-scoped resources. (default true)
+ -C, --cluster string the name of legacy cluster (eg -C=member1)
+ --cluster-context string Context name of legacy cluster in kubeconfig. Only works when there are multiple contexts in the kubeconfig.
+ --cluster-kubeconfig string Path of the legacy cluster's kubeconfig.
+ -d, --dependencies Promote resource with its dependencies automatically, default to false
+ --dry-run Run the command in dry-run mode, without making any server requests.
+ -h, --help help for promote
+ --karmada-context string The name of the kubeconfig context to use
+ --kubeconfig string Path to the kubeconfig file to use for CLI requests.
+ -n, --namespace string If present, the namespace scope for this CLI request
+ -o, --output string Output format. One of: json|yaml
+ --policy-name string The name of the PropagationPolicy(or ClusterPropagationPolicy) that is automatically created after promotion. If not specified, the name will be the resource name with a hash suffix that is generated by resource metadata.
+```
+
+### Options inherited from parent commands
+
+```
+ --add-dir-header If true, adds the file directory to the header of the log messages
+ --alsologtostderr log to standard error as well as files (no effect when -logtostderr=true)
+ --log-backtrace-at traceLocation when logging hits line file:N, emit a stack trace (default :0)
+ --log-dir string If non-empty, write log files in this directory (no effect when -logtostderr=true)
+ --log-file string If non-empty, use this log file (no effect when -logtostderr=true)
+ --log-file-max-size uint Defines the maximum size a log file can grow to (no effect when -logtostderr=true). Unit is megabytes. If the value is 0, the maximum file size is unlimited. (default 1800)
+ --logtostderr log to standard error instead of files (default true)
+ --one-output If true, only write logs to their native severity level (vs also writing to each lower severity level; no effect when -logtostderr=true)
+ --skip-headers If true, avoid header prefixes in the log messages
+ --skip-log-headers If true, avoid headers when opening log files (no effect when -logtostderr=true)
+ --stderrthreshold severity logs at or above this threshold go to stderr when writing to files and stderr (no effect when -logtostderr=true or -alsologtostderr=false) (default 2)
+ -v, --v Level number for the log level verbosity
+ --vmodule moduleSpec comma-separated list of pattern=N settings for file-filtered logging
+```
+
+### SEE ALSO
+
+* [karmadactl](karmadactl.md) - karmadactl controls a Kubernetes Cluster Federation.
+
+#### Go Back to [Karmadactl Commands](karmadactl_index.md) Homepage.
+
+
+###### Auto generated by [spf13/cobra script in Karmada](https://github.com/karmada-io/karmada/tree/master/hack/tools/genkarmadactldocs).
\ No newline at end of file
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/karmadactl/karmadactl-commands/karmadactl_register.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/karmadactl/karmadactl-commands/karmadactl_register.md
new file mode 100644
index 000000000..7328a9b30
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/karmadactl/karmadactl-commands/karmadactl_register.md
@@ -0,0 +1,77 @@
+---
+title: karmadactl register
+---
+
+Register a cluster to Karmada control plane with Pull mode
+
+### Synopsis
+
+Register a cluster to Karmada control plane with Pull mode.
+
+```
+karmadactl register [karmada-apiserver-endpoint]
+```
+
+### Examples
+
+```
+ # Register cluster into karmada control plane with Pull mode.
+ # If '--cluster-name' isn't specified, the cluster of current-context will be used by default.
+  karmadactl register [karmada-apiserver-endpoint] --cluster-name=<CLUSTER_NAME> --token=<TOKEN> --discovery-token-ca-cert-hash=<CA_CERT_HASH>
+
+ # UnsafeSkipCAVerification allows token-based discovery without CA verification via CACertHashes. This can weaken
+  # the security of the register command since other clusters can impersonate the control-plane.
+  karmadactl register [karmada-apiserver-endpoint] --token=<TOKEN> --discovery-token-unsafe-skip-ca-verification=true
+```
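+
+A typical Pull-mode workflow combines this command with `karmadactl token create --print-register-command`. The sketch below uses placeholder endpoint, token, and hash values:
+
+```
+  # On the Karmada control plane: create a bootstrap token and print the full register command
+  karmadactl token create --print-register-command
+
+  # On the member cluster: run the printed command (all values below are placeholders)
+  karmadactl register 10.10.0.1:32443 --cluster-name=member3 --token=abcdef.0123456789abcdef --discovery-token-ca-cert-hash=sha256:<hash>
+```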
+
+### Options
+
+```
+ --ca-cert-path string The path to the SSL certificate authority used to secure communications between member cluster and karmada-control-plane. (default "/etc/karmada/pki/ca.crt")
+ --cert-expiration-seconds int32 The expiration time of certificate. (default 31536000)
+ --cluster-name string The name of member cluster in the control plane, if not specified, the cluster of current-context is used by default.
+ --cluster-namespace string Namespace in the control plane where member cluster secrets are stored. (default "karmada-cluster")
+ --cluster-provider string Provider of the joining cluster. The Karmada scheduler can use this information to spread workloads across providers for higher availability.
+ --cluster-region string The region of the joining cluster. The Karmada scheduler can use this information to spread workloads across regions for higher availability.
+ --cluster-zones strings The zones of the joining cluster. The Karmada scheduler can use this information to spread workloads across zones for higher availability.
+ --context string Name of the cluster context in kubeconfig file.
+ --discovery-timeout duration The timeout to discovery karmada apiserver client. (default 5m0s)
+      --discovery-token-ca-cert-hash strings            For token-based discovery, validate that the root CA public key matches this hash (format: "<type>:<value>").
+ --discovery-token-unsafe-skip-ca-verification For token-based discovery, allow joining without --discovery-token-ca-cert-hash pinning.
+ --dry-run Run the command in dry-run mode, without making any server requests.
+ --enable-cert-rotation Enable means controller would rotate certificate for karmada-agent when the certificate is about to expire.
+ -h, --help help for register
+ --karmada-agent-image string Karmada agent image. (default "docker.io/karmada/karmada-agent:v0.0.0-master")
+ --karmada-agent-replicas int32 Karmada agent replicas. (default 1)
+ --kubeconfig string Path to the kubeconfig file of member cluster.
+ -n, --namespace string Namespace the karmada-agent component deployed. (default "karmada-system")
+ --proxy-server-address string Address of the proxy server that is used to proxy to the cluster.
+ --token string For token-based discovery, the token used to validate cluster information fetched from the API server.
+```
+
+### Options inherited from parent commands
+
+```
+ --add-dir-header If true, adds the file directory to the header of the log messages
+ --alsologtostderr log to standard error as well as files (no effect when -logtostderr=true)
+ --log-backtrace-at traceLocation when logging hits line file:N, emit a stack trace (default :0)
+ --log-dir string If non-empty, write log files in this directory (no effect when -logtostderr=true)
+ --log-file string If non-empty, use this log file (no effect when -logtostderr=true)
+ --log-file-max-size uint Defines the maximum size a log file can grow to (no effect when -logtostderr=true). Unit is megabytes. If the value is 0, the maximum file size is unlimited. (default 1800)
+ --logtostderr log to standard error instead of files (default true)
+ --one-output If true, only write logs to their native severity level (vs also writing to each lower severity level; no effect when -logtostderr=true)
+ --skip-headers If true, avoid header prefixes in the log messages
+ --skip-log-headers If true, avoid headers when opening log files (no effect when -logtostderr=true)
+ --stderrthreshold severity logs at or above this threshold go to stderr when writing to files and stderr (no effect when -logtostderr=true or -alsologtostderr=false) (default 2)
+ -v, --v Level number for the log level verbosity
+ --vmodule moduleSpec comma-separated list of pattern=N settings for file-filtered logging
+```
+
+### SEE ALSO
+
+* [karmadactl](karmadactl.md) - karmadactl controls a Kubernetes Cluster Federation.
+
+#### Go Back to [Karmadactl Commands](karmadactl_index.md) Homepage.
+
+
+###### Auto generated by [spf13/cobra script in Karmada](https://github.com/karmada-io/karmada/tree/master/hack/tools/genkarmadactldocs).
\ No newline at end of file
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/karmadactl/karmadactl-commands/karmadactl_taint.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/karmadactl/karmadactl-commands/karmadactl_taint.md
new file mode 100644
index 000000000..7d7b9984c
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/karmadactl/karmadactl-commands/karmadactl_taint.md
@@ -0,0 +1,74 @@
+---
+title: karmadactl taint
+---
+
+Update the taints on one or more clusters
+
+### Synopsis
+
+Update the taints on one or more clusters.
+
+ * A taint consists of a key, value, and effect. As an argument here, it is expressed as key=value:effect.
+ * The key must begin with a letter or number, and may contain letters, numbers, hyphens, dots, and underscores, up to 253 characters.
+ * Optionally, the key can begin with a DNS subdomain prefix and a single '/', like example.com/my-app.
+ * The value is optional. If given, it must begin with a letter or number, and may contain letters, numbers, hyphens, dots, and underscores, up to 63 characters.
+ * The effect must be NoSchedule, PreferNoSchedule or NoExecute.
+ * Currently taint can only apply to cluster.
+
+```
+karmadactl taint CLUSTER NAME KEY_1=VAL_1:TAINT_EFFECT_1 ... KEY_N=VAL_N:TAINT_EFFECT_N
+```
+
+### Examples
+
+```
+ # Update cluster 'foo' with a taint with key 'dedicated' and value 'special-user' and effect 'NoSchedule'
+ # If a taint with that key and effect already exists, its value is replaced as specified
+ karmadactl taint clusters foo dedicated=special-user:NoSchedule
+
+ # Remove from cluster 'foo' the taint with key 'dedicated' and effect 'NoSchedule' if one exists
+ karmadactl taint clusters foo dedicated:NoSchedule-
+
+ # Remove from cluster 'foo' all the taints with key 'dedicated'
+ karmadactl taint clusters foo dedicated-
+
+ # Add to cluster 'foo' a taint with key 'bar' and no value
+ karmadactl taint clusters foo bar:NoSchedule
+```
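+
+If updating an existing taint is rejected, the documented `--overwrite` flag (see Options below) allows it. A minimal sketch:
+
+```
+  # Explicitly allow overwriting the value of an existing taint
+  karmadactl taint clusters foo dedicated=another-user:NoSchedule --overwrite
+```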
+
+### Options
+
+```
+ --dry-run Run the command in dry-run mode, without making any server requests.
+ -h, --help help for taint
+ --karmada-context string The name of the kubeconfig context to use
+ --kubeconfig string Path to the kubeconfig file to use for CLI requests.
+ --overwrite If true, allow taints to be overwritten, otherwise reject taint updates that overwrite existing taints.
+```
+
+### Options inherited from parent commands
+
+```
+ --add-dir-header If true, adds the file directory to the header of the log messages
+ --alsologtostderr log to standard error as well as files (no effect when -logtostderr=true)
+ --log-backtrace-at traceLocation when logging hits line file:N, emit a stack trace (default :0)
+ --log-dir string If non-empty, write log files in this directory (no effect when -logtostderr=true)
+ --log-file string If non-empty, use this log file (no effect when -logtostderr=true)
+ --log-file-max-size uint Defines the maximum size a log file can grow to (no effect when -logtostderr=true). Unit is megabytes. If the value is 0, the maximum file size is unlimited. (default 1800)
+ --logtostderr log to standard error instead of files (default true)
+ --one-output If true, only write logs to their native severity level (vs also writing to each lower severity level; no effect when -logtostderr=true)
+ --skip-headers If true, avoid header prefixes in the log messages
+ --skip-log-headers If true, avoid headers when opening log files (no effect when -logtostderr=true)
+ --stderrthreshold severity logs at or above this threshold go to stderr when writing to files and stderr (no effect when -logtostderr=true or -alsologtostderr=false) (default 2)
+ -v, --v Level number for the log level verbosity
+ --vmodule moduleSpec comma-separated list of pattern=N settings for file-filtered logging
+```
+
+### SEE ALSO
+
+* [karmadactl](karmadactl.md) - karmadactl controls a Kubernetes Cluster Federation.
+
+#### Go Back to [Karmadactl Commands](karmadactl_index.md) Homepage.
+
+
+###### Auto generated by [spf13/cobra script in Karmada](https://github.com/karmada-io/karmada/tree/master/hack/tools/genkarmadactldocs).
\ No newline at end of file
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/karmadactl/karmadactl-commands/karmadactl_token.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/karmadactl/karmadactl-commands/karmadactl_token.md
new file mode 100644
index 000000000..36b3bfb8c
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/karmadactl/karmadactl-commands/karmadactl_token.md
@@ -0,0 +1,59 @@
+---
+title: karmadactl token
+---
+
+Manage bootstrap tokens
+
+### Synopsis
+
+This command manages bootstrap tokens. It is optional and needed only for advanced use cases.
+
+ In short, bootstrap tokens are used for establishing bidirectional trust between a client and a server. A bootstrap token can be used when a client (for example a member cluster that is about to join the control plane) needs to trust the server it is talking to. A bootstrap token with the "signing" usage can then be used. Bootstrap tokens can also function as a way to allow short-lived authentication to the API Server (the token serves as a way for the API Server to trust the client), for example for doing the TLS Bootstrap.
+
+ What exactly is a bootstrap token?
+  * It is a Secret in the kube-system namespace of type "bootstrap.kubernetes.io/token".
+  * A bootstrap token must be of the form "[a-z0-9]{6}.[a-z0-9]{16}". The former part is the public token ID, while the latter is the Token Secret and it must be kept private in all circumstances!
+  * The Secret must be named "bootstrap-token-(token-id)".
+
+ This command is the same as 'kubeadm token', but it creates tokens that are used by member clusters.
+
+### Examples
+
+```
+ # Create a token and print the full 'karmadactl register' flag needed to join the cluster using the token.
+ karmadactl token create --print-register-command
+```
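+
+Tokens can also be listed and deleted with the subcommands referenced below. A short sketch (the token ID is a placeholder matching the documented "[a-z0-9]{6}" form):
+
+```
+  # List existing bootstrap tokens
+  karmadactl token list
+
+  # Delete a token by its token ID (placeholder)
+  karmadactl token delete abcdef
+```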
+
+### Options
+
+```
+ -h, --help help for token
+```
+
+### Options inherited from parent commands
+
+```
+ --add-dir-header If true, adds the file directory to the header of the log messages
+ --alsologtostderr log to standard error as well as files (no effect when -logtostderr=true)
+ --kubeconfig string Paths to a kubeconfig. Only required if out-of-cluster.
+ --log-backtrace-at traceLocation when logging hits line file:N, emit a stack trace (default :0)
+ --log-dir string If non-empty, write log files in this directory (no effect when -logtostderr=true)
+ --log-file string If non-empty, use this log file (no effect when -logtostderr=true)
+ --log-file-max-size uint Defines the maximum size a log file can grow to (no effect when -logtostderr=true). Unit is megabytes. If the value is 0, the maximum file size is unlimited. (default 1800)
+ --logtostderr log to standard error instead of files (default true)
+ --one-output If true, only write logs to their native severity level (vs also writing to each lower severity level; no effect when -logtostderr=true)
+ --skip-headers If true, avoid header prefixes in the log messages
+ --skip-log-headers If true, avoid headers when opening log files (no effect when -logtostderr=true)
+ --stderrthreshold severity logs at or above this threshold go to stderr when writing to files and stderr (no effect when -logtostderr=true or -alsologtostderr=false) (default 2)
+ -v, --v Level number for the log level verbosity
+ --vmodule moduleSpec comma-separated list of pattern=N settings for file-filtered logging
+```
+
+### SEE ALSO
+
+* [karmadactl](karmadactl.md) - karmadactl controls a Kubernetes Cluster Federation.
+* [karmadactl token create](karmadactl_token_create.md) - Create bootstrap tokens on the server
+* [karmadactl token delete](karmadactl_token_delete.md) - Delete bootstrap tokens on the server
+* [karmadactl token list](karmadactl_token_list.md) - List bootstrap tokens on the server
+
+#### Go Back to [Karmadactl Commands](karmadactl_index.md) Homepage.
+
+
+###### Auto generated by [spf13/cobra script in Karmada](https://github.com/karmada-io/karmada/tree/master/hack/tools/genkarmadactldocs).
\ No newline at end of file
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/karmadactl/karmadactl-commands/karmadactl_token_create.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/karmadactl/karmadactl-commands/karmadactl_token_create.md
new file mode 100644
index 000000000..9133bb810
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/karmadactl/karmadactl-commands/karmadactl_token_create.md
@@ -0,0 +1,55 @@
+---
+title: karmadactl token create
+---
+
+Create bootstrap tokens on the server
+
+### Synopsis
+
+This command will create a bootstrap token for you. You can specify the usages for this token, the "time to live", and an optional human-friendly description.
+
+ This should be a securely generated random token of the form "[a-z0-9]{6}.[a-z0-9]{16}".
+
+```
+karmadactl token create
+```
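+
+For illustration (the TTL and description are arbitrary placeholder values), a short-lived token can be created with the documented `--ttl` and `--description` flags:
+
+```
+  # Create a token that is automatically deleted after 2 hours, with a human-friendly description
+  karmadactl token create --ttl 2h --description "temporary token for registering a new member cluster"
+```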
+
+### Options
+
+```
+ --description string A human friendly description of how this token is used.
+ --groups strings Extra groups that this token will authenticate as when used for authentication. Must match "\\Asystem:bootstrappers:[a-z0-9:-]{0,255}[a-z0-9]\\z" (default [system:bootstrappers:karmada:default-cluster-token])
+ -h, --help help for create
+ --karmada-context string The name of the kubeconfig context to use
+ --kubeconfig string Path to the kubeconfig file to use for CLI requests.
+ --print-register-command Instead of printing only the token, print the full 'karmadactl register' flag needed to register the member cluster using the token.
+ --ttl duration The duration before the token is automatically deleted (e.g. 1s, 2m, 3h). If set to '0', the token will never expire (default 24h0m0s)
+ --usages strings Describes the ways in which this token can be used. You can pass --usages multiple times or provide a comma separated list of options. Valid options: [signing,authentication] (default [signing,authentication])
+```
+
+### Options inherited from parent commands
+
+```
+ --add-dir-header If true, adds the file directory to the header of the log messages
+ --alsologtostderr log to standard error as well as files (no effect when -logtostderr=true)
+ --log-backtrace-at traceLocation when logging hits line file:N, emit a stack trace (default :0)
+ --log-dir string If non-empty, write log files in this directory (no effect when -logtostderr=true)
+ --log-file string If non-empty, use this log file (no effect when -logtostderr=true)
+ --log-file-max-size uint Defines the maximum size a log file can grow to (no effect when -logtostderr=true). Unit is megabytes. If the value is 0, the maximum file size is unlimited. (default 1800)
+ --logtostderr log to standard error instead of files (default true)
+ --one-output If true, only write logs to their native severity level (vs also writing to each lower severity level; no effect when -logtostderr=true)
+ --skip-headers If true, avoid header prefixes in the log messages
+ --skip-log-headers If true, avoid headers when opening log files (no effect when -logtostderr=true)
+ --stderrthreshold severity logs at or above this threshold go to stderr when writing to files and stderr (no effect when -logtostderr=true or -alsologtostderr=false) (default 2)
+ -v, --v Level number for the log level verbosity
+ --vmodule moduleSpec comma-separated list of pattern=N settings for file-filtered logging
+```
+
+### SEE ALSO
+
+* [karmadactl token](karmadactl_token.md) - Manage bootstrap tokens
+
+#### Go Back to [Karmadactl Commands](karmadactl_index.md) Homepage.
+
+
+###### Auto generated by [spf13/cobra script in Karmada](https://github.com/karmada-io/karmada/tree/master/hack/tools/genkarmadactldocs).
\ No newline at end of file
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/karmadactl/karmadactl-commands/karmadactl_token_delete.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/karmadactl/karmadactl-commands/karmadactl_token_delete.md
new file mode 100644
index 000000000..b753312e5
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/karmadactl/karmadactl-commands/karmadactl_token_delete.md
@@ -0,0 +1,50 @@
+---
+title: karmadactl token delete
+---
+
+Delete bootstrap tokens on the server
+
+### Synopsis
+
+This command will delete a list of bootstrap tokens for you.
+
+ The [token-value] is the full Token of the form "[a-z0-9]{6}.[a-z0-9]{16}" or the Token ID of the form "[a-z0-9]{6}" to delete.
+
+```
+karmadactl token delete [token-value] ...
+```
+
+### Options
+
+```
+ -h, --help help for delete
+ --karmada-context string The name of the kubeconfig context to use
+ --kubeconfig string Path to the kubeconfig file to use for CLI requests.
+```
+
+### Options inherited from parent commands
+
+```
+ --add-dir-header If true, adds the file directory to the header of the log messages
+ --alsologtostderr log to standard error as well as files (no effect when -logtostderr=true)
+ --log-backtrace-at traceLocation when logging hits line file:N, emit a stack trace (default :0)
+ --log-dir string If non-empty, write log files in this directory (no effect when -logtostderr=true)
+ --log-file string If non-empty, use this log file (no effect when -logtostderr=true)
+ --log-file-max-size uint Defines the maximum size a log file can grow to (no effect when -logtostderr=true). Unit is megabytes. If the value is 0, the maximum file size is unlimited. (default 1800)
+ --logtostderr log to standard error instead of files (default true)
+ --one-output If true, only write logs to their native severity level (vs also writing to each lower severity level; no effect when -logtostderr=true)
+ --skip-headers If true, avoid header prefixes in the log messages
+ --skip-log-headers If true, avoid headers when opening log files (no effect when -logtostderr=true)
+ --stderrthreshold severity logs at or above this threshold go to stderr when writing to files and stderr (no effect when -logtostderr=true or -alsologtostderr=false) (default 2)
+ -v, --v Level number for the log level verbosity
+ --vmodule moduleSpec comma-separated list of pattern=N settings for file-filtered logging
+```
+
+### SEE ALSO
+
+* [karmadactl token](karmadactl_token.md) - Manage bootstrap tokens
+
+#### Go Back to [Karmadactl Commands](karmadactl_index.md) Homepage.
+
+
+###### Auto generated by [spf13/cobra script in Karmada](https://github.com/karmada-io/karmada/tree/master/hack/tools/genkarmadactldocs).
\ No newline at end of file
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/karmadactl/karmadactl-commands/karmadactl_token_list.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/karmadactl/karmadactl-commands/karmadactl_token_list.md
new file mode 100644
index 000000000..1a708ca52
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/karmadactl/karmadactl-commands/karmadactl_token_list.md
@@ -0,0 +1,48 @@
+---
+title: karmadactl token list
+---
+
+List bootstrap tokens on the server
+
+### Synopsis
+
+This command will list all bootstrap tokens for you.
+
+```
+karmadactl token list [flags]
+```
+
+### Options
+
+```
+ -h, --help help for list
+ --karmada-context string The name of the kubeconfig context to use
+ --kubeconfig string Path to the kubeconfig file to use for CLI requests.
+```
+
+### Options inherited from parent commands
+
+```
+ --add-dir-header If true, adds the file directory to the header of the log messages
+ --alsologtostderr log to standard error as well as files (no effect when -logtostderr=true)
+ --log-backtrace-at traceLocation when logging hits line file:N, emit a stack trace (default :0)
+ --log-dir string If non-empty, write log files in this directory (no effect when -logtostderr=true)
+ --log-file string If non-empty, use this log file (no effect when -logtostderr=true)
+ --log-file-max-size uint Defines the maximum size a log file can grow to (no effect when -logtostderr=true). Unit is megabytes. If the value is 0, the maximum file size is unlimited. (default 1800)
+ --logtostderr log to standard error instead of files (default true)
+ --one-output If true, only write logs to their native severity level (vs also writing to each lower severity level; no effect when -logtostderr=true)
+ --skip-headers If true, avoid header prefixes in the log messages
+ --skip-log-headers If true, avoid headers when opening log files (no effect when -logtostderr=true)
+ --stderrthreshold severity logs at or above this threshold go to stderr when writing to files and stderr (no effect when -logtostderr=true or -alsologtostderr=false) (default 2)
+ -v, --v Level number for the log level verbosity
+ --vmodule moduleSpec comma-separated list of pattern=N settings for file-filtered logging
+```
+
+### SEE ALSO
+
+* [karmadactl token](karmadactl_token.md) - Manage bootstrap tokens
+
+#### Go Back to [Karmadactl Commands](karmadactl_index.md) Homepage.
+
+
+###### Auto generated by [spf13/cobra script in Karmada](https://github.com/karmada-io/karmada/tree/master/hack/tools/genkarmadactldocs).
\ No newline at end of file
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/karmadactl/karmadactl-commands/karmadactl_top.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/karmadactl/karmadactl-commands/karmadactl_top.md
new file mode 100644
index 000000000..1ebc4b366
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/karmadactl/karmadactl-commands/karmadactl_top.md
@@ -0,0 +1,52 @@
+---
+title: karmadactl top
+---
+
+Display resource (CPU/memory) usage of member clusters
+
+### Synopsis
+
+Display Resource (CPU/Memory) usage of member clusters.
+
+ The top command allows you to see the resource consumption for pods of member clusters.
+
+ This command requires karmada-metrics-adapter to be correctly configured and working on the Karmada control plane and Metrics Server to be correctly configured and working on the member clusters.
+
+```
+karmadactl top [flags]
+```
+
+### Options
+
+```
+ -h, --help help for top
+```
+
+### Options inherited from parent commands
+
+```
+ --add-dir-header If true, adds the file directory to the header of the log messages
+ --alsologtostderr log to standard error as well as files (no effect when -logtostderr=true)
+ --kubeconfig string Paths to a kubeconfig. Only required if out-of-cluster.
+ --log-backtrace-at traceLocation when logging hits line file:N, emit a stack trace (default :0)
+ --log-dir string If non-empty, write log files in this directory (no effect when -logtostderr=true)
+ --log-file string If non-empty, use this log file (no effect when -logtostderr=true)
+ --log-file-max-size uint Defines the maximum size a log file can grow to (no effect when -logtostderr=true). Unit is megabytes. If the value is 0, the maximum file size is unlimited. (default 1800)
+ --logtostderr log to standard error instead of files (default true)
+ --one-output If true, only write logs to their native severity level (vs also writing to each lower severity level; no effect when -logtostderr=true)
+ --skip-headers If true, avoid header prefixes in the log messages
+ --skip-log-headers If true, avoid headers when opening log files (no effect when -logtostderr=true)
+ --stderrthreshold severity logs at or above this threshold go to stderr when writing to files and stderr (no effect when -logtostderr=true or -alsologtostderr=false) (default 2)
+ -v, --v Level number for the log level verbosity
+ --vmodule moduleSpec comma-separated list of pattern=N settings for file-filtered logging
+```
+
+### SEE ALSO
+
+* [karmadactl](karmadactl.md) - karmadactl controls a Kubernetes Cluster Federation.
+* [karmadactl top pod](karmadactl_top_pod.md) - Display resource (CPU/memory) usage of pods of member clusters
+
+#### Go Back to [Karmadactl Commands](karmadactl_index.md) Homepage.
+
+
+###### Auto generated by [spf13/cobra script in Karmada](https://github.com/karmada-io/karmada/tree/master/hack/tools/genkarmadactldocs).
\ No newline at end of file
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/karmadactl/karmadactl-commands/karmadactl_top_pod.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/karmadactl/karmadactl-commands/karmadactl_top_pod.md
new file mode 100644
index 000000000..b404f1cf6
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/karmadactl/karmadactl-commands/karmadactl_top_pod.md
@@ -0,0 +1,78 @@
+---
+title: karmadactl top pod
+---
+
+Display resource (CPU/memory) usage of pods of member clusters
+
+### Synopsis
+
+Display resource (CPU/memory) usage of pods.
+
+ The 'top pod' command allows you to see the resource consumption of pods of member clusters.
+
+ Due to the metrics pipeline delay, they may be unavailable for a few minutes since pod creation.
+
+```
+karmadactl top pod [NAME | -l label]
+```
+
+### Examples
+
+```
+ # Show metrics for all pods in the default namespace
+ karmadactl top pod
+
+ # Show metrics for all pods in the given namespace
+ karmadactl top pod --namespace=NAMESPACE
+
+ # Show metrics for a given pod and its containers
+ karmadactl top pod POD_NAME --containers
+
+ # Show metrics for the pods defined by label name=myLabel
+ karmadactl top pod -l name=myLabel
+```
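+
+A further sketch using the documented `-C`/`--clusters` and `--sum` flags (cluster names are placeholders):
+
+```
+  # Show metrics only for pods in the given member clusters and print the sum of their usage
+  karmadactl top pod -C member1,member2 --sum
+```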
+
+### Options
+
+```
+ -A, --all-namespaces If present, list the requested object(s) across all namespaces. Namespace in current context is ignored even if specified with --namespace.
+ -C, --clusters strings -C=member1,member2
+ --containers If present, print usage of containers within a pod.
+ --field-selector string Selector (field query) to filter on, supports '=', '==', and '!='.(e.g. --field-selector key1=value1,key2=value2). The server only supports a limited number of field queries per type.
+ -h, --help help for pod
+ --karmada-context string The name of the kubeconfig context to use
+ --kubeconfig string Path to the kubeconfig file to use for CLI requests.
+ -n, --namespace string If present, the namespace scope for this CLI request
+ --no-headers If present, print output without headers.
+ -l, --selector string Selector (label query) to filter on, supports '=', '==', and '!='.(e.g. -l key1=value1,key2=value2). Matching objects must satisfy all of the specified label constraints.
+ --sort-by string If non-empty, sort pods list using specified field. The field can be either 'cpu' or 'memory'.
+ --sum Print the sum of the resource usage
+ --use-protocol-buffers Enables using protocol-buffers to access Metrics API. (default true)
+```
+
+### Options inherited from parent commands
+
+```
+ --add-dir-header If true, adds the file directory to the header of the log messages
+ --alsologtostderr log to standard error as well as files (no effect when -logtostderr=true)
+ --log-backtrace-at traceLocation when logging hits line file:N, emit a stack trace (default :0)
+ --log-dir string If non-empty, write log files in this directory (no effect when -logtostderr=true)
+ --log-file string If non-empty, use this log file (no effect when -logtostderr=true)
+ --log-file-max-size uint Defines the maximum size a log file can grow to (no effect when -logtostderr=true). Unit is megabytes. If the value is 0, the maximum file size is unlimited. (default 1800)
+ --logtostderr log to standard error instead of files (default true)
+ --one-output If true, only write logs to their native severity level (vs also writing to each lower severity level; no effect when -logtostderr=true)
+ --skip-headers If true, avoid header prefixes in the log messages
+ --skip-log-headers If true, avoid headers when opening log files (no effect when -logtostderr=true)
+ --stderrthreshold severity logs at or above this threshold go to stderr when writing to files and stderr (no effect when -logtostderr=true or -alsologtostderr=false) (default 2)
+ -v, --v Level number for the log level verbosity
+ --vmodule moduleSpec comma-separated list of pattern=N settings for file-filtered logging
+```
+
+### SEE ALSO
+
+* [karmadactl top](karmadactl_top.md) - Display resource (CPU/memory) usage of member clusters
+
+#### Go Back to [Karmadactl Commands](karmadactl_index.md) Homepage.
+
+
+###### Auto generated by [spf13/cobra script in Karmada](https://github.com/karmada-io/karmada/tree/master/hack/tools/genkarmadactldocs).
\ No newline at end of file
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/karmadactl/karmadactl-commands/karmadactl_uncordon.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/karmadactl/karmadactl-commands/karmadactl_uncordon.md
new file mode 100644
index 000000000..ef545048c
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/karmadactl/karmadactl-commands/karmadactl_uncordon.md
@@ -0,0 +1,56 @@
+---
+title: karmadactl uncordon
+---
+
+Mark cluster as schedulable
+
+### Synopsis
+
+Mark cluster as schedulable.
+
+```
+karmadactl uncordon CLUSTER
+```
+
+### Examples
+
+```
+ # Mark cluster "foo" as schedulable.
+ karmadactl uncordon foo
+```
+
+### Options
+
+```
+ --dry-run Run the command in dry-run mode, without making any server requests.
+ -h, --help help for uncordon
+ --karmada-context string The name of the kubeconfig context to use
+ --kubeconfig string Path to the kubeconfig file to use for CLI requests.
+```
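+
+As a sketch, `--dry-run` can be used to preview the operation before actually marking the cluster schedulable again (the cluster name `foo` follows the example above):
+
+```
+  # Preview the change without making any server requests, then apply it
+  karmadactl uncordon foo --dry-run
+  karmadactl uncordon foo
+```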
+
+### Options inherited from parent commands
+
+```
+ --add-dir-header If true, adds the file directory to the header of the log messages
+ --alsologtostderr log to standard error as well as files (no effect when -logtostderr=true)
+ --log-backtrace-at traceLocation when logging hits line file:N, emit a stack trace (default :0)
+ --log-dir string If non-empty, write log files in this directory (no effect when -logtostderr=true)
+ --log-file string If non-empty, use this log file (no effect when -logtostderr=true)
+ --log-file-max-size uint Defines the maximum size a log file can grow to (no effect when -logtostderr=true). Unit is megabytes. If the value is 0, the maximum file size is unlimited. (default 1800)
+ --logtostderr log to standard error instead of files (default true)
+ --one-output If true, only write logs to their native severity level (vs also writing to each lower severity level; no effect when -logtostderr=true)
+ --skip-headers If true, avoid header prefixes in the log messages
+ --skip-log-headers If true, avoid headers when opening log files (no effect when -logtostderr=true)
+ --stderrthreshold severity logs at or above this threshold go to stderr when writing to files and stderr (no effect when -logtostderr=true or -alsologtostderr=false) (default 2)
+ -v, --v Level number for the log level verbosity
+ --vmodule moduleSpec comma-separated list of pattern=N settings for file-filtered logging
+```
+
+### SEE ALSO
+
+* [karmadactl](karmadactl.md) - karmadactl controls a Kubernetes Cluster Federation.
+
+#### Go Back to [Karmadactl Commands](karmadactl_index.md) Homepage.
+
+
+###### Auto generated by [spf13/cobra script in Karmada](https://github.com/karmada-io/karmada/tree/master/hack/tools/genkarmadactldocs).
\ No newline at end of file
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/karmadactl/karmadactl-commands/karmadactl_unjoin.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/karmadactl/karmadactl-commands/karmadactl_unjoin.md
new file mode 100644
index 000000000..d12d57bde
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/karmadactl/karmadactl-commands/karmadactl_unjoin.md
@@ -0,0 +1,67 @@
+---
+title: karmadactl unjoin
+---
+
+Remove a cluster from Karmada control plane
+
+### Synopsis
+
+Remove a cluster from Karmada control plane.
+
+```
+karmadactl unjoin CLUSTER_NAME --cluster-kubeconfig=<CLUSTER_KUBECONFIG>
+```
+
+### Examples
+
+```
+  # Unjoin cluster from Karmada control plane, but do not remove resources created by Karmada in the unjoining cluster
+  karmadactl unjoin CLUSTER_NAME
+
+  # Unjoin cluster from Karmada control plane and attempt to remove resources created by Karmada in the unjoining cluster
+  karmadactl unjoin CLUSTER_NAME --cluster-kubeconfig=<CLUSTER_KUBECONFIG>
+
+  # Unjoin cluster from Karmada control plane with a timeout
+  karmadactl unjoin CLUSTER_NAME --cluster-kubeconfig=<CLUSTER_KUBECONFIG> --wait 2m
+```
+
+### Options
+
+```
+ --cluster-context string Context name of cluster in kubeconfig. Only works when there are multiple contexts in the kubeconfig.
+ --cluster-kubeconfig string Path of the cluster's kubeconfig.
+ --cluster-namespace string Namespace in the control plane where member cluster secrets are stored. (default "karmada-cluster")
+ --dry-run Run the command in dry-run mode, without making any server requests.
+ --force Delete cluster and secret resources even if resources in the cluster targeted for unjoin are not removed successfully.
+ -h, --help help for unjoin
+ --karmada-context string The name of the kubeconfig context to use
+ --kubeconfig string Path to the kubeconfig file to use for CLI requests.
+ --wait duration wait for the unjoin command execution process(default 60s), if there is no success after this time, timeout will be returned. (default 1m0s)
+```
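+
+A minimal sketch of forcibly removing a cluster whose resources cannot be cleaned up (for example because its API server is unreachable); the cluster name `member1` and the kubeconfig path `/etc/member1.kubeconfig` are illustrative assumptions:
+
+```
+  # Remove the cluster and its secrets even if in-cluster cleanup fails (name and path are illustrative)
+  karmadactl unjoin member1 --cluster-kubeconfig=/etc/member1.kubeconfig --force
+```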
+
+### Options inherited from parent commands
+
+```
+ --add-dir-header If true, adds the file directory to the header of the log messages
+ --alsologtostderr log to standard error as well as files (no effect when -logtostderr=true)
+ --log-backtrace-at traceLocation when logging hits line file:N, emit a stack trace (default :0)
+ --log-dir string If non-empty, write log files in this directory (no effect when -logtostderr=true)
+ --log-file string If non-empty, use this log file (no effect when -logtostderr=true)
+ --log-file-max-size uint Defines the maximum size a log file can grow to (no effect when -logtostderr=true). Unit is megabytes. If the value is 0, the maximum file size is unlimited. (default 1800)
+ --logtostderr log to standard error instead of files (default true)
+ --one-output If true, only write logs to their native severity level (vs also writing to each lower severity level; no effect when -logtostderr=true)
+ --skip-headers If true, avoid header prefixes in the log messages
+ --skip-log-headers If true, avoid headers when opening log files (no effect when -logtostderr=true)
+ --stderrthreshold severity logs at or above this threshold go to stderr when writing to files and stderr (no effect when -logtostderr=true or -alsologtostderr=false) (default 2)
+ -v, --v Level number for the log level verbosity
+ --vmodule moduleSpec comma-separated list of pattern=N settings for file-filtered logging
+```
+
+### SEE ALSO
+
+* [karmadactl](karmadactl.md) - karmadactl controls a Kubernetes Cluster Federation.
+
+#### Go Back to [Karmadactl Commands](karmadactl_index.md) Homepage.
+
+
+###### Auto generated by [spf13/cobra script in Karmada](https://github.com/karmada-io/karmada/tree/master/hack/tools/genkarmadactldocs).
\ No newline at end of file
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/karmadactl/karmadactl-commands/karmadactl_version.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/karmadactl/karmadactl-commands/karmadactl_version.md
new file mode 100644
index 000000000..fc675c7c0
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/karmadactl/karmadactl-commands/karmadactl_version.md
@@ -0,0 +1,54 @@
+---
+title: karmadactl version
+---
+
+Print the version information
+
+### Synopsis
+
+Print the version information.
+
+```
+karmadactl version [flags]
+```
+
+### Examples
+
+```
+ # Print karmadactl command version
+ karmadactl version
+```
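+
+Since `karmadactl` is also shipped as the kubectl plugin `kubectl-karmada` (see the release artifacts), the same information can presumably be printed through the plugin form as well; this equivalence is an assumption based on the kubectl plugin naming convention:
+
+```
+  # Print the version via the kubectl plugin (assumes kubectl-karmada is installed in PATH)
+  kubectl karmada version
+```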
+
+### Options
+
+```
+ -h, --help help for version
+```
+
+### Options inherited from parent commands
+
+```
+ --add-dir-header If true, adds the file directory to the header of the log messages
+ --alsologtostderr log to standard error as well as files (no effect when -logtostderr=true)
+ --kubeconfig string Paths to a kubeconfig. Only required if out-of-cluster.
+ --log-backtrace-at traceLocation when logging hits line file:N, emit a stack trace (default :0)
+ --log-dir string If non-empty, write log files in this directory (no effect when -logtostderr=true)
+ --log-file string If non-empty, use this log file (no effect when -logtostderr=true)
+ --log-file-max-size uint Defines the maximum size a log file can grow to (no effect when -logtostderr=true). Unit is megabytes. If the value is 0, the maximum file size is unlimited. (default 1800)
+ --logtostderr log to standard error instead of files (default true)
+ --one-output If true, only write logs to their native severity level (vs also writing to each lower severity level; no effect when -logtostderr=true)
+ --skip-headers If true, avoid header prefixes in the log messages
+ --skip-log-headers If true, avoid headers when opening log files (no effect when -logtostderr=true)
+ --stderrthreshold severity logs at or above this threshold go to stderr when writing to files and stderr (no effect when -logtostderr=true or -alsologtostderr=false) (default 2)
+ -v, --v Level number for the log level verbosity
+ --vmodule moduleSpec comma-separated list of pattern=N settings for file-filtered logging
+```
+
+### SEE ALSO
+
+* [karmadactl](karmadactl.md) - karmadactl controls a Kubernetes Cluster Federation.
+
+#### Go Back to [Karmadactl Commands](karmadactl_index.md) Homepage.
+
+
+###### Auto generated by [spf13/cobra script in Karmada](https://github.com/karmada-io/karmada/tree/master/hack/tools/genkarmadactldocs).
\ No newline at end of file
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/karmadactl/karmadactl-usage-conventions.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/karmadactl/karmadactl-usage-conventions.md
new file mode 100644
index 000000000..edcd4fede
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/karmadactl/karmadactl-usage-conventions.md
@@ -0,0 +1,220 @@
+---
+title: Karmadactl 用法约定
+---
+
+本文介绍 `karmadactl` 的推荐用法约定。
+
+## karmadactl interpret
+
+### YAML 文件准备
+
+
+**observed-deploy-nginx.yaml**
+
+```yaml
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+ name: nginx
+ labels:
+ app: nginx
+spec:
+ replicas: 3
+ paused: true
+ selector:
+ matchLabels:
+ app: nginx
+ template:
+ metadata:
+ labels:
+ app: nginx
+ spec:
+ nodeSelector:
+ foo: bar
+ containers:
+ - image: nginx
+ name: nginx
+ resources:
+ limits:
+ cpu: 100m
+status:
+ availableReplicas: 2
+ observedGeneration: 1
+ readyReplicas: 2
+ replicas: 2
+ updatedReplicas: 2
+```
+
+
+
+**desired-deploy-nginx.yaml**
+
+```yaml
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+ name: nginx
+ labels:
+ app: nginx
+spec:
+ replicas: 3
+ paused: false
+ selector:
+ matchLabels:
+ app: nginx
+ template:
+ metadata:
+ labels:
+ app: nginx
+ spec:
+ containers:
+ - image: nginx
+ name: nginx
+ serviceAccountName: test-sa
+```
+
+
+
+**resourceinterpretercustomization.yaml**
+
+```yaml
+apiVersion: config.karmada.io/v1alpha1
+kind: ResourceInterpreterCustomization
+metadata:
+ name: declarative-configuration-example
+spec:
+ target:
+ apiVersion: apps/v1
+ kind: Deployment
+ customizations:
+ replicaResource:
+ luaScript: >
+ local kube = require("kube")
+ function GetReplicas(obj)
+ replica = obj.spec.replicas
+ requirement = kube.accuratePodRequirements(obj.spec.template)
+ return replica, requirement
+ end
+ replicaRevision:
+ luaScript: >
+ function ReviseReplica(obj, desiredReplica)
+ obj.spec.replicas = desiredReplica
+ return obj
+ end
+ retention:
+ luaScript: >
+ function Retain(desiredObj, observedObj)
+ desiredObj.spec.paused = observedObj.spec.paused
+ return desiredObj
+ end
+ statusAggregation:
+ luaScript: >
+ function AggregateStatus(desiredObj, statusItems)
+ if statusItems == nil then
+ return desiredObj
+ end
+ if desiredObj.status == nil then
+ desiredObj.status = {}
+ end
+ replicas = 0
+ for i = 1, #statusItems do
+ if statusItems[i].status ~= nil and statusItems[i].status.replicas ~= nil then
+ replicas = replicas + statusItems[i].status.replicas
+ end
+ end
+ desiredObj.status.replicas = replicas
+ return desiredObj
+ end
+ statusReflection:
+ luaScript: >
+ function ReflectStatus (observedObj)
+ return observedObj.status
+ end
+ healthInterpretation:
+ luaScript: >
+ function InterpretHealth(observedObj)
+ return observedObj.status.readyReplicas == observedObj.spec.replicas
+ end
+ dependencyInterpretation:
+ luaScript: >
+ local kube = require("kube")
+ function GetDependencies(desiredObj)
+ refs = kube.getPodDependencies(desiredObj.spec.template, desiredObj.metadata.namespace)
+ return refs
+ end
+```
+
+
+
+**status-file.yaml**
+
+```yaml
+applied: true
+clusterName: member1
+health: Healthy
+status:
+ availableReplicas: 1
+ readyReplicas: 1
+ replicas: 1
+ updatedReplicas: 1
+---
+applied: true
+clusterName: member2
+health: Healthy
+status:
+ availableReplicas: 1
+ readyReplicas: 1
+ replicas: 1
+ updatedReplicas: 1
+```
+
+
+### 验证 ResourceInterpreterCustomization 资源配置
+
+```shell
+karmadactl interpret -f resourceinterpretercustomization.yaml --check
+```
+
+### 执行 ResourceInterpreterCustomization 特性操作
+
+#### 执行 InterpretReplica 规则
+
+```shell
+karmadactl interpret -f resourceinterpretercustomization.yaml --observed-file observed-deploy-nginx.yaml --operation=InterpretReplica
+```
+
+#### 执行 Retain 规则
+
+```shell
+karmadactl interpret -f resourceinterpretercustomization.yaml --desired-file desired-deploy-nginx.yaml --observed-file observed-deploy-nginx.yaml --operation Retain
+```
+
+#### 执行 ReviseReplica 规则
+
+```shell
+karmadactl interpret -f resourceinterpretercustomization.yaml --desired-replica 3 --observed-file observed-deploy-nginx.yaml --operation ReviseReplica
+```
+
+#### 执行 InterpretStatus 规则
+
+```shell
+karmadactl interpret -f resourceinterpretercustomization.yaml --observed-file observed-deploy-nginx.yaml --operation InterpretStatus
+```
+
+#### 执行 InterpretHealth 规则
+
+```shell
+karmadactl interpret -f resourceinterpretercustomization.yaml --observed-file observed-deploy-nginx.yaml --operation InterpretHealth
+```
+
+#### 执行 InterpretDependency 规则
+
+```shell
+karmadactl interpret -f resourceinterpretercustomization.yaml --desired-file desired-deploy-nginx.yaml --operation InterpretDependency
+```
+
+#### 执行 AggregateStatus 规则
+
+```shell
+karmadactl interpret -f resourceinterpretercustomization.yaml --desired-file desired-deploy-nginx.yaml --operation AggregateStatus --status-file status-file.yaml
+```
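+
+以上规则全部验证通过后,可以将该 ResourceInterpreterCustomization 应用到 Karmada 控制面,使其真正生效。下面是一个简单示意(其中 kubeconfig 路径 `$HOME/.kube/karmada.config` 与 context 名称 `karmada-apiserver` 只是常见本地部署的假设,请按实际环境调整):
+
+```shell
+# 将验证过的解释器配置应用到 Karmada 控制面(路径与 context 仅为示意)
+kubectl --kubeconfig "$HOME/.kube/karmada.config" --context karmada-apiserver apply -f resourceinterpretercustomization.yaml
+```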
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/object-association-mapping.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/object-association-mapping.md
new file mode 100644
index 000000000..92a3ce2bf
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/object-association-mapping.md
@@ -0,0 +1,32 @@
+---
+title: Karmada Object association mapping
+---
+
+### Review
+
+![](../resources/general/object-association-map.png)
+
+This picture was made with draw.io. If you need to update the **Review**, you can edit the file [object-association-map.drawio](../resources/general/object-association-map.drawio).
+
+## Label/Annotation information table
+
+> Note:
+> These labels and annotations are managed by Karmada. Please do not modify them.
+
+| Object | Tag | KeyName | Usage |
+| ---------------------- | ---------- | ------------------------------------------------------------ | ------------------------------------------------------------ |
+| ResourceTemplate | label | propagationpolicy.karmada.io/namespace propagationpolicy.karmada.io/name | The labels can be used to determine whether the current resource template is claimed by PropagationPolicy. |
+| | label | clusterpropagationpolicy.karmada.io/name | The label can be used to determine whether the current resource template is claimed by ClusterPropagationPolicy. |
+| ResourceBinding | label | propagationpolicy.karmada.io/namespace propagationpolicy.karmada.io/name | Through those two labels, logic can find the associated ResourceBinding from the PropagationPolicy or trace it back from the ResourceBinding to the corresponding PropagationPolicy. |
+| | label | clusterpropagationpolicy.karmada.io/name | Through the label, logic can find the associated ResourceBinding from the ClusterPropagationPolicy or trace it back from the ResourceBinding to the corresponding ClusterPropagationPolicy. |
+| | annotation | policy.karmada.io/applied-placement | Record applied placement declaration. The placement could be either PropagationPolicy's or ClusterPropagationPolicy's. |
+| ClusterResourceBinding | label | clusterpropagationpolicy.karmada.io/name | Through the label, logic can find the associated ClusterResourceBinding from the ClusterPropagationPolicy or trace it back from the ClusterResourceBinding to the corresponding ClusterPropagationPolicy. |
+| | annotation | policy.karmada.io/applied-placement | Record applied placement declaration. The placement could be either PropagationPolicy's or ClusterPropagationPolicy's. |
+| Work | label | resourcebinding.karmada.io/namespace resourcebinding.karmada.io/name | Through those two labels, logic can find the associated WorkList from the ResourceBinding or trace it back from the Work to the corresponding ResourceBinding. |
+| | label | clusterresourcebinding.karmada.io/name | Through the label, logic can find the associated WorkList from the ClusterResourceBinding or trace it back from the Work to the corresponding ClusterResourceBinding. |
+| | label | propagation.karmada.io/instruction | Valid values include: - suppressed: indicates that the resource should not be propagated. |
+| | label | endpointslice.karmada.io/namespace endpointslice.karmada.io/name | These labels are added to the Work object, which is reported by the member cluster, to specify the Service associated with the EndpointSlice. |
+| | annotation | policy.karmada.io/applied-overrides | Record override items, which should be sorted alphabetically in ascending order by OverridePolicy's name. |
+| | annotation | policy.karmada.io/applied-cluster-overrides | Record override items, which should be sorted alphabetically in ascending order by ClusterOverridePolicy's name. |
+| Workload | label | work.karmada.io/namespace work.karmada.io/name | Determines whether the current workload is managed by Karmada. Through those labels, logic can find the associated Work or trace it back from the Work to the corresponding Workload. |
+| Namespace | label | namespace.karmada.io/skip-auto-propagation | Determines whether the namespace should be skipped from auto propagation. |
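+
+These labels can also be used to query the related objects directly. A minimal sketch (the namespace `default` and the policy name `nginx-propagation` are hypothetical) of tracing from a PropagationPolicy to the ResourceBindings it claims on the Karmada control plane:
+
+```shell
+# List ResourceBindings claimed by a given PropagationPolicy (names are illustrative)
+kubectl get resourcebindings -n default \
+  -l propagationpolicy.karmada.io/namespace=default,propagationpolicy.karmada.io/name=nginx-propagation
+```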
\ No newline at end of file
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/reserved-namespaces.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/reserved-namespaces.md
new file mode 100644
index 000000000..43550dbee
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/reference/reserved-namespaces.md
@@ -0,0 +1,11 @@
+---
+title: 保留命名空间
+---
+
+> 注意: 应避免创建以 `kube-` 和 `karmada-` 为前缀的命名空间,因为它们已被 Kubernetes 和 Karmada 保留用作系统命名空间。
+> 目前,以下命名空间中的资源不会被复制:
+
+- 命名空间前缀为 `kube-`(包括但不限于 `kube-system`, `kube-public`, `kube-node-lease`)
+- `karmada-system`
+- `karmada-cluster`
+- `karmada-es-*`
\ No newline at end of file
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/releases.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/releases.md
new file mode 100644
index 000000000..4d91218ab
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/releases.md
@@ -0,0 +1,104 @@
+---
+title: 版本发布
+---
+
+## Release Notes and Assets
+
+Release notes are available on GitHub at https://github.com/karmada-io/karmada/releases.
+
+## Release Management
+
+This section provides guidelines on release timelines and release branch maintenance.
+
+### Release Timelines
+
+Karmada uses the Semantic Versioning scheme. Karmada v1.0.0 was released in December 2021. The project follows the MAJOR.MINOR.PATCH version numbering pattern.
+
+### MAJOR release
+
+Major releases contain large features, design and architectural changes, and may include incompatible API changes. Major releases are low frequency and stable over a long period of time.
+
+### MINOR release
+
+Minor releases contain features, enhancements, and fixes that are introduced in a backwards-compatible manner. Since Karmada is a fast-growing project and features continue to iterate rapidly, having a minor release approximately every few months helps balance speed and stability.
+
+* Roughly every 3 months
+
+### PATCH release
+
+Patch releases are for backwards-compatible bug fixes and very minor enhancements which do not impact stability or compatibility. Typically only critical fixes are selected for patch releases. Usually there will be at least one patch release in a minor release cycle.
+
+* When critical fixes are required, or roughly each month
+
+### Versioning
+
+Karmada uses GitHub tags to manage versions. New releases and release candidates are published using tags of the form `v<major>.<minor>.<patch>`.
+
+Whenever a PR is merged into the master branch, CI pulls the latest code, builds images, and uploads them to the image registry. The latest images of the Karmada components can usually be pulled with the `latest` tag.
+Whenever a release is published, the corresponding images are published as well, tagged with the same version as the release.
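+
+For instance, a specific released image can be pulled by its version tag; a quick sketch (the tag `v1.9.0` is an assumption, substitute the release you need):
+
+```shell
+# Pull a released image by its version tag (the tag shown is illustrative)
+docker pull karmada/karmada-controller-manager:v1.9.0
+```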
+
+### Issues
+
+Non-critical issues and features are always added to the next minor release milestone, by default.
+
+Critical issues, with no work-arounds, are added to the next patch release.
+
+### Branches and PRs
+
+Release branches and PRs are managed as follows:
+
+* All changes are always first committed to `master`.
+* Branches are created for each major or minor release.
+* The branch name will contain the version, for example `release-1.2`.
+* Patch releases are created from a release branch.
+* For critical fixes that need to be included in a patch release, PRs should always be merged to master first and then cherry-picked to the release branch. Such PRs must include a release note, and these descriptions will be reflected in the next patch release.
+  The cherry-pick process of PRs is executed through a script. See usage [here](https://karmada.io/docs/contributor/cherry-picks).
+* For complex changes, especially critical bug fixes, separate PRs may be required for master and release branches.
+* The milestone mark (for example v1.4) will be added to PRs, which means the changes in those PRs are part of the corresponding release.
+* During PR review, the Assignee selection is used to indicate the reviewer.
+
+### Release Planning
+
+A minor release will contain a mix of features, enhancements, and bug fixes.
+
+Major features follow the Karmada Design Proposal process. You can refer to [here](https://github.com/karmada-io/karmada/tree/master/docs/proposals/resource-interpreter-webhook) as a proposal example.
+
+At the start of a release, there may be many issues assigned to the release milestone. The priorities for the release are discussed in the bi-weekly community meetings.
+As the release progresses, several issues may be moved to the next milestone. Hence, if an issue is important, advocate for its priority early in the release cycle.
+
+### Release Artifacts
+
+The Karmada container images are available on DockerHub.
+You can visit `https://hub.docker.com/r/karmada/` to see the details of images.
+For example, [here](https://hub.docker.com/r/karmada/karmada-controller-manager) for karmada-controller-manager.
+
+Since v1.2.0, the following artifacts are uploaded:
+
+* crds.tar.gz
+* karmada-chart-v\<version\>.tgz
+* karmadactl-darwin-amd64.tgz
+* karmadactl-darwin-amd64.tgz.sha256
+* karmadactl-darwin-arm64.tgz
+* karmadactl-darwin-arm64.tgz.sha256
+* karmadactl-linux-amd64.tgz
+* karmadactl-linux-amd64.tgz.sha256
+* karmadactl-linux-arm64.tgz
+* karmadactl-linux-arm64.tgz.sha256
+* kubectl-karmada-darwin-amd64.tgz
+* kubectl-karmada-darwin-amd64.tgz.sha256
+* kubectl-karmada-darwin-arm64.tgz
+* kubectl-karmada-darwin-arm64.tgz.sha256
+* kubectl-karmada-linux-amd64.tgz
+* kubectl-karmada-linux-amd64.tgz.sha256
+* kubectl-karmada-linux-arm64.tgz
+* kubectl-karmada-linux-arm64.tgz.sha256
+* Source code (zip)
+* Source code (tar.gz)
+
+You can visit `https://github.com/karmada-io/karmada/releases/download/v<version>/` to download the artifacts above.
+
+For example:
+
+```shell
+wget https://github.com/karmada-io/karmada/releases/download/v1.3.0/karmadactl-darwin-amd64.tgz
+```
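+
+Each binary archive is accompanied by a `.sha256` file (see the artifact list above), so the download can be verified after fetching; a minimal sketch, assuming the checksum file follows the standard `sha256sum` output format:
+
+```shell
+wget https://github.com/karmada-io/karmada/releases/download/v1.3.0/karmadactl-darwin-amd64.tgz.sha256
+shasum -a 256 -c karmadactl-darwin-amd64.tgz.sha256
+```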
\ No newline at end of file
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/administrator/migrate-in-batch-1.jpg b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/administrator/migrate-in-batch-1.jpg
new file mode 100644
index 000000000..8bac30658
Binary files /dev/null and b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/administrator/migrate-in-batch-1.jpg differ
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/administrator/migrate-in-batch-2.jpg b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/administrator/migrate-in-batch-2.jpg
new file mode 100644
index 000000000..cd28e7f60
Binary files /dev/null and b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/administrator/migrate-in-batch-2.jpg differ
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/administrator/prometheus/grafana.png b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/administrator/prometheus/grafana.png
new file mode 100644
index 000000000..916c695a0
Binary files /dev/null and b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/administrator/prometheus/grafana.png differ
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/casestudies/ci123/adoptions-ci123-architecture.png b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/casestudies/ci123/adoptions-ci123-architecture.png
new file mode 100644
index 000000000..2ac056c7a
Binary files /dev/null and b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/casestudies/ci123/adoptions-ci123-architecture.png differ
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/casestudies/ci123/adoptions-ci123-aries.png b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/casestudies/ci123/adoptions-ci123-aries.png
new file mode 100644
index 000000000..6aace0657
Binary files /dev/null and b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/casestudies/ci123/adoptions-ci123-aries.png differ
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/casestudies/ci123/adoptions-ci123-automation-cluster-en.png b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/casestudies/ci123/adoptions-ci123-automation-cluster-en.png
new file mode 100644
index 000000000..c45bf13b3
Binary files /dev/null and b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/casestudies/ci123/adoptions-ci123-automation-cluster-en.png differ
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/casestudies/ci123/adoptions-ci123-automation-cluster-zh.png b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/casestudies/ci123/adoptions-ci123-automation-cluster-zh.png
new file mode 100644
index 000000000..fb2916463
Binary files /dev/null and b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/casestudies/ci123/adoptions-ci123-automation-cluster-zh.png differ
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/casestudies/ci123/adoptions-ci123-capability-visualization.png b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/casestudies/ci123/adoptions-ci123-capability-visualization.png
new file mode 100644
index 000000000..06f953ef9
Binary files /dev/null and b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/casestudies/ci123/adoptions-ci123-capability-visualization.png differ
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/casestudies/ci123/adoptions-ci123-cluster-inspection.png b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/casestudies/ci123/adoptions-ci123-cluster-inspection.png
new file mode 100644
index 000000000..d52737f8d
Binary files /dev/null and b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/casestudies/ci123/adoptions-ci123-cluster-inspection.png differ
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/casestudies/ci123/adoptions-ci123-gpu-resources.png b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/casestudies/ci123/adoptions-ci123-gpu-resources.png
new file mode 100644
index 000000000..17166c7af
Binary files /dev/null and b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/casestudies/ci123/adoptions-ci123-gpu-resources.png differ
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/casestudies/ci123/adoptions-ci123-msp-multicluster-1.png b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/casestudies/ci123/adoptions-ci123-msp-multicluster-1.png
new file mode 100644
index 000000000..81c5f342b
Binary files /dev/null and b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/casestudies/ci123/adoptions-ci123-msp-multicluster-1.png differ
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/casestudies/ci123/adoptions-ci123-msp-multicluster-2.png b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/casestudies/ci123/adoptions-ci123-msp-multicluster-2.png
new file mode 100644
index 000000000..07b2fa28b
Binary files /dev/null and b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/casestudies/ci123/adoptions-ci123-msp-multicluster-2.png differ
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/casestudies/ci123/adoptions-ci123-multicluster-capability.png b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/casestudies/ci123/adoptions-ci123-multicluster-capability.png
new file mode 100644
index 000000000..14a45307c
Binary files /dev/null and b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/casestudies/ci123/adoptions-ci123-multicluster-capability.png differ
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/casestudies/ci123/adoptions-ci123-override.png b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/casestudies/ci123/adoptions-ci123-override.png
new file mode 100644
index 000000000..5320b43ee
Binary files /dev/null and b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/casestudies/ci123/adoptions-ci123-override.png differ
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/casestudies/ci123/adoptions-ci123-sequence-status.png b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/casestudies/ci123/adoptions-ci123-sequence-status.png
new file mode 100644
index 000000000..43b4c04e1
Binary files /dev/null and b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/casestudies/ci123/adoptions-ci123-sequence-status.png differ
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/casestudies/ci123/adoptions-ci123-unified-view-1.png b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/casestudies/ci123/adoptions-ci123-unified-view-1.png
new file mode 100644
index 000000000..b5973b027
Binary files /dev/null and b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/casestudies/ci123/adoptions-ci123-unified-view-1.png differ
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/casestudies/ci123/adoptions-ci123-unified-view-2.png b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/casestudies/ci123/adoptions-ci123-unified-view-2.png
new file mode 100644
index 000000000..1ce2696f9
Binary files /dev/null and b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/casestudies/ci123/adoptions-ci123-unified-view-2.png differ
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/casestudies/ci123/adoptions-ci123-velero.png b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/casestudies/ci123/adoptions-ci123-velero.png
new file mode 100644
index 000000000..2b5ffe1a0
Binary files /dev/null and b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/casestudies/ci123/adoptions-ci123-velero.png differ
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/casestudies/daocloud/auto_convert.PNG b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/casestudies/daocloud/auto_convert.PNG
new file mode 100644
index 000000000..9a191b205
Binary files /dev/null and b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/casestudies/daocloud/auto_convert.PNG differ
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/casestudies/daocloud/kairship_architecture.PNG b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/casestudies/daocloud/kairship_architecture.PNG
new file mode 100644
index 000000000..4b419ea19
Binary files /dev/null and b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/casestudies/daocloud/kairship_architecture.PNG differ
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/casestudies/daocloud/karmada_operator.PNG b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/casestudies/daocloud/karmada_operator.PNG
new file mode 100644
index 000000000..6761937d2
Binary files /dev/null and b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/casestudies/daocloud/karmada_operator.PNG differ
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/casestudies/daocloud/multi_karmada.PNG b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/casestudies/daocloud/multi_karmada.PNG
new file mode 100644
index 000000000..3ee74e703
Binary files /dev/null and b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/casestudies/daocloud/multi_karmada.PNG differ
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/casestudies/daocloud/multi_tenant.PNG b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/casestudies/daocloud/multi_tenant.PNG
new file mode 100644
index 000000000..8f0ceb5d1
Binary files /dev/null and b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/casestudies/daocloud/multi_tenant.PNG differ
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/casestudies/daocloud/prompt.PNG b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/casestudies/daocloud/prompt.PNG
new file mode 100644
index 000000000..2fd5c5ef0
Binary files /dev/null and b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/casestudies/daocloud/prompt.PNG differ
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/casestudies/daocloud/ui.PNG b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/casestudies/daocloud/ui.PNG
new file mode 100644
index 000000000..f44c3946d
Binary files /dev/null and b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/casestudies/daocloud/ui.PNG differ
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/casestudies/hurricane-engine/cluster_inspect.png b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/casestudies/hurricane-engine/cluster_inspect.png
new file mode 100644
index 000000000..d871e2dc0
Binary files /dev/null and b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/casestudies/hurricane-engine/cluster_inspect.png differ
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/casestudies/hurricane-engine/decentralized_architecture.png b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/casestudies/hurricane-engine/decentralized_architecture.png
new file mode 100644
index 000000000..7efc901b0
Binary files /dev/null and b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/casestudies/hurricane-engine/decentralized_architecture.png differ
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/casestudies/hurricane-engine/eventually_consistency.png b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/casestudies/hurricane-engine/eventually_consistency.png
new file mode 100644
index 000000000..07ae1516a
Binary files /dev/null and b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/casestudies/hurricane-engine/eventually_consistency.png differ
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/casestudies/hurricane-engine/gpu_resource_manage.png b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/casestudies/hurricane-engine/gpu_resource_manage.png
new file mode 100644
index 000000000..fb9f30e59
Binary files /dev/null and b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/casestudies/hurricane-engine/gpu_resource_manage.png differ
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/casestudies/hurricane-engine/orang_architecture.png b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/casestudies/hurricane-engine/orang_architecture.png
new file mode 100644
index 000000000..a1977e600
Binary files /dev/null and b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/casestudies/hurricane-engine/orang_architecture.png differ
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/casestudies/hurricane-engine/sequence_status.png b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/casestudies/hurricane-engine/sequence_status.png
new file mode 100644
index 000000000..2fc898369
Binary files /dev/null and b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/casestudies/hurricane-engine/sequence_status.png differ
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/casestudies/hurricane-engine/workload_control.png b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/casestudies/hurricane-engine/workload_control.png
new file mode 100644
index 000000000..a30a813b9
Binary files /dev/null and b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/casestudies/hurricane-engine/workload_control.png differ
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/casestudies/vipkid/adoptions-vipkid-architecture.png b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/casestudies/vipkid/adoptions-vipkid-architecture.png
new file mode 100644
index 000000000..5cef771b3
Binary files /dev/null and b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/casestudies/vipkid/adoptions-vipkid-architecture.png differ
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/contributor/click-next.png b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/contributor/click-next.png
new file mode 100644
index 000000000..617f1611c
Binary files /dev/null and b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/contributor/click-next.png differ
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/contributor/contributions_list.png b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/contributor/contributions_list.png
new file mode 100644
index 000000000..dafd014c8
Binary files /dev/null and b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/contributor/contributions_list.png differ
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/contributor/debug-docs.png b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/contributor/debug-docs.png
new file mode 100644
index 000000000..104c9b402
Binary files /dev/null and b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/contributor/debug-docs.png differ
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/contributor/git_workflow.png b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/contributor/git_workflow.png
new file mode 100644
index 000000000..bb1f2330e
Binary files /dev/null and b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/contributor/git_workflow.png differ
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/contributor/organization_check.png b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/contributor/organization_check.png
new file mode 100644
index 000000000..fee836483
Binary files /dev/null and b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/contributor/organization_check.png differ
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/developers/grafana_metrics.png b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/developers/grafana_metrics.png
new file mode 100644
index 000000000..e548f473f
Binary files /dev/null and b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/developers/grafana_metrics.png differ
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/developers/releasing.png b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/developers/releasing.png
new file mode 100644
index 000000000..93445e03a
Binary files /dev/null and b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/developers/releasing.png differ
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/developers/schedule-process.png b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/developers/schedule-process.png
new file mode 100644
index 000000000..cdd13b1db
Binary files /dev/null and b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/developers/schedule-process.png differ
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/general/Karmada-logo-horizontal-color.png b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/general/Karmada-logo-horizontal-color.png
new file mode 100644
index 000000000..8e57009ce
Binary files /dev/null and b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/general/Karmada-logo-horizontal-color.png differ
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/general/architecture.drawio b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/general/architecture.drawio
new file mode 100644
index 000000000..4cf4c1588
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/general/architecture.drawio
@@ -0,0 +1 @@
+7Vttc5s4EP41/ugMSIDxR9txX27auSRuL82njgIy5ooRBTmx++tPGGFeJDDEYLvTc2catBJCPM/uandlD+BsvX0fomD1mdjYGwDF3g7g7QAAVTV19ieW7BLJSOECJ3RtPigTLNxfmAsVLt24No4KAykhHnWDotAivo8tWpChMCSvxWFL4hWfGiAHC4KFhTxR+ujadJVITTDK5B+w66zSJ6vGOOlZo3Qwf5NohWzymhPB+QDOQkJocrXezrAXg5fiktz3rqL3sLAQ+7TJDesA3wPVvv86tKMlop8egs1yqGrJNC/I2/A35quluxSCkGx8G8ezqAM4fV25FC8CZMW9r4x0JlvRtce7UWhxEg3WEtfIl/2CQ4q3ORFf83tM1piGOzaE9wKN48cVSOcrfs3YgGM+ZJVjAphciLgGOIepM5DYBcdJjpnxCIwnMHyZ6/Dx4+7p/n720RsCAbL5l9mtABvjO4gvbURRREmIj4O3dD1vRjwS7meAc/VWn486gtEowqjquoCjIYFR6wvFBoqHbWaKvElCuiIO8ZE3z6TTTDUV1srGfCIk4Jj+iyndcZVEG0qKiFciG5FNaOGa9XP4KAodTI9rS/wutTyF2EPUfSm6nVNQfzKHOvyOpr6l/WLu6GUBnDuJ7l4C9YihRiexd2YCy0NR5Fqp+J3rVZjDcv+J5cSnfG7mv+AU+3Y6mU98nEj4PErXJEthVS9Fct2qcyT/QOEa2YgJJ3cf2f8LHDLXcZqnL7FjI2wuExZD8gPnegzLxM8y3vZt/nTJdtbax8Gyj9NEH6dqhujkjA6cnHR7NY+bm8NADwpGVdxGY//vxCZyTJkFnA5BEXpOH6bU4gdGsIAfgIq41wLJJjHuCz+j4+ikvc5mRKjy/aKW9+OqexzaGyX/6cBn3PqLh8Xn2Zfh9tfPfx6+f/r614+fQwgFqO+I51rxGmfMTkPieR17jKUe/5Oiv/9wD5GTJ58+aOC9JrzRCyYAx1L4c6yNpEFT56RJHb0qkjZ1fdv1nQEwPLaY6TOjzHDiqz+HRLW0DVSwkQ98x5fjEIg+br7F1oa6xO+PxfZs2ShaHWbvi7pysnecujOZn/T1xBRm5m0iGnPz51qf3pLAM9me9OVGkvCCM5cSl0XOGYVROoo99DBQ4JXhReuMjqcqeUa5CHmu48eZEaOGKQycxui7FvImvGPt2rZXVU8oZmd90WxoBZ7VEmMKEIhWpVWa8n1aT1SPBaozZhcW82ybrm2zaULUYQKka8eLPGZPRR55ebFJvSHL3rNSQA5WvHXpN67N8fVTfH2j89btNtd1u0sbzFR339IJ4kburriZ3bZvpfdVcpCvFNTp19FyECfkaKUgx5guYSyVNS4o8CfcEZe9WZYxm0WFgaOSJiTvze8CueJyaSINliYCpYkSYISJ9lp1eO0TFE2MhNsrWqo0OT15yvdVKM1BQQ9K+ZTXSamCvl3RjP8V7aKKJhbX0mggCpBf0Djj5yY+3Nm7+WG0ryJM2ABVC7ZZZxY/5Gc5yNA63mX85ygoBCNWEowMAw+xkCELR0p3y8WdByoeXtLTwpSud0KgFeNQ3RDjEWMki0d62glBAwclLwU2R6N1wU8dlYxpLBb8VE0Ckwp7ixh6Lvnh/UcWlUEDjqHdPGQ+MNqqyHduOMUUp7sTgO6BSncL5aaUR0oq0aaR1urOE8qKtfxDtq9MenCqp2Z/vTIEio5D5jeUs7pXsRYzcTAPAS7qY8uHUlfgY4EshrlOH6tV0HBNPlZSwT2/j20OVKWPlZyWntnHAnG3ynzsNVbYemXogj721Xx4+DC0vmjf7ABDuCGTqSatlpZT7HN9Y6eU1rbAP59gS18Tykk5MQHWi0frqa0dyVonYYh2uWFBPCBq+5SM9GS+tybE1XA12HhbeT3JYZNkJ27IeaWBjQtgqZIQ0zxnCNPgyyJvB7GLLbp9PVpvEPtASdVJVXtDWTwI+O1RFs67L45y+rQcyn/TlfRgNNnfo9+ehFIpRZO4k3OToAokiBt285q4Bo3iYYpy7DRl37rDocveJ46y9sJiACCvnafHPmeonddWTa6keK7B4mErTIlsXTwvnQ/Csu51FIZosBiHpDlw1boANOvGnxy4yM0DnG4epytz1aGTcsS2Ko0gr9x1AZsQQTeznosdIKW54akHSHBUynqvxAY0tXZ8TzbQwbFpyQb61OfaWONanLUykvLY3lmXJ+pJUQ2oSxfcleLV/WAi/3WczTO2aHxj/GigWJ7LVGHoEEEh25UmS785qNSzFpWz0tfkZUmN7KvcbyhAsmb2A8EE++xnlnD+Hw==
\ No newline at end of file
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/general/architecture.png b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/general/architecture.png
new file mode 100644
index 000000000..988cade43
Binary files /dev/null and b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/general/architecture.png differ
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/general/binding-controller-process.drawio b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/general/binding-controller-process.drawio
new file mode 100644
index 000000000..26caf962b
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/general/binding-controller-process.drawio
@@ -0,0 +1 @@
+7V1bd5s4EP41PvuUHJur/Zhrm91ut23azfapRwbZpgHkCNmx++tXAgkDwgHHMcK46TktTCSBNBfNfDOoPf0qWL3DYD77G7nQ72l9d9XTr3sa/TFG9B9GWSeUwcjuJ5Qp9lxO2xDuvV+QE0WzhefCKNeQIOQTb54nOigMoUNyNIAxes43myA//9Q5mEKJcO8AX6Y+eC6ZJdShZm/o76E3nYknDyw+4wCIxnwm0Qy46DlD0m96+hVGiCRXweoK+mz1xLo83K0f/A+P1rs/P0dP4NvlX18//nuWDHa7S5d0ChiG5NVDf1neBeuviDz9GK3uf179M71d+mdiamQt1gu6dPn4LcJkhqYoBP7NhnqJ0SJ0IRu1T+82bT4gNKfEASX+hISsuSyABUGUNCOBz3+bPJM9qMCiivnxdhFaYAe+MCkhZgBPIXmp3YaLVP4hCiDBa9oRQx8Qb5l/O8DlcJq2410vMAbrTIM58kISZUb+xAi0AdepM1MIFFeps9GwwLliD916uQe9SN5C3GWmsyHF8rCDbPCJL4G/4Etx4bqUcLNkbCrKzUYqGIufZx6B93MQ8+mZmpa8BPhgDP1L4DxO425XyEeY/ipEIROwief7gtTT9InJ/lB6RDB6hJnfWPEP64FCkqEnP6mkLSEmcPUKWZNFI+XhsMCRQd/glOeNlRkITs8yFka0KxOoDEN355emQpfpKuL1f6z/uSluv/Ph4pvrVe5unb37BLFH5w4xJx7aMOjtNgxa/zgMgy4ZhmvoUza+jW04Au3XisveAu03lGj/yiMZ5ad33zO/2ag+u1lnbppWfKuu4u+p93tx0Dwq+90xzusqOW9JBvUunCAc0FUqikRiBUUkM8gzk0Yoc9YuWE1ZNHc+8dGzMwOYnNMFD7wQEGYjSw3wwWylUdzUBqldzNjKYZOm0paWW4XmHVryh005O3vxYijx4tvcBW/lSxx9nGEUY8UWeBqjk1AfgV9V64+mUn/Ea2a48YDw4+cFpLdvtHm4HoYO+UHVEjS+ewyKAY5VIv79JsV/IOMinZR/ra7820rlX5N9J+rsJDsIvQr220CaM+zDoWq7PtBVyPFrA5CDy79dV/5HSuXfkOT/YykbPzBXKL/0wPemIb126IqxgOySybnnAP+C/yLwXDfhMoy8X2Acj8cWn0NZdHDzsmdev6QoPK/DO/fSbEqWUS+I43YN6p/bOQ3ihmA3jG4DqYkmaDKJIJH48hZ4+lEF+K3RL00pKDOQY/PvHdEvs1K/dNtsvVLJwXwMRnvhtKffSpx6rQcMHS/yUNi4+yt5CX3lXoIcsaswawc3T6O65mmLGjVknuRg/D0ImVHR+l8gtQ2Ox64tn07hcozp1ZRdfcLIgVEkMa5riRopTWuXYI8NZ2nlgL2L+pPW+lTpj6EUPtHkYD7VnzSpeSraIqU1W6Atv2NSaa+psykpxWSs00DIanPDtJSaOFPixsSj60iXDbP1WXkR6YCrLG31lvJEiSbHjp3Ug7qRvK40ktfkSBHE9YwZbWglVGwbBck2y3IgzUr2aQSBel0nVlfrxI7UOkk5F2njMbWq8rPlNeF2cfcyrIrST6nHyGii9FOOXjF8Kk02HyUoqm1J5mRA0ZFp5heeP7O9GKku++PRwolD2uP3+6Td0VCeSNU1ab2TnDRbHq1/zaqP3mjVK+v6JHU7HCP0IiPKsOpGSzX0slKlBAt1vaUAQ9+jZ7YzIPrXeOH5zCVkJTSxbiSN6cMz7QV14Rcpvico3kSMQqflYwjcdRpraXQy/YWocotbUmaGU+ieZx64GUse/S4zeohIfmSHPi0Zmew03hcYoCXrh/Ccvg5/wstDUKK8Cpmluoh5xN+UGmUCvJBug/24Lq/HlNJ6WrDv5y4fAQ6AC849yobbZAru2ZiK0kXSDARMmhMY2wtdlt6hcwcBjGJ5127ZddpOvA8ffBsfCzpINYIUFG87OLcFzhP65sMJeWlzK9PVvEvWfK1h0ZIOdFmB00ZZBU7N69t/1SD7GydgSk27dabUOA1Mz+Czqk5bKEVYxWtmuOGKbAWIuz/HNreVeIaUZlCP1Blq0wzpTYsjaKN2PZyaCHpQ/M7ENCoiaKlHIxG0IRfsdaSgyKgs2Bv1DSu/5NyOtTd2NuXsSacAj2qm2ZbeeoTDkLnUSe9kX6djiyWUPjxPMRQxSGL1eb8D8E/OlHWkitmorrIc2Fbr9askdxayacyhiMVFFs3tAKYoeajqM26m7PGX4C0PM0D+iGJRShCniDAsqArJ2hHciubQOZ/7lBdB/IVmfwbYM8cQsph8nlRLQYajjeN3cGbQXfgQ14Wj4vGpsibmkC4MG31CCbPdwajfkM9OkI9m1oR87IMJuuwfl4jpbZqzZwAhR/3CcTTPsD6aMxTTKmCBOaCRyuocTGM0iaOKZwylxMj3WTlACX5YGPS3jFXK2LDoXOglVYX6oETGDoYqmrYK7/CoEuZm7SNT1IT71s7hvtSjkXDflFNQHfFszS3BSCbct0aj3JK3Po4Ubl5Xo/1qnpn2kX1Vacmnkakw7q2xx5a2pz3eb2eVA/kkFDn+oFAqMFYfFFqncZKRpdWUfUPpUUbiNTPcyNWTFDnTjmycJNbq66esOmhu6F6w06HZVuuDKPKcgrPNz6cT1xlXe6fz6TKfAmk7fQvkgmiWMrc1u4O5ZXfI8NosYbWg7Qs0S4fBmXpBiJKpSkBzydGnZnGohjFrq87XHRVSmhOSMpGtOFCxPKKskMzWmGt9S7FtU8JYRHklUKu2MErHVOlF7OLQwiiHmSdQJ6UVbUAZtNRsnZQtOwFddMmEplTquK30my+7LJ7nZa5t/+xLSkKpd8zs04i17bofC1lKj/6y1Rz5fUxFa7U52ZaiNb3qxH81RWtiHTv5BZFsaA94yBK93fyPQQl7Nv/xkn7zPw==
\ No newline at end of file
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/general/binding-controller-process.png b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/general/binding-controller-process.png
new file mode 100644
index 000000000..4aef69c2b
Binary files /dev/null and b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/general/binding-controller-process.png differ
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/general/cluster-controller-process.drawio b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/general/cluster-controller-process.drawio
new file mode 100644
index 000000000..0c2f03096
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/general/cluster-controller-process.drawio
@@ -0,0 +1 @@
+7V1Zd9o6EP41PJLjHXjM1uU27WmT25v2qUexBbgxFpVFAv31V7LlTbLBCRgZSNpzYsvyNjPfzKfxSOmZl7Plewzm08/Ig0HP0Lxlz7zqGYZuGUaP/de8FW8ZDbSkZYJ9j7flDXf+X8gb024L34NRqSNBKCD+vNzoojCELim1AYzRc7nbGAXlu87BBEoNdy4I5NZ73yPTpHVoDPL2D9CfTNM7684oOTIDaWf+JtEUeOi50GRe98xLjBBJtmbLSxgw6aVyuf+4ug9uHp33/3yL/oDvF5/+/fJfP7nYu5eckr0ChiF59aWt6Pb7p+s/D083t/C9Plr86p/f9XUuhicQLLjA+MuSVSpB6FGB8l2EyRRNUAiC67z1AqNF6EF2H43u5X1uEJrTRp02/oaErLh1gAVBtGlKZgE/mtyT3UhQ2oY35v0itMAuXPOaXIME4Akk68SRqZUCAqIZJHhFz8MwAMR/Kj8c4IY5yfrlwqcbXP4v0IUmqeIjgRgQSBvp1kxSTC52JsPnqU/g3RzEgnimaK4S8RPEBC5fIWRZKPwqZgoP7iAsvvucg01Pu0wLQEv77VyM+lCFBVMR4tUPdv6Zne7+5JeLd66Wpb0V32vb8lOXu8nyLZWWr48k0/9ZqcQb8EDjU0nwIPAnId12qbwgpg3MxH3q/8/5gZnveYmOYeT/BQ/x9Zjo58gPSfwq9kXPvlqHER6c+Ml5SCiqaY0x1oKnr505dgk+3P02lju/9Ff2LoUuaDyOqL5FxWRP8HpdZdHwDV4vgJehEl7phQvw+nIc8CrwsBp4DRyjDLB+5xEma+sKBpD44aRnvpP0FhGMHjNqq5e1RynrnPWbLSeM3p+NA/TsTgEmZx50/chHYQ1x2BdbyKiBMrqQetwjZ8BGQ0c1UuqoJFV8ACHzLoZ2C6mTcH227QT0BS4eMN2asK2vGLkwiiStvYgeB8z1XQD3cRKfdokCRL3dVYhCpt6xHwRpU88wxzb7x/AVg69wxIl/2BkoJIX25KdFYOlOGViGphxYMqk7RmBZDYGlmyqRZdUjKw4ubGMnOOo+Uqxh55BinwRS7IZIqRky7QcosipgGC0ww8fYp9KkwsPbIWRfEcBSnolxJFnmIjTPe+xZnD8Llj69eAR4Bjxw5iN2Jzh7gNgNFhEdcvRp3Ke+IwiY3NPukgaoLIlAfUsOpyKO86Z0gBPAMVk3vKnSahlyzJGlENXa0rLovRxb1rJZpWWjNe/lqHBWr00IxHtfIfbpyzMz3EuWYNiUIzhKOYJMEo4kC2fUyDVPEwyFLAG/Y3eTBKatFnUlzOUQPETU2dsSDkH5NamOvuiBkwfjZ+UWco4xWBW6cYDU3kcX7mNbwnfADf3T58oNNHmCnZqrZUq+5UhSkAkQ1/mWkTOyyhK3uu5dhpK2qliwmIiJFm48gDz4LKVIpW31WUoZPyr8f9sue9SUKNXk/fdDlOTEVoYOuITuglCLjWJLPYiBojFUbt0y8zxG605LojYzEpXWbZoqZH9QzLK5IndCLbdlhOawi4xQ5hjHwghrMv0FRminb59K3Ow6I0xNvjLoRSfEBU1HdbQ05QKBo4yWRlMyqLR2zbSUhkv9EMJlU02qCZeO9bJwKfbfT7iUC7WPJVzWwLcQLgdmOVx2Pn+SWnxBWxjO0NPRR0vx81QHoqVMXI4yWppNo6WhMlrqch4rB8YB5E6k4hHluRNTdjVHad+N66yUJk90OZOV2XfH60dE01ZfP2IomcnTuiVva6A1nxvFwhBNUEvN58adEVS5cupYqgc2zuEZGlqZoHIX3F2Caqot2jmIUWvTesWW6gcsSwR0IzxvO/rdVD6gZvQrlxIey+h3Y2nSyBnqJYl3vjZJl0NBFQk6mUGw+vIBXcZPMrecCcfQrlgp7I5kTi87ozomrMa2UuoSHttSg1iA5Sgv9DflJN5t3eiXPmV8QYL9yYTBRXNB5AKPTQM0NA8GXHcsJLOjZOpH0mXOKLbAjEk9fIhyk+9Y4XRbBuCIg/WKEU3Wth8DaDAnyl3gp2wsCEPvnC2CwvASgCjyXYFRLX3yo7Bd4FN0L6dTbGdvc6Jrvn8WpG5XCD1t25I2OeLHb11QZvL8Em+SLzQQrKctAjaofuDa5zKqn6tdAlbx+bfssKondPRh1M98UBzv+bQO2jkEM5gdmyTTOt4meEgTPEaqJ3hY8vfmz/EMnctMlejhN1u7KY9GXjpjncpjCiIWiHau0M2EXl3oybxOytkHFdzD2asWDUmLd4z+v7utoOZaTC0ELZ+UAo1RWYFWxTyrTKn7UWBF2Wkkul2RRb75U8mfViBxv/7UroiljLrHSxDQ349wtXZ6ZK2O2aVyhb+pXlT9QFetekdJsVCvZcJv89DS0foeI11eLM1YOusJtvglpb8Pgp3KsOAUzj2PNlw/QT6+ON21TWxhCafqVYOGMpDbm9quZPpl60Dedo2ShoAS1bKjkbRtHgLQ5QqYbAWWHWC9+2i2Bp1Ds62mhpfn7vJ83c/CkercHdvZ94dQe9A0vBtbuo/tlHgiq+g01obSpdzsw1oV5OigqHS1MVv+zvUxHCM8q0gjtfWZsa3wZZoCaahcP0yrGFW2R0ZPY3K43XRBD6WTw215XuM9wo/fFnCxs2/sno+hS355gADV1l9VJLpf4z+N2XB206URts6xbGf9cpL6+9wDuxrdHHomQ1oTqM31j+lu/sdJkuFr/jdezOv/AQ==
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/general/cluster-controller-process.png b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/general/cluster-controller-process.png
new file mode 100644
index 000000000..3d5ff31aa
Binary files /dev/null and b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/general/cluster-controller-process.png differ
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/general/cncf-logo.png b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/general/cncf-logo.png
new file mode 100644
index 000000000..fc25a4660
Binary files /dev/null and b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/general/cncf-logo.png differ
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/general/components.png b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/general/components.png
new file mode 100644
index 000000000..6647d0ba1
Binary files /dev/null and b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/general/components.png differ
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/general/demo-3in1.svg b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/general/demo-3in1.svg
new file mode 100644
index 000000000..f069db2ce
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/general/demo-3in1.svg
@@ -0,0 +1 @@
+
\ No newline at end of file
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/general/execution-controller-process.drawio b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/general/execution-controller-process.drawio
new file mode 100644
index 000000000..3b902107e
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/general/execution-controller-process.drawio
@@ -0,0 +1 @@
+7V1fV+I6EP8s98FHPf0PfVR01b2rx13vXtf7sie0Abq2BEMR2E9/E9pASSJULE2oiw+2IU3bmfnNTCYz4cjuJLNLDEaDGxTC+MgywtmRfX5kWaZhOeQfbZnnLb5hZC19HIV526rhPvoN2aV56yQK4XitY4pQnEaj9cYADYcwSNfaAMZout6th+L1u45AHwoN9wGIxdaHKEwHWWvbaq3ar2DUH7A7m56ffZMA1jl/k/EAhGhaaLIvjuwORijNjpJZB8aUeowuD9fzh/jLk3f5+ev4GXw/+/uf23+Ps8E+veWS5StgOEx3HnoK227kP186ne9X+NIa+fe/7o7dfOwXEE9yguUvm84ZBWFICJqfIpwOUB8NQXyxaj3DaDIMIb2PQc5Wfb4gNCKNJmn8BdN0nksHmKSINA3SJM6/LfmC+cOO0QQHcMNbMTkDuA/TDf0sN+tI37AgLTn9LiFKYIrnpAOGMUijl3WRArlk9pf9VtQnBzkD3sAMkRfXKcQghaSRHCUCZ1Z0p0ScDqIU3o/AgjRTAuf30PgF4hTONhIl/9ZxcnzkGoKdTldoMxmEBgWksX6Vk7FlN1Gm7ZIy3bK0kmmRF1dgGMZUpL9BovODiB57MXmlsy4mR316dIdRAMfj9wl8L4rjDooRXlxr91z6R9rHKUZPsPCNt/jQK9AwLbRnnz1Cx2qvQ8cyVGPHNFpNBI9TEjy+XgbBeR085zCGC8PwQaDiGbpBpZFAcUsCxVOFE/PzTf++O32ez2ajrvffCA/A1bHlNYf01ZE0v/QOReT5lkBy3XUgHVscQDLe51etGHOKMZgXuo1oh/Hr92lbnFvocdOTLf3Zc63kInuClZQsabI7hl0Bw514MiZO98I9AXTS+0kQrUwpssmkuS4UZJI4ov2SWZ9OqE96MZoGA4DTkxAG0ThCw1e08d4UJ+djOMr9c7ORc06vpOZkcRFNXAxP4EXIPIspwk8xAqHonvcwSmiEBCbdBVQCBhod56c8AFxLOQAaOUFlgr0dAXrNUJe8bxg7rLLsaGvGDoEbGCbohWqkXkQISuh3IJpmeV6HppE7xs3UNG1RtOWvr0zTyB/HUUF8QnM8/1E8eaSDnbjs9HyWD56dzYtndxBH5OUhzhtVcbJyHVXJnKbV1nBOw2hawPw3pkDvJ8EiWNS4WU3LU+3U2b5ScBfw/LgGZzm4K8axVTZu5Gjlazii5/co5eIX0IXxOuWJI9InUn8eEApS9XhGhTcKQHyaf5FEYZgxGY6j36C7GI/SPtcyZHD37Mg9l3Jjo4gJMFmunud3OSouUMvgc2yc+Ibd4pRTdrprfIl1Qb3eGKYCb96m2aRGwFRrPw8EYrZWPo8phthu9YDYRhGrBGJsIVoLRG0UqgJ3FgtL0bB/+D4Cn5hgKg/8tHyB3Co0WsWaiWUcbE9NUGb95VhvqzUn+k3HZJzUeTq2jm/b2DIbUzAZYxRt8gqTkMVSZ9hLSnX2AB/SUWyVTXZptVSpYznTzD9MK8E0vaL1LTFFqTOAwRNp6qBhGKVUHfFs1SJYL1gP5RGktrgW2wTvsFVSstt6xYZMRzTdKtjBJdoZi88e2CSJxMtTJPXKXzAdUQM1mE3tsnbCdJTFgTY+eIFNAYaASzTR0lBwmaca5I84euSe1iXzZSOfpqMsLXXjgxfYtIjjeSChsjvsjrNpGQ0tMleJT7Ui/58ntMLs7HQ0iiPCrmUDGTHFZOB3oaYaHu6OplpzJORS44psajCamP3ejiZXLwvii4t1k1F4CBZkObXQxoL4om+blRoudJBxjoaiWtkxKEWGTaIhSKlcSxkgrPLsjQ3cekBblglt1Kp6mpkK7Zc12L5e9to0lVBfkxBUea7pFTf0JUUchxGCEmra6vSF7JuvP7u9cce9ubt2E/s+ubqdH+uUCc2JV5WSXrZWw/eVCrYsHvj6bMAy/lp0Mt4/KdiXwC8dH21irqapR/ioagkvG3TVq97ZlGQLNYEbfklumIapFT/Ygx9gLQavanQoe1Scb/I277KW9H8m7yWQYWmFjOWTNzrL3nW2By7atWKIicGHCly4Nhczle19UW/gwtHDUNcUM10upm1fxNbMoWKbPejBlkpzH+EsSn8UjgtDkbPVSPRkXjjZt02TyUqNSZRC2iOfBu1xSqGi/T64GLe3nl25rXs9lXGOWA7LFqH5dbdxbsMP24Bb1nbLUa8Bd8TqRJ00VOXKoHSoSTP/lm0MqgdbdE6al/JYJ4Xvtvei8Lm7bKuFdpQofDGEusjA2JRx0UzNLyuQrlfzuwdVvSnVHK+qJ+aKLt3PR3Z7pa6oW3baUr312U2lmP66u+Ju0Slc/206yPY36qw9KSFX9DpZ4kozdY/N74Nj1qd7pIZXydYMu4NZvnmMUifjrUjmc22Ot+z+6Bob+78bmZtIWsDlaRiShosXmLtGOy9rxLSa/QwET/3FZcxoDGl88hC22xWKJWstnpaX6x5UfVjFMw8pRdQGmt4cIOJDPjqqBNFULzfarkAr6I97PiVJA9yrmTYo9+iltCgbaVAWTJI+jXtQivuwOa/X7j9i+OV62EM4kWSN7GuVdl+a0hH3Ua9xVVbuIemUvlmZ5EtS2erxct7FC7FC7zsrVvkzvRBC1Bq4GZJ9hxsAnrJV3cryDjc9dYEVDwg/fZ1ASWLzrqGxCMMg/UkgCVQbjjrLkOSiL9slOgtNjkdgyIKTtygk356K9auSmtU4+/+peL1szFUWKR3YWKWzPwGcgBCcRIiMAmcwmNDVmuOAKCqM4pg6EKvM91dux0kK4U/KiceaQpRoz7yJZXrFsEdH4LchTCm2pTK0rg7q0qa+RJnatS72tGSr/JtKF2gf2k3j4gW+ilN9RrEv2wbgwKnMb+xca4mIfC6tdvqq467tkhJyjQOP/C7s21LT1OzaLs7dlpHHxuSTCz+QUeeP+G38hYgPlU/uccvg6gvhW2+1ZNqXOvL7nOzTWyCnq18hzxTS6sfc7Yv/AQ==
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/general/execution-controller-process.png b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/general/execution-controller-process.png
new file mode 100644
index 000000000..ef33cc581
Binary files /dev/null and b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/general/execution-controller-process.png differ
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/general/karmada-resource-relation.drawio b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/general/karmada-resource-relation.drawio
new file mode 100644
index 000000000..19deb7c77
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/general/karmada-resource-relation.drawio
@@ -0,0 +1 @@
+7Vxbk6O2Ev41rjp5GArduDzOeCc5OZtUpjKpSvKUYkC22cXIwXjHPr8+wkgGJBmDDR7PbuwqxrREA92fWt2t1kzQdLn9IQtWi59ZRJMJtKPtBH2YQEigw48FYScIPioJ8yyOShKoCM/x/6kgYkHdxBFdNzrmjCV5vGoSQ5amNMwbtCDL2Guz24wlzbuugjnVCM9hkOjU3+MoX5RUD7oV/b80ni/knYHjly3LQHa2S8J6EUTstUZCjxM0zRjLy1/L7ZQmheykXMrrvj/SeniwjKZ5lwuS3f+eNq/Rgzf/68fX5c+fvN8+Le4El3W+ky9MI/7+4pRl+YLNWRokjxX1IWObNKIFV5ufVX1+YmzFiYATP9E83wllBpuccdIiXyailT9wtvujuN4iHpGEPznhzrZs25WUD1txj/JsJ85mLM0Fb44S3rqNc8FNnP0pb8R/V0yKk12d4xPN4iXNaSaIukSleNgmC2mLGJFQeh5kc5q3dBSoLoRcu4NQ2A+U8cfJdrxDRpMgj780QRgILM8P/Q6XPrGYPzO0xbCDWOBTjDpoexaA/uHjuk2O5QsKJhWG7rMs2NW6rYoO6+O3RXJwi9sCpwFJ/qPkKM9qr1yR9rDtAWFY3vJLkGyEuCbQSXIBlAa4nb83TDbcrfcQuucdgLPaVo3811z83XN5kYSPQbYMooBfMGVpSFeFJMou/Llf1Ms4rby/JCvjLKfbvDks1nnGPtMpS1gByJSlxWCbxUmikIIknqf8NOQwLbD78IVmecxN1r1oWMZRtB+pr4s4p8+rYI/dV26ftdF7FPIFT7ptxahslQZOWnhx+lqZy4NRXNRMpbzMhOoaYvoDAhkAoQh/zqVQCCMJXmjyEISf53uxSEFPIHKc6dS2G/ICveR1mJKCF3lXu1WO0G3KEQJdkBDZuiAdz4JkJFliTZZPGeOTHjdOLOUNTyyJw50m36bQTDCs4b4OcS75MKRkNtPGA29BDjcwkVELrUA4DeW6iIFBxHhEEZNvQ8SytQlyE8avrQDn8gkEmyaQX6nwG6D9EKdRnM67zg3vVLfKRABvQLfSMaspt6aW3+hyxf0sepjqM3UW74wAaELAfz7QVcJ2S7pnUrwJtJ9p9iXe3708n7LlLJ4vuVarHmFGiwssy+LH744/3cHpeNwGYZ4UqlgHS7qXORc5tD963Eex759+bHVVdMZvDFDVwx/AS4FOE5y+AZyuAZzuWF4KHsBvNZqdX7hQeChNa1PHTSh1ACUSKTRpYdzKdNTdTewZrIwzliI7+Js0je6LZETlxBvD4XooLKLZljh4zaPNXLINk2C9jkNJ/j5OJPujcj8drIoXORmt1mRPDGNI0i4MapFPGtpHtsLiSBSrMSLQVRhBbQYqhaPxGipoxbpj/bWAhtwYaJSUhIctv/45E0KIKGyvjJ/bjBpGmLgJViIGx5BfaAPQJQb/Mf0L+fOdR//+vGYfn6f5U/qLMeFkzC+MlyvAigXDrqO7M9ggFIDG8megjshB0TcjxdeEPmf/Eeiro3L/MeTHXlies6URnq0K75dHuLL4j0ex61WQdnInAXcn937kkUxoZ7/0GKNpslnnNDseZhzI5TPfdOp0b+vEQ6nLEsUz8SGdB3HasrpwIdhkqwe0sU8cHXuj5VulLapBrx73DmgDIkK9CJtsgAdf0N4GjCZlZNn1D1EWWLCmAx6bGLSA4Fha8DQtyETCkFZ4Rp0wNGkgcv0Xe1ScI9fyEHEBhl5xtJ1TGiD4mvLXM01TlsrUzpAq8EJqVsGLRzAZUwVuM/x2dcMjewwtcuPz91joudoqDoBNWGJXz08ccQ1aYpuLkKkbhpt3zOpzq6HQoRUNp9G8ayj47Xw23WLcrMt2/024bIMjTbT6zZSDpwdrV3XY5CLMe3LYesscQ8tt5mgMmeIr+2gAaIK/aR+t/+quQyxEam4yPKWBq/poQE/d3LqPdsYCO7SU5KR0yt7KT+uQ676+n4b9howQ0eMHaBuk5OGxwGlyZ/UF2ktQ2hFz+IiQ2/Nd1xWWnm183NJwU2S/y/eviW4Q92dwhyahs3wAd6a/+9JdvXINxUK47r/IFc+a9n2D4zya3wz0XOcryz4PasHPWtAeSNzY4u6Re/g6SkTpWIaYEnuWW7vGRYZ5FWJrrDoboOcAvyaV+LaFCfJ8B9jF0fbegUb0sP9r0gjA2OKBFJJfOX3ftEqwJv0r77aY1GsFJt23WcjNFPo2i8POCuM2i7OrDdpAcnI3hpwfTtYqyDFyslZBgO4OouPQMBcKAKCkqcfZeUEUb1bZCqQ+lFJq3uw+0jYNPfbSR0NV8VKBpW5+OkGyw86fM4ZD35GqY7rrZqLO6HU7ovfSEhqldBJBxavvWjSDTzE6UjTTfxuSUigm7Pyx54IObus/0njQw7371Wpfw8sOFZyG2eKnYstKgTbKwxcR7drNYdI9ZaqDtHXsmmPsTtN2ZwxelGnoUIMyemmOUq9kTCyYyuBHi5WRnu4dOrFwZq0DOSL09kTDdYWnp2zPWlvRMgsXpyv6V9cMnsLISgVcksMYHTnSQQIWdGs5jGb5hr4UCAiPIAwZrfF2quj+0VvEa6o+ekwdwygK8LgOVXGdS0BDU5jcYGCH9Ln8m9Sda1sOqn/fger0xYlvUnUAuhZ2oe26PiyO3jtQXQdvzxhRHslKtAWa5waNZ+RMLg802xBx+l9ZCCGeDD+ltjsnTwx7ffvW7ZvjtzvgN33uO6zw6Bqh3gHFe7/zFU6DhaiN25zK1xDY0n2c+BSbooZL8zU9oX1OfkcfDkPnXeRsP3beBdtKJAnOzLtA4loecAmC2Nkfm7Yd2Afj/saJGILctv4jAV2P8G42ESMH5S0lYsxjxLQdpaf1uO7ol6P65PCXhd5d5z0f+BYm1bc5JDzfG2lWxFhZN5ZrBL3Ttr7CyFYYjbzXEelry1fB0rlO3wUYxF0x2HPh6l8MXorBDkX0JzE4aBAxmh3s6v/DrstPb41Bt+lrYweeiUFP3dqh/EuJkTEITPsFbnpOlevrp5cyQT8sqbtsTIssw6AHOUgtpz0sY/YGENB5gW7x5WCrjgNEdVfGkNsVQ/BfDF0HQ6YlsdvGUOeSCvtWMQR8ound887DUJHOVXkhhdfZGOKn1f80LrtX/xgaPf4D7Vxbk5u4Ev41U7XnwRS6cXmccZLdrexWps5s7Z59xCB7OMHIBzO3/PojQMIgyVzG4Hgma6ecoRECur9utT41XKHl9vnnLNjd/84imlxBO3q+Qh+uIAQYwqvinx29VBIPO5Vgk8WRaHQQ3MXfqBDaQvoQR3TfapgzluTxri0MWZrSMG/JgixjT+1ma5a0z7oLNlQT3IVBokv/iqP8XtwFdA/yX2i8uZdnBo5f7dkGsrG4k/19ELGnhgh9vELLjLG8+mv7vKRJoTypl8ftX/aSPN7+nX14/HKT7//0t7tF1dmnMYfUt5DRNH911/D2C3S+/OqG+09/OF72mXxb/L6Qt5a/SH3RiKtPbLIsv2cblgbJx4P0JmMPaUSLXm2+dWjzG2M7LgRc+F+a5y
8CC8FDzrjoPt8mYi+/i+zlP8XxFpGbf4vuyo0Pz62tF7G1ZmkuOgWYbw9Ui1Dfnj1kIe3QhURnkG1o3tGOVO0KRTUgJpT+M2Vbyi+aN8hoEuTxYxuHgYDzpm4nDr3OsuCl0WDH4jTfN3q+LQS8gfBMgsV1CL+EELfNr7RHbmd7/kd1BXKrcSsHUQmpEfASN/0YJA9CDbcZ4y7H9cJSvuOWJXH4omGwjbCn+zind7ugNN4Tj1JtNK3jJFmyhGXlsSgMKVmvuXyfZ+wrbexBDvJRdDKOHmmW0+dOy4u9GNstlQNUqdx+OgSiOk7eN4KQax8HS8Ng4+0Bfxx3/05urPkdxG7b72zPAtCvP67b7rEKU6KTg7nHhgfttJcYHpAWHq6gk+TC9i2gOv97YHLHYl+i4po3AM7u+bCT/7UR/5e9rKTgc5BtgyjgB/ybVgreyzb8wlfqcVxWXYAUK07D/T9vY7wdbFKWUiUyCVGQxJuUb4Ycw5TLb4poEvNs5Vrs2MZRVLqdKeq1XXGusAVAO2x5etSqk6Rm1IJzRS1sgIlikQ1XTaGhJFjR5CYIv25KXTWiv+Msl/YEaqsz1WAlT253qrOO5tKzsGEUQIZRwPEsSGZSKbnIgVk3TSce+hHdN9DiGVXs/BgqlnuVVMcelunMaQC3xwA3cRrF6ea9WAAqYeYCLOAND9xtpcthMquu8/goee5gro6NiBiCOTaoGRJ/JiX7/Uo+BdFrUnxNiHbKj8jJmkgvPwarrVies63M7MXlGciNTiz1+0M92zJ4gMk0PBufyTQSLIYEd78L0kEJLuAJbpmZHslxB2fKxzpaJg97noteH01+a3F1zRedEzeRpc4hwfDgOxxsYq9v+c2P57RHQ4mwBhSJYwgSc2XQQCdiPtBdwl62VEwzpwoXEaFehE3hwoMrVIaLuWyAoaVMYoEhOrueZdA8gnNpHmqav6NhRqfV+npNnTA0aT1y/dWRac80WgcOsRCxDx/YawIpOo8BdJJhydJ1vNlynU5qAy+kZhusPILJrDawIbdBW+++o0MfnlPvplm7oGWyo6PIKAPMpU6I9MTBNujO9+bSHdEU80Z42rKf5zhvdMO36l7434dOig3Zxz4Psvy6WO4rEoEk2O/jUIo/xcnrLN6/4iOmiL1cMRDj99xkMUB+y4vhPOSw147RSFk4VC/KRx3NZ1o40tmTj880fCim7lVAaMSSSZLfydPZhK47p64Dk9kzJK/QqUePRsjzDVMlNFuCqpM1Tyz7OukAPR03M1rDxLV4/uvWX6c9P8BOnbk2DIA9y20c4yJD2gSxNRd5A3T25j2ZBABkYYI83wF28Wt7b8AkOtfzrkziuBafOiP5lfOESzYJNFE8au6WRnpq00y8jAmT3GNOmKbOhPyhmRAcmAk153uGkUTKTl1dR27n7NN2BiVQer+eq86oiNJVpSqtq6kyIKhTNm8SWN5QYKF/gHUeYOmM1JsE1uC5G7goYDm+RtXIIWsslgDQ+vKUrubGkqmCZySWhtEH3dh6LXUxllXRMXxSxelgCHsDIXwiNpGjrG7CVyIT93V0BJdjiQzkt+GPUHeVG88Wu9rPw2XIe284yfVulxT3yXhynMUR1Z2GW/q3opapgCDdx9/EArXd9p3hS2cDo690aPOy+KBkfjAGT8u79QomTYlVbcEolz29QgCjARUCpvFlvgoBqJNpmq4urURgKF7JEbsdtc9CrVQyrM0a7QNmI7+gTn5dbJnA8RrZ91QmwH0+D+K0OM2Y3HU8GoHXj8azVgpAnfW7+EqB8WrHlt2a27WJJkD0BcAzlw1Anem76LKBVyAfYctDxAUYesWv7fTa4Kx1A0gn9i69bmC8ERyvXbekrD7aegXdWWsIkIkDm7aG4HyRfmA97mxFBdJKpzxtA0xP25y8MDs+15g8qegtNO7PKc6GJAdZsD2NBRq0AHEsk6vOVuqNdCboeyxNqUY4eT482jrQsUhzqVddxCIXuIiFdIbixzSeZ1uH5cfi+wZspxMhP6btAHIt7ELbdX1Y/HpvwHYDiBkjfX6kXK6LVX8tQ/6KYr7TWfVOSPTS6nIs6uXVpe9cyNLQQmUDFliZ5g/l3xdAYSEXvtLTRAQ8Aa3T9FUSOrCj+UzPmJvItSE+dp7lzuGgxkNBbQ8EteRCPN/C5PBVFqcNTyt3ucCIWldlwm8rUXYo1pVaOqJ0M/MKKBrwjOMc8HptPJ8clmQgLCWj9Q8szwLLAU+F9sFyJFQu3bDKUKW+Y2eoXZVuap7yTIbFOi0pX2nCpX/Q7Y6nINTEko2keqCJmPmpuRZgFzcH7TuaPcbl2avtJdtKlrRuUTHXtmVZ/Pdfx69udeCUgjAvl8j3wZaWcOBogPZnjycj9vXtr53vb9E77mOQ3skbp4DyEgDjG6dcgzvN9sYpPEEZozaXGTm/mCZ9O0ulj7TX3JU+2G4DBcnH9UbXMxLX8oBLEMRO+dvqFgO7nnZ/59Ifgtyu9vNMPjDU0H+xpT+4s97lgkp/sE41f6mVeTGv1ZkmoBO5aiOX5BzDy7hkTf7pr3Xhm4f3mVZOcHgrLPr4fw==
\ No newline at end of file
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/general/karmada-resource-relation.png b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/general/karmada-resource-relation.png
new file mode 100644
index 000000000..53b99c9ea
Binary files /dev/null and b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/general/karmada-resource-relation.png differ
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/general/object-association-map.drawio b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/general/object-association-map.drawio
new file mode 100644
index 000000000..2571348f9
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/general/object-association-map.drawio
@@ -0,0 +1 @@
+7V1de6O2Ev41vowefSJxucm2pxfbbtrtOW2v+mAj22wwuJh89dcfCYRBBtvYARsndi5ixIfE6J1XM5qRPCJ3i5f/JN5y/nPsy3CEof8yIp9HGCNIqPqnS15NCYdOXjJLAj8vg2XBt+BfWdxqSh8DX65MWV6UxnGYBku7cBJHkZykVpmXJPGzfdk0Dn2rYOnNZK3g28QL66V/BH46z0sF5mX5TzKYzYuakePmZxZecbF5k9Xc8+PnShH5YUTukjhO82+LlzsZaunZcvlxy9l1wxIZpW1uCP7G4S83X/43/R798uene8b//nV8g7Bp7pMXPppXNs1NXwsZzJL4cTkit0rGqRdEMlHFUB3XW2Aa9SSTVL409Y83Lh5aikCBR8YLmSav6rriLuICITjLbzbg4UaSz2VHEOECygRZf/Ir5tUuwRgISlWXMeFg6Ajzxp5BxmxdeSk89cXI7xBZCtFellMlS4N2xPqVpUMJQJuyRJQByJEgLhSQMpfzmmwRpEAwipzi4zYIF7oAuYxCjF0hGGS8N+HSBuE6YWpkaUnZ+ecxLk7crDIpf1IXILx8KU+qbzP9P/TGmrTyR6m25U/Lz9W6T/Ve5EvfKMDzPEjlt6U30WefFQOqsnm6CHWnFk++9SYPs+y2uziMtepEcSRtBAh9GIRhccUIE9+TYjpR5as0iR9k5YwzEXI8bYTMHlhuImm79lHgcrfseMc5ChyYgwIVlCOFw96wwXrChhdFceqlQRwNDyBMCp82AUTgMVE9dg6AGHKhELhVwh02fHAHzOI0oad4yjgpSn6Tq/gxmcjbIPKDaKbbXbvmLnxcpTKpXdoD/rwwmEXq+0QBRI/qmxibMv3XSEJ5p+eCqJSbQZjcaiwFyoz6ZKpYBL6v23jAQLcVfcy10cYw2DmQcdQ0cqm7BCfMITi7mfdlFvAOyKkRXgVC1AW/y8Uy9FLZC0r0k370FkGo5f2TDJ+k7tk6QdXQtIfeOsNa95DSDOYgLpADXW3etIOUywDLEAgRoi7FvY13TTZ7F4i6y5jnPomVW5ONevdxGExe+0HVkLpfCOBWvQhioYFwAei6VxkuzI0KHrCygzmsPIH3MZ79dyWTr+Pv2s/FMLdbrd73g6fjbZ0urWl9wRbMrIuzxnZZ9bLE7TLDLXjwkoXneyCIVX2Rt5CrDI61IbfFnTt1IAyih7wn2rahVY14HxHkOLWZo6KPq7m3lBnhptJWvvGalL8+pqr10pT7XvLwVbUvSDX0IYBaI2213kL6oZxqeawqLm0TAXTh5RxO7pgAUjE2uc31DgI5yxsntq7cxAWO9akrN6OgwvyKIw5WbnVY0e8Poe8DaP8kt7bfwgCbbLLtnc/KFOLKFC2YghPbr70YpmhnONKmCcouDMcPaDIypgYVhO1Z1RaWIuLKUmRDtw6zrt1CuIqWojc9qGHMyJ/Z9ZjRVPnhhK+ZvQOxDWk4yHnpbMMBwu9iPKAcA4eZWSet8wMaEN43GRzj6r2hja2V7mz6zAerz2RI+rzbE6QuBJVIJnU+kD5XIl2WLvYwAjfh3Fsuw0D6N8tQgWQhM4vxvQ6+/PIH32Nijx1PyrKm0bbFLKzOe6BEZyaZCOXp3C/el/v1R5w8nCZAmDHPfbwKcqYoTxTxvi8bF6zjflsDgsPy6uw4kKMMQAIrMZ0a5BoDQRhBgEv+5wV0r87cx3DmTLMSE6Ad5yH8owYLcd4p/nfiqTnCAQ1Zah/BsHsfit7Vc/aqpOU29vMuLWhhV/W7qaftC8JOyIleyentbueAyamdXeviBrt2w+yUkf9Jp+NrmzH0VqtgMjoqyUi+BOmfGTaYOfrLAEN///xSPXgtDiL14pWb9OFfpt7soLwtO7Luu5dJoCS3P+ndJGNtF1NhKaZeMpPpjguJ6TvpW2sT3pa81AQE88T7OMi8jwLLDmnMzSwcLyyASetkDHG8YdvmYjCPLLFWq4UgApxq6j4/pJZchrVaMkSvBXR8hpXblLN3UkSjoxANB4rogrCuiD4bolus8zknoo8HYRHU3g9CfgXheUGIIDo3CndbCidAYTEXdEXh+VDYYp1e9yjEFze8FwnsezEtBo5pAopHayfL7QnTO2vpG9PFeukKprctJNpEuuq0L/nczs6p/60T+IlUTrdZjKoBt9Qvmb02ux2xzw0++1blqWN1jwa/eXHsDQSEFqGkRIZeGjxJ64EtcFhcEk+nK9lT/55l5OTvmLPolbPOzVmkAdNdJHUfuXbytEwouiBCtG0u8hgipNBB5yPCayDkwFQ0e8o+iFZp8jiprlW/tFgnQtd4Qqt4gusASqvRTovVOWNW9oExFKo56C4FHFfX7+eXWHkyzWv4r8HOywl2KnswH4HCYCLPEurc04K3BDo75Z3Lz8g7Ce8IRSyoEsi0rUmB1Ok1XTBW5x2MOEBunbguPsliINmzRiduYtWlSbZv2sVaAuddr3gxObK7MwuE0reGNWUHZhZQFyDC3TJn9vClaR9XJS9fFS2/8aqK21XRgUBAXKaf92GU96KKbbeKxE3TNd5CSzwar5aZGKHORQ9jz6/NrhyVeG46/L2nnSMuQHWlA0EWeFwiQGXFQh07FLmgMpfocFrHjkCgiLdRDSJ8MHSu3tspvbdnpUhncdqaKx6Kr4avCfGtKIVAF7gQwTUt2OORyxDg2dqUfAe8BtPwXM5ay+GItImI+TP5zRyabefsno2TdB7P4kiNIHG8NIj4LtP01fS395jGo6OiaFv7cG8Mi7fNJUH4VIH3t+1X17jDZu/By4uLXQ6w35vDisrIBbu2f9XWCC7JhdnPbxu83FeLcACBxrElDhcb1fQdvSROkzncQfRy73ZzFa1Rg0Fq64k9qDTstWmK2kc1m4Y0m0UP2se4C3OZMiB43VUqduGhcM9EJFNjYxVYToO9fLLdq/kVSAMFEkUMkOr634EDifcIpC1bZ14xtg9jDnX09tTrD71orupg6+ErxE4LsQtjMVHPZO16P6dyruFyoZNNRHSBHSxAZdKa2BsaOsRVyKrtljJY7NTnBK7Y6Q87FGPgumTjp4WEywEq55mLeZrBYqYe1rj28ahtovJQCEIdlj+llvv45S/SkR/+Dw==
\ No newline at end of file
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/general/object-association-map.png b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/general/object-association-map.png
new file mode 100644
index 000000000..2534e8200
Binary files /dev/null and b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/general/object-association-map.png differ
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/general/policy-controller-process.drawio b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/general/policy-controller-process.drawio
new file mode 100644
index 000000000..30748e60c
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/general/policy-controller-process.drawio
@@ -0,0 +1 @@
+7VxZd5s6EP41fnQONovtxzhr971pn3oUkLFuBKJCduz++kogmUXEwQk2OLfpOQ0atIDmm08zI5GeeRasriiI5u+IB3FvaHirnnneGw4Hxtjhv4RkLSUT20olPkWelGWCL+gPVE2ldIE8GBcqMkIwQ1FR6JIwhC4ryACl5L5YbUZwcdQI+FATfHEB1qU3yGPzVDoejjL5NUT+XI08cCbpnQCoyvJN4jnwyH1OZF70zDNKCEuvgtUZxGL21LzcvFrf4Ld3ztXrT/Fv8G365uv77/20s8tdmmxegcKQPbnraThw/qynF6+Xljn7Ou1/Ort+0x/IvmO2VhMGPT5/skgomxOfhABfZNIpJYvQg6Jbg5eyOm8JibhwwIX/QcbWEgxgwQgXzVmA5d2a76OejSyoC7e9hIQVoD5kW+pJxYoXzGFDztYVJAFkdM0rUIgBQ8sigIDEob+pJ5ueUgrWuQoRQSGLcz1/FAJeQdrUBk/SovqTcUlxpQbmeHsDfpE+gyrlXiYTJWjYBRnpmEuAF3IiTj2PCy6WQmll0GSQEPq9nyMGv0Qg0do9J5ai+jG4hXgK3Ds/aXZGMKH8VkhCga4ZwliJekNzZot/XB4zSu5g7o6T/IgWJGQ5efqzM8yWkDK42goMddcpKWRgWFJyn1HMQOl5nqMXVa8KTTl9PkFdhqaTAxgyn1i6/iHan9iq+FN2lxTOV4XSOl/6CCniLw+pFDbMCuZRsYJtHQUrmBornEPMddgMMXTf9K3yrHfA9K1WLH+FWM7weeln7k5m9qKwzhX2bPROTaMfNG30z9KgfVTcfdyaNzuleUfj01fhjNCAz1IZEikJqhhmUFQmj00iUS9Y+SKOO5lhcu/OAWUnfMIDFAImKLKSf/dFlWZ5RRtsaDFHleNDMuVIm+02DK9h4I/b8nOepYqxpopvkQeaciSOPcIwyzFiB9yMge77vQDrmdR1GIadMp+JposbQu8+LSAvNrRweIhCl/3iRgkOvXKY5dDGqQC/cVDw2y8R/ArUj6Pf6hT61XPn3Sbu56TLB78Knrd6HIzVx+O2Sd08qrRR0/C36sLf6RT8TT03+75SjW+FG1SceoCRH/Jrl0+hCMWmAr7IBfhU3giQ56VahjH6A26T/sTkywwW79ye9uzzSnVsxZhmKJtNHzlKL7+vUmVAfeNkVLAgyQO7peayVJqqQmazGDJNL03ky1pZJo7Ovkbdsi/d0/3ZZftSq2Ej9mWO7K4blcJVOQuNQr9nXmqaeqr/C10UIxIe2vnVnASjbSdhoGeprkEoUDw0PkOOMReJawfzB5neUn7li6uPlLgwjp/ni3U/Vtd2A0cVaa7DKszW/eM2lqGml5NRzeXE6li0oqcdN+az2T37nxiLtn/WvrGYx5gVfnybw27aBmqeZnHGJUWldi2b7cHD1jPJ3Vff44cX7Lp67lhoqufJMAFe9ibiQeTbdzJHo63mTuuJd+dFruZm3Q1be9wthOvOMA/sCIVpuDXDyGXPXMYPhm27Kq9+2LMrL3JTyay7q2QNu4VtfVvpdoGwoO8pCj0RXx8HsK3WY2bVcQHYaYTsoaUKka/JvYAJ4f+VJ1okMtL6fPxcEyVd4LIEIyVBs7zGDIApBJ6YL7hCMV+D+SsZC7XdnlR25yD0oXeSGzPrTh/gVXGAkLBi5y4fMO2c7dTlZxiQpWhHaMSfKD/Ilk64UJ+N3JSdJurKHpfTNAMo5CxtJAcFegIZzu+FOMg/vQM0AB44QVwpl+l7eP1bDqzTtBoIBKiTISLCyV5MawgCGCeoH16K60019USy74c0WrIpjnpWylI9HMA9EPKpPCSGM7YtC1llsUU6PfjZh1HJmM2KENGyKox5Y+HNW7PugqUbjiI/yKN4ca6krMV9ndfSMsx7Y9U6itjbNvxk3Ee/rt6t3A8/vr/BEfU/eK/7VaTalrdQWrMbdB+suvsmzqhNb8HS0/CeSmeBIuV20WnQUlHtR3p2nVRU6J2Kz+IEA2AQx8jtVR7MVtfJvt/mqO4Dx3M3+4WF3cJs8/CB/UIPxPONGlvzoY2aPnROrXaFVpXsmQkx7fDScFIvIfb4Fx5mGXjp3Owttea8yODMrntgViGwI8GZrSc6OQGgoHd0KTWr9WNPth7obo/OtJl+LDzLhx/Jd5Mg5FGSchgfCDc8ylEU/os22ow2qlIHldHG/vK9+vGunb2AwsrcoEvwiEPQFlXbdam6JR/AtJ/qA5S50zQO7APoTn4FxV3yiBXz9ZsmxCXpKLyNoxwNccoKcyxZxX8RJRHwE5JM2a4vUjWUYCy+h6rgtVKf/+huZ7obVKzFlR9l7S234uhBz8vPrViTkh4qt2Iayq3wYvZXSlJiyP7Yi3nxFw==
\ No newline at end of file
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/general/policy-controller-process.png b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/general/policy-controller-process.png
new file mode 100644
index 000000000..3d3721d5b
Binary files /dev/null and b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/general/policy-controller-process.png differ
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/general/sample-nginx.svg b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/general/sample-nginx.svg
new file mode 100644
index 000000000..bdeb1a453
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/general/sample-nginx.svg
@@ -0,0 +1 @@
+[root@karmada-demokarmada]#[root@karmada-demokarmada]##AfterwefinishedKarmadainstallation,letusshareasample[root@karmada-demokarmada]##Fisrt,weneedtosetenv'KUBECONFIG'ordertooperateKarmadacontrolplaneconveniently[root@karmada-demokarmada]#e[root@karmada-demokarmada]#ex[root@karmada-demokarmada]#exp[root@karmada-demokarmada]#expo[root@karmada-demokarmada]#expor[root@karmada-demokarmada]#export[root@karmada-demokarmada]#exportK[root@karmada-demokarmada]#exportKU[root@karmada-demokarmada]#exportKUB[root@karmada-demokarmada]#exportKUBE[root@karmada-demokarmada]#exportKUBEC[root@karmada-demokarmada]#exportKUBECO[root@karmada-demokarmada]#exportKUBECON[root@karmada-demokarmada]#exportKUBECONF[root@karmada-demokarmada]#exportKUBECONFI[root@karmada-demokarmada]#exportKUBECONFIG[root@karmada-demokarmada]#exportKUBECONFIG=[root@karmada-demokarmada]#exportKUBECONFIG=/[root@karmada-demokarmada]#exportKUBECONFIG=/r[root@karmada-demokarmada]#exportKUBECONFIG=/ro[root@karmada-demokarmada]#exportKUBECONFIG=/root/[root@karmada-demokarmada]#exportKUBECONFIG=/root/.[root@karmada-demokarmada]#exportKUBECONFIG=/root/.k[root@karmada-demokarmada]#exportKUBECONFIG=/root/.ku[root@karmada-demokarmada]#exportKUBECONFIG=/root/.kube/[root@karmada-demokarmada]#exportKUBECONFIG=/root/.kube/karmada.config[root@karmada-demokarmada]##Letuscheckthememberclustersjoined[root@karmada-demokarmada]#k[root@karmada-demokarmada]#ku[root@karmada-demokarmada]#kubectl[root@karmada-demokarmada]#kubectlg[root@karmada-demokarmada]#kubectlge[root@karmada-demokarmada]#kubectlget[root@karmada-demokarmada]#kubectlgetclustersNAMEVERSIONMODEREADYAGEmember1v1.19.1PushTrue108mmember2v1.19.1PushTrue108mmember3v1.19.1PullTrue107m[root@karmada-demokarmada]##Thereare3memberclustersstartupby'hack/local-up-karmada.sh'[root@karmada-demokarmada]##Thenletuspropagateourapplicationin'samples/nginx/'[root@karmada-demokarmada]#c[root@karmada-demokarmada]#ca[root@karmada-demokarmada]#cat[root@karmada-demokarmada]#cats[root@karmada-demokarmada]#catsa[root@karmada-demokarmada]#catsamples/[root@karmada-demokarmada]#catsamples/n[root@karmada-demokarmada]#catsamples/ng[root@karmada-demokarmada]#catsamples/nginx/[root@karmada-demokarmada]#catsamples/nginx/deployment.yamlapiVersion:apps/v1kind:Deploymentmetadata:name:nginxlabels:app:nginxspec:replicas:2selector:matchLabels:app:nginxtemplate:metadata:labels:app:nginxspec:containers:-image:nginxname:nginx[root@karmada-demokarmada]#catsamples/nginx/propagationpolicy.yamlapiVersion:policy.karmada.io/v1alpha1kind:PropagationPolicyname:nginx-propagationresourceSelectors:-apiVersion:apps/v1kind:Deploymentname:nginxplacement:clusterAffinity:clusterNames:-member1-member2replicaScheduling:replicaDivisionPreference:WeightedreplicaSchedulingType:DividedweightPreference:staticWeightList:-targetCluster:clusterNames:-member1weight:1-member2[root@karmada-demokarmada]##Thereisaapplication(nginx)tobedeployedto2memberclusters,andeachhave1replica.[root@karmada-demokarmada]#kubectlc[root@karmada-demokarmada]#kubectlcr[root@karmada-demokarmada]#kubectlcre[root@karmada-demokarmada]#kubectlcrea[root@karmada-demokarmada]#kubectlcreat[root@karmada-demokarmada]#kubectlcreate[root@karmada-demokarmada]#kubectlcreate-[root@karmada-demokarmada]#kubectlcreate-f[root@karmada-demokarmada]#kubectlcreate-fs[root@karmada-demokarmada]#kubectlcreate-fsa[root@karmada-demokarmada]#kubectlcreate-fsamples/[root@karmada-demokarmada]#kubectlcreate-fsamples/n[root@karmada-demokarmada]#kubectlcreate-fsamples/ng[root@karmada-demokarmada]#kubectlcreate-fsamples/ng
inx/[root@karmada-demokarmada]#kubectlcreate-fsamples/nginx/deployment.yamldeployment.apps/nginxcreated[root@karmada-demokarmada]#kubectlcreate-fsamples/nginx/propagationpolicy.yamlpropagationpolicy.policy.karmada.io/nginx-propagationcreated[root@karmada-demokarmada]##CheckitinKarmadacontrolplane[root@karmada-demokarmada]#kubectlgetdeploymentNAMEREADYUP-TO-DATEAVAILABLEAGEnginx2/22220s[root@karmada-demokarmada]##Additionally,letuscheckwhathappenedinmember1andmember2[root@karmada-demokarmada]#exportKUBECONFIG=/root/.kube/members.config[root@karmada-demokarmada]##Switchtomember1[root@karmada-demokarmada]#kubectlco[root@karmada-demokarmada]#kubectlcon[root@karmada-demokarmada]#kubectlconf[root@karmada-demokarmada]#kubectlconfi[root@karmada-demokarmada]#kubectlconfig[root@karmada-demokarmada]#kubectlconfigu[root@karmada-demokarmada]#kubectlconfigus[root@karmada-demokarmada]#kubectlconfiguse[root@karmada-demokarmada]#kubectlconfiguse-[root@karmada-demokarmada]#kubectlconfiguse-c[root@karmada-demokarmada]#kubectlconfiguse-co[root@karmada-demokarmada]#kubectlconfiguse-con[root@karmada-demokarmada]#kubectlconfiguse-cont[root@karmada-demokarmada]#kubectlconfiguse-conte[root@karmada-demokarmada]#kubectlconfiguse-contex[root@karmada-demokarmada]#kubectlconfiguse-context[root@karmada-demokarmada]#kubectlconfiguse-contextm[root@karmada-demokarmada]#kubectlconfiguse-contextme[root@karmada-demokarmada]#kubectlconfiguse-contextmem[root@karmada-demokarmada]#kubectlconfiguse-contextmemb[root@karmada-demokarmada]#kubectlconfiguse-contextmembe[root@karmada-demokarmada]#kubectlconfiguse-contextmember[root@karmada-demokarmada]#kubectlconfiguse-contextmember1Switchedtocontext"member1".[root@karmada-demokarmada]#kubectlgetp[root@karmada-demokarmada]#kubectlgetpo[root@karmada-demokarmada]#kubectlgetpodNAMEREADYSTATUSRESTARTSAGEnginx-6799fc88d8-96ndx1/1Running040s[root@karmada-demokarmada]##Switchtomember2[root@karmada-demokarmada]#kubectlconfiguse-contextmember2Switchedtocontext"member2".nginx-6799fc88d8-k424s1/1Running056s[root@karmada-demokarmada]##Here,weshowhowtopropagateourapplicationtomultiplememberclusters[root@karmada-demokarmada]#exportKUBECONFIG=/root/.kube/k[root@karmada-demokarmada]#exportKUBECONFIG=/root/.kube/ka[root@karmada-demokarmada]#exportKUBECONFIG=/root/.kube/kar[root@karmada-demokarmada]#kubectlgetc[root@karmada-demokarmada]#kubectlgetcl[root@karmada-demokarmada]#kubectlgetclu[root@karmada-demokarmada]#catsamples/nginx/d[root@karmada-demokarmada]#catsamples/nginx/de[root@karmada-demokarmada]#catsamples/nginx/p[root@karmada-demokarmada]#kubectlcreate-fsamples/nginx/d[root@karmada-demokarmada]#kubectlcreate-fsamples/nginx/de[root@karmada-demokarmada]#kubectlcreate-fsamples/nginx/p[root@karmada-demokarmada]#kubectlgetd[root@karmada-demokarmada]#kubectlgetde[root@karmada-demokarmada]#kubectlgetdep[root@karmada-demokarmada]#kubectlgetdepl[root@karmada-demokarmada]#kubectlgetdeplo[root@karmada-demokarmada]#exportKUBECONFIG=/root/.kube/m[root@karmada-demokarmada]#exportKUBECONFIG=/root/.kube/me
\ No newline at end of file
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/installation/install-binary/generate_cert/csr_config/admin.conf b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/installation/install-binary/generate_cert/csr_config/admin.conf
new file mode 100644
index 000000000..8d32e6d67
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/installation/install-binary/generate_cert/csr_config/admin.conf
@@ -0,0 +1,21 @@
+# kubectl => kube-apiserver
+
+[ req ]
+default_bits = 2048
+prompt = no
+default_md = sha256
+distinguished_name = dn
+
+[ dn ]
+C = CN
+ST = Guangdong
+L = Guangzhou
+O = system:masters
+OU = System
+CN = admin
+
+[ v3_ext ]
+authorityKeyIdentifier=keyid,issuer:always
+basicConstraints=CA:FALSE
+keyUsage=critical,Digital Signature, Key Encipherment
+extendedKeyUsage=clientAuth
\ No newline at end of file
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/installation/install-binary/generate_cert/csr_config/etcd/apiserver-etcd-client.conf b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/installation/install-binary/generate_cert/csr_config/etcd/apiserver-etcd-client.conf
new file mode 100644
index 000000000..e5a49cbfa
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/installation/install-binary/generate_cert/csr_config/etcd/apiserver-etcd-client.conf
@@ -0,0 +1,21 @@
+# kube-apiserver => etcd
+
+[ req ]
+default_bits = 2048
+prompt = no
+default_md = sha256
+distinguished_name = dn
+
+[ dn ]
+C = CN
+ST = Guangdong
+L = Guangzhou
+O = system:masters
+OU = System
+CN = kube-apiserver-etcd-client
+
+[ v3_ext ]
+authorityKeyIdentifier=keyid,issuer:always
+basicConstraints=CA:FALSE
+keyUsage=critical,Digital Signature, Key Encipherment
+extendedKeyUsage=clientAuth
\ No newline at end of file
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/installation/install-binary/generate_cert/csr_config/etcd/healthcheck-client.conf b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/installation/install-binary/generate_cert/csr_config/etcd/healthcheck-client.conf
new file mode 100644
index 000000000..930fcb8d0
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/installation/install-binary/generate_cert/csr_config/etcd/healthcheck-client.conf
@@ -0,0 +1,21 @@
+# etcdctl => etcd
+
+[ req ]
+default_bits = 2048
+prompt = no
+default_md = sha256
+distinguished_name = dn
+
+[ dn ]
+C = CN
+ST = Guangdong
+L = Guangzhou
+O = karmada
+OU = System
+CN = kube-etcd-healthcheck-client
+
+[ v3_ext ]
+authorityKeyIdentifier=keyid,issuer:always
+basicConstraints=CA:FALSE
+keyUsage=critical,Digital Signature, Key Encipherment
+extendedKeyUsage=clientAuth
\ No newline at end of file
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/installation/install-binary/generate_cert/csr_config/etcd/peer.conf b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/installation/install-binary/generate_cert/csr_config/etcd/peer.conf
new file mode 100644
index 000000000..851fc59db
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/installation/install-binary/generate_cert/csr_config/etcd/peer.conf
@@ -0,0 +1,31 @@
+# etcd peer => etcd peer
+
+[ req ]
+default_bits = 2048
+prompt = no
+default_md = sha256
+req_extensions = req_ext
+distinguished_name = dn
+
+[ dn ]
+C = CN
+ST = Guangdong
+L = Guangzhou
+O = karmada
+OU = System
+CN = kube-etcd-peer
+
+[ req_ext ]
+subjectAltName = @alt_names
+
+[ alt_names ]
+DNS.1 = localhost
+IP.1 = 127.0.0.1
+IP.2 =
+
+[ v3_ext ]
+authorityKeyIdentifier=keyid,issuer:always
+basicConstraints=CA:FALSE
+keyUsage=critical,Digital Signature, Key Encipherment
+extendedKeyUsage=serverAuth,clientAuth
+subjectAltName=@alt_names
\ No newline at end of file
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/installation/install-binary/generate_cert/csr_config/etcd/server.conf b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/installation/install-binary/generate_cert/csr_config/etcd/server.conf
new file mode 100644
index 000000000..d90747f65
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/installation/install-binary/generate_cert/csr_config/etcd/server.conf
@@ -0,0 +1,31 @@
+# etcd server
+
+[ req ]
+default_bits = 2048
+prompt = no
+default_md = sha256
+req_extensions = req_ext
+distinguished_name = dn
+
+[ dn ]
+C = CN
+ST = Guangdong
+L = Guangzhou
+O = karmada
+OU = System
+CN = kube-etcd
+
+[ req_ext ]
+subjectAltName = @alt_names
+
+[ alt_names ]
+DNS.1 = localhost
+IP.1 = 127.0.0.1
+IP.2 =
+
+[ v3_ext ]
+authorityKeyIdentifier=keyid,issuer:always
+basicConstraints=CA:FALSE
+keyUsage=critical,Digital Signature, Key Encipherment
+extendedKeyUsage=serverAuth,clientAuth
+subjectAltName=@alt_names
\ No newline at end of file
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/installation/install-binary/generate_cert/csr_config/front-proxy-client.conf b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/installation/install-binary/generate_cert/csr_config/front-proxy-client.conf
new file mode 100644
index 000000000..a7fb7f184
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/installation/install-binary/generate_cert/csr_config/front-proxy-client.conf
@@ -0,0 +1,21 @@
+# kube-apiserver => karmada-aggregated-apiserver
+
+[ req ]
+default_bits = 2048
+prompt = no
+default_md = sha256
+distinguished_name = dn
+
+[ dn ]
+C = CN
+ST = Guangdong
+L = Guangzhou
+O = karmada
+OU = System
+CN = front-proxy-client
+
+[ v3_ext ]
+authorityKeyIdentifier=keyid,issuer:always
+basicConstraints=CA:FALSE
+keyUsage=critical,Digital Signature, Key Encipherment
+extendedKeyUsage=clientAuth
\ No newline at end of file
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/installation/install-binary/generate_cert/csr_config/karmada.conf b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/installation/install-binary/generate_cert/csr_config/karmada.conf
new file mode 100644
index 000000000..2799012db
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/installation/install-binary/generate_cert/csr_config/karmada.conf
@@ -0,0 +1,31 @@
+# most karmada components use this
+
+[ req ]
+default_bits = 2048
+prompt = no
+default_md = sha256
+req_extensions = req_ext
+distinguished_name = dn
+
+[ dn ]
+C = CN
+ST = Guangdong
+L = Guangzhou
+O = karmada
+OU = System
+CN = system:karmada
+
+[ req_ext ]
+subjectAltName = @alt_names
+
+[ alt_names ]
+DNS.1 = localhost
+IP.1 = 127.0.0.1
+IP.2 =
+
+[ v3_ext ]
+authorityKeyIdentifier=keyid,issuer:always
+basicConstraints=CA:FALSE
+keyUsage=critical,Digital Signature, Key Encipherment
+extendedKeyUsage=serverAuth,clientAuth
+subjectAltName=@alt_names
\ No newline at end of file
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/installation/install-binary/generate_cert/csr_config/kube-apiserver.conf b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/installation/install-binary/generate_cert/csr_config/kube-apiserver.conf
new file mode 100644
index 000000000..f5f625ad5
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/installation/install-binary/generate_cert/csr_config/kube-apiserver.conf
@@ -0,0 +1,35 @@
+# tls server certificate of kube-apiserver
+
+[ req ]
+default_bits = 2048
+prompt = no
+default_md = sha256
+req_extensions = req_ext
+distinguished_name = dn
+
+[ dn ]
+C = CN
+ST = Guangdong
+L = Guangzhou
+O = karmada
+OU = System
+CN = karmada
+
+[ req_ext ]
+subjectAltName = @alt_names
+
+[ alt_names ]
+DNS.1 = kubernetes
+DNS.2 = kubernetes.default
+DNS.3 = kubernetes.default.svc
+DNS.4 = kubernetes.default.svc.cluster
+DNS.5 = kubernetes.default.svc.cluster.local
+IP.1 = 127.0.0.1
+IP.2 =
+
+[ v3_ext ]
+authorityKeyIdentifier=keyid,issuer:always
+basicConstraints=CA:FALSE
+keyUsage=critical,Digital Signature, Key Encipherment
+extendedKeyUsage=serverAuth,clientAuth
+subjectAltName=@alt_names
\ No newline at end of file
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/installation/install-binary/generate_cert/csr_config/kube-controller-manager.conf b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/installation/install-binary/generate_cert/csr_config/kube-controller-manager.conf
new file mode 100644
index 000000000..80edbe385
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/installation/install-binary/generate_cert/csr_config/kube-controller-manager.conf
@@ -0,0 +1,30 @@
+# tls client & server cert of kube-controller-manager
+
+[ req ]
+default_bits = 2048
+prompt = no
+default_md = sha256
+req_extensions = req_ext
+distinguished_name = dn
+
+[ dn ]
+C = CN
+ST = Guangdong
+L = Guangzhou
+O = karmada
+OU = System
+CN = system:kube-controller-manager
+
+[ req_ext ]
+subjectAltName = @alt_names
+
+[ alt_names ]
+DNS.1 = localhost
+IP.1 = 127.0.0.1
+
+[ v3_ext ]
+authorityKeyIdentifier=keyid,issuer:always
+basicConstraints=CA:FALSE
+keyUsage=critical,Digital Signature, Key Encipherment
+extendedKeyUsage=serverAuth,clientAuth
+subjectAltName=@alt_names
\ No newline at end of file
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/installation/install-binary/generate_cert/generate_ca.sh b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/installation/install-binary/generate_cert/generate_ca.sh
new file mode 100644
index 000000000..e4ee12124
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/installation/install-binary/generate_cert/generate_ca.sh
@@ -0,0 +1,26 @@
+#!/bin/bash
+
+# generate front-proxy-ca, server-ca
+
+set -e
+set -o pipefail
+
+function gen_server_ca() {
+ openssl genrsa -out server-ca.key 2048
+ openssl req -x509 -new -nodes -key server-ca.key -subj "/C=CN/ST=Guangdong/L=Guangzhou/O=karmada/OU=System/CN=karmada" -days 3650 -out server-ca.crt
+}
+
+function gen_front_proxy_ca() {
+ openssl genrsa -out front-proxy-ca.key 2048
+ openssl req -x509 -new -nodes -key front-proxy-ca.key -subj "/C=CN/ST=Guangdong/L=Guangzhou/O=karmada/OU=System/CN=front-proxy-ca" -days 3650 -out front-proxy-ca.crt
+}
+
+function main() {
+ mkdir ca_cert
+ cd ca_cert
+
+ gen_server_ca
+ gen_front_proxy_ca
+}
+
+main "$@"
\ No newline at end of file
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/installation/install-binary/generate_cert/generate_etcd.sh b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/installation/install-binary/generate_cert/generate_etcd.sh
new file mode 100644
index 000000000..852c003c1
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/installation/install-binary/generate_cert/generate_etcd.sh
@@ -0,0 +1,48 @@
+#!/bin/bash
+
+# generate CA & leaf certificates of etcd.
+
+set -e
+set -o pipefail
+
+source "./util.sh"
+
+# Absolute path to this script, e.g. /home/user/bin/foo.sh
+script="$(readlink -f "${BASH_SOURCE[0]}")"
+readonly script
+
+# Absolute path this script is in, thus /home/user/bin
+script_dir="$(dirname "$script")"
+readonly script_dir
+
+readonly csr_config_dir="${script_dir}/csr_config/etcd"
+readonly leaf_certs=(
+ "server"
+ "peer"
+ "healthcheck-client"
+ "apiserver-etcd-client"
+)
+
+function gen_etcd_ca() {
+ openssl genrsa -out ca.key 2048
+ openssl req -x509 -new -nodes -key ca.key -subj "/C=CN/ST=Guangdong/L=Guangzhou/O=karmada/OU=System/CN=etcd-ca" -days 3650 -out ca.crt
+}
+
+function generate_leaf_certs() {
+ local cert
+ for cert in "${leaf_certs[@]}"
+ do
+ util::generate_leaf_cert_key "${cert}" "." "ca" "${csr_config_dir}"
+ done
+}
+
+function main() {
+ mkdir -p cert/etcd
+ cd cert/etcd
+
+ gen_etcd_ca
+
+ generate_leaf_certs
+}
+
+main "$@"
\ No newline at end of file
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/installation/install-binary/generate_cert/generate_leaf.sh b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/installation/install-binary/generate_cert/generate_leaf.sh
new file mode 100644
index 000000000..6add16332
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/installation/install-binary/generate_cert/generate_leaf.sh
@@ -0,0 +1,62 @@
+#!/bin/bash
+
+# generate leaf certificates of front-proxy-ca, server-ca
+
+set -e
+set -o pipefail
+
+source "./util.sh"
+
+readonly server_ca_leaf_certs=(
+ "admin"
+ "kube-apiserver"
+ "kube-controller-manager"
+ "karmada"
+)
+
+function parse_parameter() {
+ if [ $# -ne 1 ]
+ then
+ echo "Usage: $0 "
+ exit 1
+ fi
+
+ ca_dir="$(readlink -f ${1})"
+ readonly ca_dir
+
+ if [ ! -d "${ca_dir}" ]
+ then
+ echo "${ca_dir} is not a directory"
+ exit 1
+ fi
+}
+
+function generate_server_ca_leaf_certs() {
+ local cert
+ for cert in "${server_ca_leaf_certs[@]}"
+ do
+ util::generate_leaf_cert_key "${cert}" "${ca_dir}" "server-ca" "../csr_config"
+ done
+}
+
+function generate_front_proxy_client() {
+ util::generate_leaf_cert_key "front-proxy-client" "${ca_dir}" "front-proxy-ca" "../csr_config"
+}
+
+function generate_service_account() {
+ openssl genrsa -out sa.key 2048
+ openssl rsa -in sa.key -pubout -out sa.pub
+}
+
+function main() {
+ parse_parameter "$@"
+
+ mkdir -p cert
+ cd cert
+
+ generate_server_ca_leaf_certs
+ generate_front_proxy_client
+ generate_service_account
+}
+
+main "$@"
\ No newline at end of file
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/installation/install-binary/generate_cert/util.sh b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/installation/install-binary/generate_cert/util.sh
new file mode 100644
index 000000000..6f8843270
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/installation/install-binary/generate_cert/util.sh
@@ -0,0 +1,52 @@
+#!/bin/bash
+
+# Reference:
+# 1. https://karmada.io/zh/docs/installation/install-binary
+# 2. https://kubernetes.io/docs/tasks/administer-cluster/certificates/#openssl
+# 3. https://kubernetes.io/docs/setup/best-practices/certificates/
+# 4. https://kubernetes.io/docs/tasks/extend-kubernetes/configure-aggregation-layer/#ca-reusage-and-conflicts
+
+# util::get_random_serial_number generates a 64-bit signed positive integer.
+# karmadactl also uses integers in [0, 2^63 - 1] as certificate serial numbers.
+#
+# There are two ways of handling the certificate serial number:
+# 1. Use `-CAcreateserial`, letting openssl record the latest generated certificate's serial number and increment
+#    it every time.
+#    This approach requires the user to commit the srl file to git every time a new certificate is generated, so it
+#    requires a lot of bookkeeping. If a user forgets to commit the srl file, the next certificate generated will
+#    have the same serial number as the previous one, and the behavior of the program is undefined.
+# 2. Use `-set_serial` to assign a random number as the serial number.
+#    This approach is much easier to maintain, so we use it. It is also the approach recommended by newer versions
+#    of openssl (see link 2).
+#
+# Newer versions of openssl can generate the random number themselves. But as the author of this script, I don't
+# know which version of openssl the user will use, so the random number is generated in this shell script, as this
+# is the most portable way.
+#
+# Information related to serial number problem:
+# 1. https://www.rfc-editor.org/rfc/rfc5280#section-4.1.2.2
+# 2. https://www.openssl.org/docs/man3.0/man1/openssl-x509.html#:~:text=1)%20for%20details.-,%2DCAserial%20filename,-Sets%20the%20CA
+function util::get_random_serial_number() {
+ serial_number="$(shuf -i '0-9223372036854775807' -n 1)"
+}
+
+# util::generate_leaf_cert_key generates a signed leaf certificate and its private key.
+#
+# arg1: base filename of generated file
+# arg2: ca file directory
+# arg3: base filename of ca
+# arg4: csr_config_dir
+function util::generate_leaf_cert_key() {
+ local local_cert="$1"
+ local local_ca_dir="$2"
+ local local_ca_file="$3"
+ local local_csr_config_dir="$4"
+
+ util::get_random_serial_number
+
+ openssl genrsa -out "${local_cert}.key" 2048
+ openssl req -new -key "${local_cert}.key" -out "${local_cert}.csr" -config "${local_csr_config_dir}/${local_cert}.conf"
+ openssl x509 -req -in "${local_cert}.csr" -CA "${local_ca_dir}/${local_ca_file}.crt" -CAkey "${local_ca_dir}/${local_ca_file}.key" \
+ -set_serial "${serial_number}" -out "${local_cert}.crt" -days 3650 \
+ -sha256 -extensions v3_ext -extfile "${local_csr_config_dir}/${local_cert}.conf"
+}
\ No newline at end of file
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/installation/install-binary/other_scripts/check_status.sh b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/installation/install-binary/other_scripts/check_status.sh
new file mode 100644
index 000000000..6977a1330
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/installation/install-binary/other_scripts/check_status.sh
@@ -0,0 +1,81 @@
+#!/bin/bash
+
+# Curl 7.29.0, as provided by CentOS 7.6, cannot access a TLS 1.3 server without the "--tlsv1.3" option. But adding
+# this option makes the "kube-apiserver" health check fail.
+# So we need to use different options for different backends.
+# Updating the curl version may fix this problem, but that requires updating the CentOS version.
+
+readonly -A services_standard=(
+ [karmada-aggregated-apiserver]='https://127.0.0.1:7443/livez?verbose'
+ [karmada-controller-manager]='http://127.0.0.1:10357/healthz?verbose'
+ [karmada-scheduler-estimator]='http://127.0.0.1:10351/healthz?verbose'
+ [karmada-scheduler]='http://127.0.0.1:10511/healthz?verbose'
+ [karmada-search]='https://127.0.0.1:9443/livez?verbose'
+ [kube-apiserver]='https://127.0.0.1:6443/livez?verbose'
+ [kube-controller-manager]='https://127.0.0.1:10257/healthz?verbose'
+)
+
+readonly -A services_tls1_3=(
+ [karmada-webhook]='https://127.0.0.1:8443/readyz/'
+)
+
+check_pass=1
+
+# arg1: url
+# arg2: additional options, may be empty string
+# return: 0 if success, 1 if failed
+# https://kubernetes.io/docs/reference/using-api/health-checks/
+function health_check() {
+ local http_code
+ http_code="$(curl --silent $2 --output /dev/stderr --write-out "%{http_code}" \
+ --cacert "/etc/karmada/pki/server-ca.crt" \
+ --cert "/etc/karmada/pki/admin.crt" \
+ --key "/etc/karmada/pki/admin.key" \
+ "$1")"
+ test $? -eq '0' && test ${http_code} -eq '200'
+ return $?
+}
+
+function check_all() {
+ local key
+
+ for key in "${!services_standard[@]}"
+ do
+ check_one "${key}" "${services_standard[${key}]}" ""
+ done
+
+ for key in "${!services_tls1_3[@]}"
+ do
+ check_one "${key}" "${services_tls1_3[${key}]}" "--tlsv1.3"
+ done
+}
+
+# arg1: service name
+# arg2: http url
+# arg3: additional options, may be empty string
+function check_one() {
+ echo "###### Start check $1"
+ health_check "$2" "$3"
+ if [ $? -ne 0 ]
+ then
+ printf "\n###### $1 check failed\n\n\n"
+ check_pass=0
+ else
+ printf "\n###### $1 check success\n\n\n"
+ fi
+}
+
+function main() {
+ check_all
+
+ if [ ${check_pass} -ne 1 ]
+ then
+ echo '###### Some checks failed'
+ exit 1
+ else
+ echo '###### All checks succeed'
+ exit 0
+ fi
+}
+
+main "$@"
\ No newline at end of file
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/installation/install-binary/other_scripts/create_kubeconfig_file.sh b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/installation/install-binary/other_scripts/create_kubeconfig_file.sh
new file mode 100644
index 000000000..ccb2f722d
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/installation/install-binary/other_scripts/create_kubeconfig_file.sh
@@ -0,0 +1,99 @@
+#!/bin/bash
+
+set -u
+set -e
+set -o pipefail
+
+# By not embedding certificates, we don't need to regenerate the kubeconfig files when certificates are replaced.
+
+function parse_parameter() {
+ if [ $# -ne 1 ]
+ then
+ echo "Usage: $0 "
+ echo "Example: $0 \"https://127.0.0.1:6443\""
+ exit 1
+ fi
+
+ KARMADA_APISERVER="$1"
+}
+
+function check_pki_dir_exist() {
+ if [ ! -e "/etc/karmada/pki/karmada.crt" ]
+ then
+ echo 'You need to replace all certificates and private keys under "/etc/karmada/pki/", then execute this command'
+ exit 1
+ fi
+}
+
+# for kubectl
+function create_admin_kubeconfig() {
+ kubectl config set-cluster karmada \
+ --certificate-authority=/etc/karmada/pki/server-ca.crt \
+ --embed-certs=false \
+ --server "${KARMADA_APISERVER}" \
+ --kubeconfig=admin.kubeconfig
+
+ kubectl config set-credentials admin \
+ --client-certificate=/etc/karmada/pki/admin.crt \
+ --client-key=/etc/karmada/pki/admin.key \
+ --embed-certs=false \
+ --kubeconfig=admin.kubeconfig
+
+ kubectl config set-context karmada \
+ --cluster=karmada \
+ --user=admin \
+ --kubeconfig=admin.kubeconfig
+
+ kubectl config use-context karmada --kubeconfig=admin.kubeconfig
+}
+
+# for kube-controller-manager
+function create_kube_controller_manager_kubeconfig() {
+ kubectl config set-cluster karmada \
+ --certificate-authority=/etc/karmada/pki/server-ca.crt \
+ --embed-certs=false \
+ --server "${KARMADA_APISERVER}" \
+ --kubeconfig=kube-controller-manager.kubeconfig
+
+ kubectl config set-credentials system:kube-controller-manager \
+ --client-certificate=/etc/karmada/pki/kube-controller-manager.crt \
+ --client-key=/etc/karmada/pki/kube-controller-manager.key \
+ --embed-certs=false \
+ --kubeconfig=kube-controller-manager.kubeconfig
+
+ kubectl config set-context system:kube-controller-manager \
+ --cluster=karmada \
+ --user=system:kube-controller-manager \
+ --kubeconfig=kube-controller-manager.kubeconfig
+
+ kubectl config use-context system:kube-controller-manager --kubeconfig=kube-controller-manager.kubeconfig
+}
+
+# for a lot of different karmada components
+function create_karmada_kubeconfig() {
+ kubectl config set-cluster karmada \
+ --certificate-authority=/etc/karmada/pki/server-ca.crt \
+ --embed-certs=false \
+ --server "${KARMADA_APISERVER}" \
+ --kubeconfig=karmada.kubeconfig
+
+ kubectl config set-credentials system:karmada \
+ --client-certificate=/etc/karmada/pki/karmada.crt \
+ --client-key=/etc/karmada/pki/karmada.key \
+ --embed-certs=false \
+ --kubeconfig=karmada.kubeconfig
+
+    kubectl config set-context system:karmada \
+ --cluster=karmada \
+ --user=system:karmada \
+ --kubeconfig=karmada.kubeconfig
+
+ kubectl config use-context system:karmada --kubeconfig=karmada.kubeconfig
+}
+
+parse_parameter "$@"
+check_pki_dir_exist
+cd /etc/karmada/
+create_admin_kubeconfig
+create_kube_controller_manager_kubeconfig
+create_karmada_kubeconfig
\ No newline at end of file
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/key-features/cluster-failover.png b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/key-features/cluster-failover.png
new file mode 100644
index 000000000..21ee1f695
Binary files /dev/null and b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/key-features/cluster-failover.png differ
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/key-features/overall-relationship.png b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/key-features/overall-relationship.png
new file mode 100644
index 000000000..dd2383c5b
Binary files /dev/null and b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/key-features/overall-relationship.png differ
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/key-features/overall-rescheduling.png b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/key-features/overall-rescheduling.png
new file mode 100644
index 000000000..c43873060
Binary files /dev/null and b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/key-features/overall-rescheduling.png differ
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/key-features/overall-scheduling.png b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/key-features/overall-scheduling.png
new file mode 100644
index 000000000..c4182b35f
Binary files /dev/null and b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/key-features/overall-scheduling.png differ
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/key-features/service-governance.png b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/key-features/service-governance.png
new file mode 100644
index 000000000..82c4faac4
Binary files /dev/null and b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/key-features/service-governance.png differ
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/key-features/unified-access.png b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/key-features/unified-access.png
new file mode 100644
index 000000000..71d717d61
Binary files /dev/null and b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/key-features/unified-access.png differ
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/key-features/unified-operation.png b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/key-features/unified-operation.png
new file mode 100644
index 000000000..58e4be18f
Binary files /dev/null and b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/key-features/unified-operation.png differ
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/key-features/unified-resourcequota.drawio b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/key-features/unified-resourcequota.drawio
new file mode 100644
index 000000000..f1650ca79
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/key-features/unified-resourcequota.drawio
@@ -0,0 +1 @@
+7Vvfk6I4EP5rfJytkADi4+jM/qjdq9q7qavbeYwQlL1A3BhX3b/+giRKAq44AlPj+TSkk7RJf53u/gIzQJN084HjxfwPFhE6gCDaDNDDAEIIfCj/5JJtIXGcQElmPIkKGTgInpJfRA3U0lUSkaWSFSLBGBXJwhSGLMtIKAwZ5pytzWExo5EhWOAZqQieQkyr0n+SSMwLKUIAHDo+kmQ2Vz/t+oGakmI9Wg1dznHE1iURehygCWdMFE/pZkJobj7TMO+P9O5XxkkmmkyYkVUQgI8/nj+N5/jv7/42uPfulJafmK7UjtVixVabgLNVFpFciTNA4/U8EeRpgcO8dy1Rl7K5SKnqjhNKJ4wyvpuLIkyCOJTypeDsX1Lq8cOATGPZoxZAuCCboztz9vaSrkZYSgTfyiFqAoJBMUV5mev4RXtdggyonc1LaDnBSLnKUttI6z4YUj4oW55hV1hjV5+K3EJMbqpsYP/HiumOu+XuBNzLAY672Bw65dMs/5vhlCxz69/FjGmVcoWF1mJMuwh6JIjcOgQDOEW+3w6CLvBMBHXgKCG4Dx1lBD3QEYCoIwDfk4hwLEj0F1myFQ/Jn7If94FkHMcwrD2LkT/1vY6Q9GD1LPaL5KgjJD9jnuJIQgcmcjhn+UK+UpyRpmBKIwsTMUyTWSafQ2lsIsEZ51AkMhvdq440iaJ8+pgTuTw83akCsr1gSSZ2lvPGA+8h17USrNjCTnUbYdYdmodUh90ytJ5fE2a7irJO3Sm1rEwimdBVk9ApWz8eBOOdQHbMGU9+SawwzYVZdJ+XDjkQFC+XSWiiJM3Ft9+U4XeN57zxztPNh02582GrW5tEfCs9l2bJ1mFS3tBziu3ke/g9aHLLu2hyOqIJzGdEnEpdVScogayPat3x5YRikfw0l1sHvPqFr7njHk/lnmupKLapZpWrHVvR0KoJRpaiwg4VRRJ5vC0NUwfr+IL1xvWCncBy60Ljwcn3Nr3A7/2OYtqUUJbNXh7AzNSSsYxYeUiJmge6uhx3yIKgpcjmmZHNA9Wk5dY4PewqZyknOhbHlBVNOzAu5mzGMky/MLZQgHwnQmwVrcozghXJdETaR6HnUk/nEUm78cmQVAyswtc42FwGRl0F8eZI0nBkFmZwOKz4eN8kCXQUxq6UJQUuMiBEcFQtwHqtrWHd/UEbCF4pO6ogOKoewp4R7OqmIiXplHDnWslQAE0k4ah6Y+GPaqJpZ8HUe/MlwwUJvr44DzyTBKARMlU0ZRMnFR1hE20V/HB4ugTpgOgqrJ1Bc8K6J8d7ptyMHLdYVu5fFJxkuqDe6/qhuo4TmBUZ0oH/XO90JIExNSFLU0tk1wFDO38Z7xy6Ybv6SqD1FEWTNBH/O7LrAO0dGsRhtYzsle3CKyFY5tnwaqrzngkWuhGsM8tzx4RQv8x9vddQN4J1EYK++9oUWQfbjggWvF6CZZ1F160g2SvBQqhi0xvB0q/79sfNM1U0J1gnFHVMsJB7ugS5ESwrKZ0kWOi1CZZZkfka5RcQLKu2CyxN7REsO39By907IFiaPd4IVlsEC1h+Vy0jeyVY6O1dH535bUWb0a1hcDvmBf0EN39ofWflW87TNLYNoaXIsRR1nXkbkP+bb57pm+h1fVPX4Dr+eS+sCn3fUgT6rQrdru5PnrZZ2DQv5o7zBU8J7Y69HU166rN6NXmwz1Bl3/3NqT6aIu/AO3cEzI9unMt8Tw9hcbwkZ3qDbB6+wy+GH/6fAT3+Bw==
\ No newline at end of file
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/key-features/unified-resourcequota.png b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/key-features/unified-resourcequota.png
new file mode 100644
index 000000000..a0151eecb
Binary files /dev/null and b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/key-features/unified-resourcequota.png differ
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/key-features/unified-search.drawio b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/key-features/unified-search.drawio
new file mode 100644
index 000000000..cf040b652
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/key-features/unified-search.drawio
@@ -0,0 +1 @@
+7Vxfc6O2Fv80nmkfzCAJCXhM4uzdttlpetPu3r0vHWxkmwQbL+DEvp/+CiMZkGQDMdjets6MY2Qk4PzO/3PkAbpbbP4Ve6v5p8in4QCa/maARgMIhxZw2b9sZJuPIAzNfGQWB34+Vhp4Cv5H80EgRteBTxM+lg+lURSmwao6OImWSzpJK2NeHEdv1dOmUehXBlbejCoDTxMvVEe/BH46z0cdaBfjH2kwm4srA8KfeOGJk/mTJHPPj95KQ+h+gO7iKErzT4vNHQ0z6gm6jEazh/++fLR++fD7+s9pmj7chb8N88U+tJmyf4SYLtN3L/1MPz99+uPVef42/vaGP42+PkTzIcmXfvXCNacXf9Z0Kwg4i6P1quEd8Dt9pXFKNzp4vbFYtqAg4z0aLWgab9l5fBZ0HH4nnO+GwHI5ZG8FjMDi2MwrEPKpHmed2X75gjzsA6eQnlqLzx/+DL7+NPxt7H1+nj3EzgOdDiGuJxej1tKn2SpggG7f5kFKn1beJPv2jYkYG5uni5B/7XvJfH/uNAjDuyiM4t1CiO5ebDxJ4+iFlr5BBLmIzbpVUTkKsoyKSv0yaaGGtG5vlG3AiN81ZcW3ZoWrxeHl6G4rdP81ndN4AEnIrn07zj7Nsk934TpJaZz8JWGBVVjgxWHRSAMHRODB/nuLjLrLcbLaHZu/ePHC871c2zIahsNV6C2pmMrupDT7FBS9eMINPWFHGZkDZnBvwmC2ZGNplJ3u8aOQTrMLTtkd8TmAaJFsZVAOI4khkgzH3taX4CTIVuGEjt0TnkjB8/73u5ECAvMwVtlH30u9JI1iWg+FJEO+R53pRCdDZOLQ8bRXymNLorwNLYXytkaOMO6J7la9VaE+cxf5YRSn82gWLb3wvhi9LUTDZEfFOQ9Rxug7FJ5pmm45e3vrNKpi1I7iSbSOJ/TIQ3FipV48o2k902UP2Aw/05Twc7HZEzCqI/WyV143jz+x9ycav+7skKTlTtFb7YWlrLYsccyv3tQtby9JtoKEDTTOL9aIkgV6QqxxpFAWmKqxsNnRLPSShIvSifRrH1QQgCW6EgsqdEVaU9+XJKgO2Ll5vAAI6LXVUX5oFVQcIK1hll99cbCjUPoxCoNJdot3ub8UZgqnQ+JPcfanJf7uxRVK2dPdvfpAgX/rIKMqA8jVUr/GYltnwsxVMLsNln6wnGkClL8NhoBUzcMBMMrurns5CEWMW3Z+N3SyToNo2R+K14MWtNqidUmBA0BBi0f+f2eJwy0RvKi8NUgeFM52gWHSKE/ACJYek7RltKQSpHxIJAQmDBvGMWreYBH4fngo5K0GYX3hTKoRLJAgM1VnEbgaYYXyPKsvrNXEgoIYXfo3WR0lI33mfAeTKn50E6T/yehqYH70lVM5+zzalA+24oAxzbY0KTv8KtbLDoppu6PtQdxOioS571wbCXPx7SISLkGvC7+OBcz8eo9RwB6+FOwB+WJY8JlYJScEn1iwjGYtIie/CJFyWjmxlLV27LcnxQkc2aBEoo8T2/FGB9EgtmTCa0tMSFdikhHqTqK7LoR0kWQ/jnS7stKZyamG19X00imk7Z5Q+0SFYWNgmyZxTGhD4ZOJ6gQL4ZBjEcDfNTlWYlguwa7DILUwth2V5AgY5TWQ2xcAatS99yfNmyt0L3oFFtgGcWzTthzougi4dlVd26ZRIMLekYKsaxkEYgebyDIBIibWCBM2iAuRaYv3vpLrQA3Ob2aUm5Ur1PeqobVcVXbOrKCgGh1fq753DyBxTfoequHr+fV9c0I10vcIGyVpZpq7C33voL4QUMPPQuFfYzzZK7ItFb4K7XsUfm/C1STavEQ59ShYtVGkKFLXhocxDb00eC2f1CbSs4hxhBUOSHlN2MYie29bOm2VnZB0ew8Fy+RX6zRehGqJXu9EvL/NqDfhtg0Xu8BBxEYWcZHoHhGybVT0rUVUtc1OYUSGLuTK/5KyrfrpqmzXZZIOZIWKPJA+K1TVCPp8lMhU6fNR7XzJzvJKIrq80sQSkftxsLxK88SSrSSpiMSKHWkodim5SI85Lx68O/KOKWbNlH7UHWrg7teJmSQu7xS6diJTKwrwykXhENrvEAVky2v1JgqO25avXXEvJ00B1hlEQWjj77kXD8GG8tFhCaL77lQ1ZBO1wGTlLSsYkW/rSPTzDpMdlW/YCcBabYov92XG9ZhO0uz62Q1CcxIGjOzDWVQqNeZXOFBtFA2x60V4M0mjcpT34I1p+Bglwa6PAI3GUZpGC00YmLcjl1ggWqdhsGQRpthw1EEv2pGsE1GA1PTy6bIkpDe8rZMl73m9WJVawN8hiEo+Kq+U7r8Rm7UOlyiPu4RXUqJEpKkBNfvVEIcrhoZbesl2EmFYqWVLyeSOLB0ysfwwhBw3W5opCLs1plvxE6UpPVk6/N3Km2WhclMAYxXSi9RlizzSOGCkznJ5l5XEDl3Z5pKITKSUJ+Q0eWfSZil90I60eVSdogho+ymIkLauqDSlJwHVFd2bOEBv3FxnLpCVcYHsARVbsp6oF0/mahNdU1fopCIQ8DG1dUrAJTbyREdd/xsbXEfuT7EREA3Jdft67b78IXGx80YieyVbUbAQN9SvlVi/ULZX5tWI7Shn1qXAUbSVLfXgHYj3W8foruLf29XoudEUB3eq5X5ePH6wPiXDzcdR8vwHnL983m6Gamn039Tztay/C66q7Nq83BZTFhfyqnzGe5yEbHF8O8CjQWVXlTtQunn3jonKtEfF94QWgRMrO1WjBSR+jabThPbSbGepkN4/sZVGhyPpyZbFvv4OwBobMs4V3MN4P+BNXmY7tfdrHkEP5HxkB20dBzUDMkUrUbGxF6jtG0SXMHT6Mhya5O0ZDIewAeAfG7BfqLUPDBTv1AaVemejKcf0troAUqoowlbUufiamxE/maHcf52J68qeqKWMLzHTKH8BgyLyoRczKOIHIDi25GwG5fRMxSkK7bJObeN99eQM+kwTTtvQcIC0kO26ht0sJ/cOHtH+uIqugtmyWACzYkGOoFl4FcNJLrLZKfFs/AO0nOwsyG7ThJZbfMbmj2qknezia4PvbDKCxjWG86onoNm+1lo/Heo7Pp9+quaPgCHvMulERWnZ7/Sq4Zlzqd9ZHQK01m5KW7VrtdZu0LZr6hDWeeoQagISH/cKNZ0k0pQ6r9BU0gFETnV158npmUPt5BfaM1NVjXQ60RWAH4Ik3f+ES/7+xUuLNCi72/wCjXebdrmLVN53vHv1GsIqrEJMtRas31F6WKKahrBaddpB299hbdjU7l1LD1PTRuCGDUqnZpVsrHQaCXhady3tcyWltaQ9X+/2G9lh8Suo+enFj8mi+/8D
\ No newline at end of file
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/key-features/unified-search.png b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/key-features/unified-search.png
new file mode 100644
index 000000000..bd975eea6
Binary files /dev/null and b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/key-features/unified-search.png differ
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/tutorials/custom_metrics.png b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/tutorials/custom_metrics.png
new file mode 100644
index 000000000..661f76447
Binary files /dev/null and b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/tutorials/custom_metrics.png differ
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/tutorials/federatedhpa-demo.png b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/tutorials/federatedhpa-demo.png
new file mode 100644
index 000000000..5992df852
Binary files /dev/null and b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/tutorials/federatedhpa-demo.png differ
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/tutorials/opensearch.png b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/tutorials/opensearch.png
new file mode 100644
index 000000000..6386d76fb
Binary files /dev/null and b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/tutorials/opensearch.png differ
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/userguide/autoscaling/autoscaling-conflicts.png b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/userguide/autoscaling/autoscaling-conflicts.png
new file mode 100644
index 000000000..811629e68
Binary files /dev/null and b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/userguide/autoscaling/autoscaling-conflicts.png differ
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/userguide/autoscaling/cronfederatedhpa-architecture.png b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/userguide/autoscaling/cronfederatedhpa-architecture.png
new file mode 100644
index 000000000..1561eab64
Binary files /dev/null and b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/userguide/autoscaling/cronfederatedhpa-architecture.png differ
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/userguide/autoscaling/federatedhpa-architecture.png b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/userguide/autoscaling/federatedhpa-architecture.png
new file mode 100644
index 000000000..43040f17b
Binary files /dev/null and b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/userguide/autoscaling/federatedhpa-architecture.png differ
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/userguide/autoscaling/federatedhpa-overview.png b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/userguide/autoscaling/federatedhpa-overview.png
new file mode 100644
index 000000000..c39da6978
Binary files /dev/null and b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/userguide/autoscaling/federatedhpa-overview.png differ
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/userguide/cicd/argocd/argocd-2.6.0-status-overview.png b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/userguide/cicd/argocd/argocd-2.6.0-status-overview.png
new file mode 100644
index 000000000..4467ce8a2
Binary files /dev/null and b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/userguide/cicd/argocd/argocd-2.6.0-status-overview.png differ
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/userguide/cicd/argocd/argocd-new-app-cluster.png b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/userguide/cicd/argocd/argocd-new-app-cluster.png
new file mode 100644
index 000000000..9ac48ea54
Binary files /dev/null and b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/userguide/cicd/argocd/argocd-new-app-cluster.png differ
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/userguide/cicd/argocd/argocd-new-app-name.png b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/userguide/cicd/argocd/argocd-new-app-name.png
new file mode 100644
index 000000000..da7b14bb1
Binary files /dev/null and b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/userguide/cicd/argocd/argocd-new-app-name.png differ
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/userguide/cicd/argocd/argocd-new-app-repo.png b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/userguide/cicd/argocd/argocd-new-app-repo.png
new file mode 100644
index 000000000..ec50fb509
Binary files /dev/null and b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/userguide/cicd/argocd/argocd-new-app-repo.png differ
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/userguide/cicd/argocd/argocd-new-app.png b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/userguide/cicd/argocd/argocd-new-app.png
new file mode 100644
index 000000000..80d8d577b
Binary files /dev/null and b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/userguide/cicd/argocd/argocd-new-app.png differ
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/userguide/cicd/argocd/argocd-register-karmada.png b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/userguide/cicd/argocd/argocd-register-karmada.png
new file mode 100644
index 000000000..396faaaba
Binary files /dev/null and b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/userguide/cicd/argocd/argocd-register-karmada.png differ
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/userguide/cicd/argocd/argocd-status-aggregated.png b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/userguide/cicd/argocd/argocd-status-aggregated.png
new file mode 100644
index 000000000..2f1034e46
Binary files /dev/null and b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/userguide/cicd/argocd/argocd-status-aggregated.png differ
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/userguide/cicd/argocd/argocd-status-overview.png b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/userguide/cicd/argocd/argocd-status-overview.png
new file mode 100644
index 000000000..5f19bdf0a
Binary files /dev/null and b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/userguide/cicd/argocd/argocd-status-overview.png differ
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/userguide/cicd/argocd/argocd-status-resourcebinding.png b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/userguide/cicd/argocd/argocd-status-resourcebinding.png
new file mode 100644
index 000000000..1a9ba39a7
Binary files /dev/null and b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/userguide/cicd/argocd/argocd-status-resourcebinding.png differ
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/userguide/cicd/argocd/argocd-sync-apps.png b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/userguide/cicd/argocd/argocd-sync-apps.png
new file mode 100644
index 000000000..5353be595
Binary files /dev/null and b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/userguide/cicd/argocd/argocd-sync-apps.png differ
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/userguide/failover/failover-overview.png b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/userguide/failover/failover-overview.png
new file mode 100644
index 000000000..7324021a1
Binary files /dev/null and b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/userguide/failover/failover-overview.png differ
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/userguide/service/eriecanal/flomesh.sh b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/userguide/service/eriecanal/flomesh.sh
new file mode 100755
index 000000000..69c7c52d1
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/userguide/service/eriecanal/flomesh.sh
@@ -0,0 +1,627 @@
+#!/bin/bash
+
+HOST_IP=$(if [ "$(uname)" == "Darwin" ]; then ipconfig getifaddr en0; else ip -o route get to 8.8.8.8 | sed -n 's/.*src \([0-9.]\+\).*/\1/p'; fi)
+kubeconfig_cp=${KUBECONFIG_CP:-"/tmp/cp.kubeconfig"}
+kubeconfig_c1=${KUBECONFIG_C1:-"/tmp/c1.kubeconfig"}
+kubeconfig_c2=${KUBECONFIG_C2:-"/tmp/c2.kubeconfig"}
+kubeconfig_c3=${KUBECONFIG_C3:-"/tmp/c3.kubeconfig"}
+karmada_config=${KARMADA_CONFIG:-"/etc/karmada/karmada-apiserver.config"}
+
+system=$(uname -s | tr [:upper:] [:lower:])
+arch=$(if [ "$(uname)" == "Darwin" ]; then uname -m; else dpkg --print-architecture; fi)
+osm_binary="$(pwd)/${system}-${arch}/osm"
+
+k0="kubectl --kubeconfig ${kubeconfig_cp}"
+k1="kubectl --kubeconfig ${kubeconfig_c1}"
+k2="kubectl --kubeconfig ${kubeconfig_c2}"
+k3="kubectl --kubeconfig ${kubeconfig_c3}"
+kmd="kubectl --kubeconfig ${karmada_config}"
+
+readonly reset=$(tput sgr0)
+readonly green=$(
+ tput bold
+ tput setaf 2
+)
+readonly yellow=$(
+ tput bold
+ tput setaf 3
+)
+readonly blue=$(
+ tput bold
+ tput setaf 6
+)
+readonly timeout=$(if [ "$(uname)" == "Darwin" ]; then echo "1"; else echo "0.1"; fi)
+
+DEMO_AUTO_RUN=true
+
+function desc() {
+ maybe_first_prompt
+ echo "$blue# $@$reset"
+ prompt
+}
+
+function prompt() {
+ echo -n "$yellow\$ $reset"
+}
+
+started=""
+function maybe_first_prompt() {
+ if [ -z "$started" ]; then
+ prompt
+ started=true
+ fi
+}
+
+# After a `run` this variable will hold the stdout of the command that was run.
+# If the command was interactive, this will likely be garbage.
+DEMO_RUN_STDOUT=""
+
+function run() {
+ maybe_first_prompt
+ rate=250
+ if [ -n "$DEMO_RUN_FAST" ]; then
+ rate=1000
+ fi
+ echo "$green$1$reset" | pv -qL $rate
+ if [ -n "$DEMO_RUN_FAST" ]; then
+ sleep 0.5
+ fi
+ OFILE="$(mktemp -t $(basename $0).XXXXXX)"
+ script -eq -c "$1" -f "$OFILE"
+ r=$?
+ #read -d '' -t "${timeout}" -n 10000 # clear stdin
+ prompt
+ if [ -z "$DEMO_AUTO_RUN" ]; then
+ read -s
+ fi
+ DEMO_RUN_STDOUT="$(tail -n +2 $OFILE | sed 's/\r//g')"
+ return $r
+}
+
+function relative() {
+ for arg; do
+ echo "$(realpath $(dirname $(which $0)))/$arg" | sed "s|$(realpath $(pwd))|.|"
+ done
+}
+
+function check_command() {
+ local installer="$2"
+ if ! command -v $1 &>/dev/null; then
+ echo "missing $1"
+ if [ -z "${installer// /}" ]; then
+ exit 1
+ fi
+ echo "Installing $1"
+ eval $installer
+ else
+ echo "found $1"
+ fi
+}
+
+function create_clusters() {
+ API_PORT=6444
+ PORT=80
+ EXTRA_PORT=32443
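+  # API/ingress/NodePort host ports are incremented per cluster below so the four k3d clusters do not conflict on the host.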
+ for CLUSTER_NAME in control-plane cluster-1 cluster-2 cluster-3; do
+ desc "creating cluster ${CLUSTER_NAME}"
+ k3d cluster create ${CLUSTER_NAME} \
+ --image docker.io/rancher/k3s:v1.23.8-k3s2 \
+ --api-port "${HOST_IP}:${API_PORT}" \
+ --port "${PORT}:80@server:0" \
+ --port "${EXTRA_PORT}:32443@server:0" \
+ --servers-memory 2g \
+ --k3s-arg "--disable=traefik@server:0" \
+ --network multi-clusters \
+ --timeout 120s \
+ --wait
+ ((API_PORT = API_PORT + 1))
+ ((PORT = PORT + 1))
+ ((EXTRA_PORT = EXTRA_PORT + 1))
+ done
+}
+
+function install_eriecanal() {
+ desc "Adding ErieCanal helm repo"
+ helm repo add ec https://ec.flomesh.io --force-update
+ helm repo update
+
+ EC_NAMESPACE=erie-canal
+ EC_VERSION=0.1.3
+
+ for CLUSTER in ${!kubeconfig*}; do
+ CLUSTER_NAME=$(if [ "${CLUSTER}" == "kubeconfig_c1" ]; then echo "cluster-1"; elif [ "${CLUSTER}" == "kubeconfig_c2" ]; then
+ echo "cluster-2"
+ elif [ "${CLUSTER}" == "kubeconfig_c3" ]; then echo "cluster-3"; else echo "control-plane"; fi)
+ desc "installing ErieCanal on cluster ${CLUSTER_NAME}"
+ helm upgrade -i --kubeconfig ${!CLUSTER} --namespace ${EC_NAMESPACE} --create-namespace --version=${EC_VERSION} --set ec.logLevel=5 ec ec/erie-canal
+ sleep 1
+ kubectl --kubeconfig ${!CLUSTER} wait --for=condition=ready pod --all -n $EC_NAMESPACE --timeout=120s
+ done
+}
+
+function join_ec_clusters() {
+ PORT=81
+ for CLUSTER_NAME in cluster-1 cluster-2 cluster-3; do
+ desc "Joining ${CLUSTER_NAME}"
+ kubectl --kubeconfig ${kubeconfig_cp} apply -f - < new Message(os.env['HOSTNAME'] +'\n'))
+---
+apiVersion: v1
+kind: Service
+metadata:
+ name: httpbin
+spec:
+ ports:
+ - port: 8080
+ targetPort: 8080
+ protocol: TCP
+ selector:
+ app: pipy
+EOF
+
+ desc "Apply PropagationPolicy to propagate httpbin cluster-1 and cluster-3"
+ $kmd apply -n ${NAMESPACE} -f - <&2
+ echo " -h Show this help message"
+ echo " -i Creates 4 k3d clusters for demo use. Default true"
+ echo " -d Runs demo. Make sure you have created clusters before running this"
+ echo " -r Reset clusters and removes demo samples"
+ echo " -u Remove clusters by destroying them"
+ echo ""
+ exit 1
+}
+trap "echo" EXIT
+
+INSTALL=false
+UNINSTALL=false
+RESET=false
+DEMO=false
+
+if [ $# -eq 0 ]; then
+ INSTALL=true
+ DEMO=true
+fi
+
+SHORT_OPTS=":ihdru"
+OPTS=$(getopt $SHORT_OPTS "$@")
+if [ $? != 0 ]; then
+ echo "Failed to parse options...exiting." >&2
+ exit 1
+fi
+
+eval set -- "$OPTS"
+while true; do
+ case "$1" in
+ -i)
+ INSTALL=true
+ shift
+ ;;
+ -d)
+ DEMO=true
+ shift
+ ;;
+ -r)
+ RESET=true
+ shift
+ ;;
+ -u)
+ UNINSTALL=true
+ shift
+ ;;
+ -h)
+ usage
+ ;;
+ --)
+ shift
+ break
+ ;;
+ *)
+ usage
+ ;;
+ esac
+done
+
+shift $((OPTIND - 1))
+[ $# -ne 0 ] && usage
+
+if [ "$INSTALL" = true ]; then
+ echo "Checking for pre-requiste commands"
+ # check for docker
+ check_command "docker"
+
+ # check for kubectl
+ # check_command "kubectl"
+
+ # check for k3d
+ check_command "k3d" "curl -s https://raw.githubusercontent.com/k3d-io/k3d/main/install.sh | bash"
+
+ # check for helm
+ check_command "helm" "curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash"
+
+ # check for pv
+ check_command "pv" "sudo apt-get install pv -y"
+
+ # check for jq
+ check_command "jq" "sudo apt-get install jq -y"
+
+ echo "creating k3d clusters"
+ create_clusters
+
+ k3d kubeconfig get control-plane >"${kubeconfig_cp}"
+ k3d kubeconfig get cluster-1 >"${kubeconfig_c1}"
+ k3d kubeconfig get cluster-2 >"${kubeconfig_c2}"
+ k3d kubeconfig get cluster-3 >"${kubeconfig_c3}"
+
+ desc "installing ErieCanal on clusters"
+ install_eriecanal
+
+ desc "Joining clusters into a ErieCanal ClusterSet"
+ join_ec_clusters
+
+ desc "downloading osm-edge cli"
+ install_osm_edge_binary
+
+ desc "installing osm_edge on clusters"
+ install_edge
+
+ desc "installing karmada cli"
+ install_karmada_cli
+
+ desc "installing karmada on control-plane cluster"
+ install_karmada
+
+ desc "joining clusters into a karmada cluster"
+ join_kmd_cluster
+
+ echo "Clusters are ready. Proceed with running demo"
+fi
+
+if [ "$RESET" = true ]; then
+ ${kmd} delete ns --ignore-not-found=true httpbin curl
+fi
+
+if [ "$DEMO" = true ]; then
+ run_demo
+fi
+
+if [ "$UNINSTALL" = true ]; then
+ echo "cleaning up"
+ for cluster in control-plane cluster-1 cluster-2 cluster-3; do
+ echo "deleting cluster ${cluster}"
+ k3d cluster delete ${cluster}
+ done
+
+ for config in ${!kubeconfig*}; do
+ rm -f ${!config}
+ done
+
+ rm -rf "./${system}-${arch}"
+fi
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/userguide/service/eriecanal/karmada-working-with-eriecanal-overview.png b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/userguide/service/eriecanal/karmada-working-with-eriecanal-overview.png
new file mode 100644
index 000000000..eca2322a2
Binary files /dev/null and b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/userguide/service/eriecanal/karmada-working-with-eriecanal-overview.png differ
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/userguide/service/istio/istio-on-karmada-different-network.png b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/userguide/service/istio/istio-on-karmada-different-network.png
new file mode 100644
index 000000000..59a64b400
Binary files /dev/null and b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/userguide/service/istio/istio-on-karmada-different-network.png differ
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/userguide/service/istio/istio-on-karmada.png b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/userguide/service/istio/istio-on-karmada.png
new file mode 100644
index 000000000..f4912ec45
Binary files /dev/null and b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/userguide/service/istio/istio-on-karmada.png differ
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/userguide/service/multiclusterservice/mcs-overview.drawio b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/userguide/service/multiclusterservice/mcs-overview.drawio
new file mode 100644
index 000000000..bfc6aabf4
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/userguide/service/multiclusterservice/mcs-overview.drawio
@@ -0,0 +1,108 @@
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/userguide/service/multiclusterservice/mcs-overview.png b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/userguide/service/multiclusterservice/mcs-overview.png
new file mode 100644
index 000000000..43c8486ce
Binary files /dev/null and b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/userguide/service/multiclusterservice/mcs-overview.png differ
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/userguide/service/multiclusterservice/mcs-way-of-work.drawio b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/userguide/service/multiclusterservice/mcs-way-of-work.drawio
new file mode 100644
index 000000000..858b5f3e3
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/userguide/service/multiclusterservice/mcs-way-of-work.drawio
@@ -0,0 +1,205 @@
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/userguide/service/multiclusterservice/mcs-way-of-work.png b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/userguide/service/multiclusterservice/mcs-way-of-work.png
new file mode 100644
index 000000000..3974091f7
Binary files /dev/null and b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/resources/userguide/service/multiclusterservice/mcs-way-of-work.png differ
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/roadmap.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/roadmap.md
new file mode 100644
index 000000000..a50c8db33
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/roadmap.md
@@ -0,0 +1,45 @@
+---
+title: Karmada Roadmap
+---
+
+# Karmada Roadmap
+
+This document defines a high-level roadmap for Karmada development and upcoming releases.
+Community and contributor involvement is vital for successfully implementing all desired items for each release.
+We hope that the items listed below will inspire further engagement from the community to keep Karmada progressing and shipping exciting and valuable features.
+
+
+## 2022 H1
+- Multi-cluster HA scheduling policy
+ * spread by region
+ * spread by zone
+ * spread by provider
+- Multi-cluster Ingress
+- Multi-cluster HPA (Horizontal Pod Autoscaling)
+- Federated resource quota
+- API reference
+- [Karmada website](https://karmada.io/) refactor
+- Policy-based governance, risk, and compliance
+- Multi-cluster DNS (cluster identity)
+- Global search across clusters
+- Scheduling re-balancing
+
+## 2022 H2
+- Karmada Dashboard - alpha release
+- Karmada scalability baseline (performance report)
+- Cluster addons
+- Helm chart propagation
+- Multi-cluster events
+- Multi-cluster Operator specifications
+- Multi-cluster Application
+- Multi-cluster monitoring
+- Multi-cluster logging
+- Multi-cluster storage
+- Multi-cluster RBAC
+- Multi-cluster networking
+- Data migration across clusters
+- Multi-cluster workflow
+- Integration with ecosystem
+- Cluster lifecycle management
+- Image registry across clouds
+- Multi-cluster Service Mesh solutions
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/troubleshooting/trouble-shooting.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/troubleshooting/trouble-shooting.md
new file mode 100644
index 000000000..ee5b609ef
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/troubleshooting/trouble-shooting.md
@@ -0,0 +1,85 @@
+---
+title: 故障定位
+---
+
+## 我在安装 Karmada 时无法访问一些资源
+
+- 从K8s镜像仓库(registry.k8s.io)拉取镜像。
+
+ 你可以运行以下命令来改变中国大陆的镜像仓库。
+
+ ```shell
+ sed -i'' -e "s#registry.k8s.io#registry.aliyuncs.com/google_containers#g" artifacts/deploy/karmada-etcd.yaml
+ sed -i'' -e "s#registry.k8s.io#registry.aliyuncs.com/google_containers#g" artifacts/deploy/karmada-apiserver.yaml
+ sed -i'' -e "s#registry.k8s.io#registry.aliyuncs.com/google_containers#g" artifacts/deploy/kube-controller-manager.yaml
+ ```
+
+- 在中国大陆下载 Golang 软件包,并在安装前运行以下命令。
+
+ ```shell
+ export GOPROXY=https://goproxy.cn
+ ```
+
+## 成员集群健康检查不工作
+如果你的环境与下面描述的情况类似:
+>
+> 用推送模式将成员集群注册到 karmada 后,使用 `kubectl get cluster`,发现集群状态已经就绪。
+> 然后,通过打开成员集群和 karmada 之间的防火墙,等待了很长时间后,集群状态也是就绪,没有变为失败。
+
+问题的原因是,防火墙没有关闭成员集群和 karmada 之间已经存在的 TCP 连接。
+
+- 登录到成员集群 apiserver 所在的节点上
+- 使用`tcpkill`命令来关闭 TCP 连接。
+
+```shell
+# ens192 是成员集群用来与 karmada 通信的网卡的名字
+tcpkill -9 -i ens192 src host ${KARMADA_APISERVER_IP} and dst host ${MEMBER_CLUSTER_APISERVER_IP}
+```
+
+## x509:使用`karmadactl init`时报`certificate signed by unknown authority`
+
+使用`karmadactl init`命令安装Karmada时,在init安装日志中发现以下报错信息:
+```log
+deploy.go:55] Post "https://192.168.24.211:32443/api/v1/namespaces": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "karmada")
+```
+
+问题原因:之前在集群上安装过Karmada,karmada-etcd使用`hostpath`方式挂载本地存储,卸载时数据有残留,需要清理默认路径`/var/lib/karmada-etcd`下的文件。如果使用了karmadactl [--etcd-data](https://github.com/karmada-io/karmada/blob/master/pkg/karmadactl/cmdinit/cmdinit.go#L119)参数,请删除相应的目录。
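+
+例如,可以在安装过 Karmada 的节点上清理残留数据(以下命令假设使用的是默认数据路径,请按实际路径调整):
+
+```shell
+rm -rf /var/lib/karmada-etcd
+```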
+
+相关Issue:[#1467](https://github.com/karmada-io/karmada/issues/1467),[#2504](https://github.com/karmada-io/karmada/issues/2504)。
+
+## karmada-webhook因为"too many open files"错误一直崩溃
+
+当使用`hack/local-up-karmada`安装Karmada时, karmada-webhook一直崩溃,查看组件的日志时发现以下错误日志:
+
+```log
+I1121 06:33:46.144605 1 webhook.go:83] karmada-webhook version: version.Info{GitVersion:"v1.3.0-425-gf7cac365", GitCommit:"f7cac365d743e5e40493f9ad90352f30123f7f1d", GitTreeState:"dirty", BuildDate:"2022-11-21T06:25:19Z", GoVersion:"go1.19.3", Compiler:"gc", Platform:"linux/amd64"}
+I1121 06:33:46.167045 1 webhook.go:113] registering webhooks to the webhook server
+I1121 06:33:46.169425 1 internal.go:362] "Starting server" path="/metrics" kind="metrics" addr="[::]:8080"
+I1121 06:33:46.169569 1 internal.go:362] "Starting server" kind="health probe" addr="[::]:8000"
+I1121 06:33:46.169670 1 shared_informer.go:285] caches populated
+I1121 06:33:46.169828 1 internal.go:567] "Stopping and waiting for non leader election runnables"
+I1121 06:33:46.169848 1 internal.go:571] "Stopping and waiting for leader election runnables"
+I1121 06:33:46.169856 1 internal.go:577] "Stopping and waiting for caches"
+I1121 06:33:46.169883 1 internal.go:581] "Stopping and waiting for webhooks"
+I1121 06:33:46.169899 1 internal.go:585] "Wait completed, proceeding to shutdown the manager"
+E1121 06:33:46.169909 1 webhook.go:132] webhook server exits unexpectedly: too many open files
+E1121 06:33:46.169926 1 run.go:74] "command failed" err="too many open files"
+```
+
+这是一个资源耗尽问题。 你可以通过以下命令修复:
+
+```shell
+sysctl -w fs.inotify.max_user_watches=100000
+sysctl -w fs.inotify.max_user_instances=100000
+```
+
+相关Issue:https://github.com/kubernetes-sigs/kind/issues/2928
+
+## 部署在 Karmada 控制面上的 ServiceAccount 无法生成 token Secret
+
+Kubernetes 社区为了提高 token 使用的安全性和可扩展性,提出了[KEP-1205](https://github.com/kubernetes/enhancements/tree/master/keps/sig-auth/1205-bound-service-account-tokens),该提案旨在引入一种新的机制来使用 ServiceAccount token,而不是直接将 ServiceAccount 生成的 Secret 挂载到 Pod 中,具体方式见[ServiceAccount automation](https://kubernetes.io/docs/reference/access-authn-authz/service-accounts-admin/#service-account-automation)。这个特性名为`BoundServiceAccountTokenVolume`,在 Kubernetes 1.22 版本中已经 GA。
+
+随着`BoundServiceAccountTokenVolume`特性的GA,Kubernetes 社区认为已经没必要为 ServiceAccount 自动生成 token 了,因为这样并不安全,于是又提出了[KEP-2799](https://github.com/kubernetes/enhancements/tree/master/keps/sig-auth/2799-reduction-of-secret-based-service-account-token),这个 KEP 的一个目的是不再为 ServiceAccount 自动生成 token Secret,另外一个目的是要清除未被使用的 ServiceAccount 产生的 token Secret。
+
+对于第一个目的,社区提供了`LegacyServiceAccountTokenNoAutoGeneration`特性开关,该特性开关在 Kubernetes 1.24 版本中已进入 Beta 阶段,这也正是 Karmada 控制面无法生成 token Secret 的原因。当然了,如果用户仍想使用之前的方式,为 ServiceAccount 生成 Secret,可以参考[此处](https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/#manually-create-an-api-token-for-a-serviceaccount)进行操作。
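+
+例如,下面是一个手动为 ServiceAccount 创建 token Secret 的示意清单(其中的 `my-serviceaccount` 为假设的 ServiceAccount 名称,仅作示例):
+
+```yaml
+apiVersion: v1
+kind: Secret
+metadata:
+  name: my-serviceaccount-token
+  annotations:
+    kubernetes.io/service-account.name: my-serviceaccount
+type: kubernetes.io/service-account-token
+```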
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/tutorials/access-service-across-clusters.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/tutorials/access-service-across-clusters.md
new file mode 100644
index 000000000..33ce6083b
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/tutorials/access-service-across-clusters.md
@@ -0,0 +1,176 @@
+---
+title: Access service across clusters within native service
+---
+
+In Karmada, the MultiClusterService can enable users to access services across clusters with the native service domain name, like `foo.svc`, with the aim of providing users with a seamless experience when accessing services across multiple clusters, as if they were operating within a single cluster.
+
+This document provides an example of how to enable MultiClusterService for accessing service across clusters with native service.
+
+## Prerequisites
+
+### Karmada has been installed
+
+We can install Karmada by referring to [Quick Start](https://github.com/karmada-io/karmada#quick-start), or directly run `hack/local-up-karmada.sh` script which is also used to run our E2E cases.
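+
+For example, a minimal local setup (assuming Docker and kind are already available) might look like this:
+
+```sh
+git clone https://github.com/karmada-io/karmada
+cd karmada
+hack/local-up-karmada.sh
+```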
+
+### Member Cluster Network
+
+Ensure that at least two clusters have been added to Karmada, and the container networks between member clusters are connected.
+
+* If you use the `hack/local-up-karmada.sh` script to deploy Karmada, Karmada will have three member clusters, and the container networks of `member1` and `member2` will be connected.
+* You can use `Submariner` or other related open source projects to connect networks between member clusters.
+
+Note: In order to prevent routing conflicts, the Pod and Service CIDRs of the clusters must not overlap.
+
+## Deploy a deployment in the `member1` cluster
+
+We need to deploy a deployment in the `member1` cluster:
+```yaml
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+ name: nginx
+ labels:
+ app: nginx
+spec:
+ replicas: 2
+ selector:
+ matchLabels:
+ app: nginx
+ template:
+ metadata:
+ labels:
+ app: nginx
+ spec:
+ containers:
+ - image: nginx
+ name: nginx
+ resources:
+ requests:
+ cpu: 25m
+ memory: 64Mi
+ limits:
+ cpu: 25m
+ memory: 64Mi
+---
+apiVersion: policy.karmada.io/v1alpha1
+kind: PropagationPolicy
+metadata:
+ name: nginx-propagation
+spec:
+ resourceSelectors:
+ - apiVersion: apps/v1
+ kind: Deployment
+ name: nginx
+ placement:
+ clusterAffinity:
+ clusterNames:
+ - member1
+```
+
+After deploying, you can check the created pods:
+```sh
+$ karmadactl get po
+NAME CLUSTER READY STATUS RESTARTS AGE
+nginx-5c54b4855f-6sq9s member1 1/1 Running 0 28s
+nginx-5c54b4855f-vp948 member1 1/1 Running 0 28s
+```
+
+## Deploy a curl pod in the `member2` cluster
+
+Let's deploy a curl pod in the `member2` cluster:
+```yaml
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+ name: curl
+ labels:
+ app: curl
+spec:
+ replicas: 1
+ selector:
+ matchLabels:
+ app: curl
+ template:
+ metadata:
+ labels:
+ app: curl
+ spec:
+ containers:
+ - image: curlimages/curl:latest
+ command: ["sleep", "infinity"]
+ name: curl
+ resources:
+ requests:
+ cpu: 25m
+ memory: 64Mi
+ limits:
+ cpu: 25m
+ memory: 64Mi
+---
+apiVersion: policy.karmada.io/v1alpha1
+kind: PropagationPolicy
+metadata:
+ name: curl-propagation
+spec:
+ resourceSelectors:
+ - apiVersion: apps/v1
+ kind: Deployment
+ name: curl
+ placement:
+ clusterAffinity:
+ clusterNames:
+ - member2
+```
+
+
+After deploying, you can check the created pods:
+```sh
+$ karmadactl get po -C member2
+NAME CLUSTER READY STATUS RESTARTS AGE
+curl-6894f46595-c75rc member2 1/1 Running 0 15s
+```
+
+Later, we will run the curl command in this pod.
+
+## Deploy MultiClusterService and Service in Karmada
+
+Now, instead of using PropagationPolicy/ClusterPropagationPolicy for the service, we utilize MultiClusterService for propagation.
+
+To enable multi-cluster service in Karmada, deploy the following yaml:
+```yaml
+apiVersion: v1
+kind: Service
+metadata:
+ name: nginx
+spec:
+ ports:
+ - port: 80
+ targetPort: 8080
+ selector:
+ app: nginx
+---
+apiVersion: networking.karmada.io/v1alpha1
+kind: MultiClusterService
+metadata:
+ name: nginx
+spec:
+ types:
+ - CrossCluster
+ consumerClusters:
+ - name: member2
+ providerClusters:
+ - name: member1
+```
+
+## Access the backend pods from member2 cluster
+
+To access the backend pods in the member1 cluster from the member2 cluster, execute the following command:
+```sh
+$ karmadactl exec -C member2 curl-6894f46595-c75rc -it -- sh
+~ $ curl http://nginx.default
+Hello, world!
+Version: 1.0.0
+Hostname: nginx-0
+```
+
+Using MultiClusterService, the pods are situated solely in the member1 cluster. However, they can be accessed from the member2 cluster using the native service name.
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/tutorials/autoscaling-federatedhpa-with-cronfederatedhpa.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/tutorials/autoscaling-federatedhpa-with-cronfederatedhpa.md
new file mode 100644
index 000000000..23d86c307
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/tutorials/autoscaling-federatedhpa-with-cronfederatedhpa.md
@@ -0,0 +1,194 @@
+---
+title: 使用CronFederatedHPA自动伸缩FederatedHPA
+---
+
+在Karmada中,CronFederatedHPA用于伸缩工作负载的副本数(任何具有scale子资源的工作负载,如Deployment)或FederatedHPA的 minReplicas/maxReplicas,目的是提前伸缩业务以满足突发的负载峰值。
+
+CronFederatedHPA旨在在特定时间点对资源进行伸缩。当工作负载仅由CronFederatedHPA直接伸缩时,在到达指定时间之前,其副本数将保持不变,这意味着在此之前它无法处理更多的请求。
+因此,为了确保工作负载既能提前扩容以应对后续的高峰负载,又能满足实时的业务需求,我们建议首先使用CronFederatedHPA来伸缩FederatedHPA,然后由FederatedHPA根据其指标来伸缩工作负载的副本数。
+
+本文档将为您提供一个示例,演示如何将 CronFederatedHPA 应用于 FederatedHPA。
+
+## 前提条件
+
+### Karmada 已安装
+
+您可以参考 [快速入门](https://github.com/karmada-io/karmada#quick-start) 安装 Karmada,或直接运行 `hack/local-up-karmada.sh` 脚本,该脚本也用于运行 E2E 测试。
+
+## 在 `member1` 和 `member2` 集群中部署工作负载
+
+我们需要在 member1 和 member2 集群中部署 deployment(2 个副本):
+```yaml
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+ name: nginx
+ labels:
+ app: nginx
+spec:
+ replicas: 2
+ selector:
+ matchLabels:
+ app: nginx
+ template:
+ metadata:
+ labels:
+ app: nginx
+ spec:
+ containers:
+ - image: nginx
+ name: nginx
+ resources:
+ requests:
+ cpu: 25m
+ memory: 64Mi
+ limits:
+ cpu: 25m
+ memory: 64Mi
+---
+apiVersion: policy.karmada.io/v1alpha1
+kind: PropagationPolicy
+metadata:
+ name: nginx-propagation
+spec:
+ resourceSelectors:
+ - apiVersion: apps/v1
+ kind: Deployment
+ name: nginx
+ placement:
+ clusterAffinity:
+ clusterNames:
+ - member1
+ - member2
+ replicaScheduling:
+ replicaDivisionPreference: Weighted
+ replicaSchedulingType: Divided
+ weightPreference:
+ staticWeightList:
+ - targetCluster:
+ clusterNames:
+ - member1
+ weight: 1
+ - targetCluster:
+ clusterNames:
+ - member2
+ weight: 1
+```
+
+部署完成后,您可以检查已创建的 pods:
+```sh
+$ karmadactl get pods
+NAME CLUSTER READY STATUS RESTARTS AGE
+nginx-777bc7b6d7-cmt6k member1 1/1 Running 0 27m
+nginx-777bc7b6d7-8lmcg member2 1/1 Running 0 27m
+```
+
+## 在 Karmada 控制平面中部署 FederatedHPA
+
+让我们在Karmada控制平面中创建一个FederatedHPA,用来管理跨集群的nginx deployment:
+```yaml
+apiVersion: autoscaling.karmada.io/v1alpha1
+kind: FederatedHPA
+metadata:
+ name: nginx-fhpa
+spec:
+ scaleTargetRef:
+ apiVersion: apps/v1
+ kind: Deployment
+ name: nginx
+ minReplicas: 1
+ maxReplicas: 10
+ behavior:
+ scaleDown:
+ stabilizationWindowSeconds: 10
+ scaleUp:
+ stabilizationWindowSeconds: 10
+ metrics:
+ - type: Resource
+ resource:
+ name: cpu
+ target:
+ type: Utilization
+ averageUtilization: 80
+```
+FederatedHPA会在平均CPU利用率超过80%时扩容工作负载的副本。相反,如果平均CPU利用率低于80%,则会缩容工作负载的副本数量。
+
+## 在 Karmada 控制平面中部署CronFederatedHPA
+
+为了自动伸缩FederatedHPA的minReplicas,让我们在Karmada控制平面中创建CronFederatedHPA:
+
+```yaml
+apiVersion: autoscaling.karmada.io/v1alpha1
+kind: CronFederatedHPA
+metadata:
+ name: nginx-cronfhpa
+ namespace: default
+spec:
+ scaleTargetRef:
+ apiVersion: autoscaling.karmada.io/v1alpha1
+ kind: FederatedHPA
+ name: nginx-fhpa
+ rules:
+ - name: "scale-up"
+ schedule: "*/1 * * * *"
+ targetMinReplicas: 5
+```
+
+`spec.schedule` 遵循以下格式:
+```
+# ┌───────────── minute (0 - 59)
+# │ ┌───────────── hour (0 - 23)
+# │ │ ┌───────────── day of the month (1 - 31)
+# │ │ │ ┌───────────── month (1 - 12)
+# │ │ │ │ ┌───────────── day of the week (0 - 6) (Sunday to Saturday;
+# │ │ │ │ │ 7 is also Sunday on some systems)
+# │ │ │ │ │ OR sun, mon, tue, wed, thu, fri, sat
+# │ │ │ │ │
+# * * * * *
+```
+表达式 `*/1 * * * *` 表示每分钟将FederatedHPA的minReplicas更新为5。此操作能确保工作负载会被扩容到至少5个副本,以处理突发流量洪峰。
+
+## 测试扩容功能
+
+等待一分钟后,FederatedHPA的minReplicas会被CronFederatedHPA更新为5,此操作会触发 nginx deployment 的副本数被扩容为5。检查Pod的数量,看副本是否已扩容:
+```sh
+$ karmadactl get po
+NAME CLUSTER READY STATUS RESTARTS AGE
+nginx-777bc7b6d7-7vl2r member1 1/1 Running 0 59s
+nginx-777bc7b6d7-cmt6k member1 1/1 Running 0 27m
+nginx-777bc7b6d7-pc5dk member1 1/1 Running 0 59s
+nginx-777bc7b6d7-8lmcg member2 1/1 Running 0 27m
+nginx-777bc7b6d7-pghl7 member2 1/1 Running 0 59s
+```
+
+如果业务需求需要更多副本,FederatedHPA将根据指标(例如CPU利用率)自动伸缩副本数。
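+
+此时您也可以直接查看 FederatedHPA 的当前状态(以下命令假设使用 `hack/local-up-karmada.sh` 部署时的默认 kubeconfig,仅供参考):
+
+```sh
+kubectl --kubeconfig $HOME/.kube/karmada.config --context karmada-apiserver get fhpa nginx-fhpa
+```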
+
+通过检查CronFederatedHPA的状态字段,您可以查看伸缩历史记录:
+```yaml
+apiVersion: autoscaling.karmada.io/v1alpha1
+kind: CronFederatedHPA
+metadata:
+ name: nginx-cronfhpa
+ namespace: default
+spec:
+ rules:
+ - failedHistoryLimit: 3
+ name: scale-up
+ schedule: '*/1 * * * *'
+ successfulHistoryLimit: 3
+ suspend: false
+ targetMinReplicas: 5
+ scaleTargetRef:
+ apiVersion: autoscaling.karmada.io/v1alpha1
+ kind: FederatedHPA
+ name: nginx-fhpa
+status:
+ executionHistories:
+ - nextExecutionTime: "2023-07-29T07:53:00Z" # 下一次执行时间
+ ruleName: scale-up
+ successfulExecutions:
+ - appliedMinReplicas: 5 # CronFederatedHPA将 minReplicas 更新为5
+ executionTime: "2023-07-29T07:52:00Z" # 上一次实际执行时间
+ scheduleTime: "2023-07-29T07:52:00Z" # 上一次期待执行时间
+```
+伸缩历史包括成功和失败操作的信息。
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/tutorials/autoscaling-with-custom-metrics.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/tutorials/autoscaling-with-custom-metrics.md
new file mode 100644
index 000000000..6c22f2431
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/tutorials/autoscaling-with-custom-metrics.md
@@ -0,0 +1,453 @@
+---
+title: FederatedHPA scales with custom metrics
+---
+In Karmada, a FederatedHPA scales up/down the workload's replicas across multiple clusters, with the aim of automatically scaling the workload to match the demand.
+
+FederatedHPA not only supports resource metrics such as CPU and memory, but also supports custom metrics
+which may expand the use cases of FederatedHPA.
+
+This document walks you through an example of enabling FederatedHPA to automatically manage scale for a cross-cluster app with custom metrics.
+
+The walkthrough example will do as follows:
+
+![federatedhpa-custom-metrics-demo](../resources/tutorials/custom_metrics.png)
+
+* One sample-deployment's pod exists in `member1` cluster.
+* The service is deployed in `member1` and `member2` cluster.
+* Request the multi-cluster service and trigger an increase in the pod's custom metric (http_requests_total).
+* The replicas will be scaled up in `member1` and `member2` cluster.
+
+## Prerequisites
+
+### Karmada has been installed
+
+You can install Karmada by referring to [Quick Start](https://github.com/karmada-io/karmada#quick-start), or directly run `hack/local-up-karmada.sh` script which is also used to run our E2E cases.
+
+### Member Cluster Network
+
+Ensure that at least two clusters have been added to Karmada, and the container networks between member clusters are connected.
+
+- If you use the `hack/local-up-karmada.sh` script to deploy Karmada, Karmada will have three member clusters, and the container networks of the `member1` and `member2` will be connected.
+- You can use `Submariner` or other related open source projects to connect networks between member clusters.
+
+> Note: In order to prevent routing conflicts, the Pod and Service CIDRs of the clusters must not overlap.
+
+### The ServiceExport and ServiceImport CRDs have been installed
+
+You need to install `ServiceExport` and `ServiceImport` in the member clusters to enable multi-cluster service.
+
+After `ServiceExport` and `ServiceImport` have been installed on the **Karmada Control Plane**, you can create `ClusterPropagationPolicy` to propagate those two CRDs to the member clusters.
+
+```yaml
+# propagate ServiceExport CRD
+apiVersion: policy.karmada.io/v1alpha1
+kind: ClusterPropagationPolicy
+metadata:
+ name: serviceexport-policy
+spec:
+ resourceSelectors:
+ - apiVersion: apiextensions.k8s.io/v1
+ kind: CustomResourceDefinition
+ name: serviceexports.multicluster.x-k8s.io
+ placement:
+ clusterAffinity:
+ clusterNames:
+ - member1
+ - member2
+---
+# propagate ServiceImport CRD
+apiVersion: policy.karmada.io/v1alpha1
+kind: ClusterPropagationPolicy
+metadata:
+ name: serviceimport-policy
+spec:
+ resourceSelectors:
+ - apiVersion: apiextensions.k8s.io/v1
+ kind: CustomResourceDefinition
+ name: serviceimports.multicluster.x-k8s.io
+ placement:
+ clusterAffinity:
+ clusterNames:
+ - member1
+ - member2
+```
+
+### prometheus and prometheus-adapter have been installed in member clusters
+
+You need to install `prometheus` and `prometheus-adapter` in the member clusters to provide the custom metrics.
+You can install them by running the following in the member clusters:
+```sh
+git clone https://github.com/prometheus-operator/kube-prometheus.git
+cd kube-prometheus
+kubectl apply --server-side -f manifests/setup
+kubectl wait \
+ --for condition=Established \
+ --all CustomResourceDefinition \
+ --namespace=monitoring
+kubectl apply -f manifests/
+```
+
+You can verify the installation by the following command:
+```
+kubectl --kubeconfig=/root/.kube/members.config --context=member1 get po -nmonitoring
+NAME READY STATUS RESTARTS AGE
+alertmanager-main-0 2/2 Running 0 30h
+alertmanager-main-1 2/2 Running 0 30h
+alertmanager-main-2 2/2 Running 0 30h
+blackbox-exporter-6bc47b9578-zcbb7 3/3 Running 0 30h
+grafana-6b68cd6b-vmw74 1/1 Running 0 30h
+kube-state-metrics-597db7f85d-2hpfs 3/3 Running 0 30h
+node-exporter-q8hdx 2/2 Running 0 30h
+prometheus-adapter-57d9587488-86ckj 1/1 Running 0 29h
+prometheus-adapter-57d9587488-zrt29 1/1 Running 0 29h
+prometheus-k8s-0 2/2 Running 0 30h
+prometheus-k8s-1 2/2 Running 0 30h
+prometheus-operator-7d4b94944f-kkwkk 2/2 Running 0 30h
+```
+
+### karmada-metrics-adapter has been installed in Karmada control plane
+
+You need to install `karmada-metrics-adapter` in the Karmada control plane to provide the metrics API. Install it by running:
+```sh
+hack/deploy-metrics-adapter.sh ${host_cluster_kubeconfig} ${host_cluster_context} ${karmada_apiserver_kubeconfig} ${karmada_apiserver_context_name}
+```
+
+If you use the `hack/local-up-karmada.sh` script to deploy Karmada, you can run following command to deploy `karmada-metrics-adapter`:
+```sh
+hack/deploy-metrics-adapter.sh $HOME/.kube/karmada.config karmada-host $HOME/.kube/karmada.config karmada-apiserver
+```
+
+## Deploy workload in `member1` and `member2` cluster
+
+You need to deploy a sample deployment (1 replica) and service in `member1` and `member2`.
+
+```yaml
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+ name: sample-app
+ labels:
+ app: sample-app
+spec:
+ replicas: 1
+ selector:
+ matchLabels:
+ app: sample-app
+ template:
+ metadata:
+ labels:
+ app: sample-app
+ spec:
+ containers:
+ - image: luxas/autoscale-demo:v0.1.2
+ name: metrics-provider
+ ports:
+ - name: http
+ containerPort: 8080
+---
+apiVersion: v1
+kind: Service
+metadata:
+ labels:
+ app: sample-app
+ name: sample-app
+spec:
+ ports:
+ - name: http
+ port: 80
+ protocol: TCP
+ targetPort: 8080
+ selector:
+ app: sample-app
+ type: ClusterIP
+---
+apiVersion: policy.karmada.io/v1alpha1
+kind: PropagationPolicy
+metadata:
+ name: app-propagation
+spec:
+ resourceSelectors:
+ - apiVersion: apps/v1
+ kind: Deployment
+ name: sample-app
+ - apiVersion: v1
+ kind: Service
+ name: sample-app
+ placement:
+ clusterAffinity:
+ clusterNames:
+ - member1
+ - member2
+ replicaScheduling:
+ replicaDivisionPreference: Weighted
+ replicaSchedulingType: Divided
+ weightPreference:
+ staticWeightList:
+ - targetCluster:
+ clusterNames:
+ - member1
+ weight: 1
+ - targetCluster:
+ clusterNames:
+ - member2
+ weight: 1
+```
+
+After deploying, you can check the distribution of the pods and service:
+```sh
+$ karmadactl get pods
+NAME CLUSTER READY STATUS RESTARTS AGE
+sample-app-9b7d8c9f5-xrnfx member1 1/1 Running 0 111s
+$ karmadactl get svc
+NAME CLUSTER TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE ADOPTION
+sample-app member1 ClusterIP 10.11.29.250 80/TCP 3m53s Y
+```
+
+## Monitor your application in `member1` and `member2` cluster
+
+In order to monitor your application, you'll need to set up a ServiceMonitor pointing at the application. Assuming you've set up your Prometheus instance to use ServiceMonitors with the `app: sample-app` label, create a ServiceMonitor to monitor the app's metrics via the service:
+
+```yaml
+apiVersion: monitoring.coreos.com/v1
+kind: ServiceMonitor
+metadata:
+ name: sample-app
+ labels:
+ app: sample-app
+spec:
+ selector:
+ matchLabels:
+ app: sample-app
+ endpoints:
+ - port: http
+```
+
+```
+kubectl create -f sample-app.monitor.yaml
+```
+
+Now, you should see your metric (http_requests_total) appear in your Prometheus instance. Look it up via the dashboard, and make sure it has the `namespace` and `pod` labels. If not, check that the labels on the ServiceMonitor match the ones configured on the Prometheus CRD.
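+
+For example, a query like the following (assuming the app runs in the `default` namespace) should return one series per sample-app pod:
+
+```
+http_requests_total{namespace="default", pod=~"sample-app.*"}
+```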
+
+## Launch your adapter in `member1` and `member2` cluster
+After you deploy `prometheus-adapter`, you need to update the adapter config, which is necessary in order to expose custom metrics.
+
+```yaml
+apiVersion: v1
+kind: ConfigMap
+metadata:
+ name: adapter-config
+ namespace: monitoring
+data:
+ config.yaml: |-
+ "rules":
+ - "seriesQuery": |
+ {namespace!="",__name__!~"^container_.*"}
+ "resources":
+ "template": "<<.Resource>>"
+ "name":
+ "matches": "^(.*)_total"
+ "as": ""
+ "metricsQuery": |
+ sum by (<<.GroupBy>>) (
+ irate (
+ <<.Series>>{<<.LabelMatchers>>}[1m]
+ )
+ )
+```
+
+```
+$ kubectl apply -f prom-adapter.config.yaml
+# Restart prom-adapter pods
+$ kubectl rollout restart deployment prometheus-adapter -n monitoring
+```
+
+## Register the metrics API in `member1` and `member2` cluster
+
+You also need to register the custom metrics API with the API aggregator (part of the main Kubernetes API server). For that you need to create an APIService resource.
+
+```yaml
+apiVersion: apiregistration.k8s.io/v1
+kind: APIService
+metadata:
+ name: v1beta2.custom.metrics.k8s.io
+spec:
+ group: custom.metrics.k8s.io
+ groupPriorityMinimum: 100
+ insecureSkipTLSVerify: true
+ service:
+ name: prometheus-adapter
+ namespace: monitoring
+ version: v1beta2
+ versionPriority: 100
+```
+
+```
+$ kubectl create -f api-service.yaml
+```
+
+The API is registered as `custom.metrics.k8s.io/v1beta2`, and you can use the following command to verify:
+
+```
+$ kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta2/namespaces/default/pods/*/http_requests?selector=app%3Dsample-app"
+```
+
+The output is similar to:
+
+```
+{"kind":"MetricValueList","apiVersion":"custom.metrics.k8s.io/v1beta2","metadata":{},"items":[{"describedObject":{"kind":"Pod","namespace":"default","name":"sample-app-9b7d8c9f5-9lw6b","apiVersion":"/v1"},"metric":{"name":"http_requests","selector":null},"timestamp":"2023-06-14T09:09:54Z","value":"66m"}]}
+```
+
+If `karmada-metrics-adapter` is installed successfully, you can also verify it with the above command in Karmada control plane.
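+
+For example, if Karmada was deployed with `hack/local-up-karmada.sh` (so the kubeconfig path and context name below apply), the check against the Karmada control plane might look like:
+
+```
+$ kubectl --kubeconfig $HOME/.kube/karmada.config --context karmada-apiserver get --raw "/apis/custom.metrics.k8s.io/v1beta2/namespaces/default/pods/*/http_requests?selector=app%3Dsample-app"
+```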
+
+
+## Deploy FederatedHPA in Karmada control plane
+
+Then let's deploy FederatedHPA in Karmada control plane.
+
+```yaml
+apiVersion: autoscaling.karmada.io/v1alpha1
+kind: FederatedHPA
+metadata:
+ name: sample-app
+spec:
+ scaleTargetRef:
+ apiVersion: apps/v1
+ kind: Deployment
+ name: sample-app
+ minReplicas: 1
+ maxReplicas: 10
+ behavior:
+ scaleDown:
+ stabilizationWindowSeconds: 10
+ scaleUp:
+ stabilizationWindowSeconds: 10
+ metrics:
+ - type: Pods
+ pods:
+ metric:
+ name: http_requests
+ target:
+ averageValue: 700m
+ type: Value
+```
+
+After deploying, you can check the FederatedHPA:
+```sh
+$ kubectl --kubeconfig $HOME/.kube/karmada.config --context karmada-apiserver get fhpa
+NAME REFERENCE-KIND REFERENCE-NAME MINPODS MAXPODS REPLICAS AGE
+sample-app Deployment sample-app 1 10 1 15d
+```
+
+## Export service to `member1` cluster
+
+As mentioned before, you need a multi-cluster service to route the requests to the pods in the `member1` and `member2` clusters, so let's create this multi-cluster service.
+* Create a `ServiceExport` object on Karmada Control Plane, and then create a `PropagationPolicy` to propagate the `ServiceExport` object to `member1` and `member2` cluster.
+ ```yaml
+ apiVersion: multicluster.x-k8s.io/v1alpha1
+ kind: ServiceExport
+ metadata:
+ name: sample-app
+ ---
+ apiVersion: policy.karmada.io/v1alpha1
+ kind: PropagationPolicy
+ metadata:
+ name: serve-export-policy
+ spec:
+ resourceSelectors:
+ - apiVersion: multicluster.x-k8s.io/v1alpha1
+ kind: ServiceExport
+ name: sample-app
+ placement:
+ clusterAffinity:
+ clusterNames:
+ - member1
+ - member2
+ ```
+* Create a `ServiceImport` object on Karmada Control Plane, and then create a `PropagationPolicy` to propagate the `ServiceImport` object to `member1` cluster.
+ ```yaml
+ apiVersion: multicluster.x-k8s.io/v1alpha1
+ kind: ServiceImport
+ metadata:
+ name: sample-app
+ spec:
+ type: ClusterSetIP
+ ports:
+ - port: 80
+ protocol: TCP
+ ---
+ apiVersion: policy.karmada.io/v1alpha1
+ kind: PropagationPolicy
+ metadata:
+ name: serve-import-policy
+ spec:
+ resourceSelectors:
+ - apiVersion: multicluster.x-k8s.io/v1alpha1
+ kind: ServiceImport
+ name: sample-app
+ placement:
+ clusterAffinity:
+ clusterNames:
+ - member1
+ ```
+
+After deploying, you can check the multi-cluster service:
+```sh
+$ karmadactl get svc
+NAME CLUSTER TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE ADOPTION
+derived-sample-app member1 ClusterIP 10.11.59.213 80/TCP 9h Y
+```
+
+## Install hey http load testing tool in member1 cluster
+
+To generate HTTP requests, you can use `hey`.
+* Download `hey` and copy it to the kind cluster container.
+```
+$ wget https://hey-release.s3.us-east-2.amazonaws.com/hey_linux_amd64
+$ chmod +x hey_linux_amd64
+$ docker cp hey_linux_amd64 member1-control-plane:/usr/local/bin/hey
+```
+
+## Test scaling up
+
+* Check the pod distribution firstly.
+ ```sh
+ $ karmadactl get pods
+ NAME CLUSTER READY STATUS RESTARTS AGE
+ sample-app-9b7d8c9f5-xrnfx member1 1/1 Running 0 111s
+ ```
+
+* Check multi-cluster service ip.
+ ```sh
+ $ karmadactl get svc
+ NAME CLUSTER TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE ADOPTION
+ derived-sample-app member1 ClusterIP 10.11.59.213 80/TCP 20m Y
+ ```
+
+* Request the multi-cluster service with hey to increase the sample app's `http_requests` metric.
+ ```sh
+ $ docker exec member1-control-plane hey -c 1000 -z 1m http://10.11.59.213/metrics
+ ```
+
+* Wait about 15s for the replicas to be scaled up, then you can check the pod distribution again.
+ ```sh
+ $ karmadactl get po -l app=sample-app
+ NAME CLUSTER READY STATUS RESTARTS AGE
+ sample-app-9b7d8c9f5-454vz member2 1/1 Running 0 84s
+ sample-app-9b7d8c9f5-7fjhn member2 1/1 Running 0 69s
+ sample-app-9b7d8c9f5-ddf4s member2 1/1 Running 0 69s
+ sample-app-9b7d8c9f5-mxqmh member2 1/1 Running 0 84s
+ sample-app-9b7d8c9f5-qbc2j member2 1/1 Running 0 69s
+ sample-app-9b7d8c9f5-2tgxt member1 1/1 Running 0 69s
+ sample-app-9b7d8c9f5-66n9s member1 1/1 Running 0 69s
+ sample-app-9b7d8c9f5-fbzps member1 1/1 Running 0 84s
+ sample-app-9b7d8c9f5-ldmhz member1 1/1 Running 0 84s
+ sample-app-9b7d8c9f5-xrnfx member1 1/1 Running 0 87m
+ ```
+
+## Test scaling down
+
+After 1 minute, the load testing tool stops, and you can see the workload scaled back down across clusters.
+```sh
+$ karmadactl get pods -l app=sample-app
+NAME CLUSTER READY STATUS RESTARTS AGE
+sample-app-9b7d8c9f5-xrnfx member1 1/1 Running 0 91m
+```
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/tutorials/autoscaling-with-resource-metrics.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/tutorials/autoscaling-with-resource-metrics.md
new file mode 100644
index 000000000..8704b7d2d
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/tutorials/autoscaling-with-resource-metrics.md
@@ -0,0 +1,334 @@
+---
+title: Autoscaling across clusters with resource metrics
+---
+In Karmada, a FederatedHPA scales up/down the workload's replicas across multiple clusters, with the aim of automatically scaling the workload to match the demand.
+
+When the load increases, FederatedHPA scales up the replicas of the workload (the Deployment, StatefulSet, or other similar resource) if the number of Pods is under the configured maximum. When the load decreases, FederatedHPA scales down the replicas of the workload if the number of Pods is above the configured minimum.
+
+This document walks you through an example of enabling FederatedHPA to automatically manage scale for a cross-cluster deployed nginx.
+
+The walkthrough example will do as follows:
+![federatedhpa-demo](../resources/tutorials/federatedhpa-demo.png)
+
+* One deployment's pod exists in `member1` cluster.
+* The service is deployed in `member1` and `member2` cluster.
+* Request the multi-cluster service and trigger the pod's CPU usage increases.
+* The replicas will be scaled up in `member1` and `member2` cluster.
+
+## Prerequisites
+
+### Karmada has been installed
+
+We can install Karmada by referring to [Quick Start](https://github.com/karmada-io/karmada#quick-start), or directly run `hack/local-up-karmada.sh` script which is also used to run our E2E cases.
+
+### Member Cluster Network
+
+Ensure that at least two clusters have been added to Karmada, and the container networks between member clusters are connected.
+
+- If you use the `hack/local-up-karmada.sh` script to deploy Karmada, Karmada will have three member clusters, and the container networks of the `member1` and `member2` will be connected.
+- You can use `Submariner` or other related open source projects to connect networks between member clusters.
+
+> Note: In order to prevent routing conflicts, the Pod and Service CIDRs of the clusters must not overlap.
+
+### The ServiceExport and ServiceImport CRDs have been installed
+
+We need to install `ServiceExport` and `ServiceImport` in the member clusters to enable multi-cluster service.
+
+After `ServiceExport` and `ServiceImport` have been installed on the **Karmada Control Plane**, we can create `ClusterPropagationPolicy` to propagate those two CRDs to the member clusters.
+
+```yaml
+# propagate ServiceExport CRD
+apiVersion: policy.karmada.io/v1alpha1
+kind: ClusterPropagationPolicy
+metadata:
+ name: serviceexport-policy
+spec:
+ resourceSelectors:
+ - apiVersion: apiextensions.k8s.io/v1
+ kind: CustomResourceDefinition
+ name: serviceexports.multicluster.x-k8s.io
+ placement:
+ clusterAffinity:
+ clusterNames:
+ - member1
+ - member2
+---
+# propagate ServiceImport CRD
+apiVersion: policy.karmada.io/v1alpha1
+kind: ClusterPropagationPolicy
+metadata:
+ name: serviceimport-policy
+spec:
+ resourceSelectors:
+ - apiVersion: apiextensions.k8s.io/v1
+ kind: CustomResourceDefinition
+ name: serviceimports.multicluster.x-k8s.io
+ placement:
+ clusterAffinity:
+ clusterNames:
+ - member1
+ - member2
+```
+
+### metrics-server has been installed in member clusters
+
+We need to install `metrics-server` in the member clusters to provide the metrics API. Install it by running:
+```sh
+hack/deploy-k8s-metrics-server.sh ${member_cluster_kubeconfig} ${member_cluster_context_name}
+```
+
+If you use the `hack/local-up-karmada.sh` script to deploy Karmada, you can run following command to deploy `metrics-server` in all three member clusters:
+```sh
+hack/deploy-k8s-metrics-server.sh $HOME/.kube/members.config member1
+hack/deploy-k8s-metrics-server.sh $HOME/.kube/members.config member2
+hack/deploy-k8s-metrics-server.sh $HOME/.kube/members.config member3
+```
+
+### karmada-metrics-adapter has been installed in Karmada control plane
+
+We need to install `karmada-metrics-adapter` in the Karmada control plane to provide the metrics API. Install it by running:
+```sh
+hack/deploy-metrics-adapter.sh ${host_cluster_kubeconfig} ${host_cluster_context} ${karmada_apiserver_kubeconfig} ${karmada_apiserver_context_name}
+```
+
+If you use the `hack/local-up-karmada.sh` script to deploy Karmada, `karmada-metrics-adapter` will be installed by default.
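+
+One way to confirm it is running (assuming the default kubeconfig created by `hack/local-up-karmada.sh`) is to check that the metrics APIServices are registered in the Karmada API server:
+
+```sh
+kubectl --kubeconfig $HOME/.kube/karmada.config --context karmada-apiserver get apiservices | grep metrics
+```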
+
+## Deploy workload in `member1` and `member2` cluster
+
+We need to deploy a deployment (1 replica) and a service in `member1` and `member2`.
+
+```yaml
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+ name: nginx
+ labels:
+ app: nginx
+spec:
+ replicas: 1
+ selector:
+ matchLabels:
+ app: nginx
+ template:
+ metadata:
+ labels:
+ app: nginx
+ spec:
+ containers:
+ - image: nginx
+ name: nginx
+ resources:
+ requests:
+ cpu: 25m
+ memory: 64Mi
+ limits:
+ cpu: 25m
+ memory: 64Mi
+---
+apiVersion: v1
+kind: Service
+metadata:
+ name: nginx-service
+spec:
+ ports:
+ - port: 80
+ targetPort: 80
+ selector:
+ app: nginx
+---
+apiVersion: policy.karmada.io/v1alpha1
+kind: PropagationPolicy
+metadata:
+ name: nginx-propagation
+spec:
+ resourceSelectors:
+ - apiVersion: apps/v1
+ kind: Deployment
+ name: nginx
+ - apiVersion: v1
+ kind: Service
+ name: nginx-service
+ placement:
+ clusterAffinity:
+ clusterNames:
+ - member1
+ - member2
+ replicaScheduling:
+ replicaDivisionPreference: Weighted
+ replicaSchedulingType: Divided
+ weightPreference:
+ staticWeightList:
+ - targetCluster:
+ clusterNames:
+ - member1
+ weight: 1
+ - targetCluster:
+ clusterNames:
+ - member2
+ weight: 1
+```
+
+After deploying, you can check the distribution of the pods and service:
+```sh
+$ karmadactl get pods
+NAME CLUSTER READY STATUS RESTARTS AGE
+nginx-777bc7b6d7-mbdn8 member1 1/1 Running 0 9h
+$ karmadactl get svc
+NAME CLUSTER TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE ADOPTION
+nginx-service member1 ClusterIP 10.11.216.215 80/TCP 9h Y
+nginx-service member2 ClusterIP 10.13.46.61 80/TCP 9h Y
+
+```
+
+## Deploy FederatedHPA in Karmada control plane
+
+Then let's deploy FederatedHPA in Karmada control plane.
+
+```yaml
+apiVersion: autoscaling.karmada.io/v1alpha1
+kind: FederatedHPA
+metadata:
+ name: nginx
+spec:
+ scaleTargetRef:
+ apiVersion: apps/v1
+ kind: Deployment
+ name: nginx
+ minReplicas: 1
+ maxReplicas: 10
+ behavior:
+ scaleDown:
+ stabilizationWindowSeconds: 10
+ scaleUp:
+ stabilizationWindowSeconds: 10
+ metrics:
+ - type: Resource
+ resource:
+ name: cpu
+ target:
+ type: Utilization
+ averageUtilization: 10
+```
+
+After deploying, you can check the FederatedHPA:
+```sh
+$ kubectl --kubeconfig $HOME/.kube/karmada.config --context karmada-apiserver get fhpa
+NAME REFERENCE-KIND REFERENCE-NAME MINPODS MAXPODS REPLICAS AGE
+nginx Deployment nginx 1 10 1 9h
+```
+
+## Export service to `member1` cluster
+
+As mentioned before, we need a multi-cluster service to route the requests to the pods in the `member1` and `member2` clusters, so let's create this multi-cluster service.
+* Create a `ServiceExport` object on Karmada Control Plane, and then create a `PropagationPolicy` to propagate the `ServiceExport` object to `member1` and `member2` cluster.
+ ```yaml
+ apiVersion: multicluster.x-k8s.io/v1alpha1
+ kind: ServiceExport
+ metadata:
+ name: nginx-service
+ ---
+ apiVersion: policy.karmada.io/v1alpha1
+ kind: PropagationPolicy
+ metadata:
+ name: serve-export-policy
+ spec:
+ resourceSelectors:
+ - apiVersion: multicluster.x-k8s.io/v1alpha1
+ kind: ServiceExport
+ name: nginx-service
+ placement:
+ clusterAffinity:
+ clusterNames:
+ - member1
+ - member2
+ ```
+* Create a `ServiceImport` object on Karmada Control Plane, and then create a `PropagationPolicy` to propagate the `ServiceImport` object to `member1` cluster.
+ ```yaml
+ apiVersion: multicluster.x-k8s.io/v1alpha1
+ kind: ServiceImport
+ metadata:
+ name: nginx-service
+ spec:
+ type: ClusterSetIP
+ ports:
+ - port: 80
+ protocol: TCP
+ ---
+ apiVersion: policy.karmada.io/v1alpha1
+ kind: PropagationPolicy
+ metadata:
+ name: serve-import-policy
+ spec:
+ resourceSelectors:
+ - apiVersion: multicluster.x-k8s.io/v1alpha1
+ kind: ServiceImport
+ name: nginx-service
+ placement:
+ clusterAffinity:
+ clusterNames:
+ - member1
+ ```
+
+After deploying, you can check the multi-cluster service:
+```sh
+$ karmadactl get svc
+NAME CLUSTER TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE ADOPTION
+derived-nginx-service member1 ClusterIP 10.11.59.213 80/TCP 9h Y
+```
+
+## Install hey http load testing tool in member1 cluster
+
+To generate HTTP requests, we use `hey`.
+* Download `hey` and copy it to the kind cluster container.
+```
+$ wget https://hey-release.s3.us-east-2.amazonaws.com/hey_linux_amd64
+$ chmod +x hey_linux_amd64
+$ docker cp hey_linux_amd64 member1-control-plane:/usr/local/bin/hey
+```
+
+## Test scaling up
+
+* Check the pod distribution firstly.
+ ```sh
+ $ karmadactl get pods
+ NAME CLUSTER READY STATUS RESTARTS AGE
+ nginx-777bc7b6d7-mbdn8 member1 1/1 Running 0 61m
+ ```
+
+* Check multi-cluster service ip.
+ ```sh
+ $ karmadactl get svc
+ NAME CLUSTER TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE ADOPTION
+ derived-nginx-service member1 ClusterIP 10.11.59.213 80/TCP 20m Y
+ ```
+
+* Request multi-cluster service with hey to increase the nginx pods' CPU usage.
+ ```sh
+ $ docker exec member1-control-plane hey -c 1000 -z 1m http://10.11.59.213
+ ```
+
+* Wait about 15s for the replicas to be scaled up, then you can check the pod distribution again.
+ ```sh
+ $ karmadactl get pods -l app=nginx
+ NAME CLUSTER READY STATUS RESTARTS AGE
+ nginx-777bc7b6d7-c2cfv member1 1/1 Running 0 22s
+ nginx-777bc7b6d7-mbdn8 member1 1/1 Running 0 62m
+ nginx-777bc7b6d7-pk2s4 member1 1/1 Running 0 37s
+ nginx-777bc7b6d7-tbb4k member1 1/1 Running 0 37s
+ nginx-777bc7b6d7-znlj9 member1 1/1 Running 0 22s
+ nginx-777bc7b6d7-6n7d9 member2 1/1 Running 0 22s
+ nginx-777bc7b6d7-dfbnw member2 1/1 Running 0 22s
+ nginx-777bc7b6d7-fsdg2 member2 1/1 Running 0 37s
+ nginx-777bc7b6d7-kddhn member2 1/1 Running 0 22s
+ nginx-777bc7b6d7-lwn52 member2 1/1 Running 0 37s
+
+ ```
+
+## Test scaling down
+
+After 1 minute, the load testing tool stops, and you can see the workload scaled back down across clusters.
+```sh
+$ karmadactl get pods -l app=nginx
+NAME CLUSTER READY STATUS RESTARTS AGE
+nginx-777bc7b6d7-mbdn8 member1 1/1 Running 0 64m
+```
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/tutorials/autoscaling-workload-with-cronfederatedhpa.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/tutorials/autoscaling-workload-with-cronfederatedhpa.md
new file mode 100644
index 000000000..2c5ece8c2
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/tutorials/autoscaling-workload-with-cronfederatedhpa.md
@@ -0,0 +1,161 @@
+---
+title: 使用CronFederatedHPA自动伸缩跨集群Deployment
+---
+
+在 Karmada 中，CronFederatedHPA 负责扩展工作负载（如 Deployment）的副本或 FederatedHPA 的 minReplicas/maxReplicas，其目的是主动扩展业务，以处理突发的负载峰值。
+
+本文提供了一个示例，说明如何为跨集群部署的 nginx Deployment 启用 CronFederatedHPA。
+
+## 前提条件
+
+### Karmada 已安装
+
+您可以参考 [快速入门](https://github.com/karmada-io/karmada#quick-start) 安装 Karmada,或直接运行 `hack/local-up-karmada.sh` 脚本,该脚本也用于运行 E2E 测试。
+
+## 在 `member1` 和 `member2` 集群中部署工作负载
+
+我们需要在 member1 和 member2 集群中部署 deployment(2 个副本):
+```yaml
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+ name: nginx
+ labels:
+ app: nginx
+spec:
+ replicas: 2
+ selector:
+ matchLabels:
+ app: nginx
+ template:
+ metadata:
+ labels:
+ app: nginx
+ spec:
+ containers:
+ - image: nginx
+ name: nginx
+ resources:
+ requests:
+ cpu: 25m
+ memory: 64Mi
+ limits:
+ cpu: 25m
+ memory: 64Mi
+---
+apiVersion: policy.karmada.io/v1alpha1
+kind: PropagationPolicy
+metadata:
+ name: nginx-propagation
+spec:
+ resourceSelectors:
+ - apiVersion: apps/v1
+ kind: Deployment
+ name: nginx
+ placement:
+ clusterAffinity:
+ clusterNames:
+ - member1
+ - member2
+ replicaScheduling:
+ replicaDivisionPreference: Weighted
+ replicaSchedulingType: Divided
+ weightPreference:
+ staticWeightList:
+ - targetCluster:
+ clusterNames:
+ - member1
+ weight: 1
+ - targetCluster:
+ clusterNames:
+ - member2
+ weight: 1
+```
+
+部署完成后,您可以检查已创建的 pods:
+```sh
+$ karmadactl get pods
+NAME CLUSTER READY STATUS RESTARTS AGE
+nginx-777bc7b6d7-rmmzv member1 1/1 Running 0 104s
+nginx-777bc7b6d7-9gf7g member2 1/1 Running 0 104s
+```
+
+## 在 Karmada 控制平面部署 CronFederatedHPA
+
+然后,在 Karmada 控制平面中部署 CronFederatedHPA,以扩容 Deployment:
+```yaml
+apiVersion: autoscaling.karmada.io/v1alpha1
+kind: CronFederatedHPA
+metadata:
+ name: nginx-cronfhpa
+ namespace: default
+spec:
+ scaleTargetRef:
+ apiVersion: apps/v1
+ kind: Deployment
+ name: nginx
+ rules:
+ - name: "scale-up"
+ schedule: "*/1 * * * *"
+ targetReplicas: 5
+ suspend: false
+```
+
+`spec.schedule` 遵循以下格式:
+```
+# ┌───────────── minute (0 - 59)
+# │ ┌───────────── hour (0 - 23)
+# │ │ ┌───────────── day of the month (1 - 31)
+# │ │ │ ┌───────────── month (1 - 12)
+# │ │ │ │ ┌───────────── day of the week (0 - 6) (Sunday to Saturday;
+# │ │ │ │ │ 7 is also Sunday on some systems)
+# │ │ │ │ │ OR sun, mon, tue, wed, thu, fri, sat
+# │ │ │ │ │
+# * * * * *
+```
+表达式 `*/1 * * * *` 的意思是 nginx deployment 的副本应每分钟被更新为 5 个，以确保能够处理接下来突发的流量洪峰。
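+
+如果希望在固定时刻（而不是每分钟）扩容，只需调整 cron 表达式即可。下面是一个仅作演示的示例（名称为假设的），在每个工作日早上 8:30 将副本扩容到 10 个：
+```sh
+kubectl --kubeconfig $HOME/.kube/karmada.config --context karmada-apiserver apply -f - <<EOF
+apiVersion: autoscaling.karmada.io/v1alpha1
+kind: CronFederatedHPA
+metadata:
+  name: nginx-cronfhpa-workday   # 假设的名称，仅用于演示
+  namespace: default
+spec:
+  scaleTargetRef:
+    apiVersion: apps/v1
+    kind: Deployment
+    name: nginx
+  rules:
+  - name: "scale-up-workday-morning"
+    schedule: "30 8 * * 1-5"     # 每个工作日（周一至周五）早上 8:30
+    targetReplicas: 10
+    suspend: false
+EOF
+```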
+
+## 测试伸缩功能
+
+一分钟后,通过CronFederatedHPA将nginx部署的副本扩展到5个。现在让我们检查Pod的数量,以验证是否按预期进行了扩展:
+```sh
+$ karmadactl get pods
+NAME CLUSTER READY STATUS RESTARTS AGE
+nginx-777bc7b6d7-8v9b4 member2 1/1 Running 0 18s
+nginx-777bc7b6d7-9gf7g member2 1/1 Running 0 8m2s
+nginx-777bc7b6d7-5snhz member1 1/1 Running 0 18s
+nginx-777bc7b6d7-rmmzv member1 1/1 Running 0 8m2s
+nginx-777bc7b6d7-z9kwg member1 1/1 Running 0 18s
+```
+
+通过检查 CronFederatedHPA 的状态字段,您可以访问扩展历史记录:
+```yaml
+$ kubectl --kubeconfig $HOME/.kube/karmada.config --context karmada-apiserver get cronfhpa nginx-cronfhpa -oyaml
+apiVersion: autoscaling.karmada.io/v1alpha1
+kind: CronFederatedHPA
+metadata:
+ name: nginx-cronfhpa
+ namespace: default
+spec:
+ rules:
+ - failedHistoryLimit: 3
+ name: scale-up
+ schedule: '*/1 * * * *'
+ successfulHistoryLimit: 3
+ suspend: false
+ targetReplicas: 5
+ scaleTargetRef:
+ apiVersion: apps/v1
+ kind: Deployment
+ name: nginx
+status:
+ executionHistories:
+ - nextExecutionTime: "2023-07-29T03:27:00Z" # 下一次执行时间
+ ruleName: scale-up
+ successfulExecutions:
+ - appliedReplicas: 5 # CronFederatedHPA将replicas更新为5
+ executionTime: "2023-07-29T03:26:00Z" # 上一次实际执行时间
+ scheduleTime: "2023-07-29T03:26:00Z" # 上一次期待执行时间
+```
+伸缩历史包括成功和失败操作的信息。
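+
+如果只想查看伸缩历史而不输出完整对象，也可以用 jsonpath 过滤（示例命令，kubeconfig 与上下文沿用上文）：
+```sh
+kubectl --kubeconfig $HOME/.kube/karmada.config --context karmada-apiserver \
+  get cronfhpa nginx-cronfhpa -o jsonpath='{.status.executionHistories}'
+```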
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/tutorials/crd-application.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/tutorials/crd-application.md
new file mode 100644
index 000000000..55bc076ae
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/tutorials/crd-application.md
@@ -0,0 +1,200 @@
+---
+title: 通过 Karmada 分发 CRD
+---
+在本节中,我们将引导您完成以下内容:
+
+- 安装 Karmada 控制平面。
+- 将 CRD 分发到多个集群。
+- 在特定集群中自定义 CRD。
+
+## 启动 Karmada 集群
+
+想要启动 Karmada，您可以参考[安装文档](../installation/installation.md)。
+
+如果您只想尝试 Karmada，请使用 ```hack/local-up-karmada.sh``` 构建开发环境。
+
+```sh
+git clone https://github.com/karmada-io/karmada
+cd karmada
+hack/local-up-karmada.sh
+```
+
+## 分发 CRD
+
+下面的步骤指导您如何分发 CRD [Guestbook](https://book.kubebuilder.io/quick-start.html#create-a-project)。
+
+假设您在 Karmada 仓库的 guestbook 目录下。
+
+```bash
+cd samples/guestbook
+```
+
+使用 Karmada 配置设置 KUBECONFIG 环境变量。
+
+```bash
+export KUBECONFIG=${HOME}/.kube/karmada.config
+```
+
+1. 在 Karmada 的控制平面上创建 Guestbook CRD
+
+```bash
+kubectl apply -f guestbooks-crd.yaml
+```
+
+此 CRD 应该被应用到 `karmada-apiserver`。
+
+2. 创建 ClusterPropagationPolicy,将 Guestbook CRD 分发到 member1
+
+```bash
+kubectl apply -f guestbooks-clusterpropagationpolicy.yaml
+```
+
+```yaml
+# guestbooks-clusterpropagationpolicy.yaml
+apiVersion: policy.karmada.io/v1alpha1
+kind: ClusterPropagationPolicy
+metadata:
+ name: example-policy
+spec:
+ resourceSelectors:
+ - apiVersion: apiextensions.k8s.io/v1
+ kind: CustomResourceDefinition
+ name: guestbooks.webapp.my.domain
+ placement:
+ clusterAffinity:
+ clusterNames:
+ - member1
+```
+
+根据 ClusterPropagationPolicy 中定义的规则,此 CRD 将分发到成员集群。
+
+> 注意:在这里我们只能使用 ClusterPropagationPolicy 而不是 PropagationPolicy。
+> 更多详细信息,请参考 FAQ [PropagationPolicy and ClusterPropagationPolicy](https://karmada.io/zh/docs/faq/#what-is-the-difference-between-propagationpolicy-and-clusterpropagationpolicy)
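+
+分发完成后，可以先在成员集群中确认该 CRD 已经创建（示例命令，这里假设成员集群的 kubeconfig 位于 `${HOME}/.kube/members.config`，与本文后续步骤一致）：
+```bash
+kubectl --kubeconfig=${HOME}/.kube/members.config --context member1 get crd guestbooks.webapp.my.domain
+```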
+
+3. 在 Karmada 控制平面上创建名为 `guestbook-sample` 的 Guestbook CR
+
+```bash
+kubectl apply -f guestbook.yaml
+```
+
+4. 创建 PropagationPolicy,将 `guestbook-sample` 分发到 member1
+
+```bash
+kubectl apply -f guestbooks-propagationpolicy.yaml
+```
+
+```yaml
+# guestbooks-propagationpolicy.yaml
+apiVersion: policy.karmada.io/v1alpha1
+kind: PropagationPolicy
+metadata:
+ name: example-policy
+spec:
+ resourceSelectors:
+ - apiVersion: webapp.my.domain/v1
+ kind: Guestbook
+ placement:
+ clusterAffinity:
+ clusterNames:
+ - member1
+```
+
+5. 检查 Karmada 中 `guestbook-sample` 的状态
+
+```bash
+kubectl get guestbook -oyaml
+```
+
+输出类似于以下内容:
+
+```yaml
+apiVersion: webapp.my.domain/v1
+kind: Guestbook
+metadata:
+ annotations:
+ kubectl.kubernetes.io/last-applied-configuration: |
+ {"apiVersion":"webapp.my.domain/v1","kind":"Guestbook","metadata":{"annotations":{},"name":"guestbook-sample","namespace":"default"},"spec":{"alias":"Name","configMapName":"test","size":2}}
+ creationTimestamp: "2022-11-18T06:56:24Z"
+ generation: 1
+ labels:
+ propagationpolicy.karmada.io/name: example-policy
+ propagationpolicy.karmada.io/namespace: default
+ name: guestbook-sample
+ namespace: default
+ resourceVersion: "682895"
+ uid: 2f8eda5f-35ab-4ac3-bcd4-affcf36a9341
+spec:
+ alias: Name
+ configMapName: test
+ size: 2
+```
+
+## 自定义 CRD
+
+1. 创建 OverridePolicy,将覆盖 member1 中 guestbook-sample 的 size 字段。
+
+```bash
+kubectl apply -f guestbooks-overridepolicy.yaml
+```
+
+```yaml
+# guestbooks-overridepolicy.yaml
+apiVersion: policy.karmada.io/v1alpha1
+kind: OverridePolicy
+metadata:
+ name: guestbook-sample
+spec:
+ resourceSelectors:
+ - apiVersion: webapp.my.domain/v1
+ kind: Guestbook
+ overrideRules:
+ - targetCluster:
+ clusterNames:
+ - member1
+ overriders:
+ plaintext:
+ - path: /spec/size
+ operator: replace
+ value: 4
+ - path: /metadata/annotations
+ operator: add
+ value: {"OverridePolicy":"test"}
+```
+
+2. 检查来自成员集群的 `guestbook-sample` 的 size 字段
+
+```bash
+kubectl --kubeconfig=${HOME}/.kube/members.config config use-context member1
+kubectl --kubeconfig=${HOME}/.kube/members.config get guestbooks -o yaml
+```
+
+如果按预期工作,则 `.spec.size` 将被覆盖为 `4`:
+
+```yaml
+apiVersion: webapp.my.domain/v1
+kind: Guestbook
+metadata:
+ annotations:
+ OverridePolicy: test
+ kubectl.kubernetes.io/last-applied-configuration: |
+ {"apiVersion":"webapp.my.domain/v1","kind":"Guestbook","metadata":{"annotations":{},"name":"guestbook-sample","namespace":"default"},"spec":{"alias":"Name","configMapName":"test","size":2}}
+ resourcebinding.karmada.io/name: guestbook-sample-guestbook
+ resourcebinding.karmada.io/namespace: default
+ resourcetemplate.karmada.io/uid: 2f8eda5f-35ab-4ac3-bcd4-affcf36a9341
+ creationTimestamp: "2022-11-18T06:56:37Z"
+ generation: 2
+ labels:
+ propagationpolicy.karmada.io/name: example-policy
+ propagationpolicy.karmada.io/namespace: default
+ resourcebinding.karmada.io/key: 6849fdbd59
+ work.karmada.io/name: guestbook-sample-6849fdbd59
+ work.karmada.io/namespace: karmada-es-member1
+ name: guestbook-sample
+ namespace: default
+ resourceVersion: "430024"
+ uid: 8818e33d-10bf-4270-b3b9-585977425bc9
+spec:
+ alias: Name
+ configMapName: test
+ size: 4
+```
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/tutorials/karmada-search.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/tutorials/karmada-search.md
new file mode 100644
index 000000000..9b20a9754
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/tutorials/karmada-search.md
@@ -0,0 +1,232 @@
+---
+title: 使用 Karmada-search 来体验多集群检索
+---
+
+本指南将涵盖以下内容:
+
+* 在 Karmada 控制面上安装 `karmada-search` 组件
+* 缓存多个集群的 `Deployment` 资源。
+* 使用 `OpenSearch` 图形界面检索 Kubernetes 资源。
+
+## 前提条件
+
+在安装 `karmada-search` 之前,您必须先安装 Karmada 控制平面。要启动 Karmada,您可以参考[安装概述](../installation/installation.md)。如果您只是想尝试 Karmada,我们建议使用 `hack/local-up-karmada.sh` 构建开发环境。
+
+```shell
+git clone https://github.com/karmada-io/karmada
+cd karmada
+hack/local-up-karmada.sh
+```
+
+## 安装 karmada-search
+
+如果您使用 `hack/local-up-karmada.sh`,那 `karmada-search` 已经安装好了。
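+
+无论采用哪种安装方式，都可以用类似下面的示例命令确认 `karmada-search` 的聚合 API（`search.karmada.io`）已经注册（kubeconfig 路径以实际环境为准）：
+```shell
+kubectl --kubeconfig ~/.kube/karmada.config --context karmada-apiserver get apiservice v1alpha1.search.karmada.io
+```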
+
+如果您通过 Helm 安装 Karmada,可以选择以下任意一种方式进行安装:
+
+* 在 `host` 模式下安装 `karmada-search`
+```shell
+helm upgrade --install karmada -n karmada-system --create-namespace --dependency-update \
+ --cleanup-on-fail ./charts/karmada \
+ --set components={"search"}
+```
+
+* 在 `component` 模式下单独安装 `karmada-search`
+
+为 `karmada-search` 编辑 `values.yaml` 文件:
+```yaml
+installMode: "component"
+components: [
+ "search"
+]
+...
+```
+
+执行下述命令:
+```shell
+kubectl config use-context host
+helm install karmada -n karmada-system ./charts/karmada
+```
+
+如果您通过 Karmada Operator 安装 Karmada,可以在安装 Karmada 组件时,执行下述命令:
+```shell
+kubectl create namespace test
+kubectl apply -f - <
+```
+
+> **说明:**
+>
+> 在开始之前,我们应该至少安装三个kubernetes集群,一个用于安装 Karmada 控制平面,另外两个作为成员集群。
+> 为了方便,我们直接使用 [hack/local-up-karmada.sh](https://karmada.io/docs/installation/#install-karmada-for-development-environment) 脚本快速准备上述集群。
+>
+> 执行上述命令后,您将看到Karmada控制面和多个成员集群已安装完成。
+
+### 在 karmada-controller-manager 开启 PropagationPolicyPreemption 特性开关
+
+#### 步骤二: 运行命令
+
+```shell
+$ kubectl --context karmada-host get deploy karmada-controller-manager -n karmada-system -o yaml | sed '/- --failover-eviction-timeout=30s/{n;s/- --v=4/- --feature-gates=PropagationPolicyPreemption=true\n &/g}' | kubectl --context karmada-host replace -f -
+```
+
+> **说明:**
+>
+> `PropagationPolicy Priority and Preemption` 特性是在 v1.7 版本中引入的,它由特性开关 `PropagationPolicyPreemption` 控制,默认是关闭的。
+>
+> 您只需执行上面的一条命令即可启用此特性开关。或者,如果您想使用更谨慎的方法,您可以这样做:
+>
+> 1. 执行 `kubectl --context karmada-host edit deploy karmada-controller-manager -n karmada-system`。
+> 2. 检查 `spec.template.spec.containers[0].command` 字段是否有 `--feature-gates=PropagationPolicyPreemption=true` 这一行。
+> 3. 如果没有,您需要添加 `--feature-gates=PropagationPolicyPreemption=true` 到上述字段中。
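+
+无论采用上述哪种方式，都可以用下面的示例命令确认特性开关已经生效：
+```shell
+kubectl --context karmada-host get deploy karmada-controller-manager -n karmada-system -o yaml | grep feature-gates
+```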
+
+### 在成员集群中预置资源
+
+为了模拟成员集群中已经存在现有资源,我们将一些简单的 Deployment 和 Service 部署到 `member1` 集群。
+
+#### 步骤三: 编写代码
+
+创建新文件 `/tmp/deployments-and-services.yaml` 并写入以下文本:
+
+```yaml
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+ name: nginx-deploy
+spec:
+ selector:
+ matchLabels:
+ app: nginx
+ replicas: 2
+ template:
+ metadata:
+ labels:
+ app: nginx
+ spec:
+ containers:
+ - name: nginx
+ image: nginx:latest
+ ports:
+ - containerPort: 80
+---
+apiVersion: v1
+kind: Service
+metadata:
+ name: nginx-svc
+spec:
+ selector:
+ app: nginx
+ type: NodePort
+ ports:
+ - port: 80
+ nodePort: 30000
+ targetPort: 80
+```
+
+#### 步骤四: 运行命令
+
+```shell
+$ kubectl --context member1 apply -f /tmp/deployments-and-services.yaml
+deployment.apps/nginx-deploy created
+service/nginx-svc created
+```
+
+因此,我们可以使用 `member1` 作为已部署现有资源的集群。
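+
+在开始迁移之前，也可以先确认这些资源确实已经存在于 `member1` 集群中（示例命令）：
+```shell
+kubectl --context member1 get deploy,svc
+```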
+
+## 指导
+
+### 将所有资源迁移到 Karmada
+
+#### 步骤一: 运行命令
+
+```shell
+$ kubectl --context karmada-apiserver apply -f /tmp/deployments-and-services.yaml
+deployment.apps/nginx-deploy created
+service/nginx-svc created
+```
+
+> **说明:**
+>
+> 相同的 Deployments 和 Services 应被部署到 Karmada 控制面,作为 [ResourceTemplate](https://karmada.io/docs/core-concepts/concepts#resource-template)。
+
+#### 步骤二: 编写代码
+
+创建新文件 `/tmp/pp-for-migrating-deployments-and-services.yaml` 并写入以下文本:
+
+```yaml
+apiVersion: policy.karmada.io/v1alpha1
+kind: PropagationPolicy
+metadata:
+ name: migrate-pp
+spec:
+ conflictResolution: Overwrite
+ placement:
+ clusterAffinity:
+ clusterNames:
+ - member1
+ priority: 0
+ resourceSelectors:
+ - apiVersion: apps/v1
+ kind: Deployment
+ - apiVersion: v1
+ kind: Service
+ schedulerName: default-scheduler
+```
+
+> **说明:**
+>
+> 请注意以下两个字段:
+>
+> * `spec.conflictResolution: Overwrite`:该字段的值必须是 [Overwrite](https://github.com/karmada-io/karmada/blob/master/docs/proposals/migration/design-of-seamless-cluster-migration-scheme.md#proposal)。
+> * `spec.resourceSelectors`:筛选要迁移的资源, 你可以自定义 [ResourceSelector](https://karmada.io/docs/userguide/scheduling/override-policy/#resource-selector)。
+
+#### 步骤三: 运行命令
+
+应用上述 `PropagationPolicy` 到 Karmada 控制面:
+
+```shell
+$ kubectl --context karmada-apiserver apply -f /tmp/pp-for-migrating-deployments-and-services.yaml
+propagationpolicy.policy.karmada.io/migrate-pp created
+```
+
+#### 步骤四: 验证
+
+```shell
+$ kubectl --context karmada-apiserver get deploy
+NAME READY UP-TO-DATE AVAILABLE AGE
+nginx-deploy 2/2 2 2 38s
+$ kubectl --context karmada-apiserver get rb
+NAME SCHEDULED FULLYAPPLIED AGE
+nginx-deploy-deployment True True 13s
+nginx-svc-service True True 13s
+```
+
+您将看到 Karmada 中的 Deployment 已全部就绪,并且 `ResourceBinding` 的 `FULLYAPPLIED` 为 True,这表示 `member1` 集群中的现有资源已被 Karmada 接管。
+
+至此,您已经完成了迁移,是不是很简单?
+
+### 应用更高优先级的 PropagationPolicy
+
+#### 步骤五: 编写代码
+
+创建新文件 `/tmp/pp-for-nginx-app.yaml` 并写入以下文本:
+
+```yaml
+apiVersion: policy.karmada.io/v1alpha1
+kind: PropagationPolicy
+metadata:
+ name: nginx-pp
+spec:
+ conflictResolution: Overwrite
+ placement:
+ clusterAffinity:
+ clusterNames:
+ - member1
+ - member2 ## propagate to more clusters other than member1
+ priority: 10 ## priority greater than above PropagationPolicy (10 > 0)
+ preemption: Always ## preemption should equal to Always
+ resourceSelectors:
+ - apiVersion: apps/v1
+ kind: Deployment
+ name: nginx-deploy
+ - apiVersion: v1
+ kind: Service
+ name: nginx-svc
+ schedulerName: default-scheduler
+```
+
+#### 步骤六: 运行命令
+
+应用上述 `PropagationPolicy` 到 Karmada 控制面:
+
+```shell
+$ kubectl --context karmada-apiserver apply -f /tmp/pp-for-nginx-app.yaml
+propagationpolicy.policy.karmada.io/nginx-pp created
+```
+
+#### 步骤七: 验证
+
+```shell
+$ kubectl --context member2 get deploy -o wide
+NAME READY UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR
+nginx-deploy 2/2 2 2 5m24s nginx nginx:latest app=nginx
+$ kubectl --context member2 get svc -o wide
+NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
+nginx-svc      NodePort   10.13.161.255   <none>        80:30000/TCP   54s   app=nginx
+...
+```
+
+您将看到 `nginx` 应用相关的资源被分发到 `member2` 集群,这表示更高优先级的 `PropagationPolicy` 生效了。
\ No newline at end of file
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/userguide/autoscaling/cronfederatedhpa.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/userguide/autoscaling/cronfederatedhpa.md
new file mode 100644
index 000000000..317e6e167
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/userguide/autoscaling/cronfederatedhpa.md
@@ -0,0 +1,38 @@
+---
+title: CronFederatedHPA
+---
+
+在 Karmada 中，CronFederatedHPA 用于定期执行自动伸缩操作。它可以伸缩具有 scale 子资源的工作负载，或者 Karmada FederatedHPA。
+
+典型的场景是在可预见的流量高峰到来前提前扩容工作负载。例如,如果我知道每天早上9点会突发流量洪峰,我想提前(例如,提前30分钟)扩容相关服务,以处理高峰负载并确保服务持续可用性。
+
+CronFederatedHPA 被实现为 Karmada 的API资源和控制器。控制器的行为由 CronFederatedHPA 资源决定。在 Karmada 控制平面内运行的CronFederatedHPA控制器根据预定义的 cron 计划来伸缩工作负载的副本或FederatedHPA的最小/最大副本数。
+
+## CronFederatedHPA 如何工作?
+
+![cronfederatedhpa-architecture](../../resources/userguide/autoscaling/cronfederatedhpa-architecture.png)
+Karmada 将 CronFederatedHPA 实现为一个周期性检查 cron 计划时间的控制循环。如果达到计划时间,它将伸缩工作负载的副本或 FederatedHPA 的最小/最大副本数。
+
+> 请注意,此功能需要 Karmada 版本 v1.7.0 或更高版本。
+
+## 伸缩有 scale 子资源的工作负载
+
+CronFederatedHPA 可以扩展具有 scale 子资源（如 Deployment 和 StatefulSet）的工作负载。但是，有一个限制需要注意：请确保 CronFederatedHPA 执行的伸缩操作不会与任何其他正在进行的伸缩操作冲突。例如，如果工作负载同时由 CronFederatedHPA 和 FederatedHPA 管理，最终结果可能会不确定。
+![autoscale-workload-conflicts](../../resources/userguide/autoscaling/autoscaling-conflicts.png)
+
+## 伸缩 FederatedHPA
+
+CronFederatedHPA 旨在在特定时间点对资源进行伸缩。当工作负载仅由 CronFederatedHPA 直接伸缩时，在到达下一个指定时间之前，其副本数将保持不变，这意味着在此期间它无法应对更多的请求。
+因此，为了确保工作负载既能提前扩容以应对后续的高峰负载，又能满足实时的业务需求，我们建议使用 CronFederatedHPA 来伸缩 FederatedHPA，再由 FederatedHPA 根据指标伸缩工作负载的规模。
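+
+下面给出一个仅作示意的例子（假设控制平面中已存在名为 `nginx-fhpa` 的 FederatedHPA，字段请以实际的 CronFederatedHPA API 规范为准），在每天 8:30 将该 FederatedHPA 的最小/最大副本数调高，以便提前应对高峰：
+```shell
+kubectl --kubeconfig $HOME/.kube/karmada.config --context karmada-apiserver apply -f - <<EOF
+apiVersion: autoscaling.karmada.io/v1alpha1
+kind: CronFederatedHPA
+metadata:
+  name: fhpa-cronfhpa        # 假设的名称，仅用于演示
+  namespace: default
+spec:
+  scaleTargetRef:
+    apiVersion: autoscaling.karmada.io/v1alpha1
+    kind: FederatedHPA
+    name: nginx-fhpa         # 假设已存在的 FederatedHPA
+  rules:
+  - name: "scale-up-morning"
+    schedule: "30 8 * * *"   # 每天早上 8:30
+    targetMinReplicas: 5
+    targetMaxReplicas: 20
+    suspend: false
+EOF
+```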
+
+## API对象
+
+CronFederatedHPA 是 Karmada 弹性伸缩 API 组中的一个 API。当前版本为 v1alpha1，可以在[此处](https://github.com/karmada-io/karmada/blob/release-1.6/pkg/apis/autoscaling/v1alpha1/federatedhpa_types.go#L23)查看 CronFederatedHPA 的 API 规范。
+
+## 下一步
+
+如果配置了 FederatedHPA,则可能还需要考虑运行类似于 [Cluster Autoscaler](https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler) 的集群级别自动扩缩容工具。
+
+有关CronFederatedHPA的更多信息:
+* 阅读[使用CronFederatedHPA自动缩放FederatedHPA](../../tutorials/autoscaling-federatedhpa-with-cronfederatedhpa.md)。
+* 阅读[使用CronFederatedHPA自动缩放工作负载](../../tutorials/autoscaling-workload-with-cronfederatedhpa.md)。
\ No newline at end of file
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/userguide/autoscaling/federatedhpa.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/userguide/autoscaling/federatedhpa.md
new file mode 100644
index 000000000..180543dbe
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/userguide/autoscaling/federatedhpa.md
@@ -0,0 +1,41 @@
+---
+title: FederatedHPA
+---
+
+在 Karmada 中,FederatedHPA 可以跨多个集群扩展/缩小工作负载的副本,旨在根据需求自动调整工作负载的规模。
+![img](../../resources/userguide/autoscaling/federatedhpa-overview.png)
+
+当负载增加时,如果 Pod 的数量低于配置的最大值,则 FederatedHPA 扩展工作负载(例如 Deployment、StatefulSet 或其他类似资源)的副本数。
+当负载减少时,如果 Pod 的数量高于配置的最小值,则 FederatedHPA 缩小工作负载的副本数。
+
+FederatedHPA 不适用于不能进行扩缩的对象(例如 DaemonSet)。
+
+FederatedHPA 是作为 Karmada API 资源和控制器实现的,该资源确定了控制器的行为。
+FederatedHPA 控制器运行在 Karmada 控制平面中,定期调整其目标(例如 Deployment)的所需规模,
+以匹配观察到的指标,例如平均 CPU 利用率、平均内存利用率或任何其他自定义指标。
+
+
+## FederatedHPA 如何工作?
+
+![federatedhpa-architecture](../../resources/userguide/autoscaling/federatedhpa-architecture.png)
+为了实现跨集群的自动扩缩容,Karmada 引入了 FederatedHPA 控制器和 `karmada-metrics-adapter`,它们的工作方式如下:
+1. HPA 控制器定期通过指标 API `metrics.k8s.io` 或 `custom.metrics.k8s.io` 使用标签选择器查询指标。
+1. `karmada-apiserver` 获取指标 API 查询结果,然后通过 API 服务注册将其路由到 `karmada-metrics-adapter`。
+1. `karmada-metrics-adapter` 将从目标集群(Pod 所在的集群)查询指标。收集到指标后,它会对这些指标进行聚合并返回结果。
+1. HPA 控制器将根据指标计算所需的副本数,并直接扩展/缩小工作负载的规模。然后,`karmada-scheduler` 将这些副本调度到成员集群中。
+
+> 注意:要使用此功能,Karmada 版本必须为 v1.6.0 或更高版本。
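+
+下面是一个极简的 FederatedHPA 示意（假设目标是 default 命名空间下名为 `nginx` 的 Deployment，字段请以实际的 API 规范为准），基于 CPU 平均利用率在 1~10 个副本之间伸缩：
+```shell
+kubectl --kubeconfig $HOME/.kube/karmada.config --context karmada-apiserver apply -f - <<EOF
+apiVersion: autoscaling.karmada.io/v1alpha1
+kind: FederatedHPA
+metadata:
+  name: nginx-fhpa           # 假设的名称，仅用于演示
+  namespace: default
+spec:
+  scaleTargetRef:
+    apiVersion: apps/v1
+    kind: Deployment
+    name: nginx
+  minReplicas: 1
+  maxReplicas: 10
+  metrics:
+  - type: Resource
+    resource:
+      name: cpu
+      target:
+        type: Utilization
+        averageUtilization: 80
+EOF
+```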
+
+## API 对象
+
+FederatedHPA 是 Karmada 弹性伸缩 API 组中的一个 API。当前版本为 v1alpha1,仅支持 CPU 和内存指标。
+
+您可以在[这里](https://github.com/karmada-io/karmada/blob/release-1.6/pkg/apis/autoscaling/v1alpha1/federatedhpa_types.go#L23)查看 FederatedHPA API 规范。
+
+## 后续规划
+
+如果您配置了 FederatedHPA,则可能还需要考虑运行类似于 [Cluster Autoscaler](https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler) 的集群级别自动扩缩容工具。
+
+有关 FederatedHPA 的更多信息:
+* 阅读 [FederatedHPA 基于 resource metrics(CPU/Memory) 弹性伸缩](../../tutorials/autoscaling-with-resource-metrics.md) 以了解 FederatedHPA。
+* 阅读 [FederatedHPA 基于 custom metrics(自定义指标)弹性伸缩](../../tutorials/autoscaling-with-custom-metrics.md) 以了解 FederatedHPA。
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/userguide/bestpractices/federated-resource-quota.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/userguide/bestpractices/federated-resource-quota.md
new file mode 100644
index 000000000..d58deb0cb
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/userguide/bestpractices/federated-resource-quota.md
@@ -0,0 +1,192 @@
+---
+title: Federated ResourceQuota
+---
+
+## Background
+
+With the widespread use of multi-clusters, the administrator may deploy services on multiple clusters and resource management of services under multiple clusters has become a new challenge.
+A traditional approach is that the administrator manually deploys a `namespace` and `ResourceQuota` under each Kubernetes cluster, where Kubernetes will limit the resources according to the `ResourceQuotas`.
+This approach is a bit inconvenient and not flexible enough.
+In addition, this practice is now challenged by the differences in service scale, available resources and resource types of each cluster.
+
+Resource administrators often need to manage and control the consumption of resources by each service in a global view.
+Here's where the `FederatedResourceQuota` API comes in. The following are several typical usage scenarios of `FederatedResourceQuota`.
+
+## What FederatedResourceQuota can do
+
+FederatedResourceQuota supports:
+
+* Global quota management for applications that run on multiple clusters.
+* Fine-grained management of quotas under the same namespace of different clusters.
+ * Ability to enumerate resource usage limits per namespace.
+ * Ability to monitor resource usage for tracked resources.
+ * Ability to reject resource usage exceeding hard quotas.
+
+![unified resourcequota](../../resources/key-features/unified-resourcequota.png)
+
+You can use FederatedResourceQuota to manage CPU, memory, storage and ephemeral-storage.
+
+## Deploy a simplest FederatedResourceQuota
+
+Assume you, an administrator, want to deploy service A across multiple clusters in namespace `test`.
+
+You can create a namespace called test on the Karmada control plane. Karmada will automatically create the corresponding namespace in the member clusters.
+
+```shell
+kubectl --kubeconfig ~/.kube/karmada.config --context karmada-apiserver create ns test
+```
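+
+You can verify that the namespace has been propagated, for example by checking one member cluster (assuming the member clusters' kubeconfig generated by `hack/local-up-karmada.sh`):
+```shell
+kubectl --kubeconfig ~/.kube/members.config --context member1 get ns test
+```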
+
+You want to create a CPU limit of 100 cores for service A.
+The available resources on clusters:
+
+* member1: 20C
+* member2: 50C
+* member3: 100C
+
+In this example, you allocate 20C from member1, 50C from member2, and 30C from member3 to service A.
+The remaining resources in member3 are reserved for more important services.
+
+You can deploy a FederatedResourceQuota as follows.
+
+```yaml
+apiVersion: policy.karmada.io/v1alpha1
+kind: FederatedResourceQuota
+metadata:
+ name: test
+ namespace: test
+spec:
+ overall:
+ cpu: 100
+ staticAssignments:
+ - clusterName: member1
+ hard:
+ cpu: 20
+ - clusterName: member2
+ hard:
+ cpu: 50
+ - clusterName: member3
+ hard:
+ cpu: 30
+```
+
+Verify the status of FederatedResourceQuota:
+
+```shell
+kubectl --kubeconfig ~/.kube/karmada.config --context karmada-apiserver get federatedresourcequotas/test -ntest -oyaml
+```
+
+The output is similar to:
+
+```
+spec:
+ overall:
+ cpu: 100
+ staticAssignments:
+ - clusterName: member1
+ hard:
+ cpu: 20
+ - clusterName: member2
+ hard:
+ cpu: 50
+ - clusterName: member3
+ hard:
+ cpu: 30
+status:
+ aggregatedStatus:
+ - clusterName: member1
+ hard:
+ cpu: "20"
+ used:
+ cpu: "0"
+ - clusterName: member2
+ hard:
+ cpu: "50"
+ used:
+ cpu: "0"
+ - clusterName: member3
+ hard:
+ cpu: "30"
+ used:
+ cpu: "0"
+ overall:
+ cpu: "100"
+ overallUsed:
+ cpu: "0"
+```
+
+For a quick test, you can deploy a simple application that consumes 1C of CPU to member1.
+
+```yaml
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+ name: nginx
+ namespace: test
+ labels:
+ app: nginx
+spec:
+ replicas: 1
+ selector:
+ matchLabels:
+ app: nginx
+ template:
+ metadata:
+ labels:
+ app: nginx
+ spec:
+ containers:
+ - image: nginx
+ name: nginx
+ resources:
+ requests:
+ cpu: 1
+```
+
+Verify the status of FederatedResourceQuota and you will find that FederatedResourceQuota can monitor resource usage correctly for tracked resources.
+
+```shell
+kubectl --kubeconfig ~/.kube/karmada.config --context karmada-apiserver get federatedresourcequotas/test -ntest -oyaml
+```
+
+```
+spec:
+ overall:
+ cpu: 100
+ staticAssignments:
+ - clusterName: member1
+ hard:
+ cpu: 20
+ - clusterName: member2
+ hard:
+ cpu: 50
+ - clusterName: member3
+ hard:
+ cpu: 30
+status:
+ aggregatedStatus:
+ - clusterName: member1
+ hard:
+ cpu: "20"
+ used:
+ cpu: "1"
+ - clusterName: member2
+ hard:
+ cpu: "50"
+ used:
+ cpu: "0"
+ - clusterName: member3
+ hard:
+ cpu: "30"
+ used:
+ cpu: "0"
+ overall:
+ cpu: "100"
+ overallUsed:
+ cpu: "1"
+```
+
+:::note
+
+FederatedResourceQuota is still a work in progress. We are in the process of gathering use cases. If you are interested in this feature, please feel free to open an enhancement issue to let us know.
+
+:::
\ No newline at end of file
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/userguide/bestpractices/namespace-management.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/userguide/bestpractices/namespace-management.md
new file mode 100644
index 000000000..f9588baef
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/userguide/bestpractices/namespace-management.md
@@ -0,0 +1,49 @@
+---
+title: Namespace 管理
+---
+
+在 Kubernetes 集群中，工作负载被部署在某个命名空间中，多集群的工作负载则意味着多个不同的命名空间。如果想对这些命名空间进行统一管理，`karmada-controller-manager` 将负责这部分功能：把用户在 Karmada 中创建的命名空间分发到各个成员集群。
+
+## 默认 Namespace 分发策略
+默认情况下，除了保留 Namespace，其他 Namespace 都会被自动分发到所有成员集群。保留 Namespace 包括：`karmada-system`、`karmada-cluster`、`karmada-es-*`、`kube-*`、`default`。
+
+## 跳过 Namespace 自动分发
+如果你不想 Karmada 自动分发 Namespace 到成员集群,有两种配置方法。一种是通过配置 `karmada-controller-manager` 的启动参数,另一种是对 Namespace 打 label。
+
+### 配置 `karmada-controller-manager`
+配置 `karmada-controller-manager` 启动参数 `skipped-propagating-namespaces`,可以实现跳过特定 Namespace 自动分发。示例如下:
+```yaml
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+ name: karmada-controller-manager
+ namespace: karmada-system
+ labels:
+ app: karmada-controller-manager
+spec:
+ ...
+ template:
+ metadata:
+ labels:
+ app: karmada-controller-manager
+ spec:
+ containers:
+ - name: karmada-controller-manager
+ image: docker.io/karmada/karmada-controller-manager:latest
+ command:
+ - /bin/karmada-controller-manager
+ - --skipped-propagating-namespaces=ns1,ns2
+```
+`ns1` 和 `ns2` 不会自动分发到所有成员集群。
+
+### 对 Namespace 打 label
+使用 Label `namespace.karmada.io/skip-auto-propagation: "true"`，可以实现跳过特定 Namespace 的自动分发。示例如下：
+```yaml
+apiVersion: v1
+kind: Namespace
+metadata:
+ name: example-ns
+ labels:
+ namespace.karmada.io/skip-auto-propagation: "true"
+```
+> 注意:如果 Namespace 已经被分发到成员集群,对 Namespace 打 `namespace.karmada.io/skip-auto-propagation: "true"` label 不会触发成员集群删除此 Namespace,但此 Namespace 不会分发到后续新加入的成员集群。
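+
+对于已经创建的 Namespace，也可以直接用命令为其打上该 label（示例命令，kubeconfig 路径与上下文以实际环境为准）：
+```shell
+kubectl --kubeconfig ~/.kube/karmada.config --context karmada-apiserver \
+  label namespace example-ns namespace.karmada.io/skip-auto-propagation=true
+```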
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/userguide/bestpractices/unified-auth.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/userguide/bestpractices/unified-auth.md
new file mode 100644
index 000000000..8d87e2926
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/userguide/bestpractices/unified-auth.md
@@ -0,0 +1,188 @@
+---
+title: Unified Authentication
+---
+
+For one or a group of user subjects (users, groups, or service accounts) in a member cluster, we can import them into the Karmada control plane and grant them the `clusters/proxy` permission, so that we can access the member cluster through Karmada with the permission of that user subject.
+
+In this section, we use a serviceaccount named `tom` for the test.
+
+### Step1: Create ServiceAccount in member1 cluster (optional)
+
+If the serviceaccount has been created in your environment, you can skip this step.
+
+Create a serviceaccount that does not have any permission:
+
+```shell
+kubectl --kubeconfig $HOME/.kube/members.config --context member1 create serviceaccount tom
+```
+
+### Step2: Create ServiceAccount in Karmada control plane
+
+```shell
+kubectl --kubeconfig $HOME/.kube/karmada.config --context karmada-apiserver create serviceaccount tom
+```
+
+In order to grant the serviceaccount the `clusters/proxy` permission, apply the following RBAC YAML file:
+
+cluster-proxy-rbac.yaml:
+
+
+
+
+```yaml
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRole
+metadata:
+ name: cluster-proxy-clusterrole
+rules:
+- apiGroups:
+ - 'cluster.karmada.io'
+ resources:
+ - clusters/proxy
+ resourceNames:
+ - member1
+ verbs:
+ - '*'
+---
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRoleBinding
+metadata:
+ name: cluster-proxy-clusterrolebinding
+roleRef:
+ apiGroup: rbac.authorization.k8s.io
+ kind: ClusterRole
+ name: cluster-proxy-clusterrole
+subjects:
+ - kind: ServiceAccount
+ name: tom
+ namespace: default
+ # The token generated by the serviceaccount can parse the group information. Therefore, you need to specify the group information below.
+ - kind: Group
+ name: "system:serviceaccounts"
+ - kind: Group
+ name: "system:serviceaccounts:default"
+```
+
+
+
+```shell
+kubectl --kubeconfig $HOME/.kube/karmada.config --context karmada-apiserver apply -f cluster-proxy-rbac.yaml
+```
+
+### Step3: Access member1 cluster
+
+Manually create a long-lived api token for the serviceaccount `tom`:
+
+```shell
+kubectl apply --kubeconfig ~/.kube/karmada.config -f - <
+```
+
+
+```yaml
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRole
+metadata:
+ name: tom
+rules:
+- apiGroups:
+ - '*'
+ resources:
+ - '*'
+ verbs:
+ - '*'
+---
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRoleBinding
+metadata:
+ name: tom
+roleRef:
+ apiGroup: rbac.authorization.k8s.io
+ kind: ClusterRole
+ name: tom
+subjects:
+ - kind: ServiceAccount
+ name: tom
+ namespace: default
+```
+
+
+
+```shell
+kubectl --kubeconfig $HOME/.kube/members.config --context member1 apply -f member1-rbac.yaml
+```
+
+Run the command that failed in the previous step again:
+
+```shell
+kubectl --kubeconfig tom.config get --raw /apis/cluster.karmada.io/v1alpha1/clusters/member1/proxy/api/v1/nodes
+```
+
+The access will be successful.
+
+Or we can append `/apis/cluster.karmada.io/v1alpha1/clusters/member1/proxy` to the server address of tom.config, and then you can directly use:
+
+```shell
+kubectl --kubeconfig tom.config get node
+```
+
+> Note: For a member cluster that joins Karmada in pull mode and allows only cluster-to-karmada access, we can [deploy apiserver-network-proxy (ANP)](../clustermanager/working-with-anp.md) to access it.
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/userguide/cicd/working-with-argocd.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/userguide/cicd/working-with-argocd.md
new file mode 100644
index 000000000..3b1e25b2a
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/userguide/cicd/working-with-argocd.md
@@ -0,0 +1,103 @@
+---
+title: Working with Argo CD
+---
+
+This topic walks you through how to use [Argo CD](https://github.com/argoproj/argo-cd/) to manage your workload
+`across clusters` with `Karmada`.
+
+## Prerequisites
+### Argo CD Installation
+You have installed Argo CD following the instructions in [Getting Started](https://argo-cd.readthedocs.io/en/stable/getting_started/#getting-started).
+
+### Karmada Installation
+In this example, we are using a Karmada environment with at least `3` member clusters joined.
+
+You can set up the environment by `hack/local-up-karmada.sh`, which is also used to run our E2E cases.
+
+```bash
+# kubectl get clusters
+NAME VERSION MODE READY AGE
+member1 v1.19.1 Push True 18h
+member2 v1.19.1 Push True 18h
+member3 v1.19.1 Pull True 17h
+```
+
+## Registering Karmada to Argo CD
+This step registers Karmada control plane to Argo CD.
+
+First list the contexts of all clusters in your current kubeconfig:
+```bash
+kubectl config get-contexts -o name
+```
+
+Choose the context of the Karmada control plane from the list and supply it to `argocd cluster add CONTEXTNAME`.
+For example, for the `karmada-apiserver` context, run:
+```bash
+argocd cluster add karmada-apiserver
+```
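+
+You can also confirm the registration from the CLI, e.g.:
+```bash
+argocd cluster list
+```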
+
+If everything goes well, you can see the registered Karmada control plane from the Argo CD UI, e.g.:
+
+![](../../resources/userguide/cicd/argocd/argocd-register-karmada.png)
+
+## Creating Apps Via UI
+
+### Preparing Apps
+Take the [guestbook](https://github.com/argoproj/argocd-example-apps/tree/53e28ff20cc530b9ada2173fbbd64d48338583ba/guestbook)
+as example.
+
+First, fork the [argocd-example-apps](https://github.com/argoproj/argocd-example-apps) repo and create a branch
+`karmada-demo`.
+
+Then, create a [PropagationPolicy manifest](https://github.com/RainbowMango/argocd-example-apps/blob/e499ea5c6f31b665366bfbe5161737dc8723fb3b/guestbook/propagationpolicy.yaml) under the `guestbook` directory.
+
+### Creating Apps
+
+Click the `+ New App` button as shown below:
+
+![](../../resources/userguide/cicd/argocd/argocd-new-app.png)
+
+Give your app the name `guestbook-multi-cluster`, use the project `default`, and leave the sync policy as `Manual`:
+
+![](../../resources/userguide/cicd/argocd/argocd-new-app-name.png)
+
+Connect the `forked repo` to Argo CD by setting repository url to the github repo url, set revision as `karmada-demo`,
+and set the path to `guestbook`:
+
+![](../../resources/userguide/cicd/argocd/argocd-new-app-repo.png)
+
+For Destination, set cluster to `karmada` and namespace to `default`:
+
+![](../../resources/userguide/cicd/argocd/argocd-new-app-cluster.png)
+
+### Syncing Apps
+You can sync your applications via UI by simply clicking the SYNC button and following the pop-up instructions, e.g.:
+
+![](../../resources/userguide/cicd/argocd/argocd-sync-apps.png)
+
+For more details, please refer to the [argocd guide: sync the application](https://argo-cd.readthedocs.io/en/stable/getting_started/#7-sync-deploy-the-application).
+
+## Checking Apps Status
+For a deployment running in more than one cluster, you don't need to create applications for each
+cluster. You can get the overall and detailed status from one `Application`.
+> Argo CD < v2.6.0
+
+![](../../resources/userguide/cicd/argocd/argocd-status-overview.png)
+
+> Argo CD >= v2.6.0
+
+![](../../resources/userguide/cicd/argocd/argocd-2.6.0-status-overview.png)
+
+The `svc/guestbook-ui`, `deploy/guestbook-ui` and `propagationpolicy/guestbook` in the middle of the picture are the
+resources created by the manifest in the forked repo. The `resourcebinding/guestbook-ui-service` and
+`resourcebinding/guestbook-ui-deployment` on the right of the picture are the resources created by Karmada.
+
+### Checking Detailed Status
+You can obtain the Deployment's detailed status by `resourcebinding/guestbook-ui-deployment`.
+
+![](../../resources/userguide/cicd/argocd/argocd-status-resourcebinding.png)
+
+### Checking Aggregated Status
+You can obtain the aggregated status of the Deployment from UI by `deploy/guestbook-ui`.
+
+![](../../resources/userguide/cicd/argocd/argocd-status-aggregated.png)
\ No newline at end of file
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/userguide/cicd/working-with-flux.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/userguide/cicd/working-with-flux.md
new file mode 100644
index 000000000..5900db7bf
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/userguide/cicd/working-with-flux.md
@@ -0,0 +1,314 @@
+---
+title: Use Flux to support Helm chart propagation
+---
+
+[Flux](https://fluxcd.io/) is most useful when used as a deployment tool at the end of a Continuous Delivery Pipeline. Flux will make sure that your new container images and config changes are propagated to the cluster. With Flux, Karmada can easily realize the ability to distribute applications packaged by Helm across clusters. Not only that, with Karmada's OverridePolicy, users can customize applications for specific clusters and manage cross-cluster applications on the unified Karmada Control Plane.
+
+## Start up Karmada clusters
+
+To start up Karmada, you can refer to [here](../../installation/installation.md).
+If you just want to try Karmada, we recommend building a development environment by ```hack/local-up-karmada.sh```.
+
+```sh
+git clone https://github.com/karmada-io/karmada
+cd karmada
+hack/local-up-karmada.sh
+```
+
+After that, you will start a Kubernetes cluster by kind to run the Karmada Control Plane and create member clusters managed by Karmada.
+
+```sh
+kubectl get clusters --kubeconfig ~/.kube/karmada.config
+```
+
+You can use the command above to check registered clusters, and you will get similar output as follows:
+
+```
+NAME VERSION MODE READY AGE
+member1 v1.23.4 Push True 7m38s
+member2 v1.23.4 Push True 7m35s
+member3 v1.23.4 Pull True 7m27s
+```
+
+## Start up Flux
+
+In the Karmada Control Plane, you need to install the Flux CRDs but do not need the controllers to reconcile them. The objects are treated as resource templates, not specific resource instances.
+Based on the work API [here](https://github.com/kubernetes-sigs/work-api), they will be encapsulated as a work object delivered to member clusters and eventually reconciled by the Flux controllers in the member clusters.
+
+```sh
+kubectl apply -k github.com/fluxcd/flux2/manifests/crds?ref=main --kubeconfig ~/.kube/karmada.config
+```
+
+For testing purposes, we'll install Flux on member clusters without storing its manifests in a Git repository:
+
+```sh
+flux install --kubeconfig ~/.kube/members.config --context member1
+flux install --kubeconfig ~/.kube/members.config --context member2
+```
+
+Tips:
+
+ 1. If you want to manage Helm releases across your fleet of clusters, Flux must be installed on each cluster.
+
+ 2. If the Flux toolkit controllers are successfully installed, you should see the following Pods:
+
+```
+$ kubectl get pod -n flux-system
+NAME READY STATUS RESTARTS AGE
+helm-controller-55896d6ccf-dlf8b 1/1 Running 0 15d
+kustomize-controller-76795877c9-mbrsk 1/1 Running 0 15d
+notification-controller-7ccfbfbb98-lpgjl 1/1 Running 0 15d
+source-controller-6b8d9cb5cc-7dbcb 1/1 Running 0 15d
+```
+
+## Helm release propagation
+
+If you want to propagate Helm releases for your apps to member clusters, you can refer to the guide below.
+
+1. Define a Flux `HelmRepository` and a `HelmRelease` manifest in the Karmada Control Plane. They will serve as resource templates.
+
+```yaml
+apiVersion: source.toolkit.fluxcd.io/v1beta2
+kind: HelmRepository
+metadata:
+ name: podinfo
+spec:
+ interval: 1m
+ url: https://stefanprodan.github.io/podinfo
+---
+apiVersion: helm.toolkit.fluxcd.io/v2beta1
+kind: HelmRelease
+metadata:
+ name: podinfo
+spec:
+ interval: 5m
+ chart:
+ spec:
+ chart: podinfo
+ version: 5.0.3
+ sourceRef:
+ kind: HelmRepository
+ name: podinfo
+```
+
+2. Define a Karmada `PropagationPolicy` that will propagate them to member clusters:
+
+```yaml
+apiVersion: policy.karmada.io/v1alpha1
+kind: PropagationPolicy
+metadata:
+ name: helm-repo
+spec:
+ resourceSelectors:
+ - apiVersion: source.toolkit.fluxcd.io/v1beta2
+ kind: HelmRepository
+ name: podinfo
+ placement:
+ clusterAffinity:
+ clusterNames:
+ - member1
+ - member2
+---
+apiVersion: policy.karmada.io/v1alpha1
+kind: PropagationPolicy
+metadata:
+ name: helm-release
+spec:
+ resourceSelectors:
+ - apiVersion: helm.toolkit.fluxcd.io/v2beta1
+ kind: HelmRelease
+ name: podinfo
+ placement:
+ clusterAffinity:
+ clusterNames:
+ - member1
+ - member2
+```
+
+The above configuration is for propagating the Flux objects to member1 and member2 clusters.
+
+3. Apply those manifests to the Karmada-apiserver:
+
+```sh
+kubectl apply -f ../helm/ --kubeconfig ~/.kube/karmada.config
+```
+
+The output is similar to:
+
+```
+helmrelease.helm.toolkit.fluxcd.io/podinfo created
+helmrepository.source.toolkit.fluxcd.io/podinfo created
+propagationpolicy.policy.karmada.io/helm-release created
+propagationpolicy.policy.karmada.io/helm-repo created
+```
+
+4. Switch to the distributed cluster and verify:
+
+```sh
+helm --kubeconfig ~/.kube/members.config --kube-context member1 list
+```
+
+The output is similar to:
+
+```
+NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
+podinfo default 1 2022-05-27 01:44:35.24229175 +0000 UTC deployed podinfo-5.0.3 5.0.3
+```
+
+Based on Karmada's propagation policy, you can schedule Helm releases to your desired cluster flexibly, just like Kubernetes schedules Pods to the desired node.
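+
+Since the Flux controllers run in the member clusters, you can also inspect the reconciled `HelmRelease` object there, for example:
+```sh
+kubectl --kubeconfig ~/.kube/members.config --context member1 get helmrelease podinfo
+```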
+
+## Customize the Helm release for specific clusters
+
+The example above shows how to propagate the same Helm release to multiple clusters in Karmada. Besides, you can use Karmada's OverridePolicy to customize applications for specific clusters.
+For example, if you just want to change replicas in member1, you can refer to the OverridePolicy below.
+
+1. Define a Karmada `OverridePolicy`:
+
+```yaml
+apiVersion: policy.karmada.io/v1alpha1
+kind: OverridePolicy
+metadata:
+ name: example-override
+ namespace: default
+spec:
+ resourceSelectors:
+ - apiVersion: helm.toolkit.fluxcd.io/v2beta1
+ kind: HelmRelease
+ name: podinfo
+ overrideRules:
+ - targetCluster:
+ clusterNames:
+ - member1
+ overriders:
+ plaintext:
+ - path: "/spec/values"
+ operator: add
+ value:
+ replicaCount: 2
+```
+
+2. Apply the manifest to the Karmada-apiserver:
+
+```sh
+kubectl apply -f example-override.yaml --kubeconfig ~/.kube/karmada.config
+```
+
+The output is similar to:
+
+```
+overridepolicy.policy.karmada.io/example-override configured
+```
+
+3. After applying the above policy in the Karmada Control Plane, you will find that the replicas in member1 have changed to 2, but those in member2 remain the same.
+
+```sh
+kubectl --kubeconfig ~/.kube/members.config --context member1 get po
+```
+
+The output is similar to:
+
+```
+NAME READY STATUS RESTARTS AGE
+podinfo-68979685bc-6wz6s 1/1 Running 0 6m28s
+podinfo-68979685bc-dz9f6 1/1 Running 0 7m42s
+```
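+
+Meanwhile member2, which is not selected by the OverridePolicy, should still run a single replica:
+```sh
+kubectl --kubeconfig ~/.kube/members.config --context member2 get po
+```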
+
+## Kustomize propagation
+
+Kustomize propagation is basically the same as the Helm chart propagation above. You can refer to the guide below.
+
+1. Define a Flux `GitRepository` and a `Kustomization` manifest in the Karmada Control Plane:
+
+```yaml
+apiVersion: source.toolkit.fluxcd.io/v1beta2
+kind: GitRepository
+metadata:
+ name: podinfo
+spec:
+ interval: 1m
+ url: https://github.com/stefanprodan/podinfo
+ ref:
+ branch: master
+---
+apiVersion: kustomize.toolkit.fluxcd.io/v1beta2
+kind: Kustomization
+metadata:
+ name: podinfo-dev
+spec:
+ interval: 5m
+ path: "./deploy/overlays/dev/"
+ prune: true
+ sourceRef:
+ kind: GitRepository
+ name: podinfo
+ validation: client
+ timeout: 80s
+```
+
+2. Define a Karmada `PropagationPolicy` that will propagate them to member clusters:
+
+```yaml
+apiVersion: policy.karmada.io/v1alpha1
+kind: PropagationPolicy
+metadata:
+ name: kust-release
+spec:
+ resourceSelectors:
+ - apiVersion: kustomize.toolkit.fluxcd.io/v1beta2
+ kind: Kustomization
+ name: podinfo-dev
+ placement:
+ clusterAffinity:
+ clusterNames:
+ - member1
+ - member2
+---
+apiVersion: policy.karmada.io/v1alpha1
+kind: PropagationPolicy
+metadata:
+ name: kust-git
+spec:
+ resourceSelectors:
+ - apiVersion: source.toolkit.fluxcd.io/v1beta2
+ kind: GitRepository
+ name: podinfo
+ placement:
+ clusterAffinity:
+ clusterNames:
+ - member1
+ - member2
+```
+
+3. Apply those YAMLs to the karmada-apiserver:
+
+```sh
+kubectl apply -f kust/ --kubeconfig ~/.kube/karmada.config
+```
+
+The output is similar to:
+
+```
+gitrepository.source.toolkit.fluxcd.io/podinfo created
+kustomization.kustomize.toolkit.fluxcd.io/podinfo-dev created
+propagationpolicy.policy.karmada.io/kust-git created
+propagationpolicy.policy.karmada.io/kust-release created
+```
+
+4. Switch to the distributed cluster and verify:
+
+```sh
+kubectl --kubeconfig ~/.kube/members.config --context member1 get pod -n dev
+```
+
+The output is similar to:
+
+```
+NAME READY STATUS RESTARTS AGE
+backend-69c7655cb-rbtrq 1/1 Running 0 15s
+cache-bdff5c8dc-mmnbm 1/1 Running 0 15s
+frontend-7f98bf6f85-dw4vq 1/1 Running 0 15s
+```
+
+## Reference
+- https://fluxcd.io
+- https://github.com/fluxcd
\ No newline at end of file
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/userguide/clustermanager/cluster-registration.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/userguide/clustermanager/cluster-registration.md
new file mode 100644
index 000000000..bb5fcbb9d
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/userguide/clustermanager/cluster-registration.md
@@ -0,0 +1,142 @@
+---
+title: Cluster Registration
+---
+
+## Overview of cluster mode
+
+Karmada supports both `Push` and `Pull` modes to manage the member clusters.
+The main difference between the `Push` and `Pull` modes is the way they access member clusters when deploying manifests.
+
+### Push mode
+Karmada control plane will access member cluster's `kube-apiserver` directly to get cluster status and deploy manifests.
+
+### Pull mode
+Karmada control plane will not access member cluster but delegate it to an extra component named `karmada-agent`.
+
+Each `karmada-agent` serves a cluster and takes responsibility for:
+- Registering the cluster to Karmada (creates the `Cluster` object)
+- Maintaining the cluster status and reporting it to Karmada (updates the status of the `Cluster` object)
+- Watching manifests from the Karmada execution space (namespace `karmada-es-<cluster name>`) and deploying the watched resources to the cluster the agent serves.
+
+## Register cluster with 'Push' mode
+
+You can use the [kubectl-karmada](../../installation/install-cli-tools.md) CLI to `join`(register) and `unjoin`(unregister) clusters.
+
+### Register cluster by CLI
+
+Join cluster with name `member1` to Karmada by using the following command.
+```
+kubectl karmada join member1 --kubeconfig=<karmada kubeconfig> --cluster-kubeconfig=<member1 kubeconfig>
+```
+Repeat this step to join any additional clusters.
+
+The `--kubeconfig` flag specifies Karmada's `kubeconfig` file and the CLI infers the `karmada-apiserver` context
+from the `current-context` field of the `kubeconfig`. If more than one context is configured in
+the `kubeconfig` file, it is recommended to specify the context with the `--karmada-context` flag. For example:
+```
+kubectl karmada join member1 --kubeconfig=<karmada kubeconfig> --karmada-context=karmada --cluster-kubeconfig=<member1 kubeconfig>
+```
+
+The `--cluster-kubeconfig` specifies the member cluster's `kubeconfig` and the CLI infers the member cluster's context
+from the cluster name. If more than one context is configured in the `kubeconfig` file, or you don't want to use
+the context name to register, it is recommended to specify the context with the `--cluster-context` flag. For example:
+
+```
+kubectl karmada join member1 --kubeconfig=<karmada kubeconfig> --karmada-context=karmada \
+--cluster-kubeconfig=<member1 kubeconfig> --cluster-context=member1
+```
+> Note: The registering cluster name can be different from the context with `--cluster-context` specified.
+
+### Check cluster status
+
+Check the status of the joined clusters by using the following command.
+```
+kubectl get clusters
+
+NAME VERSION MODE READY AGE
+member1 v1.20.7 Push True 66s
+```
+### Unregister cluster by CLI
+
+You can unjoin clusters by using the following command.
+```
+kubectl karmada unjoin member1 --kubeconfig=<karmada kubeconfig> --cluster-kubeconfig=<member1 kubeconfig>
+```
+During unjoin process, the resources propagated to `member1` by Karmada will be cleaned up.
+And the `--cluster-kubeconfig` is used to clean up the secret created at the `join` phase.
+
+Repeat this step to unjoin any additional clusters.
+
+## Register cluster with 'Pull' mode
+
+### Register cluster by CLI
+
+`karmadactl register` is used to register member clusters to the Karmada control plane in `Pull` mode.
+Different from `karmadactl join`, which registers a cluster in `Push` mode, `karmadactl register` registers a cluster to the Karmada control plane in `Pull` mode.
+
+> Note: currently it only supports the Karmada control plane that was installed by `karmadactl init`.
+
+#### Create bootstrap token in Karmada control plane
+
+In the Karmada control plane, we can use the `karmadactl token create` command to create bootstrap tokens whose default TTL is 24h.
+
+```
+$ karmadactl token create --print-register-command --kubeconfig /etc/karmada/karmada-apiserver.config
+```
+
+```
+# The example output is shown below
+karmadactl register 10.10.x.x:32443 --token t2jgtm.9nybj0526mjw1jbf --discovery-token-ca-cert-hash sha256:f5a5a43869bb44577dba582e794c3e3750f2050d62f1b1dc80fd3d6a371b6ed4
+```
+
+For more details about `bootstrap token`, please refer to:
+- [authenticating with bootstrap tokens](https://kubernetes.io/docs/reference/access-authn-authz/bootstrap-tokens/)
+
+#### Execute `karmadactl register` in the member clusters
+
+In the Kubernetes control plane of the member cluster, we also need the `kubeconfig` file of the member cluster; then directly execute the `karmadactl register` command output above.
+
+```
+$ karmadactl register 10.10.x.x:32443 --token t2jgtm.9nybj0526mjw1jbf --discovery-token-ca-cert-hash sha256:f5a5a43869bb44577dba582e794c3e3750f2050d62f1b1dc80fd3d6a371b6ed4
+```
+
+```
+# The example output is shown below
+[preflight] Running pre-flight checks
+[prefligt] All pre-flight checks were passed
+[karmada-agent-start] Waiting to perform the TLS Bootstrap
+[karmada-agent-start] Waiting to construct karmada-agent kubeconfig
+[karmada-agent-start] Waiting the necessary secret and RBAC
+[karmada-agent-start] Waiting karmada-agent Deployment
+W0825 11:03:12.167027 29336 check.go:52] pod: karmada-agent-5d659b4746-wn754 not ready. status: ContainerCreating
+......
+I0825 11:04:06.174110 29336 check.go:49] pod: karmada-agent-5d659b4746-wn754 is ready. status: Running
+
+cluster(member3) is joined successfully
+```
+
+> Note: if you don't set the `--cluster-name` option, it will use the cluster name of the current-context in the `kubeconfig` file by default.
+
+
+After `karmada-agent` is deployed, it will register the cluster automatically at the start-up phase.
+
+### Check cluster status
+
+Check the status of the registered clusters by using the same command above.
+```
+kubectl get clusters
+NAME VERSION MODE READY AGE
+member3 v1.20.7 Pull True 66s
+```
+### Unregister cluster
+
+Undeploy the `karmada-agent` and then remove the `cluster` manually from Karmada.
+```
+kubectl delete cluster member3
+```
+
+## Cluster Identifier
+
+Each cluster registered in Karmada will be represented as a `Cluster` object whose name(`.metadata.name`) is the registered name. The name will be widely used in the propagating process, such as specifying the location to which a resource should be propagated in a `PropagationPolicy`.
+
+In addition, during the registration, each cluster will be assigned a `unique identifier` marked in the `.spec.id` of the `Cluster` object. For now, this `unique identifier` is used to distinguish each cluster technically to avoid registering the same cluster multiple times with different registered names. The `unique identifier` is collected from the registered cluster's `kube-system` namespace ID (`.metadata.uid`).
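+
+For example, you can inspect a registered cluster's unique identifier with a `jsonpath` query against the Karmada control plane:
+```
+kubectl get cluster member1 -o jsonpath='{.spec.id}'
+```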
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/userguide/clustermanager/working-with-anp.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/userguide/clustermanager/working-with-anp.md
new file mode 100644
index 000000000..0ae01f3b8
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/userguide/clustermanager/working-with-anp.md
@@ -0,0 +1,312 @@
+---
+title: Deploy apiserver-network-proxy (ANP) For Pull mode
+---
+
+## Purpose
+
+For a member cluster that joins Karmada in the pull mode, you need to provide a method to connect the network between the Karmada control plane and the member cluster, so that karmada-aggregated-apiserver can access this member cluster.
+
+Deploying ANP to achieve this is one of the methods. This document describes how to deploy ANP for Karmada.
+
+## Environment
+
+Karmada can be deployed using the kind tool.
+
+You can directly use `hack/local-up-karmada.sh` to deploy Karmada.
+
+## Actions
+
+### Step 1: Download code
+
+To facilitate demonstration, the code is modified based on ANP v0.0.24 to support access to the front server through HTTP. Here is the code repository address: https://github.com/mrlihanbo/apiserver-network-proxy/tree/v0.0.24/dev.
+
+```shell
+git clone -b v0.0.24/dev https://github.com/mrlihanbo/apiserver-network-proxy.git
+cd apiserver-network-proxy/
+```
+
+### Step 2: Build images
+
+Build the proxy-server and proxy-agent images.
+
+```shell
+docker build . --build-arg ARCH=amd64 -f artifacts/images/agent-build.Dockerfile -t swr.ap-southeast-1.myhuaweicloud.com/karmada/proxy-agent:0.0.24
+
+docker build . --build-arg ARCH=amd64 -f artifacts/images/server-build.Dockerfile -t swr.ap-southeast-1.myhuaweicloud.com/karmada/proxy-server:0.0.24
+```
+
+### Step 3: Generate certificates
+
+Run the command to check the IP address of karmada-host:
+
+```shell
+docker inspect --format='{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' karmada-host-control-plane
+```
+
+Run the `make certs` command to generate certificates and specify `PROXY_SERVER_IP` as the IP address obtained in the preceding command.
+
+```shell
+make certs PROXY_SERVER_IP=x.x.x.x
+```
+
+The certificates are generated in the `certs` folder.
+
+### Step 4: Deploy proxy-server
+
+Save the `proxy-server.yaml` file in the root directory of the ANP code repository.
+
+
+
+```yaml
+# proxy-server.yaml
+
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+ name: proxy-server
+ namespace: karmada-system
+spec:
+ replicas: 1
+ selector:
+ matchLabels:
+ app: proxy-server
+ template:
+ metadata:
+ labels:
+ app: proxy-server
+ spec:
+ containers:
+ - command:
+ - /proxy-server
+ args:
+ - --health-port=8092
+ - --cluster-ca-cert=/var/certs/server/cluster-ca-cert.crt
+ - --cluster-cert=/var/certs/server/cluster-cert.crt
+ - --cluster-key=/var/certs/server/cluster-key.key
+ - --mode=http-connect
+ - --proxy-strategies=destHost
+ - --server-ca-cert=/var/certs/server/server-ca-cert.crt
+ - --server-cert=/var/certs/server/server-cert.crt
+ - --server-key=/var/certs/server/server-key.key
+ image: swr.ap-southeast-1.myhuaweicloud.com/karmada/proxy-server:0.0.24
+ imagePullPolicy: IfNotPresent
+ livenessProbe:
+ failureThreshold: 3
+ httpGet:
+ path: /healthz
+ port: 8092
+ scheme: HTTP
+ initialDelaySeconds: 10
+ periodSeconds: 10
+ successThreshold: 1
+ timeoutSeconds: 60
+ name: proxy-server
+ volumeMounts:
+ - mountPath: /var/certs/server
+ name: cert
+ restartPolicy: Always
+ hostNetwork: true
+ volumes:
+ - name: cert
+ secret:
+ secretName: proxy-server-cert
+---
+apiVersion: v1
+kind: Secret
+metadata:
+ name: proxy-server-cert
+ namespace: karmada-system
+type: Opaque
+data:
+ server-ca-cert.crt: |
+ {{server_ca_cert}}
+ server-cert.crt: |
+ {{server_cert}}
+ server-key.key: |
+ {{server_key}}
+ cluster-ca-cert.crt: |
+ {{cluster_ca_cert}}
+ cluster-cert.crt: |
+ {{cluster_cert}}
+ cluster-key.key: |
+ {{cluster_key}}
+```
+
+
+
+Save the `replace-proxy-server.sh` file in the root directory of the ANP code repository.
+
+
+
+```shell
+#!/bin/bash
+
+cert_yaml=proxy-server.yaml
+
+SERVER_CA_CERT=$(cat certs/frontend/issued/ca.crt | base64 | tr "\n" " "|sed s/[[:space:]]//g)
+sed -i'' -e "s/{{server_ca_cert}}/${SERVER_CA_CERT}/g" ${cert_yaml}
+
+SERVER_CERT=$(cat certs/frontend/issued/proxy-frontend.crt | base64 | tr "\n" " "|sed s/[[:space:]]//g)
+sed -i'' -e "s/{{server_cert}}/${SERVER_CERT}/g" ${cert_yaml}
+
+SERVER_KEY=$(cat certs/frontend/private/proxy-frontend.key | base64 | tr "\n" " "|sed s/[[:space:]]//g)
+sed -i'' -e "s/{{server_key}}/${SERVER_KEY}/g" ${cert_yaml}
+
+CLUSTER_CA_CERT=$(cat certs/agent/issued/ca.crt | base64 | tr "\n" " "|sed s/[[:space:]]//g)
+sed -i'' -e "s/{{cluster_ca_cert}}/${CLUSTER_CA_CERT}/g" ${cert_yaml}
+
+CLUSTER_CERT=$(cat certs/agent/issued/proxy-frontend.crt | base64 | tr "\n" " "|sed s/[[:space:]]//g)
+sed -i'' -e "s/{{cluster_cert}}/${CLUSTER_CERT}/g" ${cert_yaml}
+
+
+CLUSTER_KEY=$(cat certs/agent/private/proxy-frontend.key | base64 | tr "\n" " "|sed s/[[:space:]]//g)
+sed -i'' -e "s/{{cluster_key}}/${CLUSTER_KEY}/g" ${cert_yaml}
+```
+
+
+
+Run the following commands to run the script:
+
+```shell
+chmod +x replace-proxy-server.sh
+bash replace-proxy-server.sh
+```
+
+Deploy the proxy-server on the karmada-host:
+
+```shell
+kind load docker-image swr.ap-southeast-1.myhuaweicloud.com/karmada/proxy-server:0.0.24 --name karmada-host
+export KUBECONFIG=/root/.kube/karmada.config
+kubectl --context=karmada-host apply -f proxy-server.yaml
+```
+
+### Step 5: Deploy proxy-agent
+
+Save the `proxy-agent.yaml` file in the root directory of the ANP code repository.
+
+
+unfold me to see the yaml
+
+```yaml
+# proxy-agent.yaml
+
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+ labels:
+ app: proxy-agent
+ name: proxy-agent
+ namespace: karmada-system
+spec:
+ replicas: 1
+ selector:
+ matchLabels:
+ app: proxy-agent
+ template:
+ metadata:
+ labels:
+ app: proxy-agent
+ spec:
+ containers:
+ - command:
+ - /proxy-agent
+ args:
+ - '--ca-cert=/var/certs/agent/ca.crt'
+ - '--agent-cert=/var/certs/agent/proxy-agent.crt'
+ - '--agent-key=/var/certs/agent/proxy-agent.key'
+ - '--proxy-server-host={{proxy_server_addr}}'
+ - '--proxy-server-port=8091'
+ - '--agent-identifiers=host={{identifiers}}'
+ image: swr.ap-southeast-1.myhuaweicloud.com/karmada/proxy-agent:0.0.24
+ imagePullPolicy: IfNotPresent
+ name: proxy-agent
+ livenessProbe:
+ httpGet:
+ scheme: HTTP
+ port: 8093
+ path: /healthz
+ initialDelaySeconds: 15
+ timeoutSeconds: 60
+ volumeMounts:
+ - mountPath: /var/certs/agent
+ name: cert
+ volumes:
+ - name: cert
+ secret:
+ secretName: proxy-agent-cert
+---
+apiVersion: v1
+kind: Secret
+metadata:
+ name: proxy-agent-cert
+ namespace: karmada-system
+type: Opaque
+data:
+ ca.crt: |
+ {{proxy_agent_ca_crt}}
+ proxy-agent.crt: |
+ {{proxy_agent_crt}}
+ proxy-agent.key: |
+ {{proxy_agent_key}}
+```
+
+
+
+Save the `replace-proxy-agent.sh` file in the root directory of the ANP code repository.
+
+
+unfold me to see the shell
+
+```shell
+#!/bin/bash
+
+cert_yaml=proxy-agent.yaml
+
+karmada_control_plane_addr=$(docker inspect --format='{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' karmada-host-control-plane)
+member3_cluster_addr=$(docker inspect --format='{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' member3-control-plane)
+sed -i'' -e "s/{{proxy_server_addr}}/${karmada_control_plane_addr}/g" ${cert_yaml}
+sed -i'' -e "s/{{identifiers}}/${member3_cluster_addr}/g" ${cert_yaml}
+
+PROXY_AGENT_CA_CRT=$(cat certs/agent/issued/ca.crt | base64 | tr "\n" " "|sed s/[[:space:]]//g)
+sed -i'' -e "s/{{proxy_agent_ca_crt}}/${PROXY_AGENT_CA_CRT}/g" ${cert_yaml}
+
+PROXY_AGENT_CRT=$(cat certs/agent/issued/proxy-agent.crt | base64 | tr "\n" " "|sed s/[[:space:]]//g)
+sed -i'' -e "s/{{proxy_agent_crt}}/${PROXY_AGENT_CRT}/g" ${cert_yaml}
+
+PROXY_AGENT_KEY=$(cat certs/agent/private/proxy-agent.key | base64 | tr "\n" " "|sed s/[[:space:]]//g)
+sed -i'' -e "s/{{proxy_agent_key}}/${PROXY_AGENT_KEY}/g" ${cert_yaml}
+```
+
+
+
+Run the following commands to execute the script:
+
+```shell
+chmod +x replace-proxy-agent.sh
+bash replace-proxy-agent.sh
+```
+
+Deploy the proxy-agent for a member cluster working in pull mode (in this example, the `member3` cluster is in pull mode):
+
+```shell
+kind load docker-image swr.ap-southeast-1.myhuaweicloud.com/karmada/proxy-agent:0.0.24 --name member3
+kubectl --kubeconfig=/root/.kube/members.config --context=member3 apply -f proxy-agent.yaml
+```
+
+**The ANP deployment is now complete.**
+
+### Step 6: Add command flags for the karmada-agent deployment
+
+After deploying ANP, you need to add the extra command flags `--cluster-api-endpoint` and `--proxy-server-address` to the `karmada-agent` deployment in the `member3` cluster.
+
+Here, `--cluster-api-endpoint` is the API endpoint of the cluster. You can obtain it from the KubeConfig file of the `member3` cluster.
+
+`--proxy-server-address` is the address of the proxy server used to proxy the cluster. In the current case, you can set `--proxy-server-address` to `http://<karmada_control_plane_addr>:8088`, where the `karmada_control_plane_addr` value is obtained through the following command:
+
+```shell
+docker inspect --format='{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' karmada-host-control-plane
+```
+
+Port `8088` is set in the ANP code: https://github.com/mrlihanbo/apiserver-network-proxy/blob/v0.0.24/dev/cmd/server/app/server.go#L267. You can also modify it to a different value.
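+
+As a reference, the two flags might be added to the `karmada-agent` container roughly as follows (a sketch only; the endpoint and address values are placeholders that depend on your environment):
+
+```yaml
+# Sketch: extra flags for the karmada-agent deployment in the member3 cluster.
+# <member3-apiserver-address> comes from the member3 KubeConfig file, and
+# <karmada_control_plane_addr> is the IP obtained by the docker inspect command above.
+spec:
+  template:
+    spec:
+      containers:
+        - name: karmada-agent
+          command:
+            - /bin/karmada-agent
+            # ... existing flags ...
+            - --cluster-api-endpoint=https://<member3-apiserver-address>
+            - --proxy-server-address=http://<karmada_control_plane_addr>:8088
+```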
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/userguide/failover/application-failover.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/userguide/failover/application-failover.md
new file mode 100644
index 000000000..08da0dc77
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/userguide/failover/application-failover.md
@@ -0,0 +1,199 @@
+---
+title: 应用故障迁移
+---
+
+在多集群场景下,用户的工作负载可能会部署在多个集群中,以提高服务的高可用性。当检测到集群故障时,Karmada 已经支持应用在多集群的故障转移。
+这是从集群的视角出发的。但是,有些集群的故障只会影响特定的应用,如果从集群的角度来看,我们需要区分受影响和未受影响的应用程序。
+此外,当集群的控制平面处于健康状态时,应用程序可能仍然处于不可用的状态。因此,Karmada 需要从应用的角度提供一种故障迁移的手段。
+
+## 为什么需要应用级故障迁移
+
+以下介绍一些应用故障迁移的场景:
+
+* 管理员通过抢占式调度在多个集群中部署应用程序。当集群资源紧缺时,原本正常运行的低优先级应用被抢占,长时间无法正常运行。此时,应用程序无法在单集群内进行自我修复。用户希望尝试将其调度到另一个集群,以确保持续提供服务。
+* 管理员使用云供应商的竞价实例来部署无状态的计算任务。用户使用竞价型实例部署应用时,可能会因为资源被回收而导致应用运行失败。
+在这种情况下,调度器感知到的资源量是资源配额的大小,而不是实际可用的资源。这时候,用户希望将之前失败的应用调度到另一个可用的集群。
+* ....
+
+## 如何启用该功能
+
+当应用程序从一个集群迁移到另一个集群时,应用需要确保它的依赖项同步迁移。
+因此,您需要确保启用了 `PropagateDeps` 特性开关并且在 PropagationPolicy 中设置了 `propagateDeps: true`。
+自 Karmada v1.4 以来,`PropagateDeps` 特性开关已经处于 Beta 阶段,并且默认启用。
+
+另外,应用是否需要迁移取决于应用的健康状态。Karmada 的“资源解释器框架”是为解释资源结构而设计的,
+它为用户提供解释器操作来告诉 Karmada 如何确定特定对象的健康状态。如何解析资源的健康状态由用户决定。
+在使用应用的故障迁移之前,您需要确保已配置应用程序的 interpretHealth 规则。
+
+应用故障迁移受 `Failover` 特性开关控制。自 Karmada v1.4 以来,`Failover` 特性开关已经处于 Beta 阶段,并且默认启用。
+此外,如果您使用优雅驱逐的清除模式,则需要启用 `GracefulEviction` 特性开关。自 Karmada v1.4 以来,`GracefulEviction` 特性开关也已处于 Beta 阶段,并且默认启用。
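+
+下面给出一个开启 `propagateDeps` 的 PropagationPolicy 片段示意(仅为参考草案,策略名称与资源选择器均为假设值):
+
+```yaml
+apiVersion: policy.karmada.io/v1alpha1
+kind: PropagationPolicy
+metadata:
+  name: nginx-propagation
+spec:
+  propagateDeps: true   # 将应用的依赖资源(如 ConfigMap、Secret、ServiceAccount 等)随应用一同分发
+  resourceSelectors:
+    - apiVersion: apps/v1
+      kind: Deployment
+      name: nginx
+  placement:
+    clusterAffinity:
+      clusterNames:
+        - member1
+        - member2
+```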
+
+## 配置应用故障迁移
+
+PropagationPolicy 的 `.spec.failover.application` 字段用于表示应用故障迁移的规则。
+
+它有以下字段可以设置:
+* DecisionConditions
+* PurgeMode
+* GracePeriodSeconds
+
+### 配置决定条件
+
+`DecisionConditions` 代表执行故障迁移的决定条件。只有当其中的所有条件都满足时,应用的故障迁移才会执行。
+目前,它包括对应用不健康状态的容忍时间,默认为 300s。
+
+PropagationPolicy 可以配置如下:
+```yaml
+apiVersion: policy.karmada.io/v1alpha1
+kind: PropagationPolicy
+metadata:
+ name: test-propagation
+spec:
+ #...
+ failover:
+ application:
+ decisionConditions:
+ tolerationSeconds: 300
+ #...
+```
+
+### 配置驱逐模式
+
+`PurgeMode` 代表如何从一个故障的集群迁移到另一个集群的方式。
+Karmada 支持三种不同的驱逐模式:
+
+* `Immediately` 表示 Karmada 将立即驱逐遗留的应用。
+* `Graciously` 表示 Karmada 将等待应用在新集群上恢复健康,或者在达到超时后才驱逐应用。
+你同时需要配置 `GracePeriodSeconds`。如果新集群上的应用无法达到 Healthy 状态,Karmada 将在达到 GracePeriodSeconds 后删除该应用。默认为 600 秒。
+* `Never` 表示 Karmada 不会驱逐应用,由用户手动确认如何清理冗余副本。
+
+PropagationPolicy 可以配置为如下:
+```yaml
+apiVersion: policy.karmada.io/v1alpha1
+kind: PropagationPolicy
+metadata:
+ name: test-propagation
+spec:
+ #...
+ failover:
+ application:
+ decisionConditions:
+ tolerationSeconds: 300
+ gracePeriodSeconds: 600
+ purgeMode: Graciously
+ #...
+```
+
+或者:
+```yaml
+apiVersion: policy.karmada.io/v1alpha1
+kind: PropagationPolicy
+metadata:
+ name: test-propagation
+spec:
+ #...
+ failover:
+ application:
+ decisionConditions:
+ tolerationSeconds: 300
+ purgeMode: Never
+ #...
+```
+
+## 示例
+
+假设你已经配置了一个 PropagationPolicy:
+```yaml
+apiVersion: policy.karmada.io/v1alpha1
+kind: PropagationPolicy
+metadata:
+ name: nginx-propagation
+spec:
+ failover:
+ application:
+ decisionConditions:
+ tolerationSeconds: 120
+ purgeMode: Never
+ resourceSelectors:
+ - apiVersion: apps/v1
+ kind: Deployment
+ name: nginx
+ placement:
+ clusterAffinity:
+ clusterNames:
+ - member1
+ - member2
+ - member3
+ spreadConstraints:
+ - maxGroups: 1
+ minGroups: 1
+ spreadByField: cluster
+---
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+ name: nginx
+ labels:
+ app: nginx
+spec:
+ replicas: 2
+ selector:
+ matchLabels:
+ app: nginx
+ template:
+ metadata:
+ labels:
+ app: nginx
+ spec:
+ containers:
+ - image: nginx
+ name: nginx
+```
+
+现在应用被调度到 member2 中,这两个副本正常运行。此时将 member2 中的所有节点标记为不可调度,并驱逐所有副本,以构造应用的异常状态。
+
+```shell
+# mark node "member2-control-plane" as unschedulable in cluster member2
+$ kubectl --context member2 cordon member2-control-plane
+# delete the pod in cluster member2
+$ kubectl --context member2 delete pod -l app=nginx
+```
+
+你可以立即从 ResourceBinding 中发现应用变成不健康的状态。
+
+```yaml
+#...
+status:
+ aggregatedStatus:
+ - applied: true
+ clusterName: member2
+ health: Unhealthy
+ status:
+ availableReplicas: 0
+ readyReplicas: 0
+ replicas: 2
+```
+
+达到 tolerationSeconds 后,会发现 member2 中的 Deployment 已经被驱逐,重新调度到 member1 中。
+
+```yaml
+#...
+spec:
+ clusters:
+ - name: member1
+ replicas: 2
+ gracefulEvictionTasks:
+ - creationTimestamp: "2023-05-08T09:29:02Z"
+ fromCluster: member2
+ producer: resource-binding-application-failover-controller
+ reason: ApplicationFailure
+ suppressDeletion: true
+```
+
+您可以在 gracefulEvictionTasks 中将 suppressDeletion 修改为 false,确认故障后驱逐故障集群中的应用。
+
+:::note
+
+应用故障迁移的开发仍在进行中。我们正在收集用户案例。如果您对此功能感兴趣,请随时开启一个 Enhancement Issue 让我们知道。
+
+:::
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/userguide/failover/determine-cluster-failures.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/userguide/failover/determine-cluster-failures.md
new file mode 100644
index 000000000..13b177993
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/userguide/failover/determine-cluster-failures.md
@@ -0,0 +1,85 @@
+---
+title: 集群故障判定
+---
+
+Karmada 支持 `Push` 和 `Pull` 两种模式来管理成员集群,有关集群注册的更多详细信息,
+请参考 [Cluster Registration](../clustermanager/cluster-registration.md)。
+
+在 Karmada 中,对集群的心跳探测有两种方式:
+
+- 集群状态收集,更新集群的 `.status` 字段(包括 `Push` 和 `Pull` 两种模式);
+- Karmada 控制面中 `karmada-cluster` 命名空间内的 `Lease` 对象,每个 `Pull` 模式集群都有一个关联的 `Lease` 对象。
+
+## 集群状态收集
+
+对于 `Push` 模式集群,Karmada 控制面中的 `clusterStatus` 控制器将定期执行集群状态的收集任务;
+对于 `Pull` 模式集群,集群中部署的 `karmada-agent` 组件负责创建并定期更新集群的 `.status` 字段。
+
+上述集群状态的定期更新任务可以通过 `--cluster-status-update-frequency` 标志进行配置(默认值为 10 秒)。
+
+集群的 `Ready` 条件在满足以下条件时将会被设置为 `False`:
+
+- 集群持续一段时间无法访问;
+- 集群健康检查响应持续一段时间不正常。
+
+> 上述持续时间间隔可以通过 `--cluster-failure-threshold` 标志进行配置(默认值为 30 秒)。
+
+## 集群租约对象更新
+
+每当有集群加入时,Karmada 将为每个 `Pull` 模式集群创建一个租约对象和一个租约控制器。
+
+每个租约控制器负责更新对应的租约对象,续租时间可以通过 `--cluster-lease-duration`
+和 `--cluster-lease-renew-interval-fraction` 标志进行配置(默认值为 10 秒)。
+
+由于集群的状态更新由 `clusterStatus` 控制器负责维护,因此租约对象的更新过程与集群状态的更新过程相互独立。
+
+Karmada 控制面中的 `cluster` 控制器将每隔 `--cluster-monitor-period` 中配置的时间(默认值为 5 秒)检查 `Pull` 模式集群的状态,
+当 `cluster` 控制器在最后一个 `--cluster-monitor-grace-period` 中配置的时间段(默认值为 40 秒)内没有收到来自集群的消息时,
+集群的 `Ready` 条件将被更改为 `Unknown`。
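+
+作为参考,上述参数在 karmada-controller-manager 中的配置片段大致如下(仅为示意,取值即为上文所述的默认值;`Pull` 模式集群对应的参数位于 karmada-agent 中,请以各组件实际支持的启动参数为准):
+
+```yaml
+# 示意:与集群状态收集和故障判定相关的启动参数
+- command:
+    - /bin/karmada-controller-manager
+    - --kubeconfig=/etc/kubeconfig
+    - --cluster-status-update-frequency=10s
+    - --cluster-failure-threshold=30s
+    - --cluster-monitor-period=5s
+    - --cluster-monitor-grace-period=40s
+```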
+
+## 检查集群状态
+
+你可以使用 `kubectl` 来检查集群的状态细节:
+```
+kubectl describe cluster
+```
+
+以下示例描述了一个状态不健康的集群:
+
+
+unfold me to see the yaml
+
+```
+kubectl describe cluster member1
+
+Name: member1
+Namespace:
+Labels:
+Annotations:
+API Version: cluster.karmada.io/v1alpha1
+Kind: Cluster
+Metadata:
+ Creation Timestamp: 2021-12-29T08:49:35Z
+ Finalizers:
+ karmada.io/cluster-controller
+ Resource Version: 152047
+ UID: 53c133ab-264e-4e8e-ab63-a21611f7fae8
+Spec:
+ API Endpoint: https://172.23.0.7:6443
+ Impersonator Secret Ref:
+ Name: member1-impersonator
+ Namespace: karmada-cluster
+ Secret Ref:
+ Name: member1
+ Namespace: karmada-cluster
+ Sync Mode: Push
+Status:
+ Conditions:
+ Last Transition Time: 2021-12-31T03:36:08Z
+ Message: cluster is not reachable
+ Reason: ClusterNotReachable
+ Status: False
+ Type: Ready
+Events:
+```
+
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/userguide/failover/failover-analysis.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/userguide/failover/failover-analysis.md
new file mode 100644
index 000000000..8bc3c5240
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/userguide/failover/failover-analysis.md
@@ -0,0 +1,209 @@
+---
+title: 故障迁移过程解析
+---
+
+让我们对Karmada集群故障迁移的过程进行一个简单的解析。
+
+## 添加集群污点
+
+当[集群被判定为不健康](./determine-cluster-failures.md)之后,集群将会被添加上`Effect`值为`NoSchedule`的污点,具体情况为:
+
+- 当集群`Ready`状态为`False`时,将被添加如下污点:
+
+```yaml
+key: cluster.karmada.io/not-ready
+effect: NoSchedule
+```
+
+- 当集群`Ready`状态为`Unknown`时,将被添加如下污点:
+
+```yaml
+key: cluster.karmada.io/unreachable
+effect: NoSchedule
+```
+
+如果集群的不健康状态持续一段时间(该时间可以通过`--failover-eviction-timeout`标志进行配置,默认值为5分钟)仍未恢复,集群将会被添加上`Effect`值为`NoExecute`的污点,具体情况为:
+
+- 当集群`Ready`状态为`False`时,将被添加如下污点:
+
+```yaml
+key: cluster.karmada.io/not-ready
+effect: NoExecute
+```
+
+- 当集群`Ready`状态为`Unknown`时,将被添加如下污点:
+
+```yaml
+key: cluster.karmada.io/unreachable
+effect: NoExecute
+```
+
+## 容忍集群污点
+
+当用户创建`PropagationPolicy/ClusterPropagationPolicy`资源后,Karmada会通过webhook为它们自动增加如下集群污点容忍:
+
+```yaml
+apiVersion: policy.karmada.io/v1alpha1
+kind: PropagationPolicy
+metadata:
+ name: nginx-propagation
+ namespace: default
+spec:
+ placement:
+ clusterTolerations:
+ - effect: NoExecute
+ key: cluster.karmada.io/not-ready
+ operator: Exists
+ tolerationSeconds: 300
+ - effect: NoExecute
+ key: cluster.karmada.io/unreachable
+ operator: Exists
+ tolerationSeconds: 300
+ ...
+```
+
+其中,容忍的`tolerationSeconds`值可以通过`--default-not-ready-toleration-seconds`与`--default-unreachable-toleration-seconds`标志进行配置,这两个标志的默认值均为300。
+
+## 故障迁移
+
+当Karmada检测到故障群集不再被`PropagationPolicy/ClusterPropagationPolicy`分发策略容忍时,该集群将被从资源调度结果中删除,随后,Karmada调度器将重新调度相关工作负载。
+
+重调度的过程有以下几个限制:
+- 对于每个重调度的工作负载,其仍然需要满足`PropagationPolicy/ClusterPropagationPolicy`的约束,如ClusterAffinity或SpreadConstraints。
+- 应用初始调度结果中健康的集群在重调度过程中仍将被保留。
+
+### Duplicated调度类型
+
+对于`Duplicated`调度类型,当集群故障之后进行重新调度时,只有在满足分发策略限制的候选集群数量大于等于故障集群数量时,调度才会继续执行,否则不执行。其中候选集群是指在本次调度过程中,区别于已调度的集群,新计算出的集群调度结果。
+
+以`Deployment`资源为例:
+
+
+unfold me to see the yaml
+
+```yaml
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+ name: nginx
+ labels:
+ app: nginx
+spec:
+ replicas: 2
+ selector:
+ matchLabels:
+ app: nginx
+ template:
+ metadata:
+ labels:
+ app: nginx
+ spec:
+ containers:
+ - image: nginx
+ name: nginx
+---
+apiVersion: policy.karmada.io/v1alpha1
+kind: PropagationPolicy
+metadata:
+ name: nginx-propagation
+spec:
+ resourceSelectors:
+ - apiVersion: apps/v1
+ kind: Deployment
+ name: nginx
+ placement:
+ clusterAffinity:
+ clusterNames:
+ - member1
+ - member2
+ - member3
+ - member5
+ spreadConstraints:
+ - maxGroups: 2
+ minGroups: 2
+ replicaScheduling:
+ replicaSchedulingType: Duplicated
+```
+
+
+假设有5个成员集群,初始调度结果在member1和member2集群中。当member2集群发生故障,将触发调度器重调度。
+
+需要注意的是,重调度不会删除原本状态为Ready的集群member1上的工作负载。在其余3个集群中,只有member3和member5匹配`clusterAffinity`策略。
+
+由于分发约束的限制,最后应用调度的结果将会是[member1, member3]或[member1, member5]。
+
+### Divided调度类型
+
+对于`Divided`调度类型,Karmada调度器将尝试将应用副本迁移到其他健康的集群中去。
+
+以`Deployment`资源为例:
+
+
+unfold me to see the yaml
+
+```yaml
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+ name: nginx
+ labels:
+ app: nginx
+spec:
+ replicas: 3
+ selector:
+ matchLabels:
+ app: nginx
+ template:
+ metadata:
+ labels:
+ app: nginx
+ spec:
+ containers:
+ - image: nginx
+ name: nginx
+---
+apiVersion: policy.karmada.io/v1alpha1
+kind: PropagationPolicy
+metadata:
+ name: nginx-propagation
+spec:
+ resourceSelectors:
+ - apiVersion: apps/v1
+ kind: Deployment
+ name: nginx
+ placement:
+ clusterAffinity:
+ clusterNames:
+ - member1
+ - member2
+ replicaScheduling:
+ replicaDivisionPreference: Weighted
+ replicaSchedulingType: Divided
+ weightPreference:
+ staticWeightList:
+ - targetCluster:
+ clusterNames:
+ - member1
+ weight: 1
+ - targetCluster:
+ clusterNames:
+ - member2
+ weight: 2
+```
+
+
+Karmada调度器将根据权重表`weightPreference`来划分应用副本。初始调度结果中,member1集群上有1个副本,member2集群上有2个副本。
+
+当member1集群故障之后,将触发重调度,最后的调度结果将会是member2集群上有3个副本。
+
+## 优雅故障迁移
+
+为了防止集群故障迁移过程中服务发生中断,Karmada需要确保故障集群中应用副本的删除动作延迟到应用副本在新集群上可用之后才执行。
+
+`ResourceBinding/ClusterResourceBinding`中增加了[GracefulEvictionTasks](https://github.com/karmada-io/karmada/blob/12e8f01d01571932e6fe45cb7f0d1bffd2e40fd9/pkg/apis/work/v1alpha2/binding_types.go#L75-L89)字段来表示优雅驱逐任务队列。
+
+当故障集群被taint-manager从资源调度结果中删除时,它将被添加到优雅驱逐任务队列中。
+
+`gracefulEviction`控制器负责处理优雅驱逐任务队列中的任务。在处理过程中,`gracefulEviction`控制器逐个评估优雅驱逐任务队列中的任务是否可以从队列中移除。判断条件如下:
+- 检查当前资源调度结果中资源的健康状态。如果资源健康状态为健康,则满足条件。
+- 检查当前任务的等待时长是否超过超时时间,超时时间可以通过`graceful-eviction-timeout`标志配置(默认为10分钟)。如果超过,则满足条件。
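+
+作为参考,`ResourceBinding` 中的优雅驱逐任务字段大致形如下面的片段(仅为示意,字段取值由控制器自动生成,此处的集群名与时间均为假设值):
+
+```yaml
+spec:
+  clusters:
+    - name: member1
+      replicas: 2
+  gracefulEvictionTasks:
+    - creationTimestamp: "2023-05-08T09:29:02Z"
+      fromCluster: member2        # 被驱逐的故障集群
+      suppressDeletion: false     # 为 true 时表示暂缓删除,由用户确认后再清理
+```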
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/userguide/failover/failover-overview.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/userguide/failover/failover-overview.md
new file mode 100644
index 000000000..106a8b98e
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/userguide/failover/failover-overview.md
@@ -0,0 +1,38 @@
+---
+title: 故障迁移特性概览
+---
+
+在多云多集群场景中,为了提高业务的高可用性,用户工作负载可能会被部署在多个不同的集群中。在Karmada中,当集群发生故障或是用户不希望在某个集群上继续运行工作负载时,集群状态将被标记为不可用,并被添加上一些污点。
+
+taint-manager检测到集群故障之后,会从这些故障集群中驱逐工作负载,被驱逐的工作负载将被调度至其他最适合的集群,从而达成故障迁移的目的,保证了用户业务的可用性与连续性。
+
+## 为何需要故障迁移
+
+下面来介绍一些多集群故障迁移的场景:
+
+- 管理员在Karmada控制面部署了一个离线业务,并将业务Pod实例分发到了多个集群。突然某个集群发生故障,管理员希望Karmada能够把故障集群上的Pod实例迁移到其他条件适合的集群中去。
+- 普通用户通过Karmada控制面在某一个集群上部署了一个在线业务,业务包括数据库实例、服务器实例、配置文件等,服务通过控制面上的ELB对外暴露,此时某一集群发生故障,用户希望能把整个业务迁移到另一个情况较适合的集群上,业务迁移期间需要保证服务不中断。
+- 管理员对某个集群进行升级,作为基础设施的容器网络、存储等发生了改变,管理员希望在集群升级之前把当前集群上的应用迁移到其他适合的集群中去,业务迁移期间需要保证服务不中断。
+- ......
+
+## 怎样进行故障迁移
+
+![](../../resources/userguide/failover/failover-overview.png)
+
+用户在Karmada中加入了三个集群,分别为:`member1`、`member2`和`member3`。然后在karmada控制面部署了一个名为`foo`,且副本数为2的Deployment,并通过PropagationPolicy将其分发到了集群`member1`和`member2`上。
+
+当集群`member1`发生故障之后,其上的Pod实例将被驱逐,然后被迁移到集群`member2`或是集群`member3`中,这个不同的迁移行为可以通过`PropagationPolicy/ClusterPropagationPolicy`的副本调度策略`ReplicaSchedulingStrategy`来控制。
+
+## 用户如何开启特性
+
+用户可以通过启用`karmada-controller`的`Failover`特性开关来开启故障迁移特性。`Failover`特性开关从Karmada v1.4后处于Beta阶段,并且默认开启。用户如果使用Karmada v1.3或是更早的版本,需要手动开启`Failover`特性开关:
+
+```
+--feature-gates=Failover=true
+```
+
+此外,如果用户启用了`GracefulEviction`特性,故障迁移过程将变得十分平滑且优雅,也就是说,工作负载的驱逐将被推迟到工作负载在新集群上启动或达到最大宽限期之后才被执行。`GracefulEviction`特性开关从Karmada v1.4后处于Beta阶段,并且默认开启。用户如果使用Karmada v1.3,需要为`karmada-controller`启用如下特性开关:
+
+```
+--feature-gates=Failover=true,GracefulEviction=true
+```
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/userguide/globalview/aggregated-api-endpoint.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/userguide/globalview/aggregated-api-endpoint.md
new file mode 100644
index 000000000..37f1050e2
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/userguide/globalview/aggregated-api-endpoint.md
@@ -0,0 +1,85 @@
+---
+title: 聚合层 APIServer
+---
+
+新引入的 [karmada-aggregated-apiserver](https://github.com/karmada-io/karmada/blob/master/cmd/aggregated-apiserver/main.go),允许用户通过 proxy 端点从 Karmada 控制面统一访问成员集群。
+
+有关详细的讨论主题,请参见[这里](https://github.com/karmada-io/karmada/discussions/1077)。
+
+以下是一个快速开始。
+
+## 快速开始
+
+为了快速体验这个功能,我们尝试通过使用 karmada-apiserver 证书来进行访问。
+
+### 步骤 1:获取 karmada-apiserver 证书
+
+对于使用 `hack/local-up-karmada.sh` 部署的 Karmada,您可以直接从 `$HOME/.kube/` 目录中复制它。
+
+```shell
+cp $HOME/.kube/karmada.config karmada-apiserver.config
+```
+
+### 步骤 2:授予用户 `system:admin` 权限
+
+`system:admin` 是 karmada-apiserver 证书的用户。我们需要显式地授予它 `clusters/proxy` 权限。
+
+执行以下 YAML 文件:
+
+cluster-proxy-rbac.yaml:
+
+
+
+unfold me to see the yaml
+
+```yaml
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRole
+metadata:
+ name: cluster-proxy-clusterrole
+rules:
+- apiGroups:
+ - 'cluster.karmada.io'
+ resources:
+ - clusters/proxy
+ resourceNames:
+ - member1
+ - member2
+ - member3
+ verbs:
+ - '*'
+---
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRoleBinding
+metadata:
+ name: cluster-proxy-clusterrolebinding
+roleRef:
+ apiGroup: rbac.authorization.k8s.io
+ kind: ClusterRole
+ name: cluster-proxy-clusterrole
+subjects:
+ - kind: User
+ name: "system:admin"
+```
+
+
+
+```shell
+kubectl --kubeconfig $HOME/.kube/karmada.config --context karmada-apiserver apply -f cluster-proxy-rbac.yaml
+```
+
+### 步骤 3:访问成员集群
+
+运行以下命令(用您的实际集群名称替换 `{clustername}`):
+
+```shell
+kubectl --kubeconfig karmada-apiserver.config get --raw /apis/cluster.karmada.io/v1alpha1/clusters/{clustername}/proxy/api/v1/nodes
+```
+
+或者将 `/apis/cluster.karmada.io/v1alpha1/clusters/{clustername}/proxy` 追加到 karmada-apiserver.config 的服务器地址,您可以直接使用以下命令:
+
+```shell
+kubectl --kubeconfig karmada-apiserver.config get node
+```
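+
+作为参考,追加代理路径后,karmada-apiserver.config 中 cluster 的 `server` 字段大致如下(仅为示意,地址与集群名请替换为实际值):
+
+```yaml
+clusters:
+  - cluster:
+      certificate-authority-data: LS0t...   # 证书内容省略
+      server: https://<karmada-apiserver-address>/apis/cluster.karmada.io/v1alpha1/clusters/member1/proxy
+    name: karmada-apiserver
+```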
+
+> 注意:对于以拉取模式接入 Karmada 且仅允许从集群对 Karmada 进行访问的成员集群,我们可以 [Deploy apiserver-network-proxy (ANP) For Pull mode](../clustermanager/working-with-anp.md) 来访问它。
\ No newline at end of file
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/userguide/globalview/customizing-resource-interpreter.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/userguide/globalview/customizing-resource-interpreter.md
new file mode 100644
index 000000000..f9eae01c3
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/userguide/globalview/customizing-resource-interpreter.md
@@ -0,0 +1,553 @@
+---
+title: 自定义资源解释器
+---
+
+## 资源解释器框架
+
+在将资源从 `karmada-apiserver` 分发到成员集群的过程中,Karmada 可能需要了解资源的定义结构。以 `Propagating Deployment` 为例,在构建 `ResourceBinding` 的阶段,`karmada-controller-manager` 组件需要解析 deployment 资源的 `replicas` 字段。
+
+对于 Kubernetes 原生资源来说,Karmada 知道如何解析它们,但是对于由 CRD 定义的资源(或是由聚合层方式注册)来说,由于缺乏对该资源结构信息的了解,它们将仅被当作普通资源来对待,因此,高级调度算法将不能应用于这些资源。
+
+[Resource Interpreter Framework][1] 专为解释资源结构而设计,它包括两类解释器:
+- `内置`解释器:用于解释常见的 Kubernetes 原生资源或一些知名的扩展资源;
+- `自定义`解释器: 用于解释自定义资源或覆盖`内置`解释器。
+
+> 注意:上述两类解释器之间的主要区别在于,`内置`解释器由 Karmada 社区实现并维护,并将其内置到 Karmada 组件中,例如 `karmada-controller-manager`。 相反,`自定义`解释器是由用户实现和维护的,它应该作为 `Interpreter Webhook` 或`声明式配置`注册到 Karmada(更多详细信息,请参考 [Customized Interpreter](#自定义解释器))。
+
+### 解释器操作
+
+在解释资源时,我们经常会提取多条信息。Karmada 中定义了多种`解释器操作`,`资源解释器框架`为每个操作类型提供服务。
+
+关于`资源解释器框架`定义的各种操作类型的具体含义,可以参考 [Interpreter Operations][2] 。
+
+> 注意: 并非所有设计的操作类型均受支持(有关支持的操作,请参见下文):
+
+> 注意:在使用特定的`解释器操作`解释资源时,最多只会咨询一个解释器;对于同一个资源,`自定义`解释器比`内置`解释器具有更高的优先级。
+> 例如,`内置`解释器为 `apps/v1` version 的 `Deployment` 提供 `InterpretReplica` 服务,如果有一个自定义解释器注册到 Karmada 来解释该资源,则`自定义`解释器获胜,`内置`解释器将被忽略。
+
+## 内置解释器
+
+对于常见的 Kubernetes 原生资源或一些知名的扩展资源来说,`解释器操作`是内置的,这意味着用户通常不需要实现自定义解释器。 如果你希望内置更多资源,请随时[提交问题][3] 让我们了解您的用户案例。
+
+内置解释器现在支持以下`解释器操作`:
+
+### InterpretReplica
+
+支持资源:
+- Deployment(apps/v1)
+- StatefulSet(apps/v1)
+- Job(batch/v1)
+- Pod(v1)
+
+### ReviseReplica
+
+支持资源:
+- Deployment(apps/v1)
+- StatefulSet(apps/v1)
+- Job(batch/v1)
+
+### Retain
+
+支持资源:
+- Pod(v1)
+- Service(v1)
+- ServiceAccount(v1)
+- PersistentVolumeClaim(v1)
+- PersistentVolume(v1)
+- Job(batch/v1)
+
+### AggregateStatus
+
+支持资源:
+- Deployment(apps/v1)
+- Service(v1)
+- Ingress(networking.k8s.io/v1)
+- CronJob(batch/v1)
+- Job(batch/v1)
+- DaemonSet(apps/v1)
+- StatefulSet(apps/v1)
+- Pod(v1)
+- PersistentVolume(v1)
+- PersistentVolumeClaim(v1)
+- PodDisruptionBudget(policy/v1)
+
+### InterpretStatus
+
+支持资源:
+- Deployment(apps/v1)
+- Service(v1)
+- Ingress(networking.k8s.io/v1)
+- Job(batch/v1)
+- DaemonSet(apps/v1)
+- StatefulSet(apps/v1)
+- PodDisruptionBudget(policy/v1)
+
+### InterpretDependency
+
+支持资源:
+- Deployment(apps/v1)
+- Job(batch/v1)
+- CronJob(batch/v1)
+- Pod(v1)
+- DaemonSet(apps/v1)
+- StatefulSet(apps/v1)
+- Ingress(networking.k8s.io/v1)
+
+### InterpretHealth
+
+支持资源:
+- Deployment(apps/v1)
+- StatefulSet(apps/v1)
+- ReplicaSet(apps/v1)
+- DaemonSet(apps/v1)
+- Service(v1)
+- Ingress(networking.k8s.io/v1)
+- PersistentVolumeClaim(v1)
+- PodDisruptionBudget(policy/v1)
+- Pod(v1)
+
+## 自定义解释器
+
+自定义解释器由用户实现和维护,它可以通过两种方式扩展,通过定义声明式配置文件或在运行时作为 webhook 运行。
+
+> 注意:声明式配置比 webhook 有更高的优先级,即用户如果同时注册了这两种解释方式,将优先应用相应资源的声明式配置。
+
+### 内置资源声明性配置
+
+Karmada捆绑了一些流行、开源的资源,以便用户可以直接使用。声明式配置的解释器现在支持以下`解释器操作`:
+
+#### InterpretReplica
+
+支持资源:
+- BroadcastJob(apps.kruise.io/v1alpha1)
+- CloneSet(apps.kruise.io/v1alpha1)
+- AdvancedStatefulSet(apps.kruise.io/v1beta1)
+- Workflow(argoproj.io/v1alpha1)
+
+#### ReviseReplica
+
+支持资源:
+- BroadcastJob(apps.kruise.io/v1alpha1)
+- CloneSet(apps.kruise.io/v1alpha1)
+- AdvancedStatefulSet(apps.kruise.io/v1beta1)
+- Workflow(argoproj.io/v1alpha1)
+
+#### Retain
+
+支持资源:
+- BroadcastJob(apps.kruise.io/v1alpha1)
+- Workflow(argoproj.io/v1alpha1)
+- HelmRelease(helm.toolkit.fluxcd.io/v2beta1)
+- Kustomization(kustomize.toolkit.fluxcd.io/v1)
+- GitRepository(source.toolkit.fluxcd.io/v1)
+- Bucket(source.toolkit.fluxcd.io/v1beta2)
+- HelmChart(source.toolkit.fluxcd.io/v1beta2)
+- HelmRepository(source.toolkit.fluxcd.io/v1beta2)
+- OCIRepository(source.toolkit.fluxcd.io/v1beta2)
+
+#### AggregateStatus
+
+支持资源:
+- AdvancedCronJob(apps.kruise.io/v1alpha1)
+- AdvancedDaemonSet(apps.kruise.io/v1alpha1)
+- BroadcastJob(apps.kruise.io/v1alpha1)
+- CloneSet(apps.kruise.io/v1alpha1)
+- AdvancedStatefulSet(apps.kruise.io/v1beta1)
+- HelmRelease(helm.toolkit.fluxcd.io/v2beta1)
+- Kustomization(kustomize.toolkit.fluxcd.io/v1)
+- ClusterPolicy(kyverno.io/v1)
+- Policy(kyverno.io/v1)
+- GitRepository(source.toolkit.fluxcd.io/v1)
+- Bucket(source.toolkit.fluxcd.io/v1beta2)
+- HelmChart(source.toolkit.fluxcd.io/v1beta2)
+- HelmRepository(source.toolkit.fluxcd.io/v1beta2)
+- OCIRepository(source.toolkit.fluxcd.io/v1beta2)
+
+#### InterpretStatus
+
+支持资源:
+- AdvancedDaemonSet(apps.kruise.io/v1alpha1)
+- BroadcastJob(apps.kruise.io/v1alpha1)
+- CloneSet(apps.kruise.io/v1alpha1)
+- AdvancedStatefulSet(apps.kruise.io/v1beta1)
+- HelmRelease(helm.toolkit.fluxcd.io/v2beta1)
+- Kustomization(kustomize.toolkit.fluxcd.io/v1)
+- ClusterPolicy(kyverno.io/v1)
+- Policy(kyverno.io/v1)
+- GitRepository(source.toolkit.fluxcd.io/v1)
+- Bucket(source.toolkit.fluxcd.io/v1beta2)
+- HelmChart(source.toolkit.fluxcd.io/v1beta2)
+- HelmRepository(source.toolkit.fluxcd.io/v1beta2)
+- OCIRepository(source.toolkit.fluxcd.io/v1beta2)
+
+#### InterpretDependency
+
+支持资源:
+- AdvancedCronJob(apps.kruise.io/v1alpha1)
+- AdvancedDaemonSet(apps.kruise.io/v1alpha1)
+- BroadcastJob(apps.kruise.io/v1alpha1)
+- CloneSet(apps.kruise.io/v1alpha1)
+- AdvancedStatefulSet(apps.kruise.io/v1beta1)
+- Workflow(argoproj.io/v1alpha1)
+- HelmRelease(helm.toolkit.fluxcd.io/v2beta1)
+- Kustomization(kustomize.toolkit.fluxcd.io/v1)
+- GitRepository(source.toolkit.fluxcd.io/v1)
+- Bucket(source.toolkit.fluxcd.io/v1beta2)
+- HelmChart(source.toolkit.fluxcd.io/v1beta2)
+- HelmRepository(source.toolkit.fluxcd.io/v1beta2)
+- OCIRepository(source.toolkit.fluxcd.io/v1beta2)
+
+#### InterpretHealth
+
+支持资源:
+- AdvancedCronJob(apps.kruise.io/v1alpha1)
+- AdvancedDaemonSet(apps.kruise.io/v1alpha1)
+- BroadcastJob(apps.kruise.io/v1alpha1)
+- CloneSet(apps.kruise.io/v1alpha1)
+- AdvancedStatefulSet(apps.kruise.io/v1beta1)
+- Workflow(argoproj.io/v1alpha1)
+- HelmRelease(helm.toolkit.fluxcd.io/v2beta1)
+- Kustomization(kustomize.toolkit.fluxcd.io/v1)
+- ClusterPolicy(kyverno.io/v1)
+- Policy(kyverno.io/v1)
+- GitRepository(source.toolkit.fluxcd.io/v1)
+- Bucket(source.toolkit.fluxcd.io/v1beta2)
+- HelmChart(source.toolkit.fluxcd.io/v1beta2)
+- HelmRepository(source.toolkit.fluxcd.io/v1beta2)
+- OCIRepository(source.toolkit.fluxcd.io/v1beta2)
+
+### 声明式配置
+
+#### 什么是解释器声明式配置?
+
+用户可以通过 [ResourceInterpreterCustomization][4] API 规范中声明的规则,快速为 Kubernetes 原生资源和 CR 资源自定义资源解释器。
+
+#### 配置编写
+
+你可以通过创建或更新 [ResourceInterpreterCustomization][4] 资源来配置资源解释规则,当前支持在 ResourceInterpreterCustomization 中定义 lua 脚本。 你可以在 API 定义中学习如何定义 lua 脚本,以 [retention][5] 为例。
+
+下面我们提供一个ResourceInterpreterCustomization资源的yaml编写示例:
+
+
+resource-interpreter-customization.yaml
+
+```yaml
+apiVersion: config.karmada.io/v1alpha1
+kind: ResourceInterpreterCustomization
+metadata:
+ name: declarative-configuration-example
+spec:
+ target:
+ apiVersion: apps/v1
+ kind: Deployment
+ customizations:
+ replicaResource:
+ luaScript: >
+ local kube = require("kube")
+ function GetReplicas(obj)
+ replica = obj.spec.replicas
+ requirement = kube.accuratePodRequirements(obj.spec.template)
+ return replica, requirement
+ end
+ replicaRevision:
+ luaScript: >
+ function ReviseReplica(obj, desiredReplica)
+ obj.spec.replicas = desiredReplica
+ return obj
+ end
+ retention:
+ luaScript: >
+ function Retain(desiredObj, observedObj)
+ desiredObj.spec.paused = observedObj.spec.paused
+ return desiredObj
+ end
+ statusAggregation:
+ luaScript: >
+ function AggregateStatus(desiredObj, statusItems)
+ if statusItems == nil then
+ return desiredObj
+ end
+ if desiredObj.status == nil then
+ desiredObj.status = {}
+ end
+ replicas = 0
+ for i = 1, #statusItems do
+ if statusItems[i].status ~= nil and statusItems[i].status.replicas ~= nil then
+ replicas = replicas + statusItems[i].status.replicas
+ end
+ end
+ desiredObj.status.replicas = replicas
+ return desiredObj
+ end
+ statusReflection:
+ luaScript: >
+ function ReflectStatus (observedObj)
+ return observedObj.status
+ end
+ healthInterpretation:
+ luaScript: >
+ function InterpretHealth(observedObj)
+ return observedObj.status.readyReplicas == observedObj.spec.replicas
+ end
+ dependencyInterpretation:
+ luaScript: >
+ function GetDependencies(desiredObj)
+ dependentSas = {}
+ refs = {}
+ if desiredObj.spec.template.spec.serviceAccountName ~= nil and desiredObj.spec.template.spec.serviceAccountName ~= 'default' then
+ dependentSas[desiredObj.spec.template.spec.serviceAccountName] = true
+ end
+ local idx = 1
+ for key, value in pairs(dependentSas) do
+ dependObj = {}
+ dependObj.apiVersion = 'v1'
+ dependObj.kind = 'ServiceAccount'
+ dependObj.name = key
+ dependObj.namespace = desiredObj.metadata.namespace
+ refs[idx] = dependObj
+ idx = idx + 1
+ end
+ return refs
+ end
+```
+
+
+#### 配置验证
+
+你可以使用 `karmadactl interpret` 命令在将 `ResourceInterpreterCustomization` 配置应用到系统之前来验证该配置的正确性。我们提供了一些示例来帮助用户更好的理解如何使用该验证工具,请参考 [examples][8] 。
+
+### Webhook
+
+#### 什么是解释器 webhook?
+
+解释器 webhook 是一种 HTTP 回调,它接收解释请求并对其进行处理。
+
+#### 编写一个解释器 webhook 服务器
+
+请参考 [Example of Customize Interpreter][6] 的实现,我们在 Karmada E2E 测试中使用该方式进行了验证。webhook 将处理 Karmada 组件(例如 karmada-controller-manager)发送的 ResourceInterpreterRequest 请求,处理完成后将处理结果以 ResourceInterpreterResponse 为形式返回。
+
+#### 部署 admission webhook 服务
+
+在 E2E 测试环境中, [Customize Interpreter示例][6] 部署在 host 集群上,由 service 暴露为 webhook 服务器前端。
+
+你也可以在集群外部署你的 webhooks,并记得更新你的 webhook 配置。
+
+#### 即时配置 webhook
+
+你可以通过 [ResourceInterpreterWebhookConfiguration][7] 来配置哪些资源和`解释器操作`受 webhook 的约束。
+
+下面提供了一个 `ResourceInterpreterWebhookConfiguration` 的配置示例:
+
+```yaml
+apiVersion: config.karmada.io/v1alpha1
+kind: ResourceInterpreterWebhookConfiguration
+metadata:
+ name: examples
+webhooks:
+ - name: workloads.example.com
+ rules:
+ - operations: [ "InterpretReplica","ReviseReplica","Retain","AggregateStatus" ]
+ apiGroups: [ "workload.example.io" ]
+ apiVersions: [ "v1alpha1" ]
+ kinds: [ "Workload" ]
+ clientConfig:
+ url: https://karmada-interpreter-webhook-example.karmada-system.svc:443/interpreter-workload
+ caBundle: {{caBundle}}
+ interpreterContextVersions: [ "v1alpha1" ]
+ timeoutSeconds: 3
+```
+
+你可以在 ResourceInterpreterWebhookConfiguration 中配置多个 webhook,每个 webhook 至少服务于一个`解释器操作`。
+
+### 编写 ResourceInterpreterCustomization
+
+你可以学习如何编写 [ResourceInterpreterCustomization][4] 来定制你的资源。
+
+首先,我们介绍kube库函数。然后,我们以 `kyverno.io/v1/ClusterPolicy` 为[例][9],介绍如何编写 `ResourceInterpreterCustomization`。
+
+#### luavm 的内置函数
+
+[ResourceInterpreterCustomization][4] API 规范中声明的规则定义了`解释器操作`。这些操作由 lua 编写,并通过 luavm 调用。用户在编写`解释器操作`时,可以使用 luavm 的内置函数。
+
+在 [kube 库][10] 中,有两个函数可用于编写解释器操作:`accuratePodRequirements` 和 `getPodDependencies`。`accuratePodRequirements`有助于编写`ReplicaResource`操作,`getPodDependencies`有助于编写`DependencyInterpretation`操作。
+
+`accuratePodRequirements` 函数功能是获取 pod 的总资源需求。它的参数是`PodTemplateSpec`,返回值是 `ReplicaRequirements`。`PodTemplateSpec`描述了一个pod在从模板创建时应该有的数据,`ReplicaRequirements` 表示每个副本的需求。
+
+`getPodDependencies`函数功能是从podTemplate和namespace中获取所有依赖。它的参数是`PodTemplateSpec`和`namespace`。它的返回值是`dependencies`。`PodTemplateSpec`描述了一个pod在从模板创建时应该有的数据。`namespace`是定制资源的命名空间。而`dependencies`是定制资源所依赖的资源。
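+
+下面给出一个在 `dependencyInterpretation` 中复用 `kube.getPodDependencies` 的片段示意(以带有 PodTemplate 的自定义工作负载为例,仅作参考):
+
+```yaml
+# 片段示意:位于 ResourceInterpreterCustomization 的 spec.customizations 下
+dependencyInterpretation:
+  luaScript: >
+    local kube = require("kube")
+    function GetDependencies(desiredObj)
+      refs = kube.getPodDependencies(desiredObj.spec.template, desiredObj.metadata.namespace)
+      return refs
+    end
+```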
+
+#### ReplicaResource
+
+[ReplicaResource][11] 描述了Karmada发现资源的副本以及资源需求的规则。它用于那些声明式工作负载类型(如 Deployment)的CRD资源。
+
+Kyverno的`ClusterPolicy`是一个规则的集合,它没有`.spec.replicas`或`.spec.template.spec.nodeSelector`这样的字段。因此这里不需要为`ClusterPolicy`实现`ReplicaResource`操作。
+
+#### ReplicaRevision
+
+[ReplicaRevision][12] 描述了Karmada修改资源副本的规则。它用于那些声明式工作负载类型(如 Deployment)的CRD资源。
+
+Kyverno的`ClusterPolicy`是一个规则的集合,它没有`.spec.replicas`这样的字段。因此这里不需要为`ClusterPolicy`实现`ReplicaRevision`操作。
+
+#### Retention
+
+[Retention][13] 描述了Karmada对成员集群组件的变化做出反应的所希望的行为。这可以避免系统进入一个无意义的循环,即Karmada资源控制器和成员集群组件,用不同的值不断应用于资源的同一个字段。
+
+Kyverno的`ClusterPolicy`是一个规则的集合,通常不会被成员集群中的组件改变。因此这里不需要为`ClusterPolicy`实现`Retention`操作。
+
+#### StatusAggregation
+
+[StatusAggregation][14]描述了Karmada将从成员集群收集的状态汇总到资源模板的规则。
+
+Kyverno的`ClusterPolicy`是一个规则的集合。这里我们定义了`ClusterPolicy`的状态聚合规则。
+
+
+StatusAggregation-Defined-In-ResourceInterpreterCustomization
+
+```yaml
+statusAggregation:
+ luaScript: >
+ function AggregateStatus(desiredObj, statusItems)
+ if statusItems == nil then
+ return desiredObj
+ end
+ desiredObj.status = {}
+ desiredObj.status.conditions = {}
+ rulecount = {}
+ rulecount.validate = 0
+ rulecount.generate = 0
+ rulecount.mutate = 0
+ rulecount.verifyimages = 0
+ conditions = {}
+ local conditionsIndex = 1
+ for i = 1, #statusItems do
+ if statusItems[i].status ~= nil and statusItems[i].status.autogen ~= nil then
+ desiredObj.status.autogen = statusItems[i].status.autogen
+ end
+ if statusItems[i].status ~= nil and statusItems[i].status.ready ~= nil then
+ desiredObj.status.ready = statusItems[i].status.ready
+ end
+ if statusItems[i].status ~= nil and statusItems[i].status.rulecount ~= nil then
+ rulecount.validate = rulecount.validate + statusItems[i].status.rulecount.validate
+ rulecount.generate = rulecount.generate + statusItems[i].status.rulecount.generate
+ rulecount.mutate = rulecount.mutate + statusItems[i].status.rulecount.mutate
+ rulecount.verifyimages = rulecount.verifyimages + statusItems[i].status.rulecount.verifyimages
+ end
+ if statusItems[i].status ~= nil and statusItems[i].status.conditions ~= nil then
+ for conditionIndex = 1, #statusItems[i].status.conditions do
+ statusItems[i].status.conditions[conditionIndex].message = statusItems[i].clusterName..'='..statusItems[i].status.conditions[conditionIndex].message
+ hasCondition = false
+ for index = 1, #conditions do
+ if conditions[index].type == statusItems[i].status.conditions[conditionIndex].type and conditions[index].status == statusItems[i].status.conditions[conditionIndex].status and conditions[index].reason == statusItems[i].status.conditions[conditionIndex].reason then
+ conditions[index].message = conditions[index].message..', '..statusItems[i].status.conditions[conditionIndex].message
+ hasCondition = true
+ break
+ end
+ end
+ if not hasCondition then
+ conditions[conditionsIndex] = statusItems[i].status.conditions[conditionIndex]
+ conditionsIndex = conditionsIndex + 1
+ end
+ end
+ end
+ end
+ desiredObj.status.rulecount = rulecount
+ desiredObj.status.conditions = conditions
+ return desiredObj
+ end
+```
+
+
+
+#### StatusReflection
+
+[StatusReflection][15] 描述了Karmada挑选资源状态的规则。
+
+Kyverno的`ClusterPolicy`是一个规则的集合,其`.status`包含运行时数据。`StatusReflection`决定了Karmada从成员集群中收集哪些字段。这里我们从成员集群的资源中挑选了一些字段。
+
+
+StatusReflection-Defined-In-ResourceInterpreterCustomization
+
+```yaml
+statusReflection:
+ luaScript: >
+ function ReflectStatus (observedObj)
+ status = {}
+ if observedObj == nil or observedObj.status == nil then
+ return status
+ end
+ status.ready = observedObj.status.ready
+ status.conditions = observedObj.status.conditions
+ status.autogen = observedObj.status.autogen
+ status.rulecount = observedObj.status.rulecount
+ return status
+ end
+```
+
+
+
+#### HealthInterpretation
+
+[HealthInterpretation][16] 描述了健康评估规则,Karmada可以通过这些规则评估资源类型的健康状态。
+
+Kyverno的`ClusterPolicy`是一个规则的集合。我们通过定义健康评估规则来确定成员集群中的`ClusterPolicy`是否健康。
+
+
+HealthInterpretation-Defined-In-ResourceInterpreterCustomization
+
+```yaml
+healthInterpretation:
+ luaScript: >
+ function InterpretHealth(observedObj)
+ if observedObj.status ~= nil and observedObj.status.ready ~= nil then
+ return observedObj.status.ready
+ end
+ if observedObj.status ~= nil and observedObj.status.conditions ~= nil then
+ for conditionIndex = 1, #observedObj.status.conditions do
+ if observedObj.status.conditions[conditionIndex].type == 'Ready' and observedObj.status.conditions[conditionIndex].status == 'True' and observedObj.status.conditions[conditionIndex].reason == 'Succeeded' then
+ return true
+ end
+ end
+ end
+ return false
+ end
+```
+
+
+
+#### DependencyInterpretation
+
+[DependencyInterpretation][17] 描述了Karmada分析依赖资源的规则。
+
+Kyverno的`ClusterPolicy`是一个规则的集合,它不依赖于其他资源。因此这里不需要为`ClusterPolicy`实现`DependencyInterpretation`操作。
+
+[1]: https://github.com/karmada-io/karmada/tree/master/docs/proposals/resource-interpreter-webhook
+[2]: https://github.com/karmada-io/karmada/blob/84b971a501ba82c53a5ad455c2fe84d842cd7d4e/pkg/apis/config/v1alpha1/resourceinterpreterwebhook_types.go#L85-L119
+[3]: https://github.com/karmada-io/karmada/issues/new?assignees=&labels=kind%2Ffeature&template=enhancement.md
+[4]: https://github.com/karmada-io/karmada/blob/84b971a501ba82c53a5ad455c2fe84d842cd7d4e/pkg/apis/config/v1alpha1/resourceinterpretercustomization_types.go#L17
+[5]: https://github.com/karmada-io/karmada/blob/84b971a501ba82c53a5ad455c2fe84d842cd7d4e/pkg/apis/config/v1alpha1/resourceinterpretercustomization_types.go#L108-L134
+[6]: https://github.com/karmada-io/karmada/tree/master/examples/customresourceinterpreter
+[7]: https://github.com/karmada-io/karmada/blob/master/pkg/apis/config/v1alpha1/resourceinterpreterwebhook_types.go#L16
+[8]: ../../reference/karmadactl/karmadactl-usage-conventions.md#karmadactl-interpret
+[9]: https://github.com/karmada-io/karmada/blob/master/pkg/resourceinterpreter/default/thirdparty/resourcecustomizations/kyverno.io/v1/ClusterPolicy/customizations.yaml
+[10]: https://github.com/karmada-io/karmada/blob/master/pkg/resourceinterpreter/customized/declarative/luavm/kube.go#L16-L33
+[11]: https://github.com/karmada-io/karmada/blob/master/pkg/apis/config/v1alpha1/resourceinterpretercustomization_types.go#L60-L68
+[12]: https://github.com/karmada-io/karmada/blob/master/pkg/apis/config/v1alpha1/resourceinterpretercustomization_types.go#L70-L77
+[13]: https://github.com/karmada-io/karmada/blob/master/pkg/apis/config/v1alpha1/resourceinterpretercustomization_types.go#L50-L58
+[14]: https://github.com/karmada-io/karmada/blob/master/pkg/apis/config/v1alpha1/resourceinterpretercustomization_types.go#L86-L92
+[15]: https://github.com/karmada-io/karmada/blob/master/pkg/apis/config/v1alpha1/resourceinterpretercustomization_types.go#L79-L84
+[16]: https://github.com/karmada-io/karmada/blob/master/pkg/apis/config/v1alpha1/resourceinterpretercustomization_types.go#L94-L97
+[17]: https://github.com/karmada-io/karmada/blob/master/pkg/apis/config/v1alpha1/resourceinterpretercustomization_types.go#L99-L105
+
+## 注意事项
+
+### 使用 Retain 解释器解决控制面与成员集群的控制权冲突
+
+问题:Retain是在Karmada控制面与成员集群同时具备对成员集群资源控制权时,用户可自定义的用于解决控制权冲突的解释器。
+一个典型的场景是当成员集群 Deployment 的副本数同时被控制面资源模版和成员集群 HPA 控制时,
+两者无限次来回修改成员集群 Deployment 的副本数,导致成员集群的 Deployment 状态会出现异常。
+
+解决措施:
+* 针对您的工作负载类资源实现相应的 Retain 解释器,决策什么场景下该响应控制面资源模版的修改,什么场景下该响应成员集群 HPA 的修改。
+ 目前 Karmada 只针对 Deployment 资源实现了相应的 Retain 解释器,具体实现方式为:如果资源模板有 `resourcetemplate.karmada.io/retain-replicas` 的 label,
+ 就由成员集群 HPA 控制,否则就由控制面资源模板控制(在显式开启 `hpaReplicasSyncer` 控制器情况下,Karmada 可以自动为启用 HPA 的 Deployment 标记该 label)。
+ 如果您需要针对其他资源或自定义的 CRD 资源解决该冲突问题,可参考 Deployment 的 Retain 方案。
+* 如果您期望更优雅并彻底地解决上述问题,我们更推荐您将 HPA 更换为 [FederatedHPA](../../userguide/autoscaling/federatedhpa.md)。
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/userguide/globalview/global-search-for-resources.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/userguide/globalview/global-search-for-resources.md
new file mode 100644
index 000000000..791e1109f
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/userguide/globalview/global-search-for-resources.md
@@ -0,0 +1,97 @@
+---
+title: Global Search for Resources
+---
+
+## Background
+
+Multicluster is becoming a common practice. It brings services closer to users.
+But it also makes it hard to query Kubernetes resources across clusters.
+Therefore, we need a caching layer and a search engine for Karmada to cache and search for Kubernetes resources across multiple clusters.
+We introduced a new component named `karmada-search` and a new API group called `search.karmada.io` to implement it.
+
+## What karmada-search can do
+
+![karmada-search](../../resources/key-features/unified-search.png)
+
+karmada-search can:
+
+* Accelerate resource requests processing across regions.
+* Provide a cross-cluster resource view.
+* Be compatible with multiple Kubernetes resource versions.
+* Unify resource requests entries.
+* Reduce API server pressure on member clusters.
+* Adapt to a variety of search engines and databases.
+
+What's more, `karmada-search` also supports proxying global resources. See [here](./proxy-global-resource.md) for details.
+
+:::note
+
+1. This feature aims to build a cache to store arbitrary resources from multiple member clusters. And these resources are exposed by `search/proxy` REST APIs. If a user has access privilege to `search/proxy`, they can directly access the cached resource without routing their request to the member clusters.
+1. As previously mentioned, the resource query request will not be routed to the member clusters. So if a secret is cached in the Karmada control plane but a user in the member cluster cannot access it via the member cluster's apiserver due to RBAC privilege limitations, they can still access the secret through the Karmada control plane.
+1. This feature is designed for administrators who need to query and view resources in multiple clusters, not for end users. Exposing this API to end users may allow them to view resources that do not belong to them.
+
+:::
+
+## Scope the caching
+
+`.spec` in `ResourceRegistry` defines the cache scope.
+
+It has three fields to set:
+- TargetCluster
+- ResourceSelector
+- BackendStore
+
+### TargetCluster
+
+`TargetCluster` means the cluster from which the cache system collects resources.
+It's exactly the same as [clusterAffinity](../scheduling/resource-propagating.md#deploy-deployment-into-a-specified-set-of-target-clusters) in `PropagationPolicy`.
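+
+For example, the following snippet (cluster names are illustrative) limits the cache to resources collected from `member1` and `member2`:
+
+```yaml
+apiVersion: search.karmada.io/v1alpha1
+kind: ResourceRegistry
+metadata:
+  name: foo
+spec:
+  targetCluster:
+    clusterNames:
+      - member1
+      - member2
+  # ...
+```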
+
+### ResourceSelector
+
+`ResourceSelector` specifies the type of resources to be cached by `karmada-search`. Subfields are `APIVersion`, `Kind` and `Namespace`.
+
+The following example `ResourceSelector` means Deployments in `default` namespace are targeted:
+
+```yaml
+apiVersion: search.karmada.io/v1alpha1
+kind: ResourceRegistry
+metadata:
+ name: foo
+spec:
+ # ...
+ resourceSelectors:
+ - apiVersion: apps/v1
+ kind: Deployment
+ namespace: default
+```
+
+:::note
+
+A null `namespace` field means all namespaces are targeted.
+
+:::
+
+### BackendStore
+
+`BackendStore` specifies the location to store cached items. Defaults to the memory of `karmada-search`.
+Now `BackendStore` only supports `OpenSearch` as the backend.
+
+`BackendStore` can be configured as follows:
+
+```yaml
+apiVersion: search.karmada.io/v1alpha1
+kind: ResourceRegistry
+metadata:
+ name: foo
+spec:
+ # ...
+ backendStore:
+ openSearch:
+ addresses:
+ - http://10.240.0.100:9200
+ secretRef:
+ namespace: default
+ name: opensearch-account
+```
+
+For a complete example, you can refer to [here](../../tutorials/karmada-search.md).
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/userguide/globalview/proxy-global-resource.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/userguide/globalview/proxy-global-resource.md
new file mode 100644
index 000000000..258757d5d
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/userguide/globalview/proxy-global-resource.md
@@ -0,0 +1,86 @@
+---
+title: Proxy Global Resources
+---
+
+## Introduction
+
+The newly introduced [proxy](https://github.com/karmada-io/karmada/blob/master/docs/proposals/resource-aggregation-proxy/README.md) feature allows users to access all the resources in both the karmada control plane and member clusters. With it, users can:
+
+- create, update, patch, get, list, watch and delete resources in the control plane, such as deployments and jobs. All the request verbs are supported, just like using `karmada-apiserver`.
+- update, patch, get, list, watch and delete resources in member clusters, such as pods, nodes, and custom resources.
+- access subresources, such as pods' `log` and `exec`.
+
+## Quick start
+
+To quickly experience this feature, we will experiment with the karmada-apiserver certificate.
+
+### Step 1: Obtain the karmada-apiserver Certificate
+
+For Karmada deployed using `hack/local-up-karmada.sh`, you can directly copy it from the `$HOME/.kube/` directory.
+
+```shell
+cp $HOME/.kube/karmada.config $HOME/karmada-proxy.config
+```
+
+### Step 2: Access by proxy
+
+Append `/apis/search.karmada.io/v1alpha1/proxying/karmada/proxy` to the server address of `karmada-proxy.config`. Then set this file as default config:
+
+```shell
+export KUBECONFIG=$HOME/karmada-proxy.config
+```
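+
+For reference, after the change the `server` field in `karmada-proxy.config` looks roughly like this (the address below is a placeholder for your actual karmada-apiserver address):
+
+```yaml
+clusters:
+  - cluster:
+      certificate-authority-data: LS0t...   # omitted
+      server: https://<karmada-apiserver-address>/apis/search.karmada.io/v1alpha1/proxying/karmada/proxy
+    name: karmada-apiserver
+```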
+
+### Step 3: Define Resources To Be Proxied
+
+Define which member cluster resource you want to be proxied with [`ResourceRegistry`](https://github.com/karmada-io/karmada/tree/master/docs/proposals/caching#define-the-scope-of-the-cached-resource).
+
+For example:
+
+```yaml
+apiVersion: search.karmada.io/v1alpha1
+kind: ResourceRegistry
+metadata:
+ name: proxy-sample
+spec:
+ targetCluster:
+ clusterNames:
+ resourceSelectors:
+ - apiVersion: v1
+ kind: Pod
+ - apiVersion: v1
+ kind: Node
+```
+
+After applying it, you can access pods and nodes with kubectl. Enjoy it!
+
+## FAQ
+
+### Is creating supported?
+
+For resources not defined in a ResourceRegistry, create requests are redirected to the karmada control plane, so the resources are created in the control plane.
+For resources defined in a ResourceRegistry, the proxy doesn't know which cluster to create them in, and responds with a `MethodNotSupported` error.
+
+### Can I read resources by selectors?
+
+Label selectors are fully supported, while field selectors are limited to `metadata.name` and `metadata.namespace`.
+
+### When I get pods with kubectl, only NAME and AGE columns are displayed
+
+Yes, `kubectl` uses `application/json;as=Table;g=meta.k8s.io;v=v1` as the `content-type`, while the proxy only implements [`defaultTableConvertor`](https://github.com/karmada-io/karmada/blob/614e28508336d6c03a938ce1bf0678dafef034f0/vendor/k8s.io/apiserver/pkg/registry/rest/table.go#L38-L40) as the TableConvertor.
+
+```
+NAME AGE
+nginx-65c54cc984-2jjw6 10s
+```
+
+But it doesn't affect the usage of `client-go`, which uses `application/json` as the `content-type`.
+
+### What will happen when I access resources with the same name across clusters?
+
+At this stage, the proxy cannot discern resources with the same name across clusters, so get/update/patch/delete and subresource requests will return a conflict error. When listing resources, all resources with the same name will be returned in the item list.
+
+Users should design their workloads to avoid this error, or be prepared to tolerate it.
+
+### How to access resources in pull mode cluster
+
+We can [deploy apiserver-network-proxy (ANP)](../clustermanager/working-with-anp.md) to access it.
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/userguide/network/working-with-submariner.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/userguide/network/working-with-submariner.md
new file mode 100644
index 000000000..fbac140d4
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/userguide/network/working-with-submariner.md
@@ -0,0 +1,91 @@
+---
+title: 使用 Submariner 实现 Karmada 成员集群彼此联网
+---
+
+本文演示了如何使用 `Submariner` 将成员集群彼此联网。
+
+[Submariner](https://github.com/submariner-io/submariner) 将相连集群之间的网络扁平化,并实现 Pod 和服务之间的 IP 可达性。
+
+您可以参考 Submariner [QUICKSTART GUIDES](https://submariner.io/getting-started/quickstart/),获取更多 Submariner 安装指导。
+
+## 安装 Karmada
+
+### 安装 Karmada 控制面
+
+遵循快速入门中的步骤[安装 Karmada 控制面](../../installation/installation.md)后,您就可以用 Karmada 管控集群。
+
+### 接入成员集群
+
+在下面的步骤中,我们将创建一个成员集群,然后将该集群接入 Karmada 控制面。
+
+1. 创建成员集群
+
+我们将创建一个名为 `cluster1` 的集群,并将其 KUBECONFIG 文件放到 `$HOME/.kube/cluster1.config` 中。运行以下命令:
+
+```shell
+hack/create-cluster.sh cluster1 $HOME/.kube/cluster1.config
+```
+
+这将按 kind 的设置创建集群。
+
+2. 将成员集群接入 Karmada 控制面
+
+导出 `KUBECONFIG` 并切换到 `karmada apiserver`:
+
+```shell
+export KUBECONFIG=$HOME/.kube/karmada.config
+
+kubectl config use-context karmada-apiserver
+```
+
+然后安装 `karmadactl` 命令行工具并接入成员集群:
+
+```shell
+go install github.com/karmada-io/karmada/cmd/karmadactl
+
+karmadactl join cluster1 --cluster-kubeconfig=$HOME/.kube/cluster1.config
+```
+
+除原始成员集群外,确保至少有两个成员集群接入 Karmada。
+
+在本例中,我们将两个成员集群接入了 Karmada:
+
+```console
+# kubectl get clusters
+NAME VERSION MODE READY AGE
+cluster1 v1.21.1 Push True 16s
+cluster2 v1.21.1 Push True 5s
+...
+```
+
+## 部署 Submariner
+
+我们将使用 `subctl` CLI 在 `host cluster` 和 `member clusters` 上部署 `Submariner` 组件。
+按照[Submariner 官方文档](https://github.com/submariner-io/submariner/tree/b4625514061c1d85c10432a78ca0ad46e679367a#installation),这是推荐的部署方法。
+
+`Submariner` 使用一个中央 Broker 组件来简化所有相关集群中所部署的 Gateway Engines 之间的元数据信息交换。
+Broker 必须部署在单个 Kubernetes 集群上。该集群的 API server必须能够到达 Submariner 连接的所有 Kubernetes 集群,因此我们将其部署在 karmada-host 集群上。
+
+### 安装 subctl
+
+请参阅 [SUBCTL 安装](https://submariner.io/operations/deployment/subctl/)。
+
+### 将 karmada-host 用作 Broker
+
+```shell
+subctl deploy-broker --kubeconfig /root/.kube/karmada.config --context karmada-host
+```
+
+### 将 cluster1 和 cluster2 接入 Broker
+
+```shell
+subctl join --kubeconfig /root/.kube/cluster1.config broker-info.subm --natt=false
+```
+
+```shell
+subctl join --kubeconfig /root/.kube/cluster2.config broker-info.subm --natt=false
+```
+
+## 连通性测试
+
+请参阅[多集群服务发现](../service/multi-cluster-service.md)。
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/userguide/scheduling/cluster-resources.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/userguide/scheduling/cluster-resources.md
new file mode 100644
index 000000000..8dc883d3a
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/userguide/scheduling/cluster-resources.md
@@ -0,0 +1,368 @@
+---
+title: 基于集群资源模型的调度
+---
+## 概览
+
+在将应用程序调度到特定集群时,目标集群的资源状态是一个不容忽视的因素。 例如,当集群资源不足以运行给定的实例时,我们希望调度器尽可能避免这种调度行为。
+本文将重点介绍 Karmada 如何基于集群资源模型进行实例的调度。
+
+## 集群资源模型
+
+在调度过程中,`karmada-scheduler` 现在根据一系列因素做出决策,其中一个因素是集群的资源状态。
+现在 Karmada 有两种不同的基于集群资源的调度方式,其中一种是通用的集群模型,另一种是自定义的集群模型。
+
+### 通用集群资源模型
+
+#### 使用通用集群资源模型
+
+出于上述目的,Karmada在[Cluster API](https://github.com/karmada-io/karmada/blob/master/pkg/apis/cluster/types.go) 引入了`ResourceSummary`的概念。
+
+以下给出了一个`ResourceSummary`的例子:
+
+```
+resourceSummary:
+ allocatable:
+ cpu: "4"
+ ephemeral-storage: 206291924Ki
+ hugepages-1Gi: "0"
+ hugepages-2Mi: "0"
+ memory: 16265856Ki
+ pods: "110"
+ allocated:
+ cpu: 950m
+ memory: 290Mi
+ pods: "11"
+```
+
+从上面的例子中,我们可以知道集群的可分配资源和已分配资源。
+
+#### 基于通用集群资源模型的调度
+
+假设在Karmada控制面上注册了三个成员集群,这时过来一个Pod的调度请求。
+
+Member1:
+
+```
+resourceSummary:
+ allocatable:
+ cpu: "4"
+ ephemeral-storage: 206291924Ki
+ hugepages-1Gi: "0"
+ hugepages-2Mi: "0"
+ memory: 16265856Ki
+ pods: "110"
+ allocated:
+ cpu: 950m
+ memory: 290Mi
+ pods: "11"
+```
+
+Member2:
+
+```
+resourceSummary:
+ allocatable:
+ cpu: "4"
+ ephemeral-storage: 206291924Ki
+ hugepages-1Gi: "0"
+ hugepages-2Mi: "0"
+ memory: 16265856Ki
+ pods: "110"
+ allocated:
+ cpu: "2"
+ memory: 290Mi
+ pods: "11"
+```
+
+Member3:
+
+```
+resourceSummary:
+ allocatable:
+ cpu: "4"
+ ephemeral-storage: 206291924Ki
+ hugepages-1Gi: "0"
+ hugepages-2Mi: "0"
+ memory: 16265856Ki
+ pods: "110"
+ allocated:
+ cpu: "2"
+ memory: 290Mi
+ pods: "110"
+```
+
+假设这个Pod的资源请求是500m CPU。 显然,Member1和Member2有足够的资源来运行这个副本,但Member3没有Pod的配额。
+考虑到可用资源的数量,调度器更倾向于将Pod调度到member1。
+
+| Cluster | member1 | member2 | member3 |
+| ------------------- | ----------- | ----------- | ---------------------------- |
+| AvailableReplicas | (4 - 0.95) / 0.5 = 6.1 | (4 - 2) / 0.5 = 4 | 0 |
+
+### 自定义集群资源模型
+
+#### 背景
+
+`ResourceSummary` 描述了集群的整体可用资源。
+但是,`ResourceSummary`不够精确,它机械地统计所有节点上的资源,而忽略了节点上的碎片资源。例如,一个有 2000 个节点的集群,每个节点上只剩下1核CPU。
+从 `ResourceSummary` 中我们获知集群还有 2000核CPU,但事实上,这个集群甚至无法运行任何需要1核CPU以上的Pod实例。
+
+因此,我们为每个集群引入了“自定义资源模型”的概念,来记录每个节点的资源画像。Karmada将收集每个集群的节点和Pod信息,并经过计算将每个节点划分到对应等级的资源模型中。
+
+#### 启用自定义资源模型
+
+`自定义集群资源模型`特性开关从Karmada v1.4后处于Beta阶段,并且默认开启。如果你使用Karmada v1.3,你需要在 `karmada-scheduler`、`karmada-aggregated-server` 和 `karmada-controller-manager` 中开启 `CustomizedClusterResourceModeling` 特性开关。
+
+例如,你可以使用以下命令打开 `karmada-controller-manager` 中的特性开关。
+
+```shell
+kubectl --kubeconfig ~/.kube/karmada.config --context karmada-host edit deploy/karmada-controller-manager -nkarmada-system
+```
+
+```
+- command:
+ - /bin/karmada-controller-manager
+ - --kubeconfig=/etc/kubeconfig
+ - --bind-address=0.0.0.0
+ - --cluster-status-update-frequency=10s
+ - --secure-port=10357
+ - --feature-gates=CustomizedClusterResourceModeling=true
+ - --v=4
+
+```
+
+在开启特性开关后,当集群注册到 Karmada 控制面时,Karmada会自动为集群设置一个通用的资源模型。你可以在 `cluster.spec` 中看到它。
+
+默认的资源模型如下:
+
+```
+resourceModels:
+ - grade: 0
+ ranges:
+ - max: "1"
+ min: "0"
+ name: cpu
+ - max: 4Gi
+ min: "0"
+ name: memory
+ - grade: 1
+ ranges:
+ - max: "2"
+ min: "1"
+ name: cpu
+ - max: 16Gi
+ min: 4Gi
+ name: memory
+ - grade: 2
+ ranges:
+ - max: "4"
+ min: "2"
+ name: cpu
+ - max: 32Gi
+ min: 16Gi
+ name: memory
+ - grade: 3
+ ranges:
+ - max: "8"
+ min: "4"
+ name: cpu
+ - max: 64Gi
+ min: 32Gi
+ name: memory
+ - grade: 4
+ ranges:
+ - max: "16"
+ min: "8"
+ name: cpu
+ - max: 128Gi
+ min: 64Gi
+ name: memory
+ - grade: 5
+ ranges:
+ - max: "32"
+ min: "16"
+ name: cpu
+ - max: 256Gi
+ min: 128Gi
+ name: memory
+ - grade: 6
+ ranges:
+ - max: "64"
+ min: "32"
+ name: cpu
+ - max: 512Gi
+ min: 256Gi
+ name: memory
+ - grade: 7
+ ranges:
+ - max: "128"
+ min: "64"
+ name: cpu
+ - max: 1Ti
+ min: 512Gi
+ name: memory
+ - grade: 8
+ ranges:
+ - max: "9223372036854775807"
+ min: "128"
+ name: cpu
+ - max: "9223372036854775807"
+ min: 1Ti
+ name: memory
+```
+
+#### 自定义你的集群资源模型
+
+在某些情况下,默认的集群资源模型可能与你的集群不相匹配。你可以调整集群资源模型的细粒度,以便更好地向集群下发资源。
+例如,你可以使用以下命令编辑 member1 的集群资源模型。
+
+```shell
+kubectl --kubeconfig ~/.kube/karmada.config --context karmada-apiserver edit cluster/member1
+```
+
+自定义资源模型应满足以下要求:
+
+* 每个模型的等级不应该是相同的。
+* 每个模型中资源类型的数量应该相同。
+* 目前只支持 cpu, memory, storage, ephemeral-storage四种资源类型。
+* 每个资源的最大值必须大于最小值。
+* 第一个模型中每个资源的最小值应为 0。
+* 最后一个模型中每个资源的最大值应为 MaxInt64。
+* 每个模型的资源类型应该相同。
+* 从低等级到高等级的模型,资源的范围必须连续且不重叠。
+
+例如:以下给出了一个自定义的集群资源模型:
+
+```
+resourceModels:
+ - grade: 0
+ ranges:
+ - max: "1"
+ min: "0"
+ name: cpu
+ - max: 4Gi
+ min: "0"
+ name: memory
+ - grade: 1
+ ranges:
+ - max: "2"
+ min: "1"
+ name: cpu
+ - max: 16Gi
+ min: 4Gi
+ name: memory
+ - grade: 2
+ ranges:
+ - max: "9223372036854775807"
+ min: "2"
+ name: cpu
+ - max: "9223372036854775807"
+ min: 16Gi
+ name: memory
+```
+
+上述是一个有三个等级的集群资源模型,每个等级分别定义了CPU和内存这两种资源的资源范围。这时如果一个节点的剩余可用资源为0.5核CPU和2Gi内存,则会被划分为0级的资源模型。如果这个节点的剩余可用资源为1.5核CPU和10Gi内存,则会被划分为1级。
+
+#### 基于自定义集群资源模型的调度
+
+`自定义集群资源模型`将节点划分为不同区间的等级,并且当一个Pod实例需要调度到特定集群时,`karmada-scheduler`根据将要调度的实例资源请求比较不同集群中满足要求的节点数,并将实例调度到满足要求的节点数更多的集群。
+假设有三个注册在Karmada控制面的成员集群,采用默认设置的集群资源模型,这些集群的剩余可用资源情况如下。
+
+成员集群1:
+
+```
+spec:
+...
+ - grade: 2
+ ranges:
+ - max: "4"
+ min: "2"
+ name: cpu
+ - max: 32Gi
+ min: 16Gi
+ name: memory
+ - grade: 3
+ ranges:
+ - max: "8"
+ min: "4"
+ name: cpu
+ - max: 64Gi
+ min: 32Gi
+ name: memory
+...
+...
+status:
+ - count: 1
+ grade: 2
+ - count: 6
+ grade: 3
+```
+
+成员集群2:
+
+```
+spec:
+...
+ - grade: 2
+ ranges:
+ - max: "4"
+ min: "2"
+ name: cpu
+ - max: 32Gi
+ min: 16Gi
+ name: memory
+ - grade: 3
+ ranges:
+ - max: "8"
+ min: "4"
+ name: cpu
+ - max: 64Gi
+ min: 32Gi
+ name: memory
+...
+...
+status:
+ - count: 4
+ grade: 2
+ - count: 4
+ grade: 3
+```
+
+成员集群3:
+
+```
+spec:
+...
+ - grade: 6
+ ranges:
+ - max: "64"
+ min: "32"
+ name: cpu
+ - max: 512Gi
+ min: 256Gi
+ name: memory
+...
+...
+status:
+ - count: 1
+ grade: 6
+```
+
+假设这时过来一个Pod的调度请求,Pod的资源请求是3核CPU和20Gi内存。那么,Karmada认为所有满足等级2及以上的节点满足此要求。考虑到不同集群可用节点的数量,调度器更倾向于将Pod调度到成员集群3。
+
+| Cluster | member1 | member2 | member3 |
+| ------------------- | ----------- | ----------- | ---------------------------- |
+| AvailableReplicas | 1 + 6 = 7 | 4 + 4 = 8 | 1 * min(32/3, 256/20) = 10 |
+
+假设这时过来一个Pod的调度请求,Pod的资源请求是3核CPU和60Gi内存。那么,这时等级2的节点已经无法满足Pod所需所有资源的要求。考虑到不同集群可用节点的数量,调度器更倾向于将Pod调度到成员集群1。
+
+| Cluster | member1 | member2 | member3 |
+| ------------------- | ----------- | ----------- | --------------------------- |
+| AvailableReplicas | 6 * 1 = 6 | 4 * 1 = 4 | 1 * min(32/3, 256/60) = 4 |
+
+## 禁用集群资源模型
+
+在基于集群可用资源的动态副本分配场景中,调度器总是会参考资源模型来做出调度决策。
+在资源建模的过程中,不论是通用集群资源建模还是自定义集群资源建模,Karmada都会从管理的所有集群中收集节点和Pod信息。
+这在大规模场景中带来了不小的性能负担。
+
+你可以通过在 `karmada-controller-manager` 和 `karmada-agent` 中将 `--enable-cluster-resource-modeling` 设置为 false 来禁用集群资源模型。
\ No newline at end of file
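+
+作为参考,可以在相应组件的启动参数中加入该标志,例如 karmada-controller-manager(仅为示意):
+
+```yaml
+# 示意:关闭集群资源建模
+- command:
+    - /bin/karmada-controller-manager
+    - --kubeconfig=/etc/kubeconfig
+    - --enable-cluster-resource-modeling=false
+```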
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/userguide/scheduling/descheduler.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/userguide/scheduling/descheduler.md
new file mode 100644
index 000000000..c2fdaacf1
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/userguide/scheduling/descheduler.md
@@ -0,0 +1,150 @@
+---
+title: Descheduler For Rescheduling
+---
+
+Users can divide the replicas of a workload into different clusters according to the available resources of member clusters.
+However, the scheduler's decisions are based on its view of Karmada at the point in time when a new `ResourceBinding`
+appears for scheduling. As Karmada multi-clusters are highly dynamic and their state changes over time, there may be a need
+to move already running replicas to other clusters because a cluster runs short of resources. This may happen when
+some nodes of a cluster fail and the cluster no longer has enough resources to accommodate their pods, or when the estimators
+have some estimation deviation, which is inevitable.
+
+The karmada-descheduler detects all deployments periodically, every 2 minutes by default. In each period, it finds out
+how many unschedulable replicas a deployment has in its target scheduled clusters by calling the karmada-scheduler-estimator. Then
+it evicts them by decreasing the replicas in `spec.clusters` and triggers karmada-scheduler to do a 'Scale Schedule' based on the current
+situation. Note that it takes effect only when the replica scheduling strategy is dynamic division.
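+
+If the default period does not fit your environment, the detection interval is configurable on the descheduler.
+Below is a sketch, assuming the deployment is named `karmada-descheduler` and the flag is `--descheduling-interval`; please verify both against your karmada-descheduler version:
+
+```bash
+# edit the karmada-descheduler deployment in the host cluster
+kubectl --context karmada-host -n karmada-system edit deployment karmada-descheduler
+# then adjust the interval flag in the container command, e.g.
+#   - --descheduling-interval=5m
+```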
+
+## Prerequisites
+
+### Karmada has been installed
+
+We can install Karmada by referring to [quick-start](https://github.com/karmada-io/karmada#quick-start), or directly run `hack/local-up-karmada.sh` script which is also used to run our E2E cases.
+
+### Member cluster component is ready
+
+Ensure that all member clusters have joined Karmada and their corresponding karmada-scheduler-estimator is installed into karmada-host.
+
+Check member clusters using the following command:
+
+```bash
+# check whether member clusters have joined
+$ kubectl get cluster
+NAME VERSION MODE READY AGE
+member1 v1.19.1 Push True 11m
+member2 v1.19.1 Push True 11m
+member3 v1.19.1 Pull True 5m12s
+
+# check whether the karmada-scheduler-estimator of a member cluster has been working well
+$ kubectl --context karmada-host -n karmada-system get pod | grep estimator
+karmada-scheduler-estimator-member1-696b54fd56-xt789 1/1 Running 0 77s
+karmada-scheduler-estimator-member2-774fb84c5d-md4wt 1/1 Running 0 75s
+karmada-scheduler-estimator-member3-5c7d87f4b4-76gv9 1/1 Running 0 72s
+```
+
+- If a cluster has not joined, use `hack/deploy-agent-and-estimator.sh` to deploy both karmada-agent and karmada-scheduler-estimator.
+- If the clusters have joined, use `hack/deploy-scheduler-estimator.sh` to only deploy karmada-scheduler-estimator.
+
+### Scheduler option '--enable-scheduler-estimator'
+
+After all member clusters have joined and estimators are all ready, specify the option `--enable-scheduler-estimator=true` to enable scheduler estimator.
+
+```bash
+# edit the deployment of karmada-scheduler
+$ kubectl --context karmada-host -n karmada-system edit deployments.apps karmada-scheduler
+```
+
+Add the option `--enable-scheduler-estimator=true` into the command of container `karmada-scheduler`.
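+
+For reference, after the edit the container command could look roughly like this (a sketch; the binary path and the remaining flags are illustrative and depend on your installation):
+
+```yaml
+    spec:
+      containers:
+        - name: karmada-scheduler
+          command:
+            - /bin/karmada-scheduler
+            - --kubeconfig=/etc/kubeconfig
+            - --enable-scheduler-estimator=true
+            # ...keep the other existing flags unchanged
+```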
+
+### Descheduler has been installed
+
+Ensure that the karmada-descheduler has been installed onto karmada-host.
+
+```bash
+$ kubectl --context karmada-host -n karmada-system get pod | grep karmada-descheduler
+karmada-descheduler-658648d5b-c22qf 1/1 Running 0 80s
+```
+
+## Example
+
+Let's simulate a replica scheduling failure in a member cluster due to lack of resources.
+
+First we create a deployment with 3 replicas and divide them into 3 member clusters.
+
+```yaml
+apiVersion: policy.karmada.io/v1alpha1
+kind: PropagationPolicy
+metadata:
+ name: nginx-propagation
+spec:
+ resourceSelectors:
+ - apiVersion: apps/v1
+ kind: Deployment
+ name: nginx
+ placement:
+ clusterAffinity:
+ clusterNames:
+ - member1
+ - member2
+ - member3
+ replicaScheduling:
+ replicaDivisionPreference: Weighted
+ replicaSchedulingType: Divided
+ weightPreference:
+ dynamicWeight: AvailableReplicas
+---
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+ name: nginx
+ labels:
+ app: nginx
+spec:
+ replicas: 3
+ selector:
+ matchLabels:
+ app: nginx
+ template:
+ metadata:
+ labels:
+ app: nginx
+ spec:
+ containers:
+ - image: nginx
+ name: nginx
+ resources:
+ requests:
+ cpu: "2"
+```
+
+It is possible for these 3 replicas to be evenly divided into 3 member clusters, that is, one replica in each cluster.
+Now we mark all nodes in member1 as unschedulable and evict the replica.
+
+```bash
+# mark node "member1-control-plane" as unschedulable in cluster member1
+$ kubectl --context member1 cordon member1-control-plane
+# delete the pod in cluster member1
+$ kubectl --context member1 delete pod -l app=nginx
+```
+
+A new pod will be created and cannot be scheduled by `kube-scheduler` due to lack of resources.
+
+```bash
+# the state of pod in cluster member1 is pending
+$ kubectl --context member1 get pod
+NAME READY STATUS RESTARTS AGE
+nginx-68b895fcbd-fccg4   0/1     Pending   0          80s
+```
+
+After about 5 to 7 minutes, the pod in member1 will be evicted and scheduled to other available clusters.
+
+```bash
+# get the pod in cluster member1
+$ kubectl --context member1 get pod
+No resources found in default namespace.
+# get a list of pods in cluster member2
+$ kubectl --context member2 get pod
+NAME READY STATUS RESTARTS AGE
+nginx-68b895fcbd-dgd4x 1/1 Running 0 6m3s
+nginx-68b895fcbd-nwgjn 1/1 Running 0 4s
+```
+
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/userguide/scheduling/override-policy.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/userguide/scheduling/override-policy.md
new file mode 100644
index 000000000..7946e938d
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/userguide/scheduling/override-policy.md
@@ -0,0 +1,477 @@
+---
+title: Override Policy
+---
+
+The [OverridePolicy][1] and [ClusterOverridePolicy][2] are used to declare override rules for resources when
+they are propagating to different clusters.
+
+## Difference between OverridePolicy and ClusterOverridePolicy
+ClusterOverridePolicy is the cluster-wide policy that overrides a group of resources propagated to one or more clusters, while OverridePolicy is the namespace-wide policy that applies to resources in the same namespace. For cluster-scoped resources, ClusterOverridePolicies are applied in ascending order of policy name. For namespace-scoped resources, ClusterOverridePolicy is applied first, then OverridePolicy.
+
+## Resource Selector
+
+ResourceSelectors restricts the resources that this override policy applies to. If this field is omitted, the policy matches all resources.
+
+A resource selector requires the `apiVersion` field, which represents the API version of the target resources, and the `kind` field, which represents the kind of the target resources.
+The other allowed selectors are as follows:
+- `namespace`: namespace of the target resource.
+- `name`: name of the target resource.
+- `labelSelector`: a label query over a set of resources.
+
+#### Examples
+```yaml
+apiVersion: policy.karmada.io/v1alpha1
+kind: OverridePolicy
+metadata:
+ name: example
+spec:
+ resourceSelectors:
+ - apiVersion: apps/v1
+ kind: Deployment
+ name: nginx
+ namespace: test
+ labelSelector:
+ matchLabels:
+ app: nginx
+ overrideRules:
+ #...
+```
+It means the override rules above will only be applied to the `Deployment` named `nginx` in the `test` namespace that has the label `app: nginx`.
+
+## Target Cluster
+
+Target Cluster restricts the override policy so that it only applies to resources propagated to the matching clusters. If this field is omitted, the policy matches all clusters.
+
+The allowed selectors are as follows:
+- `labelSelector`: a filter to select member clusters by labels.
+- `fieldSelector`: a filter to select member clusters by fields. Currently only three fields are supported: provider (`cluster.spec.provider`), zone (`cluster.spec.zone`), and region (`cluster.spec.region`).
+- `clusterNames`: the list of clusters to be selected.
+- `exclude`: the list of clusters to be ignored.
+
+### labelSelector
+
+#### Examples
+```yaml
+apiVersion: policy.karmada.io/v1alpha1
+kind: OverridePolicy
+metadata:
+ name: example
+spec:
+ #...
+ overrideRules:
+ - targetCluster:
+ labelSelector:
+ matchLabels:
+ cluster: member1
+ overriders:
+ #...
+```
+It means the override rules above will only be applied to resources propagated to clusters that have the `cluster: member1` label.
+
+### fieldSelector
+
+#### Examples
+```yaml
+apiVersion: policy.karmada.io/v1alpha1
+kind: OverridePolicy
+metadata:
+ name: example
+spec:
+ #...
+ overrideRules:
+ - targetCluster:
+ fieldSelector:
+ matchExpressions:
+ - key: region
+ operator: In
+ values:
+ - cn-north-1
+ overriders:
+ #...
+```
+It means the override rules above will only be applied to resources propagated to clusters whose `spec.region` field has a value in [cn-north-1].
+
+#### Examples
+```yaml
+apiVersion: policy.karmada.io/v1alpha1
+kind: OverridePolicy
+metadata:
+ name: example
+spec:
+ #...
+ overrideRules:
+ - targetCluster:
+ fieldSelector:
+ matchExpressions:
+ - key: provider
+ operator: In
+ values:
+ - aws
+ overriders:
+ #...
+```
+It means the override rules above will only be applied to resources propagated to clusters whose `spec.provider` field has a value in [aws].
+
+#### Examples
+```yaml
+apiVersion: policy.karmada.io/v1alpha1
+kind: OverridePolicy
+metadata:
+ name: example
+spec:
+ #...
+ overrideRules:
+ - targetCluster:
+ fieldSelector:
+ matchExpressions:
+ - key: zone
+ operator: In
+ values:
+ - us
+ overriders:
+ #...
+```
+It means the override rules above will only be applied to resources propagated to clusters whose `spec.zone` field has a value in [us].
+
+### clusterNames
+
+#### Examples
+```yaml
+apiVersion: policy.karmada.io/v1alpha1
+kind: OverridePolicy
+metadata:
+ name: example
+spec:
+ #...
+ overrideRules:
+ - targetCluster:
+ clusterNames:
+ - member1
+ overriders:
+ #...
+```
+It means the override rules above will only be applied to resources propagated to the cluster named member1.
+
+### exclude
+
+#### Examples
+```yaml
+apiVersion: policy.karmada.io/v1alpha1
+kind: OverridePolicy
+metadata:
+ name: example
+spec:
+ #...
+ overrideRules:
+ - targetCluster:
+ exclude:
+ - member1
+ overriders:
+ #...
+```
+It means the override rules above will only be applied to resources propagated to clusters other than member1.
+
+## Overriders
+
+Karmada offers various alternatives to declare the override rules:
+- `ImageOverrider`: overrides images for workloads.
+- `CommandOverrider`: overrides commands for workloads.
+- `ArgsOverrider`: overrides args for workloads.
+- `LabelsOverrider`: overrides labels for workloads.
+- `AnnotationsOverrider`: overrides annotations for workloads.
+- `PlaintextOverrider`: a general-purpose tool to override any kind of resources.
+
+### ImageOverrider
+The `ImageOverrider` is a refined tool to override images with format `[registry/]repository[:tag|@digest]`(e.g.`/spec/template/spec/containers/0/image`) for workloads such as `Deployment`.
+
+The allowed operations are as follows:
+- `add`: appends the registry, repository or tag/digest to the image from containers.
+- `remove`: removes the registry, repository or tag/digest from the image from containers.
+- `replace`: replaces the registry, repository or tag/digest of the image from containers.
+
+#### Examples
+Suppose we create a deployment named `myapp`.
+```yaml
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+ name: myapp
+ #...
+spec:
+ template:
+ spec:
+ containers:
+ - image: myapp:1.0.0
+ name: myapp
+```
+
+**Example 1: Add the registry when workloads are propagating to specific clusters.**
+```yaml
+apiVersion: policy.karmada.io/v1alpha1
+kind: OverridePolicy
+metadata:
+ name: example
+spec:
+ #...
+ overrideRules:
+ - overriders:
+ imageOverrider:
+ - component: Registry
+ operator: add
+ value: test-repo
+```
+It means `add` the registry `test-repo` to the image of `myapp`.
+
+After the policy is applied for `myapp`, the image will be:
+```yaml
+ containers:
+ - image: test-repo/myapp:1.0.0
+ name: myapp
+```
+
+**Example 2: replace the repository when workloads are propagating to specific clusters.**
+```yaml
+apiVersion: policy.karmada.io/v1alpha1
+kind: OverridePolicy
+metadata:
+ name: example
+spec:
+ #...
+ overrideRules:
+ - overriders:
+ imageOverrider:
+ - component: Repository
+ operator: replace
+ value: myapp2
+```
+It means `replace` the repository `myapp` with `myapp2`.
+
+After the policy is applied for `myapp`, the image will be:
+```yaml
+ containers:
+ - image: myapp2:1.0.0
+ name: myapp
+```
+
+**Example 3: remove the tag when workloads are propagating to specific clusters.**
+```yaml
+apiVersion: policy.karmada.io/v1alpha1
+kind: OverridePolicy
+metadata:
+ name: example
+spec:
+ #...
+ overrideRules:
+ - overriders:
+ imageOverrider:
+ - component: Tag
+ operator: remove
+```
+It means `remove` the tag of the image `myapp`.
+
+After the policy is applied for `myapp`, the image will be:
+```yaml
+ containers:
+ - image: myapp
+ name: myapp
+```
+
+### CommandOverrider
+The `CommandOverrider` is a refined tool to override commands(e.g.`/spec/template/spec/containers/0/command`)
+for workloads, such as `Deployment`.
+
+The allowed operations are as follows:
+- `add`: appends one or more flags to the command list.
+- `remove`: removes one or more flags from the command list.
+
+#### Examples
+Suppose we create a deployment named `myapp`.
+```yaml
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+ name: myapp
+ #...
+spec:
+ template:
+ spec:
+ containers:
+ - image: myapp
+ name: myapp
+ command:
+ - ./myapp
+ - --parameter1=foo
+ - --parameter2=bar
+```
+
+**Example 1: Add flags when workloads are propagating to specific clusters.**
+```yaml
+apiVersion: policy.karmada.io/v1alpha1
+kind: OverridePolicy
+metadata:
+ name: example
+spec:
+ #...
+ overrideRules:
+ - overriders:
+ commandOverrider:
+ - containerName: myapp
+ operator: add
+ value:
+ - --cluster=member1
+```
+It means `add` (append) a new flag `--cluster=member1` to the command list of `myapp`.
+
+After the policy is applied for `myapp`, the command list will be:
+```yaml
+ containers:
+ - image: myapp
+ name: myapp
+ command:
+ - ./myapp
+ - --parameter1=foo
+ - --parameter2=bar
+ - --cluster=member1
+```
+
+**Example 2: Remove flags when workloads are propagating to specific clusters.**
+```yaml
+apiVersion: policy.karmada.io/v1alpha1
+kind: OverridePolicy
+metadata:
+ name: example
+spec:
+ #...
+ overrideRules:
+ - overriders:
+ commandOverrider:
+ - containerName: myapp
+ operator: remove
+ value:
+ - --parameter1=foo
+```
+It means `remove` the flag `--parameter1=foo` from the command list.
+
+After the policy is applied for `myapp`, the `command` will be:
+```yaml
+ containers:
+ - image: myapp
+ name: myapp
+ command:
+ - ./myapp
+ - --parameter2=bar
+```
+
+### ArgsOverrider
+The `ArgsOverrider` is a refined tool to override args (such as `/spec/template/spec/containers/0/args`) for workloads,
+such as `Deployment`.
+
+The allowed operations are as follows:
+- `add`: appends one or more args to the args list.
+- `remove`: removes one or more args from the args list.
+
+Note: `ArgsOverrider` functions in a similar way to `CommandOverrider`. You can refer to the `CommandOverrider` examples, or see the sketch below.
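+
+For completeness, a minimal sketch of an `argsOverrider` rule, mirroring Example 1 of `CommandOverrider` (the field layout is assumed to match the `commandOverrider` examples above):
+
+```yaml
+apiVersion: policy.karmada.io/v1alpha1
+kind: OverridePolicy
+metadata:
+  name: example
+spec:
+  #...
+  overrideRules:
+    - overriders:
+        argsOverrider:
+          - containerName: myapp
+            operator: add
+            value:
+              - --cluster=member1
+```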
+
+
+### LabelsOverrider
+
+The allowed operations are as follows:
+- `add`: the items in `value` will be appended to the labels.
+- `remove`: if an item in `value` matches an item in the labels, the matching label will be deleted; otherwise nothing is done.
+- `replace`: if a key in `value` matches a key in the labels, the value of the matching label will be replaced; otherwise nothing is done.
+
+#### Examples
+Suppose we create a deployment named `myapp`.
+```yaml
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+ name: myapp
+ labels:
+ foo: foo
+ baz: baz
+ #...
+spec:
+ template:
+ spec:
+ containers:
+ - image: myapp:1.0.0
+ name: myapp
+```
+
+**Example 1: add/remove/replace labels**
+```yaml
+apiVersion: policy.karmada.io/v1alpha1
+kind: OverridePolicy
+metadata:
+ name: example
+spec:
+ #...
+ overrideRules:
+ - overriders:
+ labelsOverrider:
+ - operator: add
+ value:
+ bar: bar # It will be added to labels
+ - operator: replace
+ value:
+ foo: exist # "foo: foo" will be replaced by "foo: exist"
+ - operator: remove
+ value:
+ baz: baz # It will be removed from labels
+```
+
+### AnnotationsOverrider
+
+Note: `AnnotationsOverrider` functions in a similar way to `LabelsOverrider`. You can refer to the `LabelsOverrider` examples.
+
+
+### PlaintextOverrider
+The `PlaintextOverrider` is a simple overrider that overrides target fields according to path, operator and value, just like `kubectl patch`.
+
+The allowed operations are as follows:
+- `add`: appends one or more elements to the resources.
+- `remove`: removes one or more elements from the resources.
+- `replace`: replaces one or more elements from the resources.
+
+Suppose we create a configmap named `myconfigmap`.
+```yaml
+apiVersion: v1
+kind: ConfigMap
+metadata:
+ name: myconfigmap
+ #...
+data:
+ example: 1
+```
+
+**Example 1: replace data of the configmap when resources are propagating to specific clusters.**
+```yaml
+apiVersion: policy.karmada.io/v1alpha1
+kind: OverridePolicy
+metadata:
+ name: example
+spec:
+ #...
+ overrideRules:
+ - overriders:
+ plaintext:
+ - path: /data/example
+ operator: replace
+ value: 2
+```
+It means `replace` the data of the configmap from `example: 1` to `example: 2`.
+
+After the policy is applied for `myconfigmap`, the configmap will be:
+```yaml
+apiVersion: v1
+kind: ConfigMap
+metadata:
+ name: myconfigmap
+ #...
+data:
+ example: 2
+```
+
+[1]: https://github.com/karmada-io/karmada/blob/c37bedc1cfe5a98b47703464fed837380c90902f/pkg/apis/policy/v1alpha1/override_types.go#L13
+[2]: https://github.com/karmada-io/karmada/blob/c37bedc1cfe5a98b47703464fed837380c90902f/pkg/apis/policy/v1alpha1/override_types.go#L189
\ No newline at end of file
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/userguide/scheduling/propagate-dependencies.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/userguide/scheduling/propagate-dependencies.md
new file mode 100644
index 000000000..c7a926cbc
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/userguide/scheduling/propagate-dependencies.md
@@ -0,0 +1,123 @@
+---
+title: Propagate dependencies
+---
+
+Deployment, Job, Pod, DaemonSet and StatefulSet dependencies (ConfigMaps and Secrets) can be propagated to member
+clusters automatically. This document demonstrates how to use this feature. For more design details, please refer to
+[dependencies-automatically-propagation](https://github.com/karmada-io/karmada/blob/master/docs/proposals/dependencies-automatically-propagation/README.md)
+
+## Prerequisites
+### Karmada has been installed
+
+We can install Karmada by referring to [quick-start](https://github.com/karmada-io/karmada#quick-start), or directly run
+`hack/local-up-karmada.sh` script which is also used to run our E2E cases.
+
+### Enable PropagateDeps feature
+
+The `PropagateDeps` feature gate has been in Beta since Karmada v1.4 and is enabled by default. If you use Karmada v1.3 or earlier, you need to enable this feature gate.
+
+```bash
+kubectl edit deployment karmada-controller-manager -n karmada-system
+```
+Add the `--feature-gates=PropagateDeps=true` option to the command of the `karmada-controller-manager` container.
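+
+After editing, the relevant part of the container command could look like this (a sketch; the binary path and the surrounding flags are illustrative and depend on your installation):
+
+```yaml
+          command:
+            - /bin/karmada-controller-manager
+            - --feature-gates=PropagateDeps=true
+            # ...other existing flags stay unchanged
+```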
+
+## Example
+Create a Deployment mounted with a ConfigMap
+```yaml
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+ name: my-nginx
+ labels:
+ app: my-nginx
+spec:
+ replicas: 2
+ selector:
+ matchLabels:
+ app: my-nginx
+ template:
+ metadata:
+ labels:
+ app: my-nginx
+ spec:
+ containers:
+ - image: nginx
+ name: my-nginx
+ ports:
+ - containerPort: 80
+ volumeMounts:
+ - name: configmap
+ mountPath: "/configmap"
+ volumes:
+ - name: configmap
+ configMap:
+ name: my-nginx-config
+---
+apiVersion: v1
+kind: ConfigMap
+metadata:
+ name: my-nginx-config
+data:
+ nginx.properties: |
+ proxy-connect-timeout: "10s"
+ proxy-read-timeout: "10s"
+ client-max-body-size: "2m"
+```
+Create a propagation policy with this Deployment and set `propagateDeps: true`.
+```yaml
+apiVersion: policy.karmada.io/v1alpha1
+kind: PropagationPolicy
+metadata:
+ name: my-nginx-propagation
+spec:
+ propagateDeps: true
+ resourceSelectors:
+ - apiVersion: apps/v1
+ kind: Deployment
+ name: my-nginx
+ placement:
+ clusterAffinity:
+ clusterNames:
+ - member1
+ - member2
+ replicaScheduling:
+ replicaDivisionPreference: Weighted
+ replicaSchedulingType: Divided
+ weightPreference:
+ staticWeightList:
+ - targetCluster:
+ clusterNames:
+ - member1
+ weight: 1
+ - targetCluster:
+ clusterNames:
+ - member2
+ weight: 1
+```
+Upon successful policy execution, the Deployment and ConfigMap are properly propagated to the member clusters.
+```bash
+$ kubectl --kubeconfig /etc/karmada/karmada-apiserver.config get propagationpolicy
+NAME AGE
+my-nginx-propagation 16s
+$ kubectl --kubeconfig /etc/karmada/karmada-apiserver.config get deployment
+NAME READY UP-TO-DATE AVAILABLE AGE
+my-nginx 2/2 2 2 22m
+# member cluster1
+$ kubectl config use-context member1
+Switched to context "member1".
+$ kubectl get deployment
+NAME READY UP-TO-DATE AVAILABLE AGE
+my-nginx 1/1 1 1 25m
+$ kubectl get configmap
+NAME DATA AGE
+my-nginx-config 1 26m
+# member cluster2
+$ kubectl config use-context member2
+Switched to context "member2".
+$ kubectl get deployment
+NAME READY UP-TO-DATE AVAILABLE AGE
+my-nginx 1/1 1 1 27m
+$ kubectl get configmap
+NAME DATA AGE
+my-nginx-config 1 27m
+```
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/userguide/scheduling/resource-propagating.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/userguide/scheduling/resource-propagating.md
new file mode 100644
index 000000000..2d9b4d9d7
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/userguide/scheduling/resource-propagating.md
@@ -0,0 +1,673 @@
+---
+title: Resource Propagating
+---
+
+The [PropagationPolicy](https://github.com/karmada-io/karmada/blob/master/pkg/apis/policy/v1alpha1/propagation_types.go#L13) and [ClusterPropagationPolicy](https://github.com/karmada-io/karmada/blob/master/pkg/apis/policy/v1alpha1/propagation_types.go#L292) APIs are provided to propagate resources. For the differences between the two APIs, please see [here](../../faq/faq.md#what-is-the-difference-between-propagationpolicy-and-clusterpropagationpolicy).
+
+Here, we use PropagationPolicy as an example to describe how to propagate resources.
+
+## Before you start
+
+[Install Karmada](../../installation/installation.md) and prepare the [karmadactl command-line](../../installation/install-cli-tools.md) tool.
+
+## Deploy a simple multi-cluster Deployment
+
+### Create a PropagationPolicy object
+
+You can propagate a Deployment by creating a PropagationPolicy object defined in a YAML file. For example, this YAML
+file describes a Deployment object named nginx in the default namespace that needs to be propagated to the member1 cluster:
+
+```yaml
+# propagationpolicy.yaml
+apiVersion: policy.karmada.io/v1alpha1
+kind: PropagationPolicy
+metadata:
+ name: example-policy # The default namespace is `default`.
+spec:
+ resourceSelectors:
+ - apiVersion: apps/v1
+ kind: Deployment
+ name: nginx # If no namespace is specified, the namespace is inherited from the parent object scope.
+ placement:
+ clusterAffinity:
+ clusterNames:
+ - member1
+```
+
+1. Create a propagationPolicy based on the YAML file:
+```shell
+kubectl apply -f propagationpolicy.yaml
+```
+2. Create a Deployment nginx resource:
+```shell
+kubectl create deployment nginx --image nginx
+```
+> Note: The resource exists only as a template in Karmada. After being propagated to a member cluster, the resource behaves the same as in a single Kubernetes cluster.
+
+> Note: The resource and the PropagationPolicy can be created in any order.
+
+3. Display information of the deployment:
+```shell
+karmadactl get deployment
+```
+The output is similar to this:
+```shell
+NAME CLUSTER READY UP-TO-DATE AVAILABLE AGE ADOPTION
+nginx member1 1/1 1 1 52s Y
+```
+
+4. List the pods created by the deployment:
+```shell
+karmadactl get pod -l app=nginx
+```
+The output is similar to this:
+```shell
+NAME CLUSTER READY STATUS RESTARTS AGE
+nginx-6799fc88d8-s7vv9 member1 1/1 Running 0 52s
+```
+
+### Update PropagationPolicy
+
+You can update the propagationPolicy by applying a new YAML file. This YAML file propagates the Deployment to the member2 cluster.
+
+```yaml
+# propagationpolicy-update.yaml
+apiVersion: policy.karmada.io/v1alpha1
+kind: PropagationPolicy
+metadata:
+ name: example-policy
+spec:
+ resourceSelectors:
+ - apiVersion: apps/v1
+ kind: Deployment
+ name: nginx
+ placement:
+ clusterAffinity:
+ clusterNames: # Modify the selected cluster to propagate the Deployment.
+ - member2
+```
+
+1. Apply the new YAML file:
+```shell
+kubectl apply -f propagationpolicy-update.yaml
+```
+2. Display information of the deployment (the output is similar to this):
+```shell
+NAME CLUSTER READY UP-TO-DATE AVAILABLE AGE ADOPTION
+nginx member2 1/1 1 1 5s Y
+```
+3. List the pods of the deployment (the output is similar to this):
+```shell
+NAME CLUSTER READY STATUS RESTARTS AGE
+nginx-6799fc88d8-8t8cc member2 1/1 Running 0 17s
+```
+
+### Update Deployment
+
+You can update the deployment template. The changes will be automatically synchronized to the member clusters.
+
+1. Update the deployment replicas to 2 (see the command sketch after this list).
+2. Display information of the deployment (the output is similar to this):
+```shell
+NAME CLUSTER READY UP-TO-DATE AVAILABLE AGE ADOPTION
+nginx member2 2/2 2 2 7m59s Y
+```
+3. List the pods of the deployment (the output is similar to this):
+```shell
+NAME CLUSTER READY STATUS RESTARTS AGE
+nginx-6799fc88d8-8t8cc member2 1/1 Running 0 8m12s
+nginx-6799fc88d8-zpl4j member2 1/1 Running 0 17s
+```
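+
+A minimal way to perform step 1, assuming `kubectl` points at the Karmada control plane (the `karmada-apiserver` context):
+```shell
+kubectl scale deployment nginx --replicas=2
+```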
+
+### Delete a propagationPolicy
+
+Delete the propagationPolicy by name:
+```shell
+kubectl delete propagationpolicy example-policy
+```
+Deleting a propagationPolicy does not delete deployments propagated to member clusters. You need to delete deployments in the karmada control-plane:
+```shell
+kubectl delete deployment nginx
+```
+
+## Deploy deployment into a specified set of target clusters
+
+The `.spec.placement.clusterAffinity` field of PropagationPolicy represents scheduling restrictions on a certain set of clusters; without it, any cluster can be a scheduling candidate.
+
+It has four fields to set:
+- LabelSelector
+- FieldSelector
+- ClusterNames
+- ExcludeClusters
+
+### LabelSelector
+
+LabelSelector is a filter to select member clusters by labels. It uses the `*metav1.LabelSelector` type. If it is non-nil and non-empty, only the clusters that match this filter will be selected.
+
+PropagationPolicy can be configured as follows:
+
+```yaml
+apiVersion: policy.karmada.io/v1alpha1
+kind: PropagationPolicy
+metadata:
+ name: test-propagation
+spec:
+ #...
+ placement:
+ clusterAffinity:
+ labelSelector:
+ matchLabels:
+ location: us
+ #...
+```
+
+PropagationPolicy can also be configured as follows:
+
+```yaml
+apiVersion: policy.karmada.io/v1alpha1
+kind: PropagationPolicy
+metadata:
+ name: test-propagation
+spec:
+ #...
+ placement:
+ clusterAffinity:
+ labelSelector:
+ matchExpressions:
+ - key: location
+ operator: In
+ values:
+ - us
+ #...
+```
+
+For a description of `matchLabels` and `matchExpressions`, you can refer to [Resources that support set-based requirements](https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#resources-that-support-set-based-requirements).
+
+### FieldSelector
+
+FieldSelector is a filter to select member clusters by fields. If it is non-nil and non-empty, only the clusters that match this filter will be selected.
+
+PropagationPolicy can be configured as follows:
+
+```yaml
+apiVersion: policy.karmada.io/v1alpha1
+kind: PropagationPolicy
+metadata:
+ name: nginx-propagation
+spec:
+ #...
+ placement:
+ clusterAffinity:
+ fieldSelector:
+ matchExpressions:
+ - key: provider
+ operator: In
+ values:
+ - huaweicloud
+ - key: region
+ operator: NotIn
+ values:
+ - cn-south-1
+ #...
+```
+
+If multiple `matchExpressions` are specified in the `fieldSelector`, the cluster must match all `matchExpressions`.
+
+The `key` in `matchExpressions` now supports three values: `provider`, `region`, and `zone`, which correspond to the `.spec.provider`, `.spec.region`, and `.spec.zone` fields of the Cluster object, respectively.
+
+The `operator` in `matchExpressions` now supports `In` and `NotIn`.
+
+### ClusterNames
+
+Users can set the `ClusterNames` field to specify the selected clusters.
+
+PropagationPolicy can be configured as follows:
+
+```yaml
+apiVersion: policy.karmada.io/v1alpha1
+kind: PropagationPolicy
+metadata:
+ name: nginx-propagation
+spec:
+ #...
+ placement:
+ clusterAffinity:
+ clusterNames:
+ - member1
+ - member2
+ #...
+```
+
+### ExcludeClusters
+
+Users can set the `ExcludeClusters` field to specify the clusters to be ignored.
+
+PropagationPolicy can be configured as follows:
+
+```yaml
+apiVersion: policy.karmada.io/v1alpha1
+kind: PropagationPolicy
+metadata:
+ name: nginx-propagation
+spec:
+ #...
+ placement:
+ clusterAffinity:
+ exclude:
+ - member1
+ - member3
+ #...
+```
+
+## Multiple cluster affinity groups
+
+Users can set the ClusterAffinities field to declare multiple cluster groups in a PropagationPolicy. The scheduler will evaluate these groups one by one in the order they appear in the spec. A group that does not satisfy the scheduling restrictions will be ignored, which means none of the clusters in that group will be selected unless they also belong to a later group (a cluster could belong to multiple groups).
+
+If none of the groups satisfy the scheduling restrictions, the scheduling fails, which means no cluster will be selected.
+
+Note:
+
+1. ClusterAffinities cannot co-exist with ClusterAffinity.
+2. If neither ClusterAffinity nor ClusterAffinities is set, any cluster can be a scheduling candidate.
+
+Potential use case 1:
+The private clusters in the local data center could be the main group, and the managed clusters provided by cluster providers could be the secondary group, so that the Karmada scheduler would prefer to schedule workloads to the main group, and the secondary group would only be considered when the main group does not satisfy the restrictions (e.g. lack of resources).
+
+PropagationPolicy can be configured as follows:
+
+```yaml
+apiVersion: policy.karmada.io/v1alpha1
+kind: PropagationPolicy
+metadata:
+ name: test-propagation
+spec:
+ #...
+ placement:
+ clusterAffinities:
+ - affinityName: local-clusters
+ clusterNames:
+ - local-member1
+ - local-member2
+ - affinityName: cloud-clusters
+ clusterNames:
+ - public-cloud-member1
+ - public-cloud-member2
+ #...
+```
+
+Potential use case 2: For the disaster recovery scenario, the clusters could be organized into primary and backup groups. Workloads would be scheduled to the primary clusters first, and when a primary cluster fails (e.g. a data center power outage), the Karmada scheduler could migrate workloads to the backup clusters.
+
+PropagationPolicy can be configured as follows:
+
+```yaml
+apiVersion: policy.karmada.io/v1alpha1
+kind: PropagationPolicy
+metadata:
+ name: test-propagation
+spec:
+ #...
+ placement:
+ clusterAffinities:
+ - affinityName: primary-clusters
+ clusterNames:
+ - member1
+ - affinityName: backup-clusters
+ clusterNames:
+ - member1
+ - member2
+ #...
+```
+
+For more detailed design information, please refer to [Multiple scheduling group](https://github.com/karmada-io/karmada/tree/master/docs/proposals/scheduling/multi-scheduling-group).
+
+## Schedule based on Taints and Tolerations
+
+The `.spec.placement.clusterTolerations` field of PropagationPolicy represents the tolerations. As in Kubernetes, tolerations need to be used in conjunction with taints on the clusters.
+After one or more taints are set on a cluster, workloads cannot be scheduled or run on that cluster unless the policy explicitly states that these taints are tolerated.
+Karmada currently supports taints whose effects are `NoSchedule` and `NoExecute`.
+
+You can use `karmadactl taint` to taint a cluster:
+
+```shell
+# Update cluster 'foo' with a taint with key 'dedicated' and value 'special-user' and effect 'NoSchedule'
+# If a taint with that key and effect already exists, its value is replaced as specified
+karmadactl taint clusters foo dedicated=special-user:NoSchedule
+```
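+
+To remove the taint later, the syntax is expected to mirror `kubectl taint` (a sketch; verify with `karmadactl taint --help`):
+
+```shell
+# Remove the taint added above (note the trailing "-")
+karmadactl taint clusters foo dedicated=special-user:NoSchedule-
+```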
+
+In order to schedule to the above cluster, you need to declare the following in the Policy:
+
+```yaml
+apiVersion: policy.karmada.io/v1alpha1
+kind: PropagationPolicy
+metadata:
+ name: nginx-propagation
+spec:
+ #...
+ placement:
+ clusterTolerations:
+ - key: dedicated
+ value: special-user
+        effect: NoSchedule
+```
+
+`NoExecute` taints are also used in `Multi-cluster Failover`. See details [here](../failover/failover-analysis.md).
+
+## Multi region HA support
+
+By leveraging the spread-by-region constraint, users are able to deploy workloads across regions; for example, people may want their workloads to always run in different regions for HA purposes.
+
+To enable multi-region deployment, you should use the command below to customize the region of the clusters.
+
+```shell
+kubectl --kubeconfig ~/.kube/karmada.config --context karmada-apiserver edit cluster/member1
+
+...
+spec:
+ apiEndpoint: https://172.18.0.4:6443
+ id: 257b5c81-dfae-4ae5-bc7c-6eaed9ed6a39
+ impersonatorSecretRef:
+ name: member1-impersonator
+ namespace: karmada-cluster
+ region: test
+...
+```
+
+Then you need to restrict the maximum and minimum number of cluster groups to be selected. Assuming there are two regions and you want to deploy the workload in one cluster per region, you can refer to:
+
+```yaml
+apiVersion: policy.karmada.io/v1alpha1
+kind: PropagationPolicy
+metadata:
+ name: nginx-propagation
+spec:
+ #...
+ placement:
+ replicaScheduling:
+ replicaSchedulingType: Duplicated
+ spreadConstraints:
+ - spreadByField: region
+ maxGroups: 2
+ minGroups: 2
+ - spreadByField: cluster
+ maxGroups: 1
+ minGroups: 1
+```
+
+:::note
+
+If the replica division preference is `StaticWeightList`, the declaration specified by spread constraints will be ignored.
+If one of the spread constraints uses `SpreadByField`, `SpreadByFieldCluster` must be included.
+For example, when using `SpreadByFieldRegion` to specify region groups, you must also use
+`SpreadByFieldCluster` to specify how many clusters should be selected.
+
+:::
+
+## Multiple strategies of replica Scheduling
+
+`.spec.placement.replicaScheduling` represents the scheduling policy on dealing with the number of replicas when propagating resources that have replicas in spec (e.g. deployments, statefulsets and CRDs which can be interpreted by [Customizing Resource Interpreter](../globalview/customizing-resource-interpreter.md)) to member clusters.
+
+It has two replicaSchedulingTypes, which determine how the replicas are scheduled when Karmada propagates a resource:
+
+* `Duplicated`: duplicate the same replicas to each candidate member cluster from resources.
+* `Divided`: divide replicas into parts according to numbers of valid candidate member clusters, and exact replicas for each cluster are determined by `ReplicaDivisionPreference`.
+
+`ReplicaDivisionPreference` determines how the replicas are divided when ReplicaSchedulingType is `Divided`.
+
+* `Aggregated`: divide replicas into clusters as few as possible, while respecting clusters' resource availabilities during the division. See details in [Schedule based on Cluster Resource Modeling](./cluster-resources.md).
+* `Weighted`: divide replicas by weight according to `WeightPreference`. There are two kinds of `WeightPreference` to set. `StaticWeightList` statically allocates replicas to target clusters based on weight. Target clusters can be selected by `ClusterAffinity`. `DynamicWeight` specifies the factor to generate the dynamic weight list. If specified, `StaticWeightList` will be ignored. Karmada currently supports the factor `AvailableReplicas`.
+
+The following gives two simple examples:
+
+```yaml
+apiVersion: policy.karmada.io/v1alpha1
+kind: PropagationPolicy
+metadata:
+ name: nginx-propagation
+spec:
+ #...
+ placement:
+ replicaScheduling:
+ replicaDivisionPreference: Weighted
+ replicaSchedulingType: Divided
+ weightPreference:
+ staticWeightList:
+ - targetCluster:
+ clusterNames:
+ - member1
+ weight: 1
+ - targetCluster:
+ clusterNames:
+ - member2
+ weight: 1
+```
+
+It means replicas will be evenly propagated to member1 and member2.
+
+```yaml
+apiVersion: policy.karmada.io/v1alpha1
+kind: PropagationPolicy
+metadata:
+ name: nginx-propagation
+spec:
+ #...
+ placement:
+ replicaScheduling:
+ replicaDivisionPreference: Weighted
+ replicaSchedulingType: Divided
+ weightPreference:
+ dynamicWeight: AvailableReplicas
+```
+
+It means replicas will be propagated based on the available replicas in member clusters. For example, suppose the scheduler selects 3 clusters (A/B/C) and needs to divide 12 replicas among them.
+Based on cluster resource modeling, the maximum available replicas of A, B and C are 6, 12 and 18.
+Therefore, the weight of clusters A:B:C will be 6:12:18 (equal to 1:2:3). In the end, the assignment would be "A: 2, B: 4, C: 6".
+
+:::note
+
+If `ReplicaDivisionPreference` is set to `Weighted` and `WeightPreference` is not set, the default strategy is to weight all clusters averagely.
+
+:::
+
+## Configure PropagationPolicy/ClusterPropagationPolicy priority
+If a PropagationPolicy and a ClusterPropagationPolicy match the workload, Karmada will select the PropagationPolicy.
+If multiple PropagationPolicies match the workload, Karmada will select the one with the highest priority. A PropagationPolicy supports implicit and explicit priorities.
+The same goes for ClusterPropagationPolicy.
+
+The following takes PropagationPolicy as an example.
+
+### Configure explicit priority
+The `spec.priority` in a PropagationPolicy represents the explicit priority. A greater value means a higher priority.
+> Note: If not specified, defaults to 0.
+
+Assume there are multiple policies:
+```yaml
+# highexplicitpriority.yaml
+apiVersion: policy.karmada.io/v1alpha1
+kind: PropagationPolicy
+metadata:
+ name: propagation-high-explicit-priority
+spec:
+ resourceSelectors:
+ - apiVersion: apps/v1
+ kind: Deployment
+ labelSelector:
+ matchLabels:
+ app: nginx
+ priority: 2
+ placement:
+ clusterAffinity:
+ clusterNames:
+ - member1
+---
+# lowexplicitpriority.yaml
+apiVersion: policy.karmada.io/v1alpha1
+kind: PropagationPolicy
+metadata:
+ name: propagation-low-explicit-priority
+spec:
+ resourceSelectors:
+ - apiVersion: apps/v1
+ kind: Deployment
+ labelSelector:
+ matchLabels:
+ app: nginx
+ priority: 1
+ placement:
+ clusterAffinity:
+ clusterNames:
+ - member2
+---
+# defaultexplicitpriority.yaml
+apiVersion: policy.karmada.io/v1alpha1
+kind: PropagationPolicy
+metadata:
+  name: propagation-default-explicit-priority
+spec:
+ resourceSelectors:
+ - apiVersion: apps/v1
+ kind: Deployment
+ labelSelector:
+ matchLabels:
+ app: nginx
+ placement:
+ clusterAffinity:
+ clusterNames:
+ - member3
+```
+The `nginx` deployment in `default` namespace will be propagated to cluster `member1`.
+
+### Configure implicit priority
+The `spec.resourceSelectors` in a PropagationPolicy represents the implicit priority. The priority order (from low to high) is as follows:
+* The PropagationPolicy resourceSelector whose name and labelSelector are empty matches the workload.
+* The labelSelector of PropagationPolicy resourceSelector matches the workload.
+* The name of PropagationPolicy resourceSelector matches the workload.
+
+Assume there are multiple policies:
+```yaml
+# emptymatch.yaml
+apiVersion: policy.karmada.io/v1alpha1
+kind: PropagationPolicy
+metadata:
+ name: propagation-emptymatch
+spec:
+ resourceSelectors:
+ - apiVersion: apps/v1
+ kind: Deployment
+ placement:
+ clusterAffinity:
+ clusterNames:
+ - member1
+---
+# labelselectormatch.yaml
+apiVersion: policy.karmada.io/v1alpha1
+kind: PropagationPolicy
+metadata:
+ name: propagation-labelselectormatch
+spec:
+ resourceSelectors:
+ - apiVersion: apps/v1
+ kind: Deployment
+ labelSelector:
+ matchLabels:
+ app: nginx
+ placement:
+ clusterAffinity:
+ clusterNames:
+ - member2
+---
+# namematch.yaml
+apiVersion: policy.karmada.io/v1alpha1
+kind: PropagationPolicy
+metadata:
+ name: propagation-namematch
+spec:
+ resourceSelectors:
+ - apiVersion: apps/v1
+ kind: Deployment
+ name: nginx
+ placement:
+ clusterAffinity:
+ clusterNames:
+ - member3
+```
+The `nginx` deployment in `default` namespace will be propagated to cluster `member3`.
+
+### Choose from same-priority PropagationPolicies
+If multiple PropagationPolicies with the same explicit priority match the workload, the one with the highest implicit priority will be selected.
+
+Assume there are multiple policies:
+```yaml
+# explicit-labelselectormatch.yaml
+apiVersion: policy.karmada.io/v1alpha1
+kind: PropagationPolicy
+metadata:
+ name: propagation-explicit-labelselectormatch
+spec:
+ resourceSelectors:
+ - apiVersion: apps/v1
+ kind: Deployment
+ labelSelector:
+ matchLabels:
+ app: nginx
+ priority: 3
+ placement:
+ clusterAffinity:
+ clusterNames:
+ - member1
+---
+# explicit-namematch.yaml
+apiVersion: policy.karmada.io/v1alpha1
+kind: PropagationPolicy
+metadata:
+ name: propagation-explicit-namematch
+spec:
+ resourceSelectors:
+ - apiVersion: apps/v1
+ kind: Deployment
+ name: nginx
+ priority: 3
+ placement:
+ clusterAffinity:
+ clusterNames:
+ - member3
+```
+The `nginx` deployment in `default` namespace will be propagated to cluster `member3`.
+
+If both explicit and implicit priorities are the same, Karmada applies the PropagationPolicy in an ascending alphabetical order, for example, choosing xxx-a-xxx instead of xxx-b-xxx.
+
+Assume there are multiple policies:
+```yaml
+# higher-alphabetical.yaml
+apiVersion: policy.karmada.io/v1alpha1
+kind: PropagationPolicy
+metadata:
+ name: propagation-b-higher-alphabetical
+spec:
+ resourceSelectors:
+ - apiVersion: apps/v1
+ kind: Deployment
+ name: nginx
+ priority: 3
+ placement:
+ clusterAffinity:
+ clusterNames:
+ - member1
+---
+# lower-alphabetical.yaml
+apiVersion: policy.karmada.io/v1alpha1
+kind: PropagationPolicy
+metadata:
+ name: propagation-a-lower-alphabetical
+spec:
+ resourceSelectors:
+ - apiVersion: apps/v1
+ kind: Deployment
+ name: nginx
+ priority: 3
+ placement:
+ clusterAffinity:
+ clusterNames:
+ - member2
+```
+The `nginx` deployment in `default` namespace will be propagated to cluster `member2`.
+
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/userguide/scheduling/scheduler-estimator.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/userguide/scheduling/scheduler-estimator.md
new file mode 100644
index 000000000..fb77d5dfe
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/userguide/scheduling/scheduler-estimator.md
@@ -0,0 +1,153 @@
+---
+title: Cluster Accurate Scheduler Estimator For Rescheduling
+---
+
+Users can divide the replicas of a workload into different clusters according to the available resources of member clusters. When some clusters lack resources, the scheduler consults the karmada-scheduler-estimator and will not assign excessive replicas to these clusters.
+
+## Prerequisites
+
+### Karmada has been installed
+
+We can install Karmada by referring to [quick-start](https://github.com/karmada-io/karmada#quick-start), or directly run `hack/local-up-karmada.sh` script which is also used to run our E2E cases.
+
+### Member cluster component is ready
+
+Ensure that all member clusters have been joined and their corresponding karmada-scheduler-estimator is installed into karmada-host.
+
+You could check by using the following command:
+
+```bash
+# check whether the member cluster has been joined
+$ kubectl get cluster
+NAME VERSION MODE READY AGE
+member1 v1.19.1 Push True 11m
+member2 v1.19.1 Push True 11m
+member3 v1.19.1 Pull True 5m12s
+
+# check whether the karmada-scheduler-estimator of a member cluster has been working well
+$ kubectl --context karmada-host get pod -n karmada-system | grep estimator
+karmada-scheduler-estimator-member1-696b54fd56-xt789 1/1 Running 0 77s
+karmada-scheduler-estimator-member2-774fb84c5d-md4wt 1/1 Running 0 75s
+karmada-scheduler-estimator-member3-5c7d87f4b4-76gv9 1/1 Running 0 72s
+```
+
+- If the cluster has not been joined, you could use `hack/deploy-agent-and-estimator.sh` to deploy both karmada-agent and karmada-scheduler-estimator.
+- If the cluster has been joined already, you could use `hack/deploy-scheduler-estimator.sh` to only deploy karmada-scheduler-estimator.
+
+### Scheduler option '--enable-scheduler-estimator'
+
+After all member clusters have been joined and estimators are all ready, please specify the option `--enable-scheduler-estimator=true` to enable scheduler estimator.
+
+```bash
+# edit the deployment of karmada-scheduler
+$ kubectl --context karmada-host edit -n karmada-system deployments.apps karmada-scheduler
+```
+
+And then add the option `--enable-scheduler-estimator=true` into the command of container `karmada-scheduler`.
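+
+If you prefer a one-liner over interactive editing, a JSON patch like the following can append the flag (a sketch; it assumes the `karmada-scheduler` container is the first container and that its flags live under `command`):
+
+```bash
+kubectl --context karmada-host -n karmada-system patch deployment karmada-scheduler --type='json' \
+  -p='[{"op":"add","path":"/spec/template/spec/containers/0/command/-","value":"--enable-scheduler-estimator=true"}]'
+```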
+
+## Example
+
+Now we could divide the replicas into different member clusters. Note that `propagationPolicy.spec.replicaScheduling.replicaSchedulingType` must be `Divided` and `propagationPolicy.spec.replicaScheduling.replicaDivisionPreference` must be `Aggregated`. The scheduler will try to divide the replicas aggregately in terms of all available resources of member clusters.
+
+```yaml
+apiVersion: policy.karmada.io/v1alpha1
+kind: PropagationPolicy
+metadata:
+ name: aggregated-policy
+spec:
+ resourceSelectors:
+ - apiVersion: apps/v1
+ kind: Deployment
+ name: nginx
+ placement:
+ clusterAffinity:
+ clusterNames:
+ - member1
+ - member2
+ - member3
+ replicaScheduling:
+ replicaSchedulingType: Divided
+ replicaDivisionPreference: Aggregated
+```
+
+```yaml
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+ name: nginx
+ labels:
+ app: nginx
+spec:
+ replicas: 5
+ selector:
+ matchLabels:
+ app: nginx
+ template:
+ metadata:
+ labels:
+ app: nginx
+ spec:
+ containers:
+ - image: nginx
+ name: nginx
+ ports:
+ - containerPort: 80
+ name: web-1
+ resources:
+ requests:
+ cpu: "1"
+ memory: 2Gi
+```
+
+You will find all replicas have been assigned to as few clusters as possible.
+
+```
+$ kubectl get deployments.apps
+NAME READY UP-TO-DATE AVAILABLE AGE
+nginx 5/5 5 5 2m16s
+$ kubectl get rb nginx-deployment -o=custom-columns=NAME:.metadata.name,CLUSTER:.spec.clusters
+NAME CLUSTER
+nginx-deployment [map[name:member1 replicas:5] map[name:member2] map[name:member3]]
+```
+
+After that, we change the resource request of the deployment to a large value and try again.
+
+```yaml
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+ name: nginx
+ labels:
+ app: nginx
+spec:
+ replicas: 5
+ selector:
+ matchLabels:
+ app: nginx
+ template:
+ metadata:
+ labels:
+ app: nginx
+ spec:
+ containers:
+ - image: nginx
+ name: nginx
+ ports:
+ - containerPort: 80
+ name: web-1
+ resources:
+ requests:
+ cpu: "100"
+ memory: 200Gi
+```
+
+Since no node in the member clusters has that much CPU and memory, the workload scheduling fails.
+
+```bash
+$ kubectl get deployments.apps
+NAME READY UP-TO-DATE AVAILABLE AGE
+nginx 0/5 0 0 2m20s
+$ kubectl get rb nginx-deployment -o=custom-columns=NAME:.metadata.name,CLUSTER:.spec.clusters
+NAME CLUSTER
+nginx-deployment
+```
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/userguide/security-governance/working-with-gatekeeper.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/userguide/security-governance/working-with-gatekeeper.md
new file mode 100644
index 000000000..fb88b53af
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/userguide/security-governance/working-with-gatekeeper.md
@@ -0,0 +1,480 @@
+---
+title: Working with Gatekeeper(OPA)
+---
+
+[Gatekeeper](https://github.com/open-policy-agent/gatekeeper) is a customizable admission webhook for Kubernetes that enforces policies executed by the Open Policy Agent (OPA), a policy engine for Cloud Native environments hosted by the [Cloud Native Computing Foundation](https://cncf.io/).
+
+This document demonstrates how to use the `Gatekeeper` to manage OPA policies.
+
+## Prerequisites
+### Start up Karmada clusters
+
+You just need to clone the Karmada repo and run the following script in the Karmada directory.
+
+```
+hack/local-up-karmada.sh
+```
+
+## Gatekeeper Installations
+
+In this case, you will use Gatekeeper v3.7.2. Related deployment files are from [here](https://github.com/open-policy-agent/gatekeeper/blob/release-3.7/deploy/gatekeeper.yaml).
+
+### Install Gatekeeper APIs on Karmada
+
+1. Create resource objects for Gatekeeper in the Karmada control plane. The content is as follows.
+
+ ```console
+ kubectl config use-context karmada-apiserver
+ ```
+
+ Deploy namespace: https://github.com/open-policy-agent/gatekeeper/blob/release-3.7/deploy/gatekeeper.yaml#L1-L9
+
+ Deploy Gatekeeper CRDs: https://github.com/open-policy-agent/gatekeeper/blob/release-3.7/deploy/gatekeeper.yaml#L27-L1999
+
+ Deploy Gatekeeper secrets: https://github.com/open-policy-agent/gatekeeper/blob/release-3.7/deploy/gatekeeper.yaml#L2261-L2267
+
+ Deploy webhook config:
+ ```yaml
+ apiVersion: admissionregistration.k8s.io/v1
+ kind: MutatingWebhookConfiguration
+ metadata:
+ labels:
+ gatekeeper.sh/system: "yes"
+ name: gatekeeper-mutating-webhook-configuration
+ webhooks:
+ - admissionReviewVersions:
+ - v1
+ - v1beta1
+ clientConfig:
+      # Change the clientConfig from service type to url type because the webhook config and the service are not in the same cluster.
+ url: https://gatekeeper-webhook-service.gatekeeper-system.svc:443/v1/mutate
+ failurePolicy: Ignore
+ matchPolicy: Exact
+ name: mutation.gatekeeper.sh
+ namespaceSelector:
+ matchExpressions:
+ - key: admission.gatekeeper.sh/ignore
+ operator: DoesNotExist
+ rules:
+ - apiGroups:
+ - '*'
+ apiVersions:
+ - '*'
+ operations:
+ - CREATE
+ - UPDATE
+ resources:
+ - '*'
+ sideEffects: None
+ timeoutSeconds: 1
+ ---
+ apiVersion: admissionregistration.k8s.io/v1
+ kind: ValidatingWebhookConfiguration
+ metadata:
+ labels:
+ gatekeeper.sh/system: "yes"
+ name: gatekeeper-validating-webhook-configuration
+ webhooks:
+ - admissionReviewVersions:
+ - v1
+ - v1beta1
+ clientConfig:
+      # Change the clientConfig from service type to url type because the webhook config and the service are not in the same cluster.
+ url: https://gatekeeper-webhook-service.gatekeeper-system.svc:443/v1/admit
+ failurePolicy: Ignore
+ matchPolicy: Exact
+ name: validation.gatekeeper.sh
+ namespaceSelector:
+ matchExpressions:
+ - key: admission.gatekeeper.sh/ignore
+ operator: DoesNotExist
+ rules:
+ - apiGroups:
+ - '*'
+ apiVersions:
+ - '*'
+ operations:
+ - CREATE
+ - UPDATE
+ resources:
+ - '*'
+ sideEffects: None
+ timeoutSeconds: 3
+ - admissionReviewVersions:
+ - v1
+ - v1beta1
+ clientConfig:
+      # Change the clientConfig from service type to url type because the webhook config and the service are not in the same cluster.
+ url: https://gatekeeper-webhook-service.gatekeeper-system.svc:443/v1/admitlabel
+ failurePolicy: Fail
+ matchPolicy: Exact
+ name: check-ignore-label.gatekeeper.sh
+ rules:
+ - apiGroups:
+ - ""
+ apiVersions:
+ - '*'
+ operations:
+ - CREATE
+ - UPDATE
+ resources:
+ - namespaces
+ sideEffects: None
+ timeoutSeconds: 3
+ ```
+  You need to change the clientConfig from service type to url type for the multi-cluster deployment.
+
+  Also, you need to deploy a dummy pod in the gatekeeper-system namespace in the karmada-apiserver context. When Gatekeeper generates a policy template CRD, a status object is generated to monitor the status of the policy template, and this status object is bound to the controller Pod through an OwnerReference. Therefore, when the CRD and the controller are not in the same cluster, a dummy Pod needs to be used in place of the controller so that the status object can be successfully generated.
+
+ For example:
+ ```yaml
+ apiVersion: v1
+ kind: Pod
+ metadata:
+ name: dummy-pod
+ namespace: gatekeeper-system
+ spec:
+ containers:
+ - name: dummy-pod
+ image: nginx:latest
+ imagePullPolicy: Always
+ ```
+
+### Install Gatekeeper components on the host cluster
+
+ ```console
+ kubectl config use-context karmada-host
+ ```
+
+ Deploy namespace: https://github.com/open-policy-agent/gatekeeper/blob/release-3.7/deploy/gatekeeper.yaml#L1-L9
+
+ Deploy RBAC resources for deployment: https://github.com/open-policy-agent/gatekeeper/blob/release-3.7/deploy/gatekeeper.yaml#L1999-L2375
+
+ Deploy Gatekeeper controllers and secret as kubeconfig:
+ ```yaml
+ apiVersion: apps/v1
+ kind: Deployment
+ metadata:
+ labels:
+ control-plane: audit-controller
+ gatekeeper.sh/operation: audit
+ gatekeeper.sh/system: "yes"
+ name: gatekeeper-audit
+ namespace: gatekeeper-system
+ spec:
+ replicas: 1
+ selector:
+ matchLabels:
+ control-plane: audit-controller
+ gatekeeper.sh/operation: audit
+ gatekeeper.sh/system: "yes"
+ template:
+ metadata:
+ annotations:
+ container.seccomp.security.alpha.kubernetes.io/manager: runtime/default
+ labels:
+ control-plane: audit-controller
+ gatekeeper.sh/operation: audit
+ gatekeeper.sh/system: "yes"
+ spec:
+ automountServiceAccountToken: true
+ containers:
+ - args:
+ - --operation=audit
+ - --operation=status
+ - --operation=mutation-status
+ - --logtostderr
+ - --disable-opa-builtin={http.send}
+ - --kubeconfig=/etc/kubeconfig
+ command:
+ - /manager
+ env:
+ - name: POD_NAMESPACE
+ valueFrom:
+ fieldRef:
+ apiVersion: v1
+ fieldPath: metadata.namespace
+ - name: POD_NAME
+ value: {{POD_NAME}}
+ image: openpolicyagent/gatekeeper:v3.7.2
+ imagePullPolicy: Always
+ livenessProbe:
+ httpGet:
+ path: /healthz
+ port: 9090
+ name: manager
+ ports:
+ - containerPort: 8888
+ name: metrics
+ protocol: TCP
+ - containerPort: 9090
+ name: healthz
+ protocol: TCP
+ readinessProbe:
+ httpGet:
+ path: /readyz
+ port: 9090
+ resources:
+ limits:
+ cpu: 1000m
+ memory: 512Mi
+ requests:
+ cpu: 100m
+ memory: 256Mi
+ securityContext:
+ allowPrivilegeEscalation: false
+ capabilities:
+ drop:
+ - all
+ readOnlyRootFilesystem: true
+ runAsGroup: 999
+ runAsNonRoot: true
+ runAsUser: 1000
+ volumeMounts:
+ - mountPath: /tmp/audit
+ name: tmp-volume
+ - mountPath: /etc/kubeconfig
+ name: kubeconfig
+ subPath: kubeconfig
+ nodeSelector:
+ kubernetes.io/os: linux
+ priorityClassName: system-cluster-critical
+ serviceAccountName: gatekeeper-admin
+ terminationGracePeriodSeconds: 60
+ volumes:
+ - emptyDir: {}
+ name: tmp-volume
+ - name: kubeconfig
+ secret:
+ defaultMode: 420
+ secretName: kubeconfig
+ ---
+ apiVersion: apps/v1
+ kind: Deployment
+ metadata:
+ labels:
+ control-plane: controller-manager
+ gatekeeper.sh/operation: webhook
+ gatekeeper.sh/system: "yes"
+ name: gatekeeper-controller-manager
+ namespace: gatekeeper-system
+ spec:
+ replicas: 3
+ selector:
+ matchLabels:
+ control-plane: controller-manager
+ gatekeeper.sh/operation: webhook
+ gatekeeper.sh/system: "yes"
+ template:
+ metadata:
+ annotations:
+ container.seccomp.security.alpha.kubernetes.io/manager: runtime/default
+ labels:
+ control-plane: controller-manager
+ gatekeeper.sh/operation: webhook
+ gatekeeper.sh/system: "yes"
+ spec:
+ affinity:
+ podAntiAffinity:
+ preferredDuringSchedulingIgnoredDuringExecution:
+ - podAffinityTerm:
+ labelSelector:
+ matchExpressions:
+ - key: gatekeeper.sh/operation
+ operator: In
+ values:
+ - webhook
+ topologyKey: kubernetes.io/hostname
+ weight: 100
+ automountServiceAccountToken: true
+ containers:
+ - args:
+ - --port=8443
+ - --logtostderr
+ - --exempt-namespace=gatekeeper-system
+ - --operation=webhook
+ - --operation=mutation-webhook
+ - --disable-opa-builtin={http.send}
+ - --kubeconfig=/etc/kubeconfig
+ command:
+ - /manager
+ env:
+ - name: POD_NAMESPACE
+ valueFrom:
+ fieldRef:
+ apiVersion: v1
+ fieldPath: metadata.namespace
+ - name: POD_NAME
+ value: {{POD_NAME}}
+ image: openpolicyagent/gatekeeper:v3.7.2
+ imagePullPolicy: Always
+ livenessProbe:
+ httpGet:
+ path: /healthz
+ port: 9090
+ name: manager
+ ports:
+ - containerPort: 8443
+ name: webhook-server
+ protocol: TCP
+ - containerPort: 8888
+ name: metrics
+ protocol: TCP
+ - containerPort: 9090
+ name: healthz
+ protocol: TCP
+ readinessProbe:
+ httpGet:
+ path: /readyz
+ port: 9090
+ resources:
+ limits:
+ cpu: 1000m
+ memory: 512Mi
+ requests:
+ cpu: 100m
+ memory: 256Mi
+ securityContext:
+ allowPrivilegeEscalation: false
+ capabilities:
+ drop:
+ - all
+ readOnlyRootFilesystem: true
+ runAsGroup: 999
+ runAsNonRoot: true
+ runAsUser: 1000
+ volumeMounts:
+ - mountPath: /certs
+ name: cert
+ readOnly: true
+ - mountPath: /etc/kubeconfig
+ name: kubeconfig
+ subPath: kubeconfig
+ nodeSelector:
+ kubernetes.io/os: linux
+ priorityClassName: system-cluster-critical
+ serviceAccountName: gatekeeper-admin
+ terminationGracePeriodSeconds: 60
+ volumes:
+ - name: cert
+ secret:
+ defaultMode: 420
+ secretName: gatekeeper-webhook-server-cert
+ - name: kubeconfig
+ secret:
+ defaultMode: 420
+ secretName: kubeconfig
+ ---
+ apiVersion: policy/v1beta1
+ kind: PodDisruptionBudget
+ metadata:
+ labels:
+ gatekeeper.sh/system: "yes"
+ name: gatekeeper-controller-manager
+ namespace: gatekeeper-system
+ spec:
+ minAvailable: 1
+ selector:
+ matchLabels:
+ control-plane: controller-manager
+ gatekeeper.sh/operation: webhook
+ gatekeeper.sh/system: "yes"
+ ---
+ apiVersion: v1
+ stringData:
+ kubeconfig: |-
+ apiVersion: v1
+ clusters:
+ - cluster:
+ certificate-authority-data: {{ca_crt}}
+ server: https://karmada-apiserver.karmada-system.svc.cluster.local:5443
+ name: kind-karmada
+ contexts:
+ - context:
+ cluster: kind-karmada
+ user: kind-karmada
+ name: karmada
+ current-context: karmada
+ kind: Config
+ preferences: {}
+ users:
+ - name: kind-karmada
+ user:
+ client-certificate-data: {{client_cer}}
+ client-key-data: {{client_key}}
+ kind: Secret
+ metadata:
+ name: kubeconfig
+ namespace: gatekeeper-system
+ ```
+ Replace `{{POD_NAME}}` with the name of the dummy pod created in step 1, and fill in the Secret above with the kubeconfig data pointing to karmada-apiserver.
+
+ Deploy ResourceQuota: https://github.com/open-policy-agent/gatekeeper/blob/release-3.7/deploy/gatekeeper.yaml#L10-L26
+
+### Extra steps
+
+ Finally, we need to copy the secret `gatekeeper-webhook-server-cert` from the karmada-apiserver context to the karmada-host context, so that the secret stored in `etcd` and the volume mounted in the controller stay consistent.
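+
+ A minimal sketch of that copy, assuming the default context names used throughout this guide:
+
+ ```shell
+ # Export the webhook cert secret from the karmada-apiserver context and re-create it
+ # in the karmada-host context (you may need to strip resourceVersion/uid fields first).
+ kubectl --context karmada-apiserver -n gatekeeper-system get secret gatekeeper-webhook-server-cert -o yaml \
+   | kubectl --context karmada-host -n gatekeeper-system apply -f -
+ ```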
+
+## Run demo
+### Create k8srequiredlabels template
+
+ ```yaml
+ apiVersion: templates.gatekeeper.sh/v1
+ kind: ConstraintTemplate
+ metadata:
+ name: k8srequiredlabels
+ spec:
+ crd:
+ spec:
+ names:
+ kind: K8sRequiredLabels
+ validation:
+ openAPIV3Schema:
+ type: object
+ description: Describe K8sRequiredLabels crd parameters
+ properties:
+ labels:
+ type: array
+ items:
+ type: string
+ description: A label string
+ targets:
+ - target: admission.k8s.gatekeeper.sh
+ rego: |
+ package k8srequiredlabels
+
+ violation[{"msg": msg, "details": {"missing_labels": missing}}] {
+ provided := {label | input.review.object.metadata.labels[label]}
+ required := {label | label := input.parameters.labels[_]}
+ missing := required - provided
+ count(missing) > 0
+ msg := sprintf("you must provide labels: %v", [missing])
+ }
+ ```
+
+### Create k8srequiredlabels constraint
+
+ ```yaml
+ apiVersion: constraints.gatekeeper.sh/v1beta1
+ kind: K8sRequiredLabels
+ metadata:
+ name: ns-must-have-gk
+ spec:
+ match:
+ kinds:
+ - apiGroups: [""]
+ kinds: ["Namespace"]
+ parameters:
+ labels: ["gatekeepers"]
+ ```
+
+### Create a bad namespace
+
+ ```console
+ kubectl create ns test
+ Error from server ([ns-must-have-gk] you must provide labels: {"gatekeepers"}): admission webhook "validation.gatekeeper.sh" denied the request: [ns-must-have-gk] you must provide labels: {"gatekeepers"}
+ ```
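+
+ Conversely, a namespace that carries the required label key is admitted. A sketch (the constraint above only checks the label key, so any value works):
+
+ ```yaml
+ apiVersion: v1
+ kind: Namespace
+ metadata:
+   name: test
+   labels:
+     gatekeepers: "demo"   # satisfies the ns-must-have-gk constraint
+ ```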
+
+## Reference
+
+- https://github.com/open-policy-agent/gatekeeper
\ No newline at end of file
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/userguide/security-governance/working-with-kyverno.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/userguide/security-governance/working-with-kyverno.md
new file mode 100644
index 000000000..316545c47
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/userguide/security-governance/working-with-kyverno.md
@@ -0,0 +1,354 @@
+---
+title: Working with Kyverno
+---
+
+[Kyverno](https://github.com/kyverno/kyverno), a [Cloud Native Computing Foundation](https://cncf.io/) project, is a policy engine designed for Kubernetes. It can validate, mutate, and generate configurations using admission controls and background scans. Kyverno policies are Kubernetes resources and do not require learning a new language. Kyverno is designed to work nicely with tools you already use like kubectl, kustomize, and Git.
+
+This document gives an example to demonstrate how to use `Kyverno` to manage policies across multiple clusters.
+
+## Setup Karmada
+
+To start up Karmada, you can refer to [here](../../installation/installation.md).
+If you just want to try Karmada, we recommend building a development environment by ```hack/local-up-karmada.sh```.
+
+```sh
+git clone https://github.com/karmada-io/karmada
+cd karmada
+hack/local-up-karmada.sh
+```
+
+## Kyverno Installations
+
+In this case, we will use Kyverno v1.6.3. Related deployment files are from [here](https://github.com/kyverno/kyverno/blob/release-1.6/config/install.yaml).
+
+:::note
+
+You can choose the version of Kyverno based on that of the cluster where Karmada is installed. See details [here](https://kyverno.io/docs/installation/#compatibility-matrix).
+However, Kyverno 1.7.x removes the `kubeconfig` parameter and does not support out-of-cluster installations. **So Kyverno 1.7.x is not able to run with Karmada**.
+
+:::
+
+### Install Kyverno APIs on Karmada
+
+1. Switch to Karmada control plane.
+
+```shell
+kubectl config use-context karmada-apiserver
+```
+
+2. Create resource objects of Kyverno in Karmada control plane. The content is as follows and does not need to be modified.
+
+Deploy namespace: https://github.com/kyverno/kyverno/blob/release-1.6/config/install.yaml#L1-L12
+
+Deploy configmap: https://github.com/kyverno/kyverno/blob/release-1.6/config/install.yaml#L7751-L7783
+
+Deploy Kyverno CRDs: https://github.com/kyverno/kyverno/blob/release-1.6/config/install.yaml#L12-L7291
+
+### Install Kyverno components on host cluster
+
+1. Switch to `karmada-host` context.
+
+```shell
+kubectl config use-context karmada-host
+```
+
+2. Create resource objects of Kyverno in the karmada-host context. The content is as follows.
+
+Deploy namespace: https://github.com/kyverno/kyverno/blob/release-1.6/config/install.yaml#L1-L12
+
+Deploy RBAC resources: https://github.com/kyverno/kyverno/blob/release-1.6/config/install.yaml#L7292-L7750
+
+Deploy the Karmada kubeconfig in the Kyverno namespace. Fill in the Secret that holds the kubeconfig pointing to karmada-apiserver, i.e. the **ca_crt, client_cer and client_key** placeholders below.
+
+```yaml
+apiVersion: v1
+stringData:
+ kubeconfig: |-
+ apiVersion: v1
+ clusters:
+ - cluster:
+ certificate-authority-data: {{ca_crt}}
+ server: https://karmada-apiserver.karmada-system.svc.cluster.local:5443
+ name: kind-karmada
+ contexts:
+ - context:
+ cluster: kind-karmada
+ user: kind-karmada
+ name: karmada
+ current-context: karmada
+ kind: Config
+ preferences: {}
+ users:
+ - name: kind-karmada
+ user:
+ client-certificate-data: {{client_cer}}
+ client-key-data: {{client_key}}
+kind: Secret
+metadata:
+ name: kubeconfig
+ namespace: kyverno
+```
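+
+If you built the environment with `hack/local-up-karmada.sh`, the placeholder values can usually be extracted from the existing karmada-apiserver kubeconfig. A sketch (the file path and entry names are assumptions):
+
+```shell
+KARMADA_KUBECONFIG=$HOME/.kube/karmada.config
+# {{ca_crt}}
+kubectl config view --kubeconfig ${KARMADA_KUBECONFIG} --raw -o jsonpath='{.clusters[?(@.name=="karmada-apiserver")].cluster.certificate-authority-data}'
+# {{client_cer}}
+kubectl config view --kubeconfig ${KARMADA_KUBECONFIG} --raw -o jsonpath='{.users[?(@.name=="karmada-apiserver")].user.client-certificate-data}'
+# {{client_key}}
+kubectl config view --kubeconfig ${KARMADA_KUBECONFIG} --raw -o jsonpath='{.users[?(@.name=="karmada-apiserver")].user.client-key-data}'
+```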
+
+Deploy Kyverno controllers and services:
+```yaml
+apiVersion: v1
+kind: Service
+metadata:
+ labels:
+ app: kyverno
+ app.kubernetes.io/component: kyverno
+ app.kubernetes.io/instance: kyverno
+ app.kubernetes.io/name: kyverno
+ app.kubernetes.io/part-of: kyverno
+ app.kubernetes.io/version: v1.6.3
+ name: kyverno-svc
+ namespace: kyverno
+spec:
+ type: NodePort
+ ports:
+ - name: https
+ port: 443
+ targetPort: https
+ nodePort: {{nodePort}}
+ selector:
+ app: kyverno
+ app.kubernetes.io/name: kyverno
+---
+apiVersion: v1
+kind: Service
+metadata:
+ labels:
+ app: kyverno
+ app.kubernetes.io/component: kyverno
+ app.kubernetes.io/instance: kyverno
+ app.kubernetes.io/name: kyverno
+ app.kubernetes.io/part-of: kyverno
+ app.kubernetes.io/version: v1.6.3
+ name: kyverno-svc-metrics
+ namespace: kyverno
+spec:
+ ports:
+ - name: metrics-port
+ port: 8000
+ targetPort: metrics-port
+ selector:
+ app: kyverno
+ app.kubernetes.io/name: kyverno
+---
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+ labels:
+ app: kyverno
+ app.kubernetes.io/component: kyverno
+ app.kubernetes.io/instance: kyverno
+ app.kubernetes.io/name: kyverno
+ app.kubernetes.io/part-of: kyverno
+ app.kubernetes.io/version: v1.6.3
+ name: kyverno
+ namespace: kyverno
+spec:
+ replicas: 1
+ selector:
+ matchLabels:
+ app: kyverno
+ app.kubernetes.io/name: kyverno
+ strategy:
+ rollingUpdate:
+ maxSurge: 1
+ maxUnavailable: 40%
+ type: RollingUpdate
+ template:
+ metadata:
+ labels:
+ app: kyverno
+ app.kubernetes.io/component: kyverno
+ app.kubernetes.io/instance: kyverno
+ app.kubernetes.io/name: kyverno
+ app.kubernetes.io/part-of: kyverno
+ app.kubernetes.io/version: v1.6.3
+ spec:
+ affinity:
+ podAntiAffinity:
+ preferredDuringSchedulingIgnoredDuringExecution:
+ - podAffinityTerm:
+ labelSelector:
+ matchExpressions:
+ - key: app.kubernetes.io/name
+ operator: In
+ values:
+ - kyverno
+ topologyKey: kubernetes.io/hostname
+ weight: 1
+ containers:
+ - args:
+ - --filterK8sResources=[Event,*,*][*,kube-system,*][*,kube-public,*][*,kube-node-lease,*][Node,*,*][APIService,*,*][TokenReview,*,*][SubjectAccessReview,*,*][*,kyverno,kyverno*][Binding,*,*][ReplicaSet,*,*][ReportChangeRequest,*,*][ClusterReportChangeRequest,*,*][PolicyReport,*,*][ClusterPolicyReport,*,*]
+ - -v=2
+ - --kubeconfig=/etc/kubeconfig
+ - --serverIP={{nodeIP}}:{{nodePort}}
+ env:
+ - name: INIT_CONFIG
+ value: kyverno
+ - name: METRICS_CONFIG
+ value: kyverno-metrics
+ - name: KYVERNO_NAMESPACE
+ valueFrom:
+ fieldRef:
+ fieldPath: metadata.namespace
+ - name: KYVERNO_SVC
+ value: kyverno-svc
+ - name: TUF_ROOT
+ value: /.sigstore
+ image: ghcr.io/kyverno/kyverno:v1.6.3
+ imagePullPolicy: Always
+ livenessProbe:
+ failureThreshold: 2
+ httpGet:
+ path: /health/liveness
+ port: 9443
+ scheme: HTTPS
+ initialDelaySeconds: 15
+ periodSeconds: 30
+ successThreshold: 1
+ timeoutSeconds: 5
+ name: kyverno
+ ports:
+ - containerPort: 9443
+ name: https
+ protocol: TCP
+ - containerPort: 8000
+ name: metrics-port
+ protocol: TCP
+ readinessProbe:
+ failureThreshold: 4
+ httpGet:
+ path: /health/readiness
+ port: 9443
+ scheme: HTTPS
+ initialDelaySeconds: 5
+ periodSeconds: 10
+ successThreshold: 1
+ timeoutSeconds: 5
+ resources:
+ limits:
+ memory: 384Mi
+ requests:
+ cpu: 100m
+ memory: 128Mi
+ securityContext:
+ allowPrivilegeEscalation: false
+ capabilities:
+ drop:
+ - ALL
+ privileged: false
+ readOnlyRootFilesystem: true
+ runAsNonRoot: true
+ volumeMounts:
+ - mountPath: /.sigstore
+ name: sigstore
+ - mountPath: /etc/kubeconfig
+ name: kubeconfig
+ subPath: kubeconfig
+ initContainers:
+ - env:
+ - name: METRICS_CONFIG
+ value: kyverno-metrics
+ - name: KYVERNO_NAMESPACE
+ valueFrom:
+ fieldRef:
+ fieldPath: metadata.namespace
+ image: ghcr.io/kyverno/kyvernopre:v1.6.3
+ imagePullPolicy: Always
+ name: kyverno-pre
+ resources:
+ limits:
+ cpu: 100m
+ memory: 256Mi
+ requests:
+ cpu: 10m
+ memory: 64Mi
+ securityContext:
+ allowPrivilegeEscalation: false
+ capabilities:
+ drop:
+ - ALL
+ privileged: false
+ readOnlyRootFilesystem: true
+ runAsNonRoot: true
+ securityContext:
+ runAsNonRoot: true
+ serviceAccountName: kyverno-service-account
+ volumes:
+ - emptyDir: {}
+ name: sigstore
+ - name: kubeconfig
+ secret:
+ defaultMode: 420
+ secretName: kubeconfig
+---
+```
+
+For multi-cluster deployment, we need to add the `--serverIP` flag, which is the address of the webhook server. You therefore need to ensure that the network from nodes in the Karmada control plane to those in the `karmada-host` cluster is connected, and expose the Kyverno controller pods to the control plane, for example via the `nodePort` above.
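+
+The placeholders might be filled as follows in a kind-based environment built by `hack/local-up-karmada.sh` (a sketch; the context name is an assumption):
+
+```shell
+# {{nodeIP}}: an address of a karmada-host node that is reachable from the Karmada control plane
+kubectl --context karmada-host get nodes -o jsonpath='{.items[0].status.addresses[?(@.type=="InternalIP")].address}'
+# {{nodePort}}: any free port in the NodePort range (30000-32767 by default), e.g. 30443
+```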
+
+## Run demo
+### Create require-labels ClusterPolicy
+
+ClusterPolicy is a CRD that `Kyverno` offers to support different kinds of rules. Here is an example ClusterPolicy which requires that pods are created with the `app.kubernetes.io/name` label.
+You can use the following commands to create it in the Karmada control plane.
+
+```shell
+kubectl config use-context karmada-apiserver
+```
+
+```shell
+kubectl create -f- << EOF
+apiVersion: kyverno.io/v1
+kind: ClusterPolicy
+metadata:
+ name: require-labels
+spec:
+ validationFailureAction: enforce
+ rules:
+ - name: check-for-labels
+ match:
+ any:
+ - resources:
+ kinds:
+ - Pod
+ validate:
+ message: "label 'app.kubernetes.io/name' is required"
+ pattern:
+ metadata:
+ labels:
+ app.kubernetes.io/name: "?*"
+EOF
+```
+
+The output is similar to:
+
+```
+clusterpolicy.kyverno.io/require-labels created
+```
+
+### Create a bad deployment without labels
+
+```console
+kubectl create deployment nginx --image=nginx
+```
+
+The output is similar to:
+
+```
+error: failed to create deployment: admission webhook "validate.kyverno.svc-fail" denied the request:
+
+policy Deployment/default/nginx for resource violation:
+
+require-labels:
+ autogen-check-for-labels: 'validation error: label ''app.kubernetes.io/name'' is
+ required. rule autogen-check-for-labels failed at path /spec/template/metadata/labels/app.kubernetes.io/name/'
+
+```
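+
+A deployment whose pod template carries the required label passes validation; a minimal sketch:
+
+```yaml
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: nginx-labeled
+spec:
+  replicas: 1
+  selector:
+    matchLabels:
+      app.kubernetes.io/name: nginx
+  template:
+    metadata:
+      labels:
+        app.kubernetes.io/name: nginx   # satisfies the require-labels policy
+    spec:
+      containers:
+        - name: nginx
+          image: nginx
+```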
+
+## Reference
+
+- https://github.com/kyverno/kyverno
\ No newline at end of file
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/userguide/service/multi-cluster-ingress.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/userguide/service/multi-cluster-ingress.md
new file mode 100644
index 000000000..594cfdaf3
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/userguide/service/multi-cluster-ingress.md
@@ -0,0 +1,295 @@
+---
+title: Multi-cluster Ingress
+---
+
+Users can use [MultiClusterIngress API](https://github.com/karmada-io/karmada/blob/master/pkg/apis/networking/v1alpha1/ingress_types.go) provided in Karmada to import external traffic to services in the member clusters.
+
+> Note: To use this feature, the Kubernetes version of the member cluster must be v1.21 or later.
+
+## Prerequisites
+
+### Karmada has been installed
+
+We can install Karmada by referring to [Quick Start](https://github.com/karmada-io/karmada#quick-start), or directly run `hack/local-up-karmada.sh` script which is also used to run our E2E cases.
+
+### Cluster Network
+
+Currently, we need to use the [Multi-cluster Service](./multi-cluster-service.md#the-serviceexport-and-serviceimport-crds-have-been-installed) feature to import external traffic.
+
+So we need to ensure that the container networks between the **host cluster** and member clusters are connected. The **host cluster** indicates the cluster where the **Karmada Control Plane** is deployed.
+
+- If you use the `hack/local-up-karmada.sh` script to deploy Karmada, Karmada will have three member clusters, and the container networks between the **host cluster**, `member1` and `member2` are connected.
+- You can use `Submariner` or other related open source projects to connect the networks between clusters.
+
+> Note: In order to prevent routing conflicts, the Pod and Service CIDRs of the clusters must not overlap.
+
+## Example
+
+### Step 1: Deploy ingress-nginx on the host cluster
+
+We use [multi-cluster-ingress-nginx](https://github.com/karmada-io/multi-cluster-ingress-nginx) for this demonstration. It is based on the latest version (controller-v1.1.1) of [ingress-nginx](https://github.com/kubernetes/ingress-nginx), with some changes.
+
+#### Download code
+
+```shell
+# for HTTPS
+git clone https://github.com/karmada-io/multi-cluster-ingress-nginx.git
+# for SSH
+git clone git@github.com:karmada-io/multi-cluster-ingress-nginx.git
+```
+
+#### Build and deploy ingress-nginx
+
+Use the existing `karmada-host` kind cluster to build and deploy the ingress controller.
+
+```shell
+export KUBECONFIG=~/.kube/karmada.config
+export KIND_CLUSTER_NAME=karmada-host
+kubectl config use-context karmada-host
+cd multi-cluster-ingress-nginx
+make dev-env
+```
+
+#### Apply kubeconfig secret
+
+Create a secret that contains the `karmada-apiserver` authentication credential:
+
+```shell
+# get the 'karmada-apiserver' kubeconfig information and direct it to file /tmp/kubeconfig.yaml
+kubectl -n karmada-system get secret kubeconfig --template={{.data.kubeconfig}} | base64 -d > /tmp/kubeconfig.yaml
+# create secret with name 'kubeconfig' from file /tmp/kubeconfig.yaml
+kubectl -n ingress-nginx create secret generic kubeconfig --from-file=kubeconfig=/tmp/kubeconfig.yaml
+```
+
+#### Edit ingress-nginx-controller deployment
+
+We want `nginx-ingress-controller` to access `karmada-apiserver` to listen to changes in resources (such as MultiClusterIngress, EndpointSlices, and Services). Therefore, we need to mount the authentication credential of `karmada-apiserver` into the `nginx-ingress-controller`.
+
+```shell
+kubectl -n ingress-nginx edit deployment ingress-nginx-controller
+```
+
+Edit as follows:
+
+```yaml
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+ ...
+spec:
+ #...
+ template:
+ spec:
+ containers:
+ - args:
+ - /nginx-ingress-controller
+ - --karmada-kubeconfig=/etc/kubeconfig # new line
+ #...
+ volumeMounts:
+ #...
+ - mountPath: /etc/kubeconfig # new line
+ name: kubeconfig # new line
+ subPath: kubeconfig # new line
+ volumes:
+ #...
+ - name: kubeconfig # new line
+ secret: # new line
+ secretName: kubeconfig # new line
+```
+
+### Step 2: Use the MCS feature to discover the service
+
+#### Install ServiceExport and ServiceImport CRDs
+
+Refer to [here](./multi-cluster-service.md#the-serviceexport-and-serviceimport-crds-have-been-installed).
+
+#### Deploy web on member1 cluster
+
+deploy.yaml:
+
+
+
+
+```yaml
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+ name: web
+spec:
+ replicas: 1
+ selector:
+ matchLabels:
+ app: web
+ template:
+ metadata:
+ labels:
+ app: web
+ spec:
+ containers:
+ - name: hello-app
+ image: gcr.io/google-samples/hello-app:1.0
+ ports:
+ - containerPort: 8080
+ protocol: TCP
+---
+apiVersion: v1
+kind: Service
+metadata:
+ name: web
+spec:
+ ports:
+ - port: 81
+ targetPort: 8080
+ selector:
+ app: web
+---
+apiVersion: policy.karmada.io/v1alpha1
+kind: PropagationPolicy
+metadata:
+ name: mci-workload
+spec:
+ resourceSelectors:
+ - apiVersion: apps/v1
+ kind: Deployment
+ name: web
+ - apiVersion: v1
+ kind: Service
+ name: web
+ placement:
+ clusterAffinity:
+ clusterNames:
+ - member1
+```
+
+
+
+```shell
+kubectl --context karmada-apiserver apply -f deploy.yaml
+```
+
+#### Export web service from member1 cluster
+
+service_export.yaml:
+
+
+
+
+```yaml
+apiVersion: multicluster.x-k8s.io/v1alpha1
+kind: ServiceExport
+metadata:
+ name: web
+---
+apiVersion: policy.karmada.io/v1alpha1
+kind: PropagationPolicy
+metadata:
+ name: web-export-policy
+spec:
+ resourceSelectors:
+ - apiVersion: multicluster.x-k8s.io/v1alpha1
+ kind: ServiceExport
+ name: web
+ placement:
+ clusterAffinity:
+ clusterNames:
+ - member1
+```
+
+
+
+```shell
+kubectl --context karmada-apiserver apply -f service_export.yaml
+```
+
+#### Import web service to member2 cluster
+
+service_import.yaml:
+
+
+
+
+```yaml
+apiVersion: multicluster.x-k8s.io/v1alpha1
+kind: ServiceImport
+metadata:
+ name: web
+spec:
+ type: ClusterSetIP
+ ports:
+ - port: 81
+ protocol: TCP
+---
+apiVersion: policy.karmada.io/v1alpha1
+kind: PropagationPolicy
+metadata:
+ name: web-import-policy
+spec:
+ resourceSelectors:
+ - apiVersion: multicluster.x-k8s.io/v1alpha1
+ kind: ServiceImport
+ name: web
+ placement:
+ clusterAffinity:
+ clusterNames:
+ - member2
+```
+
+
+
+```shell
+kubectl --context karmada-apiserver apply -f service_import.yaml
+```
+
+### Step 3: Deploy multiclusteringress on karmada-controlplane
+
+mci-web.yaml:
+
+
+
+
+```yaml
+apiVersion: networking.karmada.io/v1alpha1
+kind: MultiClusterIngress
+metadata:
+ name: demo-localhost
+ namespace: default
+spec:
+ ingressClassName: nginx
+ rules:
+ - host: demo.localdev.me
+ http:
+ paths:
+ - backend:
+ service:
+ name: web
+ port:
+ number: 81
+ path: /web
+ pathType: Prefix
+```
+
+
+
+```shell
+kubectl --context karmada-apiserver apply -f mci-web.yaml
+```
+
+### Step 4: Local testing
+
+Let's forward a local port to the ingress controller:
+
+```shell
+kubectl port-forward --namespace=ingress-nginx service/ingress-nginx-controller 8080:80
+```
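+
+You can also check the route from the command line (a sketch; `demo.localdev.me` is a public wildcard DNS name that resolves to 127.0.0.1):
+
+```shell
+curl http://demo.localdev.me:8080/web/
+```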
+
+At this point, if you access http://demo.localdev.me:8080/web/, you should see an HTML page telling you:
+
+```html
+Hello, world!
+Version: 1.0.0
+Hostname: web-xxx-xxx
+```
\ No newline at end of file
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/userguide/service/multi-cluster-service-with-native-svc-access.mdx b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/userguide/service/multi-cluster-service-with-native-svc-access.mdx
new file mode 100644
index 000000000..77786dec6
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/userguide/service/multi-cluster-service-with-native-svc-access.mdx
@@ -0,0 +1,41 @@
+---
+title: Multi-cluster service with native service access
+---
+
+import MCSOverview from '../../resources/userguide/service/multiclusterservice/mcs-overview.png';
+import MCSWayOfWork from '../../resources/userguide/service/multiclusterservice/mcs-way-of-work.png';
+
+In Karmada, the MultiClusterService can enable users to access services across clusters with the native service domain name, like `foo.svc`, with the aim of providing users with a seamless experience when accessing services across multiple clusters, as if they were operating within a single cluster.
+
+<img src={MCSOverview}/>
+
+Once the network between clusters is connected, with MultiClusterService, access will be directed to the healthy backend pods distributed across these clusters.
+
+The MultiClusterService is implemented as a Karmada API resource plus multiple controllers; the resource determines the behavior of the controllers. The controllers, running within the Karmada control plane, sync the services' backend EndpointSlice resources between clusters, adding the pod IPs from multiple clusters to the services' backends.
+
+## How does a MultiCluster Service work?
+
+To implement access to services across multiple clusters with the native service name, Karmada introduces multiple controllers to sync the services' backend EndpointSlice resources between clusters. They work as follows:
+
+<img src={MCSWayOfWork}/>
+
+1. Karmada will collect EndpointSlice resources from all target clusters, and sync them to the Karmada control plane.
+2. Karmada will sync the collected EndpointSlice resources to all target clusters, attaching the EndpointSlices to the service.
+3. When users access through `foo.svc`, the underlying network will route the request to the backend pods in the multiple clusters.
+
+## API Object
+
+The MultiClusterService is an API in the Karmada networking API group. The current version is v1alpha1.
+
+You can check the MultiClusterService API specification [here](https://github.com/karmada-io/karmada/blob/65376b28d5037c27ff7ec0e56542c2a345d1a120/pkg/apis/networking/v1alpha1/service_types.go#L50).
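+
+As an illustration only (field names may differ between Karmada versions, so always check the API specification linked above), a MultiClusterService that exposes a Service named `foo` for cross-cluster access might look like this:
+
+```yaml
+apiVersion: networking.karmada.io/v1alpha1
+kind: MultiClusterService
+metadata:
+  name: foo            # must match the name of the Service it exposes
+spec:
+  types:
+    - CrossCluster     # enable access across clusters with the native service name
+```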
+
+## What's next
+
+If you configure MultiClusterService, you may also want to consider how to connect the network between clusters, such as [Submariner](../network/working-with-submariner).
+
+For more information on MultiClusterService:
+* Read [Access service across clusters within native service name](../../tutorials/access-service-across-clusters) to know how to use MultiClusterService.
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/userguide/service/multi-cluster-service.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/userguide/service/multi-cluster-service.md
new file mode 100644
index 000000000..35fee562b
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/userguide/service/multi-cluster-service.md
@@ -0,0 +1,207 @@
+---
+title: Multi-cluster Service Discovery
+---
+
+Users can export and import services between clusters with the [Multi-Cluster Service API](https://github.com/kubernetes-sigs/mcs-api).
+
+> Note: To use this feature, the Kubernetes version of the member clusters must be v1.21 or later.
+
+## Prerequisites
+
+### Install Karmada
+
+We can install Karmada by referring to [Quick Start](https://github.com/karmada-io/karmada#quick-start), or directly run the `hack/local-up-karmada.sh` script, which is also used to run our E2E cases.
+
+
+### Member Cluster Network
+
+Ensure that at least two clusters have been added to Karmada, and that the container networks of the member clusters are connected to each other.
+
+- If you deploy Karmada with the `hack/local-up-karmada.sh` script, Karmada will have three member clusters, and the container networks of `member1` and `member2` are connected.
+- You can use `Submariner` or other related open source projects to connect the networks between member clusters.
+
+> Note: To prevent routing conflicts, the Pod and Service CIDRs of the clusters must not overlap.
+
+### Install the ServiceExport and ServiceImport CRDs
+
+We need to install ServiceExport and ServiceImport in the member clusters.
+
+After installing ServiceExport and ServiceImport on the **Karmada control plane**, we can create a `ClusterPropagationPolicy` to propagate the two CRDs to the member clusters.
+
+```yaml
+# propagate ServiceExport CRD
+apiVersion: policy.karmada.io/v1alpha1
+kind: ClusterPropagationPolicy
+metadata:
+ name: serviceexport-policy
+spec:
+ resourceSelectors:
+ - apiVersion: apiextensions.k8s.io/v1
+ kind: CustomResourceDefinition
+ name: serviceexports.multicluster.x-k8s.io
+ placement:
+ clusterAffinity:
+ clusterNames:
+ - member1
+ - member2
+---
+# propagate ServiceImport CRD
+apiVersion: policy.karmada.io/v1alpha1
+kind: ClusterPropagationPolicy
+metadata:
+ name: serviceimport-policy
+spec:
+ resourceSelectors:
+ - apiVersion: apiextensions.k8s.io/v1
+ kind: CustomResourceDefinition
+ name: serviceimports.multicluster.x-k8s.io
+ placement:
+ clusterAffinity:
+ clusterNames:
+ - member1
+ - member2
+```
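+
+A sketch of installing the two CRDs on the Karmada control plane before applying the policies above (the URLs are assumed from the mcs-api repository):
+
+```shell
+kubectl --context karmada-apiserver apply -f https://raw.githubusercontent.com/kubernetes-sigs/mcs-api/master/config/crd/multicluster.x-k8s.io_serviceexports.yaml
+kubectl --context karmada-apiserver apply -f https://raw.githubusercontent.com/kubernetes-sigs/mcs-api/master/config/crd/multicluster.x-k8s.io_serviceimports.yaml
+```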
+## Example
+
+### Step 1: Deploy the service on the `member1` cluster
+
+We need to deploy the service on the `member1` cluster so that it can be discovered.
+
+```yaml
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+ name: serve
+spec:
+ replicas: 1
+ selector:
+ matchLabels:
+ app: serve
+ template:
+ metadata:
+ labels:
+ app: serve
+ spec:
+ containers:
+ - name: serve
+ image: jeremyot/serve:0a40de8
+ args:
+ - "--message='hello from cluster member1 (Node: {{env \"NODE_NAME\"}} Pod: {{env \"POD_NAME\"}} Address: {{addr}})'"
+ env:
+ - name: NODE_NAME
+ valueFrom:
+ fieldRef:
+ fieldPath: spec.nodeName
+ - name: POD_NAME
+ valueFrom:
+ fieldRef:
+ fieldPath: metadata.name
+---
+apiVersion: v1
+kind: Service
+metadata:
+ name: serve
+spec:
+ ports:
+ - port: 80
+ targetPort: 8080
+ selector:
+ app: serve
+---
+apiVersion: policy.karmada.io/v1alpha1
+kind: PropagationPolicy
+metadata:
+ name: mcs-workload
+spec:
+ resourceSelectors:
+ - apiVersion: apps/v1
+ kind: Deployment
+ name: serve
+ - apiVersion: v1
+ kind: Service
+ name: serve
+ placement:
+ clusterAffinity:
+ clusterNames:
+ - member1
+```
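+
+Save the manifest above as `serve.yaml` (the file name is an assumption) and apply it on the Karmada control plane:
+
+```shell
+kubectl --kubeconfig ~/.kube/karmada.config --context karmada-apiserver apply -f serve.yaml
+```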
+
+### Step 2: Export the service to the `member2` cluster
+
+- Create a `ServiceExport` object on the **Karmada control plane**, and then create a `PropagationPolicy` to propagate the `ServiceExport` object to the `member1` cluster.
+
+```yaml
+apiVersion: multicluster.x-k8s.io/v1alpha1
+kind: ServiceExport
+metadata:
+ name: serve
+---
+apiVersion: policy.karmada.io/v1alpha1
+kind: PropagationPolicy
+metadata:
+ name: serve-export-policy
+spec:
+ resourceSelectors:
+ - apiVersion: multicluster.x-k8s.io/v1alpha1
+ kind: ServiceExport
+ name: serve
+ placement:
+ clusterAffinity:
+ clusterNames:
+ - member1
+```
+
+- Create a `ServiceImport` object on the **Karmada control plane**, and then create a `PropagationPolicy` to propagate the `ServiceImport` object to the `member2` cluster.
+
+```yaml
+apiVersion: multicluster.x-k8s.io/v1alpha1
+kind: ServiceImport
+metadata:
+ name: serve
+spec:
+ type: ClusterSetIP
+ ports:
+ - port: 80
+ protocol: TCP
+---
+apiVersion: policy.karmada.io/v1alpha1
+kind: PropagationPolicy
+metadata:
+ name: serve-import-policy
+spec:
+ resourceSelectors:
+ - apiVersion: multicluster.x-k8s.io/v1alpha1
+ kind: ServiceImport
+ name: serve
+ placement:
+ clusterAffinity:
+ clusterNames:
+ - member2
+```
+
+### Step 3: Consume the service from the `member2` cluster
+
+After the steps above, we can find the **derived service**, prefixed with `derived-`, on the `member2` cluster. We can then access the **derived service** to reach the service on the `member1` cluster.
+
+Start a Pod named `request` on the `member2` cluster to access the ClusterIP of the **derived service**.
+
+```shell
+# We can find the corresponding derived service in the member2 cluster.
+$ kubectl --kubeconfig ~/.kube/members.config --context member2 get svc
+NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
+derived-serve ClusterIP 10.13.205.2 80/TCP 81s
+kubernetes ClusterIP 10.13.0.1 443/TCP 15m
+```
+
+```shell
+$ kubectl --kubeconfig ~/.kube/members.config --context member2 run -i --rm --restart=Never --image=jeremyot/request:0a40de8 request -- --duration={duration-time} --address={ClusterIP of derived service}
+```
+
+For example, if we access the service continuously for 3 seconds using the ClusterIP `10.13.205.2`, we will get output like the following:
+
+```shell
+# Access the derived service; the workload in member1 returns a response as expected.
+$ kubectl --kubeconfig ~/.kube/members.config --context member2 run -i --rm --restart=Never --image=jeremyot/request:0a40de8 request -- --duration=3s --address=10.13.205.2
+If you don't see a command prompt, try pressing enter.
+2022/07/24 15:13:08 'hello from cluster member1 (Node: member1-control-plane Pod: serve-9b5b94f65-cp87p Address: 10.10.0.5)'
+2022/07/24 15:13:09 'hello from cluster member1 (Node: member1-control-plane Pod: serve-9b5b94f65-cp87p Address: 10.10.0.5)'
+pod "request" deleted
+```
\ No newline at end of file
diff --git a/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/userguide/service/working-with-eriecanal.md b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/userguide/service/working-with-eriecanal.md
new file mode 100644
index 000000000..87844222c
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/version-v1.9/userguide/service/working-with-eriecanal.md
@@ -0,0 +1,423 @@
+---
+title: Working with ErieCanal to Support Cross-Cluster Service Governance
+---
+
+# ErieCanal and Karmada: Cross-Cluster Service Governance
+
+## Background
+
+As a core technology, Kubernetes has become the foundation of modern application architecture, and more and more enterprises use Kubernetes as their container orchestration system. With the growing adoption of cloud computing and the continuous growth of enterprises, more and more of them are considering, or have already adopted, multi-cloud and hybrid-cloud architectures. With the introduction of multi-cloud and hybrid-cloud strategies, the number of Kubernetes clusters keeps growing accordingly.
+
+With Karmada, we can easily deploy applications across regions or cloud providers, and improve the availability and flexibility of a system by deploying multiple instances of the same service across multiple Kubernetes clusters. However, since services depend on each other, to guarantee their interaction and functional integrity, the related services have to be deployed in the same cluster. The strong dependency and high coupling between services is like a tangled vine that entangles the components of the system, so services often have to be deployed in full, leading to low resource utilization and higher operating costs.
+
+This document demonstrates how to use [ErieCanal](https://github.com/flomesh-io/ErieCanal) to interconnect Karmada member clusters and implement cross-cluster service governance.
+
+ErieCanal is an MCS ([Multi-Cluster Service](https://github.com/kubernetes-sigs/mcs-api)) implementation that provides MCS, Ingress, Egress, and GatewayAPI capabilities for Kubernetes clusters.
+
+## Overall Architecture
+
+In this example, we use Karmada for cross-cluster resource scheduling and ErieCanal for cross-cluster service registration and discovery.
+
+- Service registration: by creating a [`ServiceExport`](https://github.com/flomesh-io/ErieCanal/blob/7fc7e33315347ec69dc60ff19fdeb1cd1552ef34/apis/serviceexport/v1alpha1/serviceexport_types.go#L125) resource, a Service is declared as a multi-cluster service and registered with the ErieCanal control plane, and an ingress rule is created for the service at the same time.
+- Service discovery: the ErieCanal control plane creates a [`ServiceImport`](https://github.com/flomesh-io/ErieCanal/blob/7fc7e33315347ec69dc60ff19fdeb1cd1552ef34/apis/serviceimport/v1alpha1/serviceimport_types.go#L160) resource from the service information and the information of the cluster it resides in, and syncs it to all member clusters.
+
+ErieCanal runs on both the control plane cluster and the member clusters, and the member clusters need to be registered with the control plane cluster. ErieCanal is an independent component and does not need to run on the Karmada control plane. ErieCanal is responsible for multi-cluster service registration and discovery, while Karmada is responsible for multi-cluster resource scheduling.
+
+![](../../resources/userguide/service/eriecanal/karmada-working-with-eriecanal-overview.png)
+
+After cross-cluster service registration and discovery is in place, traffic needs to be scheduled automatically when handling application access (curl -> httpbin). There are two options:
+
+- Integrate the service mesh [FSM (Flomesh Service Mesh)](https://flomesh.io/fsm/) to schedule traffic according to policies; in addition to obtaining service access information from Kubernetes Services, it also obtains service information from the multi-cluster ServiceImport resources.
+- Use the ErieCanalNet component of ErieCanal (this feature will be released soon) to manage cross-cluster traffic through eBPF + sidecar (node level).
+
+The whole demo flow is described below. You can also use the provided [flomesh.sh script](../../resources/userguide/service/eriecanal/flomesh.sh) to run the demo automatically. To use the script, **Docker and kubectl must already be installed on the system, with at least 8 GB of memory**. The script can be used as follows:
+
+- `flomesh.sh` - without any arguments, the script creates 4 clusters, sets up the environment (installs Karmada, ErieCanal, and FSM), and runs the demo.
+- `flomesh.sh -h` - show the description of the arguments
+- `flomesh.sh -i` - create the clusters and set up the environment
+- `flomesh.sh -d` - run the demo
+- `flomesh.sh -r` - delete the namespace used by the demo
+- `flomesh.sh -u` - destroy the clusters
+
+The following is a step-by-step guide that matches the execution steps of the flomesh.sh script.
+
+## Prerequisites
+
+- 4 clusters: control-plane, cluster-1, cluster-2, cluster-3
+- Docker
+- k3d
+- helm
+- kubectl
+
+## Environment Setup
+
+### 1. Install the Karmada control plane
+
+Install the Karmada control plane by referring to the [Karmada documentation](https://karmada.io/zh/docs/installation/). After Karmada is initialized, register the **three member clusters** cluster-1, cluster-2, and cluster-3 with the Karmada control plane; see the [Karmada cluster registration guide](https://karmada.io/zh/docs/userguide/clustermanager/cluster-registration/).
+
+Push mode is used here. Register the clusters with a command like the following (executed on the control-plane cluster):
+
+```shell
+karmadactl --kubeconfig PATH_TO_KARMADA_CONFIG join CLUSTER_NAME --cluster-kubeconfig=PATH_CLSUTER_KUBECONFIG
+```
+
+Next, register the ErieCanal multi-cluster CRDs with the Karmada control plane.
+
+```shell
+kubectl --kubeconfig PATH_TO_KARMADA_CONFIG apply -f https://raw.githubusercontent.com/flomesh-io/ErieCanal/main/charts/erie-canal/apis/flomesh.io_clusters.yaml
+kubectl --kubeconfig PATH_TO_KARMADA_CONFIG apply -f https://raw.githubusercontent.com/flomesh-io/ErieCanal/main/charts/erie-canal/apis/flomesh.io_mcs-api.yaml
+```
+
+On the control-plane cluster, you can check the cluster registration information using the Karmada apiserver config.
+
+```shell
+kubectl --kubeconfig PATH_TO_KARMADA_CONFIG get cluster
+NAME VERSION MODE READY AGE
+cluster-1 v1.23.8+k3s2 Push True 154m
+cluster-2 v1.23.8+k3s2 Push True 154m
+cluster-3 v1.23.8+k3s2 Push True 154m
+```
+
+### 2. Install ErieCanal
+
+Next, install ErieCanal on **all clusters**. Installing with Helm is recommended here; see the [ErieCanal installation documentation](https://github.com/flomesh-io/ErieCanal#install).
+
+```shell
+helm repo add ec https://ec.flomesh.io --force-update
+helm repo update
+
+EC_NAMESPACE=erie-canal
+EC_VERSION=0.1.3
+
+helm upgrade -i --namespace ${EC_NAMESPACE} --create-namespace --version=${EC_VERSION} --set ec.logLevel=5 ec ec/erie-canal
+```
+
+After the installation is complete, register the **three member clusters** with the ErieCanal control plane. The command (executed on the control-plane cluster) is as follows, where:
+
+- `CLUSTER_NAME`: the member cluster name
+- `HOST_IP`: the entry IP address of the member cluster
+- `PORT`: the entry port of the member cluster
+- `KUBECONFIG`: the KUBECONFIG content of the member cluster
+
+```shell
+kubectl apply -f - < new Message(os.env['HOSTNAME'] +'\n'))
+---
+apiVersion: v1
+kind: Service
+metadata:
+ name: httpbin
+spec:
+ ports:
+ - port: 8080
+ targetPort: 8080
+ protocol: TCP
+ selector:
+ app: pipy
+EOF
+```
+
+After creating the resources on the Karmada control plane, we also need to create a `PropagationPolicy` to distribute them; here we propagate the `Deployment` and `Service` to the member clusters `cluster-1` and `cluster-3`.
+
+```shell
+$kmd apply -n httpbin -f - < kind-karmada.yaml
+```
+
+```bash
+kubectl create secret generic istio-kubeconfig --from-file=config=kind-karmada.yaml -nistio-system
+```
+
+3. Install istio control plane
+
+```bash
+cat < istio-remote-secret-member1.yaml
+```
+
+### Prepare member2 cluster secret
+
+1. Export `KUBECONFIG` and switch to `karmada member2`:
+```bash
+export KUBECONFIG="$HOME/.kube/members.config"
+kubectl config use-context member2
+```
+
+2. Create an Istio remote secret for member2:
+```bash
+istioctl x create-remote-secret --name=member2 > istio-remote-secret-member2.yaml
+```
+
+### Apply istio remote secret
+
+Export `KUBECONFIG` and switch to `karmada apiserver`:
+
+```
+# export KUBECONFIG=$HOME/.kube/karmada.config
+
+# kubectl config use-context karmada-apiserver
+```
+
+Apply istio remote secret:
+```bash
+kubectl apply -f istio-remote-secret-member1.yaml
+
+kubectl apply -f istio-remote-secret-member2.yaml
+```
+
+
+### Install istio remote
+
+1. Install istio remote member1
+
+Export `KUBECONFIG` and switch to `karmada member1`:
+```bash
+export KUBECONFIG="$HOME/.kube/members.config"
+kubectl config use-context member1
+```
+
+```bash
+cat < istio-remote-secret-member2.yaml
+```
+
+Switch to `member1`:
+
+```bash
+kubectl config use-context member1
+```
+
+Apply istio remote secret
+
+```bash
+kubectl apply -f istio-remote-secret-member2.yaml
+```
+
+2. Configure member2 as a remote
+
+Save the address of `member1`’s east-west gateway
+
+```bash
+export DISCOVERY_ADDRESS=$(kubectl -n istio-system get svc istio-eastwestgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
+```
+
+Create a remote configuration on `member2`.
+
+Switch to `member2`:
+
+```bash
+kubectl config use-context member2
+```
+
+```bash
+cat <
+```
+
+Then you will find that the deployment nginx has been restored successfully.
+```shell
+# kubectl get deployment.apps/nginx
+NAME READY UP-TO-DATE AVAILABLE AGE
+nginx 2/2 2 2 21s
+```
+
+### Backup and restore of Kubernetes resources through Velero combined with Karmada
+
+In the Karmada control plane, we need to install the Velero CRDs but do not need controllers to reconcile them. They are treated as resource templates, not specific resource instances. Based on the Work API, they will be encapsulated as Work objects, delivered to member clusters, and finally reconciled by the Velero controllers in the member clusters.
+
+Create the Velero CRDs in the Karmada control plane.
+Remote Velero CRD directory: `https://github.com/vmware-tanzu/helm-charts/tree/main/charts/velero/crds/`
+
+Create a backup in `karmada-apiserver` and distribute it to the `member1` cluster through a PropagationPolicy.
+
+```shell
+# create backup policy
+cat < Note: The default controller list might be changed in the future releases. The controllers enabled in the last release
+> might be disabled or deprecated and new controllers might be introduced too. Users who are using this flag should
+> check the release notes before system upgrade.
+
+## Kubernetes Controllers
+
+In addition to the controllers that are maintained by the Karmada community, Karmada also requires some controllers from
+Kubernetes. These controllers run as part of `kube-controller-manager` and are maintained by the Kubernetes community.
+
+We recommend deploying `kube-controller-manager` along with the Karmada components. The installation
+methods listed in the [installation guide][2] will help you deploy it as well as the Karmada components.
+
+### Required Controllers
+
+Not all controllers in `kube-controller-manager` are necessary for Karmada. If you are deploying
+Karmada using other tools, you might have to configure the controllers with the `--controllers` flag, just like what we did in the
+[example of kube-controller-manager deployment][3].
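+
+For reference, the relevant argument in such a deployment might look like this (a sketch; the exact controller list can change between releases, so always check the example linked above):
+
+```yaml
+# Excerpt from a kube-controller-manager container spec used alongside Karmada (sketch)
+command:
+  - kube-controller-manager
+  - --controllers=namespace,garbagecollector,serviceaccount-token,ttl-after-finished,bootstrapsigner,csrapproving,csrcleaner,csrsigning,clusterrole-aggregation
+```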
+
+The following controllers are tested and recommended by Karmada.
+
+#### namespace
+
+The `namespace` controller runs as part of `kube-controller-manager`. It watches `Namespace` deletion and deletes
+all resources in the given namespace.
+
+For the Karmada control plane, we inherit this behavior to keep a consistent user experience. More than that, we also
+rely on this feature in the implementation of Karmada controllers, for example, when un-registering a cluster,
+Karmada would delete the `execution namespace` (named `karmada-es-<cluster name>`) that stores all the resources
+propagated to that cluster, to ensure all the resources could be cleaned up from both the Karmada control plane and the
+given cluster.
+
+More details about the `namespace` controller, please refer to
+[namespace controller sync logic](https://github.com/kubernetes/kubernetes/blob/v1.23.4/pkg/controller/namespace/deletion/namespaced_resources_deleter.go#L82-L94).
+
+#### garbagecollector
+
+The `garbagecollector` controller runs as part of `kube-controller-manager`. It is used to clean up garbage resources.
+It manages [owner reference](https://kubernetes.io/docs/concepts/overview/working-with-objects/owners-dependents/) and
+deletes the resources once all owners are absent.
+
+For the Karmada control plane, we also use `owner reference` to link objects to each other. For example, each
+`ResourceBinding` has an owner reference that links to the `resource template`. Once the `resource template` is removed,
+the `ResourceBinding` will be removed by `garbagecollector` controller automatically.
+
+For more details about garbage collection mechanisms, please refer to
+[Garbage Collection](https://kubernetes.io/docs/concepts/architecture/garbage-collection/).
+
+#### serviceaccount-token
+
+The `serviceaccount-token` controller runs as part of `kube-controller-manager`.
+It watches `ServiceAccount` creation and creates a corresponding ServiceAccount token Secret to allow API access.
+
+For the Karmada control plane, after a `ServiceAccount` object is created by the administrator, we also need
+`serviceaccount-token` controller to generate the ServiceAccount token `Secret`, which will be a relief for
+administrator as he/she doesn't need to manually prepare the token.
+
+More details please refer to:
+- [service account token controller](https://kubernetes.io/docs/reference/access-authn-authz/service-accounts-admin/#token-controller)
+- [service account tokens](https://kubernetes.io/docs/reference/access-authn-authz/authentication/#service-account-tokens)
+
+#### clusterrole-aggregation
+
+The `clusterrole-aggregation` controller runs as part of `kube-controller-manager`. It watches for ClusterRole objects
+with an aggregationRule set, and aggregate several ClusterRoles into one combined ClusterRole.
+
+For the Karmada control plane, it aggregates the read and write permissions for namespace-scoped resources under the `admin` ClusterRole,
+and also aggregates the read and write permissions for accessing Karmada namespace-scoped resources under `admin`.
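+
+As an illustration (a sketch, not the exact rules Karmada ships), a ClusterRole carrying the aggregation label is merged into the built-in `admin` role automatically:
+
+```yaml
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRole
+metadata:
+  name: aggregate-karmada-policies-to-admin   # hypothetical name
+  labels:
+    rbac.authorization.k8s.io/aggregate-to-admin: "true"
+rules:
+  - apiGroups: ["policy.karmada.io"]
+    resources: ["propagationpolicies", "overridepolicies"]
+    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
+```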
+
+More details please refer to:
+- [Aggregated ClusterRoles](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#aggregated-clusterroles)
+- [grant admin clusterrole with karmada resource permission](https://github.com/karmada-io/karmada/issues/3916)
+
+### Optional Controllers
+
+#### ttl-after-finished
+
+The `ttl-after-finished` controller runs as part of `kube-controller-manager`.
+It watches `Job` updates and limits the lifetime of finished `Jobs`.
+The TTL timer starts when the Job finishes, and the finished Job will be cleaned up after the TTL expires.
+
+For the Karmada control plane, we also provide the capability to clean up finished `Jobs` automatically by
+specifying the `.spec.ttlSecondsAfterFinished` field of a Job, which will be a relief for the control plane.
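+
+For example, a Job submitted to the Karmada control plane with a TTL might look like this (a sketch):
+
+```yaml
+apiVersion: batch/v1
+kind: Job
+metadata:
+  name: pi
+spec:
+  ttlSecondsAfterFinished: 100   # the finished Job is cleaned up 100s after completion
+  template:
+    spec:
+      restartPolicy: Never
+      containers:
+        - name: pi
+          image: perl:5.34.0
+          command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
+```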
+
+More details please refer to:
+- [ttl after finished controller](https://kubernetes.io/docs/concepts/workloads/controllers/ttlafterfinished/#ttl-after-finished-controller)
+- [clean up finished jobs automatically](https://kubernetes.io/docs/concepts/workloads/controllers/job/#clean-up-finished-jobs-automatically)
+
+#### bootstrapsigner
+
+The `bootstrapsigner` controller runs as part of `kube-controller-manager`.
+It uses bootstrap tokens to create a signature for a specific ConfigMap used in a "discovery" process.
+
+For the Karmada control plane, we also provide `cluster-info` ConfigMap in `kube-public` namespace. This is used early in a cluster bootstrap process before the client trusts the API server. The signed ConfigMap can be authenticated by the shared token.
+
+> Note: this controller is currently used when registering member clusters in PULL mode via `karmadactl register`.
+
+More details please refer to:
+- [bootstrap tokens overview](https://kubernetes.io/docs/reference/access-authn-authz/bootstrap-tokens/#bootstrap-tokens-overview)
+- [configmap signing](https://kubernetes.io/docs/reference/access-authn-authz/bootstrap-tokens/#configmap-signing)
+
+#### tokencleaner
+
+The `tokencleaner` controller runs as part of `kube-controller-manager`.
+Expired tokens can be deleted automatically by enabling the tokencleaner controller on the controller manager.
+
+> Note: this controller is currently used when registering member clusters in PULL mode via `karmadactl register`.
+
+More details please refer to:
+- [bootstrap tokens overview](https://kubernetes.io/docs/reference/access-authn-authz/bootstrap-tokens/#bootstrap-tokens-overview)
+- [enabling bootstrap token authentication](https://kubernetes.io/docs/reference/access-authn-authz/bootstrap-tokens/#enabling-bootstrap-token-authentication)
+
+#### csrapproving, csrcleaner, csrsigning
+
+These controllers run as part of `kube-controller-manager`.
+
+The `csrapproving` controller uses the [SubjectAccessReview API](https://kubernetes.io/docs/reference/access-authn-authz/authorization/#checking-api-access) to determine if a given user is authorized to request a CSR, then approves based on the authorization outcome.
+
+The `csrcleaner` controller cleans up expired CSRs periodically.
+
+The `csrsigning` controller signs the certificate using Karmada root CA.
+
+> Note: these controllers are currently used when registering member clusters in PULL mode via `karmadactl register`.
+
+More details please refer to:
+- [csr approval](https://kubernetes.io/docs/reference/access-authn-authz/kubelet-tls-bootstrapping/#approval)
+- [certificate signing request spec](https://kubernetes.io/docs/reference/kubernetes-api/authentication-resources/certificate-signing-request-v1/#CertificateSigningRequestSpec)
+- [certificate signing requests](https://kubernetes.io/docs/reference/access-authn-authz/certificate-signing-requests/)
+
+[1]: https://kubernetes.io/docs/concepts/architecture/controller/
+[2]: ../../installation/installation.md
+[3]: https://github.com/karmada-io/karmada/blob/master/artifacts/deploy/kube-controller-manager.yaml
diff --git a/versioned_docs/version-v1.9/administrator/configuration/resource-deletion-protection.md b/versioned_docs/version-v1.9/administrator/configuration/resource-deletion-protection.md
new file mode 100644
index 000000000..eef119832
--- /dev/null
+++ b/versioned_docs/version-v1.9/administrator/configuration/resource-deletion-protection.md
@@ -0,0 +1,54 @@
+---
+title: Resource Deletion Protection
+---
+
+Karmada provides deletion protection for resources, and it is **enabled by default**.
+
+Resource deletion protection can be applied to **any resource type**, including but not limited to Kubernetes native resources, CRDs, and more.
+
+When a resource is marked as protected, **all delete operations on it will be denied**.
+
+## Protecting a Resource
+
+Karmada uses Labels to protect resources. If you want to protect a resource, you can label it with `resourcetemplate.karmada.io/deletion-protected=Always`.
+
+Protection takes effect **only** when the value is `Always`.
+
+To protect a Namespace named `minio`:
+```
+kubectl label namespaces minio resourcetemplate.karmada.io/deletion-protected=Always
+```
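+
+You can confirm that the label is in place (a sketch):
+```
+kubectl get namespace minio --show-labels
+```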
+
+When you attempt to delete the protected `minio` Namespace, you will see the following output:
+```
+[root@cluster1]# kubectl delete namespaces minio
+Error from server (Forbidden): admission webhook "resourcedeletionprotection.karmada.io" denied the request: This resource is protected, please make sure to remove the label: resourcetemplate.karmada.io/deletion-protected
+```
+
+## Unprotecting a Resource
+
+If you want to remove Karmada's protection from a resource, you only need to **remove** the `resourcetemplate.karmada.io/deletion-protected` Label.
+```
+kubectl label namespaces minio resourcetemplate.karmada.io/deletion-protected-
+```
+
+Alternatively, you can directly change its value to a value other than `Always`, such as `Never`.
+```
+kubectl label namespaces minio resourcetemplate.karmada.io/deletion-protected=Never --overwrite
+```
+
+## Special Cases
+
+### Deleting a Namespace Containing Protected Resources
+
+If a Namespace is not protected but contains protected resources, the deletion of that Namespace will **not be successful**.
+
+### Force Deletion (--force)
+
+Even when using `--force` to delete a protected resource, it will **not be deleted**.
+
+```
+[root@cluster1]# kubectl delete namespace minio --force
+Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
+Error from server (Forbidden): admission webhook "resourcedeletionprotection.karmada.io" denied the request: This resource is protected, please make sure to remove the label: resourcetemplate.karmada.io/deletion-protected
+```
diff --git a/versioned_docs/version-v1.9/administrator/migration/migrate-in-batch.md b/versioned_docs/version-v1.9/administrator/migration/migrate-in-batch.md
new file mode 100644
index 000000000..cf428396c
--- /dev/null
+++ b/versioned_docs/version-v1.9/administrator/migration/migrate-in-batch.md
@@ -0,0 +1,97 @@
+---
+title: Migrate In Batch
+---
+
+## Scenario
+
+Assume a user has a single Kubernetes cluster which already has many native resources installed.
+
+The user wants to install Karmada for multi-cluster management, and hopes to migrate the resources that already exist from the original cluster to Karmada.
+It is required that the existing pods are not affected during the migration, which means the relevant containers must not be restarted.
+
+So, how do we migrate the existing resources?
+
+![](../../resources/administrator/migrate-in-batch-1.jpg)
+
+## Recommended migration strategy
+
+If you only want to migrate individual resources, you can just refer to [promote-legacy-workload](./promote-legacy-workload) to do it one by one.
+
+If you want to migrate a batch of resources, you are advised to first take over all resources at resource granularity with a few `PropagationPolicy` objects;
+then, if you have further propagation demands at application granularity, you can apply higher-priority `PropagationPolicy` objects to preempt them.
+
+So, how do you take over all resources at resource granularity? You can do it as follows.
+
+![](../../resources/administrator/migrate-in-batch-2.jpg)
+
+### Step one
+
+Since the existing resources will be taken over by Karmada, there is no longer any need to apply the related YAML config to the member cluster.
+That means you can stop the corresponding operation or pipeline.
+
+### Step two
+
+Apply all the YAML config of resources to Karmada control plane, as the [ResourceTemplate](https://karmada.io/docs/core-concepts/concepts#resource-template) of Karmada.
+
+### Step three
+
+Edit a [PropagationPolicy](https://karmada.io/docs/core-concepts/concepts#propagation-policy), and apply it to Karmada control plane. You should pay attention to two fields:
+
+* `spec.conflictResolution: Overwrite`:**the value must be [Overwrite](https://github.com/karmada-io/karmada/blob/master/docs/proposals/migration/design-of-seamless-cluster-migration-scheme.md#proposal).**
+* `spec.resourceSelectors`: defines which resources are selected for migration
+
+Here we provide two examples:
+
+#### Eg1. migrate all deployments
+
+If you want to migrate all deployments from `member1` cluster to Karmada, you shall apply:
+
+```yaml
+apiVersion: policy.karmada.io/v1alpha1
+kind: PropagationPolicy
+metadata:
+ name: deployments-pp
+spec:
+ conflictResolution: Overwrite
+ placement:
+ clusterAffinity:
+ clusterNames:
+ - member1
+ priority: 0
+ resourceSelectors:
+ - apiVersion: apps/v1
+ kind: Deployment
+ schedulerName: default-scheduler
+```
+
+#### Eg2. migrate all services
+
+If you want to migrate all services from `member1` cluster to Karmada, you shall apply:
+
+```yaml
+apiVersion: policy.karmada.io/v1alpha1
+kind: PropagationPolicy
+metadata:
+ name: services-pp
+spec:
+ conflictResolution: Overwrite
+ placement:
+ clusterAffinity:
+ clusterNames:
+ - member1
+ priority: 0
+ resourceSelectors:
+ - apiVersion: v1
+ kind: Service
+ schedulerName: default-scheduler
+```
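+
+After applying either policy on the Karmada control plane, you can check that the resources have been taken over, for example (a sketch; the kubeconfig path and file name are assumptions):
+
+```shell
+kubectl --kubeconfig $HOME/.kube/karmada.config apply -f deployments-pp.yaml
+kubectl --kubeconfig $HOME/.kube/karmada.config get resourcebindings -A
+```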
+
+### Step four
+
+The rest migration operations will be finished by Karmada automatically.
+
+## PropagationPolicy Preemption and Demo
+
+Besides, if you have further propagation demands at application granularity, you can apply higher-priority `PropagationPolicy` objects
+to preempt those you applied in the migration described above. For a detailed demo, refer to the tutorial [Resource Migration](../../tutorials/resource-migration.md).
+
diff --git a/versioned_docs/version-v1.9/administrator/migration/migration-from-kubefed.md b/versioned_docs/version-v1.9/administrator/migration/migration-from-kubefed.md
new file mode 100644
index 000000000..f52659128
--- /dev/null
+++ b/versioned_docs/version-v1.9/administrator/migration/migration-from-kubefed.md
@@ -0,0 +1,237 @@
+---
+title: Migration From Kubefed
+---
+
+Karmada is developed in continuation of Kubernetes [Federation v1](https://github.com/kubernetes-retired/federation)
+and [Federation v2(aka Kubefed)](https://github.com/kubernetes-sigs/kubefed). Karmada inherited a lot of concepts
+from these two versions. For example:
+
+- **Resource template**: Karmada uses Kubernetes Native API definition for federated resource template,
+ to make it easy to integrate with existing tools that already adopt Kubernetes.
+- **Propagation Policy**: Karmada offers a standalone Propagation(placement) Policy API to define multi-cluster
+ scheduling and spreading requirements.
+- **Override Policy**: Karmada provides a standalone Override Policy API for specializing cluster relevant
+ configuration automation.
+
+Most of the features in Kubefed have been reformed in Karmada, so Karmada would be the natural successor.
+
+Generally speaking, migrating from Kubefed to Karmada would be pretty easy.
+This document outlines the basic migration path for Kubefed users.
+**Note:** This document is a work in progress, any feedback would be welcome.
+
+## Cluster Registration
+
+Kubefed provides `join` and `unjoin` commands in `kubefedctl` command line tool, Karmada also implemented the
+two commands in `karmadactl`.
+
+Refer to [Kubefed Cluster Registration](https://github.com/kubernetes-sigs/kubefed/blob/master/docs/cluster-registration.md),
+and [Karmada Cluster Registration](https://karmada.io/docs/userguide/clustermanager/cluster-registration) for more
+details.
+
+### Joining Clusters
+
+Assume you use the `kubefedctl` tool to join a cluster as follows:
+
+```bash
+kubefedctl join cluster1 --cluster-context cluster1 --host-cluster-context cluster1
+```
+
+Now with Karmada, you can use `karmadactl` tool to do the same thing:
+```
+karmadactl join cluster1 --cluster-context cluster1 --karmada-context karmada
+```
+
+The behavior behind the `join` command is similar between Kubefed and Karmada. For Kubefed, it will create a
+[KubeFedCluster](https://github.com/kubernetes-sigs/kubefed/blob/96f03f0dea62fe09136010255acf218ed14987f3/pkg/apis/core/v1beta1/kubefedcluster_types.go#L94),
+object and Karmada will create a [Cluster](https://github.com/karmada-io/karmada/blob/aa2419cb1f447d5512b2a998ec81c9013fa31586/pkg/apis/cluster/types.go#L36)
+object to describe the joined cluster.
+
+### Checking status of joined clusters
+
+Assume you use the `kubefedctl` tool to check the status of the joined clusters as follows:
+
+```
+kubectl -n kube-federation-system get kubefedclusters
+
+NAME AGE READY KUBERNETES-VERSION
+cluster1 1m True v1.21.2
+cluster2 1m True v1.22.0
+```
+
+Now with Karmada, you can use `karmadactl` tool to do the same thing:
+
+```
+kubectl get clusters
+
+NAME VERSION MODE READY AGE
+member1 v1.20.7 Push True 66s
+```
+
+Kubefed manages clusters in `Push` mode, whereas Karmada supports both `Push` and `Pull` modes.
+Refer to [Overview of cluster mode](https://karmada.io/docs/userguide/clustermanager/cluster-registration) for
+more details.
+
+### Unjoining clusters
+
+Assume you use the `kubefedctl` tool to unjoin as follows:
+
+```
+kubefedctl unjoin cluster2 --cluster-context cluster2 --host-cluster-context cluster1
+```
+
+Now with Karmada, you can use `karmadactl` tool to do the same thing:
+
+```
+karmadactl unjoin cluster2 --cluster-context cluster2 --karmada-context karmada
+```
+
+The behavior behind the `unjoin` command is similar between Kubefed and Karmada, they both remove the cluster
+from the control plane by removing the cluster object.
+
+## Propagating workload to clusters
+
+Assume you are going to propagate a workload (`Deployment`) to both clusters named `cluster1` and `cluster2`;
+you might have to deploy the following YAML to Kubefed:
+
+```yaml
+apiVersion: types.kubefed.io/v1beta1
+kind: FederatedDeployment
+metadata:
+ name: test-deployment
+ namespace: test-namespace
+spec:
+ template:
+ metadata:
+ labels:
+ app: nginx
+ spec:
+ replicas: 3
+ selector:
+ matchLabels:
+ app: nginx
+ template:
+ metadata:
+ labels:
+ app: nginx
+ spec:
+ containers:
+ - image: nginx
+ name: nginx
+ placement:
+ clusters:
+ - name: cluster2
+ - name: cluster1
+ overrides:
+ - clusterName: cluster2
+ clusterOverrides:
+ - path: "/spec/replicas"
+ value: 5
+ - path: "/spec/template/spec/containers/0/image"
+ value: "nginx:1.17.0-alpine"
+ - path: "/metadata/annotations"
+ op: "add"
+ value:
+ foo: bar
+ - path: "/metadata/annotations/foo"
+ op: "remove"
+```
+
+Now with Karmada, the YAML can be split into three YAMLs, one each for the `template`, `placement`, and `overrides`.
+
+In Karmada, the template doesn't need to be embedded into a `Federated CRD`; it is just the same as the Kubernetes native declaration:
+
+```yaml
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+ name: nginx
+ labels:
+ app: nginx
+spec:
+ replicas: 3
+ selector:
+ matchLabels:
+ app: nginx
+ template:
+ metadata:
+ labels:
+ app: nginx
+ spec:
+ containers:
+ - image: nginx
+ name: nginx
+```
+
+For the `placement` part, Karmada provides `PropagationPolicy` API to hold the placement rules:
+
+```yaml
+apiVersion: policy.karmada.io/v1alpha1
+kind: PropagationPolicy
+metadata:
+ name: nginx-propagation
+spec:
+ resourceSelectors:
+ - apiVersion: apps/v1
+ kind: Deployment
+ name: nginx
+ placement:
+ clusterAffinity:
+ clusterNames:
+ - cluster1
+ - cluster2
+```
+
+The `PropagationPolicy` defines which resources (`resourceSelectors`) should be propagated to
+where (`placement`).
+See [Resource Propagating](https://karmada.io/docs/userguide/scheduling/resource-propagating) for more details.
+
+For the `override` part, Karmada provides `OverridePolicy` API to hold the rules for differentiation:
+
+```yaml
+apiVersion: policy.karmada.io/v1alpha1
+kind: OverridePolicy
+metadata:
+ name: example-override
+ namespace: default
+spec:
+ resourceSelectors:
+ - apiVersion: apps/v1
+ kind: Deployment
+ name: nginx
+ overrideRules:
+ - targetCluster:
+ clusterNames:
+ - cluster2
+ overriders:
+ plaintext:
+ - path: "/spec/replicas"
+ operator: replace
+ value: 5
+ - path: "/metadata/annotations"
+ operator: add
+ value:
+ foo: bar
+ - path: "/metadata/annotations/foo"
+ operator: remove
+ imageOverrider:
+ - component: Tag
+ operator: replace
+ value: 1.17.0-alpine
+```
+
+The `OverridePolicy` defines which resources (`resourceSelectors`) should be overridden when
+propagated to which clusters (`targetCluster`).
+
+Compared with Kubefed, Karmada offers more alternatives for declaring the override rules; see
+[Overriders](https://karmada.io/docs/userguide/scheduling/override-policy#overriders) for more details.
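+
+To try this end to end, you could apply the three manifests above to the Karmada control plane. A minimal sketch, assuming a kubeconfig context named `karmada-apiserver` and file names of your own choosing:
+
+```
+kubectl --context karmada-apiserver apply -f deployment.yaml
+kubectl --context karmada-apiserver apply -f propagationpolicy.yaml
+kubectl --context karmada-apiserver apply -f overridepolicy.yaml
+```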
+
+## FAQ
+
+### Will Karmada provide tools to smooth the migration?
+
+We don't have such a plan yet. After reaching out to some Kubefed users, we found that they usually run a
+forked version rather than vanilla Kubefed, having extended it heavily to meet their requirements. So it might
+be pretty hard to maintain a common tool that satisfies most users.
+
+We also look forward to more feedback. Please feel free to reach out to us; we are glad to help you finish
+the migration.
\ No newline at end of file
diff --git a/versioned_docs/version-v1.9/administrator/migration/promote-legacy-workload.md b/versioned_docs/version-v1.9/administrator/migration/promote-legacy-workload.md
new file mode 100644
index 000000000..1c85299b6
--- /dev/null
+++ b/versioned_docs/version-v1.9/administrator/migration/promote-legacy-workload.md
@@ -0,0 +1,60 @@
+---
+title: Promote legacy workload
+---
+
+Assume that there is a member cluster where a workload (like a Deployment) is deployed but not managed by Karmada. You can use the `karmadactl promote` command to let Karmada take over this workload directly, without restarting its pods.
+
+## Example
+
+### For member cluster in `Push` mode
+There is an `nginx` Deployment that belongs to namespace `default` in member cluster `cluster1`.
+
+```
+[root@master1]# kubectl get cluster
+NAME       VERSION   MODE   READY   AGE
+cluster1   v1.22.3   Push   True    24d
+```
+
+```
+[root@cluster1]# kubectl get deploy nginx
+NAME    READY   UP-TO-DATE   AVAILABLE   AGE
+nginx   1/1     1            1           66s
+
+[root@cluster1]# kubectl get pod
+NAME                     READY   STATUS    RESTARTS   AGE
+nginx-6799fc88d8-sqjj4   1/1     Running   0          2m12s
+```
+
+We can promote it to Karmada by executing the command below on the Karmada control plane.
+
+```
+[root@master1]# karmadactl promote deployment nginx -n default -c cluster1
+Resource "apps/v1, Resource=deployments"(default/nginx) is promoted successfully
+```
+
+The nginx deployment has been adopted by Karmada.
+
+```
+[root@master1]# kubectl get deploy
+NAME    READY   UP-TO-DATE   AVAILABLE   AGE
+nginx   1/1     1            1           7m25s
+```
+
+And the pod created by the nginx deployment in the member cluster wasn't restarted.
+
+```
+[root@cluster1]# kubectl get pod
+NAME                     READY   STATUS    RESTARTS   AGE
+nginx-6799fc88d8-sqjj4   1/1     Running   0          15m
+```
+
+### For member cluster in `Pull` mode
+Most steps are the same as those for clusters in `Push` mode. Only the flags of the `karmadactl promote` command differ.
+
+```
+karmadactl promote deployment nginx -n default -c cluster1 --cluster-kubeconfig=
+```
+
+For more flags and examples of the command, run `karmadactl promote --help`.
+
+> Note: As resource API versions in Kubernetes keep evolving, the apiserver of the Karmada control plane could differ from that of member clusters. To avoid compatibility issues, you can specify the full GVK of a resource, for example by replacing `deployment` with `deployment.v1.apps`.
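+
+For example, a fully qualified form of the command above might look like this (sketch):
+
+```
+karmadactl promote deployment.v1.apps nginx -n default -c cluster1
+```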
\ No newline at end of file
diff --git a/versioned_docs/version-v1.9/administrator/monitoring/working-with-filebeat.md b/versioned_docs/version-v1.9/administrator/monitoring/working-with-filebeat.md
new file mode 100644
index 000000000..135b24777
--- /dev/null
+++ b/versioned_docs/version-v1.9/administrator/monitoring/working-with-filebeat.md
@@ -0,0 +1,242 @@
+---
+title: Use Filebeat to collect logs of Karmada member clusters
+---
+
+[Filebeat](https://github.com/elastic/beats/tree/master/filebeat) is a lightweight shipper for forwarding and centralizing log data. Installed as an agent on your servers, Filebeat monitors the log files or locations that you specify, collects log events, and forwards them either to [Elasticsearch](https://www.elastic.co/products/elasticsearch) or [Kafka](https://github.com/apache/kafka) for indexing.
+
+This document demonstrates how to use `Filebeat` to collect logs of Karmada member clusters.
+
+## Start up Karmada clusters
+
+You just need to clone the Karmada repo and run the following script in the Karmada directory.
+
+```bash
+hack/local-up-karmada.sh
+```
+
+## Start Filebeat
+
+1. Create the Filebeat resource objects as follows. You can specify a list of inputs in the `filebeat.inputs` section of `filebeat.yml`; inputs specify how Filebeat locates and processes input data. You can also configure Filebeat to write to a specific output by setting options in the `Outputs` section of the `filebeat.yml` config file. The example collects the log information of each container and writes the collected logs to a file. For more detailed information about the input and output configuration, please refer to: https://github.com/elastic/beats/tree/master/filebeat/docs
+
+ ```yaml
+ apiVersion: v1
+ kind: Namespace
+ metadata:
+ name: logging
+ ---
+ apiVersion: v1
+ kind: ServiceAccount
+ metadata:
+ name: filebeat
+ namespace: logging
+ labels:
+ k8s-app: filebeat
+ ---
+ apiVersion: rbac.authorization.k8s.io/v1
+ kind: ClusterRole
+ metadata:
+ name: filebeat
+ rules:
+ - apiGroups: [""] # "" indicates the core API group
+ resources:
+ - namespaces
+ - pods
+ verbs:
+ - get
+ - watch
+ - list
+ ---
+ apiVersion: rbac.authorization.k8s.io/v1
+ kind: ClusterRoleBinding
+ metadata:
+ name: filebeat
+ subjects:
+ - kind: ServiceAccount
+ name: filebeat
+ namespace: kube-system
+ roleRef:
+ kind: ClusterRole
+ name: filebeat
+ apiGroup: rbac.authorization.k8s.io
+ ---
+ apiVersion: v1
+ kind: ConfigMap
+ metadata:
+ name: filebeat-config
+ namespace: logging
+ labels:
+ k8s-app: filebeat
+ kubernetes.io/cluster-service: "true"
+ data:
+ filebeat.yml: |-
+ filebeat.inputs:
+ - type: container
+ paths:
+ - /var/log/containers/*.log
+ processors:
+ - add_kubernetes_metadata:
+ host: ${NODE_NAME}
+ matchers:
+ - logs_path:
+ logs_path: "/var/log/containers/"
+ # To enable hints based autodiscover, remove `filebeat.inputs` configuration and uncomment this:
+ #filebeat.autodiscover:
+ # providers:
+ # - type: kubernetes
+ # node: ${NODE_NAME}
+ # hints.enabled: true
+ # hints.default_config:
+ # type: container
+ # paths:
+ # - /var/log/containers/*${data.kubernetes.container.id}.log
+
+ processors:
+ - add_cloud_metadata:
+ - add_host_metadata:
+
+ #output.elasticsearch:
+ # hosts: ['${ELASTICSEARCH_HOST:elasticsearch}:${ELASTICSEARCH_PORT:9200}']
+ # username: ${ELASTICSEARCH_USERNAME}
+ # password: ${ELASTICSEARCH_PASSWORD}
+ output.file:
+ path: "/tmp/filebeat"
+ filename: filebeat
+ ---
+ apiVersion: apps/v1
+ kind: DaemonSet
+ metadata:
+ name: filebeat
+ namespace: logging
+ labels:
+ k8s-app: filebeat
+ spec:
+ selector:
+ matchLabels:
+ k8s-app: filebeat
+ template:
+ metadata:
+ labels:
+ k8s-app: filebeat
+ spec:
+ serviceAccountName: filebeat
+ terminationGracePeriodSeconds: 30
+ tolerations:
+ - effect: NoSchedule
+ key: node-role.kubernetes.io/master
+ containers:
+ - name: filebeat
+ image: docker.elastic.co/beats/filebeat:8.0.0-beta1-amd64
+ imagePullPolicy: IfNotPresent
+ args: [ "-c", "/usr/share/filebeat/filebeat.yml", "-e",]
+ env:
+ - name: NODE_NAME
+ valueFrom:
+ fieldRef:
+ fieldPath: spec.nodeName
+ securityContext:
+ runAsUser: 0
+ resources:
+ limits:
+ memory: 200Mi
+ requests:
+ cpu: 100m
+ memory: 100Mi
+ volumeMounts:
+ - name: config
+ mountPath: /usr/share/filebeat/filebeat.yml
+ readOnly: true
+ subPath: filebeat.yml
+ - name: inputs
+ mountPath: /usr/share/filebeat/inputs.d
+ readOnly: true
+ - name: data
+ mountPath: /usr/share/filebeat/data
+ - name: varlibdockercontainers
+ mountPath: /var/lib/docker/containers
+ readOnly: true
+ - name: varlog
+ mountPath: /var/log
+ readOnly: true
+ volumes:
+ - name: config
+ configMap:
+ defaultMode: 0600
+ name: filebeat-config
+ - name: varlibdockercontainers
+ hostPath:
+ path: /var/lib/docker/containers
+ - name: varlog
+ hostPath:
+ path: /var/log
+ - name: inputs
+ configMap:
+ defaultMode: 0600
+ name: filebeat-config
+ # data folder stores a registry of read status for all files, so we don't send everything again on a Filebeat pod restart
+ - name: data
+ hostPath:
+ path: /var/lib/filebeat-data
+ type: DirectoryOrCreate
+ ```
+
+2. Run the below command to execute Karmada PropagationPolicy and ClusterPropagationPolicy.
+
+ ```
+ cat <` (2 places) with the token got from step 3
+
+ ```yaml
+ apiVersion: v1
+ kind: ConfigMap
+ metadata:
+ name: prometheus-config
+ namespace: monitor
+ data:
+ prometheus.yml: |-
+ global:
+ scrape_interval: 15s
+ evaluation_interval: 15s
+ scrape_configs:
+ - job_name: 'karmada-scheduler'
+ kubernetes_sd_configs:
+ - role: pod
+ scheme: http
+ tls_config:
+ insecure_skip_verify: true
+ relabel_configs:
+ - source_labels: [__meta_kubernetes_namespace, __meta_kubernetes_pod_label_app]
+ action: keep
+ regex: karmada-system;karmada-scheduler
+ - target_label: __address__
+ source_labels: [__address__]
+ regex: '(.*)'
+ replacement: '${1}:10351'
+ action: replace
+ - job_name: 'karmada-controller-manager'
+ kubernetes_sd_configs:
+ - role: pod
+ scheme: http
+ tls_config:
+ insecure_skip_verify: true
+ relabel_configs:
+ - source_labels: [__meta_kubernetes_namespace, __meta_kubernetes_pod_label_app]
+ action: keep
+ regex: karmada-system;karmada-controller-manager
+ - target_label: __address__
+ source_labels: [__address__]
+ regex: '(.*)'
+ replacement: '${1}:8080'
+ action: replace
+ - job_name: 'kubernetes-apiserver'
+ kubernetes_sd_configs:
+ - role: endpoints
+ scheme: https
+ tls_config:
+ ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
+ bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
+ relabel_configs:
+ - source_labels: [__meta_kubernetes_namespace, __meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
+ action: keep
+ regex: default;kubernetes;https
+ - target_label: __address__
+ replacement: kubernetes.default.svc:443
+ - job_name: 'karmada-apiserver'
+ kubernetes_sd_configs:
+ - role: endpoints
+ scheme: https
+ tls_config:
+ insecure_skip_verify: true
+ bearer_token: # need the true karmada token
+ relabel_configs:
+ - source_labels: [__meta_kubernetes_pod_label_app]
+ action: keep
+ regex: karmada-apiserver
+ - target_label: __address__
+ replacement: karmada-apiserver.karmada-system.svc:5443
+ - job_name: 'karmada-aggregated-apiserver'
+ kubernetes_sd_configs:
+ - role: endpoints
+ scheme: https
+ tls_config:
+ insecure_skip_verify: true
+ bearer_token: # need the true karmada token
+ relabel_configs:
+ - source_labels: [__meta_kubernetes_namespace, __meta_kubernetes_service_name, __meta_kubernetes_endpoints_name]
+ action: keep
+ regex: karmada-system;karmada-aggregated-apiserver;karmada-aggregated-apiserver
+ - target_label: __address__
+ replacement: karmada-aggregated-apiserver.karmada-system.svc:443
+ - job_name: 'kubernetes-cadvisor'
+ scheme: https
+ tls_config:
+ ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
+ bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
+ kubernetes_sd_configs:
+ - role: node
+ relabel_configs:
+ - target_label: __address__
+ replacement: kubernetes.default.svc:443
+ - source_labels: [__meta_kubernetes_node_name]
+ regex: (.+)
+ target_label: __metrics_path__
+ replacement: /api/v1/nodes/${1}/proxy/metrics/cadvisor
+ - action: labelmap
+ regex: __meta_kubernetes_node_label_(.+)
+ metric_relabel_configs:
+ - action: replace
+ source_labels: [id]
+ regex: '^/machine\.slice/machine-rkt\\x2d([^\\]+)\\.+/([^/]+)\.service$'
+ target_label: rkt_container_name
+ replacement: '${2}-${1}'
+ - action: replace
+ source_labels: [id]
+ regex: '^/system\.slice/(.+)\.service$'
+ target_label: systemd_service_name
+ replacement: '${1}'
+ - source_labels: [pod]
+ separator: ;
+ regex: (.+)
+ target_label: pod_name
+ replacement: $1
+ action: replace
+ ---
+ apiVersion: v1
+ kind: "Service"
+ metadata:
+ name: prometheus
+ namespace: monitor
+ labels:
+ name: prometheus
+ spec:
+ ports:
+ - name: prometheus
+ protocol: TCP
+ port: 9090
+ targetPort: 9090
+ nodePort: 31801
+ selector:
+ app: prometheus
+ type: NodePort
+ ---
+ apiVersion: apps/v1
+ kind: Deployment
+ metadata:
+ labels:
+ name: prometheus
+ name: prometheus
+ namespace: monitor
+ spec:
+ replicas: 1
+ selector:
+ matchLabels:
+ app: prometheus
+ template:
+ metadata:
+ labels:
+ app: prometheus
+ spec:
+ serviceAccountName: prometheus
+ containers:
+ - name: prometheus
+ image: prom/prometheus:latest
+ command:
+ - "/bin/prometheus"
+ args:
+ - "--config.file=/etc/prometheus/prometheus.yml"
+ - "--storage.tsdb.path=/prom-data"
+ - "--storage.tsdb.retention.time=180d"
+ ports:
+ - containerPort: 9090
+ protocol: TCP
+ volumeMounts:
+ - mountPath: "/etc/prometheus"
+ name: prometheus-config
+ - mountPath: "/prom-data"
+ name: prom-data
+ initContainers:
+ - name: prometheus-data-permission-fix
+ image: busybox
+ command: ["/bin/chmod","-R","777", "/data"]
+ volumeMounts:
+ - name: prom-data
+ mountPath: /data
+ volumes:
+ - name: prometheus-config
+ configMap:
+ name: prometheus-config
+ - name: prom-data
+ hostPath:
+ path: /var/lib/prom-data
+ type: DirectoryOrCreate
+
+ ```
+
+5. Use any node IP of the control plane and the port number (default 31801) to open the Prometheus monitoring page of the control plane, as sketched below.
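+
+A minimal sketch of that last step, assuming you browse from a machine that can reach the host cluster nodes:
+
+```
+# find a node IP of the host cluster, then open the Prometheus UI on the NodePort defined in the Service above
+kubectl get nodes -o wide
+# browse to http://<node-ip>:31801
+```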
+
+
+## Visualizing metrics using Grafana
+
+For a better experience with visual metrics, we can also use Grafana with Prometheus, as well as [Dashboards](https://grafana.com/grafana/dashboards/) provided by the community.
+
+1. Install Grafana with Helm
+
+ ```shell
+ helm repo add grafana https://grafana.github.io/helm-charts
+ helm repo update
+
+ cat < For the argument changes please refer to `Details Upgrading Instruction` below.
+
+## Details Upgrading Instruction
+
+The following instructions are for minor version upgrades; cross-version upgrades are not recommended.
+It is also recommended to use the latest patch version when upgrading. For example, if you are upgrading from
+v1.1.x to v1.2.x and the available patch versions are v1.2.0, v1.2.1 and v1.2.2, then select v1.2.2.
+
+### [v0.8 to v0.9](./v0.8-v0.9.md)
+### [v0.9 to v0.10](./v0.9-v0.10.md)
+### [v0.10 to v1.0](./v0.10-v1.0.md)
+### [v1.0 to v1.1](./v1.0-v1.1.md)
+### [v1.1 to v1.2](./v1.1-v1.2.md)
+### [v1.2 to v1.3](./v1.2-v1.3.md)
+### [v1.3 to v1.4](./v1.3-v1.4.md)
+### [v1.4 to v1.5](./v1.4-v1.5.md)
+### [v1.5 to v1.6](./v1.5-v1.6.md)
+### [v1.6 to v1.7](./v1.6-v1.7.md)
+### [v1.7 to v1.8](./v1.7-v1.8.md)
+
diff --git a/versioned_docs/version-v1.9/administrator/upgrading/v0.10-v1.0.md b/versioned_docs/version-v1.9/administrator/upgrading/v0.10-v1.0.md
new file mode 100644
index 000000000..6e4c6b757
--- /dev/null
+++ b/versioned_docs/version-v1.9/administrator/upgrading/v0.10-v1.0.md
@@ -0,0 +1,226 @@
+---
+title: v0.10 to v1.0
+---
+
+Follow the [Regular Upgrading Process](./README.md).
+
+## Upgrading Notable Changes
+
+### Introduced `karmada-aggregated-apiserver` component
+
+In releases before v1.0.0, we used a CRD to extend the
+[Cluster API](https://github.com/karmada-io/karmada/tree/24f586062e0cd7c9d8e6911e52ce399106f489aa/pkg/apis/cluster),
+and starting from v1.0.0 we use
+[API Aggregation](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation/) (AA) to
+extend it.
+
+Based on the above change, perform the following operations during the upgrade:
+
+#### Step 1: Stop `karmada-apiserver`
+
+You can stop `karmada-apiserver` by scaling its replicas to `0`.
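+
+For instance, assuming the default installation where `karmada-apiserver` runs as a Deployment in the `karmada-system` namespace of the host cluster:
+
+```
+kubectl --context <host-cluster> -n karmada-system scale deployment karmada-apiserver --replicas=0
+```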
+
+#### Step 2: Remove Cluster CRD from ETCD
+
+Remove the `Cluster CRD` from ETCD directly by running the following command.
+
+```
+etcdctl --cert="/etc/kubernetes/pki/etcd/karmada.crt" \
+--key="/etc/kubernetes/pki/etcd/karmada.key" \
+--cacert="/etc/kubernetes/pki/etcd/server-ca.crt" \
+del /registry/apiextensions.k8s.io/customresourcedefinitions/clusters.cluster.karmada.io
+```
+
+> Note: This command only removes the `CRD` resource; all the `CR`s (Cluster objects) are left unchanged.
+> That's why we don't remove the CRD through `karmada-apiserver`.
+
+#### Step 3: Prepare the certificate for the `karmada-aggregated-apiserver`
+
+To avoid [CA Reusage and Conflicts](https://kubernetes.io/docs/tasks/extend-kubernetes/configure-aggregation-layer/#ca-reusage-and-conflicts),
+create a CA signer and sign a certificate to enable the aggregation layer.
+
+Update `karmada-cert-secret` secret in `karmada-system` namespace:
+
+```diff
+apiVersion: v1
+kind: Secret
+metadata:
+ name: karmada-cert-secret
+ namespace: karmada-system
+type: Opaque
+data:
+ ...
++ front-proxy-ca.crt: |
++ {{front_proxy_ca_crt}}
++ front-proxy-client.crt: |
++ {{front_proxy_client_crt}}
++ front-proxy-client.key: |
++ {{front_proxy_client_key}}
+```
+
+Then update `karmada-apiserver` deployment's container command:
+
+```diff
+- - --proxy-client-cert-file=/etc/kubernetes/pki/karmada.crt
+- - --proxy-client-key-file=/etc/kubernetes/pki/karmada.key
++ - --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt
++ - --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key
+- - --requestheader-client-ca-file=/etc/kubernetes/pki/server-ca.crt
++ - --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
+```
+
+After the update, restore the replicas of `karmada-apiserver` instances.
+
+#### Step 4: Deploy `karmada-aggregated-apiserver`:
+
+Deploy a `karmada-aggregated-apiserver` instance to your `host cluster` with the following manifests:
+
+
+```yaml
+---
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+ name: karmada-aggregated-apiserver
+ namespace: karmada-system
+ labels:
+ app: karmada-aggregated-apiserver
+ apiserver: "true"
+spec:
+ selector:
+ matchLabels:
+ app: karmada-aggregated-apiserver
+ apiserver: "true"
+ replicas: 1
+ template:
+ metadata:
+ labels:
+ app: karmada-aggregated-apiserver
+ apiserver: "true"
+ spec:
+ automountServiceAccountToken: false
+ containers:
+ - name: karmada-aggregated-apiserver
+ image: swr.ap-southeast-1.myhuaweicloud.com/karmada/karmada-aggregated-apiserver:v1.0.0
+ imagePullPolicy: IfNotPresent
+ volumeMounts:
+ - name: k8s-certs
+ mountPath: /etc/kubernetes/pki
+ readOnly: true
+ - name: kubeconfig
+ subPath: kubeconfig
+ mountPath: /etc/kubeconfig
+ command:
+ - /bin/karmada-aggregated-apiserver
+ - --kubeconfig=/etc/kubeconfig
+ - --authentication-kubeconfig=/etc/kubeconfig
+ - --authorization-kubeconfig=/etc/kubeconfig
+ - --karmada-config=/etc/kubeconfig
+ - --etcd-servers=https://etcd-client.karmada-system.svc.cluster.local:2379
+ - --etcd-cafile=/etc/kubernetes/pki/server-ca.crt
+ - --etcd-certfile=/etc/kubernetes/pki/karmada.crt
+ - --etcd-keyfile=/etc/kubernetes/pki/karmada.key
+ - --tls-cert-file=/etc/kubernetes/pki/karmada.crt
+ - --tls-private-key-file=/etc/kubernetes/pki/karmada.key
+ - --audit-log-path=-
+ - --feature-gates=APIPriorityAndFairness=false
+ - --audit-log-maxage=0
+ - --audit-log-maxbackup=0
+ resources:
+ requests:
+ cpu: 100m
+ volumes:
+ - name: k8s-certs
+ secret:
+ secretName: karmada-cert-secret
+ - name: kubeconfig
+ secret:
+ secretName: kubeconfig
+---
+apiVersion: v1
+kind: Service
+metadata:
+ name: karmada-aggregated-apiserver
+ namespace: karmada-system
+ labels:
+ app: karmada-aggregated-apiserver
+ apiserver: "true"
+spec:
+ ports:
+ - port: 443
+ protocol: TCP
+ targetPort: 443
+ selector:
+ app: karmada-aggregated-apiserver
+```
+
+
+Then, deploy the `APIService` to `karmada-apiserver` with the following manifests.
+
+
+
+```yaml
+apiVersion: apiregistration.k8s.io/v1
+kind: APIService
+metadata:
+ name: v1alpha1.cluster.karmada.io
+ labels:
+ app: karmada-aggregated-apiserver
+ apiserver: "true"
+spec:
+ insecureSkipTLSVerify: true
+ group: cluster.karmada.io
+ groupPriorityMinimum: 2000
+ service:
+ name: karmada-aggregated-apiserver
+ namespace: karmada-system
+ version: v1alpha1
+ versionPriority: 10
+---
+apiVersion: v1
+kind: Service
+metadata:
+ name: karmada-aggregated-apiserver
+ namespace: karmada-system
+spec:
+ type: ExternalName
+ externalName: karmada-aggregated-apiserver.karmada-system.svc.cluster.local
+```
+
+
+
+#### Step 5: Check cluster status
+
+If everything goes well, you can see all your clusters just as they were before the upgrade.
+```
+kubectl get clusters
+```
+
+### `karmada-agent` requires an extra `impersonate` verb
+
+In order to proxy users' requests, `karmada-agent` now requires an extra `impersonate` verb.
+Please check the `ClusterRole` configuration or apply the following manifest.
+
+```yaml
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRole
+metadata:
+ name: karmada-agent
+rules:
+ - apiGroups: ['*']
+ resources: ['*']
+ verbs: ['*']
+ - nonResourceURLs: ['*']
+ verbs: ["get"]
+
+```
+
+### MCS feature now supports `Kubernetes v1.21+`
+
+Since `discovery.k8s.io/v1beta1` of `EndpointSlices` has been deprecated in favor of `discovery.k8s.io/v1` in
+[Kubernetes v1.21](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.21.md), Karmada adopted
+this change in release v1.0.0.
+Now the [MCS](../../userguide/service/multi-cluster-service.md) feature requires
+the member cluster version to be no less than v1.21.
diff --git a/versioned_docs/version-v1.9/administrator/upgrading/v0.8-v0.9.md b/versioned_docs/version-v1.9/administrator/upgrading/v0.8-v0.9.md
new file mode 100644
index 000000000..826b1b37c
--- /dev/null
+++ b/versioned_docs/version-v1.9/administrator/upgrading/v0.8-v0.9.md
@@ -0,0 +1,8 @@
+---
+title: v0.8 to v0.9
+---
+
+Nothing special other than the [Regular Upgrading Process](./README.md).
+
+## Upgrading Notable Changes
+Please refer to [v0.9.0 Release Notes](https://github.com/karmada-io/karmada/releases/tag/v0.9.0) for more details.
diff --git a/versioned_docs/version-v1.9/administrator/upgrading/v0.9-v0.10.md b/versioned_docs/version-v1.9/administrator/upgrading/v0.9-v0.10.md
new file mode 100644
index 000000000..8830caa3d
--- /dev/null
+++ b/versioned_docs/version-v1.9/administrator/upgrading/v0.9-v0.10.md
@@ -0,0 +1,14 @@
+---
+title: v0.9 to v0.10
+---
+
+Follow the [Regular Upgrading Process](./README.md).
+
+## Upgrading Notable Changes
+
+### karmada-scheduler
+
+The `--failover` flag has been removed and replaced by `--feature-gates`.
+If you enabled the failover feature via `--failover`, it should now be changed to `--feature-gates=Failover=true`.
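+
+A sketch of the corresponding change to the `karmada-scheduler` Deployment args (your manifest layout may differ):
+
+```diff
+       command:
+         - /bin/karmada-scheduler
+-        - --failover=true
++        - --feature-gates=Failover=true
+```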
+
+Please refer to [v0.10.0 Release Notes](https://github.com/karmada-io/karmada/releases/tag/v0.10.0) for more details.
diff --git a/versioned_docs/version-v1.9/administrator/upgrading/v1.0-v1.1.md b/versioned_docs/version-v1.9/administrator/upgrading/v1.0-v1.1.md
new file mode 100644
index 000000000..fff3fbe79
--- /dev/null
+++ b/versioned_docs/version-v1.9/administrator/upgrading/v1.0-v1.1.md
@@ -0,0 +1,45 @@
+---
+title: v1.0 to v1.1
+---
+
+Follow the [Regular Upgrading Process](./README.md).
+
+## Upgrading Notable Changes
+
+The validation process for `Cluster` objects has now been moved from `karmada-webhook` to `karmada-aggregated-apiserver`
+by [PR 1152](https://github.com/karmada-io/karmada/pull/1152). You have to remove the `Cluster` webhook configuration
+from `ValidatingWebhookConfiguration`, such as:
+```diff
+diff --git a/artifacts/deploy/webhook-configuration.yaml b/artifacts/deploy/webhook-configuration.yaml
+index 0a89ad36..f7a9f512 100644
+--- a/artifacts/deploy/webhook-configuration.yaml
++++ b/artifacts/deploy/webhook-configuration.yaml
+@@ -69,20 +69,6 @@ metadata:
+ labels:
+ app: validating-config
+ webhooks:
+- - name: cluster.karmada.io
+- rules:
+- - operations: ["CREATE", "UPDATE"]
+- apiGroups: ["cluster.karmada.io"]
+- apiVersions: ["*"]
+- resources: ["clusters"]
+- scope: "Cluster"
+- clientConfig:
+- url: https://karmada-webhook.karmada-system.svc:443/validate-cluster
+- caBundle: {{caBundle}}
+- failurePolicy: Fail
+- sideEffects: None
+- admissionReviewVersions: ["v1"]
+- timeoutSeconds: 3
+ - name: propagationpolicy.karmada.io
+ rules:
+ - operations: ["CREATE", "UPDATE"]
+```
+
+Otherwise, when joining clusters (or updating Cluster objects), the request will be rejected with the following error:
+```
+Error: failed to create cluster(host) object. error: Internal error occurred: failed calling webhook "cluster.karmada.io": the server could not find the requested resource
+```
+
+Please refer to [v1.1.0 Release Notes](https://github.com/karmada-io/karmada/releases/tag/v1.1.0) for more details.
diff --git a/versioned_docs/version-v1.9/administrator/upgrading/v1.1-v1.2.md b/versioned_docs/version-v1.9/administrator/upgrading/v1.1-v1.2.md
new file mode 100644
index 000000000..575a329e3
--- /dev/null
+++ b/versioned_docs/version-v1.9/administrator/upgrading/v1.1-v1.2.md
@@ -0,0 +1,55 @@
+---
+title: v1.1 to v1.2
+---
+
+Follow the [Regular Upgrading Process](./README.md).
+
+## Upgrading Notable Changes
+
+### karmada-controller-manager
+
+The `hpa` controller is now disabled by default. If you are using this controller, please enable it as per [Configure Karmada controllers](../configuration/configure-controllers.md#configure-karmada-controllers).
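+
+A minimal sketch of enabling it via the `--controllers` flag of `karmada-controller-manager` (`*` keeps all default controllers; verify the exact flag usage against the linked doc):
+
+```
+# add to the karmada-controller-manager container args
+- --controllers=*,hpa
+```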
+
+### karmada-aggregated-apiserver
+
+The flags `--karmada-config` and `--master`, deprecated in v1.1, have been removed from the codebase.
+Please remember to remove them from the `karmada-aggregated-apiserver` deployment YAML.
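+
+A sketch of the removal in the Deployment's container command (`<master-url>` is a placeholder; keep whatever other flags your manifest already carries):
+
+```diff
+       command:
+         - /bin/karmada-aggregated-apiserver
+         - --kubeconfig=/etc/kubeconfig
+-        - --karmada-config=/etc/kubeconfig
+-        - --master=<master-url>
+```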
+
+### karmadactl
+
+The `karmadactl promote` command now supports AA. For detailed info, please refer to [PR 1795](https://github.com/karmada-io/karmada/pull/1795).
+
+In order to use AA by default, you need to deploy some RBAC rules with the following manifests.
+
+
+
+```yaml
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRole
+metadata:
+ name: cluster-proxy-admin
+rules:
+- apiGroups:
+ - 'cluster.karmada.io'
+ resources:
+ - clusters/proxy
+ verbs:
+ - '*'
+---
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRoleBinding
+metadata:
+ name: cluster-proxy-admin
+roleRef:
+ apiGroup: rbac.authorization.k8s.io
+ kind: ClusterRole
+ name: cluster-proxy-admin
+subjects:
+ - kind: User
+ name: "system:admin"
+```
+
+
+
+Please refer to [v1.2.0 Release Notes](https://github.com/karmada-io/karmada/releases/tag/v1.2.0) for more details.
diff --git a/versioned_docs/version-v1.9/administrator/upgrading/v1.2-v1.3.md b/versioned_docs/version-v1.9/administrator/upgrading/v1.2-v1.3.md
new file mode 100644
index 000000000..8e3ea4463
--- /dev/null
+++ b/versioned_docs/version-v1.9/administrator/upgrading/v1.2-v1.3.md
@@ -0,0 +1,9 @@
+---
+title: v1.2 to v1.3
+---
+
+Follow the [Regular Upgrading Process](./README.md).
+
+## Upgrading Notable Changes
+
+Please refer to [v1.3.0 Release Notes](https://github.com/karmada-io/karmada/releases/tag/v1.3.0) for more details.
diff --git a/versioned_docs/version-v1.9/administrator/upgrading/v1.3-v1.4.md b/versioned_docs/version-v1.9/administrator/upgrading/v1.3-v1.4.md
new file mode 100644
index 000000000..12ccce83f
--- /dev/null
+++ b/versioned_docs/version-v1.9/administrator/upgrading/v1.3-v1.4.md
@@ -0,0 +1,15 @@
+---
+title: v1.3 to v1.4
+---
+
+Follow the [Regular Upgrading Process](./README.md).
+
+## Upgrading Notable Changes
+
+Some feature gates have been promoted:
+- The `PropagateDeps` feature gate has been promoted to Beta and is enabled by default.
+- The `Failover` feature gate has been promoted to Beta and is enabled by default.
+- The `GracefulEviction` feature gate has been promoted to Beta and is enabled by default.
+- The `CustomizedClusterResourceModeling` feature gate has been promoted to Beta and is enabled by default.
+
+Please refer to [v1.4.0 Release Notes](https://github.com/karmada-io/karmada/releases/tag/v1.4.0) for more details.
diff --git a/versioned_docs/version-v1.9/administrator/upgrading/v1.4-v1.5.md b/versioned_docs/version-v1.9/administrator/upgrading/v1.4-v1.5.md
new file mode 100644
index 000000000..ba5daa95e
--- /dev/null
+++ b/versioned_docs/version-v1.9/administrator/upgrading/v1.4-v1.5.md
@@ -0,0 +1,17 @@
+---
+title: v1.4 to v1.5
+---
+
+Follow the [Regular Upgrading Process](./README.md).
+
+## Upgrading Notable Changes
+
+### karmada-controller-manager
+* Now the `OverridePolicy` and `ClusterOverridePolicy` will be applied by implicit priority order. The one with lower priority will be applied before the one with higher priority.
+* Retain the labels added to resources by member clusters.
+
+### karmadactl
+* The `--cluster-context` flag of `join` command now takes `current-context` by default.
+
+
+Please refer to [v1.5.0 Release Notes](https://github.com/karmada-io/karmada/releases/tag/v1.5.0) for more details.
diff --git a/versioned_docs/version-v1.9/administrator/upgrading/v1.5-v1.6.md b/versioned_docs/version-v1.9/administrator/upgrading/v1.5-v1.6.md
new file mode 100644
index 000000000..5ea27dfe4
--- /dev/null
+++ b/versioned_docs/version-v1.9/administrator/upgrading/v1.5-v1.6.md
@@ -0,0 +1,32 @@
+---
+title: v1.5 to v1.6
+---
+
+Follow the [Regular Upgrading Process](./README.md).
+
+## Upgrading Notable Changes
+
+### API Changes
+
+* The length of `AffinityName` in PropagationPolicy now is restricted to [1, 32], and must be a qualified name.
+* Introduced short name `wk` for resource `Work`.
+
+### karmadactl
+
+* Introduced `--purge-namespace` flag for `deinit` command to skip namespace deletion during uninstallation.
+* Introduced `--auto-create-policy` and `--policy-name` flags for `promote` command to customize the policy during the promotion.
+
+### karmada-aggregated-apiserver
+
+* Increased `.metadata.generation` once the desired state of the Cluster object is changed.
+
+### karmada-controller-manager
+
+* Allowed setting wildcards for the `--skippedPropagatingNamespaces` flag.
+* The `--skipped-propagating-namespaces` flag can now take regular expressions to represent namespaces and defaults to `kube-*`.
+
+### karmada-scheduler
+
+* Introduced `clusterEviction` plugin to skip the clusters that are in the process of eviction.
+
+Please refer to [v1.6.0 Release Notes](https://github.com/karmada-io/karmada/releases/tag/v1.6.0) for more details.
diff --git a/versioned_docs/version-v1.9/administrator/upgrading/v1.6-v1.7.md b/versioned_docs/version-v1.9/administrator/upgrading/v1.6-v1.7.md
new file mode 100644
index 000000000..b4176042d
--- /dev/null
+++ b/versioned_docs/version-v1.9/administrator/upgrading/v1.6-v1.7.md
@@ -0,0 +1,38 @@
+---
+title: v1.6 to v1.7
+---
+
+Follow the [Regular Upgrading Process](./README.md).
+
+## Upgrading Notable Changes
+
+### API Changes
+
+* Introduced more print columns for `FederatedHPA`, including reference, minpods, maxpods and replicas.
+* Introduced the new API `CronFederatedHPA` to scale workloads at specific times.
+* Introduced `Preemption` to both `PropagationPolicy` and `ClusterPropagationPolicy` to declare the behaviors of preemption.
+* Introduced `ConflictResolution` to both `PropagationPolicy` and `ClusterPropagationPolicy` to declare how potential conflict should be handled.
+* Introduced a new field `Zones` for `Cluster` to represent multiple zones of a member cluster; the old field `zone` is deprecated.
+
+### karmadactl
+
+* Introduced `--wait-component-ready-timeout` flag to specify the component installation timeout.
+* Introduced `top` command.
+
+### karmada-controller-manager
+
+* You are advised to enable the `clusterrole-aggregation` controller to grant ClusterRole/admin Karmada resource permissions.
+* Introduced a new feature-gate `--feature-gates=PropagationPolicyPreemption=true` to enable policy preemption by priority.
+* Introduced `--cluster-cache-sync-timeout` flag to specify the sync timeout of the control plane cache in addition to the member cluster's cache.
+* Introduced a LabelSelector field to DependentObjectReference.
+
+### karmada-scheduler
+
+* Introduced new scheduling condition reasons: NoClusterFit, SchedulerError, Unschedulable, Success.
+
+### karmada-metrics-adapter
+
+* Introduced `karmada-metrics-adapter` as an addon so that FederatedHPA can scale workloads across multiple clusters;
+it can be installed by karmadactl and karmada-operator.
+
+Please refer to [v1.7.0 Release Notes](https://github.com/karmada-io/karmada/releases/tag/v1.7.0) for more details.
diff --git a/versioned_docs/version-v1.9/administrator/upgrading/v1.7-v1.8.md b/versioned_docs/version-v1.9/administrator/upgrading/v1.7-v1.8.md
new file mode 100644
index 000000000..e1788a200
--- /dev/null
+++ b/versioned_docs/version-v1.9/administrator/upgrading/v1.7-v1.8.md
@@ -0,0 +1,23 @@
+---
+title: v1.7 to v1.8
+---
+
+Follow the [Regular Upgrading Process](./README.md).
+
+## Upgrading Notable Changes
+
+### API Changes
+
+* Introduced `ServiceProvisionClusters` and `ServiceConsumptionClusters`, which are used to specify where a service is provisioned and where it is consumed.
+
+### karmada-controller-manager
+
+* Introduced the hpaReplicasSyncer controller, which syncs a workload's replicas from the member cluster to the control plane.
+  In the new version, the `currentReplicas` and `desiredReplicas` in the status of the HPA are aggregated to the ResourceTemplate by default,
+  and the `replicas` of the ResourceTemplate are automatically retained if it is scaled by an HPA.
+
+### karmada-aggregated-apiserver
+
+* Added a CA to verify the validity of member clusters' server certificates.
+
+Please refer to [v1.8.0 Release Notes](https://github.com/karmada-io/karmada/releases/tag/v1.8.0) for more details.
diff --git a/versioned_docs/version-v1.9/casestudies/adopters.md b/versioned_docs/version-v1.9/casestudies/adopters.md
new file mode 100644
index 000000000..e86c24462
--- /dev/null
+++ b/versioned_docs/version-v1.9/casestudies/adopters.md
@@ -0,0 +1,4 @@
+---
+title: Karmada Adopters
+---
+The current page has been moved to [Karmada Adopters](https://karmada.io/adopters).
diff --git a/versioned_docs/version-v1.9/casestudies/ci123.md b/versioned_docs/version-v1.9/casestudies/ci123.md
new file mode 100644
index 000000000..9892cfc2d
--- /dev/null
+++ b/versioned_docs/version-v1.9/casestudies/ci123.md
@@ -0,0 +1,170 @@
+---
+title: Karmada in AIML INSTITUTE
+---
+
+## Background
+
+AIML INSTITUTE is a tech company that helps enterprises build integrated cloud native solutions for
+digital transformation. Their featured product is MSP Cloud Native Application Platform, which focuses
+on cloud native, data intelligence, and application security/performance/intelligence. The platform
+provides tailored cloud native, big data, AI, and information security services, covering the entire
+lifecycle from development, running, to operations.
+
+![Karmada at ci123](../resources/casestudies/ci123/adoptions-ci123-architecture.png "Karmada at ci123")
+
+Built on Kubernetes, their platform is cloud-vendor-independent. Customers can host service applications
+without regard for vendor differences. Their customers are demanding more and more on multi-cloud,
+and the scale and number of clusters and the O&M complexity grow sharply, they needed to find a way to
+satisfy their customers. They researched, compared, and tested open source projects and self-development
+proposals. AIML INSTITUTE finally chose Karmada. The following describes the reasons for selecting Karmada
+and how it is implemented.
+
+## Multi-Cluster Solution Selection
+
+Currently, AIML INSTITUTE has about 50+ self-built clusters, each with hundreds to thousands of nodes.
+These clusters are heterogeneous. Some clusters contain heterogeneous compute resources such as GPUs,
+and some edge clusters are built using K3s. Therefore, AIML INSTITUTE selects the multi-cluster solution
+based on the following principles.
+
+- The cluster API definition must be abstract and flexible enough to describe the cluster statuses, resources, and members. Only a thin glue layer is required for heterogeneous members in a cluster and heterogeneous clusters.
+- The solution must be compatible with Kubernetes native APIs and CRDs. In this way, existing systems can be migrated to multi-cluster environments with no or little code refactoring.
+- The solution must support multi-cluster resource scheduling policies and customized scaling capabilities, because clusters scattered around the world need to be managed on a unified platform.
+- The control plane must have high availability and performance, so that the multi-cluster system can be horizontally expanded as the scale increases.
+
+Why Karmada? First, Karmada is compatible with Kubernetes native APIs and CRDs. Second, the architecture of Karmada is similar to that of Kubernetes. Both are progressive and scalable. What's more, Karmada has unique advantages over other Kubernetes projects:
+- **Independent etcd cluster:** Karmada enables the control plane to provide storage for more resources without impacting Kubernetes clusters, and will allow further separation of large-sized resource objects on the control plane for larger-scale management.
+- **Independent scheduler:** Karmada uses an independent scheduler to realize multi-cluster scheduling.
+- **Agent/Non-agent access:** Compared with the all-in-one system on the control plane, the agent/non-agent access fits more scenarios. The following figure shows the Karmada architecture.
+
+****
+
+![Karmada architecture](../resources/general/architecture.png "Karmada architecture")
+
+## Karmada Implementation
+
+## Multi-Cluster Management
+
+When clusters grow in both scale and differences in versions, configurations, compute resources, and API resources, the management complexity increases significantly. An automated system can be the solution. Based on the cluster CRD defined by Karmada, AIML INSTITUTE automates multi-cluster management to unburden system administrators.
+
+### Streamlining Cluster Management
+
+Karmada provides two cluster synchronization modes for the collaboration between the Karmada control plane and member clusters.
+
+1. **Push mode:** The Karmada control plane directly manages member clusters and syncs resources. Users only need to register member clusters with the Karmada control plane and do not need to deploy additional components.
+
+2. **Pull mode:** Member clusters proactively pull resources from the Karmada control plane and sync their states. Users only need to deploy the karmada-agent component in member clusters, and the component automatically registers clusters.
+
+The working principles are as follows:
+
+| Karmada Control Plane | Member Cluster | Synchronization Mode |
+|-----------------------|--------------------------------|----------------------|
+| Public cloud | Private cloud | Pull |
+| Public cloud | Public cloud (public network) | Pull |
+| Public cloud | Public cloud (private network) | Push |
+
+### Automating Cluster Management
+
+Adding a Kubernetes cluster is complex, involving cluster creation, verification and registration,
+as well as cross-department collaboration. AIML INSTITUTE adds service-related management policies.
+Here the hybrid cloud system manages the cloud and on-premises IaaS resources.
+
+![Automatic cluster management](../resources/casestudies/ci123/adoptions-ci123-automation-cluster-en.png "Automatic cluster management")
+
+### Integrating Karmada to the Existing Platform
+
+AIML INSTITUTE integrates the preceding functions with its cloud platform to simplify system O&M and
+administrator operations.
+
+![MSP-Karmada integration-1](../resources/casestudies/ci123/adoptions-ci123-msp-multicluster-1.png "MSP-Karmada integration-1")
+
+![MSP-Karmada integration-2](../resources/casestudies/ci123/adoptions-ci123-msp-multicluster-2.png "MSP-Karmada integration-2")
+
+## Multi-Cluster Resource Management and Scheduling
+
+If a multi-cluster project uses CRDs to encapsulate Kubernetes native APIs, they will be difficult to
+interconnect with existing systems, requiring heavy reconstruction workload. What's worse, this exposes
+the complexity of resource management and scheduling to users and system O&M personnel. Karmada is
+surprisingly helpful to solve this issue. As a multi-cluster container orchestration project that is
+fully compatible with Kubernetes native APIs, Karmada allows users to propagate existing cluster
+resources to multiple clusters without any modification. It takes care of the complexity of multi-cluster
+resource management and scheduling. The following figure shows the Karmada API workflow:
+
+![](../resources/general/karmada-resource-relation.png)
+
+This workflow implements the design of separating mechanisms from policies. Karmada defines a multi-cluster
+resource management mechanism and related policies. It provides propagation policies for system
+administrators to define resource propagation across clusters, and override policies to define the
+differentiated configurations of multiple clusters. End users only need to submit their Kubernetes native
+API declarations. The following figure shows how AIML INSTITUTE integrates this mechanism into its platform.
+
+![Multi-cluster capabilities](../resources/casestudies/ci123/adoptions-ci123-multicluster-capability.png "Multi-cluster capabilities")
+
+### Advanced Scheduling Policies
+
+Different from other multi-cluster systems, Karmada supports many advanced scheduling policies:
+
+- **Directional:** Similar to scheduling pods to nodes in Kubernetes, this mode schedules deployed resources to specified clusters.
+- **Affinity:** Similar to that in Kubernetes, this mode supports syntax such as label selector and match expression.
+- **Taints and tolerations:** Similar to those of Kubernetes. Karmada cluster APIs declare taints of clusters. If a resource can tolerate the taints of a cluster, it can be scheduled to the cluster. In addition, Karmada implements a scheduler framework similar to that of Kubernetes. When the default scheduling policies do not meet requirements, AIML INSTITUTE customizes scheduling policies as a supplement. Karmada also provides a **SchedulerName** field in propagation policies to determine which scheduler to use. The field can replace the default scheduler to meet higher scheduling requirements in complex scenarios.
+
+In summary, Karmada supports flexible scheduling policies, on-demand scaling, and a multi-scheduler architecture. These capabilities satisfy AIML INSTITUTE and are available to users in two ways (a sketch of the **SchedulerName** field follows the list below):
+
+1. **Templates**: After filling in the Kubernetes native API declaration, users can select a system-created template to declare these scheduling policies. Thanks to Karmada, the system also supports template CRUD.
+2. **Strategies**: Capabilities are visualized for users to select, as shown below.
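+
+As an illustration of the **SchedulerName** field mentioned above, here is a sketch of a propagation policy that hands scheduling to a custom scheduler (all names below are made up):
+
+```yaml
+apiVersion: policy.karmada.io/v1alpha1
+kind: PropagationPolicy
+metadata:
+  name: gpu-workload-policy              # hypothetical policy name
+spec:
+  schedulerName: custom-gpu-scheduler    # hypothetical scheduler deployed alongside the default karmada-scheduler
+  resourceSelectors:
+    - apiVersion: apps/v1
+      kind: Deployment
+      name: gpu-workload                 # hypothetical workload
+  placement:
+    clusterAffinity:
+      clusterNames:
+        - cluster-a                      # hypothetical member cluster
+```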
+
+![Capability visualization](../resources/casestudies/ci123/adoptions-ci123-capability-visualization.png "Capability visualization")
+
+### Differentiated Resource Configurations for Multiple Clusters
+
+Workload configurations (for example, container image tags) of multiple clusters are often different. AIML INSTITUTE encapsulates Karmada's override policies for users, as shown in the following figure:
+
+![](../resources/casestudies/ci123/adoptions-ci123-override.png)
+
+### Multi-Cluster Resource State Aggregation
+
+Resources run in multiple clusters. It is critical to aggregate their states to form a unified view. Karmada aggregates states of native Kubernetes resources, which are used to build the platform as follows:
+
+![Capability visualization-1](../resources/casestudies/ci123/adoptions-ci123-unified-view-1.png "Capability visualization-1")
+
+![Capability visualization-2](../resources/casestudies/ci123/adoptions-ci123-unified-view-2.png " Capability visualization-2")
+
+## Integration of Karmada and Existing Systems
+
+It's easy to integrate Karmada with AIML INSTITUTE's existing systems. AriesSurface is the first project that AIML INSTITUTE tries to migrate from a single to multiple clusters. AriesSurface is a pipeline (DAG)-based inspection system used to detect deployment links and cluster data consistency, which are two critical metrics of clusters. With Karmada, AriesSurface is reconstructed into a multi-cluster inspection system that can observe the states of clusters globally. The following figure shows its new architecture:
+
+![](../resources/casestudies/ci123/adoptions-ci123-aries.png)
+
+
+1. The inspection system delivers CRDs to member clusters through Karmada propagation.
+2. The surface controllers of member clusters listen to CRD creation, render a pipeline based on the definition, and calculate DAGs. The controllers execute the DAGs and generate the inspection results.
+3. The control plane runs a detector to collect, aggregate, and calculate the inspection results. Perceptible link states of member clusters allow more effective cross-cluster deployment and scheduling. The following figure shows the Cluster Inspection page in the online system.
+
+ ![Cluster inspection](../resources/casestudies/ci123/adoptions-ci123-cluster-inspection.png "Cluster inspection")
+
+
+4. The inspection system also draws the time sequence states of the inspected links at different time points for tracing exceptions.
+
+![Time sequence state](../resources/casestudies/ci123/adoptions-ci123-sequence-status.png "Time sequence state")
+
+
+## Integration of Karmada and Community Ecosystem
+
+Karmada's design makes it easy to integrate with other community projects. The following figure shows the architecture of integrating Karmada with Velero for multi-cluster backup and restore. The multi-cluster backup and restore CRDs of Velero are added to the Karmada control plane, and propagated to specified member clusters according to the Karmada propagation policies. The Backup and Restore controllers reconcile the states and aggregate the CRD states of Velero in the member clusters.
+
+![Time sequence state](../resources/casestudies/ci123/adoptions-ci123-velero.png "Time sequence state")
+
+
+## Karmada for Multi-cluster Heterogeneous Resources
+
+The following figure shows how to manage multi-cluster GPU resources based on Karmada.
+
+![GPU resource management](../resources/casestudies/ci123/adoptions-ci123-gpu-resources.png "GPU resource management")
+
+On Karmada, AIML INSTITUTE quickly built a prototype system for multi-cluster GPU resource management. A time-based sharing model is used to virtualize the GPU compute capacity at the bottom layer. Multiple pods on the same node can share one or more GPU cards. The GPU agent of each node implements the GPU device plugin and registers the virtualized GPU cores and GPU memory as extended resources. GPU Scheduler, a Kubernetes scheduler plugin in Karmada, schedules extended resources. A basic idea of multi-cluster GPU management is to unify the GPU resource view and scheduling. Users submit GPU-related workloads and propagation policies, based on which GPU Scheduler performs a simple scheduling. GPU Scheduler also collects the GPU resource information of each member cluster to perform two-layer scheduling.
+
+## Summary
+
+Karmada brings in the following benefits for AIML INSTITUTE in multi-cluster management:
+
+1. Existing resource definitions can be migrated into multi-cluster environment without modification, owing to Karmada's compatibility with Kubernetes native APIs.
+2. Based on the Karmada cluster APIs, Cluster Controllers, and pull/push modes, a multi-cluster control standard is established for unified output of the multi-cluster management capabilities of any upper-layer system.
+3. Scenario-specific multi-cluster resource scheduling and orchestration are realized based on Karmada's built-in controllers, scheduler plugins, and extended scheduler equivalent to that of Kubernetes.
+4. The flexible architecture design of Karmada enables the existing single-cluster systems to be quickly switched to multi-cluster ones.
+ AIML INSTITUTE has witnessed the growth of Karmada. From Karmada v0.5 to v1.0, AIML INSTITUTE has participated in almost every weekly meeting and witnessed many exciting features from proposal to merge. Two members of the team have become Karmada members. This is a virtuous cycle between open source projects and commercial companies. Karmada helps AIML INSTITUTE build systems and AIML INSTITUTE feeds back problems and new ideas to the community. During this process, the team gets a deeper understanding of open source while contributing to the community. More developers are welcomed to participate in the community.
+
\ No newline at end of file
diff --git a/versioned_docs/version-v1.9/casestudies/daocloud.md b/versioned_docs/version-v1.9/casestudies/daocloud.md
new file mode 100644
index 000000000..6264aa5b8
--- /dev/null
+++ b/versioned_docs/version-v1.9/casestudies/daocloud.md
@@ -0,0 +1,5 @@
+---
+title: DaoCloud结合Karmada打造新一代企业级多云平台
+---
+
+TBD
\ No newline at end of file
diff --git a/versioned_docs/version-v1.9/casestudies/hurricane_engine.md b/versioned_docs/version-v1.9/casestudies/hurricane_engine.md
new file mode 100644
index 000000000..5859ad53c
--- /dev/null
+++ b/versioned_docs/version-v1.9/casestudies/hurricane_engine.md
@@ -0,0 +1,5 @@
+---
+title: Karmada 在飓风引擎的实践与演进
+---
+
+TBD
diff --git a/versioned_docs/version-v1.9/casestudies/static/unionbigdata_01.png b/versioned_docs/version-v1.9/casestudies/static/unionbigdata_01.png
new file mode 100644
index 000000000..2295cad8a
Binary files /dev/null and b/versioned_docs/version-v1.9/casestudies/static/unionbigdata_01.png differ
diff --git a/versioned_docs/version-v1.9/casestudies/static/unionbigdata_02.png b/versioned_docs/version-v1.9/casestudies/static/unionbigdata_02.png
new file mode 100644
index 000000000..aa1167964
Binary files /dev/null and b/versioned_docs/version-v1.9/casestudies/static/unionbigdata_02.png differ
diff --git a/versioned_docs/version-v1.9/casestudies/static/unionbigdata_03.png b/versioned_docs/version-v1.9/casestudies/static/unionbigdata_03.png
new file mode 100644
index 000000000..3ce1a3c0e
Binary files /dev/null and b/versioned_docs/version-v1.9/casestudies/static/unionbigdata_03.png differ
diff --git a/versioned_docs/version-v1.9/casestudies/static/unionbigdata_04.png b/versioned_docs/version-v1.9/casestudies/static/unionbigdata_04.png
new file mode 100644
index 000000000..514177930
Binary files /dev/null and b/versioned_docs/version-v1.9/casestudies/static/unionbigdata_04.png differ
diff --git a/versioned_docs/version-v1.9/casestudies/static/unionbigdata_05.png b/versioned_docs/version-v1.9/casestudies/static/unionbigdata_05.png
new file mode 100644
index 000000000..be863fb8e
Binary files /dev/null and b/versioned_docs/version-v1.9/casestudies/static/unionbigdata_05.png differ
diff --git a/versioned_docs/version-v1.9/casestudies/static/unionbigdata_06.png b/versioned_docs/version-v1.9/casestudies/static/unionbigdata_06.png
new file mode 100644
index 000000000..172605435
Binary files /dev/null and b/versioned_docs/version-v1.9/casestudies/static/unionbigdata_06.png differ
diff --git a/versioned_docs/version-v1.9/casestudies/static/unionbigdata_07.png b/versioned_docs/version-v1.9/casestudies/static/unionbigdata_07.png
new file mode 100644
index 000000000..e5cdf4b0f
Binary files /dev/null and b/versioned_docs/version-v1.9/casestudies/static/unionbigdata_07.png differ
diff --git a/versioned_docs/version-v1.9/casestudies/unionbigdata.md b/versioned_docs/version-v1.9/casestudies/unionbigdata.md
new file mode 100644
index 000000000..07e6b6ad3
--- /dev/null
+++ b/versioned_docs/version-v1.9/casestudies/unionbigdata.md
@@ -0,0 +1,89 @@
+---
+title: UnionBigData's Utilization of Karmada in the Construction of I3Plat at BOE
+---
+
+## Industry Background
+
+In the LCD panel production field, products often develop defects due to a variety of factors. To address this, Automatic Optical Inspection (AOI) equipment, which utilizes optical principles to detect common flaws, was introduced after critical stages in the manufacturing process. However, existing AOI equipment could only detect whether a defect was present, requiring manual intervention to classify and identify false defects, which was both time-consuming and labor-intensive. [BOE Group](https://www.boe.com/) initially implemented an Automatic Defect Classification (ADC) system in one factory to enhance the accuracy of defect judgment and reduce the strain on workers. This system employs deep learning technology to automatically categorize images of defects identified by AOI, filtering out erroneous assessments, thereby increasing production efficiency.
+
+BOE initially introduced ADC in one factory, and then expanded its use to other factories, saving manpower and increasing the efficiency of judgments. Despite this advancement, the complexity of the processes and differences between suppliers led to fragmented and decentralized construction at the factory sites, complicating data sharing and operational maintenance. To address these issues, BOE initiated the construction of the Industrial Intelligent Detection Platform (I3Plat), which employs artificial intelligence to standardize intelligent detection and enhance both production efficiency and the yield rate.
+
+![BOE Industrial Intelligent Detection Platform](./static/unionbigdata_01.png)
+
+I3Plat centralizes ADC as its core, extending to model training and detection reinspection, achieving an integrated "cloud" (management + training) + "edge" (inference) + "endpoint" (business) solution, aiming to enhance production quality and the value of data through a standardized platform. The construction scope at the factory sites includes a resource sharing center, on-site training, and edge-side inference sub-platforms, all of which will be implemented across multiple factories.
+
+![I3Plat Platform Architecture](./static/unionbigdata_02.png)
+
+The project aims to launch ADC at the factory sites, facilitate resource sharing, and achieve cloud-edge standardization to reduce the operational burden and uphold high standards. I3Plat is designed to streamline and standardize BOE Group's ADC systems across factories, providing a blueprint and reference for future ADC deployments. This aims to reduce costs and timeframes, and boost both production efficiency and the quality inspection process, ultimately improving product yields. It includes roles like system administrators and resource allocators, and entails workflows for ADC inference, model training, and data sharing, as well as cloud collaboration features, ensuring an automated defect classification process and enhancing the usage of models and defective image data.
+
+## Product and Technical Implementation
+
+### Cluster Management
+
+Factories can register their respective K8s clusters with the central cloud system, which then manages these clusters from a single point.
+
+![Cluster Management](./static/unionbigdata_03.png)
+
+We have chosen the PULL mode.
+
+To reduce the operational cost for operators, we offer a step-by-step registration process within the central cloud.
+
+1. Guide the installation of karmada-agent.
+2. Generate a token using `karmadactl token create` in the control plane.
+3. Proceed with the registration using `karmadactl register` (see the command sketch after this list).
+4. Edit the `deploy/karmada-agent` created by `karmadactl register` in the member cluster to ensure its access to the kube-apiserver of that member cluster.
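+
+A rough command sketch for steps 2 and 3 (the API server address, token, hash, and cluster name are placeholders; flags may differ between Karmada versions, so check `karmadactl register --help`):
+
+```
+# On the Karmada control plane: create a bootstrap token and print the matching register command
+karmadactl token create --print-register-command
+
+# On the member cluster: register it in Pull mode (this deploys karmada-agent)
+karmadactl register <karmada-apiserver>:32443 --token <token> --discovery-token-ca-cert-hash sha256:<hash> --cluster-name <factory-cluster>
+```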
+
+### Using Aggregate Layer API
+
+Through the unified cluster access provided by the karmada-aggregator component, we are able to implement functions in the central cloud that aggregate data from member clusters, such as visualized dashboards.
+
+Typically we expose functions implemented in Java using a Service, and invoke `kubectl get --raw` with the Java Fabric8 client or similar clients:
+
+```
+/apis/cluster.karmada.io/v1alpha1/clusters/%s/proxy/api/v1/namespaces/%s/services/%s/proxy/%s
+```
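+
+For example, a hypothetical invocation with the placeholders filled in (cluster `factory-a`, namespace `msp`, Service `dashboard-api`, and the trailing sub-path are made-up values):
+
+```
+kubectl get --raw "/apis/cluster.karmada.io/v1alpha1/clusters/factory-a/proxy/api/v1/namespaces/msp/services/dashboard-api/proxy/overview"
+```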
+
+#### Cluster Monitoring
+
+For online clusters, the central cloud system can display monitoring data for key metrics such as memory, CPU, disk, network ingress and egress rates, GPU, and logs, with the ability to switch between cluster views.
+
+![Resource Monitoring](./static/unionbigdata_04.png)
+
+The central cloud can see the same monitoring data as the training cloud; the cluster's Java program wraps PromQL queries behind Karmada's aggregate layer API and serves them to the front-end page. Below is an example query path, invoked from Java, for node CPU utilization:
+
+```
+/apis/cluster.karmada.io/v1alpha1/clusters/%s/proxy/api/v1/namespaces/%s/services/%s/proxy/api/v1/query_range?query=node:node_cpu_utilization:avg1m{node='%s'}&start=%s&end=%s&step=%s
+```
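+
+Filled in with illustrative values, such a request can also be issued directly with `kubectl get --raw`; note that PromQL characters such as braces and quotes must be URL-encoded, and the service and node names below are placeholders:
+
+```shell
+# query=node:node_cpu_utilization:avg1m{node='node-1'}, URL-encoded.
+kubectl get --raw \
+  "/apis/cluster.karmada.io/v1alpha1/clusters/member1/proxy/api/v1/namespaces/monitoring/services/prometheus-service/proxy/api/v1/query_range?query=node%3Anode_cpu_utilization%3Aavg1m%7Bnode%3D%27node-1%27%7D&start=1700000000&end=1700003600&step=60" \
+  --kubeconfig /etc/karmada/karmada-apiserver.config
+```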
+
+#### Central Cloud Data Distribution
+
+Data uploaded by users in the central cloud can be freely distributed to designated on-site locations, including datasets, annotations, operator projects, operator images, and models.
+
+![Data Release](./static/unionbigdata_05.png)
+
+Datasets, operator projects, and models are typically files that are saved to local or NAS storage after transfer. Annotations are usually structured data saved to a database after transfer. Operator images are generally exported as tar packages and pushed to the Harbor registry of the target cluster after transfer.
+In addition to Karmada's control plane, the central cloud also has its own business K8s cluster, including storage, so it can act as a relay station. All of this is done through Karmada's aggregate layer API, which calls the file upload services we provide, enabling cluster-to-cluster calls.
+
+#### Cross-Factory Training
+
+In cases where a factory site lacks sufficient training resources, it can apply for resources from other sites to conduct cross-factory training. This function works by sending the data sets, annotations, operator projects, operator images, etc., needed for training at Site A to Site B, where training is carried out using Site B's resources. The trained model is then returned to Site A.
+
+![Cross-Site Training](./static/unionbigdata_06.png)
+
+The principle is similar to central cloud data distribution, where the data required for a task is sent directly to the corresponding cluster, demonstrating the call relationship between member clusters.
+
+#### Visualization Dashboard
+
+Based on the sites registered with the central cloud, various indicator data from different sites are collected for display on a large-screen dashboard.
+
+![Visualization Dashboard](./static/unionbigdata_07.png)
+
+Using Karmada's aggregate layer API, we can conveniently call services in member clusters to display real-time data on these dashboards, without routing every data display through offline or real-time big data analysis. This improves timeliness.
+
+## Project Management
+
+The project team consists of our company's experienced training platform product managers, as well as professional R&D and test engineers, totaling 14 members. The team started work in April 2023 and completed development and deployment by December 2023. The project had three major milestones, and each stage was filled with challenges, but every team member persevered, responded actively, and demonstrated the team's fighting spirit, cohesion, and professional capabilities.
+
+Considering that the users of the training platform are mainly algorithm engineers and production line operators, who have significant differences in usage habits and knowledge backgrounds, the product managers conducted in-depth market research and discussions. They ultimately designed a system that meets the flexibility needs of algorithm engineers while also satisfying the production line operators' pursuit of efficiency and simplicity.
+
+To ensure the project's scope, schedule, quality, and cost are controllable, we held key stage meetings including product design, development, testing, and deployment reviews, as well as regular project meetings and customer communication meetings. After system deployment, we actively solicited user feedback, resolved issues, and continued to optimize the system to meet customer needs.
\ No newline at end of file
diff --git a/versioned_docs/version-v1.9/casestudies/vipkid.md b/versioned_docs/version-v1.9/casestudies/vipkid.md
new file mode 100644
index 000000000..4d7b15148
--- /dev/null
+++ b/versioned_docs/version-v1.9/casestudies/vipkid.md
@@ -0,0 +1,125 @@
+---
+title: Building a PaaS Platform with Karmada to Run Containers --VIPKID
+---
+
+Author: Ci Yiheng, Backend R&D Expert, VIPKID
+
+## Background
+
+VIPKID is an online English education platform with more than 80,000 teachers and 1 million trainees.
+It has delivered 150 million training sessions across countries and regions. To provide better services,
+VIPKID deploys applications by region and close to teachers and trainees. Therefore,
+VIPKID purchased dozens of clusters from multiple cloud providers around the world to build its internal infrastructure.
+
+## Born Multi-Cloud and Cross-Region
+
+VIPKID provides services internationally. Native speakers may be teaching students in China, or learners may be studying with Chinese teachers.
+To provide optimal online class experience, VIPKID sets up a low-latency network and deploys computing services close to teachers and trainees separately.
+Such deployment depends on resources from multiple public cloud vendors. Managing multi-cloud resources has long become a part of VIPKID's IaaS operations.
+
+### Multi-Cluster Policy
+
+We first tried the single-cluster mode to containerize our platform because it was simple and low-cost. We dropped it after evaluating the network quality, the infrastructure (network and storage) solutions across clouds and regions, and our project schedule. There are two major reasons:
+1) Network latency and stability between clouds cannot be guaranteed.
+2) Different vendors have different solutions for container networking and storage.
+
+Resolving these problems would have been costly. In the end, we decided to configure Kubernetes clusters by cloud vendor and region. That's why we have so many clusters.
+
+### Cluster Disaster Recovery
+
+DR (disaster recovery) is easier for containers than for VMs. Kubernetes provides DR solutions for pods and nodes, but not for entire clusters. Thanks to our microservice refactoring, we can quickly create a cluster or scale an existing one to migrate computing services.
+
+## Challenges and Pain Points
+
+### Running the Same Application in Different Clusters
+
+During deployment, we found that the workloads of the same application vary greatly in different clusters in terms of images, startup parameters (configurations), and release versions. In the early stage, we wanted our developers to be able to manage applications directly on our own PaaS platform. However, the increasing customization made it more and more difficult to abstract the differences.
+
+We had to turn to our O&M team, but even they struggled in some complex scenarios. This is not DevOps: it neither reduces costs nor increases efficiency.
+
+### Quickly Migrating Applications upon Faults
+
+Fault migration can be focused on applications or clusters. The application-centric approach focuses on the
+self-healing of key applications and the overall load in multi-cluster mode.
+The cluster-centric approach focuses on the disasters (such as network faults) that may impact all clusters or on the
+delivery requirements when creating new clusters. You need to set different policies for these approaches.
+
+**Application-centric: Dynamic Migration**
+
+Flexibly deploying an application in multiple clusters can ensure its stability. For example, if an instance in a cluster is faulty and cannot be quickly recovered, a new instance needs to be created automatically in another cluster of the same vendor or region based on the preset policy.
+
+**Cluster-centric: Quick Cluster Startup**
+
+Commonly, we start a new cluster to replace an unavailable one or to deliver services that depend on a specific cloud vendor or region. It would be best if clusters could be started as fast as pods.
+
+## Why Karmada
+
+### Any Solutions Available?
+
+Service systems evolve fast and need clear boundaries between modules. To address the pain points, you need to, to some extent, abstract, decouple, and refactor your systems.
+
+For us, service requirements were deeply coupled with cluster resources. We wanted to decouple them via multi-cluster management. Specifically, use the self-developed platform to manage the application lifecycle, and use a system to manage operation instructions on cluster resources.
+
+We probed into the open source communities to find products that support multi-cluster management. However, most products either serve as a platform like ours or manage resources by cluster.
+
+We wanted to manage multiple Kubernetes clusters like one single, large cluster. In this way, a workload can be regarded as an independent application (or a version of an application) instead of a replica of an application in multiple clusters.
+
+We also wanted to lower the access costs as much as possible. We surveyed and evaluated many solutions in the communities and decided on Karmada.
+
+### Karmada, the Solution of Choice
+
+Karmada has the following advantages:
+1) Karmada allows us to manage multiple clusters like one single cluster and manage resources in an application-centric approach. In addition, almost all configuration differences can be independently declared through Karmada's Override policies, which is simple, intuitive, and easy to manage.
+2) Karmada uses native Kubernetes APIs. We need no adaptation and the access cost is low. Karmada also manifests configurations through CRDs: it dynamically turns distribution and differentiated configurations into Propagation and Override policies and delivers them to the Karmada control plane.
+3) Karmada sits under the open governance of a neutral community. The community welcomes open discussions on requirements and ideas, and we improved technically while contributing to it.
+
+## Karmada at VIPKID
+
+Our platform caters to all container-based deployments, covering stateful or stateless applications, hybrid deployment of online and offline jobs, AI, and big data services. This platform does not rely on any public cloud. Therefore, we cannot use any encapsulated products of cloud vendors.
+
+We use the internal IaaS platform to create and scale out clusters, configure VPCs, subnets, and security groups of different vendors. In this way, vendor differences become the least of worries for our PaaS platform.
+
+In addition, we provide GitOps for developers to manage system applications and components. This is more user-friendly and efficient for skilled developers.
+
+### Containerization Based on Karmada
+
+At the beginning, we designed a component (cluster aggregation API) in the platform to interact with Kubernetes clusters. We retained the native Kubernetes APIs and added some cluster-related information.
+However, there were complex problems during the implementation. For example, because the PaaS system had to render declarations of each resource to multiple clusters, the applications we maintained in different clusters were unrelated to one another. We spent much effort on these problems, even after CRDs were introduced, and the system still needed to keep track of the details of each cluster, which goes against what a cluster aggregation API is supposed to do.
+When a large number of clusters go online and offline frequently, we need to change configurations in batches for applications in the GitOps model to keep the clusters running normally. However, GitOps did not cope with the increasing complexity as expected.
+
+The following figure shows the differences before and after we used Karmada.
+
+![Karmada at VIPKID](../resources/casestudies/vipkid/adoptions-vipkid-architecture.png)
+
+**After Karmada is introduced, the multi-cluster aggregation layer is truly unified.** We can manage resources by application on the Karmada control plane. We only need to interact with Karmada, not the clusters, which simplifies containerized application management and enables our PaaS platform to fully focus on service requirements.
+With Karmada integrated into GitOps, system components can be easily released and upgraded in each cluster, which is far more efficient than before.
+
+## Benefits
+
+Managing Kubernetes resources by application simplifies the platform and greatly improves utilization. Here are the improvements brought by Karmada.
+
+**1) Higher deployment efficiency**
+
+Previously, we needed to send deployment instructions to each cluster and monitor the deployment status, which required us to continuously check resources and handle exceptions. Now, application statuses are automatically collected and detected by Karmada.
+
+**2) Differentiated control on applications**
+
+Adopting DevOps means developers can easily manage the lifecycle of applications.
+We leverage Karmada Override policies to directly interconnect with application profiles such as environment variables, startup parameters, and image repositories so that developers can better control the differences of applications in different clusters.
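+
+As a rough sketch (the policy, workload, cluster, and registry names below are made up, not VIPKID's actual configuration), an OverridePolicy that rewrites the image registry for one cluster looks like this:
+
+```shell
+# Sketch only: replace the image registry of an nginx Deployment in cluster "member2".
+kubectl apply --kubeconfig /etc/karmada/karmada-apiserver.config -f - <<EOF
+apiVersion: policy.karmada.io/v1alpha1
+kind: OverridePolicy
+metadata:
+  name: nginx-registry-override
+spec:
+  resourceSelectors:
+    - apiVersion: apps/v1
+      kind: Deployment
+      name: nginx
+  overrideRules:
+    - targetCluster:
+        clusterNames:
+          - member2
+      overriders:
+        imageOverrider:
+          - component: Registry
+            operator: replace
+            value: registry.example.com
+EOF
+```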
+
+**3) Quick cluster startup and adaptation to GitOps**
+
+Basic services (system and common services) are configured for all clusters in Karmada Propagation policies and managed by Karmada when a new cluster is created. These basic services can be delivered along with the cluster, requiring no manual initialization and greatly shortening the delivery process.
+Most basic services are managed by the GitOps system, which is convenient and intuitive.
+
+**4) Short reconstruction period and no impact on services**
+
+Thanks to the support of native Kubernetes APIs, we can quickly integrate Karmada into our platform.
+We use Karmada the way we use Kubernetes. The only thing we need to configure is Propagation policies,
+which can be customized by resource name, resource type, or LabelSelector.
+
+## Gains
+
+Since February 2021, three of us have become contributors to the Karmada community.
+We have witnessed the releases of Karmada from version 0.5.0 to 1.0.0. Writing code that satisfies everyone is challenging.
+We have learned a lot from the community during the practice, and we always welcome more of you to join us.
+
diff --git a/versioned_docs/version-v1.9/contributor/cherry-picks.md b/versioned_docs/version-v1.9/contributor/cherry-picks.md
new file mode 100644
index 000000000..5b88f4f2d
--- /dev/null
+++ b/versioned_docs/version-v1.9/contributor/cherry-picks.md
@@ -0,0 +1,121 @@
+---
+title: How to cherry-pick PRs
+---
+
+This document explains how cherry picks are managed on release branches within
+the `karmada-io/karmada` repository.
+A common use case for this task is backporting PRs from master to release
+branches.
+
+> This doc is lifted from [Kubernetes cherry-pick](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-release/cherry-picks.md).
+
+- [Prerequisites](#prerequisites)
+- [What Kind of PRs are Good for Cherry Picks](#what-kind-of-prs-are-good-for-cherry-picks)
+- [Initiate a Cherry Pick](#initiate-a-cherry-pick)
+- [Cherry Pick Review](#cherry-pick-review)
+- [Troubleshooting Cherry Picks](#troubleshooting-cherry-picks)
+- [Cherry Picks for Unsupported Releases](#cherry-picks-for-unsupported-releases)
+
+## Prerequisites
+
+- A pull request merged against the `master` branch.
+- The release branch exists (example: [`release-1.0`](https://github.com/karmada-io/karmada/tree/release-1.0))
+- The normal git and GitHub configured shell environment for pushing to your
+ karmada `origin` fork on GitHub and making a pull request against a
+ configured remote `upstream` that tracks
+ `https://github.com/karmada-io/karmada.git`, including `GITHUB_USER`.
+- Have GitHub CLI (`gh`) installed following [installation instructions](https://github.com/cli/cli#installation).
+- A GitHub personal access token with the "repo" and "read:org" permissions.
+  The permissions are required for [gh auth login](https://cli.github.com/manual/gh_auth_login)
+  and are not used for anything unrelated to the cherry-pick creation process
+  (creating a branch and initiating a PR).
+
+## What Kind of PRs are Good for Cherry Picks
+
+Compared to the normal merge volume on the master branch over time,
+the release branches see one to two orders of magnitude fewer PRs.
+This is because they receive one to two orders of magnitude more scrutiny.
+Again, the emphasis is on critical bug fixes, e.g.,
+
+- Loss of data
+- Memory corruption
+- Panic, crash, hang
+- Security
+
+A bugfix for a functional issue (not a data loss or security issue) that only
+affects an alpha feature does not qualify as a critical bug fix.
+
+If you are proposing a cherry pick and it is not a clear and obvious critical
+bug fix, please reconsider. If upon reflection you wish to continue, bolster
+your case by supplementing your PR with e.g.,
+
+- A GitHub issue detailing the problem
+
+- Scope of the change
+
+- Risks of adding a change
+
+- Risks of associated regression
+
+- Testing performed, test cases added
+
+- Key stakeholder reviewers/approvers attesting to their confidence in the
+ change being a required backport
+
+It is critical that our full community is actively engaged on enhancements in
+the project. If a released feature was not enabled on a particular provider's
+platform, this is a community miss that needs to be resolved in the `master`
+branch for subsequent releases. Such enabling will not be backported to the
+patch release branches.
+
+## Initiate a Cherry Pick
+
+- Run the [cherry pick script][cherry-pick-script]
+
+ This example applies a master branch PR #1206 to the remote branch
+ `upstream/release-1.0`:
+
+ ```shell
+ hack/cherry_pick_pull.sh upstream/release-1.0 1206
+ ```
+
+ - Be aware the cherry pick script assumes you have a git remote called
+ `upstream` that points to the Karmada github org.
+
+ - You will need to run the cherry pick script separately for each patch
+ release you want to cherry pick to. Cherry picks should be applied to all
+ active release branches where the fix is applicable.
+
+ - If `GITHUB_TOKEN` is not set you will be asked for your github password:
+ provide the github [personal access token](https://github.com/settings/tokens) rather than your actual github
+ password. If you can securely set the environment variable `GITHUB_TOKEN`
+ to your personal access token then you can avoid an interactive prompt.
+ Refer [https://github.com/github/hub/issues/2655#issuecomment-735836048](https://github.com/github/hub/issues/2655#issuecomment-735836048)
+
+## Cherry Pick Review
+
+As with any other PR, code OWNERS review (`/lgtm`) and approve (`/approve`) on
+cherry pick PRs as they deem appropriate.
+
+The same release note requirements apply as normal pull requests, except the
+release note stanza will auto-populate from the master branch pull request from
+which the cherry pick originated.
+
+## Troubleshooting Cherry Picks
+
+Contributors may encounter some of the following difficulties when initiating a
+cherry pick.
+
+- A cherry pick PR does not apply cleanly against an old release branch. In
+ that case, you will need to manually fix conflicts.
+
+- The cherry pick PR includes code that does not pass CI tests. In such a case
+ you will have to fetch the auto-generated branch from your fork, amend the
+ problematic commit and force push to the auto-generated branch.
+ Alternatively, you can create a new PR, which is noisier.
+
+## Cherry Picks for Unsupported Releases
+
+Which releases the community supports and patches still needs to be discussed.
+
+[cherry-pick-script]: https://github.com/karmada-io/karmada/blob/master/hack/cherry_pick_pull.sh
diff --git a/versioned_docs/version-v1.9/contributor/contribute-docs.md b/versioned_docs/version-v1.9/contributor/contribute-docs.md
new file mode 100644
index 000000000..13c22974d
--- /dev/null
+++ b/versioned_docs/version-v1.9/contributor/contribute-docs.md
@@ -0,0 +1,192 @@
+---
+title: How to contribute docs
+---
+
+Starting from version 1.3, the community documentation will be available on the Karmada website.
+This document explains how to contribute docs to
+the `karmada-io/website` repository.
+
+## Prerequisites
+
+- Docs, like code, are categorized and stored by version.
+ 1.3 is the first version we have archived.
+- Docs need to be translated into multiple languages for readers from different regions.
+ The community now supports both Chinese and English.
+ English is the official language of documentation.
+- Our docs are written in Markdown. If you are unfamiliar with Markdown, please see https://guides.github.com/features/mastering-markdown/, or https://www.markdownguide.org/ if you are looking for something more substantial.
+- We build the docs with [Docusaurus 2](https://docusaurus.io/), a modern static website generator.
+
+## Setup
+
+You can set up your local environment by cloning our website repository.
+
+```shell
+git clone https://github.com/karmada-io/website.git
+cd website
+```
+
+Our website is organized as shown below:
+
+```
+website
+├── sidebars.json # sidebar for the current docs version
+├── docs # docs directory for the current docs version
+│ ├── foo
+│ │ └── bar.md # https://mysite.com/docs/next/foo/bar
+│ └── hello.md # https://mysite.com/docs/next/hello
+├── versions.json # file to indicate what versions are available
+├── versioned_docs
+│ ├── version-1.1.0
+│ │ ├── foo
+│ │ │ └── bar.md # https://mysite.com/docs/foo/bar
+│ │ └── hello.md
+│ └── version-1.0.0
+│ ├── foo
+│ │ └── bar.md # https://mysite.com/docs/1.0.0/foo/bar
+│ └── hello.md
+├── versioned_sidebars
+│ ├── version-1.1.0-sidebars.json
+│ └── version-1.0.0-sidebars.json
+├── docusaurus.config.js
+└── package.json
+```
+
+The `versions.json` file is a list of versions, from the latest to earliest.
+The table below explains how a versioned file maps to its version and the generated URL.
+
+| Path | Version | URL |
+| --------------------------------------- | -------------- | ----------------- |
+| `versioned_docs/version-1.0.0/hello.md` | 1.0.0 | /docs/1.0.0/hello |
+| `versioned_docs/version-1.1.0/hello.md` | 1.1.0 (latest) | /docs/hello |
+| `docs/hello.md` | current | /docs/next/hello |
+
+:::tip
+
+The files in the `docs` directory belong to the `current` docs version.
+
+The `current` docs version is labeled as `Next` and hosted under `/docs/next/*`.
+
+Contributors mainly contribute documentation to the current version.
+:::
+
+## Writing docs
+
+### Starting a title at the top
+
+It's important to specify metadata about your article at the top of the Markdown file, in a section called **Front Matter**.
+
+For now, let's take a look at a quick example that explains the most relevant entries in **Front Matter**:
+
+```
+---
+title: A doc with tags
+---
+
+## secondary title
+```
+
+The top section between the two lines of `---` is the Front Matter section. Here we define a couple of entries that tell Docusaurus how to handle the article:
+* Title is the equivalent of the `<h1>` tag in an HTML document or `# ` in a Markdown article.
+* Each document has a unique ID. By default, a document ID is the name of the document (without the extension) relative to the root docs directory.
+
+### Linking to other docs
+
+You can easily route to other places by adding any of the following links:
+* Absolute URLs to external sites like `https://github.com` or `https://k8s.io` - you can use any of the Markdown notations for this, so
+ * `<https://k8s.io>` or
+ * `[kubernetes](https://k8s.io)` will work.
+* Link to markdown files or the resulting path.
+ You can use relative paths to index the corresponding files.
+* Link to pictures or other resources.
+  If your article contains images or other resources, you may create a corresponding directory in `/docs/resources` and place the article-related files in that directory.
+  Currently we store public pictures about Karmada in `/docs/resources/general`. You can link to a picture as follows:
+ * `![Git workflow](../resources/contributor/git_workflow.png)`
+
+### Directory organization
+
+Docusaurus 2 uses a sidebar to manage documents.
+
+Creating a sidebar is useful to:
+* Group multiple related documents
+* Display a sidebar on each of those documents
+* Provide paginated navigation, with Next/Previous buttons
+
+For our docs, you can see how the documents are organized in [https://github.com/karmada-io/website/blob/main/sidebars.js](https://github.com/karmada-io/website/blob/main/sidebars.js).
+
+```
+module.exports = {
+ docs: [
+ {
+ type: "category",
+ label: "Core Concepts",
+ collapsed: false,
+ items: [
+ "core-concepts/introduction",
+ "core-concepts/concepts",
+ "core-concepts/architecture",
+ ],
+ },
+ {
+ type: "doc",
+ id: "key-features/features",
+ },
+ {
+ type: "category",
+ label: "Get Started",
+ items: [
+ "get-started/nginx-example"
+ ],
+ },
+....
+```
+
+The order of documents in a category strictly follows the order of `items`.
+```
+type: "category",
+label: "Core Concepts",
+collapsed: false,
+items: [
+ "core-concepts/introduction",
+ "core-concepts/concepts",
+ "core-concepts/architecture",
+],
+```
+
+If you add a document, you must add it to `sidebars.js` for it to display properly. If you're not sure where to place your document, ask community members in the PR.
+
+### About Chinese docs
+
+If you want to contribute to our Chinese documentation, you can:
+* Translate our existing English docs to Chinese. In this case, you need to modify the corresponding file content from [https://github.com/karmada-io/website/tree/main/i18n/zh/docusaurus-plugin-content-docs/current](https://github.com/karmada-io/website/tree/main/i18n/zh/docusaurus-plugin-content-docs/current).
+  The organization of this directory is exactly the same as that of the English docs. `current.json` holds the Chinese translations of the documentation directory names; you can edit it if you want to translate the name of a directory.
+* Submit Chinese docs that have no English version. There are no limits on the topic or category. In this case, you can add an empty article and its title to the English directory first and complete the content later.
+ Then add the corresponding Chinese content to the Chinese directory.
+
+## Debugging docs
+
+Suppose you have now finished editing the docs. After you open a PR against `karmada-io/website` and it passes CI, you can preview your document on the website.
+
+Click **Details** marked in red, and you will enter the preview view of the website.
+
+![Docs CI](../resources/contributor/debug-docs.png)
+
+Click **Next** and you can see the corresponding changes. If you have changes related to the Chinese version, click the language drop-down box next to it to switch to Chinese.
+
+![Click next](../resources/contributor/click-next.png)
+
+If the previewed page is not what you expected, please check your docs again.
+
+### Typos Check (optional)
+
+You can use the [spell checker](https://github.com/crate-ci/typos) to find and correct spelling mistakes after you update the documents.
+
+To install the spell checker tool, you can refer to [Install](https://github.com/crate-ci/typos?tab=readme-ov-file#install).
+
+Then, simply execute `typos . --config ./typos.toml` at the repository root in your local command line.
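+
+For example (assuming a Rust toolchain; other install options are listed in the typos README):
+
+```shell
+# Install the checker (one of several supported methods).
+cargo install typos-cli
+
+# Run the check from the repository root; findings are printed with file and line.
+typos . --config ./typos.toml
+```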
+
+## FAQ
+
+### Versioning
+
+Documents newly added for a version are synchronized to the latest version on that version's release date; documents for older versions are not modified.
+Errata found in the documentation are fixed with every release.
\ No newline at end of file
diff --git a/versioned_docs/version-v1.9/contributor/count-contributions.md b/versioned_docs/version-v1.9/contributor/count-contributions.md
new file mode 100644
index 000000000..b84c66a71
--- /dev/null
+++ b/versioned_docs/version-v1.9/contributor/count-contributions.md
@@ -0,0 +1,24 @@
+---
+title: Correct your information for better contribution
+---
+
+After contributing to [karmada-io](https://github.com/karmada-io) through issues, comments, pull requests, etc., you can check your contributions [here](https://karmada.devstats.cncf.io/d/66/developer-activity-counts-by-companies).
+
+If you notice that the information in the company column is either incorrect or blank, we highly recommend that you correct it.
+
+For instance, `Huawei Technologies Co. Ltd` should be used instead of `HUAWEI`:
+![Wrong Information](../resources/contributor/contributions_list.png)
+
+Here are the steps to fix this issue.
+
+## Verify your organization in the CNCF system
+To begin, visit your profile [page](https://openprofile.dev/edit/profile) and ensure that your organization is accurate.
+![organization-check](../resources/contributor/organization_check.png)
+* If the organization is incorrect, please select the right one.
+* If your organization is not on the list, click on **Add** to add your organization.
+
+## Update the CNCF repository used for calculating your contributions
+Once you have verified your organization in the CNCF system, you must create a pull request in gitdm with the updated affiliations.
+To do this, you'll need to modify two files: `company_developers*.txt` and `developers_affiliations*.txt`. For reference, please see this example pull request: [PR Example](https://github.com/cncf/gitdm/pull/183).
+
+After the pull request has been successfully merged, it may take up to four weeks for the changes to be synced.
diff --git a/versioned_docs/version-v1.9/contributor/github-workflow.md b/versioned_docs/version-v1.9/contributor/github-workflow.md
new file mode 100644
index 000000000..f8eae90d8
--- /dev/null
+++ b/versioned_docs/version-v1.9/contributor/github-workflow.md
@@ -0,0 +1,275 @@
+---
+title: "GitHub Workflow"
+description: An overview of the GitHub workflow used by the Karmada project. It includes some tips and suggestions on things such as keeping your local environment in sync with upstream and commit hygiene.
+---
+
+> This doc is lifted from [Kubernetes github-workflow](https://github.com/kubernetes/community/blob/master/contributors/guide/github-workflow.md).
+
+![Git workflow](../resources/contributor/git_workflow.png)
+
+### 1 Fork in the cloud
+
+1. Visit https://github.com/karmada-io/karmada
+2. Click `Fork` button (top right) to establish a cloud-based fork.
+
+### 2 Clone fork to local storage
+
+Per Go's [workspace instructions][go-workspace], place Karmada's code on your
+`GOPATH` using the following cloning procedure.
+
+[go-workspace]: https://golang.org/doc/code.html#Workspaces
+
+Define a local working directory:
+
+```sh
+# If your GOPATH has multiple paths, pick
+# just one and use it instead of $GOPATH here.
+# You must follow exactly this pattern,
+# neither `$GOPATH/src/github.com/${your github profile name}/`
+# nor any other pattern will work.
+export working_dir="$(go env GOPATH)/src/github.com/karmada-io"
+```
+
+Set `user` to match your github profile name:
+
+```sh
+export user={your github profile name}
+```
+
+Both `$working_dir` and `$user` are mentioned in the figure above.
+
+Create your clone:
+
+```sh
+mkdir -p $working_dir
+cd $working_dir
+git clone https://github.com/$user/karmada.git
+# or: git clone git@github.com:$user/karmada.git
+
+cd $working_dir/karmada
+git remote add upstream https://github.com/karmada-io/karmada.git
+# or: git remote add upstream git@github.com:karmada-io/karmada.git
+
+# Never push to upstream master
+git remote set-url --push upstream no_push
+
+# Confirm that your remotes make sense:
+git remote -v
+```
+
+### 3 Branch
+
+Get your local master up to date:
+
+```sh
+# Depending on which repository you are working from,
+# the default branch may be called 'main' instead of 'master'.
+
+cd $working_dir/karmada
+git fetch upstream
+git checkout master
+git rebase upstream/master
+```
+
+Branch from it:
+```sh
+git checkout -b myfeature
+```
+
+Then edit code on the `myfeature` branch.
+
+### 4 Keep your branch in sync
+
+```sh
+# Depending on which repository you are working from,
+# the default branch may be called 'main' instead of 'master'.
+
+# While on your myfeature branch
+git fetch upstream
+git rebase upstream/master
+```
+
+Please don't use `git pull` instead of the above `fetch` / `rebase`. `git pull`
+does a merge, which leaves merge commits. These make the commit history messy
+and violate the principle that commits ought to be individually understandable
+and useful (see below). You can also consider changing your `.git/config` file via
+`git config branch.autoSetupRebase always` to change the behavior of `git pull`, or another non-merge option such as `git pull --rebase`.
+
+### 5 Commit
+
+Commit your changes.
+
+```sh
+git commit --signoff
+```
+Likely you go back and edit/build/test some more then `commit --amend`
+in a few cycles.
+
+### 6 Push
+
+When ready to review (or just to establish an offsite backup of your work),
+push your branch to your fork on `github.com`:
+
+```sh
+git push -f ${your_remote_name} myfeature
+```
+
+### 7 Create a pull request
+
+1. Visit your fork at `https://github.com/$user/karmada`
+2. Click the `Compare & Pull Request` button next to your `myfeature` branch.
+
+_If you have upstream write access_, please refrain from using the GitHub UI for
+creating PRs, because GitHub will create the PR branch inside the main
+repository rather than inside your fork.
+
+#### Get a code review
+
+Once your pull request has been opened it will be assigned to one or more
+reviewers. Those reviewers will do a thorough code review, looking for
+correctness, bugs, opportunities for improvement, documentation and comments,
+and style.
+
+Commit changes made in response to review comments to the same branch on your
+fork.
+
+Very small PRs are easy to review. Very large PRs are very difficult to review.
+
+#### Squash commits
+
+After a review, prepare your PR for merging by squashing your commits.
+
+All commits left on your branch after a review should represent meaningful milestones or units of work. Use commits to add clarity to the development and review process.
+
+Before merging a PR, squash the following kinds of commits:
+
+- Fixes/review feedback
+- Typos
+- Merges and rebases
+- Work in progress
+
+Aim to have every commit in a PR compile and pass tests independently if you can, but it's not a requirement. In particular, `merge` commits must be removed, as they will not pass tests.
+
+To squash your commits, perform an [interactive
+rebase](https://git-scm.com/book/en/v2/Git-Tools-Rewriting-History):
+
+1. Check your git branch:
+
+ ```
+ git status
+ ```
+
+Output is similar to:
+
+ ```
+ On branch your-contribution
+ Your branch is up to date with 'origin/your-contribution'.
+ ```
+
+2. Start an interactive rebase using a specific commit hash, or count backwards from your last commit using `HEAD~<n>`, where `<n>` represents the number of commits to include in the rebase.
+
+ ```
+ git rebase -i HEAD~3
+ ```
+
+Output is similar to:
+
+ ```
+ pick 2ebe926 Original commit
+ pick 31f33e9 Address feedback
+ pick b0315fe Second unit of work
+
+ # Rebase 7c34fc9..b0315ff onto 7c34fc9 (3 commands)
+ #
+ # Commands:
+ # p, pick = use commit
+ # r, reword = use commit, but edit the commit message
+ # e, edit = use commit, but stop for amending
+ # s, squash = use commit, but meld into previous commit
+ # f, fixup = like "squash", but discard this commit's log message
+
+ ...
+
+ ```
+
+3. Use a command line text editor to change the word `pick` to `squash` for the commits you want to squash, then save your changes and continue the rebase:
+
+ ```
+ pick 2ebe926 Original commit
+ squash 31f33e9 Address feedback
+ pick b0315fe Second unit of work
+
+ ...
+
+ ```
+
+Output (after saving changes) is similar to:
+
+ ```
+ [detached HEAD 61fdded] Second unit of work
+ Date: Thu Mar 5 19:01:32 2020 +0100
+ 2 files changed, 15 insertions(+), 1 deletion(-)
+
+ ...
+
+ Successfully rebased and updated refs/heads/master.
+ ```
+4. Force push your changes to your remote branch:
+
+ ```
+ git push --force
+ ```
+
+For mass automated fixups (e.g. automated doc formatting), use one or more
+commits for the changes to tooling and a final commit to apply the fixup en
+masse. This makes reviews easier.
+
+### Merging a commit
+
+Once you've received review and approval and your commits are squashed, your PR is ready for merging.
+
+Merging happens automatically after both a Reviewer and Approver have approved the PR. If you haven't squashed your commits, they may ask you to do so before approving a PR.
+
+### Reverting a commit
+
+In case you wish to revert a commit, use the following instructions.
+
+_If you have upstream write access_, please refrain from using the
+`Revert` button in the GitHub UI for creating the PR, because GitHub
+will create the PR branch inside the main repository rather than inside your fork.
+
+- Create a branch and sync it with upstream.
+
+ ```sh
+ # Depending on which repository you are working from,
+ # the default branch may be called 'main' instead of 'master'.
+
+ # create a branch
+ git checkout -b myrevert
+
+ # sync the branch with upstream
+ git fetch upstream
+ git rebase upstream/master
+ ```
+- If the commit you wish to revert is a:
+ - **merge commit:**
+
+ ```sh
+ # SHA is the hash of the merge commit you wish to revert
+ git revert -m 1 SHA
+ ```
+
+ - **single commit:**
+
+ ```sh
+ # SHA is the hash of the single commit you wish to revert
+ git revert SHA
+ ```
+
+- This will create a new commit reverting the changes. Push this new commit to your remote.
+
+```sh
+git push ${your_remote_name} myrevert
+```
+
+- [Create a Pull Request](#7-create-a-pull-request) using this branch.
diff --git a/versioned_docs/version-v1.9/contributor/lifted.md b/versioned_docs/version-v1.9/contributor/lifted.md
new file mode 100644
index 000000000..817b1c760
--- /dev/null
+++ b/versioned_docs/version-v1.9/contributor/lifted.md
@@ -0,0 +1,117 @@
+---
+title: How to manage lifted codes
+---
+
+This document explains how lifted code is managed.
+A common use case for this task is a developer lifting code from another repository into the `pkg/util/lifted` directory.
+
+- [Steps of lifting code](#steps-of-lifting-code)
+- [How to write lifted comments](#how-to-write-lifted-comments)
+- [Examples](#examples)
+
+## Steps of lifting code
+- Copy code from another repository and save it to a go file under `pkg/util/lifted`.
+- Optionally change the lifted code.
+- Add lifted comments for the code [as guided](#how-to-write-lifted-comments).
+- Run `hack/update-lifted.sh` to update the lifted doc `pkg/util/lifted/doc.go`.
+
+## How to write lifted comments
+Lifted comments shall be placed just before the lifted code (could be a func, type, var, or const). Only empty lines and comments are allowed between lifted comments and lifted code.
+
+Lifted comments are composed of one or more comment lines, each in the format `+lifted:KEY[=VALUE]`. The value is optional for some keys.
+
+Valid keys are as follows:
+
+- source:
+
+ Key `source` is required. Its value indicates where the code is lifted from.
+
+- changed:
+
+ Key `changed` is optional. It indicates whether the code is changed. Value is optional (`true` or `false`, defaults to `true`). Not adding this key or setting it to `false` means no code change.
+
+## Examples
+### Lifting function
+
+Lift function `IsQuotaHugePageResourceName` to `corehelpers.go`:
+
+```go
+// +lifted:source=https://github.com/kubernetes/kubernetes/blob/release-1.23/pkg/apis/core/helper/helpers.go#L57-L61
+
+// IsQuotaHugePageResourceName returns true if the resource name has the quota
+// related huge page resource prefix.
+func IsQuotaHugePageResourceName(name corev1.ResourceName) bool {
+ return strings.HasPrefix(string(name), corev1.ResourceHugePagesPrefix) || strings.HasPrefix(string(name), corev1.ResourceRequestsHugePagesPrefix)
+}
+```
+
+Added in `doc.go`:
+
+```markdown
+| lifted file | source file | const/var/type/func | changed |
+|--------------------------|-------------------------------------------------------------------------------------------------------------------------------|-----------------------------------------|---------|
+| corehelpers.go | https://github.com/kubernetes/kubernetes/blob/release-1.23/pkg/apis/core/helper/helpers.go#L57-L61 | func IsQuotaHugePageResourceName | N |
+```
+
+### Changed lifting function
+
+Lift and change function `GetNewReplicaSet` to `deployment.go`
+
+```go
+// +lifted:source=https://github.com/kubernetes/kubernetes/blob/release-1.22/pkg/controller/deployment/util/deployment_util.go#L536-L544
+// +lifted:changed
+
+// GetNewReplicaSet returns a replica set that matches the intent of the given deployment; get ReplicaSetList from client interface.
+// Returns nil if the new replica set doesn't exist yet.
+func GetNewReplicaSet(deployment *appsv1.Deployment, f ReplicaSetListFunc) (*appsv1.ReplicaSet, error) {
+ rsList, err := ListReplicaSetsByDeployment(deployment, f)
+ if err != nil {
+ return nil, err
+ }
+ return FindNewReplicaSet(deployment, rsList), nil
+}
+```
+
+Added in `doc.go`:
+
+```markdown
+| lifted file | source file | const/var/type/func | changed |
+|--------------------------|-------------------------------------------------------------------------------------------------------------------------------|-----------------------------------------|---------|
+| deployment.go | https://github.com/kubernetes/kubernetes/blob/release-1.22/pkg/controller/deployment/util/deployment_util.go#L536-L544 | func GetNewReplicaSet | Y |
+```
+
+### Lifting const
+
+Lift const `isNegativeErrorMsg` to `corevalidation.go`:
+
+```go
+// +lifted:source=https://github.com/kubernetes/kubernetes/blob/release-1.22/pkg/apis/core/validation/validation.go#L59
+const isNegativeErrorMsg string = apimachineryvalidation.IsNegativeErrorMsg
+```
+
+Added in `doc.go`:
+
+```markdown
+| lifted file | source file | const/var/type/func | changed |
+|--------------------------|-------------------------------------------------------------------------------------------------------------------------------|-----------------------------------------|---------|
+| corevalidation.go | https://github.com/kubernetes/kubernetes/blob/release-1.22/pkg/apis/core/validation/validation.go#L59 | const isNegativeErrorMsg | N |
+```
+
+### Lifting type
+
+Lift type `Visitor` to `visitpod.go`:
+
+```go
+// +lifted:source=https://github.com/kubernetes/kubernetes/blob/release-1.23/pkg/api/v1/pod/util.go#L82-L83
+
+// Visitor is called with each object name, and returns true if visiting should continue
+type Visitor func(name string) (shouldContinue bool)
+```
+
+Added in `doc.go`:
+
+```markdown
+| lifted file | source file | const/var/type/func | changed |
+|--------------------------|-------------------------------------------------------------------------------------------------------------------------------|-----------------------------------------|---------|
+| visitpod.go | https://github.com/kubernetes/kubernetes/blob/release-1.23/pkg/api/v1/pod/util.go#L82-L83 | type Visitor | N |
+```
diff --git a/versioned_docs/version-v1.9/core-concepts/architecture.md b/versioned_docs/version-v1.9/core-concepts/architecture.md
new file mode 100644
index 000000000..bc64a97cc
--- /dev/null
+++ b/versioned_docs/version-v1.9/core-concepts/architecture.md
@@ -0,0 +1,23 @@
+---
+title: Architecture
+---
+
+The overall architecture of Karmada is shown as below:
+
+![Architecture](../resources/general/architecture.png)
+
+The Karmada Control Plane consists of the following components:
+
+- Karmada API Server
+- Karmada Controller Manager
+- Karmada Scheduler
+
+etcd stores the Karmada API objects, the API Server is the REST endpoint all other components talk to, and the Karmada Controller Manager performs operations based on the API objects you create through the API server.
+
+The Karmada Controller Manager runs various controllers, which watch karmada objects and then talk to the underlying clusters' API servers to create regular Kubernetes resources.
+
+1. Cluster Controller: attaches Kubernetes clusters to Karmada and manages the lifecycle of the clusters by creating cluster objects.
+2. Policy Controller: watches PropagationPolicy objects. When a PropagationPolicy object is added, the controller selects a group of resources matching the resourceSelector and creates a ResourceBinding for each single resource object.
+3. Binding Controller: watches ResourceBinding objects and creates a Work object for each cluster with a single resource manifest.
+4. Execution Controller: watches Work objects. When Work objects are created, the controller distributes the resources to member clusters.
+
diff --git a/versioned_docs/version-v1.9/core-concepts/components.md b/versioned_docs/version-v1.9/core-concepts/components.md
new file mode 100644
index 000000000..08c729a82
--- /dev/null
+++ b/versioned_docs/version-v1.9/core-concepts/components.md
@@ -0,0 +1,132 @@
+---
+title: Components
+---
+
+This document provides an overview of the components required for a fully functional and operational Karmada setup.
+
+![components](../resources/general/components.png)
+
+## Control Plane Components
+
+A complete and working Karmada control plane consists of the following components.
+The `karmada-agent` may be optional, depending on the
+[Cluster Registration Mode](../userguide/clustermanager/cluster-registration).
+
+### karmada-apiserver
+
+The API server is a component of the Karmada control plane that exposes the Karmada API in addition to the Kubernetes API.
+The API server is the front end of the Karmada control plane.
+
+Karmada API server directly uses the implementation of `kube-apiserver` from Kubernetes, which is the reason why Karmada
+is naturally compatible with Kubernetes API. That makes integration with the Kubernetes ecosystem very simple for Karmada,
+such as allowing users to use `kubectl` to operate Karmada,
+[integrating with ArgoCD](../userguide/cicd/working-with-argocd),
+[integrating with Flux](../userguide/cicd/working-with-flux) and so on.
+
+### karmada-aggregated-apiserver
+
+The aggregate API server is an extended API server implemented using
+[Kubernetes API Aggregation Layer](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation/) technology.
+It offers [Cluster API](https://github.com/karmada-io/karmada/blob/master/pkg/apis/cluster/types.go)
+and related sub-resources, such as `cluster/status` and `cluster/proxy`, and implements advanced capabilities like
+[Aggregated Kubernetes API](../userguide/globalview/aggregated-api-endpoint)
+which can be used to access member clusters through karmada-apiserver.
+
+### kube-controller-manager
+
+The kube-controller-manager is composed of a set of controllers.
+Karmada simply inherits some controllers from the official Kubernetes image to keep a consistent user experience and behavior.
+
+It's worth noting that not all controllers are needed by Karmada, for the recommended controllers please refer to
+[Recommended Controllers](../administrator/configuration/configure-controllers#required-controllers).
+
+> Note: When users submit Deployment or other Kubernetes standard resources to the karmada-apiserver,
+> they are solely recorded in the etcd of the Karmada control plane. Subsequently, these resources are
+> synchronized with the member cluster. However, these Deployment resources
+> do not undergo reconciliation processes (such as pod creation) in the Karmada control plane cluster.
+
+### karmada-controller-manager
+
+The karmada-controller-manager runs various custom controller processes.
+
+The controllers watch Karmada objects and then talk to the underlying clusters' API servers to create regular Kubernetes resources.
+
+The controllers are listed at [Karmada Controllers](../administrator/configuration/configure-controllers/#karmada-controllers).
+
+### karmada-scheduler
+
+The karmada-scheduler is responsible for scheduling k8s native API resource objects (including CRD resources) to member clusters.
+
+The scheduler determines which clusters are valid placements for each resource in the scheduling queue according to constraints and available resources.
+The scheduler then ranks each valid cluster and binds the resource to the most suitable cluster.
+
+### karmada-webhook
+
+karmada-webhooks are HTTP callbacks that receive Karmada/Kubernetes API requests and do something with them.
+You can define two types of karmada-webhook, validating webhook and mutating webhook.
+
+Mutating webhooks are invoked first, and can modify objects sent to the karmada-apiserver to enforce custom defaults.
+After all object modifications are complete, and after the incoming object is validated by the karmada-apiserver,
+validating webhooks are invoked and can reject requests to enforce custom policies.
+
+> Note: Webhooks that need to guarantee they see the final state of the object in order to enforce policy should use a validating webhook,
+> since objects can be modified after being seen by mutating webhooks.
+
+### etcd
+
+Consistent and highly-available key-value store used as Karmada's backing store for all Karmada/Kubernetes API objects.
+
+If your Karmada uses etcd as its backing store, make sure you have a backup plan for the data.
+
+You can find in-depth information about etcd in the official [documentation](https://etcd.io/docs/).
+
+### karmada-agent
+
+Karmada has two [Cluster Registration Modes](../userguide/clustermanager/cluster-registration), Push and Pull;
+the karmada-agent shall be deployed on each Pull mode member cluster.
+It registers a specific cluster to the Karmada control plane and syncs manifests from the Karmada control plane to the member cluster.
+In addition, it syncs the status of the member cluster and of the manifests back to the Karmada control plane.
+
+## Addons
+
+### karmada-scheduler-estimator
+
+The karmada-scheduler-estimator runs an accurate scheduler estimator of a cluster.
+It provides the scheduler with more accurate cluster resource information.
+
+> Note: The early karmada-scheduler only supported scheduling replicas based on the total amount of cluster resources.
+> In this case, scheduling failures occurred when the total cluster resources were sufficient but the resources on each individual node were insufficient.
+> To address this issue, the estimator component was introduced; it calculates the number of replicas each node can host
+> based on resource requests, thereby determining the true number of schedulable replicas for a cluster.
+
+### karmada-descheduler
+
+This component is responsible for detecting all replicas at regular intervals (two minutes by default),
+and triggering reschedule based on instance state changes in member clusters.
+
+The descheduler only takes effect when the scheduling strategy is dynamic division,
+and it learns how many instances' states have changed by calling the scheduler-estimator.
+
+### karmada-search
+
+The karmada-search starts an aggregated server. It provides capabilities such as global search and resource proxy in a multi-cloud environment.
+
+The [global search](../tutorials/karmada-search/) capability caches resource objects and events across multiple clusters
+and provides graphical retrieval services externally through search APIs.
+
+The [resource proxy](../userguide/globalview/proxy-global-resource/) capability allows users to access
+all the resources both on the Karmada control plane and in member clusters.
+
+## CLI tools
+
+### karmadactl
+
+Karmada provides a command line tool, `karmadactl`, for communicating with Karmada's control plane, using the Karmada API.
+
+You can use `karmadactl` to join or unjoin a member cluster, mark a member cluster as unschedulable or schedulable again, and so on.
+For more information, including a complete list of `karmadactl` operations, see
+[karmadactl](../reference/karmadactl/karmadactl-commands/karmadactl).
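+
+For illustration (cluster names and kubeconfig paths below are placeholders, not defaults):
+
+```shell
+# Join a member cluster in Push mode.
+karmadactl join member1 \
+  --kubeconfig /etc/karmada/karmada-apiserver.config \
+  --cluster-kubeconfig ~/.kube/member1.config
+
+# Mark the cluster as unschedulable, then schedulable again.
+karmadactl cordon member1 --kubeconfig /etc/karmada/karmada-apiserver.config
+karmadactl uncordon member1 --kubeconfig /etc/karmada/karmada-apiserver.config
+```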
+
+### kubectl karmada
+
+kubectl karmada provides its capabilities in the form of a kubectl plugin, and its implementation is exactly the same as `karmadactl`.
diff --git a/versioned_docs/version-v1.9/core-concepts/concepts.md b/versioned_docs/version-v1.9/core-concepts/concepts.md
new file mode 100644
index 000000000..90fd2604e
--- /dev/null
+++ b/versioned_docs/version-v1.9/core-concepts/concepts.md
@@ -0,0 +1,29 @@
+---
+title: Concepts
+---
+
+This page introduces some core concepts about Karmada.
+
+## Resource Template
+
+Karmada uses the Kubernetes Native API definition for the federated resource template, to make it easy to integrate with existing tools that have already been adopted by Kubernetes.
+
+## Propagation Policy
+
+Karmada offers a standalone Propagation (placement) Policy API to define multi-cluster scheduling and spreading requirements.
+
+- It supports a 1:n mapping of policy to workloads, so users don't need to specify scheduling constraints every time they create federated applications.
+
+- With default policies, users can directly interact with the Kubernetes API.
+
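+For example, a minimal PropagationPolicy (a sketch only; the workload and cluster names are illustrative) that propagates an nginx Deployment to two member clusters looks like this:
+
+```shell
+# Applied against the karmada-apiserver kubeconfig.
+kubectl apply --kubeconfig /etc/karmada/karmada-apiserver.config -f - <<EOF
+apiVersion: policy.karmada.io/v1alpha1
+kind: PropagationPolicy
+metadata:
+  name: nginx-propagation
+spec:
+  resourceSelectors:
+    - apiVersion: apps/v1
+      kind: Deployment
+      name: nginx
+  placement:
+    clusterAffinity:
+      clusterNames:
+        - member1
+        - member2
+EOF
+```
+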
+## Override Policy
+
+Karmada provides a standalone Override Policy API for specializing the automation of cluster-related configuration. For example:
+
+- Override the image prefix based on the member cluster region.
+
+- Override StorageClass depending on your cloud provider.
+
+The following diagram shows how Karmada resources are propagated to member clusters.
+
+![karmada-resource-relation](../resources/general/karmada-resource-relation.png)
diff --git a/versioned_docs/version-v1.9/core-concepts/introduction.md b/versioned_docs/version-v1.9/core-concepts/introduction.md
new file mode 100644
index 000000000..d11497cfb
--- /dev/null
+++ b/versioned_docs/version-v1.9/core-concepts/introduction.md
@@ -0,0 +1,54 @@
+---
+title: What is Karmada?
+slug: /
+
+---
+
+## Karmada: Open, Multi-Cloud, Multi-Cluster Kubernetes Orchestration
+
+Karmada (Kubernetes Armada) is a Kubernetes management system that enables you to run your cloud-native applications across multiple Kubernetes clusters and clouds, with no changes to your applications. By speaking Kubernetes-native APIs and providing advanced scheduling capabilities, Karmada enables truly open, multi-cloud Kubernetes.
+
+Karmada aims to provide turnkey automation for multi-cluster application management in multi-cloud and hybrid cloud scenarios,
+with key features such as centralized multi-cloud management, high availability, failure recovery, and traffic scheduling.
+
+Karmada is an incubation project of the [Cloud Native Computing Foundation](https://cncf.io/) (CNCF).
+
+## Why Karmada:
+- __K8s Native API Compatible__
+ - Zero change upgrade, from single-cluster to multi-cluster
+ - Seamless integration of existing K8s tool chain
+
+- __Out of the Box__
+ - Built-in policy sets for scenarios, including: Active-active, Remote DR, Geo Redundant, etc.
+ - Cross-cluster applications auto-scaling, failover and load-balancing on multi-cluster.
+
+- __Avoid Vendor Lock-in__
+ - Integration with mainstream cloud providers
+ - Automatic allocation, migration across clusters
+ - Not tied to proprietary vendor orchestration
+
+- __Centralized Management__
+ - Location agnostic cluster management
+ - Support clusters in Public cloud, on-prem or edge
+
+- __Fruitful Multi-Cluster Scheduling Policies__
+ - Cluster Affinity, Multi Cluster Splitting/Rebalancing,
+ - Multi-Dimension HA: Region/AZ/Cluster/Provider
+
+- __Open and Neutral__
+  - Jointly initiated by Internet, finance, manufacturing, telecom, and cloud providers, etc.
+ - Target for open governance with CNCF
+
+
+
+**Notice: this project is developed in continuation of Kubernetes [Federation v1](https://github.com/kubernetes-retired/federation) and [v2](https://github.com/kubernetes-sigs/kubefed). Some basic concepts are inherited from these two versions.**
+
+
+## What's Next
+
+Here are some recommended next steps:
+
+- Learn Karmada's [core concepts](./concepts.md).
+- Learn Karmada's [architecture](./architecture.md).
+- Start to [install Karmada](../installation/installation.md).
+- Get started with [interactive tutorials](https://killercoda.com/karmada/).
diff --git a/versioned_docs/version-v1.9/developers/customize-karmada-scheduler.md b/versioned_docs/version-v1.9/developers/customize-karmada-scheduler.md
new file mode 100644
index 000000000..204a3a69f
--- /dev/null
+++ b/versioned_docs/version-v1.9/developers/customize-karmada-scheduler.md
@@ -0,0 +1,356 @@
+---
+title: Customize the scheduler
+---
+
+Karmada ships with a default scheduler that is described [here](../reference/components/karmada-scheduler.md). If the default scheduler does not suit your needs, you can implement your own scheduler.
+Karmada's `Scheduler Framework` is similar to that of Kubernetes, but unlike K8s, Karmada needs to deploy applications to a group of clusters instead of nodes. According to the placement field of the user's scheduling policy and the internal scheduling plug-in algorithms, the user's application will be deployed to the desired group of clusters.
+
+The scheduling process can be divided into the following four steps:
+* Predicate: filter inappropriate clusters
+* Priority: score the cluster
+* SelectClusters: select cluster groups based on cluster scores and `SpreadConstraint`
+* ReplicaScheduling: deploy the application replicas on the selected cluster group according to the configured replica scheduling policy
+
+ ![schedule process](../resources/developers/schedule-process.png)
+
+Among them, the plug-ins for filtering and scoring can be customized and configured based on the scheduler framework.
+
+The default scheduler has several in-tree plugins:
+* APIEnablement: a plugin that checks if the API(CRD) of the resource is installed in the target cluster.
+* TaintToleration: a plugin that checks if a propagation policy tolerates a cluster's taints.
+* ClusterAffinity: a plugin that checks if a resource selector matches the cluster label.
+* SpreadConstraint: a plugin that checks whether a cluster satisfies the spread constraints based on the spread-related fields in `Cluster.Spec`.
+* ClusterLocality: a score plugin that favors clusters that already have the resource.
+
+You can customize your out-of-tree plugins according to your own scenario, and implement your scheduler through Karmada's `Scheduler Framework`.
+This document will give a detailed description of how to customize a Karmada scheduler.
+
+## Before you begin
+
+You need to have a Karmada control plane. To start up Karmada, you can refer to [here](../installation/installation.md).
+If you just want to try Karmada, we recommend building a development environment by ```hack/local-up-karmada.sh```.
+
+```sh
+git clone https://github.com/karmada-io/karmada
+cd karmada
+hack/local-up-karmada.sh
+```
+
+## Deploy a plugin
+
+Assume you want to deploy a new filter plugin named `TestFilter`. You can refer to the karmada-scheduler implementation in [pkg/scheduler/framework/plugins](https://github.com/karmada-io/karmada/tree/master/pkg/scheduler/framework/plugins) in the Karmada source directory.
+The code directory after development is similar to:
+
+```
+.
+├── apienablement
+├── clusteraffinity
+├── clusterlocality
+├── spreadconstraint
+├── tainttoleration
+├── testfilter
+│ ├── test_filter.go
+```
+
+The content of the test_filter.go file is as follows, and the specific filtering logic implementation is hidden.
+
+```go
+package testfilter
+
+import (
+ "context"
+
+ clusterv1alpha1 "github.com/karmada-io/karmada/pkg/apis/cluster/v1alpha1"
+ policyv1alpha1 "github.com/karmada-io/karmada/pkg/apis/policy/v1alpha1"
+ workv1alpha2 "github.com/karmada-io/karmada/pkg/apis/work/v1alpha2"
+ "github.com/karmada-io/karmada/pkg/scheduler/framework"
+)
+
+const (
+ // Name is the name of the plugin used in the plugin registry and configurations.
+ Name = "TestFilter"
+)
+
+type TestFilter struct{}
+
+var _ framework.FilterPlugin = &TestFilter{}
+
+// New instantiates the TestFilter plugin.
+func New() (framework.Plugin, error) {
+ return &TestFilter{}, nil
+}
+
+// Name returns the plugin name.
+func (p *TestFilter) Name() string {
+ return Name
+}
+
+// Filter implements the filtering logic of the TestFilter plugin.
+func (p *TestFilter) Filter(ctx context.Context,
+ bindingSpec *workv1alpha2.ResourceBindingSpec, bindingStatus *workv1alpha2.ResourceBindingStatus, cluster *clusterv1alpha1.Cluster) *framework.Result {
+
+ // implementation
+
+ return framework.NewResult(framework.Success)
+}
+```
+
+For a filter plugin, you must implement the `framework.FilterPlugin` interface, and for a score plugin, you must implement the `framework.ScorePlugin` interface.
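+
+If you also want to influence scoring, a score plugin can be sketched in the same way. The snippet below is only a sketch: the `TestScore` plugin is hypothetical, and the `Score`/`ScoreExtensions` method set shown here mirrors the in-tree `clusterlocality` plugin, so check the `framework` package of your Karmada version for the exact `ScorePlugin` definition before relying on it.
+
+```go
+package testscore
+
+import (
+	"context"
+
+	clusterv1alpha1 "github.com/karmada-io/karmada/pkg/apis/cluster/v1alpha1"
+	workv1alpha2 "github.com/karmada-io/karmada/pkg/apis/work/v1alpha2"
+	"github.com/karmada-io/karmada/pkg/scheduler/framework"
+)
+
+// Name is the name of the hypothetical score plugin.
+const Name = "TestScore"
+
+// TestScore is a sketch of an out-of-tree score plugin.
+type TestScore struct{}
+
+var _ framework.ScorePlugin = &TestScore{}
+
+// New instantiates the TestScore plugin.
+func New() (framework.Plugin, error) {
+	return &TestScore{}, nil
+}
+
+// Name returns the plugin name.
+func (p *TestScore) Name() string {
+	return Name
+}
+
+// Score returns a score for the given cluster; higher scores are preferred.
+func (p *TestScore) Score(ctx context.Context,
+	spec *workv1alpha2.ResourceBindingSpec, cluster *clusterv1alpha1.Cluster) (int64, *framework.Result) {
+
+	// scoring logic goes here
+
+	return 0, framework.NewResult(framework.Success)
+}
+
+// ScoreExtensions returns score extensions (e.g. normalization); nil if not needed.
+func (p *TestScore) ScoreExtensions() framework.ScoreExtensions {
+	return nil
+}
+```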
+
+## Register the plugin
+
+Edit the [cmd/scheduler/main.go](https://github.com/karmada-io/karmada/blob/master/cmd/scheduler/main.go):
+
+```go
+package main
+
+import (
+ "os"
+
+ "k8s.io/component-base/cli"
+ _ "k8s.io/component-base/logs/json/register" // for JSON log format registration
+ controllerruntime "sigs.k8s.io/controller-runtime"
+ _ "sigs.k8s.io/controller-runtime/pkg/metrics"
+
+ "github.com/karmada-io/karmada/cmd/scheduler/app"
+ "github.com/karmada-io/karmada/pkg/scheduler/framework/plugins/testfilter"
+)
+
+func main() {
+ stopChan := controllerruntime.SetupSignalHandler().Done()
+ command := app.NewSchedulerCommand(stopChan, app.WithPlugin(testfilter.Name, testfilter.New))
+ code := cli.Run(command)
+ os.Exit(code)
+}
+
+```
+
+To register the plugin, you need to pass in the plugin configuration in the `NewSchedulerCommand` function.
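+
+If you have several out-of-tree plugins, each one is registered with its own `app.WithPlugin` option, assuming `NewSchedulerCommand` accepts multiple registry options (the `testscore` package here is hypothetical):
+
+```go
+command := app.NewSchedulerCommand(stopChan,
+	app.WithPlugin(testfilter.Name, testfilter.New),
+	app.WithPlugin(testscore.Name, testscore.New),
+)
+```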
+
+## Package the scheduler
+
+After you register the plugin, you need to package your scheduler binary into a container image.
+
+```shell
+cd karmada
+export VERSION=## Your Image Tag
+make image-karmada-scheduler
+```
+
+```shell
+kubectl --kubeconfig ~/.kube/karmada.config --context karmada-host edit deploy/karmada-scheduler -nkarmada-system
+...
+ spec:
+ automountServiceAccountToken: false
+ containers:
+ - command:
+ - /bin/karmada-scheduler
+ - --kubeconfig=/etc/kubeconfig
+ - --bind-address=0.0.0.0
+ - --secure-port=10351
+ - --enable-scheduler-estimator=true
+ - --v=4
+ image: ## Your Image Address
+...
+```
+
+When you start the scheduler, you can find that `TestFilter` plugin has been enabled from the logs:
+
+```
+I0105 09:50:11.809137 1 scheduler.go:109] karmada-scheduler version: version.Info{GitVersion:"v1.4.0-141-g119cb8e1", GitCommit:"119cb8e1e8be0142ca3d32c619c25e5ec4b0a1b6", GitTreeState:"dirty", BuildDate:"2023-01-05T09:42:41Z", GoVersion:"go1.19.3", Compiler:"gc", Platform:"linux/amd64"}
+I0105 09:50:11.813339 1 registry.go:63] Enable Scheduler plugin "SpreadConstraint"
+I0105 09:50:11.813470 1 registry.go:63] Enable Scheduler plugin "ClusterLocality"
+I0105 09:50:11.813483 1 registry.go:63] Enable Scheduler plugin "TestFilter"
+I0105 09:50:11.813489 1 registry.go:63] Enable Scheduler plugin "APIEnablement"
+I0105 09:50:11.813545 1 registry.go:63] Enable Scheduler plugin "TaintToleration"
+I0105 09:50:11.813596 1 registry.go:63] Enable Scheduler plugin "ClusterAffinity"
+```
+
+## Config the plugin
+
+You can config the plugin enablement by setting the flag `--plugins`.
+For example, the following config will disable `TestFilter` plugin.
+
+```shell
+kubectl --kubeconfig ~/.kube/karmada.config --context karmada-host edit deploy/karmada-scheduler -nkarmada-system
+...
+ spec:
+ automountServiceAccountToken: false
+ containers:
+ - command:
+ - /bin/karmada-scheduler
+ - --kubeconfig=/etc/kubeconfig
+ - --bind-address=0.0.0.0
+ - --secure-port=10351
+ - --enable-scheduler-estimator=true
+ - --plugins=*,-TestFilter
+ - --v=4
+ image: ## Your Image Address
+...
+```
+
+## Configure Multiple Schedulers
+
+### Run the second scheduler in the cluster
+
+You can run multiple schedulers simultaneously alongside the default scheduler and instruct Karmada what scheduler to use for each of your workloads.
+Here is a sample of the deployment config. You can save it as `my-scheduler.yaml`:
+
+```yaml
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+ name: my-karmada-scheduler
+ namespace: karmada-system
+ labels:
+ app: my-karmada-scheduler
+spec:
+ replicas: 1
+ selector:
+ matchLabels:
+ app: my-karmada-scheduler
+ template:
+ metadata:
+ labels:
+ app: my-karmada-scheduler
+ spec:
+ automountServiceAccountToken: false
+ tolerations:
+ - key: node-role.kubernetes.io/master
+ operator: Exists
+ containers:
+ - name: karmada-scheduler
+ image: docker.io/karmada/karmada-scheduler:latest
+ imagePullPolicy: IfNotPresent
+ livenessProbe:
+ httpGet:
+ path: /healthz
+ port: 10351
+ scheme: HTTP
+ failureThreshold: 3
+ initialDelaySeconds: 15
+ periodSeconds: 15
+ timeoutSeconds: 5
+ command:
+ - /bin/karmada-scheduler
+ - --kubeconfig=/etc/kubeconfig
+ - --bind-address=0.0.0.0
+ - --secure-port=10351
+ - --enable-scheduler-estimator=true
+ - --leader-elect-resource-name=my-scheduler # Your custom scheduler name
+ - --scheduler-name=my-scheduler # Your custom scheduler name
+ - --v=4
+ volumeMounts:
+ - name: kubeconfig
+ subPath: kubeconfig
+ mountPath: /etc/kubeconfig
+ volumes:
+ - name: kubeconfig
+ secret:
+ secretName: kubeconfig
+```
+
+> Note: For the `--leader-elect-resource-name` option, it will be `karmada-scheduler` by default. If you deploy another scheduler along with the default scheduler,
+> this option should be specified and it's recommended to use the scheduler name as the value.
+
+In order to run your scheduler in Karmada, create the deployment specified in the config above:
+
+```shell
+kubectl --context karmada-host create -f my-scheduler.yaml
+```
+
+Verify that the scheduler pod is running:
+
+```
+kubectl --context karmada-host get pods --namespace=karmada-system
+```
+
+```
+NAME READY STATUS RESTARTS AGE
+....
+my-karmada-scheduler-lnf4s-4744f 1/1 Running 0 2m
+...
+```
+
+You should see a "Running" my-karmada-scheduler pod, in addition to the default karmada-scheduler pod in this list.
+
+### Specify schedulers for deployments
+
+Now that your second scheduler is running, create some deployments, and direct them to be scheduled by either the default scheduler or the one you deployed.
+In order to schedule a given deployment using a specific scheduler, specify the name of the scheduler in the PropagationPolicy spec.
+Let's look at three examples.
+
+* PropagationPolicy spec without any scheduler name
+
+```yaml
+apiVersion: policy.karmada.io/v1alpha1
+kind: PropagationPolicy
+metadata:
+ name: nginx-propagation
+spec:
+ resourceSelectors:
+ - apiVersion: apps/v1
+ kind: Deployment
+ name: nginx
+ placement:
+ clusterAffinity:
+ clusterNames:
+ - member1
+ - member2
+```
+
+When no scheduler name is supplied, the deployment is automatically scheduled using the default-scheduler.
+
+* PropagationPolicy spec with `default-scheduler`
+
+```yaml
+apiVersion: policy.karmada.io/v1alpha1
+kind: PropagationPolicy
+metadata:
+ name: nginx-propagation
+spec:
+ schedulerName: default-scheduler
+ resourceSelectors:
+ - apiVersion: apps/v1
+ kind: Deployment
+ name: nginx
+ placement:
+ clusterAffinity:
+ clusterNames:
+ - member1
+ - member2
+```
+
+A scheduler is specified by supplying the scheduler name as a value to `spec.schedulerName`.
+In this case, we supply the name of the default scheduler which is `default-scheduler`.
+
+* PropagationPolicy spec with `my-scheduler`
+
+```yaml
+apiVersion: policy.karmada.io/v1alpha1
+kind: PropagationPolicy
+metadata:
+ name: nginx-propagation
+spec:
+ schedulerName: my-scheduler
+ resourceSelectors:
+ - apiVersion: apps/v1
+ kind: Deployment
+ name: nginx
+ placement:
+ clusterAffinity:
+ clusterNames:
+ - member1
+ - member2
+```
+
+In this case, we specify that this deployment should be scheduled using the scheduler that we deployed - `my-scheduler`.
+Note that the value of `spec.schedulerName` should match the name supplied to the second scheduler via its `--scheduler-name` option.
+
+### Verifying that the deployments were scheduled using the desired schedulers
+
+In order to make it easier to work through these examples, you can look at the "Scheduled" entries in the event logs to verify that the deployments were scheduled by the desired schedulers.
+
+```shell
+kubectl --context karmada-apiserver describe deploy/nginx
+```
\ No newline at end of file
diff --git a/versioned_docs/version-v1.9/developers/document-releasing.md b/versioned_docs/version-v1.9/developers/document-releasing.md
new file mode 100644
index 000000000..63f1a8272
--- /dev/null
+++ b/versioned_docs/version-v1.9/developers/document-releasing.md
@@ -0,0 +1,96 @@
+---
+title: Documentation Releasing
+---
+
+Every minor release will have a corresponding documentation release. This guide is an introduction to the whole release procedure.
+
+## Keep Multilingual Documents in Sync(manually)
+
+Sometimes contributors do not update the content of the documentation in all languages. Before releasing, ensure the multilingual documents are in sync.
+This will be tracked by an issue. The issue should follow the format:
+
+```
+This issue is to track documents which need to be synced to zh for release 1.x:
+* #268
+```
+
+## Update Reference Documents(manually)
+
+Before releasing, we need to update reference docs in the website, which includes CLI references and component references. The whole process is done by scripts automatically.
+Follow these steps to update reference docs.
+
+1. Clone `karmada-io/karmada` and `karmada-io/website` to the local environment. It's recommended to set up these two projects in the same folder.
+
+```text
+$ git clone https://github.com/karmada-io/karmada.git
+$ git clone https://github.com/karmada-io/website.git
+
+
+$ tree -L 1
+#.
+#├── karmada
+#├── website
+```
+
+2. Run generate command in karmada root dir.
+
+```shell
+cd karmada/
+go run ./hack/tools/genkarmadactldocs/gen_karmadactl_docs.go ../website/docs/reference/karmadactl/karmadactl-commands/
+go run ./hack/tools/genkarmadactldocs/gen_karmadactl_docs.go ../website/i18n/zh/docusaurus-plugin-content-docs/current/reference/karmadactl/karmadactl-commands/
+```
+
+3. Generate reference docs of each component one by one. Here we take `karmada-apiserver` as an example.
+
+```shell
+cd karmada/
+go build ./hack/tools/gencomponentdocs/.
+./gencomponentdocs ../website/docs/reference/components/ karmada-apiserver
+./gencomponentdocs ../website/i18n/zh/docusaurus-plugin-content-docs/current/reference/components/ karmada-apiserver
+```
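+
+To avoid repeating this for every component, you can loop over the component names; the list below is only an example and may need to be adjusted for the current release:
+
+```shell
+# assumes ./gencomponentdocs has already been built as shown above
+for component in karmada-apiserver karmada-aggregated-apiserver karmada-controller-manager karmada-scheduler karmada-descheduler karmada-webhook karmada-search karmada-agent karmada-scheduler-estimator; do
+  ./gencomponentdocs ../website/docs/reference/components/ ${component}
+  ./gencomponentdocs ../website/i18n/zh/docusaurus-plugin-content-docs/current/reference/components/ ${component}
+done
+```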
+
+## Setup release-1.x(manually)
+
+1. Update versions.json
+
+```shell
+cd website/
+vim versions.json
+
+[
+  "v1.5",  # add the new version tag
+  "v1.4",
+  "v1.3"
+]
+```
+
+2. Update versioned_docs
+
+```shell
+mkdir versioned_docs/version-v1.5
+cp docs/* versioned_docs/version-v1.5 -r
+```
+
+3. Update versioned_sidebars
+
+```shell
+cp versioned_sidebars/version-v1.4-sidebars.json versioned_sidebars/version-v1.5-sidebars.json
+sed -i'' -e "s/version-v1.4/version-v1.5/g" versioned_sidebars/version-v1.5-sidebars.json
+# update version-v1.5-sidebars.json based on sidebars.js
+```
+
+4. Update versioned_docs for zh
+
+```shell
+mkdir i18n/zh/docusaurus-plugin-content-docs/version-v1.5
+cp i18n/zh/docusaurus-plugin-content-docs/current/* i18n/zh/docusaurus-plugin-content-docs/version-v1.5 -r
+```
+
+5. Update versioned_sidebars for zh
+
+```shell
+cp i18n/zh/docusaurus-plugin-content-docs/current.json i18n/zh/docusaurus-plugin-content-docs/version-v1.5.json
+sed -i'' -e "s/Next/v1.5/g" i18n/zh/docusaurus-plugin-content-docs/version-v1.5.json
+```
+
+## Check the differences on the website and send a pull request(manually)
diff --git a/versioned_docs/version-v1.9/developers/performance-test-setup-for-karmada.md b/versioned_docs/version-v1.9/developers/performance-test-setup-for-karmada.md
new file mode 100644
index 000000000..d24245a48
--- /dev/null
+++ b/versioned_docs/version-v1.9/developers/performance-test-setup-for-karmada.md
@@ -0,0 +1,335 @@
+---
+title: Performance Test Setup for Karmada
+---
+
+## Abstract
+
+As Karmada is adopted by more and more enterprises and organizations, the scalability and scale of Karmada are gradually becoming new concerns for the community. In this article, we will introduce how to conduct large-scale testing for Karmada and how to monitor metrics from the Karmada control plane.
+
+## Build large scale environment
+
+### Create member clusters using kind
+
+#### Why kind
+
+[Kind](https://sigs.k8s.io/kind) is a tool for running local Kubernetes clusters using Docker containers. Kind was primarily designed for testing Kubernetes itself, so it plays a good role in simulating member clusters.
+
+#### Usage
+
+> Follow the [kind installation](https://kind.sigs.k8s.io/docs/user/quick-start#installation) guide.
+
+Create 10 member clusters:
+
+```shell
+for ((i=1; i<=10; i ++)); do
+ kind create cluster --name member$i
+done;
+```
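+
+You can confirm that all member clusters were created:
+
+```shell
+kind get clusters
+```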
+
+
+
+### Simulate a large number of fake nodes using fake-kubelet
+
+#### Why fake-kubelet
+
+##### Compare to Kubemark
+
+**Kubemark** is built directly from the kubelet code, with the runtime part replaced: apart from not actually starting containers, its behavior is exactly the same as the kubelet's. It is mainly used for Kubernetes' own e2e tests, and simulating a large number of nodes and pods with it **occupies roughly the same memory as a real environment**.
+
+**Fake-kubelet** is a tool used to simulate any number of nodes and maintain pods on those nodes. It only does the minimum work of maintaining nodes and pods, so that it is very suitable for simulating a large number of nodes and pods for pressure testing on the control plane.
+
+#### Usage
+
+Deploy the fake-kubelet:
+
+> Note: Set the container env `GENERATE_REPLICAS` in the fake-kubelet deployment to the number of fake node replicas you want to create.
+
+```shell
+export GENERATE_REPLICAS=your_replicas
+curl https://raw.githubusercontent.com/wzshiming/fake-kubelet/master/deploy.yaml > fakekubelet.yml
+# GENERATE_REPLICAS default value is 5
+sed -i "s/5/$GENERATE_REPLICAS/g" fakekubelet.yml
+kubectl apply -f fakekubelet.yml
+```
+
+
+Run `kubectl get node` and you will find the fake nodes.
+
+```shell
+> kubectl get node -o wide
+NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
+fake-0 Ready agent 10s fake 10.88.0.136
+fake-1 Ready agent 10s fake 10.88.0.136
+fake-2 Ready agent 10s fake 10.88.0.136
+fake-3 Ready agent 10s fake 10.88.0.136
+fake-4 Ready agent 10s fake 10.88.0.136
+```
+
+Deploy a sample deployment to test:
+
+```shell
+> kubectl apply -f - <<EOF
+# ... Deployment manifest that schedules pods onto the fake nodes (omitted here) ...
+EOF
+
+> kubectl get pod -o wide
+NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
+fake-pod-78884479b7-52qcx 1/1 Running 0 6s 10.0.0.23 fake-4
+fake-pod-78884479b7-bd6nk 1/1 Running 0 6s 10.0.0.13 fake-2
+fake-pod-78884479b7-dqjtn 1/1 Running 0 6s 10.0.0.15 fake-2
+fake-pod-78884479b7-h2fv6 1/1 Running 0 6s 10.0.0.31 fake-0
+```
+
+
+
+## Distribute resources using ClusterLoader2
+
+### ClusterLoader2
+
+[ClusterLoader2](https://github.com/kubernetes/perf-tests/tree/master/clusterloader2) is an open source Kubernetes cluster testing tool. It tests against Kubernetes-defined SLIs/SLOs metrics to verify that clusters meet various quality of service standards. ClusterLoader2 is a single-cluster-oriented tool, so it is complicated to test the Karmada control plane and distribute resources to member clusters with it at the same time. Therefore, we just use ClusterLoader2 to distribute resources to clusters managed by Karmada.
+
+### Prepare a simple config
+
+Let's prepare our config (config.yaml) to distribute resources. This config will:
+
+- Create 10 namespaces
+
+- Create 20 deployments, each with 1000 pods, in each of those namespaces
+
+
+We will create a file `config.yaml` that describes this test. First, we start by defining the test name:
+
+```yaml
+name: test
+```
+
+ClusterLoader2 will create namespaces automatically, but we need to specify how many namespaces we want and whether to delete the namespaces after distributing resources:
+
+```yaml
+namespace:
+ number: 10
+ deleteAutomanagedNamespaces: false
+```
+
+Next, we need to specify TuningSets. A TuningSet describes how actions are executed: a qps of q means an interval of 1/q seconds between actions. In order to distribute resources slowly and relieve the pressure on the apiserver, the qps of Uniformtinyqps is set to 0.1, which means that after distributing a deployment, we wait 10s before distributing the next one.
+
+```yaml
+tuningSets:
+- name: Uniformtinyqps
+ qpsLoad:
+ qps: 0.1
+- name: Uniform1qps
+ qpsLoad:
+ qps: 1
+```
+
+Finally, we will create a phase that creates the deployments and propagation policies. We need to specify in which namespaces we want them to be created and how many deployments per namespace. We also need to specify templates for our deployment and propagation policy, which we will do later. For now, let's assume that these templates allow us to specify the number of replicas in the deployment and propagation policy.
+
+```yaml
+steps:
+- name: Create deployment
+ phases:
+ - namespaceRange:
+ min: 1
+ max: 10
+ replicasPerNamespace: 20
+ tuningSet: Uniformtinyqps
+ objectBundle:
+ - basename: test-deployment
+ objectTemplatePath: "deployment.yaml"
+ templateFillMap:
+ Replicas: 1000
+ - namespaceRange:
+ min: 1
+ max: 10
+ replicasPerNamespace: 1
+ tuningSet: Uniform1qps
+ objectBundle:
+ - basename: test-policy
+ objectTemplatePath: "policy.yaml"
+ templateFillMap:
+ Replicas: 1
+
+```
+
+The whole `config.yaml` will look like this:
+
+```yaml
+name: test
+
+namespace:
+ number: 10
+ deleteAutomanagedNamespaces: false
+
+tuningSets:
+- name: Uniformtinyqps
+ qpsLoad:
+ qps: 0.1
+- name: Uniform1qps
+ qpsLoad:
+ qps: 1
+
+steps:
+- name: Create deployment
+ phases:
+ - namespaceRange:
+ min: 1
+ max: 10
+ replicasPerNamespace: 20
+ tuningSet: Uniformtinyqps
+ objectBundle:
+ - basename: test-deployment
+ objectTemplatePath: "deployment.yaml"
+ templateFillMap:
+ Replicas: 1000
+ - namespaceRange:
+ min: 1
+ max: 10
+ replicasPerNamespace: 1
+ tuningSet: Uniform1qps
+ objectBundle:
+ - basename: test-policy
+ objectTemplatePath: "policy.yaml"
+```
+
+
+Now, we need to specify the deployment and propagation policy templates. ClusterLoader2 by default adds a `Name` parameter that you can use in your template. In our config, we also passed the `Replicas` parameter. So our templates for the deployment and propagation policy will look like the following:
+
+```yaml
+# deployment.yaml
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+ name: {{.Name}}
+ labels:
+ group: test-deployment
+spec:
+ replicas: {{.Replicas}}
+ selector:
+ matchLabels:
+ app: fake-pod
+ template:
+ metadata:
+ labels:
+ app: fake-pod
+ spec:
+ affinity:
+ nodeAffinity:
+ requiredDuringSchedulingIgnoredDuringExecution:
+ nodeSelectorTerms:
+ - matchExpressions:
+ - key: type
+ operator: In
+ values:
+ - fake-kubelet
+      tolerations: # A taint is added to the automatically created Nodes. You can remove the taint from the Nodes or add this toleration.
+ - key: "fake-kubelet/provider"
+ operator: "Exists"
+ effect: "NoSchedule"
+ containers:
+ - image: fake-pod
+ name: {{.Name}}
+```
+
+```yaml
+# policy.yaml
+apiVersion: policy.karmada.io/v1alpha1
+kind: PropagationPolicy
+metadata:
+ name: test
+spec:
+ resourceSelectors:
+ - apiVersion: apps/v1
+ kind: Deployment
+ placement:
+ replicaScheduling:
+ replicaDivisionPreference: Weighted
+ replicaSchedulingType: Divided
+```
+
+
+
+### Start Distributing
+
+To distribute resources, run:
+
+```shell
+export KARMADA_APISERVERCONFIG=your_config
+export KARMADA_APISERVERIP=your_ip
+cd clusterloader2/
+go run cmd/clusterloader.go --testconfig=config.yaml --provider=local --kubeconfig=$KARMADA_APISERVERCONFIG --v=2 --k8s-clients-number=1 --skip-cluster-verification=true --masterip=$KARMADA_APISERVERIP --enable-exec-service=false
+```
+
+The meanings of the args above are as follows:
+
+- k8s-clients-number: the number of karmada-apiserver clients.
+- skip-cluster-verification: whether to skip the cluster verification, which expects at least one schedulable node in the cluster.
+- enable-exec-service: whether to enable exec service that allows executing arbitrary commands from a pod running in the cluster.
+
+Since member cluster resources cannot be accessed from the Karmada control plane, we have to set `--enable-exec-service=false` and `--skip-cluster-verification=true`.
+
+> Note: If the `deleteAutomanagedNamespaces` parameter in config file is set to true, when the whole distribution of resources is complete, the resources will be immediately deleted.
+
+## Monitor Karmada control plane using Prometheus and Grafana
+
+### Deploy Prometheus and Grafana
+
+> Follow the [Prometheus and Grafana Deploy Guide](https://karmada.io/docs/administrator/monitoring/working-with-prometheus-in-control-plane)
+
+### Create Grafana DashBoards to observe Karmada control plane metrics
+
+Here's an example of monitoring the mutating API call latency for works and resourcebindings of the karmada-apiserver through Grafana. You can monitor the metrics you want by modifying the query statement.
+
+#### Create a dashboard
+
+> Follow the [Grafana support For Prometheus](https://prometheus.io/docs/visualization/grafana/) document.
+
+#### Modify Query Statement
+
+Enter the following Prometheus expression into the `Query` field.
+
+````sql
+histogram_quantile(0.99, sum(rate(apiserver_request_duration_seconds_bucket{verb!~"WATCH|GET|LIST", resource=~"works|resourcebindings"}[5m])) by (resource, verb, le))
+````
+
+The graph will be displayed as follows:
+
+![grafana-dashboard](../resources/developers/grafana_metrics.png)
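+
+If you also want to watch request throughput, you can add another panel with a query along the following lines (this assumes the standard `apiserver_request_total` metric exposed by the karmada-apiserver):
+
+```
+sum(rate(apiserver_request_total{resource=~"works|resourcebindings"}[5m])) by (resource, verb)
+```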
+
+
+
diff --git a/versioned_docs/version-v1.9/developers/profiling-karmada.md b/versioned_docs/version-v1.9/developers/profiling-karmada.md
new file mode 100644
index 000000000..4353c27cb
--- /dev/null
+++ b/versioned_docs/version-v1.9/developers/profiling-karmada.md
@@ -0,0 +1,63 @@
+---
+title: Profiling Karmada
+---
+
+## Enable profiling
+
+To profile Karmada components running inside a Kubernetes pod, set the `--enable-pprof` flag to true in the YAML of the Karmada components.
+The default profiling address is 127.0.0.1:6060, and it can be configured via `--profiling-bind-address`.
+The components built from the Karmada source code support the flags above, including `karmada-agent`, `karmada-aggregated-apiserver`, `karmada-controller-manager`, `karmada-descheduler`, `karmada-search`, `karmada-scheduler`, `karmada-scheduler-estimator`, and `karmada-webhook`.
+
+```
+--enable-pprof
+ Enable profiling via web interface host:port/debug/pprof/.
+--profiling-bind-address string
+ The TCP address for serving profiling(e.g. 127.0.0.1:6060, :6060). This is only applicable if profiling is enabled. (default ":6060")
+
+```
+
+## Expose the endpoint at the local port
+
+You can reach the application in the pod by port forwarding with kubectl, for example:
+
+```shell
+$ kubectl -n karmada-system get pod
+NAME READY STATUS RESTARTS AGE
+karmada-controller-manager-7567b44b67-8kt59 1/1 Running 0 19s
+...
+```
+
+```shell
+$ kubectl -n karmada-system port-forward karmada-controller-manager-7567b44b67-8kt59 6060
+Forwarding from 127.0.0.1:6060 -> 6060
+Forwarding from [::1]:6060 -> 6060
+```
+
+The HTTP endpoint will now be available as a local port.
+
+## Generate the data
+
+You can then generate the memory profile with curl and pipe the data to a file:
+
+```shell
+$ curl http://localhost:6060/debug/pprof/heap > heap.pprof
+```
+
+Generate the CPU profile with curl and pipe the data to a file (7200 seconds is two hours):
+
+```shell
+curl "http://localhost:6060/debug/pprof/profile?seconds=7200" > cpu.pprof
+```
+
+## Analyze the data
+
+To analyze the data:
+
+```shell
+go tool pprof heap.pprof
+```
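+
+Inside the interactive session, commands such as `top` and `web` show the hottest call paths (`web` requires graphviz). Alternatively, you can browse the profile in a web UI with flame graphs and call graphs:
+
+```shell
+go tool pprof -http=:8080 heap.pprof
+```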
+
+## Read more about profiling
+
+1. [Profiling Golang Programs on Kubernetes](https://danlimerick.wordpress.com/2017/01/24/profiling-golang-programs-on-kubernetes/)
+2. [Official Go blog](https://blog.golang.org/pprof)
\ No newline at end of file
diff --git a/versioned_docs/version-v1.9/developers/releasing.md b/versioned_docs/version-v1.9/developers/releasing.md
new file mode 100644
index 000000000..1433657c8
--- /dev/null
+++ b/versioned_docs/version-v1.9/developers/releasing.md
@@ -0,0 +1,110 @@
+---
+title: Releasing
+---
+
+Karmada release could be minor release or patch release. For example, `v1.3.0` is a minor release, and `v1.3.1` is a patch release. Minor release indicates addition of functionality in a backwards-compatible manner, and patch release indicates backwards-compatible bug fixes.
+ The relationship between release, tag, and branch is as follows:
+![img](../resources/developers/releasing.png)
+
+For different release types, the procedure is different.
+
+## Minor release
+The minor version should be released from corresponding minor release branch. The release procedure is described as follows.
+
+### Create release branch(manually)
+Ensure the necessary PRs are merged, and then create the minor release branch from master branch. The minor release branch name should follow `release-{major}.{minor}`, for example, `release-1.4`.
+
+### Prepare release-note(manually)
+Every release requires a well-formed release note. The release note should follow the format:
+```text
+# What's New
+The highlighted updates, for example, new key features. This part is collected manually.
+
+# Other Notable Changes
+## API Changes
+* API changes, such as API version changes. This part is collected manually.
+
+## Bug Fixes
+* Bug fixes. This part is collected manually.
+
+## Features & Enhancements
+* New features and enhancements. This part is collected manually.
+
+## Security
+* Security fixes. This part is collected manually.
+
+## Other
+### Dependencies
+* Dependency changes, such as golang version updates. This part is collected manually.
+
+### Instrumentation
+* Observability-related changes. For example, metric adding/event recording.
+```
+Compare the newly created minor release branch with the previous minor release tag, for example, compare `release-1.4` branch with `v1.3.0` tag, to get all relevant changes. Then extract the preceding types of release notes from these changes. For example, extract [this](https://github.com/karmada-io/karmada/pull/2675) as:
+```text
+## Bug Fixes
+* `karmada-controller-manager`: Fixed the panic when cluster ImpersonatorSecretRef is nil.
+```
+
+### Commit release-notes(manually)
+After the release note is ready, commit it to `docs/CHANGELOG/CHANGELOG-{major}.{minor}.md` of the minor release branch.
+
+### Prepare contributor-list(manually)
+List the contributors in each release. Compare the newly created minor release branch with the previous minor release tag, for example, compare `release-1.4` branch with `v1.3.0` tag, to get contributors' Github IDs. The list should be in alphabetical order, like:
+```text
+## Contributors
+Thank you to everyone who contributed to this release!
+
+Users whose commits are in this release (alphabetically by username)
+@a
+@B
+@c
+@D
+...
+```
+
+### Update manifest(manually)
+When installing `Karmada`, the images need to be pulled from DockerHub/SWR, so we should update the manifests with image tags of the new version in the minor release branch. The following files need to be updated:
+* `charts/karmada/values.yaml`: Update `Karmada` related image tag with the new version.
+* `charts/index.yaml`: Add helm repository index.
+
+### Add upgrading docs(manually)
+When releasing a new minor version, the upgrading docs `docs/administrator/upgrading/v{major}.{minor_previous}-v{major}.{minor_new}.md` needs to be added to [website](https://github.com/karmada-io/website) repository, for example, adding `docs/administrator/upgrading/v1.3-v1.4.md` when releasing `v1.4.0`.
+
+### Create release(manually)
+Now, all prepared, let's create a release on the release page.
+* Create a new minor release tag, the tag name format should follow `v{major}.{minor}.{patch}`, for example, `v1.4.0`.
+* The target branch is the newly created minor release branch.
+* The content of `Describe this release` should be the combination of the chapter `Prepare release-notes` and the chapter `Prepare contributor-list`.
+
+
+### Attach assets(automatically)
+After the release is published, GitHub will run workflow `.github/workflows/release.yml` to build `karmadactl` and `kubectl-karmada` and attach them to the newly published release.
+
+
+### Build/Push images(automatically)
+After the release is published, GitHub will run `.github/workflows/swr-released-image.yml` and `.github/workflows/dockerhub-released-image.yml` to build all `Karmada` components' images and push them to DockerHub/SWR.
+
+### Verifying release(manually)
+After all the workflows have been finished, we should perform manual checks to see if the release came out correctly:
+ * Check if all required assets are attached.
+ * Check if all the required images have been published on DockerHub/SWR.
+
+## Patch release
+The patch version should be released from the corresponding minor release branch.
+
+### Prepare release-note(manually)
+This step is almost the same as the minor release, but we need to compare the minor release branch with the minor tag to extract the release note, for example, compare `release-1.3` branch with `v1.3.0` tag to collect `v1.3.1` patch release note.
+
+### Create release(manually)
+This step is almost the same as the minor release, but the target branch is the minor version release branch, for example, creating release tag `v1.3.1` from minor release branch `release-1.3`.
+Also, we do not need to list the contributors; GitHub will automatically add the contributors to the release note.
+
+### Attach assets(automatically)
+Same with the minor release.
+
+### Build/Push images(automatically)
+Same with the minor release.
+
+### Verifying release(manually)
+Same with the minor release.
diff --git a/versioned_docs/version-v1.9/faq/faq.md b/versioned_docs/version-v1.9/faq/faq.md
new file mode 100644
index 000000000..6750b1b07
--- /dev/null
+++ b/versioned_docs/version-v1.9/faq/faq.md
@@ -0,0 +1,60 @@
+---
+title: FAQ(Frequently Asked Questions)
+---
+
+## What is the difference between PropagationPolicy and ClusterPropagationPolicy?
+
+The `PropagationPolicy` is a namespace-scoped resource type, which means objects of this type must reside in a namespace.
+The `ClusterPropagationPolicy` is a cluster-scoped resource type, which means objects of this type don't have a namespace.
+
+Both of them are used to hold the propagation declaration, but they have different capacities:
+- PropagationPolicy: can only represent the propagation policy for the resources in the same namespace.
+- ClusterPropagationPolicy: can represent the propagation policy for all resources including namespace-scoped and cluster-scoped resources.
+
+## What is the difference between 'Push' and 'Pull' mode of a cluster?
+
+Please refer to [Overview of Push and Pull](../userguide/clustermanager/cluster-registration.md#overview-of-cluster-mode).
+
+## Why does Karmada require `kube-controller-manager`?
+
+`kube-controller-manager` is composed of a bunch of controllers, Karmada inherits some controllers from it
+to keep a consistent user experience and behavior.
+
+It's worth noting that not all controllers are needed by Karmada, for the recommended controllers please
+refer to [Kubernetes Controllers](../administrator/configuration/configure-controllers.md#kubernetes-controllers).
+
+
+## Can I install Karmada in a Kubernetes cluster and reuse the kube-apiserver as Karmada apiserver?
+
+The quick answer is `yes`. In that case, you can save the effort to deploy
+[karmada-apiserver](https://github.com/karmada-io/karmada/blob/master/artifacts/deploy/karmada-apiserver.yaml) and just
+share the APIServer between Kubernetes and Karmada. In addition, the high availability capabilities in the origin clusters
+can be inherited seamlessly. We do have some users using Karmada in this way.
+
+There are some things you should consider before doing so:
+
+- This approach hasn't been fully tested by the Karmada community and no plan for it yet.
+- This approach will increase computation costs for the Karmada system. E.g.
+ After you apply a `resource template`, take `Deployment` as an example, the `kube-controller` will create `Pods` for the
+ Deployment and update the status persistently, Karmada system will reconcile these changes too, so there might be
+ conflicts.
+
+TODO: Link to adoption use case once it gets on board.
+
+## Why doesn't the Cluster API have a CRD YAML file?
+
+Kubernetes provides two methods to extend APIs: Custom Resource and Kubernetes API Aggregation Layer. For more detail, you can refer to [Extending the Kubernetes API](https://kubernetes.io/docs/concepts/extend-kubernetes/).
+
+Karmada uses both extension methods. For example, `PropagationPolicy` and `ResourceBinding` use Custom Resources, and the `Cluster` resource uses the Kubernetes API Aggregation Layer.
+
+Therefore, `Cluster` resources do not have a CRD YAML file, and they are not visible when you execute the `kubectl get crd` command.
+
+So, why would we choose to use the Kubernetes API Aggregation Layer to extend `Cluster` resources instead of using Custom Resource?
+
+This is because the `Cluster` resource requires the setup of the `Proxy` sub-resource. By using `Proxy`, you can access resources in member clusters. For details, please refer to [Aggregation Layer APIServer](https://karmada.io/docs/next/userguide/globalview/aggregated-api-endpoint/). At present, Custom Resources do not support configuring a `Proxy` sub-resource, which is why they were not chosen for this purpose.
+
+## How to prevent automatic propagation of Namespace to all member clusters?
+
+Karmada will propagate the `Namespace` resources created by users to member clusters by default. This functionality is handled by the `namespace` controller in the `karmada-controller-manager` component, and can be configured by referring to [Configure Karmada Controllers](../administrator/configuration/configure-controllers.md#configure-karmada-controllers).
+
+After disabling the `namespace` controller, users can propagate `Namespace` resources to specified clusters through `ClusterPropagationPolicy` resources.
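+
+As a sketch, a `ClusterPropagationPolicy` that propagates a specific namespace (the name `example-ns` and the member cluster names below are placeholders) only to selected clusters could look like the following:
+
+```yaml
+apiVersion: policy.karmada.io/v1alpha1
+kind: ClusterPropagationPolicy
+metadata:
+  name: example-ns-policy
+spec:
+  resourceSelectors:
+    - apiVersion: v1
+      kind: Namespace
+      name: example-ns
+  placement:
+    clusterAffinity:
+      clusterNames:
+        - member1
+        - member2
+```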
diff --git a/versioned_docs/version-v1.9/get-started/nginx-example.md b/versioned_docs/version-v1.9/get-started/nginx-example.md
new file mode 100644
index 000000000..2e03a1218
--- /dev/null
+++ b/versioned_docs/version-v1.9/get-started/nginx-example.md
@@ -0,0 +1,85 @@
+---
+title: Propagate a deployment by Karmada
+---
+
+This guide will cover:
+- Install `karmada` control plane components in a Kubernetes cluster which is known as `host cluster`.
+- Join a member cluster to `karmada` control plane.
+- Propagate an application by using `karmada`.
+
+### Prerequisites
+- [Go](https://golang.org/) version v1.18+
+- [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/) version v1.19+
+- [kind](https://kind.sigs.k8s.io/) version v0.14.0+
+
+### Install the Karmada control plane
+
+#### 1. Clone this repo to your machine:
+```
+git clone https://github.com/karmada-io/karmada
+```
+
+#### 2. Change to the karmada directory:
+```
+cd karmada
+```
+
+#### 3. Deploy and run Karmada control plane:
+
+Run the following script:
+
+```
+# hack/local-up-karmada.sh
+```
+This script will do the following tasks for you:
+- Start a Kubernetes cluster to run the Karmada control plane, aka. the `host cluster`.
+- Build Karmada control plane components based on a current codebase.
+- Deploy Karmada control plane components on the `host cluster`.
+- Create member clusters and join Karmada.
+
+If everything goes well, at the end of the script output, you will see similar messages as follows:
+```
+Local Karmada is running.
+
+To start using your Karmada environment, run:
+ export KUBECONFIG="$HOME/.kube/karmada.config"
+Please use 'kubectl config use-context karmada-host/karmada-apiserver' to switch the host and control plane cluster.
+
+To manage your member clusters, run:
+ export KUBECONFIG="$HOME/.kube/members.config"
+Please use 'kubectl config use-context member1/member2/member3' to switch to the different member cluster.
+```
+
+There are two contexts in Karmada:
+- karmada-apiserver `kubectl config use-context karmada-apiserver`
+- karmada-host `kubectl config use-context karmada-host`
+
+The `karmada-apiserver` is the **main kubeconfig** to be used when interacting with the Karmada control plane, while `karmada-host` is only used for debugging Karmada installation with the host cluster. You can check all clusters at any time by running: `kubectl config view`. To switch cluster contexts, run `kubectl config use-context [CONTEXT_NAME]`
+
+
+### Demo
+
+![Demo](../resources/general/sample-nginx.svg)
+
+### Propagate application
+In the following steps, we are going to propagate a deployment by Karmada.
+
+#### 1. Create nginx deployment in Karmada.
+First, create a [deployment](https://github.com/karmada-io/karmada/blob/master/samples/nginx/deployment.yaml) named `nginx`:
+```
+kubectl create -f samples/nginx/deployment.yaml
+```
+
+#### 2. Create PropagationPolicy that will propagate nginx to member cluster
+Then, we need to create a policy to propagate the deployment to our member cluster.
+```
+kubectl create -f samples/nginx/propagationpolicy.yaml
+```
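+
+The sample policy selects the `nginx` deployment and binds it to the member clusters. It is roughly equivalent to the following (check `samples/nginx/propagationpolicy.yaml` in the repo for the exact content):
+
+```yaml
+apiVersion: policy.karmada.io/v1alpha1
+kind: PropagationPolicy
+metadata:
+  name: nginx-propagation
+spec:
+  resourceSelectors:
+    - apiVersion: apps/v1
+      kind: Deployment
+      name: nginx
+  placement:
+    clusterAffinity:
+      clusterNames:
+        - member1
+        - member2
+```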
+
+#### 3. Check the deployment status from Karmada
+You can check the deployment status from Karmada; there is no need to access the member clusters:
+```
+$ kubectl get deployment
+NAME READY UP-TO-DATE AVAILABLE AGE
+nginx 2/2 2 2 20s
+```
\ No newline at end of file
diff --git a/versioned_docs/version-v1.9/installation/fromsource.md b/versioned_docs/version-v1.9/installation/fromsource.md
new file mode 100644
index 000000000..d69e41101
--- /dev/null
+++ b/versioned_docs/version-v1.9/installation/fromsource.md
@@ -0,0 +1,59 @@
+---
+title: Installation from Source
+---
+
+This document describes how you can use the `hack/remote-up-karmada.sh` script to install Karmada on
+your clusters based on the codebase.
+
+## Select a way to expose karmada-apiserver
+
+The `hack/remote-up-karmada.sh` will install `karmada-apiserver` and provide two ways to expose the server:
+
+### 1. expose by `HostNetwork` type
+
+By default, the `hack/remote-up-karmada.sh` will expose `karmada-apiserver` by `HostNetwork`.
+
+No extra operations are needed with this type.
+
+### 2. expose by service with `LoadBalancer` type
+
+If you don't want to use the `HostNetwork`, you can ask `hack/remote-up-karmada.sh` to expose `karmada-apiserver`
+by a service with `LoadBalancer` type that *requires your cluster to have deployed the `Load Balancer`*.
+All you need to do is set an environment:
+```bash
+export LOAD_BALANCER=true
+```
+
+## Install
+From the `root` directory of the `karmada` repo, install Karmada by command:
+```bash
+hack/remote-up-karmada.sh <kubeconfig> <context_name>
+```
+- `<kubeconfig>` is the kubeconfig file of the cluster that you want to install Karmada to
+- `<context_name>` is the name of the context in `<kubeconfig>`
+
+For example:
+```bash
+hack/remote-up-karmada.sh $HOME/.kube/config mycluster
+```
+
+If everything goes well, at the end of the script output, you will see similar messages as follows:
+```
+------------------------------------------------------------------------------------------------------
+█████ ████ █████████ ███████████ ██████ ██████ █████████ ██████████ █████████
+░░███ ███░ ███░░░░░███ ░░███░░░░░███ ░░██████ ██████ ███░░░░░███ ░░███░░░░███ ███░░░░░███
+░███ ███ ░███ ░███ ░███ ░███ ░███░█████░███ ░███ ░███ ░███ ░░███ ░███ ░███
+░███████ ░███████████ ░██████████ ░███░░███ ░███ ░███████████ ░███ ░███ ░███████████
+░███░░███ ░███░░░░░███ ░███░░░░░███ ░███ ░░░ ░███ ░███░░░░░███ ░███ ░███ ░███░░░░░███
+░███ ░░███ ░███ ░███ ░███ ░███ ░███ ░███ ░███ ░███ ░███ ███ ░███ ░███
+█████ ░░████ █████ █████ █████ █████ █████ █████ █████ █████ ██████████ █████ █████
+░░░░░ ░░░░ ░░░░░ ░░░░░ ░░░░░ ░░░░░ ░░░░░ ░░░░░ ░░░░░ ░░░░░ ░░░░░░░░░░ ░░░░░ ░░░░░
+------------------------------------------------------------------------------------------------------
+Karmada is installed successfully.
+
+Kubeconfig for karmada in file: /root/.kube/karmada.config, so you can run:
+ export KUBECONFIG="/root/.kube/karmada.config"
+Or use kubectl with --kubeconfig=/root/.kube/karmada.config
+Please use 'kubectl config use-context karmada-apiserver' to switch the cluster of karmada control plane
+And use 'kubectl config use-context your-host' for debugging karmada installation
+```
diff --git a/versioned_docs/version-v1.9/installation/ha-installation-with-cli.md b/versioned_docs/version-v1.9/installation/ha-installation-with-cli.md
new file mode 100644
index 000000000..9438ffe82
--- /dev/null
+++ b/versioned_docs/version-v1.9/installation/ha-installation-with-cli.md
@@ -0,0 +1,91 @@
+---
+title: High Availability Installation Using CLI
+---
+
+This documentation serves as a comprehensive guide for deploying Karmada in a high
+availability (HA) configuration using the `karmadactl` CLI tool.
+
+## Prerequisites
+
+- A Kubernetes cluster with multiple worker nodes (it's recommended to have at least
+ three worker nodes).
+- A valid `kube-config` file to access your Kubernetes cluster.
+- The `karmadactl` command-line tool or `kubectl-karmada` plugin installed.
+
+ :::note
+
+ For installing `karmadactl` command line tool refer to the [Installation of CLI Tools](/docs/next/installation/install-cli-tools#one-click-installation) guide.
+
+ :::
+
+## Installation
+
+There are two ways to install the Karmada in HA. The first way is to use an internal etcd
+cluster, and the second way is to use an external etcd cluster. You can learn more
+about this in the [high availability installation overview](/docs/next/installation/ha-installation).
+
+### Installation using internal etcd
+
+1. Use the following command to install the Karmada in HA using internal etcd:
+
+ ```bash
+    sudo karmadactl init --karmada-apiserver-replicas 3 --etcd-replicas 3 --etcd-storage-mode PVC --storage-classes-name <storage-class-name> --kubeconfig <kubeconfig-file-path>
+ ```
+
+ :::note
+
+   You need to use sudo for elevated permissions because `karmadactl` creates the
+   karmada-apiserver config file at `/etc/karmada/karmada-apiserver.config`.
+
+ :::
+
+2. Specify the path to your `kube-config` file to connect to your Kubernetes cluster.
+   Typically, the `kube-config` file is located at `~/.kube/config`.
+ Alternatively, you can set the `KUBECONFIG` environment variable to specify the
+ file's location.
+
+3. Adjust the installation parameters as needed:
+ - `--karmada-apiserver-replicas 3`: This parameter sets up three Karmada API server
+ replicas. Each Karmada API server replica requires a separate node.
+ - `--etcd-replicas 3`: This ensures three etcd members are available to support the
+ three Karmada API servers.
+ - `--etcd-storage-mode PVC`: It indicates the use of PVCs for etcd storage.
+   - `--storage-classes-name <storage-class-name>`: Specify the name of the storage
+ class for etcd.
+
+ :::note
+
+ Make sure you have a minimum of three Kubernetes worker nodes available for a
+ successful installation of three Karmada API servers and etcd replicas.
+ Otherwise, the installation will not work.
+
+ :::
+
+### Installation using external etcd
+
+1. Use the following command to install the Karmada in HA using external etcd:
+
+ ```bash
+    sudo karmadactl init --karmada-apiserver-replicas 3 --external-etcd-ca-cert-path <ca-cert-path> --external-etcd-client-cert-path <client-cert-path> --external-etcd-client-key-path <client-key-path> --external-etcd-servers <etcd-server-list> --external-etcd-key-prefix <key-prefix> --kubeconfig <kubeconfig-file-path>
+ ```
+
+2. Adjust the installation parameters as needed:
+ - `--karmada-apiserver-replicas 3`: This parameter sets up three Karmada API server
+ replicas. Each Karmada API server replica requires a separate node.
+ - The `--external-etcd-ca-cert-path`, `--external-etcd-client-cert-path`,
+ `--external-etcd-client-key-path` and `--external-etcd-servers`
+ are the parameters that are required to authenticate with the external etcd cluster.
+ - Optionally you can also specify an etcd key prefix using the
+ `--external-etcd-key-prefix` parameter.
+
+## Verification
+
+Once the Karmada cluster installation is complete, you can verify the distribution of
+pods across multiple nodes by using the following command:
+
+```bash
+kubectl get pod -o=custom-columns=NAME:.metadata.name,STATUS:.status.phase,NODE:.spec.nodeName -n karmada-system
+```
+
+This will display the status and node allocation of pods within the `karmada-system`
+namespace.
\ No newline at end of file
diff --git a/versioned_docs/version-v1.9/installation/ha-installation.md b/versioned_docs/version-v1.9/installation/ha-installation.md
new file mode 100644
index 000000000..e03d3f6c6
--- /dev/null
+++ b/versioned_docs/version-v1.9/installation/ha-installation.md
@@ -0,0 +1,72 @@
+---
+title: High Availability Installation Overview
+---
+
+This documentation explains high availability for Karmada.
+The Karmada high availability architecture is very similar to the Kubernetes high
+availability architecture. To deploy Karmada in a HA (High Availability) environment
+we can create multiple Karmada API servers instead of using a single API server.
+So even if a Karmada control plane goes down, you can still manage your clusters
+using the other Karmada control planes.
+
+## Options for highly available topology
+
+There are two options for configuring the topology of your highly available Karmada
+cluster.
+
+You can set up an HA cluster:
+
+* With stacked control plane nodes, where etcd nodes are colocated with control plane
+ nodes
+
+* With external etcd nodes, where etcd runs on separate nodes from the control plane
+
+You should carefully consider the advantages and disadvantages of each topology before
+setting up an HA cluster.
+
+## Stacked etcd topology
+
+A stacked HA cluster is a topology where the distributed data storage cluster provided
+by etcd is stacked on top of the Karmada control plane nodes.
+
+Each control plane node runs an instance of the Karmada API server, Karmada scheduler,
+and Karmada controller manager. The Karmada API server can communicate with the multiple
+member clusters, and these member clusters can be registered to the multiple Karmada
+API servers.
+
+Each Karmada control plane node creates a local etcd member and this etcd member
+communicates only with the Karmada API server of that node. The same applies to the
+local Karmada controller manager and Karmada scheduler instances.
+
+This topology couples the control planes and etcd members on the same nodes. It is simpler
+to set up than a cluster with external etcd nodes, and simpler to manage for replication.
+
+However, a stacked cluster runs the risk of failed coupling. If one node goes down, both an
+etcd member and a Karmada control plane instance are lost, and redundancy is compromised.
+You can mitigate this risk by adding more control plane nodes.
+
+You should therefore run a minimum of three stacked control plane nodes for an HA cluster.
+
+![Karmada stacked etcd](../resources/general/karmada-stacked-etcd.png)
+
+## External etcd topology
+
+An HA cluster with external etcd is a topology where the distributed data storage cluster
+provided by etcd is external to the Karmada cluster formed by the nodes that run Karmada
+control plane components.
+
+Like the stacked etcd topology, each Karmada control plane node in an external etcd
+topology runs an instance of the Karmada API server, Karmada scheduler, and Karmada
+controller manager. And the Karmada API server is exposed to the member clusters.
+However, etcd members run on separate hosts, and each etcd host communicates with the
+Karmada API server of each Karmada control plane node.
+
+This topology decouples the Karmada control plane and etcd member. It therefore
+provides an HA setup where losing a control plane instance or an etcd member has less
+impact and does not affect the cluster redundancy as much as the stacked HA topology.
+
+However, this topology requires twice the number of hosts as the stacked HA topology.
+A minimum of three hosts for control plane nodes and three hosts for etcd nodes are
+required for an HA cluster with this topology.
+
+![Karmada external etcd](../resources/general/karmada-external-etcd.png)
\ No newline at end of file
diff --git a/versioned_docs/version-v1.9/installation/install-binary.md b/versioned_docs/version-v1.9/installation/install-binary.md
new file mode 100644
index 000000000..2578483c7
--- /dev/null
+++ b/versioned_docs/version-v1.9/installation/install-binary.md
@@ -0,0 +1,1191 @@
+---
+title: Installation by Binary
+---
+
+Step-by-step installation of binary high-availability `karmada` cluster.
+
+## Prerequisites
+
+### Server
+
+3 servers required. E.g.
+
+```shell
++---------------+-----------------+-----------------+
+| HostName | Host IP | Public IP |
++---------------+-----------------+-----------------+
+| karmada-01 | 172.31.209.245 | 47.242.88.82 |
++---------------+-----------------+-----------------+
+| karmada-02 | 172.31.209.246 | |
++---------------+-----------------+-----------------+
+| karmada-03 | 172.31.209.247 | |
++---------------+-----------------+-----------------+
+```
+
+> Public IP is not required. It is used to download some `karmada` dependent components from the public network and connect to `karmada` ApiServer through the public network
+
+### DNS Resolution
+
+Execute operations at `karmada-01` `karmada-02` `karmada-03`.
+
+```bash
+vi /etc/hosts
+172.31.209.245 karmada-01
+172.31.209.246 karmada-02
+172.31.209.247 karmada-03
+```
+
+Alternatively, you can use "Linux Virtual Server" for load balancing, and don't change /etc/hosts file.
+
+### Environment
+
+`karmada-01` requires the following environment.
+
+**Golang**: Compile the karmada binary
+**GCC**: Compile nginx (ignore if using cloud load balancing)
+
+## Compile and Download Binaries
+
+Execute operations at `karmada-01`.
+
+### Kubernetes Binaries
+
+Download the `kubernetes` binary package.
+
+Refer to this page to download binaries of different versions and architectures: [https://kubernetes.io/releases/download/#binaries](https://kubernetes.io/releases/download/#binaries)
+
+```bash
+wget https://dl.k8s.io/v1.23.3/kubernetes-server-linux-amd64.tar.gz
+tar -zxvf kubernetes-server-linux-amd64.tar.gz --no-same-owner
+cd kubernetes/server/bin
+mv kube-apiserver kube-controller-manager kubectl /usr/local/sbin/
+```
+
+### etcd Binaries
+
+Download the `etcd` binary package.
+
+You may want to use a newer version of etcd, please refer to this page: [https://etcd.io/docs/latest/install/](https://etcd.io/docs/latest/install/)
+
+```bash
+wget https://github.com/etcd-io/etcd/releases/download/v3.5.1/etcd-v3.5.1-linux-amd64.tar.gz
+tar -zxvf etcd-v3.5.1-linux-amd64.tar.gz --no-same-owner
+cd etcd-v3.5.1-linux-amd64/
+mv etcdctl etcd /usr/local/sbin/
+```
+
+### Karmada Binaries
+
+Compile the `karmada` binaries from source.
+
+```bash
+git clone https://github.com/karmada-io/karmada
+cd karmada
+make karmada-aggregated-apiserver karmada-controller-manager karmada-scheduler karmada-webhook karmadactl kubectl-karmada
+mv _output/bin/linux/amd64/* /usr/local/sbin/
+```
+
+### Nginx Binaries
+
+Compile the `nginx` binary from source.
+
+```bash
+wget http://nginx.org/download/nginx-1.21.6.tar.gz
+tar -zxvf nginx-1.21.6.tar.gz
+cd nginx-1.21.6
+./configure --with-stream --without-http --prefix=/usr/local/karmada-nginx --without-http_uwsgi_module --without-http_scgi_module --without-http_fastcgi_module
+make && make install
+mv /usr/local/karmada-nginx/sbin/nginx /usr/local/karmada-nginx/sbin/karmada-nginx
+```
+
+### Distribute Binaries
+
+Upload the binary files to the `karmada-02` and `karmada-03` servers.
+
+## Generate Certificates
+
+### Step 1: Create Bash Scripts and Configuration Files
+
+The scripts will generate certificates using the `openssl` command. Download [this directory](https://github.com/karmada-io/website/tree/main/docs/resources/installation/install-binary/generate_cert).
+
+We separate CA & leaf certificates generation scripts, so when you need to change Subject Alternative Name of leaf certificates (aka Load Balancer IP), you can reuse CA certificates, and run generate_leaf.sh to generate only leaf certificates.
+
+
+
+There are 3 CAs: front-proxy-ca, server-ca, etcd/ca. For why we need 3 CAs, please see: [PKI certificates and requirements](https://kubernetes.io/docs/setup/best-practices/certificates/), [CA Reusage and Conflicts](https://kubernetes.io/docs/tasks/extend-kubernetes/configure-aggregation-layer/#ca-reusage-and-conflicts).
+
+If you use etcd provided by others, you can ignore `generate_etcd.sh` and `csr_config/etcd`.
+
+### Step 2: Change the IP Addresses
+
+You need to change the IP address placeholders in the `csr_config/**/*.conf` files to your "Load Balancer IP" and "Server IP". If you only use the Load Balancer to access your servers, you only need to fill in the "Load Balancer IP".
+
+You normally don't need to change `*.sh` files.
+
+
+### Step 3: Run Shell Scripts
+
+```bash
+$ ./generate_ca.sh
+$ ./generate_leaf.sh ca_cert/
+$ ./generate_etcd.sh
+```
+
+
+### Step 4: Check the Certificates
+
+You can view the configuration of a certificate; take `karmada.crt` as an example.
+
+```bash
+openssl x509 -noout -text -in karmada.crt
+```
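+
+In particular, you can check that the Subject Alternative Name extension contains your Load Balancer IP and Server IPs (the `-ext` option requires a reasonably recent OpenSSL):
+
+```bash
+openssl x509 -noout -ext subjectAltName -in karmada.crt
+```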
+
+### Step 5: Create the Karmada Configuration Directory
+
+Copy the certificates to the `/etc/karmada/pki` directory.
+
+```bash
+mkdir -p /etc/karmada/pki
+
+cd ca_cert
+cp -r * /etc/karmada/pki
+
+cd ../cert
+cp -r * /etc/karmada/pki
+```
+
+
+## Create the Karmada kubeconfig Files and etcd Encryption Key
+
+Execute operations at `karmada-01`.
+
+### Create kubeconfig Files
+
+**Step 1: Download bash script**
+
+Download [this file](https://github.com/karmada-io/website/tree/main/docs/resources/installation/install-binary/other_scripts/create_kubeconfig_file.sh).
+
+**Step 2: execute bash script**
+
+`172.31.209.245:5443` is the address of the `nginx` proxy for `karmada-apiserver`, which we'll set up later. You should replace it with the "host:port" provided by your Load Balancer.
+
+```bash
+./create_kubeconfig_file.sh "https://172.31.209.245:5443"
+```
+
+### Create etcd Encryption Key
+
+If you don't need to encrypt contents in etcd, ignore this section and corresponding kube-apiserver start parameter.
+
+```bash
+export ENCRYPTION_KEY=$(head -c 32 /dev/urandom | base64)
+cat > /etc/karmada/encryption-config.yaml <<EOF
+apiVersion: apiserver.config.k8s.io/v1
+kind: EncryptionConfiguration
+resources:
+  - resources:
+      - secrets
+    providers:
+      - aescbc:
+          keys:
+            - name: key
+              secret: ${ENCRYPTION_KEY}
+      - identity: {}
+EOF
+```
+
+## Install etcd
+
+### Create Systemd Service
+
+Execute operations at `karmada-01` `karmada-02` `karmada-03`. Take `karmada-01` as an example.
+
+>The parameters that `karmada-02` `karmada-03` need to change are:
+>
+>--name
+>
+>--initial-advertise-peer-urls
+>
+>--listen-peer-urls
+>
+>--listen-client-urls
+>
+>--advertise-client-urls
+>
+>
+>
+>You can use `EnvironmentFile` to separate mutable configs from immutable configs.
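+
+For example, a sketch of that split (the file path and variable names here are illustrative assumptions, not part of the original guide):
+
+```bash
+# /etc/etcd/etcd.env on karmada-01 (each server keeps its own copy with its own values)
+ETCD_NAME=karmada-01
+ETCD_LISTEN_CLIENT_URLS=https://172.31.209.245:2379
+ETCD_ADVERTISE_CLIENT_URLS=https://172.31.209.245:2379
+
+# In etcd.service, reference the file and the variables:
+#   EnvironmentFile=/etc/etcd/etcd.env
+#   ExecStart=/usr/local/sbin/etcd --name ${ETCD_NAME} \
+#     --listen-client-urls ${ETCD_LISTEN_CLIENT_URLS} \
+#     --advertise-client-urls ${ETCD_ADVERTISE_CLIENT_URLS} ...
+```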
+
+### Start etcd cluster
+
+Execute on all 3 servers.
+
+Create the etcd storage directory:
+
+```bash
+mkdir /var/lib/etcd/
+chmod 700 /var/lib/etcd
+```
+
+Start etcd:
+
+```bash
+systemctl daemon-reload
+systemctl enable etcd.service
+systemctl start etcd.service
+systemctl status etcd.service
+```
+
+### Verify
+
+```bash
+etcdctl --cacert /etc/karmada/pki/etcd/ca.crt \
+ --cert /etc/karmada/pki/etcd/healthcheck-client.crt \
+ --key /etc/karmada/pki/etcd/healthcheck-client.key \
+ --endpoints "172.31.209.245:2379,172.31.209.246:2379,172.31.209.247:2379" \
+ endpoint status --write-out="table"
+
++---------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
+| ENDPOINT | ID | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
++---------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
+| 172.31.209.245:2379 | 689151f8cbf4ee95 | 3.5.1 | 20 kB | false | false | 2 | 9 | 9 | |
+| 172.31.209.246:2379 | 5db4dfb6ecc14de7 | 3.5.1 | 20 kB | true | false | 2 | 9 | 9 | |
+| 172.31.209.247:2379 | 7e59eef3c816aa57 | 3.5.1 | 20 kB | false | false | 2 | 9 | 9 | |
++---------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
+```
+
+## Install kube-apiserver
+
+### Configure Nginx
+
+Execute operations at `karmada-01`.
+
+Configure load balancing for `karmada-apiserver`:
+
+
+
+/usr/local/karmada-nginx/conf/nginx.conf
+
+```bash
+worker_processes 2;
+
+events {
+ worker_connections 1024;
+}
+
+stream {
+ upstream backend {
+ hash consistent;
+ server 172.31.209.245:6443 max_fails=3 fail_timeout=30s;
+ server 172.31.209.246:6443 max_fails=3 fail_timeout=30s;
+ server 172.31.209.247:6443 max_fails=3 fail_timeout=30s;
+ }
+
+ server {
+ listen 172.31.209.245:5443;
+ proxy_connect_timeout 1s;
+ proxy_pass backend;
+ }
+}
+```
+
+/lib/systemd/system/karmada-nginx.service
+
+```bash
+[Unit]
+Description=The karmada karmada-apiserver nginx proxy server
+After=syslog.target network-online.target remote-fs.target nss-lookup.target
+Wants=network-online.target
+
+[Service]
+Type=forking
+ExecStartPre=/usr/local/karmada-nginx/sbin/karmada-nginx -t
+ExecStart=/usr/local/karmada-nginx/sbin/karmada-nginx
+ExecReload=/usr/local/karmada-nginx/sbin/karmada-nginx -s reload
+ExecStop=/bin/kill -s QUIT $MAINPID
+PrivateTmp=true
+Restart=always
+RestartSec=5
+StartLimitInterval=0
+LimitNOFILE=65536
+
+[Install]
+WantedBy=multi-user.target
+```
+
+Start `karmada-nginx`:
+
+```bash
+systemctl daemon-reload
+systemctl enable karmada-nginx.service
+systemctl start karmada-nginx.service
+systemctl status karmada-nginx.service
+```
+
+### Create kube-apiserver Systemd Service
+
+Execute operations at `karmada-01` `karmada-02` `karmada-03`. Take `karmada-01` as an example.
+
+
+
+/usr/lib/systemd/system/kube-apiserver.service
+
+```bash
+[Unit]
+Description=Kubernetes API Server
+Documentation=https://kubernetes.io/docs/home/
+After=network.target
+
+[Service]
+# If you don't need to encrypt etcd, remove --encryption-provider-config
+ExecStart=/usr/local/sbin/kube-apiserver \
+ --allow-privileged=true \
+ --anonymous-auth=false \
+ --audit-webhook-batch-buffer-size 30000 \
+ --audit-webhook-batch-max-size 800 \
+ --authorization-mode "Node,RBAC" \
+ --bind-address 0.0.0.0 \
+ --client-ca-file /etc/karmada/pki/server-ca.crt \
+ --default-watch-cache-size 200 \
+ --delete-collection-workers 2 \
+ --disable-admission-plugins "StorageObjectInUseProtection,ServiceAccount" \
+ --enable-admission-plugins "NodeRestriction" \
+ --enable-bootstrap-token-auth \
+ --encryption-provider-config "/etc/karmada/encryption-config.yaml" \
+ --etcd-cafile /etc/karmada/pki/etcd/ca.crt \
+ --etcd-certfile /etc/karmada/pki/etcd/apiserver-etcd-client.crt \
+ --etcd-keyfile /etc/karmada/pki/etcd/apiserver-etcd-client.key \
+ --etcd-servers "https://172.31.209.245:2379,https://172.31.209.246:2379,https://172.31.209.247:2379" \
+ --insecure-port 0 \
+ --logtostderr=true \
+ --max-mutating-requests-inflight 2000 \
+ --max-requests-inflight 4000 \
+ --proxy-client-cert-file /etc/karmada/pki/front-proxy-client.crt \
+ --proxy-client-key-file /etc/karmada/pki/front-proxy-client.key \
+ --requestheader-allowed-names "front-proxy-client" \
+ --requestheader-client-ca-file /etc/karmada/pki/front-proxy-ca.crt \
+ --requestheader-extra-headers-prefix "X-Remote-Extra-" \
+ --requestheader-group-headers "X-Remote-Group" \
+ --requestheader-username-headers "X-Remote-User" \
+ --runtime-config "api/all=true" \
+ --secure-port 6443 \
+ --service-account-issuer "https://kubernetes.default.svc.cluster.local" \
+ --service-account-key-file /etc/karmada/pki/sa.pub \
+ --service-account-signing-key-file /etc/karmada/pki/sa.key \
+ --service-cluster-ip-range "10.254.0.0/16" \
+ --tls-cert-file /etc/karmada/pki/kube-apiserver.crt \
+ --tls-private-key-file /etc/karmada/pki/kube-apiserver.key \
+
+Restart=on-failure
+RestartSec=5
+Type=notify
+LimitNOFILE=65536
+
+[Install]
+WantedBy=multi-user.target
+```
+
+### Start kube-apiserver
+
+Execute on all 3 servers.
+
+``` bash
+systemctl daemon-reload
+systemctl enable kube-apiserver.service
+systemctl start kube-apiserver.service
+systemctl status kube-apiserver.service
+```
+
+### Verify
+
+```bash
+$ ./check_status.sh
+###### Start check kube-apiserver
+[+]ping ok
+[+]log ok
+[+]etcd ok
+[+]poststarthook/start-kube-apiserver-admission-initializer ok
+[+]poststarthook/generic-apiserver-start-informers ok
+[+]poststarthook/priority-and-fairness-config-consumer ok
+[+]poststarthook/priority-and-fairness-filter ok
+[+]poststarthook/start-apiextensions-informers ok
+[+]poststarthook/start-apiextensions-controllers ok
+[+]poststarthook/crd-informer-synced ok
+[+]poststarthook/bootstrap-controller ok
+[+]poststarthook/rbac/bootstrap-roles ok
+[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
+[+]poststarthook/priority-and-fairness-config-producer ok
+[+]poststarthook/start-cluster-authentication-info-controller ok
+[+]poststarthook/aggregator-reload-proxy-client-cert ok
+[+]poststarthook/start-kube-aggregator-informers ok
+[+]poststarthook/apiservice-registration-controller ok
+[+]poststarthook/apiservice-status-available-controller ok
+[+]poststarthook/kube-apiserver-autoregistration ok
+[+]autoregister-completion ok
+[+]poststarthook/apiservice-openapi-controller ok
+livez check passed
+
+###### kube-apiserver check success
+```
+
+## Install karmada-aggregated-apiserver
+
+Create the `karmada-system` namespace and bind the cluster-admin role. Execute operations at `karmada-01`.
+
+```bash
+kubectl create ns karmada-system
+kubectl create clusterrolebinding cluster-admin:karmada --clusterrole=cluster-admin --user system:karmada
+```
+
+Then, like `karmada-webhook`, use `nginx` for high availability.
+
+Modify the `nginx` configuration and add the following configuration. Execute operations at `karmada-01`.
+
+```bash
+cat /usr/local/karmada-nginx/conf/nginx.conf
+worker_processes 2;
+
+events {
+ worker_connections 1024;
+}
+
+stream {
+ upstream backend {
+ hash consistent;
+ server 172.31.209.245:6443 max_fails=3 fail_timeout=30s;
+ server 172.31.209.246:6443 max_fails=3 fail_timeout=30s;
+ server 172.31.209.247:6443 max_fails=3 fail_timeout=30s;
+ }
+
+ upstream webhook {
+ hash consistent;
+ server 172.31.209.245:8443 max_fails=3 fail_timeout=30s;
+ server 172.31.209.246:8443 max_fails=3 fail_timeout=30s;
+ server 172.31.209.247:8443 max_fails=3 fail_timeout=30s;
+ }
+
+ upstream aa {
+ hash consistent;
+ server 172.31.209.245:7443 max_fails=3 fail_timeout=30s;
+ server 172.31.209.246:7443 max_fails=3 fail_timeout=30s;
+ server 172.31.209.247:7443 max_fails=3 fail_timeout=30s;
+ }
+
+ server {
+ listen 172.31.209.245:5443;
+ proxy_connect_timeout 1s;
+ proxy_pass backend;
+ }
+
+ server {
+ listen 172.31.209.245:4443;
+ proxy_connect_timeout 1s;
+ proxy_pass webhook;
+ }
+
+ server {
+ listen 172.31.209.245:443;
+ proxy_connect_timeout 1s;
+ proxy_pass aa;
+ }
+}
+```
+
+Reload `nginx` configuration
+
+```bash
+systemctl restart karmada-nginx
+```
+
+### Create Systemd Service
+
+Execute operations at `karmada-01` `karmada-02` `karmada-03`. Take `karmada-01` as an example.
+
+/usr/lib/systemd/system/karmada-aggregated-apiserver.service
+
+```bash
+[Unit]
+Description=Karmada Aggregated ApiServer
+Documentation=https://github.com/karmada-io/karmada
+
+[Service]
+ExecStart=/usr/local/sbin/karmada-aggregated-apiserver \
+ --audit-log-maxage 0 \
+ --audit-log-maxbackup 0 \
+ --audit-log-path - \
+ --authentication-kubeconfig /etc/karmada/karmada.kubeconfig \
+ --authorization-kubeconfig /etc/karmada/karmada.kubeconfig \
+ --etcd-cafile /etc/karmada/pki/etcd/ca.crt \
+ --etcd-certfile /etc/karmada/pki/etcd/apiserver-etcd-client.crt \
+ --etcd-keyfile /etc/karmada/pki/etcd/apiserver-etcd-client.key \
+ --etcd-servers "https://172.31.209.245:2379,https://172.31.209.246:2379,https://172.31.209.247:2379" \
+ --feature-gates "APIPriorityAndFairness=false" \
+ --kubeconfig /etc/karmada/karmada.kubeconfig \
+ --logtostderr=true \
+ --secure-port 7443 \
+ --tls-cert-file /etc/karmada/pki/karmada.crt \
+ --tls-private-key-file /etc/karmada/pki/karmada.key \
+
+Restart=on-failure
+RestartSec=5
+LimitNOFILE=65536
+
+[Install]
+WantedBy=multi-user.target
+```
+
+### Start karmada-aggregated-apiserver
+
+```bash
+systemctl daemon-reload
+systemctl enable karmada-aggregated-apiserver.service
+systemctl start karmada-aggregated-apiserver.service
+systemctl status karmada-aggregated-apiserver.service
+```
+
+### Create `APIService`
+
+`externalName` is the host name of the node where `nginx` is located (`karmada-01`).
+
+
+
+(1) create file: `karmada-aggregated-apiserver-apiservice.yaml`
+
+```yaml
+apiVersion: apiregistration.k8s.io/v1
+kind: APIService
+metadata:
+ name: v1alpha1.cluster.karmada.io
+ labels:
+ app: karmada-aggregated-apiserver
+ apiserver: "true"
+spec:
+ insecureSkipTLSVerify: true
+ group: cluster.karmada.io
+ groupPriorityMinimum: 2000
+ service:
+ name: karmada-aggregated-apiserver
+ namespace: karmada-system
+ port: 443
+ version: v1alpha1
+ versionPriority: 10
+---
+apiVersion: v1
+kind: Service
+metadata:
+ name: karmada-aggregated-apiserver
+ namespace: karmada-system
+spec:
+ type: ExternalName
+ externalName: karmada-01
+```
+
+
+
+(2) `kubectl create -f karmada-aggregated-apiserver-apiservice.yaml`
+
+### Verify
+
+```bash
+$ ./check_status.sh
+###### Start check karmada-aggregated-apiserver
+[+]ping ok
+[+]log ok
+[+]etcd ok
+[+]poststarthook/generic-apiserver-start-informers ok
+[+]poststarthook/max-in-flight-filter ok
+[+]poststarthook/start-aggregated-server-informers ok
+livez check passed
+
+###### karmada-aggregated-apiserver check success
+```
+
+## Install kube-controller-manager
+
+Execute operations at `karmada-01` `karmada-02` `karmada-03`. Take `karmada-01` as an example.
+
+### Create Systemd Service
+
+/usr/lib/systemd/system/kube-controller-manager.service
+
+```bash
+[Unit]
+Description=Kubernetes Controller Manager
+Documentation=https://kubernetes.io/docs/home/
+After=network.target
+
+[Service]
+ExecStart=/usr/local/sbin/kube-controller-manager \
+ --authentication-kubeconfig /etc/karmada/kube-controller-manager.kubeconfig \
+ --authorization-kubeconfig /etc/karmada/kube-controller-manager.kubeconfig \
+ --bind-address "0.0.0.0" \
+ --client-ca-file /etc/karmada/pki/server-ca.crt \
+ --cluster-name karmada \
+ --cluster-signing-cert-file /etc/karmada/pki/server-ca.crt \
+ --cluster-signing-key-file /etc/karmada/pki/server-ca.key \
+ --concurrent-deployment-syncs 10 \
+ --concurrent-gc-syncs 30 \
+ --concurrent-service-syncs 1 \
+ --controllers "namespace,garbagecollector,serviceaccount-token" \
+ --feature-gates "RotateKubeletServerCertificate=true" \
+ --horizontal-pod-autoscaler-sync-period 10s \
+ --kube-api-burst 2000 \
+ --kube-api-qps 1000 \
+ --kubeconfig /etc/karmada/kube-controller-manager.kubeconfig \
+ --leader-elect \
+ --logtostderr=true \
+ --node-cidr-mask-size 24 \
+ --pod-eviction-timeout 5m \
+ --requestheader-allowed-names "front-proxy-client" \
+ --requestheader-client-ca-file /etc/karmada/pki/front-proxy-ca.crt \
+ --requestheader-extra-headers-prefix "X-Remote-Extra-" \
+ --requestheader-group-headers "X-Remote-Group" \
+ --requestheader-username-headers "X-Remote-User" \
+ --root-ca-file /etc/karmada/pki/server-ca.crt \
+ --service-account-private-key-file /etc/karmada/pki/sa.key \
+ --service-cluster-ip-range "10.254.0.0/16" \
+ --terminated-pod-gc-threshold 10000 \
+ --tls-cert-file /etc/karmada/pki/kube-controller-manager.crt \
+ --tls-private-key-file /etc/karmada/pki/kube-controller-manager.key \
+ --use-service-account-credentials \
+ --v 4 \
+
+Restart=on-failure
+RestartSec=5
+LimitNOFILE=65536
+
+[Install]
+WantedBy=multi-user.target
+```
+
+### Start kube-controller-manager
+
+```bash
+systemctl daemon-reload
+systemctl enable kube-controller-manager.service
+systemctl start kube-controller-manager.service
+systemctl status kube-controller-manager.service
+```
+
+### Verify
+
+```bash
+$ ./check_status.sh
+###### Start check kube-controller-manager
+[+]leaderElection ok
+healthz check passed
+
+###### kube-controller-manager check success
+```
+
+## Install karmada-controller-manager
+
+### Create Systemd Service
+
+Execute operations at `karmada-01` `karmada-02` `karmada-03`. Take `karmada-01` as an example.
+
+/usr/lib/systemd/system/karmada-controller-manager.service
+
+```bash
+[Unit]
+Description=Karmada Controller Manager
+Documentation=https://github.com/karmada-io/karmada
+
+[Service]
+ExecStart=/usr/local/sbin/karmada-controller-manager \
+ --bind-address 0.0.0.0 \
+ --cluster-status-update-frequency 10s \
+ --kubeconfig /etc/karmada/karmada.kubeconfig \
+ --logtostderr=true \
+ --metrics-bind-address ":10358" \
+ --secure-port 10357 \
+ --v=4 \
+
+Restart=on-failure
+RestartSec=5
+LimitNOFILE=65536
+
+[Install]
+WantedBy=multi-user.target
+```
+
+### Start karmada-controller-manager
+
+```bash
+systemctl daemon-reload
+systemctl enable karmada-controller-manager.service
+systemctl start karmada-controller-manager.service
+systemctl status karmada-controller-manager.service
+```
+
+### Verify
+
+```bash
+$ ./check_status.sh
+###### Start check karmada-controller-manager
+[+]ping ok
+healthz check passed
+
+###### karmada-controller-manager check success
+```
+
+## Install karmada-scheduler
+
+### Create Systemd Service
+
+Execute operations at `karmada-01` `karmada-02` `karmada-03`. Take `karmada-01` as an example.
+
+/usr/lib/systemd/system/karmada-scheduler.service
+
+```bash
+[Unit]
+Description=Karmada Scheduler
+Documentation=https://github.com/karmada-io/karmada
+
+[Service]
+ExecStart=/usr/local/sbin/karmada-scheduler \
+ --bind-address 0.0.0.0 \
+ --enable-scheduler-estimator=true \
+ --kubeconfig /etc/karmada/karmada.kubeconfig \
+ --logtostderr=true \
+ --scheduler-estimator-port 10352 \
+ --secure-port 10511 \
+ --v=4 \
+
+Restart=on-failure
+RestartSec=5
+LimitNOFILE=65536
+
+[Install]
+WantedBy=multi-user.target
+```
+
+### Start karmada-scheduler
+
+```bash
+systemctl daemon-reload
+systemctl enable karmada-scheduler.service
+systemctl start karmada-scheduler.service
+systemctl status karmada-scheduler.service
+```
+
+### Verify
+
+```bash
+$ ./check_status.sh
+###### Start check karmada-scheduler
+ok
+###### karmada-scheduler check success
+```
+
+## Install karmada-webhook
+
+`karmada-webhook` is different from `scheduler` and `controller-manager`; its high availability needs to be implemented with `nginx`.
+
+Modify the `nginx` configuration and add the following configuration. Execute operations at `karmada-01`.
+
+```bash
+cat /usr/local/karmada-nginx/conf/nginx.conf
+worker_processes 2;
+
+events {
+ worker_connections 1024;
+}
+
+stream {
+ upstream backend {
+ hash consistent;
+ server 172.31.209.245:6443 max_fails=3 fail_timeout=30s;
+ server 172.31.209.246:6443 max_fails=3 fail_timeout=30s;
+ server 172.31.209.247:6443 max_fails=3 fail_timeout=30s;
+ }
+
+ upstream webhook {
+ hash consistent;
+ server 172.31.209.245:8443 max_fails=3 fail_timeout=30s;
+ server 172.31.209.246:8443 max_fails=3 fail_timeout=30s;
+ server 172.31.209.247:8443 max_fails=3 fail_timeout=30s;
+ }
+
+ server {
+ listen 172.31.209.245:5443;
+ proxy_connect_timeout 1s;
+ proxy_pass backend;
+ }
+
+ server {
+ listen 172.31.209.245:4443;
+ proxy_connect_timeout 1s;
+ proxy_pass webhook;
+ }
+}
+```
+
+Reload `nginx` configuration
+
+```bash
+systemctl restart karmada-nginx
+```
+
+### Create Systemd Service
+
+Execute operations at `karmada-01` `karmada-02` `karmada-03`. Take `karmada-01` as an example.
+
+/usr/lib/systemd/system/karmada-webhook.service
+
+```bash
+[Unit]
+Description=Karmada Webhook
+Documentation=https://github.com/karmada-io/karmada
+
+[Service]
+ExecStart=/usr/local/sbin/karmada-webhook \
+ --bind-address 0.0.0.0 \
+ --cert-dir /etc/karmada/pki \
+ --health-probe-bind-address ":8444" \
+ --kubeconfig /etc/karmada/karmada.kubeconfig \
+ --logtostderr=true \
+ --metrics-bind-address ":8445" \
+ --secure-port 8443 \
+ --tls-cert-file-name "karmada.crt" \
+ --tls-private-key-file-name "karmada.key" \
+ --v=4 \
+
+Restart=on-failure
+RestartSec=5
+LimitNOFILE=65536
+
+[Install]
+WantedBy=multi-user.target
+```
+
+### Start karmada-webhook
+
+```bash
+systemctl daemon-reload
+systemctl enable karmada-webhook.service
+systemctl start karmada-webhook.service
+systemctl status karmada-webhook.service
+```
+
+### Configure karmada-webhook
+
+Download the `webhook-configuration.yaml` file: [https://github.com/karmada-io/karmada/blob/master/artifacts/deploy/webhook-configuration.yaml](https://github.com/karmada-io/karmada/blob/master/artifacts/deploy/webhook-configuration.yaml)
+
+```bash
+ca_string=$(cat /etc/karmada/pki/server-ca.crt | base64 | tr "\n" " "|sed s/[[:space:]]//g)
+sed -i "s/{{caBundle}}/${ca_string}/g" webhook-configuration.yaml
+# You need to change 172.31.209.245:4443 to your Load Balancer host:port.
+sed -i 's/karmada-webhook.karmada-system.svc:443/172.31.209.245:4443/g' webhook-configuration.yaml
+
+kubectl create -f webhook-configuration.yaml
+```
+
+### Verify
+
+```bash
+$ ./check_status.sh
+###### Start check karmada-webhook
+ok
+###### karmada-webhook check success
+```
+
+## Initialize Karmada
+
+Execute operations at `karmada-01`.
+
+```bash
+git clone https://github.com/karmada-io/karmada
+cd karmada/charts/karmada/_crds/bases
+
+kubectl apply -f .
+
+cd ../patches/
+ca_string=$(cat /etc/karmada/pki/server-ca.crt | base64 | tr "\n" " "|sed s/[[:space:]]//g)
+sed -i "s/{{caBundle}}/${ca_string}/g" webhook_in_resourcebindings.yaml
+sed -i "s/{{caBundle}}/${ca_string}/g" webhook_in_clusterresourcebindings.yaml
+# You need to change 172.31.209.245:4443 to your Load Balancer host:port.
+sed -i 's/karmada-webhook.karmada-system.svc:443/172.31.209.245:4443/g' webhook_in_resourcebindings.yaml
+sed -i 's/karmada-webhook.karmada-system.svc:443/172.31.209.245:4443/g' webhook_in_clusterresourcebindings.yaml
+
+kubectl patch CustomResourceDefinition resourcebindings.work.karmada.io --patch-file webhook_in_resourcebindings.yaml
+kubectl patch CustomResourceDefinition clusterresourcebindings.work.karmada.io --patch-file webhook_in_clusterresourcebindings.yaml
+```
+
+Now, all the required components have been installed, and the member clusters could join Karmada control plane.
+If you want to use `karmadactl` or `kubectl` to query the Karmada control plane, point them at the kubeconfig generated earlier; a minimal sketch (assuming the `/etc/karmada/karmada.kubeconfig` file created by `create_kubeconfig_file.sh` above) is:
+```sh
+# Assumption: karmada.kubeconfig was generated earlier in this guide.
+export KUBECONFIG=/etc/karmada/karmada.kubeconfig
+kubectl get clusters
+```
+
+> Note: The `init` command is available from v1.0. Running the `init` command requires escalated privileges because it stores public configurations (certs, CRDs) for multiple users under the default location `/etc/karmada`; you can override this location via the flags `--karmada-data` and `--karmada-pki`. Refer to the CLI reference for more details or usage information.
+
+Run the following command to install:
+```bash
+kubectl karmada init
+```
+It might take about 5 minutes and if everything goes well, you will see outputs similar to:
+```
+I1121 19:33:10.270959 2127786 tlsbootstrap.go:61] [bootstrap-token] configured RBAC rules to allow certificate rotation for all agent client certificates in the member cluster
+I1121 19:33:10.275041 2127786 deploy.go:127] Initialize karmada bootstrap token
+I1121 19:33:10.281426 2127786 deploy.go:397] create karmada kube controller manager Deployment
+I1121 19:33:10.288232 2127786 idempotency.go:276] Service karmada-system/kube-controller-manager has been created or updated.
+...
+...
+------------------------------------------------------------------------------------------------------
+ █████ ████ █████████ ███████████ ██████ ██████ █████████ ██████████ █████████
+░░███ ███░ ███░░░░░███ ░░███░░░░░███ ░░██████ ██████ ███░░░░░███ ░░███░░░░███ ███░░░░░███
+ ░███ ███ ░███ ░███ ░███ ░███ ░███░█████░███ ░███ ░███ ░███ ░░███ ░███ ░███
+ ░███████ ░███████████ ░██████████ ░███░░███ ░███ ░███████████ ░███ ░███ ░███████████
+ ░███░░███ ░███░░░░░███ ░███░░░░░███ ░███ ░░░ ░███ ░███░░░░░███ ░███ ░███ ░███░░░░░███
+ ░███ ░░███ ░███ ░███ ░███ ░███ ░███ ░███ ░███ ░███ ░███ ███ ░███ ░███
+ █████ ░░████ █████ █████ █████ █████ █████ █████ █████ █████ ██████████ █████ █████
+░░░░░ ░░░░ ░░░░░ ░░░░░ ░░░░░ ░░░░░ ░░░░░ ░░░░░ ░░░░░ ░░░░░ ░░░░░░░░░░ ░░░░░ ░░░░░
+------------------------------------------------------------------------------------------------------
+Karmada is installed successfully.
+
+Register Kubernetes cluster to Karmada control plane.
+
+Register cluster with 'Push' mode
+
+Step 1: Use "kubectl karmada join" command to register the cluster to Karmada control plane. --cluster-kubeconfig is kubeconfig of the member cluster.
+(In karmada)~# MEMBER_CLUSTER_NAME=$(cat ~/.kube/config | grep current-context | sed 's/: /\n/g'| sed '1d')
+(In karmada)~# kubectl karmada --kubeconfig /etc/karmada/karmada-apiserver.config join ${MEMBER_CLUSTER_NAME} --cluster-kubeconfig=$HOME/.kube/config
+
+Step 2: Show members of karmada
+(In karmada)~# kubectl --kubeconfig /etc/karmada/karmada-apiserver.config get clusters
+
+
+Register cluster with 'Pull' mode
+
+Step 1: Use "kubectl karmada register" command to register the cluster to Karmada control plane. "--cluster-name" is set to cluster of current-context by default.
+(In member cluster)~# kubectl karmada register 172.18.0.3:32443 --token lm6cdu.lcm4wafod2jmjvty --discovery-token-ca-cert-hash sha256:9bf5aa53d2716fd9b5568c85db9461de6429ba50ef7ade217f55275d89e955e4
+
+Step 2: Show members of karmada
+(In karmada)~# kubectl --kubeconfig /etc/karmada/karmada-apiserver.config get clusters
+
+```
+
+The components of Karmada are installed in the `karmada-system` namespace by default; you can get them by:
+```bash
+kubectl get deployments -n karmada-system
+NAME READY UP-TO-DATE AVAILABLE AGE
+karmada-aggregated-apiserver 1/1 1 1 102s
+karmada-apiserver 1/1 1 1 2m34s
+karmada-controller-manager 1/1 1 1 116s
+karmada-scheduler 1/1 1 1 119s
+karmada-webhook 1/1 1 1 113s
+kube-controller-manager 1/1 1 1 2m3s
+```
+And `karmada-etcd` is installed as a `StatefulSet`; get it by:
+```bash
+kubectl get statefulsets -n karmada-system
+NAME READY AGE
+etcd 1/1 28m
+```
+
+The configuration file of Karmada will be created at `/etc/karmada/karmada-apiserver.config` by default.
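+
+For example, you can verify access to the control plane with the default configuration file:
+
+```bash
+kubectl --kubeconfig /etc/karmada/karmada-apiserver.config get clusters
+```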
+
+#### Offline installation
+
+When installing Karmada, `kubectl karmada init` downloads the APIs (CRDs) from the Karmada official release page
+(e.g. `https://github.com/karmada-io/karmada/releases/tag/v0.10.1`) and pulls images from the official registry by default.
+
+If you want to install Karmada offline, you may have to specify the CRD tar file as well as the images.
+
+Use `--crds` flag to specify the CRD file. e.g.
+```bash
+kubectl karmada init --crds /$HOME/crds.tar.gz
+```
+
+The images of Karmada components can also be specified; take `karmada-controller-manager` as an example:
+```bash
+kubectl karmada init --karmada-controller-manager-image=example.registry.com/library/karmada-controller-manager:1.0
+```
+
+#### Deploy HA
+Use `--karmada-apiserver-replicas` and `--etcd-replicas` flags to specify the number of the replicas (defaults to `1`).
+```bash
+kubectl karmada init --karmada-apiserver-replicas 3 --etcd-replicas 3
+```
+
+### Install Karmada in Kind cluster
+
+> kind is a tool for running local Kubernetes clusters using Docker container "nodes".
+> It was primarily designed for testing Kubernetes itself, not for production.
+
+Create a cluster named `host` by `hack/create-cluster.sh`:
+```bash
+hack/create-cluster.sh host $HOME/.kube/host.config
+```
+
+Install Karmada v1.2.0 by command `kubectl karmada init`:
+```bash
+kubectl karmada init --crds https://github.com/karmada-io/karmada/releases/download/v1.2.0/crds.tar.gz --kubeconfig=$HOME/.kube/host.config
+```
+
+Check installed components:
+```bash
+kubectl get pods -n karmada-system --kubeconfig=$HOME/.kube/host.config
+NAME READY STATUS RESTARTS AGE
+etcd-0 1/1 Running 0 2m55s
+karmada-aggregated-apiserver-84b45bf9b-n5gnk 1/1 Running 0 109s
+karmada-apiserver-6dc4cf6964-cz4jh 1/1 Running 0 2m40s
+karmada-controller-manager-556cf896bc-79sxz 1/1 Running 0 2m3s
+karmada-scheduler-7b9d8b5764-6n48j 1/1 Running 0 2m6s
+karmada-webhook-7cf7986866-m75jw 1/1 Running 0 2m
+kube-controller-manager-85c789dcfc-k89f8 1/1 Running 0 2m10s
+```
+
+## Install Karmada by Helm Chart Deployment
+
+Please refer to [installing by Helm](https://github.com/karmada-io/karmada/tree/master/charts/karmada).
+
+## Install Karmada by Karmada Operator
+
+Please refer to [installing by Karmada Operator](https://github.com/karmada-io/karmada/blob/master/operator/README.md)
+
+## Install Karmada by binary
+
+Please refer to [installing by binary](./install-binary.md).
+
+## Install Karmada from source
+
+Please refer to [installing from source](./fromsource.md).
+
+[1]: https://kubernetes.io/docs/tasks/extend-kubectl/kubectl-plugins/
+
+## Install Karmada for development environment
+
+If you want to try Karmada, we recommend building a development environment with
+`hack/local-up-karmada.sh`, which will do the following tasks for you:
+
+- Start a Kubernetes cluster by [kind](https://kind.sigs.k8s.io/) to run the Karmada control plane, aka. the `host cluster`.
+- Build Karmada control plane components based on a current codebase.
+- Deploy Karmada control plane components on the `host cluster`.
+- Create member clusters and join Karmada.
+
+**1. Clone Karmada repo to your machine:**
+
+```
+git clone https://github.com/karmada-io/karmada
+```
+or use your fork repo, replacing `<YOUR_GITHUB_ID>` with your GitHub ID:
+```
+git clone https://github.com/<YOUR_GITHUB_ID>/karmada
+```
+
+**2. Change to the karmada directory:**
+```
+cd karmada
+```
+
+**3. Deploy and run Karmada control plane:**
+
+Run the following script:
+
+```
+hack/local-up-karmada.sh
+```
+If everything goes well, at the end of the script output, you will see similar messages as follows:
+```
+Local Karmada is running.
+
+To start using your Karmada environment, run:
+ export KUBECONFIG="$HOME/.kube/karmada.config"
+Please use 'kubectl config use-context karmada-host/karmada-apiserver' to switch the host and control plane cluster.
+
+To manage your member clusters, run:
+ export KUBECONFIG="$HOME/.kube/members.config"
+Please use 'kubectl config use-context member1/member2/member3' to switch to the different member cluster.
+```
+
+**4. Check registered clusters**
+
+```
+kubectl get clusters --kubeconfig=/$HOME/.kube/karmada.config
+```
+
+You will get similar output as follows:
+```
+NAME VERSION MODE READY AGE
+member1 v1.23.4 Push True 7m38s
+member2 v1.23.4 Push True 7m35s
+member3 v1.23.4 Pull True 7m27s
+```
+
+There are 3 clusters named `member1`, `member2` and `member3` registered in `Push` or `Pull` mode.
diff --git a/versioned_docs/version-v1.9/key-features/features.md b/versioned_docs/version-v1.9/key-features/features.md
new file mode 100644
index 000000000..46caa6fc9
--- /dev/null
+++ b/versioned_docs/version-v1.9/key-features/features.md
@@ -0,0 +1,120 @@
+---
+title: Key Features
+---
+
+## Cross-cloud multi-cluster multi-mode management
+
+Karmada supports:
+
+* Safe isolation:
+ * Create a namespace for each cluster, prefixed with `karmada-es-`.
+* [Multi-mode](../userguide/clustermanager/cluster-registration.md) connection:
+ * Push: Karmada is directly connected to the cluster kube-apiserver.
+  * Pull: Deploy an agent component in the cluster; Karmada delegates tasks to the agent.
+* Multi-cloud support (as long as the cluster is compliant with Kubernetes specifications):
+ * Support various public cloud vendors.
+ * Support for private cloud.
+ * Support self-built clusters.
+
+The overall relationship between the member cluster and the control plane is shown in the following figure:
+
+![overall-relationship.png](../resources/key-features/overall-relationship.png)
+
+## Multi-policy multi-cluster scheduling
+
+Karmada supports:
+
+* Cluster distribution capability under [different scheduling strategies](../userguide/scheduling/resource-propagating.md):
+ * ClusterAffinity: Oriented scheduling based on ClusterName, Label, Field.
+ * Toleration: Scheduling based on Taint and Toleration.
+ * SpreadConstraint: Scheduling based on cluster topology.
+ * ReplicasScheduling: Replication mode and split mode for instanced workloads.
+* Differential configuration([OverridePolicy](../userguide/scheduling/override-policy.md)):
+  * ImageOverrider: Differentiated configuration of images.
+ * ArgsOverrider: Differentiated configuration of execution parameters.
+ * CommandOverrider: Differentiated configuration for execution commands.
+  * PlainText: Customized differentiated configuration.
+* [Support reschedule](../userguide/scheduling/descheduler.md) with following components:
+ * Descheduler(karmada-descheduler): Trigger rescheduling based on instance state changes in member clusters.
+ * Scheduler-estimator(karmada-scheduler-estimator): Provides the scheduler with a more precise desired state of the running instances of the member cluster.
+
+Much like Kubernetes scheduling, Karmada supports different scheduling policies. The overall scheduling process is shown in the figure below:
+
+![overall-relationship.png](../resources/key-features/overall-scheduling.png)
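+
+As a concrete sketch of such a strategy (field names follow the `policy.karmada.io/v1alpha1` API; the workload and cluster names are hypothetical), a PropagationPolicy that pins an nginx Deployment to two clusters by ClusterAffinity could look like:
+
+```yaml
+apiVersion: policy.karmada.io/v1alpha1
+kind: PropagationPolicy
+metadata:
+  name: nginx-propagation    # hypothetical policy name
+spec:
+  resourceSelectors:
+    - apiVersion: apps/v1
+      kind: Deployment
+      name: nginx            # hypothetical workload
+  placement:
+    clusterAffinity:
+      clusterNames:
+        - member1
+        - member2
+```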
+
+If one cluster does not have enough resources to accommodate its pods, Karmada will reschedule the pods. The overall rescheduling process is shown in the following figure:
+
+![overall-relationship.png](../resources/key-features/overall-rescheduling.png)
+
+## Cross-cluster failover of applications
+
+Karmada supports:
+
+* [Cluster failover](../userguide/failover/failover-overview.md):
+  * Karmada allows users to set distribution policies and, after a cluster failure, automatically migrates the replicas from the faulty cluster in a centralized or decentralized manner.
+* Cluster taint settings:
+ * When the user sets a taint for the cluster and the resource distribution strategy cannot tolerate the taint, Karmada will also automatically trigger the migration of the cluster replicas.
+* Uninterrupted service:
+  * During the replica migration process, Karmada can ensure that the service replicas do not drop to zero, thereby ensuring that the service will not be interrupted.
+
+Karmada supports failover for clusters; a failure in one cluster will cause failover of its replicas as follows:
+
+![overall-relationship.png](../resources/key-features/cluster-failover.png)
+
+## Global Uniform Resource View
+
+Karmada supports:
+
+* [Resource status collection and aggregation](../userguide/globalview/customizing-resource-interpreter.md): Collect and aggregate state into resource templates with the help of the Resource Interpreter.
+ * User-defined resource, triggering webhook remote calls.
+  * Built-in support in Karmada for some common resource types.
+* [Unified resource management](../userguide/globalview/aggregated-api-endpoint.md): Unified management for `create`, `update`, `delete`, `query`.
+* [Unified operations](../userguide/globalview/proxy-global-resource.md): Execute operational commands (`describe`, `exec`, `logs`) in one Kubernetes context.
+* [Global search for resources and events](../tutorials/karmada-search.md):
+ * Cache query: global fuzzy search and global precise search are supported.
+ * Third-party storage: Search engine (Elasticsearch or OpenSearch), relational database, graph database are supported.
+
+Users can access and operate all member clusters via karmada-apiserver:
+
+![overall-relationship.png](../resources/key-features/unified-operation.png)
+
+Users can also check and search resources in all member clusters via karmada-apiserver:
+
+![overall-relationship.png](../resources/key-features/unified-search.png)
+
+## Best Production Practices
+
+Karmada supports:
+
+* [Unified authentication](../userguide/bestpractices/unified-auth.md):
+ * Aggregate API unified access entry.
+ * Access control is consistent with member clusters.
+* Unified resource quota (`FederatedResourceQuota`):
+ * Globally configures the ResourceQuota of each member cluster.
+ * Configure ResourceQuota at the federation level.
+ * Collects the resource usage of each member cluster in real time.
+* Reusable scheduling strategy:
+ * Resource templates are decoupled from scheduling policies, plug and play.
+
+Users can access all member clusters with unified authentication:
+
+![overall-relationship.png](../resources/key-features/unified-access.png)
+
+Users can also define global resource quotas via `FederatedResourceQuota`:
+
+![overall-relationship.png](../resources/key-features/unified-resourcequota.png)
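+
+A minimal sketch of such a quota (field names assume the `policy.karmada.io/v1alpha1` API; the name and values are illustrative):
+
+```yaml
+apiVersion: policy.karmada.io/v1alpha1
+kind: FederatedResourceQuota
+metadata:
+  name: team-quota           # hypothetical name
+  namespace: default
+spec:
+  overall:                   # federation-level limit
+    cpu: "100"
+    memory: 200Gi
+  staticAssignments:         # per-member-cluster limits
+    - clusterName: member1
+      hard:
+        cpu: "40"
+        memory: 80Gi
+```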
+
+## Cross-cluster service governance
+
+Karmada supports:
+
+* [Multi-cluster service discovery](../userguide/service/multi-cluster-service.md):
+  * With ServiceExport and ServiceImport, cross-cluster service discovery is achieved.
+* [Multi-cluster network support](../userguide/network/working-with-submariner.md):
+ * Use `Submariner` to open up the container network between clusters.
+* [Cross-Cluster service governance via ErieCanal](../userguide/service/working-with-eriecanal.md)
+ * Integrate with `ErieCanal` to empower cross-cluster service governance.
+
+Users can enable cross-cluster service governance with Karmada:
+
+![overall-relationship.png](../resources/key-features/service-governance.png)
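+
+To make the ServiceExport/ServiceImport flow above concrete, here is a minimal sketch (the MCS API group is `multicluster.x-k8s.io/v1alpha1`; the service name and namespace are hypothetical):
+
+```yaml
+# Exported in the cluster that hosts the Service
+apiVersion: multicluster.x-k8s.io/v1alpha1
+kind: ServiceExport
+metadata:
+  name: foo
+  namespace: demo
+---
+# Imported in the clusters that consume the Service
+apiVersion: multicluster.x-k8s.io/v1alpha1
+kind: ServiceImport
+metadata:
+  name: foo
+  namespace: demo
+spec:
+  type: ClusterSetIP
+  ports:
+    - port: 80
+      protocol: TCP
+```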
diff --git a/versioned_docs/version-v1.9/reference/components/karmada-agent.md b/versioned_docs/version-v1.9/reference/components/karmada-agent.md
new file mode 100644
index 000000000..772a07167
--- /dev/null
+++ b/versioned_docs/version-v1.9/reference/components/karmada-agent.md
@@ -0,0 +1,91 @@
+---
+title: karmada-agent
+---
+
+
+
+### Synopsis
+
+The karmada-agent is the agent of member clusters. It can register a specific cluster to the Karmada control
+plane and sync manifests from the Karmada control plane to the member cluster. In addition, it also syncs the status of member
+cluster and manifests to the Karmada control plane.
+
+```
+karmada-agent [flags]
+```
+
+### Options
+
+```
+ --add_dir_header If true, adds the file directory to the header of the log messages
+ --alsologtostderr log to standard error as well as files (no effect when -logtostderr=true)
+ --bind-address string The IP address on which to listen for the --secure-port port. (default "0.0.0.0")
+ --cert-rotation-checking-interval duration The interval of checking if the certificate need to be rotated. This is only applicable if cert rotation is enabled (default 5m0s)
+ --cert-rotation-remaining-time-threshold float The threshold of remaining time of the valid certificate. This is only applicable if cert rotation is enabled. (default 0.2)
+ --cluster-api-burst int Burst to use while talking with cluster kube-apiserver. Doesn't cover events and node heartbeat apis which rate limiting is controlled by a different set of flags. (default 60)
+ --cluster-api-endpoint string APIEndpoint of the cluster.
+ --cluster-api-qps float32 QPS to use while talking with cluster kube-apiserver. Doesn't cover events and node heartbeat apis which rate limiting is controlled by a different set of flags. (default 40)
+ --cluster-cache-sync-timeout duration Timeout period waiting for cluster cache to sync. (default 2m0s)
+ --cluster-failure-threshold duration The duration of failure for the cluster to be considered unhealthy. (default 30s)
+ --cluster-lease-duration duration Specifies the expiration period of a cluster lease. (default 40s)
+ --cluster-lease-renew-interval-fraction float Specifies the cluster lease renew interval fraction. (default 0.25)
+ --cluster-name string Name of member cluster that the agent serves for.
+ --cluster-namespace string Namespace in the control plane where member cluster secrets are stored. (default "karmada-cluster")
+ --cluster-provider string Provider of the joining cluster. The Karmada scheduler can use this information to spread workloads across providers for higher availability.
+ --cluster-region string The region of the joining cluster. The Karmada scheduler can use this information to spread workloads across regions for higher availability.
+ --cluster-status-update-frequency duration Specifies how often karmada-agent posts cluster status to karmada-apiserver. Note: be cautious when changing the constant, it must work with ClusterMonitorGracePeriod in karmada-controller-manager. (default 10s)
+ --cluster-success-threshold duration The duration of successes for the cluster to be considered healthy after recovery. (default 30s)
+ --cluster-zones strings The zones of the joining cluster. The Karmada scheduler can use this information to spread workloads across zones for higher availability.
+ --concurrent-cluster-syncs int The number of Clusters that are allowed to sync concurrently. (default 5)
+ --concurrent-work-syncs int The number of Works that are allowed to sync concurrently. (default 5)
+ --controllers strings A list of controllers to enable. '*' enables all on-by-default controllers, 'foo' enables the controller named 'foo', '-foo' disables the controller named 'foo'. All controllers: certRotation, clusterStatus, endpointsliceCollect, execution, serviceExport, workStatus. (default [*])
+ --enable-cluster-resource-modeling Enable means controller would build resource modeling for each cluster by syncing Nodes and Pods resources.
+ The resource modeling might be used by the scheduler to make scheduling decisions in scenario of dynamic replica assignment based on cluster free resources.
+ Disable if it does not fit your cases for better performance. (default true)
+ --enable-pprof Enable profiling via web interface host:port/debug/pprof/.
+ --feature-gates mapStringBool A set of key=value pairs that describe feature gates for alpha/experimental features. Options are:
+ AllAlpha=true|false (ALPHA - default=false)
+ AllBeta=true|false (BETA - default=false)
+ CustomizedClusterResourceModeling=true|false (BETA - default=true)
+ Failover=true|false (BETA - default=true)
+ GracefulEviction=true|false (BETA - default=true)
+ MultiClusterService=true|false (ALPHA - default=false)
+ PropagateDeps=true|false (BETA - default=true)
+ PropagationPolicyPreemption=true|false (ALPHA - default=false)
+ ResourceQuotaEstimate=true|false (ALPHA - default=false)
+ -h, --help help for karmada-agent
+ --karmada-context string Name of the cluster context in karmada control plane kubeconfig file.
+ --karmada-kubeconfig string Path to karmada control plane kubeconfig file.
+ --karmada-kubeconfig-namespace string Namespace of the secret containing karmada-agent certificate. This is only applicable if cert rotation is enabled. (default "karmada-system")
+ --kube-api-burst int Burst to use while talking with karmada-apiserver. Doesn't cover events and node heartbeat apis which rate limiting is controlled by a different set of flags. (default 60)
+ --kube-api-qps float32 QPS to use while talking with karmada-apiserver. Doesn't cover events and node heartbeat apis which rate limiting is controlled by a different set of flags. (default 40)
+ --kubeconfig string Paths to a kubeconfig. Only required if out-of-cluster.
+ --leader-elect Start a leader election client and gain leadership before executing the main loop. Enable this when running replicated components for high availability. (default true)
+ --leader-elect-lease-duration duration The duration that non-leader candidates will wait after observing a leadership renewal until attempting to acquire leadership of a led but unrenewed leader slot. This is effectively the maximum duration that a leader can be stopped before it is replaced by another candidate. This is only applicable if leader election is enabled. (default 15s)
+ --leader-elect-renew-deadline duration The interval between attempts by the acting master to renew a leadership slot before it stops leading. This must be less than or equal to the lease duration. This is only applicable if leader election is enabled. (default 10s)
+ --leader-elect-resource-namespace string The namespace of resource object that is used for locking during leader election. (default "karmada-system")
+ --leader-elect-retry-period duration The duration the clients should wait between attempting acquisition and renewal of a leadership. This is only applicable if leader election is enabled. (default 2s)
+ --log_backtrace_at traceLocation when logging hits line file:N, emit a stack trace (default :0)
+ --log_dir string If non-empty, write log files in this directory (no effect when -logtostderr=true)
+ --log_file string If non-empty, use this log file (no effect when -logtostderr=true)
+ --log_file_max_size uint Defines the maximum size a log file can grow to (no effect when -logtostderr=true). Unit is megabytes. If the value is 0, the maximum file size is unlimited. (default 1800)
+ --logtostderr log to standard error instead of files (default true)
+ --metrics-bind-address string The TCP address that the controller should bind to for serving prometheus metrics(e.g. 127.0.0.1:8080, :8080). It can be set to "0" to disable the metrics serving. (default ":8080")
+ --one_output If true, only write logs to their native severity level (vs also writing to each lower severity level; no effect when -logtostderr=true)
+ --profiling-bind-address string The TCP address for serving profiling(e.g. 127.0.0.1:6060, :6060). This is only applicable if profiling is enabled. (default ":6060")
+ --proxy-server-address string Address of the proxy server that is used to proxy to the cluster.
+ --rate-limiter-base-delay duration The base delay for rate limiter. (default 5ms)
+ --rate-limiter-bucket-size int The bucket size for rate limier. (default 100)
+ --rate-limiter-max-delay duration The max delay for rate limiter. (default 16m40s)
+ --rate-limiter-qps int The QPS for rate limier. (default 10)
+ --report-secrets strings The secrets that are allowed to be reported to the Karmada control plane during registering. Valid values are 'KubeCredentials', 'KubeImpersonator' and 'None'. e.g 'KubeCredentials,KubeImpersonator' or 'None'. (default [KubeCredentials,KubeImpersonator])
+ --resync-period duration Base frequency the informers are resynced.
+ --secure-port int The secure port on which to serve HTTPS. (default 10357)
+ --skip_headers If true, avoid header prefixes in the log messages
+ --skip_log_headers If true, avoid headers when opening log files (no effect when -logtostderr=true)
+ --stderrthreshold severity logs at or above this threshold go to stderr when writing to files and stderr (no effect when -logtostderr=true or -alsologtostderr=false) (default 2)
+ -v, --v Level number for the log level verbosity
+ --vmodule moduleSpec comma-separated list of pattern=N settings for file-filtered logging
+```
+
+###### Auto generated by [spf13/cobra script in Karmada](https://github.com/karmada-io/karmada/tree/master/hack/tools/gencomponentdocs)
\ No newline at end of file
diff --git a/versioned_docs/version-v1.9/reference/components/karmada-aggregated-apiserver.md b/versioned_docs/version-v1.9/reference/components/karmada-aggregated-apiserver.md
new file mode 100644
index 000000000..9fd3ddb33
--- /dev/null
+++ b/versioned_docs/version-v1.9/reference/components/karmada-aggregated-apiserver.md
@@ -0,0 +1,157 @@
+---
+title: karmada-aggregated-apiserver
+---
+
+
+
+### Synopsis
+
+The karmada-aggregated-apiserver starts an aggregated server.
+It is responsible for registering the Cluster API and provides the ability to aggregate APIs,
+allowing users to access member clusters from the control plane directly.
+
+```
+karmada-aggregated-apiserver [flags]
+```
+
+### Options
+
+```
+ --add_dir_header If true, adds the file directory to the header of the log messages
+ --admission-control-config-file string File with admission control configuration.
+ --alsologtostderr log to standard error as well as files (no effect when -logtostderr=true)
+ --audit-log-batch-buffer-size int The size of the buffer to store events before batching and writing. Only used in batch mode. (default 10000)
+ --audit-log-batch-max-size int The maximum size of a batch. Only used in batch mode. (default 1)
+ --audit-log-batch-max-wait duration The amount of time to wait before force writing the batch that hadn't reached the max size. Only used in batch mode.
+ --audit-log-batch-throttle-burst int Maximum number of requests sent at the same moment if ThrottleQPS was not utilized before. Only used in batch mode.
+ --audit-log-batch-throttle-enable Whether batching throttling is enabled. Only used in batch mode.
+ --audit-log-batch-throttle-qps float32 Maximum average number of batches per second. Only used in batch mode.
+ --audit-log-compress If set, the rotated log files will be compressed using gzip.
+ --audit-log-format string Format of saved audits. "legacy" indicates 1-line text format for each event. "json" indicates structured json format. Known formats are legacy,json. (default "json")
+ --audit-log-maxage int The maximum number of days to retain old audit log files based on the timestamp encoded in their filename.
+ --audit-log-maxbackup int The maximum number of old audit log files to retain. Setting a value of 0 will mean there's no restriction on the number of files.
+ --audit-log-maxsize int The maximum size in megabytes of the audit log file before it gets rotated.
+ --audit-log-mode string Strategy for sending audit events. Blocking indicates sending events should block server responses. Batch causes the backend to buffer and write events asynchronously. Known modes are batch,blocking,blocking-strict. (default "blocking")
+ --audit-log-path string If set, all requests coming to the apiserver will be logged to this file. '-' means standard out.
+ --audit-log-truncate-enabled Whether event and batch truncating is enabled.
+ --audit-log-truncate-max-batch-size int Maximum size of the batch sent to the underlying backend. Actual serialized size can be several hundreds of bytes greater. If a batch exceeds this limit, it is split into several batches of smaller size. (default 10485760)
+ --audit-log-truncate-max-event-size int Maximum size of the audit event sent to the underlying backend. If the size of an event is greater than this number, first request and response are removed, and if this doesn't reduce the size enough, event is discarded. (default 102400)
+ --audit-log-version string API group and version used for serializing audit events written to log. (default "audit.k8s.io/v1")
+ --audit-policy-file string Path to the file that defines the audit policy configuration.
+ --audit-webhook-batch-buffer-size int The size of the buffer to store events before batching and writing. Only used in batch mode. (default 10000)
+ --audit-webhook-batch-max-size int The maximum size of a batch. Only used in batch mode. (default 400)
+ --audit-webhook-batch-max-wait duration The amount of time to wait before force writing the batch that hadn't reached the max size. Only used in batch mode. (default 30s)
+ --audit-webhook-batch-throttle-burst int Maximum number of requests sent at the same moment if ThrottleQPS was not utilized before. Only used in batch mode. (default 15)
+ --audit-webhook-batch-throttle-enable Whether batching throttling is enabled. Only used in batch mode. (default true)
+ --audit-webhook-batch-throttle-qps float32 Maximum average number of batches per second. Only used in batch mode. (default 10)
+ --audit-webhook-config-file string Path to a kubeconfig formatted file that defines the audit webhook configuration.
+ --audit-webhook-initial-backoff duration The amount of time to wait before retrying the first failed request. (default 10s)
+ --audit-webhook-mode string Strategy for sending audit events. Blocking indicates sending events should block server responses. Batch causes the backend to buffer and write events asynchronously. Known modes are batch,blocking,blocking-strict. (default "batch")
+ --audit-webhook-truncate-enabled Whether event and batch truncating is enabled.
+ --audit-webhook-truncate-max-batch-size int Maximum size of the batch sent to the underlying backend. Actual serialized size can be several hundreds of bytes greater. If a batch exceeds this limit, it is split into several batches of smaller size. (default 10485760)
+ --audit-webhook-truncate-max-event-size int Maximum size of the audit event sent to the underlying backend. If the size of an event is greater than this number, first request and response are removed, and if this doesn't reduce the size enough, event is discarded. (default 102400)
+ --audit-webhook-version string API group and version used for serializing audit events written to webhook. (default "audit.k8s.io/v1")
+ --authentication-kubeconfig string kubeconfig file pointing at the 'core' kubernetes server with enough rights to create tokenreviews.authentication.k8s.io.
+ --authentication-skip-lookup If false, the authentication-kubeconfig will be used to lookup missing authentication configuration from the cluster.
+ --authentication-token-webhook-cache-ttl duration The duration to cache responses from the webhook token authenticator. (default 10s)
+ --authentication-tolerate-lookup-failure If true, failures to look up missing authentication configuration from the cluster are not considered fatal. Note that this can result in authentication that treats all requests as anonymous.
+ --authorization-always-allow-paths strings A list of HTTP paths to skip during authorization, i.e. these are authorized without contacting the 'core' kubernetes server. (default [/healthz,/readyz,/livez])
+ --authorization-kubeconfig string kubeconfig file pointing at the 'core' kubernetes server with enough rights to create subjectaccessreviews.authorization.k8s.io.
+ --authorization-webhook-cache-authorized-ttl duration The duration to cache 'authorized' responses from the webhook authorizer. (default 10s)
+ --authorization-webhook-cache-unauthorized-ttl duration The duration to cache 'unauthorized' responses from the webhook authorizer. (default 10s)
+ --bind-address ip The IP address on which to listen for the --secure-port port. The associated interface(s) must be reachable by the rest of the cluster, and by CLI/web clients. If blank or an unspecified address (0.0.0.0 or ::), all interfaces and IP address families will be used. (default 0.0.0.0)
+ --cert-dir string The directory where the TLS certs are located. If --tls-cert-file and --tls-private-key-file are provided, this flag will be ignored. (default "apiserver.local.config/certificates")
+ --client-ca-file string If set, any request presenting a client certificate signed by one of the authorities in the client-ca-file is authenticated with an identity corresponding to the CommonName of the client certificate.
+ --contention-profiling Enable block profiling, if profiling is enabled
+ --debug-socket-path string Use an unprotected (no authn/authz) unix-domain socket for profiling with the given path
+ --delete-collection-workers int Number of workers spawned for DeleteCollection call. These are used to speed up namespace cleanup. (default 1)
+ --disable-admission-plugins strings admission plugins that should be disabled although they are in the default enabled plugins list (NamespaceLifecycle, MutatingAdmissionWebhook, ValidatingAdmissionPolicy, ValidatingAdmissionWebhook). Comma-delimited list of admission plugins: MutatingAdmissionWebhook, NamespaceLifecycle, ValidatingAdmissionPolicy, ValidatingAdmissionWebhook. The order of plugins in this flag does not matter.
+ --egress-selector-config-file string File with apiserver egress selector configuration.
+ --enable-admission-plugins strings admission plugins that should be enabled in addition to default enabled ones (NamespaceLifecycle, MutatingAdmissionWebhook, ValidatingAdmissionPolicy, ValidatingAdmissionWebhook). Comma-delimited list of admission plugins: MutatingAdmissionWebhook, NamespaceLifecycle, ValidatingAdmissionPolicy, ValidatingAdmissionWebhook. The order of plugins in this flag does not matter.
+ --enable-garbage-collector Enables the generic garbage collector. MUST be synced with the corresponding flag of the kube-controller-manager. (default true)
+ --enable-pprof Enable profiling via web interface host:port/debug/pprof/.
+ --encryption-provider-config string The file containing configuration for encryption providers to be used for storing secrets in etcd
+ --encryption-provider-config-automatic-reload Determines if the file set by --encryption-provider-config should be automatically reloaded if the disk contents change. Setting this to true disables the ability to uniquely identify distinct KMS plugins via the API server healthz endpoints.
+ --etcd-cafile string SSL Certificate Authority file used to secure etcd communication.
+ --etcd-certfile string SSL certification file used to secure etcd communication.
+ --etcd-compaction-interval duration The interval of compaction requests. If 0, the compaction request from apiserver is disabled. (default 5m0s)
+ --etcd-count-metric-poll-period duration Frequency of polling etcd for number of resources per type. 0 disables the metric collection. (default 1m0s)
+ --etcd-db-metric-poll-interval duration The interval of requests to poll etcd and update metric. 0 disables the metric collection (default 30s)
+ --etcd-healthcheck-timeout duration The timeout to use when checking etcd health. (default 2s)
+ --etcd-keyfile string SSL key file used to secure etcd communication.
+ --etcd-prefix string The prefix to prepend to all resource paths in etcd. (default "/registry")
+ --etcd-readycheck-timeout duration The timeout to use when checking etcd readiness (default 2s)
+ --etcd-servers strings List of etcd servers to connect with (scheme://ip:port), comma separated.
+ --etcd-servers-overrides strings Per-resource etcd servers overrides, comma separated. The individual override format: group/resource#servers, where servers are URLs, semicolon separated. Note that this applies only to resources compiled into this server binary.
+ --feature-gates mapStringBool A set of key=value pairs that describe feature gates for alpha/experimental features. Options are:
+ APIListChunking=true|false (BETA - default=true)
+ APIPriorityAndFairness=true|false (BETA - default=true)
+ APIResponseCompression=true|false (BETA - default=true)
+ APIServerIdentity=true|false (BETA - default=true)
+ APIServerTracing=true|false (BETA - default=true)
+ AdmissionWebhookMatchConditions=true|false (BETA - default=true)
+ AggregatedDiscoveryEndpoint=true|false (BETA - default=true)
+ AllAlpha=true|false (ALPHA - default=false)
+ AllBeta=true|false (BETA - default=false)
+ ComponentSLIs=true|false (BETA - default=true)
+ ConsistentListFromCache=true|false (ALPHA - default=false)
+ CustomResourceValidationExpressions=true|false (BETA - default=true)
+ CustomizedClusterResourceModeling=true|false (BETA - default=true)
+ Failover=true|false (BETA - default=true)
+ GracefulEviction=true|false (BETA - default=true)
+ InPlacePodVerticalScaling=true|false (ALPHA - default=false)
+ KMSv2=true|false (BETA - default=true)
+ KMSv2KDF=true|false (BETA - default=false)
+ MultiClusterService=true|false (ALPHA - default=false)
+ OpenAPIEnums=true|false (BETA - default=true)
+ PropagateDeps=true|false (BETA - default=true)
+ PropagationPolicyPreemption=true|false (ALPHA - default=false)
+ RemainingItemCount=true|false (BETA - default=true)
+ ResourceQuotaEstimate=true|false (ALPHA - default=false)
+ StorageVersionAPI=true|false (ALPHA - default=false)
+ StorageVersionHash=true|false (BETA - default=true)
+ UnauthenticatedHTTP2DOSMitigation=true|false (BETA - default=false)
+ ValidatingAdmissionPolicy=true|false (BETA - default=false)
+ WatchList=true|false (ALPHA - default=false)
+ -h, --help help for karmada-aggregated-apiserver
+ --http2-max-streams-per-connection int The limit that the server gives to clients for the maximum number of streams in an HTTP/2 connection. Zero means to use golang's default. (default 1000)
+ --kube-api-burst int Burst to use while talking with karmada-apiserver. Doesn't cover events and node heartbeat apis which rate limiting is controlled by a different set of flags. (default 60)
+ --kube-api-qps float32 QPS to use while talking with karmada-apiserver. Doesn't cover events and node heartbeat apis which rate limiting is controlled by a different set of flags. (default 40)
+ --kubeconfig string Path to karmada control plane kubeconfig file.
+ --lease-reuse-duration-seconds int The time in seconds that each lease is reused. A lower value could avoid large number of objects reusing the same lease. Notice that a too small value may cause performance problems at storage layer. (default 60)
+ --log_backtrace_at traceLocation when logging hits line file:N, emit a stack trace (default :0)
+ --log_dir string If non-empty, write log files in this directory (no effect when -logtostderr=true)
+ --log_file string If non-empty, use this log file (no effect when -logtostderr=true)
+ --log_file_max_size uint Defines the maximum size a log file can grow to (no effect when -logtostderr=true). Unit is megabytes. If the value is 0, the maximum file size is unlimited. (default 1800)
+ --logtostderr log to standard error instead of files (default true)
+ --one_output If true, only write logs to their native severity level (vs also writing to each lower severity level; no effect when -logtostderr=true)
+ --permit-address-sharing If true, SO_REUSEADDR will be used when binding the port. This allows binding to wildcard IPs like 0.0.0.0 and specific IPs in parallel, and it avoids waiting for the kernel to release sockets in TIME_WAIT state. [default=false]
+ --permit-port-sharing If true, SO_REUSEPORT will be used when binding the port, which allows more than one instance to bind on the same address and port. [default=false]
+ --profiling Enable profiling via web interface host:port/debug/pprof/ (default true)
+ --profiling-bind-address string The TCP address for serving profiling(e.g. 127.0.0.1:6060, :6060). This is only applicable if profiling is enabled. (default ":6060")
+ --requestheader-allowed-names strings List of client certificate common names to allow to provide usernames in headers specified by --requestheader-username-headers. If empty, any client certificate validated by the authorities in --requestheader-client-ca-file is allowed.
+ --requestheader-client-ca-file string Root certificate bundle to use to verify client certificates on incoming requests before trusting usernames in headers specified by --requestheader-username-headers. WARNING: generally do not depend on authorization being already done for incoming requests.
+ --requestheader-extra-headers-prefix strings List of request header prefixes to inspect. X-Remote-Extra- is suggested. (default [x-remote-extra-])
+ --requestheader-group-headers strings List of request headers to inspect for groups. X-Remote-Group is suggested. (default [x-remote-group])
+ --requestheader-username-headers strings List of request headers to inspect for usernames. X-Remote-User is common. (default [x-remote-user])
+ --secure-port int The port on which to serve HTTPS with authentication and authorization. If 0, don't serve HTTPS at all. (default 443)
+ --skip_headers If true, avoid header prefixes in the log messages
+ --skip_log_headers If true, avoid headers when opening log files (no effect when -logtostderr=true)
+ --stderrthreshold severity logs at or above this threshold go to stderr when writing to files and stderr (no effect when -logtostderr=true or -alsologtostderr=false) (default 2)
+ --storage-backend string The storage backend for persistence. Options: 'etcd3' (default).
+ --storage-media-type string The media type to use to store objects in storage. Some resources or storage backends may only support a specific media type and will ignore this setting. Supported media types: [application/json, application/yaml, application/vnd.kubernetes.protobuf] (default "application/json")
+ --tls-cert-file string File containing the default x509 Certificate for HTTPS. (CA cert, if any, concatenated after server cert). If HTTPS serving is enabled, and --tls-cert-file and --tls-private-key-file are not provided, a self-signed certificate and key are generated for the public address and saved to the directory specified by --cert-dir.
+ --tls-cipher-suites strings Comma-separated list of cipher suites for the server. If omitted, the default Go cipher suites will be used.
+ Preferred values: TLS_AES_128_GCM_SHA256, TLS_AES_256_GCM_SHA384, TLS_CHACHA20_POLY1305_SHA256, TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA, TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA, TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384, TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305, TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256, TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384, TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305, TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256, TLS_RSA_WITH_AES_128_CBC_SHA, TLS_RSA_WITH_AES_128_GCM_SHA256, TLS_RSA_WITH_AES_256_CBC_SHA, TLS_RSA_WITH_AES_256_GCM_SHA384.
+ Insecure values: TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256, TLS_ECDHE_ECDSA_WITH_RC4_128_SHA, TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256, TLS_ECDHE_RSA_WITH_RC4_128_SHA, TLS_RSA_WITH_3DES_EDE_CBC_SHA, TLS_RSA_WITH_AES_128_CBC_SHA256, TLS_RSA_WITH_RC4_128_SHA.
+ --tls-min-version string Minimum TLS version supported. Possible values: VersionTLS10, VersionTLS11, VersionTLS12, VersionTLS13
+ --tls-private-key-file string File containing the default x509 private key matching --tls-cert-file.
+ --tls-sni-cert-key namedCertKey A pair of x509 certificate and private key file paths, optionally suffixed with a list of domain patterns which are fully qualified domain names, possibly with prefixed wildcard segments. The domain patterns also allow IP addresses, but IPs should only be used if the apiserver has visibility to the IP address requested by a client. If no domain patterns are provided, the names of the certificate are extracted. Non-wildcard matches trump over wildcard matches, explicit domain patterns trump over extracted names. For multiple key/certificate pairs, use the --tls-sni-cert-key multiple times. Examples: "example.crt,example.key" or "foo.crt,foo.key:*.foo.com,foo.com". (default [])
+ --tracing-config-file string File with apiserver tracing configuration.
+ -v, --v Level number for the log level verbosity
+ --vmodule moduleSpec comma-separated list of pattern=N settings for file-filtered logging
+ --watch-cache Enable watch caching in the apiserver (default true)
+ --watch-cache-sizes strings Watch cache size settings for some resources (pods, nodes, etc.), comma separated. The individual setting format: resource[.group]#size, where resource is lowercase plural (no version), group is omitted for resources of apiVersion v1 (the legacy core API) and included for others, and size is a number. This option is only meaningful for resources built into the apiserver, not ones defined by CRDs or aggregated from external servers, and is only consulted if the watch-cache is enabled. The only meaningful size setting to supply here is zero, which means to disable watch caching for the associated resource; all non-zero values are equivalent and mean to not disable watch caching for that resource
+```
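+
+As a minimal illustrative sketch (the kubeconfig path, etcd endpoint, and certificate paths below are placeholder values, not defaults), several of the flags above are typically combined when pointing the aggregated API server at a dedicated etcd:
+
+```
+karmada-aggregated-apiserver \
+  --kubeconfig=/etc/karmada/karmada-apiserver.config \
+  --etcd-servers=https://etcd-client.karmada-system.svc:2379 \
+  --etcd-cafile=/etc/karmada/pki/etcd-ca.crt \
+  --etcd-certfile=/etc/karmada/pki/etcd-client.crt \
+  --etcd-keyfile=/etc/karmada/pki/etcd-client.key \
+  --tls-cert-file=/etc/karmada/pki/karmada.crt \
+  --tls-private-key-file=/etc/karmada/pki/karmada.key \
+  --secure-port=443
+```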
+
+###### Auto generated by [spf13/cobra script in Karmada](https://github.com/karmada-io/karmada/tree/master/hack/tools/gencomponentdocs)
\ No newline at end of file
diff --git a/versioned_docs/version-v1.9/reference/components/karmada-controller-manager.md b/versioned_docs/version-v1.9/reference/components/karmada-controller-manager.md
new file mode 100644
index 000000000..441db73ec
--- /dev/null
+++ b/versioned_docs/version-v1.9/reference/components/karmada-controller-manager.md
@@ -0,0 +1,107 @@
+---
+title: karmada-controller-manager
+---
+
+
+
+### Synopsis
+
+The karmada-controller-manager runs various controllers.
+The controllers watch Karmada objects and then talk to the underlying clusters' API servers
+to create regular Kubernetes resources.
+
+```
+karmada-controller-manager [flags]
+```
+
+### Options
+
+```
+ --add_dir_header If true, adds the file directory to the header of the log messages
+ --alsologtostderr log to standard error as well as files (no effect when -logtostderr=true)
+ --bind-address string The IP address on which to listen for the --secure-port port. (default "0.0.0.0")
+ --cluster-api-burst int Burst to use while talking with cluster kube-apiserver. Doesn't cover events and node heartbeat apis which rate limiting is controlled by a different set of flags. (default 60)
+ --cluster-api-context string Name of the cluster context in cluster-api management cluster kubeconfig file.
+ --cluster-api-kubeconfig string Path to the cluster-api management cluster kubeconfig file.
+ --cluster-api-qps float32 QPS to use while talking with cluster kube-apiserver. Doesn't cover events and node heartbeat apis which rate limiting is controlled by a different set of flags. (default 40)
+ --cluster-cache-sync-timeout duration Timeout period waiting for cluster cache to sync. (default 2m0s)
+ --cluster-failure-threshold duration The duration of failure for the cluster to be considered unhealthy. (default 30s)
+ --cluster-lease-duration duration Specifies the expiration period of a cluster lease. (default 40s)
+ --cluster-lease-renew-interval-fraction float Specifies the cluster lease renew interval fraction. (default 0.25)
+ --cluster-monitor-grace-period duration Specifies the grace period of allowing a running cluster to be unresponsive before marking it unhealthy. (default 40s)
+ --cluster-monitor-period duration Specifies how often karmada-controller-manager monitors cluster health status. (default 5s)
+ --cluster-startup-grace-period duration Specifies the grace period of allowing a cluster to be unresponsive during startup before marking it unhealthy. (default 1m0s)
+ --cluster-status-update-frequency duration Specifies how often karmada-controller-manager posts cluster status to karmada-apiserver. (default 10s)
+ --cluster-success-threshold duration The duration of successes for the cluster to be considered healthy after recovery. (default 30s)
+ --concurrent-cluster-propagation-policy-syncs int The number of ClusterPropagationPolicy that are allowed to sync concurrently. (default 1)
+ --concurrent-cluster-syncs int The number of Clusters that are allowed to sync concurrently. (default 5)
+ --concurrent-clusterresourcebinding-syncs int The number of ClusterResourceBindings that are allowed to sync concurrently. (default 5)
+ --concurrent-namespace-syncs int The number of Namespaces that are allowed to sync concurrently. (default 1)
+ --concurrent-propagation-policy-syncs int The number of PropagationPolicy that are allowed to sync concurrently. (default 1)
+ --concurrent-resource-template-syncs int The number of resource templates that are allowed to sync concurrently. (default 5)
+ --concurrent-resourcebinding-syncs int The number of ResourceBindings that are allowed to sync concurrently. (default 5)
+ --concurrent-work-syncs int The number of Works that are allowed to sync concurrently. (default 5)
+ --controllers strings A list of controllers to enable. '*' enables all on-by-default controllers, 'foo' enables the controller named 'foo', '-foo' disables the controller named 'foo'.
+ All controllers: applicationFailover, binding, bindingStatus, cluster, clusterStatus, cronFederatedHorizontalPodAutoscaler, endpointSlice, endpointsliceCollect, endpointsliceDispatch, execution, federatedHorizontalPodAutoscaler, federatedResourceQuotaStatus, federatedResourceQuotaSync, gracefulEviction, hpaReplicasSyncer, multiclusterservice, namespace, remedy, serviceExport, serviceImport, unifiedAuth, workStatus.
+ Disabled-by-default controllers: hpaReplicasSyncer (default [*])
+ --enable-cluster-resource-modeling Enable means controller would build resource modeling for each cluster by syncing Nodes and Pods resources.
+ The resource modeling might be used by the scheduler to make scheduling decisions in scenario of dynamic replica assignment based on cluster free resources.
+ Disable if it does not fit your cases for better performance. (default true)
+ --enable-pprof Enable profiling via web interface host:port/debug/pprof/.
+ --enable-taint-manager If set to true enables NoExecute Taints and will evict all not-tolerating objects propagating on Clusters tainted with this kind of Taints. (default true)
+ --failover-eviction-timeout duration Specifies the grace period for deleting scheduling result on failed clusters. (default 5m0s)
+ --feature-gates mapStringBool A set of key=value pairs that describe feature gates for alpha/experimental features. Options are:
+ AllAlpha=true|false (ALPHA - default=false)
+ AllBeta=true|false (BETA - default=false)
+ CustomizedClusterResourceModeling=true|false (BETA - default=true)
+ Failover=true|false (BETA - default=true)
+ GracefulEviction=true|false (BETA - default=true)
+ MultiClusterService=true|false (ALPHA - default=false)
+ PropagateDeps=true|false (BETA - default=true)
+ PropagationPolicyPreemption=true|false (ALPHA - default=false)
+ ResourceQuotaEstimate=true|false (ALPHA - default=false)
+ --graceful-eviction-timeout duration Specifies the timeout period waiting for the graceful-eviction-controller performs the final removal since the workload(resource) has been moved to the graceful eviction tasks. (default 10m0s)
+ -h, --help help for karmada-controller-manager
+ --horizontal-pod-autoscaler-cpu-initialization-period duration The period after pod start when CPU samples might be skipped. (default 5m0s)
+ --horizontal-pod-autoscaler-downscale-delay duration The period since last downscale, before another downscale can be performed in horizontal pod autoscaler. (default 5m0s)
+ --horizontal-pod-autoscaler-downscale-stabilization duration The period for which autoscaler will look backwards and not scale down below any recommendation it made during that period. (default 5m0s)
+ --horizontal-pod-autoscaler-initial-readiness-delay duration The period after pod start during which readiness changes will be treated as initial readiness. (default 30s)
+ --horizontal-pod-autoscaler-sync-period duration The period for syncing the number of pods in horizontal pod autoscaler. (default 15s)
+ --horizontal-pod-autoscaler-tolerance float The minimum change (from 1.0) in the desired-to-actual metrics ratio for the horizontal pod autoscaler to consider scaling. (default 0.1)
+ --horizontal-pod-autoscaler-upscale-delay duration The period since last upscale, before another upscale can be performed in horizontal pod autoscaler. (default 3m0s)
+ --kube-api-burst int Burst to use while talking with karmada-apiserver. Doesn't cover events and node heartbeat apis which rate limiting is controlled by a different set of flags. (default 60)
+ --kube-api-qps float32 QPS to use while talking with karmada-apiserver. Doesn't cover events and node heartbeat apis which rate limiting is controlled by a different set of flags. (default 40)
+ --kubeconfig string Path to karmada control plane kubeconfig file.
+ --leader-elect Start a leader election client and gain leadership before executing the main loop. Enable this when running replicated components for high availability. (default true)
+ --leader-elect-lease-duration duration The duration that non-leader candidates will wait after observing a leadership renewal until attempting to acquire leadership of a led but unrenewed leader slot. This is effectively the maximum duration that a leader can be stopped before it is replaced by another candidate. This is only applicable if leader election is enabled. (default 15s)
+ --leader-elect-renew-deadline duration The interval between attempts by the acting master to renew a leadership slot before it stops leading. This must be less than or equal to the lease duration. This is only applicable if leader election is enabled. (default 10s)
+ --leader-elect-resource-namespace string The namespace of resource object that is used for locking during leader election. (default "karmada-system")
+ --leader-elect-retry-period duration The duration the clients should wait between attempting acquisition and renewal of a leadership. This is only applicable if leader election is enabled. (default 2s)
+ --log_backtrace_at traceLocation when logging hits line file:N, emit a stack trace (default :0)
+ --log_dir string If non-empty, write log files in this directory (no effect when -logtostderr=true)
+ --log_file string If non-empty, use this log file (no effect when -logtostderr=true)
+ --log_file_max_size uint Defines the maximum size a log file can grow to (no effect when -logtostderr=true). Unit is megabytes. If the value is 0, the maximum file size is unlimited. (default 1800)
+ --logtostderr log to standard error instead of files (default true)
+ --metrics-bind-address string The TCP address that the controller should bind to for serving prometheus metrics(e.g. 127.0.0.1:8080, :8080). It can be set to "0" to disable the metrics serving. (default ":8080")
+ --one_output If true, only write logs to their native severity level (vs also writing to each lower severity level; no effect when -logtostderr=true)
+ --profiling-bind-address string The TCP address for serving profiling(e.g. 127.0.0.1:6060, :6060). This is only applicable if profiling is enabled. (default ":6060")
+ --rate-limiter-base-delay duration The base delay for rate limiter. (default 5ms)
+      --rate-limiter-bucket-size int                                     The bucket size for rate limiter. (default 100)
+ --rate-limiter-max-delay duration The max delay for rate limiter. (default 16m40s)
+      --rate-limiter-qps int                                             The QPS for rate limiter. (default 10)
+ --resync-period duration Base frequency the informers are resynced.
+ --secure-port int The secure port on which to serve HTTPS. (default 10357)
+ --skip_headers If true, avoid header prefixes in the log messages
+ --skip_log_headers If true, avoid headers when opening log files (no effect when -logtostderr=true)
+ --skipped-propagating-apis string Semicolon separated resources that should be skipped from propagating in addition to the default skip list(cluster.karmada.io;policy.karmada.io;work.karmada.io). Supported formats are:
+                                                                       <group> for skip resources with a specific API group(e.g. networking.k8s.io),
+                                                                       <group>/<version> for skip resources with a specific API version(e.g. networking.k8s.io/v1beta1),
+                                                                       <group>/<version>/<kind>,<kind> for skip one or more specific resource(e.g. networking.k8s.io/v1beta1/Ingress,IngressClass) where the kinds are case-insensitive.
+ --skipped-propagating-namespaces strings Comma-separated namespaces that should be skipped from propagating.
+ Note: 'karmada-system', 'karmada-cluster' and 'karmada-es-.*' are Karmada reserved namespaces that will always be skipped. (default [kube-.*])
+ --stderrthreshold severity logs at or above this threshold go to stderr when writing to files and stderr (no effect when -logtostderr=true or -alsologtostderr=false) (default 2)
+ -v, --v Level number for the log level verbosity
+ --vmodule moduleSpec comma-separated list of pattern=N settings for file-filtered logging
+```
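+
+As an illustrative sketch only (the kubeconfig path is a placeholder, and the controller and concurrency choices are examples rather than recommendations), several of the flags above can be combined like this:
+
+```
+karmada-controller-manager \
+  --kubeconfig=/etc/karmada/karmada-apiserver.config \
+  --controllers=*,-hpaReplicasSyncer \
+  --concurrent-work-syncs=10 \
+  --cluster-status-update-frequency=10s \
+  --leader-elect=true
+```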
+
+###### Auto generated by [spf13/cobra script in Karmada](https://github.com/karmada-io/karmada/tree/master/hack/tools/gencomponentdocs)
\ No newline at end of file
diff --git a/versioned_docs/version-v1.9/reference/components/karmada-descheduler.md b/versioned_docs/version-v1.9/reference/components/karmada-descheduler.md
new file mode 100644
index 000000000..4d7687e6f
--- /dev/null
+++ b/versioned_docs/version-v1.9/reference/components/karmada-descheduler.md
@@ -0,0 +1,51 @@
+---
+title: karmada-descheduler
+---
+
+
+
+### Synopsis
+
+The karmada-descheduler evicts replicas from member clusters
+if they fail to be scheduled for a period of time. It relies on the
+karmada-scheduler-estimator to get replica status.
+
+```
+karmada-descheduler [flags]
+```
+
+### Options
+
+```
+ --add_dir_header If true, adds the file directory to the header of the log messages
+ --alsologtostderr log to standard error as well as files (no effect when -logtostderr=true)
+ --bind-address string The IP address on which to listen for the --secure-port port. (default "0.0.0.0")
+ --descheduling-interval duration Time interval between two consecutive descheduler executions. Setting this value instructs the descheduler to run in a continuous loop at the interval specified. (default 2m0s)
+ --enable-pprof Enable profiling via web interface host:port/debug/pprof/.
+ -h, --help help for karmada-descheduler
+ --kube-api-burst int Burst to use while talking with karmada-apiserver. Doesn't cover events and node heartbeat apis which rate limiting is controlled by a different set of flags. (default 60)
+ --kube-api-qps float32 QPS to use while talking with karmada-apiserver. Doesn't cover events and node heartbeat apis which rate limiting is controlled by a different set of flags. (default 40)
+ --kubeconfig string Path to karmada control plane kubeconfig file.
+ --leader-elect Enable leader election, which must be true when running multi instances. (default true)
+ --leader-elect-resource-namespace string The namespace of resource object that is used for locking during leader election. (default "karmada-system")
+ --log_backtrace_at traceLocation when logging hits line file:N, emit a stack trace (default :0)
+ --log_dir string If non-empty, write log files in this directory (no effect when -logtostderr=true)
+ --log_file string If non-empty, use this log file (no effect when -logtostderr=true)
+ --log_file_max_size uint Defines the maximum size a log file can grow to (no effect when -logtostderr=true). Unit is megabytes. If the value is 0, the maximum file size is unlimited. (default 1800)
+ --logtostderr log to standard error instead of files (default true)
+ --master string The address of the Kubernetes API server. Overrides any value in KubeConfig. Only required if out-of-cluster.
+ --one_output If true, only write logs to their native severity level (vs also writing to each lower severity level; no effect when -logtostderr=true)
+ --profiling-bind-address string The TCP address for serving profiling(e.g. 127.0.0.1:6060, :6060). This is only applicable if profiling is enabled. (default ":6060")
+ --scheduler-estimator-port int The secure port on which to connect the accurate scheduler estimator. (default 10352)
+ --scheduler-estimator-service-prefix string The prefix of scheduler estimator service name (default "karmada-scheduler-estimator")
+ --scheduler-estimator-timeout duration Specifies the timeout period of calling the scheduler estimator service. (default 3s)
+ --secure-port int The secure port on which to serve HTTPS. (default 10358)
+ --skip_headers If true, avoid header prefixes in the log messages
+ --skip_log_headers If true, avoid headers when opening log files (no effect when -logtostderr=true)
+ --stderrthreshold severity logs at or above this threshold go to stderr when writing to files and stderr (no effect when -logtostderr=true or -alsologtostderr=false) (default 2)
+ --unschedulable-threshold duration The period of pod unschedulable condition. This value is considered as a classification standard of unschedulable replicas. (default 5m0s)
+ -v, --v Level number for the log level verbosity
+ --vmodule moduleSpec comma-separated list of pattern=N settings for file-filtered logging
+```
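+
+For illustration only (the kubeconfig path and durations are example values, not recommendations), a run that shortens the descheduling loop and the unschedulable threshold might look like:
+
+```
+karmada-descheduler \
+  --kubeconfig=/etc/karmada/karmada-apiserver.config \
+  --descheduling-interval=1m \
+  --unschedulable-threshold=3m \
+  --leader-elect=true
+```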
+
+###### Auto generated by [spf13/cobra script in Karmada](https://github.com/karmada-io/karmada/tree/master/hack/tools/gencomponentdocs)
\ No newline at end of file
diff --git a/versioned_docs/version-v1.9/reference/components/karmada-metrics-adapter.md b/versioned_docs/version-v1.9/reference/components/karmada-metrics-adapter.md
new file mode 100644
index 000000000..c332169db
--- /dev/null
+++ b/versioned_docs/version-v1.9/reference/components/karmada-metrics-adapter.md
@@ -0,0 +1,96 @@
+---
+title: karmada-metrics-adapter
+---
+
+
+
+### Synopsis
+
+The karmada-metrics-adapter is an adapter that aggregates metrics from member clusters.
+
+```
+karmada-metrics-adapter [flags]
+```
+
+### Options
+
+```
+ --add_dir_header If true, adds the file directory to the header of the log messages
+ --alsologtostderr log to standard error as well as files (no effect when -logtostderr=true)
+ --audit-log-batch-buffer-size int The size of the buffer to store events before batching and writing. Only used in batch mode. (default 10000)
+ --audit-log-batch-max-size int The maximum size of a batch. Only used in batch mode. (default 1)
+ --audit-log-batch-max-wait duration The amount of time to wait before force writing the batch that hadn't reached the max size. Only used in batch mode.
+ --audit-log-batch-throttle-burst int Maximum number of requests sent at the same moment if ThrottleQPS was not utilized before. Only used in batch mode.
+ --audit-log-batch-throttle-enable Whether batching throttling is enabled. Only used in batch mode.
+ --audit-log-batch-throttle-qps float32 Maximum average number of batches per second. Only used in batch mode.
+ --audit-log-compress If set, the rotated log files will be compressed using gzip.
+ --audit-log-format string Format of saved audits. "legacy" indicates 1-line text format for each event. "json" indicates structured json format. Known formats are legacy,json. (default "json")
+ --audit-log-maxage int The maximum number of days to retain old audit log files based on the timestamp encoded in their filename.
+ --audit-log-maxbackup int The maximum number of old audit log files to retain. Setting a value of 0 will mean there's no restriction on the number of files.
+ --audit-log-maxsize int The maximum size in megabytes of the audit log file before it gets rotated.
+ --audit-log-mode string Strategy for sending audit events. Blocking indicates sending events should block server responses. Batch causes the backend to buffer and write events asynchronously. Known modes are batch,blocking,blocking-strict. (default "blocking")
+ --audit-log-path string If set, all requests coming to the apiserver will be logged to this file. '-' means standard out.
+ --audit-log-truncate-enabled Whether event and batch truncating is enabled.
+ --audit-log-truncate-max-batch-size int Maximum size of the batch sent to the underlying backend. Actual serialized size can be several hundreds of bytes greater. If a batch exceeds this limit, it is split into several batches of smaller size. (default 10485760)
+ --audit-log-truncate-max-event-size int Maximum size of the audit event sent to the underlying backend. If the size of an event is greater than this number, first request and response are removed, and if this doesn't reduce the size enough, event is discarded. (default 102400)
+ --audit-log-version string API group and version used for serializing audit events written to log. (default "audit.k8s.io/v1")
+ --audit-policy-file string Path to the file that defines the audit policy configuration.
+ --audit-webhook-batch-buffer-size int The size of the buffer to store events before batching and writing. Only used in batch mode. (default 10000)
+ --audit-webhook-batch-max-size int The maximum size of a batch. Only used in batch mode. (default 400)
+ --audit-webhook-batch-max-wait duration The amount of time to wait before force writing the batch that hadn't reached the max size. Only used in batch mode. (default 30s)
+ --audit-webhook-batch-throttle-burst int Maximum number of requests sent at the same moment if ThrottleQPS was not utilized before. Only used in batch mode. (default 15)
+ --audit-webhook-batch-throttle-enable Whether batching throttling is enabled. Only used in batch mode. (default true)
+ --audit-webhook-batch-throttle-qps float32 Maximum average number of batches per second. Only used in batch mode. (default 10)
+ --audit-webhook-config-file string Path to a kubeconfig formatted file that defines the audit webhook configuration.
+ --audit-webhook-initial-backoff duration The amount of time to wait before retrying the first failed request. (default 10s)
+ --audit-webhook-mode string Strategy for sending audit events. Blocking indicates sending events should block server responses. Batch causes the backend to buffer and write events asynchronously. Known modes are batch,blocking,blocking-strict. (default "batch")
+ --audit-webhook-truncate-enabled Whether event and batch truncating is enabled.
+ --audit-webhook-truncate-max-batch-size int Maximum size of the batch sent to the underlying backend. Actual serialized size can be several hundreds of bytes greater. If a batch exceeds this limit, it is split into several batches of smaller size. (default 10485760)
+ --audit-webhook-truncate-max-event-size int Maximum size of the audit event sent to the underlying backend. If the size of an event is greater than this number, first request and response are removed, and if this doesn't reduce the size enough, event is discarded. (default 102400)
+ --audit-webhook-version string API group and version used for serializing audit events written to webhook. (default "audit.k8s.io/v1")
+ --authentication-kubeconfig string kubeconfig file pointing at the 'core' kubernetes server with enough rights to create tokenreviews.authentication.k8s.io.
+ --authentication-skip-lookup If false, the authentication-kubeconfig will be used to lookup missing authentication configuration from the cluster.
+ --authentication-token-webhook-cache-ttl duration The duration to cache responses from the webhook token authenticator. (default 10s)
+ --authentication-tolerate-lookup-failure If true, failures to look up missing authentication configuration from the cluster are not considered fatal. Note that this can result in authentication that treats all requests as anonymous.
+ --authorization-always-allow-paths strings A list of HTTP paths to skip during authorization, i.e. these are authorized without contacting the 'core' kubernetes server. (default [/healthz,/readyz,/livez])
+ --authorization-kubeconfig string kubeconfig file pointing at the 'core' kubernetes server with enough rights to create subjectaccessreviews.authorization.k8s.io.
+ --authorization-webhook-cache-authorized-ttl duration The duration to cache 'authorized' responses from the webhook authorizer. (default 10s)
+ --authorization-webhook-cache-unauthorized-ttl duration The duration to cache 'unauthorized' responses from the webhook authorizer. (default 10s)
+ --bind-address ip The IP address on which to listen for the --secure-port port. The associated interface(s) must be reachable by the rest of the cluster, and by CLI/web clients. If blank or an unspecified address (0.0.0.0 or ::), all interfaces and IP address families will be used. (default 0.0.0.0)
+ --cert-dir string The directory where the TLS certs are located. If --tls-cert-file and --tls-private-key-file are provided, this flag will be ignored. (default "apiserver.local.config/certificates")
+ --client-ca-file string If set, any request presenting a client certificate signed by one of the authorities in the client-ca-file is authenticated with an identity corresponding to the CommonName of the client certificate.
+ --contention-profiling Enable block profiling, if profiling is enabled
+ --debug-socket-path string Use an unprotected (no authn/authz) unix-domain socket for profiling with the given path
+ -h, --help help for karmada-metrics-adapter
+ --http2-max-streams-per-connection int The limit that the server gives to clients for the maximum number of streams in an HTTP/2 connection. Zero means to use golang's default.
+ --kubeconfig string Path to karmada control plane kubeconfig file.
+ --log_backtrace_at traceLocation when logging hits line file:N, emit a stack trace (default :0)
+ --log_dir string If non-empty, write log files in this directory (no effect when -logtostderr=true)
+ --log_file string If non-empty, use this log file (no effect when -logtostderr=true)
+ --log_file_max_size uint Defines the maximum size a log file can grow to (no effect when -logtostderr=true). Unit is megabytes. If the value is 0, the maximum file size is unlimited. (default 1800)
+ --logtostderr log to standard error instead of files (default true)
+ --one_output If true, only write logs to their native severity level (vs also writing to each lower severity level; no effect when -logtostderr=true)
+ --permit-address-sharing If true, SO_REUSEADDR will be used when binding the port. This allows binding to wildcard IPs like 0.0.0.0 and specific IPs in parallel, and it avoids waiting for the kernel to release sockets in TIME_WAIT state. [default=false]
+ --permit-port-sharing If true, SO_REUSEPORT will be used when binding the port, which allows more than one instance to bind on the same address and port. [default=false]
+ --profiling Enable profiling via web interface host:port/debug/pprof/ (default true)
+ --requestheader-allowed-names strings List of client certificate common names to allow to provide usernames in headers specified by --requestheader-username-headers. If empty, any client certificate validated by the authorities in --requestheader-client-ca-file is allowed.
+ --requestheader-client-ca-file string Root certificate bundle to use to verify client certificates on incoming requests before trusting usernames in headers specified by --requestheader-username-headers. WARNING: generally do not depend on authorization being already done for incoming requests.
+ --requestheader-extra-headers-prefix strings List of request header prefixes to inspect. X-Remote-Extra- is suggested. (default [x-remote-extra-])
+ --requestheader-group-headers strings List of request headers to inspect for groups. X-Remote-Group is suggested. (default [x-remote-group])
+ --requestheader-username-headers strings List of request headers to inspect for usernames. X-Remote-User is common. (default [x-remote-user])
+ --secure-port int The port on which to serve HTTPS with authentication and authorization. If 0, don't serve HTTPS at all. (default 443)
+ --skip_headers If true, avoid header prefixes in the log messages
+ --skip_log_headers If true, avoid headers when opening log files (no effect when -logtostderr=true)
+ --stderrthreshold severity logs at or above this threshold go to stderr when writing to files and stderr (no effect when -logtostderr=true or -alsologtostderr=false) (default 2)
+ --tls-cert-file string File containing the default x509 Certificate for HTTPS. (CA cert, if any, concatenated after server cert). If HTTPS serving is enabled, and --tls-cert-file and --tls-private-key-file are not provided, a self-signed certificate and key are generated for the public address and saved to the directory specified by --cert-dir.
+ --tls-cipher-suites strings Comma-separated list of cipher suites for the server. If omitted, the default Go cipher suites will be used.
+ Preferred values: TLS_AES_128_GCM_SHA256, TLS_AES_256_GCM_SHA384, TLS_CHACHA20_POLY1305_SHA256, TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA, TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA, TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384, TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305, TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256, TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384, TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305, TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256, TLS_RSA_WITH_AES_128_CBC_SHA, TLS_RSA_WITH_AES_128_GCM_SHA256, TLS_RSA_WITH_AES_256_CBC_SHA, TLS_RSA_WITH_AES_256_GCM_SHA384.
+ Insecure values: TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256, TLS_ECDHE_ECDSA_WITH_RC4_128_SHA, TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256, TLS_ECDHE_RSA_WITH_RC4_128_SHA, TLS_RSA_WITH_3DES_EDE_CBC_SHA, TLS_RSA_WITH_AES_128_CBC_SHA256, TLS_RSA_WITH_RC4_128_SHA.
+ --tls-min-version string Minimum TLS version supported. Possible values: VersionTLS10, VersionTLS11, VersionTLS12, VersionTLS13
+ --tls-private-key-file string File containing the default x509 private key matching --tls-cert-file.
+ --tls-sni-cert-key namedCertKey A pair of x509 certificate and private key file paths, optionally suffixed with a list of domain patterns which are fully qualified domain names, possibly with prefixed wildcard segments. The domain patterns also allow IP addresses, but IPs should only be used if the apiserver has visibility to the IP address requested by a client. If no domain patterns are provided, the names of the certificate are extracted. Non-wildcard matches trump over wildcard matches, explicit domain patterns trump over extracted names. For multiple key/certificate pairs, use the --tls-sni-cert-key multiple times. Examples: "example.crt,example.key" or "foo.crt,foo.key:*.foo.com,foo.com". (default [])
+ -v, --v Level number for the log level verbosity
+ --vmodule moduleSpec comma-separated list of pattern=N settings for file-filtered logging
+```
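+
+A minimal illustrative invocation (all paths below are placeholders; in a real deployment they typically point at the Karmada control plane credentials) might look like:
+
+```
+karmada-metrics-adapter \
+  --kubeconfig=/etc/karmada/karmada-apiserver.config \
+  --authentication-kubeconfig=/etc/karmada/karmada-apiserver.config \
+  --authorization-kubeconfig=/etc/karmada/karmada-apiserver.config \
+  --client-ca-file=/etc/karmada/pki/ca.crt \
+  --secure-port=443
+```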
+
+###### Auto generated by [spf13/cobra script in Karmada](https://github.com/karmada-io/karmada/tree/master/hack/tools/gencomponentdocs)
\ No newline at end of file
diff --git a/versioned_docs/version-v1.9/reference/components/karmada-scheduler-estimator.md b/versioned_docs/version-v1.9/reference/components/karmada-scheduler-estimator.md
new file mode 100644
index 000000000..659899ac9
--- /dev/null
+++ b/versioned_docs/version-v1.9/reference/components/karmada-scheduler-estimator.md
@@ -0,0 +1,56 @@
+---
+title: karmada-scheduler-estimator
+---
+
+
+
+### Synopsis
+
+The karmada-scheduler-estimator runs an accurate scheduler estimator for a member cluster. It
+provides the scheduler with more accurate cluster resource information.
+
+```
+karmada-scheduler-estimator [flags]
+```
+
+### Options
+
+```
+ --add_dir_header If true, adds the file directory to the header of the log messages
+ --alsologtostderr log to standard error as well as files (no effect when -logtostderr=true)
+ --bind-address string The IP address on which to listen for the --secure-port port. (default "0.0.0.0")
+ --cluster-name string Name of member cluster that the estimator serves for.
+ --enable-pprof Enable profiling via web interface host:port/debug/pprof/.
+ --feature-gates mapStringBool A set of key=value pairs that describe feature gates for alpha/experimental features. Options are:
+ AllAlpha=true|false (ALPHA - default=false)
+ AllBeta=true|false (BETA - default=false)
+ CustomizedClusterResourceModeling=true|false (BETA - default=true)
+ Failover=true|false (BETA - default=true)
+ GracefulEviction=true|false (BETA - default=true)
+ MultiClusterService=true|false (ALPHA - default=false)
+ PropagateDeps=true|false (BETA - default=true)
+ PropagationPolicyPreemption=true|false (ALPHA - default=false)
+ ResourceQuotaEstimate=true|false (ALPHA - default=false)
+ -h, --help help for karmada-scheduler-estimator
+ --kube-api-burst int Burst to use while talking with apiserver. Doesn't cover events and node heartbeat apis which rate limiting is controlled by a different set of flags. (default 30)
+ --kube-api-qps float32 QPS to use while talking with apiserver. Doesn't cover events and node heartbeat apis which rate limiting is controlled by a different set of flags. (default 20)
+ --kubeconfig string Path to member cluster's kubeconfig file.
+ --log_backtrace_at traceLocation when logging hits line file:N, emit a stack trace (default :0)
+ --log_dir string If non-empty, write log files in this directory (no effect when -logtostderr=true)
+ --log_file string If non-empty, use this log file (no effect when -logtostderr=true)
+ --log_file_max_size uint Defines the maximum size a log file can grow to (no effect when -logtostderr=true). Unit is megabytes. If the value is 0, the maximum file size is unlimited. (default 1800)
+ --logtostderr log to standard error instead of files (default true)
+ --master string The address of the member Kubernetes API server. Overrides any value in KubeConfig. Only required if out-of-cluster.
+ --one_output If true, only write logs to their native severity level (vs also writing to each lower severity level; no effect when -logtostderr=true)
+ --parallelism int Parallelism defines the amount of parallelism in algorithms for estimating. Must be greater than 0. Defaults to 16.
+ --profiling-bind-address string The TCP address for serving profiling(e.g. 127.0.0.1:6060, :6060). This is only applicable if profiling is enabled. (default ":6060")
+ --secure-port int The secure port on which to serve HTTPS. (default 10351)
+ --server-port int The secure port on which to serve gRPC. (default 10352)
+ --skip_headers If true, avoid header prefixes in the log messages
+ --skip_log_headers If true, avoid headers when opening log files (no effect when -logtostderr=true)
+ --stderrthreshold severity logs at or above this threshold go to stderr when writing to files and stderr (no effect when -logtostderr=true or -alsologtostderr=false) (default 2)
+ -v, --v Level number for the log level verbosity
+ --vmodule moduleSpec comma-separated list of pattern=N settings for file-filtered logging
+```
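+
+As a sketch (the cluster name and kubeconfig path are placeholders), an estimator serving a single member cluster is usually started with that cluster's credentials:
+
+```
+karmada-scheduler-estimator \
+  --kubeconfig=/etc/karmada/member1.kubeconfig \
+  --cluster-name=member1 \
+  --server-port=10352 \
+  --parallelism=16
+```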
+
+###### Auto generated by [spf13/cobra script in Karmada](https://github.com/karmada-io/karmada/tree/master/hack/tools/gencomponentdocs)
\ No newline at end of file
diff --git a/versioned_docs/version-v1.9/reference/components/karmada-scheduler.md b/versioned_docs/version-v1.9/reference/components/karmada-scheduler.md
new file mode 100644
index 000000000..49a4337b5
--- /dev/null
+++ b/versioned_docs/version-v1.9/reference/components/karmada-scheduler.md
@@ -0,0 +1,74 @@
+---
+title: karmada-scheduler
+---
+
+
+
+### Synopsis
+
+The karmada-scheduler is a control plane process which assigns resources to the clusters it manages.
+The scheduler determines which clusters are valid placements for each resource in the scheduling queue according to
+constraints and available resources. The scheduler then ranks each valid cluster and binds the resource to
+the most suitable cluster.
+
+```
+karmada-scheduler [flags]
+```
+
+### Options
+
+```
+ --add_dir_header If true, adds the file directory to the header of the log messages
+ --alsologtostderr log to standard error as well as files (no effect when -logtostderr=true)
+ --bind-address string The IP address on which to listen for the --secure-port port. (default "0.0.0.0")
+ --disable-scheduler-estimator-in-pull-mode Disable the scheduler estimator for clusters in pull mode, which takes effect only when enable-scheduler-estimator is true.
+ --enable-empty-workload-propagation Enable workload with replicas 0 to be propagated to member clusters.
+ --enable-pprof Enable profiling via web interface host:port/debug/pprof/.
+ --enable-scheduler-estimator Enable calling cluster scheduler estimator for adjusting replicas.
+ --feature-gates mapStringBool A set of key=value pairs that describe feature gates for alpha/experimental features. Options are:
+ AllAlpha=true|false (ALPHA - default=false)
+ AllBeta=true|false (BETA - default=false)
+ CustomizedClusterResourceModeling=true|false (BETA - default=true)
+ Failover=true|false (BETA - default=true)
+ GracefulEviction=true|false (BETA - default=true)
+ MultiClusterService=true|false (ALPHA - default=false)
+ PropagateDeps=true|false (BETA - default=true)
+ PropagationPolicyPreemption=true|false (ALPHA - default=false)
+ ResourceQuotaEstimate=true|false (ALPHA - default=false)
+ -h, --help help for karmada-scheduler
+ --kube-api-burst int Burst to use while talking with karmada-apiserver. Doesn't cover events and node heartbeat apis which rate limiting is controlled by a different set of flags. (default 60)
+ --kube-api-qps float32 QPS to use while talking with karmada-apiserver. Doesn't cover events and node heartbeat apis which rate limiting is controlled by a different set of flags. (default 40)
+ --kubeconfig string Path to karmada control plane kubeconfig file.
+ --leader-elect Enable leader election, which must be true when running multi instances. (default true)
+ --leader-elect-lease-duration duration The duration that non-leader candidates will wait after observing a leadership renewal until attempting to acquire leadership of a led but unrenewed leader slot. This is effectively the maximum duration that a leader can be stopped before it is replaced by another candidate. This is only applicable if leader election is enabled. (default 15s)
+ --leader-elect-renew-deadline duration The interval between attempts by the acting master to renew a leadership slot before it stops leading. This must be less than or equal to the lease duration. This is only applicable if leader election is enabled. (default 10s)
+ --leader-elect-resource-name string The name of resource object that is used for locking during leader election. (default "karmada-scheduler")
+ --leader-elect-resource-namespace string The namespace of resource object that is used for locking during leader election. (default "karmada-system")
+ --leader-elect-retry-period duration The duration the clients should wait between attempting acquisition and renewal of a leadership. This is only applicable if leader election is enabled. (default 2s)
+ --log_backtrace_at traceLocation when logging hits line file:N, emit a stack trace (default :0)
+ --log_dir string If non-empty, write log files in this directory (no effect when -logtostderr=true)
+ --log_file string If non-empty, use this log file (no effect when -logtostderr=true)
+ --log_file_max_size uint Defines the maximum size a log file can grow to (no effect when -logtostderr=true). Unit is megabytes. If the value is 0, the maximum file size is unlimited. (default 1800)
+ --logtostderr log to standard error instead of files (default true)
+ --master string The address of the Kubernetes API server. Overrides any value in KubeConfig. Only required if out-of-cluster.
+ --one_output If true, only write logs to their native severity level (vs also writing to each lower severity level; no effect when -logtostderr=true)
+      --plugins strings                                              A list of plugins to enable. '*' enables all built-in and customized plugins, 'foo' enables the plugin named 'foo', '*,-foo' disables the plugin named 'foo'.
+                                                                     All built-in plugins: APIEnablement,ClusterAffinity,ClusterEviction,ClusterLocality,SpreadConstraint,TaintToleration. (default [*])
+ --profiling-bind-address string The TCP address for serving profiling(e.g. 127.0.0.1:6060, :6060). This is only applicable if profiling is enabled. (default ":6060")
+ --rate-limiter-base-delay duration The base delay for rate limiter. (default 5ms)
+      --rate-limiter-bucket-size int                                 The bucket size for rate limiter. (default 100)
+ --rate-limiter-max-delay duration The max delay for rate limiter. (default 16m40s)
+      --rate-limiter-qps int                                         The QPS for rate limiter. (default 10)
+ --scheduler-estimator-port int The secure port on which to connect the accurate scheduler estimator. (default 10352)
+ --scheduler-estimator-service-prefix string The prefix of scheduler estimator service name (default "karmada-scheduler-estimator")
+ --scheduler-estimator-timeout duration Specifies the timeout period of calling the scheduler estimator service. (default 3s)
+ --scheduler-name string SchedulerName represents the name of the scheduler. default is 'default-scheduler'. (default "default-scheduler")
+ --secure-port int The secure port on which to serve HTTPS. (default 10351)
+ --skip_headers If true, avoid header prefixes in the log messages
+ --skip_log_headers If true, avoid headers when opening log files (no effect when -logtostderr=true)
+ --stderrthreshold severity logs at or above this threshold go to stderr when writing to files and stderr (no effect when -logtostderr=true or -alsologtostderr=false) (default 2)
+ -v, --v Level number for the log level verbosity
+ --vmodule moduleSpec comma-separated list of pattern=N settings for file-filtered logging
+```
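+
+As an illustrative example (the kubeconfig path is a placeholder and the plugin selection is arbitrary), the scheduler can be started with the estimator enabled and one built-in plugin disabled:
+
+```
+karmada-scheduler \
+  --kubeconfig=/etc/karmada/karmada-apiserver.config \
+  --enable-scheduler-estimator=true \
+  --plugins=*,-ClusterLocality \
+  --leader-elect=true
+```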
+
+###### Auto generated by [spf13/cobra script in Karmada](https://github.com/karmada-io/karmada/tree/master/hack/tools/gencomponentdocs)
\ No newline at end of file
diff --git a/versioned_docs/version-v1.9/reference/components/karmada-search.md b/versioned_docs/version-v1.9/reference/components/karmada-search.md
new file mode 100644
index 000000000..7d735a600
--- /dev/null
+++ b/versioned_docs/version-v1.9/reference/components/karmada-search.md
@@ -0,0 +1,151 @@
+---
+title: karmada-search
+---
+
+
+
+### Synopsis
+
+The karmada-search starts an aggregated server. It provides
+capabilities such as global search and resource proxy in a multi-cloud environment.
+
+```
+karmada-search [flags]
+```
+
+### Options
+
+```
+ --add_dir_header If true, adds the file directory to the header of the log messages
+ --admission-control-config-file string File with admission control configuration.
+ --alsologtostderr log to standard error as well as files (no effect when -logtostderr=true)
+ --audit-log-batch-buffer-size int The size of the buffer to store events before batching and writing. Only used in batch mode. (default 10000)
+ --audit-log-batch-max-size int The maximum size of a batch. Only used in batch mode. (default 1)
+ --audit-log-batch-max-wait duration The amount of time to wait before force writing the batch that hadn't reached the max size. Only used in batch mode.
+ --audit-log-batch-throttle-burst int Maximum number of requests sent at the same moment if ThrottleQPS was not utilized before. Only used in batch mode.
+ --audit-log-batch-throttle-enable Whether batching throttling is enabled. Only used in batch mode.
+ --audit-log-batch-throttle-qps float32 Maximum average number of batches per second. Only used in batch mode.
+ --audit-log-compress If set, the rotated log files will be compressed using gzip.
+ --audit-log-format string Format of saved audits. "legacy" indicates 1-line text format for each event. "json" indicates structured json format. Known formats are legacy,json. (default "json")
+ --audit-log-maxage int The maximum number of days to retain old audit log files based on the timestamp encoded in their filename.
+ --audit-log-maxbackup int The maximum number of old audit log files to retain. Setting a value of 0 will mean there's no restriction on the number of files.
+ --audit-log-maxsize int The maximum size in megabytes of the audit log file before it gets rotated.
+ --audit-log-mode string Strategy for sending audit events. Blocking indicates sending events should block server responses. Batch causes the backend to buffer and write events asynchronously. Known modes are batch,blocking,blocking-strict. (default "blocking")
+ --audit-log-path string If set, all requests coming to the apiserver will be logged to this file. '-' means standard out.
+ --audit-log-truncate-enabled Whether event and batch truncating is enabled.
+ --audit-log-truncate-max-batch-size int Maximum size of the batch sent to the underlying backend. Actual serialized size can be several hundreds of bytes greater. If a batch exceeds this limit, it is split into several batches of smaller size. (default 10485760)
+ --audit-log-truncate-max-event-size int Maximum size of the audit event sent to the underlying backend. If the size of an event is greater than this number, first request and response are removed, and if this doesn't reduce the size enough, event is discarded. (default 102400)
+ --audit-log-version string API group and version used for serializing audit events written to log. (default "audit.k8s.io/v1")
+ --audit-policy-file string Path to the file that defines the audit policy configuration.
+ --audit-webhook-batch-buffer-size int The size of the buffer to store events before batching and writing. Only used in batch mode. (default 10000)
+ --audit-webhook-batch-max-size int The maximum size of a batch. Only used in batch mode. (default 400)
+ --audit-webhook-batch-max-wait duration The amount of time to wait before force writing the batch that hadn't reached the max size. Only used in batch mode. (default 30s)
+ --audit-webhook-batch-throttle-burst int Maximum number of requests sent at the same moment if ThrottleQPS was not utilized before. Only used in batch mode. (default 15)
+ --audit-webhook-batch-throttle-enable Whether batching throttling is enabled. Only used in batch mode. (default true)
+ --audit-webhook-batch-throttle-qps float32 Maximum average number of batches per second. Only used in batch mode. (default 10)
+ --audit-webhook-config-file string Path to a kubeconfig formatted file that defines the audit webhook configuration.
+ --audit-webhook-initial-backoff duration The amount of time to wait before retrying the first failed request. (default 10s)
+ --audit-webhook-mode string Strategy for sending audit events. Blocking indicates sending events should block server responses. Batch causes the backend to buffer and write events asynchronously. Known modes are batch,blocking,blocking-strict. (default "batch")
+ --audit-webhook-truncate-enabled Whether event and batch truncating is enabled.
+ --audit-webhook-truncate-max-batch-size int Maximum size of the batch sent to the underlying backend. Actual serialized size can be several hundreds of bytes greater. If a batch exceeds this limit, it is split into several batches of smaller size. (default 10485760)
+ --audit-webhook-truncate-max-event-size int Maximum size of the audit event sent to the underlying backend. If the size of an event is greater than this number, first request and response are removed, and if this doesn't reduce the size enough, event is discarded. (default 102400)
+ --audit-webhook-version string API group and version used for serializing audit events written to webhook. (default "audit.k8s.io/v1")
+ --authentication-kubeconfig string kubeconfig file pointing at the 'core' kubernetes server with enough rights to create tokenreviews.authentication.k8s.io.
+ --authentication-skip-lookup If false, the authentication-kubeconfig will be used to lookup missing authentication configuration from the cluster.
+ --authentication-token-webhook-cache-ttl duration The duration to cache responses from the webhook token authenticator. (default 10s)
+ --authentication-tolerate-lookup-failure If true, failures to look up missing authentication configuration from the cluster are not considered fatal. Note that this can result in authentication that treats all requests as anonymous.
+ --authorization-always-allow-paths strings A list of HTTP paths to skip during authorization, i.e. these are authorized without contacting the 'core' kubernetes server. (default [/healthz,/readyz,/livez])
+ --authorization-kubeconfig string kubeconfig file pointing at the 'core' kubernetes server with enough rights to create subjectaccessreviews.authorization.k8s.io.
+ --authorization-webhook-cache-authorized-ttl duration The duration to cache 'authorized' responses from the webhook authorizer. (default 10s)
+ --authorization-webhook-cache-unauthorized-ttl duration The duration to cache 'unauthorized' responses from the webhook authorizer. (default 10s)
+ --bind-address ip The IP address on which to listen for the --secure-port port. The associated interface(s) must be reachable by the rest of the cluster, and by CLI/web clients. If blank or an unspecified address (0.0.0.0 or ::), all interfaces and IP address families will be used. (default 0.0.0.0)
+ --cert-dir string The directory where the TLS certs are located. If --tls-cert-file and --tls-private-key-file are provided, this flag will be ignored. (default "apiserver.local.config/certificates")
+ --client-ca-file string If set, any request presenting a client certificate signed by one of the authorities in the client-ca-file is authenticated with an identity corresponding to the CommonName of the client certificate.
+ --contention-profiling Enable block profiling, if profiling is enabled
+ --debug-socket-path string Use an unprotected (no authn/authz) unix-domain socket for profiling with the given path
+ --delete-collection-workers int Number of workers spawned for DeleteCollection call. These are used to speed up namespace cleanup. (default 1)
+ --disable-admission-plugins strings admission plugins that should be disabled although they are in the default enabled plugins list (NamespaceLifecycle, MutatingAdmissionWebhook, ValidatingAdmissionPolicy, ValidatingAdmissionWebhook). Comma-delimited list of admission plugins: MutatingAdmissionWebhook, NamespaceLifecycle, ValidatingAdmissionPolicy, ValidatingAdmissionWebhook. The order of plugins in this flag does not matter.
+ --disable-proxy Disable proxy feature that would save memory usage significantly.
+ --disable-search Disable search feature that would save memory usage significantly.
+ --egress-selector-config-file string File with apiserver egress selector configuration.
+ --enable-admission-plugins strings admission plugins that should be enabled in addition to default enabled ones (NamespaceLifecycle, MutatingAdmissionWebhook, ValidatingAdmissionPolicy, ValidatingAdmissionWebhook). Comma-delimited list of admission plugins: MutatingAdmissionWebhook, NamespaceLifecycle, ValidatingAdmissionPolicy, ValidatingAdmissionWebhook. The order of plugins in this flag does not matter.
+ --enable-garbage-collector Enables the generic garbage collector. MUST be synced with the corresponding flag of the kube-controller-manager. (default true)
+ --enable-pprof Enable profiling via web interface host:port/debug/pprof/.
+ --encryption-provider-config string The file containing configuration for encryption providers to be used for storing secrets in etcd
+ --encryption-provider-config-automatic-reload Determines if the file set by --encryption-provider-config should be automatically reloaded if the disk contents change. Setting this to true disables the ability to uniquely identify distinct KMS plugins via the API server healthz endpoints.
+ --etcd-cafile string SSL Certificate Authority file used to secure etcd communication.
+ --etcd-certfile string SSL certification file used to secure etcd communication.
+ --etcd-compaction-interval duration The interval of compaction requests. If 0, the compaction request from apiserver is disabled. (default 5m0s)
+ --etcd-count-metric-poll-period duration Frequency of polling etcd for number of resources per type. 0 disables the metric collection. (default 1m0s)
+ --etcd-db-metric-poll-interval duration The interval of requests to poll etcd and update metric. 0 disables the metric collection (default 30s)
+ --etcd-healthcheck-timeout duration The timeout to use when checking etcd health. (default 2s)
+ --etcd-keyfile string SSL key file used to secure etcd communication.
+ --etcd-prefix string The prefix to prepend to all resource paths in etcd. (default "/registry")
+ --etcd-readycheck-timeout duration The timeout to use when checking etcd readiness (default 2s)
+ --etcd-servers strings List of etcd servers to connect with (scheme://ip:port), comma separated.
+ --etcd-servers-overrides strings Per-resource etcd servers overrides, comma separated. The individual override format: group/resource#servers, where servers are URLs, semicolon separated. Note that this applies only to resources compiled into this server binary.
+ --feature-gates mapStringBool A set of key=value pairs that describe feature gates for alpha/experimental features. Options are:
+ APIListChunking=true|false (BETA - default=true)
+ APIPriorityAndFairness=true|false (BETA - default=true)
+ APIResponseCompression=true|false (BETA - default=true)
+ APIServerIdentity=true|false (BETA - default=true)
+ APIServerTracing=true|false (BETA - default=true)
+ AdmissionWebhookMatchConditions=true|false (BETA - default=true)
+ AggregatedDiscoveryEndpoint=true|false (BETA - default=true)
+ AllAlpha=true|false (ALPHA - default=false)
+ AllBeta=true|false (BETA - default=false)
+ ComponentSLIs=true|false (BETA - default=true)
+ ConsistentListFromCache=true|false (ALPHA - default=false)
+ CustomResourceValidationExpressions=true|false (BETA - default=true)
+ InPlacePodVerticalScaling=true|false (ALPHA - default=false)
+ KMSv2=true|false (BETA - default=true)
+ KMSv2KDF=true|false (BETA - default=false)
+ OpenAPIEnums=true|false (BETA - default=true)
+ RemainingItemCount=true|false (BETA - default=true)
+ StorageVersionAPI=true|false (ALPHA - default=false)
+ StorageVersionHash=true|false (BETA - default=true)
+ UnauthenticatedHTTP2DOSMitigation=true|false (BETA - default=false)
+ ValidatingAdmissionPolicy=true|false (BETA - default=false)
+ WatchList=true|false (ALPHA - default=false)
+ -h, --help help for karmada-search
+ --http2-max-streams-per-connection int The limit that the server gives to clients for the maximum number of streams in an HTTP/2 connection. Zero means to use golang's default. (default 1000)
+ --kube-api-burst int Burst to use while talking with karmada-apiserver. Doesn't cover events and node heartbeat apis which rate limiting is controlled by a different set of flags. (default 60)
+ --kube-api-qps float32 QPS to use while talking with karmada-apiserver. Doesn't cover events and node heartbeat apis which rate limiting is controlled by a different set of flags. (default 40)
+ --kubeconfig string Path to karmada control plane kubeconfig file.
+ --lease-reuse-duration-seconds int The time in seconds that each lease is reused. A lower value could avoid large number of objects reusing the same lease. Notice that a too small value may cause performance problems at storage layer. (default 60)
+ --log_backtrace_at traceLocation when logging hits line file:N, emit a stack trace (default :0)
+ --log_dir string If non-empty, write log files in this directory (no effect when -logtostderr=true)
+ --log_file string If non-empty, use this log file (no effect when -logtostderr=true)
+ --log_file_max_size uint Defines the maximum size a log file can grow to (no effect when -logtostderr=true). Unit is megabytes. If the value is 0, the maximum file size is unlimited. (default 1800)
+ --logtostderr log to standard error instead of files (default true)
+ --one_output If true, only write logs to their native severity level (vs also writing to each lower severity level; no effect when -logtostderr=true)
+ --permit-address-sharing If true, SO_REUSEADDR will be used when binding the port. This allows binding to wildcard IPs like 0.0.0.0 and specific IPs in parallel, and it avoids waiting for the kernel to release sockets in TIME_WAIT state. [default=false]
+ --permit-port-sharing If true, SO_REUSEPORT will be used when binding the port, which allows more than one instance to bind on the same address and port. [default=false]
+ --profiling Enable profiling via web interface host:port/debug/pprof/ (default true)
+ --profiling-bind-address string The TCP address for serving profiling(e.g. 127.0.0.1:6060, :6060). This is only applicable if profiling is enabled. (default ":6060")
+ --requestheader-allowed-names strings List of client certificate common names to allow to provide usernames in headers specified by --requestheader-username-headers. If empty, any client certificate validated by the authorities in --requestheader-client-ca-file is allowed.
+ --requestheader-client-ca-file string Root certificate bundle to use to verify client certificates on incoming requests before trusting usernames in headers specified by --requestheader-username-headers. WARNING: generally do not depend on authorization being already done for incoming requests.
+ --requestheader-extra-headers-prefix strings List of request header prefixes to inspect. X-Remote-Extra- is suggested. (default [x-remote-extra-])
+ --requestheader-group-headers strings List of request headers to inspect for groups. X-Remote-Group is suggested. (default [x-remote-group])
+ --requestheader-username-headers strings List of request headers to inspect for usernames. X-Remote-User is common. (default [x-remote-user])
+ --secure-port int The port on which to serve HTTPS with authentication and authorization. If 0, don't serve HTTPS at all. (default 443)
+ --skip_headers If true, avoid header prefixes in the log messages
+ --skip_log_headers If true, avoid headers when opening log files (no effect when -logtostderr=true)
+ --stderrthreshold severity logs at or above this threshold go to stderr when writing to files and stderr (no effect when -logtostderr=true or -alsologtostderr=false) (default 2)
+ --storage-backend string The storage backend for persistence. Options: 'etcd3' (default).
+ --storage-media-type string The media type to use to store objects in storage. Some resources or storage backends may only support a specific media type and will ignore this setting. Supported media types: [application/json, application/yaml, application/vnd.kubernetes.protobuf] (default "application/json")
+ --tls-cert-file string File containing the default x509 Certificate for HTTPS. (CA cert, if any, concatenated after server cert). If HTTPS serving is enabled, and --tls-cert-file and --tls-private-key-file are not provided, a self-signed certificate and key are generated for the public address and saved to the directory specified by --cert-dir.
+ --tls-cipher-suites strings Comma-separated list of cipher suites for the server. If omitted, the default Go cipher suites will be used.
+ Preferred values: TLS_AES_128_GCM_SHA256, TLS_AES_256_GCM_SHA384, TLS_CHACHA20_POLY1305_SHA256, TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA, TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA, TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384, TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305, TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256, TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384, TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305, TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256, TLS_RSA_WITH_AES_128_CBC_SHA, TLS_RSA_WITH_AES_128_GCM_SHA256, TLS_RSA_WITH_AES_256_CBC_SHA, TLS_RSA_WITH_AES_256_GCM_SHA384.
+ Insecure values: TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256, TLS_ECDHE_ECDSA_WITH_RC4_128_SHA, TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256, TLS_ECDHE_RSA_WITH_RC4_128_SHA, TLS_RSA_WITH_3DES_EDE_CBC_SHA, TLS_RSA_WITH_AES_128_CBC_SHA256, TLS_RSA_WITH_RC4_128_SHA.
+ --tls-min-version string Minimum TLS version supported. Possible values: VersionTLS10, VersionTLS11, VersionTLS12, VersionTLS13
+ --tls-private-key-file string File containing the default x509 private key matching --tls-cert-file.
+ --tls-sni-cert-key namedCertKey A pair of x509 certificate and private key file paths, optionally suffixed with a list of domain patterns which are fully qualified domain names, possibly with prefixed wildcard segments. The domain patterns also allow IP addresses, but IPs should only be used if the apiserver has visibility to the IP address requested by a client. If no domain patterns are provided, the names of the certificate are extracted. Non-wildcard matches trump over wildcard matches, explicit domain patterns trump over extracted names. For multiple key/certificate pairs, use the --tls-sni-cert-key multiple times. Examples: "example.crt,example.key" or "foo.crt,foo.key:*.foo.com,foo.com". (default [])
+ --tracing-config-file string File with apiserver tracing configuration.
+ -v, --v Level number for the log level verbosity
+ --vmodule moduleSpec comma-separated list of pattern=N settings for file-filtered logging
+ --watch-cache Enable watch caching in the apiserver (default true)
+ --watch-cache-sizes strings Watch cache size settings for some resources (pods, nodes, etc.), comma separated. The individual setting format: resource[.group]#size, where resource is lowercase plural (no version), group is omitted for resources of apiVersion v1 (the legacy core API) and included for others, and size is a number. This option is only meaningful for resources built into the apiserver, not ones defined by CRDs or aggregated from external servers, and is only consulted if the watch-cache is enabled. The only meaningful size setting to supply here is zero, which means to disable watch caching for the associated resource; all non-zero values are equivalent and mean to not disable watch caching for that resource
+```
+
+###### Auto generated by [spf13/cobra script in Karmada](https://github.com/karmada-io/karmada/tree/master/hack/tools/gencomponentdocs)
\ No newline at end of file
diff --git a/versioned_docs/version-v1.9/reference/components/karmada-webhook.md b/versioned_docs/version-v1.9/reference/components/karmada-webhook.md
new file mode 100644
index 000000000..319993f60
--- /dev/null
+++ b/versioned_docs/version-v1.9/reference/components/karmada-webhook.md
@@ -0,0 +1,50 @@
+---
+title: karmada-webhook
+---
+
+
+
+### Synopsis
+
+The karmada-webhook starts a webhook server and manages policies about how to mutate and validate
+Karmada resources including 'PropagationPolicy', 'OverridePolicy' and so on.
+
+```
+karmada-webhook [flags]
+```
+
+### Options
+
+```
+ --add_dir_header If true, adds the file directory to the header of the log messages
+ --alsologtostderr log to standard error as well as files (no effect when -logtostderr=true)
+ --bind-address string The IP address on which to listen for the --secure-port port. (default "0.0.0.0")
+ --cert-dir string The directory that contains the server key and certificate. (default "/tmp/k8s-webhook-server/serving-certs")
+ --default-not-ready-toleration-seconds int Indicates the tolerationSeconds of the propagation policy toleration for notReady:NoExecute that is added by default to every propagation policy that does not already have such a toleration. (default 300)
+ --default-unreachable-toleration-seconds int Indicates the tolerationSeconds of the propagation policy toleration for unreachable:NoExecute that is added by default to every propagation policy that does not already have such a toleration. (default 300)
+ --enable-pprof Enable profiling via web interface host:port/debug/pprof/.
+ --health-probe-bind-address string The TCP address that the controller should bind to for serving health probes(e.g. 127.0.0.1:8000, :8000) (default ":8000")
+ -h, --help help for karmada-webhook
+ --kube-api-burst int Burst to use while talking with karmada-apiserver. Doesn't cover events and node heartbeat apis which rate limiting is controlled by a different set of flags. (default 60)
+ --kube-api-qps float32 QPS to use while talking with karmada-apiserver. Doesn't cover events and node heartbeat apis which rate limiting is controlled by a different set of flags. (default 40)
+ --kubeconfig string Path to karmada control plane kubeconfig file.
+ --log_backtrace_at traceLocation when logging hits line file:N, emit a stack trace (default :0)
+ --log_dir string If non-empty, write log files in this directory (no effect when -logtostderr=true)
+ --log_file string If non-empty, use this log file (no effect when -logtostderr=true)
+ --log_file_max_size uint Defines the maximum size a log file can grow to (no effect when -logtostderr=true). Unit is megabytes. If the value is 0, the maximum file size is unlimited. (default 1800)
+ --logtostderr log to standard error instead of files (default true)
+ --metrics-bind-address string The TCP address that the controller should bind to for serving prometheus metrics(e.g. 127.0.0.1:8080, :8080). It can be set to "0" to disable the metrics serving. (default ":8080")
+ --one_output If true, only write logs to their native severity level (vs also writing to each lower severity level; no effect when -logtostderr=true)
+ --profiling-bind-address string The TCP address for serving profiling(e.g. 127.0.0.1:6060, :6060). This is only applicable if profiling is enabled. (default ":6060")
+ --secure-port int The secure port on which to serve HTTPS. (default 8443)
+ --skip_headers If true, avoid header prefixes in the log messages
+ --skip_log_headers If true, avoid headers when opening log files (no effect when -logtostderr=true)
+ --stderrthreshold severity logs at or above this threshold go to stderr when writing to files and stderr (no effect when -logtostderr=true or -alsologtostderr=false) (default 2)
+ --tls-cert-file-name string The name of server certificate. (default "tls.crt")
+ --tls-min-version string Minimum TLS version supported. Possible values: 1.0, 1.1, 1.2, 1.3. (default "1.3")
+ --tls-private-key-file-name string The name of server key. (default "tls.key")
+ -v, --v Level number for the log level verbosity
+ --vmodule moduleSpec comma-separated list of pattern=N settings for file-filtered logging
+```
+
+###### Auto generated by [spf13/cobra script in Karmada](https://github.com/karmada-io/karmada/tree/master/hack/tools/gencomponentdocs)
\ No newline at end of file
diff --git a/versioned_docs/version-v1.9/reference/glossary.md b/versioned_docs/version-v1.9/reference/glossary.md
new file mode 100644
index 000000000..8e30c9a71
--- /dev/null
+++ b/versioned_docs/version-v1.9/reference/glossary.md
@@ -0,0 +1,69 @@
+---
+title: Glossary
+---
+
+This glossary is intended to be a comprehensive, standardized list of Karmada terminology. It includes technical terms that are specific to Karmada, as well as more general terms that provide useful context.
+
+* Aggregated API
+
+ Aggregated API is provided by `karmada-aggregated-apiserver`. It aggregates all registered clusters and allows users to uniformly access different member clusters through Karmada's `cluster/proxy` endpoint.
+
+* ClusterAffinity
+
+  Similar to the affinity concept in Kubernetes, ClusterAffinity refers to a set of rules that give the scheduler hints about which clusters an application should be deployed to.
+
+* GracefulEviction
+
+ Graceful eviction means that when a workload is migrated between clusters, the eviction will be postponed until the workload becomes healthy on the new cluster or the `GracePeriodSeconds` is reached.
+  Graceful eviction helps keep the service continuously available during multi-cluster failover, so the number of serving instances never drops to zero.
+
+* OverridePolicy
+
+  A policy that applies differentiated (cluster-specific) configuration to resources as they are propagated to different clusters.
+
+* Overrider
+
+  Overrider refers to the differentiated configuration rules provided by Karmada, such as `ImageOverrider`, which overrides the images of workloads.
+
+* Propagate Dependencies (PropagateDeps)
+
+ `PropagateDeps` means that when an application is delivered to a certain cluster, Karmada can automatically distribute its dependencies to the same cluster at the same time. Dependencies do not go through the scheduling process, but reuse the scheduling results of the main application.
+ Dependencies of complex applications can be resolved through the resource interpreter's `InterpretDependency` operation.
+
+* PropagationPolicy
+
+  A widely applicable policy that declares how applications are scheduled and propagated across multiple clusters; a minimal example is sketched at the end of this glossary.
+
+* Pull Mode
+
+  A mode for Karmada to manage clusters. The Karmada control plane does not access member clusters directly; instead, it delegates that responsibility to the `karmada-agent` deployed on each member cluster.
+
+* Push Mode
+
+  A mode for Karmada to manage clusters. The Karmada control plane directly accesses the `kube-apiserver` of member clusters to obtain cluster status and deploy applications.
+
+* ResourceBinding
+
+  The unified abstraction that drives Karmada's internal processes. It combines the information of a resource template and its scheduling policy, and is the object that karmada-scheduler works on when scheduling applications.
+
+* Resource Interpreter
+
+  In the process of distributing resources from `karmada-apiserver` to member clusters, Karmada needs to understand the structure of each resource. For example, when dividing a Deployment's replicas across clusters, Karmada needs to parse the Deployment's `replicas` field.
+  The resource interpreter is designed to interpret the resource structure. It comes in two types: the built-in interpreter, implemented and maintained by the community, interprets common Kubernetes native resources and some well-known extended resources; the custom interpreter, implemented and maintained by users, interprets custom resources or overrides the built-in interpreters.
+
+* Resource Model
+
+  The resource model is the Karmada control plane's abstraction of a member cluster's resource usage. When scheduling replicas according to the clusters' remaining capacity, karmada-scheduler makes decisions based on each cluster's resource model.
+
+* Resource Template
+
+  Resource template refers to a Kubernetes native API definition (including CRDs) that generally serves as the template of a multi-cluster application.
+
+* SpreadConstraint
+
+ Spread constraint refers to scheduling constraints based on the cluster topology, e.g. Karmada will schedule according to information such as the region, provider, and zone where the cluster is located.
+
+* Work
+
+  Object at the federation layer that represents a resource to be applied in a member cluster.
+  Works for different member clusters are isolated from each other by namespace.
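+
+Below is the minimal PropagationPolicy sketch referenced above: it propagates an assumed Deployment named `nginx` to two assumed member clusters (`member1` and `member2`), and its `clusterAffinity` block also illustrates the ClusterAffinity term. All names here are hypothetical.
+
+```yaml
+apiVersion: policy.karmada.io/v1alpha1
+kind: PropagationPolicy
+metadata:
+  name: nginx-propagation    # hypothetical name
+spec:
+  resourceSelectors:
+    - apiVersion: apps/v1
+      kind: Deployment
+      name: nginx            # assumed workload to propagate
+  placement:
+    clusterAffinity:
+      clusterNames:          # assumed member cluster names
+        - member1
+        - member2
+```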
diff --git a/versioned_docs/version-v1.9/reference/instrumentation/event.md b/versioned_docs/version-v1.9/reference/instrumentation/event.md
new file mode 100644
index 000000000..8b89553b7
--- /dev/null
+++ b/versioned_docs/version-v1.9/reference/instrumentation/event.md
@@ -0,0 +1,46 @@
+---
+title: Karmada Event Reference
+---
+
+## Events
+
+This section details the events that record key processes in Karmada.
+See more details about events [here](https://kubernetes.io/docs/reference/kubernetes-api/cluster-resources/event-v1/).
+
+| Reason | Binding Objects | Type | Source Component |
+|-----------------------------------------|---------------------------------------------------------------------------------------------|---------|---------------------------------------------------------------------------------------------------------------------------|
+| CreateExecutionSpaceFailed | Cluster | Warning | cluster-controller |
+| CreateExecutionSpaceSucceed | Cluster | Normal | cluster-controller |
+| RemoveExecutionSpaceFailed | Cluster | Warning | cluster-controller |
+| RemoveExecutionSpaceSucceed | Cluster | Normal | cluster-controller |
+| TaintClusterFailed | Cluster | Warning | cluster-controller |
+| TaintClusterSucceed | Cluster | Normal | cluster-controller |
+| SyncImpersonationConfigFailed | Cluster | Warning | unified-auth-controller |
+| SyncImpersonationConfigSucceed | Cluster | Normal | unified-auth-controller |
+| ReflectStatusFailed | Work | Warning | work-status-controller |
+| ReflectStatusSucceed | Work | Normal | work-status-controller |
+| InterpretHealthFailed | Work | Warning | work-status-controller |
+| InterpretHealthSucceed | Work | Normal | work-status-controller |
+| SyncFailed | Work | Warning | execution-controller |
+| SyncSucceed | Work | Normal | execution-controller |
+| CleanupWorkFailed | ResourceBinding ClusterResourceBinding | Warning | binding-controller cluster-resource-binding-controller |
+| SyncScheduleResultToDependenciesSucceed | ResourceBinding ClusterResourceBinding | Normal | dependencies-distributor |
+| SyncScheduleResultToDependenciesFailed | ResourceBinding ClusterResourceBinding | Warning | dependencies-distributor |
+| SyncWorkFailed | ResourceBinding ClusterResourceBinding resource template FederatedResourceQuota | Warning | binding-controller cluster-resource-binding-controller |
+| SyncWorkSucceed | ResourceBinding ClusterResourceBinding resource template FederatedResourceQuota | Normal | binding-controller cluster-resource-binding-controller |
+| AggregateStatusFailed | ResourceBinding ClusterResourceBinding resource template FederatedResourceQuota | Warning | binding-controller cluster-resource-binding-controller |
+| AggregateStatusSucceed | ResourceBinding ClusterResourceBinding resource template FederatedResourceQuota | Normal | binding-controller cluster-resource-binding-controller |
+| ScheduleBindingFailed | ResourceBinding ClusterResourceBinding resource template | Warning | karmada-scheduler |
+| ScheduleBindingSucceed | ResourceBinding ClusterResourceBinding resource template | Normal | karmada-scheduler |
+| DescheduleBindingFailed | ResourceBinding ClusterResourceBinding resource template | Warning | karmada-descheduler |
+| DescheduleBindingSucceed | ResourceBinding ClusterResourceBinding resource template | Normal | karmada-descheduler |
+| EvictWorkloadFromClusterSucceed | ResourceBinding ClusterResourceBinding resource template | Normal | taint-manager resource-binding-graceful-eviction-controller cluster-resource-binding-graceful-eviction-controller |
+| EvictWorkloadFromClusterFailed | ResourceBinding ClusterResourceBinding resource template | Warning | taint-manager resource-binding-graceful-eviction-controller cluster-resource-binding-graceful-eviction-controller |
+| ApplyPolicyFailed | resource template | Warning | resource-detector |
+| ApplyPolicySucceed | resource template | Normal | resource-detector |
+| ApplyOverridePolicyFailed | resource template | Warning | override-manager |
+| ApplyOverridePolicySucceed | resource template | Normal | override-manager |
+| GetDependenciesFailed | resource template | Warning | dependencies-distributor |
+| GetDependenciesSucceed | resource template | Normal | dependencies-distributor |
+| SyncDerivedServiceFailed | ServiceImport | Warning | service-import-controller |
+| SyncDerivedServiceSucceed | ServiceImport | Normal | service-import-controller |
diff --git a/versioned_docs/version-v1.9/reference/instrumentation/metrics.md b/versioned_docs/version-v1.9/reference/instrumentation/metrics.md
new file mode 100644
index 000000000..fb6539a67
--- /dev/null
+++ b/versioned_docs/version-v1.9/reference/instrumentation/metrics.md
@@ -0,0 +1,41 @@
+---
+title: Karmada Metrics Reference
+---
+
+## Metrics
+
+This section details the metrics that different Karmada components export.
+You can query each component's metrics endpoint with an HTTP scrape and fetch the current metrics data in Prometheus format; a minimal scrape configuration is sketched below the table.
+
+| Name | Type | Help | Labels | Source Components |
+|-------------------------------------------------|-------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|---------------------------------------|----------------------------------------------|
+| schedule_attempts_total | Counter | Number of attempts to schedule resourceBinding. | result schedule_type | karmada-scheduler |
+| e2e_scheduling_duration_seconds | Histogram | E2E scheduling latency in seconds. | result schedule_type | karmada-scheduler |
+| scheduling_algorithm_duration_seconds | Histogram | Scheduling algorithm latency in seconds(excluding scale scheduler). | schedule_step | karmada-scheduler |
+| queue_incoming_bindings_total | Counter | Number of bindings added to scheduling queues by event type. | event | karmada-scheduler |
+| framework_extension_point_duration_seconds | Histogram | Latency for running all plugins of a specific extension point. | extension_point result | karmada-scheduler |
+| plugin_execution_duration_seconds | Histogram | Duration for running a plugin at a specific extension point. | plugin extension_point result | karmada-scheduler |
+| estimating_request_total | Counter | Number of scheduler estimator requests. | result type | karmada_scheduler_estimator |
+| estimating_algorithm_duration_seconds | Histogram | Estimating algorithm latency in seconds for each step. | result type step | karmada_scheduler_estimator |
+| cluster_ready_state | Gauge | State of the cluster(1 if ready, 0 otherwise). | cluster_name | karmada-controller-manager karmada-agent |
+| cluster_node_number | Gauge | Number of nodes in the cluster. | cluster_name | karmada-controller-manager karmada-agent |
+| cluster_ready_node_number | Gauge | Number of ready nodes in the cluster. | cluster_name | karmada-controller-manager karmada-agent |
+| cluster_memory_allocatable_bytes | Gauge | Allocatable cluster memory resource in bytes. | cluster_name | karmada-controller-manager karmada-agent |
+| cluster_cpu_allocatable_number | Gauge | Number of allocatable CPU in the cluster. | cluster_name | karmada-controller-manager karmada-agent |
+| cluster_pod_allocatable_number | Gauge | Number of allocatable pods in the cluster. | cluster_name | karmada-controller-manager karmada-agent |
+| cluster_memory_allocated_bytes | Gauge | Allocated cluster memory resource in bytes. | cluster_name | karmada-controller-manager karmada-agent |
+| cluster_cpu_allocated_number | Gauge | Number of allocated CPU in the cluster. | cluster_name | karmada-controller-manager karmada-agent |
+| cluster_pod_allocated_number | Gauge | Number of allocated pods in the cluster. | cluster_name | karmada-controller-manager karmada-agent |
+| cluster_sync_status_duration_seconds | Histogram | Duration in seconds for syncing the status of the cluster once. | cluster_name | karmada-controller-manager karmada-agent |
+| resource_match_policy_duration_seconds | Histogram | Duration in seconds to find a matched propagation policy for the resource template. | / | karmada-controller-manager |
+| resource_apply_policy_duration_seconds | Histogram | Duration in seconds to apply a propagation policy for the resource template. By the result, 'error' means a resource template failed to apply the policy. Otherwise 'success'. | result | karmada-controller-manager |
+| policy_apply_attempts_total | Counter | Number of attempts to be applied for a propagation policy. By the result, 'error' means a resource template failed to apply the policy. Otherwise 'success'. | result | karmada-controller-manager |
+| binding_sync_work_duration_seconds | Histogram | Duration in seconds to sync works for a binding object. By the result, 'error' means a binding failed to sync works. Otherwise 'success'. | result | karmada-controller-manager |
+| work_sync_workload_duration_seconds | Histogram | Duration in seconds to sync the workload to a target cluster. By the result, 'error' means a work failed to sync workloads. Otherwise 'success'. | result | karmada-controller-manager karmada-agent |
+| policy_preemption_total | Counter | Number of preemption for the resource template. By the result, 'error' means a resource template failed to be preempted by other propagation policies. Otherwise 'success'. | result | karmada-controller-manager |
+| cronfederatedhpa_process_duration_seconds | Histogram | Duration in seconds to process a CronFederatedHPA. By the result, 'error' means a CronFederatedHPA failed to be processed. Otherwise 'success'. | result | karmada-controller-manager |
+| cronfederatedhpa_rule_process_duration_seconds | Histogram | Duration in seconds to process a CronFederatedHPA rule. By the result, 'error' means a CronFederatedHPA rule failed to be processed. Otherwise 'success'. | result | karmada-controller-manager |
+| federatedhpa_process_duration_seconds | Histogram | Duration in seconds to process a FederatedHPA. By the result, 'error' means a FederatedHPA failed to be processed. Otherwise 'success'. | result | karmada-controller-manager |
+| federatedhpa_pull_metrics_duration_seconds | Histogram | Duration in seconds taken by the FederatedHPA to pull metrics. By the result, 'error' means the FederatedHPA failed to pull the metrics. Otherwise 'success'. | result metricType | karmada-controller-manager |
+| pool_get_operation_total | Counter | Total times of getting from pool | name from | karmada-controller-manager karmada-agent |
+| pool_put_operation_total | Counter | Total times of putting from pool | name to | karmada-controller-manager karmada-agent |
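+
+As an illustration of the HTTP scrape mentioned above, a minimal Prometheus `scrape_configs` sketch might look like the following; the service names, namespace, and port are assumptions and need to be adjusted to your deployment:
+
+```yaml
+scrape_configs:
+  - job_name: karmada-controller-manager
+    static_configs:
+      # Assumed in-cluster DNS name and metrics port; adjust to your setup.
+      - targets: ["karmada-controller-manager.karmada-system.svc:8080"]
+  - job_name: karmada-scheduler
+    static_configs:
+      # Assumed in-cluster DNS name and metrics port; adjust to your setup.
+      - targets: ["karmada-scheduler.karmada-system.svc:8080"]
+```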
diff --git a/versioned_docs/version-v1.9/reference/karmada-api/auto-scaling-resources/cron-federated-hpa-v1alpha1.md b/versioned_docs/version-v1.9/reference/karmada-api/auto-scaling-resources/cron-federated-hpa-v1alpha1.md
new file mode 100644
index 000000000..2c5eb68e9
--- /dev/null
+++ b/versioned_docs/version-v1.9/reference/karmada-api/auto-scaling-resources/cron-federated-hpa-v1alpha1.md
@@ -0,0 +1,710 @@
+---
+api_metadata:
+ apiVersion: "autoscaling.karmada.io/v1alpha1"
+ import: "github.com/karmada-io/karmada/pkg/apis/autoscaling/v1alpha1"
+ kind: "CronFederatedHPA"
+content_type: "api_reference"
+description: "CronFederatedHPA represents a collection of repeating schedule to scale replica number of a specific workload."
+title: "CronFederatedHPA v1alpha1"
+weight: 2
+auto_generated: true
+---
+
+[//]: # (The file is auto-generated from the Go source code of the component using a generic generator,)
+[//]: # (which is forked from [reference-docs](https://github.com/kubernetes-sigs/reference-docs.)
+[//]: # (To update the reference content, please follow the `reference-api.sh`.)
+
+`apiVersion: autoscaling.karmada.io/v1alpha1`
+
+`import "github.com/karmada-io/karmada/pkg/apis/autoscaling/v1alpha1"`
+
+## CronFederatedHPA
+
+CronFederatedHPA represents a collection of repeating schedule to scale replica number of a specific workload. It can scale any resource implementing the scale subresource as well as FederatedHPA.
+
+
+
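+For illustration only, a minimal CronFederatedHPA that scales an assumed Deployment named `nginx` to 5 replicas every weekday morning might look like the sketch below; the names and schedule are hypothetical.
+
+```yaml
+apiVersion: autoscaling.karmada.io/v1alpha1
+kind: CronFederatedHPA
+metadata:
+  name: nginx-cronfhpa        # hypothetical name
+  namespace: default
+spec:
+  scaleTargetRef:             # any resource implementing the scale subresource, or a FederatedHPA
+    apiVersion: apps/v1
+    kind: Deployment
+    name: nginx               # assumed workload
+  rules:
+    - name: "scale-up-workdays"
+      schedule: "0 9 * * 1-5" # 09:00 on Monday through Friday
+      targetReplicas: 5
+```
+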
+- **apiVersion**: autoscaling.karmada.io/v1alpha1
+
+- **kind**: CronFederatedHPA
+
+- **metadata** ([ObjectMeta](../common-definitions/object-meta#objectmeta))
+
+- **spec** ([CronFederatedHPASpec](../auto-scaling-resources/cron-federated-hpa-v1alpha1#cronfederatedhpaspec)), required
+
+ Spec is the specification of the CronFederatedHPA.
+
+- **status** ([CronFederatedHPAStatus](../auto-scaling-resources/cron-federated-hpa-v1alpha1#cronfederatedhpastatus))
+
+ Status is the current status of the CronFederatedHPA.
+
+## CronFederatedHPASpec
+
+CronFederatedHPASpec is the specification of the CronFederatedHPA.
+
+
+
+- **rules** ([]CronFederatedHPARule), required
+
+ Rules contains a collection of schedules that declares when and how the referencing target resource should be scaled.
+
+
+
+ *CronFederatedHPARule declares a schedule as well as scale actions.*
+
+ - **rules.name** (string), required
+
+ Name of the rule. Each rule in a CronFederatedHPA must have a unique name.
+
+ Note: the name will be used as an identifier to record its execution history. Changing the name will be considered as deleting the old rule and adding a new rule, that means the original execution history will be discarded.
+
+ - **rules.schedule** (string), required
+
+ Schedule is the cron expression that represents a periodical time. The syntax follows https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/#schedule-syntax.
+
+ - **rules.failedHistoryLimit** (int32)
+
+ FailedHistoryLimit represents the count of failed execution items for each rule. The value must be a positive integer. It defaults to 3.
+
+ - **rules.successfulHistoryLimit** (int32)
+
+ SuccessfulHistoryLimit represents the count of successful execution items for each rule. The value must be a positive integer. It defaults to 3.
+
+ - **rules.suspend** (boolean)
+
+ Suspend tells the controller to suspend subsequent executions. Defaults to false.
+
+ - **rules.targetMaxReplicas** (int32)
+
+ TargetMaxReplicas is the target MaxReplicas to be set for FederatedHPA. Only needed when referencing resource is FederatedHPA. TargetMinReplicas and TargetMaxReplicas can be specified together or either one can be specified alone. nil means the MaxReplicas(.spec.maxReplicas) of the referencing FederatedHPA will not be updated.
+
+ - **rules.targetMinReplicas** (int32)
+
+ TargetMinReplicas is the target MinReplicas to be set for FederatedHPA. Only needed when referencing resource is FederatedHPA. TargetMinReplicas and TargetMaxReplicas can be specified together or either one can be specified alone. nil means the MinReplicas(.spec.minReplicas) of the referencing FederatedHPA will not be updated.
+
+ - **rules.targetReplicas** (int32)
+
+ TargetReplicas is the target replicas to be scaled for resources referencing by ScaleTargetRef of this CronFederatedHPA. Only needed when referencing resource is not FederatedHPA.
+
+ - **rules.timeZone** (string)
+
+ TimeZone for the giving schedule. If not specified, this will default to the time zone of the karmada-controller-manager process. Invalid TimeZone will be rejected when applying by karmada-webhook. see https://en.wikipedia.org/wiki/List_of_tz_database_time_zones for the all timezones.
+
+- **scaleTargetRef** (CrossVersionObjectReference), required
+
+ ScaleTargetRef points to the target resource to scale. Target resource could be any resource that implementing the scale subresource like Deployment, or FederatedHPA.
+
+
+
+ *CrossVersionObjectReference contains enough information to let you identify the referred resource.*
+
+ - **scaleTargetRef.kind** (string), required
+
+ kind is the kind of the referent; More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
+
+ - **scaleTargetRef.name** (string), required
+
+ name is the name of the referent; More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
+
+ - **scaleTargetRef.apiVersion** (string)
+
+ apiVersion is the API version of the referent
+
+## CronFederatedHPAStatus
+
+CronFederatedHPAStatus represents the current status of a CronFederatedHPA.
+
+
+
+- **executionHistories** ([]ExecutionHistory)
+
+ ExecutionHistories record the execution histories of CronFederatedHPARule.
+
+
+
+ *ExecutionHistory records the execution history of specific CronFederatedHPARule.*
+
+ - **executionHistories.ruleName** (string), required
+
+ RuleName is the name of the CronFederatedHPARule.
+
+ - **executionHistories.failedExecutions** ([]FailedExecution)
+
+ FailedExecutions records failed executions.
+
+
+
+ *FailedExecution records a failed execution.*
+
+ - **executionHistories.failedExecutions.executionTime** (Time), required
+
+ ExecutionTime is the actual execution time of CronFederatedHPARule. Tasks may not always be executed at ScheduleTime. ExecutionTime is used to evaluate the efficiency of the controller's execution.
+
+
+
+ *Time is a wrapper around time.Time which supports correct marshaling to YAML and JSON. Wrappers are provided for many of the factory methods that the time package offers.*
+
+ - **executionHistories.failedExecutions.message** (string), required
+
+ Message is the human-readable message indicating details about the failure.
+
+ - **executionHistories.failedExecutions.scheduleTime** (Time), required
+
+ ScheduleTime is the expected execution time declared in CronFederatedHPARule.
+
+
+
+ *Time is a wrapper around time.Time which supports correct marshaling to YAML and JSON. Wrappers are provided for many of the factory methods that the time package offers.*
+
+ - **executionHistories.nextExecutionTime** (Time)
+
+ NextExecutionTime is the next time to execute. Nil means the rule has been suspended.
+
+
+
+ *Time is a wrapper around time.Time which supports correct marshaling to YAML and JSON. Wrappers are provided for many of the factory methods that the time package offers.*
+
+ - **executionHistories.successfulExecutions** ([]SuccessfulExecution)
+
+ SuccessfulExecutions records successful executions.
+
+
+
+ *SuccessfulExecution records a successful execution.*
+
+ - **executionHistories.successfulExecutions.executionTime** (Time), required
+
+ ExecutionTime is the actual execution time of CronFederatedHPARule. Tasks may not always be executed at ScheduleTime. ExecutionTime is used to evaluate the efficiency of the controller's execution.
+
+
+
+ *Time is a wrapper around time.Time which supports correct marshaling to YAML and JSON. Wrappers are provided for many of the factory methods that the time package offers.*
+
+ - **executionHistories.successfulExecutions.scheduleTime** (Time), required
+
+ ScheduleTime is the expected execution time declared in CronFederatedHPARule.
+
+
+
+ *Time is a wrapper around time.Time which supports correct marshaling to YAML and JSON. Wrappers are provided for many of the factory methods that the time package offers.*
+
+ - **executionHistories.successfulExecutions.appliedMaxReplicas** (int32)
+
+ AppliedMaxReplicas is the MaxReplicas have been applied. It is required if .spec.rules[*].targetMaxReplicas is not empty.
+
+ - **executionHistories.successfulExecutions.appliedMinReplicas** (int32)
+
+ AppliedMinReplicas is the MinReplicas have been applied. It is required if .spec.rules[*].targetMinReplicas is not empty.
+
+ - **executionHistories.successfulExecutions.appliedReplicas** (int32)
+
+ AppliedReplicas is the replicas have been applied. It is required if .spec.rules[*].targetReplicas is not empty.
+
+## CronFederatedHPAList
+
+CronFederatedHPAList contains a list of CronFederatedHPA.
+
+
+
+- **apiVersion**: autoscaling.karmada.io/v1alpha1
+
+- **kind**: CronFederatedHPAList
+
+- **metadata** ([ListMeta](../common-definitions/list-meta#listmeta))
+
+- **items** ([][CronFederatedHPA](../auto-scaling-resources/cron-federated-hpa-v1alpha1#cronfederatedhpa)), required
+
+## Operations
+
+
+
+### `get` read the specified CronFederatedHPA
+
+#### HTTP Request
+
+GET /apis/autoscaling.karmada.io/v1alpha1/namespaces/{namespace}/cronfederatedhpas/{name}
+
+#### Parameters
+
+- **name** (*in path*): string, required
+
+ name of the CronFederatedHPA
+
+- **namespace** (*in path*): string, required
+
+ [namespace](../common-parameter/common-parameters#namespace)
+
+- **pretty** (*in query*): string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+#### Response
+
+200 ([CronFederatedHPA](../auto-scaling-resources/cron-federated-hpa-v1alpha1#cronfederatedhpa)): OK
+
+### `get` read status of the specified CronFederatedHPA
+
+#### HTTP Request
+
+GET /apis/autoscaling.karmada.io/v1alpha1/namespaces/{namespace}/cronfederatedhpas/{name}/status
+
+#### Parameters
+
+- **name** (*in path*): string, required
+
+ name of the CronFederatedHPA
+
+- **namespace** (*in path*): string, required
+
+ [namespace](../common-parameter/common-parameters#namespace)
+
+- **pretty** (*in query*): string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+#### Response
+
+200 ([CronFederatedHPA](../auto-scaling-resources/cron-federated-hpa-v1alpha1#cronfederatedhpa)): OK
+
+### `list` list or watch objects of kind CronFederatedHPA
+
+#### HTTP Request
+
+GET /apis/autoscaling.karmada.io/v1alpha1/namespaces/{namespace}/cronfederatedhpas
+
+#### Parameters
+
+- **namespace** (*in path*): string, required
+
+ [namespace](../common-parameter/common-parameters#namespace)
+
+- **allowWatchBookmarks** (*in query*): boolean
+
+ [allowWatchBookmarks](../common-parameter/common-parameters#allowwatchbookmarks)
+
+- **continue** (*in query*): string
+
+ [continue](../common-parameter/common-parameters#continue)
+
+- **fieldSelector** (*in query*): string
+
+ [fieldSelector](../common-parameter/common-parameters#fieldselector)
+
+- **labelSelector** (*in query*): string
+
+ [labelSelector](../common-parameter/common-parameters#labelselector)
+
+- **limit** (*in query*): integer
+
+ [limit](../common-parameter/common-parameters#limit)
+
+- **pretty** (*in query*): string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+- **resourceVersion** (*in query*): string
+
+ [resourceVersion](../common-parameter/common-parameters#resourceversion)
+
+- **resourceVersionMatch** (*in query*): string
+
+ [resourceVersionMatch](../common-parameter/common-parameters#resourceversionmatch)
+
+- **sendInitialEvents** (*in query*): boolean
+
+ [sendInitialEvents](../common-parameter/common-parameters#sendinitialevents)
+
+- **timeoutSeconds** (*in query*): integer
+
+ [timeoutSeconds](../common-parameter/common-parameters#timeoutseconds)
+
+- **watch** (*in query*): boolean
+
+ [watch](../common-parameter/common-parameters#watch)
+
+#### Response
+
+200 ([CronFederatedHPAList](../auto-scaling-resources/cron-federated-hpa-v1alpha1#cronfederatedhpalist)): OK
+
+### `list` list or watch objects of kind CronFederatedHPA
+
+#### HTTP Request
+
+GET /apis/autoscaling.karmada.io/v1alpha1/cronfederatedhpas
+
+#### Parameters
+
+- **allowWatchBookmarks** (*in query*): boolean
+
+ [allowWatchBookmarks](../common-parameter/common-parameters#allowwatchbookmarks)
+
+- **continue** (*in query*): string
+
+ [continue](../common-parameter/common-parameters#continue)
+
+- **fieldSelector** (*in query*): string
+
+ [fieldSelector](../common-parameter/common-parameters#fieldselector)
+
+- **labelSelector** (*in query*): string
+
+ [labelSelector](../common-parameter/common-parameters#labelselector)
+
+- **limit** (*in query*): integer
+
+ [limit](../common-parameter/common-parameters#limit)
+
+- **pretty** (*in query*): string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+- **resourceVersion** (*in query*): string
+
+ [resourceVersion](../common-parameter/common-parameters#resourceversion)
+
+- **resourceVersionMatch** (*in query*): string
+
+ [resourceVersionMatch](../common-parameter/common-parameters#resourceversionmatch)
+
+- **sendInitialEvents** (*in query*): boolean
+
+ [sendInitialEvents](../common-parameter/common-parameters#sendinitialevents)
+
+- **timeoutSeconds** (*in query*): integer
+
+ [timeoutSeconds](../common-parameter/common-parameters#timeoutseconds)
+
+- **watch** (*in query*): boolean
+
+ [watch](../common-parameter/common-parameters#watch)
+
+#### Response
+
+200 ([CronFederatedHPAList](../auto-scaling-resources/cron-federated-hpa-v1alpha1#cronfederatedhpalist)): OK
+
+### `create` create a CronFederatedHPA
+
+#### HTTP Request
+
+POST /apis/autoscaling.karmada.io/v1alpha1/namespaces/{namespace}/cronfederatedhpas
+
+#### Parameters
+
+- **namespace** (*in path*): string, required
+
+ [namespace](../common-parameter/common-parameters#namespace)
+
+- **body**: [CronFederatedHPA](../auto-scaling-resources/cron-federated-hpa-v1alpha1#cronfederatedhpa), required
+
+
+
+- **dryRun** (*in query*): string
+
+ [dryRun](../common-parameter/common-parameters#dryrun)
+
+- **fieldManager** (*in query*): string
+
+ [fieldManager](../common-parameter/common-parameters#fieldmanager)
+
+- **fieldValidation** (*in query*): string
+
+ [fieldValidation](../common-parameter/common-parameters#fieldvalidation)
+
+- **pretty** (*in query*): string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+#### Response
+
+200 ([CronFederatedHPA](../auto-scaling-resources/cron-federated-hpa-v1alpha1#cronfederatedhpa)): OK
+
+201 ([CronFederatedHPA](../auto-scaling-resources/cron-federated-hpa-v1alpha1#cronfederatedhpa)): Created
+
+202 ([CronFederatedHPA](../auto-scaling-resources/cron-federated-hpa-v1alpha1#cronfederatedhpa)): Accepted
+
+### `update` replace the specified CronFederatedHPA
+
+#### HTTP Request
+
+PUT /apis/autoscaling.karmada.io/v1alpha1/namespaces/{namespace}/cronfederatedhpas/{name}
+
+#### Parameters
+
+- **name** (*in path*): string, required
+
+ name of the CronFederatedHPA
+
+- **namespace** (*in path*): string, required
+
+ [namespace](../common-parameter/common-parameters#namespace)
+
+- **body**: [CronFederatedHPA](../auto-scaling-resources/cron-federated-hpa-v1alpha1#cronfederatedhpa), required
+
+
+
+- **dryRun** (*in query*): string
+
+ [dryRun](../common-parameter/common-parameters#dryrun)
+
+- **fieldManager** (*in query*): string
+
+ [fieldManager](../common-parameter/common-parameters#fieldmanager)
+
+- **fieldValidation** (*in query*): string
+
+ [fieldValidation](../common-parameter/common-parameters#fieldvalidation)
+
+- **pretty** (*in query*): string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+#### Response
+
+200 ([CronFederatedHPA](../auto-scaling-resources/cron-federated-hpa-v1alpha1#cronfederatedhpa)): OK
+
+201 ([CronFederatedHPA](../auto-scaling-resources/cron-federated-hpa-v1alpha1#cronfederatedhpa)): Created
+
+### `update` replace status of the specified CronFederatedHPA
+
+#### HTTP Request
+
+PUT /apis/autoscaling.karmada.io/v1alpha1/namespaces/{namespace}/cronfederatedhpas/{name}/status
+
+#### Parameters
+
+- **name** (*in path*): string, required
+
+ name of the CronFederatedHPA
+
+- **namespace** (*in path*): string, required
+
+ [namespace](../common-parameter/common-parameters#namespace)
+
+- **body**: [CronFederatedHPA](../auto-scaling-resources/cron-federated-hpa-v1alpha1#cronfederatedhpa), required
+
+
+
+- **dryRun** (*in query*): string
+
+ [dryRun](../common-parameter/common-parameters#dryrun)
+
+- **fieldManager** (*in query*): string
+
+ [fieldManager](../common-parameter/common-parameters#fieldmanager)
+
+- **fieldValidation** (*in query*): string
+
+ [fieldValidation](../common-parameter/common-parameters#fieldvalidation)
+
+- **pretty** (*in query*): string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+#### Response
+
+200 ([CronFederatedHPA](../auto-scaling-resources/cron-federated-hpa-v1alpha1#cronfederatedhpa)): OK
+
+201 ([CronFederatedHPA](../auto-scaling-resources/cron-federated-hpa-v1alpha1#cronfederatedhpa)): Created
+
+### `patch` partially update the specified CronFederatedHPA
+
+#### HTTP Request
+
+PATCH /apis/autoscaling.karmada.io/v1alpha1/namespaces/{namespace}/cronfederatedhpas/{name}
+
+#### Parameters
+
+- **name** (*in path*): string, required
+
+ name of the CronFederatedHPA
+
+- **namespace** (*in path*): string, required
+
+ [namespace](../common-parameter/common-parameters#namespace)
+
+- **body**: [Patch](../common-definitions/patch#patch), required
+
+
+
+- **dryRun** (*in query*): string
+
+ [dryRun](../common-parameter/common-parameters#dryrun)
+
+- **fieldManager** (*in query*): string
+
+ [fieldManager](../common-parameter/common-parameters#fieldmanager)
+
+- **fieldValidation** (*in query*): string
+
+ [fieldValidation](../common-parameter/common-parameters#fieldvalidation)
+
+- **force** (*in query*): boolean
+
+ [force](../common-parameter/common-parameters#force)
+
+- **pretty** (*in query*): string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+#### Response
+
+200 ([CronFederatedHPA](../auto-scaling-resources/cron-federated-hpa-v1alpha1#cronfederatedhpa)): OK
+
+201 ([CronFederatedHPA](../auto-scaling-resources/cron-federated-hpa-v1alpha1#cronfederatedhpa)): Created
+
+### `patch` partially update status of the specified CronFederatedHPA
+
+#### HTTP Request
+
+PATCH /apis/autoscaling.karmada.io/v1alpha1/namespaces/{namespace}/cronfederatedhpas/{name}/status
+
+#### Parameters
+
+- **name** (*in path*): string, required
+
+ name of the CronFederatedHPA
+
+- **namespace** (*in path*): string, required
+
+ [namespace](../common-parameter/common-parameters#namespace)
+
+- **body**: [Patch](../common-definitions/patch#patch), required
+
+
+
+- **dryRun** (*in query*): string
+
+ [dryRun](../common-parameter/common-parameters#dryrun)
+
+- **fieldManager** (*in query*): string
+
+ [fieldManager](../common-parameter/common-parameters#fieldmanager)
+
+- **fieldValidation** (*in query*): string
+
+ [fieldValidation](../common-parameter/common-parameters#fieldvalidation)
+
+- **force** (*in query*): boolean
+
+ [force](../common-parameter/common-parameters#force)
+
+- **pretty** (*in query*): string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+#### Response
+
+200 ([CronFederatedHPA](../auto-scaling-resources/cron-federated-hpa-v1alpha1#cronfederatedhpa)): OK
+
+201 ([CronFederatedHPA](../auto-scaling-resources/cron-federated-hpa-v1alpha1#cronfederatedhpa)): Created
+
+### `delete` delete a CronFederatedHPA
+
+#### HTTP Request
+
+DELETE /apis/autoscaling.karmada.io/v1alpha1/namespaces/{namespace}/cronfederatedhpas/{name}
+
+#### Parameters
+
+- **name** (*in path*): string, required
+
+ name of the CronFederatedHPA
+
+- **namespace** (*in path*): string, required
+
+ [namespace](../common-parameter/common-parameters#namespace)
+
+- **body**: [DeleteOptions](../common-definitions/delete-options#deleteoptions)
+
+
+
+- **dryRun** (*in query*): string
+
+ [dryRun](../common-parameter/common-parameters#dryrun)
+
+- **gracePeriodSeconds** (*in query*): integer
+
+ [gracePeriodSeconds](../common-parameter/common-parameters#graceperiodseconds)
+
+- **pretty** (*in query*): string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+- **propagationPolicy** (*in query*): string
+
+ [propagationPolicy](../common-parameter/common-parameters#propagationpolicy)
+
+#### Response
+
+200 ([Status](../common-definitions/status#status)): OK
+
+202 ([Status](../common-definitions/status#status)): Accepted
+
+### `deletecollection` delete collection of CronFederatedHPA
+
+#### HTTP Request
+
+DELETE /apis/autoscaling.karmada.io/v1alpha1/namespaces/{namespace}/cronfederatedhpas
+
+#### Parameters
+
+- **namespace** (*in path*): string, required
+
+ [namespace](../common-parameter/common-parameters#namespace)
+
+- **body**: [DeleteOptions](../common-definitions/delete-options#deleteoptions)
+
+
+
+- **continue** (*in query*): string
+
+ [continue](../common-parameter/common-parameters#continue)
+
+- **dryRun** (*in query*): string
+
+ [dryRun](../common-parameter/common-parameters#dryrun)
+
+- **fieldSelector** (*in query*): string
+
+ [fieldSelector](../common-parameter/common-parameters#fieldselector)
+
+- **gracePeriodSeconds** (*in query*): integer
+
+ [gracePeriodSeconds](../common-parameter/common-parameters#graceperiodseconds)
+
+- **labelSelector** (*in query*): string
+
+ [labelSelector](../common-parameter/common-parameters#labelselector)
+
+- **limit** (*in query*): integer
+
+ [limit](../common-parameter/common-parameters#limit)
+
+- **pretty** (*in query*): string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+- **propagationPolicy** (*in query*): string
+
+ [propagationPolicy](../common-parameter/common-parameters#propagationpolicy)
+
+- **resourceVersion** (*in query*): string
+
+ [resourceVersion](../common-parameter/common-parameters#resourceversion)
+
+- **resourceVersionMatch** (*in query*): string
+
+ [resourceVersionMatch](../common-parameter/common-parameters#resourceversionmatch)
+
+- **sendInitialEvents** (*in query*): boolean
+
+ [sendInitialEvents](../common-parameter/common-parameters#sendinitialevents)
+
+- **timeoutSeconds** (*in query*): integer
+
+ [timeoutSeconds](../common-parameter/common-parameters#timeoutseconds)
+
+#### Response
+
+200 ([Status](../common-definitions/status#status)): OK
+
diff --git a/versioned_docs/version-v1.9/reference/karmada-api/auto-scaling-resources/federated-hpa-v1alpha1.md b/versioned_docs/version-v1.9/reference/karmada-api/auto-scaling-resources/federated-hpa-v1alpha1.md
new file mode 100644
index 000000000..6cae9bc6b
--- /dev/null
+++ b/versioned_docs/version-v1.9/reference/karmada-api/auto-scaling-resources/federated-hpa-v1alpha1.md
@@ -0,0 +1,1215 @@
+---
+api_metadata:
+ apiVersion: "autoscaling.karmada.io/v1alpha1"
+ import: "github.com/karmada-io/karmada/pkg/apis/autoscaling/v1alpha1"
+ kind: "FederatedHPA"
+content_type: "api_reference"
+description: "FederatedHPA is centralized HPA that can aggregate the metrics in multiple clusters."
+title: "FederatedHPA v1alpha1"
+weight: 1
+auto_generated: true
+---
+
+[//]: # (The file is auto-generated from the Go source code of the component using a generic generator,)
+[//]: # (which is forked from [reference-docs](https://github.com/kubernetes-sigs/reference-docs.)
+[//]: # (To update the reference content, please follow the `reference-api.sh`.)
+
+`apiVersion: autoscaling.karmada.io/v1alpha1`
+
+`import "github.com/karmada-io/karmada/pkg/apis/autoscaling/v1alpha1"`
+
+## FederatedHPA
+
+FederatedHPA is centralized HPA that can aggregate the metrics in multiple clusters. When the system load increases, it will query the metrics from multiple clusters and scales up the replicas. When the system load decreases, it will query the metrics from multiple clusters and scales down the replicas. After the replicas are scaled up/down, karmada-scheduler will schedule the replicas based on the policy.
+
+
+
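+For illustration only, a minimal FederatedHPA that keeps an assumed Deployment named `nginx` between 1 and 10 replicas based on average CPU utilization might look like the sketch below; the names and thresholds are hypothetical.
+
+```yaml
+apiVersion: autoscaling.karmada.io/v1alpha1
+kind: FederatedHPA
+metadata:
+  name: nginx-fhpa            # hypothetical name
+  namespace: default
+spec:
+  scaleTargetRef:
+    apiVersion: apps/v1
+    kind: Deployment
+    name: nginx               # assumed workload
+  minReplicas: 1
+  maxReplicas: 10
+  metrics:
+    - type: Resource
+      resource:
+        name: cpu
+        target:
+          type: Utilization
+          averageUtilization: 80
+```
+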
+- **apiVersion**: autoscaling.karmada.io/v1alpha1
+
+- **kind**: FederatedHPA
+
+- **metadata** ([ObjectMeta](../common-definitions/object-meta#objectmeta))
+
+- **spec** ([FederatedHPASpec](../auto-scaling-resources/federated-hpa-v1alpha1#federatedhpaspec)), required
+
+ Spec is the specification of the FederatedHPA.
+
+- **status** (HorizontalPodAutoscalerStatus)
+
+ Status is the current status of the FederatedHPA.
+
+
+
+ *HorizontalPodAutoscalerStatus describes the current status of a horizontal pod autoscaler.*
+
+ - **status.desiredReplicas** (int32), required
+
+ desiredReplicas is the desired number of replicas of pods managed by this autoscaler, as last calculated by the autoscaler.
+
+ - **status.conditions** ([]HorizontalPodAutoscalerCondition)
+
+ *Patch strategy: merge on key `type`*
+
+ *Map: unique values on key type will be kept during a merge*
+
+ conditions is the set of conditions required for this autoscaler to scale its target, and indicates whether or not those conditions are met.
+
+
+
+ *HorizontalPodAutoscalerCondition describes the state of a HorizontalPodAutoscaler at a certain point.*
+
+ - **status.conditions.status** (string), required
+
+ status is the status of the condition (True, False, Unknown)
+
+ - **status.conditions.type** (string), required
+
+ type describes the current condition
+
+ - **status.conditions.lastTransitionTime** (Time)
+
+ lastTransitionTime is the last time the condition transitioned from one status to another
+
+
+
+ *Time is a wrapper around time.Time which supports correct marshaling to YAML and JSON. Wrappers are provided for many of the factory methods that the time package offers.*
+
+ - **status.conditions.message** (string)
+
+ message is a human-readable explanation containing details about the transition
+
+ - **status.conditions.reason** (string)
+
+ reason is the reason for the condition's last transition.
+
+ - **status.currentMetrics** ([]MetricStatus)
+
+ *Atomic: will be replaced during a merge*
+
+ currentMetrics is the last read state of the metrics used by this autoscaler.
+
+
+
+ *MetricStatus describes the last-read state of a single metric.*
+
+ - **status.currentMetrics.type** (string), required
+
+      type is the type of metric source. It will be one of "ContainerResource", "External", "Object", "Pods" or "Resource", each corresponding to a matching field in the object. Note: the "ContainerResource" type is available only when the feature-gate HPAContainerMetrics is enabled
+
+ - **status.currentMetrics.containerResource** (ContainerResourceMetricStatus)
+
+ container resource refers to a resource metric (such as those specified in requests and limits) known to Kubernetes describing a single container in each pod in the current scale target (e.g. CPU or memory). Such metrics are built in to Kubernetes, and have special scaling options on top of those available to normal per-pod metrics using the "pods" source.
+
+
+
+ *ContainerResourceMetricStatus indicates the current value of a resource metric known to Kubernetes, as specified in requests and limits, describing a single container in each pod in the current scale target (e.g. CPU or memory). Such metrics are built in to Kubernetes, and have special scaling options on top of those available to normal per-pod metrics using the "pods" source.*
+
+ - **status.currentMetrics.containerResource.container** (string), required
+
+ container is the name of the container in the pods of the scaling target
+
+ - **status.currentMetrics.containerResource.current** (MetricValueStatus), required
+
+ current contains the current value for the given metric
+
+
+
+ *MetricValueStatus holds the current value for a metric*
+
+ - **status.currentMetrics.containerResource.current.averageUtilization** (int32)
+
+ currentAverageUtilization is the current value of the average of the resource metric across all relevant pods, represented as a percentage of the requested value of the resource for the pods.
+
+ - **status.currentMetrics.containerResource.current.averageValue** ([Quantity](../common-definitions/quantity#quantity))
+
+ averageValue is the current value of the average of the metric across all relevant pods (as a quantity)
+
+ - **status.currentMetrics.containerResource.current.value** ([Quantity](../common-definitions/quantity#quantity))
+
+ value is the current value of the metric (as a quantity).
+
+ - **status.currentMetrics.containerResource.name** (string), required
+
+ name is the name of the resource in question.
+
+ - **status.currentMetrics.external** (ExternalMetricStatus)
+
+ external refers to a global metric that is not associated with any Kubernetes object. It allows autoscaling based on information coming from components running outside of cluster (for example length of queue in cloud messaging service, or QPS from loadbalancer running outside of cluster).
+
+
+
+ *ExternalMetricStatus indicates the current value of a global metric not associated with any Kubernetes object.*
+
+ - **status.currentMetrics.external.current** (MetricValueStatus), required
+
+ current contains the current value for the given metric
+
+
+
+ *MetricValueStatus holds the current value for a metric*
+
+ - **status.currentMetrics.external.current.averageUtilization** (int32)
+
+ currentAverageUtilization is the current value of the average of the resource metric across all relevant pods, represented as a percentage of the requested value of the resource for the pods.
+
+ - **status.currentMetrics.external.current.averageValue** ([Quantity](../common-definitions/quantity#quantity))
+
+ averageValue is the current value of the average of the metric across all relevant pods (as a quantity)
+
+ - **status.currentMetrics.external.current.value** ([Quantity](../common-definitions/quantity#quantity))
+
+ value is the current value of the metric (as a quantity).
+
+ - **status.currentMetrics.external.metric** (MetricIdentifier), required
+
+ metric identifies the target metric by name and selector
+
+
+
+ *MetricIdentifier defines the name and optionally selector for a metric*
+
+ - **status.currentMetrics.external.metric.name** (string), required
+
+ name is the name of the given metric
+
+ - **status.currentMetrics.external.metric.selector** ([LabelSelector](../common-definitions/label-selector#labelselector))
+
+ selector is the string-encoded form of a standard kubernetes label selector for the given metric When set, it is passed as an additional parameter to the metrics server for more specific metrics scoping. When unset, just the metricName will be used to gather metrics.
+
+ - **status.currentMetrics.object** (ObjectMetricStatus)
+
+ object refers to a metric describing a single kubernetes object (for example, hits-per-second on an Ingress object).
+
+
+
+ *ObjectMetricStatus indicates the current value of a metric describing a kubernetes object (for example, hits-per-second on an Ingress object).*
+
+ - **status.currentMetrics.object.current** (MetricValueStatus), required
+
+ current contains the current value for the given metric
+
+
+
+ *MetricValueStatus holds the current value for a metric*
+
+ - **status.currentMetrics.object.current.averageUtilization** (int32)
+
+ currentAverageUtilization is the current value of the average of the resource metric across all relevant pods, represented as a percentage of the requested value of the resource for the pods.
+
+ - **status.currentMetrics.object.current.averageValue** ([Quantity](../common-definitions/quantity#quantity))
+
+ averageValue is the current value of the average of the metric across all relevant pods (as a quantity)
+
+ - **status.currentMetrics.object.current.value** ([Quantity](../common-definitions/quantity#quantity))
+
+ value is the current value of the metric (as a quantity).
+
+ - **status.currentMetrics.object.describedObject** (CrossVersionObjectReference), required
+
+        DescribedObject specifies the description of an object, such as kind, name, apiVersion
+
+
+
+ *CrossVersionObjectReference contains enough information to let you identify the referred resource.*
+
+ - **status.currentMetrics.object.describedObject.kind** (string), required
+
+ kind is the kind of the referent; More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
+
+ - **status.currentMetrics.object.describedObject.name** (string), required
+
+ name is the name of the referent; More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
+
+ - **status.currentMetrics.object.describedObject.apiVersion** (string)
+
+ apiVersion is the API version of the referent
+
+ - **status.currentMetrics.object.metric** (MetricIdentifier), required
+
+ metric identifies the target metric by name and selector
+
+
+
+ *MetricIdentifier defines the name and optionally selector for a metric*
+
+ - **status.currentMetrics.object.metric.name** (string), required
+
+ name is the name of the given metric
+
+ - **status.currentMetrics.object.metric.selector** ([LabelSelector](../common-definitions/label-selector#labelselector))
+
+ selector is the string-encoded form of a standard kubernetes label selector for the given metric When set, it is passed as an additional parameter to the metrics server for more specific metrics scoping. When unset, just the metricName will be used to gather metrics.
+
+ - **status.currentMetrics.pods** (PodsMetricStatus)
+
+ pods refers to a metric describing each pod in the current scale target (for example, transactions-processed-per-second). The values will be averaged together before being compared to the target value.
+
+
+
+ *PodsMetricStatus indicates the current value of a metric describing each pod in the current scale target (for example, transactions-processed-per-second).*
+
+ - **status.currentMetrics.pods.current** (MetricValueStatus), required
+
+ current contains the current value for the given metric
+
+
+
+ *MetricValueStatus holds the current value for a metric*
+
+ - **status.currentMetrics.pods.current.averageUtilization** (int32)
+
+ currentAverageUtilization is the current value of the average of the resource metric across all relevant pods, represented as a percentage of the requested value of the resource for the pods.
+
+ - **status.currentMetrics.pods.current.averageValue** ([Quantity](../common-definitions/quantity#quantity))
+
+ averageValue is the current value of the average of the metric across all relevant pods (as a quantity)
+
+ - **status.currentMetrics.pods.current.value** ([Quantity](../common-definitions/quantity#quantity))
+
+ value is the current value of the metric (as a quantity).
+
+ - **status.currentMetrics.pods.metric** (MetricIdentifier), required
+
+ metric identifies the target metric by name and selector
+
+
+
+ *MetricIdentifier defines the name and optionally selector for a metric*
+
+ - **status.currentMetrics.pods.metric.name** (string), required
+
+ name is the name of the given metric
+
+ - **status.currentMetrics.pods.metric.selector** ([LabelSelector](../common-definitions/label-selector#labelselector))
+
+ selector is the string-encoded form of a standard kubernetes label selector for the given metric When set, it is passed as an additional parameter to the metrics server for more specific metrics scoping. When unset, just the metricName will be used to gather metrics.
+
+ - **status.currentMetrics.resource** (ResourceMetricStatus)
+
+ resource refers to a resource metric (such as those specified in requests and limits) known to Kubernetes describing each pod in the current scale target (e.g. CPU or memory). Such metrics are built in to Kubernetes, and have special scaling options on top of those available to normal per-pod metrics using the "pods" source.
+
+
+
+ *ResourceMetricStatus indicates the current value of a resource metric known to Kubernetes, as specified in requests and limits, describing each pod in the current scale target (e.g. CPU or memory). Such metrics are built in to Kubernetes, and have special scaling options on top of those available to normal per-pod metrics using the "pods" source.*
+
+ - **status.currentMetrics.resource.current** (MetricValueStatus), required
+
+ current contains the current value for the given metric
+
+
+
+ *MetricValueStatus holds the current value for a metric*
+
+ - **status.currentMetrics.resource.current.averageUtilization** (int32)
+
+ currentAverageUtilization is the current value of the average of the resource metric across all relevant pods, represented as a percentage of the requested value of the resource for the pods.
+
+ - **status.currentMetrics.resource.current.averageValue** ([Quantity](../common-definitions/quantity#quantity))
+
+ averageValue is the current value of the average of the metric across all relevant pods (as a quantity)
+
+ - **status.currentMetrics.resource.current.value** ([Quantity](../common-definitions/quantity#quantity))
+
+ value is the current value of the metric (as a quantity).
+
+ - **status.currentMetrics.resource.name** (string), required
+
+ name is the name of the resource in question.
+
+ - **status.currentReplicas** (int32)
+
+ currentReplicas is current number of replicas of pods managed by this autoscaler, as last seen by the autoscaler.
+
+ - **status.lastScaleTime** (Time)
+
+ lastScaleTime is the last time the HorizontalPodAutoscaler scaled the number of pods, used by the autoscaler to control how often the number of pods is changed.
+
+
+
+ *Time is a wrapper around time.Time which supports correct marshaling to YAML and JSON. Wrappers are provided for many of the factory methods that the time package offers.*
+
+ - **status.observedGeneration** (int64)
+
+ observedGeneration is the most recent generation observed by this autoscaler.
+
+## FederatedHPASpec
+
+FederatedHPASpec describes the desired functionality of the FederatedHPA.
+
+
+
+- **maxReplicas** (int32), required
+
+  MaxReplicas is the upper limit for the number of replicas to which the autoscaler can scale up. It cannot be less than minReplicas.
+
+- **scaleTargetRef** (CrossVersionObjectReference), required
+
+  ScaleTargetRef points to the target resource to scale, and is used to identify the pods for which metrics should be collected, as well as to actually change the replica count.
+
+
+
+ *CrossVersionObjectReference contains enough information to let you identify the referred resource.*
+
+ - **scaleTargetRef.kind** (string), required
+
+ kind is the kind of the referent; More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
+
+ - **scaleTargetRef.name** (string), required
+
+ name is the name of the referent; More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
+
+ - **scaleTargetRef.apiVersion** (string)
+
+ apiVersion is the API version of the referent
+
+- **behavior** (HorizontalPodAutoscalerBehavior)
+
+ Behavior configures the scaling behavior of the target in both Up and Down directions (scaleUp and scaleDown fields respectively). If not set, the default HPAScalingRules for scale up and scale down are used.
+
+
+
+ *HorizontalPodAutoscalerBehavior configures the scaling behavior of the target in both Up and Down directions (scaleUp and scaleDown fields respectively).*
+
+ - **behavior.scaleDown** (HPAScalingRules)
+
+ scaleDown is scaling policy for scaling Down. If not set, the default value is to allow to scale down to minReplicas pods, with a 300 second stabilization window (i.e., the highest recommendation for the last 300sec is used).
+
+
+
+ *HPAScalingRules configures the scaling behavior for one direction. These Rules are applied after calculating DesiredReplicas from metrics for the HPA. They can limit the scaling velocity by specifying scaling policies. They can prevent flapping by specifying the stabilization window, so that the number of replicas is not set instantly, instead, the safest value from the stabilization window is chosen.*
+
+ - **behavior.scaleDown.policies** ([]HPAScalingPolicy)
+
+ *Atomic: will be replaced during a merge*
+
+      policies is a list of potential scaling policies which can be used during scaling. At least one policy must be specified, otherwise the HPAScalingRules will be discarded as invalid
+
+
+
+ *HPAScalingPolicy is a single policy which must hold true for a specified past interval.*
+
+ - **behavior.scaleDown.policies.periodSeconds** (int32), required
+
+ periodSeconds specifies the window of time for which the policy should hold true. PeriodSeconds must be greater than zero and less than or equal to 1800 (30 min).
+
+ - **behavior.scaleDown.policies.type** (string), required
+
+ type is used to specify the scaling policy.
+
+ - **behavior.scaleDown.policies.value** (int32), required
+
+ value contains the amount of change which is permitted by the policy. It must be greater than zero
+
+ - **behavior.scaleDown.selectPolicy** (string)
+
+ selectPolicy is used to specify which policy should be used. If not set, the default value Max is used.
+
+ - **behavior.scaleDown.stabilizationWindowSeconds** (int32)
+
+ stabilizationWindowSeconds is the number of seconds for which past recommendations should be considered while scaling up or scaling down. StabilizationWindowSeconds must be greater than or equal to zero and less than or equal to 3600 (one hour). If not set, use the default values: - For scale up: 0 (i.e. no stabilization is done). - For scale down: 300 (i.e. the stabilization window is 300 seconds long).
+
+ - **behavior.scaleUp** (HPAScalingRules)
+
+ scaleUp is scaling policy for scaling Up. If not set, the default value is the higher of:
+ * increase no more than 4 pods per 60 seconds
+ * double the number of pods per 60 seconds
+ No stabilization is used.
+
+
+
+ *HPAScalingRules configures the scaling behavior for one direction. These Rules are applied after calculating DesiredReplicas from metrics for the HPA. They can limit the scaling velocity by specifying scaling policies. They can prevent flapping by specifying the stabilization window, so that the number of replicas is not set instantly, instead, the safest value from the stabilization window is chosen.*
+
+ - **behavior.scaleUp.policies** ([]HPAScalingPolicy)
+
+ *Atomic: will be replaced during a merge*
+
+      policies is a list of potential scaling policies which can be used during scaling. At least one policy must be specified, otherwise the HPAScalingRules will be discarded as invalid
+
+
+
+ *HPAScalingPolicy is a single policy which must hold true for a specified past interval.*
+
+ - **behavior.scaleUp.policies.periodSeconds** (int32), required
+
+ periodSeconds specifies the window of time for which the policy should hold true. PeriodSeconds must be greater than zero and less than or equal to 1800 (30 min).
+
+ - **behavior.scaleUp.policies.type** (string), required
+
+ type is used to specify the scaling policy.
+
+ - **behavior.scaleUp.policies.value** (int32), required
+
+ value contains the amount of change which is permitted by the policy. It must be greater than zero
+
+ - **behavior.scaleUp.selectPolicy** (string)
+
+ selectPolicy is used to specify which policy should be used. If not set, the default value Max is used.
+
+ - **behavior.scaleUp.stabilizationWindowSeconds** (int32)
+
+ stabilizationWindowSeconds is the number of seconds for which past recommendations should be considered while scaling up or scaling down. StabilizationWindowSeconds must be greater than or equal to zero and less than or equal to 3600 (one hour). If not set, use the default values: - For scale up: 0 (i.e. no stabilization is done). - For scale down: 300 (i.e. the stabilization window is 300 seconds long).
+
+- **metrics** ([]MetricSpec)
+
+  Metrics contains the specifications used to calculate the desired replica count (the maximum replica count across all metrics will be used). The desired replica count is calculated by multiplying the ratio between the target value and the current value by the current number of pods. Ergo, metrics used must decrease as the pod count is increased, and vice-versa. See the individual metric source types for more information about how each type of metric must respond. If not set, the default metric will be set to 80% average CPU utilization.
+
+
+
+ *MetricSpec specifies how to scale based on a single metric (only `type` and one other matching field should be set at once).*
+
+ - **metrics.type** (string), required
+
+    type is the type of metric source. It should be one of "ContainerResource", "External", "Object", "Pods" or "Resource", each mapping to a matching field in the object. Note: the "ContainerResource" type is available only when the feature-gate HPAContainerMetrics is enabled
+
+ - **metrics.containerResource** (ContainerResourceMetricSource)
+
+ containerResource refers to a resource metric (such as those specified in requests and limits) known to Kubernetes describing a single container in each pod of the current scale target (e.g. CPU or memory). Such metrics are built in to Kubernetes, and have special scaling options on top of those available to normal per-pod metrics using the "pods" source. This is an alpha feature and can be enabled by the HPAContainerMetrics feature flag.
+
+
+
+ *ContainerResourceMetricSource indicates how to scale on a resource metric known to Kubernetes, as specified in requests and limits, describing each pod in the current scale target (e.g. CPU or memory). The values will be averaged together before being compared to the target. Such metrics are built in to Kubernetes, and have special scaling options on top of those available to normal per-pod metrics using the "pods" source. Only one "target" type should be set.*
+
+ - **metrics.containerResource.container** (string), required
+
+ container is the name of the container in the pods of the scaling target
+
+ - **metrics.containerResource.name** (string), required
+
+ name is the name of the resource in question.
+
+ - **metrics.containerResource.target** (MetricTarget), required
+
+ target specifies the target value for the given metric
+
+
+
+ *MetricTarget defines the target value, average value, or average utilization of a specific metric*
+
+ - **metrics.containerResource.target.type** (string), required
+
+ type represents whether the metric type is Utilization, Value, or AverageValue
+
+ - **metrics.containerResource.target.averageUtilization** (int32)
+
+ averageUtilization is the target value of the average of the resource metric across all relevant pods, represented as a percentage of the requested value of the resource for the pods. Currently only valid for Resource metric source type
+
+ - **metrics.containerResource.target.averageValue** ([Quantity](../common-definitions/quantity#quantity))
+
+ averageValue is the target value of the average of the metric across all relevant pods (as a quantity)
+
+ - **metrics.containerResource.target.value** ([Quantity](../common-definitions/quantity#quantity))
+
+ value is the target value of the metric (as a quantity).
+
+ - **metrics.external** (ExternalMetricSource)
+
+ external refers to a global metric that is not associated with any Kubernetes object. It allows autoscaling based on information coming from components running outside of cluster (for example length of queue in cloud messaging service, or QPS from loadbalancer running outside of cluster).
+
+
+
+ *ExternalMetricSource indicates how to scale on a metric not associated with any Kubernetes object (for example length of queue in cloud messaging service, or QPS from loadbalancer running outside of cluster).*
+
+ - **metrics.external.metric** (MetricIdentifier), required
+
+ metric identifies the target metric by name and selector
+
+
+
+ *MetricIdentifier defines the name and optionally selector for a metric*
+
+ - **metrics.external.metric.name** (string), required
+
+ name is the name of the given metric
+
+ - **metrics.external.metric.selector** ([LabelSelector](../common-definitions/label-selector#labelselector))
+
+ selector is the string-encoded form of a standard kubernetes label selector for the given metric When set, it is passed as an additional parameter to the metrics server for more specific metrics scoping. When unset, just the metricName will be used to gather metrics.
+
+ - **metrics.external.target** (MetricTarget), required
+
+ target specifies the target value for the given metric
+
+
+
+ *MetricTarget defines the target value, average value, or average utilization of a specific metric*
+
+ - **metrics.external.target.type** (string), required
+
+ type represents whether the metric type is Utilization, Value, or AverageValue
+
+ - **metrics.external.target.averageUtilization** (int32)
+
+ averageUtilization is the target value of the average of the resource metric across all relevant pods, represented as a percentage of the requested value of the resource for the pods. Currently only valid for Resource metric source type
+
+ - **metrics.external.target.averageValue** ([Quantity](../common-definitions/quantity#quantity))
+
+ averageValue is the target value of the average of the metric across all relevant pods (as a quantity)
+
+ - **metrics.external.target.value** ([Quantity](../common-definitions/quantity#quantity))
+
+ value is the target value of the metric (as a quantity).
+
+ - **metrics.object** (ObjectMetricSource)
+
+ object refers to a metric describing a single kubernetes object (for example, hits-per-second on an Ingress object).
+
+
+
+ *ObjectMetricSource indicates how to scale on a metric describing a kubernetes object (for example, hits-per-second on an Ingress object).*
+
+ - **metrics.object.describedObject** (CrossVersionObjectReference), required
+
+      describedObject specifies the description of an object, such as kind, name, apiVersion
+
+
+
+ *CrossVersionObjectReference contains enough information to let you identify the referred resource.*
+
+ - **metrics.object.describedObject.kind** (string), required
+
+ kind is the kind of the referent; More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
+
+ - **metrics.object.describedObject.name** (string), required
+
+ name is the name of the referent; More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
+
+ - **metrics.object.describedObject.apiVersion** (string)
+
+ apiVersion is the API version of the referent
+
+ - **metrics.object.metric** (MetricIdentifier), required
+
+ metric identifies the target metric by name and selector
+
+
+
+ *MetricIdentifier defines the name and optionally selector for a metric*
+
+ - **metrics.object.metric.name** (string), required
+
+ name is the name of the given metric
+
+ - **metrics.object.metric.selector** ([LabelSelector](../common-definitions/label-selector#labelselector))
+
+ selector is the string-encoded form of a standard kubernetes label selector for the given metric When set, it is passed as an additional parameter to the metrics server for more specific metrics scoping. When unset, just the metricName will be used to gather metrics.
+
+ - **metrics.object.target** (MetricTarget), required
+
+ target specifies the target value for the given metric
+
+
+
+ *MetricTarget defines the target value, average value, or average utilization of a specific metric*
+
+ - **metrics.object.target.type** (string), required
+
+ type represents whether the metric type is Utilization, Value, or AverageValue
+
+ - **metrics.object.target.averageUtilization** (int32)
+
+ averageUtilization is the target value of the average of the resource metric across all relevant pods, represented as a percentage of the requested value of the resource for the pods. Currently only valid for Resource metric source type
+
+ - **metrics.object.target.averageValue** ([Quantity](../common-definitions/quantity#quantity))
+
+ averageValue is the target value of the average of the metric across all relevant pods (as a quantity)
+
+ - **metrics.object.target.value** ([Quantity](../common-definitions/quantity#quantity))
+
+ value is the target value of the metric (as a quantity).
+
+ - **metrics.pods** (PodsMetricSource)
+
+ pods refers to a metric describing each pod in the current scale target (for example, transactions-processed-per-second). The values will be averaged together before being compared to the target value.
+
+
+
+ *PodsMetricSource indicates how to scale on a metric describing each pod in the current scale target (for example, transactions-processed-per-second). The values will be averaged together before being compared to the target value.*
+
+ - **metrics.pods.metric** (MetricIdentifier), required
+
+ metric identifies the target metric by name and selector
+
+
+
+ *MetricIdentifier defines the name and optionally selector for a metric*
+
+ - **metrics.pods.metric.name** (string), required
+
+ name is the name of the given metric
+
+ - **metrics.pods.metric.selector** ([LabelSelector](../common-definitions/label-selector#labelselector))
+
+ selector is the string-encoded form of a standard kubernetes label selector for the given metric When set, it is passed as an additional parameter to the metrics server for more specific metrics scoping. When unset, just the metricName will be used to gather metrics.
+
+ - **metrics.pods.target** (MetricTarget), required
+
+ target specifies the target value for the given metric
+
+
+
+ *MetricTarget defines the target value, average value, or average utilization of a specific metric*
+
+ - **metrics.pods.target.type** (string), required
+
+ type represents whether the metric type is Utilization, Value, or AverageValue
+
+ - **metrics.pods.target.averageUtilization** (int32)
+
+ averageUtilization is the target value of the average of the resource metric across all relevant pods, represented as a percentage of the requested value of the resource for the pods. Currently only valid for Resource metric source type
+
+ - **metrics.pods.target.averageValue** ([Quantity](../common-definitions/quantity#quantity))
+
+ averageValue is the target value of the average of the metric across all relevant pods (as a quantity)
+
+ - **metrics.pods.target.value** ([Quantity](../common-definitions/quantity#quantity))
+
+ value is the target value of the metric (as a quantity).
+
+ - **metrics.resource** (ResourceMetricSource)
+
+ resource refers to a resource metric (such as those specified in requests and limits) known to Kubernetes describing each pod in the current scale target (e.g. CPU or memory). Such metrics are built in to Kubernetes, and have special scaling options on top of those available to normal per-pod metrics using the "pods" source.
+
+
+
+ *ResourceMetricSource indicates how to scale on a resource metric known to Kubernetes, as specified in requests and limits, describing each pod in the current scale target (e.g. CPU or memory). The values will be averaged together before being compared to the target. Such metrics are built in to Kubernetes, and have special scaling options on top of those available to normal per-pod metrics using the "pods" source. Only one "target" type should be set.*
+
+ - **metrics.resource.name** (string), required
+
+ name is the name of the resource in question.
+
+ - **metrics.resource.target** (MetricTarget), required
+
+ target specifies the target value for the given metric
+
+
+
+ *MetricTarget defines the target value, average value, or average utilization of a specific metric*
+
+ - **metrics.resource.target.type** (string), required
+
+ type represents whether the metric type is Utilization, Value, or AverageValue
+
+ - **metrics.resource.target.averageUtilization** (int32)
+
+ averageUtilization is the target value of the average of the resource metric across all relevant pods, represented as a percentage of the requested value of the resource for the pods. Currently only valid for Resource metric source type
+
+ - **metrics.resource.target.averageValue** ([Quantity](../common-definitions/quantity#quantity))
+
+ averageValue is the target value of the average of the metric across all relevant pods (as a quantity)
+
+ - **metrics.resource.target.value** ([Quantity](../common-definitions/quantity#quantity))
+
+ value is the target value of the metric (as a quantity).
+
+- **minReplicas** (int32)
+
+ MinReplicas is the lower limit for the number of replicas to which the autoscaler can scale down. It defaults to 1 pod.
+
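+Putting the FederatedHPASpec fields above together, a manifest might look like the sketch below. This is a minimal, hypothetical example: the names `sample-fhpa` and `sample-deployment` are placeholders, and only `scaleTargetRef` and `maxReplicas` are required.
+
+```yaml
+# A minimal sketch of a FederatedHPA manifest; all names are hypothetical.
+apiVersion: autoscaling.karmada.io/v1alpha1
+kind: FederatedHPA
+metadata:
+  name: sample-fhpa
+  namespace: default
+spec:
+  # Required: the workload whose replica count the autoscaler adjusts.
+  scaleTargetRef:
+    apiVersion: apps/v1
+    kind: Deployment
+    name: sample-deployment
+  minReplicas: 1
+  maxReplicas: 10
+  # Optional: when omitted, the default metric is 80% average CPU utilization.
+  metrics:
+  - type: Resource
+    resource:
+      name: cpu
+      target:
+        type: Utilization
+        averageUtilization: 80
+  # Optional: limit how fast the autoscaler may scale down.
+  behavior:
+    scaleDown:
+      stabilizationWindowSeconds: 300
+      policies:
+      - type: Pods
+        value: 1
+        periodSeconds: 60
+```
+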
+## FederatedHPAList
+
+FederatedHPAList contains a list of FederatedHPA.
+
+
+
+- **apiVersion**: autoscaling.karmada.io/v1alpha1
+
+- **kind**: FederatedHPAList
+
+- **metadata** ([ListMeta](../common-definitions/list-meta#listmeta))
+
+- **items** ([][FederatedHPA](../auto-scaling-resources/federated-hpa-v1alpha1#federatedhpa)), required
+
+## Operations
+
+
+
+### `get` read the specified FederatedHPA
+
+#### HTTP Request
+
+GET /apis/autoscaling.karmada.io/v1alpha1/namespaces/{namespace}/federatedhpas/{name}
+
+#### Parameters
+
+- **name** (*in path*): string, required
+
+ name of the FederatedHPA
+
+- **namespace** (*in path*): string, required
+
+ [namespace](../common-parameter/common-parameters#namespace)
+
+- **pretty** (*in query*): string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+#### Response
+
+200 ([FederatedHPA](../auto-scaling-resources/federated-hpa-v1alpha1#federatedhpa)): OK
+
+### `get` read status of the specified FederatedHPA
+
+#### HTTP Request
+
+GET /apis/autoscaling.karmada.io/v1alpha1/namespaces/{namespace}/federatedhpas/{name}/status
+
+#### Parameters
+
+- **name** (*in path*): string, required
+
+ name of the FederatedHPA
+
+- **namespace** (*in path*): string, required
+
+ [namespace](../common-parameter/common-parameters#namespace)
+
+- **pretty** (*in query*): string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+#### Response
+
+200 ([FederatedHPA](../auto-scaling-resources/federated-hpa-v1alpha1#federatedhpa)): OK
+
+### `list` list or watch objects of kind FederatedHPA
+
+#### HTTP Request
+
+GET /apis/autoscaling.karmada.io/v1alpha1/namespaces/{namespace}/federatedhpas
+
+#### Parameters
+
+- **namespace** (*in path*): string, required
+
+ [namespace](../common-parameter/common-parameters#namespace)
+
+- **allowWatchBookmarks** (*in query*): boolean
+
+ [allowWatchBookmarks](../common-parameter/common-parameters#allowwatchbookmarks)
+
+- **continue** (*in query*): string
+
+ [continue](../common-parameter/common-parameters#continue)
+
+- **fieldSelector** (*in query*): string
+
+ [fieldSelector](../common-parameter/common-parameters#fieldselector)
+
+- **labelSelector** (*in query*): string
+
+ [labelSelector](../common-parameter/common-parameters#labelselector)
+
+- **limit** (*in query*): integer
+
+ [limit](../common-parameter/common-parameters#limit)
+
+- **pretty** (*in query*): string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+- **resourceVersion** (*in query*): string
+
+ [resourceVersion](../common-parameter/common-parameters#resourceversion)
+
+- **resourceVersionMatch** (*in query*): string
+
+ [resourceVersionMatch](../common-parameter/common-parameters#resourceversionmatch)
+
+- **sendInitialEvents** (*in query*): boolean
+
+ [sendInitialEvents](../common-parameter/common-parameters#sendinitialevents)
+
+- **timeoutSeconds** (*in query*): integer
+
+ [timeoutSeconds](../common-parameter/common-parameters#timeoutseconds)
+
+- **watch** (*in query*): boolean
+
+ [watch](../common-parameter/common-parameters#watch)
+
+#### Response
+
+200 ([FederatedHPAList](../auto-scaling-resources/federated-hpa-v1alpha1#federatedhpalist)): OK
+
+### `list` list or watch objects of kind FederatedHPA
+
+#### HTTP Request
+
+GET /apis/autoscaling.karmada.io/v1alpha1/federatedhpas
+
+#### Parameters
+
+- **allowWatchBookmarks** (*in query*): boolean
+
+ [allowWatchBookmarks](../common-parameter/common-parameters#allowwatchbookmarks)
+
+- **continue** (*in query*): string
+
+ [continue](../common-parameter/common-parameters#continue)
+
+- **fieldSelector** (*in query*): string
+
+ [fieldSelector](../common-parameter/common-parameters#fieldselector)
+
+- **labelSelector** (*in query*): string
+
+ [labelSelector](../common-parameter/common-parameters#labelselector)
+
+- **limit** (*in query*): integer
+
+ [limit](../common-parameter/common-parameters#limit)
+
+- **pretty** (*in query*): string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+- **resourceVersion** (*in query*): string
+
+ [resourceVersion](../common-parameter/common-parameters#resourceversion)
+
+- **resourceVersionMatch** (*in query*): string
+
+ [resourceVersionMatch](../common-parameter/common-parameters#resourceversionmatch)
+
+- **sendInitialEvents** (*in query*): boolean
+
+ [sendInitialEvents](../common-parameter/common-parameters#sendinitialevents)
+
+- **timeoutSeconds** (*in query*): integer
+
+ [timeoutSeconds](../common-parameter/common-parameters#timeoutseconds)
+
+- **watch** (*in query*): boolean
+
+ [watch](../common-parameter/common-parameters#watch)
+
+#### Response
+
+200 ([FederatedHPAList](../auto-scaling-resources/federated-hpa-v1alpha1#federatedhpalist)): OK
+
+### `create` create a FederatedHPA
+
+#### HTTP Request
+
+POST /apis/autoscaling.karmada.io/v1alpha1/namespaces/{namespace}/federatedhpas
+
+#### Parameters
+
+- **namespace** (*in path*): string, required
+
+ [namespace](../common-parameter/common-parameters#namespace)
+
+- **body**: [FederatedHPA](../auto-scaling-resources/federated-hpa-v1alpha1#federatedhpa), required
+
+
+
+- **dryRun** (*in query*): string
+
+ [dryRun](../common-parameter/common-parameters#dryrun)
+
+- **fieldManager** (*in query*): string
+
+ [fieldManager](../common-parameter/common-parameters#fieldmanager)
+
+- **fieldValidation** (*in query*): string
+
+ [fieldValidation](../common-parameter/common-parameters#fieldvalidation)
+
+- **pretty** (*in query*): string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+#### Response
+
+200 ([FederatedHPA](../auto-scaling-resources/federated-hpa-v1alpha1#federatedhpa)): OK
+
+201 ([FederatedHPA](../auto-scaling-resources/federated-hpa-v1alpha1#federatedhpa)): Created
+
+202 ([FederatedHPA](../auto-scaling-resources/federated-hpa-v1alpha1#federatedhpa)): Accepted
+
+### `update` replace the specified FederatedHPA
+
+#### HTTP Request
+
+PUT /apis/autoscaling.karmada.io/v1alpha1/namespaces/{namespace}/federatedhpas/{name}
+
+#### Parameters
+
+- **name** (*in path*): string, required
+
+ name of the FederatedHPA
+
+- **namespace** (*in path*): string, required
+
+ [namespace](../common-parameter/common-parameters#namespace)
+
+- **body**: [FederatedHPA](../auto-scaling-resources/federated-hpa-v1alpha1#federatedhpa), required
+
+
+
+- **dryRun** (*in query*): string
+
+ [dryRun](../common-parameter/common-parameters#dryrun)
+
+- **fieldManager** (*in query*): string
+
+ [fieldManager](../common-parameter/common-parameters#fieldmanager)
+
+- **fieldValidation** (*in query*): string
+
+ [fieldValidation](../common-parameter/common-parameters#fieldvalidation)
+
+- **pretty** (*in query*): string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+#### Response
+
+200 ([FederatedHPA](../auto-scaling-resources/federated-hpa-v1alpha1#federatedhpa)): OK
+
+201 ([FederatedHPA](../auto-scaling-resources/federated-hpa-v1alpha1#federatedhpa)): Created
+
+### `update` replace status of the specified FederatedHPA
+
+#### HTTP Request
+
+PUT /apis/autoscaling.karmada.io/v1alpha1/namespaces/{namespace}/federatedhpas/{name}/status
+
+#### Parameters
+
+- **name** (*in path*): string, required
+
+ name of the FederatedHPA
+
+- **namespace** (*in path*): string, required
+
+ [namespace](../common-parameter/common-parameters#namespace)
+
+- **body**: [FederatedHPA](../auto-scaling-resources/federated-hpa-v1alpha1#federatedhpa), required
+
+
+
+- **dryRun** (*in query*): string
+
+ [dryRun](../common-parameter/common-parameters#dryrun)
+
+- **fieldManager** (*in query*): string
+
+ [fieldManager](../common-parameter/common-parameters#fieldmanager)
+
+- **fieldValidation** (*in query*): string
+
+ [fieldValidation](../common-parameter/common-parameters#fieldvalidation)
+
+- **pretty** (*in query*): string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+#### Response
+
+200 ([FederatedHPA](../auto-scaling-resources/federated-hpa-v1alpha1#federatedhpa)): OK
+
+201 ([FederatedHPA](../auto-scaling-resources/federated-hpa-v1alpha1#federatedhpa)): Created
+
+### `patch` partially update the specified FederatedHPA
+
+#### HTTP Request
+
+PATCH /apis/autoscaling.karmada.io/v1alpha1/namespaces/{namespace}/federatedhpas/{name}
+
+#### Parameters
+
+- **name** (*in path*): string, required
+
+ name of the FederatedHPA
+
+- **namespace** (*in path*): string, required
+
+ [namespace](../common-parameter/common-parameters#namespace)
+
+- **body**: [Patch](../common-definitions/patch#patch), required
+
+
+
+- **dryRun** (*in query*): string
+
+ [dryRun](../common-parameter/common-parameters#dryrun)
+
+- **fieldManager** (*in query*): string
+
+ [fieldManager](../common-parameter/common-parameters#fieldmanager)
+
+- **fieldValidation** (*in query*): string
+
+ [fieldValidation](../common-parameter/common-parameters#fieldvalidation)
+
+- **force** (*in query*): boolean
+
+ [force](../common-parameter/common-parameters#force)
+
+- **pretty** (*in query*): string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+#### Response
+
+200 ([FederatedHPA](../auto-scaling-resources/federated-hpa-v1alpha1#federatedhpa)): OK
+
+201 ([FederatedHPA](../auto-scaling-resources/federated-hpa-v1alpha1#federatedhpa)): Created
+
+### `patch` partially update status of the specified FederatedHPA
+
+#### HTTP Request
+
+PATCH /apis/autoscaling.karmada.io/v1alpha1/namespaces/{namespace}/federatedhpas/{name}/status
+
+#### Parameters
+
+- **name** (*in path*): string, required
+
+ name of the FederatedHPA
+
+- **namespace** (*in path*): string, required
+
+ [namespace](../common-parameter/common-parameters#namespace)
+
+- **body**: [Patch](../common-definitions/patch#patch), required
+
+
+
+- **dryRun** (*in query*): string
+
+ [dryRun](../common-parameter/common-parameters#dryrun)
+
+- **fieldManager** (*in query*): string
+
+ [fieldManager](../common-parameter/common-parameters#fieldmanager)
+
+- **fieldValidation** (*in query*): string
+
+ [fieldValidation](../common-parameter/common-parameters#fieldvalidation)
+
+- **force** (*in query*): boolean
+
+ [force](../common-parameter/common-parameters#force)
+
+- **pretty** (*in query*): string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+#### Response
+
+200 ([FederatedHPA](../auto-scaling-resources/federated-hpa-v1alpha1#federatedhpa)): OK
+
+201 ([FederatedHPA](../auto-scaling-resources/federated-hpa-v1alpha1#federatedhpa)): Created
+
+### `delete` delete a FederatedHPA
+
+#### HTTP Request
+
+DELETE /apis/autoscaling.karmada.io/v1alpha1/namespaces/{namespace}/federatedhpas/{name}
+
+#### Parameters
+
+- **name** (*in path*): string, required
+
+ name of the FederatedHPA
+
+- **namespace** (*in path*): string, required
+
+ [namespace](../common-parameter/common-parameters#namespace)
+
+- **body**: [DeleteOptions](../common-definitions/delete-options#deleteoptions)
+
+
+
+- **dryRun** (*in query*): string
+
+ [dryRun](../common-parameter/common-parameters#dryrun)
+
+- **gracePeriodSeconds** (*in query*): integer
+
+ [gracePeriodSeconds](../common-parameter/common-parameters#graceperiodseconds)
+
+- **pretty** (*in query*): string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+- **propagationPolicy** (*in query*): string
+
+ [propagationPolicy](../common-parameter/common-parameters#propagationpolicy)
+
+#### Response
+
+200 ([Status](../common-definitions/status#status)): OK
+
+202 ([Status](../common-definitions/status#status)): Accepted
+
+### `deletecollection` delete collection of FederatedHPA
+
+#### HTTP Request
+
+DELETE /apis/autoscaling.karmada.io/v1alpha1/namespaces/{namespace}/federatedhpas
+
+#### Parameters
+
+- **namespace** (*in path*): string, required
+
+ [namespace](../common-parameter/common-parameters#namespace)
+
+- **body**: [DeleteOptions](../common-definitions/delete-options#deleteoptions)
+
+
+
+- **continue** (*in query*): string
+
+ [continue](../common-parameter/common-parameters#continue)
+
+- **dryRun** (*in query*): string
+
+ [dryRun](../common-parameter/common-parameters#dryrun)
+
+- **fieldSelector** (*in query*): string
+
+ [fieldSelector](../common-parameter/common-parameters#fieldselector)
+
+- **gracePeriodSeconds** (*in query*): integer
+
+ [gracePeriodSeconds](../common-parameter/common-parameters#graceperiodseconds)
+
+- **labelSelector** (*in query*): string
+
+ [labelSelector](../common-parameter/common-parameters#labelselector)
+
+- **limit** (*in query*): integer
+
+ [limit](../common-parameter/common-parameters#limit)
+
+- **pretty** (*in query*): string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+- **propagationPolicy** (*in query*): string
+
+ [propagationPolicy](../common-parameter/common-parameters#propagationpolicy)
+
+- **resourceVersion** (*in query*): string
+
+ [resourceVersion](../common-parameter/common-parameters#resourceversion)
+
+- **resourceVersionMatch** (*in query*): string
+
+ [resourceVersionMatch](../common-parameter/common-parameters#resourceversionmatch)
+
+- **sendInitialEvents** (*in query*): boolean
+
+ [sendInitialEvents](../common-parameter/common-parameters#sendinitialevents)
+
+- **timeoutSeconds** (*in query*): integer
+
+ [timeoutSeconds](../common-parameter/common-parameters#timeoutseconds)
+
+#### Response
+
+200 ([Status](../common-definitions/status#status)): OK
+
diff --git a/versioned_docs/version-v1.9/reference/karmada-api/cluster-resources/cluster-v1alpha1.md b/versioned_docs/version-v1.9/reference/karmada-api/cluster-resources/cluster-v1alpha1.md
new file mode 100644
index 000000000..c3d60f682
--- /dev/null
+++ b/versioned_docs/version-v1.9/reference/karmada-api/cluster-resources/cluster-v1alpha1.md
@@ -0,0 +1,774 @@
+---
+api_metadata:
+ apiVersion: "cluster.karmada.io/v1alpha1"
+ import: "github.com/karmada-io/karmada/pkg/apis/cluster/v1alpha1"
+ kind: "Cluster"
+content_type: "api_reference"
+description: "Cluster represents the desire state and status of a member cluster."
+title: "Cluster v1alpha1"
+weight: 1
+auto_generated: true
+---
+
+[//]: # (The file is auto-generated from the Go source code of the component using a generic generator,)
+[//]: # (which is forked from [reference-docs](https://github.com/kubernetes-sigs/reference-docs).)
+[//]: # (To update the reference content, please follow the `reference-api.sh`.)
+
+`apiVersion: cluster.karmada.io/v1alpha1`
+
+`import "github.com/karmada-io/karmada/pkg/apis/cluster/v1alpha1"`
+
+## Cluster
+
+Cluster represents the desired state and status of a member cluster.
+
+
+
+- **apiVersion**: cluster.karmada.io/v1alpha1
+
+- **kind**: Cluster
+
+- **metadata** ([ObjectMeta](../common-definitions/object-meta#objectmeta))
+
+- **spec** ([ClusterSpec](../cluster-resources/cluster-v1alpha1#clusterspec)), required
+
+  Spec represents the specification of the desired behavior of the member cluster.
+
+- **status** ([ClusterStatus](../cluster-resources/cluster-v1alpha1#clusterstatus))
+
+  Status represents the status of the member cluster.
+
+## ClusterSpec
+
+ClusterSpec defines the desired state of a member cluster.
+
+
+
+- **syncMode** (string), required
+
+  SyncMode describes how a cluster syncs resources from the karmada control plane.
+
+- **apiEndpoint** (string)
+
+ The API endpoint of the member cluster. This can be a hostname, hostname:port, IP or IP:port.
+
+- **id** (string)
+
+  ID is the unique identifier for the cluster. It is different from the object uid (.metadata.uid) and is typically collected automatically from the member cluster during the process of registration.
+
+ The value is collected in order: 1. If the registering cluster enabled ClusterProperty API and defined the cluster ID by
+ creating a ClusterProperty object with name 'cluster.clusterset.k8s.io', Karmada would
+ take the defined value in the ClusterProperty object.
+ See https://github.com/kubernetes-sigs/about-api for more details about ClusterProperty API.
+ 2. Take the uid of 'kube-system' namespace on the registering cluster.
+
+  Please don't update this value unless you know what you are doing, because it will/may be used to: - uniquely identify the clusters within the Karmada system. - compose the DNS name of multi-cluster services.
+
+- **impersonatorSecretRef** (LocalSecretReference)
+
+  ImpersonatorSecretRef represents the secret that contains the token of the impersonator. The secret should hold credentials as follows: - secret.data.token
+
+
+
+ *LocalSecretReference is a reference to a secret within the enclosing namespace.*
+
+ - **impersonatorSecretRef.name** (string), required
+
+ Name is the name of resource being referenced.
+
+ - **impersonatorSecretRef.namespace** (string), required
+
+ Namespace is the namespace for the resource being referenced.
+
+- **insecureSkipTLSVerification** (boolean)
+
+ InsecureSkipTLSVerification indicates that the karmada control plane should not confirm the validity of the serving certificate of the cluster it is connecting to. This will make the HTTPS connection between the karmada control plane and the member cluster insecure. Defaults to false.
+
+- **provider** (string)
+
+ Provider represents the cloud provider name of the member cluster.
+
+- **proxyHeader** (map[string]string)
+
+  ProxyHeader is the HTTP header required by the proxy server. The key in the key-value pair is the HTTP header key and the value is the associated header payload. For a header with multiple values, the values should be separated by a comma (e.g. 'k1': 'v1,v2,v3').
+
+- **proxyURL** (string)
+
+  ProxyURL is the proxy URL for the cluster. If not empty, the karmada control plane will use this proxy to talk to the cluster. For more details, please refer to: https://github.com/kubernetes/client-go/issues/351
+
+- **region** (string)
+
+  Region represents the region the member cluster is located in.
+
+- **resourceModels** ([]ResourceModel)
+
+  ResourceModels is the list of resource modelings in this cluster. Each modeling quota can be customized by the user. The modeling name must be one of the following: cpu, memory, storage, ephemeral-storage. If the user does not define the modeling name and modeling quota, the default model is used. The default model grades range from 0 to 8. When grade = 0 or grade = 1, the default model's cpu quota and memory quota are fixed values. When grade is greater than or equal to 2, each default model's cpu quota is [2^(grade-1), 2^grade), 2 <= grade <= 7, and each default model's memory quota is [2^(grade + 2), 2^(grade + 3)), 2 <= grade <= 7. E.g. grade 0 looks like this: - grade: 0
+ ranges:
+ - name: "cpu"
+ min: 0 C
+ max: 1 C
+ - name: "memory"
+ min: 0 GB
+ max: 4 GB
+
+ - grade: 1
+ ranges:
+ - name: "cpu"
+ min: 1 C
+ max: 2 C
+ - name: "memory"
+ min: 4 GB
+ max: 16 GB
+
+ - grade: 2
+ ranges:
+ - name: "cpu"
+ min: 2 C
+ max: 4 C
+ - name: "memory"
+ min: 16 GB
+ max: 32 GB
+
+ - grade: 7
+ range:
+ - name: "cpu"
+ min: 64 C
+ max: 128 C
+ - name: "memory"
+ min: 512 GB
+ max: 1024 GB
+
+  Grade 8, the last one, looks like below. No matter what Max value you pass, the meaning of the Max value in this grade is infinite. You can pass any number greater than the Min value. - grade: 8
+ range:
+ - name: "cpu"
+ min: 128 C
+ max: MAXINT
+ - name: "memory"
+ min: 1024 GB
+ max: MAXINT
+
+
+
+  *ResourceModel describes the resource modeling for which you want to collect statistics.*
+
+ - **resourceModels.grade** (int32), required
+
+ Grade is the index for the resource modeling.
+
+ - **resourceModels.ranges** ([]ResourceModelRange), required
+
+ Ranges describes the resource quota ranges.
+
+
+
+    *ResourceModelRange describes the detail of each modeling quota that ranges from min to max. Please pay attention: by default, the value of min is inclusive and the value of max is exclusive. E.g. if min = 2 and max = 10 are set for an interval, this means the interval [2,10). This rule ensures that all intervals have the same meaning. If the last interval is infinite, it is definitely unreachable; therefore, we define the right endpoint as open. For a valid interval, the value on the right is greater than the value on the left, in other words, max must be greater than min. It is strongly recommended that the [Min, Max) of all ResourceModelRanges make a continuous interval.*
+
+ - **resourceModels.ranges.max** ([Quantity](../common-definitions/quantity#quantity)), required
+
+      Max is the maximum amount of this resource represented by the resource name. Special instructions: for the last ResourceModelRange, no matter what Max value you pass, the meaning is infinite, because any quota larger than Min will be classified into the last one. Of course, the value of the Max field must always be greater than the value of the Min field.
+
+ - **resourceModels.ranges.min** ([Quantity](../common-definitions/quantity#quantity)), required
+
+      Min is the minimum amount of this resource represented by the resource name. Note: The Min value of the first grade (usually 0) always acts as zero. E.g. [1,2) is equal to [0,2).
+
+ - **resourceModels.ranges.name** (string), required
+
+ Name is the name for the resource that you want to categorize.
+
+- **secretRef** (LocalSecretReference)
+
+  SecretRef represents the secret that contains mandatory credentials to access the member cluster. The secret should hold credentials as follows: - secret.data.token - secret.data.caBundle
+
+
+
+ *LocalSecretReference is a reference to a secret within the enclosing namespace.*
+
+ - **secretRef.name** (string), required
+
+ Name is the name of resource being referenced.
+
+ - **secretRef.namespace** (string), required
+
+ Namespace is the namespace for the resource being referenced.
+
+- **taints** ([]Taint)
+
+ Taints attached to the member cluster. Taints on the cluster have the "effect" on any resource that does not tolerate the Taint.
+
+
+
+ *The node this Taint is attached to has the "effect" on any pod that does not tolerate the Taint.*
+
+ - **taints.effect** (string), required
+
+ Required. The effect of the taint on pods that do not tolerate the taint. Valid effects are NoSchedule, PreferNoSchedule and NoExecute.
+
+ Possible enum values:
+ - `"NoExecute"` Evict any already-running pods that do not tolerate the taint. Currently enforced by NodeController.
+ - `"NoSchedule"` Do not allow new pods to schedule onto the node unless they tolerate the taint, but allow all pods submitted to Kubelet without going through the scheduler to start, and allow all already-running pods to continue running. Enforced by the scheduler.
+ - `"PreferNoSchedule"` Like TaintEffectNoSchedule, but the scheduler tries not to schedule new pods onto the node, rather than prohibiting new pods from scheduling onto the node entirely. Enforced by the scheduler.
+
+ - **taints.key** (string), required
+
+ Required. The taint key to be applied to a node.
+
+ - **taints.timeAdded** (Time)
+
+ TimeAdded represents the time at which the taint was added. It is only written for NoExecute taints.
+
+
+
+ *Time is a wrapper around time.Time which supports correct marshaling to YAML and JSON. Wrappers are provided for many of the factory methods that the time package offers.*
+
+ - **taints.value** (string)
+
+ The taint value corresponding to the taint key.
+
+- **zone** (string)
+
+  Zone represents the zone the member cluster is located in. Deprecated: This field has never been used by Karmada, and it will not be removed from v1alpha1 for backward compatibility; use Zones instead.
+
+- **zones** ([]string)
+
+  Zones represents the failure zones (also called availability zones) of the member cluster. The zones are presented as a slice to support the case that a cluster runs across multiple failure zones. Refer to https://kubernetes.io/docs/setup/best-practices/multiple-zones/ for more details about running Kubernetes in multiple zones.
+
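+For illustration, a Cluster object that combines the spec fields above might look like the following sketch. The cluster name, API endpoint, and secret reference are hypothetical placeholders; `syncMode` (commonly `Push` or `Pull`) is the only required field, and `resourceModels` overrides the default grades described above.
+
+```yaml
+# A minimal sketch of a Cluster manifest; all names and addresses are hypothetical.
+apiVersion: cluster.karmada.io/v1alpha1
+kind: Cluster
+metadata:
+  name: member1
+spec:
+  syncMode: Push                 # required: how the cluster syncs resources from the control plane
+  apiEndpoint: https://172.18.0.3:6443
+  secretRef:                     # credentials used to access the member cluster
+    namespace: karmada-cluster
+    name: member1
+  resourceModels:                # optional: custom resource modeling grades
+  - grade: 0
+    ranges:
+    - name: cpu
+      min: "0"
+      max: "1"
+    - name: memory
+      min: "0"
+      max: 4Gi
+  - grade: 1
+    ranges:
+    - name: cpu
+      min: "1"
+      max: "2"
+    - name: memory
+      min: 4Gi
+      max: 16Gi
+```
+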
+## ClusterStatus
+
+ClusterStatus contains information about the current status of a cluster, updated periodically by the cluster controller.
+
+
+
+- **apiEnablements** ([]APIEnablement)
+
+ APIEnablements represents the list of APIs installed in the member cluster.
+
+
+
+ *APIEnablement is a list of API resource, it is used to expose the name of the resources supported in a specific group and version.*
+
+ - **apiEnablements.groupVersion** (string), required
+
+ GroupVersion is the group and version this APIEnablement is for.
+
+ - **apiEnablements.resources** ([]APIResource)
+
+ Resources is a list of APIResource.
+
+
+
+ *APIResource specifies the name and kind names for the resource.*
+
+ - **apiEnablements.resources.kind** (string), required
+
+ Kind is the kind for the resource (e.g. 'Deployment' is the kind for resource 'deployments')
+
+ - **apiEnablements.resources.name** (string), required
+
+ Name is the plural name of the resource.
+
+- **conditions** ([]Condition)
+
+ Conditions is an array of current cluster conditions.
+
+
+
+ *Condition contains details for one aspect of the current state of this API Resource.*
+
+ - **conditions.lastTransitionTime** (Time), required
+
+ lastTransitionTime is the last time the condition transitioned from one status to another. This should be when the underlying condition changed. If that is not known, then using the time when the API field changed is acceptable.
+
+
+
+ *Time is a wrapper around time.Time which supports correct marshaling to YAML and JSON. Wrappers are provided for many of the factory methods that the time package offers.*
+
+ - **conditions.message** (string), required
+
+ message is a human readable message indicating details about the transition. This may be an empty string.
+
+ - **conditions.reason** (string), required
+
+ reason contains a programmatic identifier indicating the reason for the condition's last transition. Producers of specific condition types may define expected values and meanings for this field, and whether the values are considered a guaranteed API. The value should be a CamelCase string. This field may not be empty.
+
+ - **conditions.status** (string), required
+
+ status of the condition, one of True, False, Unknown.
+
+ - **conditions.type** (string), required
+
+ type of condition in CamelCase or in foo.example.com/CamelCase.
+
+ - **conditions.observedGeneration** (int64)
+
+ observedGeneration represents the .metadata.generation that the condition was set based upon. For instance, if .metadata.generation is currently 12, but the .status.conditions[x].observedGeneration is 9, the condition is out of date with respect to the current state of the instance.
+
+- **kubernetesVersion** (string)
+
+  KubernetesVersion represents the Kubernetes version of the member cluster.
+
+- **nodeSummary** (NodeSummary)
+
+ NodeSummary represents the summary of nodes status in the member cluster.
+
+
+
+ *NodeSummary represents the summary of nodes status in a specific cluster.*
+
+ - **nodeSummary.readyNum** (int32)
+
+ ReadyNum is the number of ready nodes in the cluster.
+
+ - **nodeSummary.totalNum** (int32)
+
+ TotalNum is the total number of nodes in the cluster.
+
+- **resourceSummary** (ResourceSummary)
+
+ ResourceSummary represents the summary of resources in the member cluster.
+
+
+
+ *ResourceSummary represents the summary of resources in the member cluster.*
+
+ - **resourceSummary.allocatable** (map[string][Quantity](../common-definitions/quantity#quantity))
+
+ Allocatable represents the resources of a cluster that are available for scheduling. Total amount of allocatable resources on all nodes.
+
+ - **resourceSummary.allocatableModelings** ([]AllocatableModeling)
+
+ AllocatableModelings represents the statistical resource modeling.
+
+
+
+    *AllocatableModeling represents the number of nodes whose allocatable resources fall into a specific resource model grade. E.g. AllocatableModeling[Grade: 2, Count: 10] means 10 nodes belong to the resource model in grade 2.*
+
+ - **resourceSummary.allocatableModelings.count** (int32), required
+
+ Count is the number of nodes that own the resources delineated by this modeling.
+
+ - **resourceSummary.allocatableModelings.grade** (int32), required
+
+ Grade is the index of ResourceModel.
+
+ - **resourceSummary.allocated** (map[string][Quantity](../common-definitions/quantity#quantity))
+
+ Allocated represents the resources of a cluster that have been scheduled. Total amount of required resources of all Pods that have been scheduled to nodes.
+
+ - **resourceSummary.allocating** (map[string][Quantity](../common-definitions/quantity#quantity))
+
+ Allocating represents the resources of a cluster that are pending for scheduling. Total amount of required resources of all Pods that are waiting for scheduling.
+
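+As a rough sketch, a status reported by the cluster controller that combines the fields above might look like the following; all values are illustrative and not taken from a real cluster:
+
+```yaml
+status:
+  kubernetesVersion: v1.28.4           # version reported by the member cluster
+  apiEnablements:
+    - groupVersion: apps/v1
+      resources:
+        - kind: Deployment
+          name: deployments
+  conditions:
+    - type: Ready
+      status: "True"
+      reason: ClusterReady             # illustrative reason string
+      message: cluster is healthy and ready to accept workloads
+      lastTransitionTime: "2024-01-01T00:00:00Z"
+  nodeSummary:
+    readyNum: 3
+    totalNum: 3
+  resourceSummary:
+    allocatable:
+      cpu: "12"
+      memory: 24Gi
+    allocated:
+      cpu: "4"
+      memory: 8Gi
+```
+
+Note that this block is written by the cluster controller; clients normally read it rather than set it directly.
+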
+## ClusterList
+
+ClusterList contains a list of member clusters.
+
+
+
+- **apiVersion**: cluster.karmada.io/v1alpha1
+
+- **kind**: ClusterList
+
+- **metadata** ([ListMeta](../common-definitions/list-meta#listmeta))
+
+- **items** ([][Cluster](../cluster-resources/cluster-v1alpha1#cluster)), required
+
+ Items holds a list of Cluster.
+
+## Operations
+
+
+
+### `get` read the specified Cluster
+
+#### HTTP Request
+
+GET /apis/cluster.karmada.io/v1alpha1/clusters/{name}
+
+#### Parameters
+
+- **name** (*in path*): string, required
+
+ name of the Cluster
+
+- **pretty** (*in query*): string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+#### Response
+
+200 ([Cluster](../cluster-resources/cluster-v1alpha1#cluster)): OK
+
+### `get` read status of the specified Cluster
+
+#### HTTP Request
+
+GET /apis/cluster.karmada.io/v1alpha1/clusters/{name}/status
+
+#### Parameters
+
+- **name** (*in path*): string, required
+
+ name of the Cluster
+
+- **pretty** (*in query*): string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+#### Response
+
+200 ([Cluster](../cluster-resources/cluster-v1alpha1#cluster)): OK
+
+### `list` list or watch objects of kind Cluster
+
+#### HTTP Request
+
+GET /apis/cluster.karmada.io/v1alpha1/clusters
+
+#### Parameters
+
+- **allowWatchBookmarks** (*in query*): boolean
+
+ [allowWatchBookmarks](../common-parameter/common-parameters#allowwatchbookmarks)
+
+- **continue** (*in query*): string
+
+ [continue](../common-parameter/common-parameters#continue)
+
+- **fieldSelector** (*in query*): string
+
+ [fieldSelector](../common-parameter/common-parameters#fieldselector)
+
+- **labelSelector** (*in query*): string
+
+ [labelSelector](../common-parameter/common-parameters#labelselector)
+
+- **limit** (*in query*): integer
+
+ [limit](../common-parameter/common-parameters#limit)
+
+- **pretty** (*in query*): string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+- **resourceVersion** (*in query*): string
+
+ [resourceVersion](../common-parameter/common-parameters#resourceversion)
+
+- **resourceVersionMatch** (*in query*): string
+
+ [resourceVersionMatch](../common-parameter/common-parameters#resourceversionmatch)
+
+- **sendInitialEvents** (*in query*): boolean
+
+ [sendInitialEvents](../common-parameter/common-parameters#sendinitialevents)
+
+- **timeoutSeconds** (*in query*): integer
+
+ [timeoutSeconds](../common-parameter/common-parameters#timeoutseconds)
+
+- **watch** (*in query*): boolean
+
+ [watch](../common-parameter/common-parameters#watch)
+
+#### Response
+
+200 ([ClusterList](../cluster-resources/cluster-v1alpha1#clusterlist)): OK
+
+### `create` create a Cluster
+
+#### HTTP Request
+
+POST /apis/cluster.karmada.io/v1alpha1/clusters
+
+#### Parameters
+
+- **body**: [Cluster](../cluster-resources/cluster-v1alpha1#cluster), required
+
+
+
+- **dryRun** (*in query*): string
+
+ [dryRun](../common-parameter/common-parameters#dryrun)
+
+- **fieldManager** (*in query*): string
+
+ [fieldManager](../common-parameter/common-parameters#fieldmanager)
+
+- **fieldValidation** (*in query*): string
+
+ [fieldValidation](../common-parameter/common-parameters#fieldvalidation)
+
+- **pretty** (*in query*): string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+#### Response
+
+200 ([Cluster](../cluster-resources/cluster-v1alpha1#cluster)): OK
+
+201 ([Cluster](../cluster-resources/cluster-v1alpha1#cluster)): Created
+
+202 ([Cluster](../cluster-resources/cluster-v1alpha1#cluster)): Accepted
+
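+Member clusters are usually registered with `karmadactl join` (push mode) or `karmadactl register` (pull mode), which create this object on your behalf. If you do POST a Cluster directly, the request body is an ordinary Cluster manifest; the sketch below uses a hypothetical name and endpoint:
+
+```yaml
+apiVersion: cluster.karmada.io/v1alpha1
+kind: Cluster
+metadata:
+  name: member2                        # hypothetical cluster name
+spec:
+  syncMode: Pull                       # the karmada-agent in the member cluster pulls work from the control plane
+  apiEndpoint: https://10.0.0.2:6443   # hypothetical endpoint
+```
+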
+### `update` replace the specified Cluster
+
+#### HTTP Request
+
+PUT /apis/cluster.karmada.io/v1alpha1/clusters/{name}
+
+#### Parameters
+
+- **name** (*in path*): string, required
+
+ name of the Cluster
+
+- **body**: [Cluster](../cluster-resources/cluster-v1alpha1#cluster), required
+
+
+
+- **dryRun** (*in query*): string
+
+ [dryRun](../common-parameter/common-parameters#dryrun)
+
+- **fieldManager** (*in query*): string
+
+ [fieldManager](../common-parameter/common-parameters#fieldmanager)
+
+- **fieldValidation** (*in query*): string
+
+ [fieldValidation](../common-parameter/common-parameters#fieldvalidation)
+
+- **pretty** (*in query*): string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+#### Response
+
+200 ([Cluster](../cluster-resources/cluster-v1alpha1#cluster)): OK
+
+201 ([Cluster](../cluster-resources/cluster-v1alpha1#cluster)): Created
+
+### `update` replace status of the specified Cluster
+
+#### HTTP Request
+
+PUT /apis/cluster.karmada.io/v1alpha1/clusters/{name}/status
+
+#### Parameters
+
+- **name** (*in path*): string, required
+
+ name of the Cluster
+
+- **body**: [Cluster](../cluster-resources/cluster-v1alpha1#cluster), required
+
+
+
+- **dryRun** (*in query*): string
+
+ [dryRun](../common-parameter/common-parameters#dryrun)
+
+- **fieldManager** (*in query*): string
+
+ [fieldManager](../common-parameter/common-parameters#fieldmanager)
+
+- **fieldValidation** (*in query*): string
+
+ [fieldValidation](../common-parameter/common-parameters#fieldvalidation)
+
+- **pretty** (*in query*): string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+#### Response
+
+200 ([Cluster](../cluster-resources/cluster-v1alpha1#cluster)): OK
+
+201 ([Cluster](../cluster-resources/cluster-v1alpha1#cluster)): Created
+
+### `patch` partially update the specified Cluster
+
+#### HTTP Request
+
+PATCH /apis/cluster.karmada.io/v1alpha1/clusters/{name}
+
+#### Parameters
+
+- **name** (*in path*): string, required
+
+ name of the Cluster
+
+- **body**: [Patch](../common-definitions/patch#patch), required
+
+
+
+- **dryRun** (*in query*): string
+
+ [dryRun](../common-parameter/common-parameters#dryrun)
+
+- **fieldManager** (*in query*): string
+
+ [fieldManager](../common-parameter/common-parameters#fieldmanager)
+
+- **fieldValidation** (*in query*): string
+
+ [fieldValidation](../common-parameter/common-parameters#fieldvalidation)
+
+- **force** (*in query*): boolean
+
+ [force](../common-parameter/common-parameters#force)
+
+- **pretty** (*in query*): string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+#### Response
+
+200 ([Cluster](../cluster-resources/cluster-v1alpha1#cluster)): OK
+
+201 ([Cluster](../cluster-resources/cluster-v1alpha1#cluster)): Created
+
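+For illustration, a JSON merge patch for this endpoint can be authored as the following YAML (the taint key is hypothetical); it could be applied with, for example, `kubectl patch cluster member1 --type merge --patch-file taint-patch.yaml` against the karmada-apiserver:
+
+```yaml
+# taint-patch.yaml: merge-patch body that replaces spec.taints on the cluster
+spec:
+  taints:
+    - key: maintenance                 # hypothetical taint key
+      effect: NoSchedule
+```
+
+With a merge patch the whole `taints` list is replaced rather than merged element by element, so include every taint that should remain.
+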
+### `patch` partially update status of the specified Cluster
+
+#### HTTP Request
+
+PATCH /apis/cluster.karmada.io/v1alpha1/clusters/{name}/status
+
+#### Parameters
+
+- **name** (*in path*): string, required
+
+ name of the Cluster
+
+- **body**: [Patch](../common-definitions/patch#patch), required
+
+
+
+- **dryRun** (*in query*): string
+
+ [dryRun](../common-parameter/common-parameters#dryrun)
+
+- **fieldManager** (*in query*): string
+
+ [fieldManager](../common-parameter/common-parameters#fieldmanager)
+
+- **fieldValidation** (*in query*): string
+
+ [fieldValidation](../common-parameter/common-parameters#fieldvalidation)
+
+- **force** (*in query*): boolean
+
+ [force](../common-parameter/common-parameters#force)
+
+- **pretty** (*in query*): string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+#### Response
+
+200 ([Cluster](../cluster-resources/cluster-v1alpha1#cluster)): OK
+
+201 ([Cluster](../cluster-resources/cluster-v1alpha1#cluster)): Created
+
+### `delete` delete a Cluster
+
+#### HTTP Request
+
+DELETE /apis/cluster.karmada.io/v1alpha1/clusters/{name}
+
+#### Parameters
+
+- **name** (*in path*): string, required
+
+ name of the Cluster
+
+- **body**: [DeleteOptions](../common-definitions/delete-options#deleteoptions)
+
+
+
+- **dryRun** (*in query*): string
+
+ [dryRun](../common-parameter/common-parameters#dryrun)
+
+- **gracePeriodSeconds** (*in query*): integer
+
+ [gracePeriodSeconds](../common-parameter/common-parameters#graceperiodseconds)
+
+- **pretty** (*in query*): string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+- **propagationPolicy** (*in query*): string
+
+ [propagationPolicy](../common-parameter/common-parameters#propagationpolicy)
+
+#### Response
+
+200 ([Status](../common-definitions/status#status)): OK
+
+202 ([Status](../common-definitions/status#status)): Accepted
+
+### `deletecollection` delete collection of Cluster
+
+#### HTTP Request
+
+DELETE /apis/cluster.karmada.io/v1alpha1/clusters
+
+#### Parameters
+
+- **body**: [DeleteOptions](../common-definitions/delete-options#deleteoptions)
+
+
+
+- **continue** (*in query*): string
+
+ [continue](../common-parameter/common-parameters#continue)
+
+- **dryRun** (*in query*): string
+
+ [dryRun](../common-parameter/common-parameters#dryrun)
+
+- **fieldSelector** (*in query*): string
+
+ [fieldSelector](../common-parameter/common-parameters#fieldselector)
+
+- **gracePeriodSeconds** (*in query*): integer
+
+ [gracePeriodSeconds](../common-parameter/common-parameters#graceperiodseconds)
+
+- **labelSelector** (*in query*): string
+
+ [labelSelector](../common-parameter/common-parameters#labelselector)
+
+- **limit** (*in query*): integer
+
+ [limit](../common-parameter/common-parameters#limit)
+
+- **pretty** (*in query*): string
+
+ [pretty](../common-parameter/common-parameters#pretty)
+
+- **propagationPolicy** (*in query*): string
+
+ [propagationPolicy](../common-parameter/common-parameters#propagationpolicy)
+
+- **resourceVersion** (*in query*): string
+
+ [resourceVersion](../common-parameter/common-parameters#resourceversion)
+
+- **resourceVersionMatch** (*in query*): string
+
+ [resourceVersionMatch](../common-parameter/common-parameters#resourceversionmatch)
+
+- **sendInitialEvents** (*in query*): boolean
+
+ [sendInitialEvents](../common-parameter/common-parameters#sendinitialevents)
+
+- **timeoutSeconds** (*in query*): integer
+
+ [timeoutSeconds](../common-parameter/common-parameters#timeoutseconds)
+
+#### Response
+
+200 ([Status](../common-definitions/status#status)): OK
+
diff --git a/versioned_docs/version-v1.9/reference/karmada-api/common-definitions/delete-options.md b/versioned_docs/version-v1.9/reference/karmada-api/common-definitions/delete-options.md
new file mode 100644
index 000000000..50c9a6a5d
--- /dev/null
+++ b/versioned_docs/version-v1.9/reference/karmada-api/common-definitions/delete-options.md
@@ -0,0 +1,62 @@
+---
+api_metadata:
+ apiVersion: ""
+ import: "k8s.io/apimachinery/pkg/apis/meta/v1"
+ kind: "DeleteOptions"
+content_type: "api_reference"
+description: "DeleteOptions may be provided when deleting an API object."
+title: "DeleteOptions"
+weight: 1
+auto_generated: true
+---
+
+[//]: # (The file is auto-generated from the Go source code of the component using a generic generator,)
+[//]: # (which is forked from [reference-docs](https://github.com/kubernetes-sigs/reference-docs).)
+[//]: # (To update the reference content, please follow the `reference-api.sh`.)
+
+`import "k8s.io/apimachinery/pkg/apis/meta/v1"`
+
+DeleteOptions may be provided when deleting an API object.
+
+
+
+- **apiVersion** (string)
+
+ APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
+
+- **dryRun** ([]string)
+
+ When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed
+
+- **gracePeriodSeconds** (int64)
+
+  The duration in seconds before the object should be deleted. Value must be a non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified.
+
+- **kind** (string)
+
+ Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
+
+- **orphanDependents** (boolean)
+
+ Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both.
+
+- **preconditions** (Preconditions)
+
+ Must be fulfilled before a deletion is carried out. If not possible, a 409 Conflict status will be returned.
+
+
+
+ *Preconditions must be fulfilled before an operation (update, delete, etc.) is carried out.*
+
+ - **preconditions.resourceVersion** (string)
+
+ Specifies the target ResourceVersion
+
+ - **preconditions.uid** (string)
+
+ Specifies the target UID.
+
+- **propagationPolicy** (string)
+
+ Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground.
+
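+As a small sketch, these options are normally carried as a JSON request body; expressed in YAML for readability, a foreground cascading delete guarded by a precondition might look like this (the resourceVersion is hypothetical):
+
+```yaml
+gracePeriodSeconds: 0
+propagationPolicy: Foreground          # wait for dependents to be deleted first
+preconditions:
+  resourceVersion: "123456"            # hypothetical: fail with 409 Conflict if the object has changed
+```
+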
diff --git a/versioned_docs/version-v1.9/reference/karmada-api/common-definitions/label-selector.md b/versioned_docs/version-v1.9/reference/karmada-api/common-definitions/label-selector.md
new file mode 100644
index 000000000..2063137d0
--- /dev/null
+++ b/versioned_docs/version-v1.9/reference/karmada-api/common-definitions/label-selector.md
@@ -0,0 +1,48 @@
+---
+api_metadata:
+ apiVersion: ""
+ import: "k8s.io/apimachinery/pkg/apis/meta/v1"
+ kind: "LabelSelector"
+content_type: "api_reference"
+description: "A label selector is a label query over a set of resources."
+title: "LabelSelector"
+weight: 2
+auto_generated: true
+---
+
+[//]: # (The file is auto-generated from the Go source code of the component using a generic generator,)
+[//]: # (which is forked from [reference-docs](https://github.com/kubernetes-sigs/reference-docs).)
+[//]: # (To update the reference content, please follow the `reference-api.sh`.)
+
+`import "k8s.io/apimachinery/pkg/apis/meta/v1"`
+
+A label selector is a label query over a set of resources. The result of matchLabels and matchExpressions are ANDed. An empty label selector matches all objects. A null label selector matches no objects.
+
+
+
+- **matchExpressions** ([]LabelSelectorRequirement)
+
+ matchExpressions is a list of label selector requirements. The requirements are ANDed.
+
+
+
+ *A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values.*
+
+ - **matchExpressions.key** (string), required
+
+ *Patch strategy: merge on key `key`*
+
+ key is the label key that the selector applies to.
+
+ - **matchExpressions.operator** (string), required
+
+ operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist.
+
+ - **matchExpressions.values** ([]string)
+
+ values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.
+
+- **matchLabels** (map[string]string)
+
+ matchLabels is a map of [key,value] pairs. A single [key,value] in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
+
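+A short sketch combining both forms follows; the label keys and values are hypothetical, and an object matches only if it satisfies every matchLabels entry and every matchExpressions requirement:
+
+```yaml
+matchLabels:
+  app: nginx                           # shorthand for: key "app", operator In, values ["nginx"]
+matchExpressions:
+  - key: environment
+    operator: In
+    values:
+      - staging
+      - production
+  - key: legacy
+    operator: DoesNotExist             # the "legacy" label must be absent
+```
+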
diff --git a/versioned_docs/version-v1.9/reference/karmada-api/common-definitions/list-meta.md b/versioned_docs/version-v1.9/reference/karmada-api/common-definitions/list-meta.md
new file mode 100644
index 000000000..1a4c08876
--- /dev/null
+++ b/versioned_docs/version-v1.9/reference/karmada-api/common-definitions/list-meta.md
@@ -0,0 +1,38 @@
+---
+api_metadata:
+ apiVersion: ""
+ import: "k8s.io/apimachinery/pkg/apis/meta/v1"
+ kind: "ListMeta"
+content_type: "api_reference"
+description: "ListMeta describes metadata that synthetic resources must have, including lists and various status objects."
+title: "ListMeta"
+weight: 3
+auto_generated: true
+---
+
+[//]: # (The file is auto-generated from the Go source code of the component using a generic generator,)
+[//]: # (which is forked from [reference-docs](https://github.com/kubernetes-sigs/reference-docs).)
+[//]: # (To update the reference content, please follow the `reference-api.sh`.)
+
+`import "k8s.io/apimachinery/pkg/apis/meta/v1"`
+
+ListMeta describes metadata that synthetic resources must have, including lists and various status objects. A resource may have only one of [ObjectMeta, ListMeta].
+
+
+
+- **continue** (string)
+
+ continue may be set if the user set a limit on the number of items returned, and indicates that the server has more data available. The value is opaque and may be used to issue another request to the endpoint that served this list to retrieve the next set of available objects. Continuing a consistent list may not be possible if the server configuration has changed or more than a few minutes have passed. The resourceVersion field returned when using this continue value will be identical to the value in the first response, unless you have received this token from an error message.
+
+- **remainingItemCount** (int64)
+
+ remainingItemCount is the number of subsequent items in the list which are not included in this list response. If the list request contained label or field selectors, then the number of remaining items is unknown and the field will be left unset and omitted during serialization. If the list is complete (either because it is not chunking or because this is the last chunk), then there are no more remaining items and this field will be left unset and omitted during serialization. Servers older than v1.15 do not set this field. The intended use of the remainingItemCount is *estimating* the size of a collection. Clients should not rely on the remainingItemCount to be set or to be exact.
+
+- **resourceVersion** (string)
+
+ String that identifies the server's internal version of this object that can be used by clients to determine when objects have changed. Value must be treated as opaque by clients and passed unmodified back to the server. Populated by the system. Read-only. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency
+
+- **selfLink** (string)
+
+ Deprecated: selfLink is a legacy read-only field that is no longer populated by the system.
+
diff --git a/versioned_docs/version-v1.9/reference/karmada-api/common-definitions/node-selector-requirement.md b/versioned_docs/version-v1.9/reference/karmada-api/common-definitions/node-selector-requirement.md
new file mode 100644
index 000000000..80cf3ccae
--- /dev/null
+++ b/versioned_docs/version-v1.9/reference/karmada-api/common-definitions/node-selector-requirement.md
@@ -0,0 +1,42 @@
+---
+api_metadata:
+ apiVersion: ""
+ import: "k8s.io/api/core/v1"
+ kind: "NodeSelectorRequirement"
+content_type: "api_reference"
+description: "A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values."
+title: "NodeSelectorRequirement"
+weight: 4
+auto_generated: true
+---
+
+[//]: # (The file is auto-generated from the Go source code of the component using a generic generator,)
+[//]: # (which is forked from [reference-docs](https://github.com/kubernetes-sigs/reference-docs).)
+[//]: # (To update the reference content, please follow the `reference-api.sh`.)
+
+`import "k8s.io/api/core/v1"`
+
+A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values.
+
+
+
+- **key** (string), required
+
+ The label key that the selector applies to.
+
+- **operator** (string), required
+
+  Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist, Gt, and Lt.
+
+ Possible enum values:
+ - `"DoesNotExist"`
+ - `"Exists"`
+ - `"Gt"`
+ - `"In"`
+ - `"Lt"`
+ - `"NotIn"`
+
+- **values** ([]string)
+
+ An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch.
+
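+A brief sketch of two requirements follows; the GPU label key is hypothetical, and note that Gt and Lt take exactly one value, interpreted as an integer:
+
+```yaml
+- key: kubernetes.io/arch
+  operator: In
+  values:
+    - amd64
+    - arm64
+- key: example.com/gpu-count           # hypothetical node label
+  operator: Gt
+  values:
+    - "0"                              # single value, interpreted as an integer
+```
+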
diff --git a/versioned_docs/version-v1.9/reference/karmada-api/common-definitions/object-meta.md b/versioned_docs/version-v1.9/reference/karmada-api/common-definitions/object-meta.md
new file mode 100644
index 000000000..7df5fabbb
--- /dev/null
+++ b/versioned_docs/version-v1.9/reference/karmada-api/common-definitions/object-meta.md
@@ -0,0 +1,178 @@
+---
+api_metadata:
+ apiVersion: ""
+ import: "k8s.io/apimachinery/pkg/apis/meta/v1"
+ kind: "ObjectMeta"
+content_type: "api_reference"
+description: "ObjectMeta is metadata that all persisted resources must have, which includes all objects users must create."
+title: "ObjectMeta"
+weight: 5
+auto_generated: true
+---
+
+[//]: # (The file is auto-generated from the Go source code of the component using a generic generator,)
+[//]: # (which is forked from [reference-docs](https://github.com/kubernetes-sigs/reference-docs).)
+[//]: # (To update the reference content, please follow the `reference-api.sh`.)
+
+`import "k8s.io/apimachinery/pkg/apis/meta/v1"`
+
+ObjectMeta is metadata that all persisted resources must have, which includes all objects users must create.
+
+
+
+- **annotations** (map[string]string)
+
+ Annotations is an unstructured key value map stored with a resource that may be set by external tools to store and retrieve arbitrary metadata. They are not queryable and should be preserved when modifying objects. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations
+
+- **creationTimestamp** (Time)
+
+ CreationTimestamp is a timestamp representing the server time when this object was created. It is not guaranteed to be set in happens-before order across separate operations. Clients may not set this value. It is represented in RFC3339 form and is in UTC.
+
+ Populated by the system. Read-only. Null for lists. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
+
+
+
+ *Time is a wrapper around time.Time which supports correct marshaling to YAML and JSON. Wrappers are provided for many of the factory methods that the time package offers.*
+
+- **deletionGracePeriodSeconds** (int64)
+
+ Number of seconds allowed for this object to gracefully terminate before it will be removed from the system. Only set when deletionTimestamp is also set. May only be shortened. Read-only.
+
+- **deletionTimestamp** (Time)
+
+ DeletionTimestamp is RFC 3339 date and time at which this resource will be deleted. This field is set by the server when a graceful deletion is requested by the user, and is not directly settable by a client. The resource is expected to be deleted (no longer visible from resource lists, and not reachable by name) after the time in this field, once the finalizers list is empty. As long as the finalizers list contains items, deletion is blocked. Once the deletionTimestamp is set, this value may not be unset or be set further into the future, although it may be shortened or the resource may be deleted prior to this time. For example, a user may request that a pod is deleted in 30 seconds. The Kubelet will react by sending a graceful termination signal to the containers in the pod. After that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL) to the container and after cleanup, remove the pod from the API. In the presence of network partitions, this object may still exist after this timestamp, until an administrator or automated process can determine the resource is fully terminated. If not set, graceful deletion of the object has not been requested.
+
+ Populated by the system when a graceful deletion is requested. Read-only. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
+
+
+
+ *Time is a wrapper around time.Time which supports correct marshaling to YAML and JSON. Wrappers are provided for many of the factory methods that the time package offers.*
+
+- **finalizers** ([]string)
+
+ Must be empty before the object is deleted from the registry. Each entry is an identifier for the responsible component that will remove the entry from the list. If the deletionTimestamp of the object is non-nil, entries in this list can only be removed. Finalizers may be processed and removed in any order. Order is NOT enforced because it introduces significant risk of stuck finalizers. finalizers is a shared field, any actor with permission can reorder it. If the finalizer list is processed in order, then this can lead to a situation in which the component responsible for the first finalizer in the list is waiting for a signal (field value, external system, or other) produced by a component responsible for a finalizer later in the list, resulting in a deadlock. Without enforced ordering finalizers are free to order amongst themselves and are not vulnerable to ordering changes in the list.
+
+- **generateName** (string)
+
+ GenerateName is an optional prefix, used by the server, to generate a unique name ONLY IF the Name field has not been provided. If this field is used, the name returned to the client will be different than the name passed. This value will also be combined with a unique suffix. The provided value has the same validation rules as the Name field, and may be truncated by the length of the suffix required to make the value unique on the server.
+
+ If this field is specified and the generated name exists, the server will return a 409.
+
+ Applied only if Name is not specified. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency
+
+- **generation** (int64)
+
+ A sequence number representing a specific generation of the desired state. Populated by the system. Read-only.
+
+- **labels** (map[string]string)
+
+ Map of string keys and values that can be used to organize and categorize (scope and select) objects. May match selectors of replication controllers and services. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels
+
+- **managedFields** ([]ManagedFieldsEntry)
+
+ ManagedFields maps workflow-id and version to the set of fields that are managed by that workflow. This is mostly for internal housekeeping, and users typically shouldn't need to set or understand this field. A workflow can be the user's name, a controller's name, or the name of a specific apply path like "ci-cd". The set of fields is always in the version that the workflow used when modifying the object.
+
+
+
+ *ManagedFieldsEntry is a workflow-id, a FieldSet and the group version of the resource that the fieldset applies to.*
+
+ - **managedFields.apiVersion** (string)
+
+ APIVersion defines the version of this resource that this field set applies to. The format is "group/version" just like the top-level APIVersion field. It is necessary to track the version of a field set because it cannot be automatically converted.
+
+ - **managedFields.fieldsType** (string)
+
+ FieldsType is the discriminator for the different fields format and version. There is currently only one possible value: "FieldsV1"
+
+ - **managedFields.fieldsV1** (FieldsV1)
+
+ FieldsV1 holds the first JSON version format as described in the "FieldsV1" type.
+
+
+
+ *FieldsV1 stores a set of fields in a data structure like a Trie, in JSON format.
+
+ Each key is either a '.' representing the field itself, and will always map to an empty set, or a string representing a sub-field or item. The string will follow one of these four formats: 'f:<name>', where <name> is the name of a field in a struct, or key in a map 'v:<value>', where <value> is the exact json formatted value of a list item 'i:<index>', where <index> is position of a item in a list 'k:<keys>', where <keys> is a map of a list item's key fields to their unique values If a key maps to an empty Fields value, the field that key represents is part of the set.
+
+ The exact format is defined in sigs.k8s.io/structured-merge-diff*
+
+ - **managedFields.manager** (string)
+
+ Manager is an identifier of the workflow managing these fields.
+
+ - **managedFields.operation** (string)
+
+    Operation is the type of operation which led to this ManagedFieldsEntry being created. The only valid values for this field are 'Apply' and 'Update'.
+
+ - **managedFields.subresource** (string)
+
+ Subresource is the name of the subresource used to update that object, or empty string if the object was updated through the main resource. The value of this field is used to distinguish between managers, even if they share the same name. For example, a status update will be distinct from a regular update using the same manager name. Note that the APIVersion field is not related to the Subresource field and it always corresponds to the version of the main resource.
+
+ - **managedFields.time** (Time)
+
+ Time is the timestamp of when the ManagedFields entry was added. The timestamp will also be updated if a field is added, the manager changes any of the owned fields value or removes a field. The timestamp does not update when a field is removed from the entry because another manager took it over.
+
+
+
+ *Time is a wrapper around time.Time which supports correct marshaling to YAML and JSON. Wrappers are provided for many of the factory methods that the time package offers.*
+
+- **name** (string)
+
+ Name must be unique within a namespace. Is required when creating resources, although some resources may allow a client to request the generation of an appropriate name automatically. Name is primarily intended for creation idempotence and configuration definition. Cannot be updated. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names#names
+
+- **namespace** (string)
+
+ Namespace defines the space within which each name must be unique. An empty namespace is equivalent to the "default" namespace, but "default" is the canonical representation. Not all objects are required to be scoped to a namespace - the value of this field for those objects will be empty.
+
+ Must be a DNS_LABEL. Cannot be updated. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces
+
+- **ownerReferences** ([]OwnerReference)
+
+ *Patch strategy: merge on key `uid`*
+
+ List of objects depended by this object. If ALL objects in the list have been deleted, this object will be garbage collected. If this object is managed by a controller, then an entry in this list will point to this controller, with the controller field set to true. There cannot be more than one managing controller.
+
+
+
+ *OwnerReference contains enough information to let you identify an owning object. An owning object must be in the same namespace as the dependent, or be cluster-scoped, so there is no namespace field.*
+
+ - **ownerReferences.apiVersion** (string), required
+
+ API version of the referent.
+
+ - **ownerReferences.kind** (string), required
+
+ Kind of the referent. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
+
+ - **ownerReferences.name** (string), required
+
+ Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names#names
+
+ - **ownerReferences.uid** (string), required
+
+ UID of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names#uids
+
+ - **ownerReferences.blockOwnerDeletion** (boolean)
+
+ If true, AND if the owner has the "foregroundDeletion" finalizer, then the owner cannot be deleted from the key-value store until this reference is removed. See https://kubernetes.io/docs/concepts/architecture/garbage-collection/#foreground-deletion for how the garbage collector interacts with this field and enforces the foreground deletion. Defaults to false. To set this field, a user needs "delete" permission of the owner, otherwise 422 (Unprocessable Entity) will be returned.
+
+ - **ownerReferences.controller** (boolean)
+
+ If true, this reference points to the managing controller.
+
+- **resourceVersion** (string)
+
+ An opaque value that represents the internal version of this object that can be used by clients to determine when objects have changed. May be used for optimistic concurrency, change detection, and the watch operation on a resource or set of resources. Clients must treat these values as opaque and passed unmodified back to the server. They may only be valid for a particular resource or set of resources.
+
+  Populated by the system. Read-only. Value must be treated as opaque by clients. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency
+
+- **selfLink** (string)
+
+ Deprecated: selfLink is a legacy read-only field that is no longer populated by the system.
+
+- **uid** (string)
+
+ UID is the unique in time and space value for this object. It is typically generated by the server on successful creation of a resource and is not allowed to change on PUT operations.
+
+ Populated by the system. Read-only. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names#uids
+
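+For orientation, the sketch below shows a metadata block a client might author, using hypothetical names, labels, and finalizer; fields such as uid, resourceVersion, creationTimestamp, generation, and managedFields are populated by the server and omitted here:
+
+```yaml
+metadata:
+  generateName: demo-policy-           # server appends a random suffix because name is omitted
+  namespace: default
+  labels:
+    app.kubernetes.io/name: demo
+  annotations:
+    example.com/owner: platform-team   # hypothetical annotation
+  finalizers:
+    - example.com/cleanup              # hypothetical finalizer; blocks deletion until removed
+```
+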
diff --git a/versioned_docs/version-v1.9/reference/karmada-api/common-definitions/patch.md b/versioned_docs/version-v1.9/reference/karmada-api/common-definitions/patch.md
new file mode 100644
index 000000000..efa84e32b
--- /dev/null
+++ b/versioned_docs/version-v1.9/reference/karmada-api/common-definitions/patch.md
@@ -0,0 +1,22 @@
+---
+api_metadata:
+ apiVersion: ""
+ import: "k8s.io/apimachinery/pkg/apis/meta/v1"
+ kind: "Patch"
+content_type: "api_reference"
+description: "Patch is provided to give a concrete name and type to the Kubernetes PATCH request body."
+title: "Patch"
+weight: 6
+auto_generated: true
+---
+
+[//]: # (The file is auto-generated from the Go source code of the component using a generic generator,)
+[//]: # (which is forked from [reference-docs](https://github.com/kubernetes-sigs/reference-docs).)
+[//]: # (To update the reference content, please follow the `reference-api.sh`.)
+
+`import "k8s.io/apimachinery/pkg/apis/meta/v1"`
+
+Patch is provided to give a concrete name and type to the Kubernetes PATCH request body.
+
+
+
diff --git a/versioned_docs/version-v1.9/reference/karmada-api/common-definitions/quantity.md b/versioned_docs/version-v1.9/reference/karmada-api/common-definitions/quantity.md
new file mode 100644
index 000000000..37eb31f6b
--- /dev/null
+++ b/versioned_docs/version-v1.9/reference/karmada-api/common-definitions/quantity.md
@@ -0,0 +1,58 @@
+---
+api_metadata:
+ apiVersion: ""
+ import: "k8s.io/apimachinery/pkg/api/resource"
+ kind: "Quantity"
+content_type: "api_reference"
+description: "Quantity is a fixed-point representation of a number."
+title: "Quantity"
+weight: 7
+auto_generated: true
+---
+
+[//]: # (The file is auto-generated from the Go source code of the component using a generic generator,)
+[//]: # (which is forked from [reference-docs](https://github.com/kubernetes-sigs/reference-docs).)
+[//]: # (To update the reference content, please follow the `reference-api.sh`.)
+
+`import "k8s.io/apimachinery/pkg/api/resource"`
+
+Quantity is a fixed-point representation of a number. It provides convenient marshaling/unmarshalling in JSON and YAML, in addition to String() and AsInt64() accessors.
+
+The serialization format is:
+
+``` \<quantity\> ::= \<signedNumber\>\<suffix\>
+
+  (Note that \<suffix\> may be empty, from the "" case in \<decimalSI\>.)
+
+\<digit\> ::= 0 | 1 | ... | 9 \<digits\> ::= \<digit\> | \<digit\>\<digits\> \<number\> ::= \<digits\> | \<digits\>.\<digits\> | \<digits\>. | .\<digits\> \<sign\> ::= "+" | "-" \<signedNumber\> ::= \<number\> | \<sign\>\<number\> \<suffix\> ::= \<binarySI\> | \<decimalExponent\> | \<decimalSI\> \<binarySI\> ::= Ki | Mi | Gi | Ti | Pi | Ei
+
+  (International System of units; See: http://physics.nist.gov/cuu/Units/binary.html)
+
+\<decimalSI\> ::= m | "" | k | M | G | T | P | E
+
+  (Note that 1024 = 1Ki but 1000 = 1k; I didn't choose the capitalization.)
+
+\<decimalExponent\> ::= "e" \<signedNumber\>