Merge pull request #34 from jillian-maroket/add-variables
[v1.4] Add product name variables to topics in Introduction, Installation and Setup, and Hosts
jillian-maroket authored Nov 22, 2024
2 parents 3a540b0 + 4e4a058 commit c5fd997
Showing 23 changed files with 239 additions and 211 deletions.
17 changes: 16 additions & 1 deletion versions/v1.3/antora.yml
@@ -4,4 +4,19 @@ version: v1.3
display_version: 'v1.3 (Latest)'
start_page: en:introduction/overview.adoc
nav:
- modules/en/nav.adoc
- modules/en/nav.adoc
asciidoc:
attributes:
harvester-product-name: "SUSE Virtualization"
harvester-product-name-tm: "SUSE® Virtualization"
longhorn-product-name: "SUSE Storage"
longhorn-product-name-tm: "SUSE® Storage"
neuvector-product-name: "SUSE® Security"
rancher-product-name: "SUSE Rancher Prime"
rancher-product-name-tm: "SUSE® Rancher Prime"
elemental-product-name: "SUSE® Rancher Prime: OS Manager"
k3s-product-name: "SUSE® Rancher Prime: K3s"
kubewarden-product-name: "SUSE® Rancher Prime: Admission Policy Manager"
rke2-product-name: "SUSE® Rancher Prime: RKE2"
fleet-product-name: "SUSE® Rancher Prime: Continuous Delivery"
turtles-product-name: "SUSE® Rancher Prime: Cluster API"
17 changes: 16 additions & 1 deletion versions/v1.4/antora.yml
@@ -4,4 +4,19 @@ version: v1.4
display_version: v1.4 (Dev)
start_page: en:introduction/overview.adoc
nav:
- modules/en/nav.adoc
- modules/en/nav.adoc
asciidoc:
attributes:
harvester-product-name: "SUSE Virtualization"
harvester-product-name-tm: "SUSE® Virtualization"
longhorn-product-name: "SUSE Storage"
longhorn-product-name-tm: "SUSE® Storage"
neuvector-product-name: "SUSE® Security"
rancher-product-name: "SUSE Rancher Prime"
rancher-product-name-tm: "SUSE® Rancher Prime"
elemental-product-name: "SUSE® Rancher Prime: OS Manager"
k3s-product-name: "SUSE® Rancher Prime: K3s"
kubewarden-product-name: "SUSE® Rancher Prime: Admission Policy Manager"
rke2-product-name: "SUSE® Rancher Prime: RKE2"
fleet-product-name: "SUSE® Rancher Prime: Continuous Delivery"
turtles-product-name: "SUSE® Rancher Prime: Cluster API"
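
The `asciidoc.attributes` defined above become available to every page in the component version, which is what the page diffs below rely on. As a brief illustration of how such an attribute resolves when a page is built (the source line mirrors one from the witness-node page in this commit):

[,asciidoc]
----
// Source in a page of this component version:
{harvester-product-name} supports clusters with two management nodes and one witness node.

// Rendered text:
// SUSE Virtualization supports clusters with two management nodes and one witness node.
----
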
76 changes: 37 additions & 39 deletions versions/v1.4/modules/en/pages/hosts/hosts.adoc

Large diffs are not rendered by default.

12 changes: 6 additions & 6 deletions versions/v1.4/modules/en/pages/hosts/vgpu-support.adoc
@@ -1,27 +1,27 @@
= vGPU Support

SUSE® Virtualization is capable of sharing NVIDIA GPU support for Single Root IO Virtualization (SR-IOV). This additional capability, which is provided by the **pcidevices-controller** add-on, leverages `sriov-manage` for GPU management.
{harvester-product-name} can share NVIDIA GPUs that support Single Root IO Virtualization (SR-IOV). This capability, which is provided by the **pcidevices-controller** add-on, leverages `sriov-manage` for GPU management.

To determine if your GPU supports SR-IOV, check the device documentation. For more information about creating an NVIDIA vGPU that supports SR-IOV, see the https://docs.nvidia.com/grid/15.0/grid-vgpu-user-guide/index.html#creating-sriov-vgpu-device-red-hat-el-kvm[NVIDIA documentation].

You must enable the xref:../add-ons/nvidia-driver-toolkit.adoc[nvidia-driver-toolkit] add-on to be able to manage the lifecycle of vGPUs on GPU devices.
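
If you manage the cluster declaratively, the add-on can also be enabled by setting `spec.enabled` on its add-on resource. This is a hedged sketch only; the apiVersion, resource name, and namespace shown here are assumptions to verify in your cluster before applying.

[,yaml]
----
# Hypothetical sketch: enable the nvidia-driver-toolkit add-on declaratively.
# apiVersion, name, and namespace are assumptions; confirm them in your cluster.
apiVersion: harvesterhci.io/v1beta1
kind: Addon
metadata:
  name: nvidia-driver-toolkit
  namespace: harvester-system
spec:
  enabled: true
----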

== Usage

. On the UI, go to *Advanced* > *SR-IOV GPU Devices* and verify the following:
. On the UI, go to *Advanced -> SR-IOV GPU Devices* and verify the following:
+
* GPU devices have been scanned.
* An associated `sriovgpudevices.devices.harvesterhci.io` object has been created.
+
image::advanced/sriovgpudevices-disabled.png[]

. Locate the device that you want to enable, and then select *:* > *Enable*.
. Locate the device that you want to enable, and then select *⋮ -> Enable*.
+
image::advanced/sriovgpudevices-enabled.png[]

. Go to the *vGPU Devices* screen and check the associated `vgpudevices.devices.harvesterhci.io` objects.
+
Allow some time for the pcidevices-controller to scan the vGPU devices and for the Harvester UI to display the device information.
Allow some time for the pcidevices-controller to scan the vGPU devices and for the {harvester-product-name} UI to display the device information.
+
image::advanced/vgpudevicelist.png[]
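
If you prefer to confirm these objects from the command line, the custom resources named above can be listed directly with kubectl (assuming you have kubeconfig access to the cluster):

----
# List the scanned SR-IOV GPU devices and the vGPU devices derived from them.
kubectl get sriovgpudevices.devices.harvesterhci.io
kubectl get vgpudevices.devices.harvesterhci.io
----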

@@ -47,7 +47,7 @@ Once a vGPU has been assigned to a VM, it may not be possible to disable the VM

=== Limitations

==== Attaching multiple vGPU's:
==== Attaching Multiple vGPUs

Attaching multiple vGPUs to a VM may fail for the following reasons:

@@ -78,7 +78,7 @@ If you select the `NVIDIA A2-4Q` profile, you can only configure 4 vGPU devices.

image::advanced/nvidia-a2-example.png[]

=== Technical Deep dive
=== Technical Deep Dive

pcidevices-controller introduces the following CRDs:

12 changes: 6 additions & 6 deletions versions/v1.4/modules/en/pages/hosts/witness-node.adoc
@@ -1,19 +1,19 @@
= Witness Node

SUSE® Virtualization clusters deployed in production environments require a control plane for node and pod management. A typical three-node cluster has three management nodes that each contain the complete set of control plane components. One key component is etcd, which Kubernetes uses to store its data (configuration, state, and metadata). The etcd node count must always be an odd number (for example, 3 is the default count in SUSE® Virtualization) to ensure that a quorum is maintained.
{harvester-product-name} clusters deployed in production environments require a control plane for node and pod management. A typical three-node cluster has three management nodes that each contain the complete set of control plane components. One key component is etcd, which Kubernetes uses to store its data (configuration, state, and metadata). The etcd node count must always be an odd number (for example, 3 is the default count in {harvester-product-name}) to ensure that a quorum is maintained.

Some situations may require you to avoid deploying workloads and user data to management nodes. In these situations, one cluster node can be assigned the _witness_ role, which limits it to functioning as an etcd cluster member. The witness node is responsible for establishing a member quorum (a majority of nodes), which must agree on updates to the cluster state.

Witness nodes do not store any data, but the https://etcd.io/docs/v3.3/op-guide/hardware/[hardware recommendations] for etcd nodes must still be considered. Using hardware with limited resources significantly affects cluster performance, as described in the article https://www.suse.com/support/kb/doc/?id=000020100[Slow etcd performance (performance testing and optimization)].

SUSE® Virtualization supports clusters with two management nodes and one witness node (and optionally, one or more worker nodes). For more information about node roles, see xref:../hosts/hosts.adoc#_role_management[Role Management].
{harvester-product-name} supports clusters with two management nodes and one witness node (and optionally, one or more worker nodes). For more information about node roles, see xref:../hosts/hosts.adoc#_role_management[Role Management].

[IMPORTANT]
====
A node can be assigned the _witness_ role only at the time it joins a cluster. Each cluster can have only one witness node.
====

== Creating a SUSE® Virtualization Cluster with a Witness Node
== Creating a {harvester-product-name} Cluster with a Witness Node

You can assign the _witness_ role to a node when it joins a newly created cluster.

@@ -45,9 +45,9 @@ The general upgrade requirements and procedures apply to clusters with a witness

== Longhorn Replicas in Clusters with a Witness Node

SUSE® Virtualization uses Longhorn, a distributed block storage system, for management of block device volumes. Longhorn is provisioned to management and worker nodes but not to witness nodes, which do not store any data.
{harvester-product-name} uses Longhorn, a distributed block storage system, for management of block device volumes. Longhorn is provisioned to management and worker nodes but not to witness nodes, which do not store any data.

Longhorn creates replicas of each volume to increase availability. Replicas contain a chain of snapshots of the volume, with each snapshot storing the change from a previous snapshot. In SUSE® Virtualization, the default StorageClass `harvester-longhorn` has a replica count value of `3`.
Longhorn creates replicas of each volume to increase availability. Replicas contain a chain of snapshots of the volume, with each snapshot storing the change from a previous snapshot. In {harvester-product-name}, the default StorageClass `harvester-longhorn` has a replica count value of `3`.
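
For context, the replica count of a Longhorn-backed StorageClass is set through its parameters; the following sketch mirrors the default behavior described above. Parameter names follow Longhorn conventions, but treat the exact manifest as illustrative rather than a copy of the shipped `harvester-longhorn` class.

[,yaml]
----
# Illustrative StorageClass with three Longhorn replicas per volume.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: harvester-longhorn
provisioner: driver.longhorn.io
parameters:
  numberOfReplicas: "3"
  staleReplicaTimeout: "30"
----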

== Limitations

@@ -70,7 +70,7 @@ image::advanced/update-replica-2.png[update-replica-count-to-2]

== Known Issues

=== 1. When creating a cluster with a witness node, the *Network Config: Create* screen on the Harvester UI is unable to identify any NICs that can be used with all nodes.
=== 1. When creating a cluster with a witness node, the *Network Config: Create* screen on the {harvester-product-name} UI is unable to identify any NICs that can be used with all nodes.

image::advanced/create-policy-with-all-nodes.png[create network config with all nodes]

10 changes: 5 additions & 5 deletions versions/v1.4/modules/en/pages/installation-setup/airgap.adoc
@@ -1,6 +1,6 @@
= Air-Gapped Environment

This section describes how to use SUSE® Virtualization in an air gapped environment. Some use cases could be where SUSE® Virtualization will be installed offline, behind a firewall, or behind a proxy.
This section describes how to use {harvester-product-name} in an air-gapped environment, such as when {harvester-product-name} is installed offline, behind a firewall, or behind a proxy.

The ISO image contains all the packages required to work in an air-gapped environment.

@@ -26,19 +26,19 @@ image::proxy-setting.png[proxy-setting]

[NOTE]
====
SUSE® Virtualization appends necessary addresses to user configured `no-proxy` to ensure the internal traffic works.
{harvester-product-name} appends the necessary addresses to the user-configured `no-proxy` value to ensure that internal traffic works.
That is, `localhost,127.0.0.1,0.0.0.0,10.0.0.0/8,longhorn-system,cattle-system,cattle-system.svc,harvester-system,.svc,.cluster.local` are appended. `harvester-system` has been included in the list since v1.1.2.
When the nodes in the cluster do not use a proxy to communicate with each other, the CIDR needs to be added to `http-proxy.noProxy` after the first node is installed successfully. Please refer to xref:../troubleshooting/cluster.adoc#_fail_to_deploy_a_multi_node_cluster_due_to_incorrect_http_proxy_setting[fail to deploy a multi-node cluster].
====
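
For reference, the proxy configuration is stored as a JSON string in the `http-proxy` setting. The following is a hedged sketch of adding a cluster CIDR to `noProxy`; the apiVersion and field layout are assumptions to verify against your cluster before applying.

[,yaml]
----
# Hypothetical sketch of the http-proxy setting with the node CIDR added to noProxy.
apiVersion: harvesterhci.io/v1beta1
kind: Setting
metadata:
  name: http-proxy
value: '{"httpProxy": "http://proxy.example.com:3128", "httpsProxy": "http://proxy.example.com:3128", "noProxy": "192.168.0.0/24"}'
----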

== Guest Cluster Images

All necessary images to install and run SUSE® Virtualization are conveniently packaged into the ISO, eliminating the need to pre-load images on bare-metal nodes. A SUSE® Virtualization cluster manages them independently and effectively behind the scenes.
All necessary images to install and run {harvester-product-name} are conveniently packaged into the ISO, eliminating the need to pre-load images on bare-metal nodes. A {harvester-product-name} cluster manages them independently and effectively behind the scenes.

However, it's essential to understand a guest K8s cluster (e.g., RKE2 cluster) created by the xref:../integrations/rancher/node-driver/node-driver.adoc[Harvester node driver] is a distinct entity from a SUSE® Virtualization cluster. A guest cluster operates within VMs and requires pulling images either from the internet or a https://ranchermanager.docs.rancher.com/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/global-default-private-registry#configure-a-private-registry-with-credentials-when-creating-a-cluster[private registry].
However, it's essential to understand that a guest Kubernetes cluster (for example, an RKE2 cluster) created by the xref:../integrations/rancher/node-driver/node-driver.adoc[Harvester node driver] is a distinct entity from a {harvester-product-name} cluster. A guest cluster operates within VMs and requires pulling images either from the internet or a https://ranchermanager.docs.rancher.com/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/global-default-private-registry#configure-a-private-registry-with-credentials-when-creating-a-cluster[private registry].

If the *Cloud Provider* option is configured to SUSE® Virtualization in a guest K8s cluster, it deploys the Harvester cloud provider and Container Storage Interface (CSI) driver.
If the *Cloud Provider* option is configured to {harvester-product-name} in a guest Kubernetes cluster, it deploys the Harvester cloud provider and Container Storage Interface (CSI) driver.

image::cluster-registry.png[cluster-registry]

@@ -6,5 +6,5 @@ image::install/first-time-login.png[auth]

[NOTE]
====
In the single cluster mode, only one default `admin` user is provided. Check out the xref:../integrations/rancher/rancher-integration.adoc[Rancher Integration] for multi-tenant management.
In the single cluster mode, only one default `admin` user is provided. Check out xref:../integrations/rancher/rancher-integration.adoc[Rancher Integration] for multi-tenant management.
====
@@ -1,10 +1,10 @@
= CloudInit CRD

You can use the `CloudInit` CRD to configure SUSE® Virtualization operating system settings either manually or using GitOps solutions.
You can use the `CloudInit` CRD to configure {harvester-product-name} operating system settings either manually or using GitOps solutions.

== Background

The SUSE® Virtualization operating system uses the https://github.com/rancher/elemental-toolkit[elemental-toolkit], which has a unique form of https://rancher.github.io/elemental-toolkit/docs/reference/cloud_init/[cloud-init support].
The {harvester-product-name} operating system uses the https://github.com/rancher/elemental-toolkit[elemental-toolkit], which has a unique form of https://rancher.github.io/elemental-toolkit/docs/reference/cloud_init/[cloud-init support].

Settings configured during the installation process are written to the `elemental` cloud-init file in the `/oem` directory. Because the operating system is immutable, the cloud-init file ensures that node-specific settings are applied on each reboot.

@@ -15,13 +15,13 @@ In addition, the `CloudInit` CRD is persisted and synchronized with the underlyi
[NOTE]
====
The `CloudInit` CRD is a cluster-scoped resource. Ensure that your user account has the permissions required to access the resource (via Rancher role-based access control).
The `CloudInit` CRD is a cluster-scoped resource. Ensure that your user account has the permissions required to access the resource (via {rancher-product-name} role-based access control).
====


== Getting Started

The following example adds SSH keys to all nodes in an existing SUSE® Virtualization cluster.
The following example adds SSH keys to all nodes in an existing {harvester-product-name} cluster.

[,yaml]
----
@@ -49,11 +49,11 @@ The `spec` field contains the following:
* `matchSelector (required)`: Label selector used to identify the nodes that the change must be applied to. You can use the `harvesterhci.io/managed: "true"` label to select all nodes.
* `filename (required)`: Name of the file in `/oem`. cloud-init files in `/oem` are applied in alphabetical order. This can be used to ensure that file changes are applied during booting.
* `content (required)`: Inline content for the Elemental cloud-init resource that is written to target nodes.
* `paused (optional)`: Used to pause `CloudInit` CRD reconciliation. The SUSE® Virtualization controllers monitor Elemental cloud-init files that are managed by the `CloudInit` CRD. Direct changes made to these files are immediately reconciled back to the defined state unless the CRD is paused.
* `paused (optional)`: Used to pause `CloudInit` CRD reconciliation. The {harvester-product-name} controllers monitor Elemental cloud-init files that are managed by the `CloudInit` CRD. Direct changes made to these files are immediately reconciled back to the defined state unless the CRD is paused.
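
Putting these fields together, a minimal `CloudInit` object might look like the following sketch. The apiVersion and the elemental stage layout inside `content` are assumptions for illustration; adjust them to match the CRD and cloud-init schema in your cluster.

[,yaml]
----
# Hypothetical sketch combining the spec fields described above.
apiVersion: node.harvesterhci.io/v1beta1   # assumption; confirm the CRD group/version
kind: CloudInit
metadata:
  name: my-ssh-keys
spec:
  matchSelector:
    harvesterhci.io/managed: "true"
  filename: 99-my-ssh-keys.yaml
  paused: false
  content: |
    stages:
      network:
        - authorized_keys:
            root:
              - ssh-ed25519 AAAA... user@example.com
----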

Once the object is created, you can log in to the target nodes to verify the results.

In the following example, a file named `/oem/99-my-ssh-keys.yaml` is created and subsequently monitored by the SUSE® Virtualization controllers.
In the following example, a file named `/oem/99-my-ssh-keys.yaml` is created and subsequently monitored by the {harvester-product-name} controllers.

----
harvester-qhgd4:/oem # more 99-my-ssh-keys.yaml