Export config from the Ceph provider cluster
To configure an external Ceph cluster with Rook, we need to extract some information from that cluster so that Rook can connect to it.
1. Create all users and keys
Run the python script create-external-cluster-resources.py to create all users and keys. The supported flags are described below; an example invocation follows the list.
--namespace
: Namespace where the CephCluster will run, for example `rook-ceph`

--format bash
: The format of the output

--rbd-data-pool-name
: The name of the RBD data pool

--alias-rbd-data-pool-name
: Provides an alias for the RBD data pool name, necessary if a special character such as a period or underscore is present in the pool name

--rgw-endpoint
: (optional) The RADOS Gateway endpoint in the format `<IP>:<PORT>` or `<FQDN>:<PORT>`

--rgw-pool-prefix
: (optional) The prefix of the RGW pools. If not specified, the default prefix is `default`

--rgw-tls-cert-path
: (optional) RADOS Gateway endpoint TLS certificate (or intermediate signing certificate) file path

--rgw-skip-tls
: (optional) Ignore TLS certificate validation when a self-signed certificate is provided (NOT RECOMMENDED)

--rbd-metadata-ec-pool-name
: (optional) Provides the name of the erasure coded RBD metadata pool, used for creating ECRBDStorageClass

--monitoring-endpoint
: (optional) Ceph Manager prometheus exporter endpoints (comma-separated list of IP entries of active and standby mgrs)

--monitoring-endpoint-port
: (optional) Ceph Manager prometheus exporter port

--skip-monitoring-endpoint
: (optional) Skip prometheus exporter endpoints, even if they are available. Useful if the prometheus module is not enabled

--ceph-conf
: (optional) Provide a Ceph conf file

--keyring
: (optional) Path to the Ceph keyring file, to be used with `--ceph-conf`

--k8s-cluster-name
: (optional) Kubernetes cluster name

--output
: (optional) Output will be stored in the provided file

--dry-run
: (optional) Prints the executed commands without running them

--run-as-user
: (optional) Provides a user name to check the cluster's health status, must be prefixed by `client.`

--cephfs-metadata-pool-name
: (optional) Provides the name of the CephFS metadata pool

--cephfs-filesystem-name
: (optional) The name of the filesystem, used for creating the CephFS StorageClass

--cephfs-data-pool-name
: (optional) Provides the name of the CephFS data pool, used for creating the CephFS StorageClass

--rados-namespace
: (optional) Divides a pool into separate logical namespaces, used for creating an RBD PVC in a CephBlockPoolRadosNamespace (should be lower case)

--subvolume-group
: (optional) Provides the name of the subvolume group, used for creating a CephFS PVC in a subvolumeGroup

--rgw-realm-name
: (optional) Provides the name of the rgw-realm

--rgw-zone-name
: (optional) Provides the name of the rgw-zone

--rgw-zonegroup-name
: (optional) Provides the name of the rgw-zone-group

--upgrade
: (optional) Upgrades the cephCSIKeyrings (for example: client.csi-cephfs-provisioner) and client.healthchecker ceph users with the new permissions needed for the new cluster version; the older permissions will still be applied

--restricted-auth-permission
: (optional) Restrict cephCSIKeyrings auth permissions to specific pools and cluster. The mandatory flags that need to be set are `--rbd-data-pool-name` and `--k8s-cluster-name`. The `--cephfs-filesystem-name` flag can also be passed in case of CephFS user restriction, so that users are restricted to a particular CephFS filesystem

--v2-port-enable
: (optional) Enables the v2 mon port (3300) for mons

--topology-pools
: (optional) Comma-separated list of topology-constrained rbd pools

--topology-failure-domain-label
: (optional) K8s cluster failure domain label (example: zone, rack, or host) for the topology-pools that match the ceph domain

--topology-failure-domain-values
: (optional) Comma-separated list of the k8s cluster failure domain values corresponding to each of the pools in the `topology-pools` list

--config-file
: Path to the configuration file. Priority: command-line args > config.ini values > default values
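As a minimal sketch (not a verbatim command from this page), a typical invocation on a host with admin access to the provider cluster might look like the following; the angle-bracket placeholders are values you supply, and every flag shown is described in the list above. The second variant assumes `--restricted-auth-permission` accepts an explicit boolean value.

```bash
# Minimal sketch: create the users/keys and print the connection details as bash exports.
python3 create-external-cluster-resources.py \
    --rbd-data-pool-name <rbd-data-pool-name> \
    --cephfs-filesystem-name <filesystem-name> \
    --rgw-endpoint <ip>:<port> \
    --namespace rook-ceph \
    --format bash

# Restricted-auth sketch: limit the CSI keyrings to a specific pool and Kubernetes cluster.
python3 create-external-cluster-resources.py \
    --rbd-data-pool-name <rbd-data-pool-name> \
    --k8s-cluster-name <k8s-cluster-name> \
    --restricted-auth-permission true \
    --format bash
```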
2. Copy the bash output
Example Output:
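The exact output depends on the provider cluster and the flags passed. As a rough, illustrative sketch only: the script prints a series of bash export statements carrying the connection details. The variable names below are typical of that output, but both the names and the placeholder values here should be treated as illustrative rather than authoritative.

```bash
export ROOK_EXTERNAL_FSID=<fsid-of-the-provider-cluster>
export ROOK_EXTERNAL_USERNAME=client.healthchecker
export ROOK_EXTERNAL_CEPH_MON_DATA=<mon-name>=<mon-ip>:<mon-port>
export ROOK_EXTERNAL_USER_SECRET=<key-of-client.healthchecker>
export CSI_RBD_NODE_SECRET=<key-of-client.csi-rbd-node>
export CSI_RBD_PROVISIONER_SECRET=<key-of-client.csi-rbd-provisioner>
export CSI_CEPHFS_NODE_SECRET=<key-of-client.csi-cephfs-node>
export CSI_CEPHFS_PROVISIONER_SECRET=<key-of-client.csi-cephfs-provisioner>
export RBD_POOL_NAME=<rbd-data-pool-name>
export RGW_POOL_PREFIX=default
```

Save this output; it is consumed by the import steps on the Rook (consumer) cluster.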
1 2 3 4 diff --git a/docs/rook/latest/search/search_index.json b/docs/rook/latest/search/search_index.json index 2422859c7..d44171985 100644 --- a/docs/rook/latest/search/search_index.json +++ b/docs/rook/latest/search/search_index.json @@ -1 +1 @@ -{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"CRDs/ceph-client-crd/","title":"CephClient CRD","text":"
Examples of the files follow:
"},{"location":"CRDs/ceph-client-crd/#6-connect-to-the-ceph-cluster-with-your-given-client-id","title":"6. Connect to the Ceph cluster with your given client ID","text":"With the files we've created, you should be able to query the cluster by setting Ceph ENV variables and running With this config, the ceph tools ( The Ceph project contains a SQLite VFS that interacts with RADOS directly, called First, on your workload ensure that you have the appropriate packages installed that make
Without the appropriate package (or a from-scratch build of SQLite), you will be unable to load After creating a Then start your SQLite database: If those lines complete without error, you have successfully set up SQLite to access Ceph. See the libcephsqlite documentation for more information on the VFS and database URL format. "},{"location":"CRDs/ceph-nfs-crd/","title":"CephNFS CRD","text":"Rook allows exporting NFS shares of a CephFilesystem or CephObjectStore through the CephNFS custom resource definition. "},{"location":"CRDs/ceph-nfs-crd/#example","title":"Example","text":" "},{"location":"CRDs/ceph-nfs-crd/#nfs-settings","title":"NFS Settings","text":""},{"location":"CRDs/ceph-nfs-crd/#server","title":"Server","text":"The
The
It is possible to scale the size of the cluster up or down by modifying the The CRD always eliminates the highest index servers first, in reverse order from how they were started. Scaling down the cluster requires that clients be migrated from servers that will be eliminated to others. That process is currently a manual one and should be performed before reducing the size of the cluster. Warning See the known issue below about setting this value greater than one. "},{"location":"CRDs/ceph-nfs-crd/#known-issues","title":"Known issues","text":""},{"location":"CRDs/ceph-nfs-crd/#serveractive-count-greater-than-1","title":"server.active count greater than 1","text":"
Packages:
Package v1 is the v1 version of the API. Resource Types:
CephBlockPool represents a Ceph Storage Pool Field DescriptionapiVersion string ceph.rook.io/v1 kind string CephBlockPool metadata Kubernetes meta/v1.ObjectMeta Refer to the Kubernetes API documentation for the fields of the metadata field. spec NamedBlockPoolSpec name string (Optional) The desired name of the pool if different from the CephBlockPool CR name. PoolSpec PoolSpec (Members of The core pool configuration status CephBlockPoolStatus"},{"location":"CRDs/specification/#ceph.rook.io/v1.CephBucketNotification","title":"CephBucketNotification","text":"CephBucketNotification represents a Bucket Notifications Field DescriptionapiVersion string ceph.rook.io/v1 kind string CephBucketNotification metadata Kubernetes meta/v1.ObjectMeta Refer to the Kubernetes API documentation for the fields of the metadata field. spec BucketNotificationSpec topic string The name of the topic associated with this notification events []BucketNotificationEvent (Optional) List of events that should trigger the notification filter NotificationFilterSpec (Optional) Spec of notification filter status Status (Optional)"},{"location":"CRDs/specification/#ceph.rook.io/v1.CephBucketTopic","title":"CephBucketTopic","text":"CephBucketTopic represents a Ceph Object Topic for Bucket Notifications Field DescriptionapiVersion string ceph.rook.io/v1 kind string CephBucketTopic metadata Kubernetes meta/v1.ObjectMeta Refer to the Kubernetes API documentation for the fields of the metadata field. spec BucketTopicSpec objectStoreName string The name of the object store on which to define the topic objectStoreNamespace string The namespace of the object store on which to define the topic opaqueData string (Optional) Data which is sent in each event persistent bool (Optional) Indication whether notifications to this endpoint are persistent or not endpoint TopicEndpointSpec Contains the endpoint spec of the topic status BucketTopicStatus (Optional)"},{"location":"CRDs/specification/#ceph.rook.io/v1.CephCOSIDriver","title":"CephCOSIDriver","text":"CephCOSIDriver represents the CRD for the Ceph COSI Driver Deployment Field DescriptionapiVersion string ceph.rook.io/v1 kind string CephCOSIDriver metadata Kubernetes meta/v1.ObjectMeta Refer to the Kubernetes API documentation for the fields of the metadata field. spec CephCOSIDriverSpec Spec represents the specification of a Ceph COSI Driver image string (Optional) Image is the container image to run the Ceph COSI driver objectProvisionerImage string (Optional) ObjectProvisionerImage is the container image to run the COSI driver sidecar deploymentStrategy COSIDeploymentStrategy (Optional) DeploymentStrategy is the strategy to use to deploy the COSI driver. placement Placement (Optional) Placement is the placement strategy to use for the COSI driver resources Kubernetes core/v1.ResourceRequirements (Optional) Resources is the resource requirements for the COSI driver "},{"location":"CRDs/specification/#ceph.rook.io/v1.CephClient","title":"CephClient","text":"CephClient represents a Ceph Client Field DescriptionapiVersion string ceph.rook.io/v1 kind string CephClient metadata Kubernetes meta/v1.ObjectMeta Refer to the Kubernetes API documentation for the fields of the metadata field. 
spec ClientSpec Spec represents the specification of a Ceph Client name string (Optional) caps map[string]string status CephClientStatus (Optional) Status represents the status of a Ceph Client "},{"location":"CRDs/specification/#ceph.rook.io/v1.CephCluster","title":"CephCluster","text":"CephCluster is a Ceph storage cluster Field DescriptionapiVersion string ceph.rook.io/v1 kind string CephCluster metadata Kubernetes meta/v1.ObjectMeta Refer to the Kubernetes API documentation for the fields of the metadata field. spec ClusterSpec cephVersion CephVersionSpec (Optional) The version information that instructs Rook to orchestrate a particular version of Ceph. storage StorageScopeSpec (Optional) A spec for available storage in the cluster and how it should be used annotations AnnotationsSpec (Optional) The annotations-related configuration to add/set on each Pod related object. labels LabelsSpec (Optional) The labels-related configuration to add/set on each Pod related object. placement PlacementSpec (Optional) The placement-related configuration to pass to kubernetes (affinity, node selector, tolerations). network NetworkSpec (Optional) Network related configuration resources ResourceSpec (Optional) Resources set resource requests and limits priorityClassNames PriorityClassNamesSpec (Optional) PriorityClassNames sets priority classes on components dataDirHostPath string (Optional) The path on the host where config and data can be persisted skipUpgradeChecks bool (Optional) SkipUpgradeChecks defines if an upgrade should be forced even if one of the check fails continueUpgradeAfterChecksEvenIfNotHealthy bool (Optional) ContinueUpgradeAfterChecksEvenIfNotHealthy defines if an upgrade should continue even if PGs are not clean waitTimeoutForHealthyOSDInMinutes time.Duration (Optional) WaitTimeoutForHealthyOSDInMinutes defines the time the operator would wait before an OSD can be stopped for upgrade or restart. If the timeout exceeds and OSD is not ok to stop, then the operator would skip upgrade for the current OSD and proceed with the next one if upgradeOSDRequiresHealthyPGs bool (Optional) UpgradeOSDRequiresHealthyPGs defines if OSD upgrade requires PGs are clean. If set to disruptionManagement DisruptionManagementSpec (Optional) A spec for configuring disruption management. mon MonSpec (Optional) A spec for mon related options crashCollector CrashCollectorSpec (Optional) A spec for the crash controller dashboard DashboardSpec (Optional) Dashboard settings monitoring MonitoringSpec (Optional) Prometheus based Monitoring settings external ExternalSpec (Optional) Whether the Ceph Cluster is running external to this Kubernetes cluster mon, mgr, osd, mds, and discover daemons will not be created for external clusters. mgr MgrSpec (Optional) A spec for mgr related options removeOSDsIfOutAndSafeToRemove bool (Optional) Remove the OSD that is out and safe to remove only if this option is true cleanupPolicy CleanupPolicySpec (Optional) Indicates user intent when deleting a cluster; blocks orchestration and should not be set if cluster deletion is not imminent. healthCheck CephClusterHealthCheckSpec (Optional) Internal daemon healthchecks and liveness probe security SecuritySpec (Optional) Security represents security settings logCollector LogCollectorSpec (Optional) Logging represents loggings settings csi CSIDriverSpec (Optional) CSI Driver Options applied per cluster. 
cephConfig map[string]map[string]string (Optional) Ceph Config options status ClusterStatus (Optional)"},{"location":"CRDs/specification/#ceph.rook.io/v1.CephFilesystem","title":"CephFilesystem","text":"CephFilesystem represents a Ceph Filesystem Field DescriptionapiVersion string ceph.rook.io/v1 kind string CephFilesystem metadata Kubernetes meta/v1.ObjectMeta Refer to the Kubernetes API documentation for the fields of the metadata field. spec FilesystemSpec metadataPool PoolSpec The metadata pool settings dataPools []NamedPoolSpec The data pool settings, with optional predefined pool name. preservePoolsOnDelete bool (Optional) Preserve pools on filesystem deletion preserveFilesystemOnDelete bool (Optional) Preserve the fs in the cluster on CephFilesystem CR deletion. Setting this to true automatically implies PreservePoolsOnDelete is true. metadataServer MetadataServerSpec The mds pod info mirroring FSMirroringSpec (Optional) The mirroring settings statusCheck MirrorHealthCheckSpec The mirroring statusCheck status CephFilesystemStatus"},{"location":"CRDs/specification/#ceph.rook.io/v1.CephFilesystemMirror","title":"CephFilesystemMirror","text":"CephFilesystemMirror is the Ceph Filesystem Mirror object definition Field DescriptionapiVersion string ceph.rook.io/v1 kind string CephFilesystemMirror metadata Kubernetes meta/v1.ObjectMeta Refer to the Kubernetes API documentation for the fields of the metadata field. spec FilesystemMirroringSpec placement Placement (Optional) The affinity to place the rgw pods (default is to place on any available node) annotations Annotations (Optional) The annotations-related configuration to add/set on each Pod related object. labels Labels (Optional) The labels-related configuration to add/set on each Pod related object. resources Kubernetes core/v1.ResourceRequirements (Optional) The resource requirements for the cephfs-mirror pods priorityClassName string (Optional) PriorityClassName sets priority class on the cephfs-mirror pods status Status (Optional)"},{"location":"CRDs/specification/#ceph.rook.io/v1.CephFilesystemSubVolumeGroup","title":"CephFilesystemSubVolumeGroup","text":"CephFilesystemSubVolumeGroup represents a Ceph Filesystem SubVolumeGroup Field DescriptionapiVersion string ceph.rook.io/v1 kind string CephFilesystemSubVolumeGroup metadata Kubernetes meta/v1.ObjectMeta Refer to the Kubernetes API documentation for the fields of the metadata field. spec CephFilesystemSubVolumeGroupSpec Spec represents the specification of a Ceph Filesystem SubVolumeGroup name string (Optional) The name of the subvolume group. If not set, the default is the name of the subvolumeGroup CR. filesystemName string FilesystemName is the name of Ceph Filesystem SubVolumeGroup volume name. Typically it\u2019s the name of the CephFilesystem CR. If not coming from the CephFilesystem CR, it can be retrieved from the list of Ceph Filesystem volumes with pinning CephFilesystemSubVolumeGroupSpecPinning (Optional) Pinning configuration of CephFilesystemSubVolumeGroup, reference https://docs.ceph.com/en/latest/cephfs/fs-volumes/#pinning-subvolumes-and-subvolume-groups only one out of (export, distributed, random) can be set at a time quota k8s.io/apimachinery/pkg/api/resource.Quantity (Optional) Quota size of the Ceph Filesystem subvolume group. dataPoolName string (Optional) The data pool name for the Ceph Filesystem subvolume group layout, if the default CephFS pool is not desired. 
status CephFilesystemSubVolumeGroupStatus (Optional) Status represents the status of a CephFilesystem SubvolumeGroup "},{"location":"CRDs/specification/#ceph.rook.io/v1.CephNFS","title":"CephNFS","text":"CephNFS represents a Ceph NFS Field DescriptionapiVersion string ceph.rook.io/v1 kind string CephNFS metadata Kubernetes meta/v1.ObjectMeta Refer to the Kubernetes API documentation for the fields of the metadata field. spec NFSGaneshaSpec rados GaneshaRADOSSpec (Optional) RADOS is the Ganesha RADOS specification server GaneshaServerSpec Server is the Ganesha Server specification security NFSSecuritySpec (Optional) Security allows specifying security configurations for the NFS cluster status Status (Optional)"},{"location":"CRDs/specification/#ceph.rook.io/v1.CephObjectRealm","title":"CephObjectRealm","text":"CephObjectRealm represents a Ceph Object Store Gateway Realm Field DescriptionapiVersion string ceph.rook.io/v1 kind string CephObjectRealm metadata Kubernetes meta/v1.ObjectMeta Refer to the Kubernetes API documentation for the fields of the metadata field. spec ObjectRealmSpec (Optional) pull PullSpec status Status (Optional)"},{"location":"CRDs/specification/#ceph.rook.io/v1.CephObjectStore","title":"CephObjectStore","text":"CephObjectStore represents a Ceph Object Store Gateway Field DescriptionapiVersion string ceph.rook.io/v1 kind string CephObjectStore metadata Kubernetes meta/v1.ObjectMeta Refer to the Kubernetes API documentation for the fields of the metadata field. spec ObjectStoreSpec metadataPool PoolSpec (Optional) The metadata pool settings dataPool PoolSpec (Optional) The data pool settings sharedPools ObjectSharedPoolsSpec (Optional) The pool information when configuring RADOS namespaces in existing pools. preservePoolsOnDelete bool (Optional) Preserve pools on object store deletion gateway GatewaySpec (Optional) The rgw pod info protocols ProtocolSpec (Optional) The protocol specification auth AuthSpec (Optional) The authentication configuration zone ZoneSpec (Optional) The multisite info healthCheck ObjectHealthCheckSpec (Optional) The RGW health probes security ObjectStoreSecuritySpec (Optional) Security represents security settings allowUsersInNamespaces []string (Optional) The list of allowed namespaces in addition to the object store namespace where ceph object store users may be created. Specify \u201c*\u201d to allow all namespaces, otherwise list individual namespaces that are to be allowed. This is useful for applications that need object store credentials to be created in their own namespace, where neither OBCs nor COSI is being used to create buckets. The default is empty. hosting ObjectStoreHostingSpec (Optional) Hosting settings for the object store. A common use case for hosting configuration is to inform Rook of endpoints that support DNS wildcards, which in turn allows virtual host-style bucket addressing. status ObjectStoreStatus"},{"location":"CRDs/specification/#ceph.rook.io/v1.CephObjectStoreUser","title":"CephObjectStoreUser","text":"CephObjectStoreUser represents a Ceph Object Store Gateway User Field DescriptionapiVersion string ceph.rook.io/v1 kind string CephObjectStoreUser metadata Kubernetes meta/v1.ObjectMeta Refer to the Kubernetes API documentation for the fields of the metadata field. 
spec ObjectStoreUserSpec store string (Optional) The store the user will be created in displayName string (Optional) The display name for the ceph users capabilities ObjectUserCapSpec (Optional) quotas ObjectUserQuotaSpec (Optional) clusterNamespace string (Optional) The namespace where the parent CephCluster and CephObjectStore are found status ObjectStoreUserStatus (Optional)"},{"location":"CRDs/specification/#ceph.rook.io/v1.CephObjectZone","title":"CephObjectZone","text":"CephObjectZone represents a Ceph Object Store Gateway Zone Field DescriptionapiVersion string ceph.rook.io/v1 kind string CephObjectZone metadata Kubernetes meta/v1.ObjectMeta Refer to the Kubernetes API documentation for the fields of the metadata field. spec ObjectZoneSpec zoneGroup string The display name for the ceph users metadataPool PoolSpec (Optional) The metadata pool settings dataPool PoolSpec (Optional) The data pool settings sharedPools ObjectSharedPoolsSpec (Optional) The pool information when configuring RADOS namespaces in existing pools. customEndpoints []string (Optional) If this zone cannot be accessed from other peer Ceph clusters via the ClusterIP Service endpoint created by Rook, you must set this to the externally reachable endpoint(s). You may include the port in the definition. For example: \u201chttps://my-object-store.my-domain.net:443\u201d. In many cases, you should set this to the endpoint of the ingress resource that makes the CephObjectStore associated with this CephObjectStoreZone reachable to peer clusters. The list can have one or more endpoints pointing to different RGW servers in the zone. If a CephObjectStore endpoint is omitted from this list, that object store\u2019s gateways will not receive multisite replication data (see CephObjectStore.spec.gateway.disableMultisiteSyncTraffic). preservePoolsOnDelete bool (Optional) Preserve pools on object zone deletion status Status (Optional)"},{"location":"CRDs/specification/#ceph.rook.io/v1.CephObjectZoneGroup","title":"CephObjectZoneGroup","text":"CephObjectZoneGroup represents a Ceph Object Store Gateway Zone Group Field DescriptionapiVersion string ceph.rook.io/v1 kind string CephObjectZoneGroup metadata Kubernetes meta/v1.ObjectMeta Refer to the Kubernetes API documentation for the fields of the metadata field. spec ObjectZoneGroupSpec realm string The display name for the ceph users status Status (Optional)"},{"location":"CRDs/specification/#ceph.rook.io/v1.CephRBDMirror","title":"CephRBDMirror","text":"CephRBDMirror represents a Ceph RBD Mirror Field DescriptionapiVersion string ceph.rook.io/v1 kind string CephRBDMirror metadata Kubernetes meta/v1.ObjectMeta Refer to the Kubernetes API documentation for the fields of the metadata field. spec RBDMirroringSpec count int Count represents the number of rbd mirror instance to run peers MirroringPeerSpec (Optional) Peers represents the peers spec placement Placement (Optional) The affinity to place the rgw pods (default is to place on any available node) annotations Annotations (Optional) The annotations-related configuration to add/set on each Pod related object. labels Labels (Optional) The labels-related configuration to add/set on each Pod related object. 
resources Kubernetes core/v1.ResourceRequirements (Optional) The resource requirements for the rbd mirror pods priorityClassName string (Optional) PriorityClassName sets priority class on the rbd mirror pods status Status (Optional)"},{"location":"CRDs/specification/#ceph.rook.io/v1.AMQPEndpointSpec","title":"AMQPEndpointSpec","text":"(Appears on:TopicEndpointSpec) AMQPEndpointSpec represent the spec of an AMQP endpoint of a Bucket Topic Field Descriptionuri string The URI of the AMQP endpoint to push notification to exchange string Name of the exchange that is used to route messages based on topics disableVerifySSL bool (Optional) Indicate whether the server certificate is validated by the client or not ackLevel string (Optional) The ack level required for this topic (none/broker/routeable) "},{"location":"CRDs/specification/#ceph.rook.io/v1.AdditionalVolumeMount","title":"AdditionalVolumeMount","text":"AdditionalVolumeMount represents the source from where additional files in pod containers should come from and what subdirectory they are made available in. Field DescriptionsubPath string SubPath defines the sub-path (subdirectory) of the directory root where the volumeSource will be mounted. All files/keys in the volume source\u2019s volume will be mounted to the subdirectory. This is not the same as the Kubernetes volumeSource ConfigFileVolumeSource VolumeSource accepts a pared down version of the standard Kubernetes VolumeSource for the additional file(s) like what is normally used to configure Volumes for a Pod. Fore example, a ConfigMap, Secret, or HostPath. Each VolumeSource adds one or more additional files to the container []github.com/rook/rook/pkg/apis/ceph.rook.io/v1.AdditionalVolumeMount alias)","text":"(Appears on:GatewaySpec, SSSDSidecar) "},{"location":"CRDs/specification/#ceph.rook.io/v1.AddressRangesSpec","title":"AddressRangesSpec","text":"(Appears on:NetworkSpec) Field Descriptionpublic CIDRList (Optional) Public defines a list of CIDRs to use for Ceph public network communication. cluster CIDRList (Optional) Cluster defines a list of CIDRs to use for Ceph cluster network communication. 
"},{"location":"CRDs/specification/#ceph.rook.io/v1.Annotations","title":"Annotations (map[string]string alias)","text":"(Appears on:FilesystemMirroringSpec, GaneshaServerSpec, GatewaySpec, MetadataServerSpec, RBDMirroringSpec, RGWServiceSpec) Annotations are annotations "},{"location":"CRDs/specification/#ceph.rook.io/v1.AnnotationsSpec","title":"AnnotationsSpec (map[github.com/rook/rook/pkg/apis/ceph.rook.io/v1.KeyType]github.com/rook/rook/pkg/apis/ceph.rook.io/v1.Annotations alias)","text":"(Appears on:ClusterSpec) AnnotationsSpec is the main spec annotation for all daemons "},{"location":"CRDs/specification/#ceph.rook.io/v1.AuthSpec","title":"AuthSpec","text":"(Appears on:ObjectStoreSpec) AuthSpec represents the authentication protocol configuration of a Ceph Object Store Gateway Field Descriptionkeystone KeystoneSpec (Optional) The spec for Keystone "},{"location":"CRDs/specification/#ceph.rook.io/v1.BucketNotificationEvent","title":"BucketNotificationEvent (string alias)","text":"(Appears on:BucketNotificationSpec) BucketNotificationSpec represent the event type of the bucket notification "},{"location":"CRDs/specification/#ceph.rook.io/v1.BucketNotificationSpec","title":"BucketNotificationSpec","text":"(Appears on:CephBucketNotification) BucketNotificationSpec represent the spec of a Bucket Notification Field Descriptiontopic string The name of the topic associated with this notification events []BucketNotificationEvent (Optional) List of events that should trigger the notification filter NotificationFilterSpec (Optional) Spec of notification filter "},{"location":"CRDs/specification/#ceph.rook.io/v1.BucketTopicSpec","title":"BucketTopicSpec","text":"(Appears on:CephBucketTopic) BucketTopicSpec represent the spec of a Bucket Topic Field DescriptionobjectStoreName string The name of the object store on which to define the topic objectStoreNamespace string The namespace of the object store on which to define the topic opaqueData string (Optional) Data which is sent in each event persistent bool (Optional) Indication whether notifications to this endpoint are persistent or not endpoint TopicEndpointSpec Contains the endpoint spec of the topic "},{"location":"CRDs/specification/#ceph.rook.io/v1.BucketTopicStatus","title":"BucketTopicStatus","text":"(Appears on:CephBucketTopic) BucketTopicStatus represents the Status of a CephBucketTopic Field Descriptionphase string (Optional) ARN string (Optional) The ARN of the topic generated by the RGW observedGeneration int64 (Optional) ObservedGeneration is the latest generation observed by the controller. "},{"location":"CRDs/specification/#ceph.rook.io/v1.CIDR","title":"CIDR (string alias)","text":"An IPv4 or IPv6 network CIDR. This naive kubebuilder regex provides immediate feedback for some typos and for a common problem case where the range spec is forgotten (e.g., /24). Rook does in-depth validation in code. 
"},{"location":"CRDs/specification/#ceph.rook.io/v1.COSIDeploymentStrategy","title":"COSIDeploymentStrategy (string alias)","text":"(Appears on:CephCOSIDriverSpec) COSIDeploymentStrategy represents the strategy to use to deploy the Ceph COSI driver Value Description\"Always\" Always means the Ceph COSI driver will be deployed even if the object store is not present \"Auto\" Auto means the Ceph COSI driver will be deployed automatically if object store is present \"Never\" Never means the Ceph COSI driver will never deployed "},{"location":"CRDs/specification/#ceph.rook.io/v1.CSICephFSSpec","title":"CSICephFSSpec","text":"(Appears on:CSIDriverSpec) CSICephFSSpec defines the settings for CephFS CSI driver. Field DescriptionkernelMountOptions string (Optional) KernelMountOptions defines the mount options for kernel mounter. fuseMountOptions string (Optional) FuseMountOptions defines the mount options for ceph fuse mounter. "},{"location":"CRDs/specification/#ceph.rook.io/v1.CSIDriverSpec","title":"CSIDriverSpec","text":"(Appears on:ClusterSpec) CSIDriverSpec defines CSI Driver settings applied per cluster. Field DescriptionreadAffinity ReadAffinitySpec (Optional) ReadAffinity defines the read affinity settings for CSI driver. cephfs CSICephFSSpec (Optional) CephFS defines CSI Driver settings for CephFS driver. "},{"location":"CRDs/specification/#ceph.rook.io/v1.Capacity","title":"Capacity","text":"(Appears on:CephStatus) Capacity is the capacity information of a Ceph Cluster Field DescriptionbytesTotal uint64 bytesUsed uint64 bytesAvailable uint64 lastUpdated string"},{"location":"CRDs/specification/#ceph.rook.io/v1.CephBlockPoolRadosNamespace","title":"CephBlockPoolRadosNamespace","text":"CephBlockPoolRadosNamespace represents a Ceph BlockPool Rados Namespace Field Descriptionmetadata Kubernetes meta/v1.ObjectMeta Refer to the Kubernetes API documentation for the fields of the metadata field. spec CephBlockPoolRadosNamespaceSpec Spec represents the specification of a Ceph BlockPool Rados Namespace name string (Optional) The name of the CephBlockPoolRadosNamespaceSpec namespace. If not set, the default is the name of the CR. blockPoolName string BlockPoolName is the name of Ceph BlockPool. Typically it\u2019s the name of the CephBlockPool CR. mirroring RadosNamespaceMirroring (Optional) Mirroring configuration of CephBlockPoolRadosNamespace status CephBlockPoolRadosNamespaceStatus (Optional) Status represents the status of a CephBlockPool Rados Namespace "},{"location":"CRDs/specification/#ceph.rook.io/v1.CephBlockPoolRadosNamespaceSpec","title":"CephBlockPoolRadosNamespaceSpec","text":"(Appears on:CephBlockPoolRadosNamespace) CephBlockPoolRadosNamespaceSpec represents the specification of a CephBlockPool Rados Namespace Field Descriptionname string (Optional) The name of the CephBlockPoolRadosNamespaceSpec namespace. If not set, the default is the name of the CR. blockPoolName string BlockPoolName is the name of Ceph BlockPool. Typically it\u2019s the name of the CephBlockPool CR. 
mirroring RadosNamespaceMirroring (Optional) Mirroring configuration of CephBlockPoolRadosNamespace "},{"location":"CRDs/specification/#ceph.rook.io/v1.CephBlockPoolRadosNamespaceStatus","title":"CephBlockPoolRadosNamespaceStatus","text":"(Appears on:CephBlockPoolRadosNamespace) CephBlockPoolRadosNamespaceStatus represents the Status of Ceph BlockPool Rados Namespace Field Descriptionphase ConditionType (Optional) info map[string]string (Optional)"},{"location":"CRDs/specification/#ceph.rook.io/v1.CephBlockPoolStatus","title":"CephBlockPoolStatus","text":"(Appears on:CephBlockPool) CephBlockPoolStatus represents the mirroring status of Ceph Storage Pool Field Descriptionphase ConditionType (Optional) mirroringStatus MirroringStatusSpec (Optional) mirroringInfo MirroringInfoSpec (Optional) snapshotScheduleStatus SnapshotScheduleStatusSpec (Optional) info map[string]string (Optional) observedGeneration int64 (Optional) ObservedGeneration is the latest generation observed by the controller. conditions []Condition"},{"location":"CRDs/specification/#ceph.rook.io/v1.CephCOSIDriverSpec","title":"CephCOSIDriverSpec","text":"(Appears on:CephCOSIDriver) CephCOSIDriverSpec represents the specification of a Ceph COSI Driver Field Descriptionimage string (Optional) Image is the container image to run the Ceph COSI driver objectProvisionerImage string (Optional) ObjectProvisionerImage is the container image to run the COSI driver sidecar deploymentStrategy COSIDeploymentStrategy (Optional) DeploymentStrategy is the strategy to use to deploy the COSI driver. placement Placement (Optional) Placement is the placement strategy to use for the COSI driver resources Kubernetes core/v1.ResourceRequirements (Optional) Resources is the resource requirements for the COSI driver "},{"location":"CRDs/specification/#ceph.rook.io/v1.CephClientStatus","title":"CephClientStatus","text":"(Appears on:CephClient) CephClientStatus represents the Status of Ceph Client Field Descriptionphase ConditionType (Optional) info map[string]string (Optional) observedGeneration int64 (Optional) ObservedGeneration is the latest generation observed by the controller. 
"},{"location":"CRDs/specification/#ceph.rook.io/v1.CephClusterHealthCheckSpec","title":"CephClusterHealthCheckSpec","text":"(Appears on:ClusterSpec) CephClusterHealthCheckSpec represent the healthcheck for Ceph daemons Field DescriptiondaemonHealth DaemonHealthSpec (Optional) DaemonHealth is the health check for a given daemon livenessProbe map[github.com/rook/rook/pkg/apis/ceph.rook.io/v1.KeyType]*github.com/rook/rook/pkg/apis/ceph.rook.io/v1.ProbeSpec (Optional) LivenessProbe allows changing the livenessProbe configuration for a given daemon startupProbe map[github.com/rook/rook/pkg/apis/ceph.rook.io/v1.KeyType]*github.com/rook/rook/pkg/apis/ceph.rook.io/v1.ProbeSpec (Optional) StartupProbe allows changing the startupProbe configuration for a given daemon "},{"location":"CRDs/specification/#ceph.rook.io/v1.CephDaemonsVersions","title":"CephDaemonsVersions","text":"(Appears on:CephStatus) CephDaemonsVersions show the current ceph version for different ceph daemons Field Descriptionmon map[string]int (Optional) Mon shows Mon Ceph version mgr map[string]int (Optional) Mgr shows Mgr Ceph version osd map[string]int (Optional) Osd shows Osd Ceph version rgw map[string]int (Optional) Rgw shows Rgw Ceph version mds map[string]int (Optional) Mds shows Mds Ceph version rbd-mirror map[string]int (Optional) RbdMirror shows RbdMirror Ceph version cephfs-mirror map[string]int (Optional) CephFSMirror shows CephFSMirror Ceph version overall map[string]int (Optional) Overall shows overall Ceph version "},{"location":"CRDs/specification/#ceph.rook.io/v1.CephExporterSpec","title":"CephExporterSpec","text":"(Appears on:MonitoringSpec) Field DescriptionperfCountersPrioLimit int64 Only performance counters greater than or equal to this option are fetched statsPeriodSeconds int64 Time to wait before sending requests again to exporter server (seconds) "},{"location":"CRDs/specification/#ceph.rook.io/v1.CephFilesystemStatus","title":"CephFilesystemStatus","text":"(Appears on:CephFilesystem) CephFilesystemStatus represents the status of a Ceph Filesystem Field Descriptionphase ConditionType (Optional) snapshotScheduleStatus FilesystemSnapshotScheduleStatusSpec (Optional) info map[string]string (Optional) Use only info and put mirroringStatus in it? mirroringStatus FilesystemMirroringInfoSpec (Optional) MirroringStatus is the filesystem mirroring status conditions []Condition observedGeneration int64 (Optional) ObservedGeneration is the latest generation observed by the controller. "},{"location":"CRDs/specification/#ceph.rook.io/v1.CephFilesystemSubVolumeGroupSpec","title":"CephFilesystemSubVolumeGroupSpec","text":"(Appears on:CephFilesystemSubVolumeGroup) CephFilesystemSubVolumeGroupSpec represents the specification of a Ceph Filesystem SubVolumeGroup Field Descriptionname string (Optional) The name of the subvolume group. If not set, the default is the name of the subvolumeGroup CR. filesystemName string FilesystemName is the name of Ceph Filesystem SubVolumeGroup volume name. Typically it\u2019s the name of the CephFilesystem CR. 
If not coming from the CephFilesystem CR, it can be retrieved from the list of Ceph Filesystem volumes with pinning CephFilesystemSubVolumeGroupSpecPinning (Optional) Pinning configuration of CephFilesystemSubVolumeGroup, reference https://docs.ceph.com/en/latest/cephfs/fs-volumes/#pinning-subvolumes-and-subvolume-groups only one out of (export, distributed, random) can be set at a time quota k8s.io/apimachinery/pkg/api/resource.Quantity (Optional) Quota size of the Ceph Filesystem subvolume group. dataPoolName string (Optional) The data pool name for the Ceph Filesystem subvolume group layout, if the default CephFS pool is not desired. "},{"location":"CRDs/specification/#ceph.rook.io/v1.CephFilesystemSubVolumeGroupSpecPinning","title":"CephFilesystemSubVolumeGroupSpecPinning","text":"(Appears on:CephFilesystemSubVolumeGroupSpec) CephFilesystemSubVolumeGroupSpecPinning represents the pinning configuration of SubVolumeGroup Field Descriptionexport int (Optional) distributed int (Optional) random, float64 (Optional)"},{"location":"CRDs/specification/#ceph.rook.io/v1.CephFilesystemSubVolumeGroupStatus","title":"CephFilesystemSubVolumeGroupStatus","text":"(Appears on:CephFilesystemSubVolumeGroup) CephFilesystemSubVolumeGroupStatus represents the Status of Ceph Filesystem SubVolumeGroup Field Descriptionphase ConditionType (Optional) info map[string]string (Optional) observedGeneration int64 (Optional) ObservedGeneration is the latest generation observed by the controller. "},{"location":"CRDs/specification/#ceph.rook.io/v1.CephHealthMessage","title":"CephHealthMessage","text":"(Appears on:CephStatus) CephHealthMessage represents the health message of a Ceph Cluster Field Descriptionseverity string message string"},{"location":"CRDs/specification/#ceph.rook.io/v1.CephNetworkType","title":"CephNetworkType (string alias)","text":"CephNetworkType should be \u201cpublic\u201d or \u201ccluster\u201d. Allow any string so that over-specified legacy clusters do not break on CRD update. Value Description\"cluster\" \"public\" "},{"location":"CRDs/specification/#ceph.rook.io/v1.CephStatus","title":"CephStatus","text":"(Appears on:ClusterStatus) CephStatus is the details health of a Ceph Cluster Field Descriptionhealth string details map[string]github.com/rook/rook/pkg/apis/ceph.rook.io/v1.CephHealthMessage lastChecked string lastChanged string previousHealth string capacity Capacity versions CephDaemonsVersions (Optional) fsid string"},{"location":"CRDs/specification/#ceph.rook.io/v1.CephStorage","title":"CephStorage","text":"(Appears on:ClusterStatus) CephStorage represents flavors of Ceph Cluster Storage Field DescriptiondeviceClasses []DeviceClasses osd OSDStatus deprecatedOSDs map[string][]int"},{"location":"CRDs/specification/#ceph.rook.io/v1.CephVersionSpec","title":"CephVersionSpec","text":"(Appears on:ClusterSpec) CephVersionSpec represents the settings for the Ceph version that Rook is orchestrating. Field Descriptionimage string (Optional) Image is the container image used to launch the ceph daemons, such as quay.io/ceph/ceph: The full list of images can be found at https://quay.io/repository/ceph/ceph?tab=tags Whether to allow unsupported versions (do not set to true in production) imagePullPolicy Kubernetes core/v1.PullPolicy (Optional) ImagePullPolicy describes a policy for if/when to pull a container image One of Always, Never, IfNotPresent. 
"},{"location":"CRDs/specification/#ceph.rook.io/v1.CleanupConfirmationProperty","title":"CleanupConfirmationProperty (string alias)","text":"(Appears on:CleanupPolicySpec) CleanupConfirmationProperty represents the cleanup confirmation Value Description\"yes-really-destroy-data\" DeleteDataDirOnHostsConfirmation represents the validation to destroy dataDirHostPath "},{"location":"CRDs/specification/#ceph.rook.io/v1.CleanupPolicySpec","title":"CleanupPolicySpec","text":"(Appears on:ClusterSpec) CleanupPolicySpec represents a Ceph Cluster cleanup policy Field Descriptionconfirmation CleanupConfirmationProperty (Optional) Confirmation represents the cleanup confirmation sanitizeDisks SanitizeDisksSpec (Optional) SanitizeDisks represents way we sanitize disks allowUninstallWithVolumes bool (Optional) AllowUninstallWithVolumes defines whether we can proceed with the uninstall if they are RBD images still present "},{"location":"CRDs/specification/#ceph.rook.io/v1.ClientSpec","title":"ClientSpec","text":"(Appears on:CephClient) ClientSpec represents the specification of a Ceph Client Field Descriptionname string (Optional) caps map[string]string"},{"location":"CRDs/specification/#ceph.rook.io/v1.ClusterSpec","title":"ClusterSpec","text":"(Appears on:CephCluster) ClusterSpec represents the specification of Ceph Cluster Field DescriptioncephVersion CephVersionSpec (Optional) The version information that instructs Rook to orchestrate a particular version of Ceph. storage StorageScopeSpec (Optional) A spec for available storage in the cluster and how it should be used annotations AnnotationsSpec (Optional) The annotations-related configuration to add/set on each Pod related object. labels LabelsSpec (Optional) The labels-related configuration to add/set on each Pod related object. placement PlacementSpec (Optional) The placement-related configuration to pass to kubernetes (affinity, node selector, tolerations). network NetworkSpec (Optional) Network related configuration resources ResourceSpec (Optional) Resources set resource requests and limits priorityClassNames PriorityClassNamesSpec (Optional) PriorityClassNames sets priority classes on components dataDirHostPath string (Optional) The path on the host where config and data can be persisted skipUpgradeChecks bool (Optional) SkipUpgradeChecks defines if an upgrade should be forced even if one of the check fails continueUpgradeAfterChecksEvenIfNotHealthy bool (Optional) ContinueUpgradeAfterChecksEvenIfNotHealthy defines if an upgrade should continue even if PGs are not clean waitTimeoutForHealthyOSDInMinutes time.Duration (Optional) WaitTimeoutForHealthyOSDInMinutes defines the time the operator would wait before an OSD can be stopped for upgrade or restart. If the timeout exceeds and OSD is not ok to stop, then the operator would skip upgrade for the current OSD and proceed with the next one if upgradeOSDRequiresHealthyPGs bool (Optional) UpgradeOSDRequiresHealthyPGs defines if OSD upgrade requires PGs are clean. If set to disruptionManagement DisruptionManagementSpec (Optional) A spec for configuring disruption management. 
mon MonSpec (Optional) A spec for mon related options crashCollector CrashCollectorSpec (Optional) A spec for the crash controller dashboard DashboardSpec (Optional) Dashboard settings monitoring MonitoringSpec (Optional) Prometheus based Monitoring settings external ExternalSpec (Optional) Whether the Ceph Cluster is running external to this Kubernetes cluster mon, mgr, osd, mds, and discover daemons will not be created for external clusters. mgr MgrSpec (Optional) A spec for mgr related options removeOSDsIfOutAndSafeToRemove bool (Optional) Remove the OSD that is out and safe to remove only if this option is true cleanupPolicy CleanupPolicySpec (Optional) Indicates user intent when deleting a cluster; blocks orchestration and should not be set if cluster deletion is not imminent. healthCheck CephClusterHealthCheckSpec (Optional) Internal daemon healthchecks and liveness probe security SecuritySpec (Optional) Security represents security settings logCollector LogCollectorSpec (Optional) Logging represents loggings settings csi CSIDriverSpec (Optional) CSI Driver Options applied per cluster. cephConfig map[string]map[string]string (Optional) Ceph Config options "},{"location":"CRDs/specification/#ceph.rook.io/v1.ClusterState","title":"ClusterState (string alias)","text":"(Appears on:ClusterStatus) ClusterState represents the state of a Ceph Cluster Value Description\"Connected\" ClusterStateConnected represents the Connected state of a Ceph Cluster \"Connecting\" ClusterStateConnecting represents the Connecting state of a Ceph Cluster \"Created\" ClusterStateCreated represents the Created state of a Ceph Cluster \"Creating\" ClusterStateCreating represents the Creating state of a Ceph Cluster \"Error\" ClusterStateError represents the Error state of a Ceph Cluster \"Updating\" ClusterStateUpdating represents the Updating state of a Ceph Cluster "},{"location":"CRDs/specification/#ceph.rook.io/v1.ClusterStatus","title":"ClusterStatus","text":"(Appears on:CephCluster) ClusterStatus represents the status of a Ceph cluster Field Descriptionstate ClusterState phase ConditionType message string conditions []Condition ceph CephStatus storage CephStorage version ClusterVersion observedGeneration int64 (Optional) ObservedGeneration is the latest generation observed by the controller. "},{"location":"CRDs/specification/#ceph.rook.io/v1.ClusterVersion","title":"ClusterVersion","text":"(Appears on:ClusterStatus) ClusterVersion represents the version of a Ceph Cluster Field Descriptionimage string version string"},{"location":"CRDs/specification/#ceph.rook.io/v1.CompressionSpec","title":"CompressionSpec","text":"(Appears on:ConnectionsSpec) Field Descriptionenabled bool (Optional) Whether to compress the data in transit across the wire. The default is not set. "},{"location":"CRDs/specification/#ceph.rook.io/v1.Condition","title":"Condition","text":"(Appears on:CephBlockPoolStatus, CephFilesystemStatus, ClusterStatus, ObjectStoreStatus, Status) Condition represents a status condition on any Rook-Ceph Custom Resource. 
Field Descriptiontype ConditionType status Kubernetes core/v1.ConditionStatus reason ConditionReason message string lastHeartbeatTime Kubernetes meta/v1.Time lastTransitionTime Kubernetes meta/v1.Time"},{"location":"CRDs/specification/#ceph.rook.io/v1.ConditionReason","title":"ConditionReason (string alias)","text":"(Appears on:Condition) ConditionReason is a reason for a condition Value Description\"ClusterConnected\" ClusterConnectedReason is cluster connected reason \"ClusterConnecting\" ClusterConnectingReason is cluster connecting reason \"ClusterCreated\" ClusterCreatedReason is cluster created reason \"ClusterDeleting\" ClusterDeletingReason is cluster deleting reason \"ClusterProgressing\" ClusterProgressingReason is cluster progressing reason \"Deleting\" DeletingReason represents when Rook has detected a resource object should be deleted. \"ObjectHasDependents\" ObjectHasDependentsReason represents when a resource object has dependents that are blocking deletion. \"ObjectHasNoDependents\" ObjectHasNoDependentsReason represents when a resource object has no dependents that are blocking deletion. \"ReconcileFailed\" ReconcileFailed represents when a resource reconciliation failed. \"ReconcileStarted\" ReconcileStarted represents when a resource reconciliation started. \"ReconcileSucceeded\" ReconcileSucceeded represents when a resource reconciliation was successful. "},{"location":"CRDs/specification/#ceph.rook.io/v1.ConditionType","title":"ConditionType (string alias)","text":"(Appears on:CephBlockPoolRadosNamespaceStatus, CephBlockPoolStatus, CephClientStatus, CephFilesystemStatus, CephFilesystemSubVolumeGroupStatus, ClusterStatus, Condition, ObjectStoreStatus) ConditionType represent a resource\u2019s status Value Description\"Connected\" ConditionConnected represents Connected state of an object \"Connecting\" ConditionConnecting represents Connecting state of an object \"Deleting\" ConditionDeleting represents Deleting state of an object \"DeletionIsBlocked\" ConditionDeletionIsBlocked represents when deletion of the object is blocked. \"Failure\" ConditionFailure represents Failure state of an object \"Progressing\" ConditionProgressing represents Progressing state of an object \"Ready\" ConditionReady represents Ready state of an object "},{"location":"CRDs/specification/#ceph.rook.io/v1.ConfigFileVolumeSource","title":"ConfigFileVolumeSource","text":"(Appears on:AdditionalVolumeMount, KerberosConfigFiles, KerberosKeytabFile, SSSDSidecarConfigFile) Represents the source of a volume to mount. Only one of its members may be specified. This is a subset of the full Kubernetes API\u2019s VolumeSource that is reduced to what is most likely to be useful for mounting config files/dirs into Rook pods. Field DescriptionhostPath Kubernetes core/v1.HostPathVolumeSource (Optional) hostPath represents a pre-existing file or directory on the host machine that is directly exposed to the container. This is generally used for system agents or other privileged things that are allowed to see the host machine. Most containers will NOT need this. More info: https://kubernetes.io/docs/concepts/storage/volumes#hostpath emptyDir Kubernetes core/v1.EmptyDirVolumeSource (Optional) emptyDir represents a temporary directory that shares a pod\u2019s lifetime. More info: https://kubernetes.io/docs/concepts/storage/volumes#emptydir secret Kubernetes core/v1.SecretVolumeSource (Optional) secret represents a secret that should populate this volume. 
More info: https://kubernetes.io/docs/concepts/storage/volumes#secret persistentVolumeClaim Kubernetes core/v1.PersistentVolumeClaimVolumeSource (Optional) persistentVolumeClaimVolumeSource represents a reference to a PersistentVolumeClaim in the same namespace. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims configMap Kubernetes core/v1.ConfigMapVolumeSource (Optional) configMap represents a configMap that should populate this volume projected Kubernetes core/v1.ProjectedVolumeSource projected items for all in one resources secrets, configmaps, and downward API "},{"location":"CRDs/specification/#ceph.rook.io/v1.ConnectionsSpec","title":"ConnectionsSpec","text":"(Appears on:NetworkSpec) Field Descriptionencryption EncryptionSpec (Optional) Encryption settings for the network connections. compression CompressionSpec (Optional) Compression settings for the network connections. requireMsgr2 bool (Optional) Whether to require msgr2 (port 3300) even if compression or encryption are not enabled. If true, the msgr1 port (6789) will be disabled. Requires a kernel that supports msgr2 (kernel 5.11 or CentOS 8.4 or newer). "},{"location":"CRDs/specification/#ceph.rook.io/v1.CrashCollectorSpec","title":"CrashCollectorSpec","text":"(Appears on:ClusterSpec) CrashCollectorSpec represents options to configure the crash controller Field Descriptiondisable bool (Optional) Disable determines whether we should enable the crash collector daysToRetain uint (Optional) DaysToRetain represents the number of days to retain crash until they get pruned "},{"location":"CRDs/specification/#ceph.rook.io/v1.DaemonHealthSpec","title":"DaemonHealthSpec","text":"(Appears on:CephClusterHealthCheckSpec) DaemonHealthSpec is a daemon health check Field Descriptionstatus HealthCheckSpec (Optional) Status represents the health check settings for the Ceph health mon HealthCheckSpec (Optional) Monitor represents the health check settings for the Ceph monitor osd HealthCheckSpec (Optional) ObjectStorageDaemon represents the health check settings for the Ceph OSDs "},{"location":"CRDs/specification/#ceph.rook.io/v1.DashboardSpec","title":"DashboardSpec","text":"(Appears on:ClusterSpec) DashboardSpec represents the settings for the Ceph dashboard Field Descriptionenabled bool (Optional) Enabled determines whether to enable the dashboard urlPrefix string (Optional) URLPrefix is a prefix for all URLs to use the dashboard with a reverse proxy port int (Optional) Port is the dashboard webserver port ssl bool (Optional) SSL determines whether SSL should be used prometheusEndpoint string (Optional) Endpoint for the Prometheus host prometheusEndpointSSLVerify bool (Optional) Whether to verify the ssl endpoint for prometheus. Set to false for a self-signed cert. 
"},{"location":"CRDs/specification/#ceph.rook.io/v1.Device","title":"Device","text":"(Appears on:Selection) Device represents a disk to use in the cluster Field Descriptionname string (Optional) fullpath string (Optional) config map[string]string (Optional)"},{"location":"CRDs/specification/#ceph.rook.io/v1.DeviceClasses","title":"DeviceClasses","text":"(Appears on:CephStorage) DeviceClasses represents device classes of a Ceph Cluster Field Descriptionname string"},{"location":"CRDs/specification/#ceph.rook.io/v1.DisruptionManagementSpec","title":"DisruptionManagementSpec","text":"(Appears on:ClusterSpec) DisruptionManagementSpec configures management of daemon disruptions Field DescriptionmanagePodBudgets bool (Optional) This enables management of poddisruptionbudgets osdMaintenanceTimeout time.Duration (Optional) OSDMaintenanceTimeout sets how many additional minutes the DOWN/OUT interval is for drained failure domains it only works if managePodBudgets is true. the default is 30 minutes pgHealthCheckTimeout time.Duration (Optional) PGHealthCheckTimeout is the time (in minutes) that the operator will wait for the placement groups to become healthy (active+clean) after a drain was completed and OSDs came back up. Rook will continue with the next drain if the timeout exceeds. It only works if managePodBudgets is true. No values or 0 means that the operator will wait until the placement groups are healthy before unblocking the next drain. pgHealthyRegex string (Optional) PgHealthyRegex is the regular expression that is used to determine which PG states should be considered healthy. The default is manageMachineDisruptionBudgets bool (Optional) Deprecated. This enables management of machinedisruptionbudgets. machineDisruptionBudgetNamespace string (Optional) Deprecated. Namespace to look for MDBs by the machineDisruptionBudgetController "},{"location":"CRDs/specification/#ceph.rook.io/v1.EncryptionSpec","title":"EncryptionSpec","text":"(Appears on:ConnectionsSpec) Field Descriptionenabled bool (Optional) Whether to encrypt the data in transit across the wire to prevent eavesdropping the data on the network. The default is not set. Even if encryption is not enabled, clients still establish a strong initial authentication for the connection and data integrity is still validated with a crc check. When encryption is enabled, all communication between clients and Ceph daemons, or between Ceph daemons will be encrypted. "},{"location":"CRDs/specification/#ceph.rook.io/v1.EndpointAddress","title":"EndpointAddress","text":"(Appears on:GatewaySpec) EndpointAddress is a tuple that describes a single IP address or host name. This is a subset of Kubernetes\u2019s v1.EndpointAddress. Field Descriptionip string (Optional) The IP of this endpoint. As a legacy behavior, this supports being given a DNS-addressable hostname as well. hostname string (Optional) The DNS-addressable Hostname of this endpoint. This field will be preferred over IP if both are given. "},{"location":"CRDs/specification/#ceph.rook.io/v1.ErasureCodedSpec","title":"ErasureCodedSpec","text":"(Appears on:PoolSpec) ErasureCodedSpec represents the spec for erasure code in a pool Field DescriptioncodingChunks uint Number of coding chunks per object in an erasure coded storage pool (required for erasure-coded pool type). This is the number of OSDs that can be lost simultaneously before data cannot be recovered. dataChunks uint Number of data chunks per object in an erasure coded storage pool (required for erasure-coded pool type). 
The number of chunks required to recover an object when any single OSD is lost is the same as dataChunks, so be aware that the larger the number of data chunks, the higher the cost of recovery.
algorithm string (Optional) The algorithm for erasure coding

ExternalSpec
(Appears on: ClusterSpec)
ExternalSpec represents the options supported by an external cluster
Field Description
enable bool (Optional) Enable determines whether external mode is enabled or not

FSMirroringSpec
(Appears on: FilesystemSpec)
FSMirroringSpec represents the setting for a mirrored filesystem
Field Description
enabled bool (Optional) Enabled whether this filesystem is mirrored or not
peers MirroringPeerSpec (Optional) Peers represents the peers spec
snapshotSchedules []SnapshotScheduleSpec (Optional) SnapshotSchedules is the scheduling of snapshots for mirrored filesystems
snapshotRetention []SnapshotScheduleRetentionSpec (Optional) Retention is the retention policy for a snapshot schedule. One path has exactly one retention policy. A policy can however contain multiple count-time period pairs in order to specify complex retention policies

FilesystemMirrorInfoPeerSpec
(Appears on: FilesystemsSpec)
FilesystemMirrorInfoPeerSpec is the specification of a filesystem peer mirror
Field Description
uuid string (Optional) UUID is the peer unique identifier
remote PeerRemoteSpec (Optional) Remote is the remote cluster information
stats PeerStatSpec (Optional) Stats are the statistics of the peer mirror

FilesystemMirroringInfo
(Appears on: FilesystemMirroringInfoSpec)
FilesystemMirroringInfo is the filesystem mirror status of a given filesystem
Field Description
daemon_id int (Optional) DaemonID is the cephfs-mirror name
filesystems []FilesystemsSpec (Optional) Filesystems is the list of filesystems managed by a given cephfs-mirror daemon

FilesystemMirroringInfoSpec
(Appears on: CephFilesystemStatus)
FilesystemMirroringInfoSpec is the status of the filesystem mirroring
Field Description
daemonsStatus []FilesystemMirroringInfo (Optional) DaemonsStatus is the mirroring status of the filesystem
lastChecked string (Optional) LastChecked is the last time the status was checked
lastChanged string (Optional) LastChanged is the last time the status changed
details string (Optional) Details contains potential status errors

FilesystemMirroringSpec
(Appears on: CephFilesystemMirror)
FilesystemMirroringSpec is the filesystem mirroring specification
Field Description
placement Placement (Optional) The affinity to place the cephfs-mirror pods (default is to place on any available node)
annotations Annotations (Optional) The annotations-related configuration to add/set on each Pod related object.
labels Labels (Optional) The labels-related configuration to add/set on each Pod related object.
resources Kubernetes core/v1.ResourceRequirements (Optional) The resource requirements for the cephfs-mirror pods
priorityClassName string (Optional) PriorityClassName sets the priority class on the cephfs-mirror pods

FilesystemSnapshotScheduleStatusRetention
(Appears on: FilesystemSnapshotSchedulesSpec)
FilesystemSnapshotScheduleStatusRetention is the retention specification for a filesystem snapshot schedule
Field Description
start string (Optional) Start is when the snapshot schedule starts
created string (Optional) Created is when the snapshot schedule was created
first string (Optional) First is when the first scheduled snapshot was taken
last string (Optional) Last is when the last scheduled snapshot was taken
last_pruned string (Optional) LastPruned is when the last scheduled snapshot was pruned
created_count int (Optional) CreatedCount is the total number of snapshots created
pruned_count int (Optional) PrunedCount is the total number of pruned snapshots
active bool (Optional) Active indicates whether the schedule is active or not

FilesystemSnapshotScheduleStatusSpec
(Appears on: CephFilesystemStatus)
FilesystemSnapshotScheduleStatusSpec is the status of the snapshot schedule
Field Description
snapshotSchedules []FilesystemSnapshotSchedulesSpec (Optional) SnapshotSchedules is the list of snapshots scheduled
lastChecked string (Optional) LastChecked is the last time the status was checked
lastChanged string (Optional) LastChanged is the last time the status changed
details string (Optional) Details contains potential status errors

FilesystemSnapshotSchedulesSpec
(Appears on: FilesystemSnapshotScheduleStatusSpec)
FilesystemSnapshotSchedulesSpec is the list of snapshot schedules for a filesystem
Field Description
fs string (Optional) Fs is the name of the Ceph Filesystem
subvol string (Optional) Subvol is the name of the sub volume
path string (Optional) Path is the path on the filesystem
rel_path string (Optional)
schedule string (Optional)
retention FilesystemSnapshotScheduleStatusRetention (Optional)

FilesystemSpec
(Appears on: CephFilesystem)
FilesystemSpec represents the spec of a file system
Field Description
metadataPool PoolSpec The metadata pool settings
dataPools []NamedPoolSpec The data pool settings, with optional predefined pool name.
preservePoolsOnDelete bool (Optional) Preserve pools on filesystem deletion
preserveFilesystemOnDelete bool (Optional) Preserve the fs in the cluster on CephFilesystem CR deletion. Setting this to true automatically implies PreservePoolsOnDelete is true.
metadataServer MetadataServerSpec The mds pod info mirroring FSMirroringSpec (Optional) The mirroring settings statusCheck MirrorHealthCheckSpec The mirroring statusCheck "},{"location":"CRDs/specification/#ceph.rook.io/v1.FilesystemsSpec","title":"FilesystemsSpec","text":"(Appears on:FilesystemMirroringInfo) FilesystemsSpec is spec for the mirrored filesystem Field Descriptionfilesystem_id int (Optional) FilesystemID is the filesystem identifier name string (Optional) Name is name of the filesystem directory_count int (Optional) DirectoryCount is the number of directories in the filesystem peers []FilesystemMirrorInfoPeerSpec (Optional) Peers represents the mirroring peers "},{"location":"CRDs/specification/#ceph.rook.io/v1.GaneshaRADOSSpec","title":"GaneshaRADOSSpec","text":"(Appears on:NFSGaneshaSpec) GaneshaRADOSSpec represents the specification of a Ganesha RADOS object Field Descriptionpool string (Optional) The Ceph pool used store the shared configuration for NFS-Ganesha daemons. This setting is deprecated, as it is internally required to be \u201c.nfs\u201d. namespace string (Optional) The namespace inside the Ceph pool (set by \u2018pool\u2019) where shared NFS-Ganesha config is stored. This setting is deprecated as it is internally set to the name of the CephNFS. "},{"location":"CRDs/specification/#ceph.rook.io/v1.GaneshaServerSpec","title":"GaneshaServerSpec","text":"(Appears on:NFSGaneshaSpec) GaneshaServerSpec represents the specification of a Ganesha Server Field Descriptionactive int The number of active Ganesha servers placement Placement (Optional) The affinity to place the ganesha pods annotations Annotations (Optional) The annotations-related configuration to add/set on each Pod related object. labels Labels (Optional) The labels-related configuration to add/set on each Pod related object. resources Kubernetes core/v1.ResourceRequirements (Optional) Resources set resource requests and limits priorityClassName string (Optional) PriorityClassName sets the priority class on the pods logLevel string (Optional) LogLevel set logging level hostNetwork bool (Optional) Whether host networking is enabled for the Ganesha server. If not set, the network settings from the cluster CR will be applied. livenessProbe ProbeSpec (Optional) A liveness-probe to verify that Ganesha server has valid run-time state. If LivenessProbe.Disabled is false and LivenessProbe.Probe is nil uses default probe. "},{"location":"CRDs/specification/#ceph.rook.io/v1.GatewaySpec","title":"GatewaySpec","text":"(Appears on:ObjectStoreSpec) GatewaySpec represents the specification of Ceph Object Store Gateway Field Descriptionport int32 (Optional) The port the rgw service will be listening on (http) securePort int32 (Optional) The port the rgw service will be listening on (https) instances int32 (Optional) The number of pods in the rgw replicaset. sslCertificateRef string (Optional) The name of the secret that stores the ssl certificate for secure rgw connections caBundleRef string (Optional) The name of the secret that stores custom ca-bundle with root and intermediate certificates. placement Placement (Optional) The affinity to place the rgw pods (default is to place on any available node) disableMultisiteSyncTraffic bool (Optional) DisableMultisiteSyncTraffic, when true, prevents this object store\u2019s gateways from transmitting multisite replication data. Note that this value does not affect whether gateways receive multisite replication traffic: see ObjectZone.spec.customEndpoints for that. 
If false or unset, this object store's gateways will be able to transmit multisite replication data.
annotations Annotations (Optional) The annotations-related configuration to add/set on each Pod related object.
labels Labels (Optional) The labels-related configuration to add/set on each Pod related object.
resources Kubernetes core/v1.ResourceRequirements (Optional) The resource requirements for the rgw pods
priorityClassName string (Optional) PriorityClassName sets priority classes on the rgw pods
externalRgwEndpoints []EndpointAddress (Optional) ExternalRgwEndpoints points to external RGW endpoint(s). Multiple endpoints can be given, but for stability of ObjectBucketClaims, we highly recommend that users give only a single external RGW endpoint that is a load balancer that sends requests to the multiple RGWs.
service RGWServiceSpec (Optional) The configuration related to add/set on each rgw service.
hostNetwork bool (Optional) Whether host networking is enabled for the rgw daemon. If not set, the network settings from the cluster CR will be applied.
dashboardEnabled bool (Optional) Whether the rgw dashboard is enabled for the rgw daemon. If not set, the rgw dashboard will be enabled.
additionalVolumeMounts AdditionalVolumeMounts AdditionalVolumeMounts allows additional volumes to be mounted to the RGW pod. The root directory for each additional volume mount is

HTTPEndpointSpec
(Appears on: TopicEndpointSpec)
HTTPEndpointSpec represents the spec of an HTTP endpoint of a Bucket Topic
Field Description
uri string The URI of the HTTP endpoint to push notifications to
disableVerifySSL bool (Optional) Indicate whether the server certificate is validated by the client or not
sendCloudEvents bool (Optional) Send the notifications with the CloudEvents header: https://github.com/cloudevents/spec/blob/main/cloudevents/adapters/aws-s3.md

HealthCheckSpec
(Appears on: DaemonHealthSpec, MirrorHealthCheckSpec)
HealthCheckSpec represents the health check of an object store bucket
Field Description
disabled bool (Optional)
interval Kubernetes meta/v1.Duration (Optional) Interval is the interval, in seconds or minutes, at which the health check runs, for example 60s for 60 seconds
timeout string (Optional)

HybridStorageSpec
(Appears on: ReplicatedSpec)
HybridStorageSpec represents the settings for a hybrid storage pool
Field Description
primaryDeviceClass string PrimaryDeviceClass represents the high performance tier (for example SSD or NVME) for the primary OSD
secondaryDeviceClass string SecondaryDeviceClass represents the low performance tier (for example HDDs) for the remaining OSDs

IPFamilyType (string alias)
(Appears on: NetworkSpec)
IPFamilyType represents the single stack IPv4 or IPv6 protocol.
Value Description
"IPv4" IPv4 internet protocol version
"IPv6" IPv6 internet protocol version

ImplicitTenantSetting (string alias)
(Appears on: KeystoneSpec)
Value Description
"" "false" "s3" "swift" "true"

KafkaEndpointSpec
(Appears on: TopicEndpointSpec)
KafkaEndpointSpec represents the spec of a Kafka endpoint of a Bucket Topic
Field Description
uri string The URI of the Kafka endpoint to push notifications to
useSSL bool (Optional) Indicate whether to use SSL when communicating with the broker
disableVerifySSL bool (Optional) Indicate whether the server certificate is validated by the client or not
ackLevel string (Optional) The ack level required for this topic (none/broker)

KerberosConfigFiles
(Appears on: KerberosSpec)
KerberosConfigFiles represents the source(s) from which Kerberos configuration should come.
Field Description
volumeSource ConfigFileVolumeSource VolumeSource accepts a pared down version of the standard Kubernetes VolumeSource for Kerberos configuration files, like what is normally used to configure Volumes for a Pod. For example, a ConfigMap, Secret, or HostPath. The volume may contain multiple files, all of which will be loaded.

KerberosKeytabFile
(Appears on: KerberosSpec)
KerberosKeytabFile represents the source(s) from which the Kerberos keytab file should come.
Field Description
volumeSource ConfigFileVolumeSource VolumeSource accepts a pared down version of the standard Kubernetes VolumeSource for the Kerberos keytab file, like what is normally used to configure Volumes for a Pod. For example, a Secret or HostPath. There are two requirements for the source's content: 1. The config file must be mountable via

KerberosSpec
(Appears on: NFSSecuritySpec)
KerberosSpec represents configuration for Kerberos.
Field Description
principalName string (Optional) PrincipalName corresponds directly to NFS-Ganesha's NFS_KRB5:PrincipalName config. In practice, this is the service prefix of the principal name. The default is "nfs". This value is combined with (a) the namespace and name of the CephNFS (with a hyphen between) and (b) the Realm configured in the user-provided krb5.conf to determine the full principal name: <principalName>/<namespace>-<name>@<realm>, e.g., nfs/rook-ceph-my-nfs@example.net. See https://github.com/nfs-ganesha/nfs-ganesha/wiki/RPCSEC_GSS for more detail. DomainName should be set to the Kerberos Realm.
configFiles KerberosConfigFiles (Optional) ConfigFiles defines where the Kerberos configuration should be sourced from. Config files will be placed into the If this is left empty, Rook will not add any files. This allows you to manage the files yourself however you wish. For example, you may build them into your custom Ceph container image or use the Vault agent injector to securely add the files via annotations on the CephNFS spec (passed to the NFS server pods). Rook configures Kerberos to log to stderr. We suggest removing logging sections from config files to avoid consuming unnecessary disk space from logging to files.
keytabFile KerberosKeytabFile (Optional) KeytabFile defines where the Kerberos keytab should be sourced from. The keytab file will be placed into

KeyManagementServiceSpec
(Appears on: ObjectStoreSecuritySpec, SecuritySpec)
KeyManagementServiceSpec represents various details of the KMS server
Field Description
connectionDetails map[string]string (Optional) ConnectionDetails contains the KMS connection details (address, port etc)
tokenSecretName string (Optional) TokenSecretName is the kubernetes secret containing the KMS token

KeyRotationSpec
(Appears on: SecuritySpec)
KeyRotationSpec represents the settings for Key Rotation.
Field Description
enabled bool (Optional) Enabled represents whether the key rotation is enabled.
schedule string (Optional) Schedule represents the cron schedule for key rotation.

KeyType (string alias)
KeyType type safety
Value Description
"exporter" "cleanup" "clusterMetadata" "cmdreporter" "crashcollector" "dashboard" "mds" "mgr" "mon" "arbiter" "monitoring" "osd" "prepareosd" "rgw" "keyrotation"

KeystoneSpec
(Appears on: AuthSpec)
KeystoneSpec represents the Keystone authentication configuration of a Ceph Object Store Gateway
Field Description
url string The URL for the Keystone server.
serviceUserSecretName string The name of the secret containing the credentials for the service user account used by RGW. It has to be in the same namespace as the object store resource.
acceptedRoles []string The roles required to serve requests.
implicitTenants ImplicitTenantSetting (Optional) Create new users in their own tenants of the same name. Possible values are true, false, swift and s3. The latter two have the effect of splitting the identity space such that only the indicated protocol will use implicit tenants.
tokenCacheSize int (Optional) The maximum number of entries in each Keystone token cache.
revocationInterval int (Optional) The number of seconds between token revocation checks.

Labels (map[string]string alias)
(Appears on: FilesystemMirroringSpec, GaneshaServerSpec, GatewaySpec, MetadataServerSpec, RBDMirroringSpec)
Labels are the labels for a given daemon

LabelsSpec (map[github.com/rook/rook/pkg/apis/ceph.rook.io/v1.KeyType]github.com/rook/rook/pkg/apis/ceph.rook.io/v1.Labels alias)
(Appears on: ClusterSpec)
LabelsSpec is the main spec label for all daemons

LogCollectorSpec
(Appears on: ClusterSpec)
LogCollectorSpec is the logging spec
Field Description
enabled bool (Optional) Enabled represents whether the log collector is enabled
periodicity string (Optional) Periodicity is the periodicity of the log rotation.
maxLogSize k8s.io/apimachinery/pkg/api/resource.Quantity (Optional) MaxLogSize is the maximum size of the log per ceph daemon. Must be at least 1M.

MetadataServerSpec
(Appears on: FilesystemSpec)
MetadataServerSpec represents the specification of a Ceph Metadata Server
Field Description
activeCount int32 The number of metadata servers that are active. The remaining servers in the cluster will be in standby mode.
activeStandby bool (Optional) Whether each active MDS instance will have an active standby with a warm metadata cache for faster failover. If false, standbys will still be available, but will not have a warm metadata cache.
placement Placement (Optional) The affinity to place the mds pods (default is to place on any available node)
annotations Annotations (Optional) The annotations-related configuration to add/set on each Pod related object.
labels Labels (Optional) The labels-related configuration to add/set on each Pod related object.
resources Kubernetes core/v1.ResourceRequirements (Optional) The resource requirements for the mds pods
priorityClassName string (Optional) PriorityClassName sets priority classes on components
livenessProbe ProbeSpec (Optional)
startupProbe ProbeSpec (Optional)

MgrSpec
(Appears on: ClusterSpec)
MgrSpec represents options to configure a ceph mgr
Field Description
count int (Optional) Count is the number of manager daemons to run
allowMultiplePerNode bool (Optional) AllowMultiplePerNode allows running multiple managers on the same node (not recommended)
modules []Module (Optional) Modules is the list of ceph manager modules to enable/disable

MirrorHealthCheckSpec
(Appears on: FilesystemSpec, PoolSpec)
MirrorHealthCheckSpec represents the health specification of a Ceph Storage Pool mirror
Field Description
mirror HealthCheckSpec (Optional)

MirroringInfoSpec
(Appears on: CephBlockPoolStatus)
MirroringInfoSpec is the status of the pool mirroring
Field Description
PoolMirroringInfo PoolMirroringInfo (Members of PoolMirroringInfo are embedded into this type.)
lastChecked string (Optional)
lastChanged string (Optional)
details string (Optional)

MirroringPeerSpec
(Appears on: FSMirroringSpec, MirroringSpec, RBDMirroringSpec)
MirroringPeerSpec represents the specification of a mirror peer
Field Description
secretNames []string (Optional) SecretNames represents the Kubernetes Secret names to add rbd-mirror or cephfs-mirror peers

MirroringSpec
(Appears on: PoolSpec)
MirroringSpec represents the setting for a mirrored pool
Field Description
enabled bool (Optional) Enabled whether this pool is mirrored or not
mode string (Optional) Mode is the mirroring mode: either pool or image
snapshotSchedules []SnapshotScheduleSpec (Optional) SnapshotSchedules is the scheduling of snapshots for mirrored images/pools
peers MirroringPeerSpec (Optional) Peers represents the peers spec

MirroringStatusSpec
(Appears on: CephBlockPoolStatus)
MirroringStatusSpec is the status of the pool mirroring
Field Description
PoolMirroringStatus PoolMirroringStatus (Members of PoolMirroringStatus are embedded into this type.) PoolMirroringStatus is the mirroring status of a pool
lastChecked string (Optional) LastChecked is the last time the status was checked
lastChanged string (Optional) LastChanged is the last time the status changed
details string (Optional) Details contains potential status errors

Module
(Appears on: MgrSpec)
Module represents mgr modules that the user wants to enable
or disable Field Descriptionname string (Optional) Name is the name of the ceph manager module enabled bool (Optional) Enabled determines whether a module should be enabled or not settings ModuleSettings Settings to further configure the module "},{"location":"CRDs/specification/#ceph.rook.io/v1.ModuleSettings","title":"ModuleSettings","text":"(Appears on:Module) Field DescriptionbalancerMode string BalancerMode sets the (Appears on:ClusterSpec) MonSpec represents the specification of the monitor Field Descriptioncount int (Optional) Count is the number of Ceph monitors allowMultiplePerNode bool (Optional) AllowMultiplePerNode determines if we can run multiple monitors on the same node (not recommended) failureDomainLabel string (Optional) zones []MonZoneSpec (Optional) Zones are specified when we want to provide zonal awareness to mons stretchCluster StretchClusterSpec (Optional) StretchCluster is the stretch cluster specification volumeClaimTemplate VolumeClaimTemplate (Optional) VolumeClaimTemplate is the PVC definition "},{"location":"CRDs/specification/#ceph.rook.io/v1.MonZoneSpec","title":"MonZoneSpec","text":"(Appears on:MonSpec, StretchClusterSpec) MonZoneSpec represents the specification of a zone in a Ceph Cluster Field Descriptionname string (Optional) Name is the name of the zone arbiter bool (Optional) Arbiter determines if the zone contains the arbiter used for stretch cluster mode volumeClaimTemplate VolumeClaimTemplate (Optional) VolumeClaimTemplate is the PVC template "},{"location":"CRDs/specification/#ceph.rook.io/v1.MonitoringSpec","title":"MonitoringSpec","text":"(Appears on:ClusterSpec) MonitoringSpec represents the settings for Prometheus based Ceph monitoring Field Descriptionenabled bool (Optional) Enabled determines whether to create the prometheus rules for the ceph cluster. If true, the prometheus types must exist or the creation will fail. Default is false. metricsDisabled bool (Optional) Whether to disable the metrics reported by Ceph. If false, the prometheus mgr module and Ceph exporter are enabled. If true, the prometheus mgr module and Ceph exporter are both disabled. Default is false. externalMgrEndpoints []Kubernetes core/v1.EndpointAddress (Optional) ExternalMgrEndpoints points to an existing Ceph prometheus exporter endpoint externalMgrPrometheusPort uint16 (Optional) ExternalMgrPrometheusPort Prometheus exporter port port int (Optional) Port is the prometheus server port interval Kubernetes meta/v1.Duration (Optional) Interval determines prometheus scrape interval exporter CephExporterSpec (Optional) Ceph exporter configuration "},{"location":"CRDs/specification/#ceph.rook.io/v1.MultiClusterServiceSpec","title":"MultiClusterServiceSpec","text":"(Appears on:NetworkSpec) Field Descriptionenabled bool (Optional) Enable multiClusterService to export the mon and OSD services to peer cluster. Ensure that peer clusters are connected using an MCS API compatible application, like Globalnet Submariner. clusterID string ClusterID uniquely identifies a cluster. It is used as a prefix to nslookup exported services. 
For example: ...svc.clusterset.local"},{"location":"CRDs/specification/#ceph.rook.io/v1.NFSGaneshaSpec","title":"NFSGaneshaSpec","text":" (Appears on:CephNFS) NFSGaneshaSpec represents the spec of an nfs ganesha server Field Descriptionrados GaneshaRADOSSpec (Optional) RADOS is the Ganesha RADOS specification server GaneshaServerSpec Server is the Ganesha Server specification security NFSSecuritySpec (Optional) Security allows specifying security configurations for the NFS cluster "},{"location":"CRDs/specification/#ceph.rook.io/v1.NFSSecuritySpec","title":"NFSSecuritySpec","text":"(Appears on:NFSGaneshaSpec) NFSSecuritySpec represents security configurations for an NFS server pod Field Descriptionsssd SSSDSpec (Optional) SSSD enables integration with System Security Services Daemon (SSSD). SSSD can be used to provide user ID mapping from a number of sources. See https://sssd.io for more information about the SSSD project. kerberos KerberosSpec (Optional) Kerberos configures NFS-Ganesha to secure NFS client connections with Kerberos. "},{"location":"CRDs/specification/#ceph.rook.io/v1.NamedBlockPoolSpec","title":"NamedBlockPoolSpec","text":"(Appears on:CephBlockPool) NamedBlockPoolSpec allows a block pool to be created with a non-default name. This is more specific than the NamedPoolSpec so we get schema validation on the allowed pool names that can be specified. Field Descriptionname string (Optional) The desired name of the pool if different from the CephBlockPool CR name. PoolSpec PoolSpec (Members of The core pool configuration "},{"location":"CRDs/specification/#ceph.rook.io/v1.NamedPoolSpec","title":"NamedPoolSpec","text":"(Appears on:FilesystemSpec) NamedPoolSpec represents the named ceph pool spec Field Descriptionname string Name of the pool PoolSpec PoolSpec (Members of PoolSpec represents the spec of ceph pool "},{"location":"CRDs/specification/#ceph.rook.io/v1.NetworkProviderType","title":"NetworkProviderType (string alias)","text":"(Appears on:NetworkSpec) NetworkProviderType defines valid network providers for Rook. Value Description\"\" \"host\" \"multus\" "},{"location":"CRDs/specification/#ceph.rook.io/v1.NetworkSpec","title":"NetworkSpec","text":"(Appears on:ClusterSpec) NetworkSpec for Ceph includes backward compatibility code Field Descriptionprovider NetworkProviderType (Optional) Provider is what provides network connectivity to the cluster e.g. \u201chost\u201d or \u201cmultus\u201d. If the Provider is updated from being empty to \u201chost\u201d on a running cluster, then the operator will automatically fail over all the mons to apply the \u201chost\u201d network settings. selectors map[github.com/rook/rook/pkg/apis/ceph.rook.io/v1.CephNetworkType]string (Optional) Selectors define NetworkAttachmentDefinitions to be used for Ceph public and/or cluster networks when the \u201cmultus\u201d network provider is used. This config section is not used for other network providers. Valid keys are \u201cpublic\u201d and \u201ccluster\u201d. Refer to Ceph networking documentation for more: https://docs.ceph.com/en/latest/rados/configuration/network-config-ref/ Refer to Multus network annotation documentation for help selecting values: https://github.com/k8snetworkplumbingwg/multus-cni/blob/master/docs/how-to-use.md#run-pod-with-network-annotation Rook will make a best-effort attempt to automatically detect CIDR address ranges for given network attachment definitions. Rook\u2019s methods are robust but may be imprecise for sufficiently complicated networks. 
Rook\u2019s auto-detection process obtains a new IP address lease for each CephCluster reconcile. If Rook fails to detect, incorrectly detects, only partially detects, or if underlying networks do not support reusing old IP addresses, it is best to use the \u2018addressRanges\u2019 config section to specify CIDR ranges for the Ceph cluster. As a contrived example, one can use a theoretical Kubernetes-wide network for Ceph client traffic and a theoretical Rook-only network for Ceph replication traffic as shown: selectors: public: \u201cdefault/cluster-fast-net\u201d cluster: \u201crook-ceph/ceph-backend-net\u201d addressRanges AddressRangesSpec (Optional) AddressRanges specify a list of CIDRs that Rook will apply to Ceph\u2019s \u2018public_network\u2019 and/or \u2018cluster_network\u2019 configurations. This config section may be used for the \u201chost\u201d or \u201cmultus\u201d network providers. connections ConnectionsSpec (Optional) Settings for network connections such as compression and encryption across the wire. hostNetwork bool (Optional) HostNetwork to enable host network. If host networking is enabled or disabled on a running cluster, then the operator will automatically fail over all the mons to apply the new network settings. ipFamily IPFamilyType (Optional) IPFamily is the single stack IPv6 or IPv4 protocol dualStack bool (Optional) DualStack determines whether Ceph daemons should listen on both IPv4 and IPv6 multiClusterService MultiClusterServiceSpec (Optional) Enable multiClusterService to export the Services between peer clusters "},{"location":"CRDs/specification/#ceph.rook.io/v1.Node","title":"Node","text":"(Appears on:StorageScopeSpec) Node is a storage nodes Field Descriptionname string (Optional) resources Kubernetes core/v1.ResourceRequirements (Optional) config map[string]string (Optional) Selection Selection (Members of []github.com/rook/rook/pkg/apis/ceph.rook.io/v1.Node alias)","text":"NodesByName implements an interface to sort nodes by name "},{"location":"CRDs/specification/#ceph.rook.io/v1.NotificationFilterRule","title":"NotificationFilterRule","text":"(Appears on:NotificationFilterSpec) NotificationFilterRule represent a single rule in the Notification Filter spec Field Descriptionname string Name of the metadata or tag value string Value to filter on "},{"location":"CRDs/specification/#ceph.rook.io/v1.NotificationFilterSpec","title":"NotificationFilterSpec","text":"(Appears on:BucketNotificationSpec) NotificationFilterSpec represent the spec of a Bucket Notification filter Field DescriptionkeyFilters []NotificationKeyFilterRule (Optional) Filters based on the object\u2019s key metadataFilters []NotificationFilterRule (Optional) Filters based on the object\u2019s metadata tagFilters []NotificationFilterRule (Optional) Filters based on the object\u2019s tags "},{"location":"CRDs/specification/#ceph.rook.io/v1.NotificationKeyFilterRule","title":"NotificationKeyFilterRule","text":"(Appears on:NotificationFilterSpec) NotificationKeyFilterRule represent a single key rule in the Notification Filter spec Field Descriptionname string Name of the filter - prefix/suffix/regex value string Value to filter on "},{"location":"CRDs/specification/#ceph.rook.io/v1.OSDStatus","title":"OSDStatus","text":"(Appears on:CephStorage) OSDStatus represents OSD status of the ceph Cluster Field DescriptionstoreType map[string]int StoreType is a mapping between the OSD backend stores and number of OSDs using these stores 
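To make the NetworkSpec fields above concrete, here is a sketch of a multus-based network section using the selector values quoted in the example above; the CIDR ranges under addressRanges are placeholders and assume the AddressRangesSpec layout with separate public and cluster lists.

```yaml
spec:
  network:
    provider: multus
    selectors:
      public: default/cluster-fast-net      # NetworkAttachmentDefinition for Ceph client traffic (from the example above)
      cluster: rook-ceph/ceph-backend-net   # NetworkAttachmentDefinition for Ceph replication traffic
    addressRanges:
      public:
        - "192.168.100.0/24"                # placeholder CIDRs, useful when auto-detection is unreliable
      cluster:
        - "192.168.200.0/24"
```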
"},{"location":"CRDs/specification/#ceph.rook.io/v1.OSDStore","title":"OSDStore","text":"(Appears on:StorageScopeSpec) OSDStore is the backend storage type used for creating the OSDs Field Descriptiontype string (Optional) Type of backend storage to be used while creating OSDs. If empty, then bluestore will be used updateStore string (Optional) UpdateStore updates the backend store for existing OSDs. It destroys each OSD one at a time, cleans up the backing disk and prepares same OSD on that disk "},{"location":"CRDs/specification/#ceph.rook.io/v1.ObjectEndpointSpec","title":"ObjectEndpointSpec","text":"(Appears on:ObjectStoreHostingSpec) ObjectEndpointSpec represents an object store endpoint Field DescriptiondnsName string DnsName is the DNS name (in RFC-1123 format) of the endpoint. If the DNS name corresponds to an endpoint with DNS wildcard support, do not include the wildcard itself in the list of hostnames. E.g., use \u201cmystore.example.com\u201d instead of \u201c*.mystore.example.com\u201d. port int32 Port is the port on which S3 connections can be made for this endpoint. useTls bool UseTls defines whether the endpoint uses TLS (HTTPS) or not (HTTP). "},{"location":"CRDs/specification/#ceph.rook.io/v1.ObjectEndpoints","title":"ObjectEndpoints","text":"(Appears on:ObjectStoreStatus) Field Descriptioninsecure []string (Optional) secure []string (Optional)"},{"location":"CRDs/specification/#ceph.rook.io/v1.ObjectHealthCheckSpec","title":"ObjectHealthCheckSpec","text":"(Appears on:ObjectStoreSpec) ObjectHealthCheckSpec represents the health check of an object store Field DescriptionreadinessProbe ProbeSpec (Optional) startupProbe ProbeSpec (Optional)"},{"location":"CRDs/specification/#ceph.rook.io/v1.ObjectRealmSpec","title":"ObjectRealmSpec","text":"(Appears on:CephObjectRealm) ObjectRealmSpec represent the spec of an ObjectRealm Field Descriptionpull PullSpec"},{"location":"CRDs/specification/#ceph.rook.io/v1.ObjectSharedPoolsSpec","title":"ObjectSharedPoolsSpec","text":"(Appears on:ObjectStoreSpec, ObjectZoneSpec) ObjectSharedPoolsSpec represents object store pool info when configuring RADOS namespaces in existing pools. Field DescriptionmetadataPoolName string (Optional) The metadata pool used for creating RADOS namespaces in the object store dataPoolName string (Optional) The data pool used for creating RADOS namespaces in the object store preserveRadosNamespaceDataOnDelete bool (Optional) Whether the RADOS namespaces should be preserved on deletion of the object store poolPlacements []PoolPlacementSpec (Optional) PoolPlacements control which Pools are associated with a particular RGW bucket. Once PoolPlacements are defined, RGW client will be able to associate pool with ObjectStore bucket by providing \u201c\u201d during s3 bucket creation or \u201cX-Storage-Policy\u201d header during swift container creation. See: https://docs.ceph.com/en/latest/radosgw/placement/#placement-targets PoolPlacement with name: \u201cdefault\u201d will be used as a default pool if no option is provided during bucket creation. If default placement is not provided, spec.sharedPools.dataPoolName and spec.sharedPools.MetadataPoolName will be used as default pools. 
If spec.sharedPools are also empty, then RGW pools (spec.dataPool and spec.metadataPool) will be used as defaults."},{"location":"CRDs/specification/#ceph.rook.io/v1.ObjectStoreHostingSpec","title":"ObjectStoreHostingSpec","text":" (Appears on:ObjectStoreSpec) ObjectStoreHostingSpec represents the hosting settings for the object store Field DescriptionadvertiseEndpoint ObjectEndpointSpec (Optional) AdvertiseEndpoint is the default endpoint Rook will return for resources dependent on this object store. This endpoint will be returned to CephObjectStoreUsers, Object Bucket Claims, and COSI Buckets/Accesses. By default, Rook returns the endpoint for the object store\u2019s Kubernetes service using HTTPS with dnsNames []string (Optional) A list of DNS host names on which object store gateways will accept client S3 connections. When specified, object store gateways will reject client S3 connections to hostnames that are not present in this list, so include all endpoints. The object store\u2019s advertiseEndpoint and Kubernetes service endpoint, plus CephObjectZone (Appears on:ObjectStoreSpec) ObjectStoreSecuritySpec is spec to define security features like encryption Field DescriptionSecuritySpec SecuritySpec (Optional) s3 KeyManagementServiceSpec (Optional) The settings for supporting AWS-SSE:S3 with RGW "},{"location":"CRDs/specification/#ceph.rook.io/v1.ObjectStoreSpec","title":"ObjectStoreSpec","text":"(Appears on:CephObjectStore) ObjectStoreSpec represent the spec of a pool Field DescriptionmetadataPool PoolSpec (Optional) The metadata pool settings dataPool PoolSpec (Optional) The data pool settings sharedPools ObjectSharedPoolsSpec (Optional) The pool information when configuring RADOS namespaces in existing pools. preservePoolsOnDelete bool (Optional) Preserve pools on object store deletion gateway GatewaySpec (Optional) The rgw pod info protocols ProtocolSpec (Optional) The protocol specification auth AuthSpec (Optional) The authentication configuration zone ZoneSpec (Optional) The multisite info healthCheck ObjectHealthCheckSpec (Optional) The RGW health probes security ObjectStoreSecuritySpec (Optional) Security represents security settings allowUsersInNamespaces []string (Optional) The list of allowed namespaces in addition to the object store namespace where ceph object store users may be created. Specify \u201c*\u201d to allow all namespaces, otherwise list individual namespaces that are to be allowed. This is useful for applications that need object store credentials to be created in their own namespace, where neither OBCs nor COSI is being used to create buckets. The default is empty. hosting ObjectStoreHostingSpec (Optional) Hosting settings for the object store. A common use case for hosting configuration is to inform Rook of endpoints that support DNS wildcards, which in turn allows virtual host-style bucket addressing. "},{"location":"CRDs/specification/#ceph.rook.io/v1.ObjectStoreStatus","title":"ObjectStoreStatus","text":"(Appears on:CephObjectStore) ObjectStoreStatus represents the status of a Ceph Object Store resource Field Descriptionphase ConditionType (Optional) message string (Optional) endpoints ObjectEndpoints (Optional) info map[string]string (Optional) conditions []Condition observedGeneration int64 (Optional) ObservedGeneration is the latest generation observed by the controller. 
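A minimal CephObjectStore sketch tying together the ObjectSharedPoolsSpec, GatewaySpec, and ObjectStoreHostingSpec fields described above; the store name, pool names, and DNS name are placeholders.

```yaml
apiVersion: ceph.rook.io/v1
kind: CephObjectStore
metadata:
  name: my-store                         # placeholder
  namespace: rook-ceph
spec:
  sharedPools:
    metadataPoolName: rgw-meta-pool      # existing pools in which RADOS namespaces are created (placeholders)
    dataPoolName: rgw-data-pool
    preserveRadosNamespaceDataOnDelete: false
  gateway:
    port: 80
    instances: 2
  hosting:
    dnsNames:
      - my-store.example.com             # placeholder; include every hostname the gateways should accept
```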
"},{"location":"CRDs/specification/#ceph.rook.io/v1.ObjectStoreUserSpec","title":"ObjectStoreUserSpec","text":"(Appears on:CephObjectStoreUser) ObjectStoreUserSpec represent the spec of an Objectstoreuser Field Descriptionstore string (Optional) The store the user will be created in displayName string (Optional) The display name for the ceph users capabilities ObjectUserCapSpec (Optional) quotas ObjectUserQuotaSpec (Optional) clusterNamespace string (Optional) The namespace where the parent CephCluster and CephObjectStore are found "},{"location":"CRDs/specification/#ceph.rook.io/v1.ObjectStoreUserStatus","title":"ObjectStoreUserStatus","text":"(Appears on:CephObjectStoreUser) ObjectStoreUserStatus represents the status Ceph Object Store Gateway User Field Descriptionphase string (Optional) info map[string]string (Optional) observedGeneration int64 (Optional) ObservedGeneration is the latest generation observed by the controller. "},{"location":"CRDs/specification/#ceph.rook.io/v1.ObjectUserCapSpec","title":"ObjectUserCapSpec","text":"(Appears on:ObjectStoreUserSpec) Additional admin-level capabilities for the Ceph object store user Field Descriptionuser string (Optional) Admin capabilities to read/write Ceph object store users. Documented in https://docs.ceph.com/en/latest/radosgw/admin/?#add-remove-admin-capabilities users string (Optional) Admin capabilities to read/write Ceph object store users. Documented in https://docs.ceph.com/en/latest/radosgw/admin/?#add-remove-admin-capabilities bucket string (Optional) Admin capabilities to read/write Ceph object store buckets. Documented in https://docs.ceph.com/en/latest/radosgw/admin/?#add-remove-admin-capabilities buckets string (Optional) Admin capabilities to read/write Ceph object store buckets. Documented in https://docs.ceph.com/en/latest/radosgw/admin/?#add-remove-admin-capabilities metadata string (Optional) Admin capabilities to read/write Ceph object store metadata. Documented in https://docs.ceph.com/en/latest/radosgw/admin/?#add-remove-admin-capabilities usage string (Optional) Admin capabilities to read/write Ceph object store usage. Documented in https://docs.ceph.com/en/latest/radosgw/admin/?#add-remove-admin-capabilities zone string (Optional) Admin capabilities to read/write Ceph object store zones. Documented in https://docs.ceph.com/en/latest/radosgw/admin/?#add-remove-admin-capabilities roles string (Optional) Admin capabilities to read/write roles for user. Documented in https://docs.ceph.com/en/latest/radosgw/admin/?#add-remove-admin-capabilities info string (Optional) Admin capabilities to read/write information about the user. Documented in https://docs.ceph.com/en/latest/radosgw/admin/?#add-remove-admin-capabilities amz-cache string (Optional) Add capabilities for user to send request to RGW Cache API header. Documented in https://docs.ceph.com/en/latest/radosgw/rgw-cache/#cache-api bilog string (Optional) Add capabilities for user to change bucket index logging. Documented in https://docs.ceph.com/en/latest/radosgw/admin/?#add-remove-admin-capabilities mdlog string (Optional) Add capabilities for user to change metadata logging. Documented in https://docs.ceph.com/en/latest/radosgw/admin/?#add-remove-admin-capabilities datalog string (Optional) Add capabilities for user to change data logging. Documented in https://docs.ceph.com/en/latest/radosgw/admin/?#add-remove-admin-capabilities user-policy string (Optional) Add capabilities for user to change user policies. 
Documented in https://docs.ceph.com/en/latest/radosgw/admin/?#add-remove-admin-capabilities oidc-provider string (Optional) Add capabilities for user to change oidc provider. Documented in https://docs.ceph.com/en/latest/radosgw/admin/?#add-remove-admin-capabilities ratelimit string (Optional) Add capabilities for user to set rate limiter for user and bucket. Documented in https://docs.ceph.com/en/latest/radosgw/admin/?#add-remove-admin-capabilities "},{"location":"CRDs/specification/#ceph.rook.io/v1.ObjectUserQuotaSpec","title":"ObjectUserQuotaSpec","text":"(Appears on:ObjectStoreUserSpec) ObjectUserQuotaSpec can be used to set quotas for the object store user to limit their usage. See the Ceph docs for more Field DescriptionmaxBuckets int (Optional) Maximum bucket limit for the ceph user maxSize k8s.io/apimachinery/pkg/api/resource.Quantity (Optional) Maximum size limit of all objects across all the user\u2019s buckets See https://pkg.go.dev/k8s.io/apimachinery/pkg/api/resource#Quantity for more info. maxObjects int64 (Optional) Maximum number of objects across all the user\u2019s buckets "},{"location":"CRDs/specification/#ceph.rook.io/v1.ObjectZoneGroupSpec","title":"ObjectZoneGroupSpec","text":"(Appears on:CephObjectZoneGroup) ObjectZoneGroupSpec represent the spec of an ObjectZoneGroup Field Descriptionrealm string The display name for the ceph users "},{"location":"CRDs/specification/#ceph.rook.io/v1.ObjectZoneSpec","title":"ObjectZoneSpec","text":"(Appears on:CephObjectZone) ObjectZoneSpec represent the spec of an ObjectZone Field DescriptionzoneGroup string The display name for the ceph users metadataPool PoolSpec (Optional) The metadata pool settings dataPool PoolSpec (Optional) The data pool settings sharedPools ObjectSharedPoolsSpec (Optional) The pool information when configuring RADOS namespaces in existing pools. customEndpoints []string (Optional) If this zone cannot be accessed from other peer Ceph clusters via the ClusterIP Service endpoint created by Rook, you must set this to the externally reachable endpoint(s). You may include the port in the definition. For example: \u201chttps://my-object-store.my-domain.net:443\u201d. In many cases, you should set this to the endpoint of the ingress resource that makes the CephObjectStore associated with this CephObjectStoreZone reachable to peer clusters. The list can have one or more endpoints pointing to different RGW servers in the zone. If a CephObjectStore endpoint is omitted from this list, that object store\u2019s gateways will not receive multisite replication data (see CephObjectStore.spec.gateway.disableMultisiteSyncTraffic). 
preservePoolsOnDelete bool (Optional) Preserve pools on object zone deletion

PeerRemoteSpec
(Appears on: FilesystemMirrorInfoPeerSpec)
Field Description
client_name string (Optional) ClientName is the cephx name
cluster_name string (Optional) ClusterName is the name of the cluster
fs_name string (Optional) FsName is the filesystem name

PeerStatSpec
(Appears on: FilesystemMirrorInfoPeerSpec)
PeerStatSpec contains the mirror statistics for a given peer
Field Description
failure_count int (Optional) FailureCount is the number of mirroring failures
recovery_count int (Optional) RecoveryCount is the number of recoveries attempted after failures

PeersSpec
(Appears on: PoolMirroringInfo)
PeersSpec contains peer details
Field Description
uuid string (Optional) UUID is the peer UUID
direction string (Optional) Direction is the peer mirroring direction
site_name string (Optional) SiteName is the current site name
mirror_uuid string (Optional) MirrorUUID is the mirror UUID
client_name string (Optional) ClientName is the CephX user used to connect to the peer

Placement
(Appears on: CephCOSIDriverSpec, FilesystemMirroringSpec, GaneshaServerSpec, GatewaySpec, MetadataServerSpec, RBDMirroringSpec, StorageClassDeviceSet)
Placement is the placement for an object
Field Description
nodeAffinity Kubernetes core/v1.NodeAffinity (Optional) NodeAffinity is a group of node affinity scheduling rules
podAffinity Kubernetes core/v1.PodAffinity (Optional) PodAffinity is a group of inter pod affinity scheduling rules
podAntiAffinity Kubernetes core/v1.PodAntiAffinity (Optional) PodAntiAffinity is a group of inter pod anti affinity scheduling rules
tolerations []Kubernetes core/v1.Toleration (Optional) The pod this Toleration is attached to tolerates any taint that matches the triple <key,value,effect> using the matching operator <operator>
topologySpreadConstraints []Kubernetes core/v1.TopologySpreadConstraint (Optional) TopologySpreadConstraints specifies how to spread matching pods among the given topology

PlacementSpec (map[github.com/rook/rook/pkg/apis/ceph.rook.io/v1.KeyType]github.com/rook/rook/pkg/apis/ceph.rook.io/v1.Placement alias)
(Appears on: ClusterSpec)
PlacementSpec is the placement for core ceph daemons part of the CephCluster CRD

PlacementStorageClassSpec
(Appears on: PoolPlacementSpec)
Field Description
name string Name is the StorageClass name. Ceph allows arbitrary names for StorageClasses, however most clients/libs insist on AWS names, so it is recommended to use one of the valid x-amz-storage-class values for better compatibility: REDUCED_REDUNDANCY | STANDARD_IA | ONEZONE_IA | INTELLIGENT_TIERING | GLACIER | DEEP_ARCHIVE | OUTPOSTS | GLACIER_IR | SNOW | EXPRESS_ONEZONE. See AWS docs: https://aws.amazon.com/de/s3/storage-classes/
dataPoolName string DataPoolName is the data pool used to store ObjectStore objects data.
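Since the Placement structure above is embedded wherever a placement field appears (for example in GatewaySpec or MetadataServerSpec), a sketch of such a block follows; the node label, taint key, and pod labels are placeholders.

```yaml
placement:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            - key: storage-node            # placeholder node label
              operator: In
              values: ["true"]
  tolerations:
    - key: storage-node                    # placeholder taint key
      operator: Exists
  topologySpreadConstraints:
    - maxSkew: 1
      topologyKey: topology.kubernetes.io/zone
      whenUnsatisfiable: ScheduleAnyway
      labelSelector:
        matchLabels:
          app: rook-ceph-rgw               # placeholder pod label
```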
"},{"location":"CRDs/specification/#ceph.rook.io/v1.PoolMirroringInfo","title":"PoolMirroringInfo","text":"(Appears on:MirroringInfoSpec) PoolMirroringInfo is the mirroring info of a given pool Field Descriptionmode string (Optional) Mode is the mirroring mode site_name string (Optional) SiteName is the current site name peers []PeersSpec (Optional) Peers are the list of peer sites connected to that cluster "},{"location":"CRDs/specification/#ceph.rook.io/v1.PoolMirroringStatus","title":"PoolMirroringStatus","text":"(Appears on:MirroringStatusSpec) PoolMirroringStatus is the pool mirror status Field Descriptionsummary PoolMirroringStatusSummarySpec (Optional) Summary is the mirroring status summary "},{"location":"CRDs/specification/#ceph.rook.io/v1.PoolMirroringStatusSummarySpec","title":"PoolMirroringStatusSummarySpec","text":"(Appears on:PoolMirroringStatus) PoolMirroringStatusSummarySpec is the summary output of the command Field Descriptionhealth string (Optional) Health is the mirroring health daemon_health string (Optional) DaemonHealth is the health of the mirroring daemon image_health string (Optional) ImageHealth is the health of the mirrored image states StatesSpec (Optional) States is the various state for all mirrored images "},{"location":"CRDs/specification/#ceph.rook.io/v1.PoolPlacementSpec","title":"PoolPlacementSpec","text":"(Appears on:ObjectSharedPoolsSpec) Field Descriptionname string Pool placement name. Name can be arbitrary. Placement with name \u201cdefault\u201d will be used as default. default bool Sets given placement as default. Only one placement in the list can be marked as default. metadataPoolName string The metadata pool used to store ObjectStore bucket index. dataPoolName string The data pool used to store ObjectStore objects data. dataNonECPoolName string (Optional) The data pool used to store ObjectStore data that cannot use erasure coding (ex: multi-part uploads). If dataPoolName is not erasure coded, then there is no need for dataNonECPoolName. storageClasses []PlacementStorageClassSpec (Optional) StorageClasses can be selected by user to override dataPoolName during object creation. Each placement has default STANDARD StorageClass pointing to dataPoolName. This list allows defining additional StorageClasses on top of default STANDARD storage class. 
"},{"location":"CRDs/specification/#ceph.rook.io/v1.PoolSpec","title":"PoolSpec","text":"(Appears on:FilesystemSpec, NamedBlockPoolSpec, NamedPoolSpec, ObjectStoreSpec, ObjectZoneSpec) PoolSpec represents the spec of ceph pool Field DescriptionfailureDomain string (Optional) The failure domain: osd/host/(region or zone if available) - technically also any type in the crush map crushRoot string (Optional) The root of the crush hierarchy utilized by the pool deviceClass string (Optional) The device class the OSD should set to for use in the pool enableCrushUpdates bool (Optional) Allow rook operator to change the pool CRUSH tunables once the pool is created compressionMode string (Optional) DEPRECATED: use Parameters instead, e.g., Parameters[\u201ccompression_mode\u201d] = \u201cforce\u201d The inline compression mode in Bluestore OSD to set to (options are: none, passive, aggressive, force) Do NOT set a default value for kubebuilder as this will override the Parameters replicated ReplicatedSpec (Optional) The replication settings erasureCoded ErasureCodedSpec (Optional) The erasure code settings parameters map[string]string (Optional) Parameters is a list of properties to enable on a given pool enableRBDStats bool EnableRBDStats is used to enable gathering of statistics for all RBD images in the pool mirroring MirroringSpec The mirroring settings statusCheck MirrorHealthCheckSpec The mirroring statusCheck quotas QuotaSpec (Optional) The quota settings application string (Optional) The application name to set on the pool. Only expected to be set for rgw pools. "},{"location":"CRDs/specification/#ceph.rook.io/v1.PriorityClassNamesSpec","title":"PriorityClassNamesSpec (map[github.com/rook/rook/pkg/apis/ceph.rook.io/v1.KeyType]string alias)","text":"(Appears on:ClusterSpec) PriorityClassNamesSpec is a map of priority class names to be assigned to components "},{"location":"CRDs/specification/#ceph.rook.io/v1.ProbeSpec","title":"ProbeSpec","text":"(Appears on:GaneshaServerSpec, MetadataServerSpec, ObjectHealthCheckSpec) ProbeSpec is a wrapper around Probe so it can be enabled or disabled for a Ceph daemon Field Descriptiondisabled bool (Optional) Disabled determines whether probe is disable or not probe Kubernetes core/v1.Probe (Optional) Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. 
"},{"location":"CRDs/specification/#ceph.rook.io/v1.ProtocolSpec","title":"ProtocolSpec","text":"(Appears on:ObjectStoreSpec) ProtocolSpec represents a Ceph Object Store protocol specification Field Descriptions3 S3Spec (Optional) The spec for S3 swift SwiftSpec (Optional) The spec for Swift "},{"location":"CRDs/specification/#ceph.rook.io/v1.PullSpec","title":"PullSpec","text":"(Appears on:ObjectRealmSpec) PullSpec represents the pulling specification of a Ceph Object Storage Gateway Realm Field Descriptionendpoint string"},{"location":"CRDs/specification/#ceph.rook.io/v1.QuotaSpec","title":"QuotaSpec","text":"(Appears on:PoolSpec) QuotaSpec represents the spec for quotas in a pool Field DescriptionmaxBytes uint64 (Optional) MaxBytes represents the quota in bytes Deprecated in favor of MaxSize maxSize string (Optional) MaxSize represents the quota in bytes as a string maxObjects uint64 (Optional) MaxObjects represents the quota in objects "},{"location":"CRDs/specification/#ceph.rook.io/v1.RBDMirroringSpec","title":"RBDMirroringSpec","text":"(Appears on:CephRBDMirror) RBDMirroringSpec represents the specification of an RBD mirror daemon Field Descriptioncount int Count represents the number of rbd mirror instance to run peers MirroringPeerSpec (Optional) Peers represents the peers spec placement Placement (Optional) The affinity to place the rgw pods (default is to place on any available node) annotations Annotations (Optional) The annotations-related configuration to add/set on each Pod related object. labels Labels (Optional) The labels-related configuration to add/set on each Pod related object. resources Kubernetes core/v1.ResourceRequirements (Optional) The resource requirements for the rbd mirror pods priorityClassName string (Optional) PriorityClassName sets priority class on the rbd mirror pods "},{"location":"CRDs/specification/#ceph.rook.io/v1.RGWServiceSpec","title":"RGWServiceSpec","text":"(Appears on:GatewaySpec) RGWServiceSpec represent the spec for RGW service Field Descriptionannotations Annotations The annotations-related configuration to add/set on each rgw service. nullable optional "},{"location":"CRDs/specification/#ceph.rook.io/v1.RadosNamespaceMirroring","title":"RadosNamespaceMirroring","text":"(Appears on:CephBlockPoolRadosNamespaceSpec) RadosNamespaceMirroring represents the mirroring configuration of CephBlockPoolRadosNamespace Field DescriptionremoteNamespace string (Optional) RemoteNamespace is the name of the CephBlockPoolRadosNamespace on the secondary cluster CephBlockPool mode RadosNamespaceMirroringMode Mode is the mirroring mode; either pool or image snapshotSchedules []SnapshotScheduleSpec (Optional) SnapshotSchedules is the scheduling of snapshot for mirrored images "},{"location":"CRDs/specification/#ceph.rook.io/v1.RadosNamespaceMirroringMode","title":"RadosNamespaceMirroringMode (string alias)","text":"(Appears on:RadosNamespaceMirroring) RadosNamespaceMirroringMode represents the mode of the RadosNamespace Value Description\"image\" RadosNamespaceMirroringModeImage represents the image mode \"pool\" RadosNamespaceMirroringModePool represents the pool mode "},{"location":"CRDs/specification/#ceph.rook.io/v1.ReadAffinitySpec","title":"ReadAffinitySpec","text":"(Appears on:CSIDriverSpec) ReadAffinitySpec defines the read affinity settings for CSI driver. Field Descriptionenabled bool (Optional) Enables read affinity for CSI driver. crushLocationLabels []string (Optional) CrushLocationLabels defines which node labels to use as CRUSH location. 
This should correspond to the values set in the CRUSH map. "},{"location":"CRDs/specification/#ceph.rook.io/v1.ReplicatedSpec","title":"ReplicatedSpec","text":"(Appears on:PoolSpec) ReplicatedSpec represents the spec for replication in a pool Field Descriptionsize uint Size - Number of copies per object in a replicated storage pool, including the object itself (required for replicated pool type) targetSizeRatio float64 (Optional) TargetSizeRatio gives a hint (%) to Ceph in terms of expected consumption of the total cluster capacity requireSafeReplicaSize bool (Optional) RequireSafeReplicaSize if false allows you to set replica 1 replicasPerFailureDomain uint (Optional) ReplicasPerFailureDomain the number of replica in the specified failure domain subFailureDomain string (Optional) SubFailureDomain the name of the sub-failure domain hybridStorage HybridStorageSpec (Optional) HybridStorage represents hybrid storage tier settings "},{"location":"CRDs/specification/#ceph.rook.io/v1.ResourceSpec","title":"ResourceSpec (map[string]k8s.io/api/core/v1.ResourceRequirements alias)","text":"(Appears on:ClusterSpec) ResourceSpec is a collection of ResourceRequirements that describes the compute resource requirements "},{"location":"CRDs/specification/#ceph.rook.io/v1.S3Spec","title":"S3Spec","text":"(Appears on:ProtocolSpec) S3Spec represents Ceph Object Store specification for the S3 API Field Descriptionenabled bool (Optional) Whether to enable S3. This defaults to true (even if protocols.s3 is not present in the CRD). This maintains backwards compatibility \u2013 by default S3 is enabled. authUseKeystone bool (Optional) Whether to use Keystone for authentication. This option maps directly to the rgw_s3_auth_use_keystone option. Enabling it allows generating S3 credentials via an OpenStack API call, see the docs. If not given, the defaults of the corresponding RGW option apply. "},{"location":"CRDs/specification/#ceph.rook.io/v1.SSSDSidecar","title":"SSSDSidecar","text":"(Appears on:SSSDSpec) SSSDSidecar represents configuration when SSSD is run in a sidecar. Field Descriptionimage string Image defines the container image that should be used for the SSSD sidecar. sssdConfigFile SSSDSidecarConfigFile (Optional) SSSDConfigFile defines where the SSSD configuration should be sourced from. The config file will be placed into additionalFiles AdditionalVolumeMounts (Optional) AdditionalFiles defines any number of additional files that should be mounted into the SSSD sidecar with a directory root of resources Kubernetes core/v1.ResourceRequirements (Optional) Resources allow specifying resource requests/limits on the SSSD sidecar container. debugLevel int (Optional) DebugLevel sets the debug level for SSSD. If unset or set to 0, Rook does nothing. Otherwise, this may be a value between 1 and 10. See SSSD docs for more info: https://sssd.io/troubleshooting/basics.html#sssd-debug-logs "},{"location":"CRDs/specification/#ceph.rook.io/v1.SSSDSidecarConfigFile","title":"SSSDSidecarConfigFile","text":"(Appears on:SSSDSidecar) SSSDSidecarConfigFile represents the source(s) from which the SSSD configuration should come. Field DescriptionvolumeSource ConfigFileVolumeSource VolumeSource accepts a pared down version of the standard Kubernetes VolumeSource for the SSSD configuration file like what is normally used to configure Volumes for a Pod. For example, a ConfigMap, Secret, or HostPath. There are two requirements for the source\u2019s content: 1. 
The config file must be mountable via "},{"location":"CRDs/specification/#ceph.rook.io/v1.SSSDSpec","title":"SSSDSpec","text":"(Appears on:NFSSecuritySpec) SSSDSpec represents configuration for the System Security Services Daemon (SSSD). Field Description sidecar SSSDSidecar (Optional) Sidecar tells Rook to run SSSD in a sidecar alongside the NFS-Ganesha server in each NFS pod. "},{"location":"CRDs/specification/#ceph.rook.io/v1.SanitizeDataSourceProperty","title":"SanitizeDataSourceProperty (string alias)","text":"(Appears on:SanitizeDisksSpec) SanitizeDataSourceProperty represents a sanitizing data source Value Description \"random\" SanitizeDataSourceRandom uses shred's default entropy source \"zero\" SanitizeDataSourceZero uses /dev/zero as the sanitize source "},{"location":"CRDs/specification/#ceph.rook.io/v1.SanitizeDisksSpec","title":"SanitizeDisksSpec","text":"(Appears on:CleanupPolicySpec) SanitizeDisksSpec represents a disk sanitizing specification Field Description method SanitizeMethodProperty (Optional) Method is the method used to sanitize disks dataSource SanitizeDataSourceProperty (Optional) DataSource is the data source to use to sanitize the disk iteration int32 (Optional) Iteration is the number of passes to apply when sanitizing "},{"location":"CRDs/specification/#ceph.rook.io/v1.SanitizeMethodProperty","title":"SanitizeMethodProperty (string alias)","text":"(Appears on:SanitizeDisksSpec) SanitizeMethodProperty represents a disk sanitizing method Value Description \"complete\" SanitizeMethodComplete will sanitize everything on the disk \"quick\" SanitizeMethodQuick will sanitize metadata only on the disk "},{"location":"CRDs/specification/#ceph.rook.io/v1.SecuritySpec","title":"SecuritySpec","text":"(Appears on:ClusterSpec, ObjectStoreSecuritySpec) SecuritySpec is the security spec used to include various security items such as KMS Field Description kms KeyManagementServiceSpec (Optional) KeyManagementService is the main Key Management option keyRotation KeyRotationSpec (Optional) KeyRotation defines options for Key Rotation. 
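As a rough illustration of the SecuritySpec just described, a CephCluster could reference a KMS and enable key rotation along the lines of the following sketch. The Vault address, token secret name, and rotation schedule are assumptions for illustration only, not defaults:

```yaml
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  security:
    kms:
      # connectionDetails are passed through to the configured KMS provider;
      # the values below are illustrative only
      connectionDetails:
        KMS_PROVIDER: vault
        VAULT_ADDR: https://vault.example.com:8200   # hypothetical endpoint
      tokenSecretName: rook-vault-token              # hypothetical Secret holding the Vault token
    keyRotation:
      enabled: true
      schedule: "@weekly"                            # assumed cron-style schedule
```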
"},{"location":"CRDs/specification/#ceph.rook.io/v1.Selection","title":"Selection","text":"(Appears on:Node, StorageScopeSpec) Field DescriptionuseAllDevices bool (Optional) Whether to consume all the storage devices found on a machine deviceFilter string (Optional) A regular expression to allow more fine-grained selection of devices on nodes across the cluster devicePathFilter string (Optional) A regular expression to allow more fine-grained selection of devices with path names devices []Device (Optional) List of devices to use as storage devices volumeClaimTemplates []VolumeClaimTemplate (Optional) PersistentVolumeClaims to use as storage "},{"location":"CRDs/specification/#ceph.rook.io/v1.SnapshotSchedule","title":"SnapshotSchedule","text":"(Appears on:SnapshotSchedulesSpec) SnapshotSchedule is a schedule Field Descriptioninterval string (Optional) Interval is the interval in which snapshots will be taken start_time string (Optional) StartTime is the snapshot starting time "},{"location":"CRDs/specification/#ceph.rook.io/v1.SnapshotScheduleRetentionSpec","title":"SnapshotScheduleRetentionSpec","text":"(Appears on:FSMirroringSpec) SnapshotScheduleRetentionSpec is a retention policy Field Descriptionpath string (Optional) Path is the path to snapshot duration string (Optional) Duration represents the retention duration for a snapshot "},{"location":"CRDs/specification/#ceph.rook.io/v1.SnapshotScheduleSpec","title":"SnapshotScheduleSpec","text":"(Appears on:FSMirroringSpec, MirroringSpec, RadosNamespaceMirroring) SnapshotScheduleSpec represents the snapshot scheduling settings of a mirrored pool Field Descriptionpath string (Optional) Path is the path to snapshot, only valid for CephFS interval string (Optional) Interval represent the periodicity of the snapshot. 
startTime string (Optional) StartTime indicates when to start the snapshot "},{"location":"CRDs/specification/#ceph.rook.io/v1.SnapshotScheduleStatusSpec","title":"SnapshotScheduleStatusSpec","text":"(Appears on:CephBlockPoolStatus) SnapshotScheduleStatusSpec is the status of the snapshot schedule Field Description snapshotSchedules []SnapshotSchedulesSpec (Optional) SnapshotSchedules is the list of snapshots scheduled lastChecked string (Optional) LastChecked is the last time the status was checked lastChanged string (Optional) LastChanged is the last time the status changed details string (Optional) Details contains potential status errors "},{"location":"CRDs/specification/#ceph.rook.io/v1.SnapshotSchedulesSpec","title":"SnapshotSchedulesSpec","text":"(Appears on:SnapshotScheduleStatusSpec) SnapshotSchedulesSpec is the list of snapshots scheduled for images in a pool Field Description pool string (Optional) Pool is the pool name namespace string (Optional) Namespace is the RADOS namespace the image is part of image string (Optional) Image is the mirrored image items []SnapshotSchedule (Optional) Items is the list of scheduled times for a given snapshot "},{"location":"CRDs/specification/#ceph.rook.io/v1.StatesSpec","title":"StatesSpec","text":"(Appears on:PoolMirroringStatusSummarySpec) StatesSpec is the rbd images mirroring state Field Description starting_replay int (Optional) StartingReplay is when the replay of the mirroring journal starts replaying int (Optional) Replaying is when the replay of the mirroring journal is on-going syncing int (Optional) Syncing is when the image is syncing stopping_replay int (Optional) StopReplaying is when the replay of the mirroring journal stops stopped int (Optional) Stopped is when the mirroring state is stopped unknown int (Optional) Unknown is when the mirroring state is unknown error int (Optional) Error is when the mirroring state is errored "},{"location":"CRDs/specification/#ceph.rook.io/v1.Status","title":"Status","text":"(Appears on:CephBucketNotification, CephFilesystemMirror, CephNFS, CephObjectRealm, CephObjectZone, CephObjectZoneGroup, CephRBDMirror) Status represents the status of an object Field Description phase string (Optional) observedGeneration int64 (Optional) ObservedGeneration is the latest generation observed by the controller. 
conditions []Condition"},{"location":"CRDs/specification/#ceph.rook.io/v1.StorageClassDeviceSet","title":"StorageClassDeviceSet","text":"(Appears on:StorageScopeSpec) StorageClassDeviceSet is a storage class device set Field Descriptionname string Name is a unique identifier for the set count int Count is the number of devices in this set resources Kubernetes core/v1.ResourceRequirements (Optional) placement Placement (Optional) preparePlacement Placement (Optional) config map[string]string (Optional) Provider-specific device configuration volumeClaimTemplates []VolumeClaimTemplate VolumeClaimTemplates is a list of PVC templates for the underlying storage devices portable bool (Optional) Portable represents OSD portability across the hosts tuneDeviceClass bool (Optional) TuneSlowDeviceClass Tune the OSD when running on a slow Device Class tuneFastDeviceClass bool (Optional) TuneFastDeviceClass Tune the OSD when running on a fast Device Class schedulerName string (Optional) Scheduler name for OSD pod placement encrypted bool (Optional) Whether to encrypt the deviceSet "},{"location":"CRDs/specification/#ceph.rook.io/v1.StorageScopeSpec","title":"StorageScopeSpec","text":"(Appears on:ClusterSpec) Field Descriptionnodes []Node (Optional) useAllNodes bool (Optional) onlyApplyOSDPlacement bool (Optional) config map[string]string (Optional) Selection Selection (Members of storageClassDeviceSets []StorageClassDeviceSet (Optional) store OSDStore (Optional) flappingRestartIntervalHours int (Optional) FlappingRestartIntervalHours defines the time for which the OSD pods, that failed with zero exit code, will sleep before restarting. This is needed for OSD flapping where OSD daemons are marked down more than 5 times in 600 seconds by Ceph. Preventing the OSD pods to restart immediately in such scenarios will prevent Rook from marking OSD as fullRatio float64 (Optional) FullRatio is the ratio at which the cluster is considered full and ceph will stop accepting writes. Default is 0.95. nearFullRatio float64 (Optional) NearFullRatio is the ratio at which the cluster is considered nearly full and will raise a ceph health warning. Default is 0.85. backfillFullRatio float64 (Optional) BackfillFullRatio is the ratio at which the cluster is too full for backfill. Backfill will be disabled if above this threshold. Default is 0.90. allowDeviceClassUpdate bool (Optional) Whether to allow updating the device class after the OSD is initially provisioned allowOsdCrushWeightUpdate bool (Optional) Whether Rook will resize the OSD CRUSH weight when the OSD PVC size is increased. This allows cluster data to be rebalanced to make most effective use of new OSD space. The default is false since data rebalancing can cause temporary cluster slowdown. 
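To tie the StorageScopeSpec and Selection fields above together, here is a hypothetical host-based storage snippet from a CephCluster spec. Node names and the device filter are illustrative assumptions:

```yaml
spec:
  storage:
    useAllNodes: false
    useAllDevices: false
    deviceFilter: ^sd[b-d]      # regular expression selecting devices, illustrative
    nodes:
      - name: node-a            # should match the node's kubernetes.io/hostname label
        devices:
          - name: sdb           # consume a specific device on this node
      - name: node-b
        deviceFilter: ^nvme.    # per-node filter overrides the cluster-level filter
```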
"},{"location":"CRDs/specification/#ceph.rook.io/v1.StoreType","title":"StoreType (string alias)","text":"Value Description \"bluestore\" StoreTypeBlueStore is the bluestore backend storage for OSDs \"bluestore-rdr\" StoreTypeBlueStoreRDR is the bluestore-rdr backed storage for OSDs "},{"location":"CRDs/specification/#ceph.rook.io/v1.StretchClusterSpec","title":"StretchClusterSpec","text":"(Appears on:MonSpec) StretchClusterSpec represents the specification of a stretched Ceph Cluster Field DescriptionfailureDomainLabel string (Optional) FailureDomainLabel the failure domain name (e,g: zone) subFailureDomain string (Optional) SubFailureDomain is the failure domain within a zone zones []MonZoneSpec (Optional) Zones is the list of zones "},{"location":"CRDs/specification/#ceph.rook.io/v1.SwiftSpec","title":"SwiftSpec","text":"(Appears on:ProtocolSpec) SwiftSpec represents Ceph Object Store specification for the Swift API Field DescriptionaccountInUrl bool (Optional) Whether or not the Swift account name should be included in the Swift API URL. If set to false (the default), then the Swift API will listen on a URL formed like http://host:port//v1. If set to true, the Swift API URL will be http://host:port//v1/AUTH_. You must set this option to true (and update the Keystone service catalog) if you want radosgw to support publicly-readable containers and temporary URLs. The URL prefix for the Swift API, to distinguish it from the S3 API endpoint. The default is swift, which makes the Swift API available at the URL http://host:port/swift/v1 (or http://host:port/swift/v1/AUTH_%(tenant_id)s if rgw swift account in url is enabled). versioningEnabled bool (Optional) Enables the Object Versioning of OpenStack Object Storage API. This allows clients to put the X-Versions-Location attribute on containers that should be versioned. "},{"location":"CRDs/specification/#ceph.rook.io/v1.TopicEndpointSpec","title":"TopicEndpointSpec","text":"(Appears on:BucketTopicSpec) TopicEndpointSpec contains exactly one of the endpoint specs of a Bucket Topic Field Descriptionhttp HTTPEndpointSpec (Optional) Spec of HTTP endpoint amqp AMQPEndpointSpec (Optional) Spec of AMQP endpoint kafka KafkaEndpointSpec (Optional) Spec of Kafka endpoint "},{"location":"CRDs/specification/#ceph.rook.io/v1.VolumeClaimTemplate","title":"VolumeClaimTemplate","text":"(Appears on:MonSpec, MonZoneSpec, Selection, StorageClassDeviceSet) VolumeClaimTemplate is a simplified version of K8s corev1\u2019s PVC. It has no type meta or status. Field Descriptionmetadata Kubernetes meta/v1.ObjectMeta (Optional) Standard object\u2019s metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata Refer to the Kubernetes API documentation for the fields of themetadata field. spec Kubernetes core/v1.PersistentVolumeClaimSpec (Optional) spec defines the desired characteristics of a volume requested by a pod author. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims accessModes []Kubernetes core/v1.PersistentVolumeAccessMode (Optional) accessModes contains the desired access modes the volume should have. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1 selector Kubernetes meta/v1.LabelSelector (Optional) selector is a label query over volumes to consider for binding. resources Kubernetes core/v1.VolumeResourceRequirements (Optional) resources represents the minimum resources the volume should have. 
If RecoverVolumeExpansionFailure feature is enabled users are allowed to specify resource requirements that are lower than previous value but must still be higher than capacity recorded in the status field of the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources volumeName string (Optional) volumeName is the binding reference to the PersistentVolume backing this claim. storageClassName string (Optional) storageClassName is the name of the StorageClass required by the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1 volumeMode Kubernetes core/v1.PersistentVolumeMode (Optional) volumeMode defines what type of volume is required by the claim. Value of Filesystem is implied when not included in claim spec. dataSource Kubernetes core/v1.TypedLocalObjectReference (Optional) dataSource field can be used to specify either: * An existing VolumeSnapshot object (snapshot.storage.k8s.io/VolumeSnapshot) * An existing PVC (PersistentVolumeClaim) If the provisioner or an external controller can support the specified data source, it will create a new volume based on the contents of the specified data source. When the AnyVolumeDataSource feature gate is enabled, dataSource contents will be copied to dataSourceRef, and dataSourceRef contents will be copied to dataSource when dataSourceRef.namespace is not specified. If the namespace is specified, then dataSourceRef will not be copied to dataSource. dataSourceRef Kubernetes core/v1.TypedObjectReference (Optional) dataSourceRef specifies the object from which to populate the volume with data, if a non-empty volume is desired. This may be any object from a non-empty API group (non core object) or a PersistentVolumeClaim object. When this field is specified, volume binding will only succeed if the type of the specified object matches some installed volume populator or dynamic provisioner. This field will replace the functionality of the dataSource field and as such if both fields are non-empty, they must have the same value. For backwards compatibility, when namespace isn\u2019t specified in dataSourceRef, both fields (dataSource and dataSourceRef) will be set to the same value automatically if one of them is empty and the other is non-empty. When namespace is specified in dataSourceRef, dataSource isn\u2019t set to the same value and must be empty. There are three important differences between dataSource and dataSourceRef: * While dataSource only allows two specific types of objects, dataSourceRef allows any non-core object, as well as PersistentVolumeClaim objects. * While dataSource ignores disallowed values (dropping them), dataSourceRef preserves all values, and generates an error if a disallowed value is specified. * While dataSource only allows local objects, dataSourceRef allows objects in any namespaces. (Beta) Using this field requires the AnyVolumeDataSource feature gate to be enabled. (Alpha) Using the namespace field of dataSourceRef requires the CrossNamespaceVolumeDataSource feature gate to be enabled. volumeAttributesClassName string (Optional) volumeAttributesClassName may be used to set the VolumeAttributesClass used by this claim. If specified, the CSI driver will create or update the volume with the attributes defined in the corresponding VolumeAttributesClass. This has a different purpose than storageClassName, it can be changed after the claim is created. 
An empty string value means that no VolumeAttributesClass will be applied to the claim but it\u2019s not allowed to reset this field to empty string once it is set. If unspecified and the PersistentVolumeClaim is unbound, the default VolumeAttributesClass will be set by the persistentvolume controller if it exists. If the resource referred to by volumeAttributesClass does not exist, this PersistentVolumeClaim will be set to a Pending state, as reflected by the modifyVolumeStatus field, until such as a resource exists. More info: https://kubernetes.io/docs/concepts/storage/volume-attributes-classes/ (Beta) Using this field requires the VolumeAttributesClass feature gate to be enabled (off by default). "},{"location":"CRDs/specification/#ceph.rook.io/v1.ZoneSpec","title":"ZoneSpec","text":"(Appears on:ObjectStoreSpec) ZoneSpec represents a Ceph Object Store Gateway Zone specification Field Descriptionname string RGW Zone the Object Store is in Generated with Rook allows creation and customization of storage pools through the custom resource definitions (CRDs). The following settings are available for pools. "},{"location":"CRDs/Block-Storage/ceph-block-pool-crd/#examples","title":"Examples","text":""},{"location":"CRDs/Block-Storage/ceph-block-pool-crd/#replicated","title":"Replicated","text":"For optimal performance, while also adding redundancy, this sample will configure Ceph to make three full copies of the data on multiple nodes. Note This sample requires at least 1 OSD per node, with each OSD located on 3 different nodes. Each OSD must be located on a different node, because the "},{"location":"CRDs/Block-Storage/ceph-block-pool-crd/#hybrid-storage-pools","title":"Hybrid Storage Pools","text":"Hybrid storage is a combination of two different storage tiers. For example, SSD and HDD. This helps to improve the read performance of cluster by placing, say, 1st copy of data on the higher performance tier (SSD or NVME) and remaining replicated copies on lower cost tier (HDDs). WARNING Hybrid storage pools are likely to suffer from lower availability if a node goes down. The data across the two tiers may actually end up on the same node, instead of being spread across unique nodes (or failure domains) as expected. Instead of using hybrid pools, consider configuring primary affinity from the toolbox. Important The device classes This sample will lower the overall storage capacity requirement, while also adding redundancy by using erasure coding. Note This sample requires at least 3 bluestore OSDs. The OSDs can be located on a single Ceph node or spread across multiple nodes, because the High performance applications typically will not use erasure coding due to the performance overhead of creating and distributing the chunks in the cluster. When creating an erasure-coded pool, it is highly recommended to create the pool when you have bluestore OSDs in your cluster (see the OSD configuration settings. Filestore OSDs have limitations that are unsafe and lower performance. "},{"location":"CRDs/Block-Storage/ceph-block-pool-crd/#mirroring","title":"Mirroring","text":"RADOS Block Device (RBD) mirroring is a process of asynchronous replication of Ceph block device images between two or more Ceph clusters. Mirroring ensures point-in-time consistent replicas of all changes to an image, including reads and writes, block device resizing, snapshots, clones and flattening. It is generally useful when planning for Disaster Recovery. 
Mirroring is for clusters that are geographically distributed and stretching a single cluster is not possible due to high latencies. The following will enable mirroring of the pool at the image level: Once mirroring is enabled, Rook will by default create its own bootstrap peer token so that it can be used by another cluster. The bootstrap peer token can be found in a Kubernetes Secret. The name of the Secret is present in the Status field of the CephBlockPool CR: This secret can then be fetched like so: The secret must be decoded. The result will be another base64 encoded blob that you will import in the destination cluster: See the official rbd mirror documentation on how to add a bootstrap peer. Note Disabling mirroring for the CephBlockPool requires disabling mirroring on all the CephBlockPoolRadosNamespaces present underneath. "},{"location":"CRDs/Block-Storage/ceph-block-pool-crd/#data-spread-across-subdomains","title":"Data spread across subdomains","text":"Imagine the following topology with datacenters containing racks and then hosts: As an administrator I would like to place 4 copies across both datacenter where each copy inside a datacenter is across a rack. This can be achieved by the following: "},{"location":"CRDs/Block-Storage/ceph-block-pool-crd/#pool-settings","title":"Pool Settings","text":""},{"location":"CRDs/Block-Storage/ceph-block-pool-crd/#metadata","title":"Metadata","text":"
With For instance: "},{"location":"CRDs/Block-Storage/ceph-block-pool-crd/#erasure-coding","title":"Erasure Coding","text":"Erasure coding allows you to keep your data safe while reducing the storage overhead. Instead of creating multiple replicas of the data, erasure coding divides the original data into chunks of equal size, then generates extra chunks of that same size for redundancy. For example, if you have an object of size 2MB, the simplest erasure coding with two data chunks would divide the object into two chunks of size 1MB each (data chunks). One more chunk (coding chunk) of size 1MB will be generated. In total, 3MB will be stored in the cluster. The object will be able to suffer the loss of any one of the chunks and still be able to reconstruct the original object. The number of data and coding chunks you choose will depend on your resiliency to loss and how much storage overhead is acceptable in your storage cluster. Here are some examples to illustrate how the number of chunks affects the storage and loss toleration.

| Data chunks (k) | Coding chunks (m) | Total storage | Losses Tolerated | OSDs required |
|-----------------|-------------------|---------------|------------------|---------------|
| 2               | 1                 | 1.5x          | 1                | 3             |
| 2               | 2                 | 2x            | 2                | 4             |
| 4               | 2                 | 1.5x          | 2                | 6             |
| 16              | 4                 | 1.25x         | 4                | 20            |

The
If you do not have a sufficient number of hosts or OSDs for unique placement, the pool can still be created, but writing to the pool will hang. Rook currently only configures two levels in the CRUSH map. It is also possible to configure other levels such as "},{"location":"CRDs/Block-Storage/ceph-block-pool-rados-namespace-crd/","title":"CephBlockPoolRadosNamespace CRD","text":"This guide assumes you have created a Rook cluster as explained in the main Quickstart guide. RADOS currently uses pools both for data distribution (pools are sharded into PGs, which map to OSDs) and as the granularity for security (capabilities can restrict access by pool). Overloading pools for both purposes makes it hard to do multi-tenancy because it is not a good idea to have a very large number of pools. A namespace is a division of a pool into separate logical namespaces. For more information about BlockPools and namespaces, refer to the Ceph docs. Having multiple namespaces in a pool allows multiple Kubernetes clusters to share one Ceph cluster without creating a pool per Kubernetes cluster, and it also provides tenant isolation between multiple tenants in a single Kubernetes cluster without creating multiple pools for tenants. Rook allows creation of Ceph BlockPool RadosNamespaces through the custom resource definitions (CRDs). "},{"location":"CRDs/Block-Storage/ceph-block-pool-rados-namespace-crd/#example","title":"Example","text":"To get you started, here is a simple example of a CR to create a CephBlockPoolRadosNamespace on the CephBlockPool \"replicapool\" (see the sketch below). "},{"location":"CRDs/Block-Storage/ceph-block-pool-rados-namespace-crd/#settings","title":"Settings","text":"If any setting is unspecified, a suitable default will be used automatically. "},{"location":"CRDs/Block-Storage/ceph-block-pool-rados-namespace-crd/#metadata","title":"Metadata","text":"
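A minimal sketch of such a CR, assuming the parent pool is the "replicapool" CephBlockPool mentioned above (the namespace name is illustrative):

```yaml
apiVersion: ceph.rook.io/v1
kind: CephBlockPoolRadosNamespace
metadata:
  name: namespace-a          # becomes the RADOS namespace name (keep it lower case)
  namespace: rook-ceph
spec:
  # The CephBlockPool CR (in the same Kubernetes namespace) in which to create the RADOS namespace
  blockPoolName: replicapool
```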
Once the RADOS namespace is created, an RBD-based StorageClass can be created to create PVs in this RADOS namespace. For this purpose, the Extract the clusterID from the CephBlockPoolRadosNamespace CR: In this example, replace Example: "},{"location":"CRDs/Block-Storage/ceph-block-pool-rados-namespace-crd/#mirroring","title":"Mirroring","text":"First, enable mirroring for the parent CephBlockPool. Second, configure the rados namespace CRD with the mirroring: "},{"location":"CRDs/Block-Storage/ceph-rbd-mirror-crd/","title":"CephRBDMirror CRD","text":"Rook allows creation and updating rbd-mirror daemon(s) through the custom resource definitions (CRDs). RBD images can be asynchronously mirrored between two Ceph clusters. For more information about user management and capabilities see the Ceph docs. "},{"location":"CRDs/Block-Storage/ceph-rbd-mirror-crd/#creating-daemons","title":"Creating daemons","text":"To get you started, here is a simple example of a CRD to deploy an rbd-mirror daemon. "},{"location":"CRDs/Block-Storage/ceph-rbd-mirror-crd/#prerequisites","title":"Prerequisites","text":"This guide assumes you have created a Rook cluster as explained in the main Quickstart guide "},{"location":"CRDs/Block-Storage/ceph-rbd-mirror-crd/#settings","title":"Settings","text":"If any setting is unspecified, a suitable default will be used automatically. "},{"location":"CRDs/Block-Storage/ceph-rbd-mirror-crd/#rbdmirror-metadata","title":"RBDMirror metadata","text":"
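For reference, a minimal CephRBDMirror sketch matching the description above could look like the following; the name is illustrative and a single daemon instance is assumed:

```yaml
apiVersion: ceph.rook.io/v1
kind: CephRBDMirror
metadata:
  name: my-rbd-mirror        # illustrative name
  namespace: rook-ceph
spec:
  count: 1                   # number of rbd-mirror daemon instances to run
```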
Rook allows creation and customization of storage clusters through the custom resource definitions (CRDs). There are primarily four different modes in which to create your cluster.
See the separate topics for a description and examples of each of these scenarios. "},{"location":"CRDs/Cluster/ceph-cluster-crd/#settings","title":"Settings","text":"Settings can be specified at the global level to apply to the cluster as a whole, while other settings can be specified at more fine-grained levels. If any setting is unspecified, a suitable default will be used automatically. "},{"location":"CRDs/Cluster/ceph-cluster-crd/#cluster-metadata","title":"Cluster metadata","text":"
Official releases of Ceph Container images are available from Docker Hub. These are general purpose Ceph containers with all necessary daemons and dependencies installed.

| TAG | MEANING |
|-----|---------|
| vRELNUM | Latest release in this series (e.g., v19 = Squid) |
| vRELNUM.Y | Latest stable release in this stable series (e.g., v19.2) |
| vRELNUM.Y.Z | A specific release (e.g., v19.2.0) |
| vRELNUM.Y.Z-YYYYMMDD | A specific build (e.g., v19.2.0-20240927) |

A specific vRELNUM.Y.Z-YYYYMMDD tag will contain a specific release of Ceph as well as security fixes from the Operating System. "},{"location":"CRDs/Cluster/ceph-cluster-crd/#mon-settings","title":"Mon Settings","text":"
If these settings are changed in the CRD the operator will update the number of mons during a periodic check of the mon health, which by default is every 45 seconds. To change the defaults that the operator uses to determine the mon health and whether to failover a mon, refer to the health settings. The intervals should be small enough that you have confidence the mons will maintain quorum, while also being long enough to ignore network blips where mons are failed over too often. "},{"location":"CRDs/Cluster/ceph-cluster-crd/#mgr-settings","title":"Mgr Settings","text":"You can use the cluster CR to enable or disable any manager module. This can be configured like so: Some modules will have special configuration to ensure the module is fully functional after being enabled. Specifically:
If not specified, the default SDN will be used. Configure the network that will be enabled for the cluster and services.
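For orientation, a hedged sketch of the network section of a CephCluster is shown below; all settings here are optional and the values are illustrative rather than defaults:

```yaml
spec:
  network:
    provider: host           # or "multus"; omit to use the default pod network
    connections:
      encryption:
        enabled: false       # encrypt Ceph traffic on the wire
      compression:
        enabled: false       # compress Ceph traffic on the wire
```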
Caution Changing networking configuration after a Ceph cluster has been deployed is only supported for the network encryption settings. Changing other network settings is NOT supported and will likely result in a non-functioning cluster. "},{"location":"CRDs/Cluster/ceph-cluster-crd/#provider","title":"Provider","text":"Selecting a non-default network provider is an advanced topic. Read more in the Network Providers documentation. "},{"location":"CRDs/Cluster/ceph-cluster-crd/#ipfamily","title":"IPFamily","text":"Provide single-stack IPv4 or IPv6 protocol to assign corresponding addresses to pods and services. This field is optional. Possible inputs are IPv6 and IPv4. Empty value will be treated as IPv4. To enable dual stack see the network configuration section. "},{"location":"CRDs/Cluster/ceph-cluster-crd/#node-settings","title":"Node Settings","text":"In addition to the cluster level settings specified above, each individual node can also specify configuration to override the cluster level settings and defaults. If a node does not specify any configuration then it will inherit the cluster level settings.
When For production clusters, we recommend that Nodes can be added and removed over time by updating the Cluster CRD, for example with Below are the settings for host-based cluster. This type of cluster can specify devices for OSDs, both at the cluster and individual node level, for selecting which storage resources will be included in the cluster.
Host-based cluster supports raw devices, partitions, logical volumes, encrypted devices, and multipath devices. Be sure to see the quickstart doc prerequisites for additional considerations. Below are the settings for a PVC-based cluster.
The following are the settings for Storage Class Device Sets which can be configured to create OSDs that are backed by block mode PVs.
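As a rough sketch of a Storage Class Device Set using the fields described in the specification above (the storage class name and sizes are assumptions, not defaults):

```yaml
spec:
  storage:
    storageClassDeviceSets:
      - name: set1
        count: 3                      # number of OSDs in this set
        portable: true                # OSDs may move between hosts along with their PVs
        volumeClaimTemplates:
          - metadata:
              name: data              # one of the supported template names
            spec:
              resources:
                requests:
                  storage: 100Gi      # illustrative size
              storageClassName: gp2   # hypothetical storage class
              volumeMode: Block
              accessModes:
                - ReadWriteOnce
```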
See the table in OSD Configuration Settings to know the allowed configurations. "},{"location":"CRDs/Cluster/ceph-cluster-crd/#osd-configuration-settings","title":"OSD Configuration Settings","text":"The following storage selection settings are specific to Ceph and do not apply to other backends. All variables are key-value pairs represented as strings.
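For instance, the OSD configuration map of string key-value pairs might look like this hedged snippet; the values are examples only:

```yaml
spec:
  storage:
    config:
      osdsPerDevice: "1"        # number of OSDs per device; note the quoted string value
      deviceClass: ssd          # override the automatically detected device class
      encryptedDevice: "true"   # encrypt the OSD at rest
```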
Allowed configurations are:

| block device type | host-based cluster | PVC-based cluster |
|-------------------|--------------------|-------------------|
| disk              |                    |                   |
| part              | encryptedDevice must be false | encrypted must be false |
| lvm               | metadataDevice must be \"\", osdsPerDevice must be 1, and encryptedDevice must be false | metadata.name must not be metadata or wal and encrypted must be false |
| crypt             |                    |                   |
| mpath             |                    |                   |

"},{"location":"CRDs/Cluster/ceph-cluster-crd/#limitations-of-metadata-device","title":"Limitations of metadata device","text":"
Annotations and Labels can be specified so that the Rook components will have those annotations / labels added to them. You can set annotations / labels for Rook components for the list of key value pairs:
Placement configuration for the cluster services. It includes the following keys: In stretch clusters, if the Note Placement of OSD pods is controlled using the Storage Class Device Set, not the general A Placement configuration is specified (according to the kubernetes PodSpec) as:
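The original example is not reproduced here; a minimal hedged sketch of a placement entry, using the 'role=storage-node' label referenced below, could look like:

```yaml
spec:
  placement:
    all:
      nodeAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          nodeSelectorTerms:
            - matchExpressions:
                - key: role
                  operator: In
                  values:
                    - storage-node
      tolerations:
        - key: storage-node     # hypothetical taint key
          operator: Exists
```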
If you use The Rook Ceph operator creates a Job called To control where various services will be scheduled by kubernetes, use the placement configuration sections below. The example under 'all' would have all services scheduled on kubernetes nodes labeled with 'role=storage-node "},{"location":"CRDs/Cluster/ceph-cluster-crd/#cluster-wide-resources-configuration-settings","title":"Cluster-wide Resources Configuration Settings","text":"Resources should be specified so that the Rook components are handled after Kubernetes Pod Quality of Service classes. This allows to keep Rook components running when for example a node runs out of memory and the Rook components are not killed depending on their Quality of Service class. You can set resource requests/limits for Rook components through the Resource Requirements/Limits structure in the following keys:
In order to provide the best possible experience running Ceph in containers, Rook internally recommends minimum memory limits if resource limits are passed. If a user configures a limit or request value that is too low, Rook will still run the pod(s) and print a warning to the operator log.
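A hedged example of cluster-wide resource requests/limits follows; the numbers are illustrative and not Rook's recommendations:

```yaml
spec:
  resources:
    mgr:
      requests:
        cpu: "500m"
        memory: "1Gi"
      limits:
        memory: "1Gi"
    mon:
      requests:
        cpu: "500m"
        memory: "1Gi"
    osd:
      requests:
        cpu: "1"
        memory: "4Gi"
      limits:
        memory: "4Gi"
```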
Note We recommend not setting memory limits on the OSD prepare job to prevent OSD provisioning failure due to memory constraints. The OSD prepare job bursts memory usage during the OSD provisioning depending on the size of the device, typically 1-2Gi for large disks. The OSD prepare job only bursts a single time per OSD. All future runs of the OSD prepare job will detect the OSD is already provisioned and skip the provisioning. Hint The resources for MDS daemons are not configured in the Cluster. Refer to the Ceph Filesystem CRD instead. "},{"location":"CRDs/Cluster/ceph-cluster-crd/#resource-requirementslimits","title":"Resource Requirements/Limits","text":"For more information on resource requests/limits see the official Kubernetes documentation: Kubernetes - Managing Compute Resources for Containers
Warning Before setting resource requests/limits, please take a look at the Ceph documentation for recommendations for each component: Ceph - Hardware Recommendations. "},{"location":"CRDs/Cluster/ceph-cluster-crd/#node-specific-resources-for-osds","title":"Node Specific Resources for OSDs","text":"This example shows that you can override these requests/limits for OSDs per node when using "},{"location":"CRDs/Cluster/ceph-cluster-crd/#priority-class-names","title":"Priority Class Names","text":"Priority class names can be specified so that the Rook components will have those priority class names added to them. You can set priority class names for Rook components for the list of key value pairs:
The specific component keys will act as overrides to The Rook Ceph operator will monitor the state of the CephCluster on various components by default. The following CRD settings are available:
Currently three health checks are implemented:
The liveness probe and startup probe of each daemon can also be controlled via The probe's timing values and thresholds (but not the probe itself) can also be overridden. For more info, refer to the Kubernetes documentation. For example, you could change the Changing the liveness probe is an advanced operation and should rarely be necessary. If you want to change these settings then modify the desired settings. "},{"location":"CRDs/Cluster/ceph-cluster-crd/#status","title":"Status","text":"The operator is regularly configuring and checking the health of the cluster. The results of the configuration and health checks can be seen in the "},{"location":"CRDs/Cluster/ceph-cluster-crd/#ceph-status","title":"Ceph Status","text":"Ceph is constantly monitoring the health of the data plane and reporting back if there are any warnings or errors. If everything is healthy from Ceph's perspective, you will see If Ceph reports any warnings or errors, the details will be printed to the status. If further troubleshooting is needed to resolve these issues, the toolbox will likely be needed where you can run The The
There are several other properties for the overall status including:
The topology of the cluster is important in production environments where you want your data spread across failure domains. The topology can be controlled by adding labels to the nodes. When the labels are found on a node at first OSD deployment, Rook will add them to the desired level in the CRUSH map. The complete list of labels in hierarchy order from highest to lowest is: For example, if the following labels were added to a node: These labels would result in the following hierarchy for OSDs on that node (this command can be run in the Rook toolbox): Ceph requires unique names at every level in the hierarchy (CRUSH map). For example, you cannot have two racks with the same name that are in different zones. Racks in different zones must be named uniquely. Note that the Hint When setting the node labels prior to To utilize the This configuration will split the replication of volumes across unique racks in the data center setup. "},{"location":"CRDs/Cluster/ceph-cluster-crd/#deleting-a-cephcluster","title":"Deleting a CephCluster","text":"During deletion of a CephCluster resource, Rook protects against accidental or premature destruction of user data by blocking deletion if there are any other Rook Ceph Custom Resources that reference the CephCluster being deleted. Rook will warn about which other resources are blocking deletion in three ways until all blocking resources are deleted:
Rook has the ability to cleanup resources and data that were deployed when a CephCluster is removed. The policy settings indicate which data should be forcibly deleted and in what way the data should be wiped. The
To automate activation of the cleanup, you can use the following command. WARNING: DATA WILL BE PERMANENTLY DELETED: Nothing will happen until the deletion of the CR is requested, so this can still be reverted. However, all new configuration by the operator will be blocked with this cleanup policy enabled. Rook waits for the deletion of PVs provisioned using the cephCluster before proceeding to delete the cephCluster. To force deletion of the cephCluster without waiting for the PVs to be deleted, you can set the The Ceph config options are applied after the MONs are all in quorum and running. To set Ceph config options, you can add them to your The Rook operator will actively apply these values, whereas the ceph.conf settings only take effect after the Ceph daemon pods are restarted. If both these If Ceph settings need to be applied to mons before quorum is initially created, the ceph.conf settings should be used instead. Warning Rook performs no direct validation on these config options, so the validity of the settings is the user's responsibility. The operator does not unset any removed config options, it is the user's responsibility to unset or set the default value for each removed option manually using the Ceph CLI. "},{"location":"CRDs/Cluster/ceph-cluster-crd/#csi-driver-options","title":"CSI Driver Options","text":"The CSI driver options mentioned here are applied per Ceph cluster. The following options are available:
A host storage cluster is one where Rook configures Ceph to store data directly on the host. The Ceph mons will store the metadata on the host (at a path defined by the The Ceph persistent data is stored directly on a host path (Ceph Mons) and on raw devices (Ceph OSDs). To get you started, here are several example of the Cluster CR to configure the host. "},{"location":"CRDs/Cluster/host-cluster/#all-devices","title":"All Devices","text":"For the simplest possible configuration, this example shows that all devices or partitions should be consumed by Ceph. The mons will store the metadata on the host node under "},{"location":"CRDs/Cluster/host-cluster/#node-and-device-filters","title":"Node and Device Filters","text":"More commonly, you will want to be more specific about which nodes and devices where Rook should configure the storage. The placement settings are very flexible to add node affinity, anti-affinity, or tolerations. For more options, see the placement documentation. In this example, Rook will only configure Ceph daemons to run on nodes that are labeled with "},{"location":"CRDs/Cluster/host-cluster/#specific-nodes-and-devices","title":"Specific Nodes and Devices","text":"If you need fine-grained control for every node and every device that is being configured, individual nodes and their config can be specified. In this example, we see that specific node names and devices can be specified. Hint Each node's 'name' field should match their 'kubernetes.io/hostname' label. "},{"location":"CRDs/Cluster/network-providers/","title":"Network Providers","text":"Rook deploys CephClusters using Kubernetes' software-defined networks by default. This is simple for users, provides necessary connectivity, and has good node-level security. However, this comes at the expense of additional latency, and the storage network must contend with Kubernetes applications for network bandwidth. It also means that Kubernetes applications coexist on the same network as Ceph daemons and can reach the Ceph cluster easily via network scanning. Rook allows selecting alternative network providers to address some of these downsides, sometimes at the expense of others. Selecting alternative network providers is an advanced topic. Note This is an advanced networking topic. See also the CephCluster general networking settings. "},{"location":"CRDs/Cluster/network-providers/#ceph-networking-fundamentals","title":"Ceph Networking Fundamentals","text":"Ceph daemons can operate on up to two distinct networks: public, and cluster. Ceph daemons always use the public network. The public network is used for client communications with the Ceph cluster (reads/writes). Rook configures this as the Kubernetes pod network by default. Ceph-CSI uses this network for PVCs. The cluster network is optional and is used to isolate internal Ceph replication traffic. This includes additional copies of data replicated between OSDs during client reads/writes. This also includes OSD data recovery (re-replication) when OSDs or nodes go offline. If the cluster network is unspecified, the public network is used for this traffic instead. Refer to Ceph networking documentation for deeper explanations of any topics. "},{"location":"CRDs/Cluster/network-providers/#specifying-ceph-network-selections","title":"Specifying Ceph Network Selections","text":"
This configuration is always optional but is important to understand. Some Rook network providers allow specifying the public and network interfaces that Ceph will use for data traffic. Use This The default network provider cannot make use of these configurations. Ceph public and cluster network configurations are allowed to change, but this should be done with great care. When updating underlying networks or Ceph network settings, Rook assumes that the current network configuration used by Ceph daemons will continue to operate as intended. Network changes are not applied to Ceph daemon pods (like OSDs and MDSes) until the pod is restarted. When making network changes, ensure that restarted pods will not lose connectivity to existing pods, and vice versa. "},{"location":"CRDs/Cluster/network-providers/#host-networking","title":"Host Networking","text":"
Host networking allows the Ceph cluster to use network interfaces on Kubernetes hosts for communication. This eliminates latency from the software-defined pod network, but it provides no host-level security isolation. Ceph daemons will use any network available on the host for communication. To restrict Ceph to using only a specific specific host interfaces or networks, use If the Ceph mons are expected to bind to a public network that is different from the IP address assign to the K8s node where the mon is running, the IP address for the mon can be set by adding an annotation to the node: If the host networking setting is changed in a cluster where mons are already running, the existing mons will remain running with the same network settings with which they were created. To complete the conversion to or from host networking after you update this setting, you will need to failover the mons in order to have mons on the desired network configuration. "},{"location":"CRDs/Cluster/network-providers/#multus","title":"Multus","text":"
Rook supports using Multus NetworkAttachmentDefinitions for Ceph public and cluster networks. This allows Rook to attach any CNI to Ceph as a public and/or cluster network. This provides strong isolation between Kubernetes applications and Ceph cluster daemons. While any CNI may be used, the intent is to allow use of CNIs which allow Ceph to be connected to specific host interfaces. This improves latency and bandwidth while preserving host-level network isolation. "},{"location":"CRDs/Cluster/network-providers/#multus-prerequisites","title":"Multus Prerequisites","text":"In order for host network-enabled Ceph-CSI to communicate with a Multus-enabled CephCluster, some setup is required for Kubernetes hosts. These prerequisites require an understanding of how Multus networks are configured and how Rook uses them. Following sections will help clarify questions that may arise here. Two basic requirements must be met:
These two requirements can be broken down further as follows:
These requirements require careful planning, but some methods are able to meet these requirements more easily than others. Examples are provided after the full document to help understand and implement these requirements. Tip Keep in mind that there are often ten or more Rook/Ceph pods per host. The pod address space may need to be an order of magnitude larger (or more) than the host address space to allow the storage cluster to grow in the future. "},{"location":"CRDs/Cluster/network-providers/#multus-configuration","title":"Multus Configuration","text":"Refer to Multus documentation for details about how to set up and select Multus networks. Rook will attempt to auto-discover the network CIDRs for selected public and/or cluster networks. This process is not guaranteed to succeed. Furthermore, this process will get a new network lease for each CephCluster reconcile. Specify Only OSD pods will have both public and cluster networks attached (if specified). The rest of the Ceph component pods and CSI pods will only have the public network attached. The Rook operator will not have any networks attached; it proxies Ceph commands via a sidecar container in the mgr pod. A NetworkAttachmentDefinition must exist before it can be used by Multus for a Ceph network. A recommended definition will look like the following:
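The recommended definition itself is not reproduced in this dump; a hedged sketch of a macvlan plus Whereabouts NetworkAttachmentDefinition, with illustrative interface and CIDR values, could look like:

```yaml
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: ceph-multus-net        # illustrative name
  namespace: rook-ceph
spec:
  config: |
    {
      "cniVersion": "0.3.1",
      "type": "macvlan",
      "master": "eth0",
      "mode": "bridge",
      "ipam": {
        "type": "whereabouts",
        "range": "192.168.200.0/24"
      }
    }
```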
NetworkAttachmentDefinitions are selected for the desired Ceph network using Consider the example below which selects a hypothetical Kubernetes-wide Multus network in the default namespace for Ceph's public network and selects a Ceph-specific network in the "},{"location":"CRDs/Cluster/network-providers/#validating-multus-configuration","title":"Validating Multus configuration","text":"We highly recommend validating your Multus configuration before you install a CephCluster. A tool exists to facilitate validating the Multus configuration. After installing the Rook operator and before installing any Custom Resources, run the tool from the operator pod. The tool's CLI is designed to be as helpful as possible. Get help text for the multus validation tool like so:
Note The tool requires host network access. Many Kubernetes distros have security limitations. Use the tool's Daemons leveraging Kubernetes service IPs (Monitors, Managers, Rados Gateways) are not listening on the NAD specified in the The network plan for this cluster will be as follows:
Node configuration must allow nodes to route to pods on the Multus public network. Because pods will be connecting via Macvlan, and because Macvlan does not allow hosts and pods to route between each other, the host must also be connected via Macvlan. Because the host IP range is different from the pod IP range, a route must be added to include the pod range. Such a configuration should be equivalent to the following: The NetworkAttachmentDefinition for the public network would look like the following, using Whereabouts' The Whereabouts "},{"location":"CRDs/Cluster/network-providers/#macvlan-whereabouts-node-static-ips","title":"Macvlan, Whereabouts, Node Static IPs","text":"The network plan for this cluster will be as follows:
PXE configuration for the nodes must apply a configuration to nodes to allow nodes to route to pods on the Multus public network. Because pods will be connecting via Macvlan, and because Macvlan does not allow hosts and pods to route between each other, the host must also be connected via Macvlan. Because the host IP range is a subset of the whole range, a route must be added to include the whole range. Such a configuration should be equivalent to the following: The NetworkAttachmentDefinition for the public network would look like the following, using Whereabouts' "},{"location":"CRDs/Cluster/network-providers/#macvlan-dhcp","title":"Macvlan, DHCP","text":"The network plan for this cluster will be as follows:
Node configuration must allow nodes to route to pods on the Multus public network. Because pods will be connecting via Macvlan, and because Macvlan does not allow hosts and pods to route between each other, the host must also be connected via Macvlan. Such a configuration should be equivalent to the following: The NetworkAttachmentDefinition for the public network would look like the following. "},{"location":"CRDs/Cluster/pvc-cluster/","title":"PVC Storage Cluster","text":"In a \"PVC-based cluster\", the Ceph persistent data is stored on volumes requested from a storage class of your choice. This type of cluster is recommended in a cloud environment where volumes can be dynamically created and also in clusters where a local PV provisioner is available. "},{"location":"CRDs/Cluster/pvc-cluster/#aws-storage-example","title":"AWS Storage Example","text":"In this example, the mon and OSD volumes are provisioned from the AWS "},{"location":"CRDs/Cluster/pvc-cluster/#local-storage-example","title":"Local Storage Example","text":"In the CRD specification below, 3 OSDs (having specific placement and resource values) and 3 mons with each using a 10Gi PVC, are created by Rook using the "},{"location":"CRDs/Cluster/pvc-cluster/#pvc-storage-only-for-monitors","title":"PVC storage only for monitors","text":"In the CRD specification below three monitors are created each using a 10Gi PVC created by Rook using the "},{"location":"CRDs/Cluster/pvc-cluster/#dedicated-metadata-and-wal-device-for-osd-on-pvc","title":"Dedicated metadata and wal device for OSD on PVC","text":"In the simplest case, Ceph OSD BlueStore consumes a single (primary) storage device. BlueStore is the engine used by the OSD to store data. The storage device is normally used as a whole, occupying the full device that is managed directly by BlueStore. It is also possible to deploy BlueStore across additional devices such as a DB device. This device can be used for storing BlueStore\u2019s internal metadata. BlueStore (or rather, the embedded RocksDB) will put as much metadata as it can on the DB device to improve performance. If the DB device fills up, metadata will spill back onto the primary device (where it would have been otherwise). Again, it is only helpful to provision a DB device if it is faster than the primary device. You can have multiple Note Note that Rook only supports three naming convention for a given template:
The bluestore partition has the following reference combinations supported by the ceph-volume utility:
To determine the size of the metadata block follow the official Ceph sizing guide. With the present configuration, each OSD will have its main block allocated a 10GB device as well a 5GB device to act as a bluestore database. "},{"location":"CRDs/Cluster/stretch-cluster/","title":"Stretch Storage Cluster","text":"For environments that only have two failure domains available where data can be replicated, consider the case where one failure domain is down and the data is still fully available in the remaining failure domain. To support this scenario, Ceph has integrated support for \"stretch\" clusters. Rook requires three zones. Two zones (A and B) will each run all types of Rook pods, which we call the \"data\" zones. Two mons run in each of the two data zones, while two replicas of the data are in each zone for a total of four data replicas. The third zone (arbiter) runs a single mon. No other Rook or Ceph daemons need to be run in the arbiter zone. For this example, we assume the desired failure domain is a zone. Another failure domain can also be specified with a known topology node label which is already being used for OSD failure domains. For more details, see the Stretch Cluster design doc. "},{"location":"CRDs/Cluster/external-cluster/advance-external/","title":"External Cluster Options","text":""},{"location":"CRDs/Cluster/external-cluster/advance-external/#nfs-storage","title":"NFS storage","text":"Rook suggests a different mechanism for making use of an NFS service running on the external Ceph standalone cluster, if desired. "},{"location":"CRDs/Cluster/external-cluster/advance-external/#exporting-rook-to-another-cluster","title":"Exporting Rook to another cluster","text":"If you have multiple K8s clusters running, and want to use the local
Important For other clusters to connect to storage in this cluster, Rook must be configured with a networking configuration that is accessible from other clusters. Most commonly this is done by enabling host networking in the CephCluster CR so the Ceph daemons will be addressable by their host IPs. "},{"location":"CRDs/Cluster/external-cluster/advance-external/#admin-privileges","title":"Admin privileges","text":"If the cluster needs the admin keyring in order to be configured, update the admin key. Note Sharing the admin key with the external cluster is not generally recommended.
After restarting the rook operator (and the toolbox if in use), rook will configure ceph with admin privileges. "},{"location":"CRDs/Cluster/external-cluster/advance-external/#connect-to-an-external-object-store","title":"Connect to an External Object Store","text":"Create the external object store CR to configure connection to external gateways. Consume the S3 Storage, in two different ways:
Hint For more details see the Object Store topic "},{"location":"CRDs/Cluster/external-cluster/consumer-import/","title":"Import Ceph configuration to the Rook consumer cluster","text":""},{"location":"CRDs/Cluster/external-cluster/consumer-import/#installation-types","title":"Installation types","text":"Install Rook in the the consumer cluster, either with Helm or the manifests. "},{"location":"CRDs/Cluster/external-cluster/consumer-import/#helm-installation","title":"Helm Installation","text":"To install with Helm, the rook cluster helm chart will configure the necessary resources for the external cluster with the example "},{"location":"CRDs/Cluster/external-cluster/consumer-import/#manifest-installation","title":"Manifest Installation","text":"If not installing with Helm, here are the steps to install with manifests.
An external cluster is a Ceph configuration that is managed outside of the local K8s cluster. The external cluster could be managed by cephadm, or it could be another Rook cluster that is configured to allow the access (usually configured with host networking). In external mode, Rook will provide the configuration for the CSI driver and other basic resources that allows your applications to connect to Ceph in the external cluster. "},{"location":"CRDs/Cluster/external-cluster/external-cluster/#external-configuration","title":"External configuration","text":"
Create the desired types of storage in the provider Ceph cluster:
1) Export config from the Provider Ceph cluster. Configuration, such as a Ceph keyring and mon endpoints, must be exported by the Ceph admin to allow connection to the Ceph cluster. 2) Import config to the Rook consumer cluster. The configuration exported from the Ceph cluster is imported to Rook to provide the needed connection details. "},{"location":"CRDs/Cluster/external-cluster/external-cluster/#advance-options","title":"Advanced Options","text":"
In order to configure an external Ceph cluster with Rook, we need to extract some information in order to connect to that cluster. "},{"location":"CRDs/Cluster/external-cluster/provider-export/#1-create-all-users-and-keys","title":"1. Create all users and keys","text":"Run the python script create-external-cluster-resources.py for creating all users and keys.
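For illustration, a provider-side invocation might look like the following; it only uses flags documented above, and the pool, filesystem, endpoint, and namespace values are placeholders.

```bash
# Run on a host with admin access to the provider Ceph cluster (example values).
python3 create-external-cluster-resources.py \
  --rbd-data-pool-name replicapool \
  --cephfs-filesystem-name myfs \
  --rgw-endpoint 192.168.39.182:8080 \
  --namespace rook-ceph \
  --format bash
```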
Example Output: "},{"location":"CRDs/Cluster/external-cluster/provider-export/#examples-on-utilizing-advance-flags","title":"Examples on utilizing Advance flags","text":""},{"location":"CRDs/Cluster/external-cluster/provider-export/#config-file","title":"Config-file","text":"Use the config file to set the user configuration file, add the flag Example:
Note You can use both the config file and other arguments at the same time. Priority: command-line arguments take priority over config.ini file values, and config.ini file values take priority over default values. "},{"location":"CRDs/Cluster/external-cluster/provider-export/#multi-tenancy","title":"Multi-tenancy","text":"To enable multi-tenancy, run the script with the Note Restricting the csi-users per pool and per cluster will require creating new csi-users and new secrets for those csi-users. So apply these secrets only to new "},{"location":"CRDs/Cluster/external-cluster/provider-export/#rgw-multisite","title":"RGW Multisite","text":"Pass the "},{"location":"CRDs/Cluster/external-cluster/provider-export/#topology-based-provisioning","title":"Topology Based Provisioning","text":"Enable Topology Based Provisioning for RBD pools by passing For more details, see the Topology-Based Provisioning "},{"location":"CRDs/Cluster/external-cluster/provider-export/#connect-to-v2-mon-port","title":"Connect to v2 mon port","text":"If encryption or compression on the wire is needed, specify the Applications like Kafka will have a deployment with multiple running instances. Each service instance will create a new claim and is expected to be located in a different zone. Since the application has its own redundant instances, there is no requirement for redundancy at the data layer. A storage class is created that will provision storage from replica-1 Ceph pools that are located in each of the separate zones. "},{"location":"CRDs/Cluster/external-cluster/topology-for-external-mode/#configuration-flags","title":"Configuration Flags","text":"Add the required flags to the script:
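A hypothetical run combining the topology flags described above might look like this; the pool names and zone values are placeholders.

```bash
# Generate exports for three zone-local pools (example values only).
python3 create-external-cluster-resources.py \
  --rbd-data-pool-name replicapool \
  --topology-pools pool-a,pool-b,pool-c \
  --topology-failure-domain-label zone \
  --topology-failure-domain-values zone-a,zone-b,zone-c \
  --format bash
```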
The import script will then create a new storage class named Determine the names of the zones (or other failure domains) in the Ceph CRUSH map where each of the pools will have corresponding CRUSH rules. Create a zone-specific CRUSH rule for each of the pools. For example, this is a CRUSH rule for Create replica-1 pools based on each of the CRUSH rules from the previous step. Each pool must be created with a CRUSH rule to limit the pool to OSDs in a specific zone. Note Disable the ceph warning for replica-1 pools: Determine the zones in the K8s cluster that correspond to each of the pools in the Ceph pool. The K8s nodes require labels as defined with the OSD Topology labels. Some environments already have nodes labeled in zones. Set the topology labels on the nodes if not already present. Set the flags of the external cluster configuration script based on the pools and failure domains. --topology-pools=pool-a,pool-b,pool-c --topology-failure-domain-label=zone --topology-failure-domain-values=zone-a,zone-b,zone-c Then run the python script to generate the settings which will be imported to the Rook cluster: Output: "},{"location":"CRDs/Cluster/external-cluster/topology-for-external-mode/#kubernetes-cluster","title":"Kubernetes Cluster","text":"Check the external cluster is created and connected as per the installation steps. Review the new storage class: Set two values in the rook-ceph-operator-config configmap:
The topology-based storage class is ready to be consumed! Create a PVC from the When upgrading an external cluster, Ceph and Rook versions will be updated independently. During the Rook update, the external provider cluster connection also needs to be updated with any settings and permissions for new features. "},{"location":"CRDs/Cluster/external-cluster/upgrade-external/#upgrade-the-cluster-to-consume-latest-ceph-user-caps-mandatory","title":"Upgrade the cluster to consume latest ceph user caps (mandatory)","text":"Upgrading the cluster would be different for restricted caps and non-restricted caps,
Some Rook upgrades may require re-running the import steps, or may introduce new external cluster features that can be most easily enabled by re-running the import steps. To re-run the import steps with new options, the python script should be re-run using the same configuration options that were used for past invocations, plus the configurations that are being added or modified. Starting with Rook v1.15, the script stores the configuration in the external-cluster-user-command configmap for easy future reference.
external-cluster-user-command ConfigMap:
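One way to read back the stored configuration is to query that configmap; the `args` data key shown here is an assumption based on recent releases and may differ.

```bash
# Print the arguments used by the last run of the export script.
kubectl -n rook-ceph get configmap external-cluster-user-command \
  -o jsonpath='{.data.args}'
```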
Warning If the last-applied config is unavailable, run the current version of the script again using previously-applied config and CLI flags. Failure to reuse the same configuration options when re-invoking the python script can result in unexpected changes when re-running the import script. "},{"location":"CRDs/Object-Storage/ceph-object-realm-crd/","title":"CephObjectRealm CRD","text":"Rook allows creation of a realm in a Ceph Object Multisite configuration through a CRD. The following settings are available for Ceph object store realms. "},{"location":"CRDs/Object-Storage/ceph-object-realm-crd/#example","title":"Example","text":" "},{"location":"CRDs/Object-Storage/ceph-object-realm-crd/#settings","title":"Settings","text":""},{"location":"CRDs/Object-Storage/ceph-object-realm-crd/#metadata","title":"Metadata","text":"
Rook allows creation and customization of object stores through the custom resource definitions (CRDs). The following settings are available for Ceph object stores. "},{"location":"CRDs/Object-Storage/ceph-object-store-crd/#example","title":"Example","text":""},{"location":"CRDs/Object-Storage/ceph-object-store-crd/#erasure-coded","title":"Erasure Coded","text":"Erasure coded pools can only be used with Note This sample requires at least 3 bluestore OSDs, with each OSD located on a different node. The OSDs must be located on different nodes, because the "},{"location":"CRDs/Object-Storage/ceph-object-store-crd/#object-store-settings","title":"Object Store Settings","text":""},{"location":"CRDs/Object-Storage/ceph-object-store-crd/#metadata","title":"Metadata","text":"
The pools allow all of the settings defined in the Block Pool CRD spec. For more details, see the Block Pool CRD settings. In the example above, there must be at least three hosts (size 3) and at least three devices (2 data + 1 coding chunks) in the cluster. When the
The Currently only OpenStack Keystone is supported. "},{"location":"CRDs/Object-Storage/ceph-object-store-crd/#keystone-settings","title":"Keystone Settings","text":"The keystone authentication can be configured in the Note: With this example configuration S3 is implicitly enabled even though it is not enabled in the The following options can be configured in the
The protocols section is divided into two parts:
In the
In the
The gateway settings correspond to the RGW daemon settings.
The zone settings allow the object store to join custom created ceph-object-zone.
A common use case that requires configuring hosting is allowing virtual host-style bucket access. This use case is discussed in more detail in Rook object storage docs.
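As a hedged sketch, virtual host-style access is typically enabled by listing DNS names under the object store's hosting section; the store name and domain below are placeholders.

```bash
# Add a DNS name that the RGW should answer for (adjust store name and domain).
kubectl -n rook-ceph patch cephobjectstore my-store --type merge \
  -p '{"spec":{"hosting":{"dnsNames":["my-store.example.com"]}}}'
```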
Note For DNS names that support wildcards, do not include wildcards. E.g., use Rook provides a default Rook will not overwrite an existing Rook will by default monitor the state of the object store endpoints. The following CRD settings are available:
Here is a complete example: You can monitor the health of a CephObjectStore by monitoring the gateway deployments it creates. The primary deployment created is named Ceph RGW supports Server Side Encryption as defined in the AWS S3 protocol with three different modes: AWS-SSE:C, AWS-SSE:KMS and AWS-SSE:S3. The last two modes require a Key Management System (KMS) like HashiCorp Vault. Currently, Vault is the only supported KMS backend for CephObjectStore. Refer to the Vault KMS section for details about Vault. If these settings are defined, then RGW will establish a connection to Vault whenever an S3 client sends a request with Server Side Encryption. Ceph's Vault documentation has more details. The For RGW, please note the following:
During deletion of a CephObjectStore resource, Rook protects against accidental or premature destruction of user data by blocking deletion if there are any object buckets in the object store being deleted. Buckets may have been created by users or by ObjectBucketClaims. For deletion to be successful, all buckets in the object store must be removed. This may require manual deletion or removal of all ObjectBucketClaims. Alternately, the Rook will warn about which buckets are blocking deletion in three ways:
If the CephObjectStore is configured in a multisite setup the above conditions are applicable only to stores that belong to a single master zone. Otherwise the conditions are ignored. Even if the store is removed the user can access the data from a peer object store. "},{"location":"CRDs/Object-Storage/ceph-object-store-user-crd/","title":"CephObjectStoreUser CRD","text":"Rook allows creation and customization of object store users through the custom resource definitions (CRDs). The following settings are available for Ceph object store users. "},{"location":"CRDs/Object-Storage/ceph-object-store-user-crd/#example","title":"Example","text":" "},{"location":"CRDs/Object-Storage/ceph-object-store-user-crd/#object-store-user-settings","title":"Object Store User Settings","text":""},{"location":"CRDs/Object-Storage/ceph-object-store-user-crd/#metadata","title":"Metadata","text":"
Rook allows creation of zones in a ceph cluster for a Ceph Object Multisite configuration through a CRD. The following settings are available for Ceph object store zones. "},{"location":"CRDs/Object-Storage/ceph-object-zone-crd/#example","title":"Example","text":" "},{"location":"CRDs/Object-Storage/ceph-object-zone-crd/#settings","title":"Settings","text":""},{"location":"CRDs/Object-Storage/ceph-object-zone-crd/#metadata","title":"Metadata","text":"
The pools allow all of the settings defined in the Pool CRD spec. For more details, see the Pool CRD settings. In the example above, there must be at least three hosts (size 3) and at least three devices (2 data + 1 coding chunks) in the cluster. "},{"location":"CRDs/Object-Storage/ceph-object-zone-crd/#spec","title":"Spec","text":"
Rook allows creation of zone groups in a Ceph Object Multisite configuration through a CRD. The following settings are available for Ceph object store zone groups. "},{"location":"CRDs/Object-Storage/ceph-object-zonegroup-crd/#example","title":"Example","text":" "},{"location":"CRDs/Object-Storage/ceph-object-zonegroup-crd/#settings","title":"Settings","text":""},{"location":"CRDs/Object-Storage/ceph-object-zonegroup-crd/#metadata","title":"Metadata","text":"
Rook allows creation and customization of shared filesystems through the custom resource definitions (CRDs). The following settings are available for Ceph filesystems. "},{"location":"CRDs/Shared-Filesystem/ceph-filesystem-crd/#examples","title":"Examples","text":""},{"location":"CRDs/Shared-Filesystem/ceph-filesystem-crd/#replicated","title":"Replicated","text":"Note This sample requires at least 1 OSD per node, with each OSD located on 3 different nodes. Each OSD must be located on a different node, because both of the defined pools set the The (These definitions can also be found in the Erasure coded pools require the OSDs to use Note This sample requires at least 3 bluestore OSDs, with each OSD located on a different node. The OSDs must be located on different nodes, because the IMPORTANT: For erasure coded pools, we have to create a replicated pool as the default data pool and an erasure-coded pool as a secondary pool. (These definitions can also be found in the
The pools allow all of the settings defined in the Pool CRD spec. For more details, see the Pool CRD settings. In the example above, there must be at least three hosts (size 3) and at least eight devices (6 data + 2 coding chunks) in the cluster.
The metadata server settings correspond to the MDS daemon settings.
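For illustration only, a minimal CephFilesystem with explicit metadata server settings might look like the following; the names, counts, and resource values are assumptions, not recommendations from this document.

```bash
# Hedged sketch: one active MDS with a standby and explicit resource requests.
kubectl create -f - <<'EOF'
apiVersion: ceph.rook.io/v1
kind: CephFilesystem
metadata:
  name: myfs
  namespace: rook-ceph
spec:
  metadataPool:
    replicated:
      size: 3
  dataPools:
    - name: replicated
      replicated:
        size: 3
  metadataServer:
    activeCount: 1
    activeStandby: true
    resources:
      requests:
        cpu: "1"
        memory: "4Gi"
      limits:
        memory: "4Gi"
EOF
```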
The format of the resource requests/limits structure is the same as described in the Ceph Cluster CRD documentation. If the memory resource limit is declared Rook will automatically set the MDS configuration In order to provide the best possible experience running Ceph in containers, Rook internally recommends the memory for MDS daemons to be at least 4096MB. If a user configures a limit or request value that is too low, Rook will still run the pod(s) and print a warning to the operator log. "},{"location":"CRDs/Shared-Filesystem/ceph-fs-mirror-crd/","title":"CephFilesystemMirror CRD","text":"This guide assumes you have created a Rook cluster as explained in the main Quickstart guide Rook allows creation and updating the fs-mirror daemon through the custom resource definitions (CRDs). CephFS will support asynchronous replication of snapshots to a remote (different Ceph cluster) CephFS file system via cephfs-mirror tool. Snapshots are synchronized by mirroring snapshot data followed by creating a snapshot with the same name (for a given directory on the remote file system) as the snapshot being synchronized. For more information about user management and capabilities see the Ceph docs. "},{"location":"CRDs/Shared-Filesystem/ceph-fs-mirror-crd/#creating-daemon","title":"Creating daemon","text":"To get you started, here is a simple example of a CRD to deploy an cephfs-mirror daemon. "},{"location":"CRDs/Shared-Filesystem/ceph-fs-mirror-crd/#settings","title":"Settings","text":"If any setting is unspecified, a suitable default will be used automatically. "},{"location":"CRDs/Shared-Filesystem/ceph-fs-mirror-crd/#filesystemmirror-metadata","title":"FilesystemMirror metadata","text":"
In order to configure mirroring peers, please refer to the CephFilesystem documentation. "},{"location":"CRDs/Shared-Filesystem/ceph-fs-subvolumegroup-crd/","title":"FilesystemSubVolumeGroup CRD","text":"Info This guide assumes you have created a Rook cluster as explained in the main Quickstart guide Rook allows creation of Ceph Filesystem SubVolumeGroups through the custom resource definitions (CRDs). Filesystem subvolume groups are an abstraction for a directory level higher than Filesystem subvolumes to effect policies (e.g., File layouts) across a set of subvolumes. For more information about CephFS volume, subvolumegroup and subvolume refer to the Ceph docs. "},{"location":"CRDs/Shared-Filesystem/ceph-fs-subvolumegroup-crd/#creating-daemon","title":"Creating daemon","text":"To get you started, here is a simple example of a CRD to create a subvolumegroup on the CephFilesystem \"myfs\". "},{"location":"CRDs/Shared-Filesystem/ceph-fs-subvolumegroup-crd/#settings","title":"Settings","text":"If any setting is unspecified, a suitable default will be used automatically. "},{"location":"CRDs/Shared-Filesystem/ceph-fs-subvolumegroup-crd/#cephfilesystemsubvolumegroup-metadata","title":"CephFilesystemSubVolumeGroup metadata","text":"
Note Only one out of (export, distributed, random) can be set at a time. By default pinning is set with value: This page contains information regarding the CI configuration used for the Rook project to test, build and release the project. "},{"location":"Contributing/ci-configuration/#secrets","title":"Secrets","text":"
You can choose any Kubernetes install of your choice. The test framework only depends on The developers of Rook are working on Minikube and thus it is the recommended way to quickly get Rook up and running. Minikube should not be used for production but the Rook authors consider it great for development. While other tools such as k3d/kind are great, users have faced issues deploying Rook. Always use a virtual machine when testing Rook. Never use your host system where local devices may mistakenly be consumed. To install Minikube follow the official guide. It is recommended to use the kvm2 driver when running on a Linux machine and the hyperkit driver when running on a MacOS. Both allow to create and attach additional disks to the virtual machine. This is required for the Ceph OSD to consume one drive. We don't recommend any other drivers for Rook. You will need a Minikube version 1.23 or higher. Starting the cluster on Minikube is as simple as running: It is recommended to install a Docker client on your host system too. Depending on your operating system follow the official guide. Stopping the cluster and destroying the Minikube virtual machine can be done with: "},{"location":"Contributing/development-environment/#install-helm","title":"Install Helm","text":"Use helm.sh to install Helm and set up Rook charts defined under
Note These helper scripts depend on some artifacts under the Note If Helm is not available in your Developers can test quickly their changes by building and using the local Rook image on their minikube cluster. 1) Set the local Docker environment to use minikube: 2) Build your local Rook image. The following command will generate a Rook image labeled in the format 3) Tag the generated image as 4) Create a Rook cluster in minikube, or if the Rook cluster is already configured, apply the new operator image by restarting the operator. "},{"location":"Contributing/development-environment/#creating-a-dev-cluster","title":"Creating a dev cluster","text":"To accelerate the development process, users have the option to employ the script located at Thank you for your time and effort to help us improve Rook! Here are a few steps to get started. If you have any questions, don't hesitate to reach out to us on our Slack dev channel. Sign up for the Rook Slack here. "},{"location":"Contributing/development-flow/#prerequisites","title":"Prerequisites","text":"
Navigate to http://github.com/rook/rook and click the \"Fork\" button. "},{"location":"Contributing/development-flow/#clone-your-fork","title":"Clone Your Fork","text":"In a console window: "},{"location":"Contributing/development-flow/#add-upstream-remote","title":"Add Upstream Remote","text":"Add the upstream remote to your local git: Two remotes should be available: Before building the project, fetch the remotes to synchronize tags. Tip If in a Linux environment and Hint Make will automatically pick up For consistent whitespace and other formatting in
Tip VS Code will prompt you automatically with some recommended extensions to install, such as Markdown, Go, YAML validator, and ShellCheck. VS Code will automatically use the recommended settings in the To self-assign an issue that is not yet assigned to anyone else, add a comment in the issue with The overall source code layout is summarized: "},{"location":"Contributing/development-flow/#development","title":"Development","text":"To submit a change, create a branch in your fork and then submit a pull request (PR) from the branch. "},{"location":"Contributing/development-flow/#design-document","title":"Design Document","text":"For new features of significant scope and complexity, a design document is recommended before work begins on the implementation. Create a design document if:
For smaller, straightforward features and bug fixes, there is no need for a design document. Authoring a design document has many advantages:
Note Writing code to prototype the feature while working on the design may be very useful to help flesh out the approach. A design document should be written as a markdown file in the design folder. Follow the process outlined in the design template. There are many examples of previous design documents in that folder. Submit a pull request for the design to be discussed and approved by the community, just like any other change to the repository. "},{"location":"Contributing/development-flow/#create-a-branch","title":"Create a Branch","text":"From a console, create a new branch based on your fork where changes will be developed: "},{"location":"Contributing/development-flow/#updating-your-fork","title":"Updating Your Fork","text":"During the development lifecycle, keep your branch(es) updated with the latest upstream master. As others on the team push changes, rebase your commits on top of the latest. This avoids unnecessary merge commits and keeps the commit history clean. Whenever an update is needed to the local repository, never perform a merge, always rebase. This will avoid merge commits in the git history. If there are any modified files, first stash them with Rebasing is a very powerful feature of Git. You need to understand how it works to avoid risking losing your work. Read about it in the Git documentation. Briefly, rebasing does the following:
After a feature or bug fix is completed in your branch, open a Pull Request (PR) to the upstream Rook repository. Before opening the PR:
All pull requests must pass all continuous integration (CI) tests before they can be merged. These tests automatically run against every pull request. The results of these tests along with code review feedback determine whether your request will be merged. "},{"location":"Contributing/development-flow/#unit-tests","title":"Unit Tests","text":"From the root of your local Rook repo execute the following to run all of the unit tests: Unit tests for individual packages can be run with the standard To see code coverage on the packages that you changed, view the "},{"location":"Contributing/development-flow/#writing-unit-tests","title":"Writing unit tests","text":"Good unit tests start with easily testable code. Small chunks (\"units\") of code can be easily tested for every possible input. Higher-level code units that are built from smaller, already-tested units can more easily verify that the units are combined together correctly. Common cases that may need tests:
Rook's upstream continuous integration (CI) tests will run integration tests against your changes automatically. "},{"location":"Contributing/development-flow/#tmate-session","title":"Tmate Session","text":"Integration tests will be run in Github actions. If an integration test fails, enable a tmate session to troubleshoot the issue by one of the following steps:
See the action details for an ssh connection to the Github runner. "},{"location":"Contributing/development-flow/#commit-structure","title":"Commit structure","text":"Rook maintainers value clear, lengthy and explanatory commit messages. Requirements for commits:
An example acceptable commit message: "},{"location":"Contributing/development-flow/#commit-history","title":"Commit History","text":"To prepare your branch to open a PR, the minimal number of logical commits is preferred to maintain a clean commit history. Most commonly a PR will include a single commit where all changes are squashed, although sometimes there will be multiple logical commits. To squash multiple commits or make other changes to the commit history, use Once your commit history is clean, ensure the branch is rebased on the latest upstream before opening the PR. "},{"location":"Contributing/development-flow/#submitting","title":"Submitting","text":"Go to the Rook github to open the PR. If you have pushed recently to a branch, you will see an obvious link to open the PR. If you have not pushed recently, go to the Pull Request tab and select your fork and branch for the PR. After the PR is open, make changes simply by pushing new commits. The PR will track the changes in your fork and rerun the CI automatically. Always open a pull request against master. Never open a pull request against a released branch (e.g. release-1.2) unless working directly with a maintainer. "},{"location":"Contributing/development-flow/#backporting-to-a-release-branch","title":"Backporting to a Release Branch","text":"The flow for getting a fix into a release branch is:
The Ceph manager modules are written in Python and can be individually and dynamically loaded from the manager. We can take advantage of this feature in order to test changes and to debug issues in the modules. This is just a hack to debug any modification in the manager modules. The Make modifications directly in the manager module and reload:
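As a loosely sketched example of that workflow (the module name rook, the pod label, and the in-container path are assumptions for illustration, not documented values):

```bash
# Copy an edited module into the active mgr pod, then toggle the module from the
# toolbox so the manager reloads it. Adjust the module name, label, and path.
MGR_POD=$(kubectl -n rook-ceph get pod -l app=rook-ceph-mgr \
  -o jsonpath='{.items[0].metadata.name}')
kubectl cp ./module.py "rook-ceph/${MGR_POD}:/usr/share/ceph/mgr/rook/module.py"
ceph mgr module disable rook
ceph mgr module enable rook
```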
We are using MkDocs with the Material for MkDocs theme. "},{"location":"Contributing/documentation/#markdown-extensions","title":"Markdown Extensions","text":"Thanks to the MkDocs Material theme we have certain \"markdown syntax extensions\" available:
For a whole list of features Reference - Material for MkDocs. "},{"location":"Contributing/documentation/#local-preview","title":"Local Preview","text":"To locally preview the documentation, you can run the following command (in the root of the repository): When previewing, now you can navigate your browser to http://127.0.0.1:8000/ to open the preview of the documentation. Hint Should you encounter a helm-docs is a tool that generates the documentation for a helm chart automatically. If there are changes in the helm chart, the developer needs to run The integration tests run end-to-end tests on Rook in a running instance of Kubernetes. The framework includes scripts for starting Kubernetes so users can quickly spin up a Kubernetes cluster. The tests are generally designed to install Rook, run tests, and uninstall Rook. The CI runs the integration tests with each PR and each master or release branch build. If the tests fail in a PR, access the tmate for debugging. This document will outline the steps to run the integration tests locally in a minikube environment, should the CI not be sufficient to troubleshoot. Hint The CI is generally much simpler to troubleshoot than running these tests locally. Running the tests locally is rarely necessary. Warning A risk of running the tests locally is that a local disk is required during the tests. If not running in a VM, your laptop or other test machine could be destroyed. "},{"location":"Contributing/rook-test-framework/#install-minikube","title":"Install Minikube","text":"Follow Rook's developer guide to install Minikube. "},{"location":"Contributing/rook-test-framework/#build-rook-image","title":"Build Rook image","text":"Now that the Kubernetes cluster is running we need to populate the Docker registry to allow local image builds to be easily used inside Minikube.
Tag the newly built images to "},{"location":"Contributing/rook-test-framework/#run-integration-tests","title":"Run integration tests","text":"Some settings are available to run the tests under different environments. The settings are all configured with environment variables. See environment.go for the available environment variables. Set the following variables: Set Hint If using the Warning The integration tests erase the contents of To run a specific suite, specify the suite name: After running tests, see test logs under To run specific tests inside a suite: Info Only the golang test suites are documented to run locally. Canary and other tests have only ever been supported in the CI. "},{"location":"Contributing/rook-test-framework/#running-tests-on-openshift","title":"Running tests on OpenShift","text":"
OpenShift adds a number of security and other enhancements to Kubernetes. In particular, security context constraints allow the cluster admin to define exactly which permissions are allowed to pods running in the cluster. You will need to define those permissions that allow the Rook pods to run. The settings for Rook in OpenShift are described below, and are also included in the example yaml files:
To create an OpenShift cluster, the commands basically include: "},{"location":"Getting-Started/ceph-openshift/#helm-installation","title":"Helm Installation","text":"Configuration required for Openshift is automatically created by the Helm charts, such as the SecurityContextConstraints. See the Rook Helm Charts. "},{"location":"Getting-Started/ceph-openshift/#rook-privileges","title":"Rook Privileges","text":"To orchestrate the storage platform, Rook requires the following access in the cluster:
Before starting the Rook operator or cluster, create the security context constraints needed by the Rook pods. The following yaml is found in Hint Older versions of OpenShift may require Important to note is that if you plan on running Rook in namespaces other than the default To create the scc you will need a privileged account: We will create the security context constraints with the operator in the next section. "},{"location":"Getting-Started/ceph-openshift/#rook-settings","title":"Rook Settings","text":"There are some Rook settings that also need to be adjusted to work in OpenShift. "},{"location":"Getting-Started/ceph-openshift/#operator-settings","title":"Operator Settings","text":"There is an environment variable that needs to be set in the operator spec that will allow Rook to run in OpenShift clusters.
Now create the security context constraints and the operator: "},{"location":"Getting-Started/ceph-openshift/#cluster-settings","title":"Cluster Settings","text":"The cluster settings in
In OpenShift, ports less than 1024 cannot be bound. In the object store CRD, ensure the port is modified to meet this requirement. You can expose a different port such as A sample object store can be created with these settings: "},{"location":"Getting-Started/ceph-teardown/","title":"Cleanup","text":"Rook provides the following clean up options:
To tear down the cluster, the following resources need to be cleaned up:
If the default namespaces or paths such as If tearing down a cluster frequently for development purposes, it is instead recommended to use an environment such as Minikube that can easily be reset without worrying about any of these steps. "},{"location":"Getting-Started/ceph-teardown/#delete-the-block-and-file-artifacts","title":"Delete the Block and File artifacts","text":"First clean up the resources from applications that consume the Rook storage. These commands will clean up the resources from the example application block and file walkthroughs (unmount volumes, delete volume claims, etc). Important After applications have been cleaned up, the Rook cluster can be removed. It is important to delete applications before removing the Rook operator and Ceph cluster. Otherwise, volumes may hang and nodes may require a restart. "},{"location":"Getting-Started/ceph-teardown/#delete-the-cephcluster-crd","title":"Delete the CephCluster CRD","text":"Warning DATA WILL BE PERMANENTLY DELETED AFTER DELETING THE
Note The cleanup jobs might not start if the resources created on top of Rook Cluster are not deleted completely. See deleting block and file artifacts "},{"location":"Getting-Started/ceph-teardown/#delete-the-operator-resources","title":"Delete the Operator Resources","text":"Remove the Rook operator, RBAC, and CRDs, and the "},{"location":"Getting-Started/ceph-teardown/#delete-the-data-on-hosts","title":"Delete the data on hosts","text":"Attention The final cleanup step requires deleting files on each host in the cluster. All files under the If the Connect to each machine and delete the namespace directory under Disks on nodes used by Rook for OSDs can be reset to a usable state. Note that these scripts are not one-size-fits-all. Please use them with discretion to ensure you are not removing data unrelated to Rook. A single disk can usually be cleared with some or all of the steps below. Ceph can leave LVM and device mapper data on storage drives, preventing them from being redeployed. These steps can clean former Ceph drives for reuse. Note that this only needs to be run once on each node. If you have only one Rook cluster and all Ceph disks are being wiped, run the following command. If disks are still reported locked, rebooting the node often helps clear LVM-related holds on disks. If there are multiple Ceph clusters and some disks are not wiped yet, it is necessary to manually determine which disks map to which device mapper devices. "},{"location":"Getting-Started/ceph-teardown/#troubleshooting","title":"Troubleshooting","text":"The most common issue cleaning up the cluster is that the If a pod is still terminating, consider forcefully terminating the pod ( If the cluster CRD still exists even though it has been deleted, see the next section on removing the finalizer. "},{"location":"Getting-Started/ceph-teardown/#removing-the-cluster-crd-finalizer","title":"Removing the Cluster CRD Finalizer","text":"When a Cluster CRD is created, a finalizer is added automatically by the Rook operator. The finalizer will allow the operator to ensure that before the cluster CRD is deleted, all block and file mounts will be cleaned up. Without proper cleanup, pods consuming the storage will be hung indefinitely until a system reboot. The operator is responsible for removing the finalizer after the mounts have been cleaned up. If for some reason the operator is not able to remove the finalizer (i.e., the operator is not running anymore), delete the finalizer manually with the following command: If the namespace is still stuck in Terminating state, check which resources are holding up the deletion and remove their finalizers as well: "},{"location":"Getting-Started/ceph-teardown/#remove-critical-resource-finalizers","title":"Remove critical resource finalizers","text":"Rook adds a finalizer The operator is responsible for removing the finalizers when a CephCluster is deleted. If the operator is not able to remove the finalizers (i.e., the operator is not running anymore), remove the finalizers manually: "},{"location":"Getting-Started/ceph-teardown/#force-delete-resources","title":"Force Delete Resources","text":"To keep your data safe in the cluster, Rook disallows deleting critical cluster resources by default. To override this behavior and force delete a specific custom resource, add the annotation For example, run the following commands to clean the Once the cleanup job is completed successfully, Rook will remove the finalizers from the deleted custom resource. 
This cleanup is supported only for the following custom resources:

| Custom Resource | Ceph Resources to be cleaned up |
| --- | --- |
| CephFilesystemSubVolumeGroup | CSI stored RADOS OMAP details for pvc/volumesnapshots, subvolume snapshots, subvolume clones, subvolumes |
| CephBlockPoolRadosNamespace | Images and snapshots in the RADOS namespace |
| CephBlockPool | Images and snapshots in the BlockPool |

"},{"location":"Getting-Started/example-configurations/","title":"Example Configurations","text":"Rook and Ceph can be configured in multiple ways to provide block devices, shared filesystem volumes or object storage in a kubernetes namespace. While several examples are provided to simplify storage setup, settings are available to optimize various production environments. See the example yaml files folder for all the rook/ceph setup example spec files. "},{"location":"Getting-Started/example-configurations/#common-resources","title":"Common Resources","text":"The first step to deploy Rook is to create the CRDs and other common resources. The configuration for these resources will be the same for most deployments. The crds.yaml and common.yaml files set these resources up. The examples all assume the operator and all Ceph daemons will be started in the same namespace. If deploying the operator in a separate namespace, see the comments throughout After the common resources are created, the next step is to create the Operator deployment. Several spec file examples are provided in this directory:
Settings for the operator are configured through environment variables on the operator deployment. The individual settings are documented in operator.yaml. "},{"location":"Getting-Started/example-configurations/#cluster-crd","title":"Cluster CRD","text":"Now that the operator is running, create the Ceph storage cluster with the CephCluster CR. This CR contains the most critical settings that will influence how the operator configures the storage. It is important to understand the various ways to configure the cluster. These examples represent several different ways to configure the storage.
See the Cluster CRD topic for more details and more examples for the settings. "},{"location":"Getting-Started/example-configurations/#setting-up-consumable-storage","title":"Setting up consumable storage","text":"Now we are ready to setup Block, Shared Filesystem or Object storage in the Rook cluster. These storage types are respectively created with the CephBlockPool, CephFilesystem and CephObjectStore CRs. "},{"location":"Getting-Started/example-configurations/#block-devices","title":"Block Devices","text":"Ceph provides raw block device volumes to pods. Each example below sets up a storage class which can then be used to provision a block device in application pods. The storage class is defined with a Ceph pool which defines the level of data redundancy in Ceph:
The block storage classes are found in the examples directory:
See the CephBlockPool CRD topic for more block storage settings. "},{"location":"Getting-Started/example-configurations/#shared-filesystem","title":"Shared Filesystem","text":"Ceph filesystem (CephFS) allows the user to mount a shared posix-compliant folder into one or more application pods. This storage is similar to NFS shared storage or CIFS shared folders, as explained here. Shared Filesystem storage contains configurable pools for different scenarios:
Dynamic provisioning is possible with the CSI driver. The storage class for shared filesystems is found in the See the Shared Filesystem CRD topic for more details on the settings. "},{"location":"Getting-Started/example-configurations/#object-storage","title":"Object Storage","text":"Ceph supports storing blobs of data called objects that support HTTP(s)-type get/put/post and delete semantics. This storage is similar to AWS S3 storage, for example. Object storage contains multiple pools that can be configured for different scenarios:
See the Object Store CRD topic for more details on the settings. "},{"location":"Getting-Started/example-configurations/#object-storage-user","title":"Object Storage User","text":"
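A minimal user CR might look like the following; the user, namespace, and store names are placeholders.

```bash
# Hedged example: create an object store user for a store named "my-store".
kubectl create -f - <<'EOF'
apiVersion: ceph.rook.io/v1
kind: CephObjectStoreUser
metadata:
  name: my-user
  namespace: rook-ceph
spec:
  store: my-store
  displayName: "my display name"
EOF
```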
The Ceph operator also runs an object store bucket provisioner which can grant access to existing buckets or dynamically provision new buckets.
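For illustration, a bucket request through an ObjectBucketClaim might look like this; the claim name, bucket prefix, and storage class name are placeholders.

```bash
# Hedged example: request a dynamically provisioned bucket via an OBC.
kubectl create -f - <<'EOF'
apiVersion: objectbucket.io/v1alpha1
kind: ObjectBucketClaim
metadata:
  name: ceph-bucket
  namespace: default
spec:
  generateBucketName: ceph-bkt
  storageClassName: rook-ceph-bucket
EOF
```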
The CephBlockPool CRD is used by Rook to allow creation and customization of storage pools. "},{"location":"Getting-Started/glossary/#cephblockpoolradosnamespace-crd","title":"CephBlockPoolRadosNamespace CRD","text":"The CephBlockPoolRadosNamespace CRD is used by Rook to allow creation of Ceph RADOS Namespaces. "},{"location":"Getting-Started/glossary/#cephclient-crd","title":"CephClient CRD","text":"CephClient CRD is used by Rook to allow creation and updating clients. "},{"location":"Getting-Started/glossary/#cephcluster-crd","title":"CephCluster CRD","text":"The CephCluster CRD is used by Rook to allow creation and customization of storage clusters through the custom resource definitions (CRDs). "},{"location":"Getting-Started/glossary/#ceph-csi","title":"Ceph CSI","text":"The Ceph CSI plugins implement an interface between a CSI-enabled Container Orchestrator (CO) and Ceph clusters. "},{"location":"Getting-Started/glossary/#cephfilesystem-crd","title":"CephFilesystem CRD","text":"The CephFilesystem CRD is used by Rook to allow creation and customization of shared filesystems through the custom resource definitions (CRDs). "},{"location":"Getting-Started/glossary/#cephfilesystemmirror-crd","title":"CephFilesystemMirror CRD","text":"The CephFilesystemMirror CRD is used by Rook to allow creation and updating the Ceph fs-mirror daemon. "},{"location":"Getting-Started/glossary/#cephfilesystemsubvolumegroup-crd","title":"CephFilesystemSubVolumeGroup CRD","text":"CephFilesystemMirror CRD is used by Rook to allow creation of Ceph Filesystem SubVolumeGroups. "},{"location":"Getting-Started/glossary/#cephnfs-crd","title":"CephNFS CRD","text":"CephNFS CRD is used by Rook to allow exporting NFS shares of a CephFilesystem or CephObjectStore through the CephNFS custom resource definition. For further information please refer to the example here. "},{"location":"Getting-Started/glossary/#cephobjectstore-crd","title":"CephObjectStore CRD","text":"CephObjectStore CRD is used by Rook to allow creation and customization of object stores. "},{"location":"Getting-Started/glossary/#cephobjectstoreuser-crd","title":"CephObjectStoreUser CRD","text":"CephObjectStoreUser CRD is used by Rook to allow creation and customization of object store users. For more information and examples refer to this documentation. "},{"location":"Getting-Started/glossary/#cephobjectrealm-crd","title":"CephObjectRealm CRD","text":"CephObjectRealm CRD is used by Rook to allow creation of a realm in a Ceph Object Multisite configuration. For more information and examples refer to this documentation. "},{"location":"Getting-Started/glossary/#cephobjectzonegroup-crd","title":"CephObjectZoneGroup CRD","text":"CephObjectZoneGroup CRD is used by Rook to allow creation of zone groups in a Ceph Object Multisite configuration. For more information and examples refer to this documentation. "},{"location":"Getting-Started/glossary/#cephobjectzone-crd","title":"CephObjectZone CRD","text":"CephObjectZone CRD is used by Rook to allow creation of zones in a ceph cluster for a Ceph Object Multisite configuration. For more information and examples refer to this documentation. "},{"location":"Getting-Started/glossary/#cephrbdmirror-crd","title":"CephRBDMirror CRD","text":"CephRBDMirror CRD is used by Rook to allow creation and updating rbd-mirror daemon(s) through the custom resource definitions (CRDs). For more information and examples refer to this documentation. 
"},{"location":"Getting-Started/glossary/#external-storage-cluster","title":"External Storage Cluster","text":"An external cluster is a Ceph configuration that is managed outside of the local K8s cluster. "},{"location":"Getting-Started/glossary/#host-storage-cluster","title":"Host Storage Cluster","text":"A host storage cluster is where Rook configures Ceph to store data directly on the host devices. "},{"location":"Getting-Started/glossary/#kubectl-plugin","title":"kubectl Plugin","text":"The Rook kubectl plugin is a tool to help troubleshoot your Rook cluster. "},{"location":"Getting-Started/glossary/#object-bucket-claim-obc","title":"Object Bucket Claim (OBC)","text":"An Object Bucket Claim (OBC) is custom resource which requests a bucket (new or existing) from a Ceph object store. For further reference please refer to OBC Custom Resource. "},{"location":"Getting-Started/glossary/#object-bucket-ob","title":"Object Bucket (OB)","text":"An Object Bucket (OB) is a custom resource automatically generated when a bucket is provisioned. It is a global resource, typically not visible to non-admin users, and contains information specific to the bucket. "},{"location":"Getting-Started/glossary/#openshift","title":"OpenShift","text":"OpenShift Container Platform is a distribution of the Kubernetes container platform. "},{"location":"Getting-Started/glossary/#pvc-storage-cluster","title":"PVC Storage Cluster","text":"In a PersistentVolumeClaim-based cluster, the Ceph persistent data is stored on volumes requested from a storage class of your choice. "},{"location":"Getting-Started/glossary/#stretch-storage-cluster","title":"Stretch Storage Cluster","text":"A stretched cluster is a deployment model in which two datacenters with low latency are available for storage in the same K8s cluster, rather than three or more. To support this scenario, Rook has integrated support for stretch clusters. "},{"location":"Getting-Started/glossary/#toolbox","title":"Toolbox","text":"The Rook toolbox is a container with common tools used for rook debugging and testing. "},{"location":"Getting-Started/glossary/#ceph","title":"Ceph","text":"Ceph is a distributed network storage and file system with distributed metadata management and POSIX semantics. See also the Ceph Glossary. Here are a few of the important terms to understand:
Kubernetes, also known as K8s, is an open-source system for automating deployment, scaling, and management of containerized applications. For further information see also the Kubernetes Glossary for more definitions. Here are a few of the important terms to understand:
Rook is an open source cloud-native storage orchestrator, providing the platform, framework, and support for Ceph storage to natively integrate with cloud-native environments. Ceph is a distributed storage system that provides file, block and object storage and is deployed in large scale production clusters. Rook automates deployment and management of Ceph to provide self-managing, self-scaling, and self-healing storage services. The Rook operator does this by building on Kubernetes resources to deploy, configure, provision, scale, upgrade, and monitor Ceph. The Ceph operator was declared stable in December 2018 in the Rook v0.9 release, providing a production storage platform for many years. Rook is hosted by the Cloud Native Computing Foundation (CNCF) as a graduated level project. "},{"location":"Getting-Started/intro/#quick-start-guide","title":"Quick Start Guide","text":"Starting Ceph in your cluster is as simple as a few Ceph is a highly scalable distributed storage solution for block storage, object storage, and shared filesystems with years of production deployments. See the Ceph overview. For detailed design documentation, see also the design docs. "},{"location":"Getting-Started/intro/#need-help-be-sure-to-join-the-rook-slack","title":"Need help? Be sure to join the Rook Slack","text":"If you have any questions along the way, don't hesitate to ask in our Slack channel. Sign up for the Rook Slack here. "},{"location":"Getting-Started/quickstart/","title":"Quickstart","text":"Welcome to Rook! We hope you have a great experience installing the Rook cloud-native storage orchestrator platform to enable highly available, durable Ceph storage in Kubernetes clusters. Don't hesitate to ask questions in our Slack channel. Sign up for the Rook Slack here. This guide will walk through the basic setup of a Ceph cluster and enable K8s applications to consume block, object, and file storage. Always use a virtual machine when testing Rook. Never use a host system where local devices may mistakenly be consumed. "},{"location":"Getting-Started/quickstart/#kubernetes-version","title":"Kubernetes Version","text":"Kubernetes versions v1.26 through v1.31 are supported. "},{"location":"Getting-Started/quickstart/#cpu-architecture","title":"CPU Architecture","text":"Architectures released are To check if a Kubernetes cluster is ready for To configure the Ceph storage cluster, at least one of these local storage options are required:
A simple Rook cluster is created for Kubernetes with the following After the cluster is running, applications can consume block, object, or file storage. "},{"location":"Getting-Started/quickstart/#deploy-the-rook-operator","title":"Deploy the Rook Operator","text":"The first step is to deploy the Rook operator. Important The Rook Helm Chart is available to deploy the operator instead of creating the below manifests. Note Check that the example yaml files are from a tagged release of Rook. Note These steps are for a standard production Rook deployment in Kubernetes. For Openshift, testing, or more options, see the example configurations documentation. Before starting the operator in production, consider these settings:
The Rook documentation is focused around starting Rook in a variety of environments. While creating the cluster in this guide, consider these example cluster manifests:
See the Ceph example configurations for more details. "},{"location":"Getting-Started/quickstart/#create-a-ceph-cluster","title":"Create a Ceph Cluster","text":"Now that the Rook operator is running we can create the Ceph cluster. Important The Rook Cluster Helm Chart is available to deploy the operator instead of creating the below manifests. Important For the cluster to survive reboots, set the Create the cluster: Verify the cluster is running by viewing the pods in the The number of osd pods will depend on the number of nodes in the cluster and the number of devices configured. For the default Hint If the To verify that the cluster is in a healthy state, connect to the Rook toolbox and run the
Hint If the cluster is not healthy, please refer to the Ceph common issues for potential solutions. "},{"location":"Getting-Started/quickstart/#storage","title":"Storage","text":"For a walkthrough of the three types of storage exposed by Rook, see the guides for:
Ceph has a dashboard to view the status of the cluster. See the dashboard guide. "},{"location":"Getting-Started/quickstart/#tools","title":"Tools","text":"Create a toolbox pod for full access to a ceph admin client for debugging and troubleshooting the Rook cluster. See the toolbox documentation for setup and usage information. The Rook kubectl plugin provides commands to view status and troubleshoot issues. See the advanced configuration document for helpful maintenance and tuning examples. "},{"location":"Getting-Started/quickstart/#monitoring","title":"Monitoring","text":"Each Rook cluster has built-in metrics collectors/exporters for monitoring with Prometheus. To configure monitoring, see the monitoring guide. "},{"location":"Getting-Started/quickstart/#telemetry","title":"Telemetry","text":"The Rook maintainers would like to receive telemetry reports for Rook clusters. The data is anonymous and does not include any identifying information. Enable the telemetry reporting feature with the following command in the toolbox: For more details on what is reported and how your privacy is protected, see the Ceph Telemetry Documentation. "},{"location":"Getting-Started/quickstart/#teardown","title":"Teardown","text":"When finished with the test cluster, see the cleanup guide. "},{"location":"Getting-Started/release-cycle/","title":"Release Cycle","text":"Rook plans to release a new minor version three times a year, or about every four months. The most recent two minor Rook releases are actively maintained. Patch releases for the latest minor release are typically bi-weekly. Urgent patches may be released sooner. Patch releases for the previous minor release are commonly monthly, though will vary depending on the urgency of fixes. "},{"location":"Getting-Started/release-cycle/#definition-of-maintenance","title":"Definition of Maintenance","text":"The Rook community defines maintenance in that relevant bug fixes that are merged to the main development branch will be eligible to be back-ported to the release branch of any currently maintained version. Patches will be released as needed. It is also possible that a fix may be merged directly to the release branch if no longer applicable on the main development branch. While Rook maintainers make significant efforts to release urgent issues in a timely manner, maintenance does not indicate any SLA on response time. "},{"location":"Getting-Started/release-cycle/#k8s-versions","title":"K8s Versions","text":"The minimum version supported by a Rook release is specified in the Quickstart Guide. Rook expects to support the most recent six versions of Kubernetes. While these K8s versions may not all be supported by the K8s release cycle, we understand that clusters may take time to update. "},{"location":"Getting-Started/storage-architecture/","title":"Storage Architecture","text":"Ceph is a highly scalable distributed storage solution for block storage, object storage, and shared filesystems with years of production deployments. "},{"location":"Getting-Started/storage-architecture/#design","title":"Design","text":"Rook enables Ceph storage to run on Kubernetes using Kubernetes primitives. With Ceph running in the Kubernetes cluster, Kubernetes applications can mount block devices and filesystems managed by Rook, or can use the S3/Swift API for object storage. The Rook operator automates configuration of storage components and monitors the cluster to ensure the storage remains available and healthy. 
The Rook operator is a simple container that has all that is needed to bootstrap and monitor the storage cluster. The operator will start and monitor Ceph monitor pods, the Ceph OSD daemons to provide RADOS storage, as well as start and manage other Ceph daemons. The operator manages CRDs for pools, object stores (S3/Swift), and filesystems by initializing the pods and other resources necessary to run the services. The operator will monitor the storage daemons to ensure the cluster is healthy. Ceph mons will be started or failed over when necessary, and other adjustments are made as the cluster grows or shrinks. The operator will also watch for desired state changes specified in the Ceph custom resources (CRs) and apply the changes. Rook automatically configures the Ceph-CSI driver to mount the storage to your pods. The Rook is implemented in golang. Ceph is implemented in C++ where the data path is highly optimized. We believe this combination offers the best of both worlds. "},{"location":"Getting-Started/storage-architecture/#architecture","title":"Architecture","text":"Example applications are shown above for the three supported storage types:
Below the dotted line in the above diagram, the components fall into three categories:
Production clusters must have three or more nodes for a resilient storage platform. "},{"location":"Getting-Started/storage-architecture/#block-storage","title":"Block Storage","text":"In the diagram above, the flow to create an application with an RWO volume is:
A ReadWriteOnce volume can be mounted on one node at a time. "},{"location":"Getting-Started/storage-architecture/#shared-filesystem","title":"Shared Filesystem","text":"In the diagram above, the flow to create an application with a RWX volume is:
A ReadWriteMany volume can be mounted on multiple nodes for your application to use. "},{"location":"Getting-Started/storage-architecture/#object-storage-s3","title":"Object Storage S3","text":"In the diagram above, the flow to create an application with access to an S3 bucket is:
An S3 compatible client can use the S3 bucket right away using the credentials ( If you want to use an image from an authenticated docker registry (e.g. for image cache/mirror), you'll need to add an The whole process is described in the official kubernetes documentation. "},{"location":"Getting-Started/Prerequisites/authenticated-registry/#example-setup-for-a-ceph-cluster","title":"Example setup for a ceph cluster","text":"To get you started, here's a quick rundown for the ceph example from the quickstart guide. First, we'll create the secret for our registry as described here (the secret will be created in the Next we'll add the following snippet to all relevant service accounts as described here: The service accounts are:
Since it's the same procedure for all service accounts, here is just one example: After doing this for all service accounts all pods should be able to pull the image from your registry. "},{"location":"Getting-Started/Prerequisites/prerequisites/","title":"Prerequisites","text":"Rook can be installed on any existing Kubernetes cluster as long as it meets the minimum version and Rook is granted the required privileges (see below for more information). "},{"location":"Getting-Started/Prerequisites/prerequisites/#kubernetes-version","title":"Kubernetes Version","text":"Kubernetes versions v1.26 through v1.31 are supported. "},{"location":"Getting-Started/Prerequisites/prerequisites/#cpu-architecture","title":"CPU Architecture","text":"Architectures supported are To configure the Ceph storage cluster, at least one of these local storage types is required:
Confirm whether the partitions or devices are already formatted with filesystems using the command sketched below. Ceph OSDs have a dependency on LVM in the following scenarios:
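Before walking through the LVM scenarios, here is a hedged sketch of the filesystem check mentioned above:

```bash
# Sketch only: list block devices and their filesystem signatures.
# A non-empty FSTYPE column means a filesystem is present on that device,
# so Rook cannot consume it as a raw OSD device without wiping it first.
lsblk -f
```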
LVM is not required for OSDs in these scenarios:
If LVM is required, LVM needs to be available on the hosts where OSDs will be running. Some Linux distributions do not ship with the lvm2 package by default, so it must be installed with the distribution's package manager (see the CentOS and Ubuntu examples sketched below; RancherOS setup differs).
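A hedged sketch of installing LVM on the two distributions mentioned above:

```bash
# Sketch only: install the lvm2 package on the OSD hosts.
# CentOS / RHEL-family:
sudo yum install -y lvm2
# Ubuntu / Debian-family:
sudo apt-get update && sudo apt-get install -y lvm2
```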
"},{"location":"Getting-Started/Prerequisites/prerequisites/#kernel","title":"Kernel","text":""},{"location":"Getting-Started/Prerequisites/prerequisites/#rbd","title":"RBD","text":"Ceph requires a Linux kernel built with the RBD module. Many Linux distributions have this module, but not all. For example, the GKE Container-Optimised OS (COS) does not have RBD. Test your Kubernetes nodes by running Rook's default RBD configuration specifies only the "},{"location":"Getting-Started/Prerequisites/prerequisites/#cephfs","title":"CephFS","text":"If creating RWX volumes from a Ceph shared file system (CephFS), the recommended minimum kernel version is 4.17. If the kernel version is less than 4.17, the requested PVC sizes will not be enforced. Storage quotas will only be enforced on newer kernels. "},{"location":"Getting-Started/Prerequisites/prerequisites/#distro-notes","title":"Distro Notes","text":"Specific configurations for some distributions. "},{"location":"Getting-Started/Prerequisites/prerequisites/#nixos","title":"NixOS","text":"For NixOS, the kernel modules will be found in the non-standard path Rook containers require read access to those locations to be able to load the required modules. They have to be bind-mounted as volumes in the CephFS and RBD plugin pods. If installing Rook with Helm, uncomment these example settings in
If deploying without Helm, add those same values to the settings in the
If using containerd, remove

Ceph Cluster Helm Chart¶

Creates Rook resources to configure a Ceph cluster using the Helm package manager. This chart is a simple packaging of templates that will optionally create Rook resources such as:
Rook currently publishes builds of this chart to the release channel. Before installing, review the values.yaml to confirm if the default settings need to be updated.
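A hedged sketch of the install, assuming the upstream chart repository at https://charts.rook.io/release, the rook-ceph namespace, and a customized values.yaml:

```bash
# Sketch only: install the rook-ceph-cluster chart from the release channel.
helm repo add rook-release https://charts.rook.io/release
helm install --create-namespace --namespace rook-ceph rook-ceph-cluster \
  --set operatorNamespace=rook-ceph rook-release/rook-ceph-cluster -f values.yaml
```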
The release channel is the most recent release of Rook that is considered stable for the community. The example install assumes you have first installed the Rook Operator Helm Chart and created your customized values.yaml. Note --namespace specifies the cephcluster namespace, which may be different from the rook operator namespace. "},{"location":"Helm-Charts/ceph-cluster-chart/#configuration","title":"Configuration","text":"The following table lists the configurable parameters of the rook-operator chart and their default values. Parameter Description DefaultcephBlockPools A list of CephBlockPool configurations to deploy See below cephBlockPoolsVolumeSnapshotClass Settings for the block pool snapshot class See RBD Snapshots cephClusterSpec Cluster configuration. See below cephFileSystemVolumeSnapshotClass Settings for the filesystem snapshot class See CephFS Snapshots cephFileSystems A list of CephFileSystem configurations to deploy See below cephObjectStores A list of CephObjectStore configurations to deploy See below clusterName The metadata.name of the CephCluster CR The same as the namespace configOverride Cluster ceph.conf override nil csiDriverNamePrefix CSI driver name prefix for cephfs, rbd and nfs. namespace name where rook-ceph operator is deployed ingress.dashboard Enable an ingress for the ceph-dashboard {} kubeVersion Optional override of the target kubernetes version nil monitoring.createPrometheusRules Whether to create the Prometheus rules for Ceph alerts false monitoring.enabled Enable Prometheus integration, will also create necessary RBAC rules to allow Operator to create ServiceMonitors. Monitoring requires Prometheus to be pre-installed false monitoring.prometheusRule.annotations Annotations applied to PrometheusRule {} monitoring.prometheusRule.labels Labels applied to PrometheusRule {} monitoring.rulesNamespaceOverride The namespace in which to create the prometheus rules, if different from the rook cluster namespace. If you have multiple rook-ceph clusters in the same k8s cluster, choose the same namespace (ideally, namespace with prometheus deployed) to set rulesNamespaceOverride for all the clusters. Otherwise, you will get duplicate alerts with multiple alert definitions. nil operatorNamespace Namespace of the main rook operator \"rook-ceph\" pspEnable Create & use PSP resources. Set this to the same value as the rook-ceph chart. false toolbox.affinity Toolbox affinity {} toolbox.containerSecurityContext Toolbox container security context {\"capabilities\":{\"drop\":[\"ALL\"]},\"runAsGroup\":2016,\"runAsNonRoot\":true,\"runAsUser\":2016} toolbox.enabled Enable Ceph debugging pod deployment. See toolbox false toolbox.image Toolbox image, defaults to the image used by the Ceph cluster nil toolbox.priorityClassName Set the priority class for the toolbox if desired nil toolbox.resources Toolbox resources {\"limits\":{\"memory\":\"1Gi\"},\"requests\":{\"cpu\":\"100m\",\"memory\":\"128Mi\"}} toolbox.tolerations Toolbox tolerations [] "},{"location":"Helm-Charts/ceph-cluster-chart/#ceph-cluster-spec","title":"Ceph Cluster Spec","text":"The The cluster spec example is for a converged cluster where all the Ceph daemons are running locally, as in the host-based example (cluster.yaml). 
For a different configuration such as a PVC-based cluster (cluster-on-pvc.yaml), external cluster (cluster-external.yaml), or stretch cluster (cluster-stretched.yaml), replace this entire The name The name of the CephBlockPool ceph-blockpool spec The CephBlockPool spec, see the CephBlockPool documentation. {} storageClass.enabled Whether a storage class is deployed alongside the CephBlockPool true storageClass.isDefault Whether the storage class will be the default storage class for PVCs. See PersistentVolumeClaim documentation for details. true storageClass.name The name of the storage class ceph-block storageClass.annotations Additional storage class annotations {} storageClass.labels Additional storage class labels {} storageClass.parameters See Block Storage documentation or the helm values.yaml for suitable values see values.yaml storageClass.reclaimPolicy The default Reclaim Policy to apply to PVCs created with this storage class. Delete storageClass.allowVolumeExpansion Whether volume expansion is allowed by default. true storageClass.mountOptions Specifies the mount options for storageClass [] storageClass.allowedTopologies Specifies the allowedTopologies for storageClass [] "},{"location":"Helm-Charts/ceph-cluster-chart/#ceph-file-systems","title":"Ceph File Systems","text":"The name The name of the CephFileSystem ceph-filesystem spec The CephFileSystem spec, see the CephFilesystem CRD documentation. see values.yaml storageClass.enabled Whether a storage class is deployed alongside the CephFileSystem true storageClass.name The name of the storage class ceph-filesystem storageClass.annotations Additional storage class annotations {} storageClass.labels Additional storage class labels {} storageClass.pool The name of Data Pool, without the filesystem name prefix data0 storageClass.parameters See Shared Filesystem documentation or the helm values.yaml for suitable values see values.yaml storageClass.reclaimPolicy The default Reclaim Policy to apply to PVCs created with this storage class. Delete storageClass.mountOptions Specifies the mount options for storageClass [] "},{"location":"Helm-Charts/ceph-cluster-chart/#ceph-object-stores","title":"Ceph Object Stores","text":"The name The name of the CephObjectStore ceph-objectstore spec The CephObjectStore spec, see the CephObjectStore CRD documentation. see values.yaml storageClass.enabled Whether a storage class is deployed alongside the CephObjectStore true storageClass.name The name of the storage class ceph-bucket storageClass.annotations Additional storage class annotations {} storageClass.labels Additional storage class labels {} storageClass.parameters See Object Store storage class documentation or the helm values.yaml for suitable values see values.yaml storageClass.reclaimPolicy The default Reclaim Policy to apply to PVCs created with this storage class. Delete ingress.enabled Enable an ingress for the object store false ingress.annotations Ingress annotations {} ingress.host.name Ingress hostname \"\" ingress.host.path Ingress path prefix / ingress.tls Ingress tls / ingress.ingressClassName Ingress tls \"\" "},{"location":"Helm-Charts/ceph-cluster-chart/#existing-clusters","title":"Existing Clusters","text":"If you have an existing CephCluster CR that was created without the helm chart and you want the helm chart to start managing the cluster:
To deploy from a local build from your development environment:

Uninstalling the Chart¶

To see the currently installed Rook chart, and to uninstall/delete it, use the helm list and helm uninstall commands for the release. The command removes all the Kubernetes components associated with the chart and deletes the release. Removing the cluster chart does not remove the Rook operator. In addition, all data on hosts in the Rook data directory (/var/lib/rook by default) and on OSD devices is kept. See the teardown documentation for more information.

Helm Charts Overview¶

Rook has published the following Helm charts for the Ceph storage provider:
The Helm charts are intended to simplify deployment and upgrades. Configuring the Rook resources without Helm is also fully supported by creating the manifests directly.

Ceph Operator Helm Chart¶

Installs Rook to create, configure, and manage Ceph clusters on Kubernetes.

Introduction¶

This chart bootstraps a rook-ceph-operator deployment on a Kubernetes cluster using the Helm package manager.

Prerequisites¶
See the Helm support matrix for more details.

Installing¶

The Ceph Operator helm chart will install the basic components necessary to create a storage platform for your Kubernetes cluster.
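A hedged sketch of the operator install (chart repository URL and namespace assumed from the upstream defaults):

```bash
# Sketch only: install the rook-ceph operator chart into its own namespace.
helm repo add rook-release https://charts.rook.io/release
helm install --create-namespace --namespace rook-ceph rook-ceph \
  rook-release/rook-ceph -f values.yaml
```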
The Rook currently publishes builds of the Ceph operator to the The release channel is the most recent release of Rook that is considered stable for the community. For example settings, see the next section or values.yaml "},{"location":"Helm-Charts/operator-chart/#configuration","title":"Configuration","text":"The following table lists the configurable parameters of the rook-operator chart and their default values. Parameter Description DefaultallowLoopDevices If true, loop devices are allowed to be used for osds in test clusters false annotations Pod annotations {} cephCommandsTimeoutSeconds The timeout for ceph commands in seconds \"15\" containerSecurityContext Set the container security context for the operator {\"capabilities\":{\"drop\":[\"ALL\"]},\"runAsGroup\":2016,\"runAsNonRoot\":true,\"runAsUser\":2016} crds.enabled Whether the helm chart should create and update the CRDs. If false, the CRDs must be managed independently with deploy/examples/crds.yaml. WARNING Only set during first deployment. If later disabled the cluster may be DESTROYED. If the CRDs are deleted in this case, see the disaster recovery guide to restore them. true csi.allowUnsupportedVersion Allow starting an unsupported ceph-csi image false csi.attacher.repository Kubernetes CSI Attacher image repository \"registry.k8s.io/sig-storage/csi-attacher\" csi.attacher.tag Attacher image tag \"v4.6.1\" csi.cephFSAttachRequired Whether to skip any attach operation altogether for CephFS PVCs. See more details here. If cephFSAttachRequired is set to false it skips the volume attachments and makes the creation of pods using the CephFS PVC fast. WARNING It's highly discouraged to use this for CephFS RWO volumes. Refer to this issue for more details. true csi.cephFSFSGroupPolicy Policy for modifying a volume's ownership or permissions when the CephFS PVC is being mounted. supported values are documented at https://kubernetes-csi.github.io/docs/support-fsgroup.html \"File\" csi.cephFSKernelMountOptions Set CephFS Kernel mount options to use https://docs.ceph.com/en/latest/man/8/mount.ceph/#options. Set to \"ms_mode=secure\" when connections.encrypted is enabled in CephCluster CR nil csi.cephFSPluginUpdateStrategy CSI CephFS plugin daemonset update strategy, supported values are OnDelete and RollingUpdate RollingUpdate csi.cephFSPluginUpdateStrategyMaxUnavailable A maxUnavailable parameter of CSI cephFS plugin daemonset update strategy. 1 csi.cephcsi.repository Ceph CSI image repository \"quay.io/cephcsi/cephcsi\" csi.cephcsi.tag Ceph CSI image tag \"v3.12.2\" csi.cephfsLivenessMetricsPort CSI CephFS driver metrics port 9081 csi.cephfsPodLabels Labels to add to the CSI CephFS Deployments and DaemonSets Pods nil csi.clusterName Cluster name identifier to set as metadata on the CephFS subvolume and RBD images. 
This will be useful in cases like for example, when two container orchestrator clusters (Kubernetes/OCP) are using a single ceph cluster nil csi.csiAddons.enabled Enable CSIAddons false csi.csiAddons.repository CSIAddons sidecar image repository \"quay.io/csiaddons/k8s-sidecar\" csi.csiAddons.tag CSIAddons sidecar image tag \"v0.10.0\" csi.csiAddonsPort CSI Addons server port 9070 csi.csiCephFSPluginResource CEPH CSI CephFS plugin resource requirement list see values.yaml csi.csiCephFSPluginVolume The volume of the CephCSI CephFS plugin DaemonSet nil csi.csiCephFSPluginVolumeMount The volume mounts of the CephCSI CephFS plugin DaemonSet nil csi.csiCephFSProvisionerResource CEPH CSI CephFS provisioner resource requirement list see values.yaml csi.csiDriverNamePrefix CSI driver name prefix for cephfs, rbd and nfs. namespace name where rook-ceph operator is deployed csi.csiLeaderElectionLeaseDuration Duration in seconds that non-leader candidates will wait to force acquire leadership. 137s csi.csiLeaderElectionRenewDeadline Deadline in seconds that the acting leader will retry refreshing leadership before giving up. 107s csi.csiLeaderElectionRetryPeriod Retry period in seconds the LeaderElector clients should wait between tries of actions. 26s csi.csiNFSPluginResource CEPH CSI NFS plugin resource requirement list see values.yaml csi.csiNFSProvisionerResource CEPH CSI NFS provisioner resource requirement list see values.yaml csi.csiRBDPluginResource CEPH CSI RBD plugin resource requirement list see values.yaml csi.csiRBDPluginVolume The volume of the CephCSI RBD plugin DaemonSet nil csi.csiRBDPluginVolumeMount The volume mounts of the CephCSI RBD plugin DaemonSet nil csi.csiRBDProvisionerResource CEPH CSI RBD provisioner resource requirement list csi-omap-generator resources will be applied only if enableOMAPGenerator is set to true see values.yaml csi.disableCsiDriver Disable the CSI driver. \"false\" csi.enableCSIEncryption Enable Ceph CSI PVC encryption support false csi.enableCSIHostNetwork Enable host networking for CSI CephFS and RBD nodeplugins. This may be necessary in some network configurations where the SDN does not provide access to an external cluster or there is significant drop in read/write performance true csi.enableCephfsDriver Enable Ceph CSI CephFS driver true csi.enableCephfsSnapshotter Enable Snapshotter in CephFS provisioner pod true csi.enableLiveness Enable Ceph CSI Liveness sidecar deployment false csi.enableMetadata Enable adding volume metadata on the CephFS subvolumes and RBD images. Not all users might be interested in getting volume/snapshot details as metadata on CephFS subvolume and RBD images. Hence enable metadata is false by default false csi.enableNFSSnapshotter Enable Snapshotter in NFS provisioner pod true csi.enableOMAPGenerator OMAP generator generates the omap mapping between the PV name and the RBD image which helps CSI to identify the rbd images for CSI operations. CSI_ENABLE_OMAP_GENERATOR needs to be enabled when we are using rbd mirroring feature. By default OMAP generator is disabled and when enabled, it will be deployed as a sidecar with CSI provisioner pod, to enable set it to true. false csi.enablePluginSelinuxHostMount Enable Host mount for /etc/selinux directory for Ceph CSI nodeplugins false csi.enableRBDSnapshotter Enable Snapshotter in RBD provisioner pod true csi.enableRbdDriver Enable Ceph CSI RBD driver true csi.enableVolumeGroupSnapshot Enable volume group snapshot feature. 
This feature is enabled by default as long as the necessary CRDs are available in the cluster. true csi.forceCephFSKernelClient Enable Ceph Kernel clients on kernel < 4.17. If your kernel does not support quotas for CephFS you may want to disable this setting. However, this will cause an issue during upgrades with the FUSE client. See the upgrade guide true csi.grpcTimeoutInSeconds Set GRPC timeout for csi containers (in seconds). It should be >= 120. If this value is not set or is invalid, it defaults to 150 150 csi.imagePullPolicy Image pull policy \"IfNotPresent\" csi.kubeApiBurst Burst to use while communicating with the kubernetes apiserver. nil csi.kubeApiQPS QPS to use while communicating with the kubernetes apiserver. nil csi.kubeletDirPath Kubelet root directory path (if the Kubelet uses a different path for the --root-dir flag) /var/lib/kubelet csi.logLevel Set logging level for cephCSI containers maintained by the cephCSI. Supported values from 0 to 5. 0 for general useful logs, 5 for trace level verbosity. 0 csi.nfs.enabled Enable the nfs csi driver false csi.nfsAttachRequired Whether to skip any attach operation altogether for NFS PVCs. See more details here. If cephFSAttachRequired is set to false it skips the volume attachments and makes the creation of pods using the NFS PVC fast. WARNING It's highly discouraged to use this for NFS RWO volumes. Refer to this issue for more details. true csi.nfsFSGroupPolicy Policy for modifying a volume's ownership or permissions when the NFS PVC is being mounted. supported values are documented at https://kubernetes-csi.github.io/docs/support-fsgroup.html \"File\" csi.nfsPluginUpdateStrategy CSI NFS plugin daemonset update strategy, supported values are OnDelete and RollingUpdate RollingUpdate csi.nfsPodLabels Labels to add to the CSI NFS Deployments and DaemonSets Pods nil csi.pluginNodeAffinity The node labels for affinity of the CephCSI RBD plugin DaemonSet 1 nil csi.pluginPriorityClassName PriorityClassName to be set on csi driver plugin pods \"system-node-critical\" csi.pluginTolerations Array of tolerations in YAML format which will be added to CephCSI plugin DaemonSet nil csi.provisioner.repository Kubernetes CSI provisioner image repository \"registry.k8s.io/sig-storage/csi-provisioner\" csi.provisioner.tag Provisioner image tag \"v5.0.1\" csi.provisionerNodeAffinity The node labels for affinity of the CSI provisioner deployment 1 nil csi.provisionerPriorityClassName PriorityClassName to be set on csi driver provisioner pods \"system-cluster-critical\" csi.provisionerReplicas Set replicas for csi provisioner deployment 2 csi.provisionerTolerations Array of tolerations in YAML format which will be added to CSI provisioner deployment nil csi.rbdAttachRequired Whether to skip any attach operation altogether for RBD PVCs. See more details here. If set to false it skips the volume attachments and makes the creation of pods using the RBD PVC fast. WARNING It's highly discouraged to use this for RWO volumes as it can cause data corruption. csi-addons operations like Reclaimspace and PVC Keyrotation will also not be supported if set to false since we'll have no VolumeAttachments to determine which node the PVC is mounted on. Refer to this issue for more details. true csi.rbdFSGroupPolicy Policy for modifying a volume's ownership or permissions when the RBD PVC is being mounted. 
supported values are documented at https://kubernetes-csi.github.io/docs/support-fsgroup.html \"File\" csi.rbdLivenessMetricsPort Ceph CSI RBD driver metrics port 8080 csi.rbdPluginUpdateStrategy CSI RBD plugin daemonset update strategy, supported values are OnDelete and RollingUpdate RollingUpdate csi.rbdPluginUpdateStrategyMaxUnavailable A maxUnavailable parameter of CSI RBD plugin daemonset update strategy. 1 csi.rbdPodLabels Labels to add to the CSI RBD Deployments and DaemonSets Pods nil csi.registrar.repository Kubernetes CSI registrar image repository \"registry.k8s.io/sig-storage/csi-node-driver-registrar\" csi.registrar.tag Registrar image tag \"v2.11.1\" csi.resizer.repository Kubernetes CSI resizer image repository \"registry.k8s.io/sig-storage/csi-resizer\" csi.resizer.tag Resizer image tag \"v1.11.1\" csi.serviceMonitor.enabled Enable ServiceMonitor for Ceph CSI drivers false csi.serviceMonitor.interval Service monitor scrape interval \"10s\" csi.serviceMonitor.labels ServiceMonitor additional labels {} csi.serviceMonitor.namespace Use a different namespace for the ServiceMonitor nil csi.sidecarLogLevel Set logging level for Kubernetes-csi sidecar containers. Supported values from 0 to 5. 0 for general useful logs (the default), 5 for trace level verbosity. 0 csi.snapshotter.repository Kubernetes CSI snapshotter image repository \"registry.k8s.io/sig-storage/csi-snapshotter\" csi.snapshotter.tag Snapshotter image tag \"v8.0.1\" csi.topology.domainLabels domainLabels define which node labels to use as domains for CSI nodeplugins to advertise their domains nil csi.topology.enabled Enable topology based provisioning false currentNamespaceOnly Whether the operator should watch cluster CRD in its own namespace or not false disableDeviceHotplug Disable automatic orchestration when new devices are discovered. false discover.nodeAffinity The node labels for affinity of discover-agent 1 nil discover.podLabels Labels to add to the discover pods nil discover.resources Add resources to discover daemon pods nil discover.toleration Toleration for the discover pods. Options: NoSchedule , PreferNoSchedule or NoExecute nil discover.tolerationKey The specific key of the taint to tolerate nil discover.tolerations Array of tolerations in YAML format which will be added to discover deployment nil discoverDaemonUdev Blacklist certain disks according to the regex provided. nil discoveryDaemonInterval Set the discovery daemon device discovery interval (default to 60m) \"60m\" enableDiscoveryDaemon Enable discovery daemon false enableOBCWatchOperatorNamespace Whether the OBC provisioner should watch on the operator namespace or not, if not the namespace of the cluster will be used true enforceHostNetwork Whether to create all Rook pods to run on the host network, for example in environments where a CNI is not enabled false hostpathRequiresPrivileged Runs Ceph Pods as privileged to be able to write to hostPaths in OpenShift with SELinux restrictions. false image.pullPolicy Image pull policy \"IfNotPresent\" image.repository Image \"docker.io/rook/ceph\" image.tag Image tag master imagePullSecrets imagePullSecrets option allow to pull docker images from private docker registry. Option will be passed to all service accounts. nil logLevel Global log level for the operator. Options: ERROR , WARNING , INFO , DEBUG \"INFO\" monitoring.enabled Enable monitoring. Requires Prometheus to be pre-installed. 
Enabling will also create RBAC rules to allow Operator to create ServiceMonitors false nodeSelector Kubernetes nodeSelector to add to the Deployment. {} obcProvisionerNamePrefix Specify the prefix for the OBC provisioner in place of the cluster namespace ceph cluster namespace priorityClassName Set the priority class for the rook operator deployment if desired nil pspEnable If true, create & use PSP resources false rbacAggregate.enableOBCs If true, create a ClusterRole aggregated to user facing roles for objectbucketclaims false rbacEnable If true, create & use RBAC resources true resources Pod resource requests & limits {\"limits\":{\"memory\":\"512Mi\"},\"requests\":{\"cpu\":\"200m\",\"memory\":\"128Mi\"}} revisionHistoryLimit The revision history limit for all pods created by Rook. If blank, the K8s default is 10. nil scaleDownOperator If true, scale down the rook operator. This is useful for administrative actions where the rook operator must be scaled down, while using gitops style tooling to deploy your helm charts. false tolerations List of Kubernetes tolerations to add to the Deployment. [] unreachableNodeTolerationSeconds Delay to use for the node.kubernetes.io/unreachable pod failure toleration to override the Kubernetes default of 5 minutes 5 useOperatorHostNetwork If true, run rook operator on the host network nil "},{"location":"Helm-Charts/operator-chart/#development-build","title":"Development Build","text":"To deploy from a local build from your development environment:
"},{"location":"Helm-Charts/operator-chart/#uninstalling-the-chart","title":"Uninstalling the Chart","text":"To see the currently installed Rook chart: To uninstall/delete the The command removes all the Kubernetes components associated with the chart and deletes the release. After uninstalling you may want to clean up the CRDs as described on the teardown documentation.
Rook provides the following clean up options:
To tear down the cluster, the following resources need to be cleaned up:
If the default namespaces or paths, such as rook-ceph or dataDirHostPath, have been changed in the example yaml files, adjust these instructions accordingly. If tearing down a cluster frequently for development purposes, it is instead recommended to use an environment such as Minikube that can easily be reset without worrying about any of these steps.

Delete the Block and File artifacts¶

First clean up the resources from applications that consume the Rook storage. These commands will clean up the resources from the example application block and file walkthroughs (unmount volumes, delete volume claims, etc). Important After applications have been cleaned up, the Rook cluster can be removed. It is important to delete applications before removing the Rook operator and Ceph cluster. Otherwise, volumes may hang and nodes may require a restart.

Delete the CephCluster CRD¶

Warning DATA WILL BE PERMANENTLY DELETED AFTER DELETING THE CephCluster CR.
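A hedged sketch of deleting the CephCluster CR (cluster name and namespace assumed from the quickstart); the optional cleanupPolicy patch asks Rook to run cleanup jobs that wipe the hosts' data directories and OSD devices:

```bash
# Sketch only: optionally request full data cleanup, then delete the cluster CR.
kubectl -n rook-ceph patch cephcluster rook-ceph --type merge \
  -p '{"spec":{"cleanupPolicy":{"confirmation":"yes-really-destroy-data"}}}'
kubectl -n rook-ceph delete cephcluster rook-ceph

# Verify the CR is gone before removing the operator:
kubectl -n rook-ceph get cephcluster
```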
Note The cleanup jobs might not start if the resources created on top of Rook Cluster are not deleted completely. See deleting block and file artifacts "},{"location":"Storage-Configuration/ceph-teardown/#delete-the-operator-resources","title":"Delete the Operator Resources","text":"Remove the Rook operator, RBAC, and CRDs, and the "},{"location":"Storage-Configuration/ceph-teardown/#delete-the-data-on-hosts","title":"Delete the data on hosts","text":"Attention The final cleanup step requires deleting files on each host in the cluster. All files under the If the Connect to each machine and delete the namespace directory under Disks on nodes used by Rook for OSDs can be reset to a usable state. Note that these scripts are not one-size-fits-all. Please use them with discretion to ensure you are not removing data unrelated to Rook. A single disk can usually be cleared with some or all of the steps below. Ceph can leave LVM and device mapper data on storage drives, preventing them from being redeployed. These steps can clean former Ceph drives for reuse. Note that this only needs to be run once on each node. If you have only one Rook cluster and all Ceph disks are being wiped, run the following command. If disks are still reported locked, rebooting the node often helps clear LVM-related holds on disks. If there are multiple Ceph clusters and some disks are not wiped yet, it is necessary to manually determine which disks map to which device mapper devices. "},{"location":"Storage-Configuration/ceph-teardown/#troubleshooting","title":"Troubleshooting","text":"The most common issue cleaning up the cluster is that the If a pod is still terminating, consider forcefully terminating the pod ( If the cluster CRD still exists even though it has been deleted, see the next section on removing the finalizer. "},{"location":"Storage-Configuration/ceph-teardown/#removing-the-cluster-crd-finalizer","title":"Removing the Cluster CRD Finalizer","text":"When a Cluster CRD is created, a finalizer is added automatically by the Rook operator. The finalizer will allow the operator to ensure that before the cluster CRD is deleted, all block and file mounts will be cleaned up. Without proper cleanup, pods consuming the storage will be hung indefinitely until a system reboot. The operator is responsible for removing the finalizer after the mounts have been cleaned up. If for some reason the operator is not able to remove the finalizer (i.e., the operator is not running anymore), delete the finalizer manually with the following command: If the namespace is still stuck in Terminating state, check which resources are holding up the deletion and remove their finalizers as well: "},{"location":"Storage-Configuration/ceph-teardown/#remove-critical-resource-finalizers","title":"Remove critical resource finalizers","text":"Rook adds a finalizer The operator is responsible for removing the finalizers when a CephCluster is deleted. If the operator is not able to remove the finalizers (i.e., the operator is not running anymore), remove the finalizers manually: "},{"location":"Storage-Configuration/ceph-teardown/#force-delete-resources","title":"Force Delete Resources","text":"To keep your data safe in the cluster, Rook disallows deleting critical cluster resources by default. To override this behavior and force delete a specific custom resource, add the annotation For example, run the following commands to clean the Once the cleanup job is completed successfully, Rook will remove the finalizers from the deleted custom resource. 
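For the earlier troubleshooting case where a finalizer must be removed by hand, a hedged sketch (resource and namespace names assumed from the quickstart; the same pattern applies to other stuck Rook resources):

```bash
# Sketch only: clear the finalizer on a CephCluster stuck in deletion
# because the operator is no longer running to remove it.
kubectl -n rook-ceph patch cephcluster rook-ceph --type merge \
  -p '{"metadata":{"finalizers": []}}'
```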
This cleanup is supported only for the following custom resources: Custom Resource Ceph Resources to be cleaned up CephFilesystemSubVolumeGroup CSI stored RADOS OMAP details for pvc/volumesnapshots, subvolume snapshots, subvolume clones, subvolumes CephBlockPoolRadosNamespace Images and snapshots in the RADOS namespace CephBlockPool Images and snapshots in the BlockPool"},{"location":"Storage-Configuration/Advanced/ceph-configuration/","title":"Ceph Configuration","text":"These examples show how to perform advanced configuration tasks on your Rook storage cluster. "},{"location":"Storage-Configuration/Advanced/ceph-configuration/#prerequisites","title":"Prerequisites","text":"Most of the examples make use of the The Kubernetes based examples assume Rook OSD pods are in the If you wish to deploy the Rook Operator and/or Ceph clusters to namespaces other than the default If the operator namespace is different from the cluster namespace, the operator namespace must be created before running the steps below. The cluster namespace does not need to be created first, as it will be created by This will help you manage namespaces more easily, but you should still make sure the resources are configured to your liking. Also see the CSI driver documentation to update the csi provisioner names in the storageclass and volumesnapshotclass. "},{"location":"Storage-Configuration/Advanced/ceph-configuration/#deploying-a-second-cluster","title":"Deploying a second cluster","text":"If you wish to create a new CephCluster in a separate namespace, you can easily do so by modifying the This will create all the necessary RBACs as well as the new namespace. The script assumes that All Rook logs can be collected in a Kubernetes environment with the following command: This gets the logs for every container in every Rook pod and then compresses them into a Keeping track of OSDs and their underlying storage devices can be difficult. The following scripts will clear things up quickly. "},{"location":"Storage-Configuration/Advanced/ceph-configuration/#kubernetes","title":"Kubernetes","text":" The output should look something like this. "},{"location":"Storage-Configuration/Advanced/ceph-configuration/#separate-storage-groups","title":"Separate Storage Groups","text":"Attention It is deprecated to manually need to set this, the By default Rook/Ceph puts all storage under one replication rule in the CRUSH Map which provides the maximum amount of storage capacity for a cluster. If you would like to use different storage endpoints for different purposes, you'll have to create separate storage groups. In the following example we will separate SSD drives from spindle-based drives, a common practice for those looking to target certain workloads onto faster (database) or slower (file archive) storage. "},{"location":"Storage-Configuration/Advanced/ceph-configuration/#configuring-pools","title":"Configuring Pools","text":""},{"location":"Storage-Configuration/Advanced/ceph-configuration/#placement-group-sizing","title":"Placement Group Sizing","text":"Note Since Ceph Nautilus (v14.x), you can use the Ceph MGR The general rules for deciding how many PGs your pool(s) should contain is:
If you have more than 50 OSDs, you need to understand the tradeoffs and how to calculate the pg_num value by yourself. For calculating pg_num yourself please make use of the pgcalc tool. "},{"location":"Storage-Configuration/Advanced/ceph-configuration/#setting-pg-count","title":"Setting PG Count","text":"Be sure to read the placement group sizing section before changing the number of PGs. "},{"location":"Storage-Configuration/Advanced/ceph-configuration/#custom-cephconf-settings","title":"Custom ceph.conf Settings","text":"Info The advised method for controlling Ceph configuration is to use the Setting configs via Ceph's CLI requires that at least one mon be available for the configs to be set, and setting configs via dashboard requires at least one mgr to be available. Ceph also has a number of very advanced settings that cannot be modified easily via the CLI or dashboard. In order to set configurations before monitors are available or to set advanced configuration settings, the Warning Rook performs no validation on the config, so the validity of the settings is the user's responsibility. If the
After the pod restart, the new settings should be in effect. Note that if the ConfigMap in the Ceph cluster's namespace is created before the cluster is created, the daemons will pick up the settings at first launch. To automate the restart of the Ceph daemon pods, you will need to trigger an update to the pod specs. The simplest way to trigger the update is to add annotations or labels to the CephCluster CR for the daemons you want to restart. The operator will then proceed with a rolling update, similar to any other update to the cluster. "},{"location":"Storage-Configuration/Advanced/ceph-configuration/#example","title":"Example","text":"In this example we will set the default pool Warning Modify Ceph settings carefully. You are leaving the sandbox tested by Rook. Changing the settings could result in unhealthy daemons or even data loss if used incorrectly. When the Rook Operator creates a cluster, a placeholder ConfigMap is created that will allow you to override Ceph configuration settings. When the daemon pods are started, the settings specified in this ConfigMap will be merged with the default settings generated by Rook. The default override settings are blank. Cutting out the extraneous properties, we would see the following defaults after creating a cluster: To apply your desired configuration, you will need to update this ConfigMap. The next time the daemon pod(s) start, they will use the updated configs. Modify the settings and save. Each line you add should be indented from the "},{"location":"Storage-Configuration/Advanced/ceph-configuration/#custom-csi-cephconf-settings","title":"Custom CSI ceph.conf Settings","text":"Warning It is highly recommended to use the default setting that comes with CephCSI and this can only be used when absolutely necessary. The If the After the CSI pods are restarted, the new settings should be in effect. "},{"location":"Storage-Configuration/Advanced/ceph-configuration/#example-csi-cephconf-settings","title":"Example CSIceph.conf Settings","text":"In this Example we will set the Warning Modify Ceph settings carefully to avoid modifying the default configuration. Changing the settings could result in unexpected results if used incorrectly. Restart the Rook operator pod and wait for CSI pods to be recreated. "},{"location":"Storage-Configuration/Advanced/ceph-configuration/#osd-crush-settings","title":"OSD CRUSH Settings","text":"A useful view of the CRUSH Map is generated with the following command: In this section we will be tweaking some of the values seen in the output. "},{"location":"Storage-Configuration/Advanced/ceph-configuration/#osd-weight","title":"OSD Weight","text":"The CRUSH weight controls the ratio of data that should be distributed to each OSD. This also means a higher or lower amount of disk I/O operations for an OSD with higher/lower weight, respectively. By default OSDs get a weight relative to their storage capacity, which maximizes overall cluster capacity by filling all drives at the same rate, even if drive sizes vary. This should work for most use-cases, but the following situations could warrant weight changes:
This example sets the weight of osd.0 which is 600GiB "},{"location":"Storage-Configuration/Advanced/ceph-configuration/#osd-primary-affinity","title":"OSD Primary Affinity","text":"When pools are set with a size setting greater than one, data is replicated between nodes and OSDs. For every chunk of data a Primary OSD is selected to be used for reading that data to be sent to clients. You can control how likely it is for an OSD to become a Primary using the Primary Affinity setting. This is similar to the OSD weight setting, except it only affects reads on the storage device, not capacity or writes. In this example we will ensure that "},{"location":"Storage-Configuration/Advanced/ceph-configuration/#osd-dedicated-network","title":"OSD Dedicated Network","text":"Tip This documentation is left for historical purposes. It is still valid, but Rook offers native support for this feature via the CephCluster network configuration. It is possible to configure ceph to leverage a dedicated network for the OSDs to communicate across. A useful overview is the Ceph Networks section of the Ceph documentation. If you declare a cluster network, OSDs will route heartbeat, object replication, and recovery traffic over the cluster network. This may improve performance compared to using a single network, especially when slower network technologies are used. The tradeoff is additional expense and subtle failure modes. Two changes are necessary to the configuration to enable this capability: "},{"location":"Storage-Configuration/Advanced/ceph-configuration/#use-hostnetwork-in-the-cluster-configuration","title":"Use hostNetwork in the cluster configuration","text":"Enable the Important Changing this setting is not supported in a running Rook cluster. Host networking should be configured when the cluster is first created. "},{"location":"Storage-Configuration/Advanced/ceph-configuration/#define-the-subnets-to-use-for-public-and-private-osd-networks","title":"Define the subnets to use for public and private OSD networks","text":"Edit the In the editor, add a custom configuration to instruct ceph which subnet is the public network and which subnet is the private network. For example: After applying the updated rook-config-override configmap, it will be necessary to restart the OSDs by deleting the OSD pods in order to apply the change. Restart the OSD pods by deleting them, one at a time, and running ceph -s between each restart to ensure the cluster goes back to \"active/clean\" state. "},{"location":"Storage-Configuration/Advanced/ceph-configuration/#phantom-osd-removal","title":"Phantom OSD Removal","text":"If you have OSDs in which are not showing any disks, you can remove those \"Phantom OSDs\" by following the instructions below. To check for \"Phantom OSDs\", you can run (example output): The host Now to remove it, use the ID in the first column of the output and replace To recheck that the Phantom OSD was removed, re-run the following command and check if the OSD with the ID doesn't show up anymore: "},{"location":"Storage-Configuration/Advanced/ceph-configuration/#auto-expansion-of-osds","title":"Auto Expansion of OSDs","text":""},{"location":"Storage-Configuration/Advanced/ceph-configuration/#prerequisites-for-auto-expansion-of-osds","title":"Prerequisites for Auto Expansion of OSDs","text":"1) A PVC-based cluster deployed in dynamic provisioning environment with a 2) Create the Rook Toolbox. 
Note Prometheus Operator and Prometheus instances are prerequisites that are created by the auto-grow-storage script.

To scale OSDs Vertically¶

Run the following script to auto-grow the size of OSDs on a PVC-based Rook cluster whenever the OSDs have reached the storage near-full threshold.
For example, if you need to increase the size of OSD by 30% and max disk size is 1Ti "},{"location":"Storage-Configuration/Advanced/ceph-configuration/#to-scale-osds-horizontally","title":"To scale OSDs Horizontally","text":"Run the following script to auto-grow the number of OSDs on a PVC-based Rook cluster whenever the OSDs have reached the storage near-full threshold. Count of OSD represents the number of OSDs you need to add and maxCount represents the number of disks a storage cluster will support. For example, if you need to increase the number of OSDs by 3 and maxCount is 10 "},{"location":"Storage-Configuration/Advanced/ceph-mon-health/","title":"Monitor Health","text":"Failure in a distributed system is to be expected. Ceph was designed from the ground up to deal with the failures of a distributed system. At the next layer, Rook was designed from the ground up to automate recovery of Ceph components that traditionally required admin intervention. Monitor health is the most critical piece of the equation that Rook actively monitors. If they are not in a good state, the operator will take action to restore their health and keep your cluster protected from disaster. The Ceph monitors (mons) are the brains of the distributed cluster. They control all of the metadata that is necessary to store and retrieve your data as well as keep it safe. If the monitors are not in a healthy state you will risk losing all the data in your system. "},{"location":"Storage-Configuration/Advanced/ceph-mon-health/#monitor-identity","title":"Monitor Identity","text":"Each monitor in a Ceph cluster has a static identity. Every component in the cluster is aware of the identity, and that identity must be immutable. The identity of a mon is its IP address. To have an immutable IP address in Kubernetes, Rook creates a K8s service for each monitor. The clusterIP of the service will act as the stable identity. When a monitor pod starts, it will bind to its podIP and it will expect communication to be via its service IP address. "},{"location":"Storage-Configuration/Advanced/ceph-mon-health/#monitor-quorum","title":"Monitor Quorum","text":"Multiple mons work together to provide redundancy by each keeping a copy of the metadata. A variation of the distributed algorithm Paxos is used to establish consensus about the state of the cluster. Paxos requires a super-majority of mons to be running in order to establish quorum and perform operations in the cluster. If the majority of mons are not running, quorum is lost and nothing can be done in the cluster. "},{"location":"Storage-Configuration/Advanced/ceph-mon-health/#how-many-mons","title":"How many mons?","text":"Most commonly a cluster will have three mons. This would mean that one mon could go down and allow the cluster to remain healthy. You would still have 2/3 mons running to give you consensus in the cluster for any operation. For highest availability, an odd number of mons is required. Fifty percent of mons will not be sufficient to maintain quorum. If you had two mons and one of them went down, you would have 1/2 of quorum. Since that is not a super-majority, the cluster would have to wait until the second mon is up again. Rook allows an even number of mons for higher durability. See the disaster recovery guide if quorum is lost and to recover mon quorum from a single mon. The number of mons to create in a cluster depends on your tolerance for losing a node. If you have 1 mon zero nodes can be lost to maintain quorum. 
With 3 mons one node can be lost, and with 5 mons two nodes can be lost. Because the Rook operator will automatically start a new monitor if one dies, you typically only need three mons. The more mons you have, the more overhead there will be to make a change to the cluster, which could become a performance issue in a large cluster. "},{"location":"Storage-Configuration/Advanced/ceph-mon-health/#mitigating-monitor-failure","title":"Mitigating Monitor Failure","text":"Whatever the reason that a mon may fail (power failure, software crash, software hang, etc), there are several layers of mitigation in place to help recover the mon. It is always better to bring an existing mon back up than to failover to bring up a new mon. The Rook operator creates a mon with a Deployment to ensure that the mon pod will always be restarted if it fails. If a mon pod stops for any reason, Kubernetes will automatically start the pod up again. In order for a mon to support a pod/node restart, the mon metadata is persisted to disk, either under the If a mon is unhealthy and the K8s pod restart or liveness probe are not sufficient to bring a mon back up, the operator will make the decision to terminate the unhealthy monitor deployment and bring up a new monitor with a new identity. This is an operation that must be done while mon quorum is maintained by other mons in the cluster. The operator checks for mon health every 45 seconds. If a monitor is down, the operator will wait 10 minutes before failing over the unhealthy mon. These two intervals can be configured as parameters to the CephCluster CR (see below). If the intervals are too short, it could be unhealthy if the mons are failed over too aggressively. If the intervals are too long, the cluster could be at risk of losing quorum if a new monitor is not brought up before another mon fails. If you want to force a mon to failover for testing or other purposes, you can scale down the mon deployment to 0, then wait for the timeout. Note that the operator may scale up the mon again automatically if the operator is restarted or if a full reconcile is triggered, such as when the CephCluster CR is updated. If the mon pod is in pending state and couldn't be assigned to a node (say, due to node drain), then the operator will wait for the timeout again before the mon failover. So the timeout waiting for the mon failover will be doubled in this case. To disable monitor automatic failover, the Rook will create mons with pod names such as mon-a, mon-b, and mon-c. Let's say mon-b had an issue and the pod failed. After a failover, you will see the unhealthy mon removed and a new mon added such as mon-d. A fully healthy mon quorum is now running again. From the toolbox we can verify the status of the health mon quorum: "},{"location":"Storage-Configuration/Advanced/ceph-mon-health/#automatic-monitor-failover","title":"Automatic Monitor Failover","text":"Rook will automatically fail over the mons when the following settings are updated in the CephCluster CR:
Ceph Object Storage Daemons (OSDs) are the heart and soul of the Ceph storage platform. Each OSD manages a local device and together they provide the distributed storage. Rook will automate creation and management of OSDs to hide the complexity based on the desired state in the CephCluster CR as much as possible. This guide will walk through some of the scenarios to configure OSDs where more configuration may be required. "},{"location":"Storage-Configuration/Advanced/ceph-osd-mgmt/#osd-health","title":"OSD Health","text":"The rook-ceph-tools pod provides a simple environment to run Ceph tools. The Once the is created, connect to the pod to execute the "},{"location":"Storage-Configuration/Advanced/ceph-osd-mgmt/#add-an-osd","title":"Add an OSD","text":"The QuickStart Guide will provide the basic steps to create a cluster and start some OSDs. For more details on the OSD settings also see the Cluster CRD documentation. If you are not seeing OSDs created, see the Ceph Troubleshooting Guide. To add more OSDs, Rook will automatically watch for new nodes and devices being added to your cluster. If they match the filters or other settings in the In more dynamic environments where storage can be dynamically provisioned with a raw block storage provider, the OSDs can be backed by PVCs. See the To add more OSDs, you can either increase the To remove an OSD due to a failed disk or other re-configuration, consider the following to ensure the health of the data through the removal process:
If all the PGs are active+clean, it is safe to proceed. Update your CephCluster CR. Depending on your CR settings, you may need to remove the device from the list or update the device filter. Important On host-based clusters, you may need to stop the Rook Operator while performing OSD removal steps in order to prevent Rook from detecting the old OSD and trying to re-create it before the disk is wiped or removed. To stop the Rook Operator, run the first command sketched below. You must perform the steps below to (1) purge the OSD and either (2.a) delete the underlying data or (2.b) replace the disk before starting the Rook Operator again. Once you have done that, you can start the Rook operator again with the second command:
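A hedged sketch of the stop/start commands (deployment name and namespace assumed to be the defaults):

```bash
# Sketch only: stop the Rook operator during the OSD removal steps...
kubectl -n rook-ceph scale deployment rook-ceph-operator --replicas=0

# ...and start it again once the OSD has been purged and the disk handled.
kubectl -n rook-ceph scale deployment rook-ceph-operator --replicas=1
```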
If you later increase the count in the device set, note that the operator will create PVCs with the highest index that is not currently in use by existing OSD PVCs.

Confirm the OSD is down¶

If you want to remove an unhealthy OSD, the osd pod may be in an error state such as CrashLoopBackOff. In that case, confirm the OSD is down before purging it.

Purge the OSD with kubectl¶

Note The

Purge the OSD with a Job¶

OSD removal can be automated with the example found in the rook-ceph-purge-osd job. In the osd-purge.yaml, change the OSD IDs to the ID(s) of the OSDs you want to remove.
If you want to remove OSDs by hand, continue with the following sections. However, we recommend you use the above-mentioned steps to avoid operation errors.

Purge the OSD manually¶

If the OSD purge job fails or you need fine-grained control of the removal, here are the individual commands that can be run from the toolbox.
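A hedged sketch of the manual purge, run from the toolbox pod (replace <ID> with the numeric OSD id):

```bash
# Sketch only: mark the OSD out, wait for data to migrate, then purge it.
ceph osd out osd.<ID>
ceph status                         # wait until PGs are active+clean again
ceph osd safe-to-destroy osd.<ID>   # confirm Ceph considers it removable
ceph osd purge <ID> --yes-i-really-mean-it
ceph osd tree                       # verify the OSD is no longer listed
```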
The operator can automatically remove OSD deployments that are considered "safe-to-destroy" by Ceph. After the steps above, the OSD will be considered safe to remove since the data has all been moved to other OSDs. But this will only be done automatically by the operator if you have this setting in the cluster CR; otherwise, you will need to delete the deployment directly (both options are sketched below). In a PVC-based cluster, also remove the orphaned PVC, if necessary.
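A hedged sketch of the two options referenced above (field placement and deployment naming assumed from the default conventions):

```bash
# Sketch only: option 1 - let the operator remove safe-to-destroy OSDs
# by setting this in the CephCluster spec:
#   spec:
#     removeOSDsIfOutAndSafeToRemove: true
#
# Option 2 - delete the OSD deployment directly (replace <ID>):
kubectl -n rook-ceph delete deployment rook-ceph-osd-<ID>
```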
Note The OSD might have a different ID than the previous OSD that was replaced. "},{"location":"Storage-Configuration/Advanced/configuration/","title":"Configuration","text":"For most any Ceph cluster, the user will want to--and may need to--change some Ceph configurations. These changes often may be warranted in order to alter performance to meet SLAs or to update default data resiliency settings. Warning Modify Ceph settings carefully, and review the Ceph configuration documentation before making any changes. Changing the settings could result in unhealthy daemons or even data loss if used incorrectly. "},{"location":"Storage-Configuration/Advanced/configuration/#required-configurations","title":"Required configurations","text":"Rook and Ceph both strive to make configuration as easy as possible, but there are some configuration options which users are well advised to consider for any production cluster. "},{"location":"Storage-Configuration/Advanced/configuration/#default-pg-and-pgp-counts","title":"Default PG and PGP counts","text":"The number of PGs and PGPs can be configured on a per-pool basis, but it is advised to set default values that are appropriate for your Ceph cluster. Appropriate values depend on the number of OSDs the user expects to have backing each pool. These can be configured by declaring pg_num and pgp_num parameters under CephBlockPool resource. For determining the right value for pg_num please refer placement group sizing In this example configuration, 128 PGs are applied to the pool: Ceph OSD and Pool config docs provide detailed information about how to tune these parameters. Nautilus introduced the PG auto-scaler mgr module capable of automatically managing PG and PGP values for pools. Please see Ceph New in Nautilus: PG merging and autotuning for more information about this module. The To disable this module, in the CephCluster CR: With that setting, the autoscaler will be enabled for all new pools. If you do not desire to have the autoscaler enabled for all new pools, you will need to use the Rook toolbox to enable the module and enable the autoscaling on individual pools. "},{"location":"Storage-Configuration/Advanced/configuration/#specifying-configuration-options","title":"Specifying configuration options","text":""},{"location":"Storage-Configuration/Advanced/configuration/#toolbox-ceph-cli","title":"Toolbox + Ceph CLI","text":"The most recommended way of configuring Ceph is to set Ceph's configuration directly. The first method for doing so is to use Ceph's CLI from the Rook toolbox pod. Using the toolbox pod is detailed here. From the toolbox, the user can change Ceph configurations, enable manager modules, create users and pools, and much more. "},{"location":"Storage-Configuration/Advanced/configuration/#ceph-dashboard","title":"Ceph Dashboard","text":"The Ceph Dashboard, examined in more detail here, is another way of setting some of Ceph's configuration directly. Configuration by the Ceph dashboard is recommended with the same priority as configuration via the Ceph CLI (above). "},{"location":"Storage-Configuration/Advanced/configuration/#advanced-configuration-via-cephconf-override-configmap","title":"Advanced configuration via ceph.conf override ConfigMap","text":"Setting configs via Ceph's CLI requires that at least one mon be available for the configs to be set, and setting configs via dashboard requires at least one mgr to be available. 
Ceph may also have a small number of very advanced settings that aren't able to be modified easily via CLI or dashboard. The least recommended method for configuring Ceph is intended as a last-resort fallback in situations like these. This is covered in detail here. "},{"location":"Storage-Configuration/Advanced/key-management-system/","title":"Key Management System","text":"Rook has the ability to encrypt OSDs of clusters running on PVC via the flag ( The
Note Currently key rotation is supported when the Key Encryption Keys are stored in a Kubernetes Secret or Vault KMS. Supported KMS providers:
Rook supports storing OSD encryption keys in HashiCorp Vault KMS. "},{"location":"Storage-Configuration/Advanced/key-management-system/#authentication-methods","title":"Authentication methods","text":"Rook support two authentication methods:
When using the token-based authentication, a Kubernetes Secret must be created to hold the token. This is governed by the Note: Rook supports all the Vault environment variables. The Kubernetes Secret You can create a token in Vault by running the following command: Refer to the official vault document for more details on how to create a token. For which policy to apply see the next section. In order for Rook to connect to Vault, you must configure the following in your "},{"location":"Storage-Configuration/Advanced/key-management-system/#kubernetes-based-authentication","title":"Kubernetes-based authentication","text":"In order to use the Kubernetes Service Account authentication method, the following must be run to properly configure Vault: Once done, your Note The As part of the token, here is an example of a policy that can be used: You can write the policy like so and then create a token: In the above example, Vault's secret backend path name is If a different path is used, the This is an advanced but recommended configuration for production deployments, in this case the Each secret keys are expected to be:
For instance Note: if you are using self-signed certificates (not known/approved by a proper CA) you must pass Rook supports storing OSD encryption keys in IBM Key Protect. The current implementation stores OSD encryption keys as Standard Keys using the Bring Your Own Key (BYOK) method. This means that the Key Protect instance policy must have Standard Imported Key enabled. "},{"location":"Storage-Configuration/Advanced/key-management-system/#configuration","title":"Configuration","text":"First, you need to provision the Key Protect service on the IBM Cloud. Once completed, retrieve the instance ID. Make a record of it; we need it in the CRD. On the IBM Cloud, the user must create a Service ID, then assign an Access Policy to this service. Ultimately, a Service API Key needs to be generated. All the steps are summarized in the official documentation. The Service ID must be granted access to the Key Protect Service. Once the Service API Key is generated, store it in a Kubernetes Secret. In order for Rook to connect to IBM Key Protect, you must configure the following in your More options are supported such as:
Rook supports storing OSD encryption keys in a Key Management Interoperability Protocol (KMIP) supported KMS. The current implementation stores OSD encryption keys using the Register operation. Keys are fetched and deleted using the Get and Destroy operations, respectively. "},{"location":"Storage-Configuration/Advanced/key-management-system/#configuration_1","title":"Configuration","text":"The Secret with credentials for the KMIP KMS is expected to contain the following. In order for Rook to connect to KMIP, you must configure the following in your "},{"location":"Storage-Configuration/Advanced/key-management-system/#azure-key-vault","title":"Azure Key Vault","text":"Rook supports storing OSD encryption keys in Azure Key Vault. "},{"location":"Storage-Configuration/Advanced/key-management-system/#client-authentication","title":"Client Authentication","text":"Different methods are available in Azure to authenticate a client. Rook supports the Azure-recommended method of authentication with a Service Principal and a certificate. Refer to the following Azure documentation to set up the key vault and authenticate to it via a service principal and certificate
Provide the following KMS connection details in order to connect with Azure Key Vault.
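As a sketch only, the connection details for Azure Key Vault are expected to resemble the following. The key names here are assumptions based on the azure-kv provider and should be verified against the Rook KMS reference for your version; the values are placeholders:

```yaml
security:
  kms:
    connectionDetails:
      KMS_PROVIDER: azure-kv
      AZURE_VAULT_URL: https://<vault-name>.vault.azure.net
      AZURE_CLIENT_ID: <service principal application id>
      AZURE_TENANT_ID: <tenant id>
      # Kubernetes Secret holding the service principal certificate
      AZURE_CERT_SECRET_NAME: azure-cert-secret
```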
Block storage allows a single pod to mount storage. This guide shows how to create a simple, multi-tier web application on Kubernetes using persistent volumes enabled by Rook. "},{"location":"Storage-Configuration/Block-Storage-RBD/block-storage/#prerequisites","title":"Prerequisites","text":"This guide assumes a Rook cluster as explained in the Quickstart. "},{"location":"Storage-Configuration/Block-Storage-RBD/block-storage/#provision-storage","title":"Provision Storage","text":"Before Rook can provision storage, a Note This sample requires at least 1 OSD per node, with each OSD located on 3 different nodes. Each OSD must be located on a different node, because the Save this If you've deployed the Rook operator in a namespace other than \"rook-ceph\", change the prefix in the provisioner to match the namespace you used. For example, if the Rook operator is running in the namespace \"my-namespace\" the provisioner value should be \"my-namespace.rbd.csi.ceph.com\". Create the storage class. Note As specified by Kubernetes, when using the We create a sample app to consume the block storage provisioned by Rook with the classic wordpress and mysql apps. Both of these apps will make use of block volumes provisioned by Rook. Start mysql and wordpress from the Both of these apps create a block volume and mount it to their respective pod. You can see the Kubernetes volume claims by running the following: Example Output: Once the wordpress and mysql pods are in the Example Output: You should see the wordpress app running. If you are using Minikube, the Wordpress URL can be retrieved with this one-line command: Note When running in a vagrant environment, there will be no external IP address to reach wordpress with. You will only be able to reach wordpress via the With the pool that was created above, we can also create a block image and mount it directly in a pod. See the Direct Block Tools topic for more details. "},{"location":"Storage-Configuration/Block-Storage-RBD/block-storage/#teardown","title":"Teardown","text":"To clean up all the artifacts created by the block demo: "},{"location":"Storage-Configuration/Block-Storage-RBD/block-storage/#advanced-example-erasure-coded-block-storage","title":"Advanced Example: Erasure Coded Block Storage","text":"If you want to use erasure coded pool with RBD, your OSDs must use Attention This example requires at least 3 bluestore OSDs, with each OSD located on a different node. The OSDs must be located on different nodes, because the To be able to use an erasure coded pool you need to create two pools (as seen below in the definitions): one erasure coded and one replicated. Attention This example requires at least 3 bluestore OSDs, with each OSD located on a different node. The OSDs must be located on different nodes, because the The erasure coded pool must be set as the If a node goes down where a pod is running where a RBD RWO volume is mounted, the volume cannot automatically be mounted on another node. The node must be guaranteed to be offline before the volume can be mounted on another node. "},{"location":"Storage-Configuration/Block-Storage-RBD/block-storage/#configure-csi-addons","title":"Configure CSI-Addons","text":"Deploy csi-addons controller and enable Warning Automated node loss handling is currently disabled, please refer to the manual steps to recover from the node loss. We are actively working on a new design for this feature. For more details see the tracking issue. 
When a node is confirmed to be down, add the following taints to the node: After the taint is added to the node, Rook will automatically blocklist the node to prevent connections to Ceph from the RBD volume on that node. To verify a node is blocklisted: The node is blocklisted if the state is If the node comes back online, the network fence can be removed from the node by removing the node taints: "},{"location":"Storage-Configuration/Block-Storage-RBD/rbd-async-disaster-recovery-failover-failback/","title":"RBD Asynchronous DR Failover and Failback","text":""},{"location":"Storage-Configuration/Block-Storage-RBD/rbd-async-disaster-recovery-failover-failback/#planned-migration-and-disaster-recovery","title":"Planned Migration and Disaster Recovery","text":"Rook comes with volume replication support, which allows users to perform disaster recovery and planned migration of clusters. This document describes the failover and failback procedures for disaster recovery and planned migration use cases. Note The document assumes that RBD mirroring is set up between the peer clusters. For information on RBD mirroring and how to set it up using Rook, please refer to the rbd-mirroring guide. "},{"location":"Storage-Configuration/Block-Storage-RBD/rbd-async-disaster-recovery-failover-failback/#planned-migration","title":"Planned Migration","text":"Info Use cases: datacenter maintenance, technology refresh, disaster avoidance, etc. "},{"location":"Storage-Configuration/Block-Storage-RBD/rbd-async-disaster-recovery-failover-failback/#relocation","title":"Relocation","text":"The relocation operation is the process of switching production to a backup facility (normally your recovery site) or vice versa. For relocation, access to the image on the primary site should be stopped. The image should then be made primary on the secondary cluster so that access can be resumed there. Note A periodic or one-time backup of the application should be available for restore on the secondary site (cluster-2). Follow the below steps for planned migration of a workload from the primary cluster to the secondary cluster:
Warning In the asynchronous disaster recovery use case, the data is not complete; only crash-consistent data as of the last snapshot interval is available on the peer cluster. "},{"location":"Storage-Configuration/Block-Storage-RBD/rbd-async-disaster-recovery-failover-failback/#disaster-recovery","title":"Disaster Recovery","text":"Info Use cases: natural disasters, power failures, system failures and crashes, etc. Note To effectively resume operations after a failover/relocation, backups of the Kubernetes artifacts like Deployment, PVC, PV, etc. need to be created beforehand by the admin, so that the application can be restored on the peer cluster. For more information, see backup and restore. "},{"location":"Storage-Configuration/Block-Storage-RBD/rbd-async-disaster-recovery-failover-failback/#failover-abrupt-shutdown","title":"Failover (abrupt shutdown)","text":"In the case of disaster recovery, create the VolumeReplication CR at the secondary site. Since the connection to the primary site is lost, the operator automatically sends a GRPC request down to the driver to forcefully mark the dataSource as
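As a sketch of that forced promotion, a VolumeReplication CR created on the secondary site (assuming the csi-addons replication API group and an rbd-pvc restored from backup; names are illustrative) might look like this:

```yaml
apiVersion: replication.storage.openshift.io/v1alpha1
kind: VolumeReplication
metadata:
  name: pvc-volumereplication
  namespace: default
spec:
  volumeReplicationClass: rbd-volumereplicationclass
  replicationState: primary     # promote the image on the secondary site
  dataSource:
    kind: PersistentVolumeClaim
    name: rbd-pvc               # the PVC restored from backup on the secondary site
  autoResync: false
```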
Once the failed cluster is recovered on the primary site and you want to failback from secondary site, follow the below steps:
Disaster recovery (DR) is an organization's ability to react to and recover from an incident that negatively affects business operations. A DR plan comprises strategies for minimizing the consequences of a disaster so that the organization can continue to operate, or quickly resume key operations. Disaster recovery is therefore one aspect of business continuity, and RBD mirroring is one solution for achieving it. "},{"location":"Storage-Configuration/Block-Storage-RBD/rbd-mirroring/#rbd-mirroring","title":"RBD Mirroring","text":"RBD mirroring is asynchronous replication of RBD images between multiple Ceph clusters. This capability is available in two modes:
Note This document sheds light on rbd mirroring and how to set it up using rook. See also the topic on Failover and Failback "},{"location":"Storage-Configuration/Block-Storage-RBD/rbd-mirroring/#create-rbd-pools","title":"Create RBD Pools","text":"In this section, we create specific RBD pools that are RBD mirroring enabled for use with the DR use case. Execute the following steps on each peer cluster to create mirror enabled pools:
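A minimal mirroring-enabled pool looks like the following (a sketch; the pool name and replication size are illustrative):

```yaml
apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
  name: mirroredpool
  namespace: rook-ceph
spec:
  replicated:
    size: 3
  mirroring:
    enabled: true
    # "image" mirrors individual images; pool-wide mirroring is also possible
    mode: image
```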
Note Pool name across the cluster peers must be the same for RBD replication to function. See the CephBlockPool documentation for more details. Note It is also feasible to edit existing pools and enable them for replication. "},{"location":"Storage-Configuration/Block-Storage-RBD/rbd-mirroring/#bootstrap-peers","title":"Bootstrap Peers","text":"In order for the rbd-mirror daemon to discover its peer cluster, the peer must be registered and a user account must be created. The following steps enable bootstrapping peers to discover and authenticate to each other:
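As a sketch of the bootstrap flow (resource and secret names are illustrative), the bootstrap peer Secret generated for a mirroring-enabled pool can be looked up from the pool status and then copied to the peer cluster:

```console
# On the primary cluster: find the name of the generated bootstrap peer Secret
kubectl -n rook-ceph get cephblockpool mirroredpool \
  -o jsonpath='{.status.info.rbdMirrorBootstrapPeerSecretName}'

# Export that Secret and create it on the peer cluster so the rbd-mirror daemon can authenticate
kubectl -n rook-ceph get secret pool-peer-token-mirroredpool -o yaml > peer-token.yaml
kubectl --context=peer-cluster -n rook-ceph create -f peer-token.yaml
```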
Here,
For more details, refer to the official rbd-mirror documentation on how to create a bootstrap peer. "},{"location":"Storage-Configuration/Block-Storage-RBD/rbd-mirroring/#configure-the-rbdmirror-daemon","title":"Configure the RBDMirror Daemon","text":"Replication is handled by the rbd-mirror daemon, which is responsible for pulling image updates from the remote peer cluster and applying them to images within the local cluster. The rbd-mirror daemon(s) are created through the custom resource definitions (CRDs), as follows:
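A minimal example (the name is illustrative; a sketch rather than the exact manifest from the Rook examples):

```yaml
apiVersion: ceph.rook.io/v1
kind: CephRBDMirror
metadata:
  name: my-rbd-mirror
  namespace: rook-ceph
spec:
  # number of rbd-mirror daemon pods to run
  count: 1
```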
See the CephRBDMirror CRD for more details on the mirroring settings. "},{"location":"Storage-Configuration/Block-Storage-RBD/rbd-mirroring/#add-mirroring-peer-information-to-rbd-pools","title":"Add mirroring peer information to RBD pools","text":"Each pool can have its own peer. To add the peer information, patch the already created mirroring enabled pool to update the CephBlockPool CRD. "},{"location":"Storage-Configuration/Block-Storage-RBD/rbd-mirroring/#create-volumereplication-crds","title":"Create VolumeReplication CRDs","text":"Volume Replication Operator follows controller pattern and provides extended APIs for storage disaster recovery. The extended APIs are provided via Custom Resource Definition(CRD). Create the VolumeReplication CRDs on all the peer clusters. "},{"location":"Storage-Configuration/Block-Storage-RBD/rbd-mirroring/#enable-csi-replication-sidecars","title":"Enable CSI Replication Sidecars","text":"To achieve RBD Mirroring,
Execute the following steps on each peer cluster to enable the OMap generator and CSIADDONS sidecars:
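In practice this amounts to flipping two settings in the operator configuration ConfigMap; a minimal sketch, assuming the operator runs in the rook-ceph namespace:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: rook-ceph-operator-config
  namespace: rook-ceph # namespace:operator
data:
  CSI_ENABLE_OMAP_GENERATOR: "true"
  CSI_ENABLE_CSIADDONS: "true"
```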
VolumeReplication CRDs provide support for two custom resources:
The guide below assumes that we have a PVC (rbd-pvc) in BOUND state, created using a StorageClass with "},{"location":"Storage-Configuration/Block-Storage-RBD/rbd-mirroring/#create-a-volume-replication-class-cr","title":"Create a Volume Replication Class CR","text":"In this case, we create a Volume Replication Class on cluster-1 Note The
Note
To check VolumeReplication CR status: "},{"location":"Storage-Configuration/Block-Storage-RBD/rbd-mirroring/#backup-restore","title":"Backup & Restore","text":"Note To effectively resume operations after a failover/relocation, backup of the kubernetes artifacts like deployment, PVC, PV, etc need to be created beforehand by the admin; so that the application can be restored on the peer cluster. Here, we take a backup of PVC and PV object on one site, so that they can be restored later to the peer cluster. "},{"location":"Storage-Configuration/Block-Storage-RBD/rbd-mirroring/#take-backup-on-cluster-1","title":"Take backup on cluster-1","text":"
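A simple sketch of such a backup, assuming kubectl contexts named cluster-1 and cluster-2 and the rbd-pvc used throughout this guide:

```console
# Save the PVC and its bound PV to files that can later be restored on the peer cluster
kubectl --context=cluster-1 get pvc rbd-pvc -o yaml > rbd-pvc-backup.yaml
kubectl --context=cluster-1 get pv \
  "$(kubectl --context=cluster-1 get pvc rbd-pvc -o jsonpath='{.spec.volumeName}')" \
  -o yaml > rbd-pv-backup.yaml
```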
Note We can also take backup using external tools like Velero. See velero documentation for more information. "},{"location":"Storage-Configuration/Block-Storage-RBD/rbd-mirroring/#restore-the-backup-on-cluster-2","title":"Restore the backup on cluster-2","text":"
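Restoring is then the mirror image of the backup step (again a sketch, assuming the cluster-2 context and the files produced above):

```console
kubectl --context=cluster-2 create -f rbd-pv-backup.yaml
kubectl --context=cluster-2 create -f rbd-pvc-backup.yaml
```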
"},{"location":"Storage-Configuration/Ceph-CSI/ceph-csi-drivers/","title":"Ceph CSI Drivers","text":"There are three CSI drivers integrated with Rook that are used in different scenarios:
The Ceph Filesystem (CephFS) and RADOS Block Device (RBD) drivers are enabled automatically by the Rook operator. The NFS driver is disabled by default. All drivers will be started in the same namespace as the operator when the first CephCluster CR is created. "},{"location":"Storage-Configuration/Ceph-CSI/ceph-csi-drivers/#supported-versions","title":"Supported Versions","text":"The two most recent Ceph CSI version are supported with Rook. Refer to ceph csi releases for more information. "},{"location":"Storage-Configuration/Ceph-CSI/ceph-csi-drivers/#static-provisioning","title":"Static Provisioning","text":"The RBD and CephFS drivers support the creation of static PVs and static PVCs from an existing RBD image or CephFS volume/subvolume. Refer to the static PVC documentation for more information. "},{"location":"Storage-Configuration/Ceph-CSI/ceph-csi-drivers/#configure-csi-drivers-in-non-default-namespace","title":"Configure CSI Drivers in non-default namespace","text":"If you've deployed the Rook operator in a namespace other than To find the provisioner name in the example storageclasses and volumesnapshotclass, search for: To use a custom prefix for the CSI drivers instead of the namespace prefix, set the Once the configmap is updated, the CSI drivers will be deployed with the The same prefix must be set in the volumesnapshotclass as well: When the prefix is set, the driver names will be:
Note Please be careful when setting the To find the provisioner name in the example storageclasses and volumesnapshotclass, search for: All CSI pods are deployed with a sidecar container that provides a Prometheus metric for tracking whether the CSI plugin is alive and running. Check the monitoring documentation to see how to integrate CSI liveness and GRPC metrics into Ceph monitoring. "},{"location":"Storage-Configuration/Ceph-CSI/ceph-csi-drivers/#dynamically-expand-volume","title":"Dynamically Expand Volume","text":""},{"location":"Storage-Configuration/Ceph-CSI/ceph-csi-drivers/#prerequisites","title":"Prerequisites","text":"To expand the PVC the controlling StorageClass must have To support RBD Mirroring, the CSI-Addons sidecar will be started in the RBD provisioner pod. CSI-Addons support the To enable the CSIAddons sidecar and deploy the controller, follow the steps below "},{"location":"Storage-Configuration/Ceph-CSI/ceph-csi-drivers/#ephemeral-volume-support","title":"Ephemeral volume support","text":"The generic ephemeral volume feature adds support for specifying PVCs in the For example: A volume claim template is defined inside the pod spec, and defines a volume to be provisioned and used by the pod within its lifecycle. Volumes are provisioned when a pod is spawned and destroyed when the pod is deleted. Refer to the ephemeral-doc for more info. See example manifests for an RBD ephemeral volume and a CephFS ephemeral volume. "},{"location":"Storage-Configuration/Ceph-CSI/ceph-csi-drivers/#csi-addons-controller","title":"CSI-Addons Controller","text":"The CSI-Addons Controller handles requests from users. Users create a CR that the controller inspects and forwards to one or more CSI-Addons sidecars for execution. "},{"location":"Storage-Configuration/Ceph-CSI/ceph-csi-drivers/#deploying-the-controller","title":"Deploying the controller","text":"Deploy the controller by running the following commands: This creates the required CRDs and configures permissions. "},{"location":"Storage-Configuration/Ceph-CSI/ceph-csi-drivers/#enable-the-csi-addons-sidecar","title":"Enable the CSI-Addons Sidecar","text":"To use the features provided by the CSI-Addons, the Execute the following to enable the CSI-Addons sidecars:
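The sidecar is toggled through the operator configuration; a one-line patch such as the following is usually enough (a sketch, with the operator namespace assumed to be rook-ceph):

```console
kubectl -n rook-ceph patch configmap rook-ceph-operator-config \
  --type merge -p '{"data":{"CSI_ENABLE_CSIADDONS":"true"}}'
```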
CSI-Addons supports the following operations:
Ceph-CSI supports encrypting PersistentVolumeClaims (PVCs) for both RBD and CephFS. This can be achieved using LUKS for RBD and fscrypt for CephFS. More details on encrypting RBD PVCs can be found here, which includes a full list of supported encryption configurations. More details on encrypting CephFS PVCs can be found here. A sample KMS configmap can be found here. Note Not all KMS are compatible with fscrypt. Generally, KMS that either store secrets to use directly (like Vault) or allow access to the plain password (like Kubernetes Secrets) are compatible. Note Rook also supports OSD-level encryption (see Using both RBD PVC encryption and OSD encryption at the same time will lead to double encryption and may reduce read/write performance. Existing Ceph clusters can also enable Ceph-CSI PVC encryption support and multiple kinds of encryption KMS can be used on the same Ceph cluster using different storageclasses. The following steps demonstrate the common process for enabling encryption support for both RBD and CephFS:
Note CephFS encryption requires fscrypt support in the Linux kernel (version 6.6 or higher). "},{"location":"Storage-Configuration/Ceph-CSI/ceph-csi-drivers/#enable-read-affinity-for-rbd-and-cephfs-volumes","title":"Enable Read affinity for RBD and CephFS volumes","text":"Ceph CSI supports mapping RBD volumes with KRBD options and mounting CephFS volumes with ceph mount options to allow serving reads from the OSD closest to the client, according to OSD locations defined in the CRUSH map and topology labels on nodes. Refer to the krbd-options document for more details. Execute the following step to enable read affinity for a specific ceph cluster:
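A hedged sketch of that step, assuming the read affinity settings live under the csi section of the CephCluster CR (verify the exact field names against the CephCluster CRD reference for your Rook version):

```yaml
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  # ...
  csi:
    readAffinity:
      enabled: true
      # Optional: restrict which node labels are used to derive the CRUSH location
      crushLocationLabels:
        - topology.kubernetes.io/zone
        - kubernetes.io/hostname
```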
Ceph CSI will extract the CRUSH location from the topology labels found on the node and pass it through krbd options when mapping RBD volumes. Note This requires Linux kernel version 5.8 or higher. "},{"location":"Storage-Configuration/Ceph-CSI/ceph-csi-snapshot/","title":"Snapshots","text":""},{"location":"Storage-Configuration/Ceph-CSI/ceph-csi-snapshot/#prerequisites","title":"Prerequisites","text":"
Info Just like StorageClass provides a way for administrators to describe the \"classes\" of storage they offer when provisioning a volume, VolumeSnapshotClass provides a way to describe the \"classes\" of storage when provisioning a volume snapshot. "},{"location":"Storage-Configuration/Ceph-CSI/ceph-csi-snapshot/#rbd-snapshots","title":"RBD Snapshots","text":""},{"location":"Storage-Configuration/Ceph-CSI/ceph-csi-snapshot/#rbd-volumesnapshotclass","title":"RBD VolumeSnapshotClass","text":"In VolumeSnapshotClass, the Update the value of the "},{"location":"Storage-Configuration/Ceph-CSI/ceph-csi-snapshot/#volumesnapshot","title":"Volumesnapshot","text":"In snapshot, "},{"location":"Storage-Configuration/Ceph-CSI/ceph-csi-snapshot/#verify-rbd-snapshot-creation","title":"Verify RBD Snapshot Creation","text":" The snapshot will be ready to restore to a new PVC when the In pvc-restore, Please Note: * Create a new PVC from the snapshot "},{"location":"Storage-Configuration/Ceph-CSI/ceph-csi-snapshot/#verify-rbd-clone-pvc-creation","title":"Verify RBD Clone PVC Creation","text":" "},{"location":"Storage-Configuration/Ceph-CSI/ceph-csi-snapshot/#rbd-snapshot-resource-cleanup","title":"RBD snapshot resource Cleanup","text":"To clean your cluster of the resources created by this example, run the following: "},{"location":"Storage-Configuration/Ceph-CSI/ceph-csi-snapshot/#cephfs-snapshots","title":"CephFS Snapshots","text":""},{"location":"Storage-Configuration/Ceph-CSI/ceph-csi-snapshot/#cephfs-volumesnapshotclass","title":"CephFS VolumeSnapshotClass","text":"In VolumeSnapshotClass, the In the volumesnapshotclass, update the value of the "},{"location":"Storage-Configuration/Ceph-CSI/ceph-csi-snapshot/#volumesnapshot_1","title":"VolumeSnapshot","text":"In snapshot, "},{"location":"Storage-Configuration/Ceph-CSI/ceph-csi-snapshot/#verify-cephfs-snapshot-creation","title":"Verify CephFS Snapshot Creation","text":" The snapshot will be ready to restore to a new PVC when In pvc-restore, Create a new PVC from the snapshot "},{"location":"Storage-Configuration/Ceph-CSI/ceph-csi-snapshot/#verify-cephfs-restore-pvc-creation","title":"Verify CephFS Restore PVC Creation","text":" "},{"location":"Storage-Configuration/Ceph-CSI/ceph-csi-snapshot/#cephfs-snapshot-resource-cleanup","title":"CephFS snapshot resource Cleanup","text":"To clean your cluster of the resources created by this example, run the following: "},{"location":"Storage-Configuration/Ceph-CSI/ceph-csi-volume-clone/","title":"Volume clone","text":"The CSI Volume Cloning feature adds support for specifying existing PVCs in the A Clone is defined as a duplicate of an existing Kubernetes Volume that can be consumed as any standard Volume would be. The only difference is that upon provisioning, rather than creating a \"new\" empty Volume, the back end device creates an exact duplicate of the specified Volume. Refer to clone-doc for more info. 
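A minimal clone request is just a PVC whose dataSource points at an existing PVC (names and sizes here are illustrative):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rbd-pvc-clone
spec:
  storageClassName: rook-ceph-block   # must match the parent PVC's storage class
  dataSource:
    kind: PersistentVolumeClaim
    name: rbd-pvc                     # the existing PVC to duplicate
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi                    # must be at least the size of the parent PVC
```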
"},{"location":"Storage-Configuration/Ceph-CSI/ceph-csi-volume-clone/#rbd-volume-cloning","title":"RBD Volume Cloning","text":"In pvc-clone, Please note: * Create a new PVC Clone from the PVC "},{"location":"Storage-Configuration/Ceph-CSI/ceph-csi-volume-clone/#verify-rbd-volume-clone-pvc-creation","title":"Verify RBD volume Clone PVC Creation","text":" "},{"location":"Storage-Configuration/Ceph-CSI/ceph-csi-volume-clone/#rbd-clone-resource-cleanup","title":"RBD clone resource Cleanup","text":"To clean your cluster of the resources created by this example, run the following: "},{"location":"Storage-Configuration/Ceph-CSI/ceph-csi-volume-clone/#cephfs-volume-cloning","title":"CephFS Volume Cloning","text":""},{"location":"Storage-Configuration/Ceph-CSI/ceph-csi-volume-clone/#volume-clone-prerequisites","title":"Volume Clone Prerequisites","text":"
In pvc-clone, Create a new PVC Clone from the PVC "},{"location":"Storage-Configuration/Ceph-CSI/ceph-csi-volume-clone/#verify-cephfs-volume-clone-pvc-creation","title":"Verify CephFS volume Clone PVC Creation","text":" "},{"location":"Storage-Configuration/Ceph-CSI/ceph-csi-volume-clone/#cephfs-clone-resource-cleanup","title":"CephFS clone resource Cleanup","text":"To clean your cluster of the resources created by this example, run the following: "},{"location":"Storage-Configuration/Ceph-CSI/ceph-csi-volume-group-snapshot/","title":"Volume Group Snapshots","text":"Ceph provides the ability to create crash-consistent snapshots of multiple volumes. A group snapshot represents “copies” from multiple volumes that are taken at the same point in time. A group snapshot can be used either to rehydrate new volumes (pre-populated with the snapshot data) or to restore existing volumes to a previous state (represented by the snapshots). "},{"location":"Storage-Configuration/Ceph-CSI/ceph-csi-volume-group-snapshot/#prerequisites","title":"Prerequisites","text":"
Info Created by cluster administrators to describe how volume group snapshots should be created. including the driver information, the deletion policy, etc. "},{"location":"Storage-Configuration/Ceph-CSI/ceph-csi-volume-group-snapshot/#volume-group-snapshots","title":"Volume Group Snapshots","text":""},{"location":"Storage-Configuration/Ceph-CSI/ceph-csi-volume-group-snapshot/#cephfs-volumegroupsnapshotclass","title":"CephFS VolumeGroupSnapshotClass","text":"In VolumeGroupSnapshotClass, the In the "},{"location":"Storage-Configuration/Ceph-CSI/ceph-csi-volume-group-snapshot/#cephfs-volumegroupsnapshot","title":"CephFS VolumeGroupSnapshot","text":"In VolumeGroupSnapshot, "},{"location":"Storage-Configuration/Ceph-CSI/ceph-csi-volume-group-snapshot/#verify-cephfs-groupsnapshot-creation","title":"Verify CephFS GroupSnapshot Creation","text":" The snapshot will be ready to restore to a new PVC when Find the name of the snapshots created by the It will list the PVC's name followed by its snapshot name. In pvc-restore, Create a new PVC from the snapshot "},{"location":"Storage-Configuration/Ceph-CSI/ceph-csi-volume-group-snapshot/#verify-cephfs-restore-pvc-creation","title":"Verify CephFS Restore PVC Creation","text":" "},{"location":"Storage-Configuration/Ceph-CSI/ceph-csi-volume-group-snapshot/#cephfs-volume-group-snapshot-resource-cleanup","title":"CephFS volume group snapshot resource Cleanup","text":"To clean the resources created by this example, run the following: "},{"location":"Storage-Configuration/Ceph-CSI/custom-images/","title":"Custom Images","text":"By default, Rook will deploy the latest stable version of the Ceph CSI driver. Commonly, there is no need to change this default version that is deployed. For scenarios that require deploying a custom image (e.g. downstream releases), the defaults can be overridden with the following settings. The CSI configuration variables are found in the The default upstream images are included below, which you can change to your desired images. "},{"location":"Storage-Configuration/Ceph-CSI/custom-images/#use-private-repository","title":"Use private repository","text":"If image version is not passed along with the image name in any of the variables above, Rook will add the corresponding default version to that image. Example: if If you would like Rook to use the default upstream images, then you may simply remove all variables matching You can use the below command to see the CSI images currently being used in the cluster. Note that not all images (like The default images can also be found with each release in the images list "},{"location":"Storage-Configuration/Monitoring/ceph-dashboard/","title":"Ceph Dashboard","text":"The dashboard is a very helpful tool to give you an overview of the status of your Ceph cluster, including overall health, status of the mon quorum, status of the mgr, osd, and other Ceph daemons, view pools and PG status, show logs for the daemons, and more. Rook makes it simple to enable the dashboard. "},{"location":"Storage-Configuration/Monitoring/ceph-dashboard/#enable-the-ceph-dashboard","title":"Enable the Ceph Dashboard","text":"The dashboard can be enabled with settings in the CephCluster CRD. The CephCluster CRD must have the dashboard The Rook operator will enable the ceph-mgr dashboard module. A service object will be created to expose that port inside the Kubernetes cluster. Rook will enable port 8443 for https access. This example shows that port 8443 was configured. 
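Illustrative output showing the two mgr services (cluster IPs and ages will differ in your cluster):

```console
$ kubectl -n rook-ceph get service
NAME                      TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
rook-ceph-mgr             ClusterIP   10.108.111.192   <none>        9283/TCP   3h
rook-ceph-mgr-dashboard   ClusterIP   10.110.113.240   <none>        8443/TCP   3h
```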
The first service is for reporting the Prometheus metrics, while the latter service is for the dashboard. If you are on a node in the cluster, you will be able to connect to the dashboard by using either the DNS name of the service at After you connect to the dashboard you will need to login for secure access. Rook creates a default user named "},{"location":"Storage-Configuration/Monitoring/ceph-dashboard/#configure-the-dashboard","title":"Configure the Dashboard","text":"The following dashboard configuration settings are supported:
Information about physical disks is available only in Rook host clusters. The Rook manager module is required by the dashboard to obtain the information about physical disks, but it is disabled by default. Before it is enabled, the dashboard 'Physical Disks' section will show an error message. To prepare the Rook manager module to be used in the dashboard, modify your Ceph Cluster CRD: And apply the changes: Once the Rook manager module is enabled as the orchestrator backend, there are two settings required for showing disk information:
Modify the operator.yaml, and apply the changes: "},{"location":"Storage-Configuration/Monitoring/ceph-dashboard/#viewing-the-dashboard-external-to-the-cluster","title":"Viewing the Dashboard External to the Cluster","text":"Commonly you will want to view the dashboard from outside the cluster. For example, on a development machine with the cluster running inside minikube you will want to access the dashboard from the host. There are several ways to expose a service that will depend on the environment you are running in. You can use an Ingress Controller or other methods for exposing services such as NodePort, LoadBalancer, or ExternalIPs. "},{"location":"Storage-Configuration/Monitoring/ceph-dashboard/#node-port","title":"Node Port","text":"The simplest way to expose the service in minikube or similar environment is using the NodePort to open a port on the VM that can be accessed by the host. To create a service with the NodePort, save this yaml as Now create the service: You will see the new service In this example, port If you have a cluster on a cloud provider that supports load balancers, you can create a service that is provisioned with a public hostname. The yaml is the same as Now create the service: You will see the new service Now you can enter the URL in your browser such as If you have a cluster with an nginx Ingress Controller and a Certificate Manager (e.g. cert-manager) then you can create an Ingress like the one below. This example achieves four things:
Customise the Ingress resource to match your cluster. Replace the example domain name Now create the Ingress: You will see the new Ingress And the new Secret for the TLS certificate: You can now browse to Each Rook Ceph cluster has some built in metrics collectors/exporters for monitoring with Prometheus. If you do not have Prometheus running, follow the steps below to enable monitoring of Rook. If your cluster already contains a Prometheus instance, it will automatically discover Rook's scrape endpoint using the standard Attention This assumes that the Prometheus instances is searching all your Kubernetes namespaces for Pods with these annotations. If prometheus is already installed in a cluster, it may not be configured to watch for third-party service monitors such as for Rook. Normally you should be able to add the prometheus annotations First the Prometheus operator needs to be started in the cluster so it can watch for our requests to start monitoring Rook and respond by deploying the correct Prometheus pods and configuration. A full explanation can be found in the Prometheus operator repository on GitHub, but the quick instructions can be found here: Note If the Prometheus Operator is already present in your cluster, the command provided above may fail. For a detailed explanation of the issue and a workaround, please refer to this issue. This will start the Prometheus operator, but before moving on, wait until the operator is in the Once the Prometheus operator is in the With the Prometheus operator running, we can create service monitors that will watch the Rook cluster. There are two sources for metrics collection:
From the root of your locally cloned Rook repo, go to the monitoring directory: Create the service monitor as well as the Prometheus server pod and service: Ensure that the Prometheus server pod gets created and advances to the "},{"location":"Storage-Configuration/Monitoring/ceph-monitoring/#dashboard-config","title":"Dashboard config","text":"Configure the Prometheus endpoint so the dashboard can retrieve metrics from Prometheus with two settings:
The following command can be used to get the Prometheus url: Following is an example to configure the Prometheus endpoint in the CephCluster CR. Note It is not recommended to consume storage from the Ceph cluster for Prometheus. If the Ceph cluster fails, Prometheus would become unresponsive and thus not alert you of the failure. "},{"location":"Storage-Configuration/Monitoring/ceph-monitoring/#prometheus-web-console","title":"Prometheus Web Console","text":"Once the Prometheus server is running, you can open a web browser and go to the URL that is output from this command: You should now see the Prometheus monitoring website. Click on In the dropdown that says Click on the Below the You can find Prometheus Consoles for and from Ceph here: GitHub ceph/cephmetrics - dashboards/current directory. A guide to how you can write your own Prometheus consoles can be found on the official Prometheus site here: Prometheus.io Documentation - Console Templates. "},{"location":"Storage-Configuration/Monitoring/ceph-monitoring/#prometheus-alerts","title":"Prometheus Alerts","text":"To enable the Ceph Prometheus alerts via the helm charts, set the following properties in values.yaml:
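A sketch of those values (key names should be confirmed against the rook-ceph and rook-ceph-cluster charts' values.yaml for your release):

```yaml
# rook-ceph-cluster chart values.yaml
monitoring:
  # create the ServiceMonitor and the PrometheusRule with the Ceph alerts
  enabled: true
  createPrometheusRules: true
```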
Alternatively, to enable the Ceph Prometheus alerts with example manifests follow these steps:
Note This expects the Prometheus Operator and a Prometheus instance to be pre-installed by the admin. "},{"location":"Storage-Configuration/Monitoring/ceph-monitoring/#customize-alerts","title":"Customize Alerts","text":"The Prometheus alerts can be customized with a post-processor using tools such as Kustomize. For example, first extract the helm chart: Now create the desired customization configuration files. This simple example will show how to update the severity of a rule, add a label to a rule, and change the Create a file named kustomization.yaml: Create a file named modifications.yaml Finally, run kustomize to update the desired prometheus rules: "},{"location":"Storage-Configuration/Monitoring/ceph-monitoring/#grafana-dashboards","title":"Grafana Dashboards","text":"The dashboards have been created by @galexrt. For feedback on the dashboards please reach out to him on the Rook.io Slack. Note The dashboards are only compatible with Grafana 7.2.0 or higher. Also note that the dashboards are updated from time to time, to fix issues and improve them. The following Grafana dashboards are available:
The dashboard JSON files are also available on GitHub here When updating Rook, there may be updates to RBAC for monitoring. It is easy to apply the changes with each update or upgrade. This should be done at the same time you update Rook common resources like Hint This is updated automatically if you are upgrading via the helm chart "},{"location":"Storage-Configuration/Monitoring/ceph-monitoring/#teardown","title":"Teardown","text":"To clean up all the artifacts created by the monitoring walk-through, copy/paste the entire block below (note that errors about resources \"not found\" can be ignored): Then the rest of the instructions in the Prometheus Operator docs can be followed to finish cleaning up. "},{"location":"Storage-Configuration/Monitoring/ceph-monitoring/#special-cases","title":"Special Cases","text":""},{"location":"Storage-Configuration/Monitoring/ceph-monitoring/#tectonic-bare-metal","title":"Tectonic Bare Metal","text":"Tectonic strongly discourages the To integrate CSI liveness into ceph monitoring we will need to deploy a service and service monitor. This will create the service monitor to have prometheus monitor CSI Note Please note that the liveness sidecar is disabled by default. To enable it set RBD per-image IO statistics collection is disabled by default. This can be enabled by setting If Prometheus needs to select specific resources, we can do so by injecting labels into these objects and using it as label selector. "},{"location":"Storage-Configuration/Monitoring/ceph-monitoring/#horizontal-pod-scaling-using-kubernetes-event-driven-autoscaling-keda","title":"Horizontal Pod Scaling using Kubernetes Event-driven Autoscaling (KEDA)","text":"Using metrics exported from the Prometheus service, the horizontal pod scaling can use the custom metrics other than CPU and memory consumption. It can be done with help of Prometheus Scaler provided by the KEDA. See the KEDA deployment guide for details. Following is an example to autoscale RGW: Warning During reconciliation of a All CephNFS daemons are configured using shared RADOS objects stored in a Ceph pool named By default, Rook creates the Ceph uses NFS-Ganesha servers. The config file format for these objects is documented in the NFS-Ganesha project. Use Ceph's
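As an illustration, the shared configuration objects can be listed from the toolbox with the rados CLI (a sketch assuming the default .nfs pool, whose RADOS namespace matches the CephNFS name, here my-nfs):

```console
rados --pool .nfs --namespace my-nfs ls
```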
Of note, it is possible to pre-populate the NFS configuration and export objects prior to creating CephNFS server clusters. "},{"location":"Storage-Configuration/NFS/nfs-advanced/#creating-nfs-export-over-rgw","title":"Creating NFS export over RGW","text":"Warning RGW NFS export is experimental for the moment. It is not recommended for scenario of modifying existing content. For creating an NFS export over RGW(CephObjectStore) storage backend, the below command can be used. This creates an export for the "},{"location":"Storage-Configuration/NFS/nfs-csi-driver/","title":"CSI provisioner and driver","text":"Attention This feature is experimental and will not support upgrades to future versions. For this section, we will refer to Rook's deployment examples in the deploy/examples directory. "},{"location":"Storage-Configuration/NFS/nfs-csi-driver/#enabling-the-csi-drivers","title":"Enabling the CSI drivers","text":"The Ceph CSI NFS provisioner and driver require additional RBAC to operate. Apply the Rook will only deploy the Ceph CSI NFS provisioner and driver components when the Note The rook-ceph operator Helm chart will deploy the required RBAC and enable the driver components if In order to create NFS exports via the CSI driver, you must first create a CephFilesystem to serve as the underlying storage for the exports, and you must create a CephNFS to run an NFS server that will expose the exports. RGWs cannot be used for the CSI driver. From the examples, You may need to enable or disable the Ceph orchestrator. You must also create a storage class. Ceph CSI is designed to support any arbitrary Ceph cluster, but we are focused here only on Ceph clusters deployed by Rook. Let's take a look at a portion of the example storage class found at
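A hedged sketch of the relevant portion of that storage class follows; the provisioner is prefixed with the operator namespace, and the parameter names here are assumptions that should be checked against the example manifest in deploy/examples:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-nfs
# <operator-namespace>.nfs.csi.ceph.com
provisioner: rook-ceph.nfs.csi.ceph.com
parameters:
  nfsCluster: my-nfs               # name of the CephNFS
  server: rook-ceph-nfs-my-nfs-a   # the Service created for the CephNFS server
  clusterID: rook-ceph             # namespace of the Rook cluster
  fsName: myfs                     # CephFilesystem backing the exports
  pool: myfs-replicated            # data pool of that filesystem
reclaimPolicy: Delete
allowVolumeExpansion: true
```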
See See After a PVC is created successfully, the "},{"location":"Storage-Configuration/NFS/nfs-csi-driver/#taking-snapshots-of-nfs-exports","title":"Taking snapshots of NFS exports","text":"NFS export PVCs can be snapshotted and later restored to new PVCs. "},{"location":"Storage-Configuration/NFS/nfs-csi-driver/#creating-snapshots","title":"Creating snapshots","text":"First, create a VolumeSnapshotClass as in the example here. The In snapshot, "},{"location":"Storage-Configuration/NFS/nfs-csi-driver/#verifying-snapshots","title":"Verifying snapshots","text":" The snapshot will be ready to restore to a new PVC when In pvc-restore, Create a new PVC from the snapshot. "},{"location":"Storage-Configuration/NFS/nfs-csi-driver/#verifying-restored-pvc-creation","title":"Verifying restored PVC Creation","text":" "},{"location":"Storage-Configuration/NFS/nfs-csi-driver/#cleaning-up-snapshot-resource","title":"Cleaning up snapshot resource","text":"To clean your cluster of the resources created by this example, run the following: "},{"location":"Storage-Configuration/NFS/nfs-csi-driver/#cloning-nfs-exports","title":"Cloning NFS exports","text":""},{"location":"Storage-Configuration/NFS/nfs-csi-driver/#creating-clones","title":"Creating clones","text":"In pvc-clone, Create a new PVC Clone from the PVC as in the example here. "},{"location":"Storage-Configuration/NFS/nfs-csi-driver/#verifying-a-cloned-pvc","title":"Verifying a cloned PVC","text":" "},{"location":"Storage-Configuration/NFS/nfs-csi-driver/#cleaning-up-clone-resources","title":"Cleaning up clone resources","text":"To clean your cluster of the resources created by this example, run the following: "},{"location":"Storage-Configuration/NFS/nfs-csi-driver/#consuming-nfs-from-an-external-source","title":"Consuming NFS from an external source","text":"For consuming NFS services and exports external to the Kubernetes cluster (including those backed by an external standalone Ceph cluster), Rook recommends using Kubernetes regular NFS consumption model. This requires the Ceph admin to create the needed export, while reducing the privileges needed in the client cluster for the NFS volume. Export and get the nfs client to a particular cephFS filesystem: Create the PV and PVC using Rook provides security for CephNFS server clusters through two high-level features: user ID mapping and user authentication. Attention All features in this document are experimental and may not support upgrades to future versions. Attention Some configurations of these features may break the ability to mount NFS storage to pods via PVCs. The NFS CSI driver may not be able to mount exports for pods when ID mapping is configured. "},{"location":"Storage-Configuration/NFS/nfs-security/#user-id-mapping","title":"User ID mapping","text":"User ID mapping allows the NFS server to map connected NFS client IDs to a different user domain, allowing NFS clients to be associated with a particular user in your organization. For example, users stored in LDAP can be associated with NFS users and vice versa. "},{"location":"Storage-Configuration/NFS/nfs-security/#id-mapping-via-sssd","title":"ID mapping via SSSD","text":"SSSD is the System Security Services Daemon. It can be used to provide user ID mapping from a number of sources including LDAP, Active Directory, and FreeIPA. Currently, only LDAP has been tested. 
"},{"location":"Storage-Configuration/NFS/nfs-security/#sssd-configuration","title":"SSSD configuration","text":"SSSD requires a configuration file in order to configure its connection to the user ID mapping system (e.g., LDAP). The file follows the Methods of providing the configuration file are documented in the NFS CRD security section. Recommendations:
A sample The SSSD configuration file may be omitted from the CephNFS spec if desired. In this case, Rook will not set User authentication allows NFS clients and the Rook CephNFS servers to authenticate with each other to ensure security. "},{"location":"Storage-Configuration/NFS/nfs-security/#authentication-through-kerberos","title":"Authentication through Kerberos","text":"Kerberos is the authentication mechanism natively supported by NFS-Ganesha. With NFSv4, individual users are authenticated and not merely client machines. "},{"location":"Storage-Configuration/NFS/nfs-security/#kerberos-configuration","title":"Kerberos configuration","text":"Kerberos authentication requires configuration files in order for the NFS-Ganesha server to authenticate to the Kerberos server (KDC). The requirements are two-parted:
Methods of providing the configuration files are documented in the NFS CRD security section. Recommendations:
A sample Kerberos config file is shown below. The Kerberos config files ( Similarly, the keytab file ( As an example for either of the above cases, you may build files into your custom Ceph container image or use the Vault agent injector to securely add files via annotations on the CephNFS spec (passed to the NFS server pods). "},{"location":"Storage-Configuration/NFS/nfs-security/#nfs-service-principals","title":"NFS service principals","text":"The Kerberos service principal used by Rook's CephNFS servers to authenticate with the Kerberos server is built up from 3 components:
The full service principal name is constructed as Users must add this service principal to their Kerberos server configuration. Example For a CephNFS named \"fileshare\" in the \"business-unit\" Kubernetes namespace that has a Advanced
The kerberos domain name is used to setup the domain name in /etc/idmapd.conf. This domain name is used by idmap to map the kerberos credential to the user uid/gid. Without this configured, NFS-Ganesha will be unable to map the Kerberos principal to an uid/gid and will instead use the configured anonuid/anongid (default: -2) when accessing the local filesystem. "},{"location":"Storage-Configuration/NFS/nfs/","title":"NFS Storage Overview","text":"NFS storage can be mounted with read/write permission from multiple pods. NFS storage may be especially useful for leveraging an existing Rook cluster to provide NFS storage for legacy applications that assume an NFS client connection. Such applications may not have been migrated to Kubernetes or might not yet support PVCs. Rook NFS storage can provide access to the same network filesystem storage from within the Kubernetes cluster via PVC while simultaneously providing access via direct client connection from within or outside of the Kubernetes cluster. Warning Simultaneous access to NFS storage from Pods and from external clients complicates NFS user ID mapping significantly. Client IDs mapped from external clients will not be the same as the IDs associated with the NFS CSI driver, which mount exports for Kubernetes pods. Warning Due to a number of Ceph issues and changes, Rook officially only supports Ceph v16.2.7 or higher for CephNFS. If you are using an earlier version, upgrade your Ceph version following the advice given in Rook's v1.9 NFS docs. Note CephNFSes support NFSv4.1+ access only. Serving earlier protocols inhibits responsiveness after a server restart. "},{"location":"Storage-Configuration/NFS/nfs/#prerequisites","title":"Prerequisites","text":"This guide assumes you have created a Rook cluster as explained in the main quickstart guide as well as a Ceph filesystem which will act as the backing storage for NFS. Many samples reference the CephNFS and CephFilesystem example manifests here and here. "},{"location":"Storage-Configuration/NFS/nfs/#creating-an-nfs-cluster","title":"Creating an NFS cluster","text":"Create the NFS cluster by specifying the desired settings documented for the NFS CRD. "},{"location":"Storage-Configuration/NFS/nfs/#creating-exports","title":"Creating Exports","text":"When a CephNFS is first created, all NFS daemons within the CephNFS cluster will share a configuration with no exports defined. When creating an export, it is necessary to specify the CephFilesystem which will act as the backing storage for the NFS export. RADOS Gateways (RGWs), provided by CephObjectStores, can also be used as backing storage for NFS exports if desired. "},{"location":"Storage-Configuration/NFS/nfs/#using-the-ceph-dashboard","title":"Using the Ceph Dashboard","text":"Exports can be created via the Ceph dashboard as well. To enable and use the Ceph dashboard in Rook, see here. "},{"location":"Storage-Configuration/NFS/nfs/#using-the-ceph-cli","title":"Using the Ceph CLI","text":"The Ceph CLI can be used from the Rook toolbox pod to create and manage NFS exports. To do so, first ensure the necessary Ceph mgr modules are enabled, if necessary, and that the Ceph orchestrator backend is set to Rook. "},{"location":"Storage-Configuration/NFS/nfs/#enable-the-ceph-orchestrator-optional","title":"Enable the Ceph orchestrator (optional)","text":" Ceph's NFS CLI can create NFS exports that are backed by CephFS (a CephFilesystem) or Ceph Object Gateway (a CephObjectStore). 
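From the toolbox, enabling the orchestrator typically amounts to the following (a sketch; skip any module that is already enabled):

```console
ceph mgr module enable rook
ceph mgr module enable nfs
ceph orch set backend rook
```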
For creating an NFS export for the CephNFS and CephFilesystem example manifests, the below command can be used. This creates an export for the The below command will list the current NFS exports for the example CephNFS cluster, which will give the output shown for the current example. The simple If you are done managing NFS exports and don't need the Ceph orchestrator module enabled for anything else, it may be preferable to disable the Rook and NFS mgr modules to free up a small amount of RAM in the Ceph mgr Pod. "},{"location":"Storage-Configuration/NFS/nfs/#mounting-exports","title":"Mounting exports","text":"Each CephNFS server has a unique Kubernetes Service. This is because NFS clients can't readily handle NFS failover. CephNFS services are named with the pattern For each NFS client, choose an NFS service to use for the connection. With NFS v4, you can mount an export by its path using a mount command like below. You can mount all exports at once by omitting the export path and leaving the directory as just "},{"location":"Storage-Configuration/NFS/nfs/#exposing-the-nfs-server-outside-of-the-kubernetes-cluster","title":"Exposing the NFS server outside of the Kubernetes cluster","text":"Use a LoadBalancer Service to expose an NFS server (and its exports) outside of the Kubernetes cluster. The Service's endpoint can be used as the NFS service address when mounting the export manually. We provide an example Service here: Security options for NFS are documented here. "},{"location":"Storage-Configuration/NFS/nfs/#ceph-csi-nfs-provisioner-and-nfs-csi-driver","title":"Ceph CSI NFS provisioner and NFS CSI driver","text":"The NFS CSI provisioner and driver are documented here "},{"location":"Storage-Configuration/NFS/nfs/#advanced-configuration","title":"Advanced configuration","text":"Advanced NFS configuration is documented here "},{"location":"Storage-Configuration/NFS/nfs/#known-issues","title":"Known issues","text":"Known issues are documented on the NFS CRD page. "},{"location":"Storage-Configuration/Object-Storage-RGW/ceph-object-bucket-claim/","title":"Bucket Claim","text":"Rook supports the creation of new buckets and access to existing buckets via two custom resources:
An OBC references a storage class which is created by an administrator. The storage class defines whether the bucket requested is a new bucket or an existing bucket. It also defines the bucket retention policy. Users request a new or existing bucket by creating an OBC which is shown below. The ceph provisioner detects the OBC and creates a new bucket or grants access to an existing bucket, depending on the storage class referenced in the OBC. It also generates a Secret which provides credentials to access the bucket, and a ConfigMap which contains the bucket's endpoint. Application pods consume the information in the Secret and ConfigMap to access the bucket. Please note that to make the provisioner watch only the cluster namespace, you need to set The OBC provisioner name found in the storage class by default includes the operator namespace as a prefix. A custom prefix can be applied by the operator setting in the Note Changing the prefix is not supported on existing clusters. This may impact the function of existing OBCs. "},{"location":"Storage-Configuration/Object-Storage-RGW/ceph-object-bucket-claim/#example","title":"Example","text":""},{"location":"Storage-Configuration/Object-Storage-RGW/ceph-object-bucket-claim/#obc-custom-resource","title":"OBC Custom Resource","text":"
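A minimal OBC requesting a new bucket looks like this (names are illustrative):

```yaml
apiVersion: objectbucket.io/v1alpha1
kind: ObjectBucketClaim
metadata:
  name: ceph-bucket
spec:
  # a unique bucket name is generated with this prefix
  generateBucketName: ceph-bkt
  storageClassName: rook-ceph-bucket
```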
Rook supports the creation of bucket notifications via two custom resources:
A CephBucketNotification defines what bucket actions trigger the notification and which topic to send notifications to. A CephBucketNotification may also define a filter, based on the object's name and other object attributes. Notifications can be associated with buckets created via ObjectBucketClaims by adding labels to an ObjectBucketClaim with the following format: The CephBucketTopic, CephBucketNotification and ObjectBucketClaim must all belong to the same namespace. If a bucket was created manually (not via an ObjectBucketClaim), notifications on this bucket should also be created manually. However, topics in these notifications may reference topics that were created via CephBucketTopic resources. "},{"location":"Storage-Configuration/Object-Storage-RGW/ceph-object-bucket-notifications/#topics","title":"Topics","text":"A CephBucketTopic represents an endpoint (of types Kafka, AMQP0.9.1 or HTTP), or a specific resource inside this endpoint (e.g. a Kafka or an AMQP topic, or a specific URI in an HTTP server). The CephBucketTopic also holds any additional info needed for a CephObjectStore's RADOS Gateways (RGW) to connect to the endpoint. Topics don't belong to a specific bucket or notification. Notifications from multiple buckets may be sent to the same topic, and one bucket (via multiple CephBucketNotifications) may send notifications to multiple topics. "},{"location":"Storage-Configuration/Object-Storage-RGW/ceph-object-bucket-notifications/#notification-reliability-and-delivery","title":"Notification Reliability and Delivery","text":"Notifications may be sent synchronously, as part of the operation that triggered them. In this mode, the operation is acknowledged only after the notification is sent to the topic's configured endpoint, which means that the round-trip time of the notification is added to the latency of the operation itself. The original triggering operation will still be considered successful even if the notification fails with an error, cannot be delivered, or times out. Notifications may also be sent asynchronously. They will be committed into persistent storage and then asynchronously sent to the topic's configured endpoint. In this case, the only latency added to the original operation is that of committing the notification to persistent storage. If the notification fails with an error, cannot be delivered, or times out, it will be retried until successfully acknowledged. "},{"location":"Storage-Configuration/Object-Storage-RGW/ceph-object-bucket-notifications/#example","title":"Example","text":""},{"location":"Storage-Configuration/Object-Storage-RGW/ceph-object-bucket-notifications/#cephbuckettopic-custom-resource","title":"CephBucketTopic Custom Resource","text":"
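As a sketch, a CephBucketTopic pointing at an HTTP endpoint might look like the following; the field names should be checked against the CephBucketTopic reference, and the endpoint and store names are illustrative:

```yaml
apiVersion: ceph.rook.io/v1
kind: CephBucketTopic
metadata:
  name: my-topic
  namespace: default            # must match the namespace of the notification and the OBC
spec:
  objectStoreName: my-store     # CephObjectStore whose RGWs will send the notifications
  objectStoreNamespace: rook-ceph
  endpoint:
    http:
      uri: http://my-notification-endpoint:8080
```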
Note In case of Kafka and AMQP, the consumer of the notifications is not required to ack the notifications, since the broker persists the messages before delivering them to their final destinations. "},{"location":"Storage-Configuration/Object-Storage-RGW/ceph-object-bucket-notifications/#cephbucketnotification-custom-resource","title":"CephBucketNotification Custom Resource","text":"
For a notification to be associated with a bucket, a label must be added to the OBC indicating the name of the notification. To delete a notification from a bucket, the matching label must be removed. When an OBC is deleted, all of the notifications associated with the bucket will be deleted as well. "},{"location":"Storage-Configuration/Object-Storage-RGW/ceph-object-multisite/","title":"Object Store Multisite","text":"Multisite is a feature of Ceph that allows object stores to replicate their data over multiple Ceph clusters. Multisite also allows object stores to be independent and isolated from other object stores in a cluster. When a ceph-object-store is created without the Since it is the only ceph-object-store in the realm, the data in the ceph-object-store remains independent and isolated from others on the same cluster. When a ceph-object-store is created with the This allows the ceph-object-store to replicate its data over multiple Ceph clusters. To review core multisite concepts, please read the ceph-multisite design overview. "},{"location":"Storage-Configuration/Object-Storage-RGW/ceph-object-multisite/#prerequisites","title":"Prerequisites","text":"This guide assumes a Rook cluster as explained in the Quickstart. "},{"location":"Storage-Configuration/Object-Storage-RGW/ceph-object-multisite/#creating-object-multisite","title":"Creating Object Multisite","text":"If an admin wants to set up multisite on a Rook Ceph cluster, the following resources must be created:
object-multisite.yaml in the examples directory can be used to create the multisite CRDs. The first zone group created in a realm is the master zone group. The first zone created in a zone group is the master zone. When a non-master zone or non-master zone group is created, the zone group or zone is not in the Ceph Radosgw Multisite Period until an object-store is created in that zone (and zone group). The zone will create the pools for the object-store(s) that are in the zone to use. When one of the multisite CRs (realm, zone group, zone) is deleted the underlying ceph realm/zone group/zone is not deleted, neither are the pools created by the zone. See the \"Multisite Cleanup\" section for more information. For more information on the multisite CRDs, see the related CRDs:
If an admin wants to sync data from another cluster, the admin needs to pull a realm on a Rook Ceph cluster from another Rook Ceph (or Ceph) cluster. To begin doing this, the admin needs two pieces of information:
To pull a Ceph realm from a remote Ceph cluster, an If an admin does not know of an endpoint that fits this criteria, the admin can find such an endpoint on the remote Ceph cluster (via the tool box if it is a Rook Ceph Cluster) by running: A list of endpoints in the master zone group in the master zone is in the This endpoint must also be resolvable from the new Rook Ceph cluster. To test this run the Finally add the endpoint to the "},{"location":"Storage-Configuration/Object-Storage-RGW/ceph-object-multisite/#getting-realm-access-key-and-secret-key","title":"Getting Realm Access Key and Secret Key","text":"The access key and secret key of the system user are keys that allow other Ceph clusters to pull the realm of the system user. "},{"location":"Storage-Configuration/Object-Storage-RGW/ceph-object-multisite/#getting-the-realm-access-key-and-secret-key-from-the-rook-ceph-cluster","title":"Getting the Realm Access Key and Secret Key from the Rook Ceph Cluster","text":""},{"location":"Storage-Configuration/Object-Storage-RGW/ceph-object-multisite/#system-user-for-multisite","title":"System User for Multisite","text":"When an admin creates a ceph-object-realm a system user automatically gets created for the realm with an access key and a secret key. This system user has the name \"$REALM_NAME-system-user\". For the example if realm name is These keys for the user are exported as a kubernetes secret called \"$REALM_NAME-keys\" (ex: realm-a-keys). This system user used by RGW internally for the data replication. "},{"location":"Storage-Configuration/Object-Storage-RGW/ceph-object-multisite/#getting-keys-from-k8s-secret","title":"Getting keys from k8s secret","text":"To get these keys from the cluster the realm was originally created on, run: Edit the Then create a kubernetes secret on the pulling Rook Ceph cluster with the same secrets yaml file. "},{"location":"Storage-Configuration/Object-Storage-RGW/ceph-object-multisite/#getting-the-realm-access-key-and-secret-key-from-a-non-rook-ceph-cluster","title":"Getting the Realm Access Key and Secret Key from a Non Rook Ceph Cluster","text":"The access key and the secret key of the system user can be found in the output of running the following command on a non-rook ceph cluster: Then base64 encode the each of the keys and create a Only the Finally, create a kubernetes secret on the pulling Rook Ceph cluster with the new secrets yaml file. "},{"location":"Storage-Configuration/Object-Storage-RGW/ceph-object-multisite/#pulling-a-realm-on-a-new-rook-ceph-cluster","title":"Pulling a Realm on a New Rook Ceph Cluster","text":"Once the admin knows the endpoint and the secret for the keys has been created, the admin should create:
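As a sketch, the pulled realm is declared with a pull endpoint, and the keys secret created earlier must already exist in the same namespace under the realm's name (the endpoint and names here are placeholders; object-multisite-pull-realm.yaml, referenced below, is the canonical example):

```yaml
apiVersion: ceph.rook.io/v1
kind: CephObjectRealm
metadata:
  # must match the realm name on the other cluster; the secret realm-a-keys
  # must already exist in this namespace
  name: realm-a
  namespace: rook-ceph
spec:
  pull:
    # an endpoint of the master zone in the master zone group on the other cluster
    endpoint: http://10.2.105.133:80
```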
object-multisite-pull-realm.yaml (with changes) in the examples directory can be used to create the multisite CRDs. "},{"location":"Storage-Configuration/Object-Storage-RGW/ceph-object-multisite/#scaling-a-multisite","title":"Scaling a Multisite","text":"Scaling the number of gateways that run the synchronization thread to 2 or more can increase the latency of the replication of each S3 object. The recommended way to scale a multisite configuration is to dissociate the gateway dedicated to the synchronization from gateways that serve clients. The two types of gateways can be deployed by creating two CephObjectStores associated with the same CephObjectZone. The objectstore that deploys the gateway dedicated to the synchronization must have "},{"location":"Storage-Configuration/Object-Storage-RGW/ceph-object-multisite/#multisite-cleanup","title":"Multisite Cleanup","text":"Multisite configuration must be cleaned up by hand. Deleting a realm/zone group/zone CR will not delete the underlying Ceph realm, zone group, zone, or the pools associated with a zone. "},{"location":"Storage-Configuration/Object-Storage-RGW/ceph-object-multisite/#deleting-and-reconfiguring-the-ceph-object-zone","title":"Deleting and Reconfiguring the Ceph Object Zone","text":"Changes made to the resource's configuration or deletion of the resource are not reflected on the Ceph cluster. When the ceph-object-zone resource is deleted or modified, the zone is not deleted from the Ceph cluster. Zone deletion must be done through the toolbox. "},{"location":"Storage-Configuration/Object-Storage-RGW/ceph-object-multisite/#changing-the-master-zone","title":"Changing the Master Zone","text":"The Rook toolbox can change the master zone in a zone group. "},{"location":"Storage-Configuration/Object-Storage-RGW/ceph-object-multisite/#deleting-zone","title":"Deleting Zone","text":"The Rook toolbox can modify the Ceph Multisite state via the radosgw-admin command. There are two scenarios possible when deleting a zone. The following commands, run via the toolbox, deletes the zone if there is only one zone in the zone group. In the other scenario, there are more than one zones in a zone group. Care must be taken when changing which zone is the master zone. Please read the following documentation before running the below commands: The following commands, run via toolboxes, remove the zone from the zone group first, then delete the zone. When a zone is deleted, the pools for that zone are not deleted. "},{"location":"Storage-Configuration/Object-Storage-RGW/ceph-object-multisite/#deleting-pools-for-a-zone","title":"Deleting Pools for a Zone","text":"The Rook toolbox can delete pools. Deleting pools should be done with caution. The following documentation on pools should be read before deleting any pools. When a zone is created the following pools are created for each zone: Here is an example command to delete the .rgw.buckets.data pool for zone-a. In this command the pool name must be mentioned twice for the pool to be removed. "},{"location":"Storage-Configuration/Object-Storage-RGW/ceph-object-multisite/#removing-an-object-store-from-a-zone","title":"Removing an Object Store from a Zone","text":"When an object-store (created in a zone) is deleted, the endpoint for that object store is removed from that zone, via Removing object store(s) from the master zone of the master zone group should be done with caution. When all of these object-stores are deleted the period cannot be updated and that realm cannot be pulled. 
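For the zone removal and pool deletion steps described above, a hedged sketch of the toolbox commands (zone, zone group, and pool names are illustrative; note that pool deletion may also require the mon_allow_pool_delete option to be enabled):

```console
# Remove the zone from its zone group, then delete the zone
radosgw-admin zonegroup remove --rgw-zonegroup=zonegroup-a --rgw-zone=zone-a
radosgw-admin period update --commit
radosgw-admin zone delete --rgw-zone=zone-a
radosgw-admin period update --commit

# Delete a pool that was created for the zone; the pool name is given twice on purpose
ceph osd pool delete zone-a.rgw.buckets.data zone-a.rgw.buckets.data --yes-i-really-really-mean-it
```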
"},{"location":"Storage-Configuration/Object-Storage-RGW/ceph-object-multisite/#zone-group-deletion","title":"Zone Group Deletion","text":"Changes made to the resource's configuration or deletion of the resource are not reflected on the Ceph cluster. When the ceph-object-zone group resource is deleted or modified, the zone group is not deleted from the Ceph cluster. Zone Group deletion must be done through the toolbox. "},{"location":"Storage-Configuration/Object-Storage-RGW/ceph-object-multisite/#deleting-a-zone-group","title":"Deleting a Zone Group","text":"The Rook toolbox can modify the Ceph Multisite state via the radosgw-admin command. The following command, run via the toolbox, deletes the zone group. "},{"location":"Storage-Configuration/Object-Storage-RGW/ceph-object-multisite/#realm-deletion","title":"Realm Deletion","text":"Changes made to the resource's configuration or deletion of the resource are not reflected on the Ceph cluster. When the ceph-object-realm resource is deleted or modified, the realm is not deleted from the Ceph cluster. Realm deletion must be done via the toolbox. "},{"location":"Storage-Configuration/Object-Storage-RGW/ceph-object-multisite/#deleting-a-realm","title":"Deleting a Realm","text":"The Rook toolbox can modify the Ceph Multisite state via the radosgw-admin command. The following command, run via the toolbox, deletes the realm. "},{"location":"Storage-Configuration/Object-Storage-RGW/ceph-object-multisite/#configure-an-existing-object-store-for-multisite","title":"Configure an Existing Object Store for Multisite","text":"When an object store is configured by Rook, it internally creates a zone, zone group, and realm with the same name as the object store. To enable multisite, you will need to create the corresponding zone, zone group, and realm CRs with the same name as the object store. For example, to create multisite CRs for an object store named Now modify the existing "},{"location":"Storage-Configuration/Object-Storage-RGW/ceph-object-multisite/#using-custom-names","title":"Using custom names","text":"If names different from the object store need to be set for the realm, zone, or zone group, first rename them in the backend via toolbox pod, then following the procedure above. Important Renaming in the toolbox must be performed before creating the multisite CRs "},{"location":"Storage-Configuration/Object-Storage-RGW/ceph-object-swift/","title":"Object Store with Keystone and Swift","text":"Note The Object Store with Keystone and Swift is currently in experimental mode. Ceph RGW can integrate natively with the Swift API and Keystone via the CephObjectStore CRD. This allows native integration of Rook-operated Ceph RGWs into OpenStack clouds. Note Authentication via the OBC and COSI features is not affected by this configuration. "},{"location":"Storage-Configuration/Object-Storage-RGW/ceph-object-swift/#create-a-local-object-store-with-keystone-and-swift","title":"Create a Local Object Store with Keystone and Swift","text":"This example will create a The OSDs must be located on different nodes, because the More details on the settings available for a Set the url in the auth section to point to the keystone service url. Prior to using keystone as authentication provider an admin user for rook to access and configure the keystone admin api is required. 
The user credentials for this admin user are provided by a secret in the same namespace which is referenced via the Note This example requires at least 3 bluestore OSDs, with each OSD located on a different node. This example assumes an existing OpenStack Keystone instance ready to use for authentication. After the The start of the RGW pod(s) confirms that the object store is configured. The swift service endpoint in OpenStack/Keystone must be created, in order to use the object store in Swift using for example the OpenStack CLI. The endpoint url should be set to the service endpoint of the created rgw instance. Afterwards any user which has the rights to access the projects resources (as defined in the OpenStack Keystone instance) can access the object store and create container and objects. Here the username and project are explicitly set to reflect use of the (non-admin) user. "},{"location":"Storage-Configuration/Object-Storage-RGW/ceph-object-swift/#basic-concepts","title":"Basic concepts","text":"When using Keystone as an authentication provider, Ceph uses the credentials of an admin user (provided in the secret references by For each user accessing the object store using Swift, Ceph implicitly creates a user which must be represented in Keystone with an authorized counterpart. Keystone checks for a user of the same name. Based on the name and other parameters ((OpenStack Keystone) project, (OpenStack Keystone) role) Keystone allows or disallows access to a swift container or object. Note that the implicitly created users are creaded in addition to any users that are created through other means, so Keystone authentication is not exclusive. It is not necessary to create any users in OpenStack Keystone (except for the admin user provided in the Keystone must support the v3-API-Version to be used with Rook. Other API versions are not supported. The admin user and all users accessing the Object store must exist and their authorizations configured accordingly in Keystone. "},{"location":"Storage-Configuration/Object-Storage-RGW/ceph-object-swift/#openstack-setup","title":"Openstack setup","text":"To use the Object Store in OpenStack using Swift the Swift service must be set and the endpoint urls for the Swift service created. The example configuration \"Create a Local Object Store with Keystone and Swift\" above contains more details and the corresponding CLI calls. "},{"location":"Storage-Configuration/Object-Storage-RGW/cosi/","title":"Container Object Storage Interface (COSI)","text":"The Ceph COSI driver provisions buckets for object storage. This document instructs on enabling the driver and consuming a bucket from a sample application. Note The Ceph COSI driver is currently in experimental mode. "},{"location":"Storage-Configuration/Object-Storage-RGW/cosi/#prerequisites","title":"Prerequisites","text":"COSI requires:
Deploy the COSI controller with these commands: "},{"location":"Storage-Configuration/Object-Storage-RGW/cosi/#ceph-cosi-driver","title":"Ceph COSI Driver","text":"The Ceph COSI driver will be started when the CephCOSIDriver CR is created and when the first CephObjectStore is created. The driver is created in the same namespace as the Rook operator. "},{"location":"Storage-Configuration/Object-Storage-RGW/cosi/#admin-operations","title":"Admin Operations","text":""},{"location":"Storage-Configuration/Object-Storage-RGW/cosi/#create-a-bucketclass-and-bucketaccessclass","title":"Create a BucketClass and BucketAccessClass","text":"The BucketClass and BucketAccessClass are CRDs defined by COSI. The BucketClass defines the bucket class for the bucket. The BucketAccessClass defines the access class for the bucket. Rook will automatically create a secret named with "},{"location":"Storage-Configuration/Object-Storage-RGW/cosi/#user-operations","title":"User Operations","text":""},{"location":"Storage-Configuration/Object-Storage-RGW/cosi/#create-a-bucket","title":"Create a Bucket","text":"To create a bucket, use the BucketClass pointing to the required object store and then define a BucketClaim request as below: "},{"location":"Storage-Configuration/Object-Storage-RGW/cosi/#bucket-access","title":"Bucket Access","text":"Define access to the bucket by creating the BucketAccess resource: A secret will be created which contains the access details for the bucket in JSON format, in the namespace of the BucketAccess: "},{"location":"Storage-Configuration/Object-Storage-RGW/cosi/#consuming-the-bucket-via-secret","title":"Consuming the Bucket via secret","text":"To access the bucket from an application pod, mount the secret for accessing the bucket: The Secret will be mounted in the pod in the path: Object storage exposes an S3 API and/or a Swift API to the storage cluster for applications to put and get data. "},{"location":"Storage-Configuration/Object-Storage-RGW/object-storage/#prerequisites","title":"Prerequisites","text":"This guide assumes a Rook cluster as explained in the Quickstart. "},{"location":"Storage-Configuration/Object-Storage-RGW/object-storage/#configure-an-object-store","title":"Configure an Object Store","text":"Rook can configure the Ceph Object Store for several different scenarios. See each linked section for the configuration details.
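For orientation, a hedged sketch of the CephObjectStore created in the local S3 scenario below (values are illustrative and roughly mirror object.yaml from the examples directory):

```yaml
apiVersion: ceph.rook.io/v1
kind: CephObjectStore
metadata:
  name: my-store
  namespace: rook-ceph
spec:
  metadataPool:
    failureDomain: host
    replicated:
      size: 3
  dataPool:
    failureDomain: host
    erasureCoded:
      dataChunks: 2
      codingChunks: 1
  preservePoolsOnDelete: true
  gateway:
    port: 80
    instances: 1
```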
Note Updating the configuration of an object store between these types is not supported. Rook has the ability to either deploy an object store in Kubernetes or to connect to an external RGW service. Most commonly, the object store will be configured in Kubernetes by Rook. Alternatively see the external section to consume an existing Ceph cluster with Rados Gateways from Rook. "},{"location":"Storage-Configuration/Object-Storage-RGW/object-storage/#create-a-local-object-store-with-s3","title":"Create a Local Object Store with S3","text":"The below sample will create a Note This sample requires at least 3 OSDs, with each OSD located on a different node. The OSDs must be located on different nodes, because the See the Object Store CRD, for more detail on the settings available for a After the Create an object store: To confirm the object store is configured, wait for the RGW pod(s) to start: To consume the object store, continue below in the section to Create a bucket. "},{"location":"Storage-Configuration/Object-Storage-RGW/object-storage/#create-local-object-stores-with-shared-pools","title":"Create Local Object Store(s) with Shared Pools","text":"The below sample will create one or more object stores. Shared Ceph pools will be created, which reduces the overhead of additional Ceph pools for each additional object store. Data isolation is enforced between the object stores with the use of Ceph RADOS namespaces. The separate RADOS namespaces do not allow access of the data across object stores. Note This sample requires at least 3 OSDs, with each OSD located on a different node. The OSDs must be located on different nodes, because the Create the shared pools that will be used by each of the object stores. Note If object stores have been previously created, the first pool below ( Create the shared pools: "},{"location":"Storage-Configuration/Object-Storage-RGW/object-storage/#create-each-object-store","title":"Create Each Object Store","text":"After the pools have been created above, create each object store to consume the shared pools. Create the object store: To confirm the object store is configured, wait for the RGW pod(s) to start: Additional object stores can be created based on the same shared pools by simply changing the To consume the object store, continue below in the section to Create a bucket. Modify the default example object store name from Attention This feature is experimental. This section contains a guide on how to configure RGW's pool placement and storage classes with Rook. Object Storage API allows users to override where bucket data will be stored during bucket creation. With To enable this feature, configure
Example: Configure S3 clients can direct objects into the pools defined in the above. The example below uses the s5cmd CLI tool which is pre-installed in the toolbox pod: "},{"location":"Storage-Configuration/Object-Storage-RGW/object-storage/#connect-to-an-external-object-store","title":"Connect to an External Object Store","text":"Rook can connect to existing RGW gateways to work in conjunction with the external mode of the The Then create a secret with the user credentials: For an external CephCluster, configure Rook to consume external RGW servers with the following: See Even though multiple The CephObjectStore resource Each object store also creates a Kubernetes service that can be used as a client endpoint from within the Kubernetes cluster. The DNS name of the service is For external clusters, the default endpoint is the first The advertised endpoint can be overridden using Rook always uses the advertised endpoint to perform management operations against the object store. When TLS is enabled, the TLS certificate must always specify the endpoint DNS name to allow secure management operations. "},{"location":"Storage-Configuration/Object-Storage-RGW/object-storage/#create-a-bucket","title":"Create a Bucket","text":"Info This document is a guide for creating bucket with an Object Bucket Claim (OBC). To create a bucket with the experimental COSI Driver, see the COSI documentation. Now that the object store is configured, next we need to create a bucket where a client can read and write objects. A bucket can be created by defining a storage class, similar to the pattern used by block and file storage. First, define the storage class that will allow object clients to create a bucket. The storage class defines the object storage system, the bucket retention policy, and other properties required by the administrator. Save the following as If you\u2019ve deployed the Rook operator in a namespace other than Based on this storage class, an object client can now request a bucket by creating an Object Bucket Claim (OBC). When the OBC is created, the Rook bucket provisioner will create a new bucket. Notice that the OBC references the storage class that was created above. Save the following as Now that the claim is created, the operator will create the bucket as well as generate other artifacts to enable access to the bucket. A secret and ConfigMap are created with the same name as the OBC and in the same namespace. The secret contains credentials used by the application pod to access the bucket. The ConfigMap contains bucket endpoint information and is also consumed by the pod. See the Object Bucket Claim Documentation for more details on the The following commands extract key pieces of information from the secret and configmap:\" If any Now that you have the object store configured and a bucket created, you can consume the object storage from an S3 client. This section will guide you through testing the connection to the To simplify the s3 client commands, you will want to set the four environment variables for use by your client (ie. inside the toolbox). See above for retrieving the variables for a bucket created by an
The variables for the user generated in this example might be: The access key and secret key can be retrieved as described in the section above on client connections or below in the section creating a user if you are not creating the buckets with an To test the Important The default toolbox.yaml does not contain the s5cmd. The toolbox must be started with the rook operator image (toolbox-operator-image), which does contain s5cmd. "},{"location":"Storage-Configuration/Object-Storage-RGW/object-storage/#put-or-get-an-object","title":"PUT or GET an object","text":"Upload a file to the newly created bucket Download and verify the file from the bucket "},{"location":"Storage-Configuration/Object-Storage-RGW/object-storage/#monitoring-health","title":"Monitoring health","text":"Rook configures health probes on the deployment created for CephObjectStore gateways. Refer to the CRD document for information about configuring the probes and monitoring the deployment status. "},{"location":"Storage-Configuration/Object-Storage-RGW/object-storage/#access-external-to-the-cluster","title":"Access External to the Cluster","text":"Rook sets up the object storage so pods will have access internal to the cluster. If your applications are running outside the cluster, you will need to setup an external service through a First, note the service that exposes RGW internal to the cluster. We will leave this service intact and create a new service for external access. Save the external service as Now create the external service. See both rgw services running and notice what port the external service is running on: Internally the rgw service is running on port If you need to create an independent set of user credentials to access the S3 endpoint, create a See the Object Store User CRD for more detail on the settings available for a When the The AccessKey and SecretKey data fields can be mounted in a pod as an environment variable. More information on consuming kubernetes secrets can be found in the K8s secret documentation To directly retrieve the secrets: "},{"location":"Storage-Configuration/Object-Storage-RGW/object-storage/#enable-tls","title":"Enable TLS","text":"TLS is critical for securing object storage data access, and it is assumed as a default by many S3 clients. TLS is enabled for CephObjectStores by configuring Ceph RGW only supports a single TLS certificate. If the given TLS certificate is a concatenation of multiple certificates, only the first certificate will be used by the RGW as the server certificate. Therefore, the TLS certificate given must include all endpoints that clients will use for access as subject alternate names (SANs). The CephObjectStore service endpoint must be added as a SAN on the TLS certificate. If it is not possible to add the service DNS name as a SAN on the TLS certificate, set Note OpenShift users can use add The Ceph Object Gateway supports accessing buckets using virtual host-style addressing, which allows addressing buckets using the bucket name as a subdomain in the endpoint. AWS has deprecated the the alternative path-style addressing mode which is Rook and Ceph's default. As a result, many end-user applications have begun to remove path-style support entirely. Many production clusters will have to enable virtual host-style address. Virtual host-style addressing requires 2 things:
Wildcard addressing can be configured in myriad ways. Some options:
The minimum recommended A more complex "},{"location":"Storage-Configuration/Object-Storage-RGW/object-storage/#object-multisite","title":"Object Multisite","text":"Multisite is a feature of Ceph that allows object stores to replicate its data over multiple Ceph clusters. Multisite also allows object stores to be independent and isolated from other object stores in a cluster. For more information on multisite please read the ceph multisite overview for how to run it. "},{"location":"Storage-Configuration/Object-Storage-RGW/object-storage/#using-swift-and-keystone","title":"Using Swift and Keystone","text":"It is possible to access an object store using the Swift API. Using Swift requires the use of OpenStack Keystone as an authentication provider. More information on the use of Swift and Keystone can be found in the document on Object Store with Keystone and Swift. "},{"location":"Storage-Configuration/Shared-Filesystem-CephFS/filesystem-mirroring/","title":"Filesystem Mirroring","text":"Ceph filesystem mirroring is a process of asynchronous replication of snapshots to a remote CephFS file system. Snapshots are synchronized by mirroring snapshot data followed by creating a snapshot with the same name (for a given directory on the remote file system) as the snapshot being synchronized. It is generally useful when planning for Disaster Recovery. Mirroring is for clusters that are geographically distributed and stretching a single cluster is not possible due to high latencies. "},{"location":"Storage-Configuration/Shared-Filesystem-CephFS/filesystem-mirroring/#prerequisites","title":"Prerequisites","text":"This guide assumes you have created a Rook cluster as explained in the main quickstart guide "},{"location":"Storage-Configuration/Shared-Filesystem-CephFS/filesystem-mirroring/#create-the-filesystem-with-mirroring-enabled","title":"Create the Filesystem with Mirroring enabled","text":"The following will enable mirroring on the filesystem: "},{"location":"Storage-Configuration/Shared-Filesystem-CephFS/filesystem-mirroring/#create-the-cephfs-mirror-daemon","title":"Create the cephfs-mirror daemon","text":"Launch the Please refer to Filesystem Mirror CRD for more information. "},{"location":"Storage-Configuration/Shared-Filesystem-CephFS/filesystem-mirroring/#configuring-mirroring-peers","title":"Configuring mirroring peers","text":"Once mirroring is enabled, Rook will by default create its own bootstrap peer token so that it can be used by another cluster. The bootstrap peer token can be found in a Kubernetes Secret. The name of the Secret is present in the Status field of the CephFilesystem CR: This secret can then be fetched like so: "},{"location":"Storage-Configuration/Shared-Filesystem-CephFS/filesystem-mirroring/#import-the-token-in-the-destination-cluster","title":"Import the token in the Destination cluster","text":"The decoded secret must be saved in a file before importing. See the CephFS mirror documentation on how to add a bootstrap peer. Further refer to CephFS mirror documentation to configure a directory for snapshot mirroring. 
"},{"location":"Storage-Configuration/Shared-Filesystem-CephFS/filesystem-mirroring/#verify-that-the-snapshots-have-synced","title":"Verify that the snapshots have synced","text":"To check the For example : Please refer to the Fetch the For getting "},{"location":"Storage-Configuration/Shared-Filesystem-CephFS/filesystem-storage/","title":"Filesystem Storage Overview","text":"A filesystem storage (also named shared filesystem) can be mounted with read/write permission from multiple pods. This may be useful for applications which can be clustered using a shared filesystem. This example runs a shared filesystem for the kube-registry. "},{"location":"Storage-Configuration/Shared-Filesystem-CephFS/filesystem-storage/#prerequisites","title":"Prerequisites","text":"This guide assumes you have created a Rook cluster as explained in the main quickstart guide "},{"location":"Storage-Configuration/Shared-Filesystem-CephFS/filesystem-storage/#create-the-filesystem","title":"Create the Filesystem","text":"Create the filesystem by specifying the desired settings for the metadata pool, data pools, and metadata server in the Save this shared filesystem definition as The Rook operator will create all the pools and other resources necessary to start the service. This may take a minute to complete. To confirm the filesystem is configured, wait for the mds pods to start: To see detailed status of the filesystem, start and connect to the Rook toolbox. A new line will be shown with "},{"location":"Storage-Configuration/Shared-Filesystem-CephFS/filesystem-storage/#provision-storage","title":"Provision Storage","text":"Before Rook can start provisioning storage, a StorageClass needs to be created based on the filesystem. This is needed for Kubernetes to interoperate with the CSI driver to create persistent volumes. Save this storage class definition as If you've deployed the Rook operator in a namespace other than \"rook-ceph\" as is common change the prefix in the provisioner to match the namespace you used. For example, if the Rook operator is running in \"rook-op\" the provisioner value should be \"rook-op.rbd.csi.ceph.com\". Create the storage class. "},{"location":"Storage-Configuration/Shared-Filesystem-CephFS/filesystem-storage/#quotas","title":"Quotas","text":"Attention The CephFS CSI driver uses quotas to enforce the PVC size requested. Only newer kernels support CephFS quotas (kernel version of at least 4.17). If you require quotas to be enforced and the kernel driver does not support it, you can disable the kernel driver and use the FUSE client. This can be done by setting As an example, we will start the kube-registry pod with the shared filesystem as the backing store. Save the following spec as Create the Kube registry deployment: You now have a docker registry which is HA with persistent storage. "},{"location":"Storage-Configuration/Shared-Filesystem-CephFS/filesystem-storage/#kernel-version-requirement","title":"Kernel Version Requirement","text":"If the Rook cluster has more than one filesystem and the application pod is scheduled to a node with kernel version older than 4.7, inconsistent results may arise since kernels older than 4.7 do not support specifying filesystem namespaces. 
"},{"location":"Storage-Configuration/Shared-Filesystem-CephFS/filesystem-storage/#consume-the-shared-filesystem-toolbox","title":"Consume the Shared Filesystem: Toolbox","text":"Once you have pushed an image to the registry (see the instructions to expose and use the kube-registry), verify that kube-registry is using the filesystem that was configured above by mounting the shared filesystem in the toolbox pod. See the Direct Filesystem topic for more details. "},{"location":"Storage-Configuration/Shared-Filesystem-CephFS/filesystem-storage/#consume-the-shared-filesystem-across-namespaces","title":"Consume the Shared Filesystem across namespaces","text":"A PVC that you create using the However there are some use cases where you want to share the content from a CephFS-based PVC among different Pods in different namespaces, for a shared library for example, or a collaboration workspace between applications running in different namespaces. You can do that using the following recipe. "},{"location":"Storage-Configuration/Shared-Filesystem-CephFS/filesystem-storage/#shared-volume-creation","title":"Shared volume creation","text":"
Your YAML should look like this:
You now have access to the same CephFS subvolume from different PVCs in different namespaces. Redo the previous steps (copy the PV with a new name, create a PVC pointing to it) in each namespace where you want to use this subvolume. Note: the new PVCs/PVs we have created are static. Therefore, CephCSI does not support snapshot, clone, resize, or delete operations for them. If those operations are required, you must perform them on the original PVC. "},{"location":"Storage-Configuration/Shared-Filesystem-CephFS/filesystem-storage/#shared-volume-removal","title":"Shared volume removal","text":"As the same CephFS volume is used by different PVCs/PVs, you must proceed carefully, and in the right order, to remove it properly.
Due to this bug, the global mount for a Volume that is mounted multiple times on the same node will not be unmounted. This does not result in any particular problem, apart from polluting the logs with unmount error messages, or having many different mounts hanging if you create and delete many shared PVCs, or you don't really use them. Until this issue is solved, either on the Rook or Kubelet side, you can always manually unmount the unwanted hanging global mounts on the nodes:
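A hypothetical illustration of the manual cleanup (the exact global mount path depends on the Kubernetes version and the kubelet root directory, so locate it first rather than assuming it):

```console
# On the affected node, list CSI global mounts and find the stale CephFS one
mount | grep "/var/lib/kubelet/plugins/kubernetes.io/csi"

# Unmount the stale global mount path reported by the previous command
umount "<globalmount path from the previous command>"
```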
To clean up all the artifacts created by the filesystem demo: To delete the filesystem components and backing data, delete the Filesystem CRD. Warning Data will be deleted if preserveFilesystemOnDelete=false. Note: If the \"preserveFilesystemOnDelete\" filesystem attribute is set to true, the above command won't delete the filesystem. Recreating the same CRD will reuse the existing filesystem. "},{"location":"Storage-Configuration/Shared-Filesystem-CephFS/filesystem-storage/#advanced-example-erasure-coded-filesystem","title":"Advanced Example: Erasure Coded Filesystem","text":"The Ceph filesystem example can be found here: Ceph Shared Filesystem - Samples - Erasure Coded. "},{"location":"Troubleshooting/ceph-common-issues/","title":"Ceph Common Issues","text":""},{"location":"Troubleshooting/ceph-common-issues/#topics","title":"Topics","text":"
Many of these problem cases are hard to summarize down to a short phrase that adequately describes the problem. Each problem will start with a bulleted list of symptoms. Keep in mind that all symptoms may not apply depending on the configuration of Rook. If the majority of the symptoms are seen there is a fair chance you are experiencing that problem. If after trying the suggestions found on this page and the problem is not resolved, the Rook team is very happy to help you troubleshoot the issues in their Slack channel. Once you have registered for the Rook Slack, proceed to the See also the CSI Troubleshooting Guide. "},{"location":"Troubleshooting/ceph-common-issues/#troubleshooting-techniques","title":"Troubleshooting Techniques","text":"There are two main categories of information you will need to investigate issues in the cluster:
After you verify the basic health of the running pods, next you will want to run Ceph tools for status of the storage components. There are two ways to run the Ceph tools, either in the Rook toolbox or inside other Rook pods that are already running.
The rook-ceph-tools pod provides a simple environment to run Ceph tools. Once the pod is up and running, connect to the pod to execute Ceph commands to evaluate the current state of the cluster. "},{"location":"Troubleshooting/ceph-common-issues/#ceph-commands","title":"Ceph Commands","text":"Here are some common commands to troubleshoot a Ceph cluster:
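For example (run from the toolbox; this is a representative selection of standard Ceph commands):

```console
ceph status
ceph osd status
ceph df
rados df
```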
The first two status commands provide the overall cluster health. The normal state for cluster operations is HEALTH_OK, but the cluster will still function in a HEALTH_WARN state. If the cluster is in a WARN state, it is in a condition that may progress to the HEALTH_ERROR state, at which point all disk I/O operations are halted. If a HEALTH_WARN state is observed, take action to prevent the cluster from halting when it enters the HEALTH_ERROR state. There are many Ceph sub-commands to inspect and manipulate Ceph objects, well beyond the scope of this document. See the Ceph documentation for more details on gathering information about the health of the cluster. In addition, there are other helpful hints and some best practices located in the Advanced Configuration section. Of particular note, there are scripts there for collecting logs and gathering OSD information. "},{"location":"Troubleshooting/ceph-common-issues/#cluster-failing-to-service-requests","title":"Cluster failing to service requests","text":""},{"location":"Troubleshooting/ceph-common-issues/#symptoms","title":"Symptoms","text":"
Create a rook-ceph-tools pod to investigate the current state of Ceph. Here is an example of what one might see. In this case the Another indication is when one or more of the MON pods restart frequently. Note the 'mon107' that has only been up for 16 minutes in the following output. "},{"location":"Troubleshooting/ceph-common-issues/#solution","title":"Solution","text":"What is happening here is that the MON pods are restarting and one or more of the Ceph daemons are not getting configured with the proper cluster information. This is commonly the result of not specifying a value for The
When the operator is starting a cluster, the operator will start one mon at a time and check that they are healthy before continuing to bring up all three mons. If the first mon is not detected healthy, the operator will continue to check until it is healthy. If the first mon fails to start, a second and then a third mon may attempt to start. However, they will never form quorum and the orchestration will be blocked from proceeding. The crash-collector pods will be blocked from starting until the mons have formed quorum the first time. There are several common causes for the mons failing to form quorum:
First look at the logs of the operator to confirm if it is able to connect to the mons. Likely you will see an error similar to the following that the operator is timing out when connecting to the mon. The last command is The error would appear to be an authentication error, but it is misleading. The real issue is a timeout. "},{"location":"Troubleshooting/ceph-common-issues/#solution_1","title":"Solution","text":"If you see the timeout in the operator log, verify if the mon pod is running (see the next section). If the mon pod is running, check the network connectivity between the operator pod and the mon pod. A common issue is that the CNI is not configured correctly. To verify the network connectivity:
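As a sketch, assuming curl is available in the operator image and substituting a real mon IP, msgr2 connectivity on port 3300 can be checked from the operator pod:

```console
kubectl -n rook-ceph exec deploy/rook-ceph-operator -- curl --connect-timeout 5 <mon-ip>:3300 2>/dev/null
```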
For example, this command will curl the first mon from the operator: If \"ceph v2\" is printed to the console, the connection was successful. If the command does not respond or otherwise fails, the network connection cannot be established. "},{"location":"Troubleshooting/ceph-common-issues/#failing-mon-pod","title":"Failing mon pod","text":"Second we need to verify if the mon pod started successfully. If the mon pod is failing as in this example, you will need to look at the mon pod status or logs to determine the cause. If the pod is in a crash loop backoff state, you should see the reason by describing the pod. See the solution in the next section regarding cleaning up the This is a common problem reinitializing the Rook cluster when the local directory used for persistence has not been purged. This directory is the Caution Deleting the See the Cleanup Guide for more details. "},{"location":"Troubleshooting/ceph-common-issues/#pvcs-stay-in-pending-state","title":"PVCs stay in pending state","text":""},{"location":"Troubleshooting/ceph-common-issues/#symptoms_2","title":"Symptoms","text":"
For the Wordpress example, you might see two PVCs in pending state. "},{"location":"Troubleshooting/ceph-common-issues/#investigation_2","title":"Investigation","text":"There are two common causes for the PVCs staying in pending state:
To confirm if you have OSDs in your cluster, connect to the Rook Toolbox and run the "},{"location":"Troubleshooting/ceph-common-issues/#osd-prepare-logs","title":"OSD Prepare Logs","text":"If you don't see the expected number of OSDs, let's investigate why they weren't created. On each node where Rook looks for OSDs to configure, you will see an \"osd prepare\" pod. See the section on why OSDs are not getting created to investigate the logs. "},{"location":"Troubleshooting/ceph-common-issues/#csi-driver","title":"CSI Driver","text":"The CSI driver may not be responding to the requests. Look in the logs of the CSI provisioner pod to see if there are any errors during the provisioning. There are two provisioner pods: Get the logs of each of the pods. One of them should be the \"leader\" and be responding to requests. See also the CSI Troubleshooting Guide. "},{"location":"Troubleshooting/ceph-common-issues/#operator-unresponsiveness","title":"Operator unresponsiveness","text":"Lastly, if you have OSDs If the \"osd prepare\" logs didn't give you enough clues about why the OSDs were not being created, please review your cluster.yaml configuration. The common misconfigurations include:
When an OSD starts, the device or directory will be configured for consumption. If there is an error with the configuration, the pod will crash and you will see the CrashLoopBackoff status for the pod. Look in the osd pod logs for an indication of the failure. One common case for failure is that you have re-deployed a test cluster and some state may remain from a previous deployment. If your cluster is larger than a few nodes, you may get lucky enough that the monitors were able to start and form quorum. However, now the OSDs pods may fail to start due to the old state. Looking at the OSD pod logs you will see an error about the file already existing. "},{"location":"Troubleshooting/ceph-common-issues/#solution_4","title":"Solution","text":"If the error is from the file that already exists, this is a common problem reinitializing the Rook cluster when the local directory used for persistence has not been purged. This directory is the
First, ensure that you have specified the devices correctly in the CRD. The Cluster CRD has several ways to specify the devices that are to be consumed by the Rook storage:
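A hedged sketch of the storage section of the CephCluster CR showing per-node device selection (node and device names are illustrative; useAllDevices: true is the broadest alternative):

```yaml
storage:
  useAllNodes: false
  useAllDevices: false
  nodes:
    - name: "worker-1"
      devices:
        - name: "sdb"
        - name: "nvme0n1"
    - name: "worker-2"
      # regular expression matching the devices to consume on this node
      deviceFilter: "^sd[b-d]"
```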
Second, if Rook determines that a device is not available (has existing partitions or a formatted filesystem), Rook will skip consuming the devices. If Rook is not starting OSDs on the devices you expect, Rook may have skipped it for this reason. To see if a device was skipped, view the OSD preparation log on the node where the device was skipped. Note that it is completely normal and expected for OSD prepare pod to be in the Here are some key lines to look for in the log: "},{"location":"Troubleshooting/ceph-common-issues/#solution_5","title":"Solution","text":"Either update the CR with the correct settings, or clean the partitions or filesystem from your devices. To clean devices from a previous install see the cleanup guide. After the settings are updated or the devices are cleaned, trigger the operator to analyze the devices again by restarting the operator. Each time the operator starts, it will ensure all the desired devices are configured. The operator does automatically deploy OSDs in most scenarios, but an operator restart will cover any scenarios that the operator doesn't detect automatically. "},{"location":"Troubleshooting/ceph-common-issues/#node-hangs-after-reboot","title":"Node hangs after reboot","text":"This issue is fixed in Rook v1.3 or later. "},{"location":"Troubleshooting/ceph-common-issues/#symptoms_5","title":"Symptoms","text":"
On a node running a pod with a Ceph persistent volume When the reboot command is issued, network interfaces are terminated before disks are unmounted. This results in the node hanging as repeated attempts to unmount Ceph persistent volumes fail with the following error: "},{"location":"Troubleshooting/ceph-common-issues/#solution_6","title":"Solution","text":"The node needs to be drained before reboot. After the successful drain, the node can be rebooted as usual. Because Drain the node: Uncordon the node: "},{"location":"Troubleshooting/ceph-common-issues/#using-multiple-shared-filesystem-cephfs-is-attempted-on-a-kernel-version-older-than-47","title":"Using multiple shared filesystem (CephFS) is attempted on a kernel version older than 4.7","text":""},{"location":"Troubleshooting/ceph-common-issues/#symptoms_6","title":"Symptoms","text":"
The only solution to this problem is to upgrade your kernel to For additional info on the kernel version requirement for multiple shared filesystems (CephFS), see Filesystem - Kernel version requirement. "},{"location":"Troubleshooting/ceph-common-issues/#set-debug-log-level-for-all-ceph-daemons","title":"Set debug log level for all Ceph daemons","text":"You can set a given log level and apply it to all the Ceph daemons at the same time. For this, make sure the toolbox pod is running, then determine the level you want (between 0 and 20). You can find the list of all subsystems and their default values in the Ceph logging and debug official guide. Be careful when increasing the level as it will produce very verbose logs. Assuming you want a log level of 1, you will run: Once you are done debugging, you can revert all the debug flags to their default values by running the following: "},{"location":"Troubleshooting/ceph-common-issues/#activate-log-to-file-for-a-particular-ceph-daemon","title":"Activate log to file for a particular Ceph daemon","text":"There are cases where looking at Kubernetes logs is not enough, for various reasons; just to name a few:
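For the cluster-wide debug level discussed above, a hedged sketch using plain Ceph config commands from the toolbox (the subsystems shown are only examples; newer toolbox images may also ship a helper script for this):

```console
# Raise selected subsystems to level 1 for all daemons
ceph config set global debug_ms 1
ceph config set global debug_osd 1
ceph config set global debug_mon 1

# Revert the overrides to their defaults when done
ceph config rm global debug_ms
ceph config rm global debug_osd
ceph config rm global debug_mon
```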
So for each daemon, This will activate logging on the filesystem, you will be able to find logs in To disable the logging on file, simply set
This happens when the following conditions are satisfied.
In addition, when this problem happens, you can see the following messages in It's so-called This problem will be solved by the following two fixes.
You can bypass this problem by using ext4 or any other filesystem rather than XFS. The filesystem type can be specified with
"},{"location":"Troubleshooting/ceph-common-issues/#solution_9","title":"Solution","text":"The meaning of this warning is written in the document. However, in many cases it is benign. For more information, please see the blog entry. Please refer to Configuring Pools if you want to know the proper There is a critical flaw in OSD on LV-backed PVC. LVM metadata can be corrupted if both the host and OSD container modify it simultaneously. For example, the administrator might modify it on the host, while the OSD initialization process in a container could modify it too. In addition, if If you still decide to configure an OSD on LVM, please keep the following in mind to reduce the probability of this issue. "},{"location":"Troubleshooting/ceph-common-issues/#solution_10","title":"Solution","text":"
You can check whether the above-mentioned tag exists with the command: This problem doesn't happen in newly created LV-backed PVCs because the OSD container no longer modifies LVM metadata. Existing lvm-mode OSDs continue to work even after you upgrade Rook. However, using raw mode OSDs is recommended because of the above-mentioned problem. You can replace the existing OSDs with raw mode OSDs by retiring them and adding new OSDs one by one. See the documents Remove an OSD and Add an OSD on a PVC. "},{"location":"Troubleshooting/ceph-common-issues/#osd-prepare-job-fails-due-to-low-aio-max-nr-setting","title":"OSD prepare job fails due to low aio-max-nr setting","text":"If the kernel is configured with a low aio-max-nr setting, the OSD prepare job might fail with the following error: To overcome this, you need to increase the value of Alternatively, you can have a DaemonSet to apply the configuration for you on all your nodes. "},{"location":"Troubleshooting/ceph-common-issues/#unexpected-partitions-created","title":"Unexpected partitions created","text":""},{"location":"Troubleshooting/ceph-common-issues/#symptoms_10","title":"Symptoms","text":"Users running Rook versions v1.6.0-v1.6.7 may observe unwanted OSDs on partitions that appear unexpectedly and seemingly randomly, which can corrupt existing OSDs. Unexpected partitions are created on host disks that are used by Ceph OSDs. This happens more often on SSDs than HDDs and usually only on disks that are 875GB or larger. Many tools like The underlying issue causing this is Atari partition (sometimes identified as AHDI) support in the Linux kernel. Atari partitions have very relaxed specifications compared to other partition types, and it is relatively easy for random data written to a disk to appear as an Atari partition to the Linux kernel. Ceph's Bluestore OSDs have an anecdotally high probability of writing data onto disks that can appear to the kernel as an Atari partition. Below is an example of You can see GitHub rook/rook - Issue 7940 unexpected partition on disks >= 1TB (atari partitions) for more detailed information and discussion. "},{"location":"Troubleshooting/ceph-common-issues/#solution_11","title":"Solution","text":""},{"location":"Troubleshooting/ceph-common-issues/#recover-from-corruption-v160-v167","title":"Recover from corruption (v1.6.0-v1.6.7)","text":"If you are using Rook v1.6, you must first update to v1.6.8 or higher to avoid further incidents of OSD corruption caused by these Atari partitions. An old workaround suggested using To resolve the issue, immediately update to v1.6.8 or higher. After the update, no corruption should occur on OSDs created in the future. Next, to get back to a healthy Ceph cluster state, focus on one corrupted disk at a time and remove all OSDs on each corrupted disk one disk at a time. As an example, you may have
If your Rook cluster does not have any critical data stored in it, it may be simpler to uninstall Rook completely and redeploy with v1.6.8 or higher. "},{"location":"Troubleshooting/ceph-common-issues/#operator-environment-variables-are-ignored","title":"Operator environment variables are ignored","text":""},{"location":"Troubleshooting/ceph-common-issues/#symptoms_11","title":"Symptoms","text":"Configuration settings passed as environment variables do not take effect as expected. For example, the discover daemonset is not created, even though Inspect the Look for lines with the Verify that both of the following messages are present in the operator logs: "},{"location":"Troubleshooting/ceph-common-issues/#solution_12","title":"Solution","text":"If it does not exist, create an empty ConfigMap: If the ConfigMap exists, remove any keys that you wish to configure through the environment. "},{"location":"Troubleshooting/ceph-common-issues/#the-cluster-is-in-an-unhealthy-state-or-fails-to-configure-when-limitnofileinfinity-in-containerd","title":"The cluster is in an unhealthy state or fails to configure when LimitNOFILE=infinity in containerd","text":""},{"location":"Troubleshooting/ceph-common-issues/#symptoms_12","title":"Symptoms","text":"When trying to create a new deployment, Ceph mons keep crashing and the cluster fails to configure or remains in an unhealthy state. The nodes' CPUs are stuck at 100%. "},{"location":"Troubleshooting/ceph-common-issues/#solution_13","title":"Solution","text":"Before systemd v240, systemd would leave To fix this, set LimitNOFILE in the systemd service configuration to 1048576. Create an override.conf file with the new LimitNOFILE value: Reload systemd manager configuration, restart containerd and restart all monitors deployments: "},{"location":"Troubleshooting/ceph-csi-common-issues/","title":"CSI Common Issues","text":"Issues when provisioning volumes with the Ceph CSI driver can happen for many reasons such as:
The following troubleshooting steps can help identify a number of issues. "},{"location":"Troubleshooting/ceph-csi-common-issues/#block-rbd","title":"Block (RBD)","text":"If you are mounting block volumes (usually RWO), these are referred to as If you are mounting shared filesystem volumes (usually RWX), these are referred to as The Ceph monitors are the most critical component of the cluster to check first. Retrieve the mon endpoints from the services: If host networking is enabled in the CephCluster CR, you will instead need to find the node IPs for the hosts where the mons are running. The If you are seeing issues provisioning the PVC then you need to check the network connectivity from the provisioner pods.
For redundancy, there are two provisioner pods for each type. Make sure to test connectivity from all provisioner pods. Connect to the provisioner pods and verify the connection to the mon endpoints such as the following: If you see the response "ceph v2", the connection succeeded. If there is no response, then there is a network issue connecting to the Ceph cluster. Check network connectivity for all monitor IPs and ports which are passed to ceph-csi. "},{"location":"Troubleshooting/ceph-csi-common-issues/#ceph-health","title":"Ceph Health","text":"Sometimes an unhealthy Ceph cluster can contribute to the issues in creating or mounting the PVC. Check that your Ceph cluster is healthy by connecting to the Toolbox and running the "},{"location":"Troubleshooting/ceph-csi-common-issues/#slow-operations","title":"Slow Operations","text":"Even slow ops in the Ceph cluster can contribute to the issues. In the toolbox, make sure that no slow ops are present and the Ceph cluster is healthy. If Ceph is not healthy, check the following for more clues:
Make sure the pool you have specified in the Suppose the pool name mentioned in the If the pool is not in the list, create the For the shared filesystem (CephFS), check that the filesystem and pools you have specified in the Suppose the Now verify the The pool for the filesystem will have the suffix If the subvolumegroup is not specified in the ceph-csi configmap (where you have passed the ceph monitor information), Ceph-CSI creates the default subvolumegroup with the name csi. Verify that the subvolumegroup exists: If you don\u2019t see any issues with your Ceph cluster, the following sections will start debugging the issue from the CSI side. "},{"location":"Troubleshooting/ceph-csi-common-issues/#provisioning-volumes","title":"Provisioning Volumes","text":"At times the issue can also exist in the Ceph-CSI or the sidecar containers used in Ceph-CSI. Ceph-CSI has included number of sidecar containers in the provisioner pods such as: The CephFS provisioner core CSI driver container name is Here is a summary of the sidecar containers: "},{"location":"Troubleshooting/ceph-csi-common-issues/#csi-provisioner","title":"csi-provisioner","text":"The external-provisioner is a sidecar container that dynamically provisions volumes by calling If there is an issue with PVC Create or Delete, check the logs of the "},{"location":"Troubleshooting/ceph-csi-common-issues/#csi-resizer","title":"csi-resizer","text":"The CSI If any issue exists in PVC expansion you can check the logs of the "},{"location":"Troubleshooting/ceph-csi-common-issues/#csi-snapshotter","title":"csi-snapshotter","text":"The CSI external-snapshotter sidecar only watches for In Kubernetes 1.17 the volume snapshot feature was promoted to beta. In Kubernetes 1.20, the feature gate is enabled by default on standard Kubernetes deployments and cannot be turned off. Make sure you have installed the correct snapshotter CRD version. If you have not installed the snapshotter controller, see the Snapshots guide. The above CRDs must have the matching version in your The snapshot controller is responsible for creating both Rook only installs the snapshotter sidecar container, not the controller. It is recommended that Kubernetes distributors bundle and deploy the controller and CRDs as part of their Kubernetes cluster management process (independent of any CSI Driver). If your Kubernetes distribution does not bundle the snapshot controller, you may manually install these components. If any issue exists in the snapshot Create/Delete operation you can check the logs of the csi-snapshotter sidecar container. If you see an error about a volume already existing such as: The issue typically is in the Ceph cluster or network connectivity. If the issue is in Provisioning the PVC Restarting the Provisioner pods help(for CephFS issue restart When a user requests to create the application pod with PVC, there is a three-step process
Each plugin pod has two important containers: one is The node-driver-registrar is a sidecar container that registers the CSI driver with Kubelet. More details can be found here. If any issue exists in attaching the PVC to the application pod check logs from driver-registrar sidecar container in plugin pod where your application pod is scheduled. You should see the response If you see a driver not found an error in the application pod describe output. Restarting the Each provisioner pod also has a sidecar container called The external-attacher is a sidecar container that attaches volumes to nodes by calling If any issue exists in attaching the PVC to the application pod first check the volumeattachment object created and also log from csi-attacher sidecar container in provisioner pod. "},{"location":"Troubleshooting/ceph-csi-common-issues/#cephfs-stale-operations","title":"CephFS Stale operations","text":"Check for any stale mount commands on the You need to exec in the Identify the If any commands are stuck check the dmesg logs from the node. Restarting the If you don\u2019t see any stuck messages, confirm the network connectivity, Ceph health, and slow ops. "},{"location":"Troubleshooting/ceph-csi-common-issues/#rbd-stale-operations","title":"RBD Stale operations","text":"Check for any stale You need to exec in the Identify the If any commands are stuck check the dmesg logs from the node. Restarting the If you don\u2019t see any stuck messages, confirm the network connectivity, Ceph health, and slow ops. "},{"location":"Troubleshooting/ceph-csi-common-issues/#dmesg-logs","title":"dmesg logs","text":"Check the dmesg logs on the node where pvc mounting is failing or the "},{"location":"Troubleshooting/ceph-csi-common-issues/#rbd-commands","title":"RBD Commands","text":"If nothing else helps, get the last executed command from the ceph-csi pod logs and run it manually inside the provisioner or plugin pod to see if there are errors returned even if they couldn't be seen in the logs. Where When a node is lost, you will see application pods on the node stuck in the To force delete the pod stuck in the After the force delete, wait for a timeout of about 8-10 minutes. If the pod still not in the running state, continue with the next section to blocklist the node. "},{"location":"Troubleshooting/ceph-csi-common-issues/#blocklisting-a-node","title":"Blocklisting a node","text":"To shorten the timeout, you can mark the node as \"blocklisted\" from the Rook toolbox so Rook can safely failover the pod sooner. After running the above command within a few minutes the pod will be running. "},{"location":"Troubleshooting/ceph-csi-common-issues/#removing-a-node-blocklist","title":"Removing a node blocklist","text":"After you are absolutely sure the node is permanently offline and that the node no longer needs to be blocklisted, remove the node from the blocklist. "},{"location":"Troubleshooting/ceph-toolbox/","title":"Toolbox","text":"The Rook toolbox is a container with common tools used for rook debugging and testing. The toolbox is based on CentOS, so more tools of your choosing can be easily installed with The toolbox can be run in two modes:
Toolbox¶
The Rook toolbox is a container with common tools used for Rook debugging and testing. The toolbox is based on CentOS, so more tools of your choosing can be easily installed with yum. The toolbox can be run in two modes: as an interactive deployment for running ad-hoc commands, or as a one-time job that runs a script.

Hint: Before running the toolbox you should have a running Rook cluster deployed (see the Quickstart Guide).

Note: The toolbox is not necessary if you are using the kubectl plugin to execute Ceph commands.

Interactive Toolbox¶
The Rook toolbox can run as a deployment in a Kubernetes cluster where you can connect and run arbitrary Ceph commands. Launch the rook-ceph-tools pod and wait for it to download its container and reach the Running state. Once the rook-ceph-tools pod is running, you can connect to it, and all available tools in the toolbox are ready for your troubleshooting needs. Example:
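A sketch of launching and using the interactive toolbox, assuming the toolbox manifest from the Rook example manifests and the default rook-ceph namespace:

```bash
# Create the toolbox deployment (path assumes a checkout of the Rook example manifests)
kubectl create -f deploy/examples/toolbox.yaml

# Wait for it to reach the Running state, then open a shell in it
kubectl -n rook-ceph rollout status deploy/rook-ceph-tools
kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- bash

# Inside the toolbox, the usual Ceph tools are available, for example:
ceph status
ceph osd status
ceph df
```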
When you are done with the toolbox, you can remove the deployment.

Toolbox Job¶
If you want to run Ceph commands as a one-time operation and collect the results later from the logs, you can run a script as a Kubernetes Job. The toolbox job will run a script that is embedded in the job spec. The script has the full flexibility of a bash script. In this example, the embedded script runs a simple Ceph status check. After the job completes, see the results of the script in the job's pod logs.

Common Issues¶
To help troubleshoot your Rook clusters, here are some tips on what information will help solve the issues you might be seeing. If the problem is not resolved after trying the suggestions found on this page, the Rook team is very happy to help you troubleshoot the issues in their Slack channel. Once you have registered for the Rook Slack, proceed to the General channel to ask for assistance.

Ceph Common Issues¶
For common issues specific to Ceph, see the Ceph Common Issues page.

Troubleshooting Techniques¶
Kubernetes status and logs are the main resources needed to investigate issues in any Rook cluster.

Kubernetes Tools¶
Kubernetes status is the first line of investigation when something goes wrong with the cluster. Here are a few artifacts that are helpful to gather:
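A few commands that gather the kind of status information referred to above, assuming the default rook-ceph namespace (pod names are placeholders):

```bash
# Overall pod and cluster CR state
kubectl -n rook-ceph get pods -o wide
kubectl -n rook-ceph get cephcluster

# Details, events, and logs for a misbehaving pod
kubectl -n rook-ceph describe pod rook-ceph-osd-0-xxxxx
kubectl -n rook-ceph logs rook-ceph-operator-xxxxx
```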
Some pods have specialized init containers, so you may need to look at logs for different containers within the pod.
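For pods with multiple containers (including init containers), the container must be named explicitly when fetching logs; a generic sketch, with placeholder pod and container names:

```bash
# List the init containers and containers in a pod
kubectl -n rook-ceph get pod rook-ceph-osd-0-xxxxx \
  -o jsonpath='{.spec.initContainers[*].name} {.spec.containers[*].name}'

# Fetch logs from one specific container, or from all of them at once
kubectl -n rook-ceph logs rook-ceph-osd-0-xxxxx -c activate   # container name from the list above
kubectl -n rook-ceph logs rook-ceph-osd-0-xxxxx --all-containers
```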
Direct Tools¶
Rook is designed with Kubernetes design principles from the ground up. This topic is going to escape the bounds of Kubernetes storage and show you how to use block and file storage directly from a pod without any of the Kubernetes magic. The purpose of this topic is to help you quickly test a new configuration, although it is not meant to be used in production. All of the benefits of Kubernetes storage, including failover, detach, and attach, will not be available. If your pod dies, your mount will die with it.

Start the Direct Mount Pod¶
To test mounting your Ceph volumes, start a pod with the necessary mounts. An example is provided in the examples test directory. After the pod is started, connect to it with kubectl exec.

Block Storage Tools¶
After you have created a pool as described in the Block Storage topic, you can create a block image and mount it directly in a pod. This example will show how the Ceph rbd volume can be mounted in the direct mount pod. Create the Direct Mount Pod, create a volume image (10MB), map the block volume, format it, mount it, and then write and read a file.

Unmount the Block device¶
Unmount the volume and unmap the kernel device.

Shared Filesystem Tools¶
After you have created a filesystem as described in the Shared Filesystem topic, you can mount the filesystem from multiple pods. In the other topic you may have mounted the filesystem already in the registry pod. Now we will mount the same filesystem in the Direct Mount pod. This is just a simple way to validate the Ceph filesystem and is not recommended for production Kubernetes pods. Follow Direct Mount Pod to start a pod with the necessary mounts and then proceed with the following commands after connecting to the pod. Once the filesystem is mounted, if you have pushed images to the registry you will see the directory the registry created. Try writing and reading a file to the shared filesystem.

Unmount the Filesystem¶
To unmount the shared filesystem from the Direct Mount Pod, unmount the mount point you created. No data will be deleted by unmounting the filesystem.

Disaster Recovery¶
Under extenuating circumstances, steps may be necessary to recover the cluster health. There are several types of recovery addressed in this document.

Restoring Mon Quorum¶
Under extenuating circumstances, the mons may lose quorum. If the mons cannot form quorum again, there is a manual procedure to get the quorum going again. The only requirement is that at least one mon is still healthy. The following steps will remove the unhealthy mons from quorum and allow you to form a quorum again with a single mon, then grow the quorum back to the original size. The Rook kubectl plugin has a command that automates these steps: pass it the name of the healthy mon and it will restore quorum from that mon. See the restore-quorum documentation for more details.
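A hedged sketch of the quorum-restore step referenced above; the exact plugin syntax may differ by plugin version, and the surviving healthy mon is assumed to be named a:

```bash
# Using the Rook kubectl plugin, rebuild quorum from the one healthy mon
kubectl rook-ceph mons restore-quorum a

# Afterwards, confirm the mons are back in quorum from the toolbox
ceph quorum_status --format json-pretty
```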
"},{"location":"Troubleshooting/disaster-recovery/#restoring-crds-after-deletion","title":"Restoring CRDs After Deletion","text":"When the Rook CRDs are deleted, the Rook operator will respond to the deletion event to attempt to clean up the cluster resources. If any data appears present in the cluster, Rook will refuse to allow the resources to be deleted since the operator will refuse to remove the finalizer on the CRs until the underlying data is deleted. For more details, see the dependency design doc. While it is good that the CRs will not be deleted and the underlying Ceph data and daemons continue to be available, the CRs will be stuck indefinitely in a Note In the following commands, the affected
Situations this section can help resolve:
Assuming the underlying Ceph data is still intact, the deleted custom resources can be restored and the deletion interrupted so that the operator resumes managing the cluster normally.
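The kubectl-rook-ceph plugin provides a restore command for exactly this situation; a minimal sketch, assuming default resource names and that the plugin is installed (see the installation section later in this document):

```bash
# Restore the CephCluster CR that is stuck in Deleting after an accidental delete
kubectl rook-ceph restore-deleted cephclusters

# Other CR kinds can be restored the same way, optionally naming a specific resource
kubectl rook-ceph restore-deleted cephblockpools replicapool
```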
It is possible to migrate/restore a Rook/Ceph cluster from an existing Kubernetes cluster to a new one without resorting to SSH access or Ceph tooling. This allows doing the migration using standard Kubernetes resources only. This guide assumes the following:
Do the following in the new cluster:
When the rook-ceph namespace is accidentally deleted, the good news is that the cluster can be restored. With the content of the dataDirHostPath directory and the OSDs still intact on the hosts, the cluster can be brought back. You need to manually create a ConfigMap and a Secret to make it work. The information required for the ConfigMap and Secret can be found in the files preserved under dataDirHostPath on the nodes. The first resource is the mon secret (named rook-ceph-mon); the values for its fields can be found in the keyring and configuration files under dataDirHostPath.
All the fields in the data section need to be encoded in base64; the encoding can be done with the base64 command as shown in the example below. Now save the secret manifest so it can be applied once the operator is running. The second resource is the configmap named rook-ceph-mon-endpoints. The monitors' service IPs are kept in the monitor data store, so the services must be recreated with their original IPs. After you create this configmap with the original service IPs, the Rook operator will create the correct services for you with IPs matching those in the monitor data store. The monitor IDs, their service IPs, and the mapping between them can be found in dataDirHostPath/rook-ceph/rook-ceph.config, for example:
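An illustrative sketch of both steps, assuming the default dataDirHostPath of /var/lib/rook; the key value and the monitor addresses are placeholders, and the exact contents of rook-ceph.config will differ per cluster:

```bash
# Base64-encode a value for the secret's data section
echo -n 'AQB...example-key...==' | base64

# Read the preserved config to recover the monitor IDs and their service IPs
cat /var/lib/rook/rook-ceph/rook-ceph.config
# [global]
# fsid                = 1234abcd-....
# mon initial members = a b c
# mon host            = 10.96.0.11:6789,10.96.0.12:6789,10.96.0.13:6789
```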
Now that you have the info for the secret and the configmap, you are ready to restore the running cluster. Deploy Rook Ceph using the YAML files or Helm, with the same settings you had previously. After the operator is running, create the configmap and secret you have just crafted, then create your Ceph cluster CR (if possible, with the same settings as existed previously). Now your Rook Ceph cluster should be running again.
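A short sketch of that final sequence, assuming the secret and configmap were saved to hypothetical files named rook-ceph-mon.yaml and rook-ceph-mon-endpoints.yaml and that the original cluster manifest is cluster.yaml:

```bash
# Apply the hand-crafted mon secret and endpoints configmap
kubectl -n rook-ceph apply -f rook-ceph-mon.yaml -f rook-ceph-mon-endpoints.yaml

# Recreate the CephCluster with the same settings as before
kubectl apply -f cluster.yaml

# Watch the operator bring the daemons back
kubectl -n rook-ceph get pods -w
```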
kubectl Plugin¶
The Rook kubectl plugin is a tool to help troubleshoot your Rook cluster. Here are a few of the operations that the plugin will assist with: running Ceph commands against the cluster, restoring mon quorum, restoring deleted custom resources, and starting debug mode for mons and OSDs. See the kubectl-rook-ceph documentation for more details.

Installation¶
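Installation is typically done through krew, and a quick command verifies that the plugin works; a sketch assuming krew is already installed and the default namespaces are used:

```bash
# Install the plugin via krew, then verify it by querying Ceph status
kubectl krew install rook-ceph
kubectl rook-ceph ceph status
```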
Reference: Ceph Status

Debug Mode¶
Debug mode can be useful when a MON or OSD needs advanced maintenance operations that require the daemon to be stopped. Ceph tools such as ceph-objectstore-tool or ceph-monstore-tool can then be run against the stopped daemon's data.
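A sketch of entering and leaving debug mode with the plugin; the daemon name is a placeholder, and the subcommands follow the kubectl-rook-ceph plugin's documented debug commands:

```bash
# Scale down the real daemon and start a debug deployment for it
kubectl rook-ceph debug start rook-ceph-osd-0

# ... run maintenance tools (e.g. ceph-objectstore-tool) inside the debug pod ...

# Restore the daemon to normal operation
kubectl rook-ceph debug stop rook-ceph-osd-0
```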
Reference: Debug Mode

OpenShift Common Issues¶

Enable Monitoring in the Storage Dashboard¶
OpenShift Console uses OpenShift Prometheus for monitoring and populating data in the Storage Dashboard. Additional configuration is required to monitor the Ceph Cluster from the storage dashboard.
Attention: Switch to the project (namespace) in which Rook Ceph is deployed before applying the monitoring configuration.
Warning: This is an advanced topic. Please be aware of the steps you're performing, or reach out to the experts for further guidance. There are some cases where the debug logs are not sufficient to investigate issues like high CPU utilization of a Ceph process. In that situation, coredump and perf information of the Ceph process is useful to collect and can be shared with the Ceph team in an issue. To collect this information, please follow these steps:
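One possible approach, sketched under the assumption that you can exec into the affected daemon's pod and that gdb and perf can be installed inside it; the pod name and daemon are placeholders:

```bash
# Open a shell in the pod running the misbehaving Ceph daemon
kubectl -n rook-ceph exec -it rook-ceph-osd-0-xxxxx -- bash

# Inside the pod: find the daemon PID and install the debug tools
pid=$(pidof ceph-osd)
yum install -y gdb perf

# Capture a coredump and ~60 seconds of perf data for that PID
gcore "$pid"
perf record -g -p "$pid" -- sleep 60
perf report --stdio > perf_report.txt
```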
This guide will walk through the steps to upgrade the version of Ceph in a Rook cluster. Rook and Ceph upgrades are designed to ensure data remains available even while the upgrade is proceeding. Rook will perform the upgrades in a rolling fashion such that application pods are not disrupted. Rook is cautious when performing upgrades. When an upgrade is requested (the Ceph image has been updated in the CR), Rook will go through all the daemons one by one and will individually perform checks on them. It will make sure a particular daemon can be stopped before performing the upgrade. Once the deployment has been updated, it checks if this is ok to continue. After each daemon is updated we wait for things to settle (monitors to be in quorum, PGs to be clean for OSDs, up for MDSes, etc.), and only when the condition is met do we move to the next daemon. We repeat this process until all the daemons have been updated.

Considerations¶
Rook v1.16 supports the following Ceph versions:
Important: When an update is requested, the operator will check Ceph's status; if the cluster is in an unhealthy error state, the operator will refuse to proceed with the upgrade until it recovers. Official Ceph container images can be found on Quay. These images are tagged in a few ways:
Ceph containers other than the official images from the registry above will not be supported.

Example Upgrade to Ceph Reef¶

1. Update the Ceph daemons¶
The upgrade will be automated by the Rook operator after the desired Ceph image is changed in the CephCluster CRD (spec.cephVersion.image).

2. Update the toolbox image¶
Since the Rook toolbox is not controlled by the Rook operator, users must perform a manual upgrade by modifying the image in the rook-ceph-tools deployment.

3. Wait for the pod updates¶
As with upgrading Rook, now wait for the upgrade to complete. Status can be determined in a similar way to the Rook upgrade as well. Confirm the upgrade is completed when the versions are all on the desired Ceph version.

4. Verify cluster health¶
Verify the Ceph cluster's health using the health verification.
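A sketch of steps 1-3 above; the target image tag (quay.io/ceph/ceph:v18.2.4) and the cluster and deployment names assume the default Rook examples, so adjust them for your cluster:

```bash
# 1. Point the CephCluster at the new Ceph image; the operator takes it from here
kubectl -n rook-ceph patch CephCluster rook-ceph --type=merge \
  -p '{"spec": {"cephVersion": {"image": "quay.io/ceph/ceph:v18.2.4"}}}'

# 2. Update the toolbox deployment to the same image
kubectl -n rook-ceph set image deploy/rook-ceph-tools rook-ceph-tools=quay.io/ceph/ceph:v18.2.4

# 3. Watch the ceph-version label converge across all daemon deployments
kubectl -n rook-ceph get deployments -l rook_cluster=rook-ceph \
  -o jsonpath='{range .items[*]}{.metadata.name}{"  ceph-version="}{.metadata.labels.ceph-version}{"\n"}{end}'
```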
Health Verification¶
Rook and Ceph upgrades are designed to ensure data remains available even while the upgrade is proceeding. Rook will perform the upgrades in a rolling fashion such that application pods are not disrupted. To ensure the upgrades are seamless, it is important to begin the upgrades with Ceph in a fully healthy state. This guide reviews ways of verifying the health of a CephCluster. See the troubleshooting documentation for any issues during upgrades.

In a healthy Rook cluster, all pods in the Rook namespace should be in the Running (or Completed) state.

Status Output¶
The Rook toolbox contains the Ceph tools that give status details of the cluster with the ceph status command. In the status output, note the following indications that the cluster is in a healthy state:
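A sketch of the health check, run by exec-ing into the toolbox; the indicators listed in the comments are the usual signs of a healthy cluster, not an exhaustive list:

```bash
kubectl -n rook-ceph exec deploy/rook-ceph-tools -- ceph status
# Healthy indications to look for in the output:
#   health: HEALTH_OK
#   mon:    all monitors are in quorum
#   mgr:    an active mgr is reported
#   osd:    all OSDs are up and in
#   pgs:    all placement groups are active+clean
```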
If the health is not OK, investigate and resolve any issues before continuing; Rook will not upgrade Ceph daemons if the health is in an error state. The container version running in a specific pod in the Rook cluster can be verified in its pod spec output, for example by reading the image of a monitor pod. The status and container versions for all Rook pods can be collected all at once with the commands shown below, and the reported versions can then be compared against the desired release.

Rook Volume Health¶
Any pod that is using a Rook volume should also remain healthy: it should be in the Running state, its volumes should mount successfully, and there should be no volume-related errors in its events.
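A sketch of those version checks with kubectl jsonpath; pod names are placeholders:

```bash
# Image of a single pod (e.g. a mon)
kubectl -n rook-ceph get pod rook-ceph-mon-b-xxxxx -o jsonpath='{.spec.containers[0].image}'

# Phase and image for every pod in the namespace at once
kubectl -n rook-ceph get pods \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.phase}{"\t"}{.spec.containers[0].image}{"\n"}{end}'
```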
This guide will walk through the steps to upgrade the software in a Rook cluster from one version to the next. This guide focuses on updating the Rook version for the management layer, while the Ceph upgrade guide focuses on updating the data layer. Upgrades for both the operator and for Ceph are entirely automated, except where Rook's permissions need to be explicitly updated by an admin or when incompatibilities need to be addressed manually due to customizations. We welcome feedback and opening issues!

Supported Versions¶
This guide is for upgrading from Rook v1.14.x to Rook v1.15.x. Please refer to the upgrade guides from previous releases for supported upgrade paths. Rook upgrades are only supported between official releases. For a guide to upgrade previous versions of Rook, please refer to the version of documentation for those releases.
Important: Rook releases from master are expressly unsupported. It is strongly recommended to use official releases of Rook. Unreleased versions from the master branch are subject to changes and incompatibilities that will not be supported in the official releases. Builds from the master branch can have functionality changed or removed at any time without compatibility support and without prior notice.

Breaking changes in v1.16¶
With this upgrade guide, there are a few notes to consider:
Unless otherwise noted due to extenuating requirements, upgrades from one patch release of Rook to another are as simple as updating the common resources and the image of the Rook operator. For example, when Rook v1.15.1 is released, updating from v1.15.0 only requires applying the latest common resources and CRDs and setting the new operator image. If the Rook Operator or CephCluster are deployed into a different namespace than the default rook-ceph, adjust the namespaces in the example manifests accordingly. Then apply the latest changes from v1.15 and update the Rook Operator image. It is a good practice to update Rook common resources from the example manifests before any update. The common resources and CRDs might not be updated with every release, but Kubernetes will only apply updates to the ones that changed. Also update optional resources like Prometheus monitoring, noted more fully in the upgrade section below.

Helm¶
If Rook is installed via the Helm chart, Helm will handle some details of the upgrade itself. The upgrade steps in this guide will clarify what Helm handles automatically.

Note: Be sure to update to a supported Helm version.

Cluster Health¶
In order to successfully upgrade a Rook cluster, the cluster must first be healthy; verify this as described in the health verification guide before starting.
The examples given in this guide upgrade a live Rook cluster from the previous minor release to the new release. Let's get started!

Environment¶
These instructions will work as long as the environment is parameterized correctly. Set the following environment variables, which will be used throughout this document.

1. Update common resources and CRDs¶
Hint: Common resources and CRDs are automatically updated when using Helm charts.
First, apply updates to Rook common resources. This includes modified privileges (RBAC) needed by the Operator. Also update the Custom Resource Definitions (CRDs). Get the latest common resources manifests that contain the latest changes. If the Rook Operator or CephCluster are deployed into a different namespace than the default rook-ceph, adjust the namespaces in the manifests before applying them. Apply the resources.

Prometheus Updates¶
If Prometheus monitoring is enabled, follow this step to upgrade the Prometheus RBAC resources as well.

2. Update the Rook Operator¶
Hint: The operator is automatically updated when using Helm charts.
The largest portion of the upgrade is triggered when the operator's image is updated to the new release version.

3. Update Ceph CSI¶
Hint: This is automatically updated if custom CSI image versions are not set.
Important: The minimum supported version of Ceph-CSI is v3.8.0. Update to the latest Ceph-CSI drivers if custom CSI images are specified. See the CSI Custom Images documentation.
Note: If using snapshots, refer to the Upgrade Snapshot API guide.

4. Wait for the upgrade to complete¶
Watch now in amazement as the Ceph mons, mgrs, OSDs, rbd-mirrors, MDSes and RGWs are terminated and replaced with updated versions in sequence. The cluster may be unresponsive very briefly as mons update, and the Ceph Filesystem may fall offline a few times while the MDSes are upgrading. This is normal. The versions of the components can be viewed as they are updated; for example, a cluster might be observed midway through updating the OSDs. When all deployments report the new version, the upgrade is complete. An easy check to see if the upgrade is totally finished is to check that there is only one rook-version reported across all deployments.

5. Verify the updated cluster¶
At this point, the Rook operator should be running the new release version. Verify the CephCluster health using the health verification doc.
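As a consolidated sketch of steps 1, 2, and 4 above, assuming the example manifests for the target release have been checked out and the default rook-ceph namespace is used (the release tag v1.15.1 is only illustrative):

```bash
# 1. Apply the updated common resources and CRDs from the new release's example manifests
kubectl apply -f common.yaml -f crds.yaml

# 2. Update the operator image to trigger the rolling upgrade
kubectl -n rook-ceph set image deploy/rook-ceph-operator rook-ceph-operator=rook/ceph:v1.15.1

# 4. Watch until only one rook-version is reported across all daemon deployments
kubectl -n rook-ceph get deployments -l rook_cluster=rook-ceph \
  -o jsonpath='{range .items[*]}{"rook-version="}{.metadata.labels.rook-version}{"\n"}{end}' | sort | uniq
```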
"},{"location":"CRDs/ceph-client-crd/#prerequisites","title":"Prerequisites","text":"This guide assumes you have created a Rook cluster as explained in the main Quickstart guide. "},{"location":"CRDs/ceph-client-crd/#1-creating-ceph-user","title":"1. Creating Ceph User","text":"To get you started, here is a simple example of a CRD to configure a Ceph client with capabilities. To use CephClient ","text":"Once your "},{"location":"CRDs/ceph-client-crd/#3-extract-ceph-cluster-credentials-from-the-generated-secret","title":"3. Extract Ceph cluster credentials from the generated secret","text":"Extract Ceph cluster credentials from the generated secret (note that the subkey will be your original client name): The base64 encoded value that is returned is the password for your ceph client. "},{"location":"CRDs/ceph-client-crd/#4-retrieve-the-mon-endpoints","title":"4. Retrieve the mon endpoints","text":"To send writes to the cluster, you must retrieve the mons in use: This command should produce a line that looks somewhat like this: "},{"location":"CRDs/ceph-client-crd/#5-optional-generate-ceph-configuration-files","title":"5. (optional) Generate Ceph configuration files","text":"If you choose to generate files for Ceph to use you will need to generate the following files:
To use the CephClient, apply the manifest in the namespace of your Rook cluster. Once your CephClient has been created and processed by Rook, a secret containing the client credentials is generated for it.

3. Extract Ceph cluster credentials from the generated secret¶
Extract the Ceph cluster credentials from the generated secret (note that the subkey will be your original client name). The base64 encoded value that is returned is the password for your Ceph client.

4. Retrieve the mon endpoints¶
To send writes to the cluster, you must retrieve the mons in use. This command should produce a line listing the monitor endpoints.

5. (optional) Generate Ceph configuration files¶
If you choose to generate files for Ceph to use, you will need to generate a ceph.conf and a keyring file for your client. Examples of the files follow:
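Illustrative versions of those two files, written with heredocs; every value shown (monitor address, client name, key) is a placeholder taken from the earlier steps rather than a real credential:

```bash
# /etc/ceph/ceph.conf - point the client at the monitors retrieved in step 4
cat <<EOF > /etc/ceph/ceph.conf
[global]
mon_host = 10.96.0.11:6789
EOF

# Keyring for the client created in step 1, using the key extracted in step 3
cat <<EOF > /etc/ceph/ceph.client.example.keyring
[client.example]
key = AQD...replace-with-extracted-key...==
EOF
```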
"},{"location":"CRDs/ceph-client-crd/#6-connect-to-the-ceph-cluster-with-your-given-client-id","title":"6. Connect to the Ceph cluster with your given client ID","text":"With the files we've created, you should be able to query the cluster by setting Ceph ENV variables and running With this config, the ceph tools ( The Ceph project contains a SQLite VFS that interacts with RADOS directly, called First, on your workload ensure that you have the appropriate packages installed that make
Without the appropriate package (or a from-scratch build of SQLite), you will be unable to load the libcephsqlite VFS. After creating a CephClient with the capabilities the VFS needs, start your SQLite database against the cluster. If that completes without error, you have successfully set up SQLite to access Ceph. See the libcephsqlite documentation for more information on the VFS and database URL format.

CephNFS CRD¶
Rook allows exporting NFS shares of a CephFilesystem or CephObjectStore through the CephNFS custom resource definition.

NFS Settings¶

Server¶
The server spec sets the configuration for the NFS-Ganesha server pods, most importantly the number of active servers (spec.server.active). An example CephNFS manifest is shown below.
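A minimal sketch of a CephNFS manifest matching the spec described in this document; the name and namespace are placeholders, and a single active server is requested via spec.server.active:

```bash
cat <<EOF | kubectl apply -f -
apiVersion: ceph.rook.io/v1
kind: CephNFS
metadata:
  name: my-nfs
  namespace: rook-ceph
spec:
  server:
    active: 1
EOF
```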
It is possible to scale the size of the NFS cluster up or down by modifying the spec.server.active field. The CRD always eliminates the highest-index servers first, in reverse order from how they were started. Scaling down the cluster requires that clients be migrated from the servers that will be eliminated to the others. That process is currently a manual one and should be performed before reducing the size of the cluster.

Warning: See the known issue below about setting this value greater than one.

Known issues¶

server.active count greater than 1¶
Packages:
Package v1 is the v1 version of the API. Resource Types:
CephBlockPool represents a Ceph Storage Pool Field DescriptionapiVersion string ceph.rook.io/v1 kind string CephBlockPool metadata Kubernetes meta/v1.ObjectMeta Refer to the Kubernetes API documentation for the fields of the metadata field. spec NamedBlockPoolSpec name string (Optional) The desired name of the pool if different from the CephBlockPool CR name. PoolSpec PoolSpec (Members of The core pool configuration status CephBlockPoolStatus"},{"location":"CRDs/specification/#ceph.rook.io/v1.CephBucketNotification","title":"CephBucketNotification","text":"CephBucketNotification represents a Bucket Notifications Field DescriptionapiVersion string ceph.rook.io/v1 kind string CephBucketNotification metadata Kubernetes meta/v1.ObjectMeta Refer to the Kubernetes API documentation for the fields of the metadata field. spec BucketNotificationSpec topic string The name of the topic associated with this notification events []BucketNotificationEvent (Optional) List of events that should trigger the notification filter NotificationFilterSpec (Optional) Spec of notification filter status Status (Optional)"},{"location":"CRDs/specification/#ceph.rook.io/v1.CephBucketTopic","title":"CephBucketTopic","text":"CephBucketTopic represents a Ceph Object Topic for Bucket Notifications Field DescriptionapiVersion string ceph.rook.io/v1 kind string CephBucketTopic metadata Kubernetes meta/v1.ObjectMeta Refer to the Kubernetes API documentation for the fields of the metadata field. spec BucketTopicSpec objectStoreName string The name of the object store on which to define the topic objectStoreNamespace string The namespace of the object store on which to define the topic opaqueData string (Optional) Data which is sent in each event persistent bool (Optional) Indication whether notifications to this endpoint are persistent or not endpoint TopicEndpointSpec Contains the endpoint spec of the topic status BucketTopicStatus (Optional)"},{"location":"CRDs/specification/#ceph.rook.io/v1.CephCOSIDriver","title":"CephCOSIDriver","text":"CephCOSIDriver represents the CRD for the Ceph COSI Driver Deployment Field DescriptionapiVersion string ceph.rook.io/v1 kind string CephCOSIDriver metadata Kubernetes meta/v1.ObjectMeta Refer to the Kubernetes API documentation for the fields of the metadata field. spec CephCOSIDriverSpec Spec represents the specification of a Ceph COSI Driver image string (Optional) Image is the container image to run the Ceph COSI driver objectProvisionerImage string (Optional) ObjectProvisionerImage is the container image to run the COSI driver sidecar deploymentStrategy COSIDeploymentStrategy (Optional) DeploymentStrategy is the strategy to use to deploy the COSI driver. placement Placement (Optional) Placement is the placement strategy to use for the COSI driver resources Kubernetes core/v1.ResourceRequirements (Optional) Resources is the resource requirements for the COSI driver "},{"location":"CRDs/specification/#ceph.rook.io/v1.CephClient","title":"CephClient","text":"CephClient represents a Ceph Client Field DescriptionapiVersion string ceph.rook.io/v1 kind string CephClient metadata Kubernetes meta/v1.ObjectMeta Refer to the Kubernetes API documentation for the fields of the metadata field. 
spec ClientSpec Spec represents the specification of a Ceph Client name string (Optional) caps map[string]string status CephClientStatus (Optional) Status represents the status of a Ceph Client "},{"location":"CRDs/specification/#ceph.rook.io/v1.CephCluster","title":"CephCluster","text":"CephCluster is a Ceph storage cluster Field DescriptionapiVersion string ceph.rook.io/v1 kind string CephCluster metadata Kubernetes meta/v1.ObjectMeta Refer to the Kubernetes API documentation for the fields of the metadata field. spec ClusterSpec cephVersion CephVersionSpec (Optional) The version information that instructs Rook to orchestrate a particular version of Ceph. storage StorageScopeSpec (Optional) A spec for available storage in the cluster and how it should be used annotations AnnotationsSpec (Optional) The annotations-related configuration to add/set on each Pod related object. labels LabelsSpec (Optional) The labels-related configuration to add/set on each Pod related object. placement PlacementSpec (Optional) The placement-related configuration to pass to kubernetes (affinity, node selector, tolerations). network NetworkSpec (Optional) Network related configuration resources ResourceSpec (Optional) Resources set resource requests and limits priorityClassNames PriorityClassNamesSpec (Optional) PriorityClassNames sets priority classes on components dataDirHostPath string (Optional) The path on the host where config and data can be persisted skipUpgradeChecks bool (Optional) SkipUpgradeChecks defines if an upgrade should be forced even if one of the check fails continueUpgradeAfterChecksEvenIfNotHealthy bool (Optional) ContinueUpgradeAfterChecksEvenIfNotHealthy defines if an upgrade should continue even if PGs are not clean waitTimeoutForHealthyOSDInMinutes time.Duration (Optional) WaitTimeoutForHealthyOSDInMinutes defines the time the operator would wait before an OSD can be stopped for upgrade or restart. If the timeout exceeds and OSD is not ok to stop, then the operator would skip upgrade for the current OSD and proceed with the next one if upgradeOSDRequiresHealthyPGs bool (Optional) UpgradeOSDRequiresHealthyPGs defines if OSD upgrade requires PGs are clean. If set to disruptionManagement DisruptionManagementSpec (Optional) A spec for configuring disruption management. mon MonSpec (Optional) A spec for mon related options crashCollector CrashCollectorSpec (Optional) A spec for the crash controller dashboard DashboardSpec (Optional) Dashboard settings monitoring MonitoringSpec (Optional) Prometheus based Monitoring settings external ExternalSpec (Optional) Whether the Ceph Cluster is running external to this Kubernetes cluster mon, mgr, osd, mds, and discover daemons will not be created for external clusters. mgr MgrSpec (Optional) A spec for mgr related options removeOSDsIfOutAndSafeToRemove bool (Optional) Remove the OSD that is out and safe to remove only if this option is true cleanupPolicy CleanupPolicySpec (Optional) Indicates user intent when deleting a cluster; blocks orchestration and should not be set if cluster deletion is not imminent. healthCheck CephClusterHealthCheckSpec (Optional) Internal daemon healthchecks and liveness probe security SecuritySpec (Optional) Security represents security settings logCollector LogCollectorSpec (Optional) Logging represents loggings settings csi CSIDriverSpec (Optional) CSI Driver Options applied per cluster. 
cephConfig map[string]map[string]string (Optional) Ceph Config options status ClusterStatus (Optional)"},{"location":"CRDs/specification/#ceph.rook.io/v1.CephFilesystem","title":"CephFilesystem","text":"CephFilesystem represents a Ceph Filesystem Field DescriptionapiVersion string ceph.rook.io/v1 kind string CephFilesystem metadata Kubernetes meta/v1.ObjectMeta Refer to the Kubernetes API documentation for the fields of the metadata field. spec FilesystemSpec metadataPool PoolSpec The metadata pool settings dataPools []NamedPoolSpec The data pool settings, with optional predefined pool name. preservePoolsOnDelete bool (Optional) Preserve pools on filesystem deletion preserveFilesystemOnDelete bool (Optional) Preserve the fs in the cluster on CephFilesystem CR deletion. Setting this to true automatically implies PreservePoolsOnDelete is true. metadataServer MetadataServerSpec The mds pod info mirroring FSMirroringSpec (Optional) The mirroring settings statusCheck MirrorHealthCheckSpec The mirroring statusCheck status CephFilesystemStatus"},{"location":"CRDs/specification/#ceph.rook.io/v1.CephFilesystemMirror","title":"CephFilesystemMirror","text":"CephFilesystemMirror is the Ceph Filesystem Mirror object definition Field DescriptionapiVersion string ceph.rook.io/v1 kind string CephFilesystemMirror metadata Kubernetes meta/v1.ObjectMeta Refer to the Kubernetes API documentation for the fields of the metadata field. spec FilesystemMirroringSpec placement Placement (Optional) The affinity to place the rgw pods (default is to place on any available node) annotations Annotations (Optional) The annotations-related configuration to add/set on each Pod related object. labels Labels (Optional) The labels-related configuration to add/set on each Pod related object. resources Kubernetes core/v1.ResourceRequirements (Optional) The resource requirements for the cephfs-mirror pods priorityClassName string (Optional) PriorityClassName sets priority class on the cephfs-mirror pods status Status (Optional)"},{"location":"CRDs/specification/#ceph.rook.io/v1.CephFilesystemSubVolumeGroup","title":"CephFilesystemSubVolumeGroup","text":"CephFilesystemSubVolumeGroup represents a Ceph Filesystem SubVolumeGroup Field DescriptionapiVersion string ceph.rook.io/v1 kind string CephFilesystemSubVolumeGroup metadata Kubernetes meta/v1.ObjectMeta Refer to the Kubernetes API documentation for the fields of the metadata field. spec CephFilesystemSubVolumeGroupSpec Spec represents the specification of a Ceph Filesystem SubVolumeGroup name string (Optional) The name of the subvolume group. If not set, the default is the name of the subvolumeGroup CR. filesystemName string FilesystemName is the name of Ceph Filesystem SubVolumeGroup volume name. Typically it\u2019s the name of the CephFilesystem CR. If not coming from the CephFilesystem CR, it can be retrieved from the list of Ceph Filesystem volumes with pinning CephFilesystemSubVolumeGroupSpecPinning (Optional) Pinning configuration of CephFilesystemSubVolumeGroup, reference https://docs.ceph.com/en/latest/cephfs/fs-volumes/#pinning-subvolumes-and-subvolume-groups only one out of (export, distributed, random) can be set at a time quota k8s.io/apimachinery/pkg/api/resource.Quantity (Optional) Quota size of the Ceph Filesystem subvolume group. dataPoolName string (Optional) The data pool name for the Ceph Filesystem subvolume group layout, if the default CephFS pool is not desired. 
status CephFilesystemSubVolumeGroupStatus (Optional) Status represents the status of a CephFilesystem SubvolumeGroup "},{"location":"CRDs/specification/#ceph.rook.io/v1.CephNFS","title":"CephNFS","text":"CephNFS represents a Ceph NFS Field DescriptionapiVersion string ceph.rook.io/v1 kind string CephNFS metadata Kubernetes meta/v1.ObjectMeta Refer to the Kubernetes API documentation for the fields of the metadata field. spec NFSGaneshaSpec rados GaneshaRADOSSpec (Optional) RADOS is the Ganesha RADOS specification server GaneshaServerSpec Server is the Ganesha Server specification security NFSSecuritySpec (Optional) Security allows specifying security configurations for the NFS cluster status Status (Optional)"},{"location":"CRDs/specification/#ceph.rook.io/v1.CephObjectRealm","title":"CephObjectRealm","text":"CephObjectRealm represents a Ceph Object Store Gateway Realm Field DescriptionapiVersion string ceph.rook.io/v1 kind string CephObjectRealm metadata Kubernetes meta/v1.ObjectMeta Refer to the Kubernetes API documentation for the fields of the metadata field. spec ObjectRealmSpec (Optional) pull PullSpec status Status (Optional)"},{"location":"CRDs/specification/#ceph.rook.io/v1.CephObjectStore","title":"CephObjectStore","text":"CephObjectStore represents a Ceph Object Store Gateway Field DescriptionapiVersion string ceph.rook.io/v1 kind string CephObjectStore metadata Kubernetes meta/v1.ObjectMeta Refer to the Kubernetes API documentation for the fields of the metadata field. spec ObjectStoreSpec metadataPool PoolSpec (Optional) The metadata pool settings dataPool PoolSpec (Optional) The data pool settings sharedPools ObjectSharedPoolsSpec (Optional) The pool information when configuring RADOS namespaces in existing pools. preservePoolsOnDelete bool (Optional) Preserve pools on object store deletion gateway GatewaySpec (Optional) The rgw pod info protocols ProtocolSpec (Optional) The protocol specification auth AuthSpec (Optional) The authentication configuration zone ZoneSpec (Optional) The multisite info healthCheck ObjectHealthCheckSpec (Optional) The RGW health probes security ObjectStoreSecuritySpec (Optional) Security represents security settings allowUsersInNamespaces []string (Optional) The list of allowed namespaces in addition to the object store namespace where ceph object store users may be created. Specify \u201c*\u201d to allow all namespaces, otherwise list individual namespaces that are to be allowed. This is useful for applications that need object store credentials to be created in their own namespace, where neither OBCs nor COSI is being used to create buckets. The default is empty. hosting ObjectStoreHostingSpec (Optional) Hosting settings for the object store. A common use case for hosting configuration is to inform Rook of endpoints that support DNS wildcards, which in turn allows virtual host-style bucket addressing. status ObjectStoreStatus"},{"location":"CRDs/specification/#ceph.rook.io/v1.CephObjectStoreUser","title":"CephObjectStoreUser","text":"CephObjectStoreUser represents a Ceph Object Store Gateway User Field DescriptionapiVersion string ceph.rook.io/v1 kind string CephObjectStoreUser metadata Kubernetes meta/v1.ObjectMeta Refer to the Kubernetes API documentation for the fields of the metadata field. 
spec ObjectStoreUserSpec store string (Optional) The store the user will be created in displayName string (Optional) The display name for the ceph users capabilities ObjectUserCapSpec (Optional) quotas ObjectUserQuotaSpec (Optional) clusterNamespace string (Optional) The namespace where the parent CephCluster and CephObjectStore are found status ObjectStoreUserStatus (Optional)"},{"location":"CRDs/specification/#ceph.rook.io/v1.CephObjectZone","title":"CephObjectZone","text":"CephObjectZone represents a Ceph Object Store Gateway Zone Field DescriptionapiVersion string ceph.rook.io/v1 kind string CephObjectZone metadata Kubernetes meta/v1.ObjectMeta Refer to the Kubernetes API documentation for the fields of the metadata field. spec ObjectZoneSpec zoneGroup string The display name for the ceph users metadataPool PoolSpec (Optional) The metadata pool settings dataPool PoolSpec (Optional) The data pool settings sharedPools ObjectSharedPoolsSpec (Optional) The pool information when configuring RADOS namespaces in existing pools. customEndpoints []string (Optional) If this zone cannot be accessed from other peer Ceph clusters via the ClusterIP Service endpoint created by Rook, you must set this to the externally reachable endpoint(s). You may include the port in the definition. For example: \u201chttps://my-object-store.my-domain.net:443\u201d. In many cases, you should set this to the endpoint of the ingress resource that makes the CephObjectStore associated with this CephObjectStoreZone reachable to peer clusters. The list can have one or more endpoints pointing to different RGW servers in the zone. If a CephObjectStore endpoint is omitted from this list, that object store\u2019s gateways will not receive multisite replication data (see CephObjectStore.spec.gateway.disableMultisiteSyncTraffic). preservePoolsOnDelete bool (Optional) Preserve pools on object zone deletion status Status (Optional)"},{"location":"CRDs/specification/#ceph.rook.io/v1.CephObjectZoneGroup","title":"CephObjectZoneGroup","text":"CephObjectZoneGroup represents a Ceph Object Store Gateway Zone Group Field DescriptionapiVersion string ceph.rook.io/v1 kind string CephObjectZoneGroup metadata Kubernetes meta/v1.ObjectMeta Refer to the Kubernetes API documentation for the fields of the metadata field. spec ObjectZoneGroupSpec realm string The display name for the ceph users status Status (Optional)"},{"location":"CRDs/specification/#ceph.rook.io/v1.CephRBDMirror","title":"CephRBDMirror","text":"CephRBDMirror represents a Ceph RBD Mirror Field DescriptionapiVersion string ceph.rook.io/v1 kind string CephRBDMirror metadata Kubernetes meta/v1.ObjectMeta Refer to the Kubernetes API documentation for the fields of the metadata field. spec RBDMirroringSpec count int Count represents the number of rbd mirror instance to run peers MirroringPeerSpec (Optional) Peers represents the peers spec placement Placement (Optional) The affinity to place the rgw pods (default is to place on any available node) annotations Annotations (Optional) The annotations-related configuration to add/set on each Pod related object. labels Labels (Optional) The labels-related configuration to add/set on each Pod related object. 
resources Kubernetes core/v1.ResourceRequirements (Optional) The resource requirements for the rbd mirror pods priorityClassName string (Optional) PriorityClassName sets priority class on the rbd mirror pods status Status (Optional)"},{"location":"CRDs/specification/#ceph.rook.io/v1.AMQPEndpointSpec","title":"AMQPEndpointSpec","text":"(Appears on:TopicEndpointSpec) AMQPEndpointSpec represent the spec of an AMQP endpoint of a Bucket Topic Field Descriptionuri string The URI of the AMQP endpoint to push notification to exchange string Name of the exchange that is used to route messages based on topics disableVerifySSL bool (Optional) Indicate whether the server certificate is validated by the client or not ackLevel string (Optional) The ack level required for this topic (none/broker/routeable) "},{"location":"CRDs/specification/#ceph.rook.io/v1.AdditionalVolumeMount","title":"AdditionalVolumeMount","text":"AdditionalVolumeMount represents the source from where additional files in pod containers should come from and what subdirectory they are made available in. Field DescriptionsubPath string SubPath defines the sub-path (subdirectory) of the directory root where the volumeSource will be mounted. All files/keys in the volume source\u2019s volume will be mounted to the subdirectory. This is not the same as the Kubernetes volumeSource ConfigFileVolumeSource VolumeSource accepts a pared down version of the standard Kubernetes VolumeSource for the additional file(s) like what is normally used to configure Volumes for a Pod. Fore example, a ConfigMap, Secret, or HostPath. Each VolumeSource adds one or more additional files to the container []github.com/rook/rook/pkg/apis/ceph.rook.io/v1.AdditionalVolumeMount alias)","text":"(Appears on:GatewaySpec, SSSDSidecar) "},{"location":"CRDs/specification/#ceph.rook.io/v1.AddressRangesSpec","title":"AddressRangesSpec","text":"(Appears on:NetworkSpec) Field Descriptionpublic CIDRList (Optional) Public defines a list of CIDRs to use for Ceph public network communication. cluster CIDRList (Optional) Cluster defines a list of CIDRs to use for Ceph cluster network communication. 
"},{"location":"CRDs/specification/#ceph.rook.io/v1.Annotations","title":"Annotations (map[string]string alias)","text":"(Appears on:FilesystemMirroringSpec, GaneshaServerSpec, GatewaySpec, MetadataServerSpec, RBDMirroringSpec, RGWServiceSpec) Annotations are annotations "},{"location":"CRDs/specification/#ceph.rook.io/v1.AnnotationsSpec","title":"AnnotationsSpec (map[github.com/rook/rook/pkg/apis/ceph.rook.io/v1.KeyType]github.com/rook/rook/pkg/apis/ceph.rook.io/v1.Annotations alias)","text":"(Appears on:ClusterSpec) AnnotationsSpec is the main spec annotation for all daemons "},{"location":"CRDs/specification/#ceph.rook.io/v1.AuthSpec","title":"AuthSpec","text":"(Appears on:ObjectStoreSpec) AuthSpec represents the authentication protocol configuration of a Ceph Object Store Gateway Field Descriptionkeystone KeystoneSpec (Optional) The spec for Keystone "},{"location":"CRDs/specification/#ceph.rook.io/v1.BucketNotificationEvent","title":"BucketNotificationEvent (string alias)","text":"(Appears on:BucketNotificationSpec) BucketNotificationSpec represent the event type of the bucket notification "},{"location":"CRDs/specification/#ceph.rook.io/v1.BucketNotificationSpec","title":"BucketNotificationSpec","text":"(Appears on:CephBucketNotification) BucketNotificationSpec represent the spec of a Bucket Notification Field Descriptiontopic string The name of the topic associated with this notification events []BucketNotificationEvent (Optional) List of events that should trigger the notification filter NotificationFilterSpec (Optional) Spec of notification filter "},{"location":"CRDs/specification/#ceph.rook.io/v1.BucketTopicSpec","title":"BucketTopicSpec","text":"(Appears on:CephBucketTopic) BucketTopicSpec represent the spec of a Bucket Topic Field DescriptionobjectStoreName string The name of the object store on which to define the topic objectStoreNamespace string The namespace of the object store on which to define the topic opaqueData string (Optional) Data which is sent in each event persistent bool (Optional) Indication whether notifications to this endpoint are persistent or not endpoint TopicEndpointSpec Contains the endpoint spec of the topic "},{"location":"CRDs/specification/#ceph.rook.io/v1.BucketTopicStatus","title":"BucketTopicStatus","text":"(Appears on:CephBucketTopic) BucketTopicStatus represents the Status of a CephBucketTopic Field Descriptionphase string (Optional) ARN string (Optional) The ARN of the topic generated by the RGW observedGeneration int64 (Optional) ObservedGeneration is the latest generation observed by the controller. "},{"location":"CRDs/specification/#ceph.rook.io/v1.CIDR","title":"CIDR (string alias)","text":"An IPv4 or IPv6 network CIDR. This naive kubebuilder regex provides immediate feedback for some typos and for a common problem case where the range spec is forgotten (e.g., /24). Rook does in-depth validation in code. 
"},{"location":"CRDs/specification/#ceph.rook.io/v1.COSIDeploymentStrategy","title":"COSIDeploymentStrategy (string alias)","text":"(Appears on:CephCOSIDriverSpec) COSIDeploymentStrategy represents the strategy to use to deploy the Ceph COSI driver Value Description\"Always\" Always means the Ceph COSI driver will be deployed even if the object store is not present \"Auto\" Auto means the Ceph COSI driver will be deployed automatically if object store is present \"Never\" Never means the Ceph COSI driver will never deployed "},{"location":"CRDs/specification/#ceph.rook.io/v1.CSICephFSSpec","title":"CSICephFSSpec","text":"(Appears on:CSIDriverSpec) CSICephFSSpec defines the settings for CephFS CSI driver. Field DescriptionkernelMountOptions string (Optional) KernelMountOptions defines the mount options for kernel mounter. fuseMountOptions string (Optional) FuseMountOptions defines the mount options for ceph fuse mounter. "},{"location":"CRDs/specification/#ceph.rook.io/v1.CSIDriverSpec","title":"CSIDriverSpec","text":"(Appears on:ClusterSpec) CSIDriverSpec defines CSI Driver settings applied per cluster. Field DescriptionreadAffinity ReadAffinitySpec (Optional) ReadAffinity defines the read affinity settings for CSI driver. cephfs CSICephFSSpec (Optional) CephFS defines CSI Driver settings for CephFS driver. "},{"location":"CRDs/specification/#ceph.rook.io/v1.Capacity","title":"Capacity","text":"(Appears on:CephStatus) Capacity is the capacity information of a Ceph Cluster Field DescriptionbytesTotal uint64 bytesUsed uint64 bytesAvailable uint64 lastUpdated string"},{"location":"CRDs/specification/#ceph.rook.io/v1.CephBlockPoolRadosNamespace","title":"CephBlockPoolRadosNamespace","text":"CephBlockPoolRadosNamespace represents a Ceph BlockPool Rados Namespace Field Descriptionmetadata Kubernetes meta/v1.ObjectMeta Refer to the Kubernetes API documentation for the fields of the metadata field. spec CephBlockPoolRadosNamespaceSpec Spec represents the specification of a Ceph BlockPool Rados Namespace name string (Optional) The name of the CephBlockPoolRadosNamespaceSpec namespace. If not set, the default is the name of the CR. blockPoolName string BlockPoolName is the name of Ceph BlockPool. Typically it\u2019s the name of the CephBlockPool CR. mirroring RadosNamespaceMirroring (Optional) Mirroring configuration of CephBlockPoolRadosNamespace status CephBlockPoolRadosNamespaceStatus (Optional) Status represents the status of a CephBlockPool Rados Namespace "},{"location":"CRDs/specification/#ceph.rook.io/v1.CephBlockPoolRadosNamespaceSpec","title":"CephBlockPoolRadosNamespaceSpec","text":"(Appears on:CephBlockPoolRadosNamespace) CephBlockPoolRadosNamespaceSpec represents the specification of a CephBlockPool Rados Namespace Field Descriptionname string (Optional) The name of the CephBlockPoolRadosNamespaceSpec namespace. If not set, the default is the name of the CR. blockPoolName string BlockPoolName is the name of Ceph BlockPool. Typically it\u2019s the name of the CephBlockPool CR. 
mirroring RadosNamespaceMirroring (Optional) Mirroring configuration of CephBlockPoolRadosNamespace "},{"location":"CRDs/specification/#ceph.rook.io/v1.CephBlockPoolRadosNamespaceStatus","title":"CephBlockPoolRadosNamespaceStatus","text":"(Appears on:CephBlockPoolRadosNamespace) CephBlockPoolRadosNamespaceStatus represents the Status of Ceph BlockPool Rados Namespace Field Descriptionphase ConditionType (Optional) info map[string]string (Optional)"},{"location":"CRDs/specification/#ceph.rook.io/v1.CephBlockPoolStatus","title":"CephBlockPoolStatus","text":"(Appears on:CephBlockPool) CephBlockPoolStatus represents the mirroring status of Ceph Storage Pool Field Descriptionphase ConditionType (Optional) mirroringStatus MirroringStatusSpec (Optional) mirroringInfo MirroringInfoSpec (Optional) snapshotScheduleStatus SnapshotScheduleStatusSpec (Optional) info map[string]string (Optional) observedGeneration int64 (Optional) ObservedGeneration is the latest generation observed by the controller. conditions []Condition"},{"location":"CRDs/specification/#ceph.rook.io/v1.CephCOSIDriverSpec","title":"CephCOSIDriverSpec","text":"(Appears on:CephCOSIDriver) CephCOSIDriverSpec represents the specification of a Ceph COSI Driver Field Descriptionimage string (Optional) Image is the container image to run the Ceph COSI driver objectProvisionerImage string (Optional) ObjectProvisionerImage is the container image to run the COSI driver sidecar deploymentStrategy COSIDeploymentStrategy (Optional) DeploymentStrategy is the strategy to use to deploy the COSI driver. placement Placement (Optional) Placement is the placement strategy to use for the COSI driver resources Kubernetes core/v1.ResourceRequirements (Optional) Resources is the resource requirements for the COSI driver "},{"location":"CRDs/specification/#ceph.rook.io/v1.CephClientStatus","title":"CephClientStatus","text":"(Appears on:CephClient) CephClientStatus represents the Status of Ceph Client Field Descriptionphase ConditionType (Optional) info map[string]string (Optional) observedGeneration int64 (Optional) ObservedGeneration is the latest generation observed by the controller. 
"},{"location":"CRDs/specification/#ceph.rook.io/v1.CephClusterHealthCheckSpec","title":"CephClusterHealthCheckSpec","text":"(Appears on:ClusterSpec) CephClusterHealthCheckSpec represent the healthcheck for Ceph daemons Field DescriptiondaemonHealth DaemonHealthSpec (Optional) DaemonHealth is the health check for a given daemon livenessProbe map[github.com/rook/rook/pkg/apis/ceph.rook.io/v1.KeyType]*github.com/rook/rook/pkg/apis/ceph.rook.io/v1.ProbeSpec (Optional) LivenessProbe allows changing the livenessProbe configuration for a given daemon startupProbe map[github.com/rook/rook/pkg/apis/ceph.rook.io/v1.KeyType]*github.com/rook/rook/pkg/apis/ceph.rook.io/v1.ProbeSpec (Optional) StartupProbe allows changing the startupProbe configuration for a given daemon "},{"location":"CRDs/specification/#ceph.rook.io/v1.CephDaemonsVersions","title":"CephDaemonsVersions","text":"(Appears on:CephStatus) CephDaemonsVersions show the current ceph version for different ceph daemons Field Descriptionmon map[string]int (Optional) Mon shows Mon Ceph version mgr map[string]int (Optional) Mgr shows Mgr Ceph version osd map[string]int (Optional) Osd shows Osd Ceph version rgw map[string]int (Optional) Rgw shows Rgw Ceph version mds map[string]int (Optional) Mds shows Mds Ceph version rbd-mirror map[string]int (Optional) RbdMirror shows RbdMirror Ceph version cephfs-mirror map[string]int (Optional) CephFSMirror shows CephFSMirror Ceph version overall map[string]int (Optional) Overall shows overall Ceph version "},{"location":"CRDs/specification/#ceph.rook.io/v1.CephExporterSpec","title":"CephExporterSpec","text":"(Appears on:MonitoringSpec) Field DescriptionperfCountersPrioLimit int64 Only performance counters greater than or equal to this option are fetched statsPeriodSeconds int64 Time to wait before sending requests again to exporter server (seconds) "},{"location":"CRDs/specification/#ceph.rook.io/v1.CephFilesystemStatus","title":"CephFilesystemStatus","text":"(Appears on:CephFilesystem) CephFilesystemStatus represents the status of a Ceph Filesystem Field Descriptionphase ConditionType (Optional) snapshotScheduleStatus FilesystemSnapshotScheduleStatusSpec (Optional) info map[string]string (Optional) Use only info and put mirroringStatus in it? mirroringStatus FilesystemMirroringInfoSpec (Optional) MirroringStatus is the filesystem mirroring status conditions []Condition observedGeneration int64 (Optional) ObservedGeneration is the latest generation observed by the controller. "},{"location":"CRDs/specification/#ceph.rook.io/v1.CephFilesystemSubVolumeGroupSpec","title":"CephFilesystemSubVolumeGroupSpec","text":"(Appears on:CephFilesystemSubVolumeGroup) CephFilesystemSubVolumeGroupSpec represents the specification of a Ceph Filesystem SubVolumeGroup Field Descriptionname string (Optional) The name of the subvolume group. If not set, the default is the name of the subvolumeGroup CR. filesystemName string FilesystemName is the name of Ceph Filesystem SubVolumeGroup volume name. Typically it\u2019s the name of the CephFilesystem CR. 
If not coming from the CephFilesystem CR, it can be retrieved from the list of Ceph Filesystem volumes with pinning CephFilesystemSubVolumeGroupSpecPinning (Optional) Pinning configuration of CephFilesystemSubVolumeGroup, reference https://docs.ceph.com/en/latest/cephfs/fs-volumes/#pinning-subvolumes-and-subvolume-groups only one out of (export, distributed, random) can be set at a time quota k8s.io/apimachinery/pkg/api/resource.Quantity (Optional) Quota size of the Ceph Filesystem subvolume group. dataPoolName string (Optional) The data pool name for the Ceph Filesystem subvolume group layout, if the default CephFS pool is not desired. "},{"location":"CRDs/specification/#ceph.rook.io/v1.CephFilesystemSubVolumeGroupSpecPinning","title":"CephFilesystemSubVolumeGroupSpecPinning","text":"(Appears on:CephFilesystemSubVolumeGroupSpec) CephFilesystemSubVolumeGroupSpecPinning represents the pinning configuration of SubVolumeGroup Field Descriptionexport int (Optional) distributed int (Optional) random, float64 (Optional)"},{"location":"CRDs/specification/#ceph.rook.io/v1.CephFilesystemSubVolumeGroupStatus","title":"CephFilesystemSubVolumeGroupStatus","text":"(Appears on:CephFilesystemSubVolumeGroup) CephFilesystemSubVolumeGroupStatus represents the Status of Ceph Filesystem SubVolumeGroup Field Descriptionphase ConditionType (Optional) info map[string]string (Optional) observedGeneration int64 (Optional) ObservedGeneration is the latest generation observed by the controller. "},{"location":"CRDs/specification/#ceph.rook.io/v1.CephHealthMessage","title":"CephHealthMessage","text":"(Appears on:CephStatus) CephHealthMessage represents the health message of a Ceph Cluster Field Descriptionseverity string message string"},{"location":"CRDs/specification/#ceph.rook.io/v1.CephNetworkType","title":"CephNetworkType (string alias)","text":"CephNetworkType should be \u201cpublic\u201d or \u201ccluster\u201d. Allow any string so that over-specified legacy clusters do not break on CRD update. Value Description\"cluster\" \"public\" "},{"location":"CRDs/specification/#ceph.rook.io/v1.CephStatus","title":"CephStatus","text":"(Appears on:ClusterStatus) CephStatus is the details health of a Ceph Cluster Field Descriptionhealth string details map[string]github.com/rook/rook/pkg/apis/ceph.rook.io/v1.CephHealthMessage lastChecked string lastChanged string previousHealth string capacity Capacity versions CephDaemonsVersions (Optional) fsid string"},{"location":"CRDs/specification/#ceph.rook.io/v1.CephStorage","title":"CephStorage","text":"(Appears on:ClusterStatus) CephStorage represents flavors of Ceph Cluster Storage Field DescriptiondeviceClasses []DeviceClasses osd OSDStatus deprecatedOSDs map[string][]int"},{"location":"CRDs/specification/#ceph.rook.io/v1.CephVersionSpec","title":"CephVersionSpec","text":"(Appears on:ClusterSpec) CephVersionSpec represents the settings for the Ceph version that Rook is orchestrating. Field Descriptionimage string (Optional) Image is the container image used to launch the ceph daemons, such as quay.io/ceph/ceph: The full list of images can be found at https://quay.io/repository/ceph/ceph?tab=tags Whether to allow unsupported versions (do not set to true in production) imagePullPolicy Kubernetes core/v1.PullPolicy (Optional) ImagePullPolicy describes a policy for if/when to pull a container image One of Always, Never, IfNotPresent. 
"},{"location":"CRDs/specification/#ceph.rook.io/v1.CleanupConfirmationProperty","title":"CleanupConfirmationProperty (string alias)","text":"(Appears on:CleanupPolicySpec) CleanupConfirmationProperty represents the cleanup confirmation Value Description\"yes-really-destroy-data\" DeleteDataDirOnHostsConfirmation represents the validation to destroy dataDirHostPath "},{"location":"CRDs/specification/#ceph.rook.io/v1.CleanupPolicySpec","title":"CleanupPolicySpec","text":"(Appears on:ClusterSpec) CleanupPolicySpec represents a Ceph Cluster cleanup policy Field Descriptionconfirmation CleanupConfirmationProperty (Optional) Confirmation represents the cleanup confirmation sanitizeDisks SanitizeDisksSpec (Optional) SanitizeDisks represents way we sanitize disks allowUninstallWithVolumes bool (Optional) AllowUninstallWithVolumes defines whether we can proceed with the uninstall if they are RBD images still present "},{"location":"CRDs/specification/#ceph.rook.io/v1.ClientSpec","title":"ClientSpec","text":"(Appears on:CephClient) ClientSpec represents the specification of a Ceph Client Field Descriptionname string (Optional) caps map[string]string"},{"location":"CRDs/specification/#ceph.rook.io/v1.ClusterSpec","title":"ClusterSpec","text":"(Appears on:CephCluster) ClusterSpec represents the specification of Ceph Cluster Field DescriptioncephVersion CephVersionSpec (Optional) The version information that instructs Rook to orchestrate a particular version of Ceph. storage StorageScopeSpec (Optional) A spec for available storage in the cluster and how it should be used annotations AnnotationsSpec (Optional) The annotations-related configuration to add/set on each Pod related object. labels LabelsSpec (Optional) The labels-related configuration to add/set on each Pod related object. placement PlacementSpec (Optional) The placement-related configuration to pass to kubernetes (affinity, node selector, tolerations). network NetworkSpec (Optional) Network related configuration resources ResourceSpec (Optional) Resources set resource requests and limits priorityClassNames PriorityClassNamesSpec (Optional) PriorityClassNames sets priority classes on components dataDirHostPath string (Optional) The path on the host where config and data can be persisted skipUpgradeChecks bool (Optional) SkipUpgradeChecks defines if an upgrade should be forced even if one of the check fails continueUpgradeAfterChecksEvenIfNotHealthy bool (Optional) ContinueUpgradeAfterChecksEvenIfNotHealthy defines if an upgrade should continue even if PGs are not clean waitTimeoutForHealthyOSDInMinutes time.Duration (Optional) WaitTimeoutForHealthyOSDInMinutes defines the time the operator would wait before an OSD can be stopped for upgrade or restart. If the timeout exceeds and OSD is not ok to stop, then the operator would skip upgrade for the current OSD and proceed with the next one if upgradeOSDRequiresHealthyPGs bool (Optional) UpgradeOSDRequiresHealthyPGs defines if OSD upgrade requires PGs are clean. If set to disruptionManagement DisruptionManagementSpec (Optional) A spec for configuring disruption management. 
mon MonSpec (Optional) A spec for mon related options crashCollector CrashCollectorSpec (Optional) A spec for the crash controller dashboard DashboardSpec (Optional) Dashboard settings monitoring MonitoringSpec (Optional) Prometheus based Monitoring settings external ExternalSpec (Optional) Whether the Ceph Cluster is running external to this Kubernetes cluster mon, mgr, osd, mds, and discover daemons will not be created for external clusters. mgr MgrSpec (Optional) A spec for mgr related options removeOSDsIfOutAndSafeToRemove bool (Optional) Remove the OSD that is out and safe to remove only if this option is true cleanupPolicy CleanupPolicySpec (Optional) Indicates user intent when deleting a cluster; blocks orchestration and should not be set if cluster deletion is not imminent. healthCheck CephClusterHealthCheckSpec (Optional) Internal daemon healthchecks and liveness probe security SecuritySpec (Optional) Security represents security settings logCollector LogCollectorSpec (Optional) Logging represents loggings settings csi CSIDriverSpec (Optional) CSI Driver Options applied per cluster. cephConfig map[string]map[string]string (Optional) Ceph Config options "},{"location":"CRDs/specification/#ceph.rook.io/v1.ClusterState","title":"ClusterState (string alias)","text":"(Appears on:ClusterStatus) ClusterState represents the state of a Ceph Cluster Value Description\"Connected\" ClusterStateConnected represents the Connected state of a Ceph Cluster \"Connecting\" ClusterStateConnecting represents the Connecting state of a Ceph Cluster \"Created\" ClusterStateCreated represents the Created state of a Ceph Cluster \"Creating\" ClusterStateCreating represents the Creating state of a Ceph Cluster \"Error\" ClusterStateError represents the Error state of a Ceph Cluster \"Updating\" ClusterStateUpdating represents the Updating state of a Ceph Cluster "},{"location":"CRDs/specification/#ceph.rook.io/v1.ClusterStatus","title":"ClusterStatus","text":"(Appears on:CephCluster) ClusterStatus represents the status of a Ceph cluster Field Descriptionstate ClusterState phase ConditionType message string conditions []Condition ceph CephStatus storage CephStorage version ClusterVersion observedGeneration int64 (Optional) ObservedGeneration is the latest generation observed by the controller. "},{"location":"CRDs/specification/#ceph.rook.io/v1.ClusterVersion","title":"ClusterVersion","text":"(Appears on:ClusterStatus) ClusterVersion represents the version of a Ceph Cluster Field Descriptionimage string version string"},{"location":"CRDs/specification/#ceph.rook.io/v1.CompressionSpec","title":"CompressionSpec","text":"(Appears on:ConnectionsSpec) Field Descriptionenabled bool (Optional) Whether to compress the data in transit across the wire. The default is not set. "},{"location":"CRDs/specification/#ceph.rook.io/v1.Condition","title":"Condition","text":"(Appears on:CephBlockPoolStatus, CephFilesystemStatus, ClusterStatus, ObjectStoreStatus, Status) Condition represents a status condition on any Rook-Ceph Custom Resource. 
Field Descriptiontype ConditionType status Kubernetes core/v1.ConditionStatus reason ConditionReason message string lastHeartbeatTime Kubernetes meta/v1.Time lastTransitionTime Kubernetes meta/v1.Time"},{"location":"CRDs/specification/#ceph.rook.io/v1.ConditionReason","title":"ConditionReason (string alias)","text":"(Appears on:Condition) ConditionReason is a reason for a condition Value Description\"ClusterConnected\" ClusterConnectedReason is cluster connected reason \"ClusterConnecting\" ClusterConnectingReason is cluster connecting reason \"ClusterCreated\" ClusterCreatedReason is cluster created reason \"ClusterDeleting\" ClusterDeletingReason is cluster deleting reason \"ClusterProgressing\" ClusterProgressingReason is cluster progressing reason \"Deleting\" DeletingReason represents when Rook has detected a resource object should be deleted. \"ObjectHasDependents\" ObjectHasDependentsReason represents when a resource object has dependents that are blocking deletion. \"ObjectHasNoDependents\" ObjectHasNoDependentsReason represents when a resource object has no dependents that are blocking deletion. \"ReconcileFailed\" ReconcileFailed represents when a resource reconciliation failed. \"ReconcileStarted\" ReconcileStarted represents when a resource reconciliation started. \"ReconcileSucceeded\" ReconcileSucceeded represents when a resource reconciliation was successful. "},{"location":"CRDs/specification/#ceph.rook.io/v1.ConditionType","title":"ConditionType (string alias)","text":"(Appears on:CephBlockPoolRadosNamespaceStatus, CephBlockPoolStatus, CephClientStatus, CephFilesystemStatus, CephFilesystemSubVolumeGroupStatus, ClusterStatus, Condition, ObjectStoreStatus) ConditionType represent a resource\u2019s status Value Description\"Connected\" ConditionConnected represents Connected state of an object \"Connecting\" ConditionConnecting represents Connecting state of an object \"Deleting\" ConditionDeleting represents Deleting state of an object \"DeletionIsBlocked\" ConditionDeletionIsBlocked represents when deletion of the object is blocked. \"Failure\" ConditionFailure represents Failure state of an object \"Progressing\" ConditionProgressing represents Progressing state of an object \"Ready\" ConditionReady represents Ready state of an object "},{"location":"CRDs/specification/#ceph.rook.io/v1.ConfigFileVolumeSource","title":"ConfigFileVolumeSource","text":"(Appears on:AdditionalVolumeMount, KerberosConfigFiles, KerberosKeytabFile, SSSDSidecarConfigFile) Represents the source of a volume to mount. Only one of its members may be specified. This is a subset of the full Kubernetes API\u2019s VolumeSource that is reduced to what is most likely to be useful for mounting config files/dirs into Rook pods. Field DescriptionhostPath Kubernetes core/v1.HostPathVolumeSource (Optional) hostPath represents a pre-existing file or directory on the host machine that is directly exposed to the container. This is generally used for system agents or other privileged things that are allowed to see the host machine. Most containers will NOT need this. More info: https://kubernetes.io/docs/concepts/storage/volumes#hostpath emptyDir Kubernetes core/v1.EmptyDirVolumeSource (Optional) emptyDir represents a temporary directory that shares a pod\u2019s lifetime. More info: https://kubernetes.io/docs/concepts/storage/volumes#emptydir secret Kubernetes core/v1.SecretVolumeSource (Optional) secret represents a secret that should populate this volume. 
More info: https://kubernetes.io/docs/concepts/storage/volumes#secret persistentVolumeClaim Kubernetes core/v1.PersistentVolumeClaimVolumeSource (Optional) persistentVolumeClaimVolumeSource represents a reference to a PersistentVolumeClaim in the same namespace. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims configMap Kubernetes core/v1.ConfigMapVolumeSource (Optional) configMap represents a configMap that should populate this volume projected Kubernetes core/v1.ProjectedVolumeSource projected items for all in one resources secrets, configmaps, and downward API "},{"location":"CRDs/specification/#ceph.rook.io/v1.ConnectionsSpec","title":"ConnectionsSpec","text":"(Appears on:NetworkSpec) Field Descriptionencryption EncryptionSpec (Optional) Encryption settings for the network connections. compression CompressionSpec (Optional) Compression settings for the network connections. requireMsgr2 bool (Optional) Whether to require msgr2 (port 3300) even if compression or encryption are not enabled. If true, the msgr1 port (6789) will be disabled. Requires a kernel that supports msgr2 (kernel 5.11 or CentOS 8.4 or newer). "},{"location":"CRDs/specification/#ceph.rook.io/v1.CrashCollectorSpec","title":"CrashCollectorSpec","text":"(Appears on:ClusterSpec) CrashCollectorSpec represents options to configure the crash controller Field Descriptiondisable bool (Optional) Disable determines whether we should enable the crash collector daysToRetain uint (Optional) DaysToRetain represents the number of days to retain crash until they get pruned "},{"location":"CRDs/specification/#ceph.rook.io/v1.DaemonHealthSpec","title":"DaemonHealthSpec","text":"(Appears on:CephClusterHealthCheckSpec) DaemonHealthSpec is a daemon health check Field Descriptionstatus HealthCheckSpec (Optional) Status represents the health check settings for the Ceph health mon HealthCheckSpec (Optional) Monitor represents the health check settings for the Ceph monitor osd HealthCheckSpec (Optional) ObjectStorageDaemon represents the health check settings for the Ceph OSDs "},{"location":"CRDs/specification/#ceph.rook.io/v1.DashboardSpec","title":"DashboardSpec","text":"(Appears on:ClusterSpec) DashboardSpec represents the settings for the Ceph dashboard Field Descriptionenabled bool (Optional) Enabled determines whether to enable the dashboard urlPrefix string (Optional) URLPrefix is a prefix for all URLs to use the dashboard with a reverse proxy port int (Optional) Port is the dashboard webserver port ssl bool (Optional) SSL determines whether SSL should be used prometheusEndpoint string (Optional) Endpoint for the Prometheus host prometheusEndpointSSLVerify bool (Optional) Whether to verify the ssl endpoint for prometheus. Set to false for a self-signed cert. 
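As a rough example of the DashboardSpec settings just described, a CephCluster might enable the dashboard behind a reverse proxy like this; the prefix and port values are placeholders.

```yaml
spec:
  dashboard:
    enabled: true
    # Prefix for all dashboard URLs when served behind a reverse proxy
    urlPrefix: /ceph-dashboard
    # Dashboard webserver port
    port: 8443
    # Serve the dashboard over SSL
    ssl: true
```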
"},{"location":"CRDs/specification/#ceph.rook.io/v1.Device","title":"Device","text":"(Appears on:Selection) Device represents a disk to use in the cluster Field Descriptionname string (Optional) fullpath string (Optional) config map[string]string (Optional)"},{"location":"CRDs/specification/#ceph.rook.io/v1.DeviceClasses","title":"DeviceClasses","text":"(Appears on:CephStorage) DeviceClasses represents device classes of a Ceph Cluster Field Descriptionname string"},{"location":"CRDs/specification/#ceph.rook.io/v1.DisruptionManagementSpec","title":"DisruptionManagementSpec","text":"(Appears on:ClusterSpec) DisruptionManagementSpec configures management of daemon disruptions Field DescriptionmanagePodBudgets bool (Optional) This enables management of poddisruptionbudgets osdMaintenanceTimeout time.Duration (Optional) OSDMaintenanceTimeout sets how many additional minutes the DOWN/OUT interval is for drained failure domains it only works if managePodBudgets is true. the default is 30 minutes pgHealthCheckTimeout time.Duration (Optional) PGHealthCheckTimeout is the time (in minutes) that the operator will wait for the placement groups to become healthy (active+clean) after a drain was completed and OSDs came back up. Rook will continue with the next drain if the timeout exceeds. It only works if managePodBudgets is true. No values or 0 means that the operator will wait until the placement groups are healthy before unblocking the next drain. pgHealthyRegex string (Optional) PgHealthyRegex is the regular expression that is used to determine which PG states should be considered healthy. The default is manageMachineDisruptionBudgets bool (Optional) Deprecated. This enables management of machinedisruptionbudgets. machineDisruptionBudgetNamespace string (Optional) Deprecated. Namespace to look for MDBs by the machineDisruptionBudgetController "},{"location":"CRDs/specification/#ceph.rook.io/v1.EncryptionSpec","title":"EncryptionSpec","text":"(Appears on:ConnectionsSpec) Field Descriptionenabled bool (Optional) Whether to encrypt the data in transit across the wire to prevent eavesdropping the data on the network. The default is not set. Even if encryption is not enabled, clients still establish a strong initial authentication for the connection and data integrity is still validated with a crc check. When encryption is enabled, all communication between clients and Ceph daemons, or between Ceph daemons will be encrypted. "},{"location":"CRDs/specification/#ceph.rook.io/v1.EndpointAddress","title":"EndpointAddress","text":"(Appears on:GatewaySpec) EndpointAddress is a tuple that describes a single IP address or host name. This is a subset of Kubernetes\u2019s v1.EndpointAddress. Field Descriptionip string (Optional) The IP of this endpoint. As a legacy behavior, this supports being given a DNS-addressable hostname as well. hostname string (Optional) The DNS-addressable Hostname of this endpoint. This field will be preferred over IP if both are given. "},{"location":"CRDs/specification/#ceph.rook.io/v1.ErasureCodedSpec","title":"ErasureCodedSpec","text":"(Appears on:PoolSpec) ErasureCodedSpec represents the spec for erasure code in a pool Field DescriptioncodingChunks uint Number of coding chunks per object in an erasure coded storage pool (required for erasure-coded pool type). This is the number of OSDs that can be lost simultaneously before data cannot be recovered. dataChunks uint Number of data chunks per object in an erasure coded storage pool (required for erasure-coded pool type). 
The number of chunks required to recover an object when any single OSD is lost is the same as dataChunks so be aware that the larger the number of data chunks, the higher the cost of recovery.
algorithm string (Optional) The algorithm for erasure coding
ExternalSpec¶
(Appears on:ClusterSpec)
ExternalSpec represents the options supported by an external cluster
Field Description
enable bool (Optional) Enable determines whether external mode is enabled or not
FSMirroringSpec¶
(Appears on:FilesystemSpec)
FSMirroringSpec represents the setting for a mirrored filesystem
Field Description
enabled bool (Optional) Enabled whether this filesystem is mirrored or not
peers MirroringPeerSpec (Optional) Peers represents the peers spec
snapshotSchedules []SnapshotScheduleSpec (Optional) SnapshotSchedules is the scheduling of snapshot for mirrored filesystems
snapshotRetention []SnapshotScheduleRetentionSpec (Optional) Retention is the retention policy for a snapshot schedule. One path has exactly one retention policy. A policy can however contain multiple count-time period pairs in order to specify complex retention policies
FilesystemMirrorInfoPeerSpec¶
(Appears on:FilesystemsSpec)
FilesystemMirrorInfoPeerSpec is the specification of a filesystem peer mirror
Field Description
uuid string (Optional) UUID is the peer unique identifier
remote PeerRemoteSpec (Optional) Remote are the remote cluster information
stats PeerStatSpec (Optional) Stats are the stats of a peer mirror
FilesystemMirroringInfo¶
(Appears on:FilesystemMirroringInfoSpec)
FilesystemMirrorInfoSpec is the filesystem mirror status of a given filesystem
Field Description
daemon_id int (Optional) DaemonID is the cephfs-mirror name
filesystems []FilesystemsSpec (Optional) Filesystems is the list of filesystems managed by a given cephfs-mirror daemon
FilesystemMirroringInfoSpec¶
(Appears on:CephFilesystemStatus)
FilesystemMirroringInfo is the status of the pool mirroring
Field Description
daemonsStatus []FilesystemMirroringInfo (Optional) PoolMirroringStatus is the mirroring status of a filesystem
lastChecked string (Optional) LastChecked is the last time the status was checked
lastChanged string (Optional) LastChanged is the last time the status changed
details string (Optional) Details contains potential status errors
FilesystemMirroringSpec¶
(Appears on:CephFilesystemMirror)
FilesystemMirroringSpec is the filesystem mirroring specification
Field Description
placement Placement (Optional) The affinity to place the cephfs-mirror pods (default is to place on any available node)
annotations Annotations (Optional) The annotations-related configuration to add/set on each Pod related object.
labels Labels (Optional) The labels-related configuration to add/set on each Pod related object.
resources Kubernetes core/v1.ResourceRequirements (Optional) The resource requirements for the cephfs-mirror pods
priorityClassName string (Optional) PriorityClassName sets priority class on the cephfs-mirror pods
FilesystemSnapshotScheduleStatusRetention¶
(Appears on:FilesystemSnapshotSchedulesSpec)
FilesystemSnapshotScheduleStatusRetention is the retention specification for a filesystem snapshot schedule
Field Description
start string (Optional) Start is when the snapshot schedule starts
created string (Optional) Created is when the snapshot schedule was created
first string (Optional) First is when the first snapshot schedule was taken
last string (Optional) Last is when the last snapshot schedule was taken
last_pruned string (Optional) LastPruned is when the last snapshot schedule was pruned
created_count int (Optional) CreatedCount is the total amount of snapshots
pruned_count int (Optional) PrunedCount is the total amount of pruned snapshots
active bool (Optional) Active is whether the schedule is active or not
FilesystemSnapshotScheduleStatusSpec¶
(Appears on:CephFilesystemStatus)
FilesystemSnapshotScheduleStatusSpec is the status of the snapshot schedule
Field Description
snapshotSchedules []FilesystemSnapshotSchedulesSpec (Optional) SnapshotSchedules is the list of snapshots scheduled
lastChecked string (Optional) LastChecked is the last time the status was checked
lastChanged string (Optional) LastChanged is the last time the status changed
details string (Optional) Details contains potential status errors
FilesystemSnapshotSchedulesSpec¶
(Appears on:FilesystemSnapshotScheduleStatusSpec)
FilesystemSnapshotSchedulesSpec is the list of snapshots scheduled for images in a pool
Field Description
fs string (Optional) Fs is the name of the Ceph Filesystem
subvol string (Optional) Subvol is the name of the sub volume
path string (Optional) Path is the path on the filesystem
rel_path string (Optional)
schedule string (Optional)
retention FilesystemSnapshotScheduleStatusRetention (Optional)
FilesystemSpec¶
(Appears on:CephFilesystem)
FilesystemSpec represents the spec of a file system
Field Description
metadataPool PoolSpec The metadata pool settings
dataPools []NamedPoolSpec The data pool settings, with optional predefined pool name.
preservePoolsOnDelete bool (Optional) Preserve pools on filesystem deletion
preserveFilesystemOnDelete bool (Optional) Preserve the fs in the cluster on CephFilesystem CR deletion. Setting this to true automatically implies PreservePoolsOnDelete is true.
metadataServer MetadataServerSpec The mds pod info mirroring FSMirroringSpec (Optional) The mirroring settings statusCheck MirrorHealthCheckSpec The mirroring statusCheck "},{"location":"CRDs/specification/#ceph.rook.io/v1.FilesystemsSpec","title":"FilesystemsSpec","text":"(Appears on:FilesystemMirroringInfo) FilesystemsSpec is spec for the mirrored filesystem Field Descriptionfilesystem_id int (Optional) FilesystemID is the filesystem identifier name string (Optional) Name is name of the filesystem directory_count int (Optional) DirectoryCount is the number of directories in the filesystem peers []FilesystemMirrorInfoPeerSpec (Optional) Peers represents the mirroring peers "},{"location":"CRDs/specification/#ceph.rook.io/v1.GaneshaRADOSSpec","title":"GaneshaRADOSSpec","text":"(Appears on:NFSGaneshaSpec) GaneshaRADOSSpec represents the specification of a Ganesha RADOS object Field Descriptionpool string (Optional) The Ceph pool used store the shared configuration for NFS-Ganesha daemons. This setting is deprecated, as it is internally required to be \u201c.nfs\u201d. namespace string (Optional) The namespace inside the Ceph pool (set by \u2018pool\u2019) where shared NFS-Ganesha config is stored. This setting is deprecated as it is internally set to the name of the CephNFS. "},{"location":"CRDs/specification/#ceph.rook.io/v1.GaneshaServerSpec","title":"GaneshaServerSpec","text":"(Appears on:NFSGaneshaSpec) GaneshaServerSpec represents the specification of a Ganesha Server Field Descriptionactive int The number of active Ganesha servers placement Placement (Optional) The affinity to place the ganesha pods annotations Annotations (Optional) The annotations-related configuration to add/set on each Pod related object. labels Labels (Optional) The labels-related configuration to add/set on each Pod related object. resources Kubernetes core/v1.ResourceRequirements (Optional) Resources set resource requests and limits priorityClassName string (Optional) PriorityClassName sets the priority class on the pods logLevel string (Optional) LogLevel set logging level hostNetwork bool (Optional) Whether host networking is enabled for the Ganesha server. If not set, the network settings from the cluster CR will be applied. livenessProbe ProbeSpec (Optional) A liveness-probe to verify that Ganesha server has valid run-time state. If LivenessProbe.Disabled is false and LivenessProbe.Probe is nil uses default probe. "},{"location":"CRDs/specification/#ceph.rook.io/v1.GatewaySpec","title":"GatewaySpec","text":"(Appears on:ObjectStoreSpec) GatewaySpec represents the specification of Ceph Object Store Gateway Field Descriptionport int32 (Optional) The port the rgw service will be listening on (http) securePort int32 (Optional) The port the rgw service will be listening on (https) instances int32 (Optional) The number of pods in the rgw replicaset. sslCertificateRef string (Optional) The name of the secret that stores the ssl certificate for secure rgw connections caBundleRef string (Optional) The name of the secret that stores custom ca-bundle with root and intermediate certificates. placement Placement (Optional) The affinity to place the rgw pods (default is to place on any available node) disableMultisiteSyncTraffic bool (Optional) DisableMultisiteSyncTraffic, when true, prevents this object store\u2019s gateways from transmitting multisite replication data. Note that this value does not affect whether gateways receive multisite replication traffic: see ObjectZone.spec.customEndpoints for that. 
If false or unset, this object store's gateways will be able to transmit multisite replication data.
annotations Annotations (Optional) The annotations-related configuration to add/set on each Pod related object.
labels Labels (Optional) The labels-related configuration to add/set on each Pod related object.
resources Kubernetes core/v1.ResourceRequirements (Optional) The resource requirements for the rgw pods
priorityClassName string (Optional) PriorityClassName sets priority classes on the rgw pods
externalRgwEndpoints []EndpointAddress (Optional) ExternalRgwEndpoints points to external RGW endpoint(s). Multiple endpoints can be given, but for stability of ObjectBucketClaims, we highly recommend that users give only a single external RGW endpoint that is a load balancer that sends requests to the multiple RGWs.
service RGWServiceSpec (Optional) The configuration related to add/set on each rgw service.
hostNetwork bool (Optional) Whether host networking is enabled for the rgw daemon. If not set, the network settings from the cluster CR will be applied.
dashboardEnabled bool (Optional) Whether rgw dashboard is enabled for the rgw daemon. If not set, the rgw dashboard will be enabled.
additionalVolumeMounts AdditionalVolumeMounts AdditionalVolumeMounts allows additional volumes to be mounted to the RGW pod. The root directory for each additional volume mount is
HTTPEndpointSpec¶
(Appears on:TopicEndpointSpec)
HTTPEndpointSpec represents the spec of an HTTP endpoint of a Bucket Topic
Field Description
uri string The URI of the HTTP endpoint to push notification to
disableVerifySSL bool (Optional) Indicate whether the server certificate is validated by the client or not
sendCloudEvents bool (Optional) Send the notifications with the CloudEvents header: https://github.com/cloudevents/spec/blob/main/cloudevents/adapters/aws-s3.md
HealthCheckSpec¶
(Appears on:DaemonHealthSpec, MirrorHealthCheckSpec)
HealthCheckSpec represents the health check of an object store bucket
Field Description
disabled bool (Optional)
interval Kubernetes meta/v1.Duration (Optional) Interval is the interval, in seconds or minutes, for the health check to run, like 60s for 60 seconds
timeout string (Optional)
HybridStorageSpec¶
(Appears on:ReplicatedSpec)
HybridStorageSpec represents the settings for a hybrid storage pool
Field Description
primaryDeviceClass string PrimaryDeviceClass represents the high performance tier (for example SSD or NVME) for the Primary OSD
secondaryDeviceClass string SecondaryDeviceClass represents the low performance tier (for example HDDs) for remaining OSDs
IPFamilyType (string alias)¶
(Appears on:NetworkSpec)
IPFamilyType represents the single stack IPv4 or IPv6 protocol.
Value Description\"IPv4\" IPv4 internet protocol version \"IPv6\" IPv6 internet protocol version "},{"location":"CRDs/specification/#ceph.rook.io/v1.ImplicitTenantSetting","title":"ImplicitTenantSetting (string alias)","text":"(Appears on:KeystoneSpec) Value Description\"\" \"false\" \"s3\" \"swift\" \"true\" "},{"location":"CRDs/specification/#ceph.rook.io/v1.KafkaEndpointSpec","title":"KafkaEndpointSpec","text":"(Appears on:TopicEndpointSpec) KafkaEndpointSpec represent the spec of a Kafka endpoint of a Bucket Topic Field Descriptionuri string The URI of the Kafka endpoint to push notification to useSSL bool (Optional) Indicate whether to use SSL when communicating with the broker disableVerifySSL bool (Optional) Indicate whether the server certificate is validated by the client or not ackLevel string (Optional) The ack level required for this topic (none/broker) "},{"location":"CRDs/specification/#ceph.rook.io/v1.KerberosConfigFiles","title":"KerberosConfigFiles","text":"(Appears on:KerberosSpec) KerberosConfigFiles represents the source(s) from which Kerberos configuration should come. Field DescriptionvolumeSource ConfigFileVolumeSource VolumeSource accepts a pared down version of the standard Kubernetes VolumeSource for Kerberos configuration files like what is normally used to configure Volumes for a Pod. For example, a ConfigMap, Secret, or HostPath. The volume may contain multiple files, all of which will be loaded. "},{"location":"CRDs/specification/#ceph.rook.io/v1.KerberosKeytabFile","title":"KerberosKeytabFile","text":"(Appears on:KerberosSpec) KerberosKeytabFile represents the source(s) from which the Kerberos keytab file should come. Field DescriptionvolumeSource ConfigFileVolumeSource VolumeSource accepts a pared down version of the standard Kubernetes VolumeSource for the Kerberos keytab file like what is normally used to configure Volumes for a Pod. For example, a Secret or HostPath. There are two requirements for the source\u2019s content: 1. The config file must be mountable via (Appears on:NFSSecuritySpec) KerberosSpec represents configuration for Kerberos. Field DescriptionprincipalName string (Optional) PrincipalName corresponds directly to NFS-Ganesha\u2019s NFS_KRB5:PrincipalName config. In practice, this is the service prefix of the principal name. The default is \u201cnfs\u201d. This value is combined with (a) the namespace and name of the CephNFS (with a hyphen between) and (b) the Realm configured in the user-provided krb5.conf to determine the full principal name: /-@. e.g., nfs/rook-ceph-my-nfs@example.net. See https://github.com/nfs-ganesha/nfs-ganesha/wiki/RPCSEC_GSS for more detail. DomainName should be set to the Kerberos Realm. configFiles KerberosConfigFiles (Optional) ConfigFiles defines where the Kerberos configuration should be sourced from. Config files will be placed into the If this is left empty, Rook will not add any files. This allows you to manage the files yourself however you wish. For example, you may build them into your custom Ceph container image or use the Vault agent injector to securely add the files via annotations on the CephNFS spec (passed to the NFS server pods). Rook configures Kerberos to log to stderr. We suggest removing logging sections from config files to avoid consuming unnecessary disk space from logging to files. keytabFile KerberosKeytabFile (Optional) KeytabFile defines where the Kerberos keytab should be sourced from. 
The keytab file will be placed into (Appears on:ObjectStoreSecuritySpec, SecuritySpec) KeyManagementServiceSpec represent various details of the KMS server Field DescriptionconnectionDetails map[string]string (Optional) ConnectionDetails contains the KMS connection details (address, port etc) tokenSecretName string (Optional) TokenSecretName is the kubernetes secret containing the KMS token "},{"location":"CRDs/specification/#ceph.rook.io/v1.KeyRotationSpec","title":"KeyRotationSpec","text":"(Appears on:SecuritySpec) KeyRotationSpec represents the settings for Key Rotation. Field Descriptionenabled bool (Optional) Enabled represents whether the key rotation is enabled. schedule string (Optional) Schedule represents the cron schedule for key rotation. "},{"location":"CRDs/specification/#ceph.rook.io/v1.KeyType","title":"KeyType (string alias)","text":"KeyType type safety Value Description\"exporter\" \"cleanup\" \"clusterMetadata\" \"cmdreporter\" \"crashcollector\" \"dashboard\" \"mds\" \"mgr\" \"mon\" \"arbiter\" \"monitoring\" \"osd\" \"prepareosd\" \"rgw\" \"keyrotation\" "},{"location":"CRDs/specification/#ceph.rook.io/v1.KeystoneSpec","title":"KeystoneSpec","text":"(Appears on:AuthSpec) KeystoneSpec represents the Keystone authentication configuration of a Ceph Object Store Gateway Field Descriptionurl string The URL for the Keystone server. serviceUserSecretName string The name of the secret containing the credentials for the service user account used by RGW. It has to be in the same namespace as the object store resource. acceptedRoles []string The roles requires to serve requests. implicitTenants ImplicitTenantSetting (Optional) Create new users in their own tenants of the same name. Possible values are true, false, swift and s3. The latter have the effect of splitting the identity space such that only the indicated protocol will use implicit tenants. tokenCacheSize int (Optional) The maximum number of entries in each Keystone token cache. revocationInterval int (Optional) The number of seconds between token revocation checks. "},{"location":"CRDs/specification/#ceph.rook.io/v1.Labels","title":"Labels (map[string]string alias)","text":"(Appears on:FilesystemMirroringSpec, GaneshaServerSpec, GatewaySpec, MetadataServerSpec, RBDMirroringSpec) Labels are label for a given daemons "},{"location":"CRDs/specification/#ceph.rook.io/v1.LabelsSpec","title":"LabelsSpec (map[github.com/rook/rook/pkg/apis/ceph.rook.io/v1.KeyType]github.com/rook/rook/pkg/apis/ceph.rook.io/v1.Labels alias)","text":"(Appears on:ClusterSpec) LabelsSpec is the main spec label for all daemons "},{"location":"CRDs/specification/#ceph.rook.io/v1.LogCollectorSpec","title":"LogCollectorSpec","text":"(Appears on:ClusterSpec) LogCollectorSpec is the logging spec Field Descriptionenabled bool (Optional) Enabled represents whether the log collector is enabled periodicity string (Optional) Periodicity is the periodicity of the log rotation. maxLogSize k8s.io/apimachinery/pkg/api/resource.Quantity (Optional) MaxLogSize is the maximum size of the log per ceph daemons. Must be at least 1M. "},{"location":"CRDs/specification/#ceph.rook.io/v1.MetadataServerSpec","title":"MetadataServerSpec","text":"(Appears on:FilesystemSpec) MetadataServerSpec represents the specification of a Ceph Metadata Server Field DescriptionactiveCount int32 The number of metadata servers that are active. The remaining servers in the cluster will be in standby mode. 
activeStandby bool (Optional) Whether each active MDS instance will have an active standby with a warm metadata cache for faster failover. If false, standbys will still be available, but will not have a warm metadata cache. placement Placement (Optional) The affinity to place the mds pods (default is to place on all available node) with a daemonset annotations Annotations (Optional) The annotations-related configuration to add/set on each Pod related object. labels Labels (Optional) The labels-related configuration to add/set on each Pod related object. resources Kubernetes core/v1.ResourceRequirements (Optional) The resource requirements for the mds pods priorityClassName string (Optional) PriorityClassName sets priority classes on components livenessProbe ProbeSpec (Optional) startupProbe ProbeSpec (Optional)"},{"location":"CRDs/specification/#ceph.rook.io/v1.MgrSpec","title":"MgrSpec","text":"(Appears on:ClusterSpec) MgrSpec represents options to configure a ceph mgr Field Descriptioncount int (Optional) Count is the number of manager daemons to run allowMultiplePerNode bool (Optional) AllowMultiplePerNode allows to run multiple managers on the same node (not recommended) modules []Module (Optional) Modules is the list of ceph manager modules to enable/disable "},{"location":"CRDs/specification/#ceph.rook.io/v1.MirrorHealthCheckSpec","title":"MirrorHealthCheckSpec","text":"(Appears on:FilesystemSpec, PoolSpec) MirrorHealthCheckSpec represents the health specification of a Ceph Storage Pool mirror Field Descriptionmirror HealthCheckSpec (Optional)"},{"location":"CRDs/specification/#ceph.rook.io/v1.MirroringInfoSpec","title":"MirroringInfoSpec","text":"(Appears on:CephBlockPoolStatus) MirroringInfoSpec is the status of the pool mirroring Field DescriptionPoolMirroringInfo PoolMirroringInfo (Members of lastChecked string (Optional) lastChanged string (Optional) details string (Optional)"},{"location":"CRDs/specification/#ceph.rook.io/v1.MirroringPeerSpec","title":"MirroringPeerSpec","text":"(Appears on:FSMirroringSpec, MirroringSpec, RBDMirroringSpec) MirroringPeerSpec represents the specification of a mirror peer Field DescriptionsecretNames []string (Optional) SecretNames represents the Kubernetes Secret names to add rbd-mirror or cephfs-mirror peers "},{"location":"CRDs/specification/#ceph.rook.io/v1.MirroringSpec","title":"MirroringSpec","text":"(Appears on:PoolSpec) MirroringSpec represents the setting for a mirrored pool Field Descriptionenabled bool (Optional) Enabled whether this pool is mirrored or not mode string (Optional) Mode is the mirroring mode: either pool or image snapshotSchedules []SnapshotScheduleSpec (Optional) SnapshotSchedules is the scheduling of snapshot for mirrored images/pools peers MirroringPeerSpec (Optional) Peers represents the peers spec "},{"location":"CRDs/specification/#ceph.rook.io/v1.MirroringStatusSpec","title":"MirroringStatusSpec","text":"(Appears on:CephBlockPoolStatus) MirroringStatusSpec is the status of the pool mirroring Field DescriptionPoolMirroringStatus PoolMirroringStatus (Members of PoolMirroringStatus is the mirroring status of a pool lastChecked string (Optional) LastChecked is the last time time the status was checked lastChanged string (Optional) LastChanged is the last time time the status last changed details string (Optional) Details contains potential status errors "},{"location":"CRDs/specification/#ceph.rook.io/v1.Module","title":"Module","text":"(Appears on:MgrSpec) Module represents mgr modules that the user wants to enable 
or disable Field Descriptionname string (Optional) Name is the name of the ceph manager module enabled bool (Optional) Enabled determines whether a module should be enabled or not settings ModuleSettings Settings to further configure the module "},{"location":"CRDs/specification/#ceph.rook.io/v1.ModuleSettings","title":"ModuleSettings","text":"(Appears on:Module) Field DescriptionbalancerMode string BalancerMode sets the (Appears on:ClusterSpec) MonSpec represents the specification of the monitor Field Descriptioncount int (Optional) Count is the number of Ceph monitors allowMultiplePerNode bool (Optional) AllowMultiplePerNode determines if we can run multiple monitors on the same node (not recommended) failureDomainLabel string (Optional) zones []MonZoneSpec (Optional) Zones are specified when we want to provide zonal awareness to mons stretchCluster StretchClusterSpec (Optional) StretchCluster is the stretch cluster specification volumeClaimTemplate VolumeClaimTemplate (Optional) VolumeClaimTemplate is the PVC definition "},{"location":"CRDs/specification/#ceph.rook.io/v1.MonZoneSpec","title":"MonZoneSpec","text":"(Appears on:MonSpec, StretchClusterSpec) MonZoneSpec represents the specification of a zone in a Ceph Cluster Field Descriptionname string (Optional) Name is the name of the zone arbiter bool (Optional) Arbiter determines if the zone contains the arbiter used for stretch cluster mode volumeClaimTemplate VolumeClaimTemplate (Optional) VolumeClaimTemplate is the PVC template "},{"location":"CRDs/specification/#ceph.rook.io/v1.MonitoringSpec","title":"MonitoringSpec","text":"(Appears on:ClusterSpec) MonitoringSpec represents the settings for Prometheus based Ceph monitoring Field Descriptionenabled bool (Optional) Enabled determines whether to create the prometheus rules for the ceph cluster. If true, the prometheus types must exist or the creation will fail. Default is false. metricsDisabled bool (Optional) Whether to disable the metrics reported by Ceph. If false, the prometheus mgr module and Ceph exporter are enabled. If true, the prometheus mgr module and Ceph exporter are both disabled. Default is false. externalMgrEndpoints []Kubernetes core/v1.EndpointAddress (Optional) ExternalMgrEndpoints points to an existing Ceph prometheus exporter endpoint externalMgrPrometheusPort uint16 (Optional) ExternalMgrPrometheusPort Prometheus exporter port port int (Optional) Port is the prometheus server port interval Kubernetes meta/v1.Duration (Optional) Interval determines prometheus scrape interval exporter CephExporterSpec (Optional) Ceph exporter configuration "},{"location":"CRDs/specification/#ceph.rook.io/v1.MultiClusterServiceSpec","title":"MultiClusterServiceSpec","text":"(Appears on:NetworkSpec) Field Descriptionenabled bool (Optional) Enable multiClusterService to export the mon and OSD services to peer cluster. Ensure that peer clusters are connected using an MCS API compatible application, like Globalnet Submariner. clusterID string ClusterID uniquely identifies a cluster. It is used as a prefix to nslookup exported services. 
For example: ...svc.clusterset.local"},{"location":"CRDs/specification/#ceph.rook.io/v1.NFSGaneshaSpec","title":"NFSGaneshaSpec","text":" (Appears on:CephNFS) NFSGaneshaSpec represents the spec of an nfs ganesha server Field Descriptionrados GaneshaRADOSSpec (Optional) RADOS is the Ganesha RADOS specification server GaneshaServerSpec Server is the Ganesha Server specification security NFSSecuritySpec (Optional) Security allows specifying security configurations for the NFS cluster "},{"location":"CRDs/specification/#ceph.rook.io/v1.NFSSecuritySpec","title":"NFSSecuritySpec","text":"(Appears on:NFSGaneshaSpec) NFSSecuritySpec represents security configurations for an NFS server pod Field Descriptionsssd SSSDSpec (Optional) SSSD enables integration with System Security Services Daemon (SSSD). SSSD can be used to provide user ID mapping from a number of sources. See https://sssd.io for more information about the SSSD project. kerberos KerberosSpec (Optional) Kerberos configures NFS-Ganesha to secure NFS client connections with Kerberos. "},{"location":"CRDs/specification/#ceph.rook.io/v1.NamedBlockPoolSpec","title":"NamedBlockPoolSpec","text":"(Appears on:CephBlockPool) NamedBlockPoolSpec allows a block pool to be created with a non-default name. This is more specific than the NamedPoolSpec so we get schema validation on the allowed pool names that can be specified. Field Descriptionname string (Optional) The desired name of the pool if different from the CephBlockPool CR name. PoolSpec PoolSpec (Members of The core pool configuration "},{"location":"CRDs/specification/#ceph.rook.io/v1.NamedPoolSpec","title":"NamedPoolSpec","text":"(Appears on:FilesystemSpec) NamedPoolSpec represents the named ceph pool spec Field Descriptionname string Name of the pool PoolSpec PoolSpec (Members of PoolSpec represents the spec of ceph pool "},{"location":"CRDs/specification/#ceph.rook.io/v1.NetworkProviderType","title":"NetworkProviderType (string alias)","text":"(Appears on:NetworkSpec) NetworkProviderType defines valid network providers for Rook. Value Description\"\" \"host\" \"multus\" "},{"location":"CRDs/specification/#ceph.rook.io/v1.NetworkSpec","title":"NetworkSpec","text":"(Appears on:ClusterSpec) NetworkSpec for Ceph includes backward compatibility code Field Descriptionprovider NetworkProviderType (Optional) Provider is what provides network connectivity to the cluster e.g. \u201chost\u201d or \u201cmultus\u201d. If the Provider is updated from being empty to \u201chost\u201d on a running cluster, then the operator will automatically fail over all the mons to apply the \u201chost\u201d network settings. selectors map[github.com/rook/rook/pkg/apis/ceph.rook.io/v1.CephNetworkType]string (Optional) Selectors define NetworkAttachmentDefinitions to be used for Ceph public and/or cluster networks when the \u201cmultus\u201d network provider is used. This config section is not used for other network providers. Valid keys are \u201cpublic\u201d and \u201ccluster\u201d. Refer to Ceph networking documentation for more: https://docs.ceph.com/en/latest/rados/configuration/network-config-ref/ Refer to Multus network annotation documentation for help selecting values: https://github.com/k8snetworkplumbingwg/multus-cni/blob/master/docs/how-to-use.md#run-pod-with-network-annotation Rook will make a best-effort attempt to automatically detect CIDR address ranges for given network attachment definitions. Rook\u2019s methods are robust but may be imprecise for sufficiently complicated networks. 
Rook's auto-detection process obtains a new IP address lease for each CephCluster reconcile. If Rook fails to detect, incorrectly detects, only partially detects, or if underlying networks do not support reusing old IP addresses, it is best to use the 'addressRanges' config section to specify CIDR ranges for the Ceph cluster.
As a contrived example, one can use a theoretical Kubernetes-wide network for Ceph client traffic and a theoretical Rook-only network for Ceph replication traffic as shown: selectors: public: "default/cluster-fast-net" cluster: "rook-ceph/ceph-backend-net"
addressRanges AddressRangesSpec (Optional) AddressRanges specify a list of CIDRs that Rook will apply to Ceph's 'public_network' and/or 'cluster_network' configurations. This config section may be used for the "host" or "multus" network providers.
connections ConnectionsSpec (Optional) Settings for network connections such as compression and encryption across the wire.
hostNetwork bool (Optional) HostNetwork to enable host network. If host networking is enabled or disabled on a running cluster, then the operator will automatically fail over all the mons to apply the new network settings.
ipFamily IPFamilyType (Optional) IPFamily is the single stack IPv6 or IPv4 protocol
dualStack bool (Optional) DualStack determines whether Ceph daemons should listen on both IPv4 and IPv6
multiClusterService MultiClusterServiceSpec (Optional) Enable multiClusterService to export the Services between peer clusters
Node¶
(Appears on:StorageScopeSpec)
Node is a storage node
Field Description
name string (Optional)
resources Kubernetes core/v1.ResourceRequirements (Optional)
config map[string]string (Optional)
Selection Selection (Members of
NodesByName ([]github.com/rook/rook/pkg/apis/ceph.rook.io/v1.Node alias)¶
NodesByName implements an interface to sort nodes by name
NotificationFilterRule¶
(Appears on:NotificationFilterSpec)
NotificationFilterRule represents a single rule in the Notification Filter spec
Field Description
name string Name of the metadata or tag
value string Value to filter on
NotificationFilterSpec¶
(Appears on:BucketNotificationSpec)
NotificationFilterSpec represents the spec of a Bucket Notification filter
Field Description
keyFilters []NotificationKeyFilterRule (Optional) Filters based on the object's key
metadataFilters []NotificationFilterRule (Optional) Filters based on the object's metadata
tagFilters []NotificationFilterRule (Optional) Filters based on the object's tags
NotificationKeyFilterRule¶
(Appears on:NotificationFilterSpec)
NotificationKeyFilterRule represents a single key rule in the Notification Filter spec
Field Description
name string Name of the filter - prefix/suffix/regex
value string Value to filter on
OSDStatus¶
(Appears on:CephStorage)
OSDStatus represents OSD status of the ceph Cluster
Field Description
storeType map[string]int StoreType is a mapping between the OSD backend stores and number of OSDs using these stores
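Tying the NetworkSpec fields above together, the sketch below configures the "multus" provider with the contrived selectors from the example, explicit address ranges, and on-the-wire encryption. The NetworkAttachmentDefinition names and CIDRs are illustrative, and the public/cluster keys under addressRanges are assumed from the public_network/cluster_network split described above.

```yaml
spec:
  network:
    provider: multus
    selectors:
      public: default/cluster-fast-net
      cluster: rook-ceph/ceph-backend-net
    # Explicit CIDRs, useful when auto-detection is unreliable (values are examples)
    addressRanges:
      public:
        - "192.168.100.0/24"
      cluster:
        - "192.168.200.0/24"
    connections:
      encryption:
        enabled: true
      compression:
        enabled: false
      # Require msgr2 (port 3300); needs kernel 5.11 / CentOS 8.4 or newer
      requireMsgr2: true
```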
"},{"location":"CRDs/specification/#ceph.rook.io/v1.OSDStore","title":"OSDStore","text":"(Appears on:StorageScopeSpec) OSDStore is the backend storage type used for creating the OSDs Field Descriptiontype string (Optional) Type of backend storage to be used while creating OSDs. If empty, then bluestore will be used updateStore string (Optional) UpdateStore updates the backend store for existing OSDs. It destroys each OSD one at a time, cleans up the backing disk and prepares same OSD on that disk "},{"location":"CRDs/specification/#ceph.rook.io/v1.ObjectEndpointSpec","title":"ObjectEndpointSpec","text":"(Appears on:ObjectStoreHostingSpec) ObjectEndpointSpec represents an object store endpoint Field DescriptiondnsName string DnsName is the DNS name (in RFC-1123 format) of the endpoint. If the DNS name corresponds to an endpoint with DNS wildcard support, do not include the wildcard itself in the list of hostnames. E.g., use \u201cmystore.example.com\u201d instead of \u201c*.mystore.example.com\u201d. port int32 Port is the port on which S3 connections can be made for this endpoint. useTls bool UseTls defines whether the endpoint uses TLS (HTTPS) or not (HTTP). "},{"location":"CRDs/specification/#ceph.rook.io/v1.ObjectEndpoints","title":"ObjectEndpoints","text":"(Appears on:ObjectStoreStatus) Field Descriptioninsecure []string (Optional) secure []string (Optional)"},{"location":"CRDs/specification/#ceph.rook.io/v1.ObjectHealthCheckSpec","title":"ObjectHealthCheckSpec","text":"(Appears on:ObjectStoreSpec) ObjectHealthCheckSpec represents the health check of an object store Field DescriptionreadinessProbe ProbeSpec (Optional) startupProbe ProbeSpec (Optional)"},{"location":"CRDs/specification/#ceph.rook.io/v1.ObjectRealmSpec","title":"ObjectRealmSpec","text":"(Appears on:CephObjectRealm) ObjectRealmSpec represent the spec of an ObjectRealm Field Descriptionpull PullSpec"},{"location":"CRDs/specification/#ceph.rook.io/v1.ObjectSharedPoolsSpec","title":"ObjectSharedPoolsSpec","text":"(Appears on:ObjectStoreSpec, ObjectZoneSpec) ObjectSharedPoolsSpec represents object store pool info when configuring RADOS namespaces in existing pools. Field DescriptionmetadataPoolName string (Optional) The metadata pool used for creating RADOS namespaces in the object store dataPoolName string (Optional) The data pool used for creating RADOS namespaces in the object store preserveRadosNamespaceDataOnDelete bool (Optional) Whether the RADOS namespaces should be preserved on deletion of the object store poolPlacements []PoolPlacementSpec (Optional) PoolPlacements control which Pools are associated with a particular RGW bucket. Once PoolPlacements are defined, RGW client will be able to associate pool with ObjectStore bucket by providing \u201c\u201d during s3 bucket creation or \u201cX-Storage-Policy\u201d header during swift container creation. See: https://docs.ceph.com/en/latest/radosgw/placement/#placement-targets PoolPlacement with name: \u201cdefault\u201d will be used as a default pool if no option is provided during bucket creation. If default placement is not provided, spec.sharedPools.dataPoolName and spec.sharedPools.MetadataPoolName will be used as default pools. 
If spec.sharedPools are also empty, then RGW pools (spec.dataPool and spec.metadataPool) will be used as defaults."},{"location":"CRDs/specification/#ceph.rook.io/v1.ObjectStoreHostingSpec","title":"ObjectStoreHostingSpec","text":" (Appears on:ObjectStoreSpec) ObjectStoreHostingSpec represents the hosting settings for the object store Field DescriptionadvertiseEndpoint ObjectEndpointSpec (Optional) AdvertiseEndpoint is the default endpoint Rook will return for resources dependent on this object store. This endpoint will be returned to CephObjectStoreUsers, Object Bucket Claims, and COSI Buckets/Accesses. By default, Rook returns the endpoint for the object store\u2019s Kubernetes service using HTTPS with dnsNames []string (Optional) A list of DNS host names on which object store gateways will accept client S3 connections. When specified, object store gateways will reject client S3 connections to hostnames that are not present in this list, so include all endpoints. The object store\u2019s advertiseEndpoint and Kubernetes service endpoint, plus CephObjectZone (Appears on:ObjectStoreSpec) ObjectStoreSecuritySpec is spec to define security features like encryption Field DescriptionSecuritySpec SecuritySpec (Optional) s3 KeyManagementServiceSpec (Optional) The settings for supporting AWS-SSE:S3 with RGW "},{"location":"CRDs/specification/#ceph.rook.io/v1.ObjectStoreSpec","title":"ObjectStoreSpec","text":"(Appears on:CephObjectStore) ObjectStoreSpec represent the spec of a pool Field DescriptionmetadataPool PoolSpec (Optional) The metadata pool settings dataPool PoolSpec (Optional) The data pool settings sharedPools ObjectSharedPoolsSpec (Optional) The pool information when configuring RADOS namespaces in existing pools. preservePoolsOnDelete bool (Optional) Preserve pools on object store deletion gateway GatewaySpec (Optional) The rgw pod info protocols ProtocolSpec (Optional) The protocol specification auth AuthSpec (Optional) The authentication configuration zone ZoneSpec (Optional) The multisite info healthCheck ObjectHealthCheckSpec (Optional) The RGW health probes security ObjectStoreSecuritySpec (Optional) Security represents security settings allowUsersInNamespaces []string (Optional) The list of allowed namespaces in addition to the object store namespace where ceph object store users may be created. Specify \u201c*\u201d to allow all namespaces, otherwise list individual namespaces that are to be allowed. This is useful for applications that need object store credentials to be created in their own namespace, where neither OBCs nor COSI is being used to create buckets. The default is empty. hosting ObjectStoreHostingSpec (Optional) Hosting settings for the object store. A common use case for hosting configuration is to inform Rook of endpoints that support DNS wildcards, which in turn allows virtual host-style bucket addressing. "},{"location":"CRDs/specification/#ceph.rook.io/v1.ObjectStoreStatus","title":"ObjectStoreStatus","text":"(Appears on:CephObjectStore) ObjectStoreStatus represents the status of a Ceph Object Store resource Field Descriptionphase ConditionType (Optional) message string (Optional) endpoints ObjectEndpoints (Optional) info map[string]string (Optional) conditions []Condition observedGeneration int64 (Optional) ObservedGeneration is the latest generation observed by the controller. 
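Putting the ObjectStoreSpec, GatewaySpec and ObjectStoreHostingSpec fields together, a CephObjectStore could look roughly like the following. The pool layouts, names and DNS name are illustrative, and the replicated size field is assumed from the standard PoolSpec.

```yaml
apiVersion: ceph.rook.io/v1
kind: CephObjectStore
metadata:
  name: my-store          # illustrative name
  namespace: rook-ceph
spec:
  metadataPool:
    replicated:
      size: 3             # assumed replicated PoolSpec layout
  dataPool:
    erasureCoded:
      dataChunks: 2
      codingChunks: 1
  preservePoolsOnDelete: false
  gateway:
    port: 80
    instances: 1
  hosting:
    # Hostnames on which the gateways accept client S3 connections; include all endpoints
    dnsNames:
      - my-store.example.com
```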
"},{"location":"CRDs/specification/#ceph.rook.io/v1.ObjectStoreUserSpec","title":"ObjectStoreUserSpec","text":"(Appears on:CephObjectStoreUser) ObjectStoreUserSpec represent the spec of an Objectstoreuser Field Descriptionstore string (Optional) The store the user will be created in displayName string (Optional) The display name for the ceph users capabilities ObjectUserCapSpec (Optional) quotas ObjectUserQuotaSpec (Optional) clusterNamespace string (Optional) The namespace where the parent CephCluster and CephObjectStore are found "},{"location":"CRDs/specification/#ceph.rook.io/v1.ObjectStoreUserStatus","title":"ObjectStoreUserStatus","text":"(Appears on:CephObjectStoreUser) ObjectStoreUserStatus represents the status Ceph Object Store Gateway User Field Descriptionphase string (Optional) info map[string]string (Optional) observedGeneration int64 (Optional) ObservedGeneration is the latest generation observed by the controller. "},{"location":"CRDs/specification/#ceph.rook.io/v1.ObjectUserCapSpec","title":"ObjectUserCapSpec","text":"(Appears on:ObjectStoreUserSpec) Additional admin-level capabilities for the Ceph object store user Field Descriptionuser string (Optional) Admin capabilities to read/write Ceph object store users. Documented in https://docs.ceph.com/en/latest/radosgw/admin/?#add-remove-admin-capabilities users string (Optional) Admin capabilities to read/write Ceph object store users. Documented in https://docs.ceph.com/en/latest/radosgw/admin/?#add-remove-admin-capabilities bucket string (Optional) Admin capabilities to read/write Ceph object store buckets. Documented in https://docs.ceph.com/en/latest/radosgw/admin/?#add-remove-admin-capabilities buckets string (Optional) Admin capabilities to read/write Ceph object store buckets. Documented in https://docs.ceph.com/en/latest/radosgw/admin/?#add-remove-admin-capabilities metadata string (Optional) Admin capabilities to read/write Ceph object store metadata. Documented in https://docs.ceph.com/en/latest/radosgw/admin/?#add-remove-admin-capabilities usage string (Optional) Admin capabilities to read/write Ceph object store usage. Documented in https://docs.ceph.com/en/latest/radosgw/admin/?#add-remove-admin-capabilities zone string (Optional) Admin capabilities to read/write Ceph object store zones. Documented in https://docs.ceph.com/en/latest/radosgw/admin/?#add-remove-admin-capabilities roles string (Optional) Admin capabilities to read/write roles for user. Documented in https://docs.ceph.com/en/latest/radosgw/admin/?#add-remove-admin-capabilities info string (Optional) Admin capabilities to read/write information about the user. Documented in https://docs.ceph.com/en/latest/radosgw/admin/?#add-remove-admin-capabilities amz-cache string (Optional) Add capabilities for user to send request to RGW Cache API header. Documented in https://docs.ceph.com/en/latest/radosgw/rgw-cache/#cache-api bilog string (Optional) Add capabilities for user to change bucket index logging. Documented in https://docs.ceph.com/en/latest/radosgw/admin/?#add-remove-admin-capabilities mdlog string (Optional) Add capabilities for user to change metadata logging. Documented in https://docs.ceph.com/en/latest/radosgw/admin/?#add-remove-admin-capabilities datalog string (Optional) Add capabilities for user to change data logging. Documented in https://docs.ceph.com/en/latest/radosgw/admin/?#add-remove-admin-capabilities user-policy string (Optional) Add capabilities for user to change user policies. 
Documented in https://docs.ceph.com/en/latest/radosgw/admin/?#add-remove-admin-capabilities oidc-provider string (Optional) Add capabilities for user to change oidc provider. Documented in https://docs.ceph.com/en/latest/radosgw/admin/?#add-remove-admin-capabilities ratelimit string (Optional) Add capabilities for user to set rate limiter for user and bucket. Documented in https://docs.ceph.com/en/latest/radosgw/admin/?#add-remove-admin-capabilities "},{"location":"CRDs/specification/#ceph.rook.io/v1.ObjectUserQuotaSpec","title":"ObjectUserQuotaSpec","text":"(Appears on:ObjectStoreUserSpec) ObjectUserQuotaSpec can be used to set quotas for the object store user to limit their usage. See the Ceph docs for more Field DescriptionmaxBuckets int (Optional) Maximum bucket limit for the ceph user maxSize k8s.io/apimachinery/pkg/api/resource.Quantity (Optional) Maximum size limit of all objects across all the user\u2019s buckets See https://pkg.go.dev/k8s.io/apimachinery/pkg/api/resource#Quantity for more info. maxObjects int64 (Optional) Maximum number of objects across all the user\u2019s buckets "},{"location":"CRDs/specification/#ceph.rook.io/v1.ObjectZoneGroupSpec","title":"ObjectZoneGroupSpec","text":"(Appears on:CephObjectZoneGroup) ObjectZoneGroupSpec represent the spec of an ObjectZoneGroup Field Descriptionrealm string The display name for the ceph users "},{"location":"CRDs/specification/#ceph.rook.io/v1.ObjectZoneSpec","title":"ObjectZoneSpec","text":"(Appears on:CephObjectZone) ObjectZoneSpec represent the spec of an ObjectZone Field DescriptionzoneGroup string The display name for the ceph users metadataPool PoolSpec (Optional) The metadata pool settings dataPool PoolSpec (Optional) The data pool settings sharedPools ObjectSharedPoolsSpec (Optional) The pool information when configuring RADOS namespaces in existing pools. customEndpoints []string (Optional) If this zone cannot be accessed from other peer Ceph clusters via the ClusterIP Service endpoint created by Rook, you must set this to the externally reachable endpoint(s). You may include the port in the definition. For example: \u201chttps://my-object-store.my-domain.net:443\u201d. In many cases, you should set this to the endpoint of the ingress resource that makes the CephObjectStore associated with this CephObjectStoreZone reachable to peer clusters. The list can have one or more endpoints pointing to different RGW servers in the zone. If a CephObjectStore endpoint is omitted from this list, that object store\u2019s gateways will not receive multisite replication data (see CephObjectStore.spec.gateway.disableMultisiteSyncTraffic). 
preservePoolsOnDelete bool (Optional) Preserve pools on object zone deletion "},{"location":"CRDs/specification/#ceph.rook.io/v1.PeerRemoteSpec","title":"PeerRemoteSpec","text":"(Appears on:FilesystemMirrorInfoPeerSpec) Field Descriptionclient_name string (Optional) ClientName is cephx name cluster_name string (Optional) ClusterName is the name of the cluster fs_name string (Optional) FsName is the filesystem name "},{"location":"CRDs/specification/#ceph.rook.io/v1.PeerStatSpec","title":"PeerStatSpec","text":"(Appears on:FilesystemMirrorInfoPeerSpec) PeerStatSpec are the mirror stat with a given peer Field Descriptionfailure_count int (Optional) FailureCount is the number of mirroring failure recovery_count int (Optional) RecoveryCount is the number of recovery attempted after failures "},{"location":"CRDs/specification/#ceph.rook.io/v1.PeersSpec","title":"PeersSpec","text":"(Appears on:PoolMirroringInfo) PeersSpec contains peer details Field Descriptionuuid string (Optional) UUID is the peer UUID direction string (Optional) Direction is the peer mirroring direction site_name string (Optional) SiteName is the current site name mirror_uuid string (Optional) MirrorUUID is the mirror UUID client_name string (Optional) ClientName is the CephX user used to connect to the peer "},{"location":"CRDs/specification/#ceph.rook.io/v1.Placement","title":"Placement","text":"(Appears on:CephCOSIDriverSpec, FilesystemMirroringSpec, GaneshaServerSpec, GatewaySpec, MetadataServerSpec, RBDMirroringSpec, StorageClassDeviceSet) Placement is the placement for an object Field DescriptionnodeAffinity Kubernetes core/v1.NodeAffinity (Optional) NodeAffinity is a group of node affinity scheduling rules podAffinity Kubernetes core/v1.PodAffinity (Optional) PodAffinity is a group of inter pod affinity scheduling rules podAntiAffinity Kubernetes core/v1.PodAntiAffinity (Optional) PodAntiAffinity is a group of inter pod anti affinity scheduling rules tolerations []Kubernetes core/v1.Toleration (Optional) The pod this Toleration is attached to tolerates any taint that matches the triple using the matching operator TopologySpreadConstraints specifies how to spread matching pods among the given topology "},{"location":"CRDs/specification/#ceph.rook.io/v1.PlacementSpec","title":"PlacementSpec (map[github.com/rook/rook/pkg/apis/ceph.rook.io/v1.KeyType]github.com/rook/rook/pkg/apis/ceph.rook.io/v1.Placement alias)","text":"(Appears on:ClusterSpec) PlacementSpec is the placement for core ceph daemons part of the CephCluster CRD "},{"location":"CRDs/specification/#ceph.rook.io/v1.PlacementStorageClassSpec","title":"PlacementStorageClassSpec","text":"(Appears on:PoolPlacementSpec) Field Descriptionname string Name is the StorageClass name. Ceph allows arbitrary name for StorageClasses, however most clients/libs insist on AWS names so it is recommended to use one of the valid x-amz-storage-class values for better compatibility: REDUCED_REDUNDANCY | STANDARD_IA | ONEZONE_IA | INTELLIGENT_TIERING | GLACIER | DEEP_ARCHIVE | OUTPOSTS | GLACIER_IR | SNOW | EXPRESS_ONEZONE See AWS docs: https://aws.amazon.com/de/s3/storage-classes/ dataPoolName string DataPoolName is the data pool used to store ObjectStore objects data. 
"},{"location":"CRDs/specification/#ceph.rook.io/v1.PoolMirroringInfo","title":"PoolMirroringInfo","text":"(Appears on:MirroringInfoSpec) PoolMirroringInfo is the mirroring info of a given pool Field Descriptionmode string (Optional) Mode is the mirroring mode site_name string (Optional) SiteName is the current site name peers []PeersSpec (Optional) Peers are the list of peer sites connected to that cluster "},{"location":"CRDs/specification/#ceph.rook.io/v1.PoolMirroringStatus","title":"PoolMirroringStatus","text":"(Appears on:MirroringStatusSpec) PoolMirroringStatus is the pool mirror status Field Descriptionsummary PoolMirroringStatusSummarySpec (Optional) Summary is the mirroring status summary "},{"location":"CRDs/specification/#ceph.rook.io/v1.PoolMirroringStatusSummarySpec","title":"PoolMirroringStatusSummarySpec","text":"(Appears on:PoolMirroringStatus) PoolMirroringStatusSummarySpec is the summary output of the command Field Descriptionhealth string (Optional) Health is the mirroring health daemon_health string (Optional) DaemonHealth is the health of the mirroring daemon image_health string (Optional) ImageHealth is the health of the mirrored image states StatesSpec (Optional) States is the various state for all mirrored images "},{"location":"CRDs/specification/#ceph.rook.io/v1.PoolPlacementSpec","title":"PoolPlacementSpec","text":"(Appears on:ObjectSharedPoolsSpec) Field Descriptionname string Pool placement name. Name can be arbitrary. Placement with name \u201cdefault\u201d will be used as default. default bool Sets given placement as default. Only one placement in the list can be marked as default. metadataPoolName string The metadata pool used to store ObjectStore bucket index. dataPoolName string The data pool used to store ObjectStore objects data. dataNonECPoolName string (Optional) The data pool used to store ObjectStore data that cannot use erasure coding (ex: multi-part uploads). If dataPoolName is not erasure coded, then there is no need for dataNonECPoolName. storageClasses []PlacementStorageClassSpec (Optional) StorageClasses can be selected by user to override dataPoolName during object creation. Each placement has default STANDARD StorageClass pointing to dataPoolName. This list allows defining additional StorageClasses on top of default STANDARD storage class. 
"},{"location":"CRDs/specification/#ceph.rook.io/v1.PoolSpec","title":"PoolSpec","text":"(Appears on:FilesystemSpec, NamedBlockPoolSpec, NamedPoolSpec, ObjectStoreSpec, ObjectZoneSpec) PoolSpec represents the spec of ceph pool Field DescriptionfailureDomain string (Optional) The failure domain: osd/host/(region or zone if available) - technically also any type in the crush map crushRoot string (Optional) The root of the crush hierarchy utilized by the pool deviceClass string (Optional) The device class the OSD should set to for use in the pool enableCrushUpdates bool (Optional) Allow rook operator to change the pool CRUSH tunables once the pool is created compressionMode string (Optional) DEPRECATED: use Parameters instead, e.g., Parameters[\u201ccompression_mode\u201d] = \u201cforce\u201d The inline compression mode in Bluestore OSD to set to (options are: none, passive, aggressive, force) Do NOT set a default value for kubebuilder as this will override the Parameters replicated ReplicatedSpec (Optional) The replication settings erasureCoded ErasureCodedSpec (Optional) The erasure code settings parameters map[string]string (Optional) Parameters is a list of properties to enable on a given pool enableRBDStats bool EnableRBDStats is used to enable gathering of statistics for all RBD images in the pool mirroring MirroringSpec The mirroring settings statusCheck MirrorHealthCheckSpec The mirroring statusCheck quotas QuotaSpec (Optional) The quota settings application string (Optional) The application name to set on the pool. Only expected to be set for rgw pools. "},{"location":"CRDs/specification/#ceph.rook.io/v1.PriorityClassNamesSpec","title":"PriorityClassNamesSpec (map[github.com/rook/rook/pkg/apis/ceph.rook.io/v1.KeyType]string alias)","text":"(Appears on:ClusterSpec) PriorityClassNamesSpec is a map of priority class names to be assigned to components "},{"location":"CRDs/specification/#ceph.rook.io/v1.ProbeSpec","title":"ProbeSpec","text":"(Appears on:GaneshaServerSpec, MetadataServerSpec, ObjectHealthCheckSpec) ProbeSpec is a wrapper around Probe so it can be enabled or disabled for a Ceph daemon Field Descriptiondisabled bool (Optional) Disabled determines whether probe is disable or not probe Kubernetes core/v1.Probe (Optional) Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. 
"},{"location":"CRDs/specification/#ceph.rook.io/v1.ProtocolSpec","title":"ProtocolSpec","text":"(Appears on:ObjectStoreSpec) ProtocolSpec represents a Ceph Object Store protocol specification Field Descriptions3 S3Spec (Optional) The spec for S3 swift SwiftSpec (Optional) The spec for Swift "},{"location":"CRDs/specification/#ceph.rook.io/v1.PullSpec","title":"PullSpec","text":"(Appears on:ObjectRealmSpec) PullSpec represents the pulling specification of a Ceph Object Storage Gateway Realm Field Descriptionendpoint string"},{"location":"CRDs/specification/#ceph.rook.io/v1.QuotaSpec","title":"QuotaSpec","text":"(Appears on:PoolSpec) QuotaSpec represents the spec for quotas in a pool Field DescriptionmaxBytes uint64 (Optional) MaxBytes represents the quota in bytes Deprecated in favor of MaxSize maxSize string (Optional) MaxSize represents the quota in bytes as a string maxObjects uint64 (Optional) MaxObjects represents the quota in objects "},{"location":"CRDs/specification/#ceph.rook.io/v1.RBDMirroringSpec","title":"RBDMirroringSpec","text":"(Appears on:CephRBDMirror) RBDMirroringSpec represents the specification of an RBD mirror daemon Field Descriptioncount int Count represents the number of rbd mirror instance to run peers MirroringPeerSpec (Optional) Peers represents the peers spec placement Placement (Optional) The affinity to place the rgw pods (default is to place on any available node) annotations Annotations (Optional) The annotations-related configuration to add/set on each Pod related object. labels Labels (Optional) The labels-related configuration to add/set on each Pod related object. resources Kubernetes core/v1.ResourceRequirements (Optional) The resource requirements for the rbd mirror pods priorityClassName string (Optional) PriorityClassName sets priority class on the rbd mirror pods "},{"location":"CRDs/specification/#ceph.rook.io/v1.RGWServiceSpec","title":"RGWServiceSpec","text":"(Appears on:GatewaySpec) RGWServiceSpec represent the spec for RGW service Field Descriptionannotations Annotations The annotations-related configuration to add/set on each rgw service. nullable optional "},{"location":"CRDs/specification/#ceph.rook.io/v1.RadosNamespaceMirroring","title":"RadosNamespaceMirroring","text":"(Appears on:CephBlockPoolRadosNamespaceSpec) RadosNamespaceMirroring represents the mirroring configuration of CephBlockPoolRadosNamespace Field DescriptionremoteNamespace string (Optional) RemoteNamespace is the name of the CephBlockPoolRadosNamespace on the secondary cluster CephBlockPool mode RadosNamespaceMirroringMode Mode is the mirroring mode; either pool or image snapshotSchedules []SnapshotScheduleSpec (Optional) SnapshotSchedules is the scheduling of snapshot for mirrored images "},{"location":"CRDs/specification/#ceph.rook.io/v1.RadosNamespaceMirroringMode","title":"RadosNamespaceMirroringMode (string alias)","text":"(Appears on:RadosNamespaceMirroring) RadosNamespaceMirroringMode represents the mode of the RadosNamespace Value Description\"image\" RadosNamespaceMirroringModeImage represents the image mode \"pool\" RadosNamespaceMirroringModePool represents the pool mode "},{"location":"CRDs/specification/#ceph.rook.io/v1.ReadAffinitySpec","title":"ReadAffinitySpec","text":"(Appears on:CSIDriverSpec) ReadAffinitySpec defines the read affinity settings for CSI driver. Field Descriptionenabled bool (Optional) Enables read affinity for CSI driver. crushLocationLabels []string (Optional) CrushLocationLabels defines which node labels to use as CRUSH location. 
This should correspond to the values set in the CRUSH map. "},{"location":"CRDs/specification/#ceph.rook.io/v1.ReplicatedSpec","title":"ReplicatedSpec","text":"(Appears on:PoolSpec) ReplicatedSpec represents the spec for replication in a pool Field Descriptionsize uint Size - Number of copies per object in a replicated storage pool, including the object itself (required for replicated pool type) targetSizeRatio float64 (Optional) TargetSizeRatio gives a hint (%) to Ceph in terms of expected consumption of the total cluster capacity requireSafeReplicaSize bool (Optional) RequireSafeReplicaSize if false allows you to set replica 1 replicasPerFailureDomain uint (Optional) ReplicasPerFailureDomain the number of replica in the specified failure domain subFailureDomain string (Optional) SubFailureDomain the name of the sub-failure domain hybridStorage HybridStorageSpec (Optional) HybridStorage represents hybrid storage tier settings "},{"location":"CRDs/specification/#ceph.rook.io/v1.ResourceSpec","title":"ResourceSpec (map[string]k8s.io/api/core/v1.ResourceRequirements alias)","text":"(Appears on:ClusterSpec) ResourceSpec is a collection of ResourceRequirements that describes the compute resource requirements "},{"location":"CRDs/specification/#ceph.rook.io/v1.S3Spec","title":"S3Spec","text":"(Appears on:ProtocolSpec) S3Spec represents Ceph Object Store specification for the S3 API Field Descriptionenabled bool (Optional) Whether to enable S3. This defaults to true (even if protocols.s3 is not present in the CRD). This maintains backwards compatibility \u2013 by default S3 is enabled. authUseKeystone bool (Optional) Whether to use Keystone for authentication. This option maps directly to the rgw_s3_auth_use_keystone option. Enabling it allows generating S3 credentials via an OpenStack API call, see the docs. If not given, the defaults of the corresponding RGW option apply. "},{"location":"CRDs/specification/#ceph.rook.io/v1.SSSDSidecar","title":"SSSDSidecar","text":"(Appears on:SSSDSpec) SSSDSidecar represents configuration when SSSD is run in a sidecar. Field Descriptionimage string Image defines the container image that should be used for the SSSD sidecar. sssdConfigFile SSSDSidecarConfigFile (Optional) SSSDConfigFile defines where the SSSD configuration should be sourced from. The config file will be placed into additionalFiles AdditionalVolumeMounts (Optional) AdditionalFiles defines any number of additional files that should be mounted into the SSSD sidecar with a directory root of resources Kubernetes core/v1.ResourceRequirements (Optional) Resources allow specifying resource requests/limits on the SSSD sidecar container. debugLevel int (Optional) DebugLevel sets the debug level for SSSD. If unset or set to 0, Rook does nothing. Otherwise, this may be a value between 1 and 10. See SSSD docs for more info: https://sssd.io/troubleshooting/basics.html#sssd-debug-logs "},{"location":"CRDs/specification/#ceph.rook.io/v1.SSSDSidecarConfigFile","title":"SSSDSidecarConfigFile","text":"(Appears on:SSSDSidecar) SSSDSidecarConfigFile represents the source(s) from which the SSSD configuration should come. Field DescriptionvolumeSource ConfigFileVolumeSource VolumeSource accepts a pared down version of the standard Kubernetes VolumeSource for the SSSD configuration file like what is normally used to configure Volumes for a Pod. For example, a ConfigMap, Secret, or HostPath. There are two requirements for the source\u2019s content: 1. 
The config file must be mountable via (Appears on:NFSSecuritySpec) SSSDSpec represents configuration for System Security Services Daemon (SSSD). Field Descriptionsidecar SSSDSidecar (Optional) Sidecar tells Rook to run SSSD in a sidecar alongside the NFS-Ganesha server in each NFS pod. "},{"location":"CRDs/specification/#ceph.rook.io/v1.SanitizeDataSourceProperty","title":"SanitizeDataSourceProperty (string alias)","text":"(Appears on:SanitizeDisksSpec) SanitizeDataSourceProperty represents a sanitizing data source Value Description\"random\" SanitizeDataSourceRandom uses `shred\u2019s default entropy source \"zero\" SanitizeDataSourceZero uses /dev/zero as sanitize source "},{"location":"CRDs/specification/#ceph.rook.io/v1.SanitizeDisksSpec","title":"SanitizeDisksSpec","text":"(Appears on:CleanupPolicySpec) SanitizeDisksSpec represents a disk sanitizing specification Field Descriptionmethod SanitizeMethodProperty (Optional) Method is the method we use to sanitize disks dataSource SanitizeDataSourceProperty (Optional) DataSource is the data source to use to sanitize the disk with iteration int32 (Optional) Iteration is the number of pass to apply the sanitizing "},{"location":"CRDs/specification/#ceph.rook.io/v1.SanitizeMethodProperty","title":"SanitizeMethodProperty (string alias)","text":"(Appears on:SanitizeDisksSpec) SanitizeMethodProperty represents a disk sanitizing method Value Description\"complete\" SanitizeMethodComplete will sanitize everything on the disk \"quick\" SanitizeMethodQuick will sanitize metadata only on the disk "},{"location":"CRDs/specification/#ceph.rook.io/v1.SecuritySpec","title":"SecuritySpec","text":"(Appears on:ClusterSpec, ObjectStoreSecuritySpec) SecuritySpec is security spec to include various security items such as kms Field Descriptionkms KeyManagementServiceSpec (Optional) KeyManagementService is the main Key Management option keyRotation KeyRotationSpec (Optional) KeyRotation defines options for Key Rotation. 
"},{"location":"CRDs/specification/#ceph.rook.io/v1.Selection","title":"Selection","text":"(Appears on:Node, StorageScopeSpec) Field DescriptionuseAllDevices bool (Optional) Whether to consume all the storage devices found on a machine deviceFilter string (Optional) A regular expression to allow more fine-grained selection of devices on nodes across the cluster devicePathFilter string (Optional) A regular expression to allow more fine-grained selection of devices with path names devices []Device (Optional) List of devices to use as storage devices volumeClaimTemplates []VolumeClaimTemplate (Optional) PersistentVolumeClaims to use as storage "},{"location":"CRDs/specification/#ceph.rook.io/v1.SnapshotSchedule","title":"SnapshotSchedule","text":"(Appears on:SnapshotSchedulesSpec) SnapshotSchedule is a schedule Field Descriptioninterval string (Optional) Interval is the interval in which snapshots will be taken start_time string (Optional) StartTime is the snapshot starting time "},{"location":"CRDs/specification/#ceph.rook.io/v1.SnapshotScheduleRetentionSpec","title":"SnapshotScheduleRetentionSpec","text":"(Appears on:FSMirroringSpec) SnapshotScheduleRetentionSpec is a retention policy Field Descriptionpath string (Optional) Path is the path to snapshot duration string (Optional) Duration represents the retention duration for a snapshot "},{"location":"CRDs/specification/#ceph.rook.io/v1.SnapshotScheduleSpec","title":"SnapshotScheduleSpec","text":"(Appears on:FSMirroringSpec, MirroringSpec, RadosNamespaceMirroring) SnapshotScheduleSpec represents the snapshot scheduling settings of a mirrored pool Field Descriptionpath string (Optional) Path is the path to snapshot, only valid for CephFS interval string (Optional) Interval represent the periodicity of the snapshot. 
startTime string (Optional) StartTime indicates when to start the snapshot "},{"location":"CRDs/specification/#ceph.rook.io/v1.SnapshotScheduleStatusSpec","title":"SnapshotScheduleStatusSpec","text":"(Appears on:CephBlockPoolStatus) SnapshotScheduleStatusSpec is the status of the snapshot schedule Field DescriptionsnapshotSchedules []SnapshotSchedulesSpec (Optional) SnapshotSchedules is the list of snapshots scheduled lastChecked string (Optional) LastChecked is the last time time the status was checked lastChanged string (Optional) LastChanged is the last time time the status last changed details string (Optional) Details contains potential status errors "},{"location":"CRDs/specification/#ceph.rook.io/v1.SnapshotSchedulesSpec","title":"SnapshotSchedulesSpec","text":"(Appears on:SnapshotScheduleStatusSpec) SnapshotSchedulesSpec is the list of snapshot scheduled for images in a pool Field Descriptionpool string (Optional) Pool is the pool name namespace string (Optional) Namespace is the RADOS namespace the image is part of image string (Optional) Image is the mirrored image items []SnapshotSchedule (Optional) Items is the list schedules times for a given snapshot "},{"location":"CRDs/specification/#ceph.rook.io/v1.StatesSpec","title":"StatesSpec","text":"(Appears on:PoolMirroringStatusSummarySpec) StatesSpec are rbd images mirroring state Field Descriptionstarting_replay int (Optional) StartingReplay is when the replay of the mirroring journal starts replaying int (Optional) Replaying is when the replay of the mirroring journal is on-going syncing int (Optional) Syncing is when the image is syncing stopping_replay int (Optional) StopReplaying is when the replay of the mirroring journal stops stopped int (Optional) Stopped is when the mirroring state is stopped unknown int (Optional) Unknown is when the mirroring state is unknown error int (Optional) Error is when the mirroring state is errored "},{"location":"CRDs/specification/#ceph.rook.io/v1.Status","title":"Status","text":"(Appears on:CephBucketNotification, CephFilesystemMirror, CephNFS, CephObjectRealm, CephObjectZone, CephObjectZoneGroup, CephRBDMirror) Status represents the status of an object Field Descriptionphase string (Optional) observedGeneration int64 (Optional) ObservedGeneration is the latest generation observed by the controller. 
conditions []Condition"},{"location":"CRDs/specification/#ceph.rook.io/v1.StorageClassDeviceSet","title":"StorageClassDeviceSet","text":"(Appears on:StorageScopeSpec) StorageClassDeviceSet is a storage class device set Field Descriptionname string Name is a unique identifier for the set count int Count is the number of devices in this set resources Kubernetes core/v1.ResourceRequirements (Optional) placement Placement (Optional) preparePlacement Placement (Optional) config map[string]string (Optional) Provider-specific device configuration volumeClaimTemplates []VolumeClaimTemplate VolumeClaimTemplates is a list of PVC templates for the underlying storage devices portable bool (Optional) Portable represents OSD portability across the hosts tuneDeviceClass bool (Optional) TuneSlowDeviceClass Tune the OSD when running on a slow Device Class tuneFastDeviceClass bool (Optional) TuneFastDeviceClass Tune the OSD when running on a fast Device Class schedulerName string (Optional) Scheduler name for OSD pod placement encrypted bool (Optional) Whether to encrypt the deviceSet "},{"location":"CRDs/specification/#ceph.rook.io/v1.StorageScopeSpec","title":"StorageScopeSpec","text":"(Appears on:ClusterSpec) Field Descriptionnodes []Node (Optional) useAllNodes bool (Optional) onlyApplyOSDPlacement bool (Optional) config map[string]string (Optional) Selection Selection (Members of storageClassDeviceSets []StorageClassDeviceSet (Optional) store OSDStore (Optional) flappingRestartIntervalHours int (Optional) FlappingRestartIntervalHours defines the time for which the OSD pods, that failed with zero exit code, will sleep before restarting. This is needed for OSD flapping where OSD daemons are marked down more than 5 times in 600 seconds by Ceph. Preventing the OSD pods to restart immediately in such scenarios will prevent Rook from marking OSD as fullRatio float64 (Optional) FullRatio is the ratio at which the cluster is considered full and ceph will stop accepting writes. Default is 0.95. nearFullRatio float64 (Optional) NearFullRatio is the ratio at which the cluster is considered nearly full and will raise a ceph health warning. Default is 0.85. backfillFullRatio float64 (Optional) BackfillFullRatio is the ratio at which the cluster is too full for backfill. Backfill will be disabled if above this threshold. Default is 0.90. allowDeviceClassUpdate bool (Optional) Whether to allow updating the device class after the OSD is initially provisioned allowOsdCrushWeightUpdate bool (Optional) Whether Rook will resize the OSD CRUSH weight when the OSD PVC size is increased. This allows cluster data to be rebalanced to make most effective use of new OSD space. The default is false since data rebalancing can cause temporary cluster slowdown. 
"},{"location":"CRDs/specification/#ceph.rook.io/v1.StoreType","title":"StoreType (string alias)","text":"Value Description \"bluestore\" StoreTypeBlueStore is the bluestore backend storage for OSDs \"bluestore-rdr\" StoreTypeBlueStoreRDR is the bluestore-rdr backed storage for OSDs "},{"location":"CRDs/specification/#ceph.rook.io/v1.StretchClusterSpec","title":"StretchClusterSpec","text":"(Appears on:MonSpec) StretchClusterSpec represents the specification of a stretched Ceph Cluster Field DescriptionfailureDomainLabel string (Optional) FailureDomainLabel the failure domain name (e,g: zone) subFailureDomain string (Optional) SubFailureDomain is the failure domain within a zone zones []MonZoneSpec (Optional) Zones is the list of zones "},{"location":"CRDs/specification/#ceph.rook.io/v1.SwiftSpec","title":"SwiftSpec","text":"(Appears on:ProtocolSpec) SwiftSpec represents Ceph Object Store specification for the Swift API Field DescriptionaccountInUrl bool (Optional) Whether or not the Swift account name should be included in the Swift API URL. If set to false (the default), then the Swift API will listen on a URL formed like http://host:port//v1. If set to true, the Swift API URL will be http://host:port//v1/AUTH_. You must set this option to true (and update the Keystone service catalog) if you want radosgw to support publicly-readable containers and temporary URLs. The URL prefix for the Swift API, to distinguish it from the S3 API endpoint. The default is swift, which makes the Swift API available at the URL http://host:port/swift/v1 (or http://host:port/swift/v1/AUTH_%(tenant_id)s if rgw swift account in url is enabled). versioningEnabled bool (Optional) Enables the Object Versioning of OpenStack Object Storage API. This allows clients to put the X-Versions-Location attribute on containers that should be versioned. "},{"location":"CRDs/specification/#ceph.rook.io/v1.TopicEndpointSpec","title":"TopicEndpointSpec","text":"(Appears on:BucketTopicSpec) TopicEndpointSpec contains exactly one of the endpoint specs of a Bucket Topic Field Descriptionhttp HTTPEndpointSpec (Optional) Spec of HTTP endpoint amqp AMQPEndpointSpec (Optional) Spec of AMQP endpoint kafka KafkaEndpointSpec (Optional) Spec of Kafka endpoint "},{"location":"CRDs/specification/#ceph.rook.io/v1.VolumeClaimTemplate","title":"VolumeClaimTemplate","text":"(Appears on:MonSpec, MonZoneSpec, Selection, StorageClassDeviceSet) VolumeClaimTemplate is a simplified version of K8s corev1\u2019s PVC. It has no type meta or status. Field Descriptionmetadata Kubernetes meta/v1.ObjectMeta (Optional) Standard object\u2019s metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata Refer to the Kubernetes API documentation for the fields of themetadata field. spec Kubernetes core/v1.PersistentVolumeClaimSpec (Optional) spec defines the desired characteristics of a volume requested by a pod author. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims accessModes []Kubernetes core/v1.PersistentVolumeAccessMode (Optional) accessModes contains the desired access modes the volume should have. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1 selector Kubernetes meta/v1.LabelSelector (Optional) selector is a label query over volumes to consider for binding. resources Kubernetes core/v1.VolumeResourceRequirements (Optional) resources represents the minimum resources the volume should have. 
If RecoverVolumeExpansionFailure feature is enabled users are allowed to specify resource requirements that are lower than previous value but must still be higher than capacity recorded in the status field of the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources volumeName string (Optional) volumeName is the binding reference to the PersistentVolume backing this claim. storageClassName string (Optional) storageClassName is the name of the StorageClass required by the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1 volumeMode Kubernetes core/v1.PersistentVolumeMode (Optional) volumeMode defines what type of volume is required by the claim. Value of Filesystem is implied when not included in claim spec. dataSource Kubernetes core/v1.TypedLocalObjectReference (Optional) dataSource field can be used to specify either: * An existing VolumeSnapshot object (snapshot.storage.k8s.io/VolumeSnapshot) * An existing PVC (PersistentVolumeClaim) If the provisioner or an external controller can support the specified data source, it will create a new volume based on the contents of the specified data source. When the AnyVolumeDataSource feature gate is enabled, dataSource contents will be copied to dataSourceRef, and dataSourceRef contents will be copied to dataSource when dataSourceRef.namespace is not specified. If the namespace is specified, then dataSourceRef will not be copied to dataSource. dataSourceRef Kubernetes core/v1.TypedObjectReference (Optional) dataSourceRef specifies the object from which to populate the volume with data, if a non-empty volume is desired. This may be any object from a non-empty API group (non core object) or a PersistentVolumeClaim object. When this field is specified, volume binding will only succeed if the type of the specified object matches some installed volume populator or dynamic provisioner. This field will replace the functionality of the dataSource field and as such if both fields are non-empty, they must have the same value. For backwards compatibility, when namespace isn\u2019t specified in dataSourceRef, both fields (dataSource and dataSourceRef) will be set to the same value automatically if one of them is empty and the other is non-empty. When namespace is specified in dataSourceRef, dataSource isn\u2019t set to the same value and must be empty. There are three important differences between dataSource and dataSourceRef: * While dataSource only allows two specific types of objects, dataSourceRef allows any non-core object, as well as PersistentVolumeClaim objects. * While dataSource ignores disallowed values (dropping them), dataSourceRef preserves all values, and generates an error if a disallowed value is specified. * While dataSource only allows local objects, dataSourceRef allows objects in any namespaces. (Beta) Using this field requires the AnyVolumeDataSource feature gate to be enabled. (Alpha) Using the namespace field of dataSourceRef requires the CrossNamespaceVolumeDataSource feature gate to be enabled. volumeAttributesClassName string (Optional) volumeAttributesClassName may be used to set the VolumeAttributesClass used by this claim. If specified, the CSI driver will create or update the volume with the attributes defined in the corresponding VolumeAttributesClass. This has a different purpose than storageClassName, it can be changed after the claim is created. 
An empty string value means that no VolumeAttributesClass will be applied to the claim but it\u2019s not allowed to reset this field to empty string once it is set. If unspecified and the PersistentVolumeClaim is unbound, the default VolumeAttributesClass will be set by the persistentvolume controller if it exists. If the resource referred to by volumeAttributesClass does not exist, this PersistentVolumeClaim will be set to a Pending state, as reflected by the modifyVolumeStatus field, until such as a resource exists. More info: https://kubernetes.io/docs/concepts/storage/volume-attributes-classes/ (Beta) Using this field requires the VolumeAttributesClass feature gate to be enabled (off by default). "},{"location":"CRDs/specification/#ceph.rook.io/v1.ZoneSpec","title":"ZoneSpec","text":"(Appears on:ObjectStoreSpec) ZoneSpec represents a Ceph Object Store Gateway Zone specification Field Descriptionname string RGW Zone the Object Store is in Generated with Rook allows creation and customization of storage pools through the custom resource definitions (CRDs). The following settings are available for pools. "},{"location":"CRDs/Block-Storage/ceph-block-pool-crd/#examples","title":"Examples","text":""},{"location":"CRDs/Block-Storage/ceph-block-pool-crd/#replicated","title":"Replicated","text":"For optimal performance, while also adding redundancy, this sample will configure Ceph to make three full copies of the data on multiple nodes. Note This sample requires at least 1 OSD per node, with each OSD located on 3 different nodes. Each OSD must be located on a different node, because the "},{"location":"CRDs/Block-Storage/ceph-block-pool-crd/#hybrid-storage-pools","title":"Hybrid Storage Pools","text":"Hybrid storage is a combination of two different storage tiers. For example, SSD and HDD. This helps to improve the read performance of cluster by placing, say, 1st copy of data on the higher performance tier (SSD or NVME) and remaining replicated copies on lower cost tier (HDDs). WARNING Hybrid storage pools are likely to suffer from lower availability if a node goes down. The data across the two tiers may actually end up on the same node, instead of being spread across unique nodes (or failure domains) as expected. Instead of using hybrid pools, consider configuring primary affinity from the toolbox. Important The device classes This sample will lower the overall storage capacity requirement, while also adding redundancy by using erasure coding. Note This sample requires at least 3 bluestore OSDs. The OSDs can be located on a single Ceph node or spread across multiple nodes, because the High performance applications typically will not use erasure coding due to the performance overhead of creating and distributing the chunks in the cluster. When creating an erasure-coded pool, it is highly recommended to create the pool when you have bluestore OSDs in your cluster (see the OSD configuration settings. Filestore OSDs have limitations that are unsafe and lower performance. "},{"location":"CRDs/Block-Storage/ceph-block-pool-crd/#mirroring","title":"Mirroring","text":"RADOS Block Device (RBD) mirroring is a process of asynchronous replication of Ceph block device images between two or more Ceph clusters. Mirroring ensures point-in-time consistent replicas of all changes to an image, including reads and writes, block device resizing, snapshots, clones and flattening. It is generally useful when planning for Disaster Recovery. 
Mirroring is for clusters that are geographically distributed and stretching a single cluster is not possible due to high latencies. The following will enable mirroring of the pool at the image level: Once mirroring is enabled, Rook will by default create its own bootstrap peer token so that it can be used by another cluster. The bootstrap peer token can be found in a Kubernetes Secret. The name of the Secret is present in the Status field of the CephBlockPool CR: This secret can then be fetched like so: The secret must be decoded. The result will be another base64 encoded blob that you will import in the destination cluster: See the official rbd mirror documentation on how to add a bootstrap peer. Note Disabling mirroring for the CephBlockPool requires disabling mirroring on all the CephBlockPoolRadosNamespaces present underneath. "},{"location":"CRDs/Block-Storage/ceph-block-pool-crd/#data-spread-across-subdomains","title":"Data spread across subdomains","text":"Imagine the following topology with datacenters containing racks and then hosts: As an administrator I would like to place 4 copies across both datacenter where each copy inside a datacenter is across a rack. This can be achieved by the following: "},{"location":"CRDs/Block-Storage/ceph-block-pool-crd/#pool-settings","title":"Pool Settings","text":""},{"location":"CRDs/Block-Storage/ceph-block-pool-crd/#metadata","title":"Metadata","text":"
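To illustrate the image-level mirroring described in the Mirroring section above, here is a minimal sketch of a CephBlockPool with mirroring enabled. The pool name, namespace, and snapshot interval are examples only, not values from this document.

```yaml
apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
  name: mirrored-pool      # example pool name
  namespace: rook-ceph     # namespace where the CephCluster runs
spec:
  replicated:
    size: 3
  mirroring:
    enabled: true
    mode: image            # mirror at the image level rather than the whole pool
    snapshotSchedules:
      - interval: 24h      # optional: schedule mirroring snapshots
```

Once applied, the name of the Secret holding the bootstrap peer token will appear in the Status field of this CephBlockPool, as noted above.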
With For instance: "},{"location":"CRDs/Block-Storage/ceph-block-pool-crd/#erasure-coding","title":"Erasure Coding","text":"Erasure coding allows you to keep your data safe while reducing the storage overhead. Instead of creating multiple replicas of the data, erasure coding divides the original data into chunks of equal size, then generates extra chunks of that same size for redundancy. For example, if you have an object of size 2MB, the simplest erasure coding with two data chunks would divide the object into two chunks of size 1MB each (data chunks). One more chunk (coding chunk) of size 1MB will be generated. In total, 3MB will be stored in the cluster. The object will be able to suffer the loss of any one of the chunks and still be able to reconstruct the original object. The number of data and coding chunks you choose will depend on your resiliency to loss and how much storage overhead is acceptable in your storage cluster. Here are some examples to illustrate how the number of chunks affects the storage and loss toleration.

| Data chunks (k) | Coding chunks (m) | Total storage | Losses Tolerated | OSDs required |
| --- | --- | --- | --- | --- |
| 2 | 1 | 1.5x | 1 | 3 |
| 2 | 2 | 2x | 2 | 4 |
| 4 | 2 | 1.5x | 2 | 6 |
| 16 | 4 | 1.25x | 4 | 20 |
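As a hedged sketch of the simplest erasure-coding layout above (k=2 data chunks, m=1 coding chunk), a CephBlockPool might look like the following. The pool name and device class are illustrative assumptions.

```yaml
apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
  name: ec-pool            # example pool name
  namespace: rook-ceph
spec:
  failureDomain: host      # place each chunk on a different host
  erasureCoded:
    dataChunks: 2          # k: number of data chunks
    codingChunks: 1        # m: number of coding chunks
  deviceClass: hdd         # optional: constrain the pool to a device class
```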
If you do not have a sufficient number of hosts or OSDs for unique placement, the pool can still be created, but writing to the pool will hang. Rook currently only configures two levels in the CRUSH map. It is also possible to configure other levels such as This guide assumes you have created a Rook cluster as explained in the main Quickstart guide RADOS currently uses pools both for data distribution (pools are sharded into PGs, which map to OSDs) and as the granularity for security (capabilities can restrict access by pool). Overloading pools for both purposes makes it hard to do multi-tenancy because it is not a good idea to have a very large number of pools. A namespace would be a division of a pool into separate logical namespaces. For more information about BlockPool and namespace refer to the Ceph docs Having multiple namespaces in a pool allows multiple Kubernetes clusters to share one Ceph cluster without creating a pool per Kubernetes cluster, and it also allows tenant isolation between multiple tenants in a single Kubernetes cluster without creating multiple pools for tenants. Rook allows creation of Ceph BlockPool RadosNamespaces through the custom resource definitions (CRDs). "},{"location":"CRDs/Block-Storage/ceph-block-pool-rados-namespace-crd/#example","title":"Example","text":"To get you started, here is a simple example of a CR to create a CephBlockPoolRadosNamespace on the CephBlockPool "replicapool". "},{"location":"CRDs/Block-Storage/ceph-block-pool-rados-namespace-crd/#settings","title":"Settings","text":"If any setting is unspecified, a suitable default will be used automatically. "},{"location":"CRDs/Block-Storage/ceph-block-pool-rados-namespace-crd/#metadata","title":"Metadata","text":"
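A minimal sketch of the CephBlockPoolRadosNamespace example mentioned above, targeting the CephBlockPool "replicapool". The namespace name is illustrative.

```yaml
apiVersion: ceph.rook.io/v1
kind: CephBlockPoolRadosNamespace
metadata:
  name: namespace-a            # example RADOS namespace name (should be lower case)
  namespace: rook-ceph
spec:
  blockPoolName: replicapool   # the CephBlockPool CR in which the namespace is created
```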
Once the RADOS namespace is created, an RBD-based StorageClass can be created to create PVs in this RADOS namespace. For this purpose, the Extract the clusterID from the CephBlockPoolRadosNamespace CR: In this example, replace Example: "},{"location":"CRDs/Block-Storage/ceph-block-pool-rados-namespace-crd/#mirroring","title":"Mirroring","text":"First, enable mirroring for the parent CephBlockPool. Second, configure the rados namespace CRD with the mirroring: "},{"location":"CRDs/Block-Storage/ceph-rbd-mirror-crd/","title":"CephRBDMirror CRD","text":"Rook allows creation and updating rbd-mirror daemon(s) through the custom resource definitions (CRDs). RBD images can be asynchronously mirrored between two Ceph clusters. For more information about user management and capabilities see the Ceph docs. "},{"location":"CRDs/Block-Storage/ceph-rbd-mirror-crd/#creating-daemons","title":"Creating daemons","text":"To get you started, here is a simple example of a CRD to deploy an rbd-mirror daemon. "},{"location":"CRDs/Block-Storage/ceph-rbd-mirror-crd/#prerequisites","title":"Prerequisites","text":"This guide assumes you have created a Rook cluster as explained in the main Quickstart guide "},{"location":"CRDs/Block-Storage/ceph-rbd-mirror-crd/#settings","title":"Settings","text":"If any setting is unspecified, a suitable default will be used automatically. "},{"location":"CRDs/Block-Storage/ceph-rbd-mirror-crd/#rbdmirror-metadata","title":"RBDMirror metadata","text":"
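For the rbd-mirror daemon example referenced above, a minimal CephRBDMirror sketch could look like the following; only the count is required, and the name is an example.

```yaml
apiVersion: ceph.rook.io/v1
kind: CephRBDMirror
metadata:
  name: my-rbd-mirror    # example name
  namespace: rook-ceph
spec:
  count: 1               # number of rbd-mirror daemon instances to run
```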
Rook allows creation and customization of storage clusters through the custom resource definitions (CRDs). There are primarily four different modes in which to create your cluster.
See the separate topics for a description and examples of each of these scenarios. "},{"location":"CRDs/Cluster/ceph-cluster-crd/#settings","title":"Settings","text":"Settings can be specified at the global level to apply to the cluster as a whole, while other settings can be specified at more fine-grained levels. If any setting is unspecified, a suitable default will be used automatically. "},{"location":"CRDs/Cluster/ceph-cluster-crd/#cluster-metadata","title":"Cluster metadata","text":"
Official releases of Ceph Container images are available from Docker Hub. These are general purpose Ceph containers with all necessary daemons and dependencies installed.

| TAG | MEANING |
| --- | --- |
| vRELNUM | Latest release in this series (e.g., v19 = Squid) |
| vRELNUM.Y | Latest stable release in this stable series (e.g., v19.2) |
| vRELNUM.Y.Z | A specific release (e.g., v19.2.0) |
| vRELNUM.Y.Z-YYYYMMDD | A specific build (e.g., v19.2.0-20240927) |

A specific build tag will contain a specific release of Ceph as well as security fixes from the Operating System. "},{"location":"CRDs/Cluster/ceph-cluster-crd/#mon-settings","title":"Mon Settings","text":"
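A hedged sketch of the mon settings inside the CephCluster spec; three mons spread across distinct nodes is the common production layout, and the values shown are examples.

```yaml
spec:
  mon:
    count: 3                    # number of mons; an odd number is recommended
    allowMultiplePerNode: false # keep each mon on a separate node
```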
If these settings are changed in the CRD the operator will update the number of mons during a periodic check of the mon health, which by default is every 45 seconds. To change the defaults that the operator uses to determine the mon health and whether to failover a mon, refer to the health settings. The intervals should be small enough that you have confidence the mons will maintain quorum, while also being long enough to ignore network blips where mons are failed over too often. "},{"location":"CRDs/Cluster/ceph-cluster-crd/#mgr-settings","title":"Mgr Settings","text":"You can use the cluster CR to enable or disable any manager module. This can be configured like so: Some modules will have special configuration to ensure the module is fully functional after being enabled. Specifically:
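Returning to the mgr configuration mentioned above ("This can be configured like so"), a minimal sketch inside the CephCluster spec might look like the following; the module name is only an example.

```yaml
spec:
  mgr:
    count: 2                 # active/standby mgr pair
    modules:
      - name: pg_autoscaler  # example manager module to enable
        enabled: true
```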
If not specified, the default SDN will be used. Configure the network that will be enabled for the cluster and services.
Caution Changing networking configuration after a Ceph cluster has been deployed is only supported for the network encryption settings. Changing other network settings is NOT supported and will likely result in a non-functioning cluster. "},{"location":"CRDs/Cluster/ceph-cluster-crd/#provider","title":"Provider","text":"Selecting a non-default network provider is an advanced topic. Read more in the Network Providers documentation. "},{"location":"CRDs/Cluster/ceph-cluster-crd/#ipfamily","title":"IPFamily","text":"Provide single-stack IPv4 or IPv6 protocol to assign corresponding addresses to pods and services. This field is optional. Possible inputs are IPv6 and IPv4. Empty value will be treated as IPv4. To enable dual stack see the network configuration section. "},{"location":"CRDs/Cluster/ceph-cluster-crd/#node-settings","title":"Node Settings","text":"In addition to the cluster level settings specified above, each individual node can also specify configuration to override the cluster level settings and defaults. If a node does not specify any configuration then it will inherit the cluster level settings.
When useAllNodes is set to true, all nodes will be consumed for storage. For production clusters, we recommend specifying the nodes to use explicitly. Nodes can be added and removed over time by updating the Cluster CRD, for example with kubectl -n rook-ceph edit cephcluster rook-ceph. Below are the settings for a host-based cluster. This type of cluster can specify devices for OSDs, both at the cluster and individual node level, for selecting which storage resources will be included in the cluster.
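As a sketch of per-node device selection for a host-based cluster (the node and device names are hypothetical):

```yaml
spec:
  storage:
    useAllNodes: false
    useAllDevices: false
    nodes:
      - name: "node-a"            # must match the node's kubernetes.io/hostname label
        devices:
          - name: "sdb"           # raw device on this node
      - name: "node-b"
        deviceFilter: "^sd[c-e]"  # regular expression selecting devices on this node
```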
A host-based cluster supports raw devices, partitions, logical volumes, encrypted devices, and multipath devices. Be sure to see the quickstart doc prerequisites for additional considerations. Below are the settings for a PVC-based cluster.
The following are the settings for Storage Class Device Sets which can be configured to create OSDs that are backed by block mode PVs.
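A hedged sketch of a Storage Class Device Set backed by block-mode PVCs; the set name, size, and storage class are illustrative assumptions.

```yaml
spec:
  storage:
    storageClassDeviceSets:
      - name: set1
        count: 3                    # number of OSDs (one PVC each) in this set
        portable: true              # OSDs may move between hosts
        volumeClaimTemplates:
          - metadata:
              name: data
            spec:
              resources:
                requests:
                  storage: 100Gi
              storageClassName: gp2 # example storage class
              volumeMode: Block     # OSDs require block mode PVs
              accessModes:
                - ReadWriteOnce
```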
See the table in OSD Configuration Settings to know the allowed configurations. "},{"location":"CRDs/Cluster/ceph-cluster-crd/#osd-configuration-settings","title":"OSD Configuration Settings","text":"The following storage selection settings are specific to Ceph and do not apply to other backends. All variables are key-value pairs represented as strings.
Allowed configurations are:

| block device type | host-based cluster | PVC-based cluster |
| --- | --- | --- |
| disk | | |
| part | encryptedDevice must be false | encrypted must be false |
| lvm | metadataDevice must be "", osdsPerDevice must be 1, and encryptedDevice must be false | metadata.name must not be metadata or wal and encrypted must be false |
| crypt | | |
| mpath | | |

"},{"location":"CRDs/Cluster/ceph-cluster-crd/#limitations-of-metadata-device","title":"Limitations of metadata device","text":"
Annotations and Labels can be specified so that the Rook components will have those annotations / labels added to them. You can set annotations / labels for Rook components for the list of key value pairs:
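For example, annotations and labels might be set per component as in the sketch below; the keys and values are purely illustrative.

```yaml
spec:
  annotations:
    all:
      example.com/team: storage   # applied to all Rook/Ceph pods (example key)
    osd:
      example.com/backup: "true"  # applied only to OSD pods
  labels:
    mon:
      example.com/tier: critical  # applied only to mon pods
```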
Placement configuration for the cluster services. It includes the following keys: In stretch clusters, if the Note Placement of OSD pods is controlled using the Storage Class Device Set, not the general A Placement configuration is specified (according to the kubernetes PodSpec) as:
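A minimal sketch of a Placement configuration that schedules all Ceph daemons onto nodes labeled role=storage-node (the label matches the 'all' example discussed below and is otherwise an assumption):

```yaml
spec:
  placement:
    all:
      nodeAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          nodeSelectorTerms:
            - matchExpressions:
                - key: role
                  operator: In
                  values:
                    - storage-node
      tolerations:
        - key: storage-node      # tolerate a matching taint on storage nodes
          operator: Exists
```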
If you use The Rook Ceph operator creates a Job called To control where various services will be scheduled by kubernetes, use the placement configuration sections below. The example under 'all' would have all services scheduled on kubernetes nodes labeled with 'role=storage-node'. "},{"location":"CRDs/Cluster/ceph-cluster-crd/#cluster-wide-resources-configuration-settings","title":"Cluster-wide Resources Configuration Settings","text":"Resources should be specified so that the Rook components are handled after Kubernetes Pod Quality of Service classes. This allows Rook components to keep running when, for example, a node runs out of memory, since they will not be killed depending on their Quality of Service class. You can set resource requests/limits for Rook components through the Resource Requirements/Limits structure in the following keys:
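Per-component resource requests/limits might look like the sketch below; the values are examples, not recommendations.

```yaml
spec:
  resources:
    mgr:
      requests:
        cpu: "500m"
        memory: "512Mi"
      limits:
        memory: "1Gi"
    osd:
      requests:
        cpu: "1"
        memory: "4Gi"
      limits:
        memory: "4Gi"
```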
In order to provide the best possible experience running Ceph in containers, Rook internally recommends minimum memory limits if resource limits are passed. If a user configures a limit or request value that is too low, Rook will still run the pod(s) and print a warning to the operator log.
Note We recommend not setting memory limits on the OSD prepare job to prevent OSD provisioning failure due to memory constraints. The OSD prepare job bursts memory usage during the OSD provisioning depending on the size of the device, typically 1-2Gi for large disks. The OSD prepare job only bursts a single time per OSD. All future runs of the OSD prepare job will detect the OSD is already provisioned and skip the provisioning. Hint The resources for MDS daemons are not configured in the Cluster. Refer to the Ceph Filesystem CRD instead. "},{"location":"CRDs/Cluster/ceph-cluster-crd/#resource-requirementslimits","title":"Resource Requirements/Limits","text":"For more information on resource requests/limits see the official Kubernetes documentation: Kubernetes - Managing Compute Resources for Containers
Warning Before setting resource requests/limits, please take a look at the Ceph documentation for recommendations for each component: Ceph - Hardware Recommendations. "},{"location":"CRDs/Cluster/ceph-cluster-crd/#node-specific-resources-for-osds","title":"Node Specific Resources for OSDs","text":"This example shows that you can override these requests/limits for OSDs per node when using "},{"location":"CRDs/Cluster/ceph-cluster-crd/#priority-class-names","title":"Priority Class Names","text":"Priority class names can be specified so that the Rook components will have those priority class names added to them. You can set priority class names for Rook components for the list of key value pairs:
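A sketch of priority class names per component; the class names assume priority classes that already exist in the cluster.

```yaml
spec:
  priorityClassNames:
    all: system-node-critical     # example: applies to all components unless overridden
    mgr: system-cluster-critical  # example override for the mgr pods
```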
The specific component keys will act as overrides to The Rook Ceph operator will monitor the state of the CephCluster on various components by default. The following CRD settings are available:
Currently three health checks are implemented:
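A hedged sketch of the healthCheck section covering the three checks (mon health, osd health, and the overall ceph status); the intervals shown are examples.

```yaml
spec:
  healthCheck:
    daemonHealth:
      mon:
        disabled: false
        interval: 45s   # how often to check mon health
      osd:
        disabled: false
        interval: 60s   # how often to check osd health
      status:
        disabled: false
        interval: 60s   # how often to check the overall ceph status
```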
The liveness probe and startup probe of each daemon can also be controlled via The probe's timing values and thresholds (but not the probe itself) can also be overridden. For more info, refer to the Kubernetes documentation. For example, you could change the Changing the liveness probe is an advanced operation and should rarely be necessary. If you want to change these settings then modify the desired settings. "},{"location":"CRDs/Cluster/ceph-cluster-crd/#status","title":"Status","text":"The operator is regularly configuring and checking the health of the cluster. The results of the configuration and health checks can be seen in the "},{"location":"CRDs/Cluster/ceph-cluster-crd/#ceph-status","title":"Ceph Status","text":"Ceph is constantly monitoring the health of the data plane and reporting back if there are any warnings or errors. If everything is healthy from Ceph's perspective, you will see If Ceph reports any warnings or errors, the details will be printed to the status. If further troubleshooting is needed to resolve these issues, the toolbox will likely be needed where you can run The The
There are several other properties for the overall status including:
The topology of the cluster is important in production environments where you want your data spread across failure domains. The topology can be controlled by adding labels to the nodes. When the labels are found on a node at first OSD deployment, Rook will add them to the desired level in the CRUSH map. The complete list of labels in hierarchy order from highest to lowest is: For example, if the following labels were added to a node: These labels would result in the following hierarchy for OSDs on that node (this command can be run in the Rook toolbox): Ceph requires unique names at every level in the hierarchy (CRUSH map). For example, you cannot have two racks with the same name that are in different zones. Racks in different zones must be named uniquely. Note that the Hint When setting the node labels prior to To utilize the This configuration will split the replication of volumes across unique racks in the data center setup. "},{"location":"CRDs/Cluster/ceph-cluster-crd/#deleting-a-cephcluster","title":"Deleting a CephCluster","text":"During deletion of a CephCluster resource, Rook protects against accidental or premature destruction of user data by blocking deletion if there are any other Rook Ceph Custom Resources that reference the CephCluster being deleted. Rook will warn about which other resources are blocking deletion in three ways until all blocking resources are deleted:
Rook has the ability to cleanup resources and data that were deployed when a CephCluster is removed. The policy settings indicate which data should be forcibly deleted and in what way the data should be wiped. The
To automate activation of the cleanup, you can use the following command. WARNING: DATA WILL BE PERMANENTLY DELETED: Nothing will happen until the deletion of the CR is requested, so this can still be reverted. However, all new configuration by the operator will be blocked with this cleanup policy enabled. Rook waits for the deletion of PVs provisioned using the cephCluster before proceeding to delete the cephCluster. To force deletion of the cephCluster without waiting for the PVs to be deleted, you can set the The Ceph config options are applied after the MONs are all in quorum and running. To set Ceph config options, you can add them to your The Rook operator will actively apply these values, whereas the ceph.conf settings only take effect after the Ceph daemon pods are restarted. If both these If Ceph settings need to be applied to mons before quorum is initially created, the ceph.conf settings should be used instead. Warning Rook performs no direct validation on these config options, so the validity of the settings is the user's responsibility. The operator does not unset any removed config options, it is the user's responsibility to unset or set the default value for each removed option manually using the Ceph CLI. "},{"location":"CRDs/Cluster/ceph-cluster-crd/#csi-driver-options","title":"CSI Driver Options","text":"The CSI driver options mentioned here are applied per Ceph cluster. The following options are available:
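For the per-cluster CSI driver options mentioned above, a sketch enabling read affinity might look like this; the CRUSH location labels are examples and should match the node labels used in your topology.

```yaml
spec:
  csi:
    readAffinity:
      enabled: true                     # serve reads from the closest OSD when possible
      crushLocationLabels:
        - topology.kubernetes.io/zone   # node labels used as CRUSH location
        - topology.kubernetes.io/region
```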
A host storage cluster is one where Rook configures Ceph to store data directly on the host. The Ceph mons will store the metadata on the host (at a path defined by the The Ceph persistent data is stored directly on a host path (Ceph Mons) and on raw devices (Ceph OSDs). To get you started, here are several example of the Cluster CR to configure the host. "},{"location":"CRDs/Cluster/host-cluster/#all-devices","title":"All Devices","text":"For the simplest possible configuration, this example shows that all devices or partitions should be consumed by Ceph. The mons will store the metadata on the host node under "},{"location":"CRDs/Cluster/host-cluster/#node-and-device-filters","title":"Node and Device Filters","text":"More commonly, you will want to be more specific about which nodes and devices where Rook should configure the storage. The placement settings are very flexible to add node affinity, anti-affinity, or tolerations. For more options, see the placement documentation. In this example, Rook will only configure Ceph daemons to run on nodes that are labeled with "},{"location":"CRDs/Cluster/host-cluster/#specific-nodes-and-devices","title":"Specific Nodes and Devices","text":"If you need fine-grained control for every node and every device that is being configured, individual nodes and their config can be specified. In this example, we see that specific node names and devices can be specified. Hint Each node's 'name' field should match their 'kubernetes.io/hostname' label. "},{"location":"CRDs/Cluster/network-providers/","title":"Network Providers","text":"Rook deploys CephClusters using Kubernetes' software-defined networks by default. This is simple for users, provides necessary connectivity, and has good node-level security. However, this comes at the expense of additional latency, and the storage network must contend with Kubernetes applications for network bandwidth. It also means that Kubernetes applications coexist on the same network as Ceph daemons and can reach the Ceph cluster easily via network scanning. Rook allows selecting alternative network providers to address some of these downsides, sometimes at the expense of others. Selecting alternative network providers is an advanced topic. Note This is an advanced networking topic. See also the CephCluster general networking settings. "},{"location":"CRDs/Cluster/network-providers/#ceph-networking-fundamentals","title":"Ceph Networking Fundamentals","text":"Ceph daemons can operate on up to two distinct networks: public, and cluster. Ceph daemons always use the public network. The public network is used for client communications with the Ceph cluster (reads/writes). Rook configures this as the Kubernetes pod network by default. Ceph-CSI uses this network for PVCs. The cluster network is optional and is used to isolate internal Ceph replication traffic. This includes additional copies of data replicated between OSDs during client reads/writes. This also includes OSD data recovery (re-replication) when OSDs or nodes go offline. If the cluster network is unspecified, the public network is used for this traffic instead. Refer to Ceph networking documentation for deeper explanations of any topics. "},{"location":"CRDs/Cluster/network-providers/#specifying-ceph-network-selections","title":"Specifying Ceph Network Selections","text":"
This configuration is always optional but is important to understand. Some Rook network providers allow specifying the public and network interfaces that Ceph will use for data traffic. Use This The default network provider cannot make use of these configurations. Ceph public and cluster network configurations are allowed to change, but this should be done with great care. When updating underlying networks or Ceph network settings, Rook assumes that the current network configuration used by Ceph daemons will continue to operate as intended. Network changes are not applied to Ceph daemon pods (like OSDs and MDSes) until the pod is restarted. When making network changes, ensure that restarted pods will not lose connectivity to existing pods, and vice versa. "},{"location":"CRDs/Cluster/network-providers/#host-networking","title":"Host Networking","text":"
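A hedged sketch of host networking combined with the addressRanges restriction described above; the CIDRs are assumptions for illustration.

```yaml
spec:
  network:
    provider: host
    addressRanges:
      public:
        - "192.168.100.0/24"   # example public network CIDR
      cluster:
        - "192.168.200.0/24"   # example cluster (replication) network CIDR
```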
Host networking allows the Ceph cluster to use network interfaces on Kubernetes hosts for communication. This eliminates latency from the software-defined pod network, but it provides no host-level security isolation. Ceph daemons will use any network available on the host for communication. To restrict Ceph to using only specific host interfaces or networks, use the public and cluster network selections described above. If the Ceph mons are expected to bind to a public network that is different from the IP address assigned to the K8s node where the mon is running, the IP address for the mon can be set by adding an annotation to the node. If the host networking setting is changed in a cluster where mons are already running, the existing mons will remain running with the same network settings with which they were created. To complete the conversion to or from host networking after you update this setting, you will need to fail over the mons in order to have mons on the desired network configuration. "},{"location":"CRDs/Cluster/network-providers/#multus","title":"Multus","text":"
Rook supports using Multus NetworkAttachmentDefinitions for Ceph public and cluster networks. This allows Rook to attach any CNI to Ceph as a public and/or cluster network. This provides strong isolation between Kubernetes applications and Ceph cluster daemons. While any CNI may be used, the intent is to allow use of CNIs which allow Ceph to be connected to specific host interfaces. This improves latency and bandwidth while preserving host-level network isolation. "},{"location":"CRDs/Cluster/network-providers/#multus-prerequisites","title":"Multus Prerequisites","text":"In order for host network-enabled Ceph-CSI to communicate with a Multus-enabled CephCluster, some setup is required for Kubernetes hosts. These prerequisites require an understanding of how Multus networks are configured and how Rook uses them. Following sections will help clarify questions that may arise here. Two basic requirements must be met:
These two requirements can be broken down further as follows:
These requirements require careful planning, but some methods are able to meet these requirements more easily than others. Examples are provided after the full document to help understand and implement these requirements. Tip Keep in mind that there are often ten or more Rook/Ceph pods per host. The pod address space may need to be an order of magnitude larger (or more) than the host address space to allow the storage cluster to grow in the future. "},{"location":"CRDs/Cluster/network-providers/#multus-configuration","title":"Multus Configuration","text":"Refer to Multus documentation for details about how to set up and select Multus networks. Rook will attempt to auto-discover the network CIDRs for selected public and/or cluster networks. This process is not guaranteed to succeed. Furthermore, this process will get a new network lease for each CephCluster reconcile. Specify Only OSD pods will have both public and cluster networks attached (if specified). The rest of the Ceph component pods and CSI pods will only have the public network attached. The Rook operator will not have any networks attached; it proxies Ceph commands via a sidecar container in the mgr pod. A NetworkAttachmentDefinition must exist before it can be used by Multus for a Ceph network. A recommended definition will look like the following:
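Below is a hedged sketch of such a NetworkAttachmentDefinition, assuming a macvlan CNI with the Whereabouts IPAM plugin; the name, namespace, master interface (eth0), and IP range are placeholders to adapt to your environment:

```yaml
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: ceph-public-net   # placeholder name
  namespace: rook-ceph
spec:
  config: |
    {
      "cniVersion": "0.3.1",
      "type": "macvlan",
      "master": "eth0",
      "mode": "bridge",
      "ipam": {
        "type": "whereabouts",
        "range": "192.168.200.0/24"
      }
    }
```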
NetworkAttachmentDefinitions are selected for the desired Ceph network using Consider the example below which selects a hypothetical Kubernetes-wide Multus network in the default namespace for Ceph's public network and selects a Ceph-specific network in the "},{"location":"CRDs/Cluster/network-providers/#validating-multus-configuration","title":"Validating Multus configuration","text":"We highly recommend validating your Multus configuration before you install a CephCluster. A tool exists to facilitate validating the Multus configuration. After installing the Rook operator and before installing any Custom Resources, run the tool from the operator pod. The tool's CLI is designed to be as helpful as possible. Get help text for the multus validation tool like so:
Note The tool requires host network access. Many Kubernetes distros have security limitations. Use the tool's Daemons leveraging Kubernetes service IPs (Monitors, Managers, Rados Gateways) are not listening on the NAD specified in the The network plan for this cluster will be as follows:
Node configuration must allow nodes to route to pods on the Multus public network. Because pods will be connecting via Macvlan, and because Macvlan does not allow hosts and pods to route between each other, the host must also be connected via Macvlan. Because the host IP range is different from the pod IP range, a route must be added to include the pod range. Such a configuration should be equivalent to the following: The NetworkAttachmentDefinition for the public network would look like the following, using Whereabouts' The Whereabouts "},{"location":"CRDs/Cluster/network-providers/#macvlan-whereabouts-node-static-ips","title":"Macvlan, Whereabouts, Node Static IPs","text":"The network plan for this cluster will be as follows:
PXE configuration for the nodes must apply a configuration to nodes to allow nodes to route to pods on the Multus public network. Because pods will be connecting via Macvlan, and because Macvlan does not allow hosts and pods to route between each other, the host must also be connected via Macvlan. Because the host IP range is a subset of the whole range, a route must be added to include the whole range. Such a configuration should be equivalent to the following: The NetworkAttachmentDefinition for the public network would look like the following, using Whereabouts' "},{"location":"CRDs/Cluster/network-providers/#macvlan-dhcp","title":"Macvlan, DHCP","text":"The network plan for this cluster will be as follows:
Node configuration must allow nodes to route to pods on the Multus public network. Because pods will be connecting via Macvlan, and because Macvlan does not allow hosts and pods to route between each other, the host must also be connected via Macvlan. Such a configuration should be equivalent to the following: The NetworkAttachmentDefinition for the public network would look like the following. "},{"location":"CRDs/Cluster/pvc-cluster/","title":"PVC Storage Cluster","text":"In a \"PVC-based cluster\", the Ceph persistent data is stored on volumes requested from a storage class of your choice. This type of cluster is recommended in a cloud environment where volumes can be dynamically created and also in clusters where a local PV provisioner is available. "},{"location":"CRDs/Cluster/pvc-cluster/#aws-storage-example","title":"AWS Storage Example","text":"In this example, the mon and OSD volumes are provisioned from the AWS "},{"location":"CRDs/Cluster/pvc-cluster/#local-storage-example","title":"Local Storage Example","text":"In the CRD specification below, 3 OSDs (having specific placement and resource values) and 3 mons with each using a 10Gi PVC, are created by Rook using the "},{"location":"CRDs/Cluster/pvc-cluster/#pvc-storage-only-for-monitors","title":"PVC storage only for monitors","text":"In the CRD specification below three monitors are created each using a 10Gi PVC created by Rook using the "},{"location":"CRDs/Cluster/pvc-cluster/#dedicated-metadata-and-wal-device-for-osd-on-pvc","title":"Dedicated metadata and wal device for OSD on PVC","text":"In the simplest case, Ceph OSD BlueStore consumes a single (primary) storage device. BlueStore is the engine used by the OSD to store data. The storage device is normally used as a whole, occupying the full device that is managed directly by BlueStore. It is also possible to deploy BlueStore across additional devices such as a DB device. This device can be used for storing BlueStore\u2019s internal metadata. BlueStore (or rather, the embedded RocksDB) will put as much metadata as it can on the DB device to improve performance. If the DB device fills up, metadata will spill back onto the primary device (where it would have been otherwise). Again, it is only helpful to provision a DB device if it is faster than the primary device. You can have multiple Note Note that Rook only supports three naming convention for a given template:
The bluestore partition has the following reference combinations supported by the ceph-volume utility:
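To illustrate the configuration described here, the following is a hedged fragment of the CephCluster spec using storageClassDeviceSets with a 10Gi data device and a 5Gi bluestore DB device; the set name and storage class are placeholders, and the template names (data, metadata) follow the naming convention noted above:

```yaml
storage:
  storageClassDeviceSets:
    - name: set1
      count: 3
      portable: false
      volumeClaimTemplates:
        - metadata:
            name: data          # main bluestore block device
          spec:
            resources:
              requests:
                storage: 10Gi
            storageClassName: local-storage   # placeholder storage class
            volumeMode: Block
            accessModes: ["ReadWriteOnce"]
        - metadata:
            name: metadata      # bluestore DB (RocksDB) device
          spec:
            resources:
              requests:
                storage: 5Gi
            storageClassName: local-storage   # placeholder storage class
            volumeMode: Block
            accessModes: ["ReadWriteOnce"]
```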
To determine the size of the metadata block follow the official Ceph sizing guide. With the present configuration, each OSD will have its main block allocated a 10GB device as well a 5GB device to act as a bluestore database. "},{"location":"CRDs/Cluster/stretch-cluster/","title":"Stretch Storage Cluster","text":"For environments that only have two failure domains available where data can be replicated, consider the case where one failure domain is down and the data is still fully available in the remaining failure domain. To support this scenario, Ceph has integrated support for \"stretch\" clusters. Rook requires three zones. Two zones (A and B) will each run all types of Rook pods, which we call the \"data\" zones. Two mons run in each of the two data zones, while two replicas of the data are in each zone for a total of four data replicas. The third zone (arbiter) runs a single mon. No other Rook or Ceph daemons need to be run in the arbiter zone. For this example, we assume the desired failure domain is a zone. Another failure domain can also be specified with a known topology node label which is already being used for OSD failure domains. For more details, see the Stretch Cluster design doc. "},{"location":"CRDs/Cluster/external-cluster/advance-external/","title":"External Cluster Options","text":""},{"location":"CRDs/Cluster/external-cluster/advance-external/#nfs-storage","title":"NFS storage","text":"Rook suggests a different mechanism for making use of an NFS service running on the external Ceph standalone cluster, if desired. "},{"location":"CRDs/Cluster/external-cluster/advance-external/#exporting-rook-to-another-cluster","title":"Exporting Rook to another cluster","text":"If you have multiple K8s clusters running, and want to use the local
Important For other clusters to connect to storage in this cluster, Rook must be configured with a networking configuration that is accessible from other clusters. Most commonly this is done by enabling host networking in the CephCluster CR so the Ceph daemons will be addressable by their host IPs. "},{"location":"CRDs/Cluster/external-cluster/advance-external/#admin-privileges","title":"Admin privileges","text":"If the cluster needs the admin keyring for configuration, update the imported rook-ceph-mon secret with the client.admin keyring. Note Sharing the admin key with the external cluster is not generally recommended
After restarting the rook operator (and the toolbox if in use), rook will configure ceph with admin privileges. "},{"location":"CRDs/Cluster/external-cluster/advance-external/#connect-to-an-external-object-store","title":"Connect to an External Object Store","text":"Create the external object store CR to configure connection to external gateways. Consume the S3 Storage, in two different ways:
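In either case, the external object store CR that configures the connection might look like the following sketch (assuming the gateway.externalRgwEndpoints field; the store name and endpoint IP are placeholders):

```yaml
apiVersion: ceph.rook.io/v1
kind: CephObjectStore
metadata:
  name: external-store
  namespace: rook-ceph
spec:
  gateway:
    port: 80
    # Endpoints of the RGW gateways running in the external Ceph cluster
    externalRgwEndpoints:
      - ip: 192.168.39.182   # placeholder IP
```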
Hint For more details see the Object Store topic "},{"location":"CRDs/Cluster/external-cluster/consumer-import/","title":"Import Ceph configuration to the Rook consumer cluster","text":""},{"location":"CRDs/Cluster/external-cluster/consumer-import/#installation-types","title":"Installation types","text":"Install Rook in the consumer cluster, either with Helm or the manifests. "},{"location":"CRDs/Cluster/external-cluster/consumer-import/#helm-installation","title":"Helm Installation","text":"To install with Helm, the rook cluster helm chart will configure the necessary resources for the external cluster with the example values file for external clusters. "},{"location":"CRDs/Cluster/external-cluster/consumer-import/#manifest-installation","title":"Manifest Installation","text":"If not installing with Helm, here are the steps to install with manifests.
An external cluster is a Ceph configuration that is managed outside of the local K8s cluster. The external cluster could be managed by cephadm, or it could be another Rook cluster that is configured to allow the access (usually configured with host networking). In external mode, Rook will provide the configuration for the CSI driver and other basic resources that allows your applications to connect to Ceph in the external cluster. "},{"location":"CRDs/Cluster/external-cluster/external-cluster/#external-configuration","title":"External configuration","text":"
Create the desired types of storage in the provider Ceph cluster:
1) Export config from the Provider Ceph cluster. Configuration must be exported by the Ceph admin, such as a Ceph keyring and mon endpoints, that allows connection to the Ceph cluster. 2) Import config to the Rook consumer cluster. The configuration exported from the Ceph cluster is imported to Rook to provide the needed connection details. "},{"location":"CRDs/Cluster/external-cluster/external-cluster/#advance-options","title":"Advance Options","text":"
In order to configure an external Ceph cluster with Rook, we need to extract some information in order to connect to that cluster. "},{"location":"CRDs/Cluster/external-cluster/provider-export/#1-create-all-users-and-keys","title":"1. Create all users and keys","text":"Run the python script create-external-cluster-resources.py for creating all users and keys.
Example Output: "},{"location":"CRDs/Cluster/external-cluster/provider-export/#examples-on-utilizing-advance-flags","title":"Examples on utilizing Advance flags","text":""},{"location":"CRDs/Cluster/external-cluster/provider-export/#config-file","title":"Config-file","text":"Use the config file to set the user configuration file, add the flag Example:
Note You can use both config file and other arguments at the same time Priority: command-line-args has more priority than config.ini file values, and config.ini file values have more priority than default values. "},{"location":"CRDs/Cluster/external-cluster/provider-export/#multi-tenancy","title":"Multi-tenancy","text":"To enable multi-tenancy, run the script with the Note Restricting the csi-users per pool, and per cluster will require creating new csi-users and new secrets for that csi-users. So apply these secrets only to new "},{"location":"CRDs/Cluster/external-cluster/provider-export/#rgw-multisite","title":"RGW Multisite","text":"Pass the "},{"location":"CRDs/Cluster/external-cluster/provider-export/#topology-based-provisioning","title":"Topology Based Provisioning","text":"Enable Topology Based Provisioning for RBD pools by passing For more details, see the Topology-Based Provisioning "},{"location":"CRDs/Cluster/external-cluster/provider-export/#connect-to-v2-mon-port","title":"Connect to v2 mon port","text":"If encryption or compression on the wire is needed, specify the Applications like Kafka will have a deployment with multiple running instances. Each service instance will create a new claim and is expected to be located in a different zone. Since the application has its own redundant instances, there is no requirement for redundancy at the data layer. A storage class is created that will provision storage from replica 1 Ceph pools that are located in each of the separate zones. "},{"location":"CRDs/Cluster/external-cluster/topology-for-external-mode/#configuration-flags","title":"Configuration Flags","text":"Add the required flags to the script:
The import script will then create a new storage class named Determine the names of the zones (or other failure domains) in the Ceph CRUSH map where each of the pools will have corresponding CRUSH rules. Create a zone-specific CRUSH rule for each of the pools. For example, this is a CRUSH rule for Create replica-1 pools based on each of the CRUSH rules from the previous step. Each pool must be created with a CRUSH rule to limit the pool to OSDs in a specific zone. Note Disable the ceph warning for replica-1 pools: Determine the zones in the K8s cluster that correspond to each of the pools in the Ceph pool. The K8s nodes require labels as defined with the OSD Topology labels. Some environments already have nodes labeled in zones. Set the topology labels on the nodes if not already present. Set the flags of the external cluster configuration script based on the pools and failure domains. --topology-pools=pool-a,pool-b,pool-c --topology-failure-domain-label=zone --topology-failure-domain-values=zone-a,zone-b,zone-c Then run the python script to generate the settings which will be imported to the Rook cluster: Output: "},{"location":"CRDs/Cluster/external-cluster/topology-for-external-mode/#kubernetes-cluster","title":"Kubernetes Cluster","text":"Check the external cluster is created and connected as per the installation steps. Review the new storage class: Set two values in the rook-ceph-operator-config configmap:
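As a hedged sketch of those two settings (assuming the CSI topology options are named CSI_ENABLE_TOPOLOGY and CSI_TOPOLOGY_DOMAIN_LABELS; the domain label shown is illustrative):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: rook-ceph-operator-config
  namespace: rook-ceph
data:
  CSI_ENABLE_TOPOLOGY: "true"
  CSI_TOPOLOGY_DOMAIN_LABELS: "topology.kubernetes.io/zone"
```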
The topology-based storage class is ready to be consumed! Create a PVC from the topology-based storage class. When upgrading an external cluster, Ceph and Rook versions will be updated independently. During the Rook update, the external provider cluster connection also needs to be updated with any settings and permissions for new features. "},{"location":"CRDs/Cluster/external-cluster/upgrade-external/#upgrade-the-cluster-to-consume-latest-ceph-user-caps-mandatory","title":"Upgrade the cluster to consume latest ceph user caps (mandatory)","text":"Upgrading the cluster differs depending on whether restricted or non-restricted caps are in use:
Some Rook upgrades may require re-running the import steps, or may introduce new external cluster features that can be most easily enabled by re-running the import steps. To re-run the import steps with new options, the python script should be re-run using the same configuration options that were used for past invocations, plus the configurations that are being added or modified. Starting with Rook v1.15, the script stores the configuration in the external-cluster-user-command configmap for easy future reference.
To get the last-applied configuration, check the external-cluster-user-command ConfigMap:
Warning If the last-applied config is unavailable, run the current version of the script again using previously-applied config and CLI flags. Failure to reuse the same configuration options when re-invoking the python script can result in unexpected changes when re-running the import script. "},{"location":"CRDs/Object-Storage/ceph-object-realm-crd/","title":"CephObjectRealm CRD","text":"Rook allows creation of a realm in a Ceph Object Multisite configuration through a CRD. The following settings are available for Ceph object store realms. "},{"location":"CRDs/Object-Storage/ceph-object-realm-crd/#example","title":"Example","text":" "},{"location":"CRDs/Object-Storage/ceph-object-realm-crd/#settings","title":"Settings","text":""},{"location":"CRDs/Object-Storage/ceph-object-realm-crd/#metadata","title":"Metadata","text":"
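For reference, a minimal CephObjectRealm manifest could look like the following sketch (the realm name is a placeholder):

```yaml
apiVersion: ceph.rook.io/v1
kind: CephObjectRealm
metadata:
  name: realm-a          # placeholder realm name
  namespace: rook-ceph
```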
Rook allows creation and customization of object stores through the custom resource definitions (CRDs). The following settings are available for Ceph object stores. "},{"location":"CRDs/Object-Storage/ceph-object-store-crd/#example","title":"Example","text":""},{"location":"CRDs/Object-Storage/ceph-object-store-crd/#erasure-coded","title":"Erasure Coded","text":"Erasure coded pools can only be used with Note This sample requires at least 3 bluestore OSDs, with each OSD located on a different node. The OSDs must be located on different nodes, because the "},{"location":"CRDs/Object-Storage/ceph-object-store-crd/#object-store-settings","title":"Object Store Settings","text":""},{"location":"CRDs/Object-Storage/ceph-object-store-crd/#metadata","title":"Metadata","text":"
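A hedged sketch of an erasure-coded object store matching the description above (replicated metadata pool, 2 data and 1 coding chunks for the data pool); names, chunk counts, and gateway settings are illustrative:

```yaml
apiVersion: ceph.rook.io/v1
kind: CephObjectStore
metadata:
  name: my-store
  namespace: rook-ceph
spec:
  metadataPool:
    failureDomain: host
    replicated:
      size: 3
  dataPool:
    failureDomain: host
    erasureCoded:
      dataChunks: 2
      codingChunks: 1
  gateway:
    port: 80
    instances: 1
```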
The pools allow all of the settings defined in the Block Pool CRD spec. For more details, see the Block Pool CRD settings. In the example above, there must be at least three hosts (size 3) and at least three devices (2 data + 1 coding chunks) in the cluster. When the
The auth settings allow configuring authentication providers for the object store. Currently only OpenStack Keystone is supported. "},{"location":"CRDs/Object-Storage/ceph-object-store-crd/#keystone-settings","title":"Keystone Settings","text":"The keystone authentication can be configured in the auth.keystone section of the CRD. Note: With this example configuration S3 is implicitly enabled even though it is not enabled in the protocols section. The following options can be configured in the keystone section:
The protocols section is divided into two parts: s3 and swift.
In the s3 part:
In the swift part:
The gateway settings correspond to the RGW daemon settings.
The zone settings allow the object store to join custom created ceph-object-zone.
A common use case that requires configuring hosting is allowing virtual host-style bucket access. This use case is discussed in more detail in Rook object storage docs.
Note For DNS names that support wildcards, do not include wildcards; use the specific DNS name of the store rather than a wildcard entry. Rook provides a default endpoint for the object store and will not overwrite an existing custom endpoint. Rook will by default monitor the state of the object store endpoints. The following CRD settings are available:
Here is a complete example: You can monitor the health of a CephObjectStore by monitoring the gateway deployments it creates. The primary deployment created is named Ceph RGW supports Server Side Encryption as defined in AWS S3 protocol with three different modes: AWS-SSE:C, AWS-SSE:KMS and AWS-SSE:S3. The last two modes require a Key Management System (KMS) like HashiCorp Vault. Currently, Vault is the only supported KMS backend for CephObjectStore. Refer to the Vault KMS section for details about Vault. If these settings are defined, then RGW will establish a connection between Vault and whenever S3 client sends request with Server Side Encryption. Ceph's Vault documentation has more details. The For RGW, please note the following:
During deletion of a CephObjectStore resource, Rook protects against accidental or premature destruction of user data by blocking deletion if there are any object buckets in the object store being deleted. Buckets may have been created by users or by ObjectBucketClaims. For deletion to be successful, all buckets in the object store must be removed. This may require manual deletion or removal of all ObjectBucketClaims. Alternately, the Rook will warn about which buckets are blocking deletion in three ways:
If the CephObjectStore is configured in a multisite setup the above conditions are applicable only to stores that belong to a single master zone. Otherwise the conditions are ignored. Even if the store is removed the user can access the data from a peer object store. "},{"location":"CRDs/Object-Storage/ceph-object-store-user-crd/","title":"CephObjectStoreUser CRD","text":"Rook allows creation and customization of object store users through the custom resource definitions (CRDs). The following settings are available for Ceph object store users. "},{"location":"CRDs/Object-Storage/ceph-object-store-user-crd/#example","title":"Example","text":" "},{"location":"CRDs/Object-Storage/ceph-object-store-user-crd/#object-store-user-settings","title":"Object Store User Settings","text":""},{"location":"CRDs/Object-Storage/ceph-object-store-user-crd/#metadata","title":"Metadata","text":"
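For reference, a minimal CephObjectStoreUser sketch (the user name, display name, and store name are placeholders):

```yaml
apiVersion: ceph.rook.io/v1
kind: CephObjectStoreUser
metadata:
  name: my-user
  namespace: rook-ceph
spec:
  store: my-store            # the CephObjectStore this user belongs to
  displayName: "My display name"
```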
Rook allows creation of zones in a ceph cluster for a Ceph Object Multisite configuration through a CRD. The following settings are available for Ceph object store zones. "},{"location":"CRDs/Object-Storage/ceph-object-zone-crd/#example","title":"Example","text":" "},{"location":"CRDs/Object-Storage/ceph-object-zone-crd/#settings","title":"Settings","text":""},{"location":"CRDs/Object-Storage/ceph-object-zone-crd/#metadata","title":"Metadata","text":"
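A hedged sketch of a CephObjectZone with a replicated metadata pool and an erasure-coded data pool (2 data + 1 coding chunks); the zone and zone group names are placeholders:

```yaml
apiVersion: ceph.rook.io/v1
kind: CephObjectZone
metadata:
  name: zone-a
  namespace: rook-ceph
spec:
  zoneGroup: zonegroup-a     # the CephObjectZoneGroup this zone belongs to
  metadataPool:
    failureDomain: host
    replicated:
      size: 3
  dataPool:
    failureDomain: host
    erasureCoded:
      dataChunks: 2
      codingChunks: 1
```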
The pools allow all of the settings defined in the Pool CRD spec. For more details, see the Pool CRD settings. In the example above, there must be at least three hosts (size 3) and at least three devices (2 data + 1 coding chunks) in the cluster. "},{"location":"CRDs/Object-Storage/ceph-object-zone-crd/#spec","title":"Spec","text":"
Rook allows creation of zone groups in a Ceph Object Multisite configuration through a CRD. The following settings are available for Ceph object store zone groups. "},{"location":"CRDs/Object-Storage/ceph-object-zonegroup-crd/#example","title":"Example","text":" "},{"location":"CRDs/Object-Storage/ceph-object-zonegroup-crd/#settings","title":"Settings","text":""},{"location":"CRDs/Object-Storage/ceph-object-zonegroup-crd/#metadata","title":"Metadata","text":"
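For reference, a minimal CephObjectZoneGroup sketch (the zone group and realm names are placeholders):

```yaml
apiVersion: ceph.rook.io/v1
kind: CephObjectZoneGroup
metadata:
  name: zonegroup-a
  namespace: rook-ceph
spec:
  realm: realm-a             # the CephObjectRealm this zone group belongs to
```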
Rook allows creation and customization of shared filesystems through the custom resource definitions (CRDs). The following settings are available for Ceph filesystems. "},{"location":"CRDs/Shared-Filesystem/ceph-filesystem-crd/#examples","title":"Examples","text":""},{"location":"CRDs/Shared-Filesystem/ceph-filesystem-crd/#replicated","title":"Replicated","text":"Note This sample requires at least 1 OSD per node, with each OSD located on 3 different nodes. Each OSD must be located on a different node, because both of the defined pools set the The (These definitions can also be found in the Erasure coded pools require the OSDs to use Note This sample requires at least 3 bluestore OSDs, with each OSD located on a different node. The OSDs must be located on different nodes, because the IMPORTANT: For erasure coded pools, we have to create a replicated pool as the default data pool and an erasure-coded pool as a secondary pool. (These definitions can also be found in the
The pools allow all of the settings defined in the Pool CRD spec. For more details, see the Pool CRD settings. In the example above, there must be at least three hosts (size 3) and at least eight devices (6 data + 2 coding chunks) in the cluster.
The metadata server settings correspond to the MDS daemon settings.
The format of the resource requests/limits structure is the same as described in the Ceph Cluster CRD documentation. If the memory resource limit is declared Rook will automatically set the MDS configuration In order to provide the best possible experience running Ceph in containers, Rook internally recommends the memory for MDS daemons to be at least 4096MB. If a user configures a limit or request value that is too low, Rook will still run the pod(s) and print a warning to the operator log. "},{"location":"CRDs/Shared-Filesystem/ceph-fs-mirror-crd/","title":"CephFilesystemMirror CRD","text":"This guide assumes you have created a Rook cluster as explained in the main Quickstart guide Rook allows creation and updating the fs-mirror daemon through the custom resource definitions (CRDs). CephFS will support asynchronous replication of snapshots to a remote (different Ceph cluster) CephFS file system via cephfs-mirror tool. Snapshots are synchronized by mirroring snapshot data followed by creating a snapshot with the same name (for a given directory on the remote file system) as the snapshot being synchronized. For more information about user management and capabilities see the Ceph docs. "},{"location":"CRDs/Shared-Filesystem/ceph-fs-mirror-crd/#creating-daemon","title":"Creating daemon","text":"To get you started, here is a simple example of a CRD to deploy an cephfs-mirror daemon. "},{"location":"CRDs/Shared-Filesystem/ceph-fs-mirror-crd/#settings","title":"Settings","text":"If any setting is unspecified, a suitable default will be used automatically. "},{"location":"CRDs/Shared-Filesystem/ceph-fs-mirror-crd/#filesystemmirror-metadata","title":"FilesystemMirror metadata","text":"
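A minimal sketch of the CephFilesystemMirror CR (the name is a placeholder; placement and resources can be added as with other daemons):

```yaml
apiVersion: ceph.rook.io/v1
kind: CephFilesystemMirror
metadata:
  name: my-fs-mirror
  namespace: rook-ceph
```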
In order to configure mirroring peers, please refer to the CephFilesystem documentation. "},{"location":"CRDs/Shared-Filesystem/ceph-fs-subvolumegroup-crd/","title":"FilesystemSubVolumeGroup CRD","text":"Info This guide assumes you have created a Rook cluster as explained in the main Quickstart guide Rook allows creation of Ceph Filesystem SubVolumeGroups through the custom resource definitions (CRDs). Filesystem subvolume groups are an abstraction for a directory level higher than Filesystem subvolumes to effect policies (e.g., File layouts) across a set of subvolumes. For more information about CephFS volume, subvolumegroup and subvolume refer to the Ceph docs. "},{"location":"CRDs/Shared-Filesystem/ceph-fs-subvolumegroup-crd/#creating-daemon","title":"Creating daemon","text":"To get you started, here is a simple example of a CRD to create a subvolumegroup on the CephFilesystem \"myfs\". "},{"location":"CRDs/Shared-Filesystem/ceph-fs-subvolumegroup-crd/#settings","title":"Settings","text":"If any setting is unspecified, a suitable default will be used automatically. "},{"location":"CRDs/Shared-Filesystem/ceph-fs-subvolumegroup-crd/#cephfilesystemsubvolumegroup-metadata","title":"CephFilesystemSubVolumeGroup metadata","text":"
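A hedged sketch of a subvolume group on the CephFilesystem "myfs" (assuming the spec fields filesystemName and pinning; the group name and pinning value are illustrative):

```yaml
apiVersion: ceph.rook.io/v1
kind: CephFilesystemSubVolumeGroup
metadata:
  name: group-a
  namespace: rook-ceph
spec:
  filesystemName: myfs
  pinning:
    distributed: 1   # only one of export, distributed, or random may be set
```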
Note Only one out of (export, distributed, random) can be set at a time. By default pinning is set with value: This page contains information regarding the CI configuration used for the Rook project to test, build and release the project. "},{"location":"Contributing/ci-configuration/#secrets","title":"Secrets","text":"
You can choose any Kubernetes install of your choice. The test framework only depends on The developers of Rook are working on Minikube and thus it is the recommended way to quickly get Rook up and running. Minikube should not be used for production but the Rook authors consider it great for development. While other tools such as k3d/kind are great, users have faced issues deploying Rook. Always use a virtual machine when testing Rook. Never use your host system where local devices may mistakenly be consumed. To install Minikube follow the official guide. It is recommended to use the kvm2 driver when running on a Linux machine and the hyperkit driver when running on a MacOS. Both allow to create and attach additional disks to the virtual machine. This is required for the Ceph OSD to consume one drive. We don't recommend any other drivers for Rook. You will need a Minikube version 1.23 or higher. Starting the cluster on Minikube is as simple as running: It is recommended to install a Docker client on your host system too. Depending on your operating system follow the official guide. Stopping the cluster and destroying the Minikube virtual machine can be done with: "},{"location":"Contributing/development-environment/#install-helm","title":"Install Helm","text":"Use helm.sh to install Helm and set up Rook charts defined under
Note These helper scripts depend on some artifacts under the Note If Helm is not available in your Developers can test quickly their changes by building and using the local Rook image on their minikube cluster. 1) Set the local Docker environment to use minikube: 2) Build your local Rook image. The following command will generate a Rook image labeled in the format 3) Tag the generated image as 4) Create a Rook cluster in minikube, or if the Rook cluster is already configured, apply the new operator image by restarting the operator. "},{"location":"Contributing/development-environment/#creating-a-dev-cluster","title":"Creating a dev cluster","text":"To accelerate the development process, users have the option to employ the script located at Thank you for your time and effort to help us improve Rook! Here are a few steps to get started. If you have any questions, don't hesitate to reach out to us on our Slack dev channel. Sign up for the Rook Slack here. "},{"location":"Contributing/development-flow/#prerequisites","title":"Prerequisites","text":"
Navigate to http://github.com/rook/rook and click the \"Fork\" button. "},{"location":"Contributing/development-flow/#clone-your-fork","title":"Clone Your Fork","text":"In a console window: "},{"location":"Contributing/development-flow/#add-upstream-remote","title":"Add Upstream Remote","text":"Add the upstream remote to your local git: Two remotes should be available: Before building the project, fetch the remotes to synchronize tags. Tip If in a Linux environment and Hint Make will automatically pick up For consistent whitespace and other formatting in
Tip VS Code will prompt you automatically with some recommended extensions to install, such as Markdown, Go, YAML validator, and ShellCheck. VS Code will automatically use the recommended settings in the To self-assign an issue that is not yet assigned to anyone else, add a comment in the issue with The overall source code layout is summarized: "},{"location":"Contributing/development-flow/#development","title":"Development","text":"To submit a change, create a branch in your fork and then submit a pull request (PR) from the branch. "},{"location":"Contributing/development-flow/#design-document","title":"Design Document","text":"For new features of significant scope and complexity, a design document is recommended before work begins on the implementation. Create a design document if:
For smaller, straightforward features and bug fixes, there is no need for a design document. Authoring a design document has many advantages:
Note Writing code to prototype the feature while working on the design may be very useful to help flesh out the approach. A design document should be written as a markdown file in the design folder. Follow the process outlined in the design template. There are many examples of previous design documents in that folder. Submit a pull request for the design to be discussed and approved by the community, just like any other change to the repository. "},{"location":"Contributing/development-flow/#create-a-branch","title":"Create a Branch","text":"From a console, create a new branch based on your fork where changes will be developed: "},{"location":"Contributing/development-flow/#updating-your-fork","title":"Updating Your Fork","text":"During the development lifecycle, keep your branch(es) updated with the latest upstream master. As others on the team push changes, rebase your commits on top of the latest. This avoids unnecessary merge commits and keeps the commit history clean. Whenever an update is needed to the local repository, never perform a merge, always rebase. This will avoid merge commits in the git history. If there are any modified files, first stash them with Rebasing is a very powerful feature of Git. You need to understand how it works to avoid risking losing your work. Read about it in the Git documentation. Briefly, rebasing does the following:
After a feature or bug fix is completed in your branch, open a Pull Request (PR) to the upstream Rook repository. Before opening the PR:
All pull requests must pass all continuous integration (CI) tests before they can be merged. These tests automatically run against every pull request. The results of these tests along with code review feedback determine whether your request will be merged. "},{"location":"Contributing/development-flow/#unit-tests","title":"Unit Tests","text":"From the root of your local Rook repo execute the following to run all of the unit tests: Unit tests for individual packages can be run with the standard To see code coverage on the packages that you changed, view the "},{"location":"Contributing/development-flow/#writing-unit-tests","title":"Writing unit tests","text":"Good unit tests start with easily testable code. Small chunks (\"units\") of code can be easily tested for every possible input. Higher-level code units that are built from smaller, already-tested units can more easily verify that the units are combined together correctly. Common cases that may need tests:
Rook's upstream continuous integration (CI) tests will run integration tests against your changes automatically. "},{"location":"Contributing/development-flow/#tmate-session","title":"Tmate Session","text":"Integration tests will be run in Github actions. If an integration test fails, enable a tmate session to troubleshoot the issue by one of the following steps:
See the action details for an ssh connection to the Github runner. "},{"location":"Contributing/development-flow/#commit-structure","title":"Commit structure","text":"Rook maintainers value clear, lengthy and explanatory commit messages. Requirements for commits:
An example acceptable commit message: "},{"location":"Contributing/development-flow/#commit-history","title":"Commit History","text":"To prepare your branch to open a PR, the minimal number of logical commits is preferred to maintain a clean commit history. Most commonly a PR will include a single commit where all changes are squashed, although sometimes there will be multiple logical commits. To squash multiple commits or make other changes to the commit history, use Once your commit history is clean, ensure the branch is rebased on the latest upstream before opening the PR. "},{"location":"Contributing/development-flow/#submitting","title":"Submitting","text":"Go to the Rook github to open the PR. If you have pushed recently to a branch, you will see an obvious link to open the PR. If you have not pushed recently, go to the Pull Request tab and select your fork and branch for the PR. After the PR is open, make changes simply by pushing new commits. The PR will track the changes in your fork and rerun the CI automatically. Always open a pull request against master. Never open a pull request against a released branch (e.g. release-1.2) unless working directly with a maintainer. "},{"location":"Contributing/development-flow/#backporting-to-a-release-branch","title":"Backporting to a Release Branch","text":"The flow for getting a fix into a release branch is:
The Ceph manager modules are written in Python and can be individually and dynamically loaded from the manager. We can take advantage of this feature in order to test changes and to debug issues in the modules. This is just a hack to debug any modification in the manager modules. The Make modifications directly in the manager module and reload:
We are using MkDocs with the Material for MkDocs theme. "},{"location":"Contributing/documentation/#markdown-extensions","title":"Markdown Extensions","text":"Thanks to the MkDocs Material theme we have certain \"markdown syntax extensions\" available:
For a whole list of features Reference - Material for MkDocs. "},{"location":"Contributing/documentation/#local-preview","title":"Local Preview","text":"To locally preview the documentation, you can run the following command (in the root of the repository): When previewing, now you can navigate your browser to http://127.0.0.1:8000/ to open the preview of the documentation. Hint Should you encounter a helm-docs is a tool that generates the documentation for a helm chart automatically. If there are changes in the helm chart, the developer needs to run The integration tests run end-to-end tests on Rook in a running instance of Kubernetes. The framework includes scripts for starting Kubernetes so users can quickly spin up a Kubernetes cluster. The tests are generally designed to install Rook, run tests, and uninstall Rook. The CI runs the integration tests with each PR and each master or release branch build. If the tests fail in a PR, access the tmate for debugging. This document will outline the steps to run the integration tests locally in a minikube environment, should the CI not be sufficient to troubleshoot. Hint The CI is generally much simpler to troubleshoot than running these tests locally. Running the tests locally is rarely necessary. Warning A risk of running the tests locally is that a local disk is required during the tests. If not running in a VM, your laptop or other test machine could be destroyed. "},{"location":"Contributing/rook-test-framework/#install-minikube","title":"Install Minikube","text":"Follow Rook's developer guide to install Minikube. "},{"location":"Contributing/rook-test-framework/#build-rook-image","title":"Build Rook image","text":"Now that the Kubernetes cluster is running we need to populate the Docker registry to allow local image builds to be easily used inside Minikube.
Tag the newly built images to "},{"location":"Contributing/rook-test-framework/#run-integration-tests","title":"Run integration tests","text":"Some settings are available to run the tests under different environments. The settings are all configured with environment variables. See environment.go for the available environment variables. Set the following variables: Set Hint If using the Warning The integration tests erase the contents of To run a specific suite, specify the suite name: After running tests, see test logs under To run specific tests inside a suite: Info Only the golang test suites are documented to run locally. Canary and other tests have only ever been supported in the CI. "},{"location":"Contributing/rook-test-framework/#running-tests-on-openshift","title":"Running tests on OpenShift","text":"
OpenShift adds a number of security and other enhancements to Kubernetes. In particular, security context constraints allow the cluster admin to define exactly which permissions are allowed to pods running in the cluster. You will need to define those permissions that allow the Rook pods to run. The settings for Rook in OpenShift are described below, and are also included in the example yaml files:
To create an OpenShift cluster, the commands basically include: "},{"location":"Getting-Started/ceph-openshift/#helm-installation","title":"Helm Installation","text":"Configuration required for Openshift is automatically created by the Helm charts, such as the SecurityContextConstraints. See the Rook Helm Charts. "},{"location":"Getting-Started/ceph-openshift/#rook-privileges","title":"Rook Privileges","text":"To orchestrate the storage platform, Rook requires the following access in the cluster:
Before starting the Rook operator or cluster, create the security context constraints needed by the Rook pods. The following yaml is found in Hint Older versions of OpenShift may require Important to note is that if you plan on running Rook in namespaces other than the default To create the scc you will need a privileged account: We will create the security context constraints with the operator in the next section. "},{"location":"Getting-Started/ceph-openshift/#rook-settings","title":"Rook Settings","text":"There are some Rook settings that also need to be adjusted to work in OpenShift. "},{"location":"Getting-Started/ceph-openshift/#operator-settings","title":"Operator Settings","text":"There is an environment variable that needs to be set in the operator spec that will allow Rook to run in OpenShift clusters.
Now create the security context constraints and the operator: "},{"location":"Getting-Started/ceph-openshift/#cluster-settings","title":"Cluster Settings","text":"The cluster settings in
In OpenShift, ports less than 1024 cannot be bound. In the object store CRD, ensure the port is modified to meet this requirement. You can expose a different port such as A sample object store can be created with these settings: "},{"location":"Getting-Started/ceph-teardown/","title":"Cleanup","text":"Rook provides the following clean up options:
To tear down the cluster, the following resources need to be cleaned up:
If the default namespaces or paths such as If tearing down a cluster frequently for development purposes, it is instead recommended to use an environment such as Minikube that can easily be reset without worrying about any of these steps. "},{"location":"Getting-Started/ceph-teardown/#delete-the-block-and-file-artifacts","title":"Delete the Block and File artifacts","text":"First clean up the resources from applications that consume the Rook storage. These commands will clean up the resources from the example application block and file walkthroughs (unmount volumes, delete volume claims, etc). Important After applications have been cleaned up, the Rook cluster can be removed. It is important to delete applications before removing the Rook operator and Ceph cluster. Otherwise, volumes may hang and nodes may require a restart. "},{"location":"Getting-Started/ceph-teardown/#delete-the-cephcluster-crd","title":"Delete the CephCluster CRD","text":"Warning DATA WILL BE PERMANENTLY DELETED AFTER DELETING THE
Note The cleanup jobs might not start if the resources created on top of Rook Cluster are not deleted completely. See deleting block and file artifacts "},{"location":"Getting-Started/ceph-teardown/#delete-the-operator-resources","title":"Delete the Operator Resources","text":"Remove the Rook operator, RBAC, and CRDs, and the "},{"location":"Getting-Started/ceph-teardown/#delete-the-data-on-hosts","title":"Delete the data on hosts","text":"Attention The final cleanup step requires deleting files on each host in the cluster. All files under the If the Connect to each machine and delete the namespace directory under Disks on nodes used by Rook for OSDs can be reset to a usable state. Note that these scripts are not one-size-fits-all. Please use them with discretion to ensure you are not removing data unrelated to Rook. A single disk can usually be cleared with some or all of the steps below. Ceph can leave LVM and device mapper data on storage drives, preventing them from being redeployed. These steps can clean former Ceph drives for reuse. Note that this only needs to be run once on each node. If you have only one Rook cluster and all Ceph disks are being wiped, run the following command. If disks are still reported locked, rebooting the node often helps clear LVM-related holds on disks. If there are multiple Ceph clusters and some disks are not wiped yet, it is necessary to manually determine which disks map to which device mapper devices. "},{"location":"Getting-Started/ceph-teardown/#troubleshooting","title":"Troubleshooting","text":"The most common issue cleaning up the cluster is that the If a pod is still terminating, consider forcefully terminating the pod ( If the cluster CRD still exists even though it has been deleted, see the next section on removing the finalizer. "},{"location":"Getting-Started/ceph-teardown/#removing-the-cluster-crd-finalizer","title":"Removing the Cluster CRD Finalizer","text":"When a Cluster CRD is created, a finalizer is added automatically by the Rook operator. The finalizer will allow the operator to ensure that before the cluster CRD is deleted, all block and file mounts will be cleaned up. Without proper cleanup, pods consuming the storage will be hung indefinitely until a system reboot. The operator is responsible for removing the finalizer after the mounts have been cleaned up. If for some reason the operator is not able to remove the finalizer (i.e., the operator is not running anymore), delete the finalizer manually with the following command: If the namespace is still stuck in Terminating state, check which resources are holding up the deletion and remove their finalizers as well: "},{"location":"Getting-Started/ceph-teardown/#remove-critical-resource-finalizers","title":"Remove critical resource finalizers","text":"Rook adds a finalizer The operator is responsible for removing the finalizers when a CephCluster is deleted. If the operator is not able to remove the finalizers (i.e., the operator is not running anymore), remove the finalizers manually: "},{"location":"Getting-Started/ceph-teardown/#force-delete-resources","title":"Force Delete Resources","text":"To keep your data safe in the cluster, Rook disallows deleting critical cluster resources by default. To override this behavior and force delete a specific custom resource, add the annotation For example, run the following commands to clean the Once the cleanup job is completed successfully, Rook will remove the finalizers from the deleted custom resource. 
This cleanup is supported only for the following custom resources: Custom Resource Ceph Resources to be cleaned up CephFilesystemSubVolumeGroup CSI stored RADOS OMAP details for pvc/volumesnapshots, subvolume snapshots, subvolume clones, subvolumes CephBlockPoolRadosNamespace Images and snapshots in the RADOS namespace CephBlockPool Images and snapshots in the BlockPool"},{"location":"Getting-Started/example-configurations/","title":"Example Configurations","text":"Configuration for Rook and Ceph can be configured in multiple ways to provide block devices, shared filesystem volumes or object storage in a kubernetes namespace. While several examples are provided to simplify storage setup, settings are available to optimize various production environments. See the example yaml files folder for all the rook/ceph setup example spec files. "},{"location":"Getting-Started/example-configurations/#common-resources","title":"Common Resources","text":"The first step to deploy Rook is to create the CRDs and other common resources. The configuration for these resources will be the same for most deployments. The crds.yaml and common.yaml sets these resources up. The examples all assume the operator and all Ceph daemons will be started in the same namespace. If deploying the operator in a separate namespace, see the comments throughout After the common resources are created, the next step is to create the Operator deployment. Several spec file examples are provided in this directory:
Settings for the operator are configured through environment variables on the operator deployment. The individual settings are documented in operator.yaml. "},{"location":"Getting-Started/example-configurations/#cluster-crd","title":"Cluster CRD","text":"Now that the operator is running, create the Ceph storage cluster with the CephCluster CR. This CR contains the most critical settings that will influence how the operator configures the storage. It is important to understand the various ways to configure the cluster. These examples represent several different ways to configure the storage.
See the Cluster CRD topic for more details and more examples for the settings. "},{"location":"Getting-Started/example-configurations/#setting-up-consumable-storage","title":"Setting up consumable storage","text":"Now we are ready to setup Block, Shared Filesystem or Object storage in the Rook cluster. These storage types are respectively created with the CephBlockPool, CephFilesystem and CephObjectStore CRs. "},{"location":"Getting-Started/example-configurations/#block-devices","title":"Block Devices","text":"Ceph provides raw block device volumes to pods. Each example below sets up a storage class which can then be used to provision a block device in application pods. The storage class is defined with a Ceph pool which defines the level of data redundancy in Ceph:
The block storage classes are found in the examples directory:
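As a hedged sketch of an RBD storage class (the pool, clusterID, and secret names follow the common example layout and may differ in your deployment):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-ceph-block
provisioner: rook-ceph.rbd.csi.ceph.com
parameters:
  clusterID: rook-ceph                  # namespace of the Rook cluster
  pool: replicapool                     # CephBlockPool backing the volumes
  imageFormat: "2"
  imageFeatures: layering
  csi.storage.k8s.io/fstype: ext4
  csi.storage.k8s.io/provisioner-secret-name: rook-csi-rbd-provisioner
  csi.storage.k8s.io/provisioner-secret-namespace: rook-ceph
  csi.storage.k8s.io/node-stage-secret-name: rook-csi-rbd-node
  csi.storage.k8s.io/node-stage-secret-namespace: rook-ceph
reclaimPolicy: Delete
```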
See the CephBlockPool CRD topic for more block storage settings. "},{"location":"Getting-Started/example-configurations/#shared-filesystem","title":"Shared Filesystem","text":"Ceph filesystem (CephFS) allows the user to mount a shared posix-compliant folder into one or more application pods. This storage is similar to NFS shared storage or CIFS shared folders, as explained here. Shared Filesystem storage contains configurable pools for different scenarios:
Dynamic provisioning is possible with the CSI driver. The storage class for shared filesystems is found in the See the Shared Filesystem CRD topic for more details on the settings. "},{"location":"Getting-Started/example-configurations/#object-storage","title":"Object Storage","text":"Ceph supports storing blobs of data called objects that support HTTP(s)-type get/put/post and delete semantics. This storage is similar to AWS S3 storage, for example. Object storage contains multiple pools that can be configured for different scenarios:
See the Object Store CRD topic for more details on the settings. "},{"location":"Getting-Started/example-configurations/#object-storage-user","title":"Object Storage User","text":"
The Ceph operator also runs an object store bucket provisioner which can grant access to existing buckets or dynamically provision new buckets.
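A minimal ObjectBucketClaim sketch requesting a new bucket (the bucket name prefix and bucket storage class name are placeholders):

```yaml
apiVersion: objectbucket.io/v1alpha1
kind: ObjectBucketClaim
metadata:
  name: ceph-bucket
  namespace: default
spec:
  generateBucketName: ceph-bkt         # prefix for the generated bucket name
  storageClassName: rook-ceph-bucket   # bucket storage class backed by the object store
```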
The CephBlockPool CRD is used by Rook to allow creation and customization of storage pools. "},{"location":"Getting-Started/glossary/#cephblockpoolradosnamespace-crd","title":"CephBlockPoolRadosNamespace CRD","text":"The CephBlockPoolRadosNamespace CRD is used by Rook to allow creation of Ceph RADOS Namespaces. "},{"location":"Getting-Started/glossary/#cephclient-crd","title":"CephClient CRD","text":"CephClient CRD is used by Rook to allow creation and updating clients. "},{"location":"Getting-Started/glossary/#cephcluster-crd","title":"CephCluster CRD","text":"The CephCluster CRD is used by Rook to allow creation and customization of storage clusters through the custom resource definitions (CRDs). "},{"location":"Getting-Started/glossary/#ceph-csi","title":"Ceph CSI","text":"The Ceph CSI plugins implement an interface between a CSI-enabled Container Orchestrator (CO) and Ceph clusters. "},{"location":"Getting-Started/glossary/#cephfilesystem-crd","title":"CephFilesystem CRD","text":"The CephFilesystem CRD is used by Rook to allow creation and customization of shared filesystems through the custom resource definitions (CRDs). "},{"location":"Getting-Started/glossary/#cephfilesystemmirror-crd","title":"CephFilesystemMirror CRD","text":"The CephFilesystemMirror CRD is used by Rook to allow creation and updating the Ceph fs-mirror daemon. "},{"location":"Getting-Started/glossary/#cephfilesystemsubvolumegroup-crd","title":"CephFilesystemSubVolumeGroup CRD","text":"CephFilesystemMirror CRD is used by Rook to allow creation of Ceph Filesystem SubVolumeGroups. "},{"location":"Getting-Started/glossary/#cephnfs-crd","title":"CephNFS CRD","text":"CephNFS CRD is used by Rook to allow exporting NFS shares of a CephFilesystem or CephObjectStore through the CephNFS custom resource definition. For further information please refer to the example here. "},{"location":"Getting-Started/glossary/#cephobjectstore-crd","title":"CephObjectStore CRD","text":"CephObjectStore CRD is used by Rook to allow creation and customization of object stores. "},{"location":"Getting-Started/glossary/#cephobjectstoreuser-crd","title":"CephObjectStoreUser CRD","text":"CephObjectStoreUser CRD is used by Rook to allow creation and customization of object store users. For more information and examples refer to this documentation. "},{"location":"Getting-Started/glossary/#cephobjectrealm-crd","title":"CephObjectRealm CRD","text":"CephObjectRealm CRD is used by Rook to allow creation of a realm in a Ceph Object Multisite configuration. For more information and examples refer to this documentation. "},{"location":"Getting-Started/glossary/#cephobjectzonegroup-crd","title":"CephObjectZoneGroup CRD","text":"CephObjectZoneGroup CRD is used by Rook to allow creation of zone groups in a Ceph Object Multisite configuration. For more information and examples refer to this documentation. "},{"location":"Getting-Started/glossary/#cephobjectzone-crd","title":"CephObjectZone CRD","text":"CephObjectZone CRD is used by Rook to allow creation of zones in a ceph cluster for a Ceph Object Multisite configuration. For more information and examples refer to this documentation. "},{"location":"Getting-Started/glossary/#cephrbdmirror-crd","title":"CephRBDMirror CRD","text":"CephRBDMirror CRD is used by Rook to allow creation and updating rbd-mirror daemon(s) through the custom resource definitions (CRDs). For more information and examples refer to this documentation. 
"},{"location":"Getting-Started/glossary/#external-storage-cluster","title":"External Storage Cluster","text":"An external cluster is a Ceph configuration that is managed outside of the local K8s cluster. "},{"location":"Getting-Started/glossary/#host-storage-cluster","title":"Host Storage Cluster","text":"A host storage cluster is where Rook configures Ceph to store data directly on the host devices. "},{"location":"Getting-Started/glossary/#kubectl-plugin","title":"kubectl Plugin","text":"The Rook kubectl plugin is a tool to help troubleshoot your Rook cluster. "},{"location":"Getting-Started/glossary/#object-bucket-claim-obc","title":"Object Bucket Claim (OBC)","text":"An Object Bucket Claim (OBC) is custom resource which requests a bucket (new or existing) from a Ceph object store. For further reference please refer to OBC Custom Resource. "},{"location":"Getting-Started/glossary/#object-bucket-ob","title":"Object Bucket (OB)","text":"An Object Bucket (OB) is a custom resource automatically generated when a bucket is provisioned. It is a global resource, typically not visible to non-admin users, and contains information specific to the bucket. "},{"location":"Getting-Started/glossary/#openshift","title":"OpenShift","text":"OpenShift Container Platform is a distribution of the Kubernetes container platform. "},{"location":"Getting-Started/glossary/#pvc-storage-cluster","title":"PVC Storage Cluster","text":"In a PersistentVolumeClaim-based cluster, the Ceph persistent data is stored on volumes requested from a storage class of your choice. "},{"location":"Getting-Started/glossary/#stretch-storage-cluster","title":"Stretch Storage Cluster","text":"A stretched cluster is a deployment model in which two datacenters with low latency are available for storage in the same K8s cluster, rather than three or more. To support this scenario, Rook has integrated support for stretch clusters. "},{"location":"Getting-Started/glossary/#toolbox","title":"Toolbox","text":"The Rook toolbox is a container with common tools used for rook debugging and testing. "},{"location":"Getting-Started/glossary/#ceph","title":"Ceph","text":"Ceph is a distributed network storage and file system with distributed metadata management and POSIX semantics. See also the Ceph Glossary. Here are a few of the important terms to understand:
Kubernetes: Kubernetes, also known as K8s, is an open-source system for automating deployment, scaling, and management of containerized applications. See also the Kubernetes Glossary for further definitions. Here are a few of the important terms to understand:
Rook: Rook is an open source cloud-native storage orchestrator, providing the platform, framework, and support for Ceph storage to natively integrate with cloud-native environments. Ceph is a distributed storage system that provides file, block, and object storage and is deployed in large-scale production clusters. Rook automates deployment and management of Ceph to provide self-managing, self-scaling, and self-healing storage services. The Rook operator does this by building on Kubernetes resources to deploy, configure, provision, scale, upgrade, and monitor Ceph. The Ceph operator was declared stable in December 2018 in the Rook v0.9 release, and has provided a production storage platform for many years since. Rook is hosted by the Cloud Native Computing Foundation (CNCF) as a graduated level project.
Quick Start Guide
Starting Ceph in your cluster is as simple as a few kubectl commands. Ceph is a highly scalable distributed storage solution for block storage, object storage, and shared filesystems with years of production deployments. See the Ceph overview. For detailed design documentation, see also the design docs.
Need help? Be sure to join the Rook Slack
If you have any questions along the way, don't hesitate to ask in our Slack channel. Sign up for the Rook Slack here.
Quickstart
Welcome to Rook! We hope you have a great experience installing the Rook cloud-native storage orchestrator platform to enable highly available, durable Ceph storage in Kubernetes clusters. Don't hesitate to ask questions in our Slack channel. This guide will walk through the basic setup of a Ceph cluster and enable K8s applications to consume block, object, and file storage. Always use a virtual machine when testing Rook. Never use a host system where local devices may mistakenly be consumed.
Kubernetes Version
Kubernetes versions v1.26 through v1.31 are supported.
CPU Architecture
Architectures released are amd64 / x86_64 and arm64.
Prerequisites
To check if a Kubernetes cluster is ready for Rook, see the prerequisites documentation. To configure the Ceph storage cluster, at least one of these local storage options is required: raw devices, raw partitions, LVM logical volumes, or persistent volumes available from a storage class in block mode (in each case with no formatted filesystem).
A simple Rook cluster is created for Kubernetes with a few kubectl commands and the example manifests (a sketch of the typical commands follows the cluster-creation steps below). After the cluster is running, applications can consume block, object, or file storage.
Deploy the Rook Operator
The first step is to deploy the Rook operator. Important: The Rook Helm Chart is available to deploy the operator instead of creating the manifests directly. Note: Check that the example yaml files are from a tagged release of Rook. Note: These steps are for a standard production Rook deployment in Kubernetes. For OpenShift, testing, or more options, see the example configurations documentation. Before starting the operator in production, consider these settings:
The Rook documentation is focused around starting Rook in a variety of environments. While creating the cluster in this guide, consider these example cluster manifests:
See the Ceph example configurations for more details.
Create a Ceph Cluster
Now that the Rook operator is running, we can create the Ceph cluster. Important: The Rook Cluster Helm Chart is available to deploy the cluster instead of creating the manifests directly. Important: For the cluster to survive reboots, set the dataDirHostPath property to a path that is valid on the hosts. Create the cluster, then verify it is running by viewing the pods in the rook-ceph namespace (a hedged sketch of the typical commands follows below). The number of OSD pods will depend on the number of nodes in the cluster and the number of devices configured; for the default cluster.yaml, one OSD is created for each available device found on each node. Hint: If the rook-ceph-mon, rook-ceph-mgr, or rook-ceph-osd pods are not created, refer to the Ceph common issues for potential solutions. To verify that the cluster is in a healthy state, connect to the Rook toolbox and run the ceph status command.
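A minimal sketch of the steps above, assuming the example manifests from a tagged Rook release (the release tag shown is illustrative; the namespace and toolbox deployment name are the common defaults):

```bash
# Clone the example manifests from a tagged release (tag shown is illustrative)
git clone --single-branch --branch v1.15.0 https://github.com/rook/rook.git
cd rook/deploy/examples

# Deploy the Rook operator
kubectl create -f crds.yaml -f common.yaml -f operator.yaml
kubectl -n rook-ceph get pod -l app=rook-ceph-operator

# Create the Ceph cluster and watch the pods come up
kubectl create -f cluster.yaml
kubectl -n rook-ceph get pod --watch

# Check cluster health from the toolbox
kubectl create -f toolbox.yaml
kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- ceph status
```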
Hint If the cluster is not healthy, please refer to the Ceph common issues for potential solutions. "},{"location":"Getting-Started/quickstart/#storage","title":"Storage","text":"For a walkthrough of the three types of storage exposed by Rook, see the guides for:
Ceph has a dashboard to view the status of the cluster. See the dashboard guide. "},{"location":"Getting-Started/quickstart/#tools","title":"Tools","text":"Create a toolbox pod for full access to a ceph admin client for debugging and troubleshooting the Rook cluster. See the toolbox documentation for setup and usage information. The Rook kubectl plugin provides commands to view status and troubleshoot issues. See the advanced configuration document for helpful maintenance and tuning examples. "},{"location":"Getting-Started/quickstart/#monitoring","title":"Monitoring","text":"Each Rook cluster has built-in metrics collectors/exporters for monitoring with Prometheus. To configure monitoring, see the monitoring guide. "},{"location":"Getting-Started/quickstart/#telemetry","title":"Telemetry","text":"The Rook maintainers would like to receive telemetry reports for Rook clusters. The data is anonymous and does not include any identifying information. Enable the telemetry reporting feature with the following command in the toolbox: For more details on what is reported and how your privacy is protected, see the Ceph Telemetry Documentation. "},{"location":"Getting-Started/quickstart/#teardown","title":"Teardown","text":"When finished with the test cluster, see the cleanup guide. "},{"location":"Getting-Started/release-cycle/","title":"Release Cycle","text":"Rook plans to release a new minor version three times a year, or about every four months. The most recent two minor Rook releases are actively maintained. Patch releases for the latest minor release are typically bi-weekly. Urgent patches may be released sooner. Patch releases for the previous minor release are commonly monthly, though will vary depending on the urgency of fixes. "},{"location":"Getting-Started/release-cycle/#definition-of-maintenance","title":"Definition of Maintenance","text":"The Rook community defines maintenance in that relevant bug fixes that are merged to the main development branch will be eligible to be back-ported to the release branch of any currently maintained version. Patches will be released as needed. It is also possible that a fix may be merged directly to the release branch if no longer applicable on the main development branch. While Rook maintainers make significant efforts to release urgent issues in a timely manner, maintenance does not indicate any SLA on response time. "},{"location":"Getting-Started/release-cycle/#k8s-versions","title":"K8s Versions","text":"The minimum version supported by a Rook release is specified in the Quickstart Guide. Rook expects to support the most recent six versions of Kubernetes. While these K8s versions may not all be supported by the K8s release cycle, we understand that clusters may take time to update. "},{"location":"Getting-Started/storage-architecture/","title":"Storage Architecture","text":"Ceph is a highly scalable distributed storage solution for block storage, object storage, and shared filesystems with years of production deployments. "},{"location":"Getting-Started/storage-architecture/#design","title":"Design","text":"Rook enables Ceph storage to run on Kubernetes using Kubernetes primitives. With Ceph running in the Kubernetes cluster, Kubernetes applications can mount block devices and filesystems managed by Rook, or can use the S3/Swift API for object storage. The Rook operator automates configuration of storage components and monitors the cluster to ensure the storage remains available and healthy. 
The Rook operator is a simple container that has all that is needed to bootstrap and monitor the storage cluster. The operator will start and monitor Ceph monitor pods, the Ceph OSD daemons to provide RADOS storage, and other Ceph daemons. The operator manages CRDs for pools, object stores (S3/Swift), and filesystems by initializing the pods and other resources necessary to run the services. The operator will monitor the storage daemons to ensure the cluster is healthy. Ceph mons will be started or failed over when necessary, and other adjustments are made as the cluster grows or shrinks. The operator will also watch for desired state changes specified in the Ceph custom resources (CRs) and apply the changes. Rook automatically configures the Ceph-CSI driver to mount the storage to your pods. The rook/ceph image includes the tools needed to manage the cluster. Rook is implemented in golang, while Ceph is implemented in C++ where the data path is highly optimized. We believe this combination offers the best of both worlds.
Architecture
Example applications are shown above for the three supported storage types:
Below the dotted line in the above diagram, the components fall into three categories:
Production clusters must have three or more nodes for a resilient storage platform. "},{"location":"Getting-Started/storage-architecture/#block-storage","title":"Block Storage","text":"In the diagram above, the flow to create an application with an RWO volume is:
A ReadWriteOnce volume can be mounted on one node at a time.
Shared Filesystem
In the diagram above, the flow to create an application with a RWX volume is:
A ReadWriteMany volume can be mounted on multiple nodes for your application to use. "},{"location":"Getting-Started/storage-architecture/#object-storage-s3","title":"Object Storage S3","text":"In the diagram above, the flow to create an application with access to an S3 bucket is:
An S3-compatible client can use the S3 bucket right away using the credentials (Secret) and bucket information (ConfigMap) generated for the claim.
Authenticated Registries
If you want to use an image from an authenticated docker registry (e.g. for an image cache/mirror), you'll need to add an imagePullSecret to all relevant service accounts. The whole process is described in the official Kubernetes documentation.
Example setup for a Ceph cluster
To get you started, here's a quick rundown for the Ceph example from the quickstart guide. First, we'll create the secret for our registry as described in the Kubernetes documentation (the secret will be created in the rook-ceph namespace). Next we'll add the imagePullSecrets snippet to all relevant service accounts. The service accounts are:
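The complete list of service accounts is not reproduced here. As a hedged illustration, assuming a pull secret named my-registry-secret already exists in the rook-ceph namespace, the snippet can be attached to one of the Rook service accounts like this:

```bash
# Hypothetical secret name; rook-ceph-system is one of the Rook service accounts
kubectl -n rook-ceph patch serviceaccount rook-ceph-system \
  -p '{"imagePullSecrets": [{"name": "my-registry-secret"}]}'
```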
Since it's the same procedure for all service accounts, the sketch above shows just one of them. After doing this for all service accounts, all pods should be able to pull the image from your registry.
Prerequisites
Rook can be installed on any existing Kubernetes cluster as long as it meets the minimum version and Rook is granted the required privileges (see below for more information).
Kubernetes Version
Kubernetes versions v1.26 through v1.31 are supported.
CPU Architecture
Architectures supported are amd64 / x86_64 and arm64. To configure the Ceph storage cluster, at least one of these local storage types is required: raw devices, raw partitions, LVM logical volumes, or persistent volumes available from a storage class in block mode (in each case with no formatted filesystem).
Confirm whether the partitions or devices are formatted with filesystems with the lsblk -f command (see the sketch after the LVM notes below). If the FSTYPE field is not empty, there is a filesystem on top of the corresponding device and it cannot be used for a Ceph OSD. Ceph OSDs have a dependency on LVM in the following scenarios:
LVM is not required for OSDs in these scenarios:
If LVM is required, LVM needs to be available on the hosts where OSDs will be running. Some Linux distributions do not ship with the lvm2 package preinstalled; it must be installed on all storage nodes before OSDs can be created. CentOS, Ubuntu, and RancherOS each have their own installation steps; a hedged example for CentOS and Ubuntu is sketched below.
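A minimal sketch of the device check and lvm2 installation mentioned above, assuming CentOS/RHEL or Ubuntu storage nodes:

```bash
# An empty FSTYPE column means the device or partition has no filesystem and can be used for an OSD
lsblk -f

# Install lvm2 if the distribution does not ship it
sudo yum install -y lvm2        # CentOS / RHEL
sudo apt-get install -y lvm2    # Ubuntu / Debian
```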
"},{"location":"Getting-Started/Prerequisites/prerequisites/#kernel","title":"Kernel","text":""},{"location":"Getting-Started/Prerequisites/prerequisites/#rbd","title":"RBD","text":"Ceph requires a Linux kernel built with the RBD module. Many Linux distributions have this module, but not all. For example, the GKE Container-Optimised OS (COS) does not have RBD. Test your Kubernetes nodes by running Rook's default RBD configuration specifies only the "},{"location":"Getting-Started/Prerequisites/prerequisites/#cephfs","title":"CephFS","text":"If creating RWX volumes from a Ceph shared file system (CephFS), the recommended minimum kernel version is 4.17. If the kernel version is less than 4.17, the requested PVC sizes will not be enforced. Storage quotas will only be enforced on newer kernels. "},{"location":"Getting-Started/Prerequisites/prerequisites/#distro-notes","title":"Distro Notes","text":"Specific configurations for some distributions. "},{"location":"Getting-Started/Prerequisites/prerequisites/#nixos","title":"NixOS","text":"For NixOS, the kernel modules will be found in the non-standard path Rook containers require read access to those locations to be able to load the required modules. They have to be bind-mounted as volumes in the CephFS and RBD plugin pods. If installing Rook with Helm, uncomment these example settings in
If deploying without Helm, add those same values to the corresponding settings in the operator configuration (the rook-ceph-operator-config ConfigMap in operator.yaml).
If using containerd, remove the LimitNOFILE setting from the containerd service configuration.
Ceph Cluster Helm Chart
Creates Rook resources to configure a Ceph cluster using the Helm package manager. This chart is a simple packaging of templates that will optionally create Rook resources such as:
Rook currently publishes builds of this chart to the release and master channels. Before installing, review the values.yaml to confirm if the default settings need to be updated.
The release channel is the most recent release of Rook that is considered stable for the community. The example install assumes you have first installed the Rook Operator Helm Chart and created your customized values.yaml. Note --namespace specifies the cephcluster namespace, which may be different from the rook operator namespace. "},{"location":"Helm-Charts/ceph-cluster-chart/#configuration","title":"Configuration","text":"The following table lists the configurable parameters of the rook-operator chart and their default values. Parameter Description DefaultcephBlockPools A list of CephBlockPool configurations to deploy See below cephBlockPoolsVolumeSnapshotClass Settings for the block pool snapshot class See RBD Snapshots cephClusterSpec Cluster configuration. See below cephFileSystemVolumeSnapshotClass Settings for the filesystem snapshot class See CephFS Snapshots cephFileSystems A list of CephFileSystem configurations to deploy See below cephObjectStores A list of CephObjectStore configurations to deploy See below clusterName The metadata.name of the CephCluster CR The same as the namespace configOverride Cluster ceph.conf override nil csiDriverNamePrefix CSI driver name prefix for cephfs, rbd and nfs. namespace name where rook-ceph operator is deployed ingress.dashboard Enable an ingress for the ceph-dashboard {} kubeVersion Optional override of the target kubernetes version nil monitoring.createPrometheusRules Whether to create the Prometheus rules for Ceph alerts false monitoring.enabled Enable Prometheus integration, will also create necessary RBAC rules to allow Operator to create ServiceMonitors. Monitoring requires Prometheus to be pre-installed false monitoring.prometheusRule.annotations Annotations applied to PrometheusRule {} monitoring.prometheusRule.labels Labels applied to PrometheusRule {} monitoring.rulesNamespaceOverride The namespace in which to create the prometheus rules, if different from the rook cluster namespace. If you have multiple rook-ceph clusters in the same k8s cluster, choose the same namespace (ideally, namespace with prometheus deployed) to set rulesNamespaceOverride for all the clusters. Otherwise, you will get duplicate alerts with multiple alert definitions. nil operatorNamespace Namespace of the main rook operator \"rook-ceph\" pspEnable Create & use PSP resources. Set this to the same value as the rook-ceph chart. false toolbox.affinity Toolbox affinity {} toolbox.containerSecurityContext Toolbox container security context {\"capabilities\":{\"drop\":[\"ALL\"]},\"runAsGroup\":2016,\"runAsNonRoot\":true,\"runAsUser\":2016} toolbox.enabled Enable Ceph debugging pod deployment. See toolbox false toolbox.image Toolbox image, defaults to the image used by the Ceph cluster nil toolbox.priorityClassName Set the priority class for the toolbox if desired nil toolbox.resources Toolbox resources {\"limits\":{\"memory\":\"1Gi\"},\"requests\":{\"cpu\":\"100m\",\"memory\":\"128Mi\"}} toolbox.tolerations Toolbox tolerations [] "},{"location":"Helm-Charts/ceph-cluster-chart/#ceph-cluster-spec","title":"Ceph Cluster Spec","text":"The The cluster spec example is for a converged cluster where all the Ceph daemons are running locally, as in the host-based example (cluster.yaml). 
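A hedged sketch of installing the cluster chart, assuming the operator chart is already installed in the rook-ceph namespace and a customized values.yaml has been prepared:

```bash
helm repo add rook-release https://charts.rook.io/release
helm install --create-namespace --namespace rook-ceph rook-ceph-cluster \
   --set operatorNamespace=rook-ceph rook-release/rook-ceph-cluster -f values.yaml
```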
For a different configuration such as a PVC-based cluster (cluster-on-pvc.yaml), external cluster (cluster-external.yaml), or stretch cluster (cluster-stretched.yaml), replace this entire The name The name of the CephBlockPool ceph-blockpool spec The CephBlockPool spec, see the CephBlockPool documentation. {} storageClass.enabled Whether a storage class is deployed alongside the CephBlockPool true storageClass.isDefault Whether the storage class will be the default storage class for PVCs. See PersistentVolumeClaim documentation for details. true storageClass.name The name of the storage class ceph-block storageClass.annotations Additional storage class annotations {} storageClass.labels Additional storage class labels {} storageClass.parameters See Block Storage documentation or the helm values.yaml for suitable values see values.yaml storageClass.reclaimPolicy The default Reclaim Policy to apply to PVCs created with this storage class. Delete storageClass.allowVolumeExpansion Whether volume expansion is allowed by default. true storageClass.mountOptions Specifies the mount options for storageClass [] storageClass.allowedTopologies Specifies the allowedTopologies for storageClass [] "},{"location":"Helm-Charts/ceph-cluster-chart/#ceph-file-systems","title":"Ceph File Systems","text":"The name The name of the CephFileSystem ceph-filesystem spec The CephFileSystem spec, see the CephFilesystem CRD documentation. see values.yaml storageClass.enabled Whether a storage class is deployed alongside the CephFileSystem true storageClass.name The name of the storage class ceph-filesystem storageClass.annotations Additional storage class annotations {} storageClass.labels Additional storage class labels {} storageClass.pool The name of Data Pool, without the filesystem name prefix data0 storageClass.parameters See Shared Filesystem documentation or the helm values.yaml for suitable values see values.yaml storageClass.reclaimPolicy The default Reclaim Policy to apply to PVCs created with this storage class. Delete storageClass.mountOptions Specifies the mount options for storageClass [] "},{"location":"Helm-Charts/ceph-cluster-chart/#ceph-object-stores","title":"Ceph Object Stores","text":"The name The name of the CephObjectStore ceph-objectstore spec The CephObjectStore spec, see the CephObjectStore CRD documentation. see values.yaml storageClass.enabled Whether a storage class is deployed alongside the CephObjectStore true storageClass.name The name of the storage class ceph-bucket storageClass.annotations Additional storage class annotations {} storageClass.labels Additional storage class labels {} storageClass.parameters See Object Store storage class documentation or the helm values.yaml for suitable values see values.yaml storageClass.reclaimPolicy The default Reclaim Policy to apply to PVCs created with this storage class. Delete ingress.enabled Enable an ingress for the object store false ingress.annotations Ingress annotations {} ingress.host.name Ingress hostname \"\" ingress.host.path Ingress path prefix / ingress.tls Ingress tls / ingress.ingressClassName Ingress tls \"\" "},{"location":"Helm-Charts/ceph-cluster-chart/#existing-clusters","title":"Existing Clusters","text":"If you have an existing CephCluster CR that was created without the helm chart and you want the helm chart to start managing the cluster:
To deploy from a local build from your development environment, install the chart from the local checkout of the repository instead of the published channel.
Uninstalling the Chart
To see the currently installed Rook chart, list the Helm releases in the cluster namespace; to uninstall/delete the rook-ceph-cluster chart deployment, uninstall that release. The command removes all the Kubernetes components associated with the chart and deletes the release. Removing the cluster chart does not remove the Rook operator. In addition, all data on hosts in the Rook data directory (/var/lib/rook by default) is kept. See the teardown documentation for more information.
Helm Charts Overview
Rook has published the following Helm charts for the Ceph storage provider:
The Helm charts are intended to simplify deployment and upgrades. Configuring the Rook resources without Helm is also fully supported by creating the manifests directly. "},{"location":"Helm-Charts/operator-chart/","title":"Ceph Operator Helm Chart","text":"Installs rook to create, configure, and manage Ceph clusters on Kubernetes. "},{"location":"Helm-Charts/operator-chart/#introduction","title":"Introduction","text":"This chart bootstraps a rook-ceph-operator deployment on a Kubernetes cluster using the Helm package manager. "},{"location":"Helm-Charts/operator-chart/#prerequisites","title":"Prerequisites","text":"
See the Helm support matrix for more details. "},{"location":"Helm-Charts/operator-chart/#installing","title":"Installing","text":"The Ceph Operator helm chart will install the basic components necessary to create a storage platform for your Kubernetes cluster.
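A minimal sketch of installing the operator chart from the release channel, assuming the rook-ceph namespace and an optional customized values.yaml:

```bash
helm repo add rook-release https://charts.rook.io/release
helm install --create-namespace --namespace rook-ceph rook-ceph rook-release/rook-ceph -f values.yaml
```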
The Rook currently publishes builds of the Ceph operator to the The release channel is the most recent release of Rook that is considered stable for the community. For example settings, see the next section or values.yaml "},{"location":"Helm-Charts/operator-chart/#configuration","title":"Configuration","text":"The following table lists the configurable parameters of the rook-operator chart and their default values. Parameter Description DefaultallowLoopDevices If true, loop devices are allowed to be used for osds in test clusters false annotations Pod annotations {} cephCommandsTimeoutSeconds The timeout for ceph commands in seconds \"15\" containerSecurityContext Set the container security context for the operator {\"capabilities\":{\"drop\":[\"ALL\"]},\"runAsGroup\":2016,\"runAsNonRoot\":true,\"runAsUser\":2016} crds.enabled Whether the helm chart should create and update the CRDs. If false, the CRDs must be managed independently with deploy/examples/crds.yaml. WARNING Only set during first deployment. If later disabled the cluster may be DESTROYED. If the CRDs are deleted in this case, see the disaster recovery guide to restore them. true csi.allowUnsupportedVersion Allow starting an unsupported ceph-csi image false csi.attacher.repository Kubernetes CSI Attacher image repository \"registry.k8s.io/sig-storage/csi-attacher\" csi.attacher.tag Attacher image tag \"v4.6.1\" csi.cephFSAttachRequired Whether to skip any attach operation altogether for CephFS PVCs. See more details here. If cephFSAttachRequired is set to false it skips the volume attachments and makes the creation of pods using the CephFS PVC fast. WARNING It's highly discouraged to use this for CephFS RWO volumes. Refer to this issue for more details. true csi.cephFSFSGroupPolicy Policy for modifying a volume's ownership or permissions when the CephFS PVC is being mounted. supported values are documented at https://kubernetes-csi.github.io/docs/support-fsgroup.html \"File\" csi.cephFSKernelMountOptions Set CephFS Kernel mount options to use https://docs.ceph.com/en/latest/man/8/mount.ceph/#options. Set to \"ms_mode=secure\" when connections.encrypted is enabled in CephCluster CR nil csi.cephFSPluginUpdateStrategy CSI CephFS plugin daemonset update strategy, supported values are OnDelete and RollingUpdate RollingUpdate csi.cephFSPluginUpdateStrategyMaxUnavailable A maxUnavailable parameter of CSI cephFS plugin daemonset update strategy. 1 csi.cephcsi.repository Ceph CSI image repository \"quay.io/cephcsi/cephcsi\" csi.cephcsi.tag Ceph CSI image tag \"v3.12.2\" csi.cephfsLivenessMetricsPort CSI CephFS driver metrics port 9081 csi.cephfsPodLabels Labels to add to the CSI CephFS Deployments and DaemonSets Pods nil csi.clusterName Cluster name identifier to set as metadata on the CephFS subvolume and RBD images. 
This will be useful in cases like for example, when two container orchestrator clusters (Kubernetes/OCP) are using a single ceph cluster nil csi.csiAddons.enabled Enable CSIAddons false csi.csiAddons.repository CSIAddons sidecar image repository \"quay.io/csiaddons/k8s-sidecar\" csi.csiAddons.tag CSIAddons sidecar image tag \"v0.10.0\" csi.csiAddonsPort CSI Addons server port 9070 csi.csiCephFSPluginResource CEPH CSI CephFS plugin resource requirement list see values.yaml csi.csiCephFSPluginVolume The volume of the CephCSI CephFS plugin DaemonSet nil csi.csiCephFSPluginVolumeMount The volume mounts of the CephCSI CephFS plugin DaemonSet nil csi.csiCephFSProvisionerResource CEPH CSI CephFS provisioner resource requirement list see values.yaml csi.csiDriverNamePrefix CSI driver name prefix for cephfs, rbd and nfs. namespace name where rook-ceph operator is deployed csi.csiLeaderElectionLeaseDuration Duration in seconds that non-leader candidates will wait to force acquire leadership. 137s csi.csiLeaderElectionRenewDeadline Deadline in seconds that the acting leader will retry refreshing leadership before giving up. 107s csi.csiLeaderElectionRetryPeriod Retry period in seconds the LeaderElector clients should wait between tries of actions. 26s csi.csiNFSPluginResource CEPH CSI NFS plugin resource requirement list see values.yaml csi.csiNFSProvisionerResource CEPH CSI NFS provisioner resource requirement list see values.yaml csi.csiRBDPluginResource CEPH CSI RBD plugin resource requirement list see values.yaml csi.csiRBDPluginVolume The volume of the CephCSI RBD plugin DaemonSet nil csi.csiRBDPluginVolumeMount The volume mounts of the CephCSI RBD plugin DaemonSet nil csi.csiRBDProvisionerResource CEPH CSI RBD provisioner resource requirement list csi-omap-generator resources will be applied only if enableOMAPGenerator is set to true see values.yaml csi.disableCsiDriver Disable the CSI driver. \"false\" csi.enableCSIEncryption Enable Ceph CSI PVC encryption support false csi.enableCSIHostNetwork Enable host networking for CSI CephFS and RBD nodeplugins. This may be necessary in some network configurations where the SDN does not provide access to an external cluster or there is significant drop in read/write performance true csi.enableCephfsDriver Enable Ceph CSI CephFS driver true csi.enableCephfsSnapshotter Enable Snapshotter in CephFS provisioner pod true csi.enableLiveness Enable Ceph CSI Liveness sidecar deployment false csi.enableMetadata Enable adding volume metadata on the CephFS subvolumes and RBD images. Not all users might be interested in getting volume/snapshot details as metadata on CephFS subvolume and RBD images. Hence enable metadata is false by default false csi.enableNFSSnapshotter Enable Snapshotter in NFS provisioner pod true csi.enableOMAPGenerator OMAP generator generates the omap mapping between the PV name and the RBD image which helps CSI to identify the rbd images for CSI operations. CSI_ENABLE_OMAP_GENERATOR needs to be enabled when we are using rbd mirroring feature. By default OMAP generator is disabled and when enabled, it will be deployed as a sidecar with CSI provisioner pod, to enable set it to true. false csi.enablePluginSelinuxHostMount Enable Host mount for /etc/selinux directory for Ceph CSI nodeplugins false csi.enableRBDSnapshotter Enable Snapshotter in RBD provisioner pod true csi.enableRbdDriver Enable Ceph CSI RBD driver true csi.enableVolumeGroupSnapshot Enable volume group snapshot feature. 
This feature is enabled by default as long as the necessary CRDs are available in the cluster. true csi.forceCephFSKernelClient Enable Ceph Kernel clients on kernel < 4.17. If your kernel does not support quotas for CephFS you may want to disable this setting. However, this will cause an issue during upgrades with the FUSE client. See the upgrade guide true csi.grpcTimeoutInSeconds Set GRPC timeout for csi containers (in seconds). It should be >= 120. If this value is not set or is invalid, it defaults to 150 150 csi.imagePullPolicy Image pull policy \"IfNotPresent\" csi.kubeApiBurst Burst to use while communicating with the kubernetes apiserver. nil csi.kubeApiQPS QPS to use while communicating with the kubernetes apiserver. nil csi.kubeletDirPath Kubelet root directory path (if the Kubelet uses a different path for the --root-dir flag) /var/lib/kubelet csi.logLevel Set logging level for cephCSI containers maintained by the cephCSI. Supported values from 0 to 5. 0 for general useful logs, 5 for trace level verbosity. 0 csi.nfs.enabled Enable the nfs csi driver false csi.nfsAttachRequired Whether to skip any attach operation altogether for NFS PVCs. See more details here. If cephFSAttachRequired is set to false it skips the volume attachments and makes the creation of pods using the NFS PVC fast. WARNING It's highly discouraged to use this for NFS RWO volumes. Refer to this issue for more details. true csi.nfsFSGroupPolicy Policy for modifying a volume's ownership or permissions when the NFS PVC is being mounted. supported values are documented at https://kubernetes-csi.github.io/docs/support-fsgroup.html \"File\" csi.nfsPluginUpdateStrategy CSI NFS plugin daemonset update strategy, supported values are OnDelete and RollingUpdate RollingUpdate csi.nfsPodLabels Labels to add to the CSI NFS Deployments and DaemonSets Pods nil csi.pluginNodeAffinity The node labels for affinity of the CephCSI RBD plugin DaemonSet 1 nil csi.pluginPriorityClassName PriorityClassName to be set on csi driver plugin pods \"system-node-critical\" csi.pluginTolerations Array of tolerations in YAML format which will be added to CephCSI plugin DaemonSet nil csi.provisioner.repository Kubernetes CSI provisioner image repository \"registry.k8s.io/sig-storage/csi-provisioner\" csi.provisioner.tag Provisioner image tag \"v5.0.1\" csi.provisionerNodeAffinity The node labels for affinity of the CSI provisioner deployment 1 nil csi.provisionerPriorityClassName PriorityClassName to be set on csi driver provisioner pods \"system-cluster-critical\" csi.provisionerReplicas Set replicas for csi provisioner deployment 2 csi.provisionerTolerations Array of tolerations in YAML format which will be added to CSI provisioner deployment nil csi.rbdAttachRequired Whether to skip any attach operation altogether for RBD PVCs. See more details here. If set to false it skips the volume attachments and makes the creation of pods using the RBD PVC fast. WARNING It's highly discouraged to use this for RWO volumes as it can cause data corruption. csi-addons operations like Reclaimspace and PVC Keyrotation will also not be supported if set to false since we'll have no VolumeAttachments to determine which node the PVC is mounted on. Refer to this issue for more details. true csi.rbdFSGroupPolicy Policy for modifying a volume's ownership or permissions when the RBD PVC is being mounted. 
supported values are documented at https://kubernetes-csi.github.io/docs/support-fsgroup.html \"File\" csi.rbdLivenessMetricsPort Ceph CSI RBD driver metrics port 8080 csi.rbdPluginUpdateStrategy CSI RBD plugin daemonset update strategy, supported values are OnDelete and RollingUpdate RollingUpdate csi.rbdPluginUpdateStrategyMaxUnavailable A maxUnavailable parameter of CSI RBD plugin daemonset update strategy. 1 csi.rbdPodLabels Labels to add to the CSI RBD Deployments and DaemonSets Pods nil csi.registrar.repository Kubernetes CSI registrar image repository \"registry.k8s.io/sig-storage/csi-node-driver-registrar\" csi.registrar.tag Registrar image tag \"v2.11.1\" csi.resizer.repository Kubernetes CSI resizer image repository \"registry.k8s.io/sig-storage/csi-resizer\" csi.resizer.tag Resizer image tag \"v1.11.1\" csi.serviceMonitor.enabled Enable ServiceMonitor for Ceph CSI drivers false csi.serviceMonitor.interval Service monitor scrape interval \"10s\" csi.serviceMonitor.labels ServiceMonitor additional labels {} csi.serviceMonitor.namespace Use a different namespace for the ServiceMonitor nil csi.sidecarLogLevel Set logging level for Kubernetes-csi sidecar containers. Supported values from 0 to 5. 0 for general useful logs (the default), 5 for trace level verbosity. 0 csi.snapshotter.repository Kubernetes CSI snapshotter image repository \"registry.k8s.io/sig-storage/csi-snapshotter\" csi.snapshotter.tag Snapshotter image tag \"v8.0.1\" csi.topology.domainLabels domainLabels define which node labels to use as domains for CSI nodeplugins to advertise their domains nil csi.topology.enabled Enable topology based provisioning false currentNamespaceOnly Whether the operator should watch cluster CRD in its own namespace or not false disableDeviceHotplug Disable automatic orchestration when new devices are discovered. false discover.nodeAffinity The node labels for affinity of discover-agent 1 nil discover.podLabels Labels to add to the discover pods nil discover.resources Add resources to discover daemon pods nil discover.toleration Toleration for the discover pods. Options: NoSchedule , PreferNoSchedule or NoExecute nil discover.tolerationKey The specific key of the taint to tolerate nil discover.tolerations Array of tolerations in YAML format which will be added to discover deployment nil discoverDaemonUdev Blacklist certain disks according to the regex provided. nil discoveryDaemonInterval Set the discovery daemon device discovery interval (default to 60m) \"60m\" enableDiscoveryDaemon Enable discovery daemon false enableOBCWatchOperatorNamespace Whether the OBC provisioner should watch on the operator namespace or not, if not the namespace of the cluster will be used true enforceHostNetwork Whether to create all Rook pods to run on the host network, for example in environments where a CNI is not enabled false hostpathRequiresPrivileged Runs Ceph Pods as privileged to be able to write to hostPaths in OpenShift with SELinux restrictions. false image.pullPolicy Image pull policy \"IfNotPresent\" image.repository Image \"docker.io/rook/ceph\" image.tag Image tag master imagePullSecrets imagePullSecrets option allow to pull docker images from private docker registry. Option will be passed to all service accounts. nil logLevel Global log level for the operator. Options: ERROR , WARNING , INFO , DEBUG \"INFO\" monitoring.enabled Enable monitoring. Requires Prometheus to be pre-installed. 
Enabling will also create RBAC rules to allow Operator to create ServiceMonitors false nodeSelector Kubernetes nodeSelector to add to the Deployment. {} obcProvisionerNamePrefix Specify the prefix for the OBC provisioner in place of the cluster namespace ceph cluster namespace priorityClassName Set the priority class for the rook operator deployment if desired nil pspEnable If true, create & use PSP resources false rbacAggregate.enableOBCs If true, create a ClusterRole aggregated to user facing roles for objectbucketclaims false rbacEnable If true, create & use RBAC resources true resources Pod resource requests & limits {\"limits\":{\"memory\":\"512Mi\"},\"requests\":{\"cpu\":\"200m\",\"memory\":\"128Mi\"}} revisionHistoryLimit The revision history limit for all pods created by Rook. If blank, the K8s default is 10. nil scaleDownOperator If true, scale down the rook operator. This is useful for administrative actions where the rook operator must be scaled down, while using gitops style tooling to deploy your helm charts. false tolerations List of Kubernetes tolerations to add to the Deployment. [] unreachableNodeTolerationSeconds Delay to use for the node.kubernetes.io/unreachable pod failure toleration to override the Kubernetes default of 5 minutes 5 useOperatorHostNetwork If true, run rook operator on the host network nil "},{"location":"Helm-Charts/operator-chart/#development-build","title":"Development Build","text":"To deploy from a local build from your development environment:
"},{"location":"Helm-Charts/operator-chart/#uninstalling-the-chart","title":"Uninstalling the Chart","text":"To see the currently installed Rook chart: To uninstall/delete the The command removes all the Kubernetes components associated with the chart and deletes the release. After uninstalling you may want to clean up the CRDs as described on the teardown documentation.
Rook provides the following clean up options:
To tear down the cluster, the following resources need to be cleaned up:
If the default namespaces or paths such as rook-ceph and /var/lib/rook have been changed in the example yaml files, substitute those values throughout these instructions. If tearing down a cluster frequently for development purposes, it is instead recommended to use an environment such as Minikube that can easily be reset without worrying about any of these steps.
Delete the Block and File artifacts
First clean up the resources from applications that consume the Rook storage. These commands will clean up the resources from the example application block and file walkthroughs (unmount volumes, delete volume claims, etc.). Important: After applications have been cleaned up, the Rook cluster can be removed. It is important to delete applications before removing the Rook operator and Ceph cluster; otherwise, volumes may hang and nodes may require a restart.
Delete the CephCluster CRD
Warning: DATA WILL BE PERMANENTLY DELETED AFTER DELETING THE CephCluster CR.
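A hedged sketch of removing the CephCluster once the application resources above have been cleaned up, assuming the default rook-ceph namespace and cluster name:

```bash
# Delete the CephCluster custom resource (this alone does not wipe data on hosts or devices)
kubectl -n rook-ceph delete cephcluster rook-ceph

# Confirm the cluster CR is gone before removing the operator and CRDs
kubectl -n rook-ceph get cephcluster
```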
Note The cleanup jobs might not start if the resources created on top of Rook Cluster are not deleted completely. See deleting block and file artifacts "},{"location":"Storage-Configuration/ceph-teardown/#delete-the-operator-resources","title":"Delete the Operator Resources","text":"Remove the Rook operator, RBAC, and CRDs, and the "},{"location":"Storage-Configuration/ceph-teardown/#delete-the-data-on-hosts","title":"Delete the data on hosts","text":"Attention The final cleanup step requires deleting files on each host in the cluster. All files under the If the Connect to each machine and delete the namespace directory under Disks on nodes used by Rook for OSDs can be reset to a usable state. Note that these scripts are not one-size-fits-all. Please use them with discretion to ensure you are not removing data unrelated to Rook. A single disk can usually be cleared with some or all of the steps below. Ceph can leave LVM and device mapper data on storage drives, preventing them from being redeployed. These steps can clean former Ceph drives for reuse. Note that this only needs to be run once on each node. If you have only one Rook cluster and all Ceph disks are being wiped, run the following command. If disks are still reported locked, rebooting the node often helps clear LVM-related holds on disks. If there are multiple Ceph clusters and some disks are not wiped yet, it is necessary to manually determine which disks map to which device mapper devices. "},{"location":"Storage-Configuration/ceph-teardown/#troubleshooting","title":"Troubleshooting","text":"The most common issue cleaning up the cluster is that the If a pod is still terminating, consider forcefully terminating the pod ( If the cluster CRD still exists even though it has been deleted, see the next section on removing the finalizer. "},{"location":"Storage-Configuration/ceph-teardown/#removing-the-cluster-crd-finalizer","title":"Removing the Cluster CRD Finalizer","text":"When a Cluster CRD is created, a finalizer is added automatically by the Rook operator. The finalizer will allow the operator to ensure that before the cluster CRD is deleted, all block and file mounts will be cleaned up. Without proper cleanup, pods consuming the storage will be hung indefinitely until a system reboot. The operator is responsible for removing the finalizer after the mounts have been cleaned up. If for some reason the operator is not able to remove the finalizer (i.e., the operator is not running anymore), delete the finalizer manually with the following command: If the namespace is still stuck in Terminating state, check which resources are holding up the deletion and remove their finalizers as well: "},{"location":"Storage-Configuration/ceph-teardown/#remove-critical-resource-finalizers","title":"Remove critical resource finalizers","text":"Rook adds a finalizer The operator is responsible for removing the finalizers when a CephCluster is deleted. If the operator is not able to remove the finalizers (i.e., the operator is not running anymore), remove the finalizers manually: "},{"location":"Storage-Configuration/ceph-teardown/#force-delete-resources","title":"Force Delete Resources","text":"To keep your data safe in the cluster, Rook disallows deleting critical cluster resources by default. To override this behavior and force delete a specific custom resource, add the annotation For example, run the following commands to clean the Once the cleanup job is completed successfully, Rook will remove the finalizers from the deleted custom resource. 
This cleanup is supported only for the following custom resources:
CephFilesystemSubVolumeGroup: CSI-stored RADOS OMAP details for PVCs/volumesnapshots, subvolume snapshots, subvolume clones, and subvolumes
CephBlockPoolRadosNamespace: Images and snapshots in the RADOS namespace
CephBlockPool: Images and snapshots in the BlockPool
Ceph Configuration
These examples show how to perform advanced configuration tasks on your Rook storage cluster.
Prerequisites
Most of the examples make use of the ceph client command from the Rook toolbox. The Kubernetes based examples assume Rook OSD pods are in the rook-ceph namespace. If you wish to deploy the Rook Operator and/or Ceph clusters to namespaces other than the default rook-ceph, update the namespace references in the example manifests accordingly. If the operator namespace is different from the cluster namespace, the operator namespace must be created before running the steps below. The cluster namespace does not need to be created first, as it will be created automatically during deployment. This will help you manage namespaces more easily, but you should still make sure the resources are configured to your liking. Also see the CSI driver documentation to update the csi provisioner names in the storageclass and volumesnapshotclass.
Deploying a second cluster
If you wish to create a new CephCluster in a separate namespace, you can easily do so by modifying the second-cluster example resources. This will create all the necessary RBACs as well as the new namespace. Note that the accompanying script makes assumptions about the existing deployment; review them before running it.
Log Collection
All Rook logs can be collected in a Kubernetes environment with the following command: this gets the logs for every container in every Rook pod and then compresses them into an archive for each pod.
OSD Information
Keeping track of OSDs and their underlying storage devices can be difficult. The following scripts will clear things up quickly.
Kubernetes
The output should look something like this.
Separate Storage Groups
Attention: Manually creating separate storage groups in this way is deprecated; use the deviceClass property on the pool definitions instead. By default Rook/Ceph puts all storage under one replication rule in the CRUSH Map, which provides the maximum amount of storage capacity for a cluster. If you would like to use different storage endpoints for different purposes, you'll have to create separate storage groups. In the following example we will separate SSD drives from spindle-based drives, a common practice for those looking to target certain workloads onto faster (database) or slower (file archive) storage.
Configuring Pools
Placement Group Sizing: Note: Since Ceph Nautilus (v14.x), you can use the Ceph MGR pg_autoscaler module to auto-scale the PGs as needed. The general rules for deciding how many PGs your pool(s) should contain are:
If you have more than 50 OSDs, you need to understand the tradeoffs and how to calculate the pg_num value yourself. For calculating pg_num yourself, please make use of the pgcalc tool.
Setting PG Count
Be sure to read the placement group sizing section before changing the number of PGs.
Custom ceph.conf Settings
Info: The advised method for controlling Ceph configuration is to use the Ceph CLI or the Ceph dashboard, because this offers the most flexibility. Setting configs via Ceph's CLI requires that at least one mon be available for the configs to be set, and setting configs via dashboard requires at least one mgr to be available. Ceph also has a number of very advanced settings that cannot be modified easily via the CLI or dashboard. In order to set configurations before monitors are available or to set advanced configuration settings, the rook-config-override ConfigMap exists, and its config field can be set with the contents of a ceph.conf file. Warning: Rook performs no validation on the config, so the validity of the settings is the user's responsibility. If the rook-config-override ConfigMap is modified after the daemons are already running, each affected daemon must be restarted to pick up the change.
After the pod restart, the new settings should be in effect. Note that if the ConfigMap in the Ceph cluster's namespace is created before the cluster is created, the daemons will pick up the settings at first launch. To automate the restart of the Ceph daemon pods, you will need to trigger an update to the pod specs. The simplest way to trigger the update is to add annotations or labels to the CephCluster CR for the daemons you want to restart. The operator will then proceed with a rolling update, similar to any other update to the cluster. "},{"location":"Storage-Configuration/Advanced/ceph-configuration/#example","title":"Example","text":"In this example we will set the default pool Warning Modify Ceph settings carefully. You are leaving the sandbox tested by Rook. Changing the settings could result in unhealthy daemons or even data loss if used incorrectly. When the Rook Operator creates a cluster, a placeholder ConfigMap is created that will allow you to override Ceph configuration settings. When the daemon pods are started, the settings specified in this ConfigMap will be merged with the default settings generated by Rook. The default override settings are blank. Cutting out the extraneous properties, we would see the following defaults after creating a cluster: To apply your desired configuration, you will need to update this ConfigMap. The next time the daemon pod(s) start, they will use the updated configs. Modify the settings and save. Each line you add should be indented from the "},{"location":"Storage-Configuration/Advanced/ceph-configuration/#custom-csi-cephconf-settings","title":"Custom CSI ceph.conf Settings","text":"Warning It is highly recommended to use the default setting that comes with CephCSI and this can only be used when absolutely necessary. The If the After the CSI pods are restarted, the new settings should be in effect. "},{"location":"Storage-Configuration/Advanced/ceph-configuration/#example-csi-cephconf-settings","title":"Example CSIceph.conf Settings","text":"In this Example we will set the Warning Modify Ceph settings carefully to avoid modifying the default configuration. Changing the settings could result in unexpected results if used incorrectly. Restart the Rook operator pod and wait for CSI pods to be recreated. "},{"location":"Storage-Configuration/Advanced/ceph-configuration/#osd-crush-settings","title":"OSD CRUSH Settings","text":"A useful view of the CRUSH Map is generated with the following command: In this section we will be tweaking some of the values seen in the output. "},{"location":"Storage-Configuration/Advanced/ceph-configuration/#osd-weight","title":"OSD Weight","text":"The CRUSH weight controls the ratio of data that should be distributed to each OSD. This also means a higher or lower amount of disk I/O operations for an OSD with higher/lower weight, respectively. By default OSDs get a weight relative to their storage capacity, which maximizes overall cluster capacity by filling all drives at the same rate, even if drive sizes vary. This should work for most use-cases, but the following situations could warrant weight changes:
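A hedged example of the rook-config-override ConfigMap described above (the ConfigMap is created by the operator as a blank placeholder; the [global] option shown is illustrative, not a recommendation):

```bash
# Overwrite the placeholder ConfigMap with a custom ceph.conf snippet
kubectl -n rook-ceph apply -f - <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
  name: rook-config-override
  namespace: rook-ceph
data:
  config: |
    [global]
    osd pool default size = 2
EOF

# Restart the affected Ceph daemon pods (e.g. mons, OSDs) so they pick up the new settings
```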
This example sets the weight of osd.0 which is 600GiB "},{"location":"Storage-Configuration/Advanced/ceph-configuration/#osd-primary-affinity","title":"OSD Primary Affinity","text":"When pools are set with a size setting greater than one, data is replicated between nodes and OSDs. For every chunk of data a Primary OSD is selected to be used for reading that data to be sent to clients. You can control how likely it is for an OSD to become a Primary using the Primary Affinity setting. This is similar to the OSD weight setting, except it only affects reads on the storage device, not capacity or writes. In this example we will ensure that "},{"location":"Storage-Configuration/Advanced/ceph-configuration/#osd-dedicated-network","title":"OSD Dedicated Network","text":"Tip This documentation is left for historical purposes. It is still valid, but Rook offers native support for this feature via the CephCluster network configuration. It is possible to configure ceph to leverage a dedicated network for the OSDs to communicate across. A useful overview is the Ceph Networks section of the Ceph documentation. If you declare a cluster network, OSDs will route heartbeat, object replication, and recovery traffic over the cluster network. This may improve performance compared to using a single network, especially when slower network technologies are used. The tradeoff is additional expense and subtle failure modes. Two changes are necessary to the configuration to enable this capability: "},{"location":"Storage-Configuration/Advanced/ceph-configuration/#use-hostnetwork-in-the-cluster-configuration","title":"Use hostNetwork in the cluster configuration","text":"Enable the Important Changing this setting is not supported in a running Rook cluster. Host networking should be configured when the cluster is first created. "},{"location":"Storage-Configuration/Advanced/ceph-configuration/#define-the-subnets-to-use-for-public-and-private-osd-networks","title":"Define the subnets to use for public and private OSD networks","text":"Edit the In the editor, add a custom configuration to instruct ceph which subnet is the public network and which subnet is the private network. For example: After applying the updated rook-config-override configmap, it will be necessary to restart the OSDs by deleting the OSD pods in order to apply the change. Restart the OSD pods by deleting them, one at a time, and running ceph -s between each restart to ensure the cluster goes back to \"active/clean\" state. "},{"location":"Storage-Configuration/Advanced/ceph-configuration/#phantom-osd-removal","title":"Phantom OSD Removal","text":"If you have OSDs in which are not showing any disks, you can remove those \"Phantom OSDs\" by following the instructions below. To check for \"Phantom OSDs\", you can run (example output): The host Now to remove it, use the ID in the first column of the output and replace To recheck that the Phantom OSD was removed, re-run the following command and check if the OSD with the ID doesn't show up anymore: "},{"location":"Storage-Configuration/Advanced/ceph-configuration/#auto-expansion-of-osds","title":"Auto Expansion of OSDs","text":""},{"location":"Storage-Configuration/Advanced/ceph-configuration/#prerequisites-for-auto-expansion-of-osds","title":"Prerequisites for Auto Expansion of OSDs","text":"1) A PVC-based cluster deployed in dynamic provisioning environment with a 2) Create the Rook Toolbox. 
Note: Prometheus Operator and Prometheus instances (see the Prometheus instances section of the monitoring guide) are prerequisites that are created by the auto-grow-storage script.
To scale OSDs Vertically
Run the following script to auto-grow the size of OSDs on a PVC-based Rook cluster whenever the OSDs have reached the storage near-full threshold.
For example, if you need to increase the size of OSD by 30% and max disk size is 1Ti "},{"location":"Storage-Configuration/Advanced/ceph-configuration/#to-scale-osds-horizontally","title":"To scale OSDs Horizontally","text":"Run the following script to auto-grow the number of OSDs on a PVC-based Rook cluster whenever the OSDs have reached the storage near-full threshold. Count of OSD represents the number of OSDs you need to add and maxCount represents the number of disks a storage cluster will support. For example, if you need to increase the number of OSDs by 3 and maxCount is 10 "},{"location":"Storage-Configuration/Advanced/ceph-mon-health/","title":"Monitor Health","text":"Failure in a distributed system is to be expected. Ceph was designed from the ground up to deal with the failures of a distributed system. At the next layer, Rook was designed from the ground up to automate recovery of Ceph components that traditionally required admin intervention. Monitor health is the most critical piece of the equation that Rook actively monitors. If they are not in a good state, the operator will take action to restore their health and keep your cluster protected from disaster. The Ceph monitors (mons) are the brains of the distributed cluster. They control all of the metadata that is necessary to store and retrieve your data as well as keep it safe. If the monitors are not in a healthy state you will risk losing all the data in your system. "},{"location":"Storage-Configuration/Advanced/ceph-mon-health/#monitor-identity","title":"Monitor Identity","text":"Each monitor in a Ceph cluster has a static identity. Every component in the cluster is aware of the identity, and that identity must be immutable. The identity of a mon is its IP address. To have an immutable IP address in Kubernetes, Rook creates a K8s service for each monitor. The clusterIP of the service will act as the stable identity. When a monitor pod starts, it will bind to its podIP and it will expect communication to be via its service IP address. "},{"location":"Storage-Configuration/Advanced/ceph-mon-health/#monitor-quorum","title":"Monitor Quorum","text":"Multiple mons work together to provide redundancy by each keeping a copy of the metadata. A variation of the distributed algorithm Paxos is used to establish consensus about the state of the cluster. Paxos requires a super-majority of mons to be running in order to establish quorum and perform operations in the cluster. If the majority of mons are not running, quorum is lost and nothing can be done in the cluster. "},{"location":"Storage-Configuration/Advanced/ceph-mon-health/#how-many-mons","title":"How many mons?","text":"Most commonly a cluster will have three mons. This would mean that one mon could go down and allow the cluster to remain healthy. You would still have 2/3 mons running to give you consensus in the cluster for any operation. For highest availability, an odd number of mons is required. Fifty percent of mons will not be sufficient to maintain quorum. If you had two mons and one of them went down, you would have 1/2 of quorum. Since that is not a super-majority, the cluster would have to wait until the second mon is up again. Rook allows an even number of mons for higher durability. See the disaster recovery guide if quorum is lost and to recover mon quorum from a single mon. The number of mons to create in a cluster depends on your tolerance for losing a node. If you have 1 mon zero nodes can be lost to maintain quorum. 
With 3 mons one node can be lost, and with 5 mons two nodes can be lost. Because the Rook operator will automatically start a new monitor if one dies, you typically only need three mons. The more mons you have, the more overhead there will be to make a change to the cluster, which could become a performance issue in a large cluster. "},{"location":"Storage-Configuration/Advanced/ceph-mon-health/#mitigating-monitor-failure","title":"Mitigating Monitor Failure","text":"Whatever the reason that a mon may fail (power failure, software crash, software hang, etc), there are several layers of mitigation in place to help recover the mon. It is always better to bring an existing mon back up than to failover to bring up a new mon. The Rook operator creates a mon with a Deployment to ensure that the mon pod will always be restarted if it fails. If a mon pod stops for any reason, Kubernetes will automatically start the pod up again. In order for a mon to support a pod/node restart, the mon metadata is persisted to disk, either under the If a mon is unhealthy and the K8s pod restart or liveness probe are not sufficient to bring a mon back up, the operator will make the decision to terminate the unhealthy monitor deployment and bring up a new monitor with a new identity. This is an operation that must be done while mon quorum is maintained by other mons in the cluster. The operator checks for mon health every 45 seconds. If a monitor is down, the operator will wait 10 minutes before failing over the unhealthy mon. These two intervals can be configured as parameters to the CephCluster CR (see below). If the intervals are too short, it could be unhealthy if the mons are failed over too aggressively. If the intervals are too long, the cluster could be at risk of losing quorum if a new monitor is not brought up before another mon fails. If you want to force a mon to failover for testing or other purposes, you can scale down the mon deployment to 0, then wait for the timeout. Note that the operator may scale up the mon again automatically if the operator is restarted or if a full reconcile is triggered, such as when the CephCluster CR is updated. If the mon pod is in pending state and couldn't be assigned to a node (say, due to node drain), then the operator will wait for the timeout again before the mon failover. So the timeout waiting for the mon failover will be doubled in this case. To disable monitor automatic failover, the Rook will create mons with pod names such as mon-a, mon-b, and mon-c. Let's say mon-b had an issue and the pod failed. After a failover, you will see the unhealthy mon removed and a new mon added such as mon-d. A fully healthy mon quorum is now running again. From the toolbox we can verify the status of the health mon quorum: "},{"location":"Storage-Configuration/Advanced/ceph-mon-health/#automatic-monitor-failover","title":"Automatic Monitor Failover","text":"Rook will automatically fail over the mons when the following settings are updated in the CephCluster CR:
Ceph Object Storage Daemons (OSDs) are the heart and soul of the Ceph storage platform. Each OSD manages a local device and together they provide the distributed storage. Rook will automate creation and management of OSDs to hide the complexity based on the desired state in the CephCluster CR as much as possible. This guide will walk through some of the scenarios to configure OSDs where more configuration may be required. "},{"location":"Storage-Configuration/Advanced/ceph-osd-mgmt/#osd-health","title":"OSD Health","text":"The rook-ceph-tools pod provides a simple environment to run Ceph tools. The Once the is created, connect to the pod to execute the "},{"location":"Storage-Configuration/Advanced/ceph-osd-mgmt/#add-an-osd","title":"Add an OSD","text":"The QuickStart Guide will provide the basic steps to create a cluster and start some OSDs. For more details on the OSD settings also see the Cluster CRD documentation. If you are not seeing OSDs created, see the Ceph Troubleshooting Guide. To add more OSDs, Rook will automatically watch for new nodes and devices being added to your cluster. If they match the filters or other settings in the In more dynamic environments where storage can be dynamically provisioned with a raw block storage provider, the OSDs can be backed by PVCs. See the To add more OSDs, you can either increase the To remove an OSD due to a failed disk or other re-configuration, consider the following to ensure the health of the data through the removal process:
If all the PGs are Update your CephCluster CR. Depending on your CR settings, you may need to remove the device from the list or update the device filter. If you are using Important On host-based clusters, you may need to stop the Rook Operator while performing OSD removal steps in order to prevent Rook from detecting the old OSD and trying to re-create it before the disk is wiped or removed. To stop the Rook Operator, run: You must perform steps below to (1) purge the OSD and either (2.a) delete the underlying data or (2.b)replace the disk before starting the Rook Operator again. Once you have done that, you can start the Rook operator again with: "},{"location":"Storage-Configuration/Advanced/ceph-osd-mgmt/#pvc-based-cluster","title":"PVC-based cluster","text":"To reduce the storage in your cluster or remove a failed OSD on a PVC:
If you later increase the count in the device set, note that the operator will create PVCs with the highest index that is not currently in use by existing OSD PVCs. "},{"location":"Storage-Configuration/Advanced/ceph-osd-mgmt/#confirm-the-osd-is-down","title":"Confirm the OSD is down","text":"If you want to remove an unhealthy OSD, the osd pod may be in an error state such as "},{"location":"Storage-Configuration/Advanced/ceph-osd-mgmt/#purge-the-osd-with-kubectl","title":"Purge the OSD with kubectl","text":"Note The "},{"location":"Storage-Configuration/Advanced/ceph-osd-mgmt/#purge-the-osd-with-a-job","title":"Purge the OSD with a Job","text":"OSD removal can be automated with the example found in the rook-ceph-purge-osd job. In the osd-purge.yaml, change the
If you want to remove OSDs by hand, continue with the following sections. However, we recommend you use the above-mentioned steps to avoid operation errors. "},{"location":"Storage-Configuration/Advanced/ceph-osd-mgmt/#purge-the-osd-manually","title":"Purge the OSD manually","text":"If the OSD purge job fails or you need fine-grained control of the removal, here are the individual commands that can be run from the toolbox.
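A rough sketch of that sequence is shown below, assuming the OSD being removed has ID 0 (adjust to your cluster) and that its Kubernetes Deployment has already been scaled down. These are standard Ceph CLI commands run from the toolbox; treat the ordering as illustrative and confirm against the official purge steps for your Ceph version.

```console
# Mark the OSD out so Ceph begins migrating its data to the other OSDs
ceph osd out osd.0

# Wait for rebalancing to finish, then confirm the OSD is safe to remove
ceph status
ceph osd safe-to-destroy osd.0

# Remove the OSD from the CRUSH map, delete its auth key, and remove it from the cluster
ceph osd purge 0 --yes-i-really-mean-it

# Verify the OSD no longer appears
ceph osd tree
```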
The operator can automatically remove OSD deployments that are considered "safe-to-destroy" by Ceph. After the steps above, the OSD will be considered safe to remove since the data has all been moved to other OSDs. However, this will only be done automatically by the operator if you have this setting in the cluster CR: Otherwise, you will need to delete the deployment directly: In a PVC-based cluster, remove the orphaned PVC, if necessary.
Delete the underlying data¶
If you want to clean the device where the OSD was running, see the instructions to wipe a disk in the Cleaning up a Cluster topic.
Replace an OSD¶
To replace a disk that has failed:
Note The OSD might have a different ID than the previous OSD that was replaced. "},{"location":"Storage-Configuration/Advanced/configuration/","title":"Configuration","text":"For most any Ceph cluster, the user will want to--and may need to--change some Ceph configurations. These changes often may be warranted in order to alter performance to meet SLAs or to update default data resiliency settings. Warning Modify Ceph settings carefully, and review the Ceph configuration documentation before making any changes. Changing the settings could result in unhealthy daemons or even data loss if used incorrectly. "},{"location":"Storage-Configuration/Advanced/configuration/#required-configurations","title":"Required configurations","text":"Rook and Ceph both strive to make configuration as easy as possible, but there are some configuration options which users are well advised to consider for any production cluster. "},{"location":"Storage-Configuration/Advanced/configuration/#default-pg-and-pgp-counts","title":"Default PG and PGP counts","text":"The number of PGs and PGPs can be configured on a per-pool basis, but it is advised to set default values that are appropriate for your Ceph cluster. Appropriate values depend on the number of OSDs the user expects to have backing each pool. These can be configured by declaring pg_num and pgp_num parameters under CephBlockPool resource. For determining the right value for pg_num please refer placement group sizing In this example configuration, 128 PGs are applied to the pool: Ceph OSD and Pool config docs provide detailed information about how to tune these parameters. Nautilus introduced the PG auto-scaler mgr module capable of automatically managing PG and PGP values for pools. Please see Ceph New in Nautilus: PG merging and autotuning for more information about this module. The To disable this module, in the CephCluster CR: With that setting, the autoscaler will be enabled for all new pools. If you do not desire to have the autoscaler enabled for all new pools, you will need to use the Rook toolbox to enable the module and enable the autoscaling on individual pools. "},{"location":"Storage-Configuration/Advanced/configuration/#specifying-configuration-options","title":"Specifying configuration options","text":""},{"location":"Storage-Configuration/Advanced/configuration/#toolbox-ceph-cli","title":"Toolbox + Ceph CLI","text":"The most recommended way of configuring Ceph is to set Ceph's configuration directly. The first method for doing so is to use Ceph's CLI from the Rook toolbox pod. Using the toolbox pod is detailed here. From the toolbox, the user can change Ceph configurations, enable manager modules, create users and pools, and much more. "},{"location":"Storage-Configuration/Advanced/configuration/#ceph-dashboard","title":"Ceph Dashboard","text":"The Ceph Dashboard, examined in more detail here, is another way of setting some of Ceph's configuration directly. Configuration by the Ceph dashboard is recommended with the same priority as configuration via the Ceph CLI (above). "},{"location":"Storage-Configuration/Advanced/configuration/#advanced-configuration-via-cephconf-override-configmap","title":"Advanced configuration via ceph.conf override ConfigMap","text":"Setting configs via Ceph's CLI requires that at least one mon be available for the configs to be set, and setting configs via dashboard requires at least one mgr to be available. 
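When neither a mon nor a mgr is available, the ceph.conf override ConfigMap covered in this section can carry raw ceph.conf settings instead. A minimal sketch is shown below, assuming the cluster runs in the default rook-ceph namespace; the setting under `config` is only an illustrative example, not a recommendation.

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: rook-config-override   # name expected by Rook in the cluster namespace
  namespace: rook-ceph
data:
  config: |
    [global]
    # illustrative setting only; use values appropriate for your cluster
    osd_pool_default_size = 3
```

Daemons read this file when their pods start, so changes typically only take effect after the affected daemons are restarted.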
Ceph may also have a small number of very advanced settings that aren't able to be modified easily via the CLI or dashboard. The least recommended method for configuring Ceph is intended as a last-resort fallback in situations like these. This is covered in detail here.
Key Management System¶
Rook has the ability to encrypt OSDs of clusters running on PVC via the encryption flag in the storageClassDeviceSets of the CephCluster CR.
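As a sketch only, the relevant portion of a CephCluster CR for OSDs on PVC looks roughly like the following; the device set name, count, size, and storageClassName are placeholders chosen for illustration.

```yaml
spec:
  storage:
    storageClassDeviceSets:
      - name: set1
        count: 3
        encrypted: true          # encrypt the OSDs created from this device set
        volumeClaimTemplates:
          - metadata:
              name: data
            spec:
              accessModes:
                - ReadWriteOnce
              resources:
                requests:
                  storage: 10Gi
              storageClassName: gp2   # placeholder: the underlying block StorageClass
              volumeMode: Block
```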
Note Currently key rotation is supported when the Key Encryption Keys are stored in a Kubernetes Secret or Vault KMS. Supported KMS providers:
Rook supports storing OSD encryption keys in HashiCorp Vault KMS.
Authentication methods¶
Rook supports two authentication methods:
When using token-based authentication, a Kubernetes Secret must be created to hold the token. This is governed by the Note: Rook supports all the Vault environment variables. The Kubernetes Secret You can create a token in Vault by running the following command: Refer to the official Vault documentation for more details on how to create a token. For which policy to apply, see the next section. In order for Rook to connect to Vault, you must configure the following in your
Kubernetes-based authentication¶
In order to use the Kubernetes Service Account authentication method, the following must be run to properly configure Vault: Once done, your Note The As part of the token, here is an example of a policy that can be used: You can write the policy like so and then create a token: In the above example, Vault's secret backend path name is If a different path is used, the This is an advanced but recommended configuration for production deployments; in this case the Each secret key is expected to be:
For instance Note: if you are using self-signed certificates (not known/approved by a proper CA) you must pass Rook supports storing OSD encryption keys in IBM Key Protect. The current implementation stores OSD encryption keys as Standard Keys using the Bring Your Own Key (BYOK) method. This means that the Key Protect instance policy must have Standard Imported Key enabled. "},{"location":"Storage-Configuration/Advanced/key-management-system/#configuration","title":"Configuration","text":"First, you need to provision the Key Protect service on the IBM Cloud. Once completed, retrieve the instance ID. Make a record of it; we need it in the CRD. On the IBM Cloud, the user must create a Service ID, then assign an Access Policy to this service. Ultimately, a Service API Key needs to be generated. All the steps are summarized in the official documentation. The Service ID must be granted access to the Key Protect Service. Once the Service API Key is generated, store it in a Kubernetes Secret. In order for Rook to connect to IBM Key Protect, you must configure the following in your More options are supported such as:
Rook supports storing OSD encryption keys in a Key Management Interoperability Protocol (KMIP) supported KMS. The current implementation stores OSD encryption keys using the Register operation. Keys are fetched and deleted using the Get and Destroy operations respectively.
Configuration¶
The Secret with credentials for the KMIP KMS is expected to contain the following. In order for Rook to connect to KMIP, you must configure the following in your
Azure Key Vault¶
Rook supports storing OSD encryption keys in Azure Key Vault.
Client Authentication¶
Different methods are available in Azure to authenticate a client. Rook supports Azure's recommended method of authentication, using a Service Principal and a certificate. Refer to the following Azure documentation to set up a key vault and authenticate to it via a service principal and certificate.
Provide the following KMS connection details in order to connect with Azure Key Vault.
Block storage allows a single pod to mount storage. This guide shows how to create a simple, multi-tier web application on Kubernetes using persistent volumes enabled by Rook. "},{"location":"Storage-Configuration/Block-Storage-RBD/block-storage/#prerequisites","title":"Prerequisites","text":"This guide assumes a Rook cluster as explained in the Quickstart. "},{"location":"Storage-Configuration/Block-Storage-RBD/block-storage/#provision-storage","title":"Provision Storage","text":"Before Rook can provision storage, a Note This sample requires at least 1 OSD per node, with each OSD located on 3 different nodes. Each OSD must be located on a different node, because the Save this If you've deployed the Rook operator in a namespace other than \"rook-ceph\", change the prefix in the provisioner to match the namespace you used. For example, if the Rook operator is running in the namespace \"my-namespace\" the provisioner value should be \"my-namespace.rbd.csi.ceph.com\". Create the storage class. Note As specified by Kubernetes, when using the We create a sample app to consume the block storage provisioned by Rook with the classic wordpress and mysql apps. Both of these apps will make use of block volumes provisioned by Rook. Start mysql and wordpress from the Both of these apps create a block volume and mount it to their respective pod. You can see the Kubernetes volume claims by running the following: Example Output: Once the wordpress and mysql pods are in the Example Output: You should see the wordpress app running. If you are using Minikube, the Wordpress URL can be retrieved with this one-line command: Note When running in a vagrant environment, there will be no external IP address to reach wordpress with. You will only be able to reach wordpress via the With the pool that was created above, we can also create a block image and mount it directly in a pod. See the Direct Block Tools topic for more details. "},{"location":"Storage-Configuration/Block-Storage-RBD/block-storage/#teardown","title":"Teardown","text":"To clean up all the artifacts created by the block demo: "},{"location":"Storage-Configuration/Block-Storage-RBD/block-storage/#advanced-example-erasure-coded-block-storage","title":"Advanced Example: Erasure Coded Block Storage","text":"If you want to use erasure coded pool with RBD, your OSDs must use Attention This example requires at least 3 bluestore OSDs, with each OSD located on a different node. The OSDs must be located on different nodes, because the To be able to use an erasure coded pool you need to create two pools (as seen below in the definitions): one erasure coded and one replicated. Attention This example requires at least 3 bluestore OSDs, with each OSD located on a different node. The OSDs must be located on different nodes, because the The erasure coded pool must be set as the If a node goes down where a pod is running where a RBD RWO volume is mounted, the volume cannot automatically be mounted on another node. The node must be guaranteed to be offline before the volume can be mounted on another node. "},{"location":"Storage-Configuration/Block-Storage-RBD/block-storage/#configure-csi-addons","title":"Configure CSI-Addons","text":"Deploy csi-addons controller and enable Warning Automated node loss handling is currently disabled, please refer to the manual steps to recover from the node loss. We are actively working on a new design for this feature. For more details see the tracking issue. 
When a node is confirmed to be down, add the following taints to the node: After the taint is added to the node, Rook will automatically blocklist the node to prevent connections to Ceph from the RBD volume on that node. To verify a node is blocklisted: The node is blocklisted if the state is If the node comes back online, the network fence can be removed from the node by removing the node taints: "},{"location":"Storage-Configuration/Block-Storage-RBD/rbd-async-disaster-recovery-failover-failback/","title":"RBD Asynchronous DR Failover and Failback","text":""},{"location":"Storage-Configuration/Block-Storage-RBD/rbd-async-disaster-recovery-failover-failback/#planned-migration-and-disaster-recovery","title":"Planned Migration and Disaster Recovery","text":"Rook comes with the volume replication support, which allows users to perform disaster recovery and planned migration of clusters. The following document will help to track the procedure for failover and failback in case of a Disaster recovery or Planned migration use cases. Note The document assumes that RBD Mirroring is set up between the peer clusters. For information on rbd mirroring and how to set it up using rook, please refer to the rbd-mirroring guide. "},{"location":"Storage-Configuration/Block-Storage-RBD/rbd-async-disaster-recovery-failover-failback/#planned-migration","title":"Planned Migration","text":"Info Use cases: Datacenter maintenance, technology refresh, disaster avoidance, etc. "},{"location":"Storage-Configuration/Block-Storage-RBD/rbd-async-disaster-recovery-failover-failback/#relocation","title":"Relocation","text":"The Relocation operation is the process of switching production to a backup facility(normally your recovery site) or vice versa. For relocation, access to the image on the primary site should be stopped. The image should now be made primary on the secondary cluster so that the access can be resumed there. Note Periodic or one-time backup of the application should be available for restore on the secondary site (cluster-2). Follow the below steps for planned migration of workload from the primary cluster to the secondary cluster:
Warning In Async Disaster recovery use case, we don't get the complete data. We will only get the crash-consistent data based on the snapshot interval time. "},{"location":"Storage-Configuration/Block-Storage-RBD/rbd-async-disaster-recovery-failover-failback/#disaster-recovery","title":"Disaster Recovery","text":"Info Use cases: Natural disasters, Power failures, System failures, and crashes, etc. Note To effectively resume operations after a failover/relocation, backup of the kubernetes artifacts like deployment, PVC, PV, etc need to be created beforehand by the admin; so that the application can be restored on the peer cluster. For more information, see backup and restore. "},{"location":"Storage-Configuration/Block-Storage-RBD/rbd-async-disaster-recovery-failover-failback/#failover-abrupt-shutdown","title":"Failover (abrupt shutdown)","text":"In case of Disaster recovery, create VolumeReplication CR at the Secondary Site. Since the connection to the Primary Site is lost, the operator automatically sends a GRPC request down to the driver to forcefully mark the dataSource as
Once the failed cluster is recovered on the primary site and you want to failback from secondary site, follow the below steps:
Disaster recovery (DR) is an organization's ability to react to and recover from an incident that negatively affects business operations. A DR plan comprises strategies for minimizing the consequences of a disaster so that an organization can continue to operate, or quickly resume key operations. Disaster recovery is thus one aspect of business continuity. One solution for achieving this is RBD mirroring.
RBD Mirroring¶
RBD mirroring is an asynchronous replication of RBD images between multiple Ceph clusters. This capability is available in two modes:
Note This document sheds light on rbd mirroring and how to set it up using rook. See also the topic on Failover and Failback "},{"location":"Storage-Configuration/Block-Storage-RBD/rbd-mirroring/#create-rbd-pools","title":"Create RBD Pools","text":"In this section, we create specific RBD pools that are RBD mirroring enabled for use with the DR use case. Execute the following steps on each peer cluster to create mirror enabled pools:
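A sketch of such a pool, with image-mode mirroring enabled, is shown below; the pool name is an example only and must be identical on both peers.

```yaml
apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
  name: mirroredpool      # example name; must match on both peer clusters
  namespace: rook-ceph
spec:
  replicated:
    size: 3
  mirroring:
    enabled: true
    mode: image           # mirror on a per-image basis
```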
Note Pool name across the cluster peers must be the same for RBD replication to function. See the CephBlockPool documentation for more details. Note It is also feasible to edit existing pools and enable them for replication. "},{"location":"Storage-Configuration/Block-Storage-RBD/rbd-mirroring/#bootstrap-peers","title":"Bootstrap Peers","text":"In order for the rbd-mirror daemon to discover its peer cluster, the peer must be registered and a user account must be created. The following steps enable bootstrapping peers to discover and authenticate to each other:
Here,
For more details, refer to the official rbd mirror documentation on how to create a bootstrap peer.
Configure the RBDMirror Daemon¶
Replication is handled by the rbd-mirror daemon. The rbd-mirror daemon is responsible for pulling image updates from the remote peer cluster and applying them to images within the local cluster. Creation of the rbd-mirror daemon(s) is done through the custom resource definitions (CRDs), as follows:
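A minimal sketch of such a CR is shown below; the name and daemon count are examples.

```yaml
apiVersion: ceph.rook.io/v1
kind: CephRBDMirror
metadata:
  name: my-rbd-mirror
  namespace: rook-ceph
spec:
  count: 1   # number of rbd-mirror daemon pods to run
```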
See the CephRBDMirror CRD for more details on the mirroring settings. "},{"location":"Storage-Configuration/Block-Storage-RBD/rbd-mirroring/#add-mirroring-peer-information-to-rbd-pools","title":"Add mirroring peer information to RBD pools","text":"Each pool can have its own peer. To add the peer information, patch the already created mirroring enabled pool to update the CephBlockPool CRD. "},{"location":"Storage-Configuration/Block-Storage-RBD/rbd-mirroring/#create-volumereplication-crds","title":"Create VolumeReplication CRDs","text":"Volume Replication Operator follows controller pattern and provides extended APIs for storage disaster recovery. The extended APIs are provided via Custom Resource Definition(CRD). Create the VolumeReplication CRDs on all the peer clusters. "},{"location":"Storage-Configuration/Block-Storage-RBD/rbd-mirroring/#enable-csi-replication-sidecars","title":"Enable CSI Replication Sidecars","text":"To achieve RBD Mirroring,
Execute the following steps on each peer cluster to enable the OMap generator and CSI-Addons sidecars:
VolumeReplication CRDs provide support for two custom resources:
The guide below assumes that we have a PVC (rbd-pvc) in Bound state, created using a StorageClass with
Create a Volume Replication Class CR¶
In this case, we create a Volume Replication Class on cluster-1. Note The
Note
To check VolumeReplication CR status: "},{"location":"Storage-Configuration/Block-Storage-RBD/rbd-mirroring/#backup-restore","title":"Backup & Restore","text":"Note To effectively resume operations after a failover/relocation, backup of the kubernetes artifacts like deployment, PVC, PV, etc need to be created beforehand by the admin; so that the application can be restored on the peer cluster. Here, we take a backup of PVC and PV object on one site, so that they can be restored later to the peer cluster. "},{"location":"Storage-Configuration/Block-Storage-RBD/rbd-mirroring/#take-backup-on-cluster-1","title":"Take backup on cluster-1","text":"
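A sketch of taking that backup with plain kubectl is shown below; the PVC name rbd-pvc matches the example used earlier in this guide.

```console
# On cluster-1, save the PVC and its bound PV to local files
kubectl get pvc rbd-pvc -o yaml > rbd-pvc.yaml
kubectl get pv $(kubectl get pvc rbd-pvc -o jsonpath='{.spec.volumeName}') -o yaml > rbd-pv.yaml
```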
Note We can also take backup using external tools like Velero. See velero documentation for more information. "},{"location":"Storage-Configuration/Block-Storage-RBD/rbd-mirroring/#restore-the-backup-on-cluster-2","title":"Restore the backup on cluster-2","text":"
"},{"location":"Storage-Configuration/Ceph-CSI/ceph-csi-drivers/","title":"Ceph CSI Drivers","text":"There are three CSI drivers integrated with Rook that are used in different scenarios:
The Ceph Filesystem (CephFS) and RADOS Block Device (RBD) drivers are enabled automatically by the Rook operator. The NFS driver is disabled by default. All drivers will be started in the same namespace as the operator when the first CephCluster CR is created.
Supported Versions¶
The two most recent Ceph CSI versions are supported with Rook. Refer to the Ceph CSI releases for more information.
Static Provisioning¶
The RBD and CephFS drivers support the creation of static PVs and static PVCs from an existing RBD image or CephFS volume/subvolume. Refer to the static PVC documentation for more information.
Configure CSI Drivers in non-default namespace¶
If you've deployed the Rook operator in a namespace other than To find the provisioner name in the example storageclasses and volumesnapshotclass, search for: To use a custom prefix for the CSI drivers instead of the namespace prefix, set the Once the configmap is updated, the CSI drivers will be deployed with the The same prefix must be set in the volumesnapshotclass as well: When the prefix is set, the driver names will be:
Note Please be careful when setting the To find the provisioner name in the example storageclasses and volumesnapshotclass, search for: All CSI pods are deployed with a sidecar container that provides a Prometheus metric for tracking whether the CSI plugin is alive and running. Check the monitoring documentation to see how to integrate CSI liveness and GRPC metrics into Ceph monitoring. "},{"location":"Storage-Configuration/Ceph-CSI/ceph-csi-drivers/#dynamically-expand-volume","title":"Dynamically Expand Volume","text":""},{"location":"Storage-Configuration/Ceph-CSI/ceph-csi-drivers/#prerequisites","title":"Prerequisites","text":"To expand the PVC the controlling StorageClass must have To support RBD Mirroring, the CSI-Addons sidecar will be started in the RBD provisioner pod. CSI-Addons support the To enable the CSIAddons sidecar and deploy the controller, follow the steps below "},{"location":"Storage-Configuration/Ceph-CSI/ceph-csi-drivers/#ephemeral-volume-support","title":"Ephemeral volume support","text":"The generic ephemeral volume feature adds support for specifying PVCs in the For example: A volume claim template is defined inside the pod spec, and defines a volume to be provisioned and used by the pod within its lifecycle. Volumes are provisioned when a pod is spawned and destroyed when the pod is deleted. Refer to the ephemeral-doc for more info. See example manifests for an RBD ephemeral volume and a CephFS ephemeral volume. "},{"location":"Storage-Configuration/Ceph-CSI/ceph-csi-drivers/#csi-addons-controller","title":"CSI-Addons Controller","text":"The CSI-Addons Controller handles requests from users. Users create a CR that the controller inspects and forwards to one or more CSI-Addons sidecars for execution. "},{"location":"Storage-Configuration/Ceph-CSI/ceph-csi-drivers/#deploying-the-controller","title":"Deploying the controller","text":"Deploy the controller by running the following commands: This creates the required CRDs and configures permissions. "},{"location":"Storage-Configuration/Ceph-CSI/ceph-csi-drivers/#enable-the-csi-addons-sidecar","title":"Enable the CSI-Addons Sidecar","text":"To use the features provided by the CSI-Addons, the Execute the following to enable the CSI-Addons sidecars:
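One way to do this, sketched below, is to set the CSI_ENABLE_CSIADDONS key in the rook-ceph-operator-config ConfigMap. The key and ConfigMap names are those used in recent Rook releases; verify them against the operator.yaml shipped with your version before applying.

```console
kubectl -n rook-ceph patch configmap rook-ceph-operator-config \
  --type merge -p '{"data":{"CSI_ENABLE_CSIADDONS":"true"}}'
```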
CSI-Addons supports the following operations:
Ceph-CSI supports encrypting PersistentVolumeClaims (PVCs) for both RBD and CephFS. This can be achieved using LUKS for RBD and fscrypt for CephFS. More details on encrypting RBD PVCs can be found here, which includes a full list of supported encryption configurations. More details on encrypting CephFS PVCs can be found here. A sample KMS configmap can be found here. Note Not all KMS are compatible with fscrypt. Generally, KMS that either store secrets to use directly (like Vault) or allow access to the plain password (like Kubernetes Secrets) are compatible. Note Rook also supports OSD-level encryption (see Using both RBD PVC encryption and OSD encryption at the same time will lead to double encryption and may reduce read/write performance. Existing Ceph clusters can also enable Ceph-CSI PVC encryption support and multiple kinds of encryption KMS can be used on the same Ceph cluster using different storageclasses. The following steps demonstrate the common process for enabling encryption support for both RBD and CephFS:
Note CephFS encryption requires fscrypt support in Linux kernel, kernel version 6.6 or higher. "},{"location":"Storage-Configuration/Ceph-CSI/ceph-csi-drivers/#enable-read-affinity-for-rbd-and-cephfs-volumes","title":"Enable Read affinity for RBD and CephFS volumes","text":"Ceph CSI supports mapping RBD volumes with KRBD options and mounting CephFS volumes with ceph mount options to allow serving reads from an OSD closest to the client, according to OSD locations defined in the CRUSH map and topology labels on nodes. Refer to the krbd-options document for more details. Execute the following step to enable read affinity for a specific ceph cluster:
Ceph CSI will extract the CRUSH location from the topology labels found on the node and pass it though krbd options during mapping RBD volumes. Note This requires Linux kernel version 5.8 or higher. "},{"location":"Storage-Configuration/Ceph-CSI/ceph-csi-snapshot/","title":"Snapshots","text":""},{"location":"Storage-Configuration/Ceph-CSI/ceph-csi-snapshot/#prerequisites","title":"Prerequisites","text":"
Info Just like StorageClass provides a way for administrators to describe the \"classes\" of storage they offer when provisioning a volume, VolumeSnapshotClass provides a way to describe the \"classes\" of storage when provisioning a volume snapshot. "},{"location":"Storage-Configuration/Ceph-CSI/ceph-csi-snapshot/#rbd-snapshots","title":"RBD Snapshots","text":""},{"location":"Storage-Configuration/Ceph-CSI/ceph-csi-snapshot/#rbd-volumesnapshotclass","title":"RBD VolumeSnapshotClass","text":"In VolumeSnapshotClass, the Update the value of the "},{"location":"Storage-Configuration/Ceph-CSI/ceph-csi-snapshot/#volumesnapshot","title":"Volumesnapshot","text":"In snapshot, "},{"location":"Storage-Configuration/Ceph-CSI/ceph-csi-snapshot/#verify-rbd-snapshot-creation","title":"Verify RBD Snapshot Creation","text":" The snapshot will be ready to restore to a new PVC when the In pvc-restore, Please Note: * Create a new PVC from the snapshot "},{"location":"Storage-Configuration/Ceph-CSI/ceph-csi-snapshot/#verify-rbd-clone-pvc-creation","title":"Verify RBD Clone PVC Creation","text":" "},{"location":"Storage-Configuration/Ceph-CSI/ceph-csi-snapshot/#rbd-snapshot-resource-cleanup","title":"RBD snapshot resource Cleanup","text":"To clean your cluster of the resources created by this example, run the following: "},{"location":"Storage-Configuration/Ceph-CSI/ceph-csi-snapshot/#cephfs-snapshots","title":"CephFS Snapshots","text":""},{"location":"Storage-Configuration/Ceph-CSI/ceph-csi-snapshot/#cephfs-volumesnapshotclass","title":"CephFS VolumeSnapshotClass","text":"In VolumeSnapshotClass, the In the volumesnapshotclass, update the value of the "},{"location":"Storage-Configuration/Ceph-CSI/ceph-csi-snapshot/#volumesnapshot_1","title":"VolumeSnapshot","text":"In snapshot, "},{"location":"Storage-Configuration/Ceph-CSI/ceph-csi-snapshot/#verify-cephfs-snapshot-creation","title":"Verify CephFS Snapshot Creation","text":" The snapshot will be ready to restore to a new PVC when In pvc-restore, Create a new PVC from the snapshot "},{"location":"Storage-Configuration/Ceph-CSI/ceph-csi-snapshot/#verify-cephfs-restore-pvc-creation","title":"Verify CephFS Restore PVC Creation","text":" "},{"location":"Storage-Configuration/Ceph-CSI/ceph-csi-snapshot/#cephfs-snapshot-resource-cleanup","title":"CephFS snapshot resource Cleanup","text":"To clean your cluster of the resources created by this example, run the following: "},{"location":"Storage-Configuration/Ceph-CSI/ceph-csi-volume-clone/","title":"Volume clone","text":"The CSI Volume Cloning feature adds support for specifying existing PVCs in the A Clone is defined as a duplicate of an existing Kubernetes Volume that can be consumed as any standard Volume would be. The only difference is that upon provisioning, rather than creating a \"new\" empty Volume, the back end device creates an exact duplicate of the specified Volume. Refer to clone-doc for more info. 
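As an illustration only, a clone PVC follows the standard Kubernetes dataSource pattern; the names and size below are placeholders.

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rbd-pvc-clone
spec:
  storageClassName: rook-ceph-block   # placeholder: must match the parent PVC's StorageClass
  dataSource:
    name: rbd-pvc                     # placeholder: the existing PVC to clone
    kind: PersistentVolumeClaim
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi                    # must be at least the size of the parent PVC
```

Once provisioned, the clone is an independent volume and can be consumed like any other PVC.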
"},{"location":"Storage-Configuration/Ceph-CSI/ceph-csi-volume-clone/#rbd-volume-cloning","title":"RBD Volume Cloning","text":"In pvc-clone, Please note: * Create a new PVC Clone from the PVC "},{"location":"Storage-Configuration/Ceph-CSI/ceph-csi-volume-clone/#verify-rbd-volume-clone-pvc-creation","title":"Verify RBD volume Clone PVC Creation","text":" "},{"location":"Storage-Configuration/Ceph-CSI/ceph-csi-volume-clone/#rbd-clone-resource-cleanup","title":"RBD clone resource Cleanup","text":"To clean your cluster of the resources created by this example, run the following: "},{"location":"Storage-Configuration/Ceph-CSI/ceph-csi-volume-clone/#cephfs-volume-cloning","title":"CephFS Volume Cloning","text":""},{"location":"Storage-Configuration/Ceph-CSI/ceph-csi-volume-clone/#volume-clone-prerequisites","title":"Volume Clone Prerequisites","text":"
In pvc-clone, Create a new PVC Clone from the PVC
Verify CephFS volume Clone PVC Creation¶
CephFS clone resource Cleanup¶
To clean your cluster of the resources created by this example, run the following:
Volume Group Snapshots¶
Ceph provides the ability to create crash-consistent snapshots of multiple volumes. A group snapshot represents "copies" from multiple volumes that are taken at the same point in time. A group snapshot can be used either to rehydrate new volumes (pre-populated with the snapshot data) or to restore existing volumes to a previous state (represented by the snapshots).
Prerequisites¶
Info Created by cluster administrators to describe how volume group snapshots should be created. including the driver information, the deletion policy, etc. "},{"location":"Storage-Configuration/Ceph-CSI/ceph-csi-volume-group-snapshot/#volume-group-snapshots","title":"Volume Group Snapshots","text":""},{"location":"Storage-Configuration/Ceph-CSI/ceph-csi-volume-group-snapshot/#cephfs-volumegroupsnapshotclass","title":"CephFS VolumeGroupSnapshotClass","text":"In VolumeGroupSnapshotClass, the In the "},{"location":"Storage-Configuration/Ceph-CSI/ceph-csi-volume-group-snapshot/#cephfs-volumegroupsnapshot","title":"CephFS VolumeGroupSnapshot","text":"In VolumeGroupSnapshot, "},{"location":"Storage-Configuration/Ceph-CSI/ceph-csi-volume-group-snapshot/#verify-cephfs-groupsnapshot-creation","title":"Verify CephFS GroupSnapshot Creation","text":" The snapshot will be ready to restore to a new PVC when Find the name of the snapshots created by the It will list the PVC's name followed by its snapshot name. In pvc-restore, Create a new PVC from the snapshot "},{"location":"Storage-Configuration/Ceph-CSI/ceph-csi-volume-group-snapshot/#verify-cephfs-restore-pvc-creation","title":"Verify CephFS Restore PVC Creation","text":" "},{"location":"Storage-Configuration/Ceph-CSI/ceph-csi-volume-group-snapshot/#cephfs-volume-group-snapshot-resource-cleanup","title":"CephFS volume group snapshot resource Cleanup","text":"To clean the resources created by this example, run the following: "},{"location":"Storage-Configuration/Ceph-CSI/custom-images/","title":"Custom Images","text":"By default, Rook will deploy the latest stable version of the Ceph CSI driver. Commonly, there is no need to change this default version that is deployed. For scenarios that require deploying a custom image (e.g. downstream releases), the defaults can be overridden with the following settings. The CSI configuration variables are found in the The default upstream images are included below, which you can change to your desired images. "},{"location":"Storage-Configuration/Ceph-CSI/custom-images/#use-private-repository","title":"Use private repository","text":"If image version is not passed along with the image name in any of the variables above, Rook will add the corresponding default version to that image. Example: if If you would like Rook to use the default upstream images, then you may simply remove all variables matching You can use the below command to see the CSI images currently being used in the cluster. Note that not all images (like The default images can also be found with each release in the images list "},{"location":"Storage-Configuration/Monitoring/ceph-dashboard/","title":"Ceph Dashboard","text":"The dashboard is a very helpful tool to give you an overview of the status of your Ceph cluster, including overall health, status of the mon quorum, status of the mgr, osd, and other Ceph daemons, view pools and PG status, show logs for the daemons, and more. Rook makes it simple to enable the dashboard. "},{"location":"Storage-Configuration/Monitoring/ceph-dashboard/#enable-the-ceph-dashboard","title":"Enable the Ceph Dashboard","text":"The dashboard can be enabled with settings in the CephCluster CRD. The CephCluster CRD must have the dashboard The Rook operator will enable the ceph-mgr dashboard module. A service object will be created to expose that port inside the Kubernetes cluster. Rook will enable port 8443 for https access. This example shows that port 8443 was configured. 
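The services can be listed as follows; the output shown is illustrative only (names are the typical defaults, and cluster IPs will differ in your cluster).

```console
kubectl -n rook-ceph get service
# NAME                      TYPE        CLUSTER-IP       PORT(S)
# rook-ceph-mgr             ClusterIP   10.108.111.192   9283/TCP
# rook-ceph-mgr-dashboard   ClusterIP   10.110.113.240   8443/TCP
```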
The first service is for reporting the Prometheus metrics, while the second service is for the dashboard. If you are on a node in the cluster, you will be able to connect to the dashboard by using either the DNS name of the service at After you connect to the dashboard you will need to log in for secure access. Rook creates a default user named
Configure the Dashboard¶
The following dashboard configuration settings are supported:
Information about physical disks is available only in Rook host clusters. The Rook manager module is required by the dashboard to obtain the information about physical disks, but it is disabled by default. Before it is enabled, the dashboard 'Physical Disks' section will show an error message. To prepare the Rook manager module to be used in the dashboard, modify your Ceph Cluster CRD: And apply the changes: Once the Rook manager module is enabled as the orchestrator backend, there are two settings required for showing disk information:
Modify the operator.yaml, and apply the changes: "},{"location":"Storage-Configuration/Monitoring/ceph-dashboard/#viewing-the-dashboard-external-to-the-cluster","title":"Viewing the Dashboard External to the Cluster","text":"Commonly you will want to view the dashboard from outside the cluster. For example, on a development machine with the cluster running inside minikube you will want to access the dashboard from the host. There are several ways to expose a service that will depend on the environment you are running in. You can use an Ingress Controller or other methods for exposing services such as NodePort, LoadBalancer, or ExternalIPs. "},{"location":"Storage-Configuration/Monitoring/ceph-dashboard/#node-port","title":"Node Port","text":"The simplest way to expose the service in minikube or similar environment is using the NodePort to open a port on the VM that can be accessed by the host. To create a service with the NodePort, save this yaml as Now create the service: You will see the new service In this example, port If you have a cluster on a cloud provider that supports load balancers, you can create a service that is provisioned with a public hostname. The yaml is the same as Now create the service: You will see the new service Now you can enter the URL in your browser such as If you have a cluster with an nginx Ingress Controller and a Certificate Manager (e.g. cert-manager) then you can create an Ingress like the one below. This example achieves four things:
Customise the Ingress resource to match your cluster. Replace the example domain name Now create the Ingress: You will see the new Ingress And the new Secret for the TLS certificate: You can now browse to Each Rook Ceph cluster has some built in metrics collectors/exporters for monitoring with Prometheus. If you do not have Prometheus running, follow the steps below to enable monitoring of Rook. If your cluster already contains a Prometheus instance, it will automatically discover Rook's scrape endpoint using the standard Attention This assumes that the Prometheus instances is searching all your Kubernetes namespaces for Pods with these annotations. If prometheus is already installed in a cluster, it may not be configured to watch for third-party service monitors such as for Rook. Normally you should be able to add the prometheus annotations First the Prometheus operator needs to be started in the cluster so it can watch for our requests to start monitoring Rook and respond by deploying the correct Prometheus pods and configuration. A full explanation can be found in the Prometheus operator repository on GitHub, but the quick instructions can be found here: Note If the Prometheus Operator is already present in your cluster, the command provided above may fail. For a detailed explanation of the issue and a workaround, please refer to this issue. This will start the Prometheus operator, but before moving on, wait until the operator is in the Once the Prometheus operator is in the With the Prometheus operator running, we can create service monitors that will watch the Rook cluster. There are two sources for metrics collection:
From the root of your locally cloned Rook repo, go the monitoring directory: Create the service monitor as well as the Prometheus server pod and service: Ensure that the Prometheus server pod gets created and advances to the "},{"location":"Storage-Configuration/Monitoring/ceph-monitoring/#dashboard-config","title":"Dashboard config","text":"Configure the Prometheus endpoint so the dashboard can retrieve metrics from Prometheus with two settings:
The following command can be used to get the Prometheus url: Following is an example to configure the Prometheus endpoint in the CephCluster CR. Note It is not recommended to consume storage from the Ceph cluster for Prometheus. If the Ceph cluster fails, Prometheus would become unresponsive and thus not alert you of the failure. "},{"location":"Storage-Configuration/Monitoring/ceph-monitoring/#prometheus-web-console","title":"Prometheus Web Console","text":"Once the Prometheus server is running, you can open a web browser and go to the URL that is output from this command: You should now see the Prometheus monitoring website. Click on In the dropdown that says Click on the Below the You can find Prometheus Consoles for and from Ceph here: GitHub ceph/cephmetrics - dashboards/current directory. A guide to how you can write your own Prometheus consoles can be found on the official Prometheus site here: Prometheus.io Documentation - Console Templates. "},{"location":"Storage-Configuration/Monitoring/ceph-monitoring/#prometheus-alerts","title":"Prometheus Alerts","text":"To enable the Ceph Prometheus alerts via the helm charts, set the following properties in values.yaml:
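A sketch of those properties is shown below, split across the operator chart and the cluster chart. The exact keys should be checked against the values.yaml shipped with your chart version, so treat these as assumptions rather than a definitive reference.

```yaml
# rook-ceph (operator) chart values.yaml
monitoring:
  enabled: true

# rook-ceph-cluster chart values.yaml
monitoring:
  enabled: true
  createPrometheusRules: true
```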
Alternatively, to enable the Ceph Prometheus alerts with example manifests follow these steps:
Note This expects the Prometheus Operator and a Prometheus instance to be pre-installed by the admin. "},{"location":"Storage-Configuration/Monitoring/ceph-monitoring/#customize-alerts","title":"Customize Alerts","text":"The Prometheus alerts can be customized with a post-processor using tools such as Kustomize. For example, first extract the helm chart: Now create the desired customization configuration files. This simple example will show how to update the severity of a rule, add a label to a rule, and change the Create a file named kustomization.yaml: Create a file named modifications.yaml Finally, run kustomize to update the desired prometheus rules: "},{"location":"Storage-Configuration/Monitoring/ceph-monitoring/#grafana-dashboards","title":"Grafana Dashboards","text":"The dashboards have been created by @galexrt. For feedback on the dashboards please reach out to him on the Rook.io Slack. Note The dashboards are only compatible with Grafana 7.2.0 or higher. Also note that the dashboards are updated from time to time, to fix issues and improve them. The following Grafana dashboards are available:
The dashboard JSON files are also available on GitHub here When updating Rook, there may be updates to RBAC for monitoring. It is easy to apply the changes with each update or upgrade. This should be done at the same time you update Rook common resources like Hint This is updated automatically if you are upgrading via the helm chart "},{"location":"Storage-Configuration/Monitoring/ceph-monitoring/#teardown","title":"Teardown","text":"To clean up all the artifacts created by the monitoring walk-through, copy/paste the entire block below (note that errors about resources \"not found\" can be ignored): Then the rest of the instructions in the Prometheus Operator docs can be followed to finish cleaning up. "},{"location":"Storage-Configuration/Monitoring/ceph-monitoring/#special-cases","title":"Special Cases","text":""},{"location":"Storage-Configuration/Monitoring/ceph-monitoring/#tectonic-bare-metal","title":"Tectonic Bare Metal","text":"Tectonic strongly discourages the To integrate CSI liveness into ceph monitoring we will need to deploy a service and service monitor. This will create the service monitor to have prometheus monitor CSI Note Please note that the liveness sidecar is disabled by default. To enable it set RBD per-image IO statistics collection is disabled by default. This can be enabled by setting If Prometheus needs to select specific resources, we can do so by injecting labels into these objects and using it as label selector. "},{"location":"Storage-Configuration/Monitoring/ceph-monitoring/#horizontal-pod-scaling-using-kubernetes-event-driven-autoscaling-keda","title":"Horizontal Pod Scaling using Kubernetes Event-driven Autoscaling (KEDA)","text":"Using metrics exported from the Prometheus service, the horizontal pod scaling can use the custom metrics other than CPU and memory consumption. It can be done with help of Prometheus Scaler provided by the KEDA. See the KEDA deployment guide for details. Following is an example to autoscale RGW: Warning During reconciliation of a All CephNFS daemons are configured using shared RADOS objects stored in a Ceph pool named By default, Rook creates the Ceph uses NFS-Ganesha servers. The config file format for these objects is documented in the NFS-Ganesha project. Use Ceph's
Of note, it is possible to pre-populate the NFS configuration and export objects prior to creating CephNFS server clusters. "},{"location":"Storage-Configuration/NFS/nfs-advanced/#creating-nfs-export-over-rgw","title":"Creating NFS export over RGW","text":"Warning RGW NFS export is experimental for the moment. It is not recommended for scenario of modifying existing content. For creating an NFS export over RGW(CephObjectStore) storage backend, the below command can be used. This creates an export for the "},{"location":"Storage-Configuration/NFS/nfs-csi-driver/","title":"CSI provisioner and driver","text":"Attention This feature is experimental and will not support upgrades to future versions. For this section, we will refer to Rook's deployment examples in the deploy/examples directory. "},{"location":"Storage-Configuration/NFS/nfs-csi-driver/#enabling-the-csi-drivers","title":"Enabling the CSI drivers","text":"The Ceph CSI NFS provisioner and driver require additional RBAC to operate. Apply the Rook will only deploy the Ceph CSI NFS provisioner and driver components when the Note The rook-ceph operator Helm chart will deploy the required RBAC and enable the driver components if In order to create NFS exports via the CSI driver, you must first create a CephFilesystem to serve as the underlying storage for the exports, and you must create a CephNFS to run an NFS server that will expose the exports. RGWs cannot be used for the CSI driver. From the examples, You may need to enable or disable the Ceph orchestrator. You must also create a storage class. Ceph CSI is designed to support any arbitrary Ceph cluster, but we are focused here only on Ceph clusters deployed by Rook. Let's take a look at a portion of the example storage class found at
See See After a PVC is created successfully, the "},{"location":"Storage-Configuration/NFS/nfs-csi-driver/#taking-snapshots-of-nfs-exports","title":"Taking snapshots of NFS exports","text":"NFS export PVCs can be snapshotted and later restored to new PVCs. "},{"location":"Storage-Configuration/NFS/nfs-csi-driver/#creating-snapshots","title":"Creating snapshots","text":"First, create a VolumeSnapshotClass as in the example here. The In snapshot, "},{"location":"Storage-Configuration/NFS/nfs-csi-driver/#verifying-snapshots","title":"Verifying snapshots","text":" The snapshot will be ready to restore to a new PVC when In pvc-restore, Create a new PVC from the snapshot. "},{"location":"Storage-Configuration/NFS/nfs-csi-driver/#verifying-restored-pvc-creation","title":"Verifying restored PVC Creation","text":" "},{"location":"Storage-Configuration/NFS/nfs-csi-driver/#cleaning-up-snapshot-resource","title":"Cleaning up snapshot resource","text":"To clean your cluster of the resources created by this example, run the following: "},{"location":"Storage-Configuration/NFS/nfs-csi-driver/#cloning-nfs-exports","title":"Cloning NFS exports","text":""},{"location":"Storage-Configuration/NFS/nfs-csi-driver/#creating-clones","title":"Creating clones","text":"In pvc-clone, Create a new PVC Clone from the PVC as in the example here. "},{"location":"Storage-Configuration/NFS/nfs-csi-driver/#verifying-a-cloned-pvc","title":"Verifying a cloned PVC","text":" "},{"location":"Storage-Configuration/NFS/nfs-csi-driver/#cleaning-up-clone-resources","title":"Cleaning up clone resources","text":"To clean your cluster of the resources created by this example, run the following: "},{"location":"Storage-Configuration/NFS/nfs-csi-driver/#consuming-nfs-from-an-external-source","title":"Consuming NFS from an external source","text":"For consuming NFS services and exports external to the Kubernetes cluster (including those backed by an external standalone Ceph cluster), Rook recommends using Kubernetes regular NFS consumption model. This requires the Ceph admin to create the needed export, while reducing the privileges needed in the client cluster for the NFS volume. Export and get the nfs client to a particular cephFS filesystem: Create the PV and PVC using Rook provides security for CephNFS server clusters through two high-level features: user ID mapping and user authentication. Attention All features in this document are experimental and may not support upgrades to future versions. Attention Some configurations of these features may break the ability to mount NFS storage to pods via PVCs. The NFS CSI driver may not be able to mount exports for pods when ID mapping is configured. "},{"location":"Storage-Configuration/NFS/nfs-security/#user-id-mapping","title":"User ID mapping","text":"User ID mapping allows the NFS server to map connected NFS client IDs to a different user domain, allowing NFS clients to be associated with a particular user in your organization. For example, users stored in LDAP can be associated with NFS users and vice versa. "},{"location":"Storage-Configuration/NFS/nfs-security/#id-mapping-via-sssd","title":"ID mapping via SSSD","text":"SSSD is the System Security Services Daemon. It can be used to provide user ID mapping from a number of sources including LDAP, Active Directory, and FreeIPA. Currently, only LDAP has been tested. 
"},{"location":"Storage-Configuration/NFS/nfs-security/#sssd-configuration","title":"SSSD configuration","text":"SSSD requires a configuration file in order to configure its connection to the user ID mapping system (e.g., LDAP). The file follows the Methods of providing the configuration file are documented in the NFS CRD security section. Recommendations:
A sample The SSSD configuration file may be omitted from the CephNFS spec if desired. In this case, Rook will not set User authentication allows NFS clients and the Rook CephNFS servers to authenticate with each other to ensure security. "},{"location":"Storage-Configuration/NFS/nfs-security/#authentication-through-kerberos","title":"Authentication through Kerberos","text":"Kerberos is the authentication mechanism natively supported by NFS-Ganesha. With NFSv4, individual users are authenticated and not merely client machines. "},{"location":"Storage-Configuration/NFS/nfs-security/#kerberos-configuration","title":"Kerberos configuration","text":"Kerberos authentication requires configuration files in order for the NFS-Ganesha server to authenticate to the Kerberos server (KDC). The requirements are two-parted:
Methods of providing the configuration files are documented in the NFS CRD security section. Recommendations:
A sample Kerberos config file is shown below. The Kerberos config files ( Similarly, the keytab file ( As an example for either of the above cases, you may build files into your custom Ceph container image or use the Vault agent injector to securely add files via annotations on the CephNFS spec (passed to the NFS server pods). "},{"location":"Storage-Configuration/NFS/nfs-security/#nfs-service-principals","title":"NFS service principals","text":"The Kerberos service principal used by Rook's CephNFS servers to authenticate with the Kerberos server is built up from 3 components:
The full service principal name is constructed as Users must add this service principal to their Kerberos server configuration. Example For a CephNFS named \"fileshare\" in the \"business-unit\" Kubernetes namespace that has a Advanced
The kerberos domain name is used to setup the domain name in /etc/idmapd.conf. This domain name is used by idmap to map the kerberos credential to the user uid/gid. Without this configured, NFS-Ganesha will be unable to map the Kerberos principal to an uid/gid and will instead use the configured anonuid/anongid (default: -2) when accessing the local filesystem. "},{"location":"Storage-Configuration/NFS/nfs/","title":"NFS Storage Overview","text":"NFS storage can be mounted with read/write permission from multiple pods. NFS storage may be especially useful for leveraging an existing Rook cluster to provide NFS storage for legacy applications that assume an NFS client connection. Such applications may not have been migrated to Kubernetes or might not yet support PVCs. Rook NFS storage can provide access to the same network filesystem storage from within the Kubernetes cluster via PVC while simultaneously providing access via direct client connection from within or outside of the Kubernetes cluster. Warning Simultaneous access to NFS storage from Pods and from external clients complicates NFS user ID mapping significantly. Client IDs mapped from external clients will not be the same as the IDs associated with the NFS CSI driver, which mount exports for Kubernetes pods. Warning Due to a number of Ceph issues and changes, Rook officially only supports Ceph v16.2.7 or higher for CephNFS. If you are using an earlier version, upgrade your Ceph version following the advice given in Rook's v1.9 NFS docs. Note CephNFSes support NFSv4.1+ access only. Serving earlier protocols inhibits responsiveness after a server restart. "},{"location":"Storage-Configuration/NFS/nfs/#prerequisites","title":"Prerequisites","text":"This guide assumes you have created a Rook cluster as explained in the main quickstart guide as well as a Ceph filesystem which will act as the backing storage for NFS. Many samples reference the CephNFS and CephFilesystem example manifests here and here. "},{"location":"Storage-Configuration/NFS/nfs/#creating-an-nfs-cluster","title":"Creating an NFS cluster","text":"Create the NFS cluster by specifying the desired settings documented for the NFS CRD. "},{"location":"Storage-Configuration/NFS/nfs/#creating-exports","title":"Creating Exports","text":"When a CephNFS is first created, all NFS daemons within the CephNFS cluster will share a configuration with no exports defined. When creating an export, it is necessary to specify the CephFilesystem which will act as the backing storage for the NFS export. RADOS Gateways (RGWs), provided by CephObjectStores, can also be used as backing storage for NFS exports if desired. "},{"location":"Storage-Configuration/NFS/nfs/#using-the-ceph-dashboard","title":"Using the Ceph Dashboard","text":"Exports can be created via the Ceph dashboard as well. To enable and use the Ceph dashboard in Rook, see here. "},{"location":"Storage-Configuration/NFS/nfs/#using-the-ceph-cli","title":"Using the Ceph CLI","text":"The Ceph CLI can be used from the Rook toolbox pod to create and manage NFS exports. To do so, first ensure the necessary Ceph mgr modules are enabled, if necessary, and that the Ceph orchestrator backend is set to Rook. "},{"location":"Storage-Configuration/NFS/nfs/#enable-the-ceph-orchestrator-optional","title":"Enable the Ceph orchestrator (optional)","text":" Ceph's NFS CLI can create NFS exports that are backed by CephFS (a CephFilesystem) or Ceph Object Gateway (a CephObjectStore). 
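If the orchestrator backend referenced above still needs to be enabled, a sketch of the standard Ceph CLI commands, run from the toolbox, is:

```console
ceph mgr module enable rook
ceph orch set backend rook
ceph orch status   # confirm the backend reports rook
```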
For creating an NFS export for the CephNFS and CephFilesystem example manifests, the below command can be used. This creates an export for the The below command will list the current NFS exports for the example CephNFS cluster, which will give the output shown for the current example. The simple If you are done managing NFS exports and don't need the Ceph orchestrator module enabled for anything else, it may be preferable to disable the Rook and NFS mgr modules to free up a small amount of RAM in the Ceph mgr Pod. "},{"location":"Storage-Configuration/NFS/nfs/#mounting-exports","title":"Mounting exports","text":"Each CephNFS server has a unique Kubernetes Service. This is because NFS clients can't readily handle NFS failover. CephNFS services are named with the pattern For each NFS client, choose an NFS service to use for the connection. With NFS v4, you can mount an export by its path using a mount command like below. You can mount all exports at once by omitting the export path and leaving the directory as just "},{"location":"Storage-Configuration/NFS/nfs/#exposing-the-nfs-server-outside-of-the-kubernetes-cluster","title":"Exposing the NFS server outside of the Kubernetes cluster","text":"Use a LoadBalancer Service to expose an NFS server (and its exports) outside of the Kubernetes cluster. The Service's endpoint can be used as the NFS service address when mounting the export manually. We provide an example Service here: Security options for NFS are documented here. "},{"location":"Storage-Configuration/NFS/nfs/#ceph-csi-nfs-provisioner-and-nfs-csi-driver","title":"Ceph CSI NFS provisioner and NFS CSI driver","text":"The NFS CSI provisioner and driver are documented here "},{"location":"Storage-Configuration/NFS/nfs/#advanced-configuration","title":"Advanced configuration","text":"Advanced NFS configuration is documented here "},{"location":"Storage-Configuration/NFS/nfs/#known-issues","title":"Known issues","text":"Known issues are documented on the NFS CRD page. "},{"location":"Storage-Configuration/Object-Storage-RGW/ceph-object-bucket-claim/","title":"Bucket Claim","text":"Rook supports the creation of new buckets and access to existing buckets via two custom resources:
An OBC references a storage class which is created by an administrator. The storage class defines whether the bucket requested is a new bucket or an existing bucket. It also defines the bucket retention policy. Users request a new or existing bucket by creating an OBC, as shown below. The ceph provisioner detects the OBC and creates a new bucket or grants access to an existing bucket, depending on the storage class referenced in the OBC. It also generates a Secret which provides credentials to access the bucket, and a ConfigMap which contains the bucket's endpoint. Application pods consume the information in the Secret and ConfigMap to access the bucket. Please note that to make the provisioner watch only the cluster namespace, a dedicated operator setting must be enabled. The OBC provisioner name found in the storage class by default includes the operator namespace as a prefix. A custom prefix can be applied via an operator setting. Note Changing the prefix is not supported on existing clusters. This may impact the function of existing OBCs. "},{"location":"Storage-Configuration/Object-Storage-RGW/ceph-object-bucket-claim/#example","title":"Example","text":""},{"location":"Storage-Configuration/Object-Storage-RGW/ceph-object-bucket-claim/#obc-custom-resource","title":"OBC Custom Resource","text":"
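A representative OBC might look like the following (the names and storage class here are illustrative placeholders, not values prescribed by this document):

```yaml
apiVersion: objectbucket.io/v1alpha1
kind: ObjectBucketClaim
metadata:
  name: ceph-bucket
  namespace: default
spec:
  # ask the provisioner to generate a unique bucket name with this prefix
  generateBucketName: ceph-bkt
  # storage class created by the administrator for the object store
  storageClassName: rook-ceph-bucket
```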
Rook supports the creation of bucket notifications via two custom resources:
A CephBucketNotification defines what bucket actions trigger the notification and which topic to send notifications to. A CephBucketNotification may also define a filter, based on the object's name and other object attributes. Notifications can be associated with buckets created via ObjectBucketClaims by adding labels to an ObjectBucketClaim with the following format: The CephBucketTopic, CephBucketNotification and ObjectBucketClaim must all belong to the same namespace. If a bucket was created manually (not via an ObjectBucketClaim), notifications on this bucket should also be created manually. However, these notifications may still reference topics that were created via CephBucketTopic resources. "},{"location":"Storage-Configuration/Object-Storage-RGW/ceph-object-bucket-notifications/#topics","title":"Topics","text":"A CephBucketTopic represents an endpoint (of types: Kafka, AMQP0.9.1 or HTTP), or a specific resource inside this endpoint (e.g. a Kafka or an AMQP topic, or a specific URI in an HTTP server). The CephBucketTopic also holds any additional info needed for a CephObjectStore's RADOS Gateways (RGW) to connect to the endpoint. Topics don't belong to a specific bucket or notification. Notifications from multiple buckets may be sent to the same topic, and one bucket (via multiple CephBucketNotifications) may send notifications to multiple topics. "},{"location":"Storage-Configuration/Object-Storage-RGW/ceph-object-bucket-notifications/#notification-reliability-and-delivery","title":"Notification Reliability and Delivery","text":"Notifications may be sent synchronously, as part of the operation that triggered them. In this mode, the operation is acknowledged only after the notification is sent to the topic's configured endpoint, which means that the round trip time of the notification is added to the latency of the operation itself. The original triggering operation will still be considered successful even if the notification fails with an error, cannot be delivered, or times out. Notifications may also be sent asynchronously. They will be committed into persistent storage and then asynchronously sent to the topic's configured endpoint. In this case, the only latency added to the original operation is that of committing the notification to persistent storage. If the notification fails with an error, cannot be delivered, or times out, it will be retried until successfully acknowledged. "},{"location":"Storage-Configuration/Object-Storage-RGW/ceph-object-bucket-notifications/#example","title":"Example","text":""},{"location":"Storage-Configuration/Object-Storage-RGW/ceph-object-bucket-notifications/#cephbuckettopic-custom-resource","title":"CephBucketTopic Custom Resource","text":"
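As an illustration (the CR kind and apiVersion are Rook's; the names, object store, and endpoint below are placeholders), a topic pointing at an HTTP endpoint could look roughly like this:

```yaml
apiVersion: ceph.rook.io/v1
kind: CephBucketTopic
metadata:
  name: my-topic
  namespace: rook-ceph
spec:
  # object store whose RADOS Gateways will publish the notifications
  objectStoreName: my-store
  objectStoreNamespace: rook-ceph
  endpoint:
    http:
      # placeholder notification sink
      uri: http://my-notification-sink:8080
```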
Note In case of Kafka and AMQP, the consumer of the notifications is not required to ack the notifications, since the broker persists the messages before delivering them to their final destinations. "},{"location":"Storage-Configuration/Object-Storage-RGW/ceph-object-bucket-notifications/#cephbucketnotification-custom-resource","title":"CephBucketNotification Custom Resource","text":"
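A matching notification CR might look roughly like the following (topic and event names are illustrative; consult the CRD reference for the full list of supported events):

```yaml
apiVersion: ceph.rook.io/v1
kind: CephBucketNotification
metadata:
  name: my-notification
  namespace: rook-ceph
spec:
  # topic the events are published to (must exist in the same namespace)
  topic: my-topic
  events:
    - s3:ObjectCreated:Put
    - s3:ObjectRemoved:Delete
```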
For a notification to be associated with a bucket, a label must be added to the OBC, indicating the name of the notification. To delete a notification from a bucket, the matching label must be removed. When an OBC is deleted, all of the notifications associated with the bucket will be deleted as well. "},{"location":"Storage-Configuration/Object-Storage-RGW/ceph-object-multisite/","title":"Object Store Multisite","text":"Multisite is a feature of Ceph that allows object stores to replicate their data over multiple Ceph clusters. Multisite also allows object stores to be independent and isolated from other object stores in a cluster. When a ceph-object-store is created without a zone reference, it is the only ceph-object-store in its realm, so the data in the ceph-object-store remains independent and isolated from others on the same cluster. When a ceph-object-store is created with a zone reference, it joins that zone, which allows the ceph-object-store to replicate its data over multiple Ceph clusters. To review core multisite concepts please read the ceph-multisite design overview. "},{"location":"Storage-Configuration/Object-Storage-RGW/ceph-object-multisite/#prerequisites","title":"Prerequisites","text":"This guide assumes a Rook cluster as explained in the Quickstart. "},{"location":"Storage-Configuration/Object-Storage-RGW/ceph-object-multisite/#creating-object-multisite","title":"Creating Object Multisite","text":"If an admin wants to set up multisite on a Rook Ceph cluster, the following resources must be created:
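These are the realm, zone group, zone, and object store resources. A hedged sketch (names, namespace, and pool settings are illustrative; compare with object-multisite.yaml in the examples directory):

```yaml
apiVersion: ceph.rook.io/v1
kind: CephObjectRealm
metadata:
  name: realm-a
  namespace: rook-ceph
---
apiVersion: ceph.rook.io/v1
kind: CephObjectZoneGroup
metadata:
  name: zonegroup-a
  namespace: rook-ceph
spec:
  realm: realm-a
---
apiVersion: ceph.rook.io/v1
kind: CephObjectZone
metadata:
  name: zone-a
  namespace: rook-ceph
spec:
  zoneGroup: zonegroup-a
  metadataPool:
    replicated:
      size: 3
  dataPool:
    replicated:
      size: 3
---
apiVersion: ceph.rook.io/v1
kind: CephObjectStore
metadata:
  name: multisite-store
  namespace: rook-ceph
spec:
  gateway:
    port: 80
    instances: 1
  zone:
    name: zone-a
```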
object-multisite.yaml in the examples directory can be used to create the multisite CRDs. The first zone group created in a realm is the master zone group. The first zone created in a zone group is the master zone. When a non-master zone or non-master zone group is created, the zone group or zone is not in the Ceph Radosgw Multisite Period until an object-store is created in that zone (and zone group). The zone will create the pools for the object-store(s) that are in the zone to use. When one of the multisite CRs (realm, zone group, zone) is deleted, the underlying ceph realm/zone group/zone is not deleted, nor are the pools created by the zone. See the \"Multisite Cleanup\" section for more information. For more information on the multisite CRDs, see the related CRDs:
If an admin wants to sync data from another cluster, the admin needs to pull a realm on a Rook Ceph cluster from another Rook Ceph (or Ceph) cluster. To begin doing this, the admin needs 2 pieces of information:
To pull a Ceph realm from a remote Ceph cluster, an endpoint in the realm's master zone (on the remote cluster) is needed. If an admin does not know of an endpoint that fits this criterion, the admin can find such an endpoint on the remote Ceph cluster (via the toolbox if it is a Rook Ceph Cluster) by running the command below. A list of endpoints in the master zone of the master zone group appears in the command's output. This endpoint must also be resolvable from the new Rook Ceph cluster. To test this, run a lookup against the endpoint from the new cluster. Finally, add the endpoint to the pull section of the CephObjectRealm resource. "},{"location":"Storage-Configuration/Object-Storage-RGW/ceph-object-multisite/#getting-realm-access-key-and-secret-key","title":"Getting Realm Access Key and Secret Key","text":"The access key and secret key of the system user are keys that allow other Ceph clusters to pull the realm of the system user. "},{"location":"Storage-Configuration/Object-Storage-RGW/ceph-object-multisite/#getting-the-realm-access-key-and-secret-key-from-the-rook-ceph-cluster","title":"Getting the Realm Access Key and Secret Key from the Rook Ceph Cluster","text":""},{"location":"Storage-Configuration/Object-Storage-RGW/ceph-object-multisite/#system-user-for-multisite","title":"System User for Multisite","text":"When an admin creates a ceph-object-realm, a system user is automatically created for the realm with an access key and a secret key. This system user has the name \"$REALM_NAME-system-user\". For example, if the realm name is realm-a, the system user is named realm-a-system-user. These keys for the user are exported as a kubernetes secret called \"$REALM_NAME-keys\" (ex: realm-a-keys). This system user is used internally by RGW for data replication. "},{"location":"Storage-Configuration/Object-Storage-RGW/ceph-object-multisite/#getting-keys-from-k8s-secret","title":"Getting keys from k8s secret","text":"To get these keys from the cluster the realm was originally created on, run the command below. Edit the exported secret as needed, then create a kubernetes secret on the pulling Rook Ceph cluster with the same secrets yaml file. "},{"location":"Storage-Configuration/Object-Storage-RGW/ceph-object-multisite/#getting-the-realm-access-key-and-secret-key-from-a-non-rook-ceph-cluster","title":"Getting the Realm Access Key and Secret Key from a Non Rook Ceph Cluster","text":"The access key and the secret key of the system user can be found in the output of running the following command on a non-rook ceph cluster: Then base64 encode each of the keys and create a secrets yaml file containing them; only the access key and secret key values are needed. Finally, create a kubernetes secret on the pulling Rook Ceph cluster with the new secrets yaml file. "},{"location":"Storage-Configuration/Object-Storage-RGW/ceph-object-multisite/#pulling-a-realm-on-a-new-rook-ceph-cluster","title":"Pulling a Realm on a New Rook Ceph Cluster","text":"Once the admin knows the endpoint and the secret for the keys has been created, the admin should create:
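Among the resources to create is a CephObjectRealm that pulls the remote realm. A hedged sketch (the endpoint and namespace are placeholders; compare with object-multisite-pull-realm.yaml):

```yaml
apiVersion: ceph.rook.io/v1
kind: CephObjectRealm
metadata:
  # must match the realm name on the remote cluster
  name: realm-a
  namespace: rook-ceph
spec:
  pull:
    # endpoint in the master zone of the remote realm
    endpoint: http://10.17.0.20:80
```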
object-multisite-pull-realm.yaml (with changes) in the examples directory can be used to create the multisite CRDs. "},{"location":"Storage-Configuration/Object-Storage-RGW/ceph-object-multisite/#scaling-a-multisite","title":"Scaling a Multisite","text":"Scaling the number of gateways that run the synchronization thread to 2 or more can increase the latency of the replication of each S3 object. The recommended way to scale a multisite configuration is to dissociate the gateway dedicated to the synchronization from gateways that serve clients. The two types of gateways can be deployed by creating two CephObjectStores associated with the same CephObjectZone. The objectstore that deploys the gateway dedicated to the synchronization must have "},{"location":"Storage-Configuration/Object-Storage-RGW/ceph-object-multisite/#multisite-cleanup","title":"Multisite Cleanup","text":"Multisite configuration must be cleaned up by hand. Deleting a realm/zone group/zone CR will not delete the underlying Ceph realm, zone group, zone, or the pools associated with a zone. "},{"location":"Storage-Configuration/Object-Storage-RGW/ceph-object-multisite/#deleting-and-reconfiguring-the-ceph-object-zone","title":"Deleting and Reconfiguring the Ceph Object Zone","text":"Changes made to the resource's configuration or deletion of the resource are not reflected on the Ceph cluster. When the ceph-object-zone resource is deleted or modified, the zone is not deleted from the Ceph cluster. Zone deletion must be done through the toolbox. "},{"location":"Storage-Configuration/Object-Storage-RGW/ceph-object-multisite/#changing-the-master-zone","title":"Changing the Master Zone","text":"The Rook toolbox can change the master zone in a zone group. "},{"location":"Storage-Configuration/Object-Storage-RGW/ceph-object-multisite/#deleting-zone","title":"Deleting Zone","text":"The Rook toolbox can modify the Ceph Multisite state via the radosgw-admin command. There are two scenarios possible when deleting a zone. The following commands, run via the toolbox, deletes the zone if there is only one zone in the zone group. In the other scenario, there are more than one zones in a zone group. Care must be taken when changing which zone is the master zone. Please read the following documentation before running the below commands: The following commands, run via toolboxes, remove the zone from the zone group first, then delete the zone. When a zone is deleted, the pools for that zone are not deleted. "},{"location":"Storage-Configuration/Object-Storage-RGW/ceph-object-multisite/#deleting-pools-for-a-zone","title":"Deleting Pools for a Zone","text":"The Rook toolbox can delete pools. Deleting pools should be done with caution. The following documentation on pools should be read before deleting any pools. When a zone is created the following pools are created for each zone: Here is an example command to delete the .rgw.buckets.data pool for zone-a. In this command the pool name must be mentioned twice for the pool to be removed. "},{"location":"Storage-Configuration/Object-Storage-RGW/ceph-object-multisite/#removing-an-object-store-from-a-zone","title":"Removing an Object Store from a Zone","text":"When an object-store (created in a zone) is deleted, the endpoint for that object store is removed from that zone, via Removing object store(s) from the master zone of the master zone group should be done with caution. When all of these object-stores are deleted the period cannot be updated and that realm cannot be pulled. 
"},{"location":"Storage-Configuration/Object-Storage-RGW/ceph-object-multisite/#zone-group-deletion","title":"Zone Group Deletion","text":"Changes made to the resource's configuration or deletion of the resource are not reflected on the Ceph cluster. When the ceph-object-zone group resource is deleted or modified, the zone group is not deleted from the Ceph cluster. Zone Group deletion must be done through the toolbox. "},{"location":"Storage-Configuration/Object-Storage-RGW/ceph-object-multisite/#deleting-a-zone-group","title":"Deleting a Zone Group","text":"The Rook toolbox can modify the Ceph Multisite state via the radosgw-admin command. The following command, run via the toolbox, deletes the zone group. "},{"location":"Storage-Configuration/Object-Storage-RGW/ceph-object-multisite/#realm-deletion","title":"Realm Deletion","text":"Changes made to the resource's configuration or deletion of the resource are not reflected on the Ceph cluster. When the ceph-object-realm resource is deleted or modified, the realm is not deleted from the Ceph cluster. Realm deletion must be done via the toolbox. "},{"location":"Storage-Configuration/Object-Storage-RGW/ceph-object-multisite/#deleting-a-realm","title":"Deleting a Realm","text":"The Rook toolbox can modify the Ceph Multisite state via the radosgw-admin command. The following command, run via the toolbox, deletes the realm. "},{"location":"Storage-Configuration/Object-Storage-RGW/ceph-object-multisite/#configure-an-existing-object-store-for-multisite","title":"Configure an Existing Object Store for Multisite","text":"When an object store is configured by Rook, it internally creates a zone, zone group, and realm with the same name as the object store. To enable multisite, you will need to create the corresponding zone, zone group, and realm CRs with the same name as the object store. For example, to create multisite CRs for an object store named Now modify the existing "},{"location":"Storage-Configuration/Object-Storage-RGW/ceph-object-multisite/#using-custom-names","title":"Using custom names","text":"If names different from the object store need to be set for the realm, zone, or zone group, first rename them in the backend via toolbox pod, then following the procedure above. Important Renaming in the toolbox must be performed before creating the multisite CRs "},{"location":"Storage-Configuration/Object-Storage-RGW/ceph-object-swift/","title":"Object Store with Keystone and Swift","text":"Note The Object Store with Keystone and Swift is currently in experimental mode. Ceph RGW can integrate natively with the Swift API and Keystone via the CephObjectStore CRD. This allows native integration of Rook-operated Ceph RGWs into OpenStack clouds. Note Authentication via the OBC and COSI features is not affected by this configuration. "},{"location":"Storage-Configuration/Object-Storage-RGW/ceph-object-swift/#create-a-local-object-store-with-keystone-and-swift","title":"Create a Local Object Store with Keystone and Swift","text":"This example will create a The OSDs must be located on different nodes, because the More details on the settings available for a Set the url in the auth section to point to the keystone service url. Prior to using keystone as authentication provider an admin user for rook to access and configure the keystone admin api is required. 
The user credentials for this admin user are provided by a secret in the same namespace which is referenced via the Note This example requires at least 3 bluestore OSDs, with each OSD located on a different node. This example assumes an existing OpenStack Keystone instance ready to use for authentication. After the The start of the RGW pod(s) confirms that the object store is configured. The swift service endpoint in OpenStack/Keystone must be created, in order to use the object store in Swift using for example the OpenStack CLI. The endpoint url should be set to the service endpoint of the created rgw instance. Afterwards any user which has the rights to access the projects resources (as defined in the OpenStack Keystone instance) can access the object store and create container and objects. Here the username and project are explicitly set to reflect use of the (non-admin) user. "},{"location":"Storage-Configuration/Object-Storage-RGW/ceph-object-swift/#basic-concepts","title":"Basic concepts","text":"When using Keystone as an authentication provider, Ceph uses the credentials of an admin user (provided in the secret references by For each user accessing the object store using Swift, Ceph implicitly creates a user which must be represented in Keystone with an authorized counterpart. Keystone checks for a user of the same name. Based on the name and other parameters ((OpenStack Keystone) project, (OpenStack Keystone) role) Keystone allows or disallows access to a swift container or object. Note that the implicitly created users are creaded in addition to any users that are created through other means, so Keystone authentication is not exclusive. It is not necessary to create any users in OpenStack Keystone (except for the admin user provided in the Keystone must support the v3-API-Version to be used with Rook. Other API versions are not supported. The admin user and all users accessing the Object store must exist and their authorizations configured accordingly in Keystone. "},{"location":"Storage-Configuration/Object-Storage-RGW/ceph-object-swift/#openstack-setup","title":"Openstack setup","text":"To use the Object Store in OpenStack using Swift the Swift service must be set and the endpoint urls for the Swift service created. The example configuration \"Create a Local Object Store with Keystone and Swift\" above contains more details and the corresponding CLI calls. "},{"location":"Storage-Configuration/Object-Storage-RGW/cosi/","title":"Container Object Storage Interface (COSI)","text":"The Ceph COSI driver provisions buckets for object storage. This document instructs on enabling the driver and consuming a bucket from a sample application. Note The Ceph COSI driver is currently in experimental mode. "},{"location":"Storage-Configuration/Object-Storage-RGW/cosi/#prerequisites","title":"Prerequisites","text":"COSI requires:
Deploy the COSI controller with these commands: "},{"location":"Storage-Configuration/Object-Storage-RGW/cosi/#ceph-cosi-driver","title":"Ceph COSI Driver","text":"The Ceph COSI driver will be started when the CephCOSIDriver CR is created and when the first CephObjectStore is created. The driver is created in the same namespace as Rook operator. "},{"location":"Storage-Configuration/Object-Storage-RGW/cosi/#admin-operations","title":"Admin Operations","text":""},{"location":"Storage-Configuration/Object-Storage-RGW/cosi/#create-a-bucketclass-and-bucketaccessclass","title":"Create a BucketClass and BucketAccessClass","text":"The BucketClass and BucketAccessClass are CRDs defined by COSI. The BucketClass defines the bucket class for the bucket. The BucketAccessClass defines the access class for the bucket. Rook will automatically create a secret named with "},{"location":"Storage-Configuration/Object-Storage-RGW/cosi/#user-operations","title":"User Operations","text":""},{"location":"Storage-Configuration/Object-Storage-RGW/cosi/#create-a-bucket","title":"Create a Bucket","text":"To create a bucket, use the BucketClass to pointing the required object store and then define BucketClaim request as below: "},{"location":"Storage-Configuration/Object-Storage-RGW/cosi/#bucket-access","title":"Bucket Access","text":"Define access to the bucket by creating the BucketAccess resource: The secret will be created which contains the access details for the bucket in JSON format in the namespace of BucketAccess: "},{"location":"Storage-Configuration/Object-Storage-RGW/cosi/#consuming-the-bucket-via-secret","title":"Consuming the Bucket via secret","text":"To access the bucket from an application pod, mount the secret for accessing the bucket: The Secret will be mounted in the pod in the path: Object storage exposes an S3 API and or a Swift API to the storage cluster for applications to put and get data. "},{"location":"Storage-Configuration/Object-Storage-RGW/object-storage/#prerequisites","title":"Prerequisites","text":"This guide assumes a Rook cluster as explained in the Quickstart. "},{"location":"Storage-Configuration/Object-Storage-RGW/object-storage/#configure-an-object-store","title":"Configure an Object Store","text":"Rook can configure the Ceph Object Store for several different scenarios. See each linked section for the configuration details.
Note Updating the configuration of an object store between these types is not supported. Rook has the ability to either deploy an object store in Kubernetes or to connect to an external RGW service. Most commonly, the object store will be configured in Kubernetes by Rook. Alternatively see the external section to consume an existing Ceph cluster with Rados Gateways from Rook. "},{"location":"Storage-Configuration/Object-Storage-RGW/object-storage/#create-a-local-object-store-with-s3","title":"Create a Local Object Store with S3","text":"The below sample will create a Note This sample requires at least 3 OSDs, with each OSD located on a different node. The OSDs must be located on different nodes, because the See the Object Store CRD, for more detail on the settings available for a After the Create an object store: To confirm the object store is configured, wait for the RGW pod(s) to start: To consume the object store, continue below in the section to Create a bucket. "},{"location":"Storage-Configuration/Object-Storage-RGW/object-storage/#create-local-object-stores-with-shared-pools","title":"Create Local Object Store(s) with Shared Pools","text":"The below sample will create one or more object stores. Shared Ceph pools will be created, which reduces the overhead of additional Ceph pools for each additional object store. Data isolation is enforced between the object stores with the use of Ceph RADOS namespaces. The separate RADOS namespaces do not allow access of the data across object stores. Note This sample requires at least 3 OSDs, with each OSD located on a different node. The OSDs must be located on different nodes, because the Create the shared pools that will be used by each of the object stores. Note If object stores have been previously created, the first pool below ( Create the shared pools: "},{"location":"Storage-Configuration/Object-Storage-RGW/object-storage/#create-each-object-store","title":"Create Each Object Store","text":"After the pools have been created above, create each object store to consume the shared pools. Create the object store: To confirm the object store is configured, wait for the RGW pod(s) to start: Additional object stores can be created based on the same shared pools by simply changing the To consume the object store, continue below in the section to Create a bucket. Modify the default example object store name from Attention This feature is experimental. This section contains a guide on how to configure RGW's pool placement and storage classes with Rook. Object Storage API allows users to override where bucket data will be stored during bucket creation. With To enable this feature, configure
Example: Configure S3 clients can direct objects into the pools defined in the above. The example below uses the s5cmd CLI tool which is pre-installed in the toolbox pod: "},{"location":"Storage-Configuration/Object-Storage-RGW/object-storage/#connect-to-an-external-object-store","title":"Connect to an External Object Store","text":"Rook can connect to existing RGW gateways to work in conjunction with the external mode of the The Then create a secret with the user credentials: For an external CephCluster, configure Rook to consume external RGW servers with the following: See Even though multiple The CephObjectStore resource Each object store also creates a Kubernetes service that can be used as a client endpoint from within the Kubernetes cluster. The DNS name of the service is For external clusters, the default endpoint is the first The advertised endpoint can be overridden using Rook always uses the advertised endpoint to perform management operations against the object store. When TLS is enabled, the TLS certificate must always specify the endpoint DNS name to allow secure management operations. "},{"location":"Storage-Configuration/Object-Storage-RGW/object-storage/#create-a-bucket","title":"Create a Bucket","text":"Info This document is a guide for creating bucket with an Object Bucket Claim (OBC). To create a bucket with the experimental COSI Driver, see the COSI documentation. Now that the object store is configured, next we need to create a bucket where a client can read and write objects. A bucket can be created by defining a storage class, similar to the pattern used by block and file storage. First, define the storage class that will allow object clients to create a bucket. The storage class defines the object storage system, the bucket retention policy, and other properties required by the administrator. Save the following as If you\u2019ve deployed the Rook operator in a namespace other than Based on this storage class, an object client can now request a bucket by creating an Object Bucket Claim (OBC). When the OBC is created, the Rook bucket provisioner will create a new bucket. Notice that the OBC references the storage class that was created above. Save the following as Now that the claim is created, the operator will create the bucket as well as generate other artifacts to enable access to the bucket. A secret and ConfigMap are created with the same name as the OBC and in the same namespace. The secret contains credentials used by the application pod to access the bucket. The ConfigMap contains bucket endpoint information and is also consumed by the pod. See the Object Bucket Claim Documentation for more details on the The following commands extract key pieces of information from the secret and configmap:\" If any Now that you have the object store configured and a bucket created, you can consume the object storage from an S3 client. This section will guide you through testing the connection to the To simplify the s3 client commands, you will want to set the four environment variables for use by your client (ie. inside the toolbox). See above for retrieving the variables for a bucket created by an
The variables for the user generated in this example might be: The access key and secret key can be retrieved as described in the section above on client connections or below in the section creating a user if you are not creating the buckets with an To test the Important The default toolbox.yaml does not contain the s5cmd. The toolbox must be started with the rook operator image (toolbox-operator-image), which does contain s5cmd. "},{"location":"Storage-Configuration/Object-Storage-RGW/object-storage/#put-or-get-an-object","title":"PUT or GET an object","text":"Upload a file to the newly created bucket Download and verify the file from the bucket "},{"location":"Storage-Configuration/Object-Storage-RGW/object-storage/#monitoring-health","title":"Monitoring health","text":"Rook configures health probes on the deployment created for CephObjectStore gateways. Refer to the CRD document for information about configuring the probes and monitoring the deployment status. "},{"location":"Storage-Configuration/Object-Storage-RGW/object-storage/#access-external-to-the-cluster","title":"Access External to the Cluster","text":"Rook sets up the object storage so pods will have access internal to the cluster. If your applications are running outside the cluster, you will need to setup an external service through a First, note the service that exposes RGW internal to the cluster. We will leave this service intact and create a new service for external access. Save the external service as Now create the external service. See both rgw services running and notice what port the external service is running on: Internally the rgw service is running on port If you need to create an independent set of user credentials to access the S3 endpoint, create a See the Object Store User CRD for more detail on the settings available for a When the The AccessKey and SecretKey data fields can be mounted in a pod as an environment variable. More information on consuming kubernetes secrets can be found in the K8s secret documentation To directly retrieve the secrets: "},{"location":"Storage-Configuration/Object-Storage-RGW/object-storage/#enable-tls","title":"Enable TLS","text":"TLS is critical for securing object storage data access, and it is assumed as a default by many S3 clients. TLS is enabled for CephObjectStores by configuring Ceph RGW only supports a single TLS certificate. If the given TLS certificate is a concatenation of multiple certificates, only the first certificate will be used by the RGW as the server certificate. Therefore, the TLS certificate given must include all endpoints that clients will use for access as subject alternate names (SANs). The CephObjectStore service endpoint must be added as a SAN on the TLS certificate. If it is not possible to add the service DNS name as a SAN on the TLS certificate, set Note OpenShift users can use add The Ceph Object Gateway supports accessing buckets using virtual host-style addressing, which allows addressing buckets using the bucket name as a subdomain in the endpoint. AWS has deprecated the the alternative path-style addressing mode which is Rook and Ceph's default. As a result, many end-user applications have begun to remove path-style support entirely. Many production clusters will have to enable virtual host-style address. Virtual host-style addressing requires 2 things:
Wildcard addressing can be configured in myriad ways. Some options:
The minimum recommended configuration declares the wildcard-addressable DNS name for the object store; a more complex configuration can additionally declare other custom DNS names used by clients. "},{"location":"Storage-Configuration/Object-Storage-RGW/object-storage/#object-multisite","title":"Object Multisite","text":"Multisite is a feature of Ceph that allows object stores to replicate their data over multiple Ceph clusters. Multisite also allows object stores to be independent and isolated from other object stores in a cluster. For more information on multisite, please read the ceph multisite overview for how to run it. "},{"location":"Storage-Configuration/Object-Storage-RGW/object-storage/#using-swift-and-keystone","title":"Using Swift and Keystone","text":"It is possible to access an object store using the Swift API. Using Swift requires the use of OpenStack Keystone as an authentication provider. More information on the use of Swift and Keystone can be found in the document on Object Store with Keystone and Swift. "},{"location":"Storage-Configuration/Shared-Filesystem-CephFS/filesystem-mirroring/","title":"Filesystem Mirroring","text":"Ceph filesystem mirroring is a process of asynchronous replication of snapshots to a remote CephFS file system. Snapshots are synchronized by mirroring snapshot data followed by creating a snapshot with the same name (for a given directory on the remote file system) as the snapshot being synchronized. It is generally useful when planning for Disaster Recovery. Mirroring is for clusters that are geographically distributed and stretching a single cluster is not possible due to high latencies. "},{"location":"Storage-Configuration/Shared-Filesystem-CephFS/filesystem-mirroring/#prerequisites","title":"Prerequisites","text":"This guide assumes you have created a Rook cluster as explained in the main quickstart guide. "},{"location":"Storage-Configuration/Shared-Filesystem-CephFS/filesystem-mirroring/#create-the-filesystem-with-mirroring-enabled","title":"Create the Filesystem with Mirroring enabled","text":"The following will enable mirroring on the filesystem: "},{"location":"Storage-Configuration/Shared-Filesystem-CephFS/filesystem-mirroring/#create-the-cephfs-mirror-daemon","title":"Create the cephfs-mirror daemon","text":"Launch the cephfs-mirror daemon by creating a CephFilesystemMirror resource. Please refer to Filesystem Mirror CRD for more information. "},{"location":"Storage-Configuration/Shared-Filesystem-CephFS/filesystem-mirroring/#configuring-mirroring-peers","title":"Configuring mirroring peers","text":"Once mirroring is enabled, Rook will by default create its own bootstrap peer token so that it can be used by another cluster. The bootstrap peer token can be found in a Kubernetes Secret. The name of the Secret is present in the Status field of the CephFilesystem CR: This secret can then be fetched like so: "},{"location":"Storage-Configuration/Shared-Filesystem-CephFS/filesystem-mirroring/#import-the-token-in-the-destination-cluster","title":"Import the token in the Destination cluster","text":"The decoded secret must be saved in a file before importing. See the CephFS mirror documentation on how to add a bootstrap peer. Further refer to the CephFS mirror documentation to configure a directory for snapshot mirroring.
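As a hedged sketch (the filesystem name, token file path, and directory are placeholders; the commands are the upstream CephFS mirroring commands and should be checked against the linked documentation), the import and directory configuration run from the toolbox of the cluster performing the import might look like:

```console
# import the decoded bootstrap token for the filesystem
ceph fs snapshot mirror peer_bootstrap import myfs $(cat /tmp/peer-token)
# enable snapshot mirroring for a directory of the filesystem
ceph fs snapshot mirror add myfs /volumes/_nogroup/myshare
```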
"},{"location":"Storage-Configuration/Shared-Filesystem-CephFS/filesystem-mirroring/#verify-that-the-snapshots-have-synced","title":"Verify that the snapshots have synced","text":"To check the For example : Please refer to the Fetch the For getting "},{"location":"Storage-Configuration/Shared-Filesystem-CephFS/filesystem-storage/","title":"Filesystem Storage Overview","text":"A filesystem storage (also named shared filesystem) can be mounted with read/write permission from multiple pods. This may be useful for applications which can be clustered using a shared filesystem. This example runs a shared filesystem for the kube-registry. "},{"location":"Storage-Configuration/Shared-Filesystem-CephFS/filesystem-storage/#prerequisites","title":"Prerequisites","text":"This guide assumes you have created a Rook cluster as explained in the main quickstart guide "},{"location":"Storage-Configuration/Shared-Filesystem-CephFS/filesystem-storage/#create-the-filesystem","title":"Create the Filesystem","text":"Create the filesystem by specifying the desired settings for the metadata pool, data pools, and metadata server in the Save this shared filesystem definition as The Rook operator will create all the pools and other resources necessary to start the service. This may take a minute to complete. To confirm the filesystem is configured, wait for the mds pods to start: To see detailed status of the filesystem, start and connect to the Rook toolbox. A new line will be shown with "},{"location":"Storage-Configuration/Shared-Filesystem-CephFS/filesystem-storage/#provision-storage","title":"Provision Storage","text":"Before Rook can start provisioning storage, a StorageClass needs to be created based on the filesystem. This is needed for Kubernetes to interoperate with the CSI driver to create persistent volumes. Save this storage class definition as If you've deployed the Rook operator in a namespace other than \"rook-ceph\" as is common change the prefix in the provisioner to match the namespace you used. For example, if the Rook operator is running in \"rook-op\" the provisioner value should be \"rook-op.rbd.csi.ceph.com\". Create the storage class. "},{"location":"Storage-Configuration/Shared-Filesystem-CephFS/filesystem-storage/#quotas","title":"Quotas","text":"Attention The CephFS CSI driver uses quotas to enforce the PVC size requested. Only newer kernels support CephFS quotas (kernel version of at least 4.17). If you require quotas to be enforced and the kernel driver does not support it, you can disable the kernel driver and use the FUSE client. This can be done by setting As an example, we will start the kube-registry pod with the shared filesystem as the backing store. Save the following spec as Create the Kube registry deployment: You now have a docker registry which is HA with persistent storage. "},{"location":"Storage-Configuration/Shared-Filesystem-CephFS/filesystem-storage/#kernel-version-requirement","title":"Kernel Version Requirement","text":"If the Rook cluster has more than one filesystem and the application pod is scheduled to a node with kernel version older than 4.7, inconsistent results may arise since kernels older than 4.7 do not support specifying filesystem namespaces. 
"},{"location":"Storage-Configuration/Shared-Filesystem-CephFS/filesystem-storage/#consume-the-shared-filesystem-toolbox","title":"Consume the Shared Filesystem: Toolbox","text":"Once you have pushed an image to the registry (see the instructions to expose and use the kube-registry), verify that kube-registry is using the filesystem that was configured above by mounting the shared filesystem in the toolbox pod. See the Direct Filesystem topic for more details. "},{"location":"Storage-Configuration/Shared-Filesystem-CephFS/filesystem-storage/#consume-the-shared-filesystem-across-namespaces","title":"Consume the Shared Filesystem across namespaces","text":"A PVC that you create using the However there are some use cases where you want to share the content from a CephFS-based PVC among different Pods in different namespaces, for a shared library for example, or a collaboration workspace between applications running in different namespaces. You can do that using the following recipe. "},{"location":"Storage-Configuration/Shared-Filesystem-CephFS/filesystem-storage/#shared-volume-creation","title":"Shared volume creation","text":"
Your YAML should look like this:
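A heavily hedged sketch of what the copied resources can look like (every value is a placeholder; the spec.csi block, capacity, and access modes must be copied verbatim from the PV that the CSI driver originally created, and the new PVC binds to the copied PV by name):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  # new, unique name for the copied PV
  name: shared-cephfs-pv
spec:
  accessModes:
    - ReadWriteMany
  capacity:
    storage: 10Gi
  persistentVolumeReclaimPolicy: Retain
  storageClassName: rook-cephfs
  csi:
    driver: rook-ceph.cephfs.csi.ceph.com
    # copy volumeHandle and volumeAttributes unchanged from the original PV
    volumeHandle: <copied-from-original-pv>
    volumeAttributes: {}
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-cephfs-pvc
  namespace: another-namespace
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
  storageClassName: rook-cephfs
  # bind directly to the copied PV
  volumeName: shared-cephfs-pv
```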
You now have access to the same CephFS subvolume from different PVCs in different namespaces. Redo the previous steps (copy the PV with a new name, create a PVC pointing to it) in each namespace in which you want to use this subvolume. Note: the new PVCs/PVs we have created are static. Therefore CephCSI does not support snapshots, clones, resizing, or delete operations for them. If those operations are required, you must make them on the original PVC. "},{"location":"Storage-Configuration/Shared-Filesystem-CephFS/filesystem-storage/#shared-volume-removal","title":"Shared volume removal","text":"As the same CephFS volume is used by different PVCs/PVs, you must proceed carefully, and in order, to remove it properly.
Due to this bug, the global mount for a Volume that is mounted multiple times on the same node will not be unmounted. This does not cause any particular problem, apart from polluting the logs with unmount error messages, or leaving many different mounts hanging if you create and delete many shared PVCs or do not actually use them. Until this issue is solved, either on the Rook or Kubelet side, you can always manually unmount the unwanted hanging global mounts on the nodes:
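As a rough illustration (the exact global mount path depends on the Kubernetes version and CSI driver layout, so inspect the node first; the path below is an assumption):

```console
# on the affected node, list leftover CephFS global mounts
mount | grep cephfs | grep globalmount
# unmount the stale global mount reported above
umount /var/lib/kubelet/plugins/kubernetes.io/csi/pv/<pv-name>/globalmount
```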
To clean up all the artifacts created by the filesystem demo: To delete the filesystem components and backing data, delete the Filesystem CRD. Warning Data will be deleted if preserveFilesystemOnDelete=false. Note: If the \"preserveFilesystemOnDelete\" filesystem attribute is set to true, the above command won't delete the filesystem. Recreating the same CRD will reuse the existing filesystem. "},{"location":"Storage-Configuration/Shared-Filesystem-CephFS/filesystem-storage/#advanced-example-erasure-coded-filesystem","title":"Advanced Example: Erasure Coded Filesystem","text":"The Ceph filesystem example can be found here: Ceph Shared Filesystem - Samples - Erasure Coded. "},{"location":"Troubleshooting/ceph-common-issues/","title":"Ceph Common Issues","text":""},{"location":"Troubleshooting/ceph-common-issues/#topics","title":"Topics","text":"
Many of these problem cases are hard to summarize down to a short phrase that adequately describes the problem. Each problem will start with a bulleted list of symptoms. Keep in mind that not all symptoms may apply, depending on the configuration of Rook. If the majority of the symptoms are seen, there is a fair chance you are experiencing that problem. If the problem is not resolved after trying the suggestions found on this page, the Rook team is happy to help you troubleshoot the issues in their Slack channel. Once you have registered for the Rook Slack, proceed to the troubleshooting channel. See also the CSI Troubleshooting Guide. "},{"location":"Troubleshooting/ceph-common-issues/#troubleshooting-techniques","title":"Troubleshooting Techniques","text":"There are two main categories of information you will need to investigate issues in the cluster:
After you verify the basic health of the running pods, next you will want to run Ceph tools for status of the storage components. There are two ways to run the Ceph tools, either in the Rook toolbox or inside other Rook pods that are already running.
The rook-ceph-tools pod provides a simple environment to run Ceph tools. Once the pod is up and running, connect to the pod to execute Ceph commands to evaluate the current state of the cluster. "},{"location":"Troubleshooting/ceph-common-issues/#ceph-commands","title":"Ceph Commands","text":"Here are some common commands to troubleshoot a Ceph cluster:
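The following are standard Ceph CLI status commands, run from the toolbox:

```console
ceph status
ceph osd status
ceph osd df
ceph osd utilization
ceph osd pool stats
ceph osd tree
ceph pg stat
```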
The first two status commands provide the overall cluster health. The normal state for cluster operations is HEALTH_OK, but the cluster will still function when it is in a HEALTH_WARN state. If you are in a WARN state, then the cluster is in a condition in which it may enter the HEALTH_ERROR state, at which point all disk I/O operations are halted. If a HEALTH_WARN state is observed, then one should take action to prevent the cluster from halting when it enters the HEALTH_ERROR state. There are many Ceph sub-commands to look at and manipulate Ceph objects, well beyond the scope of this document. See the Ceph documentation for more details on gathering information about the health of the cluster. In addition, there are other helpful hints and some best practices located in the Advanced Configuration section. Of particular note, there are scripts for collecting logs and gathering OSD information there. "},{"location":"Troubleshooting/ceph-common-issues/#cluster-failing-to-service-requests","title":"Cluster failing to service requests","text":""},{"location":"Troubleshooting/ceph-common-issues/#symptoms","title":"Symptoms","text":"
Create a rook-ceph-tools pod to investigate the current state of Ceph. Here is an example of what one might see. In this case the Another indication is when one or more of the MON pods restart frequently. Note the 'mon107' that has only been up for 16 minutes in the following output. "},{"location":"Troubleshooting/ceph-common-issues/#solution","title":"Solution","text":"What is happening here is that the MON pods are restarting and one or more of the Ceph daemons are not getting configured with the proper cluster information. This is commonly the result of not specifying a value for The
When the operator is starting a cluster, the operator will start one mon at a time and check that they are healthy before continuing to bring up all three mons. If the first mon is not detected healthy, the operator will continue to check until it is healthy. If the first mon fails to start, a second and then a third mon may attempt to start. However, they will never form quorum and the orchestration will be blocked from proceeding. The crash-collector pods will be blocked from starting until the mons have formed quorum the first time. There are several common causes for the mons failing to form quorum:
First look at the logs of the operator to confirm if it is able to connect to the mons. Likely you will see an error similar to the following that the operator is timing out when connecting to the mon. The last command is The error would appear to be an authentication error, but it is misleading. The real issue is a timeout. "},{"location":"Troubleshooting/ceph-common-issues/#solution_1","title":"Solution","text":"If you see the timeout in the operator log, verify if the mon pod is running (see the next section). If the mon pod is running, check the network connectivity between the operator pod and the mon pod. A common issue is that the CNI is not configured correctly. To verify the network connectivity:
For example, this command will curl the first mon from the operator: If \"ceph v2\" is printed to the console, the connection was successful. If the command does not respond or otherwise fails, the network connection cannot be established. "},{"location":"Troubleshooting/ceph-common-issues/#failing-mon-pod","title":"Failing mon pod","text":"Second we need to verify if the mon pod started successfully. If the mon pod is failing as in this example, you will need to look at the mon pod status or logs to determine the cause. If the pod is in a crash loop backoff state, you should see the reason by describing the pod. See the solution in the next section regarding cleaning up the This is a common problem reinitializing the Rook cluster when the local directory used for persistence has not been purged. This directory is the Caution Deleting the See the Cleanup Guide for more details. "},{"location":"Troubleshooting/ceph-common-issues/#pvcs-stay-in-pending-state","title":"PVCs stay in pending state","text":""},{"location":"Troubleshooting/ceph-common-issues/#symptoms_2","title":"Symptoms","text":"
For the Wordpress example, you might see two PVCs in pending state. "},{"location":"Troubleshooting/ceph-common-issues/#investigation_2","title":"Investigation","text":"There are two common causes for the PVCs staying in pending state:
To confirm if you have OSDs in your cluster, connect to the Rook Toolbox and run the "},{"location":"Troubleshooting/ceph-common-issues/#osd-prepare-logs","title":"OSD Prepare Logs","text":"If you don't see the expected number of OSDs, let's investigate why they weren't created. On each node where Rook looks for OSDs to configure, you will see an \"osd prepare\" pod. See the section on why OSDs are not getting created to investigate the logs. "},{"location":"Troubleshooting/ceph-common-issues/#csi-driver","title":"CSI Driver","text":"The CSI driver may not be responding to the requests. Look in the logs of the CSI provisioner pod to see if there are any errors during the provisioning. There are two provisioner pods: Get the logs of each of the pods. One of them should be the \"leader\" and be responding to requests. See also the CSI Troubleshooting Guide. "},{"location":"Troubleshooting/ceph-common-issues/#operator-unresponsiveness","title":"Operator unresponsiveness","text":"Lastly, if you have OSDs If the \"osd prepare\" logs didn't give you enough clues about why the OSDs were not being created, please review your cluster.yaml configuration. The common misconfigurations include:
When an OSD starts, the device or directory will be configured for consumption. If there is an error with the configuration, the pod will crash and you will see the CrashLoopBackoff status for the pod. Look in the osd pod logs for an indication of the failure. One common case for failure is that you have re-deployed a test cluster and some state may remain from a previous deployment. If your cluster is larger than a few nodes, you may get lucky enough that the monitors were able to start and form quorum. However, now the OSDs pods may fail to start due to the old state. Looking at the OSD pod logs you will see an error about the file already existing. "},{"location":"Troubleshooting/ceph-common-issues/#solution_4","title":"Solution","text":"If the error is from the file that already exists, this is a common problem reinitializing the Rook cluster when the local directory used for persistence has not been purged. This directory is the
First, ensure that you have specified the devices correctly in the CRD. The Cluster CRD has several ways to specify the devices that are to be consumed by the Rook storage:
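As an illustrative sketch (node and device names are hypothetical), the storage section of the CephCluster CR can select devices in several ways:

```yaml
storage:
  useAllNodes: true
  # consume every available raw device on the selected nodes
  useAllDevices: true
  # or, instead, select devices with a regex filter
  # deviceFilter: "^sd[b-d]"
  # or list nodes and their devices explicitly
  # nodes:
  #   - name: "worker-1"
  #     devices:
  #       - name: "sdb"
  #       - name: "nvme0n1"
```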
Second, if Rook determines that a device is not available (has existing partitions or a formatted filesystem), Rook will skip consuming the devices. If Rook is not starting OSDs on the devices you expect, Rook may have skipped it for this reason. To see if a device was skipped, view the OSD preparation log on the node where the device was skipped. Note that it is completely normal and expected for OSD prepare pod to be in the Here are some key lines to look for in the log: "},{"location":"Troubleshooting/ceph-common-issues/#solution_5","title":"Solution","text":"Either update the CR with the correct settings, or clean the partitions or filesystem from your devices. To clean devices from a previous install see the cleanup guide. After the settings are updated or the devices are cleaned, trigger the operator to analyze the devices again by restarting the operator. Each time the operator starts, it will ensure all the desired devices are configured. The operator does automatically deploy OSDs in most scenarios, but an operator restart will cover any scenarios that the operator doesn't detect automatically. "},{"location":"Troubleshooting/ceph-common-issues/#node-hangs-after-reboot","title":"Node hangs after reboot","text":"This issue is fixed in Rook v1.3 or later. "},{"location":"Troubleshooting/ceph-common-issues/#symptoms_5","title":"Symptoms","text":"
On a node running a pod with a Ceph persistent volume When the reboot command is issued, network interfaces are terminated before disks are unmounted. This results in the node hanging as repeated attempts to unmount Ceph persistent volumes fail with the following error: "},{"location":"Troubleshooting/ceph-common-issues/#solution_6","title":"Solution","text":"The node needs to be drained before reboot. After the successful drain, the node can be rebooted as usual. Because Drain the node: Uncordon the node: "},{"location":"Troubleshooting/ceph-common-issues/#using-multiple-shared-filesystem-cephfs-is-attempted-on-a-kernel-version-older-than-47","title":"Using multiple shared filesystem (CephFS) is attempted on a kernel version older than 4.7","text":""},{"location":"Troubleshooting/ceph-common-issues/#symptoms_6","title":"Symptoms","text":"
The only solution to this problem is to upgrade your kernel to version 4.7 or higher. For additional info on the kernel version requirement for multiple shared filesystems (CephFS), see Filesystem - Kernel version requirement. "},{"location":"Troubleshooting/ceph-common-issues/#set-debug-log-level-for-all-ceph-daemons","title":"Set debug log level for all Ceph daemons","text":"You can set a given log level and apply it to all the Ceph daemons at the same time. For this, make sure the toolbox pod is running, then determine the level you want (between 0 and 20). You can find the list of all subsystems and their default values in Ceph's official logging and debug guide. Be careful when increasing the level, as it will produce very verbose logs. Assuming you want a log level of 1, you will run the appropriate command from the toolbox. Once you are done debugging, you can revert all the debug flags to their default values by running the following: "},{"location":"Troubleshooting/ceph-common-issues/#activate-log-to-file-for-a-particular-ceph-daemon","title":"Activate log to file for a particular Ceph daemon","text":"There are cases where looking at Kubernetes logs is not enough, for diverse reasons, but just to name a few:
So, for each daemon, logging to a file can be enabled through its Ceph configuration. This will activate logging on the filesystem, and you will be able to find the logs under the Rook data directory on the host. To disable logging to file, simply revert the same setting.
This happens when the following conditions are satisfied.
In addition, when this problem happens, you can see related messages in the kernel log. This problem will be solved by the following two fixes.
You can bypass this problem by using ext4 or any filesystem other than XFS. The filesystem type can be specified in the StorageClass used for the affected volumes.
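A minimal sketch, assuming the standard Kubernetes CSI fstype parameter on a Rook RBD StorageClass (other required parameters such as clusterID, pool, and the CSI secrets are omitted here):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-ceph-block-ext4
provisioner: rook-ceph.rbd.csi.ceph.com
parameters:
  # format new RBD volumes with ext4 instead of the default
  csi.storage.k8s.io/fstype: ext4
```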
"},{"location":"Troubleshooting/ceph-common-issues/#solution_9","title":"Solution","text":"The meaning of this warning is written in the document. However, in many cases it is benign. For more information, please see the blog entry. Please refer to Configuring Pools if you want to know the proper There is a critical flaw in OSD on LV-backed PVC. LVM metadata can be corrupted if both the host and OSD container modify it simultaneously. For example, the administrator might modify it on the host, while the OSD initialization process in a container could modify it too. In addition, if If you still decide to configure an OSD on LVM, please keep the following in mind to reduce the probability of this issue. "},{"location":"Troubleshooting/ceph-common-issues/#solution_10","title":"Solution","text":"
You can know whether the above-mentioned tag exists with the command: This problem doesn't happen in newly created LV-backed PVCs because the OSD container doesn't modify LVM metadata anymore. The existing lvm mode OSDs continue to work even though you upgrade your Rook. However, using raw mode OSDs is recommended because of the above-mentioned problem. You can replace the existing OSDs with raw mode OSDs by retiring them and adding new OSDs one by one. See the documents Remove an OSD and Add an OSD on a PVC. "},{"location":"Troubleshooting/ceph-common-issues/#osd-prepare-job-fails-due-to-low-aio-max-nr-setting","title":"OSD prepare job fails due to low aio-max-nr setting","text":"If the Kernel is configured with a low aio-max-nr setting, the OSD prepare job might fail with the following error: To overcome this, you need to increase the value of the aio-max-nr kernel setting. Alternatively, you can have a DaemonSet apply the configuration for you on all your nodes. "},{"location":"Troubleshooting/ceph-common-issues/#unexpected-partitions-created","title":"Unexpected partitions created","text":""},{"location":"Troubleshooting/ceph-common-issues/#symptoms_10","title":"Symptoms","text":"Users running Rook versions v1.6.0-v1.6.7 may observe unwanted OSDs on partitions that appear unexpectedly and seemingly randomly, which can corrupt existing OSDs. Unexpected partitions are created on host disks that are used by Ceph OSDs. This happens more often on SSDs than HDDs and usually only on disks that are 875GB or larger. The underlying issue causing this is Atari partition (sometimes identified as AHDI) support in the Linux kernel. Atari partitions have very relaxed specifications compared to other partition types, and it is relatively easy for random data written to a disk to appear as an Atari partition to the Linux kernel. Ceph's Bluestore OSDs have an anecdotally high probability of writing data onto disks that can appear to the kernel as an Atari partition. See GitHub rook/rook - Issue 7940 unexpected partition on disks >= 1TB (atari partitions) for more detailed information and discussion. "},{"location":"Troubleshooting/ceph-common-issues/#solution_11","title":"Solution","text":""},{"location":"Troubleshooting/ceph-common-issues/#recover-from-corruption-v160-v167","title":"Recover from corruption (v1.6.0-v1.6.7)","text":"If you are using Rook v1.6, you must first update to v1.6.8 or higher to avoid further incidents of OSD corruption caused by these Atari partitions. An old workaround was previously suggested, but the proper fix is to update: to resolve the issue, immediately update to v1.6.8 or higher. After the update, no corruption should occur on OSDs created in the future. Next, to get back to a healthy Ceph cluster state, focus on one corrupted disk at a time and remove all OSDs on each corrupted disk one disk at a time. As an example, you may have
If your Rook cluster does not have any critical data stored in it, it may be simpler to uninstall Rook completely and redeploy with v1.6.8 or higher. "},{"location":"Troubleshooting/ceph-common-issues/#operator-environment-variables-are-ignored","title":"Operator environment variables are ignored","text":""},{"location":"Troubleshooting/ceph-common-issues/#symptoms_11","title":"Symptoms","text":"Configuration settings passed as environment variables do not take effect as expected. For example, the discover daemonset is not created, even though Inspect the Look for lines with the Verify that both of the following messages are present in the operator logs: "},{"location":"Troubleshooting/ceph-common-issues/#solution_12","title":"Solution","text":"If it does not exist, create an empty ConfigMap: If the ConfigMap exists, remove any keys that you wish to configure through the environment. "},{"location":"Troubleshooting/ceph-common-issues/#the-cluster-is-in-an-unhealthy-state-or-fails-to-configure-when-limitnofileinfinity-in-containerd","title":"The cluster is in an unhealthy state or fails to configure when LimitNOFILE=infinity in containerd","text":""},{"location":"Troubleshooting/ceph-common-issues/#symptoms_12","title":"Symptoms","text":"When trying to create a new deployment, Ceph mons keep crashing and the cluster fails to configure or remains in an unhealthy state. The nodes' CPUs are stuck at 100%. "},{"location":"Troubleshooting/ceph-common-issues/#solution_13","title":"Solution","text":"Before systemd v240, systemd would leave To fix this, set LimitNOFILE in the systemd service configuration to 1048576. Create an override.conf file with the new LimitNOFILE value: Reload systemd manager configuration, restart containerd and restart all monitors deployments: "},{"location":"Troubleshooting/ceph-csi-common-issues/","title":"CSI Common Issues","text":"Issues when provisioning volumes with the Ceph CSI driver can happen for many reasons such as:
The following troubleshooting steps can help identify a number of issues. "},{"location":"Troubleshooting/ceph-csi-common-issues/#block-rbd","title":"Block (RBD)","text":"If you are mounting block volumes (usually RWO), these are referred to as If you are mounting shared filesystem volumes (usually RWX), these are referred to as The Ceph monitors are the most critical component of the cluster to check first. Retrieve the mon endpoints from the services: If host networking is enabled in the CephCluster CR, you will instead need to find the node IPs for the hosts where the mons are running. The If you are seeing issues provisioning the PVC then you need to check the network connectivity from the provisioner pods.
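A minimal sketch of that check, assuming the default rook-ceph namespace, the standard CSI provisioner labels, and an illustrative mon ClusterIP:

```console
# List the mon services and note their ClusterIPs and ports (3300 for msgr2, 6789 for msgr1)
kubectl -n rook-ceph get svc -l app=rook-ceph-mon

# Find the RBD and CephFS provisioner pods
kubectl -n rook-ceph get pod -l app=csi-rbdplugin-provisioner
kubectl -n rook-ceph get pod -l app=csi-cephfsplugin-provisioner

# From inside a provisioner pod, probe a mon endpoint (the IP is illustrative;
# if curl is not present in the container, any TCP probe will do)
kubectl -n rook-ceph exec -it deploy/csi-rbdplugin-provisioner -- curl 10.104.165.31:3300 2>/dev/null
```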
For redundancy, there are two provisioner pods for each type. Make sure to test connectivity from all provisioner pods. Connect to the provisioner pods and verify the connection to the mon endpoints such as the following: If you see the response "ceph v2", the connection succeeded. If there is no response, there is a network issue connecting to the Ceph cluster. Check network connectivity for all monitor IPs and ports that are passed to ceph-csi. "},{"location":"Troubleshooting/ceph-csi-common-issues/#ceph-health","title":"Ceph Health","text":"Sometimes an unhealthy Ceph cluster can contribute to the issues in creating or mounting the PVC. Check that your Ceph cluster is healthy by connecting to the Toolbox and running the "},{"location":"Troubleshooting/ceph-csi-common-issues/#slow-operations","title":"Slow Operations","text":"Even slow ops in the Ceph cluster can contribute to the issues. In the toolbox, make sure that no slow ops are present and the Ceph cluster is healthy. If Ceph is not healthy, check the following health for more clues:
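For reference, a hedged set of toolbox commands covering the health and slow-ops checks above (the deployment name assumes the standard toolbox):

```console
kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- bash

ceph status           # overall health; look for HEALTH_OK and any slow ops warnings
ceph health detail    # details behind any WARN/ERR condition
ceph osd tree         # confirm all OSDs are up and in
ceph df               # confirm pools are not running out of space
```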
Make sure the pool you have specified in the Suppose the pool name mentioned in the If the pool is not in the list, create the For the shared filesystem (CephFS), check that the filesystem and pools you have specified in the Suppose the Now verify the The pool for the filesystem will have the suffix If the subvolumegroup is not specified in the ceph-csi configmap (where you have passed the ceph monitor information), Ceph-CSI creates the default subvolumegroup with the name csi. Verify that the subvolumegroup exists: If you don't see any issues with your Ceph cluster, the following sections will start debugging the issue from the CSI side. "},{"location":"Troubleshooting/ceph-csi-common-issues/#provisioning-volumes","title":"Provisioning Volumes","text":"At times the issue can also exist in Ceph-CSI or the sidecar containers used in Ceph-CSI. Ceph-CSI includes a number of sidecar containers in the provisioner pods, such as: The CephFS provisioner core CSI driver container name is Here is a summary of the sidecar containers: "},{"location":"Troubleshooting/ceph-csi-common-issues/#csi-provisioner","title":"csi-provisioner","text":"The external-provisioner is a sidecar container that dynamically provisions volumes by calling If there is an issue with PVC Create or Delete, check the logs of the "},{"location":"Troubleshooting/ceph-csi-common-issues/#csi-resizer","title":"csi-resizer","text":"The CSI If any issue exists in PVC expansion, you can check the logs of the "},{"location":"Troubleshooting/ceph-csi-common-issues/#csi-snapshotter","title":"csi-snapshotter","text":"The CSI external-snapshotter sidecar only watches for In Kubernetes 1.17 the volume snapshot feature was promoted to beta. In Kubernetes 1.20, the feature gate is enabled by default on standard Kubernetes deployments and cannot be turned off. Make sure you have installed the correct snapshotter CRD version. If you have not installed the snapshotter controller, see the Snapshots guide. The above CRDs must have the matching version in your The snapshot controller is responsible for creating both Rook only installs the snapshotter sidecar container, not the controller. It is recommended that Kubernetes distributors bundle and deploy the controller and CRDs as part of their Kubernetes cluster management process (independent of any CSI Driver). If your Kubernetes distribution does not bundle the snapshot controller, you may manually install these components. If any issue exists in the snapshot Create/Delete operation, you can check the logs of the csi-snapshotter sidecar container. If you see an error about a volume already existing, such as: The issue typically is in the Ceph cluster or network connectivity. If the issue is in provisioning the PVC, restarting the provisioner pods can help (for a CephFS issue, restart When a user requests to create the application pod with PVC, there is a three-step process
Each plugin pod has two important containers: one is The node-driver-registrar is a sidecar container that registers the CSI driver with Kubelet. More details can be found here. If any issue exists in attaching the PVC to the application pod check logs from driver-registrar sidecar container in plugin pod where your application pod is scheduled. You should see the response If you see a driver not found an error in the application pod describe output. Restarting the Each provisioner pod also has a sidecar container called The external-attacher is a sidecar container that attaches volumes to nodes by calling If any issue exists in attaching the PVC to the application pod first check the volumeattachment object created and also log from csi-attacher sidecar container in provisioner pod. "},{"location":"Troubleshooting/ceph-csi-common-issues/#cephfs-stale-operations","title":"CephFS Stale operations","text":"Check for any stale mount commands on the You need to exec in the Identify the If any commands are stuck check the dmesg logs from the node. Restarting the If you don\u2019t see any stuck messages, confirm the network connectivity, Ceph health, and slow ops. "},{"location":"Troubleshooting/ceph-csi-common-issues/#rbd-stale-operations","title":"RBD Stale operations","text":"Check for any stale You need to exec in the Identify the If any commands are stuck check the dmesg logs from the node. Restarting the If you don\u2019t see any stuck messages, confirm the network connectivity, Ceph health, and slow ops. "},{"location":"Troubleshooting/ceph-csi-common-issues/#dmesg-logs","title":"dmesg logs","text":"Check the dmesg logs on the node where pvc mounting is failing or the "},{"location":"Troubleshooting/ceph-csi-common-issues/#rbd-commands","title":"RBD Commands","text":"If nothing else helps, get the last executed command from the ceph-csi pod logs and run it manually inside the provisioner or plugin pod to see if there are errors returned even if they couldn't be seen in the logs. Where When a node is lost, you will see application pods on the node stuck in the To force delete the pod stuck in the After the force delete, wait for a timeout of about 8-10 minutes. If the pod still not in the running state, continue with the next section to blocklist the node. "},{"location":"Troubleshooting/ceph-csi-common-issues/#blocklisting-a-node","title":"Blocklisting a node","text":"To shorten the timeout, you can mark the node as \"blocklisted\" from the Rook toolbox so Rook can safely failover the pod sooner. After running the above command within a few minutes the pod will be running. "},{"location":"Troubleshooting/ceph-csi-common-issues/#removing-a-node-blocklist","title":"Removing a node blocklist","text":"After you are absolutely sure the node is permanently offline and that the node no longer needs to be blocklisted, remove the node from the blocklist. "},{"location":"Troubleshooting/ceph-toolbox/","title":"Toolbox","text":"The Rook toolbox is a container with common tools used for rook debugging and testing. The toolbox is based on CentOS, so more tools of your choosing can be easily installed with The toolbox can be run in two modes:
Hint Before running the toolbox you should have a running Rook cluster deployed (see the Quickstart Guide). Note The toolbox is not necessary if you are using kubectl plugin to execute Ceph commands. "},{"location":"Troubleshooting/ceph-toolbox/#interactive-toolbox","title":"Interactive Toolbox","text":"The rook toolbox can run as a deployment in a Kubernetes cluster where you can connect and run arbitrary Ceph commands. Launch the rook-ceph-tools pod: Wait for the toolbox pod to download its container and get to the Once the rook-ceph-tools pod is running, you can connect to it with: All available tools in the toolbox are ready for your troubleshooting needs. Example:
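A hedged example session, assuming the standard rook-ceph-tools deployment name and namespace:

```console
kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- bash

# Inside the toolbox, the usual Ceph tools are available, for example:
ceph status
ceph osd status
ceph df
rados df
```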
When you are done with the toolbox, you can remove the deployment: "},{"location":"Troubleshooting/ceph-toolbox/#toolbox-job","title":"Toolbox Job","text":"If you want to run Ceph commands as a one-time operation and collect the results later from the logs, you can run a script as a Kubernetes Job. The toolbox job will run a script that is embedded in the job spec. The script has the full flexibility of a bash script. In this example, the After the job completes, see the results of the script: "},{"location":"Troubleshooting/common-issues/","title":"Common Issues","text":"To help troubleshoot your Rook clusters, here are some tips on what information will help solve the issues you might be seeing. If after trying the suggestions found on this page and the problem is not resolved, the Rook team is very happy to help you troubleshoot the issues in their Slack channel. Once you have registered for the Rook Slack, proceed to the General channel to ask for assistance. "},{"location":"Troubleshooting/common-issues/#ceph-common-issues","title":"Ceph Common Issues","text":"For common issues specific to Ceph, see the Ceph Common Issues page. "},{"location":"Troubleshooting/common-issues/#troubleshooting-techniques","title":"Troubleshooting Techniques","text":"Kubernetes status and logs are the main resources needed to investigate issues in any Rook cluster. "},{"location":"Troubleshooting/common-issues/#kubernetes-tools","title":"Kubernetes Tools","text":"Kubernetes status is the first line of investigating when something goes wrong with the cluster. Here are a few artifacts that are helpful to gather:
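A hedged sketch of the artifacts typically gathered, assuming the default rook-ceph namespace:

```console
kubectl -n rook-ceph get pods -o wide                 # pod status and placement
kubectl -n rook-ceph get cephcluster                  # CephCluster phase and health summary
kubectl -n rook-ceph describe pod <pod-name>          # events for a failing pod
kubectl -n rook-ceph logs deploy/rook-ceph-operator   # operator logs
kubectl -n rook-ceph logs <pod-name>                  # logs of a specific daemon pod
```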
Some pods have specialized init containers, so you may need to look at logs for different containers within the pod.
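For example, a hedged way to list a pod's containers (including init containers) and pull logs from one of them:

```console
# List init and regular containers in the pod
kubectl -n rook-ceph get pod <pod-name> \
  -o jsonpath='{.spec.initContainers[*].name} {.spec.containers[*].name}{"\n"}'

# Fetch logs from a specific container
kubectl -n rook-ceph logs <pod-name> -c <container-name>

# Include a previously crashed instance of that container if needed
kubectl -n rook-ceph logs <pod-name> -c <container-name> --previous
```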
Rook is designed with Kubernetes design principles from the ground up. This topic is going to escape the bounds of Kubernetes storage and show you how to use block and file storage directly from a pod without any of the Kubernetes magic. The purpose of this topic is to help you quickly test a new configuration, although it is not meant to be used in production. All of the benefits of Kubernetes storage including failover, detach, and attach will not be available. If your pod dies, your mount will die with it. "},{"location":"Troubleshooting/direct-tools/#start-the-direct-mount-pod","title":"Start the Direct Mount Pod","text":"To test mounting your Ceph volumes, start a pod with the necessary mounts. An example is provided in the examples test directory: After the pod is started, connect to it like this: "},{"location":"Troubleshooting/direct-tools/#block-storage-tools","title":"Block Storage Tools","text":"After you have created a pool as described in the Block Storage topic, you can create a block image and mount it directly in a pod. This example will show how the Ceph rbd volume can be mounted in the direct mount pod. Create the Direct Mount Pod. Create a volume image (10MB): Map the block volume and format it and mount it: Write and read a file: "},{"location":"Troubleshooting/direct-tools/#unmount-the-block-device","title":"Unmount the Block device","text":"Unmount the volume and unmap the kernel device: "},{"location":"Troubleshooting/direct-tools/#shared-filesystem-tools","title":"Shared Filesystem Tools","text":"After you have created a filesystem as described in the Shared Filesystem topic, you can mount the filesystem from multiple pods. The the other topic you may have mounted the filesystem already in the registry pod. Now we will mount the same filesystem in the Direct Mount pod. This is just a simple way to validate the Ceph filesystem and is not recommended for production Kubernetes pods. Follow Direct Mount Pod to start a pod with the necessary mounts and then proceed with the following commands after connecting to the pod. Now you should have a mounted filesystem. If you have pushed images to the registry you will see a directory called Try writing and reading a file to the shared filesystem. "},{"location":"Troubleshooting/direct-tools/#unmount-the-filesystem","title":"Unmount the Filesystem","text":"To unmount the shared filesystem from the Direct Mount Pod: No data will be deleted by unmounting the filesystem. "},{"location":"Troubleshooting/disaster-recovery/","title":"Disaster Recovery","text":"Under extenuating circumstances, steps may be necessary to recover the cluster health. There are several types of recovery addressed in this document. "},{"location":"Troubleshooting/disaster-recovery/#restoring-mon-quorum","title":"Restoring Mon Quorum","text":"Under extenuating circumstances, the mons may lose quorum. If the mons cannot form quorum again, there is a manual procedure to get the quorum going again. The only requirement is that at least one mon is still healthy. The following steps will remove the unhealthy mons from quorum and allow you to form a quorum again with a single mon, then grow the quorum back to the original size. The Rook kubectl Plugin has a command If the name of the healthy mon is See the restore-quorum documentation for more details. 
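A hedged example of the plugin command referenced above, assuming the remaining healthy mon is named c and the default namespaces are in use:

```console
kubectl rook-ceph mons restore-quorum c
```

The plugin removes the unhealthy mons from the monmap and brings quorum back with the single healthy mon, after which Rook grows the quorum back to its original size.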
"},{"location":"Troubleshooting/disaster-recovery/#restoring-crds-after-deletion","title":"Restoring CRDs After Deletion","text":"When the Rook CRDs are deleted, the Rook operator will respond to the deletion event to attempt to clean up the cluster resources. If any data appears present in the cluster, Rook will refuse to allow the resources to be deleted since the operator will refuse to remove the finalizer on the CRs until the underlying data is deleted. For more details, see the dependency design doc. While it is good that the CRs will not be deleted and the underlying Ceph data and daemons continue to be available, the CRs will be stuck indefinitely in a Note In the following commands, the affected
Situations this section can help resolve:
Assuming
It is possible to migrate/restore an rook/ceph cluster from an existing Kubernetes cluster to a new one without resorting to SSH access or ceph tooling. This allows doing the migration using standard kubernetes resources only. This guide assumes the following:
Do the following in the new cluster:
When the rook-ceph namespace is accidentally deleted, the good news is that the cluster can be restored. With the content in the directory You need to manually create a ConfigMap and a Secret to make it work. The information required for the ConfigMap and Secret can be found in the The first resource is the secret named The values for the secret can be found in
All the fields in the data section need to be base64-encoded. Encoding can be done like this: Now save the secret as The second resource is the ConfigMap named rook-ceph-mon-endpoints, as seen in this example below: The monitors' service IPs are kept in the monitor data store, so you need to recreate the services with their original IPs. After you create this ConfigMap with the original service IPs, the Rook operator will create the correct services for you, with IPs matching those in the monitor data store. The monitor IDs, their service IPs, and the mapping between them can be found in dataDirHostPath/rook-ceph/rook-ceph.config, for example:
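A hedged illustration of both steps: encoding a value for the secret's data section, and the kind of content to expect in rook-ceph.config (assuming the default dataDirHostPath of /var/lib/rook; all IDs and IPs below are made up):

```console
# Base64-encode a value for the secret's data section
echo -n 'client.admin' | base64

# Recover the mon IDs and their original service IPs from the preserved config
cat /var/lib/rook/rook-ceph/rook-ceph.config
# [global]
# fsid                = 88029329-7b1f-4e55-8307-6d3f2c2c4f9a
# mon initial members = a b c
# mon host            = [v2:10.107.242.49:3300,v1:10.107.242.49:6789],[v2:10.102.211.25:3300,v1:10.102.211.25:6789],[v2:10.96.78.130:3300,v1:10.96.78.130:6789]
```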
Now that you have the info for the secret and the configmap, you are ready to restore the running cluster. Deploy Rook Ceph using the YAML files or Helm, with the same settings you had previously. After the operator is running, create the configmap and secret you have just crafted: Create your Ceph cluster CR (if possible, with the same settings as existed previously): Now your Rook Ceph cluster should be running again. "},{"location":"Troubleshooting/kubectl-plugin/","title":"kubectl Plugin","text":"The Rook kubectl plugin is a tool to help troubleshoot your Rook cluster. Here are a few of the operations that the plugin will assist with:
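A few hedged examples of the operations the plugin exposes (subcommand names reflect recent plugin releases and may differ in older versions):

```console
kubectl rook-ceph health                          # summarize cluster health
kubectl rook-ceph ceph status                     # run arbitrary ceph commands without the toolbox
kubectl rook-ceph operator restart                # restart the Rook operator
kubectl rook-ceph mons restore-quorum <good-mon>  # restore mon quorum from a single healthy mon
kubectl rook-ceph debug start rook-ceph-osd-0     # put a daemon into maintenance/debug mode
```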
See the kubectl-rook-ceph documentation for more details. "},{"location":"Troubleshooting/kubectl-plugin/#installation","title":"Installation","text":"
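The plugin is distributed through krew; a minimal install sketch, assuming krew is already set up:

```console
kubectl krew install rook-ceph

# Verify the plugin is available
kubectl rook-ceph --help
```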
Reference: Ceph Status "},{"location":"Troubleshooting/kubectl-plugin/#debug-mode","title":"Debug Mode","text":"Debug mode can be useful when a MON or OSD needs advanced maintenance operations that require the daemon to be stopped. Ceph tools such as
Reference: Debug Mode "},{"location":"Troubleshooting/openshift-common-issues/","title":"OpenShift Common Issues","text":""},{"location":"Troubleshooting/openshift-common-issues/#enable-monitoring-in-the-storage-dashboard","title":"Enable Monitoring in the Storage Dashboard","text":"OpenShift Console uses OpenShift Prometheus for monitoring and populating data in Storage Dashboard. Additional configuration is required to monitor the Ceph Cluster from the storage dashboard.
Attention Switch to
Warn This is an advanced topic please be aware of the steps you're performing or reach out to the experts for further guidance. There are some cases where the debug logs are not sufficient to investigate issues like high CPU utilization of a Ceph process. In that situation, coredump and perf information of a Ceph process is useful to be collected which can be shared with the Ceph team in an issue. To collect this information, please follow these steps:
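A hedged sketch of what those steps often look like for an OSD, assuming a CentOS-based Ceph image where dnf is available and debugging tools may be installed temporarily (package and process names are illustrative):

```console
# Exec into the affected daemon pod (an OSD is used as the example here)
kubectl -n rook-ceph exec -it <rook-ceph-osd-pod> -- bash

# Install debugging tools inside the container (temporary; lost on restart)
dnf install -y gdb perf procps-ng

# Record perf data for the busy process for 60 seconds, then inspect it
pid=$(pgrep -o ceph-osd)
perf record -g -p "$pid" -- sleep 60
perf report

# Capture a coredump of the process to attach to the issue
gcore "$pid"
```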
This guide will walk through the steps to upgrade the version of Ceph in a Rook cluster. Rook and Ceph upgrades are designed to ensure data remains available even while the upgrade is proceeding. Rook will perform the upgrades in a rolling fashion such that application pods are not disrupted. Rook is cautious when performing upgrades. When an upgrade is requested (the Ceph image has been updated in the CR), Rook will go through all the daemons one by one and will individually perform checks on them. It will make sure a particular daemon can be stopped before performing the upgrade. Once the deployment has been updated, it checks if this is ok to continue. After each daemon is updated we wait for things to settle (monitors to be in a quorum, PGs to be clean for OSDs, up for MDSes, etc.), then only when the condition is met we move to the next daemon. We repeat this process until all the daemons have been updated. "},{"location":"Upgrade/ceph-upgrade/#considerations","title":"Considerations","text":"
Rook v1.16 supports the following Ceph versions:
Important When an update is requested, the operator will check Ceph's status, if it is in Official Ceph container images can be found on Quay. These images are tagged in a few ways:
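Whichever tag style is chosen, the upgrade itself is requested by pointing the CephCluster CR at the new image; a hedged example where the namespace, cluster name, and tag are illustrative:

```console
kubectl -n rook-ceph patch CephCluster rook-ceph --type=merge \
  -p '{"spec": {"cephVersion": {"image": "quay.io/ceph/ceph:v18.2.4"}}}'
```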
Ceph containers other than the official images from the registry above will not be supported. "},{"location":"Upgrade/ceph-upgrade/#example-upgrade-to-ceph-reef","title":"Example Upgrade to Ceph Reef","text":""},{"location":"Upgrade/ceph-upgrade/#1-update-the-ceph-daemons","title":"1. Update the Ceph daemons","text":"The upgrade will be automated by the Rook operator after the desired Ceph image is changed in the CephCluster CRD ( "},{"location":"Upgrade/ceph-upgrade/#2-update-the-toolbox-image","title":"2. Update the toolbox image","text":"Since the Rook toolbox is not controlled by the Rook operator, users must perform a manual upgrade by modifying the "},{"location":"Upgrade/ceph-upgrade/#3-wait-for-the-pod-updates","title":"3. Wait for the pod updates","text":"As with upgrading Rook, now wait for the upgrade to complete. Status can be determined in a similar way to the Rook upgrade as well. Confirm the upgrade is completed when the versions are all on the desired Ceph version. "},{"location":"Upgrade/ceph-upgrade/#4-verify-cluster-health","title":"4. Verify cluster health","text":"Verify the Ceph cluster's health using the health verification. "},{"location":"Upgrade/health-verification/","title":"Health Verification","text":"Rook and Ceph upgrades are designed to ensure data remains available even while the upgrade is proceeding. Rook will perform the upgrades in a rolling fashion such that application pods are not disrupted. To ensure the upgrades are seamless, it is important to begin the upgrades with Ceph in a fully healthy state. This guide reviews ways of verifying the health of a CephCluster. See the troubleshooting documentation for any issues during upgrades:
In a healthy Rook cluster, all pods in the Rook namespace should be in the "},{"location":"Upgrade/health-verification/#status-output","title":"Status Output","text":"The Rook toolbox contains the Ceph tools that gives status details of the cluster with the The output should look similar to the following: In the output above, note the following indications that the cluster is in a healthy state:
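For illustration, a healthy cluster typically shows all mons in quorum, an active mgr, all OSDs up and in, and every PG active+clean; the output below is a made-up example of that shape:

```console
$ ceph status
  cluster:
    id:     a3f0d47c-5f57-4a19-9f3d-6c1e3f9b2b1e
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum a,b,c (age 2h)
    mgr: a(active, since 2h), standbys: b
    osd: 3 osds: 3 up (since 2h), 3 in (since 2h)

  data:
    pools:   1 pools, 128 pgs
    objects: 41 objects, 2.7 KiB
    usage:   3.0 GiB used, 297 GiB / 300 GiB avail
    pgs:     128 active+clean
```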
If the Rook will not upgrade Ceph daemons if the health is in a The container version running in a specific pod in the Rook cluster can be verified in its pod spec output. For example, for the monitor pod The status and container versions for all Rook pods can be collected all at once with the following commands: The "},{"location":"Upgrade/health-verification/#rook-volume-health","title":"Rook Volume Health","text":"Any pod that is using a Rook volume should also remain healthy:
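A hedged way to collect pod phase and container images in one pass, as mentioned above, and to spot application pods that are not Running (the column names are arbitrary):

```console
# Phase and first container image for every pod in the Rook namespace
kubectl -n rook-ceph get pod \
  -o custom-columns='NAME:.metadata.name,PHASE:.status.phase,IMAGE:.spec.containers[0].image'

# Application pods consuming Rook volumes should also be Running;
# anything surfaced here deserves a closer look
kubectl get pods --all-namespaces | grep -vE 'Running|Completed'
```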
This guide will walk through the steps to upgrade the software in a Rook cluster from one version to the next. This guide focuses on updating the Rook version for the management layer, while the Ceph upgrade guide focuses on updating the data layer. Upgrades for both the operator and for Ceph are entirely automated except where Rook's permissions need to be explicitly updated by an admin or when incompatibilities need to be addressed manually due to customizations. We welcome feedback and opening issues! "},{"location":"Upgrade/rook-upgrade/#supported-versions","title":"Supported Versions","text":"This guide is for upgrading from Rook v1.14.x to Rook v1.15.x. Please refer to the upgrade guides from previous releases for supported upgrade paths. Rook upgrades are only supported between official releases. For a guide to upgrade previous versions of Rook, please refer to the version of documentation for those releases.
Important Rook releases from master are expressly unsupported. It is strongly recommended to use official releases of Rook. Unreleased versions from the master branch are subject to changes and incompatibilities that will not be supported in the official releases. Builds from the master branch can have functionality changed or removed at any time without compatibility support and without prior notice. "},{"location":"Upgrade/rook-upgrade/#breaking-changes-in-v116","title":"Breaking changes in v1.16","text":"
With this upgrade guide, there are a few notes to consider:
Unless otherwise noted due to extenuating requirements, upgrades from one patch release of Rook to another are as simple as updating the common resources and the image of the Rook operator. For example, when Rook v1.15.1 is released, the process of updating from v1.15.0 is as simple as running the following: If the Rook Operator or CephCluster are deployed into a different namespace than Then, apply the latest changes from v1.15, and update the Rook Operator image. As exemplified above, it is a good practice to update Rook common resources from the example manifests before any update. The common resources and CRDs might not be updated with every release, but Kubernetes will only apply updates to the ones that changed. Also update optional resources like Prometheus monitoring noted more fully in the upgrade section below. "},{"location":"Upgrade/rook-upgrade/#helm","title":"Helm","text":"If Rook is installed via the Helm chart, Helm will handle some details of the upgrade itself. The upgrade steps in this guide will clarify what Helm handles automatically. The Note Be sure to update to a supported Helm version "},{"location":"Upgrade/rook-upgrade/#cluster-health","title":"Cluster Health","text":"In order to successfully upgrade a Rook cluster, the following prerequisites must be met:
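A hedged way to confirm those prerequisites before starting, assuming the standard toolbox deployment:

```console
kubectl -n rook-ceph get pods                                    # all pods Running or Completed
kubectl -n rook-ceph exec deploy/rook-ceph-tools -- ceph status  # HEALTH_OK, PGs active+clean
```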
The examples given in this guide upgrade a live Rook cluster running Let's get started! "},{"location":"Upgrade/rook-upgrade/#environment","title":"Environment","text":"These instructions will work for as long the environment is parameterized correctly. Set the following environment variables, which will be used throughout this document. "},{"location":"Upgrade/rook-upgrade/#1-update-common-resources-and-crds","title":"1. Update common resources and CRDs","text":"Hint Common resources and CRDs are automatically updated when using Helm charts. First, apply updates to Rook common resources. This includes modified privileges (RBAC) needed by the Operator. Also update the Custom Resource Definitions (CRDs). Get the latest common resources manifests that contain the latest changes. If the Rook Operator or CephCluster are deployed into a different namespace than Apply the resources. "},{"location":"Upgrade/rook-upgrade/#prometheus-updates","title":"Prometheus Updates","text":"If Prometheus monitoring is enabled, follow this step to upgrade the Prometheus RBAC resources as well. "},{"location":"Upgrade/rook-upgrade/#2-update-the-rook-operator","title":"2. Update the Rook Operator","text":"Hint The operator is automatically updated when using Helm charts. The largest portion of the upgrade is triggered when the operator's image is updated to "},{"location":"Upgrade/rook-upgrade/#3-update-ceph-csi","title":"3. Update Ceph CSI","text":"Hint This is automatically updated if custom CSI image versions are not set. Important The minimum supported version of Ceph-CSI is v3.8.0. Update to the latest Ceph-CSI drivers if custom CSI images are specified. See the CSI Custom Images documentation. Note If using snapshots, refer to the Upgrade Snapshot API guide. "},{"location":"Upgrade/rook-upgrade/#4-wait-for-the-upgrade-to-complete","title":"4. Wait for the upgrade to complete","text":"Watch now in amazement as the Ceph mons, mgrs, OSDs, rbd-mirrors, MDSes and RGWs are terminated and replaced with updated versions in sequence. The cluster may be unresponsive very briefly as mons update, and the Ceph Filesystem may fall offline a few times while the MDSes are upgrading. This is normal. The versions of the components can be viewed as they are updated: As an example, this cluster is midway through updating the OSDs. When all deployments report An easy check to see if the upgrade is totally finished is to check that there is only one "},{"location":"Upgrade/rook-upgrade/#5-verify-the-updated-cluster","title":"5. Verify the updated cluster","text":"At this point, the Rook operator should be running version Verify the CephCluster health using the health verification doc. "}]} \ No newline at end of file |