diff --git a/shell/assets/translations/en-us.yaml b/shell/assets/translations/en-us.yaml index b950a4f1489..7aef96c5434 100644 --- a/shell/assets/translations/en-us.yaml +++ b/shell/assets/translations/en-us.yaml @@ -101,7 +101,7 @@ generic: deprecated: Deprecated placeholder: "e.g. {text}" - moreInfo: More Info + moreInfo: More Information selectors: label: Selector matchingResources: @@ -202,7 +202,7 @@ nav: restoreCards: Restore hidden cards userMenu: preferences: Preferences - accountAndKeys: Account & API Keys + accountAndKeys: Account and API Keys logOut: Log Out failWhale: authMiddleware: Auth Middleware @@ -216,7 +216,7 @@ nav: product: apps: Apps - auth: Users & Authentication + auth: Users and Authentication backup: Rancher Backups cis: CIS Benchmark ecm: Cluster Manager @@ -350,10 +350,10 @@ accountAndKeys: accessKey: Access Key secretKey: Secret Key bearerToken: Bearer Token - saveWarning: Save the info above! This is the only time you'll be able to see it. If you lose it, you'll need to create a new API key. - keyCreated: A new API Key has been created - bearerTokenTip: "Access Key and Secret Key can be sent as the username and password for HTTP Basic auth to authorize requests. You can also combine them to use as a Bearer token:" - ttlLimitedWarning: The Expiry time for this API Key was reduced due to system configuration + saveWarning: Save the above information! This is the only time you will be able to see it. If you lose it, you will need to create a new API key. + keyCreated: A new API key has been created + bearerTokenTip: "Access Key and Secret Key can be sent as the username and password for HTTP basic authentication to authorize requests. 
You can also combine them to use as a Bearer token:" + ttlLimitedWarning: The Expiry time for this API key was reduced due to system configuration addClusterMemberDialog: title: Add Cluster Member @@ -368,8 +368,8 @@ addProjectMemberDialog: authConfig: accessMode: label: 'Configure who should be able to login and use {vendor}' - required: Restrict access to only the authorized users & groups - restricted: 'Allow members of clusters and projects, plus authorized users & groups' + required: Restrict access to only the authorized users and groups + restricted: 'Allow members of clusters and projects, plus authorized users and groups' unrestricted: Allow any valid user allowedPrincipalIds: title: Authorized Users & Groups @@ -397,7 +397,7 @@ authConfig: 3:
  • Click the "New OAuth App" button.
  • suffix: 1:
  • Click "Register application"
  • - 2:
  • Copy and paste the Client ID and Client Secret of your newly created OAuth app into the fields below
  • + 2:
  • Copy and paste the client ID and client secret of your newly created OAuth app into the fields below
  • host: label: GitHub Enterprise Host placeholder: e.g. github.mycompany.example @@ -421,7 +421,7 @@ authConfig: 1: title: 'Click here to open applications settings in a new window' body: - 1: Login to your account. Navigate to "APIs & Services" and then select "OAuth consent screen". + 1: Login to your account. Navigate to "APIs and Services" and then select "OAuth consent screen". 2: 'Authorized domains:' 3: 'Application homepage link: ' 4: 'Under Scopes for Google APIs, enable "email", "profile", and "openid".' @@ -430,7 +430,7 @@ authConfig: 2: title: 'Navigate to the "Credentials" tab to create your OAuth client ID' body: - 1: 'Select the "Create Credentials" dropdown, and select "OAuth clientID", then select "Web application".' + 1: 'Select the "Create Credentials" drop-down, and select "OAuth clientID", then select "Web application".' 2: 'Authorized Javascript origins:' 3: 'Authorized redirect URIs:' 4: 'Click "Create", and then click on the "Download JSON" button.' @@ -441,14 +441,14 @@ authConfig: body: 1: Create a service account. 2: Generate a key for the service account. - 3: Add the service account as an OAuth client in your google domain. + 3: Add the service account as an OAuth client in your Google domain. ldap: freeipa: Configure a FreeIPA server activedirectory: Configure an Active Directory account openldap: Configure an OpenLDAP server defaultLoginDomain: label: Default Login Domain - placeholder: eg mycompany + placeholder: eg, mycompany hint: This domain will be used if a user logs in without specifying one. cert: Certificate disabledStatusBitmask: Disabled Status Bitmask @@ -795,7 +795,7 @@ backupRestoreOperator: backupFilename: Backup Filename deleteTimeout: label: Delete Timeout - tip: Seconds to wait for a resource delete to succeed before removing finalizers to force deletion. + tip: Seconds to wait for a resource deletion to succeed before removing finalizers to force deletion. 
deployment: rancherNamespace: Rancher ResourceSet Namespace size: Size @@ -812,14 +812,14 @@ backupRestoreOperator: storageClass: label: Storage Class tip: 'Configure a storage location where all backups are saved by default. You will have the option to override this with each backup, but will be limited to using an S3-compatible object store.' - warning: 'This {type} does not have its reclaim policy set to "Retain". Your backups may be lost if the volume is changed or becomes unbound.' + warning: 'This {type} does not have its reclaim policy set to "Retain". Your backups may be lost if the volume is changed or becomes unbound.' encryption: Encryption encryptionConfigName: backuptip: 'Any secret in the {ns} namespace that has an encryption-provider-config.yaml key.
    The contents of this file are necessary to perform a restore from this backup, and are not stored by Rancher Backup.' label: Encryption Config Secret options: none: Store the contents of the backup unencrypted - secret: 'Encrypt backups using an Encryption Config Secret (Recommended)' + secret: 'Encrypt backups using an Encryption Configuration Secret (Recommended)' restoretip: 'If the backup was performed with encryption enabled, a secret containing the same encryption-provider-config should be used during restore.' warning: 'The contents of this file are necessary to perform a restore from this backup, and are not stored by Rancher Backup.' lastBackup: Last Backup @@ -932,7 +932,7 @@ catalog: install: action: goToUpgrade: Edit/Upgrade - appReadmeMissing: This chart doesn't have any additional chart information. + appReadmeMissing: This chart does not have any additional chart information. appReadmeTitle: Chart Information (Helm README) chart: Chart warning: @@ -948,12 +948,12 @@ catalog: insufficientCpu: 'This chart requires {need, number} CPU cores, but the cluster only has {have, number} available.' insufficientMemory: 'This chart requires {need} of memory, but the cluster only has {have} available.' legacy: - label: This is a {legacyType} App and it cannot be modified here + label: This is a {legacyType} application and it cannot be modified here enableLegacy: - prompt: You will need to enable Legacy Features to edit this App + prompt: You will need to enable Legacy Features to edit this application goto: Go to Feature Flag settings - navigate: Navigate to Legacy Apps - mcmNotSupported: Legacy Multi-cluster Apps can not be managed through this UI + navigate: Navigate to Legacy Applications + mcmNotSupported: Legacy Multi-cluster Applications can not be managed through this UI category: legacy: Legacy mcm: Multi-cluster @@ -965,8 +965,8 @@ catalog: atomic: Atomic description: label: Description - placeholder: e.g. 
Purpose of helm command - cleanupOnFail: Cleanup on Failure + placeholder: e.g. Purpose of Helm command + cleanupOnFail: Clean up on Failure crds: Apply custom resource definitions dryRun: Dry Run force: Force @@ -989,7 +989,7 @@ catalog: } wait: Wait namespaceIsInProject: "This chart's target namespace, {namespace}, already exists and cannot be added to a different project." - project: Install into Project + project: Install Into Project section: chartOptions: Edit Options valuesYaml: Edit YAML @@ -1007,8 +1007,8 @@ catalog: } the {existing, select, true { app} false { chart} - }. Start by setting some basic information used by {vendor} to manage the App. - nsCreationDescription: "To install the app into a new namespace enter it's name in the Namespace field and select it." + }. Start by setting some basic information used by {vendor} to manage the application. + nsCreationDescription: "To install the application into a new namespace, enter the name in the Namespace field and select it." createNamespace: "Namespace {namespace} will be created." clusterTplVersion: label: Version @@ -1016,19 +1016,19 @@ catalog: description: Select a version of the Cluster Template clusterTplValues: label: Values - subtext: Change how the Cluster is defined - description: Configure Values used by Helm that help define the Cluster. + subtext: Change how the cluster is defined + description: Configure Values used by Helm that help define the cluster. helmValues: label: Values - subtext: Change how the App works - description: Configure Values used by Helm that help define the App. + subtext: Change how the application works + description: Configure values used by Helm that help define the application. 
chartInfo: - button: View Chart Info - label: Chart Info + button: View Chart Information + label: Chart Information helmCli: - checkbox: Customize Helm options before install + checkbox: Customize Helm options before installation label: Helm Options - subtext: Change how the app is deployed + subtext: Change how the application is deployed description: Supply additional deployment options version: Version versions: @@ -1050,7 +1050,7 @@ catalog: gitBranch: label: Git Branch placeholder: e.g. master - defaultMessage: 'Will default to "master" if left blank' + defaultMessage: 'The branch will default to "master" if left blank' gitRepo: label: Git Repo URL placeholder: 'e.g. https://github.com/your-company/charts.git' @@ -1227,7 +1227,7 @@ cluster: configuration: Multus agentEnvVars: label: Agent Environment - detail: Add additional environment variables to the agent container. This is most commonly useful for configuring a HTTP proxy. + detail: Add additional environment variables to the agent container. This is most commonly useful for configuring a HTTP proxy. keyLabel: Variable Name cloudProvider: aws: @@ -1252,7 +1252,7 @@ cluster: warning: The cluster needs to have at least one node with each role to be usable. advanced: label: Advanced - detail: Additional control over how the node will be registered. These values will often need to be different for each node registered. + detail: Additional control over how the node will be registered. These values will often need to be different for each node registered. nodeName: Node Name publicIp: Node Public IP privateIp: Node Private IP @@ -1265,14 +1265,14 @@ cluster: windowsDetail: Run this command in PowerShell on each of the existing Windows machines you want to register. Windows nodes can only be workers. windowsNotReady: The cluster must be up and running with Linux etcd, control plane, and worker nodes before the registration command for adding Windows workers will display. 
windowsWarning: Workload pods, including some deployed by Rancher charts, will be scheduled on both Linux and Windows nodes by default. Edit NodeSelector in the chart to direct them to be placed onto a compatible node. - windowsDeprecatedForRKE1: Windows support is being deprecated for RKE1. We suggest migrating to RKE2. + windowsDeprecatedForRKE1: Windows support is being deprecated for RKE1 and RKE1 is soon to be deprecrated. Please migrate to RKE2. insecure: "Insecure: Select this to skip TLS verification if your server has a self-signed certificate." credential: banner: createCredential: |- {length, plural, - =0 {First you'll need to create a credential to talk to the cloud provider} - other {Ok, Let's create a new credential} + =0 {First, you will need to create a credential to talk to the cloud provider} + other {Ok, start to create a new credential} } selectExisting: label: Select Existing @@ -1285,7 +1285,7 @@ cluster: label: Access Key placeholder: Your AWS Access Key defaultRegion: - help: The default region to use when creating clusters. Also contacted to verify that this credential works. + help: The default region to use when creating clusters. Also contacted to verify that this credential works. label: Default Region secretKey: label: Secret Key @@ -1395,7 +1395,7 @@ cluster: volume: Volume imageVolume: Image Volume addVolume: Add Volume - addVMImage: Add VM Image + addVMImage: Add Virtual Machine Image storageClass: Storage Class sshUser: SSH User userData: @@ -1412,9 +1412,9 @@ cluster: tokenExpirationWarning: 'Warning: Harvester Cloud Credentials use an underlying authentication token that may have an expiry time - please see the following knowledge base article for possible implications on management operations.' 
description: label: Cluster Description - placeholder: Any text you want that better describes this cluster + placeholder: Any text to describe this cluster harvester: - importNotice: Import Harvester Clusters via + importNotice: Import Harvester Clusters Via warning: label: This is a Harvester Cluster - enable the Harvester feature flag to manage it state: Warning @@ -1447,11 +1447,11 @@ cluster: sshUser: placeholder: e.g. ubuntu toolTip: SSH user to login with the selected OS image. - haveOneOwner: There must be at least one member with the Owner role. + haveOneOwner: There must be at least one member with the owner role. import: warningBanner: 'You should not import a cluster which has already been connected to another instance of Rancher as it will lead to data corruption.' commandInstructions: 'Run the kubectl command below on an existing Kubernetes cluster running a supported Kubernetes version to import it into {vendor}:' - commandInstructionsInsecure: 'If you get a "certificate signed by unknown authority" error, your {vendor} installation has a self-signed or untrusted SSL certificate. Run the command below instead to bypass the certificate verification:' + commandInstructionsInsecure: 'If you get a "certificate signed by unknown authority" error, your {vendor} installation has a self-signed or untrusted SSL certificate. Run the command below instead to bypass the certificate verification:' clusterRoleBindingInstructions: 'If you get permission errors creating some of the resources, your user may not have the cluster-admin role. Use this command to apply it:' clusterRoleBindingCommand: 'kubectl create clusterrolebinding cluster-admin-binding --clusterrole cluster-admin --user ' explore: Explore @@ -1560,14 +1560,14 @@ cluster: network: Network disks: Disks size: - label: VM Size + label: Virtual Machine Size tooltip: When accelerated networking is enabled, not all sizes are available. 
- supportsAcceleratedNetworking: Sizes that Support Accelerated Networking + supportsAcceleratedNetworking: Sizes That Support Accelerated Networking doesNotSupportAcceleratedNetworking: Sizes Without Accelerated Networking - availabilityWarning: The selected VM size is not available in the selected region. + availabilityWarning: The selected virtual machine size is not available in the selected region. regionDoesNotSupportAzs: Availability zones are not supported in the selected region. Please select a different region or use an availability set instead. - regionSupportsAzsButNotThisSize: The selected region does not support availability zones for the selected VM size. Please select a different region or VM size. - selectedSizeAcceleratedNetworkingWarning: The selected VM size does not support accelerated networking. Please select another VM size or disable accelerated networking. + regionSupportsAzsButNotThisSize: The selected region does not support availability zones for the selected virtual machine size. Please select a different region or virtual machine size. + selectedSizeAcceleratedNetworkingWarning: The selected virtual machine size does not support accelerated networking. Please select another virtual machine size or disable accelerated networking. sshUser: label: SSH Username storageType: @@ -1798,15 +1798,15 @@ cluster: listLabel: Add Argument bannerLabel: 'Note: The last selector that matches wins, and only args from it will be used. Args from other matches above will not be combined together or merged.' 
title: 'For machines with labels matching:' - subTitle: 'Use the Kubelet args:' + subTitle: 'Use the Kubelet arguments:' titleAlt: |- {count, plural, =1 { For all machines, use the Kubelet args: } other { For machines without matched labels below: } } - kubeControllerManagerTitle: Additional Controller Manager Args - kubeApiServerTitle: Additional API Server Args - kubeSchedulerTitle: Additional Scheduler Args + kubeControllerManagerTitle: Additional Controller Manager Arguments + kubeApiServerTitle: Additional API Server Arguments + kubeSchedulerTitle: Additional Scheduler Arguments agentArgs: label: Raise error if kernel parameters are different than the expected kubelet defaults banner: @@ -1852,7 +1852,7 @@ cluster: machinePool: name: label: Pool Name - placeholder: A random one will be generated by default + placeholder: A random name is generated by default nodeTotals: label: controlPlane: '{count} Control Plane' @@ -1869,7 +1869,7 @@ cluster: {count, plural, =0 { A cluster needs at least one etcd node to be usable. } =1 { A cluster with only one etcd node is not fault-tolerant. } - =2 { Clusters should have an odd number of nodes. A cluster with 2 etcd nodes is not fault-tolerant. } + =2 { Clusters should have an odd number of nodes. A cluster with 2 etcd nodes is not fault-tolerant. } =3 {} =4 { Clusters should have an odd number of nodes. } =5 {} @@ -1905,7 +1905,7 @@ cluster: label: Auto Replace toolTip: If greater than 0, nodes that are unreachable for this duration will be automatically deleted and replaced. unit: "Seconds" - managementTimeout: The cluster to become available. It's possible the cluster was created. We suggest checking the clusters page before trying to create another. + managementTimeout: The cluster to become available. It is possible the cluster was created. We suggest checking the clusters page before trying to create another. 
memberRoles: removeMessage: 'Note: Removing a user will not remove their project permissions' addClusterMember: @@ -1983,11 +1983,11 @@ cluster: harvester: '{tag}' providerGroup: create-custom1: Use existing nodes and create a cluster using RKE - create-custom2: Use existing nodes and create a cluster using RKE2/K3s + create-custom2: Use existing nodes and create a cluster using RKE2 or K3s create-kontainer: Create a cluster in a hosted Kubernetes provider register-kontainer: Register an existing cluster in a hosted Kubernetes provider create-rke1: Provision new nodes and create a cluster using RKE - create-rke2: Provision new nodes and create a cluster using RKE2/K3s + create-rke2: Provision new nodes and create a cluster using RKE2 or K3s create-template: Use a Catalog Template to create a cluster register-custom: Import any Kubernetes cluster rke2: @@ -2015,7 +2015,7 @@ cluster: enable: Enable Bandwidth Manager cloudProvider: label: Cloud Provider - header: Cloud Provider Config + header: Cloud Provider Configuration defaultValue: label: Default - RKE2 Embedded security: @@ -2107,7 +2107,7 @@ cluster: servicelb: 'Klipper Service LB' traefik: 'Traefik Ingress' selectCredential: - genericDescription: "{vendor} has no built-in support for this driver. We've taken a guess, but consult the driver's documentation for the fields required for authentication." + genericDescription: "{vendor} has no built-in support for this driver. We've taken a guess, but consult the driver's documentation for the fields required for authentication." 
snapshot: successTitle: Snapshot Started errorTitle: "Error Snapshotting {name}" @@ -2149,7 +2149,7 @@ cluster: v1: RKE1 v2: RKE2/K3s validation: - iamInstanceProfileName: If the Amazon cloud provider is selected the "IAM Instance Profile Name" must be defined for each Machine Pool + iamInstanceProfileName: If the Amazon cloud provider is selected the "IAM Instance Profile Name" must be defined for each machine pool clusterIndexPage: hardwareResourceGauge: @@ -2205,7 +2205,7 @@ configmap: tabs: data: label: Data - protip: Use this area for anything that's UTF-8 text data + protip: Use this area for anything that contains UTF-8 text data binaryData: label: Binary Data @@ -2372,7 +2372,7 @@ fleet: notReady: Not Ready waitApplied: Wait Applied gitRepo: - createLocalBanner: When deploying a Git Repo to the Local workspace you are unable to target any specific Cluster or Cluster Groups + createLocalBanner: When deploying a Git Repo to the local workspace you are unable to target any specific cluster or cluster groups tabs: resources: Resources unready: Non-Ready @@ -2387,7 +2387,7 @@ fleet: label: Paths placeholder: e.g. /directory/in/your/repo addLabel: Add Path - empty: The root of the repo is used by default. To use one or more different directories, add them here. + empty: The root of the repo is used by default. To use one or more different directories, add it here. repo: label: Repository URL placeholder: e.g. https://github.com/rancher/fleet-examples.git or git@github.com:rancher/fleet-examples.git @@ -2476,7 +2476,7 @@ fleet: workspaces: tabs: restrictions: Allowed Target Namespaces - timeout: Workspace creation timeout. It's possible the workspace was created. We suggest checking the workspace page before trying to create another. + timeout: Workspace creation timeout. It is possible the workspace was created. We suggest checking the workspace page before trying to create another. 
restrictions: addTitle: 'allowedTargetNamespaces' addLabel: Add @@ -2680,10 +2680,10 @@ hpa: cpu: CPU memory: Memory warnings: - custom: In order to use custom metrics with HPA, you need to deploy the custom metrics server such as prometheus adapter. - external: In order to use external metrics with HPA, you need to deploy the external metrics server such as prometheus adapter. + custom: In order to use custom metrics with HPA, you need to deploy the custom metrics server such as Prometheus adapter. + external: In order to use external metrics with HPA, you need to deploy the external metrics server such as Prometheus adapter. noMetric: In order to use resource metrics with HPA, you need to deploy the metrics server. - resource: The selected target reference does not have the correct resource requests on the spec. Without this the HPA metric will have no effect. + resource: The selected target reference does not have the correct resource requests on the spec. Without this, the HPA metric will have no effect. workloadTab: current: Current Replicas last: Last Scale Time @@ -2702,7 +2702,7 @@ import: } ingress: - description: Ingresses route incoming traffic from the internet to Services within the cluster based on the hostname and path specified in the request. You can expose multiple Services on the same external IP address and port. + description: Ingresses route incoming traffic from the internet to services within the cluster based on the hostname and path specified in the request. You can expose multiple services on the same external IP address and port. certificates: addCertificate: Add Certificate addHost: Add Host @@ -2729,7 +2729,7 @@ ingress: targetService: label: Target Service doesntExist: The selected service does not exist - required: Target Service is required + required: Target service is required warning: "Warning: Default backend is used globally for the entire cluster." 
ingressClass: label: Ingress Class @@ -2755,7 +2755,7 @@ ingress: placeholder: e.g. example.com target: label: Target Service - tooltip: If none of the Services in this dropdown select the Pods that you need to expose, you will need to create a Service that selects those Pods first. + tooltip: If none of the services in this drop-down select the pods that you need to expose, you will need to create a service that selects those pods first. doesntExist: The selected service does not exist title: Rules rulesAndCertificates: @@ -2763,7 +2763,7 @@ ingress: defaultCertificate: default target: default: Default - rulesOrBackendSpecified: Either Default Backend or Rules must be specified + rulesOrBackendSpecified: Either the default backend or rules must be specified internalExternalIP: none: None @@ -2775,7 +2775,7 @@ istio: description: 'Visualization of services within a service mesh and how they are connected. For Kiali to display data, you need Prometheus installed. If you need a monitoring solution, install {vendor} monitoring.' jaeger: label: Jaeger - description: Monitor and Troubleshoot microservices-based distributed systems. + description: Monitor and troubleshoot microservices-based distributed systems. disabled: '{app} is not installed' cni: Enable CNI customOverlayFile: @@ -2867,7 +2867,7 @@ istio: help: Maximum number of HTTP1 /TCP connections to a destination host. outlierDetection: label: Outlier Detection - detail: Configure eviction of unhealthy hosts from the load balancing pool + detail: Configure eviction of unhealthy hosts from the load balancing pool. baseEjectionTime: label: Base Ejection Time placeholder: e.g. 30s @@ -2892,7 +2892,7 @@ istio: name: label: Name placeholder: e.g. v1 - error: Subset Name is required. + error: Subset name is required. labels: error: Please input at least one label for subset. tls: @@ -2913,11 +2913,11 @@ istio: clientCertificate: label: Client Certificate placeholder: e.g. 
/etc/certs/myclientcert.pem - error: Client Certificate is required. + error: Client certificate is required. privateKey: label: Private Key placeholder: e.g. /etc/certs/client_private_key.pem - error: Private Key is required. + error: Private key is required. caCertificates: label: CA Certificates placeholder: e.g. /etc/certs/rootcacerts.pem @@ -2962,7 +2962,7 @@ jwt: labels: addLabel: Add Label - addSetLabel: Add/Set Label + addSetLabel: Add or Set Label addTag: Add Tag addTaint: Add Taint addAnnotation: Add Annotation @@ -3023,7 +3023,7 @@ logging: noOutputsBanner: There are no cluster outputs in the selected namespace. flow: clusterOutputs: - doesntExistTooltip: This cluster output doesn't exist + doesntExistTooltip: This cluster output does not exist label: Cluster Outputs matches: banner: Configure which container logs will be pulled from @@ -3058,7 +3058,7 @@ logging: filters: label: Filters outputs: - doesntExistTooltip: This output doesn't exist + doesntExistTooltip: This output does not exist sameNamespaceError: Output must reside in same namespace as the flow. 
label: Outputs install: @@ -3172,7 +3172,7 @@ logging: overwriteExistingPath: Overwrite Existing Path kinesisStream: streamName: Stream Name - keyId: Key Id from Secret + keyId: Key ID from Secret secretKey: Secret Key from Secret logdna: apiKey: API Key @@ -3181,7 +3181,7 @@ logging: logz: url: URL port: Port - token: Api Token from Secret + token: API Token from Secret enableCompression: Enable Compression newrelic: apiKey: API Key from Secret @@ -3212,7 +3212,7 @@ logging: timekeyWait: Timekey Wait timekeyUseUTC: Timekey Use UTC s3: - keyId: Key Id from Secret + keyId: Key ID from Secret secretKey: Secret Key from Secret endpoint: Endpoint bucket: Bucket @@ -3348,7 +3348,7 @@ members: clusterPermissions: noDescription: User created - no description label: Cluster Permissions - description: Controls what access users have to the Cluster + description: Controls what access users have to the cluster createProjects: Create Projects manageClusterBackups: Manage Cluster Backups manageClusterCatalogs: Manage Cluster Catalogs @@ -3362,10 +3362,10 @@ members: viewNodes: View Nodes owner: label: Owner - description: Owners have full control over the Cluster and all resources inside it. + description: Owners have full control over the cluster and all the resources inside it. member: label: Member - description: Members can manage the resources inside the Cluster but not change the Cluster itself. + description: Members can manage the resources inside the cluster but not change the cluster itself. custom: label: Custom description: Choose individual roles for this user. @@ -3385,25 +3385,25 @@ monitoring: readOnlyMany: ReadOnlyMany aggregateDefaultRoles: label: Aggregate to Default Kubernetes Roles - tip: 'Adds labels to the ClusterRoles deployed by the Monitoring chart to aggregate to the corresponding default k8s admin, edit, and view ClusterRoles.' 
+ tip: 'Adds labels to the ClusterRoles deployed by the monitoring chart to aggregate to the corresponding default Kubernetes administrator, edit, and view ClusterRoles.' alerting: config: - label: Alert Manager Config + label: Alert Manager Configuration enable: label: Deploy Alertmanager secrets: additional: info: Secrets should be mounted at
    /etc/alertmanager/secrets/
    label: Additional Secrets - existing: Choose an existing config secret + existing: Choose an existing configuration secret info: | Create default config:
    A Secret containing your Alertmanager Config will be created in the
    cattle-monitoring-system
    namespace on deploying this chart under the name
    alertmanager-rancher-monitoring-alertmanager
    . By default, this Secret will never be modified on an uninstall or upgrade of this chart.

    Once you have deployed this chart, you should edit the Secret via the UI in order to add your custom notification configurations that will be used by Alertmanager to send alerts.

    Choose an existing config secret:
    You must specify a Secret that exists in the
    cattle-monitoring-system
    namespace. If the namespace does not exist, you will not be able to select an existing secret. label: Alertmanager Secret - new: Create default config + new: Create default configuration radio: - label: Config Secret + label: Configuration Secret validation: duplicatedReceiverName: A receiver with the name {name} already exists. templates: @@ -3476,7 +3476,7 @@ monitoring: adminApi: Admin API evaluation: Evaluation Interval ignoreNamespaceSelectors: - help: 'Ignoring Namespace Selectors allows Cluster Admins to limit teams from monitoring resources outside of namespaces they have permissions to but can break the functionality of Apps that rely on setting up Monitors that scrape targets across multiple namespaces, such as Istio.' + help: 'Ignoring Namespace Selectors allows cluster admins to limit teams from monitoring resources outside of namespaces they have permissions to but can break the functionality of applications that rely on setting up monitors that scrape targets across multiple namespaces, such as Istio.' label: Namespace Selectors radio: enforced: 'Use: Monitors can access resources based on namespaces that match the namespace selector field' @@ -3496,13 +3496,13 @@ monitoring: label: Persistent Storage for Prometheus mode: Access Mode selector: Selector - selectorWarning: 'If you are using a dynamic provisioner (e.g. Longhorn), no Selectors should be specified since a PVC with a non-empty selector can''t have a PV dynamically provisioned for it.' + selectorWarning: 'If you are using a dynamic provisioner (e.g. Longhorn), no selectors should be specified since a PVC with a non-empty selector cannot have a PV dynamically provisioned for it.' size: Size volumeName: Volume Name title: Configure Prometheus warningInstalled: | Warning: Prometheus Operators are currently deployed. Deploying multiple Prometheus Operators onto one cluster is not currently supported. 
Please remove all other Prometheus Operator deployments from this cluster before trying to install this chart. - If you are migrating from an older version of {vendor} with Monitoring enabled, please disable Monitoring on this cluster completely before attempting to install this chart. + If you are migrating from an older version of {vendor} with monitoring enabled, please disable monitoring on this cluster completely before attempting to install this chart. receiver: addReceiver: Add Receiver fields: @@ -3521,13 +3521,13 @@ monitoring: secretsBanner: The file paths below must be referenced in
    alertmanager.alertmanagerSpec.secrets
    when deploying the Monitoring chart. For more information see our documentation. projectMonitoring: detail: - error: "Unable to fetch Dashboard values with status: " + error: "Unable to fetch dashboard values with status: " list: - banner: Project Monitoring Configuration is stored in ProjectHelmChart resources + banner: Project monitoring configuration is stored in ProjectHelmChart resources empty: - message: Project Monitoring has not been configured for any projects - canCreate: Get started by clicking Create to add monitoring to a project - cannotCreate: Contact the admin to add project monitoring + message: Project monitoring has not been configured for any projects + canCreate: Get started by clicking Create to add monitoring to a project + cannotCreate: Contact the administrator to add project monitoring route: label: Route fields: @@ -3542,9 +3542,9 @@ monitoring: alertmanagerConfig: description: Routes and receivers for project alerting and cluster alerting are configured within AlertmanagerConfig resources. empty: Alerts have not been configured for any accessible namespaces. - getStarted: Get started by clicking Create to configure an alert. + getStarted: Get started by clicking Create to configure an alert. receiverTooltip: This route will direct alerts to the selected receiver, which must be defined in the same AlertmanagerConfig. - deprecationWarning: The Route and Receiver resources are deprecated. Going forward, routes and receivers should not be managed as separate Kubernetes resources on this page. They should be configured as YAML fields in an AlertmanagerConfig resource. + deprecationWarning: The Route and Receiver resources are deprecated. Going forward, routes and receivers should not be managed as separate Kubernetes resources on this page. They should be configured as YAML fields in an AlertmanagerConfig resource. routeInfo: This form supports configuring one route that directs traffic to a receiver.
Alerts can be directed to more receiver(s) by configuring child routes in YAML. receiverFormNames: create: Create Receiver in AlertmanagerConfig @@ -3577,34 +3577,34 @@ monitoring: grafana: Grafana prometheus: Prometheus projectMetrics: Project Metrics - v1Warning: 'Monitoring is currently deployed from Cluster Manager. If you are migrating from an older version of {vendor} with monitoring enabled, please disable monitoring in Cluster Manager before attempting to install the new {vendor} Monitoring chart in Cluster Explorer.' + v1Warning: 'Monitoring is currently deployed from Cluster Manager. If you are migrating from an older version of {vendor} with monitoring enabled, please disable monitoring in Cluster Manager before attempting to install the new {vendor} monitoring chart in Cluster Explorer.' monitoringReceiver: addButton: Add {type} custom: label: Custom - title: Custom Config - info: The YAML provided here will be directly appended to your receiver within the Alertmanager Config Secret. + title: Custom Configuration + info: The YAML provided here will be directly appended to your receiver within the Alertmanager configuration secret. email: label: Email - title: Email Config + title: Email Configuration opsgenie: label: Opsgenie - title: Opsgenie Config + title: Opsgenie Configuration pagerduty: label: PagerDuty - title: PagerDuty Config + title: PagerDuty Configuration info: "You can find additional info on creating an Integration Key for PagerDuty here." slack: label: Slack - title: Slack Config + title: Slack Configuration info: "You can find additional info on creating Incoming Webhooks for Slack here ." webhook: label: Webhook - title: Webhook Config - urlTooltip: For some webhooks this a url that points to the service DNS - modifyNamespace: If
    rancher-alerting-drivers
    default values were changed, please update the url below in the format http://<new_service_name>.<new_namespace>.svc.<port>/<path> - banner: To use MS Teams or SMS you will need to have at least one instance of
    rancher-alerting-drivers
    installed first. + title: Webhook Configuration + urlTooltip: For some webhooks this is a URL that points to the service DNS + modifyNamespace: If
    rancher-alerting-drivers
    default values were changed, please update the URL below in the format http://<new_service_name>.<new_namespace>.svc.<port>/<path> + banner: To use MS Teams or SMS, you will need to have at least one instance of
    rancher-alerting-drivers
    installed first. add: selectWebhookType: Select Webhook Type generic: Generic @@ -3639,7 +3639,7 @@ monitoringReceiver: label: Enable send resolved alerts alertmanagerConfigReceiver: - secretKeyId: Key Id from Secret + secretKeyId: Key ID from Secret name: Receiver Name addButton: Add Receiver receivers: Receivers @@ -3653,7 +3653,7 @@ monitoringRoute: label: Group By addGroupByLabel: Labels to Group Alerts By groupByTooltip: Add each label as a string in the format key:value. The special label ... will aggregate by all possible labels. If provided, the ... must be the only element in the list. - info: This is the top-level Route used by Alertmanager as the default destination for any Alerts that do not match any other Routes. This Route must exist and cannot be deleted. + info: This is the top-level route used by Alertmanager as the default destination for any alerts that do not match any other routes. This route must exist and cannot be deleted. interval: label: Group Interval matching: @@ -3805,7 +3805,7 @@ networkpolicy: ruleHint: Incoming traffic is only allowed from the configured sources portHint: Incoming traffic is only allowed to connect to the configured ports labelsAnnotations: - label: Labels & Annotations + label: Labels and Annotations rules: pod: Pod namespace: Namespace @@ -3836,12 +3836,12 @@ networkpolicy: namespaceSelector: label: Namespace Selector namespaceAndPodSelector: - label: Namespace/Pod Selector + label: Namespace and Pod Selector config: label: Configuration selectors: label: Selectors - hint: The NetworkPolicy is applied to the selected Pods + hint: The NetworkPolicy is applied to the selected pods matchingPods: matchesSome: |- {matched, plural, @@ -3893,8 +3893,8 @@ node: used: Used amount: "{used} of {total} {unit}" cpu: CPU - memory: MEMORY - pods: PODS + memory: Memory + pods: Pods diskPressure: Disk Pressure kubelet: kubelet memoryPressure: Memory Pressure @@ -4107,7 +4107,7 @@ persistentVolume: portals: add: Add Portal 
cinder: - label: Openstack Cinder Volume (Unsupported) + label: OpenStack Cinder Volume (Unsupported) volumeId: label: Volume ID placeholder: e.g. vol @@ -4192,7 +4192,7 @@ persistentVolume: label: Path on the Node placeholder: /mnt/disks/ssd1 mustBe: - label: The Path on the Node must be + label: The path on the node must be anything: 'Anything: do not check the target path' directory: A directory, or create if it does not exist file: A file, or create if it does not exist @@ -4255,8 +4255,8 @@ persistentVolumeClaim: source: label: Source options: - new: Use a Storage Class to provision a new Persistent Volume - existing: Use an existing Persistent Volume + new: Use a storage class to provision a new persistent volume + existing: Use an existing persistent volume expand: label: Expand notSupported: Storage class does not support volume expansion @@ -4267,8 +4267,8 @@ persistentVolumeClaim: requestStorage: Request Storage persistentVolume: Persistent Volume tooltips: - noStorageClass: You don't have permission to list Storage Classes, enter a name manually - noPersistentVolume: You don't have permission to list Persistent Volumes, enter a name manually + noStorageClass: You do not have permission to list storage classes; enter a name manually + noPersistentVolume: You do not have permission to list persistent volumes; enter a name manually customize: label: Customize accessModes: @@ -4312,10 +4312,10 @@ plugins: installing: Installing ... uninstalling: Uninstalling ...
descriptions: - experimental: This Extension is marked as experimental - third-party: This Extension is provided by a Third-Party - built-in: This Extension is built-in - image: This Extension Image has been loaded manually + experimental: This extension is marked as experimental + third-party: This extension is provided by a third party + built-in: This extension is built-in + image: This extension image has been loaded manually error: title: Error loading extension message: Could not load extension code @@ -4349,10 +4349,10 @@ plugins: requiresHost: 'Requires a host that matches "{mainHost}"' empty: all: Extensions are neither installed nor available - available: No Extensions available - installed: No Extensions installed - updates: No updates available for installed Extensions - images: No Extension Images installed + available: No extensions available + installed: No extensions installed + updates: No updates available for installed extensions + images: No extension images installed loadError: An error occurred loading the code for this extension helmError: "An error occurred installing the extension via Helm" manageRepos: Manage Repositories @@ -4392,7 +4392,7 @@ plugins: message: A repository with the name {repo} already exists success: title: "Imported Extension Catalog from: {name}" - message: Extension Catalog image was imported successfully + message: Extension catalog image was imported successfully headers: image: name: images @@ -4410,17 +4410,17 @@ plugins: install: label: Install title: Install Extension {name} - prompt: "Are you sure that you want to install this Extension?" + prompt: "Are you sure that you want to install this extension?"
version: Version - warnNotCertified: Please ensure that you are aware of the risks of installing Extensions from untrusted authors + warnNotCertified: Please ensure that you are aware of the risks of installing extensions from untrusted authors update: label: Update title: Update Extension {name} - prompt: "Are you sure that you want to update this Extension?" + prompt: "Are you sure that you want to update this extension?" rollback: label: Rollback title: Rollback Extension {name} - prompt: "Are you sure that you want to rollback this Extension?" + prompt: "Are you sure that you want to roll back this extension?" uninstall: label: Uninstall title: "Uninstall Extension: {name}" @@ -4456,7 +4456,7 @@ plugins: remove: label: Disable Extension Support title: Disable Extension Support? - prompt: This will un-install the Helm charts that enable Extension support + prompt: This will uninstall the Helm charts that enable extension support registry: official: title: Remove the Official Rancher Extensions Repository @@ -4486,7 +4486,7 @@ podSecurityAdmission: placeholder: 'Version (default: latest)' exemptions: title: Exemptions - description: Allow the creation of pods for specific Usernames, RuntimeClassNames, and Namespaces that would otherwise be prohibited due to the policies set above. + description: Allow the creation of pods for specific usernames, RuntimeClassNames, and namespaces that would otherwise be prohibited due to the policies set above. placeholder: Enter a comma separated list of {psaExemptionsControl} prefs: title: Preferences @@ -4582,7 +4582,7 @@ project: members: label: Members containerDefaultResourceLimit: Container Default Resource Limit - vmDefaultResourceLimit: VM Default Resource Limit + vmDefaultResourceLimit: Virtual Machine Default Resource Limit resourceQuotas: Resource Quotas haveOneOwner: There must be at least one member with the Owner role.
@@ -4592,23 +4592,23 @@ projectMembers: label: Project projectPermissions: label: Project Permissions - description: Controls what access users have to the Project + description: Controls what access users have to the project noDescription: User created - no description searchForMember: Search for a member to provide project access owner: label: Owner - description: Owners have full control over the Project and all resources inside it. + description: Owners have full control over the project and all resources inside it. member: label: Member - description: Members can manage the resources inside the Project but not change the Project itself. + description: Members can manage the resources inside the project but not change the project itself. readOnly: label: Read Only - description: Members can only view the resources inside the Project but not change the resources. + description: Members can only view the resources inside the project but not change the resources. custom: label: Custom description: Choose individual roles for this user. createNs: Create Namespaces - configmapsManage: Manage Config Maps + configmapsManage: Manage ConfigMaps ingressManage: Manage Ingress projectcatalogsManage: Manage Project Catalogs projectroletemplatebindingsManage: Manage Project Members @@ -4653,7 +4653,7 @@ prometheusRule: summary: input: Summary Annotation Value label: Summary - bannerText: 'When firing alerts, the annotations and labels will be passed to the configured AlertManagers to allow them to construct the notification that will be sent to any configured Receivers.' + bannerText: 'When firing alerts, the annotations and labels will be passed to the configured AlertManagers to allow them to construct the notification that will be sent to any configured receivers.' for: label: Wait to fire for placeholder: '60' @@ -4692,14 +4692,14 @@ prometheusRule: promptForceRemove: modalTitle: Are you sure?
- removeWarning: "There was an issue with deleting underlying infrastructure. If you proceed with this action, the Machine {nameToMatch} will be deleted from Rancher only. It's highly recommended to manually delete any referenced infrastructure." + removeWarning: "There was an issue with deleting underlying infrastructure. If you proceed with this action, the Machine {nameToMatch} will be deleted from Rancher only. We recommend manually deleting any referenced infrastructure." forceDelete: Force Delete confirmName: "Enter in the pool name below to confirm:" podRemoveWarning: "Force deleting pods does not wait for confirmation that the pod's processes have been terminated. This may result in data corruption or inconsistencies" promptScaleMachineDown: attemptingToRemove: "You are attempting to delete {count} {type}" - retainedMachine1: At least one Machine must exist for roles Control Plane and Etcd. + retainedMachine1: At least one machine must exist for roles control plane and etcd. retainedMachine2: { name } will remain promptSlo: @@ -4720,7 +4720,7 @@ promptRemove: other { and {count} others.} } attemptingToRemove: "You are attempting to delete the {type}" - attemptingToRemoveAuthConfig: "You are attempting to disable this Auth Provider.

    Be aware that cluster role template bindings, project role template bindings, global role bindings, users, tokens will be all deleted.

    Are you sure you want to proceed?" + attemptingToRemoveAuthConfig: "You are attempting to disable this authentication provider.

    Be aware that cluster role template bindings, project role template bindings, global role bindings, users, and tokens will all be deleted.

    Are you sure you want to proceed?" protip: "Tip: Hold the {alternateLabel} key while clicking delete to bypass this confirmation" confirmName: "Enter {nameToMatch} below to confirm:" deleteAssociatedNamespaces: "Also delete the namespaces in this project:" @@ -4762,7 +4762,7 @@ promptSaveAsRKETemplate: promptRotateEncryptionKey: title: Rotate Encryption Keys description: The last backup {name} was performed on {date} - warning: Before proceeding, ensure a successful ETCD backup of the cluster has been completed. + warning: Before proceeding, ensure a successful etcd backup of the cluster has been completed. error: No backup found rancherAlertingDrivers: @@ -4868,7 +4868,7 @@ rbac: deprecation: 'Warning: The Restricted Administrator role has been deprecated as of Rancher 2.8.0 and will be removed in a future release - Check out the Release Notes' user: label: Standard User - description: Standard Users can create new clusters and manage clusters and projects they have been granted access to. + description: Standard users can create new clusters and manage clusters and projects they have been granted access to. user-base: label: User-Base description: User-Base users have login-access only. @@ -4880,10 +4880,10 @@ rbac: description: Allows the user to create new RKE cluster templates and become the owner of them. authn-manage: label: Configure Authentication - description: Allows the user to enable, configure, and disable all Authentication provider settings. + description: Allows the user to enable, configure, and disable all authentication provider settings. catalogs-manage: label: Legacy Configure Catalogs - description: Allows the user to add, edit, and remove management.cattle.io based catalogs resources. + description: Allows the user to add, edit, and remove management.cattle.io-based catalog resources. clusters-manage: label: Manage all Clusters description: Allows the user to manage all clusters, including ones they are not a member of. 
@@ -4898,28 +4898,28 @@ rbac: description: Allows the user to enable and disable custom features via feature flag settings. nodedrivers-manage: label: Configure Node Drivers - description: Allows the user to enable, configure, and remove all Node Driver settings. + description: Allows the user to enable, configure, and remove all node driver settings. nodetemplates-manage: label: Manage Node Templates description: Allows the user to define, edit, and remove Node Templates. roles-manage: label: Manage Roles - description: Allows the user to define, edit, and remove Role definitions. + description: Allows the user to define, edit, and remove role definitions. settings-manage: label: Manage Settings description: 'Allows the user to manage {vendor} Settings.' users-manage: label: Manage Users - description: Allows the user to create, remove, and set passwords for all Users. + description: Allows the user to create, remove, and set passwords for all users. catalogs-use: label: Use Catalogs - description: Allows the user to see and deploy Templates from the Catalog. Standard Users have this permission by default. + description: Allows the user to see and deploy templates from the catalog. Standard users have this permission by default. nodetemplates-use: label: Use Node Templates - description: Allows the user to deploy new Nodes using any existing Node Templates. + description: Allows the user to deploy new nodes using any existing node templates. view-rancher-metrics: label: 'View {vendor} Metrics' - description: Allows the user to view Metrics through the API. + description: Allows the user to view metrics through the API. base: label: Login Access clustertemplaterevisions-create: @@ -4960,8 +4960,8 @@ resourceDetail: age: Age restartCount: Pod Restarts defaultBannerMessage: - error: This resource is currently in an error state, but there isn't a detailed message available. 
- transitioning: This resource is currently in a transitioning state, but there isn't a detailed message available. + error: This resource is currently in an error state, but a detailed message is not available. + transitioning: This resource is currently in a transitioning state, but a detailed message is not available. sensitive: hide: Hide Sensitive Values show: Show Sensitive Values @@ -4975,7 +4975,7 @@ resourceDetail: managedWarning: |- This {type} is managed by {hasName, select, no {a {managedBy} app} - yes {the {managedBy} app {appName}}}; changes made here will likely be overwritten the next time {managedBy} runs. + yes {the {managedBy} app {appName}}}; changes made here can be overwritten the next time {managedBy} runs. resourceList: head: create: Create @@ -5023,7 +5023,7 @@ resourceTabs: resourceYaml: errors: - namespaceRequired: This resource is namespaced, so a namespace must be provided. + namespaceRequired: This resource is namespaced; a namespace must be provided. buttons: continue: Continue Editing edit: Edit YAML @@ -5106,12 +5106,12 @@ secret: relatedWorkloads: Related Workloads typeDescriptions: custom: - description: Create a Secret with a custom type + description: Create a secret with a custom type 'kubernetes.io/basic-auth': description: 'Authentication with a username and password' docLink: https://kubernetes.io/docs/concepts/configuration/secret/#basic-authentication-secret 'Opaque': - description: Default type of Secret using key-value pairs + description: Default type of secret using key-value pairs docLink: https://kubernetes.io/docs/concepts/configuration/secret/#opaque-secrets 'kubernetes.io/dockerconfigjson': description: Authenticated registry for pulling container images @@ -5180,9 +5180,9 @@ serviceTypes: nodeport: Node Port servicesPage: - serviceListDescription: Services allow you to define a logical set of Pods that can be accessed with a single IP address and port.
- targetPorts: The Service will send requests to this port, and the selected Pods are expected to listen on this port. - listeningPorts: The Service is exposed on this port. + serviceListDescription: Services allow you to define a logical set of pods that can be accessed with a single IP address and port. + targetPorts: The service will send requests to this port, and the selected pods are expected to listen on this port. + listeningPorts: The service is exposed on this port. anyNode: Any Node labelsAnnotations: label: Labels & Annotations @@ -5197,7 +5197,7 @@ servicesPage: placeholder: e.g. 10800 externalName: define: External Name - helpText: "External Name is intended to specify a canonical DNS name. This is a required field. To hardcode an IP address, use a Headless service." + helpText: "External name is intended to specify a canonical DNS name. This is a required field. To hardcode an IP address, use a headless service." label: External Name placeholder: e.g. my.database.example.com input: @@ -5248,7 +5248,7 @@ servicesPage: serviceTypes: clusterIp: abbrv: IP - description: Expose a set of Pods to other Pods within the cluster. This type of Service is only reachable from within the cluster. This is the default type. + description: Expose a set of pods to other pods within the cluster. This type of service is only reachable from within the cluster. This is the default type. label: Cluster IP externalName: abbrv: EN @@ -5273,7 +5273,7 @@ setup: currentPassword: Bootstrap Password confirmPassword: Confirm New Password defaultPassword: - intro: It looks like this is your first time visiting {vendor}; if you pre-set your own bootstrap password, enter it here. Otherwise a random one has been generated for you. To find it:

    + intro: It looks like this is your first time visiting {vendor}; if you have pre-set your own bootstrap password, enter it here. Otherwise a random one has been generated for you. To find it:

    dockerPrefix: 'For a "docker run" installation:' dockerPs: 'Find your container ID with docker ps, then run:' dockerSuffix: "" @@ -5633,7 +5633,7 @@ tableHeaders: apiGroup: API Groups apikey: API Key available: Available - attachedVM: Attached VM + attachedVM: Attached Virtual Machine authRoles: globalDefault: New User Default @@ -5736,7 +5736,7 @@ tableHeaders: namespaceName: Name namespaceNameUnlinked: Name networkType: Type - networkVlan: Vlan ID + networkVlan: VLAN ID node: Node nodeName: Node Name nodesReady: Nodes Ready @@ -5950,7 +5950,7 @@ validation: name: Cluster name cannot be 'local' or take the form 'c-xxxxx' conflict: |- This resource has been modified since you started editing it, and some of those modifications conflict with your changes. - This screen has been updated to reflect the current values on the cluster. Review and reapply the changes you wanted to make, then Save again. + This screen has been updated to reflect the current values on the cluster. Review and reapply the changes you wanted to make, then save again. Conflicting {fieldCount, plural, =1 {field} other {fields}}: {fields} custom: missing: 'No validator exists for { validatorName }! Does the validator exist in custom-validators? Is the name spelled correctly?' @@ -5979,7 +5979,7 @@ validation: global: Requires "Cluster Output" to be selected. output: logdna: - apiKey: Required an "Api Key" to be set. + apiKey: Requires an "API Key" to be set. invalidCron: Invalid cron schedule invalidCidr: "Invalid CIDR" invalidIP: "Invalid IP" @@ -6021,21 +6021,22 @@ validation: port: A port must be a number between 1 and 65535. path: '"{key}" must be an absolute path' prometheusRule: - noEdit: This Prometheus Rule may not be edited due to invalid characters in name. + noEdit: This Prometheus rule may not be edited due to invalid characters in its name. groups: required: At least one rule group is required. singleAlert: A rule may contain alert rules or recording rules but not both.
valid: name: 'Name is required for rule group {index}.' rule: - alertName: 'Rule group {groupIndex} rule {ruleIndex} requires a Alert Name.' - expr: 'Rule group {groupIndex} rule {ruleIndex} requires a PromQL Expression.' + alertName: 'Rule group {groupIndex} rule {ruleIndex} requires an alert name.' + expr: 'Rule group {groupIndex} rule {ruleIndex} requires a PromQL expression.' labels: 'Rule group {groupIndex} rule {ruleIndex} requires at least one label. Severity is recommended.' - recordName: 'Rule group {groupIndex} rule {ruleIndex} requires a Time Series Name.' + recordName: 'Rule group {groupIndex} rule {ruleIndex} requires a time series name.' singleEntry: 'At least one alert rule or one recording rule is required in rule group {index}.' required: '"{key}" is required' invalid: '"{key}" is invalid' requiredOrOverride: '"{key}" is required or must allow override' + arrayCountRequired: "At least {count} {key} {count, plural, =1 {is} other {are}} required, and {key} cannot be empty." roleTemplate: roleTemplateRules: missingVerb: You must specify at least one verb for each resource grant @@ -6045,7 +6046,7 @@ validation: noResourceAndNonResource: Each rule may contain Resources or Non-Resource URLs but not both service: externalName: - none: External Name is required on an ExternalName Service. + none: External name is required on an ExternalName service. ports: name: required: 'Port Rule [{position}] - Name is required.' @@ -6076,7 +6077,7 @@ validation: missingProjectId: A target must have a project selected. monitoring: route: - match: At least one Match or Match Regex must be selected + match: At least one match or match regex must be selected interval: '"{key}" must be of a format with digits followed by a unit i.e. 1h, 2m, 30s' tab: "One or more fields in this tab contain a form validation error" @@ -6169,9 +6170,9 @@ workload: initialDelay: Initial Delay livenessProbe: Liveness Check livenessTip: Containers will be restarted when this check is failing.
Not recommended for most uses. - noHealthCheck: "There is not a Readiness Check, Liveness Check or Startup Check configured." + noHealthCheck: "There is no readiness check, liveness check, or startup check configured." readinessProbe: Readiness Checks - readinessTip: Containers will be removed from service endpoints when this check is failing. Recommended. + readinessTip: Containers will be removed from service endpoints when this check is failing. Recommended. startupProbe: Startup Check startupTip: Containers will wait until this check succeeds before attempting other health checks. successThreshold: Success Threshold @@ -6239,9 +6240,9 @@ workload: noServiceAccess: You do not have permission to create or manage services ports: expose: Networking - description: 'Define a Service to expose the container, or define a non-functional, named port so that humans will know where the app within the container is expected to run.' - detailedDescription: If ClusterIP, LoadBalancer, or NodePort is selected, a Service is automatically created that will select the Pods in this workload using labels. - toolTip: 'For help exposing workloads on Kubernetes, see the official Kubernetes documentation on Services. You can also manually create a Service to expose Pods by selecting their labels, and you can use an Ingress to map HTTP routes to Services.' + description: 'Define a service to expose the container, or define a non-functional, named port so that other users will know where the application within the container is expected to run.' + detailedDescription: If ClusterIP, LoadBalancer, or NodePort is selected, a service is automatically created that will select the pods in this workload using labels. + toolTip: 'For help exposing workloads on Kubernetes, see the official Kubernetes documentation on services. You can also manually create a service to expose pods by selecting their labels, and you can use an ingress to map HTTP routes to services.'
createService: Service Type noCreateService: Do not create a service containerPort: Private Container Port @@ -6315,13 +6316,13 @@ workload: detail: services: Services ingresses: Ingresses - cannotViewServices: Could not list Services due to lack of permission. - cannotFindServices: Could not find any Services that select Pods from this workload. - serviceListCaption: "The following Services select Pods from this workload:" - cannotViewIngresses: Could not list Ingresses due to lack of permission. - cannotFindIngresses: Could not find any Ingresses that forward traffic to Services that select Pods in this workload. - ingressListCaption: "The following Ingresses forward traffic to Services that select Pods from this workload:" - cannotViewIngressesBecauseCannotViewServices: Could not find relevant relevant Ingresses due to lack of permission to view Services. + cannotViewServices: Could not list services due to lack of permission. + cannotFindServices: Could not find any services that select pods from this workload. + serviceListCaption: "The following services select pods from this workload:" + cannotViewIngresses: Could not list ingresses due to lack of permission. + cannotFindIngresses: Could not find any ingresses that forward traffic to services that select pods in this workload. + ingressListCaption: "The following ingresses forward traffic to services that select pods from this workload:" + cannotViewIngressesBecauseCannotViewServices: Could not find relevant ingresses due to lack of permission to view services.
pods: title: Pods detailTop: @@ -6511,7 +6512,7 @@ workload: addMount: Add Mount addVolume: Add Volume selectVolume: Select Volume - noVolumes: Volumes will appear here after they are added in the Pod tab + noVolumes: Volumes will appear here after they are added in the pod tab certificate: Certificate csi: diskName: Disk Name @@ -6542,12 +6543,12 @@ workload: defaultMode: Default Mode driver: driver hostPath: - label: The Path on the Node must be + label: The path on the node must be options: default: 'Anything: do not check the target path' - directoryOrCreate: A directory, or create if it doesn't exist + directoryOrCreate: A directory, or create if it does not exist directory: An existing directory - fileOrCreate: A file, or create if it doesn't exist + fileOrCreate: A file, or create if it does not exist file: An existing file socket: An existing socket charDevice: An existing character device @@ -6576,11 +6577,11 @@ workload: placeholder: "e.g. 300" typeDescriptions: apps.daemonset: DaemonSets run exactly one pod on every eligible node. When new nodes are added to the cluster, DaemonSets automatically deploy to them. Recommended for system-wide or vertically-scalable workloads that never need more than one pod per node. - apps.deployment: Deployments run a scalable number of replicas of a pod distributed among the eligible nodes. Changes are rolled out incrementally and can be rolled back to the previous revision when needed. Recommended for stateless & horizontally-scalable workloads. + apps.deployment: Deployments run a scalable number of replicas of a pod distributed among the eligible nodes. Changes are rolled out incrementally and can be rolled back to the previous revision when needed. Recommended for stateless and horizontally-scalable workloads. apps.statefulset: StatefulSets manage stateful applications and provide guarantees about the ordering and uniqueness of the pods created.
Recommended for workloads with persistent storage or strict identity, quorum, or upgrade order requirements. - batch.cronjob: CronJobs create Jobs, which then run Pods, on a repeating schedule. The schedule is expressed in standard Unix cron format, and uses the timezone of the Kubernetes control plane (typically UTC). + batch.cronjob: CronJobs create jobs, which then run pods, on a repeating schedule. The schedule is expressed in standard Unix cron format, and uses the timezone of the Kubernetes control plane (typically UTC). batch.job: Jobs create one or more pods to reliably perform a one-time task by running a pod until it exits successfully. Failed pods are automatically replaced until the specified number of completed runs has been reached. Jobs can also run multiple pods in parallel or function as a batch work queue. - pod: Pods are the smallest deployable units of computing that you can create and manage in Kubernetes. A Pod is a group of one or more containers, with shared storage and network resources, and a specification for how to run the containers. + pod: Pods are the smallest deployable units of computing that you can create and manage in Kubernetes. A pod is a group of one or more containers, with shared storage and network resources, and a specification for how to run the containers. upgrading: activeDeadlineSeconds: label: Pod Active Deadline @@ -6589,8 +6590,8 @@ workload: label: Concurrency options: allow: Allow CronJobs to run concurrently - forbid: Skip next run if current run hasn't finished - replace: Replace run if current run hasn't finished + forbid: Skip next run if current run has not finished + replace: Replace run if current run has not finished maxSurge: label: Max Surge tip: The maximum number of pods allowed beyond the desired scale at any given time. @@ -6612,7 +6613,7 @@ workload: labels: delete: "On Delete: New pods are only created when old pods are manually deleted." recreate: "Recreate: Kill ALL pods, then start new pods." 
- rollingUpdate: "Rolling Update: Create new pods, until max surge is reached, before deleting old pods. Don't stop more pods than max unavailable." + rollingUpdate: "Rolling Update: Create new pods, until max surge is reached, before deleting old pods. Do not stop more pods than max unavailable." terminationGracePeriodSeconds: label: Termination Grace Period tip: The duration the pod needs to terminate successfully. @@ -6710,24 +6711,24 @@ typeDescription: cis.cattle.io.clusterscanprofile: A profile is the configuration for the CIS scan, which is the benchmark versions to use and any specific tests to skip in that benchmark. cis.cattle.io.clusterscan: A scan is created to trigger a CIS scan on the cluster based on the defined profile. A report is created after the scan is completed. cis.cattle.io.clusterscanreport: A report is the result of a CIS scan of the cluster. - management.cattle.io.feature: Feature Flags allow certain {vendor} features to be toggled on and off. Features that are off by default should be considered experimental functionality. - cluster.x-k8s.io.machine: A Machine encapsulates the configuration of a Kubernetes Node. Use this view to see what happens after updating a cluster. - cluster.x-k8s.io.machinedeployment: A Machine Deployment orchestrates deployments via templates over a collection of Machine Sets (similar to a Deployment). Use this view to see what happens after updating a cluster. - cluster.x-k8s.io.machineset: A Machine Set ensures the desired number of Machine resources are up and running at all times (similar to a ReplicaSet). Use this view to see what happens after updating a cluster. + management.cattle.io.feature: Feature flags allow certain {vendor} features to be toggled on and off. Features that are off by default should be considered experimental functionality. + cluster.x-k8s.io.machine: A machine encapsulates the configuration of a Kubernetes node. Use this view to see what happens after updating a cluster. 
+ cluster.x-k8s.io.machinedeployment: A machine deployment orchestrates deployments via templates over a collection of machine sets (similar to a deployment). Use this view to see what happens after updating a cluster. + cluster.x-k8s.io.machineset: A machine set ensures the desired number of machine resources are up and running at all times (similar to a ReplicaSet). Use this view to see what happens after updating a cluster. resources.cattle.io.backup: A backup is created to perform one-time backups or schedule recurring backups based on a ResourceSet. resources.cattle.io.restore: A restore is created to trigger a restore to the cluster based on a backup file. resources.cattle.io.resourceset: A resource set defines which CRDs and resources to store in the backup. monitoring.coreos.com.servicemonitor: A service monitor defines the group of services and the endpoints that Prometheus will scrape for metrics. This is the most common way to define metrics collection. - monitoring.coreos.com.podmonitor: A pod monitor defines the group of pods that Prometheus will scrape for metrics. The common way is to use service monitors, but pod monitors allow you to handle any situation where a service monitor wouldn't work. - monitoring.coreos.com.prometheusrule: A Prometheus Rule resource defines both recording and/or alert rules. A recording rule can pre-compute values and save the results. Alerting rules allow you to define conditions on when to send notifications to AlertManager. + monitoring.coreos.com.podmonitor: A pod monitor defines the group of pods that Prometheus will scrape for metrics. The common way is to use service monitors, but pod monitors allow you to handle any situation where a service monitor would not work. + monitoring.coreos.com.prometheusrule: A Prometheus rule resource defines recording and/or alert rules. A recording rule can pre-compute values and save the results.
Alerting rules allow you to define conditions on when to send notifications to AlertManager. monitoring.coreos.com.prometheus: A Prometheus server is a Prometheus deployment whose scrape configuration and rules are determined by selected ServiceMonitors, PodMonitors, and PrometheusRules and whose alerts will be sent to all selected Alertmanagers with the custom resource's configuration. - monitoring.coreos.com.alertmanager: An alert manager is deployment whose configuration will be specified by a secret in the same namespace, which determines which alerts should go to which receiver. + monitoring.coreos.com.alertmanager: An alert manager is a deployment whose configuration will be specified by a secret in the same namespace, which determines which alerts should go to which receiver. - node: The base Kubernetes Node resource represents a virtual or physical machine which hosts deployments. To manage the machine lifecycle, if available, go to Cluster Management. + node: The base Kubernetes node resource represents a virtual or physical machine which hosts deployments. To manage the machine lifecycle, if available, go to Cluster Management. catalog.cattle.io.clusterrepo: 'A chart repository is a Helm repository or {vendor} git based application catalog. It provides the list of available charts in the cluster.' - catalog.cattle.io.clusterrepo.local: ' A chart repository is a Helm repository or {vendor} git based application catalog. It provides the list of available charts in the cluster. Cluster Templates are deployed via Helm charts.' + catalog.cattle.io.clusterrepo.local: 'A chart repository is a Helm repository or {vendor} git based application catalog. It provides the list of available charts in the cluster. Cluster Templates are deployed via Helm charts.' catalog.cattle.io.operation: An operation is the list of recent Helm operations that have been applied to the cluster. catalog.cattle.io.app: An installed application is a Helm 3 chart that was installed either via our charts or through the Helm CLI. - logging.banzaicloud.io.clusterflow: Logs from the cluster will be collected and logged to the selected Cluster Output.
+ logging.banzaicloud.io.clusterflow: Logs from the cluster will be collected and logged to the selected cluster output. logging.banzaicloud.io.clusteroutput: A cluster output defines which logging providers that logs can be sent to and is only effective when deployed in the namespace that the logging operator is in. logging.banzaicloud.io.flow: A flow defines which logs to collect and filter as well as which output to send the logs. The flow is a namespaced resource, which means logs will only be collected from the namespace that the flow is deployed in. logging.banzaicloud.io.output: An output defines which logging providers that logs can be sent to. The output needs to be in the same namespace as the flow that is using it. @@ -6761,8 +6762,8 @@ typeLabel: } catalog.cattle.io.app: |- {count, plural, - one { Installed App } - other { Installed Apps } + one { Installed Application } + other { Installed Applications } } catalog.cattle.io.clusterrepo: |- {count, plural, @@ -6771,18 +6772,18 @@ typeLabel: } catalog.cattle.io.repo: |- {count, plural, - one { Namespaced Repo } - other { Namespaced Repos } + one { Namespaced Repository } + other { Namespaced Repositories } } chartinstallaction: |- {count, plural, - one { App } - other { Apps } + one { Application } + other { Applications } } chartupgradeaction: |- {count, plural, - one { App } - other { Apps } + one { Application } + other { Applications } } cloudcredential: |- {count, plural, @@ -6816,8 +6817,8 @@ typeLabel: } fleet.cattle.io.gitrepo: |- {count, plural, - one { Git Repo } - other {Git Repos } + one { Git Repository } + other { Git Repositories } } management.cattle.io.authconfig: |- {count, plural, @@ -6922,8 +6923,8 @@ typeLabel: } 'management.cattle.io.cluster': |- {count, plural, - one { Mgmt Cluster } - other { Mgmt Clusters } + one { Management Cluster } + other { Management Clusters } } 'cluster.x-k8s.io.cluster': |- {count, plural, @@ -7102,8 +7103,8 @@ typeLabel: } harvesterhci.io.cloudtemplate:
|- {count, plural, - one { Cloud Config Template } - other { Cloud Config Templates } + one { Cloud Configuration Template } + other { Cloud Configuration Templates } } fleet.cattle.io.content: |- {count, plural, @@ -7122,8 +7123,8 @@ typeLabel: } k3s.cattle.io.addon: |- {count, plural, - one { Addon } - other { Addons } + one { Add-on } + other { Add-ons } } management.cattle.io.apiservice: |- {count, plural, @@ -7342,7 +7343,7 @@ keyValue: registryMirror: header: Mirrors - toolTip: 'Mirrors can be used to redirect requests for images from one registry to come from a list of endpoints you specify instead. For example docker.io could redirect to your internal registry instead of ever going to DockerHub.' + toolTip: 'Mirrors can be used to redirect requests for images from one registry to a list of endpoints you specify instead. For example, docker.io could redirect to your internal registry instead of ever going to Docker Hub.' addLabel: Add Mirror description: Mirrors define the names and endpoints for private registries. The endpoints are tried one by one, and the first working one is used. hostnameLabel: Registry Hostname @@ -7390,12 +7391,12 @@ advancedSettings: 'cluster-defaults': 'Override RKE Defaults when creating new clusters.' 'engine-install-url': 'Default Docker engine installation URL (for most node drivers).' 'engine-iso-url': 'Default OS installation URL (for vSphere driver).' - 'engine-newest-version': 'The newest supported version of Docker at the time of this release. A Docker version that does not satisfy supported docker range but is newer than this will be marked as untested.' - 'engine-supported-range': 'Semver range for supported Docker engine versions. Versions which do not satisfy this range will be marked unsupported in the UI.' - 'ingress-ip-domain': 'Wildcard DNS domain to use for automatically generated Ingress hostnames. .. will be added to the domain.'
+ 'engine-newest-version': 'The newest supported version of Docker at the time of this release. A Docker version that does not satisfy the supported Docker range but is newer than this will be marked as untested.' + 'engine-supported-range': 'Semver range for supported Docker engine versions. Versions which do not satisfy this range will be marked unsupported in the UI.' + 'ingress-ip-domain': 'Wildcard DNS domain to use for automatically generated ingress hostnames. .. will be added to the domain.' 'server-url': 'Default {appName} install url. Must be HTTPS. All nodes in your cluster must be able to reach this.' - 'system-default-registry': 'Private registry to be used for all Rancher System Container Images. If no value is specified, the default registry for the container runtime is used. For Docker and containerd, the default is `docker.io`.' - 'ui-index': 'HTML index location for the Cluster Manager UI.' + 'system-default-registry': 'Private registry to be used for all Rancher system container images. If no value is specified, the default registry for the container runtime is used. For Docker and containerd, the default is `docker.io`.' + 'ui-index': 'HTML index location for the Cluster Manager UI.' 'ui-dashboard-index': 'HTML index location for the {appName} UI.' 'ui-offline-preferred': 'Controls whether UI assets are served locally by the server container or from the remote URL defined in the ui-index and ui-dashboard-index settings. The `Dynamic` option will use local assets in production builds of {appName}.' 'ui-pl': 'Private-Label company name.' @@ -7466,7 +7467,7 @@ performance: label: Incremental Loading setting: You can configure the threshold above which incremental loading will be used. description: |- - When enabled, resources will appear more quickly, but it may take slightly longer to load the entire set of resources.
This setting only applies to resources that come from the Kubernetes API + When enabled, resources will appear more quickly, but it may take slightly longer to load the entire set of resources. This setting only applies to resources that come from the Kubernetes API. checkboxLabel: Enable incremental loading inputLabel: Resource Threshold incompatibleDescription: "Incremental Loading is incompatible with Namespace/Project filtering and Server-side Pagination. Enabling this will disable them." @@ -7475,7 +7476,7 @@ performance: setting: You can configure a threshold above which manual refresh will be enabled. buttonTooltip: Refresh list description: |- - When enabled, list data will not auto-update but instead the user must manually trigger a list-view refresh. This setting only applies to resources that come from the Kubernetes API + When enabled, list data will not auto-update but instead the user must manually trigger a list-view refresh. This setting only applies to resources that come from the Kubernetes API. checkboxLabel: Enable manual refresh of data for lists inputLabel: Resource Threshold incompatibleDescription: "Manual Refresh is incompatible with Namespace/Project filtering and Server-side Pagination. Enabling this will disable them." @@ -7500,7 +7501,7 @@ performance: howRun: description: Update how garbage collection runs age: - description: "Resource types musn't have been accessed within this period to be considered for garbage collection." + description: "Resource types must not have been accessed within this period to be considered for garbage collection." inputLabel: Resource Age count: description: Resource types must exceed this amount to be considered for garbage collection. @@ -7512,13 +7513,13 @@ performance: incompatibleDescription: "Required Namespace / Project Filtering is incompatible with Manual Refresh and Incremental Loading. Enabling this will disable them." 
advancedWorker: label: Websocket Web Worker - description: Updates to resources pushed to the UI come via WebSocket and are handled in the UI thread. Enable this option to handle cluster WebSocket updates in a Web Worker in a separate thread. This should help the responsiveness of the UI in systems where resources change often. + description: Updates to resources pushed to the UI come via WebSocket and are handled in the UI thread. Enable this option to handle cluster WebSocket updates in a web worker in a separate thread. This should help the responsiveness of the UI in systems where resources change often. checkboxLabel: Enable Advanced Websocket Web Worker inactivity: title: Inactivity checkboxLabel: Enable inactivity session expiration inputLabel: Inactivity timeout (minutes) - information: To change the automatic logout behaviour, edit the authorisation and/or session token timeout values (auth-user-session-ttl-minutes and auth-token-max-ttl-minutes) in the Settings page. + information: To change the automatic logout behavior, edit the authorization and session token timeout values (auth-user-session-ttl-minutes and auth-token-max-ttl-minutes) on the Settings page. - description: When enabled and the user is inactive past the specified timeout, the UI will no longer fresh page content and the user must reload the page to continue. + description: When enabled and the user is inactive past the specified timeout, the UI will no longer refresh page content and the user must reload the page to continue. authUserTTL: This timeout cannot be higher than the user session timeout auth-user-session-ttl-minutes, which is currently {current} minutes. serverPagination: @@ -7714,8 +7715,8 @@ support: text: Login to SUSE Customer Center to access support for your subscription action: SUSE Customer Center aws: - generateConfig: Generate Support Config - text: 'Login to SUSE Customer Center to access support for your subscription. Need to open a new support case? Download a support config file below.' + generateConfig: Generate Support Configuration + text: 'Log in to SUSE Customer Center to access support for your subscription.
Need to open a new support case? Download a support configuration file below.' promos: one: title: 24x7 Support @@ -7746,7 +7747,7 @@ legacy: project: label: Project - select: "Use the Project/Namespace filter at the top of the page to select a Project in order to see legacy Project features." + select: "Use the namespace or project filter at the top of the page to select a project in order to see legacy project features." serverUpgrade: title: "{vendor} Server Changed"