docs: added compatibility matrix
Additionally, the old Rancher specifics were removed to make the README
more accessible. The main points remain: uninstall rancher-monitoring and update the CRDs yourself.

Signed-off-by: Bruno Bressi <[email protected]>
puffitos committed Nov 14, 2024
1 parent f7aceaa commit 0ca3472
Showing 2 changed files with 54 additions and 77 deletions.
65 changes: 27 additions & 38 deletions README.md
@@ -1,49 +1,40 @@
# CaaS Cluster Monitoring

A fork of the official [rancher cluster monitoring](https://github.com/rancher/charts/tree/dev-v2.9/charts/rancher-monitoring) with more up-to-date prometheus-operator CRDs and features, plus [prometheus-auth](https://github.com/caas-team/prometheus-auth) to enable multi-tenancy for the Prometheus metrics.

## Installation

With de-installation of the original Rancher Monitoring while keeping its resources
If you're coming from an existing rancher-monitoring installation:

```bash
helm -n cattle-monitoring-system delete rancher-monitoring
kubectl -n cattle-monitoring-system delete secret alertmanager-rancher-monitoring-alertmanager
```
- you must first update the prometheus-operator CRDs separately, as this chart only includes the kube-prometheus-stack *without* the CRDs (see the sketch below).
- you should additionally uninstall the rancher-monitoring chart before installing this one.
- do not delete the rancher-monitoring-crd chart, as this would delete all custom resources already created (or back them up first and recreate them).
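
A minimal sketch of that migration, based on the commands from the previous version of this README; the archive version under `charts/` is an assumption and should match the kube-prometheus-stack actually bundled with your checkout:

```bash
# Remove the old rancher-monitoring release and its leftover Alertmanager secret
helm -n cattle-monitoring-system delete rancher-monitoring
kubectl -n cattle-monitoring-system delete secret alertmanager-rancher-monitoring-alertmanager

# Apply the updated prometheus-operator CRDs shipped with the bundled kube-prometheus-stack
cd charts
tar xvfz kube-prometheus-stack-58.4.0.tgz   # assumed version; use the archive actually present in charts/
kubectl apply -f kube-prometheus-stack/charts/crds/crds/ --server-side --force-conflicts
```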

Deleting rancher-monitoring-crd would also delete all corresponding Custom Resources. We delete only the Helm release secrets and keep the CRDs in the cluster.
To install, run the following command:

```bash
kubectl -n cattle-monitoring-system get secrets -o name --no-headers | grep sh.helm.release.v1.rancher-monitoring-crd | xargs kubectl -n cattle-monitoring-system delete $1
helm -n cattle-monitoring-system upgrade -i rancher-monitoring .
```
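
Afterwards, the release can be sanity-checked with standard Helm and kubectl commands, for example:

```bash
helm -n cattle-monitoring-system status rancher-monitoring
kubectl -n cattle-monitoring-system get pods
```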

Nevertheless, we need to upgrade the CRDs manually because Helm has no logic to do this:

```bash
cd charts
tar xvfz kube-prometheus-stack-51.0.3.tgz
cd kube-prometheus-stack/charts/crds
kubectl apply -f crds/ --server-side --force-conflicts
```
## Compatibility matrix

To decouple the CRDs from this chart (you may have installed them via another chart or mechanism), the feature is disabled:
The following table shows the compatibility between the CaaS Cluster Monitoring chart and the CaaS Project Monitoring versions:

```yaml
kube-prometheus-stack:
crds:
enabled: false
```
| CaaS Cluster Monitoring | compatible with CaaS Project Monitoring | deployed kube-prometheus-stack |
| ----------------------- | --------------------------------------- | ------------------------------ |
| x < 0.0.6 | y < 1.0.0 | 51.0.3 |
| 0.0.6 < x < 1.0.0 | 1.0.0 <= y < 1.4.0 | 58.4.0 |

Upgrade the kube-prometheus-stack:
where `x` is the CaaS Cluster Monitoring version and `y` is the CaaS Project Monitoring version.

```bash
helm -n cattle-monitoring-system upgrade -i rancher-monitoring .
```
## Configuration

Available config parameters:
The installation can be configured using the various parameters defined in the `values.yaml` file. The following tables list the configurable parameters of the CaaS Cluster Monitoring chart and their default values.
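
As a minimal sketch, an override file could look like the following (the file name and chosen values are hypothetical; see the tables below for all parameters and their defaults):

```yaml
# my-values.yaml -- hypothetical example overrides
caas:
  clusterCosts: false        # this cluster has no kubecost installed
global:
  cattle:
    clusterName: my-cluster  # replace with your cluster name
```

It can then be passed to the install command with `-f`, e.g. `helm -n cattle-monitoring-system upgrade -i rancher-monitoring . -f my-values.yaml`.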

### caas

| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| --------- | ---- | ------- | ----------- |
| `caas.clusterCosts` | bool | `true` | whether the cluster has kubecost installed |
| `caas.defaultEgress` | bool | `false` | whether the cluster needs defaultEgress installed |
| `caas.dynatrace` | bool | `true` | whether the cluster has a dynatrace operator installed |
@@ -59,7 +50,7 @@ available config parameters:
### global

| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| --------- | ---- | ------- | ----------- |
| `global.cattle.clusterId` | string | `"local"` | |
| `global.cattle.clusterName` | string | `"local"` | |
| `global.cattle.systemDefaultRegistry` | string | `"mtr.devops.telekom.de"` | |
@@ -73,7 +64,7 @@ available config parameters:
### kube-prometheus-stack

| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| --------- | ---- | ------- | ----------- |
| `kube-prometheus-stack.alertmanager.alertmanagerSpec.alertmanagerConfigNamespaceSelector` | object | `{}` | |
| `kube-prometheus-stack.alertmanager.alertmanagerSpec.alertmanagerConfigSelector.matchExpressions[0].key` | string | `"release"` | |
| `kube-prometheus-stack.alertmanager.alertmanagerSpec.alertmanagerConfigSelector.matchExpressions[0].operator` | string | `"In"` | |
@@ -462,7 +453,7 @@ available config parameters:
### rkeControllerManager

| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| --------- | ---- | ------- | ----------- |
| `rkeControllerManager.clients.https.enabled` | bool | `true` | |
| `rkeControllerManager.clients.https.insecureSkipVerify` | bool | `true` | |
| `rkeControllerManager.clients.https.useServiceAccountCredentials` | bool | `true` | |
@@ -488,7 +479,7 @@ available config parameters:
### rkeEtcd

| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| --------- | ---- | ------- | ----------- |
| `rkeEtcd.clients.https.authenticationMethod.authorization.enabled` | bool | `false` | |
| `rkeEtcd.clients.https.authenticationMethod.bearerTokenFile.enabled` | bool | `false` | |
| `rkeEtcd.clients.https.authenticationMethod.bearerTokenSecret.enabled` | bool | `false` | |
@@ -514,7 +505,7 @@ available config parameters:
### rkeIngressNginx

| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| --------- | ---- | ------- | ----------- |
| `rkeIngressNginx.clients.nodeSelector."node-role.kubernetes.io/worker"` | string | `"true"` | |
| `rkeIngressNginx.clients.port` | int | `10015` | |
| `rkeIngressNginx.clients.tolerations[0].effect` | string | `"NoExecute"` | |
@@ -529,7 +520,7 @@ available config parameters:
### rkeProxy

| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| --------- | ---- | ------- | ----------- |
| `rkeProxy.clients.port` | int | `10013` | |
| `rkeProxy.clients.tolerations[0].effect` | string | `"NoExecute"` | |
| `rkeProxy.clients.tolerations[0].operator` | string | `"Exists"` | |
@@ -546,7 +537,7 @@ available config parameters:
### rkeScheduler

| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| --------- | ---- | ------- | ----------- |
| `rkeScheduler.clients.https.authenticationMethod.authorization.enabled` | bool | `false` | |
| `rkeScheduler.clients.https.authenticationMethod.bearerTokenFile.enabled` | bool | `false` | |
| `rkeScheduler.clients.https.authenticationMethod.bearerTokenSecret.enabled` | bool | `false` | |
@@ -575,7 +566,7 @@ available config parameters:
### hardenedKubelet

| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| --------- | ---- | ------- | ----------- |
| `hardenedKubelet.clients.https.authenticationMethod.authorization.enabled` | bool | `false` | |
| `hardenedKubelet.clients.https.authenticationMethod.bearerTokenFile.enabled` | bool | `false` | |
| `hardenedKubelet.clients.https.authenticationMethod.bearerTokenSecret.enabled` | bool | `false` | |
@@ -619,6 +610,4 @@ available config parameters:
| `hardenedKubelet.serviceMonitor.endpoints[2].path` | string | `"/metrics/probes"` | |
| `hardenedKubelet.serviceMonitor.endpoints[2].port` | string | `"metrics"` | |
| `hardenedKubelet.serviceMonitor.endpoints[2].relabelings[0].sourceLabels[0]` | string | `"__metrics_path__"` | |
| `hardenedKubelet.serviceMonitor.endpoints[2].relabelings[0].targetLabel` | string | `"metrics_path"` | |

Autogenerated from chart metadata using [helm-docs v1.11.3](https://github.com/norwoodj/helm-docs/releases/v1.11.3)
| `hardenedKubelet.serviceMonitor.endpoints[2].relabelings[0].targetLabel` | string | `"metrics_path"` | |
66 changes: 27 additions & 39 deletions README.md.gotmpl
@@ -1,51 +1,41 @@
# CaaS Cluster Monitoring

## Installation
A fork of the official [rancher cluster monitoring](https://github.com/rancher/charts/tree/dev-v2.9/charts/rancher-monitoring) with more up-to-date prometheus-operator CRDs and features, plus [prometheus-auth](https://github.com/caas-team/prometheus-auth) to enable multi-tenancy for the Prometheus metrics.

With de-installation of the original Rancher Monitoring while keeping its resources
## Installation

If you're coming from an existing rancher-monitoring installation:

```bash
helm -n cattle-monitoring-system delete rancher-monitoring
kubectl -n cattle-monitoring-system delete secret alertmanager-rancher-monitoring-alertmanager
```
- you must first update the prometheus-operator CRDs separately, as this chart only includes the kube-prometheus-stack *without* the CRDs.
- you should additionally uninstall the rancher-monitoring chart before installing this one.
- do not delete the rancher-monitoring-crd chart, as this would delete all custom resources already created (or back them up first and recreate them).

Deleting rancher-monitoring-crd would also delete all corresponding Custom Resources. We delete only the Helm release secrets and keep the CRDs in the cluster.
To install, run the following command:

```bash
kubectl -n cattle-monitoring-system get secrets -o name --no-headers | grep sh.helm.release.v1.rancher-monitoring-crd | xargs kubectl -n cattle-monitoring-system delete $1
helm -n cattle-monitoring-system upgrade -i rancher-monitoring .
```

Nevertheless, we need to upgrade the CRDs manually because Helm has no logic to do this:

```bash
cd charts
tar xvfz kube-prometheus-stack-51.0.3.tgz
cd kube-prometheus-stack/charts/crds
kubectl apply -f crds/ --server-side --force-conflicts
```
## Compatibility matrix

To decouple the CRDs from this chart (you may have installed them via another chart or mechanism), the feature is disabled:
The following table shows the compatibility between the CaaS Cluster Monitoring chart and the CaaS Project Monitoring versions:

```yaml
kube-prometheus-stack:
crds:
enabled: false
```
| CaaS Cluster Monitoring | compatible with CaaS Project Monitoring | deployed kube-prometheus-stack |
| ----------------------- | --------------------------------------- | ------------------------------ |
| x < 0.0.6 | y < 1.0.0 | 51.0.3 |
| 0.0.6 < x < 1.0.0 | 1.0.0 <= y < 1.4.0 | 58.4.0 |

Upgrade the kube-prometheus-stack:
where `x` is the CaaS Cluster Monitoring version and `y` is the CaaS Project Monitoring version.

```bash
helm -n cattle-monitoring-system upgrade -i rancher-monitoring .
```

Available config parameters:
## Configuration

The installation can be configured using the various parameters defined in the `values.yaml` file. The following tables list the configurable parameters of the CaaS Cluster Monitoring chart and their default values.

### caas

| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| --------- | ---- | ------- | ----------- |
{{- range .Values }}
{{- if (contains "caas" .Key) }}
| `{{ .Key }}` | {{ .Type }} | {{ if .Default }}{{ .Default }}{{ else }}{{ .AutoDefault }}{{ end }} | {{ if .Description }}{{ .Description }}{{ else }}{{ .AutoDescription }}{{ end }} |
@@ -55,7 +45,7 @@ available config parameters:
### global

| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| --------- | ---- | ------- | ----------- |
{{- range .Values }}
{{- if (contains "global" .Key) }}
| `{{ .Key }}` | {{ .Type }} | {{ if .Default }}{{ .Default }}{{ else }}{{ .AutoDefault }}{{ end }} | {{ if .Description }}{{ .Description }}{{ else }}{{ .AutoDescription }}{{ end }} |
@@ -65,7 +55,7 @@ available config parameters:
### kube-prometheus-stack

| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| --------- | ---- | ------- | ----------- |
{{- range .Values }}
{{- if (contains "kube-prometheus-stack" .Key) }}
| `{{ .Key }}` | {{ .Type }} | {{ if .Default }}{{ .Default }}{{ else }}{{ .AutoDefault }}{{ end }} | {{ if .Description }}{{ .Description }}{{ else }}{{ .AutoDescription }}{{ end }} |
@@ -75,7 +65,7 @@ available config parameters:
### rkeControllerManager

| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| --------- | ---- | ------- | ----------- |
{{- range .Values }}
{{- if (contains "rkeControllerManager" .Key) }}
| `{{ .Key }}` | {{ .Type }} | {{ if .Default }}{{ .Default }}{{ else }}{{ .AutoDefault }}{{ end }} | {{ if .Description }}{{ .Description }}{{ else }}{{ .AutoDescription }}{{ end }} |
@@ -85,7 +75,7 @@ available config parameters:
### rkeEtcd

| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| --------- | ---- | ------- | ----------- |
{{- range .Values }}
{{- if (contains "rkeEtcd" .Key) }}
| `{{ .Key }}` | {{ .Type }} | {{ if .Default }}{{ .Default }}{{ else }}{{ .AutoDefault }}{{ end }} | {{ if .Description }}{{ .Description }}{{ else }}{{ .AutoDescription }}{{ end }} |
@@ -95,7 +85,7 @@ available config parameters:
### rkeIngressNginx

| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| --------- | ---- | ------- | ----------- |
{{- range .Values }}
{{- if (contains "rkeIngressNginx" .Key) }}
| `{{ .Key }}` | {{ .Type }} | {{ if .Default }}{{ .Default }}{{ else }}{{ .AutoDefault }}{{ end }} | {{ if .Description }}{{ .Description }}{{ else }}{{ .AutoDescription }}{{ end }} |
@@ -105,7 +95,7 @@ available config parameters:
### rkeProxy

| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| --------- | ---- | ------- | ----------- |
{{- range .Values }}
{{- if (contains "rkeProxy" .Key) }}
| `{{ .Key }}` | {{ .Type }} | {{ if .Default }}{{ .Default }}{{ else }}{{ .AutoDefault }}{{ end }} | {{ if .Description }}{{ .Description }}{{ else }}{{ .AutoDescription }}{{ end }} |
@@ -115,7 +105,7 @@ available config parameters:
### rkeScheduler

| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| --------- | ---- | ------- | ----------- |
{{- range .Values }}
{{- if (contains "rkeScheduler" .Key) }}
| `{{ .Key }}` | {{ .Type }} | {{ if .Default }}{{ .Default }}{{ else }}{{ .AutoDefault }}{{ end }} | {{ if .Description }}{{ .Description }}{{ else }}{{ .AutoDescription }}{{ end }} |
@@ -125,11 +115,9 @@ available config parameters:
### hardenedKubelet

| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| --------- | ---- | ------- | ----------- |
{{- range .Values }}
{{- if (contains "hardenedKubelet" .Key) }}
| `{{ .Key }}` | {{ .Type }} | {{ if .Default }}{{ .Default }}{{ else }}{{ .AutoDefault }}{{ end }} | {{ if .Description }}{{ .Description }}{{ else }}{{ .AutoDescription }}{{ end }} |
{{- end }}
{{- end }}

Autogenerated from chart metadata using [helm-docs v1.11.3](https://github.com/norwoodj/helm-docs/releases/v1.11.3)
{{- end }}
