[opentelemetry-operator] Update kube-rbac-proxy image to 0.18.1 to remediate vulnerabilities #1397
Conversation
…0.18.1 Signed-off-by: Edwin Tye <[email protected]>
@edwintye I didn't see it in the changelog, but are you aware of any breaking changes between versions we should be concerned about? Have you tested this locally to ensure this still works as expected?
We have been using this version in our clusters (1.29/1.30) via overrides for a while and haven't spotted any issues yet. However, I must admit that I have not tested this locally... so let me provide an example workflow. We create a couple of files, the first being the values we use to install the operator, named `operator-values.yaml`:

```yaml
admissionWebhooks:
  certManager:
    enabled: false
manager:
  collectorImage:
    repository: otel/opentelemetry-collector-k8s
  serviceMonitor:
    enabled: true
    metricsEndpoints:
      - port: https # the original is the unprotected metrics endpoint
        scheme: https
        interval: 20s # just to give faster results
        bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
        tlsConfig:
          insecureSkipVerify: true
kubeRBACProxy:
  image:
    repository: quay.io/brancz/kube-rbac-proxy
    tag: v0.18.1
```
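As a quick sanity check that the override takes effect (a hedged extra step, not part of the original workflow), the chart can be rendered locally before anything is installed:

```shell
# render the chart with the override and confirm which kube-rbac-proxy tag comes
# out; nothing is applied to the cluster here
helm template opentelemetry-operator open-telemetry/opentelemetry-operator \
  -f operator-values.yaml | grep 'quay.io/brancz/kube-rbac-proxy'
```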
The second file holds the collector + target allocator (plus the RBAC the target allocator needs) which the operator will create in order to scrape the metrics off the operator; it is named `collector-with-ta-prometheus-cr.yaml`:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: opentelemetry-targetallocator-everything-role
rules:
  - apiGroups:
      - monitoring.coreos.com
    resources:
      - servicemonitors
      - podmonitors
    verbs:
      - '*'
  - apiGroups: [""]
    resources:
      - namespaces
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources:
      - nodes
      - nodes/metrics
      - services
      - endpoints
      - pods
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources:
      - configmaps
    verbs: ["get"]
  - apiGroups:
      - discovery.k8s.io
    resources:
      - endpointslices
    verbs: ["get", "list", "watch"]
  - apiGroups:
      - networking.k8s.io
    resources:
      - ingresses
    verbs: ["get", "list", "watch"]
  - nonResourceURLs: ["/metrics"]
    verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: opentelemetry-targetallocator-everything-role
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: opentelemetry-targetallocator-everything-role
subjects:
  - kind: ServiceAccount
    name: opentelemetry-targetallocator-sa
    namespace: default
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: opentelemetry-targetallocator-sa
---
apiVersion: opentelemetry.io/v1beta1
kind: OpenTelemetryCollector
metadata:
  name: collector-with-ta-prometheus-cr
spec:
  mode: statefulset
  serviceAccount: opentelemetry-targetallocator-sa
  targetAllocator:
    enabled: true
    serviceAccount: opentelemetry-targetallocator-sa
    prometheusCR:
      enabled: true
      serviceMonitorSelector: {}
      podMonitorSelector: {}
  config:
    receivers:
      prometheus:
        config:
          scrape_configs: []
    exporters:
      debug:
        verbosity: detailed
    service:
      pipelines:
        metrics:
          receivers: [prometheus]
          exporters: [debug]
```

Then we spin up a kind cluster, install the CRDs, install the otel operator, apply the CR to create a collector and the corresponding TA, and check for successful scrapes:

```shell
# fast create cluster
kind create cluster
# need both CRDs for the target allocator
kubectl apply --server-side -f https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/v0.77.2/example/prometheus-operator-crd/monitoring.coreos.com_servicemonitors.yaml
kubectl apply --server-side -f https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/v0.77.2/example/prometheus-operator-crd/monitoring.coreos.com_podmonitors.yaml
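# optional, hedged extra check (not in the original workflow): confirm both CRDs registered
kubectl get crd servicemonitors.monitoring.coreos.com podmonitors.monitoring.coreos.com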
# install the operator
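# the command below assumes the open-telemetry chart repo has already been added; if not:
#   helm repo add open-telemetry https://open-telemetry.github.io/opentelemetry-helm-charts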
helm upgrade --install opentelemetry-operator open-telemetry/opentelemetry-operator -f operator-values.yaml
# WAIT, we need the operator to spin up first
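# one hedged way to wait (assumes the chart's default deployment name for this release):
kubectl wait --for=condition=Available deployment/opentelemetry-operator --timeout=5m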
# use the operator to create a collector + TA to monitor the operator
kubectl apply --server-side -f collector-with-ta-prometheus-cr.yaml
# wait a bit, then we can tail the logs, which output the scraped metrics
kubectl logs -f collector-with-ta-prometheus-cr-collector-0
```

This is probably the shortest variant that I can come up with for now, and it has some resemblance to how people scrape metrics via the proxy. To show an unsuccessful/failed scrape, there are a couple of options that are relatively easy to do: remove the …
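For a quick signal amid the verbose detailed output, one hedged check (it assumes the debug exporter's detailed format, which prints a `Name:` line per metric) is to grep for the standard `up` scrape-health metric:

```shell
# every Prometheus-style scrape records an "up" metric per target; spotting it in
# the collector logs is a fast sign the protected endpoint was scraped successfully
kubectl logs collector-with-ta-prometheus-cr-collector-0 | grep -m 5 'Name: up'
```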
…has effect since 0.16.0 Signed-off-by: Edwin Tye <[email protected]>
Thank you for your contribution!
Thanks!! @edwintye et al!!
…mediate vulnerabilities (open-telemetry#1397)

* [opentelemetry-operator] Update kube-rbac-proxy image from 0.15.0 to 0.18.1
  Signed-off-by: Edwin Tye <[email protected]>
* [opentelemetry-operator] remove argument logtostderr as it no longer has effect since 0.16.0
  Signed-off-by: Edwin Tye <[email protected]>

Signed-off-by: Edwin Tye <[email protected]>
To close #1344, as the earlier PR #1345 didn't go through. This increases the version to `0.18.1`, which is the latest and fixes CVE-2024-28180 and GHSA-xr7q-jx4m-x55m as per their release, on top of those fixed in the originally proposed version, `0.18.0`.