bump results watcher container to 3Gi (#3093)
Over the course of about a week we have seen the results watcher on prd-rh01 get OOM-killed about once a day. During the day we have observed garbage collection and seen memory usage go up and down some, but as prd-rh01 is our most used cluster, it is the one that has bumped into the 2Gi limit too often. The other 3 clusters, while still having activity, have not shown leak behavior. We have engaged upstream with some profiling data from our personal clusters to see if there is some sort of edge case. The simpler first step, though, is to bump prod, continue to monitor, and see if we have just hit a new limit with more onboarding.
gabemontero authored Jan 18, 2024
1 parent 14f1eda commit 33340b5
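For context on how the OOM behavior described above can be spotted, a minimal check of watcher restarts and live memory usage looks roughly like the following. The namespace and workload name are taken from the kustomize target below; the pod name suffix is illustrative, and the top command assumes the metrics API is available on the cluster.

# List watcher pods and their restart counts
oc -n tekton-results get pods | grep tekton-results-watcher

# An OOM kill shows up as Last State: Terminated / Reason: OOMKilled
oc -n tekton-results describe pod tekton-results-watcher-<hash> | grep -A 3 "Last State"

# Per-container memory usage, to watch the GC-driven rise and fall over the day
oc adm top pods -n tekton-results --containers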
Showing 4 changed files with 11 additions and 2 deletions.
@@ -0,0 +1,4 @@
+---
+- op: replace
+  path: /spec/template/spec/containers/1/resources/limits/memory
+  value: "3Gi"
@@ -25,6 +25,11 @@ patches:
       kind: Deployment
       name: pipeline-metrics-exporter
       namespace: openshift-pipelines
+  - path: bump-results-watcher-mem.yaml
+    target:
+      kind: Deployment
+      namespace: tekton-results
+      name: tekton-results-watcher
   - path: update-tekton-config-pac.yaml
     target:
       kind: TektonConfig
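To sanity-check that this new patch entry renders as expected before it merges, building the overlay locally is enough to confirm the watcher Deployment picks up the new limit; the overlay path is a placeholder for wherever this kustomization.yaml lives.

# Render the overlay and confirm the 3Gi limit lands on tekton-results-watcher
kustomize build <path-to-this-overlay> | grep -B 5 "3Gi"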
@@ -1514,7 +1514,7 @@ spec:
 resources:
   limits:
     cpu: 250m
-    memory: 2Gi
+    memory: 3Gi
  requests:
    cpu: 100m
    memory: 64Mi
@@ -1514,7 +1514,7 @@ spec:
 resources:
   limits:
     cpu: 250m
-    memory: 2Gi
+    memory: 3Gi
  requests:
    cpu: 100m
    memory: 64Mi
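After the change rolls out, one quick check is to read the applied limit back from the live Deployment; the container index 1 matches the patch path above, and the restart counter should stop climbing once the watcher has headroom again.

# Read the applied memory limit straight from the cluster
oc -n tekton-results get deployment tekton-results-watcher \
  -o jsonpath='{.spec.template.spec.containers[1].resources.limits.memory}'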