K8S Deployment seeing uncapped memory growth #2042
MaxwellDPS
started this conversation in
General
-
@rjha-splunk Howdy! You seem to be the primary maintainer of SC4S these days. Have you encountered this issue, and do you have any recommendations for a fix?
-
Hey all!
I have an SC4S instance deployed on a k8s cluster that seems to keep up with the volume of traffic it receives (~1.5+ Gbps).
However, the memory usage of the pods continues to grow and never seems to decrease as load drops. Once the pods reach ~5.5 GB of their 6 GB limit, they start to drop events.
Issue #1946 seems related, but it lacks context.
Is it expected to have memory grow from ~900 MB to ~5 GB in just over a day, without any decrease in usage?
We have been handling this by preemptively restarting the StatefulSet every ~12 hours. Are there any known issues with this approach?
What can we do to try and stabilize memory use over time?
Thanks in advance!!
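For reference, the 12-hour preemptive restart described above can be automated with a CronJob. This is only a sketch: the names, namespace, image tag, and service account are assumptions, and the service account would need RBAC permission to patch the StatefulSet.

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: sc4s-rolling-restart        # hypothetical name
  namespace: sc4s                   # assumed namespace
spec:
  schedule: "0 */12 * * *"          # every 12 hours
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: sc4s-restarter   # needs RBAC: patch on statefulsets
          restartPolicy: Never
          containers:
            - name: kubectl
              image: bitnami/kubectl:1.25      # match the cluster's minor version
              command:
                - kubectl
                - rollout
                - restart
                - statefulset/sc4s             # assumed StatefulSet name
```

`kubectl rollout restart` replaces pods one at a time, so only one replica's ingestion is interrupted at any moment, rather than all pods dropping at once.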
Extra Info
Deployment info
Host(s): 2x 12c @ 3+ GHz | 32 GB DDR4 | 10G networking (worker nodes dedicated to SC4S)
OS: CentOS Stream 9 (kernel 5.14.0-176.el9.x86_64)
Kubernetes: v1.25.3
Container runtime: cri-o://1.25.2
Ingress provided by MetalLB, with an externalTrafficPolicy of Local
SC4S Config
~12 vendor ports listening
Output to a single HEC endpoint
dest_hec_workers: 50
Reliable: no
Disk buffering shows no messages being written to disk (all files, all pods)
Command used to test disk buffer files
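The exact command isn't included above, but a sketch of such a check might look like the following. The buffer path and file extensions are assumptions: syslog-ng names non-reliable disk-buffer files `*.qf` and reliable ones `*.rqf`, and SC4S keeps them under `/opt/syslog-ng/var` by default (verify against your image and mounts).

```shell
# List any non-empty disk-buffer files under a given directory.
check_disk_buffers() {
  find "$1" \( -name '*.qf' -o -name '*.rqf' \) -size +0c 2>/dev/null
}

# In-cluster usage (hypothetical pod name):
#   kubectl exec sc4s-0 -- sh -c \
#     'find /opt/syslog-ng/var \( -name "*.qf" -o -name "*.rqf" \) -size +0c'
```

An empty result means no events are being spooled to disk, which matches the observation above.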
HEC Destination
SC4S feeds a scalable NGINX load balancer that forwards round-robin to 30+ indexers
This does not seem to be a bottleneck
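For context, the round-robin fan-out described above corresponds to an NGINX upstream along these lines. The hostnames and ports are placeholders, not the actual config; round-robin is NGINX's default balancing method, so no explicit directive is needed.

```nginx
upstream splunk_hec {
    # 30+ indexer HEC endpoints; round-robin by default
    server indexer-01.example.com:8088;
    server indexer-02.example.com:8088;
    # ...
}

server {
    listen 8088;
    location / {
        proxy_pass https://splunk_hec;
    }
}
```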
Pod info
kubectl top node (under ~80% max load)