
Optimize for low memory use? #2900

Open
GSI opened this issue Oct 20, 2024 · 5 comments

@GSI commented Oct 20, 2024

I'm using Miniflux v2.2.1 with ~30 feeds on an old Raspberry Pi. Typically I have some 150 MB of RAM available.

It used to run fine, but recently the OS has had to kill the process regularly:

[1383230.652600] Out of memory: Killed process 16052 (miniflux.app) total-vm:554492kB, anon-rss:137368kB, file-rss:4kB, shmem-rss:0kB, UID:985 pgtables:168kB oom_score_adj:0

I suspect these crashes may be related to having set some feeds to "fetch original content", a feature I only recently learned about. I'm not sure, though.

As a first mitigation attempt I set BATCH_SIZE=1, but that didn't prevent the OOMs.
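
A possible mitigation not tried in this thread, sketched here on the assumption that the binary is built with Go 1.19 or later: the Go runtime's soft memory limit can be set from the environment, so the garbage collector works harder before the kernel's OOM killer steps in. The limit, binary name, and config path below are illustrative, not from this issue:

# Sketch only. GOMEMLIMIT is a soft cap, so the process can still exceed
# it briefly; GOGC=50 makes the collector run more often than the default.
GOMEMLIMIT=100MiB GOGC=50 miniflux -config-file /etc/miniflux.conf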

@jvoisin (Contributor) commented Nov 10, 2024

Wow, this is odd. My instance is only using about 40 MB of RAM, and I have a lot of feeds :/

@asilentdreamer

For me, Miniflux in Docker consistently uses 30–50 MB of RAM, while Postgres can go over 150 MB when refreshing lots of feeds at once. I have a little under 400 feeds.

@rdelaage (Contributor)

Maybe it would be interesting to find out where this huge memory consumption comes from. One way to do that would be to use the Go profiler (https://go.dev/blog/pprof).

@rdelaage (Contributor)

@GSI Did you manage to find out what is consuming so much memory in your setup? If not, you can run pprof like this (it will require building a custom version of Miniflux):

diff --git a/internal/http/server/httpd.go b/internal/http/server/httpd.go
index c7428a32..3ed8ee1e 100644
--- a/internal/http/server/httpd.go
+++ b/internal/http/server/httpd.go
@@ -9,6 +9,7 @@ import (
        "log/slog"
        "net"
        "net/http"
+       _ "net/http/pprof"
        "os"
        "strconv"
        "strings"
@@ -207,6 +208,8 @@ func setupHandler(store *storage.Storage, pool *worker.Pool) *mux.Router {
                w.Write([]byte(version.Version))
        }).Name("version")
 
+       router.PathPrefix("/debug/pprof/").Handler(http.DefaultServeMux)
+
        if config.Opts.HasMetricsCollector() {
                router.Handle("/metrics", promhttp.Handler()).Name("metrics")
                router.Use(func(next http.Handler) http.Handler {

And you can draw the memory usage graph like this: go tool pprof -http :7879 http://localhost:7878/debug/pprof/heap (replace with the relevant addresses, and open the web UI in a browser).
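
For a quick text summary without the web UI, the same endpoint can also be read in -top mode, and a heap snapshot can be saved for later inspection (addresses as assumed above):

# Prints the functions holding the most live heap memory.
go tool pprof -top http://localhost:7878/debug/pprof/heap

# Save a snapshot to inspect or compare later, e.g. just before an OOM.
curl -s http://localhost:7878/debug/pprof/heap -o heap.pb.gz
go tool pprof -http :7879 heap.pb.gz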

@GSI (Author) commented Nov 21, 2024

Thank you, that's an interesting tool. I just enabled it and will have to wait until the next OOM occurs.

The last one was 8 days ago, even though I have Miniflux configured to update a single feed every 30 minutes (export BATCH_SIZE=1 and export POLLING_FREQUENCY=30).
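
For reference, the same options can also live in Miniflux's config file rather than exported shell variables; a minimal sketch, where the file path and the extra worker setting are assumptions rather than something from this thread:

# /etc/miniflux.conf (sketch)
BATCH_SIZE=1
POLLING_FREQUENCY=30
# Assumption: a smaller worker pool may lower peak memory during refreshes.
WORKER_POOL_SIZE=1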
